There has been a rapid democratization of data and tools over the past year. New techniques are discovered, converted into code, and released to the public via APIs very quickly, shortening product lifecycles in the ML and AI space. One way to see this trend is the rapid open sourcing of ML algorithms and AI systems since 2015.
In the first couple of rounds of this open source movement, companies released just the algorithms (Google's TensorFlow) or the architectures (Facebook). Then came datasets (Yahoo) and APIs (Microsoft's Project Oxford). Now Google has set a new trend by releasing a fully trained model (Parsey McParseface, built on SyntaxNet). Earlier this month alone, we also got Ambry from LinkedIn and DSSTNE (pronounced "Destiny") from Amazon.
The most exciting thing today is that even without a rigorous theoretical background in ML, one can still apply pre-implemented advanced algorithms, such as Conditional Random Fields (CRFs) or Hidden Markov Models (HMMs), to one's data analysis projects. You don't need to know the implementation details of CRFs or HMMs before you can use them.
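As a concrete illustration, here is a minimal sketch using the hmmlearn package (my choice of library; the post doesn't name one). It fits a two-state Gaussian HMM to toy data, with Baum-Welch training and Viterbi decoding happening entirely inside the library:

```python
# A sketch assuming hmmlearn; any off-the-shelf HMM implementation
# would make the same point.
import numpy as np
from hmmlearn import hmm

# Toy observation sequence: 1-D continuous emissions from two regimes.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, (100, 1)),
                    rng.normal(5, 1, (100, 1))])

# Fit a 2-state Gaussian HMM; Baum-Welch (EM) runs under the hood,
# so the caller never touches the forward-backward recursions.
model = hmm.GaussianHMM(n_components=2, n_iter=50, random_state=0)
model.fit(X)

hidden_states = model.predict(X)  # Viterbi decoding, also pre-implemented
print(hidden_states[:10])
```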
In addition to the democratization of tools, we are also seeing ML cloud companies constantly wooing users to their platforms. Many of them give you instantaneous access to the software, the algorithms, and the hardware architecture you need. Once you adopt a platform, you are likely to stick with it as you grow more sophisticated. Here is a great comparative review of the top six ML clouds.
What all this essentially means is that it is perfectly alright not to know about the Sequential Minimal Optimization (SMO) algorithm that John Platt developed in 1998 to solve the quadratic programming problem that arises when training a support vector machine (SVM). It is enough to know that (1) an SVM is a supervised learning model that fits a separating hyperplane to classify high-dimensional data, (2) an SVM can transform nonlinear data into a linearly classifiable form using kernels, and (3) an SVM tolerates errors via the soft-margin technique. Deeper knowledge than that would only be needed to implement an SVM from scratch, and we no longer need to redo that implementation.
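To make that concrete, here is a minimal sketch using scikit-learn (my choice; the post names no particular library). The three conceptual points above map directly onto SVC's parameters, with an SMO-style solver running invisibly under the hood:

```python
# A sketch assuming scikit-learn; the kernel and C values below are
# illustrative defaults, not recommendations from the original post.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Nonlinear two-class toy data with some label noise.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# kernel="rbf" covers point (2): the kernel lifts the data into a space
# where a separating hyperplane (point 1) exists. C sets the soft margin
# of point (3): smaller C tolerates more training errors.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)  # the QP solver runs here, unseen by the caller
print("test accuracy:", clf.score(X_test, y_test))
```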
What we really need today is not necessarily more ML theoreticians, but people who can apply ML well in their respective domains. We may not need someone who can derive a new result, say by extending the Vapnik–Chervonenkis dimension. Similarly, re-implementing singular value, eigenvalue, or Cholesky decompositions one more time is not the most innovative endeavor today. What we definitely need are people who can come up with creative applications that will eventually make ML and AI mainstream in our lives, almost to the point where ML becomes part of human culture itself.
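On the decomposition point, a short sketch assuming NumPy (again my choice of library) shows why re-implementation buys little: each factorization is a single call into battle-tested LAPACK-backed routines.

```python
# Assuming NumPy; the matrices here are toy examples.
import numpy as np

A = np.random.default_rng(0).normal(size=(4, 4))
S = A @ A.T + 4 * np.eye(4)    # symmetric positive-definite by construction

U, s, Vt = np.linalg.svd(A)    # singular value decomposition
w, V = np.linalg.eigh(S)       # eigendecomposition (symmetric case)
L = np.linalg.cholesky(S)      # Cholesky factor: S == L @ L.T

assert np.allclose(U @ np.diag(s) @ Vt, A)
assert np.allclose(L @ L.T, S)
```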
A preview of this future can be found in the Google I/O 2016 keynote.
This doesn't mean that research in ML has stopped. It just means that the go-to-market timeline for a new tool or technique, from the time it is published in the academic community to the time the practicing ML community gets its hands on an implementation, has shortened tremendously.
Long live Machine Learning!
source: http://www.datasciencecentral.com/profiles/blogs/machine-learning-is-dead-long-live-machine-learning