Deep learning has largely taken over computer vision and other areas where classical approaches such as feature engineering are inherently hard. This comes at the price of a massive hunger for data: deep learning models need huge datasets to reach acceptable prediction accuracy. So in use cases like image recognition, deep learning models perform far better than classical algorithms.
In general (you'll probably find specific counterexamples, but as I said _in general_) the performance of deep learning models may drop below that of classical machine learning once you get down to medium-sized data sets. And I can assure you, there are many such data sets, with many applications for machine learning. And while the world talks about deep learning, I know of many companies where random forests, support vector machines or Bayesian models have been running for years (which means, validated on data that was unavailable during model development) with a prediction performance that a business can depend on.
I agree with you that reinforcement learning will, as a technology, become much more important in the coming years. However, that holds only if exploration is cheap enough. I don't expect deep reinforcement learning to be the answer, at least not in its current state, but I can well imagine that we will see more machine-learning-algorithm-in-the-reinforcement-learning-loop experiments. Personally, I'd hope to see more research in the Bayesian reinforcement learning area.
I know, DL only works if you have massive datasets, and DRL is even worse in terms of the number of training episodes. Maybe you've heard about the recent craziness of using GANs to generate training-set fillers when your training set is small, i.e. you have only 1,000 examples but need 10,000 for reasonable performance. Instead of gathering more examples, you use a GAN to create believable training data, and it seems to work quite well (e.g. a bump from 60% accuracy to 80%, while a bigger training set with real examples would get you to 90%).
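A minimal sketch of that workflow (the `generate_samples` stub below stands in for a trained GAN generator and just emits noise shaped like the data; the labeling step is also an assumption, since in practice you'd use a conditional GAN or label the synthetic examples separately):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained GAN generator: in practice this would be G(z)
# for a network trained on the 1,000 real examples.
def generate_samples(n_samples, n_features=16):
    # A real GAN would sample from the learned data distribution;
    # here we just draw noise with the right shape for illustration.
    return rng.normal(size=(n_samples, n_features))

# Small real training set: 1,000 examples, but we need 10,000.
X_real = rng.normal(size=(1000, 16))
y_real = rng.integers(0, 2, size=1000)

# Fill the gap with 9,000 synthetic examples.
X_fake = generate_samples(9000)
y_fake = rng.integers(0, 2, size=9000)

X_train = np.vstack([X_real, X_fake])
y_train = np.concatenate([y_real, y_fake])
print(X_train.shape)  # (10000, 16)
```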
What I've observed is that many ML companies now run two pipelines in parallel, one based on deep learning and the other on classical ML, and then cherry-pick whichever solution works best for the problem and scale they have.
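A toy sketch of that "run both, keep the winner" idea, with two cheap classifiers standing in for the two pipelines (the data and both models are hypothetical stand-ins, just illustrating the selection step):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two Gaussian blobs, shuffled and split into train/test.
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
idx = rng.permutation(200)
X, y = X[idx], y[idx]
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

def nearest_centroid(X_tr, y_tr, X_te):
    # Pipeline A: nearest-centroid classifier.
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    d = ((X_te[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def one_nn(X_tr, y_tr, X_te):
    # Pipeline B: 1-nearest-neighbour classifier.
    d = ((X_te[:, None, :] - X_tr[None]) ** 2).sum(axis=2)
    return y_tr[d.argmin(axis=1)]

# Evaluate both pipelines on the same held-out data, keep the winner.
scores = {}
for name, model in [("centroid", nearest_centroid), ("1-nn", one_nn)]:
    scores[name] = (model(X_tr, y_tr, X_te) == y_te).mean()

best = max(scores, key=scores.get)
print(best, scores)
```

The key point is that both candidates are judged on the same held-out split, so the comparison is apples to apples.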
> I know, DL only works if you have massive datasets, and DRL is even worse in terms of the number of training episodes. Maybe you've heard about the recent craziness of using GANs to generate training-set fillers when your training set is small, i.e. you have only 1,000 examples but need 10,000 for reasonable performance. Instead of gathering more examples, you use a GAN to create believable training data, and it seems to work quite well (e.g. a bump from 60% accuracy to 80%, while a bigger training set with real examples would get you to 90%).
Sounds a bit like Baron Münchhausen pulling himself and the horse on which he was sitting out of a mire by his own hair.
I'd assume that instead of pulling such stunts, a reasonable generative model might have done the trick.
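In that spirit, a minimal example of what such a "reasonable generative model" could look like: fit a single multivariate Gaussian to the real data by maximum likelihood and sample fillers from it (toy data; this just illustrates the mechanism, and a real task would likely need a class-conditional or richer density model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Small real training set (toy data for illustration).
X_real = rng.normal(loc=3.0, scale=1.5, size=(1000, 8))

# Fit the generative model: a multivariate Gaussian is fully
# described by the sample mean and covariance of the data.
mu = X_real.mean(axis=0)
cov = np.cov(X_real, rowvar=False)

# Sample 9,000 synthetic examples from the fitted model.
X_synth = rng.multivariate_normal(mu, cov, size=9000)

X_train = np.vstack([X_real, X_synth])
print(X_train.shape)  # (10000, 8)
```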
> What I've observed is that many ML companies now run two pipelines in parallel, one based on deep learning and the other on classical ML, and then cherry-pick whichever solution works best for the problem and scale they have.
Put this way, I agree. And my personal addendum here: classical ML outperforms DL more often than the hype might lead people to think.
> Sounds a bit like Baron Münchhausen pulling himself and the horse on which he was sitting out of a mire by his own hair.
It sounds crazy, but you've likely seen the high-resolution synthetic faces NVidia produced with their progressive GANs; I'd use them as training examples without any hesitation.