Andrew Ng said in the Coursera ML course that if you know linear regression, logistic regression, advanced optimization tools, and regularization, then you may know more ML than many engineers using ML in Silicon Valley.
Is that true? Well, if it is true, then you are in luck: I think Harvard Business Review predicted a shortage of about 200,000 data scientists by 2018.

Prof. Ng's class is a good first choice. Everything from linear models to neural networks is really one continuous thread leading up to neural nets. After that he covers unsupervised learning with a couple of classic techniques, though unsupervised learning is increasingly being done with neural-net/deep-learning approaches as well.

He could have extended the class with a bit more on stochastic and mini-batch gradient descent. These are not only useful for functions with local minima; they often converge faster even on functions with a single minimum.

With temporal data you also need recurrent neural nets. That brings up the "vanishing gradient problem" and how to deal with it, and it could follow naturally after his neural-net lectures, since it is really all about backpropagation and picking features.

Convolutional nets would then follow: they are feature-extracting front ends that feed into neural nets. They are used in many areas, including unsupervised learning, and currently hold many of the best results in classifying images, text, speech, language, audio, etc. You start with extremely basic features, and the convnets find the higher-order features for you. They have been kicking butt lately.
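To make the gradient-descent point concrete, here is a minimal sketch of mini-batch gradient descent on a linear regression problem. All the names and numbers (`X`, `y`, `lr`, `batch_size`, the data sizes) are illustrative assumptions, not anything from the course or the answer above.

```python
import numpy as np

# Synthetic linear-regression data (illustrative, not from the course).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 200 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)                          # parameters to learn
lr, batch_size = 0.1, 32                 # assumed hyperparameters

for epoch in range(100):
    idx = rng.permutation(len(X))        # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]
        # Gradient of mean-squared error, computed on the mini-batch only.
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad

print(np.round(w, 2))                    # should land near true_w
```

The only difference from the batch gradient descent taught in the course is that each update uses a random subset of the data, which makes updates cheaper and adds noise that can also help escape shallow local minima.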
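The vanishing gradient problem mentioned above can be demonstrated in a few lines: backpropagating through a long chain of sigmoid units multiplies many derivatives together, and the sigmoid's derivative is at most 0.25, so the product shrinks toward zero. The depth (20) and the weight value (0.5) here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = 0.5                                   # assumed shared weight
activations = [0.5]                       # assumed input value
for _ in range(20):                       # 20-step chain, like an unrolled RNN
    activations.append(sigmoid(w * activations[-1]))

# Backpropagate: each step contributes w * a * (1 - a), the local derivative.
grad = 1.0
for a in activations[1:]:
    grad *= w * a * (1 - a)

print(grad)                               # a vanishingly small number
```

Each factor is at most 0.5 * 0.25 = 0.125, so after 20 steps the gradient is below 0.125**20, which is why plain recurrent nets struggle to learn long-range dependencies and why techniques like LSTMs were developed.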