Archive for February, 2011

The first part is just to motivate this upcoming Stanford video series.

Deep Learning? Supervised learning is a process in which an external entity has to “teach” or “supervise” the learner. The learning algorithm (such as a neural network) is shown some carefully extracted features along with the correct answer for each example (training). Over time it learns a function that maps features to labels. It thus focuses on finding the class label given a set of features, i.e. P(Y|X), where Y is the class and X the features. For example, in face recognition, after features have been extracted with a technique such as PCA or ICA, the task is to use these features and the label information (a person’s name or ID, etc.) to learn a function that can make predictions. But everyday experience suggests that label information is not as central to learning. Humans constantly perform some kind of “clustering” and generative modeling of whatever they see: given a set of objects, we tend to form a generative model of those objects first and only then assign labels, so the labels themselves contribute very little information to the actual learning. Another interesting question is how the features are learnt in the first place. Is it an unsupervised task? How can a computer learn features in an unsupervised manner?
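To make the P(Y|X) idea concrete, here is a minimal sketch of a discriminative classifier in scikit-learn. The data, dimensions, and class means are entirely hypothetical, standing in for, say, PCA coefficients of face images with identity labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# Hypothetical feature vectors (stand-ins for PCA/ICA coefficients)
# for two people, with identity labels 0 and 1
X = np.vstack([rng.randn(20, 5) + 2.0, rng.randn(20, 5) - 2.0])
y = np.array([0] * 20 + [1] * 20)

# A discriminative model: it learns P(Y|X) directly from (feature, label) pairs
clf = LogisticRegression().fit(X, y)

# Estimated P(Y|X) for one example -- one probability per class
probs = clf.predict_proba(X[:1])
print(probs.shape)  # (1, 2)
```

The point is only that the supervised learner never models how the features themselves arise; it models the conditional mapping from features to labels.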

Unsupervised Feature Learning? Now consider a task where you have to improve accuracy in classifying an image as that of an elephant or a rhino. The catch is that you are given no labeled examples of elephants or rhinos; in fact, you are not even given unlabeled examples of them. Instead, you are given random images of rivers and mountains, and from these you have to learn a feature representation that can help with your task. This can be done with sparse coding, as shown by Raina et al.
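A rough sketch of this transfer setup, using scikit-learn’s dictionary-learning tools as one possible sparse-coding implementation (the random arrays below are hypothetical stand-ins for image patches; the dictionary size and sparsity penalty are arbitrary choices, not values from Raina et al.):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# "Unlabeled" data from a different domain
# (stand-ins for patches of river/mountain images)
unlabeled = rng.randn(200, 64)

# Learn a sparse-coding dictionary (a set of basis vectors)
# from the unlabeled data alone
dico = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, max_iter=10, random_state=0)
dico.fit(unlabeled)

# A small labeled task (stand-ins for elephant vs. rhino images)
X = rng.randn(40, 64)
y = (X[:, 0] > 0).astype(int)

# Re-express the labeled examples as sparse codes over the learned
# basis, then train an ordinary classifier on those codes
codes = dico.transform(X)
clf = LogisticRegression().fit(codes, y)
print(codes.shape)  # (40, 32)
```

The basis is learned purely from the unrelated unlabeled images; only the final classifier ever sees labels.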

______________

Lectures: Recently I came across a series of lectures (a work in progress) by Professor Andrew Y. Ng on Unsupervised Feature Learning and Deep Learning. This course should help present issues such as the above to a wider audience. Though the lectures are not yet uploaded, I am really excited about them, as I thoroughly enjoyed his CS 229 lectures a long time ago. The course assumes some basic knowledge of Machine Learning, but does brush up on the basics.

I have been working on Meta-Learning for a while, but have recently become more interested in Deep Learning methods, and hence am looking forward to these lectures coming online.

I wrote to Professor Ng about them, and in his opinion it would take a few months before they can be put up. That works fine for me, as I plan to work on Deep Learning in the summer, and these would really help. Even now, expertise in Deep Learning methods is restricted to only a few places, so such lectures would be a great advantage.

Here is a description of the Unsupervised Feature Learning and Deep Learning course:

Machine learning has seen numerous successes, but applying learning algorithms today often means spending a long time hand-engineering the input feature representation. This is true for many problems in vision, audio, NLP, robotics, and other areas. In this course, you’ll learn about methods for unsupervised feature learning and deep learning, which automatically learn a good representation of the input from unlabeled data. You’ll also pick up the “hands-on,” practical skills and tricks-of-the-trade needed to get these algorithms to work well.

Basic knowledge of machine learning (supervised learning) is assumed, though we’ll quickly review logistic regression and gradient descent.
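The logistic regression and gradient descent review the description mentions can be sketched in a few lines of NumPy. The toy data, learning rate, and iteration count below are my own choices, not anything from the course:

```python
import numpy as np

rng = np.random.RandomState(0)

# Toy two-class data: the label depends on the sign of a linear score
X = rng.randn(100, 3)
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient descent on the logistic (cross-entropy) loss
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)              # current estimate of P(y=1 | x)
    grad = X.T @ (p - y) / len(y)   # gradient of the average loss
    w -= lr * grad

acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(acc)
```

Everything else in the course builds on exactly this loop: a differentiable model, a loss, and a gradient step.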

I hope these will be as widely viewed as the CS 229 lectures. I say that because I know they will be fantastic.

______________
