Posts Tagged ‘Stanford University’

The first part is just to motivate this upcoming Stanford video series.

Deep Learning? Supervised learning is the process where an entity has to “teach” or “supervise” the learning. The learning algorithm (such as a neural network) is shown some carefully extracted features and is then told the correct answer (training). Over time it learns a function that maps features to labels; that is, it focuses on finding the class label given a set of features, i.e. P(Y|X), where Y is the class and X the features. For example, in face recognition, after we have extracted features using a technique such as PCA or ICA, the task is to use these features and the label information (a person’s name or ID, etc.) to learn a function that can make predictions. But everyday experience suggests that label information is not that important in learning. Humans do some kind of “clustering” and generative modeling of whatever they see all the time: given a set of objects, we tend to form a generative model of those objects and only then assign labels, so the labels themselves carry very little of the actual learning. Another interesting question is how the features are learned in the first place. Is that an unsupervised task? How can a computer learn features in an unsupervised manner?
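To make the supervised half of this concrete, here is a minimal sketch of the pipeline described above, with PCA standing in for the feature-extraction step and logistic regression as the discriminative model of P(Y|X). The data is synthetic and both choices are just illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))           # 200 "images", 64 raw pixels each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels from a hidden rule

features = PCA(n_components=10).fit_transform(X)  # the feature-extraction step
clf = LogisticRegression().fit(features, y)       # a discriminative model of P(Y|X)
print(clf.predict_proba(features[:3]))            # estimated class probabilities
```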

Unsupervised Feature Learning? Now consider a task where you have to improve accuracy in classifying an image as that of an elephant or a rhino. The catch is that you are not given any labeled examples of elephants or rhinos; in fact, you are not even given unlabeled examples of them. All you have are random images of rivers and mountains, and from these you must learn a feature representation that helps with your task. This can be done with sparse coding, as shown by Raina et al.
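A rough sketch of that “self-taught learning” recipe, assuming scikit-learn’s DictionaryLearning is an acceptable stand-in for the sparse-coding optimization in Raina et al. (all data here is synthetic, so only the structure matters, not the numbers):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
unlabeled = rng.normal(size=(300, 64))   # stand-in for river/mountain patches
labeled_X = rng.normal(size=(40, 64))    # a handful of "elephant/rhino" images
labeled_y = rng.integers(0, 2, size=40)  # their labels

# 1. Learn a sparse-coding dictionary from the UNLABELED data only.
dico = DictionaryLearning(n_components=32, max_iter=50, random_state=0)
dico.fit(unlabeled)

# 2. Re-express the labeled examples as sparse codes over that dictionary.
coder = SparseCoder(dictionary=dico.components_, transform_algorithm="lasso_lars")
codes = coder.transform(labeled_X)

# 3. Train an ordinary supervised classifier on the transferred features.
clf = LogisticRegression(max_iter=1000).fit(codes, labeled_y)
```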

______________

Lectures: Recently I came across a series of lectures (still a work in progress) by Professor Andrew Y. Ng on Unsupervised Feature Learning and Deep Learning. The course should help present issues such as the above to a wider audience. Though the lectures are not yet uploaded, I am really excited about them, as I thoroughly enjoyed his CS 229 lectures a long time ago. The course assumes some basic knowledge of Machine Learning, but it does brush up the basics.

I have been working on Meta-Learning for a while, but have been getting more interested in Deep Learning methods recently, and hence am looking forward to these lectures coming online.

I wrote to Professor Ng about them, and in his opinion it would take a few months before they can be put up. That works fine for me, as I plan to work on Deep Learning in the summer, and these would really help. Even now, expertise in Deep Learning methods is restricted to only a few places, so such lectures would be a great resource.

Here is the description of the Unsupervised Feature Learning and Deep Learning course:

Machine learning has seen numerous successes, but applying learning algorithms today often means spending a long time hand-engineering the input feature representation. This is true for many problems in vision, audio, NLP, robotics, and other areas. In this course, you’ll learn about methods for unsupervised feature learning and deep learning, which automatically learn a good representation of the input from unlabeled data. You’ll also pick up the “hands-on,” practical skills and tricks-of-the-trade needed to get these algorithms to work well.

Basic knowledge of machine learning (supervised learning) is assumed, though we’ll quickly review logistic regression and gradient descent.
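For anyone who wants to test that background before the lectures arrive, here is the review material in a nutshell: logistic regression fit by batch gradient ascent on the log-likelihood, in plain NumPy on synthetic data (a sketch of my own, not code from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X @ np.array([1.5, -2.0]) > 0).astype(float)  # synthetic binary labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta, alpha = np.zeros(2), 0.1
for _ in range(500):
    grad = X.T @ (y - sigmoid(X @ theta))  # gradient of the log-likelihood
    theta += alpha * grad / len(y)         # batch gradient ascent step
print(theta)  # should roughly point along [1.5, -2.0]
```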

I hope these lectures get viewed as widely as the CS 229 ones; I am sure they will be fantastic.

______________

Onionesque Reality Home >>


In the past month or so I have been looking at a series of lectures on Data Mining that I had long bookmarked. I have gone through the lectures twice and found them extremely useful, so I thought it would not be a bad idea to share them here, even though I am aware that they are fairly old and rather well circulated.

These lectures, delivered by Professor David Mease as Google Tech Talks (the Stanford Stats 202 course lectures), work equally well for beginners and for experts who need to brush up on the basic ideas. The course uses R extensively.

Statistical Aspects of Data Mining

Links:

Course Video Lectures.

Course website.

Lecture Slides.

_____

I’d end with some Dilbert strips on data mining that I have liked in the past!

[Dilbert strip 1: Data Mining]

_____

[Dilbert strip 2]

_____

[Dilbert strip 3]

_____

Onionesque Reality Home >>


Edit (November 05, 2011): Note that this post was made over THREE years ago. At that time this was the only comprehensive Machine Learning course available online. Since then the situation has changed: Professor Andrew Ng’s course has been offered online for everyone, and many other courses have also become available. Find some of these at the bottom of this post.

Just two weeks ago I posted a few lectures on Machine Learning, Learning Theory, Kernel Methods etc. in this post. Since then, my friend and guide Sumedh Kulkarni informed me of a new course on Machine Learning on the Stanford University YouTube channel. I have also indexed this channel in my post on Video Lectures.

I have already seen half of it, and though it covers a very broad range and is meant to be a first course on Machine Learning, it is in my opinion the best course on the subject on the web. Most others I find boring because of the instructor’s poor English, bad recording quality, or both.

The course is taught by Dr Andrew Ng, who has extensive experience teaching this course and working in Robotics, AI, and Machine Learning in general. Incidentally, he has been the advisor of a PhD candidate, Ashutosh Saxena, whose research papers we used in a previous project on pattern recognition.

Dr Ng’s deep knowledge of the field can be felt within minutes of the first lecture, which he makes even more interesting through his good communication skills and his ability to keep lectures exciting and intuitive, including by adding fun videos in between.

The course details are as follows.

Course: Machine Learning (CS 229). It can be accessed here: Stanford Machine Learning.

Instructor: Dr Andrew Ng.

Course Overview:

Lecture 1: Overview of the broad field of Machine Learning.

Lecture 2: Linear regression, gradient descent, and the normal equations, with discussion of how they relate to machine learning (see the sketch after this list).

Lecture 3: Locally weighted regression, the probabilistic interpretation of linear regression, and logistic regression.

Lecture 4: Newton’s method, exponential families, and generalized linear models.

Lecture 5: Generative learning algorithms and Gaussian discriminant analysis.

Lecture 6: Applications of naive Bayes, neural networks, and support vector machines.

Lecture 7: Optimal margin classifiers, KKT conditions, and SVM duals.

Lecture 8: Support vector machines, including soft margin optimization and kernels.

Lecture 9: Learning theory, covering bias, variance, empirical risk minimization, the union bound, and Hoeffding’s inequality.

Lecture 10: VC dimension and model selection.

Lecture 11: Bayesian statistics, regularization, a digression on online learning, and applications of machine learning algorithms.

Lecture 12: Unsupervised learning in the context of clustering, Jensen’s inequality, mixture of Gaussians, and expectation-maximization.

Lecture 13: Expectation-maximization in the context of the mixture of Gaussians and naive Bayes models, and an introduction to factor analysis.

Lecture 14: The expectation-maximization steps for factor analysis, continuing on to principal component analysis (PCA).

Lecture 15: Principal component analysis (PCA) and independent component analysis (ICA) in relation to unsupervised machine learning.

Lecture 16: Reinforcement learning, focusing particularly on MDPs, value functions, and policy and value iteration.

Lecture 17: Reinforcement learning, focusing particularly on continuous state MDPs, discretization, and policy and value iterations.

Lecture 18: State-action rewards, linear dynamical systems in the context of linear quadratic regulation (LQR), the Riccati equation, and finite-horizon MDPs.

Lecture 19: The debugging process, linear quadratic regulation, Kalman filters, and linear quadratic Gaussian control in the context of reinforcement learning.

Lecture 20: POMDPs, policy search, and Pegasus in the context of reinforcement learning.
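As a tiny companion to Lecture 2, here is a sketch comparing its two fitting procedures, batch gradient descent and the closed-form normal equations, on the same synthetic data (plain NumPy, my own illustration rather than code from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
X = np.column_stack([np.ones(50), x])  # intercept term + one input
y = X @ np.array([2.0, 0.7]) + rng.normal(scale=0.3, size=50)

# Normal equations: solve (X^T X) theta = X^T y in closed form.
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent on the least-squares cost.
theta_gd, alpha = np.zeros(2), 0.005
for _ in range(20000):
    theta_gd -= alpha * X.T @ (X @ theta_gd - y) / len(y)

print(theta_ne, theta_gd)  # the two estimates should agree closely
```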

Course Notes: CS 229 Machine Learning.

My gratitude to Stanford and Prof Andrew Ng for providing this wonderful course to the general public.

Other Machine Learning Video Courses:

1. Tom Mitchell’s Machine Learning Course.

Related Posts:

1. Demystifying Support Vector Machines for Beginners. (Papers, Tutorials on Learning Theory, Machine Learning)

2. Video Lectures.

Onionesque Reality Home >>
