
Posts Tagged ‘Video Lectures’

This is the second post in a series about Information Theory/Learning based perspectives on Evolution, which started with the last post.

Although the last post was mostly about a historical perspective, it had a section where the main motivation for some work in metabiology due to Chaitin (now published as a book) was reviewed. The starting point of that work was to view evolution solely through an information processing lens (and hence the use of Algorithmic Information Theory). Of course this lens by itself is not a recent acquisition and goes back a few decades (although in hindsight the fact that it goes back just a few decades is very surprising, to me at least). To illustrate this I wanted to share some analogies by John Maynard Smith (perhaps one of my favourite scientists), which I had found to be particularly incisive and clear. To avoid clutter, they are shared here instead (note that most of the stuff he talks about is something we study in high school, but the talk is quite good, especially because it emphasizes the centrality of information throughout). I also want this post to act as a reference for some upcoming posts.

Coda:

Molecular Biology is all about Information. I want to be a little more general than that; the last century, the 19th century was a century in which Science discovered how energy could be transformed from one form to another […] This century will be seen […] where it became clear that information could be translated from one form to another.

[Other parts: Part 2, Part 3, Part 4, Part 5, Part 6]

Throughout this talk he gives wonderful analogies for how information translation underlies the so-called Central Dogma of Molecular Biology, and how the translation being one-way at some stages has implications (e.g. how August Weismann noted that acquired characters are not inherited, illustrating it with a “Chinese telegram translation analogy”: since there was no mechanism to translate acquired traits (acquired information) back into the organism’s hereditary material, they could not be propagated).

However, the most important point from the talk: one could see evolution as being punctuated by about six or so major changes or shifts, each of which was marked by a change in the way information was stored and processed. Some that he talks about are:

1. The origin of replicating molecules.

2. The Evolution of chromosomes: Chromosomes are just strings of the above replicating molecules, with the property that when one of these molecules is replicated, the others have to be replicated as well. The utility of this is the following: if the genes were all separate, they might have different rates of replication, and the gene that replicates fastest would soon outnumber all the others, so all the other information would be lost. Thus this transition underlies a kind of evolution of cooperation between replicating molecules; in other words, chromosomes are a way of forcing cooperation between genes (a small simulation sketch of this point appears after the list).

3. The Evolution of the Code: That information in the nucleic acids could be translated into sequences of amino acids, i.e. proteins.

4. The Origin of Sex: The evolution of sex is considered an open question. However, one argument goes (details in the next post or the one after) that the fact that sexual reproduction hastens the acquisition of information from the environment (as compared with asexual reproduction) explains why it should evolve.

5. The Evolution of multicellular organisms: A large, complex signalling system had to evolve for the different kinds of cells to function properly in an organism (muscle cells or neurons, to name some in humans).

6. Transition from solitary individuals to societies: What made these societies of individuals (ants, humans) possible at all? If we stick to humans, this could only have happened if there was a new way to transmit information from generation to generation – one such information-transducing machine could be language! This gives an additional mechanism, other than the genetic one, for transmitting information from one generation to another (he compares the genetic code and the replication of nucleic acids with the passage of information by language). This momentous event (the evolution of language) was itself dependent on genetics. With the evolution of language, other things came by: writing, memes etc., which might reproduce, self-replicate, mutate, get passed on and accelerate the process of evolution. He ends by saying this stage of evolution could perhaps be as profound as the evolution of language itself.
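To make the differential-replication argument in point 2 concrete, here is a minimal Python sketch (the growth rates, starting populations and number of generations are made up, purely for illustration): left to replicate independently, the fastest replicator swamps the others, whereas genes linked on a chromosome share a single replication rate and so keep their relative proportions.

# Illustrative only: three unlinked replicators with different (made-up)
# per-generation growth factors.
rates = {"gene_A": 1.10, "gene_B": 1.05, "gene_C": 1.01}
population = {gene: 1.0 for gene in rates}

for generation in range(500):
    for gene in population:
        population[gene] *= rates[gene]

total = sum(population.values())
for gene, count in population.items():
    print(f"{gene}: {100 * count / total:.6f}% of all copies")

# gene_A ends up as essentially 100% of the copies, so the information carried
# by gene_B and gene_C is effectively lost. If the three genes were linked on a
# chromosome they would replicate together at a single rate, and their relative
# proportions would never change.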

________________

As a side comment: I highly recommend the following interview of John Maynard Smith as well. I rate it higher than the above lecture, although it is sort of unrelated to the topic.

________________

Interesting books to perhaps explore:

1. The Major Transitions in Evolution: John Maynard Smith and Eörs Szathmáry.

2. The Evolution of Sex: John Maynard Smith (more on this theme in later blog posts, mostly related to learning and information theory).

________________


One interesting project that I am involved in these days involves certain problems in Intelligent Tutors. It turns out that perhaps one of the best ways to tackle them is by using Conditional Random Fields (CRFs); many attempts at solving these problems still involve Hidden Markov Models (HMMs). Since I have never really been a Graphical Models person (though I am always fascinated by them), I found the going quite difficult while studying CRFs. Now that the survey is more or less over, here are my suggestions on how beginners might go about learning them.

Tutorials and Theory

1. Log-Linear Models and Conditional Random Fields (Tutorial by Charles Elkan)


Six video lectures by Charles Elkan.

Two directions of approaching CRFs are especially useful for getting a good perspective on their use. One is to consider CRFs as an alternative to Hidden Markov Models (HMMs); the other is to think of CRFs as building on Logistic Regression.

This tutorial approaches CRFs from the second direction and is easily one of the most basic around. Most people interested in CRFs would of course be familiar with the ideas of maximum likelihood, logistic regression etc. The tutorial does a good job, starting with the absolute basics – logistic regression for a two-class problem – and building up to the more general machine learning problem of structured outputs. I tried reading a few tutorials before this one, but found this to be the most comprehensive and the best place to start. It seems, however, that one lecture is missing from this series, which (going by the notes) covered more training algorithms.
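To make the “logistic regression with structured outputs” view concrete, here is a minimal NumPy sketch (the dimensions and random scores are made up purely for illustration): multinomial logistic regression normalizes scores over single labels, while a linear-chain CRF adds transition scores and normalizes over whole label sequences, with the partition function computed by the forward algorithm.

import numpy as np

def logistic_regression_probs(x, W):
    # Multinomial logistic regression: p(y | x) for a single input.
    # x: (d,) feature vector, W: (num_labels, d) weight matrix.
    scores = W @ x
    scores = scores - scores.max()        # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def crf_log_partition(unary, trans):
    # log Z for a linear-chain CRF, computed with the forward algorithm.
    # unary: (T, L) per-position label scores, trans: (L, L) transition scores.
    alpha = unary[0].copy()
    for t in range(1, unary.shape[0]):
        # alpha_new[j] = logsumexp_i(alpha[i] + trans[i, j]) + unary[t, j]
        m = alpha[:, None] + trans + unary[t][None, :]
        alpha = np.logaddexp.reduce(m, axis=0)
    return np.logaddexp.reduce(alpha)

def crf_sequence_log_prob(labels, unary, trans):
    # log p(y | x) of one whole label sequence under the linear-chain CRF.
    score = unary[0, labels[0]]
    for t in range(1, len(labels)):
        score += trans[labels[t - 1], labels[t]] + unary[t, labels[t]]
    return score - crf_log_partition(unary, trans)

# Toy example: 4 positions, 3 labels, random scores standing in for W @ x_t.
rng = np.random.default_rng(0)
unary, trans = rng.normal(size=(4, 3)), rng.normal(size=(3, 3))
print(logistic_regression_probs(rng.normal(size=5), rng.normal(size=(3, 5))))
print(crf_sequence_log_prob([0, 1, 1, 2], unary, trans))

The only real structural difference is where the normalization happens – over single labels versus over entire label sequences – which is essentially the bridge the tutorial builds.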

2. Survey Papers on Relational Learning

These are not really tutorials on CRFs, but talk of sequential learning in general. For beginners, these surveys are useful to clarify the range of problems in which CRFs might be useful, while also briefly discussing other methods for the same problems. I would recommend these two surveys to help put CRFs in perspective within the broader machine learning sub-area of Relational Learning.

— Machine Learning for Sequential Data: A Review (Thomas Dietterich)

PDF

This is a very broad survey that talks of sequential learning, defines the problem and covers some of the most used methods.

— An Introduction to Structured Discriminative Learning (R Memisevic)

PS

This tutorial is like the above, but focuses more on comparing CRFs with large-margin methods such as SVMs, giving yet another interesting perspective on where CRFs fit.

3. Comprehensive CRF Tutorial (Andrew McCallum and Charles Sutton)

 PDF

This is the most compendious tutorial available for CRFs. While it claims to start from the bare-bones basics, I found it hard as a starting point and took it up third (after the above two). It is potentially the starting and ending point for a more advanced Graphical Models student. It is extensive (90 pages) and leaves you with a feeling of comfort with CRFs when done. It is definitely the best tutorial available, though by no means the easiest place to start if you have never done any sequential learning before.

An extension to this tutorial by McCallum et al. is: CRFs for Relational Learning (PDF).

4. Original CRF Paper (John Lafferty et al.)

PDF

Though not necessary for learning CRFs, given the many better tutorials above, this paper is still recommended reading as the first paper on CRFs.

5. Training/Derivations (Rahul Gupta)

PDF

This report is good for the various training methods and for working through the associated derivations.

6. Applications to Vision (Nowozin/Lampert)

If your primary focus is using structured prediction in Computer Vision/Image Analysis then a good tutorial (with a large section on CRFs) can be found over here:

Structured Learning and Prediction in Computer Vision (Foundations and Trends volume).

PDF

___________________

Extensions to the CRF concept

There are a number of extensions to CRFs. The two that I have found most helpful in my work are (these are easy to follow given the above):

1. Hidden State Conditional Random Fields (HCRF)

2. Latent Dynamic Conditional Random Fields (LDCRF)

Both of these extensions work to include hidden variables in the CRF framework.
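A rough way to see what “including hidden variables” means is the following brute-force sketch (the dimensions, the particular score decomposition and the label/hidden-state interaction are all made up for illustration, and real HCRF/LDCRF implementations use dynamic programming rather than enumeration): the model scores (label, hidden-state sequence) pairs and obtains the probability of a label by summing over all hidden-state sequences.

import itertools
import numpy as np

def hcrf_label_probs(unary, trans, label_bias):
    # Brute-force p(y | x) for a toy HCRF-style model: the probability of a
    # label marginalizes (sums) over every possible hidden-state sequence h.
    # unary: (T, H) per-position hidden-state scores,
    # trans: (H, H) hidden-state transition scores,
    # label_bias: (Y, H) compatibility of each label with each hidden state.
    T, H = unary.shape
    Y = label_bias.shape[0]
    log_scores = np.full(Y, -np.inf)
    for h in itertools.product(range(H), repeat=T):   # every hidden sequence
        h = list(h)
        s = unary[np.arange(T), h].sum()
        s += sum(trans[h[t - 1], h[t]] for t in range(1, T))
        for y in range(Y):
            log_scores[y] = np.logaddexp(log_scores[y], s + label_bias[y, h].sum())
    p = np.exp(log_scores - log_scores.max())
    return p / p.sum()

rng = np.random.default_rng(1)
print(hcrf_label_probs(rng.normal(size=(3, 2)),   # 3 positions, 2 hidden states
                       rng.normal(size=(2, 2)),
                       rng.normal(size=(2, 2))))  # 2 labels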

___________________

Software Packages

1. Kevin Murphy’s CRF toolbox (MATLAB)

2. MALLET (I haven’t used MALLET, it is Java based)

3. HCRF – LDCRF Library (MATLAB, C++, Python). As the name suggests, this package is for HCRFs and LDCRFs, though it can be used as a standalone package for CRFs as well.
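To give a feel for what using such a package looks like in practice, here is a minimal training/prediction sketch. It uses sklearn-crfsuite, a Python CRF package that is not one of the three listed above, so treat the feature layout and parameter values as illustrative rather than a recipe for the packages listed here.

# Hedged sketch using sklearn-crfsuite (pip install sklearn-crfsuite), shown
# only to illustrate the typical train/predict workflow for sequence labelling.
import sklearn_crfsuite

# Each sequence is a list of per-token feature dicts; labels are per-token tags.
X_train = [[{"word.lower": "john", "is_capitalized": True},
            {"word.lower": "smith", "is_capitalized": True},
            {"word.lower": "teaches", "is_capitalized": False}]]
y_train = [["PER", "PER", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))    # e.g. [['PER', 'PER', 'O']]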


Here are a number of interesting courses, two of which I have been looking at for the past two weeks and hope to finish by the end of August or September.

Introduction to Neural Networks (MIT):

These days, amongst the other things that I have at hand (including a project on content-based image retrieval), I have been making it a point to look at an MIT course on Neural Networks. And needless to say, I am learning loads.


I would like to emphasize that though I have implemented a signature verification system using Neural Nets, I am by no means good with them; I can be classified as a beginner. The tool that I am more comfortable with is Support Vector Machines.

I have been wanting to know more about them for some years now, but I never really got the time, or you could say the opportunity. Now that I can invest some time, I am glad I came across this course. So far I have been able to look at 7 lectures, and I should say that I am more than happy with the course. I think it is very detailed and extremely well suited to the beginner as well as the expert.

The instructor is H. Sebastian Seung, professor of computational neuroscience at MIT.

The course has 25 lectures, each packed with a great amount of information, which means the lectures might work slowly for those who are not very familiar with this material.

The video lectures can be accessed over here. I must admit that I am a little disappointed that these lectures are not available on YouTube, because the downloads are rather large in size. But I found them worth it anyway.

The lectures cover the following:

Lecture 1: Classical neurodynamics
Lecture 2: Linear threshold neuron
Lecture 3: Multilayer perceptrons
Lecture 4: Convolutional networks and vision
Lecture 5: Amplification and attenuation
Lecture 6: Lateral inhibition in the retina
Lecture 7: Linear recurrent networks
Lecture 8: Nonlinear global inhibition
Lecture 9: Permitted and forbidden sets
Lecture 10: Lateral excitation and inhibition
Lecture 11: Objectives and optimization
Lecture 12: Excitatory-inhibitory networks
Lecture 13: Associative memory I
Lecture 14: Associative memory II
Lecture 15: Vector quantization and competitive learning
Lecture 16: Principal component analysis
Lecture 17: Models of neural development
Lecture 18: Independent component analysis
Lecture 19: Nonnegative matrix factorization. Delta rule.
Lecture 20: Backpropagation I
Lecture 21: Backpropagation II
Lecture 22: Contrastive Hebbian learning
Lecture 23: Reinforcement Learning I
Lecture 24: Reinforcement Learning II
Lecture 25: Review session

The good thing is that I have formally studied most of the material after Lecture 13, but going by the quality of the lectures so far (the first 7), I would not mind seeing it again.

Quick Links:

Course Home Page.

Course Video Lectures.

Prof H. Sebastian Seung’s Homepage.

_____

Visualization:

This is a Harvard course. I don’t know when I’ll get the time to have a look at it, but it sure looks extremely interesting, and I am sure a number of people would be interested in having a look. It looks like a course that could be covered pretty quickly, actually.


The course description says the following:

The amount and complexity of information produced in science, engineering, business, and everyday human activity is increasing at staggering rates. The goal of this course is to expose you to visual representation methods and techniques that increase the understanding of complex data. Good visualizations not only present a visual interpretation of data, but do so by improving comprehension, communication, and decision making.

In this course you will learn how the human visual system processes and perceives images, good design practices for visualization, tools for visualization of data from a variety of fields, collecting data from web sites with Python, and programming of interactive visualization applications using Processing.

The topics covered are:

  • Data and Image Models
  • Visual Perception & Cognitive Principles
  • Color Encoding
  • Design Principles of Effective Visualizations
  • Interaction
  • Graphs & Charts
  • Trees and Networks
  • Maps & Google Earth
  • Higher-dimensional Data
  • Unstructured Text and Document Collections
  • Images and Video
  • Scientific Visualization
  • Medical Visualization
  • Social Visualization
  • Visualization & The Arts

Quick Links:

Course Home Page.

Course Syllabus.

Lectures, Slides and other materials.

Video Lectures

_____

Advanced AI Techniques:

This is one course that I will be looking at parts of after I have covered the course on Neural Nets. I am yet to glance at the first lecture or the materials, so I cannot say what they will be like. But I sure am expecting a lot from them, going by the topics they cover.

The topics covered in a broad sense are:

  • Bayesian Networks
  • Statistical NLP
  • Reinforcement Learning
  • Bayes Filtering
  • Distributed AI and Multi-Agent systems
  • An Introduction to Game Theory

Quick Link:

Course Home.

_____

Astrophysical Chemistry:

I don’t know if I will be able to squeeze in time for these. But because of my amateurish interest in chemistry (if I were not an electrical engineer, I would have been into Chemistry), and because I have very high regard for Dr Harry Kroto (who is delivering them), I will try to make it a point to have a look at them. I think I’ll skip the gym for some days to do so. ;-)


[Nobel Laureate Harry Kroto with a Bucky-Ball model – Image Source : richarddawkins.net]

Quick Links:

Dr Harold Kroto’s Homepage.

Astrophysical Chemistry Lectures

_____



In the past month or so I have been looking at a series of lectures on Data Mining that I had long bookmarked. I’ve had a look at the lectures twice and I found them extremely useful, hence I thought it was not a bad idea to share them here, though I am aware that they are pretty old and rather well circulated.

These lectures, delivered by Professor David Mease as Google Tech Talks/Stanford Stat202 course lectures, work equally well for beginners and for experts who need to brush up on basic ideas. The course uses R extensively.

Statistical Aspects of Data Mining

Links:

Course Video Lectures.

Course website.

Lecture Slides.

_____

I’d end with some Dilbert strips on Data-Mining that I have liked in the past!




I recently discovered a series of three lectures by the legendary physicist Hans Bethe, given in 1999. Bethe was a professor at Cornell University for almost all his life, and these lectures, given at age 93, were made public by the University quite a while ago.


[Hans Bethe at the blackboard at Cornell in 1967: Image Source and Copyright – Cornell University ]

These lectures are on Quantum theory, for the expert and the non-expert alike. Due to some engagements I am yet to view them; however, I am posting them anyway, as I am sure that, given by Bethe himself, they will be great.

From the Cornell University Webpage for these lectures:

In 1999, legendary theoretical physicist Hans Bethe delivered three lectures on quantum theory to his neighbors at the Kendal of Ithaca retirement community (near Cornell University). Given by Professor Bethe at age 93, the lectures are presented here as QuickTime videos synchronized with slides of his talking points and archival material.

Intended for an audience of Professor Bethe’s neighbors at Kendal, the lectures hold appeal for experts and non-experts alike. The presentation makes use of limited mathematics while focusing on the personal and historical perspectives of one of the principal architects of quantum theory whose career in physics spans 75 years.

A video introduction and appreciation are provided by Professor Silvan S. Schweber, the physicist and science historian who is Professor Bethe’s biographer, and Edwin E. Salpeter, the J. G. White Distinguished Professor of Physical Science Emeritus at Cornell, who was a post-doctoral student of Professor Bethe.


Introduction


View Introduction (Quick Time Required)

The introduction has been given by Edwin E. Salpeter and Silvan S. Schweber.


Lecture 1


View Lecture 1 (Quick Time Required)

Lecture 2


View Lecture 2 (Quick Time Required)

Lecture 3


View Lecture 3 (Quick Time Required)

Appreciation


View Appreciation (Quick Time Required)

Note: All the images above, and also the text introducing the lectures, are copyright of Cornell. Please comply with the terms of use associated with them.

Links:

1. Download Apple Quick Time



Edit (November 05, 2011): Note that this post was made over THREE years ago. At that time this was the only comprehensive Machine Learning course available online. Since then the situation has changed: Professor Andrew Ng’s course has been offered online for everyone, and many other courses have also become available. Find some of these at the bottom of this post.

Just two weeks ago I posted a few lectures on Machine Learning, Learning Theory, Kernel Methods etc. in this post. Since then my friend and guide Sumedh Kulkarni informed me of a new course on Machine Learning on the Stanford University YouTube channel. I have also indexed this channel in my post on Video Lectures.

I have already seen half of it, and though it covers a very broad range and is meant to be a first course on Machine Learning, it is in my opinion the best course on the web on the subject. Most others I find hard to follow because of either the instructor’s poor English, bad recording quality, or both.

The course is taught by Dr Andrew Ng, who has a great deal of experience teaching this course and working in Robotics, AI and Machine Learning in general. Incidentally, he has been the advisor of PhD candidate Ashutosh Saxena, whose research papers we used for a previous project on pattern recognition.

Dr Ng’s deep knowledge of the field can be felt just minutes into the first lecture, which he makes even more interesting with his good communication skills and his knack for making lectures more exciting and intuitive by adding fun videos in between.

The course details are as follows.

Course: Machine Learning (CS 229). It can be accessed over here: Stanford Machine Learning.

Instructor: Dr Andrew Ng.

Course Overview:

Lecture 1: Overview of the broad field of Machine Learning.

Lecture 2: Linear regression, Gradient descent, and normal equations and discussion on how they relate to machine learning.

Lecture 3: Locally weighted regression, Probabilistic interpretation and Logistic regression.

Lecture 4: Newton’s method, exponential families, and generalized linear models.

Lecture 5: Generative learning algorithms and Gaussian discriminant analysis.

Lecture 6: Applications of naive Bayes, neural networks, and support vector machine.

Lecture 7: Optimal margin classifiers, KKT conditions, and SVM duals.

Lecture 8: Support vector machines, including soft margin optimization and kernels.

Lecture 9: Learning theory, covering bias, variance, empirical risk minimization, union bound and Hoeffding’s inequalities.

Lecture 10: VC dimension and model selection.

Lecture 11: Bayesian statistics, regularization, a digression on online learning, and applications of machine learning algorithms.

Lecture 12: Unsupervised learning in the context of clustering, Jensen’s inequality, mixture of Gaussians, and expectation-maximization.

Lecture 13: Expectation-maximization in the context of the mixture of Gaussians and naive Bayes models, as well as factor analysis.

Lecture 14: Factor analysis and expectation-maximization steps, continuing on to principal component analysis (PCA).

Lecture 15: Principal component analysis (PCA) and independent component analysis (ICA) in relation to unsupervised machine learning.

Lecture 16: Reinforcement learning, focusing particularly on MDPs, value functions, and policy and value iteration.

Lecture 17: Reinforcement learning, focusing particularly on continuous state MDPs, discretization, and policy and value iterations.

Lecture 18: State action rewards, linear dynamical systems in the context of linear quadratic regulation, models, and the Riccati equation, and finite horizon MDPs.

Lecture 19: Debugging process, linear quadratic regulation, Kalman filters, and linear quadratic Gaussian in the context of reinforcement learning.

Lecture 20: POMDPs, policy search, and Pegasus in the context of reinforcement learning.

Course Notes: CS 229 Machine Learning.

My gratitude to Stanford and Prof Andrew Ng for providing this wonderful course to the general public.

Other Machine Learning Video Courses:

1. Tom Mitchell’s Machine Learning Course.

Related Posts:

1. Demystifying Support Vector Machines for Beginners. (Papers, Tutorials on Learning Theory, Machine Learning)

2. Video Lectures.



Though I have worked on pattern recognition in the past, I have always wanted to work with Neural Networks for it. However, for some reason or the other I could never do so; I could not even take it as an elective subject due to some constraints. Over the last two years or so I have been promising (and ordering) myself to stick to a schedule and study ANNs properly; however, due to a combination of procrastination, overwork and bad planning I have never been able to do anything with them.

However, I have now got the opportunity to work with Support Vector Machines, and over the past while I have been reading extensively on them and trying to start playing with them. Now that the actual implementation and work is set to start, I am pretty excited. It is nice that I get to work with SVMs even though I could not with ANNs.

The Support Vector Machine is a classifier derived from statistical learning theory by Vladimir Vapnik and his co-workers. Though the foundations were laid by him as early as the 1970s, SVMs shot to prominence when, using pixel maps as input, they gave accuracy comparable to sophisticated Neural Networks with elaborate features on a handwriting recognition task.

Traditionally, Neural Network based approaches have suffered some serious drawbacks, especially with generalization, producing models that can overfit the data. SVMs embody the structural risk minimization principle, which has been shown to be superior to the empirical risk minimization that neural networks use. This difference gives SVMs a greater ability to generalize.
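As a concrete illustration of that trade-off, here is a minimal scikit-learn sketch (the synthetic dataset and the parameter values are arbitrary, chosen only for illustration): the parameter C controls how much the optimizer may sacrifice margin in order to reduce training error.

# Minimal illustrative sketch with scikit-learn; the dataset and parameter
# values are arbitrary. A small C favours a large margin (capacity control, in
# the spirit of structural risk minimization); a large C favours fitting the
# training data more closely (empirical risk).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="rbf", C=C, gamma="scale").fit(X_train, y_train)
    print(f"C={C:>6}: test accuracy = {clf.score(X_test, y_test):.3f}")

Sweeping C like this is a quick way to see capacity control at work: very small C typically underfits, while very large C tends toward overfitting.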

However, learning how to work with SVMs can be challenging and somewhat intimidating at first. When I started reading on the topic I picked up Vapnik’s books on the subject but could not make much head or tail of them; I could only attain a certain degree of understanding, nothing more. When specializing in something, I do well when I start off as a generalist, with a reasonably correct idea of what is going on in general. Once that initial know-how makes me comfortable, I move on to the mathematics, which gives the profound understanding, as anything without mathematics is meaningless. However, most books that I came across missed that first point for me, and it was very difficult to make a headstart. There was one book, which I could read in two days, that helped me get that general picture quite well, and I would highly recommend it to most who are starting with SVMs. The book is titled Support Vector Machines and other Kernel Based Learning Methods and is authored by Nello Cristianini and John Shawe-Taylor.

I would highly recommend that people who are starting with Support Vector Machines buy this book. It can be obtained easily on Amazon.

This book has a relatively light mathematical treatment, but it makes the ideas involved clear and gets a person studying from it to think clearly before refining his/her understanding with something mathematically heavier. I would also highly recommend Support Vector Machines for Pattern Classification by Shigeo Abe.

Another book that I highly recommend is Learning with Kernels by Bernhard Scholkopf and Alexander Smola. Perfect book for beginners.

Only after one has covered the required material from these would I suggest Vapnik’s books, which then work wonderfully well.

Other than the books there are a number of Video Lectures and tutorials on the Internet that can work as well!

Below is a listing of a large number of good tutorials on the topic. I don’t intend to flood a person who is just starting out with too much information, so wherever possible I have described what each document covers, so that one can decide what should suffice on the basis of need. Also, I have star-marked some of the items: these are the ones that I have seen and studied from personally and found most helpful, and I am sure they would work the same way for both beginners and people with reasonable experience alike.

Webcasts/ Video Lectures on Learning Theory, Support Vector Machines and related ideas:

EDIT: For those interested, I have posted about a course on Machine Learning provided by Stanford University; it too is suited for an introduction to Support Vector Machines. Please find the post here. Also, this comment might be helpful; additions to it based on your own learning journey are welcome.

1. *Machine Learning Workshop, University of California at Berkeley. This series covers most of the basics required. Beginners can skip the sessions on Bayesian models and Manifold Learning.

Workshop Outline:

Session 1: Classification.

Session 2: Regression.

Session 3: Feature Selection

Session 4: Diagnostics

Session 5: Clustering

Session 6: Graphical Models

Session 7: Linear Dimensionality Reduction

Session 8: Manifold Learning and Visualization

Session 9: Structured Classification

Session 10: Reinforcement Learning

Session 11: Non-Parametric Bayesian Models

2. Washington University. Beginners might be interested in the sole talk on the topic of Supervised Learning for Computer Vision Applications, or maybe in the talk on Dimensionality Reduction.

3. Reinforcement Learning, Universitat Freiburg.

4. Deep Learning Workshop. Good talks, but I’d say these are meant only for the highly interested.

5. *Introduction to Learning Theory, Olivier Bousquet.

This tutorial focuses more on the “larger picture” than on mathematical proofs; it is not restricted to statistical learning theory, however. The course comprises five lectures and is quite good to watch. The Frenchman is both smart and fun!

6. *Statistical Learning Theory, Olivier Bousquet. This course gives a detailed introduction to Learning Theory with a focus on the Classification problem.

Course Outline:

Probabilistic and Concentration inequalities, Union Bounds, Chaining, Measuring the size of a function class, Vapnik Chervonenkis Dimension, Shattering Dimensions and Rademacher averages, Classification with real valued functions.

7. *Statistical Learning Theory, Olivier Bousquet. This is not a repeat of the above course; it is actually a more recent lecture series. This course has six lectures. Another excellent set.

Course Outline:

Learning Theory: Foundations and Goals

Learning Bounds: Ingredients and Results

Implications: What to conclude from bounds

8. Advanced Statistical Learning Theory, Olivier Bousquet. This set of lectures complements the above courses on statistical learning theory and gives a more detailed exposition of the current advancements in the same. This course has three lectures.

Course Outline:

PAC Bayesian bounds: a simple derivation, comparison with Rademacher averages, Local Rademacher complexity with classification loss, Talagrand’s inequality. Tsybakov noise conditions, Properties of loss functions for classification (influence on approximation and estimation, relationship with noise conditions), Applications to SVM – Estimation and approximation properties, role of eigenvalues of the Gram matrix.

9. *Statistical Learning Theory, John Shawe-Taylor, University of London. One plus point of this course is that it has some good English. Don’t miss this lecture, as it is given by the same professor whose book we just discussed.

10. *Learning with Kernels, Bernhard Scholkopf.

This course covers the basics for Support Vector Machines and related Kernel methods. This course has six lectures.

Course Outline:

Kernel and Feature Spaces, Large Margin Classification, Basic Ideas of Learning Theory, Support Vector Machines, Other Kernel Algorithms.

11. Kernel Methods, Alexander Smola, Australian National University. This is an advanced course compared to the above and covers exponential families, density estimation, and conditional estimators such as Gaussian Process classification, regression, and conditional random fields, as well as moment matching techniques in Hilbert space that can be used to design two-sample tests and independence tests in statistics.

12. *Introduction to Kernel Methods, Bernhard Scholkopf. There are four parts to this course.

Course Outline:

Kernels and Feature Space, Large Margin Classification, Basic Ideas of Learning Theory, Support Vector Machines, Examples of Other Kernel Algorithms.

13. Introduction to Kernel Methods, Partha Niyogi.

14. Introduction to Kernel Methods, Mikhail Belkin, Ohio State University. This lecture follows on from the above.

15. *Kernel Methods in Statistical Learning, John Shawe-Taylor.

16. *Support Vector Machines, Chih-Jen Lin, National Taiwan University. Easily one of the best talks on SVMs. Almost like a run-down tutorial.

Course Outline:

Basic concepts for Support Vector Machines, training and optimization procedures of SVM, Classification and SVM regression.

17. *Kernel Methods and Support Vector Machines, Alexander Smola. A comprehensive six-lecture course.

Course Outline:

Introduction of the main ideas of statistical learning theory, Support Vector Machines, Kernel Feature Spaces, An overview of the applications of Kernel Methods.

Additional Courses:

1. Basics of Probability and Statistics for Machine Learning, Mikaela Keller.

This course covers most of the basics that would be required for the above courses. However, the recording quality is sometimes a little shaky. This talk seems to be the most popular on the video lectures site; one major reason, in my opinion, is that the lady delivering the lecture is quite pretty!

2. Some Mathematical Tools for Machine Learning, Chris Burges.

3. Machine Learning Laboratory, S.V.N Vishwanathan.

4. Machine Learning Laboratory, Chrisfried Webers.

Introductory Tutorials (PDF/PS):

1. *Support Vector Machines with Applications (Statistical Science). Click here >>

2. *Support Vector Machines (Marti Hearst, UC Berkeley). Click Here >>

3. *Support Vector Machines- Hype or Hallelujah (K. P. Bennett, RPI). Click Here >>

4. Support Vector Machines and Pattern Recognition (Georgia Tech). Click Here >>

5. An Introduction to Support Vector Machines in Data Mining (Georgia Tech). Click Here >>

6. University of Wisconsin at Madison CS 769 (Zhu). Click Here >>

7. Generalized Support Vector Machines (Mangasarian, University of Wisconsin at Madison). Click Here >>

8. *A Practical Guide to Support Vector Classification (Hsu, Chang, Lin, Via U-Michigan Ann Arbor). Click Here >>

9. *A Tutorial on Support Vector Machines for Pattern Recognition (Christopher J.C Burges, Bell Labs Lucent Technologies, Data mining and knowledge Discovery). Click Here >>

10. Support Vector Clustering (Hur, Horn, Siegelmann, Journal of Machine Learning Research. Via MIT). Click Here >>

11. *What is a Support Vector Machine (Noble, MIT). Click Here >>

12. Notes on PCA, Regularization, Sparsity and Support Vector Machines (Poggio, Girosi, MIT Dept of Brain and Cognitive Sciences). Click Here >>

13. *CS 229 Lecture Notes on Support Vector Machines (Andrew Ng, Stanford University). Click Here >>

Introductory Slides (mostly lecture slides):

1. Support Vector Machines in Machine Learning (Arizona State University). Click here >>

Lecture Outline:

What is Machine Learning, Solving the Quadratic Programs, Three very different approaches, Comparison on medium and large sets.

2. Support Vector Machines (Arizona State University). Click Here >>

Lecture Outline:

The Learning Problem, What do we know about test data, The capacity of a classifier, Shattering, The Hyperplane Classifier, The Kernel Trick, Quadratic Programming.

3. Support Vector Machines, Linear Case (Jieping Ye, Arizona State University). Click Here >>

Lecture Outline:

Linear Classifiers, Maximum Margin Classifier, SVM for Separable data, SVM for non-separable data.

4. Support Vector Machines, Non Linear Case (Jieping Ye, Arizona State University). Click Here >>

Lecture Outline:

Non Linear SVM using basis functions, Non-Linear SVMs using Kernels, SVMs for Multi-class Classification, SVM path, SVM for unbalanced data.

5. Support Vector Machines (Sue Ann Hong, Carnegie Mellon). Click Here >>

6. Support Vector Machines (Carnegie Mellon University Machine Learning 10701/15781). Click Here >>

7. Support Vector Machines and Kernel Methods (CMU). Click Here >>

8. SVM Tutorial (Columbia University). Click Here >>

9. Support Vector Machines (Via U-Maryland at College Park). Click Here >>

10. Support Vector Machines: Algorithms and Applications (MIT OCW). Click Here >>

11. Support Vector Machines (MIT OCW). Click Here >>

Papers/Notes on some basic related ideas (no esoteric research papers here):

1. Robust Feature Induction for Support Vector Machines (Arizona State University). Click Here >>

2. Hidden Markov Support Vector Machines (Brown University). Click Here >>

3. *Training Data Set for Support Vector Machines (Brown University). Click Here >>

4. Support Vector Machines are Universally Consistent (Journal Of Complexity). Click Here >>

5. Feature Selection for Classification of Variable Length Multi-Attribute Motions (Li, Khan, Prabhakaran). Click Here >>

6. Selecting Data for Fast Support Vector Machine Training (Wang, Neskovic, Cooper). Click Here >>

7. *Normalization in Support Vector Machines (Caltech). Click Here >>

8. The Support Vector Decomposition Machine (Pereira, Gordon, Carnegie Mellon). Click Here >>

9. Semi-Supervised Support Vector Machines (Bennett, Demiriz, RPI). Click Here >>

10. Supervised Clustering with Support Vector Machines (Finley, Joachims, Cornell University). Click Here >>

11. Metric Learning: A Support Vector Approach (Cornell University). Click Here >>

12. Training Linear SVMs in Linear Time (Joachims, Cornell University). Click Here >>

13. *Rule Extraction from Linear Support Vector Machines (Fung, Sandilya, Rao, Siemens Medical Solutions). Click Here >>

14. Support Vector Machines, Reproducing Kernel Hilbert Spaces and Randomized GACV (Wahba, University of Wisconsin at Madison). Click Here >>

15. The Mathematics of Learning: Dealing with Data (Poggio, Girosi, AI Lab, MIT). Click Here >>

16. Training Invariant Support Vector Machines (Decoste, Scholkopf, Machine Learning). Click Here >>

*As I have already mentioned above, the star-marked courses/lectures/tutorials/papers are the ones that I have seen and studied from personally (and hence can vouch for), and these in my opinion should work best for beginners.


