
Archive for the ‘Video Lectures’ Category

The second post in a series on Information Theory/Learning-based perspectives on evolution, which started with the last post.

Although the last post was mostly historical, it had a section reviewing the main motivation for some work in metabiology due to Chaitin (now published as a book). The starting point of that work was to view evolution solely through an information-processing lens (hence the use of Algorithmic Information Theory). Of course, this lens by itself is not a recent acquisition and goes back a few decades (although in hindsight, the fact that it goes back just a few decades is very surprising, to me at least). To illustrate this, I wanted to share some analogies by John Maynard Smith (perhaps one of my favourite scientists), which I found particularly incisive and clear. To avoid clutter, they are shared here instead (note that most of the stuff he talks about is something we study in high school; however, the talk is quite good, especially because it emphasizes the centrality of information throughout). I also want this post to act as a reference for some upcoming posts.

Coda:

Molecular Biology is all about Information. I want to be a little more general than that; the last century, the 19th century, was a century in which Science discovered how energy could be transformed from one form to another […] This century will be seen […] where it became clear that information could be translated from one form to another.

[Other parts: Part 2, Part 3, Part 4, Part 5, Part 6]

Throughout this talk he gives wonderful analogies on how information translation underlies the so-called Central Dogma of Molecular Biology, and on the implications of the translation being one-way at some stages (e.g. how August Weismann noted that acquired characters are not inherited, giving a “Chinese telegram translation analogy”: since there was no mechanism to translate acquired traits (acquired information) back into the organism’s genes, they could not be propagated).

However, the most important point from the talk: one could see evolution as being punctuated by about six or so major changes or shifts, each marked by a change in the way information was stored and processed. Some that he talks about are:

1. The origin of replicating molecules.

2. The Evolution of chromosomes: Chromosomes are just strings of the above replicating molecules, with the property that when one of these molecules is replicated, the others have to be as well. The utility of this is the following: since they are all separate genes, they might have different rates of replication, and the gene that replicates fastest would soon outnumber all the others, and all the information would be lost. This transition thus underlies a kind of evolution of cooperation between replicating molecules; in other words, chromosomes are a way of forcing cooperation between genes.

3. The Evolution of the Code: That information in the nucleic acids could be translated to sequences of amino acids, i.e. proteins.

4. The Origin of Sex: The evolution of sex is considered an open question. However, one argument goes that the fact that sexual reproduction hastens the acquisition of information from the environment (as compared to asexual reproduction) explains why it should evolve (details in one of the next posts).

5. The Evolution of multicellular organisms: A large, complex signalling system had to evolve for the different kinds of cells (such as muscle cells or neurons, to name some in humans) to function properly in an organism.

6. Transition from solitary individuals to societies: What made these societies of individuals (ants, humans) possible at all? If we stick to humans, this could only have happened if there was a new way to transmit information from generation to generation – and one such information-transducing machine is language! Language thus gives an additional mechanism for transmitting information across generations besides the genetic mechanisms (he compares the replication of nucleic acids with the passage of information by language). This momentous event (the evolution of language) was itself dependent on genetics. With the evolution of language, other things came by: writing, memes etc., which reproduce, self-replicate, mutate and pass on, accelerating the process of evolution. He ends by saying this stage of evolution could perhaps be as profound as the evolution of language itself.

________________

As a side comment: I highly recommend the following interview of John Maynard Smith as well. I rate it higher than the above lecture, although it is sort of unrelated to the topic.

________________

Interesting books to perhaps explore:

1. The Major Transitions in Evolution: John Maynard Smith and Eörs Szathmáry.

2. The Evolution of Sex: John Maynard Smith (more on this theme in later blog posts, mostly related to learning and information theory).

________________


Deep Learning reads Wikipedia and discovers the meaning of life – Geoff Hinton.

The above quote is from a very interesting talk by Geoffrey Hinton I had the chance to attend recently.

I have been at a summer school on Deep Neural Nets and Unsupervised Feature Learning at the Institute for Pure and Applied Mathematics at UCLA since July 9 (till July 27). It has been organized by Geoff Hinton, Yoshua Bengio, Stan Osher, Andrew Ng and Yann LeCun.

I have always been a “fan” of Neural Nets, and the recent spike in interest in them has me excited, so the school came at just the right time. The objective of the summer school is to give a broad overview of some of the recent work in Deep Learning and Unsupervised Feature Learning, with emphasis on optimization, deep architectures and sparse representations. I must add that after getting here and looking at the peer group, I consider myself lucky to have obtained funding for the event!

[Click on the above image to see slides for the talks. Update: videos are now available at the same location.]

That aside, if you are interested in Deep Learning or Neural Networks in general, the slides for the talks are being uploaded over here (or click on the image above). Videos will be added at the same location some time after the summer school ends, so you might like to bookmark the link.

The school has been interesting given the wide range of people who are here. The diversity of opinions about Deep Learning has given a good perspective on the subject, its issues and its strengths. Quite a few attendees are somewhat skeptical of deep learning but curious, while some have been actively working on it for a while. It has also been enlightening to see completely divergent views between some of the speakers on key ideas such as sparsity. For example, Geoff Hinton had a completely different view of why sparsity is useful in classification tasks from Stéphane Mallat, who gave a very interesting talk today, even joking that “Hinton and Yann LeCun told you why sparsity is useful, I’ll tell you why sparsity is useless.” See the above link for more details.

Indeed, such opinions do tell you that there is a lot of fecund ground for research in these areas.

I have been compiling a reading list on some of this stuff and will make a blog-post on the same soon.

________________


Changing or increasing functionality of circuits in biological evolution is a form of computational learning. – Leslie Valiant

The title of this post comes from Prof. Leslie Valiant‘s ACM A. M. Turing Award lecture, titled “The Extent and Limitations of Mechanistic Explanations of Nature”.

Prof. Leslie G. Valiant

Click on the image above to watch the lecture

[Image Source: CACM “Beauty and Elegance”]

Short blurb: Though the lecture came out sometime in June–July 2011, and though I have shared it (and a paper it quotes) on every online social network where I have a presence, I have no idea why I never blogged about it.

The fact that I have zero training in (and epsilon knowledge of) biology has not stopped me from being completely fascinated by the contents of the talk and a few papers that he cites in it. I have watched the lecture a few times and have also started to read and understand some of the papers he mentions. In fact, the talk has inspired me to learn more about PAC Learning than the usual Machine Learning graduate course might cover. Knowing more about it is now my “full-time side-project”, and it is a very exciting side-project to say the least!

_________________________

Getting back to the title: One of the motivating questions about this work is the following:

It is widely accepted that Darwinian Evolution has been the driving force behind the immense complexity observed in life. In this beautiful 10 minute video, Carl Sagan sums up the timeline and the progression:

There is however one problem: while evolution is considered the driving force behind such complexity, there isn’t a satisfactory explanation of how 13.75 billion years of it could have been enough. Many have complained that this reduces evolution to little more than an intuitive explanation. Can we understand the underlying mechanism of evolution (one that can, in turn, give reasonable time bounds)? Valiant makes the case that this underlying mechanism is computational learning.

There have been a number of computational models based on the general intuitive idea of Darwinian evolution, including Genetic Algorithms/Programming etc. However, people like Valiant, amongst others, find such methods useful in an engineering sense but unsatisfying with respect to the question above.

In the talk, Valiant mentions that this question was asked in Darwin’s day as well, to which Darwin proposed a bound of 300 million years for such evolution to occur. This immediately ran into a problem, as Lord Kelvin, one of the leading physicists of the time, put the age of the Earth at 24 million years. Obviously this was a problem, as evolution could not have been going on for more than 24 million years by Kelvin’s estimate. The estimate of the age of the Earth is now much higher. ;-)

The question can be rephrased as: how much time is enough? Can biological circuits evolve in sub-exponential time?

For more, I would point to his paper:

Evolvability: Leslie Valiant (Journal of the ACM – PDF)

Towards the end of the talk he shows a Venn diagram of the type usually seen in complexity theory textbooks for classes P, NP, BQP etc., but with one major difference: these inclusions are proven facts, not conjectures:

Fact: Evolvable \subseteq SQ-Learnable \subseteq PAC-Learnable

*SQ, or Statistical Query Learning, is due to Michael Kearns (1993).
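To make the statistical query model concrete, here is a minimal Python sketch of an SQ oracle; the function names and the toy correlation query are my own illustration, not anything from Kearns’ paper. The point is that an SQ learner never touches individual labeled examples – it only receives estimates of expectations, accurate up to a tolerance:

```python
import random

def sq_oracle(examples, psi, tau):
    """Answer a statistical query: return E[psi(x, y)] over the labeled
    sample, perturbed by at most the tolerance tau. An SQ learner may
    only interact with the data through calls like this one."""
    exact = sum(psi(x, y) for x, y in examples) / len(examples)
    return exact + random.uniform(-tau, tau)

# A typical query: the correlation of feature i with the label,
# for x in {-1, +1}^n and y in {-1, +1}.
def correlation_query(i):
    return lambda x, y: x[i] * y
```

One known payoff of this restriction: anything learnable from statistical queries is automatically learnable in the presence of classification noise, which is part of what makes placing evolvability inside SQ meaningful.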

Coda: Valiant claims that the problem of evolution is no more mysterious than the problem of learning. The mechanism that underlies biological evolution is “evolvable target pursuit”, which in turn is the same as “learnable target pursuit”.

_________________________


One interesting project that I am involved in these days concerns certain problems in Intelligent Tutors. It turns out that perhaps one of the best ways to tackle them is by using Conditional Random Fields (CRFs); many attempts at solving these problems still involve Hidden Markov Models (HMMs). Since I have never really been a Graphical Models guy (though I am always fascinated by them), I found the going quite difficult while studying CRFs. Now that the survey is more or less over, here are my suggestions on how a beginner might go about learning them.

Tutorials and Theory

1. Log-Linear Models and Conditional Random Fields (Tutorial by Charles Elkan)


Log-linear Models and Conditional Random Fields – Charles Elkan (6 videos: click on the image above to view)

Two directions of approaching CRFs are especially useful for getting a good perspective on their use: one is to consider CRFs as an alternative to Hidden Markov Models (HMMs); the other is to think of CRFs as building on logistic regression.

This tutorial approaches CRFs from the second direction and is easily one of the most basic around. Most people interested in CRFs will of course be familiar with the ideas of maximum likelihood, logistic regression etc. The tutorial does a good job, starting from the absolute basics – logistic regression for a two-class problem – and working up to the more general machine learning problem of structured outputs. I tried reading a few tutorials before this one, but found this to be the most comprehensive and the best place to start. It seems, however, that there is one lecture missing from the series, which (going by the notes) covered more training algorithms.
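To see concretely how a CRF “builds over” logistic regression, here is a small sketch of the log-linear form they share, p(y|x) proportional to exp(w · f(x, y)); the code and the two-class feature function are my own illustration, not from the tutorial:

```python
import numpy as np

def log_linear_probs(w, feature_fn, x, labels):
    """p(y | x) proportional to exp(w . f(x, y)): the log-linear form
    shared by logistic regression and CRFs. In a linear-chain CRF the
    labels are whole tag sequences, so the normalizing sum is computed
    with the forward algorithm instead of by enumeration."""
    scores = np.array([w @ feature_fn(x, y) for y in labels])
    scores -= scores.max()          # subtract the max for numerical stability
    expd = np.exp(scores)
    return expd / expd.sum()

# Recovering two-class logistic regression as a special case:
# with f(x, y) = y * x and labels {0, 1}, p(1 | x) = sigmoid(w @ x).
w = np.array([0.5, -1.0])
f = lambda x, y: y * x
print(log_linear_probs(w, f, np.array([2.0, 1.0]), labels=[0, 1]))
```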

2. Survey Papers on Relational Learning

These are not really tutorials on CRFs, but discuss sequential learning in general. For beginners, these surveys are useful for clarifying the range of problems in which CRFs might be useful, while also briefly discussing other methods for the same problems. I would recommend these two to help put CRFs in perspective within the broader machine learning sub-area of Relational Learning.

— Machine Learning for Sequential Learning: A Survey (Thomas Dietterich)

PDF

This is a very broad survey that talks of sequential learning, defines the problem, and describes some of the most-used methods.

— An Introduction to Structured Discriminative Learning (R Memisevic)

PS

This tutorial is like the above, but focuses more on comparing CRFs with large-margin methods such as SVMs, giving yet another interesting perspective on where CRFs fit.

3. Comprehensive CRF Tutorial (Andrew McCallum and Charles Sutton)

 PDF

This is the most compendious tutorial available for CRFs. While it claims to start from the bare-bones basics, I found it hard as a starting point and took it up third (after the above two). It is potentially the starting and ending point for a more advanced Graphical Models student. It is extensive (90 pages) and leaves you feeling comfortable with CRFs when done. It is definitely the best tutorial available, though by no means the easiest place to start if you have never done any sequential learning before.

This might be considered an extension to the above tutorial, by McCallum et al.: CRFs for Relational Learning (PDF)

4. Original CRF Paper (John Lafferty et al.)

PDF

Though not necessary for learning CRFs, given the many better tutorials available, this paper is still recommended, being the first on CRFs.

5. Training/Derivations (Rahul Gupta)

PDF

This report is good for the various training methods and for working through the associated derivations.

6. Applications to Vision (Nowozin/Lampert)

If your primary focus is using structured prediction in Computer Vision/Image Analysis then a good tutorial (with a large section on CRFs) can be found over here:

Structured prediction and learning in Computer Vision (Foundations and Trends Volume).

PDF

___________________

Extensions to the CRF concept

There are a number of extensions to CRFs. The two that I have found most helpful in my work are (these are easy to follow given the above):

1. Hidden State Conditional Random Fields (H CRF)

2. Latent Dynamic Conditional Random Fields (LDCRF)

Both of these extensions work to include hidden variables in the CRF framework.

___________________

Software Packages

1. Kevin Murphy’s CRF toolbox (MATLAB)

2. MALLET (I haven’t used MALLET, it is Java based)

3. HCRF – LDCRF Library (MATLAB, C++, Python). As the name suggests, this package is for HCRFs and LDCRFs, though it can be used as a standalone package for CRFs as well.


I am a big fan and collector of the BBC Horizon documentaries, and I was pleasantly surprised to find an old one (probably from around the year Horizon started, though I think this is from 1966) that I didn’t know existed till two weeks ago. It is on the exciting discovery of the \Omega^- (Omega-minus) particle and features Richard Feynman and Murray Gell-Mann. Like the old Horizon documentaries, it is more technical, but at the same time more raw and exciting, and is worth watching for its historical significance and age if nothing else. Definitely a collector’s item!

________________

Strangeness Minus Three (BBC Horizon, 1964/6)

Total Runtime: 41:20

[Part 1 | Part 2 | Part 3]

[Alternative Link]

________________


The first part is just to motivate this upcoming Stanford video series.

Deep Learning? Supervised learning is the process where an entity has to “teach” or “supervise” the learning. The learning algorithm (such as a neural network) is shown some (carefully extracted) features and is told the correct answer (training). Over time it learns a function that maps features to labels; it thus focuses on finding the class label given a set of features, i.e. P(Y|X), where Y is the class and X the features. For example, in face recognition, after we have extracted features using a technique such as PCA or ICA, the task is to use these features and label information (person name or ID etc.) to learn a function that can make predictions. But we see in everyday life that label information is not as important to learning: humans do some kind of “clustering” and generative modeling of whatever they see all the time. Given a set of objects, we tend to form a generative model of those objects and then assign labels; labels thus give very little information in actual learning. Another interesting question is how the features are learnt in the first place. Is it an unsupervised task? How can a computer learn features in an unsupervised manner?
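Before turning to that, here is an entirely hypothetical sketch of the supervised pipeline just described – PCA-extracted features followed by a learned mapping from features to labels – assuming scikit-learn, with random data standing in for face images:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))    # 200 fake "face images" of 32x32 pixels
y = rng.integers(0, 5, size=200)    # labels: 5 fake person IDs

# Feature extraction (PCA) + supervised learning of P(Y|X) in one pipeline.
model = make_pipeline(PCA(n_components=40), LogisticRegression(max_iter=1000))
model.fit(X[:150], y[:150])               # training on features + labels
print(model.score(X[150:], y[150:]))      # accuracy of the learned mapping
```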

Unsupervised Feature Learning? Now consider a task where you have to improve accuracy in classifying an image as that of an elephant or a rhino, but with a catch: you are not given any labeled examples of elephants or rhinos; indeed, you are not even given unlabeled examples of them. You are only given random images of rivers and mountains, and you have to learn from these a feature representation that can help in your task. This can be done by sparse coding, as shown by Raina et al.
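A rough sketch of that recipe – learn a dictionary from unlabeled, unrelated images, then re-represent the few labeled images as sparse codes over it – using scikit-learn’s mini-batch dictionary learner as the sparse coder; the data, sizes and parameters here are placeholders, not those of Raina et al.:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
unlabeled_patches = rng.normal(size=(1000, 64))  # patches from rivers/mountains
labeled_images = rng.normal(size=(20, 64))       # the few elephant/rhino examples

# 1. Learn a dictionary of basis functions from the unlabeled data alone.
coder = MiniBatchDictionaryLearning(n_components=128, alpha=1.0)
coder.fit(unlabeled_patches)

# 2. Re-encode the labeled data as sparse codes over that dictionary;
#    these codes become the input features for any standard classifier.
features = coder.transform(labeled_images)
print(features.shape)  # (20, 128) -- the sparse codes
```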

______________

Lectures: Recently I came across a series of lectures (a work in progress) by Professor Andrew Y. Ng on Unsupervised Feature Learning and Deep Learning. The course will help present issues such as the above to a wider audience. Though the lectures are not yet uploaded, I am really excited about them, as I had really enjoyed his CS 229 lectures a long time ago. The course needs some basic knowledge of Machine Learning, but it does brush up on the basics.

I have been working on Meta-Learning for a while, but have been getting more interested in Deep Learning methods recently, and hence am looking forward to these lectures coming online.

I wrote to Professor Ng about them, and in his opinion it will take a few months before they can be put up. I think that works fine, as I plan to work on Deep Learning in the summer and these would really help. Even now, expertise in Deep Learning methods is restricted to only a few places, so such lectures would be a great advantage.

Here is a description of the Unsupervised Feature Learning and Deep Learning course:

Machine learning has seen numerous successes, but applying learning algorithms today often means spending a long time hand-engineering the input feature representation. This is true for many problems in vision, audio, NLP, robotics, and other areas. In this course, you’ll learn about methods for unsupervised feature learning and deep learning, which automatically learn a good representation of the input from unlabeled data. You’ll also pick up the “hands-on,” practical skills and tricks-of-the-trade needed to get these algorithms to work well.

Basic knowledge of machine learning (supervised learning) is assumed, though we’ll quickly review logistic regression and gradient descent.

I hope these will be as widely viewed as the CS 229 lectures. I say that because I know they will be fantastic.

______________


This is a first for this blog, and hence worth mentioning.

I came across a paper that is to appear in the proceedings of the IEEE Conference on Computer Systems and Applications 2010. Find the paper here.

This paper cites an old post on this blog, one of the first few in fact; it is reference number [2] in the paper. It was good to know and, more importantly, a boost to keep blogging about small ideas that are otherwise unsuitable for formal presentation.

___________

Since it is lame to write just the above lines, I leave you with a couple of talks that I watched over Friday night and would highly recommend.

Some years ago, Machine Learning pioneer Geoffrey Hinton gave a talk at Google Tech Talks that became quite a hit. The talk, titled The Next Generation of Neural Networks, discusses Restricted Boltzmann Machines and how this generative approach can lead to learning complex and deep dependencies in the data.
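Since the talk centers on Restricted Boltzmann Machines, here is a bare-bones sketch of the contrastive divergence (CD-1) weight update that RBM training revolves around; the sizes, data and learning rate are placeholders, and biases are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 64, 32, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0):
    # Up: hidden probabilities and a sample, given the data vector.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Down, then up again: one Gibbs step for the "negative" statistics.
    pv1 = sigmoid(h0 @ W.T)
    ph1 = sigmoid(pv1 @ W)
    # Move toward the data statistics, away from the model's reconstruction.
    return lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))

v = (rng.random(n_visible) < 0.5).astype(float)  # a fake binary "image"
W += cd1_update(v)
```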

There was a follow-up talk recently that I had long bookmarked but just got around to seeing yesterday. This, like the previous one, is a fantastic talk, and it has completed my conversion: I will begin exploring deep learning methods. :)

Here is the talk –

Another great talk that I was looking at last night is one by Prof. Yann LeCun.

Here is the talk –

This talk is introduced by the late Sam Roweis. It feels good, at one level, to see his work preserved on the internet. I have quite enjoyed talks by him at summer schools in the past.

___________


A week ago I noticed a wonderful new documentary on YouTube, put up by none other than author and documentary film-maker Christopher Sykes. This post is about that documentary and some thoughts related to it. But before I talk about the documentary, I’ll digress for a moment and come back to it in a while.

With the exception of The Feynman Lectures on Physics Volume III and Six Not-So-Easy Pieces (neither of which I intend to read in the conceivable future), there is no book with which Feynman was involved (he never wrote one himself) that I have not had the opportunity to read. The last that I read was “Don’t You Have Time to Think“, a collection of delightful letters written by Feynman over the years (note that “Don’t You Have Time to Think” is the same book as “Perfectly Reasonable Deviations”).

Don't You Have Time To Think

A number of people, including many of Feynman’s close friends, were surprised to learn that Feynman wrote letters, and so many of them. He didn’t seem to be the kind of person who would write the kind of letters that he did. These give a very different picture of the man than a conventional biography would. Usually, collections of letters tend to be boring and drab, but I think these are an exception: they reveal him to be a genius with a human touch. I have written about Feynman before; for instance, I covered points in an earlier post which now seems to me overly enthusiastic. ;-)

Sean Carroll aptly writes that Feynman worship is often overdone, and I think he is right. Let me state my own opinion on the matter.

I don’t consider Feynman a god or anywhere close to that (though he is definitely one of my idols and a man I admire greatly); I actually consider him to be very human, someone who was unashamed of admitting his weaknesses and who had a certain love for life that’s rare. I am attracted to Feynman for one reason: people like him are a breath of fresh air amid the supercilious pseudo-intellectual snobs that abound in academia and industry. A breath of fresh air, especially for lesser mortals like me. That’s why I like the man. Why is he so famous? I have tried writing on that before, and I won’t do so anymore.

I’d like to cite two quotes that give my point of view on the celebrity-fication of scientists, in this case Feynman. Dave Brooks, writing in the Telegraph in an article titled “Physicist still leaves some all shook up” (February 5, 2003):

Feynman is the person every geek would want to be: very smart, honored by the establishment even as he won’t play by its rules, admired by people of both sexes, arrogant without being envied and humble without being pitied. In other words, he’s young Elvis, with the Earth-shaking talent transferred from larynx to brain cells and enough sense to have avoided the fat Las Vegas phase. Is such celebrity-fication of scientists good? I think so, even if people do have a tendency to go overboard. Anything that gets us thinking about science is something to be admired, whether it comes in the form of an algorithm or an anecdote.

I remember reading an essay by the legendary Freeman Dyson that said:

Science too needs its share of super heroes to bring in new talent.

These rest my case, I suppose.

_____

The only other book of Feynman’s that I have not read, and that I have wanted to read for a LONG time, is Tuva or Bust! Richard Feynman’s Last Journey. Unfortunately, I have never been able to find it.

Tuva or Bust! Richard Feynman's Last Journey

There was a BBC Horizon documentary on the same, and thankfully Christopher J. Sykes has uploaded it to YouTube.

This is a rare documentary and was the last in which Feynman appeared; it was in fact shot just days before his death. It documents the obsession of Richard Feynman and his friend Ralph Leighton with visiting an obscure place in central Asia called Tannu Tuva. During a discussion on geography, and in a teasing mood, Feynman was reminded of a long-forgotten memory and quipped at Leighton, “Whatever happened to Tannu Tuva?” Leighton thought it was a joke and confidently said that there was no such country at all. After some searching, they found out that Tannu Tuva had once been a country and was now a Soviet satellite. Its capital was “Kyzyl” – a name so interesting to Feynman that he thought he just had to go to the place. The book and the documentary cover Feynman’s and Leighton’s adventure of scheming to get to Tannu Tuva and around Soviet bureaucracy. It is an extremely entertaining film, to say the least. The ending is a little sad, though: Feynman passed away three days before the letter from the Soviets granting permission to visit Tannu Tuva arrived, and Leighton appears to be on the verge of tears.

The introduction to the documentary reads as:

The story of physicist Richard Feynman’s fascination with the remote Asian country of Tannu Tuva, and his efforts to go there with his great friend and drumming partner Ralph Leighton (co-author of the classic ‘Surely You’re Joking, Mr Feynman’). Feynman was dying of cancer when this was filmed, and died a few weeks after the filming. Originally shown in the BBC TV science series ‘Horizon’ in 1987, and also shown in the USA on PBS ‘Nova’ under the title ‘Last Journey of a Genius’

Find the five parts to the documentary below:

“I’m an explorer okay? I get curious about everything and I want to investigate all kinds of stuff”

Part 1 | Part 2 | Part 3 | Part 4 | Part 5

(Click on each image to watch.)

____

Only after I was done with the documentary did I realize that the PBS version of it had been available on Google Video for quite some time.

Find the video here.

_____

Michelle Feynman

As an aside: though Feynman could not manage to go to Tuva in his lifetime, his daughter Michelle did visit Tuva last month!

_____

One of the things that has had me in awe since watching the documentary last week is Tuvan throat singing. It is one of the most remarkable things I have come across in the past month or two. I am strongly attracted to Tibetan chants too, but these are very different and fascinating. The remarkable thing about them is that a single singer can produce two pitches at once, as if two separate singers were performing. Have a look!

_____

Project Tuva: The Character of Physical Law Lectures

On the same day, I came across the 7 lectures given by Feynman at Cornell in 1964, which were later put into a book named “The Character of Physical Law”. These have been made freely available by Microsoft Research. Some of these lectures have already been on YouTube for a while; the ones that had not were, needless to say, a joy to watch. I had linked to the lectures on Gravitation and the Arrow of Time previously.

Project Tuva – click on the above image to be directed to the lectures

I came to know of these lectures via Prof. Terence Tao’s page – I find him very inspiring too!

_____

Quick Links:

1. Christopher J. Sykes’ Youtube channel.

2. Tuva or Bust

3. Project Tuva at Microsoft Research

_____


Here are a number of interesting courses, two of which I have been looking at for the past two weeks and hope to finish by the end of August or September.

Introduction to Neural Networks (MIT):

These days, amongst the other things I have at hand (including a project on content-based image retrieval), I have been making it a point to look at an MIT course on Neural Networks. And needless to say, I am learning loads.


I would like to emphasize that though I have implemented a signature verification system using Neural Nets, I am by no means good with them; I can be classified a beginner. The tool I am more comfortable with is the Support Vector Machine.

I have been wanting to know more about them for some years now, but never really got the time – or, you could say, the opportunity. Now that I can invest some time, I am glad I came across this course. So far I have been able to look at 7 lectures, and I should say that I am more than happy with the course. I think it is very detailed and extremely well suited to the beginner as well as the expert.

The instructor is H. Sebastian Seung, professor of computational neuroscience at MIT.

The course has 25 lectures, each packed with a great amount of information; the lectures might thus go slowly for those who are not very familiar with this stuff.

The video lectures can be accessed over here. I must admit that I am a little disappointed that these lectures are not available on YouTube, because the downloads are rather large in size. But I found them worth it anyway.

The lectures cover the following:

Lecture 1: Classical neurodynamics
Lecture 2: Linear threshold neuron
Lecture 3: Multilayer perceptrons
Lecture 4: Convolutional networks and vision
Lecture 5: Amplification and attenuation
Lecture 6: Lateral inhibition in the retina
Lecture 7: Linear recurrent networks
Lecture 8: Nonlinear global inhibition
Lecture 9: Permitted and forbidden sets
Lecture 10: Lateral excitation and inhibition
Lecture 11: Objectives and optimization
Lecture 12: Excitatory-inhibitory networks
Lecture 13: Associative memory I
Lecture 14: Associative memory II
Lecture 15: Vector quantization and competitive learning
Lecture 16: Principal component analysis
Lecture 17: Models of neural development
Lecture 18: Independent component analysis
Lecture 19: Nonnegative matrix factorization. Delta rule.
Lecture 20: Backpropagation I
Lecture 21: Backpropagation II
Lecture 22: Contrastive Hebbian learning
Lecture 23: Reinforcement Learning I
Lecture 24: Reinforcement Learning II
Lecture 25: Review session

The good thing is that I have formally studied most of the material after lecture 13, but going by the quality of the lectures so far (the first 7), I would not mind seeing it again.

Quick Links:

Course Home Page.

Course Video Lectures.

Prof H. Sebastian Seung’s Homepage.

_____

Visualization:

This is a Harvard course. I don’t know when I’ll get the time to have a look at it, but it sure looks extremely interesting, and I am sure a number of people would be interested in having a look. It looks like a course that can actually be covered pretty quickly.


The course description says the following:

The amount and complexity of information produced in science, engineering, business, and everyday human activity is increasing at staggering rates. The goal of this course is to expose you to visual representation methods and techniques that increase the understanding of complex data. Good visualizations not only present a visual interpretation of data, but do so by improving comprehension, communication, and decision making.

In this course you will learn how the human visual system processes and perceives images, good design practices for visualization, tools for visualization of data from a variety of fields, collecting data from web sites with Python, and programming of interactive visualization applications using Processing.

The topics covered are:

  • Data and Image Models
  • Visual Perception & Cognitive Principles
  • Color Encoding
  • Design Principles of Effective Visualizations
  • Interaction
  • Graphs & Charts
  • Trees and Networks
  • Maps & Google Earth
  • Higher-dimensional Data
  • Unstructured Text and Document Collections
  • Images and Video
  • Scientific Visualization
  • Medical Visualization
  • Social Visualization
  • Visualization & The Arts

Quick Links:

Course Home Page.

Course Syllabus.

Lectures, Slides and other materials.

Video Lectures

_____

Advanced AI Techniques:

This is one course that I will be looking at parts of after I have covered the course on Neural Nets. I am yet to glance at the first lecture or the materials, so I cannot say what they are like. But I sure am expecting a lot from them, going by the topics they cover.

The topics covered in a broad sense are:

  • Bayesian Networks
  • Statistical NLP
  • Reinforcement Learning
  • Bayes Filtering
  • Distributed AI and Multi-Agent systems
  • An Introduction to Game Theory

Quick Link:

Course Home.

_____

Astrophysical Chemistry:

I don’t know if I will be able to squeeze in time for these. But because of my amateurish interest in chemistry (if I were not an electrical engineer, I would have been into chemistry), and because I have very high regard for Dr Harry Kroto (who is delivering them), I will try to make it a point to have a look. I think I’ll skip the gym for some days to do so. ;-)


[Nobel Laureate Harry Kroto with a Bucky-Ball model – Image Source : richarddawkins.net]

Quick Links:

Dr Harold Kroto’s Homepage.

Astrophysical Chemistry Lectures

_____


In the past month or so I have been looking at a series of lectures on Data Mining that I had long bookmarked. I’ve had a look at the lectures twice and I found them extremely useful, hence I thought it was not a bad idea to share them here, though I am aware that they are pretty old and rather well circulated.

These lectures, delivered by Professor David Mease as Google Tech Talks/Stanford Stat202 course lectures, work equally well for beginners and for experts who need to brush up on basic ideas. The course uses R extensively.

Statistical Aspects of Data Mining

Links:

Course Video Lectures.

Course website.

Lecture Slides.

_____

I’d end with some Dilbert strips on Data-Mining that I have liked in the past!

[Three Dilbert strips on data mining]

_____

