
Posts Tagged ‘Lectures’

This is the second post in a series on information-theoretic and learning-based perspectives on evolution, continuing from the last post.

Although the last post was mostly historical, it had a section reviewing the main motivation for some of Chaitin's work in metabiology (now published as a book). The starting point of that work was to view evolution purely through an information-processing lens (hence the use of Algorithmic Information Theory). Of course, this lens is not by itself a recent acquisition and goes back a few decades (although, in hindsight, the fact that it goes back only a few decades is very surprising, to me at least). To illustrate this I wanted to share some analogies by John Maynard Smith (perhaps one of my favourite scientists), which I found particularly incisive and clear. To avoid clutter, they are shared here instead (note that most of what he talks about is something we study in high school; the talk is nonetheless quite good, especially because it emphasizes the centrality of information throughout). I also want this post to act as a reference for some upcoming posts.

Coda:

Molecular Biology is all about Information. I want to be a little more general than that; the last century, the 19th century, was a century in which Science discovered how energy could be transformed from one form to another […] This century will be seen […] where it became clear that information could be translated from one form to another.

[Other parts: Part 2, Part 3, Part 4, Part 5, Part 6]

Throughout the talk he gives wonderful analogies for how information translation underlies the so-called Central Dogma of Molecular Biology, and for what it implies when the translation is one-way at certain stages (for instance, August Weismann argued that acquired characters are not inherited, illustrating the point with a "Chinese telegram translation" analogy: there is no mechanism to translate acquired traits, i.e. acquired information, back into the organism's hereditary material so that they could be propagated).

However, the most important point of the talk is this: one can see evolution as punctuated by roughly six major changes or shifts, each marked by a change in the way information is stored and processed. Some of the transitions he talks about are:

1. The origin of replicating molecules.

2. The Evolution of chromosomes: Chromosomes are just strings of the above replicating molecules, with the property that when one of the molecules is replicated, the others must be replicated as well. The utility of this is the following: if the genes replicated separately, they might do so at different rates, and the gene that replicates fastest would soon outnumber all the others, so the information carried by the rest would be lost. This transition therefore amounts to an evolution of cooperation between replicating molecules; in other words, chromosomes are a way of forcing cooperation between genes (a toy simulation of this point appears right after the list).

3. The Evolution of the Code: Information in the nucleic acids could now be translated into sequences of amino acids, i.e. proteins.

4. The Origin of Sex: The evolution of sex is considered an open question. However, one argument (details in the next post or the one after) is that sexual reproduction hastens the acquisition of information from the environment compared with asexual reproduction, which would explain why it should evolve.

5. The Evolution of multicellular organisms: A large, complex signalling system had to evolve for the different kinds of cells (such as muscle cells or neurons, to name a couple in humans) to function properly within an organism.

6. The transition from solitary individuals to societies: What made societies of individuals (ants, humans) possible at all? Sticking to humans, this could have happened only if there were a new way to transmit information from generation to generation, and one such information-transducing machine is language. Language gives an additional mechanism for transmitting information from one generation to the next besides the genetic one (he compares the genetic code and the replication of nucleic acids with the passage of information by language). This momentous event, the evolution of language, was itself dependent on genetics. With the evolution of language other things came along: writing, memes and so on, which can reproduce, mutate and be passed on, accelerating the process of evolution. He ends by saying that this stage of evolution could perhaps be as profound as the evolution of language itself.
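
As an aside, here is a toy numerical sketch (my own, not from the talk) of the argument in point 2: two replicators copying themselves independently at slightly different rates, with the faster one swamping the other within a couple of hundred generations, so that the slower gene's information is effectively lost unless the two are forced to be copied together. The gene names and rates below are made up purely for illustration.

```python
# Toy illustration of point 2: independent replicators with different
# replication rates. Gene names and rates are made-up values for illustration.
amounts = {"gene_A": 1.0, "gene_B": 1.0}   # starting amounts of the two replicators
rates = {"gene_A": 1.10, "gene_B": 1.05}   # per-generation replication rates

for _ in range(200):                       # let them replicate independently
    for gene in amounts:
        amounts[gene] *= rates[gene]

total = sum(amounts.values())
shares = {gene: round(value / total, 4) for gene, value in amounts.items()}
print(shares)  # gene_A ends up holding more than 99.9% of the population

# On a chromosome both genes are copied whenever the chromosome is copied,
# so their relative proportions stay fixed regardless of the overall rate.
```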

________________

As a side comment: I highly recommend the following interview with John Maynard Smith as well. I rate it higher than the above lecture, although it is somewhat unrelated to the topic of this post.

________________

Interesting books to perhaps explore:

1. The Major Transitions in Evolution: John Maynard Smith and Eörs Szathmáry.

2. The Evolution of Sex: John Maynard Smith (more on this theme in later blog posts, mostly related to learning and information theory).

________________


The first part of this post is just to motivate the upcoming Stanford video series.

Deep Learning? Supervised learning is the setting in which some entity has to "teach" or "supervise" the learning. The learning algorithm (such as a neural network) is shown some features (which are carefully extracted) and is then told the correct answer (training). Over time it learns a function that maps features to labels; that is, it focuses on finding the class label given a set of features, i.e. P(Y|X), where Y is the class and X the features. For example, in face recognition, after we have extracted features using a technique such as PCA or ICA, the task is to use those features and the label information (person name or ID, etc.) to learn a function that can make predictions. But everyday experience suggests that label information is not that important for learning. Humans do some kind of "clustering" and generative modeling of whatever they see, all the time: given a set of objects, we tend to form a generative model of those objects and only then assign labels, so the labels themselves contribute rather little to the actual learning. Another interesting question is how the features are learnt in the first place. Is that an unsupervised task? How can a computer learn features in an unsupervised manner?
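
To make the supervised setting concrete, here is a minimal sketch in Python of the pipeline described above: extract features with PCA, then fit a classifier that estimates P(Y|X). The digits dataset, the number of components and the choice of logistic regression are stand-ins for the face-recognition example, not anything from the course.

```python
# A minimal sketch of the supervised pipeline: hand-chosen feature extraction
# (PCA) followed by a classifier that models P(Y|X).
# The digits dataset and all parameter choices are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)        # images flattened into feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pca = PCA(n_components=30)                 # the "carefully extracted" features
Z_train = pca.fit_transform(X_train)
Z_test = pca.transform(X_test)

clf = LogisticRegression(max_iter=1000)    # learns a map from features to labels
clf.fit(Z_train, y_train)                  # training = features plus correct answers
print("test accuracy:", clf.score(Z_test, y_test))
```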

Unsupervised Feature Learning? Now consider a task in which you have to improve accuracy at classifying an image as that of an elephant or a rhino. The catch is that you are not given any labeled examples of elephants or rhinos; in fact, suppose you are not even given unlabeled examples of them. What you are given are random images of rivers and mountains, and from these you have to learn a feature representation that helps with your task. This can be done with sparse coding, as shown by Raina et al.
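
Below is a rough sketch of that self-taught-learning idea, in the spirit of Raina et al. but not their code: learn a sparse dictionary from unlabeled, unrelated data, then reuse that basis to encode images from the actual task. The random arrays merely stand in for river/mountain patches and elephant/rhino images, and every size and parameter is an assumption.

```python
# Sketch of self-taught learning: a sparse basis is learned from unlabeled data
# drawn from unrelated classes, then reused to represent the task's images.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
unlabeled_patches = rng.randn(500, 64)   # stand-in for patches of rivers and mountains
task_images = rng.randn(20, 64)          # stand-in for the elephant/rhino images

# Learn a sparse dictionary purely from the unrelated, unlabeled data.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
dico.fit(unlabeled_patches)

# Encode the task images in that basis; the sparse codes become the features
# that a downstream classifier for the actual task would consume.
codes = dico.transform(task_images)
print(codes.shape)                       # (20, 32)
```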

______________

Lectures: Recently I came across a series of lectures (still a work in progress) by Professor Andrew Y. Ng on Unsupervised Feature Learning and Deep Learning. The course should help present issues such as the above to a wider audience. Though the lectures have not yet been uploaded, I am really excited about them, as I thoroughly enjoyed his CS 229 lectures a long time ago. The course assumes some basic knowledge of Machine Learning, but does brush up on the basics.

I have been working on Meta-Learning for a while, but have recently been getting more interested in Deep Learning methods, and hence am looking forward to these lectures coming online.

I wrote to Professor Ng about them, and in his opinion it will take a few months before they can be put up. That works fine for me, as I plan to work on Deep Learning over the summer, and the lectures would really help. Even now, expertise in Deep Learning methods is concentrated in only a few places, so such lectures would be a great advantage.

Here is a description of the Unsupervised Feature Learning and Deep Learning course:

Machine learning has seen numerous successes, but applying learning algorithms today often means spending a long time hand-engineering the input feature representation. This is true for many problems in vision, audio, NLP, robotics, and other areas. In this course, you’ll learn about methods for unsupervised feature learning and deep learning, which automatically learn a good representation of the input from unlabeled data. You’ll also pick up the “hands-on,” practical skills and tricks-of-the-trade needed to get these algorithms to work well.

Basic knowledge of machine learning (supervised learning) is assumed, though we’ll quickly review logistic regression and gradient descent.

I hope these lectures will be as widely viewed as the CS 229 ones; I say that because I am sure they will be fantastic.

______________



About two months back I came across a series of Reith Lectures given by Professor Vilayanur Ramachandran. Dr Ramachandran holds an MD from Stanley Medical College and a PhD from Trinity College, Cambridge, and is presently the director of the Center for Brain and Cognition at the University of California, San Diego, as well as an adjunct professor of biology at the Salk Institute. He is known for his work in behavioral neurology, which promises to greatly enhance our understanding of the human brain, an understanding that could, in my opinion, be the key to making "truly intelligent" machines.


[Dr VS Ramachandran: Image Source- TED]

I have listened to these lectures two or three times, really enjoyed them, and was intrigued by the cases he presents. Though they are old lectures (given in 2003), they were new to me and I think they are worth sharing anyway.

For those who are not aware, the Reith Lectures were started by British Broadcasting Corporation (BBC) radio in 1948. Each year a person of high distinction delivers them; the first were given by the mathematician Bertrand Russell. They are named in honour of the first Director-General of the BBC, Lord Reith. Like most other BBC presentations on science, politics and philosophy, they are fantastic. Dr Ramachandran was the first person from the medical profession to give the Reith Lectures.

The 2003 series, named The Emerging Mind, has five lectures, each roughly 28-30 minutes long. Each is trademark Ramachandran, with funny anecdotes, witty arguments, very interesting clinical cases, the best pronunciation of "billions" since Carl Sagan, and let me not even mention the way he rolls his Rs while talking. I don't intend to describe below what the lectures are about; I think they should be allowed to speak for themselves.

Lecture 1: Phantoms in the Brain

Listen to Lecture 1 | View Lecture Text

Lecture 2: Synapses and the Self

Listen to Lecture 2 | View Lecture Text

Lecture 3: The Artful Brain

Listen to Lecture 3 | View Lecture Text

Lecture 4: Purple Numbers and Sharp Cheese

Listen to Lecture 4 | View Lecture Text

Lecture 5: Neuroscience - the New Philosophy

Listen to Lecture 5 | View Lecture Text

[Images above courtesy of the BBC]

Note: Real Player required to play the above.

As a bonus to the above, I would also advise those who have not seen it to have a look at the following TED talk.

In a wide-ranging talk, Vilayanur Ramachandran explores how brain damage can reveal the connection between the internal structures of the brain and the corresponding functions of the mind. He talks about phantom limb pain, synesthesia (when people hear color or smell sounds), and the Capgras delusion, when brain-damaged people believe their closest friends and family have been replaced with imposters.

Again he talks about curious disorders. The Capgras delusion, which he discusses in the video above, is only one of the many he covers in the Reith lectures. Other things he talks about here include the origin of language and synesthesia.

Now look at the picture below and answer the following question: Which of the two figures is Kiki and which one is Bouba?

[Image: the Bouba/Kiki shapes]

If you thought that the jagged shape was Kiki and the rounded one was Bouba, then you belong to the majority. The exceptions need not worry.

These experiments were first conducted by the German Gestalt psychologist Wolfgang Köhler and were repeated, with the names "Kiki" and "Bouba" given to the shapes, by V.S. Ramachandran and Edward Hubbard. In their experiments they found a very strong inclination among subjects to name the jagged shape Kiki and the rounded one Bouba; about 95-98 percent of subjects did so. The experiments were repeated with Tamil speakers and then with children of about 3 years of age (who could not yet write), with similar results. The only exception was among people with autistic disorders, where the percentage dropped to about 60.

Dr Ramachandran and Dr Hubbard went on to suggest that this could have implications for our understanding of how language evolved, as it suggests that the naming of objects is not a random process, as a number of views hold, but depends on the appearance of the object under consideration. The strong "K" in Kiki has a direct correlation with the jagged shape of that object, suggesting a non-arbitrary mapping between objects and the sounds associated with them.

In the above talk, and also in the lectures, he discusses synesthesia, a condition in which, for instance, the subject associates a specific color with each black-and-white number or letter they see.

His method of studying rare disorders to work out which parts of the brain do what is very interesting, and it is yielding much-needed insight into the organ that drives innovation and, well, almost everything.

I highly recommend all of the lectures as well as the TED video above.

