Archive for the ‘Game Theory’ Category

A second post in a series on Information Theory/Learning-based perspectives in Evolution, which started with the last post.

Although the last post was mostly historical, it had a section reviewing the main motivation for some work in metabiology due to Chaitin (now published as a book). The starting point of that work was to view evolution solely through an information processing lens (hence the use of Algorithmic Information Theory). Of course, this lens is not by itself a recent acquisition and goes back a few decades (although in hindsight, the fact that it goes back just a few decades is very surprising, to me at least). To illustrate this I wanted to share some analogies by John Maynard Smith (perhaps one of my favourite scientists), which I had found particularly incisive and clear. To avoid clutter, they are shared here instead (note that most of what he talks about is something we study in high school; the talk is nevertheless quite good, especially because it emphasizes the centrality of information throughout). I also want this post to act as a reference for some upcoming posts.


Molecular Biology is all about Information. I want to be a little more general than that; the last century, the 19th century was a century in which Science discovered how energy could be transformed from one form to another […] This century will be seen […] where it became clear that information could be translated from one form to another.

[Other parts: Part 2, Part 3, Part 4, Part 5, Part 6]

Throughout this talk he gives wonderful analogies on how information translation underlies the so-called Central Dogma of Molecular Biology, and how, where the translation is one-way at some stage, that has implications (e.g. August Weismann noted, with a “Chinese telegram translation” analogy, that acquired characters are not inherited, since there was no mechanism to translate acquired traits (acquired information) back into the organism so that they could be propagated).

However, the most important point of the talk is this: one could see evolution as punctuated by about six or so major changes or shifts, each marked by a different way of storing and processing information. Some that he talks about are:

1. The origin of replicating molecules.

2. The Evolution of chromosomes: Chromosomes are just strings of the above replicating molecules, with the property that when one of these molecules is replicated, the others have to be as well. The utility of this is the following: if they were all separate genes, they might have different rates of replication; the gene that replicates fastest would soon outnumber all the others, and all the information would be lost. Thus this transition underlies a kind of evolution of cooperation between replicating molecules; in other words, chromosomes are a means of forced cooperation between genes.

3. The Evolution of the Code: That information in nucleic acids could be translated into sequences of amino acids, i.e. proteins.

4. The Origin of Sex: The evolution of sex is considered an open question. However, one argument goes that (details in the next post or the one after) the fact that sexual reproduction hastens the acquisition of information from the environment (as compared to asexual reproduction) explains why it should evolve.

5. The Evolution of multicellular organisms: A large, complex signalling system had to evolve for the different kinds of cells (such as muscle cells or neurons, to name some in humans) to function properly in an organism.

6. Transition from solitary individuals to societies: What made these societies of individuals (ants, humans) possible at all? If we stick to humans, this could only have happened if there was a new way to transmit information from generation to generation, and one such information-transducing machine is language! This gives an additional mechanism, beyond the genetic one, to transmit information from one generation to another (he compares the genetic code and the replication of nucleic acids with the passage of information by language). This momentous event (the evolution of language) was itself dependent on genetics. With the evolution of language other things came along: writing, memes etc., which might reproduce, self-replicate, mutate, pass on and accelerate the process of evolution. He ends by saying this stage of evolution could perhaps be as profound as the evolution of language itself.
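As a side illustration of point 2 (this toy calculation and its numbers are my own, not from the talk), one can see why unlinked replicators with different copying rates lose information: the fastest replicator swamps the rest, whereas genes linked on a chromosome copy at one common rate, so their relative frequencies, and hence the information they jointly carry, are preserved.

```python
def share_of_fastest(rates, generations):
    """Relative frequency of the fastest replicator after some generations,
    starting from equal counts, with each lineage growing at its own rate."""
    counts = [r ** generations for r in rates]  # growth factor per lineage
    return max(counts) / sum(counts)

# Three unlinked genes replicating at slightly different rates:
# the fastest ends up as almost the entire population.
print(share_of_fastest([1.0, 1.1, 1.2], 100))

# Linked on one chromosome they all copy together at a single rate,
# so relative frequencies never change (each stays at 1/3).
print(share_of_fastest([1.1, 1.1, 1.1], 100))
```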


As a side comment: I highly recommend the following interview of John Maynard Smith as well. I rate it higher than the above lecture, although it is sort of unrelated to the topic.


Interesting books to perhaps explore:

1. The Major Transitions in Evolution: John Maynard Smith and Eörs Szathmáry.

2. The Evolution of Sex: John Maynard Smith (more on this theme in later blog posts, mostly related to learning and information theory).



Read Full Post »

John von Neumann made so many fundamental contributions that Paul Halmos remarked it was almost as if von Neumann maintained a list of subjects he wanted to touch and develop, and systematically kept ticking items off. This seems remarkably true if one just glances at the dizzyingly long “known for” column below his photograph on his Wikipedia entry.

John von Neumann with one of his computers.


Since von Neumann died young in 1957, rather unfortunately there aren’t very many audio/video recordings of him (if I am correct, just one two-minute video recording exists in the public domain so far).

I recently came across a fantastic film on him that I would very highly recommend. Although it is old and the audio quality is not the best, it is certainly worth spending an hour on. The fact that this film features Eugene Wigner, Stanislaw Ulam, Oskar Morgenstern, Paul Halmos (whose little presentation I really enjoyed), Herman Goldstine, Hans Bethe and Edward Teller (whom I heard for the first time, and who spoke quite interestingly) alone makes it worthwhile.

Update: The following YouTube links have been removed for breach of copyright. David Hoffman, the producer of the film, tells us that the movie should soon be available for purchase as a DVD. Please check the comments on this post for more information.

Part 1

Find Part 2 here.


Onionesque Reality Home >>

Read Full Post »

Here are a number of interesting courses, two of which I have been looking at for the past two weeks and which I hope to finish by the end of August or September.

Introduction to Neural Networks (MIT):

These days, amongst the other things I have at hand (including a project on content-based image retrieval), I have been making it a point to look at an MIT course on Neural Networks. And needless to say, I am learning loads.


I would like to emphasize that though I have implemented a signature verification system using Neural Nets, I am by no means good with them; I can be classified a beginner. The tool I am more comfortable with is Support Vector Machines.

I have been wanting to know more about them for some years now, but never really got the time, or you could say the opportunity. Now that I can invest some time, I am glad I came across this course. So far I have been able to look at 7 lectures, and I should say that I am more than happy with the course. I think it is very detailed and extremely well suited to the beginner as well as the expert.

The instructor is H. Sebastian Seung, professor of computational neuroscience at MIT.

The course has 25 lectures, each packed with a great amount of information, meaning the lectures might go slowly for those who are not very familiar with this stuff.

The video lectures can be accessed over here. I must admit I am a little disappointed that these lectures are not available on YouTube, because the downloads are rather large. But I found them worth it anyway.

The lectures cover the following:

Lecture 1: Classical neurodynamics
Lecture 2: Linear threshold neuron
Lecture 3: Multilayer perceptrons
Lecture 4: Convolutional networks and vision
Lecture 5: Amplification and attenuation
Lecture 6: Lateral inhibition in the retina
Lecture 7: Linear recurrent networks
Lecture 8: Nonlinear global inhibition
Lecture 9: Permitted and forbidden sets
Lecture 10: Lateral excitation and inhibition
Lecture 11: Objectives and optimization
Lecture 12: Excitatory-inhibitory networks
Lecture 13: Associative memory I
Lecture 14: Associative memory II
Lecture 15: Vector quantization and competitive learning
Lecture 16: Principal component analysis
Lecture 17: Models of neural development
Lecture 18: Independent component analysis
Lecture 19: Nonnegative matrix factorization. Delta rule.
Lecture 20: Backpropagation I
Lecture 21: Backpropagation II
Lecture 22: Contrastive Hebbian learning
Lecture 23: Reinforcement Learning I
Lecture 24: Reinforcement Learning II
Lecture 25: Review session

The good thing is that I have formally studied most of the material after lecture 13, but going by the quality of the lectures so far (the first 7), I would not mind seeing it again.

Quick Links:

Course Home Page.

Course Video Lectures.

Prof H. Sebastian Seung’s Homepage.



Visualization (Harvard):

This is a Harvard course. I don’t know when I’ll get the time to have a look at it, but it sure looks extremely interesting, and I am sure a number of people would be interested in having a look. It looks like a course that can be covered pretty quickly actually.


The course description says the following:

The amount and complexity of information produced in science, engineering, business, and everyday human activity is increasing at staggering rates. The goal of this course is to expose you to visual representation methods and techniques that increase the understanding of complex data. Good visualizations not only present a visual interpretation of data, but do so by improving comprehension, communication, and decision making.

In this course you will learn how the human visual system processes and perceives images, good design practices for visualization, tools for visualization of data from a variety of fields, collecting data from web sites with Python, and programming of interactive visualization applications using Processing.

The topics covered are:

  • Data and Image Models
  • Visual Perception & Cognitive Principles
  • Color Encoding
  • Design Principles of Effective Visualizations
  • Interaction
  • Graphs & Charts
  • Trees and Networks
  • Maps & Google Earth
  • Higher-dimensional Data
  • Unstructured Text and Document Collections
  • Images and Video
  • Scientific Visualization
  • Medical Visualization
  • Social Visualization
  • Visualization & The Arts

Quick Links:

Course Home Page.

Course Syllabus.

Lectures, Slides and other materials.

Video Lectures


Advanced AI Techniques:

This is one course that I will be looking at parts of after I have covered the course on Neural Nets. I am yet to glance at the first lecture or the materials, so I cannot say what they will be like, but I sure am expecting a lot going by the topics they cover.

The topics covered in a broad sense are:

  • Bayesian Networks
  • Statistical NLP
  • Reinforcement Learning
  • Bayes Filtering
  • Distributed AI and Multi-Agent systems
  • An Introduction to Game Theory

Quick Link:

Course Home.


Astrophysical Chemistry:

I don’t know if I will be able to squeeze in time for these. But because of my amateurish interest in chemistry (if I were not an electrical engineer, I would have been into chemistry), and because I have very high regard for Dr Harry Kroto (who is delivering them), I will try to make it a point to have a look at them. I think I’ll skip gym for some days to have a look. ;-)


[Nobel Laureate Harry Kroto with a Bucky-Ball model – Image Source : richarddawkins.net]

Quick Links:

Dr Harold Kroto’s Homepage.

Astrophysical Chemistry Lectures


Onionesque Reality Home >>

Read Full Post »

One of the problems Swarm Intelligence research faces is the lack of a precise definition[1]. Words generally associated with SI are Emergence, Self-Organization, Collective Intelligence etc. There is no general mathematical definition of it yet, and this lack of a credible and workable framework has made research in this field ad hoc. In simple words, we need a theory of swarming. Is there a theory of swarming behavior or swarm intelligence? So far we basically have analogies; there is no theory yet, I guess.

There have been efforts to rectify this, and I will cover them in a post sometime soon.

However, we also definitely need a theory to explain altruism in social insects (in a swarm), in animals, and in humans. I basically got interested in altruism through my interest in Swarm Intelligence and social insects.


Why does a honey bee make the ultimate sacrifice and lay down its life when it feels its kin is in danger? Why do walruses adopt orphans? How is it that dogs can adopt the offspring of cats, other dogs, and even tigers? There are scores of examples of such altruistic behavior. There are, obviously, variants of the altruism we are talking about.

Altruism, as I have implied, is a social behavior. A behavior is social if it has implications for both the actor and the recipient. Social behaviors can broadly be categorized by whether the actor or the recipient benefits. Altruism is the category in which the fitness of the actor is reduced by an action and that of the recipient is increased; selfish behavior is exactly the opposite. The other two types are mutually beneficial and spiteful: in a mutually beneficial behavior the fitness of both increases, and in a spiteful one the fitness of both decreases.

In The God Delusion, Richard Dawkins summarizes some kinds of altruism, separates them, and gives some explanation of each.

1. The first kind he points out is altruism towards our kin, individuals with whom we share genes. The honey bee example in all probability fits into this “category”, as does the way we have evolved to be kind to our children. Genes that “code” for such behavior towards people who share them are more likely to survive. This is given by Hamilton’s Kin Selection Theory[2], which states that altruism is favored when:

rb > c

where “c” is the fitness cost to the altruist, “b” is the fitness benefit to the recipient and “r” is a measure of their genetic relatedness.
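As a minimal sketch of the rule (the function and the example values are mine, not Hamilton’s or Dawkins’), it is just a comparison:

```python
def altruism_favoured(r, b, c):
    """Hamilton's rule: altruism is favored when r * b > c."""
    return r * b > c

# Full siblings share r = 0.5, so an act costing the altruist 1 unit of
# fitness is favored if it benefits the sibling by more than 2 units:
print(altruism_favoured(r=0.5, b=3.0, c=1.0))    # prints True
print(altruism_favoured(r=0.125, b=3.0, c=1.0))  # first cousins: prints False
```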

2. The second kind is reciprocal altruism. The example of the buffalo/crow pair fits in here. This form does not require any sharing of genes: individuals from very different species can exist in a symbiotic relationship through this altruistic form, each contributing something the other cannot obtain on its own. Within a species, as in us humans, this trait has become more specific and has evolved into the tendency to do good to people who do, or can do, good things for us.

3. Dawkins then also talks about the importance of building a good reputation.

4. One of the most interesting reasons he mentions, and one that is very true IMO, is that excessive altruism may be a show of superiority in some manner; that is, it can occur because the person can afford to be “altruistic”. This can in some way be understood as a payoff in a game-theory scenario.

One interesting approach to the questions above is discussed in a nice paper that I read over the past week.

In this paper the two researchers consider, in their model, a large and panmictic (unstructured) population where individuals interact pairwise in successive rounds.


  1. The number of rounds of interaction for an individual follows a geometric distribution with parameter ω, the probability that an individual will interact again after a round of interaction.
  2. The focal individual (FI) can interact with two classes of individuals, one closely related to it genetically and the other not so closely related.
  3. X is the probability of interacting non-randomly with an individual of the related class.
  4. The complementary fraction 1−X of interactions occur randomly with any member of the population.

All repeated rounds of interaction take place with the same partner. During each round the FI invests I• into helping, with I• varying between 0 and 1. This investment incurs a cost CI• to the FI and generates a benefit BI•. A fraction ζ of the benefit generated by helping returns directly to the FI and the complementary fraction 1−ζ goes to the partner[3].
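The interaction structure can be sketched quickly (my own code, purely illustrative; the paper itself is analytical, not simulation-based): with continuation probability ω after each round, the number of rounds is geometrically distributed with mean 1/(1−ω).

```python
import random

def sample_rounds(omega, rng):
    """Sample the number of rounds of interaction (at least one):
    after each round, another follows with probability omega."""
    rounds = 1
    while rng.random() < omega:
        rounds += 1
    return rounds

rng = random.Random(0)
omega = 0.75
samples = [sample_rounds(omega, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 1 / (1 - 0.75) = 4
```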

The effect on the fecundity of the FI can be either positive or negative depending on the sign of the term:

ζBI• − CI•

The fecundity of the partner will always increase unless the FI gets all the benefit of the act (i.e. ζ = 1) or it does not invest in helping at all (I• = 0). We noted earlier that the FI interacts with two classes of individuals, and the fecundity effects with these two classes will be different.

To generalise, the relative fecundity of the FI interacting with an individual of class j is given by:


We assume that the individual follows a “tit for tat” kind of approach, i.e. the investment level into helping at a given round depends linearly on the partner’s investment in the previous round. Hence, the investment depends on three traits:

  1. the investment in the first round, τ;
  2. the response slope β on the partner’s investment in the preceding round;
  3. the memory m (varying between zero and one) of the partner’s investment in the preceding round.

“m” is the probability of not making an error in assignment, i.e. of not wrongly considering that a partner failed to cooperate in the previous move when in fact it did. τ and β can evolve. The paper gives a well put and terse presentation of the idea, derives a formula, and then considers the cases in which cooperation and altruism can evolve. I highly recommend it; it is a wonderful paper and a must read! I thoroughly enjoyed it. It can be obtained here.
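One plausible reading of this reactive strategy (my own sketch with made-up parameter values, not the paper’s exact equations): invest τ in the first round, then respond with slope β to the partner’s previous investment, which is recalled correctly only with probability m.

```python
import random

def next_investment(tau, beta, m, partner_prev, rng):
    """Investment for the next round, clipped to [0, 1]. With probability
    1 - m the partner's last move is mis-remembered as zero cooperation."""
    remembered = partner_prev if rng.random() < m else 0.0
    return min(1.0, max(0.0, tau + beta * remembered))

rng = random.Random(1)
# With perfect memory (m = 1) and a positive slope, two such strategists
# ratchet cooperation upward from their opening investment tau = 0.2:
i = j = 0.2
for _ in range(5):
    i, j = (next_investment(0.2, 0.9, 1.0, j, rng),
            next_investment(0.2, 0.9, 1.0, i, rng))
print(round(i, 3), round(j, 3))  # both have risen well above 0.2
```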

The same problem has also been approached comprehensively by Richard Dawkins in the following video, which was shown on the BBC. The video is available courtesy of richarddawkins.net.

In the video Dawkins starts off with how his seminal book “The Selfish Gene” was misunderstood and how it led to the phrase “Nice Guys Finish Last”, coined by Garrett Hardin to sum up the selfish gene idea. How, then, do these “nice guys”/altruistic agents survive? Should natural selection not have wiped them out? Dawkins then argues that the idea of selfish genes can actually give rise to cooperative and altruistic behavior*. It leads to the development of a pre-wired programme or strategy for achieving some desired goal, analogous to human strategy in various situations. Dawkins then moves to game theory and gives a wonderful explanation of the Prisoner’s Dilemma and concepts like the tragedy of the commons, and tries to explain how altruistic behavior (coupled with a tit-for-tat kind of learning behavior) can actually fetch the best pay-offs. It is this “selfish” advantage that actually leads to altruistic behavior. He finally suggests that Hardin’s phrase could, after this analysis, be slightly modified to “Nice Guys Finish First”, which is also the name of the programme.
To sum up, this wonderful video leads in part to the same conclusion as the paper described above does.
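The game Dawkins walks through can be sketched as follows (standard textbook payoffs and my own code, not from the programme): tit for tat opens by cooperating and then copies the opponent’s previous move, and over repeated rounds mutual cooperation (3 points a round) handily beats mutual defection (1 point a round).

```python
# Payoff matrix: (my move, their move) -> my payoff, with T > R > P > S.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Total payoffs for two strategies over repeated rounds."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each sees the opponent's history
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]
always_defect = lambda opp: 'D'

print(play(tit_for_tat, tit_for_tat, 100))      # mutual cooperation: (300, 300)
print(play(always_defect, always_defect, 100))  # mutual defection: (100, 100)
```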

That is, the following conditions lead to the evolution of altruism and cooperation:

1. Direct benefits to the individual performing a cooperative act.

2. Direct or indirect information allowing a better than random guess about whether a given individual will behave cooperatively in repeated reciprocal interactions.

3. Altruism or cooperation can evolve if the cost-to-benefit ratio of altruistic and cooperative acts is greater than a threshold value. This ratio can be altered by coercion, punishment and policing, which therefore act as mechanisms facilitating the evolution of altruism and cooperation.[3] Dawkins explains this idea with the PD example in the video.

However, integrating the four ideas mentioned above remains a problem. Can it be done? This could have massive implications for understanding social insect behavior and swarm intelligence, among much else.

* Co-operative behavior may not always be altruistic.


[1] Foundations of Swarm Intelligence: From Principles to Practice, Mark Fleischer, Institute for Systems Research, University of Maryland College Park.

[2] Hamilton, W. D. 1964. The genetical evolution of social behaviour. I. Journal of Theoretical Biology 7:1-16.

[3] The evolution of cooperation and altruism. A general framework and a classification of models. Laurent Lehmann and Laurent Keller.

Onionesque Reality Home >>

Read Full Post »