
Archive for the ‘Scientists’ Category

This is the second post in a series on Information Theory/Learning based perspectives on evolution, which began with the last post.

Although the last post was mostly historical, it had a section reviewing the main motivation for some of Chaitin's work in metabiology (now published as a book). The starting point of that work was to view evolution solely through an information-processing lens (and hence the use of Algorithmic Information Theory). Of course, this lens by itself is not a recent acquisition and goes back a few decades (although in hindsight, the fact that it goes back only a few decades is surprising, to me at least). To illustrate this, I wanted to share some analogies by John Maynard Smith (perhaps one of my favourite scientists), which I found particularly incisive and clear. To avoid clutter, they are shared here instead (note that most of what he talks about is something we study in high school; however, the talk is quite good, especially because it emphasizes the centrality of information throughout). I also want this post to act as a reference for some upcoming posts.

Coda:

Molecular Biology is all about Information. I want to be a little more general than that; the last century, the 19th century, was a century in which Science discovered how energy could be transformed from one form to another […] This century will be seen […] where it became clear that information could be translated from one form to another.

[Other parts: Part 2, Part 3, Part 4, Part 5, Part 6]

Throughout this talk he gives wonderful analogies for how information translation underlies the so-called Central Dogma of Molecular Biology, and how, if the translation is one-way at some stage, this has implications (e.g. how August Weismann noted that acquired characters are not inherited, using a “Chinese telegram translation” analogy: since there was no mechanism to translate acquired traits (acquired information) back into the organism in a heritable form, they could not be propagated).

However, the most important point of the talk is this: one could see evolution as punctuated by about six or so major changes or shifts, each marked by a change in the way information was stored and processed. Some that he talks about are:

1. The origin of replicating molecules.

2. The Evolution of chromosomes: Chromosomes are just strings of the above replicating molecules, with the property that when one of these molecules is replicated, the others have to be replicated as well. The utility of this is the following: if the replicating molecules were all separate genes, they might have different rates of replication, and the gene that replicates fastest would soon outnumber all the others, so that information would be lost. This transition thus underlies a kind of evolution of cooperation between replicating molecules; in other words, chromosomes are a way of forcing cooperation between genes.

3. The Evolution of the Code: That information in the nucleic acids could be translated into sequences of amino acids, i.e. proteins.

4. The Origin of Sex: The evolution of sex is considered an open question. However, one argument goes (details in the next post or the one after) that sexual reproduction hastens the acquisition of information from the environment (as compared to asexual reproduction), which explains why it should evolve.

5. The Evolution of multicellular organisms: A large, complex signalling system had to evolve for the different kinds of cells (muscle cells or neurons, to name some in humans) to function properly within an organism.

6. The transition from solitary individuals to societies: What made these societies of individuals (ants, humans) possible at all? If we stick to humans, this could only have happened if there was a new way to transmit information from generation to generation, and one such possible information-transducing machine is language! Language thus provides an additional mechanism, besides the genetic one, for transmitting information from one generation to the next (he compares the genetic code and the replication of nucleic acids with the passage of information by language). This momentous event (the evolution of language) itself depended on genetics. With the evolution of language came other things: writing, memes, etc., which can reproduce, self-replicate, mutate, be passed on, and so accelerate the process of evolution. He ends by saying that this stage of evolution could perhaps be as profound as the evolution of language itself.

________________

As a side comment: I highly recommend the following interview of John Maynard Smith as well. I rate it higher than the above lecture, although it is sort of unrelated to the topic.

________________

Interesting books to perhaps explore:

1. The Major Transitions in Evolution – John Maynard Smith and Eörs Szathmáry.

2. The Evolution of Sex – John Maynard Smith (more on this theme in later blog posts, mostly related to learning and information theory).

________________


“Our federal income tax law defines the tax y to be paid in terms of the income x; it does so in a clumsy enough way by pasting several linear functions together, each valid in another interval or bracket of income. An archeologist who, five thousand years from now, shall unearth some of our income tax returns together with relics of engineering works and mathematical books, will probably date them a couple of centuries earlier, certainly before Galileo and Vieta. Vieta was instrumental in introducing a consistent algebraic symbolism. Galileo discovered the quadratic law of falling bodies \frac{1}{2} gt^2 […] by this formula Galileo converted a natural law inherent in the actual motion of bodies into an a priori constructed mathematical function, and that is what physics endeavors to accomplish for every phenomenon […]. This law is of much better design than our tax laws. It has been designed by nature, who seems to lay her plans with a fine sense for simplicity and harmony. But then nature is not, as our income and excess profits tax laws are, hemmed in by having to be comprehensible to our legislators and chambers of commerce. […]”
(Hermann Weyl, Excerpted from “Levels of Infinity”, Essay 3: “The Mathematical Way of Thinking”, Originally published in Science, 1940).
With the tax season in mind, I was thinking that not much has changed since 1940!
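Weyl's complaint is easy to make concrete: a bracketed income tax really is just several linear functions pasted together, one per bracket. Here is a minimal sketch in Python with entirely made-up brackets and rates (not any actual tax schedule), alongside Galileo's single closed-form law for contrast:

```python
# Toy piecewise-linear "tax law": each bracket taxes the slice of income
# falling inside it at its own marginal rate. Brackets and rates below
# are invented purely for illustration.
BRACKETS = [
    (0, 10_000, 0.10),               # (lower, upper, marginal rate)
    (10_000, 40_000, 0.20),
    (40_000, float("inf"), 0.30),
]

def tax(income: float) -> float:
    """Total tax: a sum of linear pieces, one per bracket."""
    total = 0.0
    for lower, upper, rate in BRACKETS:
        if income > lower:
            total += (min(income, upper) - lower) * rate
    return total

def fall_distance(t: float, g: float = 9.81) -> float:
    """Galileo's law: one formula, no brackets."""
    return 0.5 * g * t ** 2

print(tax(55_000))         # 1000 + 6000 + 4500 = 11500.0
print(fall_distance(3.0))  # ~44.1 metres after 3 seconds
```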

________________


John von Neumann made so many fundamental contributions that Paul Halmos remarked that it was almost as if von Neumann maintained a list of the various subjects he wanted to touch and develop, and systematically kept ticking items off. This seems remarkably true if one just glances at the dizzyingly long “known for” column below his photograph on his Wikipedia entry.

John von Neumann with one of his computers.

Since von Neumann died young, in 1957, there unfortunately aren’t very many audio/video recordings of him (if I am correct, just one two-minute video recording exists in the public domain so far).

I recently came across a fantastic film on him that I would very highly recommend. Although it is old and the audio quality is not the best, it is certainly worth spending an hour on. The fact that this film features Eugene Wigner, Stanislaw Ulam, Oskar Morgenstern, Paul Halmos (whose little presentation I really enjoyed), Herman Goldstine, Hans Bethe and Edward Teller (whom I heard for the first time; he spoke quite interestingly) alone makes it worthwhile.

Update: The following YouTube links have been removed for breach of copyright. The producer of the film, David Hoffman, tells us that the movie should soon be available for purchase on DVD. Please check the comments on this post for more information.

Part 1

Find Part 2 here.

________________


Benoit Mandelbrot Dies

Benoit Mandelbrot (20 November 1924 – 14 October 2010)

Cesàro, as quoted by Benoit Mandelbrot, on limits in Koch curves and fractals:

The will is infinite
and the execution confined
The desire is boundless
and the act a slave to limit.

This is from Mandelbrot’s seminal book on fractal geometry, “The Fractal Geometry of Nature” (1982), which has more digressions and quotations than any other book I can think of.

Mandelbrot was not like Feynman for me: not the kind of person I could call a childhood hero, but the kind I liked more and more as I grew up. I was extremely saddened by the news of his demise this morning.

RIP!

As a tribute to Mandelbrot and his immense contributions, I would highly recommend this extremely wonderful PBS documentary.

[Click on the Image to Play]

If for some reason the above does not work, check the documentary out on Google Video.

______________

Links:

New York Times Obituary

______________


I am late with my echo statement on this.

Over the past few days, the internet has been abuzz with news of Craig Venter and his team creating the first fully functional cell controlled by synthetic DNA, and with discussions of the possible ethical consequences of future work in this area.

The fact that this has happened is not surprising at all. Dr Venter has been very open about his work and has been promoting it for some years now. For instance, a couple of years ago there was a wonderful TED talk in which Venter talked about his team being close to creating synthetic life. The latest news is of course not of synthetic life, but of a step closer to that grand aim.

Another instance: two years ago there was a brainstorming session whose transcript was turned by EDGE into a book, available for free download too.

Dimitar Sasselov, Max Brockman, Seth Lloyd, George Church, J. Craig Venter, Freeman Dyson, Image Courtesy - EDGE

The book can be downloaded from here.

Given such updates, it did not surprise me much when Venter made the announcement.

____________

Ethics: There have been frenzied debates, on the internet, on television and elsewhere, about where this might lead us. These discussions on ethics appear to me to be inevitable, and I find it most appropriate to quote the legendary Freeman Dyson on the matter.

“Two hundred years ago, William Blake engraved The Gates of Paradise, a little book of drawings and verses. One of the drawings, with the title “Aged Ignorance”, shows an old man wearing professional eyeglasses and holding a large pair of scissors. In front of him, a winged child is running naked in the light from a rising sun. The old man sits with his back to the sun. With a self-satisfied smile he opens his scissors and clips the child’s wings. With the picture goes a little poem:

“In Time’s Ocean fall’n drown’d,
In aged ignorance profound,
Holy and cold, I clip’d the Wings
Of all Sublunary Things.”

This picture is an image of the human condition in the era that is now beginning. The rising sun is biological science, throwing light of ever increasing intensity onto the processes by which we live and feel and think. The winged child is human life, becoming for the first time aware of itself and its potentialities in the light of science. The old man is our existing human society, shaped by ages of past ignorance. Our laws, our loyalties, our fears and hatreds, our economic and social injustices, all grew slowly and are deeply rooted in the past. Inevitably the advance of biological knowledge will bring clashes between old institutions and new desires for human improvement. Old institutions will clip the wings of human desire. Up to a point, caution is justified and social constraints are necessary. The new technologies will be dangerous as well as liberating. But in the long run, social constraints must bend to new realities. Humanity cannot live forever with clipped wings. The vision of self-improvement which William Blake and Samuel Gompers in their different ways proclaimed will not vanish from the Earth.”

(The above is an excerpt from a lecture given by Freeman Dyson at the Hebrew University of Jerusalem in 1995. The lecture was published by The New York Review of Books in 1997 and later as a chapter in The Scientist as Rebel.)

Artificial Life Beyond the Wet Medium:

“Life is a process which can be abstracted away from any particular medium” – John von Neumann

Wet artificial life is basically what synthetic life is (in synthetic life you don’t really abstract the life process into another medium; you digitize it and recreate it as per your requirements).

I do believe that abstracting and digitizing life from a “wet chemical medium” to a computer is not very far off either, i.e. software that would not only imitate “life” but also synthesize it, and that, coupled with something like Koza’s Genetic Programming scheme embedded in it, could develop something that possesses some intelligence beyond merely producing more useful programs.
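For a flavour of what an evolutionary loop inside such software might look like, here is a deliberately tiny sketch: a plain genetic algorithm over bit strings, a much-simplified cousin of Koza’s Genetic Programming (which evolves program trees rather than bit strings). The population size, rates and the “count the ones” fitness are toy assumptions chosen purely for illustration:

```python
import random

# A toy genetic algorithm over bit strings. This is only a much-simplified
# stand-in for Koza-style Genetic Programming (which evolves program trees);
# all the constants and the fitness function are arbitrary toy choices.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)  # toy objective: maximise the number of 1s

def mutate(genome):
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENOME_LEN:      # perfect genome found
            return gen, pop[0]
        parents = pop[: POP_SIZE // 2]         # truncation selection
        pop = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE - len(parents))
        ]
    return GENERATIONS, max(pop, key=fitness)

generations_used, best = evolve()
print(generations_used, fitness(best))
```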

Coded Messages:

This is the fun part of the news about Venter and his team’s groundbreaking work. The synthetic DNA of the bacterium has a few messages coded into it.

1. “To live, to err, to fall, to triumph, to create life out of life.” – from James Joyce’s A Portrait of the Artist as a Young Man.

James Joyce is one of my favourite writers*, so I was glad that this was encoded too. But I find it funny that what this quote says could also be the undoing of synthetic life, or rather a difficult problem to solve. The biggest enemy of synthetic life is evolution (creating life out of life :); evolution would ensure that control of the synthetic bacterium is lost soon enough. I believe that countering this will be the single biggest challenge in synthetic biology.

*When I tried reading Ulysses, I kept giving up, but had this compulsive need to finish it anyway. I had to join an Orkut community called “Who is afraid of James Joyce”, and after some motivation I could read it! ;-)

2. “What I can not build, I can not understand” – Richard P. Feynman

This is what Dr Venter announced, but isn’t “What I cannot create, I do not understand” the correct version?

Feynman's Blackboard at the time of his death: Copyright - Caltech

3. “See things not as they are, but as they might be” – J. Robert Oppenheimer from American Prometheus

____________

Recommendations:

1. What is Life? – Erwin Schrödinger (PDF)

2. Life – What A Concept! – EDGE (PDF)

3. A Life Decoded : My Genome, My Life – C. J. Venter (Google Books)

____________


For the past couple of years, I have had a couple of questions about machine learning research that I have wanted to ask some experts, but never got the chance to. I did not even know if my questions made sense at all. I will probably write about them in a blog post soon enough.

It is ironic, however, that I came to know that my questions were valid and well discussed only through the death of Ray Solomonoff (I never knew what to search for, since I used expressions not used by researchers); he was one researcher who worked on them, and an obituary of him highlighted this work, which I had missed. Solomonoff was one of the founding fathers of Artificial Intelligence as a field and of Machine Learning as a discipline within it. It must be noted that he was one of the few attendees at the 1956 Dartmouth Conference, basically an extended brainstorming session that formally started AI as a field. The other attendees were Marvin Minsky, John McCarthy, Allen Newell, Herbert Simon, Arthur Samuel, Oliver Selfridge, Claude Shannon, Nathaniel Rochester and Trenchard More. His 1950–52 papers on networks are regarded as the first statistical analysis of the same. Solomonoff was thus a towering figure in AI and Machine Learning.

[Ray Solomonoff: (25 July 1926 – 7 December 2009)]

Solomonoff is widely considered the father of Machine Learning for circulating the first report on it in 1956. His particular focus was on the use of probability and its relation to learning. He founded the idea of Algorithmic Probability (ALP) in a 1960 paper at Caltech, an idea that gives rise to Kolmogorov Complexity as a by-product. A. N. Kolmogorov independently discovered similar results and, on coming to know of it, acknowledged Solomonoff’s earlier work on Algorithmic Information Theory. Solomonoff’s work, however, was relatively less known in the West than in the Soviet Union, which is why Algorithmic Information Theory is mostly referred to as Kolmogorov Complexity rather than “Solomonoff Complexity”. Kolmogorov and Solomonoff approached the same framework from different directions: while Kolmogorov was concerned with randomness and Information Theory, Solomonoff was concerned with inductive reasoning, and in doing so he discovered ALP and Kolmogorov Complexity years before anyone else. Below I write about only one aspect of his work, which I have studied to some degree in the past year.

Solomonoff with G. J. Chaitin, another pioneer of Algorithmic Information Theory

[Image Source]

The Universal Distribution:

His 1956 paper, “An Inductive Inference Machine”, was one of the seminal papers on the use of probability in Machine Learning. In it he outlined two main problems, which he thought (correctly) were linked.

The Problem of Learning in Humans: How do you use all the information that you gather in life to make decisions?

The Problem of Probability: Given that you have some data and some a priori information, how can you make the best possible predictions for the future?

The problem of learning is the more general one, and it is related to the problem of probability. Solomonoff noted that machine learning is simply the process of approximating ideal probabilistic predictions for practical use.

Building on his 1956 paper, he developed probabilistic languages for induction at a time when probability was considered out of fashion, and discovered the Universal Distribution.

All induction problems can basically be reduced to this form: given a sequence of binary symbols, how do you extrapolate it? The answer is that we can assign a probability to each sequence and then use Bayes’ Theorem to predict how likely each particular continuation of the string is. That gives rise to an even more difficult question, which was the basic question behind much of Solomonoff’s work on Algorithmic Probability/Algorithmic Information Theory: how do you assign probabilities to strings?
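To make the Bayesian step concrete, here is a minimal sketch (my own toy, not Solomonoff’s universal prior): any positive weight P over finite binary strings induces a predictor via Bayes’ rule, with P(next bit = b | prefix x) proportional to P(xb). The toy_prior below simply favours strings with few switches between 0 and 1, a crude stand-in for “strings with short descriptions”:

```python
# A minimal sketch of prediction-by-prior for binary sequences.
# Any (unnormalised) prior over strings induces a predictor via Bayes:
#     P(next bit = b | prefix x)  is proportional to  P(x + b)
# toy_prior favours strings with few 0/1 switches; it is an illustration,
# not Solomonoff's universal prior.

def toy_prior(s: str) -> float:
    switches = sum(1 for a, b in zip(s, s[1:]) if a != b)
    return 2.0 ** (-(len(s) + switches))

def predict_next(prefix: str):
    weights = {b: toy_prior(prefix + b) for b in "01"}
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}

print(predict_next("0000"))  # strongly favours '0' (no new switch)
print(predict_next("0110"))  # favours '0' (extends the final run of 0s)
```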

Solomonoff approached this problem using the idea of a Universal Turing Machine. Suppose this Turing Machine has three types of tapes: a unidirectional input tape, a unidirectional output tape and a bidirectional working tape. The machine takes some binary string as input and may give a binary string as output.

It could do any of the following :

1. Print out a string after a while and then come to a stop.

2. It could print an infinite output string.

3. It could go into an infinite loop computing the string and never output anything at all (the Halting Problem).

For a string x, the ALP is defined as follows:

If we feed random bits to our Turing Machine, there is always some probability that the output will start with the string x. This probability is the algorithmic or universal probability of the string x.

The ALP is given as:

\displaystyle P_M(x) = \sum_{i=0}^{\infty}2^{-\lvert\ S_i(x)\rvert}

where P_M(x) is the universal probability of the string x with respect to the universal Turing machine M. To understand the role of S_i(x) in the above expression, let’s discuss it a little.

There can be many random input strings that, after being processed by the Turing Machine, give an output that begins with the string x, and S_i(x) is the i^{th} such string. Each such string carries a description of x, and since we want to consider all of them, we take the summation. In the expression above, \lvert S_i(x)\rvert is the length of such a string, and 2^{-\lvert S_i(x)\rvert} is the probability that a randomly chosen input begins with S_i(x) (and hence produces an output starting with x).
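Since ALP is defined relative to a universal machine it cannot be computed exactly, but the bookkeeping in the formula is easy to mimic on a toy, non-universal “machine” of one’s own invention. In the sketch below, toy_machine and approx_alp are made-up names and the machine is nothing like a real universal Turing machine; it just enumerates every program up to a fixed length and sums 2^{-|p|} over those whose output begins with x:

```python
from itertools import product

# Toy stand-in for a machine M: a leading '1' means "repeat the rest of
# the program three times", a leading '0' means "copy the rest verbatim".
# This is NOT universal; it only makes the 2^(-|p|) bookkeeping concrete.
def toy_machine(program: str) -> str:
    if not program:
        return ""
    head, body = program[0], program[1:]
    return body * 3 if head == "1" else body

def approx_alp(x: str, max_len: int = 12) -> float:
    """Finite truncation of P_M(x): sum 2^(-|p|) over every program p
    of length <= max_len whose output begins with x."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if toy_machine(p).startswith(x):
                total += 2.0 ** (-n)
    return total

# "010101" has a short description under this machine (the program "101"),
# so it picks up more probability mass than the less regular "011010".
print(approx_alp("010101"))
print(approx_alp("011010"))
```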

This definition of ALP has the following properties, which were stated and proved by Solomonoff in the 1960s and 70s.

1. It assigns higher probabilities to strings with shorter descriptions. This is, in a sense, the converse of something like Huffman coding: there, more probable symbols are given shorter codes; here, strings with shorter descriptions are given higher probability.

2. The value of ALP is essentially independent of the particular universal machine used (changing the machine changes it by at most a multiplicative constant).

3. ALP is incomputable, because of the halting problem. In fact, it is for this reason that it has not received much attention: why get interested in a model that is incomputable? Solomonoff, however, insisted that approximations to ALP would be much better than existing systems, and that getting the exact ALP is not even needed.

4. P_M(x) is a complete description of x, meaning that any pattern in the data can be found by using P_M. In this sense the universal distribution is the only inductive principle that is complete, and approximations to it are therefore much to be desired.

__________

Solomonoff also worked on grammar discovery and was very interested in Koza’s Genetic Programming system, which he believed could lead to efficient and much better machine learning methods. He published papers until the ripe old age of 83, and his love for his work is definitely inspiring. Paul Vitanyi notes that:

It is unusual to find a productive major scientist that is not regularly employed at all. But from all the elder people (not only scientists) I know, Ray Solomonoff was the happiest, the most inquisitive, and the most satisfied. He continued publishing papers right up to his death at 83.

Solomonoff’s ideas are still not exploited to their full potential, and in my opinion exploring them will be necessary to realize the Machine Learning dream of never-ending learners and incremental, synergistic Machine Learning. I will write about this in a later post pretty soon. His was a life of great distinction and a life well lived. I also wish strength and peace to his wife Grace and his nephew Alex.

The five surviving (in 2006) founders of AI, who met in 2006 to commemorate 50 years of the Dartmouth Conference. From left: Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge and Ray Solomonoff.

__________

References and Links:

1. Ray Solomonoff’s publications.

2. Obituary: Ray Solomonoff – The founding father of Algorithmic Information Theory by Paul Vitanyi

3. The Universal Distribution and Machine Learning (PDF).

4. Universal Artificial Intelligence by Marcus Hutter (videolectures.net)

5. Minimum Description Length by Peter Grünwald (videolecures.net)

6. Universal Learning Algorithms and Optimal Search (Neural Information Processing Systems 2002 workshop)

__________


A week ago I noticed that a wonderful new documentary had been put up on YouTube by none other than author and documentary film-maker Christopher Sykes. This post is about that documentary and some thoughts related to it. Before I talk about the documentary again, I’ll digress for a moment and come back to it in a while.

With the exception of The Feynman Lectures on Physics Volume III and Six Not-So-Easy Pieces (neither of which I intend to read in the conceivable future), there is no book with which Feynman was involved (he never wrote one himself) that I have not had the opportunity to read. The last one I read was “Don’t You Have Time to Think?”, a collection of delightful letters written by Feynman over the years (note that “Don’t You Have Time to Think?” is the same book as “Perfectly Reasonable Deviations”).

Don't You Have Time To Think

A number of people, including many of Feynman’s close friends, were surprised to learn that Feynman wrote letters, and so many of them; he didn’t seem the kind of person who would write the kind of letters that he did. They give a very different picture of the man than a conventional biography would. Usually, collections of letters tend to be boring and drab, but I think these are an exception: they reveal him to be a genius with a human touch. I have written about Feynman before; for instance, I covered some points in an earlier post which now seems to me overly enthusiastic. ;-)

Sean Carroll aptly writes that Feynman worship is often overdone, and I think he is right. Let me give my own opinion on the matter.

I don’t consider Feynman a god or anywhere close to it (though he is definitely one of my idols and a man I admire greatly); I actually consider him very human, someone who was unashamed of admitting his weaknesses and who had a certain love for life that is rare. I am attracted to Feynman for one reason: people like him are a breath of fresh air amid the supercilious pseudo-intellectual snobs that abound in academia and industry, a breath of fresh air especially for lesser mortals like me. That’s why I like the man. Why is he so famous? I have tried writing about it before, and I won’t do so anymore.

I’d like to cite two quotes that give my point of view on the celebrity-fication of scientists, in this case Feynman. Dave Brooks writes in the Telegraph, in an article titled “Physicist still leaves some all shook up” (February 5, 2003):

Feynman is the person every geek would want to be: very smart, honored  by the establishment even as he won’t play by his rules, admired by people of both sexes, arrogant without being envied and humble without being pitied. In other words, he’s young Elvis, with the Earth  shaking talent transferred from larynx to brain cells and enough sense to have avoided the fat Las Vegas phase. Is such celebrity-fication of scientists good? I think so, even if people do have a tendency to go overboard. Anything that gets us thinking about science is something to be admired, whether it comes in the form of an algorithm or an anecdote.

I remember reading an essay by the legendary Freeman Dyson that said:

Science too needs its share of super heroes to bring in new talent.

These rest my case, I suppose.

_____

The only other book involving Feynman that I have not read, and that I have wanted to read for a LONG time, is Tuva or Bust! Richard Feynman’s Last Journey. Unfortunately I have never been able to find it.

Tuva or Bust! Richard Feynman's Last Journey

There was a BBC Horizon documentary on the same subject, and thankfully Christopher J. Sykes has uploaded it to YouTube.

This is a rare documentary and the last in which Feynman appeared; it was in fact shot just days before his death. It documents the obsession of Richard Feynman and his friend Ralph Leighton with visiting an obscure place in central Asia called Tannu Tuva. During a discussion on geography, in a teasing mood, Feynman was reminded of a long-forgotten memory and quipped to Leighton, “Whatever happened to Tannu Tuva?” Leighton thought it was a joke and confidently said that there was no such country at all. After some searching they found out that Tannu Tuva had once been a country and was now a Soviet satellite, and that its capital was “Kyzyl”, a name so interesting to Feynman that he thought he just had to go to this place. The book and the documentary cover Feynman’s and Leighton’s adventures in scheming to get to Tannu Tuva and to get around Soviet bureaucracy. It is an extremely entertaining film, to say the least. The ending is a little sad, though: Feynman passed away three days before a letter from the Soviets granting permission to visit Tannu Tuva arrived, and Leighton appears to be on the verge of tears.

The introduction to the documentary reads as:

The story of physicist Richard Feynman’s fascination with the remote Asian country of Tannu Tuva, and his efforts to go there with his great friend and drumming partner Ralph Leighton (co-author of the classic ‘Surely You’re Joking, Mr Feynman’). Feynman was dying of cancer when this was filmed, and died a few weeks after the filming. Originally shown in the BBC TV science series ‘Horizon’ in 1987, and also shown in the USA on PBS ‘Nova’ under the title ‘Last Journey of a Genius’

Find the five parts to the documentary below:

“I’m an explorer okay? I get curious about everything and I want to investigate all kinds of stuff”

Part 1

Click on the above image to watch

____

Part 2

Click on the above image to watch

____

Part 3

Click on the above image to watch

____

Part 4

Click on the above image to watch

____

Part 5

Click on the above image to watch

____

Only after I was done with the documentary did I realize that the PBS version of it had been available on Google Video for quite some time.

Find the video here.

_____

Michelle Feynman

As an aside: though Feynman could not manage to go to Tuva in his lifetime, his daughter Michelle did visit Tuva last month!

_____

One of the things that has had me in awe since watching the documentary last week is Tuvan throat singing. It is one of the most remarkable things I have come across in the past month or two. I am strongly attracted to Tibetan chants too, but these are very different and fascinating. The remarkable thing about throat singing is that the singer can produce two pitches at once, as if they were being sung by two separate singers. Have a look!

_____

Project Tuva: The Character of Physical Law Lectures

On the same day I came across the seven lectures that Feynman gave at Cornell in 1964, which were later turned into the book “The Character of Physical Law”. These have been made freely available by Microsoft Research. Though some of these lectures had already been on YouTube for a while, the ones that had not were, needless to say, a joy to watch. I had linked to the lectures on Gravitation and the Arrow of Time previously.

Click on the above image to be directed to the lectures

I came to know of these lectures via Prof. Terence Tao’s page; I find him very inspiring too!

_____

Quick Links:

1. Christopher J. Sykes’ Youtube channel.

2. Tuva or Bust

3. Project Tuva at Microsoft Research

_____

