
Archive for the ‘Artificial Life’ Category

Long Blurb: In the past year, I read three books by Gregory Chaitin: his magnum opus MetaMath! (which had been on my reading stack since 2007), a new little book, Proving Darwin, and most of Algorithmic Information Theory (also available as a PDF). Before this I had only read articles and essays by him. I have been sitting on multiple blog post drafts based on these readings, but somehow I never got around to finishing them; perhaps it is time to publish them as is! Chaitin is an engaging and provocative writer (and speaker! see this fantastic lecture, for instance) who really makes me think. It has come to my notice that a lot of people find his writing style uncomfortable (for apparently lacking humility and indulging in copious hyperbole). While there may be some merit to this view, if one can look past these minor misgivings, he always has a lot of interesting ideas. The apparent hyperbole is best read as effusive enthusiasm; there is indeed a lot to be mined from his writings. I might say more about this when I actually write about his books and the ideas in them. For now, this post was inspired in part by something I read in the second book: Proving Darwin.

I thought it was a thought-provoking book, though perhaps it was not yet ready to be a book. It builds on the idea of DNA as software and tries to think of evolution as a random walk in software space. The idea of DNA as software is not new. If one considers DNA to be software and the other biological processes to be digital as well, then one is essentially abstracting away everything that might not be digital. This view of biological processes might thus be, at best, an imprecise metaphor. But, as Chaitin quotes Picasso, ‘art is a lie that helps us see the truth’, and he explains that this is just a model; the point is to see whether anything mathematically interesting can be extracted from it.

He eloquently points out that once DNA is thought of as a giant codebase, we begin to see that human DNA is a major software patchwork, with ancient subroutines shared with fish, sponges, amphibians and so on. As with large software projects, the old code can never be thrown away; it has to be reused, patched and made to work, and the codebase grows by the principle of least effort (so when someone falls ill and complains that a certain body part is badly designed, it is just a case of bad software engineering!). This “growth by accretion” of the codebase perhaps explains Ernst Haeckel‘s slogan “ontogeny recapitulates phylogeny”, which roughly means that the growth of a human embryo resembles the successive stages of evolution (biology, then, is just a kind of software archaeology).

All this sounds like a good analogy. But is there any way to formalize this vague-ish notion of evolution as a random walk in software space and prove theorems about it? In other words, does a theory as elegant as Darwinian evolution have a purely mathematical core? It must, says Chaitin, and since the core of biology is basically information, he reckons that tools from Algorithmic Information Theory might give some answers. Since natural software is too messy, Chaitin considers artificial software in a contrived toy setting, in parallel with the natural kind, and claims that in this setting one can mathematically prove that such software evolves. I have read some criticisms of this by biologists on the blogosphere, but most of them attack premises that Chaitin explicitly states the book is not about. There are, however, some good critiques; I will write about these when I post my draft on the book.
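To make the phrase “random walk in software space” a bit more concrete, here is a toy sketch. I should stress that this is not Chaitin’s construction (which works with self-delimiting programs and Busy Beaver fitness, and that is what his proofs are actually about); it is just a cartoon of hill-climbing by random mutation over “programs”, with a made-up fitness function, to convey the flavour of the idea.

```python
import random

def fitness(program):
    """Toy 'fitness': the value the program 'computes' -- here simply the sum
    of its instructions.  Chaitin's real model uses the Busy Beaver problem
    and algorithmic mutations; this is only a cartoon."""
    return sum(program)

def mutate(program):
    """Random point mutation: change, insert or delete one instruction."""
    p = list(program)
    op = random.choice(("change", "insert", "delete"))
    if op == "change" and p:
        p[random.randrange(len(p))] = random.randint(0, 9)
    elif op == "insert":
        p.insert(random.randrange(len(p) + 1), random.randint(0, 9))
    elif op == "delete" and len(p) > 1:
        del p[random.randrange(len(p))]
    return p

def evolve(steps=10_000):
    organism = [0]                                   # start from a trivial program
    for _ in range(steps):
        candidate = mutate(organism)
        if fitness(candidate) > fitness(organism):   # keep only improvements
            organism = candidate
    return organism

if __name__ == "__main__":
    winner = evolve()
    print(len(winner), fitness(winner))
```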

________________

Now coming to the title of the post. The following are some excerpts from the book:

But we could not realize that the natural world is full of software, we could not see this, until we invented human computer programming languages […] Nobel Laureate Sydney Brenner shared an office with Francis Crick of Watson and Crick. Most Molecular Biologists of this generation credit Schrödinger’s book What is Life? (Cambridge University Press, 1944) for inspiring them. In his autobiography My Life in Science Brenner instead credits von Neumann’s work on self-reproducing automata.

[…] we present a revisionist history of the discovery of software and of the early days of molecular biology from the vantage point of Metabiology […] As Jorge Luis Borges points out, one creates one’s predecessors! […] in fact the past is not only occasionally rewritten, it has to be rewritten in order to remain comprehensible to the present. […]

And now for von Neumann’s self reproducing automata. von Neumann, 1951, takes from Gödel the idea of having a description of the organism within the organism = instructions for constructing the organism = hereditary information = digital software = DNA. First you follow the instructions in the DNA to build a new copy of the organism, then you copy the DNA and insert it in the new organism, then you start the new organism running. No infinite regress, no homunculus in the sperm! […]

You see, after Watson and Crick discovered the molecular structure of DNA, the 4-base alphabet A, C, G, T, it still was not clear what was written in this 4-symbol alphabet, it wasn’t clear how DNA worked. But Crick, following Brenner and von Neumann, somewhere in the back of his mind had the idea of DNA as instructions, as software

I did find this quite interesting, even if it sounds revisionist. That is because the power of a paradigm-shifting conceptual leap is often understated, especially after time has passed: the more time passes, the more “obvious” the idea becomes and hence the more “diffused” its impact. Consider the idea of a “computation”. A century ago there was no trace of any such idea, but once it was born and diffused we basically took it for granted. In fact, in my opinion, thinking about a lot of things (anything?) precisely is nearly impossible without the notion of a computation. von Neumann often remarked that Turing’s 1936 paper contained in it both the idea of software and hardware, and that the actual computer was just an implementation of that idea. In the same spirit, von Neumann’s ideas on self-reproducing automata had a similar impact on the way people started thinking about natural software, replication and even artificial life (recall the famous von Neumann quote: life is a process that can be abstracted away from any particular medium).

While I found this proposition quite interesting, I left it at that. Chaitin cited Brenner’s autobiography, which I could not obtain since I hadn’t planned on reading it, and Google Books previews did not suffice either, so I didn’t pursue what Brenner actually said.

However, this changed recently: I exchanged some emails with Maarten Fornerod, who pointed me to an interview with Sydney Brenner that actually talks about this.

The relevant four parts of this very interesting 1984 interview can be found here, here, here and here. The text is reproduced below.

“45: John von Neumann and the history of DNA and self-replication:

What influenced me the most was the articles of von Neumann. Now, I had become interested in von Neumann not through the coding issues but through his interest in the nervous system and computers and of course, that’s what Seymour was interested in because what we wanted to do is find out how the brain worked; that was what… you know, like a hobby on the side, you know. After… after dinner we’d work on central problems of the brain, you see, but it was trying to find out how this worked. And so I got this symposium, the Hixon Symposium, which I had got to read. I was, at that time, very fascinated by a rather strange complexion of psychology things by a man called Wolfgang Köhler who was a gestalt psychologist and another man called Lewin, who was what was called a topological psychologist and of course they were talking at a very different level, but in this book there is an article by Köhler and of course in that book there’s this very famous paper which no one has ever read of von Neumann. Now of course later I discovered that those ideas were much older than the dating of this and that people had taken notes of them and they had circulated. But what is the brilliant part of this paper is in fact his description of what it takes to make a self-reproducing machine. And in fact if you look at what he says and what Schrödinger says, you can see what I have come to call Schrödinger’s fundamental error, and in fact we can… I can find you the passage in that. But it’s an amazing passage, because what von Neumann shows is that you have to have a mechanism not only of copying the machine but of copying the information that specifies the machine, right, so that he then divided the machine – the… the automaton as he called it – into three components: the functional part of the automaton; a… a decoding section of this which is part of that, which actually takes the tape, reads the instructions and builds the automaton; and a device that takes a copy of this tape and inserts it into the new automaton, right, which is the essential… essential, fundamental… and when von Neumann said that is the logical basis of self-reproduction, then you can see where Schrödinger made his mistake and this can be summarised in one sentence. Schrödinger says the chromosomes contain the information to specify the future organism and the means to execute it and that’s not true. The chromosomes contain the information to specify the future organisation and a description of the means to implement, but not the means themselves, and that logical difference is made so crystal clear by von Neumann and that to me, was in fact… The first time now of course, I wasn’t smart enough to really see that this is what DNA is all about, and of course it is one of the ironies of this entire field that were you to write a history of ideas in the whole of DNA, simply from the documented information as it exists in the literature, that is a kind of Hegelian history of ideas, you would certainly say that Watson and Crick depended on von Neumann, because von Neumann essentially tells you how it’s done and then you just… DNA is just one of the implementations of this. But of course, none knew anything about the other, and so it’s a… it’s a great paradox to me that in fact this connection was not seen. Linus Pauling is present at that meeting because he gives this False Theory of Antibodies there. 
That means he heard von Neumann, must have known von Neumann, but he couldn’t put that together with DNA and of course… well, neither could Linus Pauling put his own paper together with his future work because he and Delbrück wrote a paper on self… on self-complementation – two pieces of information – in about 1949 and had forgotten it by the time he did DNA, so that of course leads to a really distrust about what all the historians of science say, especially those of the history of ideas. But I think that that in a way is part of our kind of revolution in thinking, namely the whole of the theory of computation, which I think biologists have yet to assimilate and yet is there and it’s a… it’s an amazingly paradoxical field. You know, most fields start by struggling through from, from experimental confusion through early theoretical, you know, self-delusion, finally to the great generality and this field starts the other way round. It starts with a total abstract generality, namely it starts with, with Gödel’s hypothesis or the Turing machine, and then it takes, you know, 50 years to descend into… into banality, you see. So it’s the field that goes the other way and that is again remarkable, you know, and they cross each other at about 1953, you know: von Neumann on the way down, Watson and Crick on the way up. It was never put together.

[Q] But… but you didn’t put it together either?

I didn’t put it together, but I did put together a little bit later that, because the moment I saw the DNA molecule, then I knew it. And you connected the two at once? I knew this.

46: Schrödinger Wrong, von Neumann Right:

I think he made a fundamental error, and the fundamental error can be seen in his idea of what the chromosome contained. He says… in describing what he calls the code script, he says, ‘The chromosome structures are at the same time instrumental in bringing about the development they foreshadow. They are law code and executive power, or to use another simile, they are the architect’s plan and the builder’s craft in one.’ And in our modern parlance, we would say, ‘They not only contain the program but the means to execute the program’. And that is wrong, because they don’t contain the means; they only contain a description of the means to execute it. Now the person that got it right and got it right before DNA is von Neumann in developing the logic of self-reproducing automata which was based of course on Turing’s previous idea of automaton and he gives this description of this automaton which has one part that is the machine; this machine is built under the instructions of a code script, that is a program and of course there’s another part to the machine that actually has to copy the program and insert a copy in the new machine. So he very clearly distinguishes between the things that read the program and the program itself. In other words, the program has to build the machinery to execute the program and in fact he says it’s… when he tries to talk about the biological significance of this abstract theory, he says: ‘This automaton E has some further attractive sides, which I shall not go into at this time at any length’.

47: Schrödinger’s belief in calculating an organism from chromosomes:

One of the central things in the Schrödinger’s little book What Is Life is what he has to say about the chromosomes containing the program or the code, and he says here… on page 20, he says: ‘In calling the structure of the chromosome fibres a code script, we mean that the all penetrating mind once conceived by Laplace, to which every cause or connection lay immediately open, could tell from their structure whether the egg would develop under suitable conditions, into a black cock, or into a speckled hen, into a fly, or a maize plant, a rhododendron, a beetle, a mouse or a woman’. Now, apart from the list of organisms he has given, which falls short of a serious classification of the living world, what he is saying here is that if you could look at the chromosomes, you could compute the… you could calculate the organism, but he’s saying something more. He’s saying that you could actually implement the organism because he goes on to add: ‘But the term code script is of course too narrow. The chromosome structures are at the same time instrumental in bringing about the development they foreshadow. They are law code and executive power, or to use another simile they are architect’s plan and builder’s craft in one.’ What he is saying here is that the chromosomes not only contain a description of the future organism but the means to implement that description or program as we might call it. That’s wrong, because they don’t contain the means to implement it, they only contain a description of the means to implement it and that distinction was made absolutely crystal clear by this remarkable paper of von Neumann, in which he develops and in fact provides a proof of what a self-reproducing automaton – or machine – would have to… would have to contain in order to satisfy the requirements of self-reproduction. He develops this concept of course from the earlier concept of Turing, who had developed ideas of automata that operated on tapes and which of course gave a mechanical example of how you might implement some computation. And if you like, this is the beginning of the theory of computation and what von Neumann has to say about this is made very clear in this essay. And he says that you’ve got to have several components, and I’ll just summarise them. He said you first start with an automaton A, ‘which, when furnished this description of any other automaton in terms of the appropriate functions, will construct that entity’. It’s like a Turing machine, you see. So automaton A has to be given a tape and then it’ll make another automaton A, all right. Now what you have, you’ve got to have… add to this automaton B. ‘Automaton B can make a copy of any instruction I that is furnished to it’, so if you give it a program, it’ll copy the program. And then you have… you combine A and B with each other and you give a control mechanism C, that you have to add as well, which does the following. It says, ‘Let A be furnished with the instruction I’. So you give… you give the tape to A, A copies it under the control of C, okay, by every description, so it makes another copy of A, then you… and of course B is part of A. Then you… C then tells B, ‘copy this tape I’, gives B the copying part of it, this copies it and then C will then take the tape I, and put it into the new machine, okay, and turns it loose as he says here, ‘as an independent entity’.

48: Automata akin to Living Cells:

What von Neumann says is that you need several components in order to provide this self-reproducing automaton. One component, which he calls automaton A, will make another automaton A when furnished with a description of itself. Then you need an automaton C… you need an automaton B, which has the property of copying the instruction tape I, and then you need a control mechanism which will actually control the switching. And so this automaton, or machine, can reproduce itself in the following way. The entity A is provided with a copy of… with the tape I, it now makes another A. The control mechanism then takes the tape I and gives it to B and says make a copy of this tape. It makes a copy of the tape and the control mechanism then inserts the new copy of the tape into the new automaton and effects the separation of the two. Now, he shows that the entire complex is clearly self-reproductive, there is no vicious circle and he goes on to say, in a very modest way I think, the following. He says, ‘The description of this automaton has some further attractive sides into which I shall not go at this time into any length. For instance, it is quite clear that the instruction I is roughly effecting the functions of a gene. It is also clear that the copying mechanism B performs the fundamental act of reproduction, the duplication of the genetic material which is also clearly the fundamental operation in the multiplication of living cells. It is also clear to see how arbitrary alterations of the system E, and in particular of the tape I, can exhibit certain traits which appear in connection with mutation, which is lethality as a rule, but with a possibility of continuing reproduction with a modification of traits.’ So, I mean, this is… this we know from later work that these ideas were first put forward by him in the late ’40s. This is the… a published form which I read in early 1952; the book was published a year earlier and so I think that it’s a remarkable fact that he had got it right, but I think that because of the cultural difference – distinction between what most biologists were, what most physicists and mathematicians were – it absolutely had no impact at all.”
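As an aside, the A/B/C/I logic Brenner describes above is easy to caricature in code. The sketch below is only an illustration of the logical separation von Neumann insists on (the tape is a description of the machinery, not the machinery itself); the data format and all the names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Automaton:
    parts: dict              # the "machinery" built from the description
    tape: str = ""           # the description it carries (instruction tape I)

def construct(description: str) -> Automaton:
    """Automaton A: build a machine from a description of it.
    Here a 'machine' is just a dict of named parts parsed from the tape."""
    parts = dict(item.split("=") for item in description.split(";") if item)
    return Automaton(parts=parts)

def copy_tape(tape: str) -> str:
    """Automaton B: blindly duplicate the instruction tape, without
    'understanding' it."""
    return str(tape)

def reproduce(parent: Automaton) -> Automaton:
    """Control C: have A build a new body from the tape carried by the parent,
    have B copy the tape, then insert the copy into the offspring."""
    child = construct(parent.tape)         # A builds the body from the description
    child.tape = copy_tape(parent.tape)    # B copies tape I; C inserts it
    return child

if __name__ == "__main__":
    tape_I = "sensor=1;motor=2;copier=1"
    ancestor = construct(tape_I)
    ancestor.tape = tape_I                 # the description rides inside the machine
    offspring = reproduce(ancestor)
    print(offspring.parts == ancestor.parts, offspring.tape == ancestor.tape)
```

Even in this cartoon the key point survives: the offspring’s body is built from the description carried on the tape, and the tape itself is copied verbatim into the offspring, so there is no homunculus and no infinite regress.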

________________

References and Also See:

1. Proving Darwin: Making Biology Mathematical, Gregory Chaitin (Amazon).

2. Theory of Self-Reproducing Automata, John von Neumann. (PDF).

3. 1966 film on John von Neumann.

________________

Well, just a fortnight or so back I discovered that Dr Radford Neal, one of the top researchers in statistics and machine learning, was blogging. And this morning I discovered that Dr Vitorino Ramos has been blogging for over a week now too!

This comes as a surprise, but a very pleasant one. I am very glad to have found his page; it promises to be a very different web-log and could indeed grow into one of the top blogs on swarming, self-organization, complexity and distributed systems, being written by a leading expert in the field. It will be great to catch up on his work. In the past I have tried to write about some of his interesting work on my own page; those posts can be found here.

[Vitorino Ramos: Image Source]

Dr Ramos’ research areas are chiefly Artificial Life, Artificial Intelligence, Bio-Inspired Computing, Collective Intelligence and Complex Systems. He obtained his PhD in 2004 and has published about 70 papers in these fields and their broad application areas. Put simply, the IQ of the “blogosphere” has gone up a little with this addition.

For starters I would recommend his article on financial markets (given the situation today), which talks about herd mentality among investors, how it amplifies the behaviour of uninformed investors, and what that can lead to. Most investors do not understand much of the market mechanism. This bare fact is put most aptly in a cartoon I found on his blog, and his post goes much beyond that.



And going by the website and blog name, it seems that Dr Ramos is now also interested, in some sense, in Tibor Gánti’s Chemoton Theory.

Quick Links:

1. Vitorino Ramos’ Homepage.

2. Dr Ramos’ Publications. (PDFs available online)


________________

Before I make a start, I want to make it very clear that in spite of what the title may suggest, this is not a “sensational” post. It is just something that really intrigued me. It falls under the domain of image segmentation and pattern recognition, but it can equally intrigue a person with a non-scientific background and a liking (or disliking) for Franz Kafka’s work. I keep the title because it is the title of an original work by Dr Vitorino Ramos, and changing it would not be right.

Note: Readers who are not interested in the technical details can skip those parts and read only the text in bold.

Franz Kafka is one writer whose works have had a profound impact on me, in the sense that they disturbed me each time I thought about them. Not ONLY because of his writings per se, but in greater part because I had read a lot about his rather tragic life, and I saw in his works a heart-breaking reflection of what happened in it (I see a lot of similarities between Kafka’s life and that of Premchand, although most of Premchand’s work was published in his lifetime, with true critical acclaim coming only after his death). Yes, I do think his writings give a good picture of the Europe of his time and of human needs and behavior, but the earlier reason outweighs these. Kafka remains one of my favorite writers, though his works are mostly short stories. He wrote mainly on themes emphasizing the alienation of man and the indifference of society. Kafka’s tormenting thoughts on dehumanization, the cruel world and bureaucratic labyrinths, which he experienced as part of the not-so-liked Jewish minority in Prague, his experiences in the jobs he held, his love life and affairs, and his constant fear of mental and physical collapse as a result of clinical depression and ill health, are reflected in a lot of his works, including his novella The Metamorphosis.

W. H. Auden rightly wrote about Kafka:

“Kafka is important to us because his predicament is the predicament of the modern man”

In The Metamorphosis, the protagonist Gregor Samsa wakes up one morning to find he has turned into a giant insect. It is fairly apparent that the “transformation” was meant by Kafka in a metaphorical sense and not a literal one, based mostly on his fears and his own life experiences. The novella starts like this...

As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a monstrous vermin.

While rummaging through a few scientific papers that explored the problem of pattern recognition using a distributed approach, I came across a few by Dr Ramos et al. which dealt with the issue using an artificial ant colony approach.

In the previous post I mentioned that the self-organization of neurons into a brain-like structure and the self-organization of an ant colony are similar in more than a few ways. If this can be exploited, it could have implications for pattern recognition problems, where perceptive abilities emerge from the local, simple interactions of simple agents. Such decentralized systems, part of the swarm intelligence paradigm, look very promising for pattern recognition, and for image segmentation in particular, since these can basically be treated as clustering and combinatorial problems with the image itself taken as the ant colony habitat.

The basis for this post was laid down in the previous post on colony cognitive maps, where we observed the evolution of a pheromonal field and a simple model for it:

[Evolution of a distribution of (artificial) ants over time: Image Source]


The above shows the evolution of a distribution of artificial ants on a square lattice; this work has been extended to digital image lattices by Ramos et al. Image segmentation is the image processing problem of partitioning an image into different regions, for example into areas of low and high contrast, or on the basis of texture, grey level and so on. It is very important because the output of an image segmentation process can be used as input for object recognition. The work of Ramos et al. (in the references below) and some of the papers cited in it have really intrigued me, and I would strongly suggest that readers have a look at them if they are at all interested in image segmentation, pattern recognition or self-organization in general; some might even be interested in implementing something similar!

In one of the papers, a swarm of artificial ants is thrown onto a digital habitat (an image of Albert Einstein) and left to explore it for 1000 iterations, after which the Einstein image is replaced by a map image. The evolution of the colony cognitive map on the Einstein image habitat is shown below for various iterations.

[Evolution of a pheromonal field on an Einstein image habitat for t= 0, 1, 100, 110, 120, 130, 150, 200, 300, 400, 500, 800, 900, 1000: Image Source]

The above is represented most aptly in a .gif image.

[Evolution of a pheromonal field on an Einstein habitat: Image Source]

Now, instead of Einstein, a Kafka image was taken and subjected to the same process. Image Source

In the second row, the Kafka image habitat is replaced by that of a red ant. The abstract of the paper of the same name reads:

Created with an Artificial Ant Colony, that uses images as Habitats, being sensible to their gray levels. At the second row,  Kafka is replaced as a substrate, by Red Ant. In black, the higher levels of pheromone (a chemical evaporative sugar substance used by swarms on their orientation trough out the trails). It’s exactly this artificial evaporation and the computational ant collective group synergy reallocating their upgrades of pheromone at interesting places, that allows for the emergence of adaptation and “perception” of new images. Only some of the 6000 iterations processed are represented. The system does not have any type of hierarchy, and ants communicate only in indirect forms, through out the successive alteration that they found on the Habitat.
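To get a feel for the mechanism, here is a minimal sketch of the basic loop, assuming a simple scheme in which ants deposit pheromone in proportion to the local gray level and prefer pheromone-rich neighbours. The parameter values and the exact deposition rule are my own illustrative choices, not those of Ramos et al.

```python
import numpy as np

def ant_habitat_sketch(image, n_ants=500, steps=1000,
                       eta=0.07, evaporation=0.015, beta=3.5, seed=None):
    """Toy ant-colony 'perception' of a grayscale image (values in [0, 1]).
    Ants deposit more pheromone where the image is darker, move to one of
    their 8 neighbours with probability biased toward pheromone-rich sites,
    and the field evaporates every step, so the trail map slowly adapts to
    the image underneath.  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    pheromone = np.zeros((h, w))
    ants = np.column_stack([rng.integers(0, h, n_ants),
                            rng.integers(0, w, n_ants)])
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(steps):
        for k, (i, j) in enumerate(ants):
            # deposit pheromone in proportion to the local darkness
            pheromone[i, j] += eta * (1.0 - image[i, j])
            # weight the 8 neighbours (toroidal lattice) by their pheromone level
            sites = [((i + di) % h, (j + dj) % w) for di, dj in moves]
            weights = np.array([(1.0 + pheromone[p]) ** beta for p in sites])
            ants[k] = sites[rng.choice(len(sites), p=weights / weights.sum())]
        pheromone *= (1.0 - evaporation)   # global evaporation
    return pheromone                        # the emergent "cognitive map"
```

Running this on one image and then swapping the underlying image mid-run is, in spirit, the Einstein-to-map and Kafka-to-red-ant experiment described above.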

Now what intrigues me is that the transition is extremely rapid. Such perceptive ability in the face of a change in the image habitat could have massive implications for how we look at pattern recognition in such cases.

Extremely intriguing!

Resources on Franz Kafka:

1. A Brief Biography

2. The Metamorphosis At Project Gutenberg. Click here >>

3. The Kafka Project

References and STRONGLY recommended papers:

1. Artificial Ant Colonies in Digital Image Habitats – A Mass behavior Effect Study on Pattern Recognition. Vitorino Ramos and Filipe Almeida. Click Here >>

2. Social Cognitive Maps, Swarms Collective Perception and Distributed Search on Dynamic Landscapes. Vitorino Ramos, Carlos Fernandes, Agostinho C. Rosa. Click Here >>

3. Self-Regulated Artificial Ant Colonies on Digital Image Habitats. Carlos Fernandes, Vitorino Ramos, Agostinho C. Rosa. Click Here >>

4. On the Implicit and the Artificial – Morphogenesis and Emergent Aesthetics in Autonomous Collective Systems. Vitorino Ramos. Click Here >>

5. A Strange Metamorphosis [Kafka to Red Ant], Vitorino Ramos.


________________

Some posts back, I wrote on Non-Human Art or Swarm Paintings; there I mentioned that those paintings were NOT random but were a colony cognitive map.

This post will serve as the conceptual basis for the Swarm Paintings post, the next post and a few future posts on image segmentation.

Motivation: Some might wonder what the point of writing about such a topic is, and whether it is unrelated to what I generally write about. That is not the case; most of the things I write about are related in some sense. The motivation for reading thoroughly about this (and writing about it) may be condensed into the following:

1. The idea of a colony cognitive map is used in SI/A-life experiments, areas that really interest me.

2. Understanding the idea of colony cognitive maps gives a much better understanding of the inherent self-organization in insect swarms and offers a lead into understanding self-organization in general.

3. The parallel to colony cognitive maps, cognitive maps proper, comes from cognitive science and brain science; again, areas that really interest me, as they hold the key to the evolution and development of REAL artificial intelligence in the future.

The term “colony cognitive map”, as I pointed out earlier, is in a way a parallel to a cognitive map in brain science (I use “brain science” for the combination of fields like neuroscience, behavioral psychology and the cognitive sciences, and will use it in this sense throughout this post), and the name is indeed inspired by it.

There is more than just a romantic resemblance between the self-organization of “simple” neurons into an intelligent brain-like structure, producing behaviors well beyond the capabilities of an individual neuron, and the self-organization of simple, unintelligent insects into complex swarms, producing intelligent, very complex and also aesthetically pleasing behavior! I have written previously on such intelligent mass behavior. Consider another example: neurons transmit neurotransmitters in much the same way that a social insect colony is marked by pheromone deposition and trail laying.

[Self-organization in neurons (left) and a bird swarm (below). Photo Credit >> Here and Here]

First, let us revisit what swarm intelligence roughly is (yes, I still have to write a post on a mathematical definition of it). Swarm intelligence is basically a property of a system in which the collective actions of unsophisticated agents, acting locally, cause functional and sophisticated global patterns to emerge. Swarm intelligence gives a scheme for exploring decentralized problem solving. One of my favorite examples is a bird swarm, wherein the collective behavior of birds, each of which is very simple, causes very complex global patterns to emerge. I have written about this previously; don’t forget to look at the beautiful video there if you have not done so already!

Self Organization in the Brain: Over the last two months or so I have been reading Douglas Hofstadter’s magnum opus, Gödel, Escher, Bach: an Eternal Golden Braid (GEB). This great book drew the comparison between self-organization in the brain and the behavior and self-organization of ant colonies as early as 1979.

[Photo Source: Wikipedia Commons]

The brain is often regarded as one of the most complex entities, if not the most complex. But a rock is very complex too, so what makes the brain so special? What distinguishes the brain from something like a rock is the purposeful arrangement of its elements, along with the massive parallelism and self-organization observed in it. Research in cybernetics in the 1950s and 1960s led the “cyberneticians” to try to explain the complex actions and reactions of the brain, produced without any external instruction, in terms of self-organization. Out of these investigations grew the idea of neural networks (from 1943 onwards), which are very simplified models of how neurons interact in our brains. Unlike conventional approaches in AI, there is no centralized control over a neural network: all the neurons are connected to each other in some way or another, but, just as in an ant colony, none is in control. Together, however, they make very complex behaviors possible. Each neuron works on a simple principle, and combinations of many neurons can lead to complex behavior, an example believed to be due to self-organization. To help the animal survive, the brain also has to stay in tune with its environment; one way it does this is by constantly learning and making predictions on that basis, which means a constant change and evolution of connections.

Cognitive Maps: The concept of space and how humans perceive it is a topic that has seen a lot of discussion in academia and philosophy. A cognitive map is also often called a mental map, a mind map, a cognitive model, etc.

The origin of the term cognitive map is largely attributed to Edward Chace Tolman; here cognition refers to the mental models people use to perceive, understand and react to seemingly complex information. To understand what a mental model means, consider an example I came across on Wikipedia. A mental model is an internal explanation in somebody’s thought process of how something works in the spatial or external world in general. It is hypothesized that once a mental model for something is formed in the brain, it can replace careful analysis and careful decision making, thereby reducing cognitive load. Coming back to the example, consider a person whose mental model is to perceive snakes as dangerous: such a person will likely retreat rapidly, almost as a reflex, without initial conscious logical analysis, whereas somebody who does not hold such a model might not react the same way.

Extending this idea, we can look at cognitive maps as a way to structure, organize and store spatial information in the brain, which reduces cognitive load through mental models and enables quick learning and recall of information.

In a new locality, for example, human way-finding involves recognizing and making use of common representations of information such as maps, signs and images. The brain tries to integrate and connect this information into a representation that is consistent with the environment, a sort of “map”. Such internal representations (not necessarily spatial) formed in the brain can be called cognitive maps. As a person’s familiarity with an area increases, the reliance on these external representations of information gradually decreases, and the common landmarks become tools for localizing oneself within the cognitive map.

Cognitive maps store conscious perceptions of the sense of position and direction, as well as the subconscious, automatic interconnections formed as spatial information is acquired while traveling through the environment. They thus help determine one’s position, the positioning of objects and places, and the idea of how to get from one place to another. A cognitive map may also be said to be an internal cognitive collage.

Though metaphorically similar, a cognitive map is not really the same thing as a cartographic map.

Colony Cognitive Maps: With the above background, it is much easier to picture a colony cognitive map, which is basically an analogy to the above. As described in my post on adaptive routing, social insects such as ants construct trails and networks of regular traffic via a process of pheromone deposition, positive feedback and amplification by trail following. These are very similar to cognitive maps, with one obvious difference: cognitive maps lie inside the brain, while social insects such as ants write their spatial memories in the external environment.

Let us try to picture this in terms of ants; I HAVE already described how a colony cognitive map is formed in this post, without mentioning the term.

A rather indispensable aspect of mass communication in insect swarms is stigmergy. Stigmergy refers to indirect communication by means of markers such as pheromones in ants. Two distinct types of stigmergy are observed. The first, called sematectonic stigmergy, involves a change in the physical characteristics of the environment; nest building is an example, in which an ant observes a structure developing and adds its ball of mud to the top of it. The other form of stigmergy is sign-based and hence indirect: something is deposited in the environment that makes no direct contribution to the task being undertaken but is used to influence subsequent task-related behavior. Sign-based stigmergy is very highly developed in ants, which use chemicals called pheromones to build a very sophisticated signaling system. Ants foraging for food lay down pheromone along the path they follow. An isolated ant moves essentially at random, but an ant encountering a previously laid trail will detect it, decide to follow it with high probability, and reinforce it with a further quantity of pheromone. Since pheromone evaporates, the less-used paths gradually vanish. We see that this is a collective behavior.

Now assume that in an environment the actors (ants, say) emit pheromone at a set rate, and that the pheromone evaporates at a constant rate. We also assume that the ants themselves have no memory of previous paths taken and act ONLY on the basis of local interactions with the pheromone concentrations in their vicinity. Now consider the “field” or “map” formed in the environment as the overall result of the movements of the individual ants over a fixed period of time. This pheromonal field contains information about the past movements and decisions of the individual ants.

The pheromonal field (the colony cognitive map), as I just mentioned, contains information about the past movements and decisions of the organisms, but not arbitrarily far into the past, since the field “forgets” its distant history through evaporation in time. This is exactly a parallel to a cognitive map, with the difference that for a colony the spatial information is written in the environment, rather than inside the brain as in a human cognitive map. Another major similarity is that neurons release a number of neurotransmitters, which can be considered a parallel to the pheromones released as described above. The similarities are striking!

Looking back at the post on swarm paintings, we can see that such paintings can be made with the help of a swarm of robots: more pheromone concentration on a path means more paint. Hence the painting is NOT random but EMERGENT. I hope I have made the idea clear.

How Swarms Build Colony Cognitive Maps: It is worthwhile to look at a simple model of how ants construct cognitive maps, which I read about in a wonderful paper by Mark Millonas and Dante Chialvo. Though I have already mentioned them, I’ll still sum up the basic assumptions.

Assumptions:

1. The individual agent (or ant) is memoryless.

2. There is no direct communication between the organisms.

3. There is no spatial diffusion of the pheromone deposited. It remains fixed at a point where it was deposited.

4. Each agent emits pheromone at a constant rate, say $\eta$.

Stochastic Transition Probabilities:

Now, the state of each agent can be described by a phase variable containing its position $r$ and orientation $\theta$. Since the response at any given time depends solely on the present state and not on the previous history, it suffices to specify the transition probability from one place and orientation $(r,\theta)$ to another $(r',\theta')$ an instant later. The movement of each individual agent can thus roughly be considered a continuous Markov process whose transition probabilities at each instant of time are determined by the pheromone concentration $\sigma(x, t)$.

Using theoretical considerations and generalizations from observations of real ant colonies, the response function can be summed up in a two-parameter pheromone weight function:

$\displaystyle W(\sigma) = \left(1 + \frac{\sigma}{1 + \delta\sigma}\right)^{\beta}$

This weight function measures the relative probability of moving to a site $r$ with pheromone density $\sigma(r)$.

The parameter $\beta$ measures the degree of randomness with which an agent follows a pheromone trail: for low values of $\beta$ the pheromone concentration has little influence on its choice, while for high values it dominates.

The other factor, $\displaystyle\frac{1}{\delta}$, signifies the sensory capacity: it captures the fact that each ant’s ability to sense pheromone decreases somewhat at higher concentrations, a saturation of sorts.
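Assuming the weight function takes the Chialvo–Millonas form given above, the roles of $\beta$ and $\delta$ are easy to check numerically (the parameter values below are only illustrative):

```python
def pheromone_weight(sigma, beta=3.5, delta=0.2):
    """Relative attractiveness of a lattice site with pheromone density sigma,
    assuming the form W(sigma) = (1 + sigma / (1 + delta * sigma)) ** beta.
    beta    -- how strongly an ant follows the pheromone gradient;
    1/delta -- sensory capacity: the response saturates at high densities.
    Parameter values are illustrative, not taken from the paper."""
    return (1.0 + sigma / (1.0 + delta * sigma)) ** beta

# Doubling the density at low concentration nearly triples the weight,
# while doubling it at high concentration changes it by under 20%.
print(pheromone_weight(2.0) / pheromone_weight(1.0))    # ~2.7
print(pheromone_weight(80.0) / pheromone_weight(40.0))  # ~1.2
```

The probability of actually moving to a particular neighbouring site is then just this weight normalized over all the sites the ant can reach in one step.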

Pheromone Evolution: It is essential to describe how the pheromone evolves. As already assumed, each agent emits pheromone at a constant rate $\eta$, with no spatial diffusion. If the pheromone at a location is not replenished, it gradually evaporates. The pheromonal field so formed therefore contains a memory of the past movements of the agents in space, but, because of the evaporation process, not a very distant one.

Analysis: Another important parameter is the density of ants, $\rho_0$. Using all these parameters we can define a single quantity, the average pheromonal field $\displaystyle\sigma_0 = \frac{\rho_0 \eta}{\kappa}$, where $\kappa$ is, as mentioned above, the rate of scent decay.

A further detailed analysis can be studied here; with the above background it is straightforward to follow.

[Evolution of distribution of ants : Source]


After carrying out the mathematical analysis in the hyperlink above, we fix the values of the parameters.

Then a large number of ants are placed at random positions; the movement of each ant is determined by the transition probability $P_{ik}$.

Another assumption is that the pheromone density at each point is zero at $t=0$. Each ant deposits pheromone at the fixed rate $\eta$, and the pheromone evaporates at the fixed rate $\kappa$.

In the beautiful picture above, we see the evolution of a distribution of ants on a 32×32 lattice. A pattern begins to emerge as early as the 100th time step; weak pheromonal paths evaporate completely, and we finally get the emergent ant distribution pattern shown in the final image.
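The whole simulation fits comfortably in a short script. The sketch below follows the assumptions listed earlier (memoryless ants, constant-rate deposition, fixed evaporation, no diffusion) on a 32×32 lattice with periodic boundaries; it omits the directional-persistence factor of the original model and uses illustrative parameter values, so treat it as a sketch of the idea rather than a faithful reproduction of the paper.

```python
import numpy as np

def simulate_colony(side=32, n_ants=512, steps=1000,
                    eta=0.07, kappa=0.015, beta=3.5, delta=0.2, seed=None):
    """Minimal sketch of the lattice model: memoryless ants deposit pheromone
    at a constant rate eta, the field evaporates at rate kappa, and each ant
    hops to one of its 8 neighbours with probability proportional to the
    weight W(sigma) of that site.  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    sigma = np.zeros((side, side))                  # pheromone field, zero at t = 0
    ants = rng.integers(0, side, size=(n_ants, 2))  # random initial positions
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

    def W(s):                                       # pheromone weight function
        return (1.0 + s / (1.0 + delta * s)) ** beta

    for _ in range(steps):
        for k, (i, j) in enumerate(ants):
            sigma[i, j] += eta                      # constant-rate deposition
            sites = [((i + di) % side, (j + dj) % side) for di, dj in moves]
            w = np.array([W(sigma[p]) for p in sites])
            ants[k] = sites[rng.choice(len(sites), p=w / w.sum())]
        sigma *= (1.0 - kappa)                      # evaporation: the field forgets
    return sigma                                    # the emergent colony cognitive map

if __name__ == "__main__":
    field = simulate_colony(steps=200, seed=1)
    print(field.max(), field.mean())
```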

The conclusion Chialvo and Millonas draw is that scent-following of the very basic type described above (see the assumptions) is by itself sufficient to produce the emergence of complex patterns of organized social insect traffic. The detailed conclusions can be read in this wonderful paper!

References and Suggested Further Reading:

1. Cognitive Maps, click here >>

2. Remembrance of places past: A History of Theories of Space. click here >>

3. The Science of Self Organization and Adaptivity, Francis Heylighen, Free University of Brussels, Belgium. Click here >>

4.   The Hippocampus as a Cognitive Map, John O’ Keefe and Lynn Nadel, Clarendon Press, Oxford. To access the pdf version of this book click here >>

5. The Self-Organization in the Brain, Christoph von der Malsburg, Depts for Computer Science, Biology and Physics, University of Southern California.

6. How Swarms Build Cognitive Maps, Dante R. Chialvo and Mark M. Millonas, The Santa Fe Institute. Click here >>

7. Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes, Vitorino Ramos, Carlos Fernandes, Agostinho C. Rosa.

Related Posts:

1. Swarm Paintings: Non-Human Art

2. The Working of a Bird Swarm

3. Adaptive Routing taking Cues from Stigmergy in Ants

Possibly Related:

Gödel, Escher, Bach: A Mental Space Odyssey

________________

General Background: Since childhood I have enjoyed sketching and painting, very much so! Sometimes I found myself copying an existing image or painting, making small changes here and there. Yes, the paintings came out beautiful (or so I think!), but one thing always made me unhappy: I felt that the creativity needed to make original work was missing at times (not always). It was not there all the time; it came in bursts and went away.

I agree with Leonel Moura (from his article) that creativity is basically the product of different experiences and interactions, the absence or lack of which can make art lose novelty.

Talking of novelty, how about looking at art in nature? Richard Dawkins states that the difference between human art or design and the amazingly “ingenious” forms we encounter in nature is that human art originates in the mind, while natural designs result from natural selection. Which is very true. It is another matter, though, that natural selection and cultural selection, which ultimately decides the “popularity” of an artwork, do not function in the same way. So how can we remove the cultural or human bias that pervades our art forms?
Answers in Artificial Life: Artificial life may be defined as “a field of study devoted to understanding life by attempting to derive general theories underlying biological phenomena, and recreating these dynamics in other physical media, such as computers, making them accessible to new kinds of experimental manipulation and testing. This scientific research links biology and computer science.”

Most A-Life simulations today cannot be considered truly alive, as they still cannot show some properties of truly living systems and carry considerable human bias in their design. There are, however, two views on the whole idea of Artificial Life and the extent to which it can go.

Weak A-Life is the idea that the “living process” cannot be achieved outside a chemical domain; weak A-Life researchers concentrate on simulating life processes with the underlying aim of understanding biological processes.

Strong A-Life is exactly the reverse. John von Neumann once remarked that life is a process which can be abstracted away from any particular medium. In recent times the ecologist Tom Ray declared that his computer simulation Tierra was not a simulation of life but a synthesis of life. In Tierra, computer programs compete for CPU time and access to main memory; these programs are evolvable, and can replicate, mutate and recombine.
Relating A-Life to Art: While researching these ideas, and the possibility that they could be used to generate the art forms I talked about in the first paragraph, I came across a few papers by swarm intelligence guru Vitorino Ramos and a couple of articles by Leonel Moura, who had worked in collaboration with Dr Ramos on precisely this theme.
Swarm Paintings: The idea, as I mentioned in my very first paragraph, is to create an organism with, ideally, minimal pre-commitment to any representational art scheme or human style or taste. Sounds simple, but it is not so simple to implement!

There are a number of projects that have dealt with creating art, but these have mostly been evolutionary algorithms that learn from human behavior and mannerisms and try to create art accordingly. The idea here is to create art with a minimum of human intervention.

I came across a project by Dr Vitorino Ramos, to which I pointed implicitly in the last paragraph, called the ARTSBOT (ARTistic Swarm roBOTs) project. It tries to minimize human intervention in aesthetics, ethnicity, taste, style and so on; in short, the idea was to remove, or at least minimize, the anthropocentric bias that pervades all our art forms. Obviously all this can also have massive implications for our understanding of biological processes, but here we’ll talk only of art.
Two of the first paintings that emerged were:
(Source: Here)
(Source: Here)
These were among the first swarm paintings by Leonel Moura and Vitorino Ramos. We see that they seem detached from any functional human pre-commitment: they don’t seem to represent any emotion, style or taste, and yet they still look very pleasant!
The point to be noted, however, is that these are NOT random pictures created either by a program or by a swarm of robots moving “randomly”. They were generated by a horde of artificial ants and also by robots, and they EMERGE from a process of pheromone deposition and evaporation, simulated in this system after real ants. The result above is thus a colony cognitive map, analogous to a cognitive map in the brain. I will cover the idea of a colony cognitive map in the next post.
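To make the “not random but emergent” point concrete: if you take a pheromonal field like the ones I have been describing and simply render denser trails as heavier marks, you get a “painting” whose structure comes entirely from the stigmergic process. The snippet below is only a conceptual sketch of that mapping (a crude ASCII rendering, with everything about it made up for illustration); the real ARTSBOT robots of course deposit physical paint.

```python
import numpy as np

def field_to_painting(pheromone, palette=(" ", ".", ":", "*", "#")):
    """Render a pheromone field as a crude ASCII 'canvas': the denser the
    trail, the heavier the mark.  Purely illustrative, but the mapping is the
    same in spirit as the robots': more traffic means more paint, so the
    picture is emergent, not random."""
    # map each cell to one of len(palette) intensity levels
    norm = (pheromone - pheromone.min()) / (np.ptp(pheromone) or 1.0)
    levels = (norm * (len(palette) - 1)).round().astype(int)
    return "\n".join("".join(palette[v] for v in row) for row in levels)

# Toy demonstration with a random field; in practice the input would be the
# pheromone field produced by an ant/robot simulation like the one described
# in the colony cognitive map post.
rng = np.random.default_rng(0)
toy_field = rng.random((16, 32)) ** 3
print(field_to_painting(toy_field))
```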
A couple more beautiful paintings can be seen below!
(Source for both images : Here>>)
Though I have already mentioned how these art forms emerge, I would still like to quote a paragraph from here:

The painting robots are artificial ‘organisms’ able to create their own art forms. They are equipped with environmental awareness and a small brain that runs algorithms based on simple rules. The resulting paintings are not predetermined, emerging rather from the combined effects of randomness and stigmergy, that is, indirect communication trough the environment.
Although the robots are autonomous they depend on a symbiotic relationship with human partners Not only in terms of starting and ending the procedure, but also and more deeply in the fact that the final configuration of each painting is the result of a certain gestalt fired in the brain of the human viewer. Therefore what we can consider ‘art’ here, is the result of multiple agents, some human, some artificial, immerged in a chaotic process where no one is in control and whose output is impossible to determine.
Hence, a ‘new kind of art’ represents the introduction of the complexity paradigm in the cultural and artistic realm.’

A painting bot looks something like the one in the picture below:

A swarm of robots at work:

The final art generated by the swarm of these robots is beautiful!

(Photo Credit for the three pictures above: Here >>)


Conclusions:

The work of Dr Ramos and Leonel Moura can be summed up as follows:

1. The human is only the “art-architect”; the “swarm” is the artist.

2. The “life” of Artificial Life shows characteristics of natural life itself, namely morphogenesis, the ability to adapt to changing environments, evolution and so on.

Leonel Moura’s wonderful article states that the final aim is to create an “Artificial Autopoietic System”; intriguing indeed and eagerly awaited! Such simulations could change the way we understand biological processes and life itself.

I am now also wondering how music could be produced from the same or similar ideas. Is there such a thing as swarm music? It would be most interesting, and I can’t wait to listen to it!

Have a look at this video by Leonel Moura, with some time-lapse footage of robots painting.

References:

1. Ant-Swarm Morphogenese, by Leonel Moura.

2. On the Implicit and on the Artificial – Morphogenesis and Emergent Aesthetics in Autonomous Collective Systems, in ARCHITOPIA Book, Art, Architecture and Science, INSTITUT D’ART CONTEMPORAIN, J.L. Maubant et al. (Eds.), pp. 25-57, Chapter 2, Vitorino Ramos.

3. A Strange Metamorphosis [From Kafka to Red Ant], Vitorino Ramos.

Links: Follow these links for more exciting papers and paintings.
