Archive for February, 2008

10 Mathematical Jokes

Via the geeky page of Professor Erich Friedman, here is a hilarious list. The first one I have already, accidentally, mentioned in a previous post.

  • Q: Why did the mathematician name his dog “Cauchy”?
    A: Because he left a residue at every pole.
  • Q: What’s sado-masochism?
    A: The standard deviation of the mean.
  • Q: What do you get when you cross an elephant and a grape?
    A: I dunno, but its magnitude is elephant grape sine theta.
  • Q: What do you get when you cross a mountain climber with a grape?
    A: You can’t. A mountain climber is a scaler.
  • Q: What do farmers study in trigonometry?
    A: Swine and cow-swine.
  • Q: What’s the contour integral around Western Europe?
    A: Zero, because all the Poles are in Eastern Europe.
  • Q: How many numerical analysts does it take to screw in a light bulb?
    A: 0.9973 after the first three iterations.
  • Q: How many statisticians does it take to change a lightbulb?
    A: Two plus or minus three.
  • Q: How many applied mathematicians does it take to screw in a lightbulb?
    A: One, who gives it to two statisticians, thereby reducing it to an earlier riddle.
  • Q: How many topologists does it take to change a light bulb?
    A: It really doesn’t matter, since they’d rather knot.


Here is a very nerdy joke. I don’t know the source for it. If you do, kindly let me know.

This is one of the best jokes I know of on a mathematical theorem.

Q. Why did the mathematician name his dog Cauchy?

A. Because it left a residue at every pole.

Ha Ha :D

For those not familiar with Cauchy’s Residue Theorem, have a look here.


Following a discussion on Reasonable Deviations, I was prompted to write on this.


The above is an illustration of the generation of the Sierpinski gasket. For simplicity, we assume that the area of the initial triangle is 1. We split this triangle into four smaller triangles by joining the midpoints of its sides. These smaller triangles, as shown in figure 2, have equal areas. We then remove the middle triangle, adopting the convention that we only ever remove the middle triangle and never a triangle at the edge.

In each of the three remaining triangles we repeat the process and remove the middle triangles. This is where self-similarity comes in: if we look at just one of the three small triangles, we are actually doing the same thing we did to the original triangle, albeit on a smaller scale. The above figure shows the third and fourth iterates of the original triangle. Repeating this process ad infinitum gives rise to the Sierpinski gasket.

The L-System representation of the process is:

variables : A B

constants : + −

start : A

rules : (A → B−A−B),(B → A+B+A)

angle : 60°
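
To see these rules in action, here is a minimal Python sketch (my own illustration, not part of the L-system definition above) that expands the start symbol for a few generations. Interpreting A and B as “draw forward” and + / − as 60° turns traces out successive approximations of the gasket.

    def expand(axiom, rules, generations):
        """Rewrite every symbol with its production rule; symbols
        without a rule (here + and -) pass through unchanged."""
        for _ in range(generations):
            axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
        return axiom

    # The Sierpinski arrowhead rules from the post.
    rules = {"A": "B-A-B", "B": "A+B+A"}

    print(expand("A", rules, 1))  # B-A-B
    print(expand("A", rules, 2))  # A+B+A-B-A-B-A+B+A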

Anyway, what is interesting about this figure are its area and its perimeter.

Continuing with our earlier assumption, suppose the initial triangle has area

A_0 = 1.

In the first iteration we remove one of the four equal smaller triangles and keep the other three. Therefore the total area of the first iterate is

A_1 = (3/4) × 1.

Similarly, in the second iteration we repeat the process on each remaining triangle, as noted above. Thus,

A_2 = (3/4)(3/4) × 1 = (3/4)^2 × 1.

After n iterations the area is

A_n = (3/4)^n × 1.

So as n becomes arbitrarily large, it follows that the area tends to ZERO.

Finding the perimeter is a similar exercise. The length of the boundary of the nth iterate of the original triangle is the total length of the boundaries of all the shaded small triangles in that iterate. Each iteration replaces every shaded triangle by three triangles of half the side length, so the total boundary length is multiplied by 3 × (1/2) = 3/2 at every step: P_n = (3/2)^n × P_0. This gets arbitrarily large as n does, and therefore we conclude that the Sierpinski gasket has an infinite perimeter!

So the gasket has zero area inside an infinite perimeter.
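
A quick numerical check (a small Python sketch of my own, with both the initial area and the initial perimeter normalized to 1) shows the two limits pulling in opposite directions:

    # Area shrinks by a factor of 3/4 and the perimeter grows by a
    # factor of 3/2 at every iteration.
    for n in (1, 5, 10, 25, 50):
        area = (3 / 4) ** n
        perimeter = (3 / 2) ** n
        print(f"n = {n:2d}: area = {area:.2e}, perimeter = {perimeter:.2e}")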

Contrast this with the Koch snowflake, a figure with a finite area inside an infinite perimeter. Both go against our geometric intuition, but such behaviour is actually characteristic of many shapes in nature. For example (I really like this example :D ), all the arteries, veins and capillaries in the human body occupy a relatively small fraction of the body, yet if we were able to lay them out end to end, the total length would come to over 60,000 kilometres. I am quoting this example from an article I copied years ago.


Related Article:

Lindenmayer systems and Fractals


I dedicated some of the previous articles solely to the Orion Project. I tried to briefly review the old project and its demise, then the new designs that have been put forth, and then offered a personal opinion on the problems with Orion-like projects, without discounting their obvious advantages.

This video is more of a historical perspective on the Project. It makes for fascinating viewing!

In this TED talk:

George Dyson tells the amazing story of Project Orion, a massive, nuclear-powered spacecraft that could have taken us to Saturn in five years. With a priceless insider’s perspective and a cache of documents, photos and film, Dyson brings this dusty Atomic Age dream to vivid life.

(Text from the caption to the TED talk.)

George Dyson is the son of the celebrated thinker, mathematician and physicist Freeman Dyson. George is a historian and a philosopher of science.


The old Orion project had a few potential problems. There were doubts regarding the stability of the system, but with modern simulation technology this can be verified rather easily, without the need for an actual empirical investigation. The main problem, however, was the possible nuclear fallout; this was the most important reason the project was shelved. The Partial Test Ban Treaty of 1963 came as the final blow.

Even with the newer “versions” of Orion-like projects there are all sorts of problems.

While Orion was important for its time (in terms of stimulating possible engineering concepts), I would currently view it as a project past its period.
A better question to consider is: when will we have the technology to effectively observe “everything”? If you can observe it, you do not have to “go there”. There are of course limits, and these should be discussed, but currently planned satellites already let us “go there” much more effectively, without the need for an Orion-like project.

I believe the Mars rovers provide a good example of how to do things now. They have lasted some 2-3 years longer than their design life, and as long as one builds in fault tolerance, such bots could have even longer lifetimes.
We are not really going to get there until we have true nanorobots that can be organized to operate collectively, because these can be launched with very small rockets.

The solution to managing things is what is known as a “broadcast architecture”, which is very similar to what NASA uses now with satellites, but on a somewhat larger scale.

Even with the Mini-Mag there are problems. First, basing a propulsion system on curium-245 presumes that you could synthesize sufficient amounts of it. That is a massive undertaking.

Second, it assumes one needs to navigate 100-ton spaceships around the solar system. We do not. We need to be able to launch nanorobots into orbits from which they can easily be transported to various places in the solar system to manage development. That requires lots of micro-rockets, not huge spaceships designed to transport humans to places where they are not adapted to live; I have already touched on this in the previous paragraph. The paper (cited in the previous entry) is a classic example of good physicists doing good work while having little understanding of nanotechnology or microbiology. They are also stuck in 1960s-era concepts that “we” should go there, when we would have to completely alter the human genome before we should even consider it. And by then we will likely be dead or uploaded, so it is pointless to attempt 1960s-era transport of ourselves.

Related Articles:

Death of a project: Project Orion

Possible Rebirth of Project Orion?

Morphogenesis and Swarm Robotics


For fans of the dated Orion and Orion-like projects there is some hope though. :)

It is in the form of the Mini-Mag Orion, which tries to address some issues with the old model and also uses modern simulation techniques.

The detailed report can be found as a PDF here (Andrews Space).

A quick summary of this paper:

If it becomes reality, it could give us the entire solar system, with travel times of a couple of months to Mars and less than a year to Jupiter. In it, SMALL pieces of fissionable material (curium-245, uranium-235, uranium-233 or plutonium-239) are compressed with the aid of a pulsating, super-strong electromagnetic field, so that their small masses, squeezed into ultra-small volumes, become supercritical and explode. They explode INSIDE a superconducting magnetic rocket nozzle, at one explosion per second.


Project Orion still has iconic status in the eyes of many to this day, and I will not conceal the fact that the notion really fascinated me when I first read of it, years ago, in a passing reference within a much bigger newspaper article on the space age. My eyes lit up and I started imagining how space travel could change (or rather, could have!). ;)

Though I was planning to continue writing on Swarm Intelligence based routing for some more articles, and was then thinking of moving on to dynamic programming and speech recognition, I decided to write on Orion first. I hope to dedicate the next three or four posts to this milestone project alone!

The video below is an excellent BBC excerpt from “To Mars by A-Bomb” (2003), showing some footage of the tests during the Orion years with commentary from Freeman Dyson (who also happens to be a man I greatly venerate, and one of my heroes!) and Arthur C. Clarke. It is a rather short video, so do have a look!

First, a short introduction for those who are not familiar with Orion.

We are used to spaceships that use conventional fuels. One parameter for rating the efficiency of such fuels is specific impulse. It is stated in seconds and indicates how many kilograms of thrust are obtained from the consumption of one kilogram of propellant in one second. This value is more or less characteristic of the type of propellant used, though there can be variations due to operating conditions and engine design. Therefore, the higher the specific impulse, the less propellant is needed to gain a given amount of thrust.
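
To make the role of specific impulse concrete, here is a small Python sketch of the ideal (Tsiolkovsky) rocket equation, Δv = Isp × g0 × ln(m0/mf). The numbers are illustrative assumptions of mine, not figures from the Orion literature; they just show how a higher specific impulse stretches the achievable change in velocity for the same mass ratio.

    import math

    G0 = 9.80665  # standard gravity in m/s^2, converts Isp in seconds to exhaust velocity

    def delta_v(isp_seconds, m0, mf):
        """Ideal rocket equation: velocity change for a ship of initial
        mass m0 (kg) that burns propellant down to final mass mf (kg)."""
        return isp_seconds * G0 * math.log(m0 / mf)

    # Hypothetical comparison at the same 5:1 mass ratio: a good chemical
    # engine (~450 s) versus the much higher effective specific impulse
    # often attributed to nuclear-pulse designs (order of 10,000 s).
    for label, isp in (("chemical", 450), ("nuclear pulse", 10000)):
        print(label, round(delta_v(isp, 100000, 20000) / 1000, 1), "km/s")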

In 1947 Stanislaw Ulam proposed rocket propulsion using pulsed nuclear explosions. He realized that a nuclear explosion could not be contained in a combustion chamber, so the Orion design would instead work by dropping fissionable explosives out of the rear of the vehicle and catching the blast with a thick metal pusher plate.

The key components of Orion are shown in the figure.

Orion Design

Photo Courtesy: NASA Archives

The project was initiated in 1958 under Ted Taylor and Freeman Dyson; it was the first such think tank assembled since the Manhattan Project. Orion offered both very high thrust and very high specific impulse. The potential it offered was enormous: Freeman Dyson has been quoted as saying that a single mission could provide a permanent moon base, and that it would be possible to fly to Pluto and back in under one year. According to some estimates, Orion could reach speeds of up to 0.1c and could carry as much as 8 million tons of mass, which could be as big as a city!

The project died due to concerns over the fallout from each launch, though Dyson maintained that conventional explosives could be used to launch the ship out of Earth’s atmosphere, with the nuclear fuel taking over only afterwards. The Partial Test Ban Treaty of 1963 is said to have killed the project.

Even though the project died, it was significant for its time in terms of stimulating possible engineering concepts.

Related Posts on this Blog:

1. Possible Rebirth of Project Orion?

2. Problems with Orion like projects

3. George Dyson on Project Orion



The human brain is an incredibly impressive information processor, even though it “works” quite a bit slower than an ordinary computer. Many researchers in artificial intelligence look to the organization of the brain as a model for building intelligent machines. Think of a sort of “analogy” between the complex webs of interconnected neurons in a brain and the densely interconnected units making up an artificial neural network (ANN), where each unit, just like a biological neuron, is capable of taking in a number of inputs and producing an output. Consider this description:

“To develop a feel for this analogy, let us consider a few facts from neurobiology. The human brain is estimated to contain a densely interconnected network of approximately 10^11 neurons, each connected, on average, to 10^4 others. Neuron activity is typically excited or inhibited through connections to other neurons. The fastest neuron switching times are known to be on the order of 10^-3 seconds, quite slow compared to computer switching speeds of 10^-10 seconds. Yet humans are able to make surprisingly complex decisions, surprisingly quickly. For example, it requires approximately 10^-1 seconds to visually recognize your mother. Notice the sequence of neuron firings that can take place during this 10^-1-second interval cannot possibly be longer than a few hundred steps, given the switching speed of single neurons. This observation has led many to speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. One motivation for ANN systems is to capture this kind of highly parallel computation based on distributed representations.”

Via Machine Learning (Section 4.1.1, page 82) by Tom M. Mitchell, McGraw-Hill (1997).
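
As a toy illustration of the analogy (a Python sketch of mine, not code from Mitchell’s book), here is a single artificial “neuron”: it takes several inputs, weights them, and squashes the sum through a sigmoid to produce one output.

    import math

    def sigmoid(x):
        """Smooth threshold that squashes any real input into (0, 1)."""
        return 1.0 / (1.0 + math.exp(-x))

    def unit_output(inputs, weights, bias):
        """One ANN unit: a weighted sum of the inputs plus a bias,
        passed through the squashing function."""
        return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

    # A three-input unit with made-up weights and bias.
    print(unit_output([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], 0.1))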


On his blog, Mark Chu-Carroll poses a rather funny question.

Collective nouns are cool and funny. Some of them are straightforward: a herd of cows, a pack of wolves. Some are goofy: a wake of vultures, a destruction of cats (that’s North American wildcats), an ostentation of peacocks. And there are some fascinating ones: a parliament of ravens, an exaltation of larks.

I don’t know of any good collective noun for a bunch of geeks. But I think we need one! So what should it be?

Some of the responses were hilarious.

Some responses included:

  • A computation of geeks.
  • A parallel of geeks.
  • GEEK geek[MAX_GEEK];
  • A Set of Geeks.
  • it’s a hash of geeks ?
  • I think it depends on the reason for which the geeks are meeting, or the type of geeks that they are. Computer geeks could form arrays, while DnD nerds form parties. Physics geeks might be galaxies, and Chemists condensates, and biologists populations. A sentence of Linguists. What can be geekier than a specific qualifying collective noun?
  • Geek[] geeks;
  • A googleplex of geeks!

As I said, some of the responses have been hilarious! What do you think is the proper collective noun? ;)


This post is a follow-up to the previous post.

I have long been interested in fractals and have played around with both models of them and the mathematics behind them. Together with a friend of mine, I have developed many NetLogo models as well.

The NetLogo website offers very decent models of fractals, which can be run for the following curves.

The models allow the user to change parameters and see the effect. NetLogo models such systems amazingly well!

1. Koch Curves

2. L-Systems

3. Mandelbrot

4. Sierpinski

5. Recursive Tree

You could play around with these models to get a good understanding of L-system generation. These models can also be extended if they really fire you up!
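
If you would like to experiment outside NetLogo too, the same string-rewriting idea from the Sierpinski post applies. For instance (a Python sketch of my own, not one of the NetLogo models above), the classic Koch curve needs only a single rule, with + and - read as 60° turns:

    def expand(axiom, rules, generations):
        """Rewrite every symbol of the axiom using the rules, repeatedly."""
        for _ in range(generations):
            axiom = "".join(rules.get(s, s) for s in axiom)
        return axiom

    # Koch curve: F means "draw forward"; each F spawns four segments.
    print(expand("F", {"F": "F+F--F+F"}, 2))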

Related Post: NetLogo Version 4.0.2 released

