The Golden Rule in the Wild

In the previous post, we discussed the Prisoner’s Dilemma and saw how a simple strategy called Tit-for-Tat enforced the Golden Rule and won a very interesting contest. But does Tit-for-Tat always come out on top? The most confounding thing about the strategy is that it can never beat an individual opponent – at best, it can only tie. Its success came from avoiding the bloody battles that other, more deceptive strategies suffered.

The major criticism of Axelrod’s contest is its artificiality. In real life, some may say, you don’t get to encounter everyone, interact with them, and then have a tally run at the end to determine just how you did. Perhaps more deceptive strategies would do better in a more “natural” environment, where losing doesn’t just mean moving on to another opponent – it means you die off.

Artificial Nature

So now let’s look at the same game, with the same scoring system, only this time there’s a twist. Assume that this contest takes place in some sort of ecosystem that can only support a certain number of organisms, and they must compete with each other for the right to reproduce. There will be many different organisms, each a member of a certain species – that is, a specific strategy. We can then construct an artificial world where these strategies can battle it out in a manner that seems to reflect the real world a bit better.

In order to determine supremacy, we’ll play a certain number of rounds of the game, called a generation. At the end of the generation, the scores are tallied for each strategy, and a new generation of strategies is produced – with a twist. Higher-scoring strategies will produce more organisms representing them in the next generation, while lower-scoring strategies will produce fewer. Repeat this for many generations, observe the trends, and we can see how these strategies do as part of a population that can grow and shrink, rather than as a single strategy that lives forever.
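
To make the reproduction step concrete, here is a minimal Python sketch of a fitness-proportional update (a discrete replicator rule). The strategy names match the table below, but the scores are made-up placeholder numbers, and the exact rule used in the simulations that follow may differ.

    def next_generation(counts, avg_score, world_size):
        """counts: organisms per strategy this generation; avg_score: mean score
        per organism of each strategy; world_size: the ecosystem's carrying
        capacity. Each strategy's share of the next generation is its share of
        the total score earned this generation."""
        fitness = {s: counts[s] * avg_score[s] for s in counts}
        total = sum(fitness.values())
        return {s: round(world_size * fitness[s] / total) for s in counts}

    counts = {'ALL-C': 60, 'RAND': 20, 'Tit-for-Tat': 10, 'ALL-D': 10}
    avg_score = {'ALL-C': 2.1, 'RAND': 2.4, 'Tit-for-Tat': 2.5, 'ALL-D': 3.1}   # placeholder scores
    print(next_generation(counts, avg_score, 100))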

So let’s look at an example. Suppose we have a population that consists of the following simple strategies:

Initial Population   Strategy      Description
60%                  ALL-C         Honest to a fault, this strategy always cooperates.
20%                  RAND          The lucky dunce, this strategy defects or cooperates at random.
10%                  Tit-for-Tat   This strategy mimics the previous move of the other player, every time.
10%                  ALL-D         The bad boy of the bunch, this strategy always defects.


So what will happen? Was Tit-for-Tat’s dominance a result of the structure of the contest, or is it hardier than some might think? A graph of the changing populations over 50 generations may be seen below.

It’s a hard world to start. ALL-C is immediately decimated by the deception of ALL-D and RAND, which surge ahead, while Tit-for-Tat barely hangs on. ALL-D’s relentless deception lets it quickly take the lead, and it starts knocking off its former partner in crime, RAND. Tit-for-Tat remains on the ropes, barely keeping its population around 10% as ALL-C and RAND are quickly eliminated around it.

And then something very interesting happens. ALL-D runs out of easy targets, and turns on the only opponents left – Tit-for-Tat and itself. Tit-for-Tat begins a slow climb as ALL-D starts to eat itself fighting over scraps. Slowly, steadily, Tit-for-Tat maintains its numbers by simply getting along with itself while the ALL-D organisms destroy one another. By generation 25 it’s all over – with the easy resources exhausted, ALL-D is unable to adapt to the new environment, and Tit-for-Tat takes over.

This illustrates a very important concept – that of an evolutionarily stable strategy. ALL-D was well on its way to winning, but left itself open to invasion by constant infighting. ALL-C initially had the highest population but was quickly eaten away by more deceptive strategies. Tit-for-Tat, on the other hand, was able to get along with itself, and defended itself against outside invaders that did not cooperate in turn. An evolutionarily stable strategy is one that can persist in this manner – once a critical mass of players start following it, it cannot be easily invaded or exploited by other strategies, and it does not undermine itself.
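
In the standard formulation (due to John Maynard Smith and George Price), this can be stated precisely. Writing E(X, Y) for the payoff to strategy X when it plays against Y, a strategy S is evolutionarily stable if, for every alternative strategy T:

    E(S, S) > E(T, S),   or
    E(S, S) = E(T, S)  and  E(S, T) > E(T, T).

In words: either S does strictly better against itself than any would-be invader does against S, or they tie on that front but S then does strictly better against the invader than the invader does against itself.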

I Can’t Hear You

But there’s one critical weakness to Tit-for-Tat. We’re all aware of feuds that have gone on for ages, both sides viciously attacking the other in retaliation for the last affront, neither one able to tell outsiders precisely when it all started. And if we look at the strategy each side uses in a simplistic sense, it seems that they’re both playing Tit-for-Tat precisely. So how did it go so horribly wrong?

It went wrong because Tit-for-Tat has a horrible weakness – its memory is only one move long. If two Tit-for-Tat strategies somehow get stuck in a death spiral of defecting against each other, there’s no allowance in the strategy to realize this foolishness, and be the first to forgive. But how could this happen? Tit-for-Tat is never the first to defect after all, so why are both Tit-for-Tat strategies continually defecting?

The answer is that great force of nature, noise. A message read the wrong way, a shout misheard over the wind, an error in interpretation – all can be the impetus for that first defection. No matter that it was pointless and mistaken, the dynamic has changed. While Tit-for-Tat’s greatest strength is that it never defects first, its greatest weakness is that it never forgives first either.

None of the simulations we’ve seen so far include noise, and noise can have a catastrophic effect on Tit-for-Tat. Its success was built on never fighting among itself while letting deceptive strategies destroy themselves by doing exactly that – but with noise, this advantage becomes a fatal weakness, as Tit-for-Tat’s refusal to be taken advantage of is turned against itself.

So what does a simulation including noise look like? You can see one below, and it contains an additional mystery strategy, Pavlov. Pavlov is very similar to Tit-for-Tat but slightly different – it forgives far more easily.

We see a similar pattern to our previous simulation. ALL-D has an initial population spike as it knocks off the easy targets, but Tit-for-Tat and Pavlov slowly climb to supremacy while ALL-D is eventually left fighting over scraps. But the influence of noise causes Tit-for-Tat to fight among itself, and Pavlov does what previously seemed impossible – it begins to win against Tit-for-Tat.

Puppy Love

So what is Pavlov and why does it work better in a noisy environment like the real world? Well, Ivan Pavlov was the man who discovered classical conditioning. You probably remember him as the guy who fed dogs while ringing a bell, and who then just rang the bell – and discovered that the dogs salivated expecting food.

The strategy is simple – if you win, keep doing what you’re doing. If you lose, change your approach. Pavlov will always cooperate with ALL-C and Tit-for-Tat. If it plays ALL-D however, it will hopefully cooperate, lose, get angry about it and defect, lose again, switch back to cooperation, and so on. Like a tiny puppy or the suitor of a crazy girlfriend, it can’t really decide what it wants to do, but it’s going to do its damnedest to try to succeed anyways. It manages to prevent the death spiral of two Tit-for-Tat strategies continually misunderstanding each other by obeying a very simple rule – if it hurts, stop doing it. While it may be slightly more vulnerable to deceptive strategies, it never gets stuck in these self-destructive loops of behavior.
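
As a sketch, Pavlov’s rule – often called “win-stay, lose-shift” – fits in a few lines of Python. “Winning” here is taken to mean getting one of the two better payoffs (mutual cooperation, or defecting against a cooperator), which is the usual way the rule is defined.

    def pavlov(my_last, their_last):
        """Win-stay, lose-shift: repeat your previous move after the two better
        outcomes, switch after the two worse ones. This works out to: cooperate
        whenever both players did the same thing last round."""
        if my_last is None:                       # opening move: cooperate
            return 'C'
        return 'C' if my_last == their_last else 'D'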

So there’s a lesson here – life is noisy, and people will never get everything correct all the time. Tit-for-Tat works very well for a wide variety of situations, but has a critical weakness where neither player in a conflict is willing or able to forgive. So the next time you’re in a situation like that, step back, use your head, and switch strategies – it’s what this little puppy would want you to do, anyways.

Triumph of the Golden Rule

We live in a world with other people. Almost every decision we make involves someone else in one way or another, and we face a constant choice regarding just how much we’re going to trust the person on the other side of this decision. Should we take advantage of them, go for the quick score and hope we never see them again – or should we settle for a more reasonable reward, co-operating in the hope that this peaceful relationship will continue long into the future?

We see decisions of this type everywhere, but what is less obvious is the best strategy for us to use to determine how we should act. The Golden Rule states that one should “do unto others as you would have them do unto you”. While it seems rather naive at first glance, if we run the numbers, we find something quite amazing.

A Dilemma

In order to study these types of decisions, we have to define exactly what we’re talking about – just what is a “dilemma”? Let’s say it involves two people, who can each decide to work together for a shared reward, or screw the other one over and take it all for themselves. If both decide to work together, both get a medium-sized reward. If you decide to take advantage of someone who trusts you, you’ll get a big reward (and the other person gets nothing). If you’re both jerks and try to take advantage of each other, you both get a tiny fraction of what you could have had. Let’s call these two people Alice and Bob – here’s a breakdown of the four outcomes to make things a bit more clear.

  • Both cooperate: Everyone wins! A medium-sized reward to both for mutual co-operation.
  • Bob cooperates, Alice defects: Poor Bob. He decided to trust Alice, who screwed him and got a big reward. Bob gets nothing.
  • Bob defects, Alice cooperates: Poor Alice. She decided to trust Bob, who took advantage of her and got a big reward. Alice gets nothing.
  • Both defect: No honour among thieves… both Bob and Alice take the low road, and fight over the scraps of a small reward.

This specific ordering of rewards is referred to as the Prisoner’s Dilemma, and was formalized and studied by Melvin Dresher and Merrill Flood in 1950 while they were working for the RAND Corporation.

Sale, One Day Only!

Now of course the question is – if you’re in this situation, what is the best thing to do? First suppose that we’re never, ever going to see this other person again. This is a one-time deal. Absent any moral consideration, your most profitable option is to attempt to take advantage of the other person and hope that they are clueless enough to let you – capitalism at its finest. You could attempt to cooperate, but that leaves you open to the other party screwing you. If each person is rational and acts in their own interest, they will attempt to one-up the other.

But there’s just one problem – if both people act in this way, they both get much less than they would if they simply cooperated. This seems very strange, as the economic models banks and other institutions use to describe human behavior assume exactly this type of logic – the model of the rational consumer. Yet it leads to nearly the worst possible outcome when both parties take this approach.

It seems that there is no clear ideal strategy for a one-time deal. Each choice leaves you open to possible losses in different ways. At this point it’s easy to throw up your hands, leave logic behind, and take a moral stance. You’ll cooperate because you’re a good person – or you’ll take advantage of the suckers because life just isn’t fair.

And this appears to leave us where we are today – some good people, some bad people, and the mythical invisible hand of the market to sort them all out. But there’s just one little issue. We live in a world with reputations, with friends, and with foes – there are no true “one time” deals. The world is small, and people remember.

In it for the Long Run

So instead of thinking of a single dilemma, let’s think about what we should do if we get to play this game more than once. If someone screws you in the first round, you’ll remember – and probably won’t cooperate the next time. If you find someone who always cooperates, you can join them and work together for your mutual benefit – or decide that they’re an easy mark and take them for everything they’ve got.

But what is the best strategy? In an attempt to figure this out, in 1980 Robert Axelrod decided to have a contest. He sent the word out, and game theorists, scientists, and mathematicians all submitted entries for a battle royale to determine which strategy was the best.

Each entry was a computer program designed with a specific strategy for playing this dilemma multiple times against other clever entries. The programs would play this simple dilemma, deciding whether to cooperate or defect against each other, for 200 rounds. Five points for a successful deception (you defect, they cooperate), three points each for mutual cooperation, one point each if you both tried to screw each other (mutual defection), and no points if you were taken advantage of (you cooperate, they defect). Each program would play every other program as well as a copy of itself, and the program with the largest total score over all the rounds would win.
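
As a rough sketch, that scoring scheme looks like this in Python; the strategy functions are placeholders that take both players’ histories and return 'C' (cooperate) or 'D' (defect).

    # Payoffs as described above: (my points, their points) for (my move, their move).
    PAYOFF = {('D', 'C'): (5, 0),   # successful deception
              ('C', 'C'): (3, 3),   # mutual cooperation
              ('D', 'D'): (1, 1),   # mutual defection
              ('C', 'D'): (0, 5)}   # taken advantage of

    def play_match(strategy_a, strategy_b, rounds=200):
        """Play one full match; each strategy is a function of
        (my_history, their_history) returning 'C' or 'D'."""
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            pts_a, pts_b = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pts_a, score_b + pts_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

A full tournament just calls play_match for every pair of entries (each entry also plays a copy of itself) and totals the scores.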

So what would some very simple programs be?

ALL-C (always cooperate) is just like it sounds. Cooperation is the only way, and this program never gets tired of being an upstanding guy.

ALL-D (always defect) is the counterpoint to this, and has one singular goal. No matter what happens, always, always, always try to screw the other person over.

RAND is the lucky dunce – don’t worry too much, just decide to cooperate or defect at random.
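
Written in the same shape as the match sketch above – a function of both players’ histories – these three take one line each:

    import random

    def all_c(my_history, their_history):
        return 'C'                                # always cooperate

    def all_d(my_history, their_history):
        return 'D'                                # always defect

    def rand(my_history, their_history):
        return random.choice(['C', 'D'])          # coin flip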

You can predict how these strategies might do if they played against each other. Two ALL-C strategies would endlessly cooperate in a wonderful dance of mutual benefit. Two ALL-D strategies would continually fight, endlessly grinding against each other and gaining little. ALL-C pitted against ALL-D would fare about as well as a fluffy bunny in a den of wolves – eternally cooperating and hoping for reciprocation, but always getting the shaft with ALL-D profiting.

So an environment of ALL-C would be a cooperative utopia – unless a single ALL-D strategy came in and started bleeding them dry. But an environment made entirely of ALL-D would be a wasteland – no one would have any success due to constant fighting. And the RAND strategy is, quite literally, a coin flip.

Time to Think

So what should we do? Those simple strategies don’t seem to be very good at all. If we think about it however, there’s a reason they do so poorly – they don’t remember. No matter what the other side does, they’ve already made up their minds. Intelligent strategies remember previous actions of their opponents, and act accordingly. The majority of programs submitted to Axelrod’s competition incorporated some sort of memory. For instance, if you can figure out you’re playing against ALL-C, it’s time to defect. Just like in the real world, these programs tried to figure out some concept of “reputation” that would allow them to act in the most productive manner.

And so Axelrod’s competition was on. Programs from all over the world competed against each other, each trying to maximize their personal benefit. A wide variety of strategies were implemented from some of the top minds in this new field. Disk drives chattered, monitors flickered, and eventually a champion was crowned.

And the Winner Is…

When the dust settled, the winner was clear – and the victory was both surprising and inspiring. The eventual champion seemed to be a 90 lb weakling at first glance, a mere four lines of code submitted by Anatol Rapoport, a mathematical psychologist from the University of Toronto. It was called “Tit-for-Tat”, and it did exactly that. It started every game by cooperating, and then did exactly what the other player did on their previous turn. It cooperated with the “nice” strategies, butted heads with the “mean” strategies, and managed to come out on top ahead of far more complex approaches.
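
In the history-taking form sketched earlier, Tit-for-Tat is barely more than a single line:

    def tit_for_tat(my_history, their_history):
        """Cooperate on the first move, then copy the other player's last move."""
        return 'C' if not their_history else their_history[-1]

For example, play_match(tit_for_tat, all_d) from the sketches above has Tit-for-Tat lose only the opening round and then defect right back for the remaining 199, finishing 199 points to 204.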

The simplest and shortest strategy won, a program that precisely enforced the Golden Rule. But what precisely made Tit-for-Tat so successful? Axelrod analyzed the results of the tournament and came up with a few principles of success.

  • Don’t get greedy. Tit-for-Tat can never beat another strategy. But it never allows itself to take a beating, ensuring it skips the brutal losses of two “evil” strategies fighting against each other. It actively seeks out win-win situations instead of gambling for the higher payoff.
  • Be nice. The single best predictor of whether a strategy would do well was if they were never the first to defect. Some tried to emulate Tit-for-Tat but with a twist – throwing in the occasional defection to up the score. It didn’t work.
  • Reciprocate, and forgive. Other programs tended to cooperate with Tit-for-Tat since it consistently rewarded cooperation and punished defection. And Tit-for-Tat easily forgives – no matter how many defections it has seen, if a program decides to cooperate, it will join them and reap the rewards.
  • Don’t get too clever. Tit-for-Tat is perfectly transparent, and it becomes obvious that it is very, very difficult to beat. There are no secrets, and no hypocrisy – Tit-for-Tat gets along very well with itself, unlike strategies biased toward deception.

The contest attracted so much attention that a second one was organized, and this time every single entry was aware of the strategy and success of Tit-for-Tat. Sixty-three new entries arrived, all gunning for the top spot. And once again, Tit-for-Tat rose to the top. Axelrod used the results of these tournaments to develop ideas about how cooperative behaviour could evolve naturally, and eventually wrote a bestselling book called The Evolution of Cooperation. But his biggest accomplishment may be showing us that being nice does pay off – and giving us the numbers to prove it.

Reality, Morality, Controversy and Consensus in Philosophy

The beautiful thing about philosophy is that it identifies the truly great problems, ones where arguments can be made for each side with equal validity. The website philpapers.org, an online repository of philosophy articles and books, recently conducted a very interesting survey about some of these grand debates.

It consisted of 30 questions, all current issues in philosophy with well-established alternative positions under intense discussion. They surveyed 1803 philosophy faculty members and/or PhDs and 829 philosophy graduate students, and then tabulated the results. I also went through and gave each question a “controversy” score – the lower the score, the lower the consensus in the answers (for the curious, computed via mean square error).

Is this the real life – or is it just fantasy?

The least controversial issue was Question 6, “External world: idealism, skepticism, or non-skeptical realism?”. This dealt with the structure of the external world, where and how it exists, and what we can know about it.

4.2% of respondents believed that the external world was best described by idealism, that reality is totally dependent on the mind. Plugged into the Matrix? Your “reality” could be described by an idealist perspective.

4.8% chose the viewpoint of skepticism, that the external world can never really be known in its true form.

9.2% reinforced some stereotypes of philosophers and gave an answer best described as “other”.

81.6% of respondents agreed in the clearest consensus of the survey that the external world was best described by a perspective of non-skeptical realism. This means that “reality” exists independent of the mind (the realist part, we aren’t making it all up in our heads) and that we can draw reasonable and consistent conclusions from it (the non-skeptical part).

Life may not be a waking dream after all. Now, onto controversy!

Kill one to save a thousand?

[Image: spiderman]

You are a superhero. The love of your life dangles from a fraying rope above a pit of spikes, while nearby a speeding train full of orphans rushes toward the edge of a cliff. You only have enough time to save one of the two – what do you do?

Normative ethics is the branch of philosophy that deals with questions like these – how “should” you act in a certain situation? Is there a certain approach one should use? This was the subject of one of the most controversial issues, Question 20 – “Normative ethics: deontology, consequentialism, or virtue ethics?”.

18.1% chose the perspective of virtue ethics, first advocated in a significant sense by Aristotle. Virtue ethics emphasizes the character of the person who is faced with a difficult decision. Do you save the children in the train or your dangling love? It doesn’t matter – what matters is your character and your intent in making the decision.

23.6% argued for consequentialism, the philosophy that spawned the approach of “the ends justify the means”. Whether an action is right or wrong depends on the outcome of the situation, not the specific actions you chose. There are a number of different variants of this, which “score” the final situations according to whether they maximize happiness, economic benefit, liberty, love, or any number of other possible ideals, for yourself or for others. If you were an egoist, you would send the orphans over the cliff and save your love to maximize your own happiness. If you were a utilitarian, you would save the orphans because it would benefit the largest number of people.

25.8% of respondents chose deontology, where it is the actions you take that are judged rather than the results of those actions. Deontologists are concerned with rules and duties, and an action following these duties can be considered morally correct even if it produces dire consequences. If you were a married superhero who had sworn a vow to protect your love – orphans be damned, there is a duty to perform.

32.3% of philosophers, presumably not wanting to commit to paper their rationale as to whether they’d kill orphans or their love, chose “Other”.

What about the rest?

Here’s a table ranking the “consensus” on each issue, from least to most controversial. The lower the mean square error, the less disparity there is in the magnitude of the responses – and so the less consensus is reached. I think this is a pretty decent metric – if you have any better suggestions, feel free to leave a comment.
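
For the record, the scores in the table below appear to be consistent with simply taking the variance of each question’s answer fractions – the mean squared deviation from their average. A quick sketch, checked against the two questions discussed above:

    def controversy_score(fractions):
        """Mean squared deviation of the answer fractions from their average:
        high when one answer dominates (strong consensus), low when the answers
        are evenly split (little consensus)."""
        mean = sum(fractions) / len(fractions)
        return sum((f - mean) ** 2 for f in fractions) / len(fractions)

    print(controversy_score([0.042, 0.048, 0.092, 0.816]))   # external world: ~0.107
    print(controversy_score([0.181, 0.236, 0.258, 0.323]))   # normative ethics: ~0.0026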

Consensus Rank   Question Number   Mean Sq. Err.   Question Text
1                6                 0.1073          External world: idealism, skepticism, or non-skeptical realism?
2                25                0.0870          Science: scientific realism or scientific anti-realism?
3                8                 0.0781          God: theism or atheism?
4                1                 0.0725          A priori knowledge: yes or no?
5                28                0.0654          Trolley problem (five straight ahead, one on side track, turn requires switching, what ought one do?): switch or don’t switch?
6                4                 0.0557          Analytic-synthetic distinction: yes or no?
7                17                0.0526          Moral judgment: cognitivism or non-cognitivism?
8                7                 0.0387          Free will: compatibilism, libertarianism, or no free will?
9                27                0.0330          Time: A-theory or B-theory?
10               11                0.0290          Laws of nature: Humean or non-Humean?
11               14                0.0289          Meta-ethics: moral realism or moral anti-realism?
12               16                0.0286          Mind: physicalism or non-physicalism?
13               29                0.0263          Truth: correspondence, deflationary, or epistemic?
14               12                0.0218          Logic: classical or non-classical?
15               21                0.0210          Perceptual experience: disjunctivism, qualia theory, representationalism, or sense-datum theory?
16               9                 0.0188          Knowledge claims: contextualism, relativism, or invariantism?
17               23                0.0175          Politics: communitarianism, egalitarianism, or libertarianism?
18               13                0.0172          Mental content: internalism or externalism?
19               15                0.0137          Metaphilosophy: naturalism or non-naturalism?
20               19                0.0115          Newcomb’s problem: one box or two boxes?
21               22                0.0113          Personal identity: biological view, psychological view, or further-fact view?
22               2                 0.0055          Abstract objects: Platonism or nominalism?
23               30                0.0049          Zombies: inconceivable, conceivable but not metaphysically possible, or metaphysically possible?
24               5                 0.0047          Epistemic justification: internalism or externalism?
25               3                 0.0047          Aesthetic value: objective or subjective?
26               20                0.0026          Normative ethics: deontology, consequentialism, or virtue ethics?
27               10                0.0016          Knowledge: empiricism or rationalism?
28               24                0.0012          Proper names: Fregean or Millian?
29               18                0.0007          Moral motivation: internalism or externalism?
30               26                0.0004          Teletransporter (new matter): survival or death?


Now you can go to fancy cocktail parties and hold your own in philosophical debates – or at least have the confidence that you’re stating a position that cannot be proven wrong.

Norway’s Spiral in the Sky

Early yesterday morning, citizens in the town of Tromsø, Norway awoke to an amazing sight – a giant glowing spiral, taking up a huge portion of the night sky.

[Image: norway-spiral]

[Image: Bulava]

Thousands of people reported seeing it, and amateur pictures and video of the event quickly spread across the internet. What could this be? A prank using some powerful projector? Some military experiment? An intergalactic portal?

One plausible explanation was that it was a rocket, damaged during or shortly after launch. But where did the rocket come from? One report indicated that it was an RSM-56 Bulava submarine-launched ballistic missile, which is currently undergoing testing and development. The notoriously unpredictable missile design experienced an issue when the third stage fired, and it began to spray fuel and spiral out of control.

So is this reasonable? Well, a video has appeared on YouTube with a particle simulation of the fuel dispersal due to a spinning third stage – judge for yourself!

Personally I think the explanation of dual jets off a spinning rocket stage fits the facts and is the simplest explanation. Additionally, a NAVTEX rocket launch warning was issued for an area in the White Sea.

ZCZC FA79
031230 UTC DEC 09
COASTAL WARNING ARKHANGELSK 94
SOUTHERN PART WHITE SEA
1.ROCKET LAUNCHING 2300 07 DEC TO 0600 08 DEC
09 DC 0200 TO 0900 10 DEC 0100 TO 0900
NAVIGATION PROHIBITED IN AREA
65-12.6N 036-37.0E 65-37.2N 036-26.0E
66-12.3N 037-19.0E 66-04.0N 037-47.0E
66-03.0N 038-38.0E 66-06.5N 038-55.0E
65-11.0N 037-28.0E 65-12.1N 036-49.5E
THEN COASTAL LINE 65-12.2N 036-47.6E
2. CANCEL THIS MESSAGE 101000 DEC=
NNNN

Tromsø is marked by a green house and the White Sea is marked as the blue anchor on the following map.

This location for the submarine makes sense for a northbound launch (launching it south into continental Europe would be a bit politically insensitive), and would correlate with a breakup later in flight visible from northern Norway. If, however, this is a prelude to the gates of hell opening up, feel free at that time to email me and gloat.

The Quaternion Bulb

My three dimensional unfolding of the quaternion Julia sets finally finished rendering. There are quite a few compression artifacts in the embedded version – click on the Vimeo button on the bottom right side of the video to watch it in full quality HD.

Since each quaternion can be described using four numbers, I unfolded these four dimensional quaternion Julia sets into three dimensional space, and animated the final coefficient.

[Image: xyzft]

But once I did that, I noticed some radial symmetry along the y-z plane – it looks like something that’s been made on a lathe. This means that we can “index” all these shapes in a more sensible manner by collapsing things along this axis of symmetry. Previously, we indexed all of our shapes with four coefficients: a, b, c, and d.

[Image: abcd]

We can now index them with four coefficients a, r, theta, and d after this transformation. But there’s a nice side effect now that our coordinate system reflects our symmetry – if we vary theta, the appearance of the Julia set doesn’t change; the object just appears to rotate about the a axis.

[Image: ard]
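
A small sketch of the change of coordinates being described – this assumes the rotational symmetry lies in the plane of the second and third coefficients, which is one natural reading of the pictures:

    import math

    def to_cylindrical(a, b, c, d):
        """Re-label (a, b, c, d) as (a, r, theta, d). If the set really is
        rotationally symmetric in the b-c plane, its shape depends only on
        (a, r, d); theta just spins it about the a axis."""
        r = math.hypot(b, c)
        theta = math.atan2(c, b)
        return a, r, theta, d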

So really we can index all possible shapes using only three coefficients – a, r, and d. This is awesome – it means we can use this symmetry to collapse a dimension and completely illustrate a discrete approximation of this four dimensional set in three dimensional space. The following images (click for 1080p full resolution versions) illustrate the full set of these possible shapes – a is the horizontal axis, r is the vertical axis, and both values step by 0.25. The grey sphere in the first image is the origin, and the images start at a d value of 0 and step upward by 0.25. We find that there is an additional symmetry in the d parameter – the set for d is identical to the set for -d – so we only need to illustrate the absolute value to see all the shapes.

d = 0.00
[Image: juliacube-0.00]

d = 0.25
[Image: juliacube-0.25]

d = 0.50
[Image: juliacube-0.50]

d = 0.75
[Image: juliacube-0.75]

d = 1.00
[Image: juliacube-1.00]

When d = 1.25 there are only a few bits of unconnected dust loops visible. This analysis only covers a single “slice” – namely the plane normal to (0,0,0,1). I’d be very interested to see if there are any other symmetries…

A Quaternion Fractal Chorus

Treating my last attempt at rendering quaternion Julia sets as a study, I wanted to move on to alternate methods of visualising the deep structure of these four dimensional objects. There’s a lot of complexity there which results in some compression artifacts – watch it in HD to get the full effect.

There is a four dimensional Julia set for every four dimensional quaternion. We can label each quaternion using four numbers.

[Image: quaternion]

I decided to “unfold” the first two values of the quaternion onto a plane and animate the last two values. The camera is centered at (0,0) and Julia sets are placed at intervals of 0.1 off to infinity for both axes.

[Image: grid]

You can start to see the larger structure present more clearly. Perhaps a three dimensional unfolding next?

Quaternion Julia Fractals

What exactly is a quaternion Julia set? Well, it’s beautiful.

These shapes are animated projections of three dimensional slices of four dimensional objects known as quaternion Julia sets. The definition of a Julia set can get a bit complicated, but it can be thought of as an object that carves up four-dimensional space into two categories – belonging to the set, and not belonging to the set. How exactly the shape is carved depends on some very deep mathematics.
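
To make that carving-up a little more concrete, here is a minimal membership test, assuming the usual quadratic iteration q ← q² + c that these renderings conventionally use (the post doesn’t spell out its exact formula, so treat this as illustrative):

    def q_mul(p, q):
        """Hamilton product of two quaternions given as (a, b, c, d) tuples."""
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    def in_julia_set(q, c, max_iter=100, bailout=4.0):
        """A point q (approximately) belongs to the set for constant c if its
        orbit under q <- q*q + c never escapes the bailout radius."""
        for _ in range(max_iter):
            q = tuple(x + y for x, y in zip(q_mul(q, q), c))
            if sum(x * x for x in q) > bailout:
                return False
        return True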

Now the big question – how do we look at a four dimensional object if we’re just mere three dimensional humans? Well, first let’s try to describe how we can look at a three dimensional object using only two dimensions.

When I think of two dimensions, I think of a flat sheet like a piece of cardboard. How could we use this flat sheet, or a lot of flat sheets, to make up a three dimensional object? Well, if we were very clever like Yuk King Tan, we could cut a huge number of cardboard sheets carefully and stack them up on top of each other. From far away it would look like a three dimensional object.

[Image: Tan-03]

But if we look closely.

[Image: Tan-06]

Very closely.

[Image: Tan-07]

We can see that this is made up entirely of two dimensional objects cut into specific shapes, each shape cut perfectly to reflect the three dimensional object at a certain height. This is just like how an MRI machine takes “slices” of a three dimensional object (a human!) as it slowly moves upwards. The image below shows the 2D slices of the 3D skull starting just below the eyes.

[Image: mrislices]

If we could only see two dimensions, we could flip through each one of these images in turn to get an idea of just what a three dimensional brain looks like. This is what doctors do – all of our current display technology, fancy HDTVs included, only displays two dimensions. So they take many two dimensional slices and then compare and visualize them in relation to each other, in order to get some idea of what our three dimensional body is actually like.

So we can do the same thing with these four dimensional Julia sets. We can take many three dimensional slices, animate them, and then compare and relate these slices to each other in order to create some idea in our brain of just what this four dimensional structure is.
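
As a toy illustration of the slicing idea, suppose we had sampled membership of the set on a 4-D grid of booleans (a made-up array here); each fixed value of the last axis is one 3-D slice we could render:

    import numpy as np

    grid4d = np.zeros((64, 64, 64, 16), dtype=bool)    # placeholder sampled data

    slices = [grid4d[..., w] for w in range(grid4d.shape[-1])]
    print(len(slices), slices[0].shape)                 # 16 slices, each 64 x 64 x 64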

I See Sierpinski Shapes by the Sea Shore

I recently saw a very interesting photo of a sea shell on Flickr.

[Image: sierpinski_seashell]

The patterns on the shell appear to be very similar to those of a mathematical structure called a Sierpinski triangle – and this is no coincidence.

[Image: ca_rule]

A snail’s shell can grow only by adding new material in a thin layer on the lip of the shell. The pigmentation cells lie in a narrow band on this lip, and decide whether to switch on or off depending on the pigmentation of the area immediately around them. In short, the pigmentation patterns can be modelled very accurately as elementary cellular automata.

Several elementary cellular automaton rule sets produce structures similar to those seen on the shell. Combine these basic rules with a little bit of noise due to nature, and you get these beautiful patterns with a bare minimum of computational effort.
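
One of the simplest rule sets that draws a Sierpinski triangle is Rule 90 – a cell is pigmented on the next step exactly when one (but not both) of its two neighbours is pigmented now. A tiny sketch (the real shell patterns are often likened to other, noisier rules, so this is just the cleanest illustration):

    def rule90_rows(width=63, steps=32):
        """Elementary cellular automaton Rule 90: new cell = XOR of its two
        neighbours (wrapping at the edges). Started from a single pigmented
        cell, it grows a Sierpinski triangle."""
        row = [0] * width
        row[width // 2] = 1
        for _ in range(steps):
            yield row
            row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]

    for row in rule90_rows():
        print(''.join('#' if cell else ' ' for cell in row))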

The snail that grew the shell above is from the family Conidae. Other species have slightly different rules for pigmentation, but all produce their patterns by a method that can be modelled as cellular automata.

[Image: conidae_2]

Color and Reality

When I was a kid, I used to wonder if everyone saw the world in the same way. We can all look at the same grass, but maybe the color I called green showed up in my brain as the color my friend called blue. Maybe all of our colors were shifted around to the point where all the colors were accounted for, but how we perceived them was shuffled up. I thought it would be remarkably exciting, and hoped that I could see the world through someone else’s brain to see if, in fact, this was true.

[Image: meadows]

My eight year old self would be bitterly disappointed that technology today has not progressed far enough to make that wish a reality. At the time, we had to settle the debate in another manner – by asking an adult, a source of concrete and immutable knowledge. The answer I was given was that everyone sees the same colors of course (although why this was so obvious was never really clear), and if they didn’t it wouldn’t matter much since we couldn’t tell. Color was “real” – bits of light had a color (later I found out we could call it the wavelength of a photon), it hit our eyes, and our brains converted it to a beautiful image.

The only problem is that this is wrong.

Color as Wavelength

Well, alright. Before you get upset, it isn’t completely wrong. We were all taught about Sir Isaac Newton who discovered that a glass prism can split white light apart into its constituent colors.

[Image: pink-floyd-dark-side-of-the-moon-crop]

While we consider this rather trivial today, at the time you’d be laughed out of the room if you suggested this somehow illustrated a fundamental property of light and color. The popular theory of the day was that color was a mixture of light and dark, and that prisms simply colored light. Color went from bright red (white light with the smallest amount of “dark” added) to dark blue (white light with the largest amount of “dark” added before it turned black).

Newton showed this to be incorrect. We now know that light is made up of tiny particles called photons, and these photons have something called “wavelength” that seems to correspond to color. Visible light is made up of a spectrum – a huge number of photons, each with a different wavelength our eyes can see. When combined, we see them as white light.

[Image: visible_light_spectrum]

So this appears to resolve my childhood debate. Light of a single wavelength (like that produced by a laser) corresponds to a single “real” color. The brain just translates wavelengths into colors somehow, and that is that. There’s just one problem.

We’re missing a color!

Color as Experience

To find out just what we’re missing, we have to consider how we can combine colors. For instance, you learned some basic color mixing rules as a kid. In this case, let’s use additive color mixing since we’re mixing light.

[Image: Additive_color_mixing]

Let’s find two colors on the spectrum line, and then estimate the final color they’ll produce when mixed by finding the midpoint between them.

Red and green make yellow.

[Image: red-green-yellow]

Green and blue make turquoise.

[Image: blue-green-turquoise]

Red and blue make…

[Image: red-blue-green]

Green? What? That doesn’t seem to make any sense! Red and blue make pink! But where is pink in our spectrum? It’s not violet, it’s not red – it seems like it should be simultaneously above and below our spectrum. But it’s not on the spectrum at all!
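
A quick numeric version of that midpoint trick, using rough representative wavelengths (the exact numbers are just illustrative assumptions):

    # Rough representative wavelengths, in nanometres.
    red, green, blue = 650, 530, 470

    print((red + green) / 2)   # 590 nm -- yellow, as expected
    print((green + blue) / 2)  # 500 nm -- turquoise, as expected
    print((red + blue) / 2)    # 560 nm -- green. Pink is nowhere to be found.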

So we’re forced to accept a very interesting conclusion. The wavelength of a photon certainly reflects a color – but we cannot produce every color the human eye sees with a single photon of a specific wavelength. There is no such thing as a pink laser – two lasers must be mixed to produce that color. There are “real” colors (we call them pure spectral or monochromatic colors) and “unreal” colors that only exist in the brain.

A Color Map

So what are the rules for creating these “unreal” colors from the very real photons that hit your eye? Well, in the 1920s W. David Wright and John Guild both conducted experiments designed to map how the brain mixes monochromatic light into the millions of colors we experience every day. They set up a split screen – on one side, they projected a “test” color. On the other side, the subject could mix together three primary colors produced by projectors to match the test color. After a lot of test subjects and a lot of test colors, eventually the CIE 1931 color space was produced.

[Image: CIE-1931]

I consider this to be a map of the abstractions of the human brain. On the curved border we can see numbers, which correspond to the wavelengths in the spectrum we saw earlier. We can imagine the spectrum bent around the outside of this map – representing “real” colors. The inside represents all the colors our brain produces by mixing – the “unreal” colors.

So let’s try this again – with a map of the brain instead of a map of photon wavelengths. Red and green make yellow.

[Image: cie-red-green-yellow]

Green and blue make turquoise.

[Image: cie-blue-green-turq]

Blue and red make…

[Image: cie-blue-red-magenta]

Pink! Finally! Note that pink is not on the curved line representing monochromatic colors. It is purely a construction of your brain – not reflective of the wavelength of any one photon.

Is Color Real?

So is color real? Well, photons with specific wavelengths seem to correspond to specific colors. But the interior of the CIE 1931 color space is a representation of a most ridiculously abstract concept, labels that aren’t even labels, something our brain experiences and calculates from averaged photon wavelengths. It is an example of what philosophers call qualia – a subjective quality of consciousness.

I later learned that my childhood argument was a version of the inverted spectrum argument first proposed by John Locke, and that the “adult” perspective of everyone seeing the same colors (and it not really mattering if they didn’t) was argued by the philosopher Daniel Dennett.

I have come no closer to resolving my question from long ago of “individual spectrums” – but for the future, I vow to pay more attention to the idle questions of children.