Convergence in the Lorenz Attractor

Most visualizations of the Lorenz attractor show the long history of a single point after it has converged to the attractor. I was interested in what the surrounding space looked like, so I randomly selected 20,000 starting points from a three-dimensional Gaussian distribution with a standard deviation of 100. Each point was iterated forward, with a short history displayed as a trail.

Interestingly enough, the points do not simply fall in from arbitrary directions as they would in a gravitational field; instead they display structure, swirling along a clear path up the z axis.
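The original visualization was written in Processing; here is a minimal NumPy sketch of the same experiment, under the assumption of the classic Lorenz parameters (σ = 10, ρ = 28, β = 8/3, which the post doesn't state) and a simple forward-Euler integrator:

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # classic parameters (assumed)
DT, STEPS = 0.001, 2000                   # small Euler step for stability

def lorenz_deriv(p):
    """Lorenz derivatives for an (N, 3) array of points."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([SIGMA * (y - x),
                     x * (RHO - z) - y,
                     x * y - BETA * z], axis=1)

# 20,000 starting points from a 3D Gaussian with standard deviation 100.
points = np.random.normal(scale=100.0, size=(20_000, 3))
trails = [points.copy()]
for _ in range(STEPS):
    points = points + DT * lorenz_deriv(points)  # forward Euler
    trails.append(points.copy())
# Drawing the last few entries of `trails` for each point gives the short trails.
```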

Lorenz and the Butterfly Effect

In the early 1960s, Edward Lorenz was studying a simplified model of convection flow in the atmosphere. He imagined a closed chamber of air with a temperature difference between the bottom and the top, modeled using the Navier-Stokes equations of fluid flow. If the difference in temperature was slight, the heated air would slowly rise to the top in a predictable manner. But if the difference in temperature was larger, other behaviors would arise, including convection and turbulence. Turbulence is notoriously difficult to model, so Lorenz focused on the formation of convection – the somewhat more predictable circular flows.

Focusing only on convection allowed Lorenz to simplify the model sufficiently that he could run calculations on the primitive (by today’s standards) computer available to him at MIT. And something wonderful happened – instead of settling down into a predictable pattern, Lorenz’s model oscillated within a certain range but never seemed to do the exact same thing twice. It was an excellent approximation, exhibiting the same kind of semi-regular behavior we see in actual atmospheric dynamics. It made quite a stir at MIT at the time, with other faculty betting on just what this strange mathematical object would do from iteration to iteration.

But Lorenz was about to discover something even more astounding about this model. While running a particularly interesting simulation, he was forced to stop his calculations prematurely, as the computer he was using was shared by many people. He was prepared for this, and wrote down three values describing the current state of the system. Later, when the computer was available again, he entered the three values and let the model run once more. But something very strange happened. The run initially appeared to follow the same path as before, but soon it began to diverge slightly from the original, and quickly those small differences became enormous. What had happened?

Lorenz soon found that his error was a seemingly trivial one. He had rounded the numbers he wrote down to three decimal places, while the computer stored values to six decimal places. Simply rounding to the nearest thousandth had created a tiny error that propagated throughout the system, creating drastic differences in the final result. This is the origin of the term “butterfly effect”, where infinitesimal changes in initial conditions can cause huge differences in outcomes. Lorenz had discovered one of the first chaotic dynamical systems.

We can see this effect in action here, where the red path’s initial values are truncated versions of the blue path’s. Instead of graphing the X, Y, and Z values separately, we’ll graph them together in 3D space.

|   | Initial Values (Red) | Initial Values (Blue) |
|---|----------------------|-----------------------|
| X | 5.766                | 5.766450              |
| Y | -2.456               | -2.456460             |
| Z | 32.817               | 32.817251             |
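A few lines of code make the divergence measurable. This is a sketch, again assuming the classic parameters and a plain Euler step; it prints the distance between the red and blue states as time passes:

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # assumed, as above
DT = 0.001

def step(p):
    x, y, z = p
    return p + DT * np.array([SIGMA * (y - x),
                              x * (RHO - z) - y,
                              x * y - BETA * z])

red = np.array([5.766, -2.456, 32.817])           # truncated to three decimals
blue = np.array([5.766450, -2.456460, 32.817251])
for t in range(50_001):
    if t % 10_000 == 0:
        print(t, np.linalg.norm(red - blue))      # watch the gap grow
    red, blue = step(red), step(blue)
```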

We can see that at first the two paths closely track each other. The error is magnified as time marches on, and the paths slowly diverge until they are completely different – all as a result of truncating to three decimal places. This contrasts with the sensitivity to error we are typically used to, where small errors are not magnified uncontrollably as a calculation progresses. Imagine if using a measuring tape that was off by a tenth of a percent caused your new deck to collapse rather than stand strong…

There is also a hint of a more complex underlying structure – a folded three-dimensional surface that the paths move along. If a longer path is drawn, the semi-regular line graphs from before are revealed to be projections of a structure of undeniable beauty.

This is a rendering of the long journey of a single point in Lorenz’s creation – a path of three million iterations, rendered in Processing, that shows the region of allowable values in the model more clearly. Older values are darker, and the visualization is animated to show the strange dance of the current values of the system as they sweep from arm to arm of this dual-eyed mathematical monster, order generating chaos.

River Crossing Problems and Discrete State Spaces

A brain teaser goes as follows: a farmer is returning from market, where he has bought a wolf, a goat, and a cabbage. On his way home he must cross a river by boat, but the boat is only large enough to carry the farmer and one additional item. The farmer realizes he must shuttle the items across one by one. He must be careful, however, as the wolf cannot be left alone on the same side of the river as the goat (since the goat will be eaten), and the goat cannot be left alone on the same side of the river as the cabbage (since the goat will eat the cabbage).

How can the farmer arrange his trips to move the wolf, the goat, and the cabbage across the river while ensuring that no one eats the other while he’s away? The problem usually yields to an analysis by trial and error, where one of the possible solutions is found by iterating and checking the possible decisions of the farmer.

But is there a more rigorous approach that will allow us to easily find all possible solutions without having to resort to guessing and checking?

A Discrete State Space

Let’s create a three-dimensional vector that represents the state of the system. The components of this vector will be 0 or 1 depending on which side of the river the wolf, the goat, and the cabbage are on. Let’s list off all possible combinations without worrying about whether something is going to get eaten – we’ll get to that later.
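Enumerating these combinations is a natural job for code; a short Python sketch:

```python
from itertools import product

# All 2^3 = 8 possible (wolf, goat, cabbage) side assignments.
states = list(product([0, 1], repeat=3))
```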

| Vector  | Location |
|---------|----------|
| (0,0,0) | Our initial state, with all three on the starting side of the river. |
| (1,0,0) | The wolf has crossed the river, but not the goat or cabbage. |
| (0,1,0) | The goat has crossed the river, but not the wolf or cabbage. |
| (0,0,1) | The cabbage has crossed the river, but not the wolf or goat. |
| (1,1,0) | The wolf and the goat have crossed the river, but not the cabbage. |
| (0,1,1) | The goat and the cabbage have crossed the river, but not the wolf. |
| (1,0,1) | The wolf and cabbage have crossed the river, but not the goat. |
| (1,1,1) | Our desired state, where all three have crossed the river. |

Now a giant list like this isn’t much help. We’ve listed off all possible states, but we can structure this in a more sensible manner which will make solving the problem much easier. Let’s allow the vectors to represent the corners of a cube, with the edges of the cube representing trips across the river. Movement along the x axis represents the wolf moving, movement along the y axis represents the goat moving, and movement along the z axis represents the cabbage moving.

Alright, that seems a bit better. What we need to do now is find an allowable path along the edges of the cube from (0,0,0) to (1,1,1), and this will represent the solution to our puzzle. First we need to remove the edges where something gets eaten.

| Path Removed        | Rationale |
|---------------------|-----------|
| (0,0,0) to (1,0,0)  | This moves the wolf across, leaving the goat with the cabbage. |
| (0,0,0) to (0,0,1)  | This moves the cabbage across, leaving the wolf with the goat. |
| (0,1,1) to (1,1,1)  | This moves the wolf across, leaving the goat with the cabbage. |
| (1,1,0) to (1,1,1)  | This moves the cabbage across, leaving the wolf with the goat. |

Our allowable state space now looks like this.

The problem has simplified drastically. We want to go from (0,0,0) to (1,1,1) by travelling along the black (allowable) edges of this cube. We can see that, if we ignore solutions where the farmer loops pointlessly, there are two allowable solutions to this puzzle – and the search sketch after the lists below confirms they are the only two.

Solution 1:

  1. Move goat to other side.
  2. Move cabbage to other side.
  3. Move goat back.
  4. Move wolf to other side.
  5. Move goat to other side.

Solution 2:

  1. Move goat to other side.
  2. Move wolf to other side.
  3. Move goat back.
  4. Move cabbage to other side.
  5. Move goat to other side.
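For completeness, here is a small Python sketch of that search. It tracks the farmer’s side explicitly (the cube picture leaves it implicit, since the farmer alternates sides on every trip) and enumerates every loop-free sequence of crossings from (0,0,0) to (1,1,1):

```python
START, GOAL = (0, 0, 0), (1, 1, 1)

def safe(state, farmer):
    """The side without the farmer must not pair wolf/goat or goat/cabbage."""
    wolf, goat, cabbage = state
    return goat == farmer or (wolf != goat and cabbage != goat)

def solve(state=START, farmer=0, path=((START, 0),)):
    """Yield every loop-free path of (state, farmer-side) pairs to the goal."""
    if state == GOAL:
        yield path
        return
    # The farmer crosses alone (None) or with one item currently on his side.
    for cargo in (None, 0, 1, 2):
        if cargo is not None and state[cargo] != farmer:
            continue
        nxt = list(state)
        if cargo is not None:
            nxt[cargo] ^= 1
        nxt, nf = tuple(nxt), farmer ^ 1
        if safe(nxt, nf) and (nxt, nf) not in path:
            yield from solve(nxt, nf, path + ((nxt, nf),))

for solution in solve():
    print(" -> ".join(str(s) for s, _ in solution))
```

Running it prints exactly the two item-movement sequences above (the farmer’s empty return trips appear as repeated states).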

This is a beautiful example of the simple structures that underlie many of these classic riddles.

In This Virtual Fish Tank, You Make the Rules

In my last post, I discussed how individuals following simple rules can cause coordinated group behavior to arise. The boid model created by Craig Reynolds used three rules – alignment, separation, and cohesion. But how much attention does each individual pay to each rule? In situations like migration, alignment might be the most important rule. If you’re being attacked by predators, sticking together and paying attention to the cohesion rule might keep you from being eaten.

Clearly, different situations might call for different weightings. I was very interested in how different weightings of these rules might behave, so I decided to create a program that allows you to change the relative importance of each rule at will. You can see a video of it in action below (the three sliders on the bottom left control the influence of the various rules), and a sketch of the weighted update follows the control descriptions.

Smaller screen, older computer, or just want something simpler?
Load the standard definition fish school simulation.
Big screen, fast computer, and want to give the fish some more room to swim?
Load the high definition fish school simulation.
Alignment: This slider adjusts how much each fish wants to head in the same direction as other fish around it.
Cohesion: This slider adjusts how close each fish wants to be to its neighbors.
Separation: This slider adjusts how much each fish wants to space itself out from the others.
Click anywhere in the water to add a new fish.
Press this button to reset the simulation.
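The heart of any such simulation is a weighted sum of the three steering rules, with the sliders setting the weights. A minimal sketch in Python (the names and weight values here are illustrative, not the simulation’s actual code):

```python
import numpy as np

# Illustrative slider values; the simulation exposes these as live controls.
W_ALIGNMENT, W_COHESION, W_SEPARATION = 1.0, 0.8, 1.2

def steering(pos, vel, neighbor_pos, neighbor_vel):
    """Blend the three Reynolds rules for one fish, given its neighbors."""
    alignment = neighbor_vel.mean(axis=0) - vel      # match the local heading
    cohesion = neighbor_pos.mean(axis=0) - pos       # drift toward the local center
    separation = (pos - neighbor_pos).mean(axis=0)   # push away from crowding
    return (W_ALIGNMENT * alignment
            + W_COHESION * cohesion
            + W_SEPARATION * separation)
```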

I hope you enjoy it – please leave me a comment if you have any questions, comments, or find any bugs.

The Golden Rule in the Wild

In the previous post, we discussed the Prisoner’s Dilemma and saw how a simple strategy called Tit-for-Tat enforced the Golden Rule and won a very interesting contest. But does Tit-for-Tat always come out on top? The most confounding thing about the strategy is that it can never win – at best, it can only tie other strategies. Its success came from avoiding the bloody battles that other more deceptive strategies suffered.

The major criticism of Axelrod’s contest is its artificiality. In real life, some may say, you don’t get to encounter everyone, interact with them, and then have a tally run at the end to determine just how you did. Perhaps more deceptive strategies would do better in a more “natural” environment, where losing doesn’t just mean moving on to another opponent – it means your failures cause you to simply die off.

Artificial Nature

So now let’s look at the same game, with the same scoring system, only this time there’s a twist. Assume that this contest takes place in some sort of ecosystem that can only support a certain number of organisms, and they must fight among each other for the right to reproduce. There will be many different organisms, and each will be a member of a certain species – that is, a specific strategy. We can then construct an artificial world where these strategies can battle it out in a manner that reflects the real world a bit better.

In order to determine supremacy, we’ll play a certain number of rounds of the game, called a generation. At the end of the generation, the scores are tallied for each strategy, and a new generation of strategies is produced – with a twist. Higher-scoring strategies will produce more organisms representing them in the next generation, while lower-scoring strategies will produce fewer. Repeat this for many generations, observe the trends, and we can see how these strategies fare as part of a population that can grow and shrink, rather than as a single strategy that lives forever.
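In code, a generation step is just fitness-proportional resampling. A minimal sketch (how each strategy’s average score is computed is elided; any round-robin scorer works):

```python
def next_generation(population, avg_score):
    """Fitness-proportional reproduction at a fixed population size.

    population: {strategy_name: organism_count}
    avg_score:  {strategy_name: average score earned this generation}
    """
    size = sum(population.values())
    fitness = {s: population[s] * avg_score[s] for s in population}
    total = sum(fitness.values())
    # Rounding can drift the total by an organism or two; fine for a sketch.
    return {s: round(size * fitness[s] / total) for s in population}
```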

So let’s look at an example. Suppose we have a population that consists of the following simple strategies:

| Initial Population | Strategy    | Description |
|--------------------|-------------|-------------|
| 60%                | ALL-C       | Honest to a fault, this strategy always cooperates. |
| 20%                | RAND        | The lucky dunce, this strategy defects or cooperates at random. |
| 10%                | Tit-for-Tat | This strategy mimics the previous move of the other player, every time. |
| 10%                | ALL-D       | The bad boy of the bunch, this strategy always defects. |


So what will happen? Was Tit-for-Tat’s dominance a result of the structure of the contest, or is it hardier than some might think? A graph of the changing populations over 50 generations may be seen below.

It’s a hard world to start. ALL-C is immediately decimated by the deception of ALL-D and RAND, which surge ahead, while Tit-for-Tat barely hangs on. ALL-D’s relentless deception allows it to quickly take the lead, and it starts knocking off its former partner in crime, RAND. Tit-for-Tat remains on the ropes, barely keeping its population around 10% as ALL-C and RAND are eliminated around it.

And then something very interesting happens. ALL-D runs out of easy targets and turns to the only opponents left – Tit-for-Tat and itself. Tit-for-Tat begins a slow climb as ALL-D eats itself fighting over scraps. Slowly, steadily, Tit-for-Tat maintains its numbers by simply getting along with itself while the ALL-D organisms destroy each other. By 25 generations it’s all over – the easy resources exhausted, ALL-D is unable to adapt to the new environment and Tit-for-Tat takes over.

This illustrates a very important concept – that of an evolutionarily stable strategy. ALL-D was well on its way to winning, but left itself open to invasion through constant infighting. ALL-C initially had the highest population but was quickly eaten away by more deceptive strategies. Tit-for-Tat, on the other hand, was able to get along with itself and defended itself against outside invaders that did not cooperate in turn. An evolutionarily stable strategy is one that can persist in this manner – once a critical mass of players starts following it, it cannot easily be invaded or exploited by other strategies, and unlike ALL-D, it does not undermine itself.

I Can’t Hear You

But there’s one critical weakness to Tit-for-Tat. We’re all aware of feuds that have gone on for ages, both sides viciously attacking the other in retaliation for the last affront, neither one precisely able to tell outsiders when it all started. And if we look at the strategy each side uses, in a simplistic sense it seems that they’re using Tit-for-Tat precisely. So how did it go so horribly wrong?

It went wrong because Tit-for-Tat has a horrible weakness – its memory is only one move long. If two Tit-for-Tat strategies somehow get stuck in a death spiral of defecting against each other, there’s no allowance in the strategy to realize this foolishness, and be the first to forgive. But how could this happen? Tit-for-Tat is never the first to defect after all, so why are both Tit-for-Tat strategies continually defecting?

The answer is that great force of nature, noise. A message read the wrong way, a shout misheard over the wind, an error in interpretation – any of these can be the impetus for that first defection. No matter that it was pointless and mistaken, the relationship has changed. While Tit-for-Tat’s greatest strength is that it never defects first, its greatest weakness is that it never forgives first either.

None of the simulations we’ve seen so far includes noise, and noise can have a catastrophic effect on the effectiveness of Tit-for-Tat. Its success was built on never fighting among itself while allowing other deceptive strategies to destroy themselves by doing exactly that – but with noise, this advantage becomes a fatal weakness, as Tit-for-Tat’s inability to be taken advantage of is turned against itself.

So what does a simulation including noise look like? You can see one below, and it contains an additional mystery strategy, Pavlov. Pavlov is very similar to Tit-for-Tat but slightly different – it forgives far more easily.

We see a similar pattern to our previous simulation. ALL-D has an initial population spike as it knocks off the easy targets, but Tit-for-Tat and Pavlov slowly climb to supremacy while ALL-D is eventually left eating scraps. But the influence of noise causes Tit-for-Tat to fight among itself, and Pavlov does what previously seemed impossible – it begins to win against Tit-for-Tat.

Puppy Love

So what is Pavlov and why does it work better in a noisy environment like the real world? Well, Ivan Pavlov was the man who discovered classical conditioning. You probably remember him as the guy who fed dogs while ringing a bell, and who then just rang the bell – and discovered that the dogs salivated expecting food.

The strategy is simple – if you win, keep doing it. If you lose, change your approach. Pavlov will always cooperate with ALL-C and Tit-for-Tat. If it plays ALL-D, however, it will hopefully cooperate, lose, get angry about it and defect, lose again, switch back to cooperation, and so on. Like a tiny puppy or the suitor of a crazy girlfriend, it can’t really decide what it wants to do, but it’s going to do its damnedest to try to succeed anyways. It manages to prevent the death spiral of two Tit-for-Tat strategies continually misunderstanding each other by obeying a very simple rule – if it hurts, stop doing it. While it may be slightly more vulnerable to deceptive strategies, it never gets stuck in these self-destructive loops of behavior.
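Pavlov is usually formalized as “win-stay, lose-shift”: repeat your last move after a good payoff (3 or 5 points in this scoring), switch after a bad one (0 or 1). A sketch, with a noise flip included (the 5% noise rate is an assumption for illustration):

```python
import random

NOISE = 0.05  # assumed probability that a move is garbled in transmission

def pavlov(my_last_move, my_last_payoff):
    """Win-stay, lose-shift: keep a winning move, abandon a losing one."""
    if my_last_move is None:
        return "C"                                 # start by cooperating
    if my_last_payoff >= 3:                        # it worked – keep doing it
        return my_last_move
    return "D" if my_last_move == "C" else "C"     # it hurt – stop doing it

def transmit(move):
    """With probability NOISE, the other player sees the opposite move."""
    return move if random.random() > NOISE else ("D" if move == "C" else "C")
```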

So there’s a lesson here – life is noisy, and people will never get everything correct all the time. Tit-for-Tat works very well for a wide variety of situations, but has a critical weakness where neither player in a conflict is willing or able to forgive. So the next time you’re in a situation like that, step back, use your head, and switch strategies – it’s what this little puppy would want you to do, anyways.

Triumph of the Golden Rule

We live in a world with other people. Almost every decision we make involves someone else in one way or another, and we face a constant choice regarding just how much we’re going to trust the person on the other side of this decision. Should we take advantage of them, go for the quick score and hope we never see them again – or should we settle for a more reasonable reward, co-operating in the hope that this peaceful relationship will continue long into the future?

We see decisions of this type everywhere, but what is less obvious is the best strategy for us to use to determine how we should act. The Golden Rule states that one should “do unto others as you would have them do unto you”. While it seems rather naive at first glance, if we run the numbers, we find something quite amazing.

A Dilemma

In order to study these types of decisions, we have to define what exactly we’re talking about – so let’s define just what a “dilemma” is. It involves two people, who can individually decide to work together for a shared reward, or to screw the other one over and take it all for themselves. If you both decide to work together, you both get a medium-sized reward. If you decide to take advantage of someone but they trust you, you’ll get a big reward (and the other person gets nothing). If you’re both jerks and decide to try to take advantage of each other, you both get a tiny fraction of what you could have had. Let’s call these two people Alice and Bob – here’s a table to make things a bit more clear.

|                | Alice cooperates | Alice defects |
|----------------|------------------|---------------|
| Bob cooperates | Everyone wins! A medium-sized reward to both for mutual co-operation. | Poor Bob. He decided to trust Alice, who screwed him and got a big reward. Bob gets nothing. |
| Bob defects    | Poor Alice. She decided to trust Bob, who took advantage of her and got a big reward. Alice gets nothing. | No honour among thieves… both Bob and Alice take the low road, and fight over the scraps of a small reward. |

This specific ordering of rewards is referred to as the Prisoner’s Dilemma. It was formalized and studied by Merrill Flood and Melvin Dresher in 1950 while they were working for the RAND Corporation.

Sale, One Day Only!

Now of course the question is – if you’re in this situation, what is the best thing to do? First, suppose that we’re never, ever going to see this other person again. This is a one-time deal. Absent any moral considerations, your best option for maximum profit is to attempt to take advantage of the other person and hope they are clueless enough to let you – capitalism at its finest. You could attempt to cooperate, but that leaves you open to the other party screwing you. If each person is rational and acts in their own interest, they will attempt to one-up the other.

But there’s just one problem – if both people act this way, they both get much less than they would if they simply cooperated. This seems very strange, as the economic models banks and other institutions use to describe human behavior assume exactly this type of logic – the model of the rational consumer. Yet it leads to nearly the worst possible outcome if both parties take this approach.

It seems that there is no clear ideal strategy for a one-time deal. Each choice leaves you open to possible losses in different ways. At this point it’s easy to throw up your hands, leave logic behind, and take a moral stance. You’ll cooperate because you’re a good person – or you’ll take advantage of the suckers because life just isn’t fair.

And this appears to leave us where we are today – some good people, some bad people, and the mythical invisible hand of the market to sort them all out. But there’s just one little issue. We live in a world with reputations, with friends, and with foes – there are no true “one time” deals. The world is small, and people remember.

In it for the Long Run

So instead of thinking of a single dilemma, let’s think about what we should do if we get to play this game more than once. If someone screws you in the first round, you’ll remember – and probably won’t cooperate the next time. If you find someone who always cooperates, you can join them and work together for your mutual benefit – or decide that they’re an easy mark and take them for everything they’ve got.

But what is the best strategy? In an attempt to figure this out, in 1980 Robert Axelrod decided to have a contest. He sent the word out, and game theorists, scientists, and mathematicians all submitted entries for a battle royale to determine which strategy was the best.

Each entry was a computer program designed with a specific strategy for playing this dilemma multiple times against other clever entries. The programs would play this simple dilemma, deciding whether to cooperate or defect against each other, for 200 rounds. Five points for a successful deception (you defect, they cooperate), three points each for mutual cooperation, one point each if you both tried to screw each other (mutual defection), and no points if you were taken advantage of (you cooperate, they defect). Each program would play every other program as well as a copy of itself, and the program with the largest total score over all the rounds would win.

So what would some very simple programs be?

ALL-C (always cooperate) is just like it sounds. Cooperation is the only way, and this program never gets tired of being an upstanding guy.

ALL-D (always defect) is the counterpoint to this, and has one singular goal. No matter what happens, always, always, always try to screw the other person over.

RAND is the lucky dunce – don’t worry too much, just decide to cooperate or defect at random.

You can predict how these strategies might do if they played against each other. Two ALL-C strategies would endlessly cooperate in a wonderful dance of mutual benefit. Two ALL-D strategies would continually fight, endlessly grinding against each other and gaining little. ALL-C pitted against ALL-D would fare about as well as a fluffy bunny in a den of wolves – eternally cooperating and hoping for reciprocation, but always getting the shaft with ALL-D profiting.
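These predictions are easy to check with a sketch of the tournament’s scoring rules (the strategy names follow the post; the code itself is mine, not Axelrod’s):

```python
import random

# Payoffs from the contest: (my points, their points) for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def all_c(opponent_history): return "C"
def all_d(opponent_history): return "D"
def rand(opponent_history):  return random.choice("CD")

def match(a, b, rounds=200):
    """Play one match; each strategy sees only its opponent's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pts_a, pts_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pts_a, score_b + pts_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(match(all_c, all_d))  # the bunny meets the wolves: (0, 1000)
```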

So an environment of ALL-C would be a cooperative utopia – until a single ALL-D strategy came in and started bleeding them dry. But an environment made entirely of ALL-D would be a wasteland – no one would have any success due to constant fighting. And the RAND strategy is, quite literally, a coin flip.

Time to Think

So what should we do? Those simple strategies don’t seem to be very good at all. If we think about it however, there’s a reason they do so poorly – they don’t remember. No matter what the other side does, they’ve already made up their minds. Intelligent strategies remember previous actions of their opponents, and act accordingly. The majority of programs submitted to Axelrod’s competition incorporated some sort of memory. For instance, if you can figure out you’re playing against ALL-C, it’s time to defect. Just like in the real world, these programs tried to figure out some concept of “reputation” that would allow them to act in the most productive manner.

And so Axelrod’s competition was on. Programs from all over the world competed against each other, each trying to maximize their personal benefit. A wide variety of strategies were implemented from some of the top minds in this new field. Disk drives chattered, monitors flickered, and eventually a champion was crowned.

And the Winner Is…

When the dust settled, the winner was clear – and the victory was both surprising and inspiring. The eventual champion seemed to be a 90 lb weakling at first glance: a mere four lines of code submitted by Anatol Rapoport, a mathematical psychologist from the University of Toronto. It was called “Tit-for-Tat”, and it did exactly that. It started every game by cooperating, and then did exactly what the other player had done on their previous turn. It cooperated with the “nice” strategies, butted heads with the “mean” strategies, and managed to come out on top, ahead of far more complex approaches.
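Rapoport’s four lines weren’t Python, but the strategy really is that short. In the style of the sketch above:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    if not opponent_history:
        return "C"
    return opponent_history[-1]
```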

The simplest and shortest strategy won, a program that precisely enforced the Golden Rule. But what precisely made Tit-for-Tat so successful? Axelrod analyzed the results of the tournament and came up with a few principles of success.

  • Don’t get greedy. Tit-for-Tat can never beat another strategy. But it never allows itself to take a beating, ensuring it skips the brutal losses of two “evil” strategies fighting against each other. It actively seeks out win-win situations instead of gambling for the higher payoff.
  • Be nice. The single best predictor of whether a strategy would do well was if they were never the first to defect. Some tried to emulate Tit-for-Tat but with a twist – throwing in the occasional defection to up the score. It didn’t work.
  • Reciprocate, and forgive. Other programs tended to cooperate with Tit-for-Tat since it consistently rewarded cooperation and punished defection. And Tit-for-Tat easily forgives – no matter how many defections it has seen, if a program decides to cooperate, it will join them and reap the rewards.
  • Don’t get too clever. Tit-for-Tat is perfectly transparent, and it becomes obvious that it is very, very difficult to beat. There are no secrets, and no hypocrisy – Tit-for-Tat gets along very well with itself, unlike strategies biased toward deception.

The contest attracted so much attention that a second one was organized, and this time every single entry was aware of the strategy and success of Tit-for-Tat. Sixty-three new entries arrived, all gunning for the top spot. And once again, Tit-for-Tat rose to the top. Axelrod used the results of these tournaments to develop ideas about how cooperative behaviour could evolve naturally, and eventually wrote a bestselling book called The Evolution of Cooperation. But his biggest accomplishment may be showing us that being nice does pay off – and giving us the numbers to prove it.

The Quaternion Bulb

My three-dimensional unfolding of the quaternion Julia sets finally finished rendering. There are a fair number of compression artifacts in the embedded version; click on the Vimeo button on the bottom right side of the video to watch it in full-quality HD.

Since each quaternion can be described using four numbers, I unfolded these four dimensional quaternion Julia sets into three dimensional space, and animated the final coefficient.
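The post doesn’t include the renderer, but the underlying test is the standard quaternion Julia iteration q ← q² + c, with q and c quaternions. A minimal membership sketch (the iteration count and the conventional bailout radius of 2 are my choices):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (a, b, c, d) arrays."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def in_julia(q, c, max_iter=50, bailout_sq=4.0):
    """Iterate q <- q^2 + c; points that never escape belong to the set."""
    for _ in range(max_iter):
        q = quat_mul(q, q) + c
        if np.dot(q, q) > bailout_sq:
            return False
    return True
```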

(x, y, z, f(t))

But once I did that, I noticed some rotational symmetry in the y-z plane – it looks like something that’s been made on a lathe. This means that we can “index” all these shapes in a more sensible manner by collapsing things along this axis of symmetry. Previously, we could index all of our shapes with four coefficients a, b, c, and d:

q = a + b·i + c·j + d·k

After this transformation, we can index them with four coefficients a, r, theta, and d. And there’s a nice side effect now that our coordinate system reflects our symmetry – if we vary theta, the appearance of the Julia set doesn’t change; the object just appears to rotate about the a axis:

q = a + (r·cos θ)·i + (r·sin θ)·j + d·k

So really we can index all possible shapes using only three coefficients – a, r, and d. This is awesome – it means we can use this symmetry to collapse a dimension and completely illustrate a discrete approximation of this four-dimensional set in three-dimensional space. The following images (click for full-resolution 1080p versions) illustrate the full set of these possible shapes – a is the horizontal axis, r is the vertical axis, and values iterate by 0.25. The grey sphere in the first image is the origin, and the images start at a d value of 0 and iterate upward by 0.25. We also find additional symmetry in the d parameter – d and −d give the same shapes – so we only need to illustrate the absolute value to see all of them.
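As a usage sketch of the in_julia test above, sweeping the slices looks roughly like this (the grid bounds are guesses – the post doesn’t state them – and theta is fixed at 0 since rotation about the a axis is free):

```python
import numpy as np

for d in np.arange(0.0, 1.26, 0.25):           # the slices shown below
    for a in np.arange(-2.0, 2.01, 0.25):      # horizontal axis (bounds assumed)
        for r in np.arange(0.0, 2.01, 0.25):   # vertical axis (bounds assumed)
            c = np.array([a, r, 0.0, d])       # theta = 0: rotation is free
            # ...render the Julia set for this c as one cell of the d-slice image
```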

d = 0.00

d = 0.25

d = 0.50

d = 0.75

d = 1.00

When d = 1.25, only a few bits of unconnected dust remain visible. This analysis covers only a single “slice” – namely, the hyperplane normal to (0,0,0,1). I’d be very interested to see if there are any other symmetries…