The Theis Equation and Flow

Mathematics is remarkably effective in describing the physical world in part due to isomorphisms, relationships between concepts that reveal a similar underlying structure. In 1935 Charles Vernon Theis was working on groundwater flow, a subject with little mathematical treatment at the time. He thought that perhaps a well tapping a confined aquifer could be described using the same mathematics as the heat flow of a thin wire drawing heat from a large plate, as this work was better established. With a little bit of help from C. I. Lubin and considering how parameters describing underground water flow could be compared to those describing heat flow in solid materials, he developed the Theis equation which is used to this day to model the response of a confined aquifer to pumping over time.

Using Processing, I developed a small program that visualizes the potentiometric surface of a confined aquifer subject to pumping. This particular example uses aquifer and pumping parameters from a Geo-Slope whitepaper.

The source code may be downloaded here. All values, including aquifer, pumping, visualization, and numerical parameters, may be varied to apply to a wide variety of situations. The exponential integral (or “well function”) is calculated using a numerical approximation accurate to at least 1 part in 10,000,000.
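
For readers who want to experiment outside of Processing, the calculation itself is compact. The following Python sketch (not the downloadable source above; the function and parameter names are my own) computes the well function W(u) from its standard convergent series and uses it to evaluate Theis drawdown:

```python
import math

def well_function(u, tol=1e-12):
    """Approximate the well function W(u) = E1(u) for u > 0 using the
    convergent series W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!).
    Suitable for the small-to-moderate u typical of pumping tests."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    total = -gamma - math.log(u)
    term = 1.0  # running value of u^n / n!
    for n in range(1, 200):
        term *= u / n
        contribution = ((-1) ** (n + 1)) * term / n
        total += contribution
        if abs(contribution) < tol:
            break
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s at radial distance r and time t, for pumping rate Q,
    transmissivity T, and storativity S (consistent units assumed)."""
    u = (r * r * S) / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

With consistent units, u = r²S/(4Tt); as pumping continues, u shrinks, W(u) grows, and the predicted drawdown deepens.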

Convergence in the Lorenz Attractor

Most visualizations of the Lorenz attractor show the long history of a single point after convergence to the attractor has occurred. I was interested in what the surrounding space looked like, so I randomly selected 20,000 starting points from a three-dimensional Gaussian distribution with a standard deviation of 100. Each point was iterated, and a short history displayed as a trail.

Interestingly enough, the points do not simply fall in from arbitrary directions as they would in a gravity field; instead they display structure, swirling along a clear path up the z axis.
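
A reduced version of the experiment is easy to reproduce. This Python sketch (a stand-in for the original visualization, with 30 points instead of 20,000 and simple forward-Euler integration) samples Gaussian starting points and iterates them in toward the attractor:

```python
import math
import random

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

random.seed(0)
# Starting points drawn from a 3D Gaussian with standard deviation 100,
# as in the visualization described above (fewer points here for speed).
points = [tuple(random.gauss(0.0, 100.0) for _ in range(3)) for _ in range(30)]

dt, steps = 0.001, 20000  # integrate each point to t = 20
final = []
for p in points:
    for _ in range(steps):
        p = lorenz_step(p, dt)
    final.append(p)
```

By t = 20, every point, no matter how far out it started, has been drawn into the bounded region occupied by the attractor.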

Lorenz and the Butterfly Effect

In 1961, Edward Lorenz was studying a simplified model of convection flow in the atmosphere. He imagined a closed chamber of air with a temperature difference between the bottom and the top, modeled using the Navier-Stokes equations of fluid flow. If the difference in temperature was slight, the heated air would slowly rise to the top in a predictable manner. But if the difference in temperature was higher, other situations would arise, including convection and turbulence. Turbulence is notoriously difficult to model, so Lorenz focused on the formation of convection – somewhat more predictable circular flows.

Focusing only on convection allowed Lorenz to simplify the model sufficiently so that he could run calculations on the primitive (by today’s standards) computer available to him at MIT. And something wonderful happened – instead of settling down into a predictable pattern, Lorenz’s model oscillated within a certain range but never seemed to do the exact same thing twice. It was an excellent approach to the problem, exhibiting the same kind of semi-regular behavior we see in actual atmospheric dynamics. It made quite a stir at MIT at the time, with other faculty betting on just how this strange mathematical object would behave from iteration to iteration.

But Lorenz was about to discover something even more astounding about this model. While running a particularly interesting simulation, he was forced to stop his calculations prematurely as the computer he was using was shared by many people. He was prepared for this, and wrote down three values describing the current state of the system at a certain time. Later when the computer was available again, he entered the three values and let the model run once more. But something very strange happened. It initially appeared to follow the same path as before, but soon it started to diverge from the original slightly, and quickly these small differences became huge variances. What had happened to cause this?

Lorenz soon found that his error was a seemingly trivial one. He had rounded the numbers he wrote down to three decimal places, while the computer stored values to six decimal places. Simply rounding to the nearest thousandth had created a tiny error that propagated throughout the system, creating drastic differences in the final result. This is the origin of the term “butterfly effect”, where infinitesimally small changes in initial conditions can cause huge differences in outcome. Lorenz had discovered the first chaotic dynamical system.

We can see this effect in action here, where a red and a blue path have initial values truncated in a similar manner. Instead of graphing X, Y, and Z values separately, we’ll graph them in 3D space.

         Initial Values (Red)   Initial Values (Blue)
    X    5.766                  5.766450
    Y    -2.456                 -2.456460
    Z    32.817                 32.817251

We can see at first that the two paths closely track each other. The error is magnified as time marches on, and the paths slowly diverge until they are completely different – all as a result of truncating to three decimal places. This contrasts with the sensitivity to error we are typically used to, where small errors are not magnified uncontrollably as a calculation progresses. Imagine if using a measuring tape that was off by a tenth of a percent caused your new deck to collapse rather than stand strong…
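
The divergence is easy to reproduce numerically. Here is a minimal Python sketch (using forward-Euler integration, not necessarily the original scheme) with the red and blue initial values from the table above:

```python
import math

# Red path: values truncated to three decimal places; blue: full precision.
red  = (5.766, -2.456, 32.817)
blue = (5.766450, -2.456460, 32.817251)

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def distance(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

gap0 = distance(red, blue)       # the tiny truncation error
max_gap = gap0
for _ in range(20000):           # integrate both paths to t = 20
    red, blue = lorenz_step(red), lorenz_step(blue)
    max_gap = max(max_gap, distance(red, blue))
```

The gap between the two paths starts below a thousandth and eventually grows to the scale of the attractor itself.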

There is also a hint of a more complex underlying structure, a folded three dimensional surface that the paths move along. If a longer path is drawn, the semi-regular line graphs of before are revealed to be the projections of a structure of undeniable beauty.

This is a rendering of the long journey of a single point in Lorenz’s creation, a path consisting of three million iterations developed in Processing that shows the region of allowable values in the model more clearly. Older values are darker, and the visualization is animated to show the strange dance of the current values of the system as they sweep from arm to arm of this dual-eyed mathematical monster, order generating chaos.

River Crossing Problems and Discrete State Spaces

A brain teaser goes as follows: a farmer is returning from market, where he has bought a wolf, a goat, and a cabbage. On his way home he must cross a river by boat, but the boat is only large enough to carry the farmer and one additional item. The farmer realizes he must shuttle the items across one by one. He must be careful however, as the wolf cannot be left alone on the same side of the river as the goat (since the goat will be eaten), and the goat cannot be left alone on the same side of the river as the cabbage (since the goat will eat the cabbage).

How can the farmer arrange his trips to move the wolf, the goat, and the cabbage across the river while ensuring that no one eats the other while he’s away? The problem usually yields to an analysis by trial and error, where one of the possible solutions is found by iterating and checking the possible decisions of the farmer.

But is there a more rigorous approach that will allow us to easily find all possible solutions without having to resort to guessing and checking?

A Discrete State Space

Let’s create a three-dimensional vector that represents the state of the system. The components of this vector will be 0 or 1 depending on which side of the river the wolf, the goat, and the cabbage are on. Let’s list off all possible combinations without worrying about whether something is going to get eaten – we’ll get to that later.

    Vector     State
    (0,0,0)    Our initial state, with all three on the starting side of the river.
    (1,0,0)    The wolf has crossed the river, but not the goat or cabbage.
    (0,1,0)    The goat has crossed the river, but not the wolf or cabbage.
    (0,0,1)    The cabbage has crossed the river, but not the wolf or goat.
    (1,1,0)    The wolf and the goat have crossed the river, but not the cabbage.
    (0,1,1)    The goat and the cabbage have crossed the river, but not the wolf.
    (1,0,1)    The wolf and cabbage have crossed the river, but not the goat.
    (1,1,1)    Our desired state, where all three have crossed the river.

Now a giant list like this isn’t much help. We’ve listed off all possible states, but we can structure this in a more sensible manner which will make solving the problem much easier. Let’s allow the vectors to represent the corners of a cube, with the edges of the cube representing trips across the river. Movement along the x axis represents the wolf moving, movement along the y axis represents the goat moving, and movement along the z axis represents the cabbage moving.

Alright, that seems a bit better. What we need to do now is find an allowable path along the edges of the cube from (0,0,0) to (1,1,1), and this will represent the solution to our puzzle. First we need to remove the edges where something gets eaten.

    Edge Removed            Rationale
    (0,0,0) to (1,0,0)      This moves the wolf across, leaving the goat with the cabbage.
    (0,0,0) to (0,0,1)      This moves the cabbage across, leaving the wolf with the goat.
    (0,1,1) to (1,1,1)      This moves the wolf across, leaving the goat with the cabbage.
    (1,1,0) to (1,1,1)      This moves the cabbage across, leaving the wolf with the goat.

Our allowable state space now looks like this.

The problem has simplified itself drastically. We want to go from (0,0,0) to (1,1,1) by travelling along the black (allowable) edges of this cube. We can see that if we ignore solutions where the farmer loops pointlessly, there are two allowable solutions to this puzzle.

Solution 1:
1. Move goat to other side.
2. Move cabbage to other side.
3. Move goat back.
4. Move wolf to other side.
5. Move goat to other side.

Solution 2:
1. Move goat to other side.
2. Move wolf to other side.
3. Move goat back.
4. Move cabbage to other side.
5. Move goat to other side.
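
The search for these paths can be automated. Here is a small Python sketch that enumerates every loop-free route along the allowable edges of the cube (the farmer’s own position is left implicit, as in the model above; the names are mine):

```python
# Corners of the cube: (wolf, goat, cabbage), 0 = starting bank, 1 = far bank.
start, goal = (0, 0, 0), (1, 1, 1)

# The four edges removed because something gets eaten, per the table above.
forbidden = {
    frozenset({(0, 0, 0), (1, 0, 0)}),
    frozenset({(0, 0, 0), (0, 0, 1)}),
    frozenset({(0, 1, 1), (1, 1, 1)}),
    frozenset({(1, 1, 0), (1, 1, 1)}),
}

def neighbors(state):
    """Adjacent corners reachable by ferrying exactly one passenger."""
    for i in range(3):
        nxt = list(state)
        nxt[i] ^= 1
        nxt = tuple(nxt)
        if frozenset({state, nxt}) not in forbidden:
            yield nxt

def simple_paths(state, path):
    """Depth-first enumeration of loop-free paths from state to the goal."""
    if state == goal:
        yield path
        return
    for nxt in neighbors(state):
        if nxt not in path:
            yield from simple_paths(nxt, path + [nxt])

solutions = list(simple_paths(start, [start]))
```

It finds exactly two five-trip solutions, and both necessarily begin by moving the goat.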

This is a beautiful example of the simple structures that underlie many of these classic riddles.

Our Modular Minds

I believe that ultimately human consciousness can be described by a program. Now this doesn’t mean we’re all in the Matrix, simply that our mind is a giant seething logical machine with values that are manipulated by rules. There is no strange new science, in the sense of new specialties that must be discovered, required for the mind to be understood – just a progression in “a new kind of science”, as Wolfram dubbed it: the study of how complexity arises.

A List of Rules

When I first heard as a child that you could program a computer, I was amazed. A strange wonder that I could only spend a few shared minutes with at school, something that could draw, add, and write far faster than I could ever dream of – and I could tell it what to do? I wasn’t quite sure how to bend it to my every wish, so I started with Turtle (actually called Logo, I later found) upon my teacher’s recommendation. I fed Turtle long lists of instructions – move forward, draw a line, turn left – repeated in any and all ways I could think of. He would draw glowing green shapes across my screen, and never tired.

The Need for Modularity

The only problem was that while the Turtle seemed infallible, I certainly could not say the same. I was making the classic beginner’s mistake – writing one giant chunk of code that was supposed to make my turtle dance in precisely the way I wanted. Any little mistake would send it wildly off course, and at the best of times I would end up with a mess that barely resembled the original design. I later learned that a programming concept called “modules” could help me isolate these errors and make code more efficient and reusable. Just as a company can have manufacturing and engineering divisions that communicate with standardized blueprints, a program can have different modules that exchange data in a standardized manner. A modular program is more stable, since mistakes are typically limited in influence to the module they’re contained in, and each module can be modified separately so long as it continues to behave and communicate in the agreed-upon manner.

Damage as Evidence

So is our mind modular? Well, if it weren’t, we would expect a brain injury affecting a certain part of the brain to have a consistent, general impact across all of our consciousness. The problem is that we generally only see a nonspecific mental decline like this from nonspecific trauma – say, impact blows to the head over a long period of time. Injuries in specific areas seem to be correlated with deficits in certain mental abilities, while leaving others totally intact.

A stroke can be thought of as an incident in which blood flow to a specific area of the brain is drastically reduced. The affected subregion is unable to function due to the lack of blood flow, and very strange things can occur.

Howard Engel is a Canadian novelist who had a stroke. Upon waking one morning, he found that the morning paper seemed to be written in some strange script, an alphabet he could not understand. Everything else appeared normal, but his visual cortex had been damaged in a specific area, preventing him from visually parsing letters and words. As a writer, he despaired – it seemed his livelihood had been lost. Soon he realized a critical distinction which gave him hope – he might be unable to read visually, but could he still write? Howard sat down and traced these strange-looking symbols, his pen gliding over the bizarre shapes over and over. And eventually, the concepts came back to him. In a strange sense, he could now read again. Years and years of writing had associated certain movements of his hand with letters and concepts. Instead of words in his head put to paper by hand movements and a pen, he had to move concepts in the opposite direction – moving his hand over shapes written previously by others, the concepts echoing back up his motor cortex.

And it worked. There was irreparable damage to his visual cortex, a critical module malfunctioning. So he hacked his brain, redistributing resources from his motor cortex, which years of constant writing had trained to recognize these same symbols and concepts necessary for reading. Howard now traces the shapes he sees on the inside of his front teeth with his tongue. His speed has steadily increased, and he says he can now read about half of the subtitles in a foreign film before they flash off the screen.

It doesn’t seem too strange to suggest that there are different localized modules in the brain for motor control, visual interpretation, and other concepts easily identifiable with different aspects of the physical world. But are there modules with finer distinctions, working on different parts of our mental experience rather than different parts of the physical world?

The Wason selection task is a very interesting experiment in the psychology of reasoning. Before I spoil it for you by talking too much, let’s just do it right now. Look at the following cards.

Assume the cards have a number on one side, and a color on the other. What cards need to be flipped over to make sure that all even numbers have red backs? Make sure you’ve picked a card or cards.

Got it? Now a bit of unsettling but ego-salvaging news. When this experiment is done with undergraduates, only 10 to 20 percent get it right. The correct answer is to flip over two cards: the number 4 to make sure it has a red back, and the blue card to make sure that it doesn’t have an even number on the other side. Most people suggest flipping over the 4 and the red card – this is wrong, as it doesn’t matter if the red card has an odd number on the other side.
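 
The underlying logic is simple enough to state in code. Here is a Python sketch, assuming for illustration that the four visible faces are 4, 7, red, and blue (the odd number on the actual card isn’t specified above, so 7 is a stand-in): a card must be flipped exactly when its hidden side could falsify the rule.

```python
def cards_to_flip(visible_faces):
    """Return the faces that must be flipped to verify the rule
    'every card with an even number has a red back'.
    Only two kinds of cards can hide a violation: visible even numbers
    (the back might not be red) and visible non-red backs (the front
    might be even)."""
    flips = set()
    for face in visible_faces:
        if isinstance(face, int):
            if face % 2 == 0:       # even number: its back must be checked
                flips.add(face)
        elif face != "red":         # non-red back: its front might be even
            flips.add(face)
    return flips
```

The logic is identical for the beer version: flip anything that could be hiding a violation, and nothing else.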

Now let’s mix it up a bit. Instead of numbers and colors, let’s try people and social activities. Assume the cards have a drink on one side, and a person of a certain age on the other. What cards need to be flipped over to make sure everyone drinking beer is old enough to do so?

Now the answer flows quickly and easily, and almost everyone gets it correct. We need to flip over the beer card to make sure that the person on the other side is old enough, and we need to flip over the card showing the underage drinker to make sure they’re playing by the rules.

The weird thing is that both examples are logically equivalent. Instead of numbers and colors, we’ve just used people and drinks. But something very important has occurred, and it happens time and time again as these tests are administered. It seems that people are fast and accurate at solving this task only if it is described as a test of social obligations. They both can be described identically with logic – but that doesn’t appear to matter to our mind. We appear to have a module dedicated to social reasoning and conflict, and can only solve these problems quickly if it involves determining if someone is cheating or breaking social conventions. This ancient module would hold significant survival value – a general logic verification module not quite so much.

Modules Upon Modules

There appears to be significant evidence for a modular mind – not just divisions between senses such as sight or hearing and actions like movement, but also more abstract modules that deal with concepts such as social rules. Stroke victims can literally rewire their brains, passing concepts upward into their consciousness through paths never intended to be used in such a strange manner, duplicating the work of other modules lost to injury. These modules live in a strange world of physical interaction and abstract mental space, a huge interconnected mass with no clear outline behind it. The big question now becomes: is a sufficiently complex system able to understand itself, and are we that system?

Monday Miscellany

A few little things for Monday. I made a new video using the same engine as the fish school simulation described in the post In This Virtual Fish Tank, You Make the Rules.

This video shows 5000 fish swimming while the weightings of the three behavioural rules vary using sinusoidal functions. I love the “borders” created as large fish schools collide, and fish attempt to escape along the plane of intersection.

Triumph of the Golden Rule was also listed as recommended reading for MIT course 6.033 Computer Systems Engineering – one of the best compliments I’ve received. Thanks Evan!

In This Virtual Fish Tank, You Make the Rules

In my last post, I discussed how individuals following simple rules can cause coordinated group behavior to arise. The boid model created by Craig Reynolds used three rules – alignment, separation, and cohesion. But how much attention does each individual pay to each rule? In situations like migration, alignment might be the most important rule. If you’re being attacked by predators, sticking together and paying attention to the cohesion rule might keep you from being eaten.

Clearly different situations might have different approaches. I was very interested in how different weightings of these rules might behave, so I decided to create a program that would allow you to change the relative importance of each rule at will. You can see a video of it in action below (the three sliders on the bottom left control the influence of the various rules).

- Smaller screen, older computer, or just want something more simple? Load the standard definition fish school simulation.
- Big screen, fast computer, and want to give the fish some more room to swim? Load the high definition fish school simulation.
- Alignment: This slider adjusts how much each fish wants to head in the same direction as the other fish around it.
- Cohesion: This slider adjusts how close each fish wants to be to its neighbors.
- Separation: This slider adjusts how much each fish wants to space itself out from the others.
- Click anywhere in the water to add a new fish.
- Press the reset button to restart the simulation.

I hope you enjoy it – please leave me a comment if you have any questions, comments, or find any bugs.

Herds of Android Birds Mimic Ad Hoc Flocks

In winter, during the late afternoon before settling down to roost, flocks of thousands of starlings will twist and turn, darkening the sky with strange curves that seem to move with a mind of their own. These flocks, up to a million strong, form for warmth, for security, and for social contact.

These seething clouds of birds have no leader, no planning, and yet produce a dance that seems choreographed in its precision and beauty. So how do they do it? Some strange ability given to them, a unique skill among the animal kingdom? Well, perhaps not quite. Consider that humans seem to be able to move in crowds with the same ease, although some might say with a bit less beauty.

People seem to move in clumps, in the same direction, and manage to not trample each other or collide. There seem to be parallels in crowds of organisms, that this order emerges consistently and that somehow each animal “knows” what to do on a small scale, resulting in cohesive and sensible group movement. The question then becomes – what exactly are we doing unconsciously to organize like this?

The Boid Model

In 1986 Craig Reynolds produced a computer model describing how large groups of animals such as schools, herds, and flocks can move in unison with no central coordination. He called his creations “boids”, and imagined that each would follow some simple, sensible rules to navigate. He also knew a single animal would never be able to keep track of every other animal in the group, and assumed that each would only pay attention to its immediate neighbors in the flock.

So if you were a boid, what rules would you follow?

- Alignment: Look around you. Where is everyone else going? Probably a good idea to go there too. Alignment is a rule that finds the average direction your neighbours are heading and tells you to follow it.
- Cohesion: Predators look to pick off stragglers on the edge of the pack. Cohesion is a rule that finds the average position of your neighbors and tells you to move there, pulling you into the relative safety of the center of the pack.
- Separation: When crowds get big, they can get dangerous. Animals can trample each other, and birds can collide. Separation is a rule that tells you to give your neighbors some space.
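
These three rules translate almost directly into code. Here is a minimal 2D Python sketch (the names and the inverse-distance separation falloff are my own choices, not Reynolds’ exact formulation):

```python
class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def steer(boid, neighbors, w_align=1.0, w_cohere=1.0, w_separate=1.0):
    """Weighted sum of the three boid rules, returned as a (dx, dy) nudge
    to the boid's velocity. Assumes `neighbors` excludes the boid itself."""
    if not neighbors:
        return (0.0, 0.0)
    n = len(neighbors)
    # Alignment: steer toward the average velocity of the neighbors.
    ax = sum(b.vx for b in neighbors) / n - boid.vx
    ay = sum(b.vy for b in neighbors) / n - boid.vy
    # Cohesion: steer toward the average position of the neighbors.
    cx = sum(b.x for b in neighbors) / n - boid.x
    cy = sum(b.y for b in neighbors) / n - boid.y
    # Separation: steer away from neighbors, more strongly when close.
    sx = sy = 0.0
    for b in neighbors:
        dx, dy = boid.x - b.x, boid.y - b.y
        d2 = dx * dx + dy * dy
        if d2 > 0:
            sx += dx / d2
            sy += dy / d2
    return (w_align * ax + w_cohere * cx + w_separate * sx,
            w_align * ay + w_cohere * cy + w_separate * sy)
```

Each frame, a boid adds this nudge to its velocity; the three weights play the same role as the sliders in the simulations described above.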

How do we know how to do this? Well, the tautological answer is that it simply works. The beauty of these rules is that each of them is amazingly simple, and seem to make sense to us on an intuitive level. We tend to go with the crowd, stick together, and still try to give everyone a bit of personal space. But are these rules enough to produce the complex dance of the starlings, or are we missing some detail?

A Boid Dance

It turns out these simple rules produce patterns of movement that are anything but.

We can see that it looks flowing and wonderfully organic – but it’s not quite the same as the starling flocks. We can play with the boid model, tuning its various parameters to see how the resulting crowds might react.

- A school of fish trying to avoid predators might be modeled best by weighting the cohesion rule very heavily: keep together at all costs, without worrying too much about personal space or precisely where you’re heading, as long as it’s away from whatever is trying to eat you.
- A flock of geese flying south for the winter will focus on heading in a certain direction (alignment), then space themselves out to ensure they can see in front of themselves (separation) while staying close enough, and on the same plane, to experience the aerodynamic benefits of flying in a group (cohesion), producing the “flying V” we see so often in fall.
- Migrating animals are primarily concerned with making sure they’re going in the same direction as everyone else (alignment) while ensuring that no one is trampled (separation) and that young and weak animals are protected in the center of the herd (cohesion). For instance, the wildebeest stampede that killed Simba’s father in The Lion King was generated using the same theory as the boid model.

This simple model has a wide range of possible applications, and a huge amount of flexibility allowing it to produce a wide range of behavioural simulations. This gorgeous installation of light graffiti uses the rules we’ve just discussed to create hypnotizing patterns of artificial creatures.

The boid model has been used to animate realistic looking behavior over a huge range of media. If you’ve seen a computer generated crowd moving in a vaguely sensible manner anywhere, it likely uses the same basic theory. The power of the model comes from its construction, assuming simple behaviors competing for influence that result in a complex outcome not unlike our own consciousness or political systems. Not too shabby for a research project from over 20 years ago.