In 1962, Edward Lorenz was studying a simplified model of convection flow in the atmosphere. He imagined a closed chamber of air with a temperature difference between the bottom and the top, modeled using the Navier-Stokes equations of fluid flow. If the temperature difference was slight, the heated air would rise slowly and predictably to the top. But if the difference was larger, other behaviors would arise, including convection and turbulence. Turbulence is notoriously difficult to model, so Lorenz focused on the formation of convection rolls, the somewhat more predictable circular flows.

Focusing only on convection allowed Lorenz to simplify the model enough to run calculations on the primitive (by today's standards) computer available to him at MIT. And something wonderful happened: instead of settling down into a predictable pattern, Lorenz's model oscillated within a certain range but never seemed to do the exact same thing twice. It was an excellent approximation of the real phenomenon, exhibiting the same kind of semi-regular behavior we see in actual atmospheric dynamics. It caused quite a stir at MIT at the time, with other faculty betting on just how this strange mathematical object would behave from iteration to iteration.

But Lorenz was about to discover something even more astounding about this model. While running a particularly interesting simulation, he was forced to stop his calculations prematurely, since the computer he was using was shared by many people. He was prepared for this, and wrote down three values describing the state of the system at a certain time. Later, when the computer was available again, he entered the three values and let the model run once more. But something very strange happened. The new run initially appeared to follow the same path as before, but soon it started to diverge slightly, and those small differences quickly grew into enormous ones. What had happened to cause this?

Lorenz soon found that his error was a seemingly trivial one. He had rounded the numbers he wrote down to three decimal places, while the computer stored values to six. Simply rounding to the nearest thousandth had introduced a tiny error that propagated throughout the system, producing drastic differences in the final result. This is the origin of the term “butterfly effect”, where minuscule changes in initial conditions can produce enormous differences in outcome. Lorenz had discovered one of the first recognized chaotic dynamical systems.
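The model Lorenz ended up with reduces to three coupled differential equations in the variables X, Y, and Z. As a rough sketch (a hypothetical Python reimplementation, certainly not his original code), here is one way to step the system forward, using the canonical parameter values σ = 10, ρ = 28, β = 8/3 from his 1963 paper and a standard fourth-order Runge-Kutta update:

```python
def lorenz(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system: the rates of change of x, y, z."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return dx, dy, dz

def step(state, dt=0.01):
    """Advance the (x, y, z) state by one time step dt using 4th-order Runge-Kutta."""
    k1 = lorenz(*state)
    k2 = lorenz(*[s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = lorenz(*[s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = lorenz(*[s + dt * k for s, k in zip(state, k3)])
    return tuple(
        s + dt / 6 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )
```

Iterating `step` from almost any starting point produces exactly the behavior described above: the values oscillate within a bounded region but never settle into a repeating pattern.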

We can see this effect in action here, where the red path starts from initial values truncated to three decimal places and the blue path starts from the full-precision values. Instead of graphing the X, Y, and Z values separately, we'll graph them together in 3D space.

     Initial Values (Red)   Initial Values (Blue)
X                   5.766                5.766450
Y                  -2.456               -2.456460
Z                  32.817               32.817251

We can see that at first the two paths closely track each other. As time marches on, the error is magnified and the paths slowly diverge until they are completely different, all as a result of truncating to three decimal places. This contrasts with the sensitivity to error we are typically used to, where small errors are not magnified uncontrollably as a calculation progresses. Imagine if using a measuring tape that was off by a tenth of a percent caused your new deck to collapse rather than stand strong…
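We can reproduce this divergence numerically using the red and blue starting points from the table above. A minimal sketch, assuming the canonical parameter values (σ = 10, ρ = 28, β = 8/3) and a basic Runge-Kutta integrator rather than whatever scheme Lorenz actually ran:

```python
import math

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Rates of change of the Lorenz system at state s = (x, y, z)."""
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt=0.01):
    """One 4th-order Runge-Kutta step of size dt."""
    add = lambda a, b, h: tuple(ai + h * bi for ai, bi in zip(a, b))
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(add(s, k1, dt / 2))
    k3 = lorenz_rhs(add(s, k2, dt / 2))
    k4 = lorenz_rhs(add(s, k3, dt))
    return tuple(
        si + dt / 6 * (a + 2 * b + 2 * c + d)
        for si, a, b, c, d in zip(s, k1, k2, k3, k4)
    )

def separation(a, b):
    """Euclidean distance between two states."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Red: values truncated to three decimals. Blue: the full-precision values.
red = (5.766, -2.456, 32.817)
blue = (5.766450, -2.456460, 32.817251)

start_gap = separation(red, blue)  # under a thousandth
max_gap = 0.0
for _ in range(3000):  # 30 time units of simulated flow
    red, blue = rk4_step(red), rk4_step(blue)
    max_gap = max(max_gap, separation(red, blue))
```

The gap between the two trajectories starts below a thousandth, yet within a few dozen simulated time units it grows by several orders of magnitude, until the two paths are effectively unrelated.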

There is also a hint of a more complex underlying structure: a folded three-dimensional surface that the paths move along. If a longer path is drawn, the semi-regular line graphs from before are revealed to be projections of a structure of undeniable beauty.

This is a rendering of the long journey of a single point in Lorenz’s creation, a path consisting of three million iterations developed in Processing that shows the region of allowable values in the model more clearly. Older values are darker, and the visualization is animated to show the strange dance of the current values of the system as they sweep from arm to arm of this dual-eyed mathematical monster, order generating chaos.

2 thoughts on “Lorenz and the Butterfly Effect”

  1. While I agree that things have gotten better in the mid-range forecast horizon (in relatively stable pattern setups), I also believe it is a bit short-sighted and self-congratulatory to tout these as overwhelmingly successful mid-range examples. From the viewpoint of a meteorologist I think these do appear to be solid examples of good weather prediction, but to the person on the ground (especially in the Mountain West and Pacific Northwest) those slight variations in phasing of troughs and amplitude of ridges, or longitudinal placement of the ridge, often mean all the difference in the world with respect to the ‘weather’ even if the ‘pattern’ is correct. A flatter dirty ridge with filtered sun can mean a 10 degree miss temp-wise, even though we are still under a ridge of similar form (a big deal when talking about WA’s #1 killer in the wintertime: road icing). Additionally, we still have a long road ahead developing skillful mid-range prediction when it matters most: during rapidly evolving weather patterns (think snowstorms, thunderstorms, even hurricanes). While it’s true that we’ve managed to more skillfully forecast large patterns, we remain comparatively inept when it comes to mid- or long-range point prediction of medium/high-impact weather. If we in the met community are going to truly pat ourselves on the back and call a mid-range weather forecast skillful, it needs to be so in all or at least MOST weather patterns. I’m just not so sure we’re there yet. Matt
