Graph Theory, Algorithmic Consensus, and MMA Ranking

I’ve been working on an objective ranking system lately that could be applied to groups with large numbers of individual competitors, like the sport of mixed martial arts (MMA). The biggest issue compared to typical ranking systems is that there are so many participants that they cannot all compete against each other in a round-robin tournament or similar within a reasonable time frame. In order to calculate a global ranking, all of the players must be compared against each other through a sort of “six degrees of separation” style comparison, which is vulnerable to bias and calculation error.

This problem has already been solved in the chess world with the Elo rating system, a statistical approach that requires frequent competition in order to generate statistically significant results. Unfortunately competitors in sports like mixed martial arts or boxing do not compete nearly as frequently as chess players (for obvious reasons) and this approach drowns in a sea of statistical noise. Typically combat sport rankings are done by a knowledgeable observer by hand, through consensus of many observers, or by models with a large number of tunable parameters. It is very interesting to consider that humans appear to be able to easily determine who should be ranked highly, and that many algorithmic approaches largely match these evaluations but make some seemingly obvious mistakes. My goal was to find an approach that produced rankings that seemed sensible to a human observer with a minimum of tunable parameters (preferably none).

Data Structure

The initial step is to structure our data in a sensible way. We have a large number of participants, connected by individual competition which can either result in a win or a loss. One way of structuring this data would be in a directed graph, where competitors are represented by nodes and matches as edges with direction defined by who wins or loses. We seem to be focused on losses (or win/loss ratio) as the biggest factor – a competitor with 40 wins and zero losses is typically regarded as better than a competitor with 60 wins and 20 losses. Let’s set the direction of the edge from the losing competitor to the winning competitor. A “good” competitor’s node will therefore have many incoming edges and few outgoing edges, and tend to be at the center of force-directed graph layouts.
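To make this concrete, here’s a minimal sketch in Python of storing match results as a loser→winner directed graph – the fighters and results are invented for illustration:

```python
from collections import defaultdict

# Hypothetical results, listed as (loser, winner) pairs.
RESULTS = [("B", "A"), ("C", "A"), ("C", "B")]

def build_fight_graph(results):
    """Directed graph: an edge runs from each losing fighter to the winner."""
    edges = defaultdict(list)   # loser -> [winners]
    for loser, winner in results:
        edges[loser].append(winner)
    return edges

def wins_and_losses(edges, fighter):
    """In-degree = wins, out-degree = losses, per the convention above."""
    wins = sum(winners.count(fighter) for winners in edges.values())
    losses = len(edges[fighter])
    return wins, losses

edges = build_fight_graph(RESULTS)
print(wins_and_losses(edges, "A"))  # A beat B and C, never lost: (2, 0)
```

A “good” competitor like A ends up with many incoming edges and few outgoing ones, exactly as described above.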

Evaluation Algorithms

There are many possible evaluation algorithms which will produce a ranking from this data structure. After many trials, two appeared to stand above the rest.

  1. The first is the method recommended in the journal article “Ranking from unbalanced paired-comparison data” by H.A. David, published in Volume 74, Issue 2 of Biometrika in 1987.
  2. David also discussed the Kendall-Wei algorithm in his paper, of which Google’s PageRank algorithm is a special case. PageRank ranks webpages represented as a directed graph, modelling a score that flows along the edges, and may be applied to other directed graphs including our case. PageRank contains one tunable parameter, a damping factor, which was left at the default value of 0.85.
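As a sketch of how PageRank could run over such a loser→winner graph, here is a generic power-iteration implementation – not the exact code used for these rankings, and the toy graph is invented. Dangling nodes (fighters with no recorded losses) redistribute their rank uniformly:

```python
def pagerank(edges, damping=0.85, iters=100):
    """Power-iteration PageRank on a dict graph {node: [out-neighbors]}.

    With edges pointing loser -> winner, rank flows toward winners.
    """
    nodes = set(edges)
    for outs in edges.values():
        nodes.update(outs)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            outs = edges.get(v, [])
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:
                # Dangling node (no losses recorded): spread rank evenly.
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# Toy graph: B and C lost to A, and C also lost to B.
edges = {"B": ["A"], "C": ["A", "B"]}
ranks = pagerank(edges)
# Ranks sum to 1, and A (two wins, no losses) comes out on top.
```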

It was found that the two algorithms emphasized different aspects important to MMA ranking. David’s “Unbalanced Pair Comparison” took a grittier statistics-based approach, highlighting fighters such as Anderson Silva, Rashad Evans, and Jon Fitch. Google’s PageRank took a more social approach, emphasizing fighters with a wide range of quality opponents, like Georges St-Pierre, Matt Hughes, and Forrest Griffin. It was very interesting how one algorithm appeared to capture the “hardcore MMA fan” perspective, while the other seemed to be pulled straight from the UFC head office.

It was decided that both would be calculated, their scores normalized, and the results used in combination to generate a consensus ranking, similar to the consensus rankings generated from human experts. This was inspired by IBM’s Watson, which uses a consensus of multiple algorithms to evaluate answers to trivia questions. Two possible improvements are hypothesized but remain untested:

  1. Perhaps additional independent ranking algorithms incorporated in this consensus would improve accuracy. The big issue appears to be “independent” algorithms which do not simply restate the work of other algorithms, and of those, finding algorithms which display ranking behavior useful for our application.
  2. Unlike Watson, confidence levels are not used. This would be a useful addition given situations like extreme upsets. A newer beta version of this ranking system checks whether highly ranked fighters coincide with graph centrality metrics in an attempt to implement this, but it was not complete at the time of this post.
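The consensus step itself can be sketched in a few lines – min–max normalize each algorithm’s scores, then weight them equally. The scores below are invented for illustration:

```python
def normalize(scores):
    """Min-max scale a {name: score} dict onto [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid dividing by zero when all scores tie
    return {k: (v - lo) / span for k, v in scores.items()}

def consensus(*score_dicts):
    """Equally weight the normalized scores from several ranking algorithms."""
    normed = [normalize(s) for s in score_dicts]
    names = set().union(*score_dicts)
    return {k: sum(s.get(k, 0.0) for s in normed) / len(normed) for k in names}

# Invented raw scores from two hypothetical algorithms.
pagerank_scores = {"GSP": 0.31, "Hughes": 0.22, "Koscheck": 0.12}
pair_scores = {"GSP": 2.4, "Hughes": 2.2, "Koscheck": 1.1}

final = consensus(pagerank_scores, pair_scores)
ranking = sorted(final, key=final.get, reverse=True)
print(ranking)  # ['GSP', 'Hughes', 'Koscheck']
```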

Results

The ranking system was run on every UFC event from UFC 1 (November 12, 1993) to Fight for the Troops 2 (January 22, 2011). Both algorithms are shown ranked alone for comparison, and their scores were equally weighted to produce the final results.

Lightweight (155lbs)
Overall Rank PageRank Unbalanced Pair
1. Gray Maynard 1. B.J. Penn 1. Gray Maynard
2. B.J. Penn 2. Gray Maynard 2. George Sotiropoulos
3. Frankie Edgar 3. Frankie Edgar 3. Frankie Edgar
4. George Sotiropoulos 4. Kenny Florian 4. Jim Miller
5. Jim Miller 5. Joe Lauzon 5. Nik Lentz

First up are the lightweights – and the results aren’t too shabby. No one seems to want to admit it due to his sometimes snooze-inducing style, but Gray Maynard is a beast who would likely cause B.J. Penn significant problems if they ever fought. Frankie Edgar deserves to be right up there but not at number one, and the chronically underrated George Sotiropoulos and Jim Miller round out the pack.

Welterweight (170lbs)
Overall Rank PageRank Unbalanced Pair
1. Georges St-Pierre 1. Georges St-Pierre 1. Matt Hughes
2. Matt Hughes 2. Matt Hughes 2. Josh Koscheck
3. Josh Koscheck 3. Matt Serra 3. Georges St-Pierre
4. Martin Kampmann 4. Dennis Hallman 4. Martin Kampmann
5. Dennis Hallman 5. Martin Kampmann 5. Rick Story

Georges St-Pierre is the obvious frontrunner at 170. Matt Hughes at number two is a bit more debatable, but a long title reign and consistent quality opposition provide a reasonable rationale. Josh Koscheck is perpetually the bridesmaid, never the bride, at third, and Martin Kampmann and Dennis Hallman round out a somewhat thin division.

Middleweight (185lbs)
Overall Rank PageRank Unbalanced Pair
1. Anderson Silva 1. Anderson Silva 1. Anderson Silva
2. Jon Fitch 2. Jon Fitch 2. Jon Fitch
3. Yushin Okami 3. Vitor Belfort 3. Yushin Okami
4. Michael Bisping 4. Nate Marquardt 4. Michael Bisping
5. Nate Marquardt 5. Yushin Okami 5. Demian Maia

Anderson Silva provides another easy choice for number one at 185lbs. Both Jon Fitch and Yushin Okami deserve their spots with a consistent if slightly dull record. Michael Bisping has slowly been grinding his way up the charts, and Nate Marquardt rounds out the top five.

Light Heavyweight (205lbs)
Overall Rank PageRank Unbalanced Pair
1. Rashad Evans 1. Forrest Griffin 1. Rashad Evans
2. Lyoto Machida 2. Lyoto Machida 2. Jon Jones
3. Forrest Griffin 3. Rashad Evans 3. Ryan Bader
4. Quinton Jackson 4. Quinton Jackson 4. Lyoto Machida
5. Mauricio Rua 5. Mauricio Rua 5. Thiago Silva

Rashad Evans appears to have made a sensible call waiting for his title shot at UFC 128. The hypercompetitive light heavyweight division is always a tough one to call. A split in the consensus between the two algorithms produces a top five that seems to emphasize number of fights in the Octagon, with champion Mauricio “Shogun” Rua a surprising fifth. Too early to call Evans over Rua? Only time will tell.

Heavyweight (265lbs)
Overall Rank PageRank Unbalanced Pair
1. Frank Mir 1. Frank Mir 1. Frank Mir
2. Cain Velasquez 2. Brock Lesnar 2. Junior Dos Santos
3. Junior Dos Santos 3. Cain Velasquez 3. Cain Velasquez
4. Brock Lesnar 4. Antonio Rodrigo Nogueira 4. Cheick Kongo
5. Shane Carwin 5. Shane Carwin 5. Brendan Schaub

I initially disagreed with Frank Mir as number one here – Cain Velasquez seems to be the obvious choice. But the ranking process seems to trust number of fights over new hype, and the rest of the top five is bang on what I would choose. You can’t win them all – or perhaps I’m just being unfair to Frank Mir.

Conclusions

The approach produced excellent rankings from UFC-only data, largely coinciding with established and more complete authorities like FightMatrix. Two ranking algorithms traversed a directed graph of results, and their scores were normalized and summed into a final score, which was then sorted to produce the final rankings. One tunable parameter (the PageRank damping factor) exists in the model, but was left at the default value of 0.85. Further work will focus on additional ranking algorithms that may be incorporated into the consensus, parametric analysis of the PageRank damping factor, and determining confidence scores.

River Crossing Problems and Discrete State Spaces

A brain teaser goes as follows: a farmer is returning from market, where he has bought a wolf, a goat, and a cabbage. On his way home he must cross a river by boat, but the boat is only large enough to carry the farmer and one additional item. The farmer realizes he must shuttle the items across one by one. He must be careful however, as the wolf cannot be left alone on the same side of the river as the goat (since the goat will be eaten), and the goat cannot be left alone on the same side of the river as the cabbage (since the goat will eat the cabbage).

How can the farmer arrange his trips to move the wolf, the goat, and the cabbage across the river, ensuring that nothing gets eaten while he’s away? The problem usually yields to trial and error, where one of the possible solutions is found by iterating through the farmer’s possible decisions and checking the outcome.

But is there a more rigorous approach that will allow us to easily find all possible solutions without having to resort to guessing and checking?

A Discrete State Space

Let’s create a three-dimensional vector that represents the state of the system. The components of this vector will be 0 or 1 depending on which side of the river the wolf, the goat, and the cabbage are. Let’s list off all possible combinations without worrying about whether something is going to get eaten – we’ll get to that later.

Vector Location
(0,0,0) Our initial state, with all three on the starting side of the river.
(1,0,0) The wolf has crossed the river, but not the goat or cabbage.
(0,1,0) The goat has crossed the river, but not the wolf or cabbage.
(0,0,1) The cabbage has crossed the river, but not the wolf or goat.
(1,1,0) The wolf and the goat have crossed the river, but not the cabbage.
(0,1,1) The goat and the cabbage have crossed the river, but not the wolf.
(1,0,1) The wolf and cabbage have crossed the river, but not the goat.
(1,1,1) Our desired state, where all three have crossed the river.

Now a giant list like this isn’t much help. We’ve listed off all possible states, but we can structure this in a more sensible manner which will make solving the problem much easier. Let’s allow the vectors to represent the corners of a cube, with the edges of the cube representing trips across the river. Movement along the x axis represents the wolf moving, movement along the y axis represents the goat moving, and movement along the z axis represents the cabbage moving.

Alright, that seems a bit better. What we need to do now is find an allowable path along the edges of the cube from (0,0,0) to (1,1,1), and this will represent the solution to our puzzle. First we need to remove the edges where something gets eaten.

Path Removed Rationale
(0,0,0) to (1,0,0) This moves the wolf across, leaving the goat with the cabbage.
(0,0,0) to (0,0,1) This moves the cabbage across, leaving the wolf with the goat.
(0,1,1) to (1,1,1) This moves the wolf across, leaving the goat with the cabbage.
(1,1,0) to (1,1,1) This moves the cabbage across, leaving the wolf with the goat.

Our allowable state space now looks like this.

The problem has simplified itself drastically. We want to go from (0,0,0) to (1,1,1) by travelling along the black (allowable) edges of this cube. We can see that if we ignore solutions where the farmer loops pointlessly, there are two allowable solutions to this puzzle.

Solution 1:

  1. Move goat to other side.
  2. Move cabbage to other side.
  3. Move goat back.
  4. Move wolf to other side.
  5. Move goat to other side.

Solution 2:

  1. Move goat to other side.
  2. Move wolf to other side.
  3. Move goat back.
  4. Move cabbage to other side.
  5. Move goat to other side.
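This enumeration can be checked mechanically. The sketch below rebuilds the pruned cube – using the rule that an edge is removed when the two items left behind share a riverbank and form a dangerous pair, which reproduces exactly the four removed edges above – and collects every loop-free path from (0,0,0) to (1,1,1):

```python
ITEMS = ("wolf", "goat", "cabbage")
DANGEROUS = {frozenset({"wolf", "goat"}), frozenset({"goat", "cabbage"})}

def allowed(a, b):
    """Edge a -> b moves exactly one item; it is removed when the two
    items left behind share a bank and form a dangerous pair."""
    moved = next(i for i in range(3) if a[i] != b[i])
    stay = [i for i in range(3) if i != moved]
    same_side = a[stay[0]] == a[stay[1]]
    pair = frozenset(ITEMS[i] for i in stay)
    return not (same_side and pair in DANGEROUS)

def solutions(state=(0, 0, 0), goal=(1, 1, 1), path=None):
    """Depth-first enumeration of loop-free paths along allowed cube edges."""
    path = path or [state]
    if state == goal:
        yield path
        return
    for i in range(3):
        # Flip one coordinate: that item (and the farmer) crosses the river.
        nxt = tuple(s ^ (1 if j == i else 0) for j, s in enumerate(state))
        if nxt not in path and allowed(state, nxt):
            yield from solutions(nxt, goal, path + [nxt])

paths = list(solutions())
print(len(paths))  # exactly two loop-free solutions, each five crossings
```

Both paths begin with the forced first move (0,0,0) → (0,1,0): the goat must cross first.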

This is a beautiful example of the simple structures that underlie many of these classic riddles.

Triumph of the Golden Rule

We live in a world with other people. Almost every decision we make involves someone else in one way or another, and we face a constant choice regarding just how much we’re going to trust the person on the other side of this decision. Should we take advantage of them, go for the quick score and hope we never see them again – or should we settle for a more reasonable reward, co-operating in the hope that this peaceful relationship will continue long into the future?

We see decisions of this type everywhere, but what is less obvious is the best strategy for us to use to determine how we should act. The Golden Rule states that one should “do unto others as you would have them do unto you”. While it seems rather naive at first glance, if we run the numbers, we find something quite amazing.

A Dilemma

In order to study these types of decisions, we have to define what exactly we’re talking about. Let’s define just what a “dilemma” is. Let’s say it has two people – and they can individually decide to work together for a shared reward, or screw the other one over and take it all for themselves. If you both decide to work together, you both get a medium-sized reward. If you decide to take advantage of someone but they trust you, you’ll get a big reward (and the other person gets nothing). If you’re both jerks and decide to try to take advantage of each other, you both get a tiny fraction of what you could have. Let’s call these two people Alice and Bob – here’s a table to make things a bit more clear.

               | Alice cooperates | Alice defects
Bob cooperates | Everyone wins! A medium-sized reward to both for mutual co-operation. | Poor Bob. He decided to trust Alice, who screwed him and got a big reward. Bob gets nothing.
Bob defects    | Poor Alice. She decided to trust Bob, who took advantage of her and got a big reward. Alice gets nothing. | No honour among thieves… both Bob and Alice take the low road, and fight over the scraps of a small reward.

This specific order of rewards is referred to as the Prisoner’s Dilemma, and was formalized and studied by Melvin Dresher and Merrill Flood in 1950 while working for the RAND Corporation.

Sale, One Day Only!

Now of course the question is – if you’re in this situation, what is the best thing to do? First suppose that we’re never, ever going to see this other person again. This is a one-time deal. Absent any moral consideration, your best option for the most profit is to attempt to take advantage of the other person and hope they’re clueless enough to let you – capitalism at its finest. You could attempt to cooperate, but that leaves you open to the other party screwing you. If each person acts rationally in their own interest, they will attempt to one-up the other.

But there’s just one problem – if both people act in this way, they both get much less than they would if they simply cooperated. This seems very strange, as the economic models banks and other institutions use to model human behavior assume this type of logic – the model of the rational consumer. But this leads to nearly the worst possible option if both parties take this approach.

It seems that there is no clear ideal strategy for a one time deal. Each choice leaves you open to possible losses in different ways. At this point it’s easy to toss up your hands, leave logic behind, and take a moral stance. You’ll cooperate because you’re a good person – or you’ll take advantage of the suckers because life just isn’t fair.

And this appears to leave us where we are today – some good people, some bad people, and the mythical invisible hand of the market to sort them all out. But there’s just one little issue. We live in a world with reputations, with friends, and with foes – there are no true “one time” deals. The world is small, and people remember.

In it for the Long Run

So instead of thinking of a single dilemma, let’s think about what we should do if we get to play this game more than once. If someone screws you in the first round, you’ll remember – and probably won’t cooperate the next time. If you find someone who always cooperates, you can join them and work together for your mutual benefit – or decide that they’re an easy mark and take them for everything they’ve got.

But what is the best strategy? In an attempt to figure this out, in 1980 Robert Axelrod decided to have a contest. He sent the word out, and game theorists, scientists, and mathematicians all submitted entries for a battle royale to determine which strategy was the best.

Each entry was a computer program designed with a specific strategy for playing this dilemma multiple times against other clever entries. The programs would play this simple dilemma, deciding whether to cooperate or defect against each other, for 200 rounds. Five points for a successful deception (you defect, they cooperate), three points each for mutual cooperation, one point each if you both tried to screw each other (mutual defection), and no points if you were taken advantage of (you cooperate, they defect). Each program would play every other program as well as a copy of itself, and the program with the largest total score over all the rounds would win.

So what would some very simple programs be?

ALL-C (always cooperate) is just like it sounds. Cooperation is the only way, and this program never gets tired of being an upstanding guy.

ALL-D (always defect) is the counterpoint to this, and has one singular goal. No matter what happens, always, always, always try to screw the other person over.

RAND is the lucky dunce – don’t worry too much, just decide to cooperate or defect at random.

You can predict how these strategies might do if they played against each other. Two ALL-C strategies would endlessly cooperate in a wonderful dance of mutual benefit. Two ALL-D strategies would continually fight, endlessly grinding against each other and gaining little. ALL-C pitted against ALL-D would fare about as well as a fluffy bunny in a den of wolves – eternally cooperating and hoping for reciprocation, but always getting the shaft with ALL-D profiting.

So an environment of ALL-C would be a cooperative utopia – unless a single ALL-D strategy came in, and started bleeding them dry. But an environment entirely made of ALL-D would be a wasteland – no one would have any success due to constant fighting. And the RAND strategy is literally no better than a coin flip.

Time to Think

So what should we do? Those simple strategies don’t seem to be very good at all. If we think about it however, there’s a reason they do so poorly – they don’t remember. No matter what the other side does, they’ve already made up their minds. Intelligent strategies remember previous actions of their opponents, and act accordingly. The majority of programs submitted to Axelrod’s competition incorporated some sort of memory. For instance, if you can figure out you’re playing against ALL-C, it’s time to defect. Just like in the real world, these programs tried to figure out some concept of “reputation” that would allow them to act in the most productive manner.

And so Axelrod’s competition was on. Programs from all over the world competed against each other, each trying to maximize their personal benefit. A wide variety of strategies were implemented from some of the top minds in this new field. Disk drives chattered, monitors flickered, and eventually a champion was crowned.

And the Winner Is…

When the dust settled, the winner was clear – and the victory was both surprising and inspiring. The eventual champion seemed a 90 lb weakling at first glance: a mere four lines of code submitted by Anatol Rapoport, a mathematical psychologist from the University of Toronto. It was called “Tit-for-Tat”, and it did exactly that. It started every game by cooperating, and from then on did exactly what the other player had done on the previous turn. It cooperated with the “nice” strategies, butted heads with the “mean” strategies, and managed to come out on top of far more complex approaches.
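A minimal sketch of the match mechanics, using the 5/3/1/0 payoffs and 200 rounds described earlier – strategies here are simply functions that see the opponent’s history of moves:

```python
# Payoffs from the tournament: (my move, their move) -> my points.
PAYOFF = {("C", "C"): 3, ("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0}

def all_c(opponent_history):
    return "C"                     # always cooperate

def all_d(opponent_history):
    return "D"                     # always defect

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def play_match(strat_a, strat_b, rounds=200):
    """Run one iterated match, returning total scores for both players."""
    hist_a, hist_b = [], []        # each side's own moves so far
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)   # each strategy sees the *opponent's* history
        move_b = strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, tit_for_tat))  # (600, 600): endless cooperation
print(play_match(tit_for_tat, all_d))        # (199, 204): one sucker round, then stalemate
```

Note that Tit-for-Tat never outscores its direct opponent in a match – it loses narrowly to ALL-D here – yet across a full field of opponents its totals put it on top in Axelrod’s tournaments.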

The simplest and shortest strategy won, a program that precisely enforced the Golden Rule. But what precisely made Tit-for-Tat so successful? Axelrod analyzed the results of the tournament and came up with a few principles of success.

  • Don’t get greedy. Tit-for-Tat can never beat another strategy. But it never allows itself to take a beating, ensuring it skips the brutal losses of two “evil” strategies fighting against each other. It actively seeks out win-win situations instead of gambling for the higher payoff.
  • Be nice. The single best predictor of whether a strategy would do well was if they were never the first to defect. Some tried to emulate Tit-for-Tat but with a twist – throwing in the occasional defection to up the score. It didn’t work.
  • Reciprocate, and forgive. Other programs tended to cooperate with Tit-for-Tat since it consistently rewarded cooperation and punished defection. And Tit-for-Tat easily forgives – no matter how many defections it has seen, if a program decides to cooperate, it will join them and reap the rewards.
  • Don’t get too clever. Tit-for-Tat is perfectly transparent, and it becomes obvious that it is very, very difficult to beat. There are no secrets, and no hypocrisy – Tit-for-Tat gets along very well with itself, unlike strategies biased toward deception.

The contest attracted so much attention that a second one was organized, and this time every single entry was aware of the strategy and success of Tit-for-Tat. Sixty-three new entries arrived, all gunning for the top spot. And once again, Tit-for-Tat rose to the top. Axelrod used the results of these tournaments to develop ideas about how cooperative behaviour could evolve naturally, and eventually wrote a bestselling book called The Evolution of Cooperation. But his biggest accomplishment may be showing us that being nice does pay off – and giving us the numbers to prove it.

Color and Reality

When I was a kid, I used to wonder if everyone saw the world in the same way. We can all look at the same grass, but maybe the color I called green showed up in my brain as the color my friend called blue. Maybe all of our colors were shifted around to the point where all the colors were accounted for, but how we perceived them was shuffled up. I thought it would be remarkably exciting, and hoped that I could see the world through someone else’s brain to see if, in fact, this was true.

[Image: a grassy meadow]

My eight-year-old self would be bitterly disappointed that technology today has not progressed far enough to make that wish a reality. At the time, we had to settle the debate in another manner – by asking an adult, a source of concrete and immutable knowledge. The answer I was given was that everyone sees the same colors, of course (although why this was so obvious was never really clear), and that if they didn’t it wouldn’t matter much, since we couldn’t tell. Color was “real” – bits of light had a color (later I found out we could call it the wavelength of a photon), it hit our eyes, and our brains converted it to a beautiful image.

The only problem is that this is wrong.

Color as Wavelength

Well, alright. Before you get upset, it isn’t completely wrong. We were all taught about Sir Isaac Newton, who discovered that a glass prism can split white light apart into its constituent colors.

[Image: a prism splitting white light, à la Pink Floyd’s Dark Side of the Moon]

While we consider this rather trivial today, at the time you’d be laughed out of the room if you suggested this somehow illustrated a fundamental property of light and color. The popular theory of the day was that color was a mixture of light and dark, and that prisms simply colored light. Color went from bright red (white light with the smallest amount of “dark” added) to dark blue (white light with the most amount of “dark” added before it turned black).

Newton showed this to be incorrect. We now know that light is made up of tiny particles called photons, and these photons have something called “wavelength” that seems to correspond to color. Visible light is made up of a spectrum, a huge number of photons each with a different wavelength our eyes can see. When combined, we see it as white light.

[Image: the visible light spectrum]

So this appears to resolve my childhood debate. Light of a single wavelength (like that produced by a laser) corresponds to a single “real” color. The brain just translates wavelengths into colors somehow, and that is that. There’s just one problem.

We’re missing a color!

Color as Experience

To find out just what we’re missing, we have to consider how we can combine colors. For instance, you learned some basic color mixing rules as a kid. In this case, let’s use additive color mixing since we’re mixing light.

[Image: additive color mixing]

Let’s find two colors on the spectrum line, and then estimate the final color they’ll produce when mixed by finding the midpoint.

Red and green make yellow.

[Image: red and green on the spectrum line, with the midpoint at yellow]

Green and blue make turquoise.

[Image: green and blue on the spectrum line, with the midpoint at turquoise]

Red and blue make…

[Image: red and blue on the spectrum line, with the midpoint at green]

Green? What? That doesn’t seem to make any sense! Red and blue make pink! But where is pink in our spectrum? It’s not violet, it’s not red – it seems like it should be simultaneously above and below our spectrum. It’s not on the spectrum at all!
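The naive midpoint rule is easy to write down and watch fail. The wavelengths and band boundaries below are rough assumed values for illustration:

```python
# Rough lower boundaries of visible spectral bands, in nanometres (assumed).
BANDS = [(380, "violet"), (450, "blue"), (485, "cyan"),
         (500, "green"), (565, "yellow"), (590, "orange"), (625, "red")]

def band(wavelength_nm):
    """Name the spectral band a wavelength falls into (rough boundaries)."""
    name = "violet"
    for lower, label in BANDS:
        if wavelength_nm >= lower:
            name = label
    return name

def naive_mix(w1, w2):
    """Predict a mixed color by taking the midpoint on the spectrum line."""
    return band((w1 + w2) / 2)

print(naive_mix(640, 520))  # red + green -> "yellow", as expected
print(naive_mix(520, 470))  # green + blue -> "cyan" (turquoise)
print(naive_mix(640, 470))  # red + blue -> "green"?! the model breaks down
```

There is no wavelength whose band could be called pink, so no midpoint can ever produce it.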

So we’re forced to realize a very interesting conclusion. The wavelength of a photon certainly reflects a color – but we cannot produce every color the human eye sees by a single photon of a specific wavelength. There is no such thing as a pink laser – two lasers must be mixed to produce that color. There are “real” colors (we call them pure spectral or monochromatic colors) and “unreal” colors that only exist in the brain.

A Color Map

So what are the rules for creating these “unreal” colors from the very real photons that hit your eye? Well, in the 1920s W. David Wright and John Guild both conducted experiments designed to map how the brain mixes monochromatic light into the millions of colors we experience every day. They set up a split screen – on one side, they projected a “test” color. On the other side, the subject could mix together three primary colors produced by projectors to match the test color. After a lot of test subjects and a lot of test colors, the CIE 1931 color space was eventually produced.

[Image: the CIE 1931 color space]

I consider this to be a map of the abstractions of the human brain. On the curved border we can see numbers, which correspond to the wavelengths in the spectrum we saw earlier. We can imagine the spectrum bent around the outside of this map – representing “real” colors. The inside represents all the colors our brain produces by mixing – the “unreal” colors.

So let’s try this again – with a map of the brain instead of a map of photon wavelengths. Red and green make yellow.

[Image: red and green mixed on the CIE diagram, producing yellow]

Green and blue make turquoise.

[Image: green and blue mixed on the CIE diagram, producing turquoise]

Blue and red make…

[Image: blue and red mixed on the CIE diagram, producing magenta]

Pink! Finally! Note that pink is not on the curved line representing monochromatic colors. It is purely a construction of your brain – not reflective of the wavelength of any one photon.
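The mixing rule on this map can also be sketched: mixed light always lands on the straight line between its two source points, weighted by luminance. The chromaticity coordinates below are rough, assumed values in the neighbourhood of typical red and blue primaries:

```python
def mix_xy(p1, p2):
    """Mix two lights given as (x, y, luminance Y) chromaticity triples.

    Convert each to XYZ tristimulus values, add them, and convert back
    to (x, y): the result lies on the segment between the source points.
    """
    X = Y = Z = 0.0
    for x, y, lum in (p1, p2):
        X += x * lum / y
        Y += lum
        Z += (1 - x - y) * lum / y
    total = X + Y + Z
    return X / total, Y / total

# Assumed rough chromaticities: red ~ (0.68, 0.32), blue ~ (0.15, 0.06).
x, y = mix_xy((0.68, 0.32, 1.0), (0.15, 0.06, 1.0))
print(round(x, 3), round(y, 3))  # ~ (0.234, 0.101): off the spectral locus
```

The mixed point sits on the blue–red edge of the diagram, well away from the curved line of monochromatic colors – a non-spectral purple.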

Is Color Real?

So is color real? Well, photons with specific wavelengths do seem to correspond to specific colors. But the interior of the CIE 1931 color space is a representation of a most ridiculously abstract concept, labels that aren’t even labels – something our brain experiences and calculates from combinations of photon wavelengths. It is an example of what philosophers call qualia – a subjective quality of consciousness.

I later learned that my childhood argument was a version of the inverted spectrum argument first proposed by John Locke, and that the “adult” perspective of everyone seeing the same colors (and it not really mattering if they didn’t) was argued by the philosopher Daniel Dennett.

I have come no closer to resolving my question from long ago of “individual spectrums” – but for the future, I vow to pay more attention to the idle questions of children.

Clever as a Fox

Sometimes we see things so often that we simply forget to ask “why are they like that?” For instance, let’s take a closer look at domestic animals. Dogs, cats, horses, cows, pigs – animals that we live with, and who couldn’t live without us.

Common Traits

What do all these domestic animals have in common?

[Images: piebald puppy, cat, dog, cow, horse, and pig]

Now this isn’t a particularly subtle example, but that’s kind of the point. You can see that all of these domestic animals have large white patches – they’ve lost pigment in their coats in some areas. Why do we care? Well, this is something that is extremely common among domesticated animals, but very rare among wild animals. I hear you saying “but what about zebras, or any other wild animal with white patches?”. What we’re referring to here is slightly different. A zebra will always have that patterning, whereas what we’re looking at here is depigmentation – the loss of color in certain areas in an animal that is “normally” colored.

What else is common among domestic animals but rare in the wild? Well, things like dwarf and giant varieties, floppy ears, and non-seasonal mating. Charles Darwin, in Chapter One of On the Origin of Species, noted that “not a single domestic animal can be named which has not in some country drooping ears”. A very significant observation when you consider that only a single wild animal has drooping ears – the elephant.

So perhaps something weird is going on here. Why do animals as different as cats and dogs have these common traits? It seems to arise simply from being around humans!

The Hypothesis

[Image: Dmitri Belyaev]

The Russian geneticist Dmitri Belyaev provided a very interesting potential explanation. Genetics at the time was preoccupied with easily measurable traits that could be passed on – if you bred dogs, you could pick the biggest puppies, breed them, and they would produce bigger dogs on average. Fine. But that is selection of a single simple trait, something that likely did not require that many genes to “switch” in order for the puppies to be bigger.

But what if you were selecting for something more complicated? What if, instead of selecting for a simple trait like size or eye color, you selected for something more vague like behaviour – in this case, the very behaviour that made these animals more likely to be around humans. We can call it tamability, or lack of aggressiveness, or whatever – the point is, we are selecting for those animals who will behave in a manner we want around us. A wolf who does not display aggressive behaviour might be able to grab a few scraps of food from the garbage pile of an early human settlement, rather than being driven off.

And if we were selecting for a complicated behaviour, rather than a simple trait, it seems likely that more of the animal’s genetic code would need to change. And since the genetic code is a tangled web, where a small bit of DNA can be referenced in many areas of the body, perhaps selecting for a common behaviour would also cause other common traits to arise in animals that are otherwise different.

It’s like giving your car a paint job versus trying to make it go faster – the paint job is easy, but making the car faster could lead to other traits you didn’t directly request, like consuming more gas during regular driving, and that side effect could be common across all your project cars. One is a low-level trait (the paint, the size of the puppy) that can be captured in a tiny bit of information (color, size); the other is a high-level trait (speed, tamability) that requires a wide variety of sub-systems to change as well.

The Experiment

Now if you were a Soviet scientist in the late 1950s, you probably worked on something awesome like a giant robot that shot nuclear missiles, or a flying submarine. Not Dmitri Belyaev. No, he lost his job as head of the Department of Fur Animal Breeding at the Central Research Laboratory of Fur Breeding in Moscow in 1948 because he was committed to the theories of classical genetics rather than the very fashionable (and totally wrong) theories of Lysenkoism.

So instead, he started breeding foxes. Well, it was technically an experiment to study animal physiology, but that was more of a ruse to get his Lysenkoism-loving bosses off his back while he could study genetics and his theories of selecting for behaviour.

He started out with 130 silver foxes. Like foxes in the wild, their ears are erect, the tail is low slung, and the fur is silver-black with a white tip on the tail. Tameness was selected for rigorously – only about 5% of males and 20% of females were allowed to breed each generation.

At first, all foxes bred were classified as Class III foxes: tamer than the calmest farm-bred foxes, but still fleeing from humans and biting if stroked or handled.

The next generation of foxes were deemed Class II foxes. Class II foxes will allow humans to pet them and pick them up, but do not show any emotionally friendly response to people. If you are a cat owner, you would call the experiment a success at this point.

Later generations produced Class I foxes. They are eager to establish human contact, and will wag their tails and whine. Domesticated features were noted to occur with increasing frequency.

Forty years after the start of the experiment, 70 to 80 percent of the foxes are now Class IE – the “domesticated elite”. When raised with humans, they are affectionate devoted animals, capable of forming strong bonds with their owner.

These “elite” foxes also exhibit domestic features such as depigmentation (1,646% increase in frequency), floppy ears (35% increase in frequency), short tails (6,900% increase in frequency), and other traits also seen frequently in domesticated animals.
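Those "percent increase in frequency" figures are easy to misread – a 1,646% increase does not mean 1,646% of foxes show the trait, it means the trait became roughly 17 times more common than in the founding population. A minimal sketch of the arithmetic (the baseline and later frequencies below are hypothetical numbers chosen purely to illustrate the calculation, not values reported from the experiment):

```python
def percent_increase(baseline, observed):
    """Percent increase of a trait's frequency relative to its baseline frequency."""
    return (observed - baseline) / baseline * 100.0

# Hypothetical illustration: if 0.71% of the founding foxes showed
# depigmentation and 12.4% of a later generation did, that change
# would be reported as roughly a 1,646% increase in frequency.
print(round(percent_increase(0.71, 12.4)))  # -> 1646
```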

The Results

Belyaev passed away in 1985, but he was able to witness the early success of his hypothesis: that selecting for behaviour can cause cascading changes throughout the entire organism. For instance, the current explanation for the loss of pigment is that melanin (a compound that colors the coat of the animal) shares a common pathway with adrenaline (a compound that increases the “fight or flight” instinct of an animal). Reducing adrenaline (by selecting for tame animals) inadvertently reduces melanin, causing the observed depigmentation effects.

So if Belyaev is right, genetics is not just a long, slow process that works on tiny incremental tweaks. Complicated environmental pressures can result in complicated genetic results, in a stunningly quick period of time. Where do I think we’re going with this?

Well, designer pets for one. Following the collapse of the Soviet Union, the project ran into serious financial trouble in the late 1990s. They had to drastically cut down the number of foxes, and the project survived primarily on funding obtained from selling the tame foxes as exotic pets. Imagine a menagerie of dwarf exotic animals, who crave human attention and form bonds with people. It would be obscenely profitable.

And the out-there thought for the day? We’re doing this to ourselves. We don’t encourage people to act aggressively all day to everyone they meet. We reward certain behaviours more than other behaviours. My unprovable conjecture? Humanity is selecting itself for certain behaviours, and the traits we think of as fundamentally human (loss of hair, retention of juvenile characteristics relative to other primates) are a side effect of this self-selection.

Videos

Here are some great videos with footage of the tame foxes.

From NOVA – Dogs and More Dogs (starts at about 17:30)

“Suddenly, it all started to make sense. As Belyaev bred his foxes for tameness, over the generations their bodies began producing different levels of a whole range of hormones. These hormones, in turn, set off a cascade of changes that somehow triggered a surprising degree of genetic variation.

Just the simple act of selecting for tameness destabilized the genetic make up of these animals in such a way that all sorts of stuff that you would never normally see in a wild population suddenly appeared.” (Full transcript)

The Ultimate Home Laser Show

Note: instructions for an even-more-ultimate laser show are coming soon, in the meantime check out a sneak peek of it in action.

This is probably the coolest thing I’ve ever made. It’s quite a step up from the Five Dollar Laser Show I posted a bit back. The only logical step after building that was to drastically increase the power and number of beams. I loved the effect that it generated, but it wasn’t bright enough, and only covered a small portion of the wall.

This is my first attempt at solving those two issues, and I think it worked out pretty well. Here’s the new version in action on the ceiling of my living room.

It’s quite an effect, and rather hypnotizing – it’s quite easy to zone out and become completely absorbed in the music. The multiple vibrating beams coming out of the unit also look amazing when fog or smoke is in the room.

Even the most ADHD-addled individual, myself included, tends to go “whoa”. You can also do cool things like hooking it up to a microphone and watching the patterns your voice makes. A disco ball or mirrors stuck to the ceiling help spread the effect around even more.

So how does one go about making one of these things? Well, I made every effort to make construction as simple as possible, for two reasons. One, electrical engineering is far from my speciality and I didn’t want to hurt myself or ruin a laser I could only afford one of. Two, I always hated seeing incredibly awesome projects on the internet that I never had any hope of building due to funds and bizarre parts. That’s not to say you don’t need some basic soldering and construction skills, as well as a healthy respect for the power of laser light, but it’s definitely doable if you put your mind to it.

The full details are in the links below, but what you basically need are a heatsinked lab-style laser (so it can run for a few hours – high power laser pointers will get too hot), a diffraction grating, a pair of old headphones, and a few electrical parts to tie it all together.

So if you’re the type who enjoys projects, I strongly recommend giving this a shot – it’s proof positive that you can obtain amazing results without the backing of a large electronics company. If you do end up building one, please send me a link to the results so I can see how it turns out!

Instructions