It Can Be Smart To Be Crazy

In a complex world where everybody wants something out of life, it is not always an
advantage to be selfish. It can be good to be nice and altruistic. It might be smart to be crazy and stupid. And sometimes it even pays to be your own worst enemy.

(Published in Ingeniøren, 13 February 2009. Online link (in Danish): http://ing.dk/artikel/det-kan-vaere-klogt-vaere-dum-95698. Translated by the author.)

Poker star Gus Hansen has a reputation for being lucky. Often, just when his opponent decides to punish Gus for his reckless bets with mediocre cards, the Dane holds the highest hand and wins the pot. But that has nothing to do with luck. His aggressive style is a cleverly developed 'table image', and being able to use your table image to your own advantage is one of the most important weapons in professional poker.

Mathematical game theory is still not close to describing the complexity of advanced strategy games like real-life poker. But it has begun to explain why crazy ideas, bad bets and ill-advised decisions might have a place in an otherwise logical and rule-based game universe.

One such explanation was recently developed by Professor David Wolpert of the Santa Fe Institute in New Mexico, in a paper published together with Julian Jamison, David Newth, and Michael Harre in the Proceedings of the National Academy of Sciences. They use Gus Hansen's method: develop a suitable table image that convinces the other players to change their strategy. If you succeed, you might look 'stupid', but you will increase your chip stack in the long run.

Abnormal Game Theory
Since modern game theory was introduced by John von Neumann and Oskar Morgenstern in the 1944 book Theory of Games and Economic Behavior, a vast number of applications have been developed in fields as diverse as economics, evolutionary psychology, ecology, anthropology and political science. Most of the time, researchers have investigated equilibrium strategies, the so-called Nash equilibria, on the expectation that actors behave rationally and selfishly in order to maximize their payoff. But this is not always an optimal strategy.

Let's take an example: game theory assesses which strategy I should use in a competition with another actor so that I get the best result for myself. Graphically, this is normally illustrated by a payoff matrix:


In the table rows I can see how much I will win by either cooperating or defecting. If we both act perfectly rationally, I should cooperate (and get 6 points) and my opponent should defect (and get 1 point), because any departure from this equilibrium would reduce our payoffs: if I suddenly decided to defect, my payoff would drop from 6 to 4, and if my opponent suddenly decided to cooperate, his payoff would drop from 1 to 0. This strategy combination, with payoffs (6; 1), is called a Nash equilibrium, and a player who deviates from it is defined as being 'not rational'.

However, it is obvious from the matrix above that my opponent would like me to defect, no matter what. A Gus Hansen strategy could be useful for him here: act irrationally and shuffle randomly between cooperation and defection. This would give me 0 points half the time and 6 points the other half, since I still cooperate all the time. On average this gives me 3 points per turn, while our imaginary Gus gets 0.5 points per turn. In this situation it is better for me to shift strategy and defect all the time, because against his random play I would get an average payoff of 4.5.


This is exactly what 'The Great Dane' has been waiting for! Now his average return increases to 5.5 points, which is even more than I get. If Gus had been 'rational', he would have stayed with 1 point and I would have continued to get 6 points. But now, because his table image made me change my strategy, I get less and he gets a lot more.
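For readers who want to check the arithmetic, here is a minimal sketch in Python. The payoff matrix is reconstructed from the numbers quoted above; Gus's two payoffs in the row where I defect are not stated individually, only their 5.5-point average, so the 6/5 split below is an assumption.

```python
# Payoff matrix reconstructed from the example above, as (my points, Gus's points).
# Gus's payoffs in the "I defect" row are assumed: the text only gives their
# 5.5-point average, so the 6/5 split is a guess that matches it.
payoffs = {
    ("cooperate", "cooperate"): (0, 0),
    ("cooperate", "defect"):    (6, 1),   # the Nash equilibrium of the example
    ("defect",    "cooperate"): (5, 6),   # Gus's 6 points: assumed
    ("defect",    "defect"):    (4, 5),   # Gus's 5 points: assumed
}

def average_payoff(my_move, gus_mix):
    """My and Gus's average points when I play my_move and Gus randomizes."""
    me  = sum(p * payoffs[(my_move, g)][0] for g, p in gus_mix.items())
    gus = sum(p * payoffs[(my_move, g)][1] for g, p in gus_mix.items())
    return me, gus

fifty_fifty = {"cooperate": 0.5, "defect": 0.5}

print(average_payoff("cooperate", fifty_fifty))  # (3.0, 0.5): I keep cooperating
print(average_payoff("defect", fifty_fifty))     # (4.5, 5.5): I switch to defecting
```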

How to Repair the Theory?
Examples like these (see also BOX 1: The Traveler's Dilemma) have made game theory only a limited success in real-life applications. Time after time, experiments have shown that people do not act rationally: they cooperate voluntarily, act irrationally, or go against their better judgment. The greatest and most important results in game theory have therefore revolved around the reasons why people are not rational actors. Eight Nobel Prizes have already been awarded to researchers in this field, and scientists continue to find fruitful paradoxes between the perceived beauty of the theory and the mess of reality.

BOX 1: The Traveler's Dilemma
Just emerged from the subterranean catacombs of the airport, you find your suitcase lying on the baggage conveyor belt, crumpled like a tin can. You anxiously open it and find that the antique vase you had packed so carefully in foam and cardboard, and wrapped in a thick blouse, has turned into a jigsaw puzzle of thirteen pieces.
The airline representative says that you will be reimbursed with an amount between 20 and 1000 dollars. The only thing you need to do is fill out a form and write down the true value of the vase. The representative also informs you that the airline has received a claim from another passenger on the same plane, whose identical vase was also smashed.
If you both write down the same value, the airline will believe you and pay that value to each of you. But if one of you writes down a lower number than the other, the airline will pay the lower value to both, give a 20-dollar bonus to the person who wrote the lower and presumably more truthful number, and punish the higher bidder with a 'fine' of 20 dollars.
So what to do? Your first impulse might be to write down a greedy 1000 dollars. But your well-honed mathematical skills quickly tell you to go lower. You might get more if you write 990 dollars instead: if the other passenger writes 1000 dollars, you will get 1010 dollars (990 + 20) and he or she will only get 970 dollars (990 - 20); and even if she also writes 990, you will both end up with 990 dollars, which is still more than the 970 you would have gotten by writing 1000. And if she bids less than that, it doesn't matter anyway whether your form says 1000 or 990 dollars.
Before you finish writing the first 9, a chill runs down your spine. The other passenger can't be anyone other than the cunning stockbroker lady next to you. She is relentless and extremely intelligent. She will certainly have thought this through, and your reputation as a rational genius is on the line. It is therefore wise to write 980, for the same reasons it was wise to write 990. But wait... she will know that too! Let's write 970, or better 960... no, 950... no, 940...
This is the crux of the matter: if both passengers were perfectly rational and selfish, they would both write down 20 dollars. This is as logical as it is crazy. Game theoreticians have conducted real-life experiments with the dilemma to see if people really behave as the theory predicts. Of course they don't. The average lies somewhere between 990 and 1000 dollars, far away from the Nash equilibrium, and even trained game theoreticians would never write down a meager 20 dollars. So the question is: what is wrong with game theory, and how can it be fixed so that it can explain real-life phenomena?
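The unravelling in the box can be checked mechanically. Below is a small Python sketch of the payout rule described above (assuming claims in whole dollars): repeatedly asking "what is my best claim if I knew hers?" walks the claim all the way down to the 20-dollar Nash equilibrium.

```python
LOW, HIGH, BONUS = 20, 1000, 20   # allowed claims and the bonus/fine from the story

def my_payout(my_claim, her_claim):
    """The airline's rule: the lower claim is paid to both, the lower bidder
    gets a 20-dollar bonus, and the higher bidder pays a 20-dollar fine."""
    if my_claim == her_claim:
        return my_claim
    if my_claim < her_claim:
        return my_claim + BONUS
    return her_claim - BONUS

def best_reply(her_claim):
    """My most profitable claim if I knew what she was going to write."""
    return max(range(LOW, HIGH + 1), key=lambda c: my_payout(c, her_claim))

claim = HIGH
while best_reply(claim) != claim:
    claim = best_reply(claim)   # one more round of "but she will think of that too"
print(claim)                    # prints 20: the Nash equilibrium of the dilemma
```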
So, how to repair game theory? The example of the Traveler's Dilemma shows that arguments about people being nice and altruistic cannot be the whole story. A large part of the problem seems to be a rather rigid definition of rationality. Gambling, for instance, is not about rationality; it is about maximizing profit.

The idea of adopting different images for different situations (Wolpert calls them 'personae') might contain an important insight that could resolve many of the existing game-theoretical paradoxes. The researchers show how such a flexible adoption of strategies can account for some of the discrepancies between experimental and theoretical data, for instance in the Traveler's Dilemma: if you take into account that your opponent won't be 'rational' in the strict sense of requesting only 20 dollars from the company, the game-theoretical prediction ends up at an average of 970-980 dollars, more or less the same result as in real-life experiments.
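Wolpert and Jamison's actual calculation is more involved than this, but the flavour can be reproduced with a toy computation. In the sketch below, the belief that the other passenger will claim somewhere between 900 and 1000 dollars is purely my own assumption, a stand-in for 'she will not play the strictly rational 20 dollars'; against such a belief, the best claim also lands near the top of the range rather than at 20.

```python
# Toy illustration only: the belief below (her claim uniform over 900-1000 dollars)
# is an assumption for the sake of the example, not the persona model from the paper.
LOW, HIGH, BONUS = 20, 1000, 20

def my_payout(my_claim, her_claim):
    if my_claim == her_claim:
        return my_claim
    return my_claim + BONUS if my_claim < her_claim else her_claim - BONUS

assumed_claims = range(900, HIGH + 1)   # a not-strictly-rational opponent

def expected_payout(my_claim):
    return sum(my_payout(my_claim, h) for h in assumed_claims) / len(assumed_claims)

best = max(range(LOW, HIGH + 1), key=expected_payout)
print(best, round(expected_payout(best)))   # a claim far above 20, near the top of the range
```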

Wolpert and Jamison's model is of course still quite crude and would never be able to compete with professional poker players. It contains only two 'personae' or images: the perfectly rational superhuman and the completely randomized maniac. But by expanding the idea to a continuum of possible images, one could imagine the approach becoming much more useful in real-life situations. Phenomena like anti-rationality, meaning people who consciously act as their own worst enemies, would then make sense in terms of payoffs. The model even contains examples of games where all players can act as their own worst enemies and still get a better result than if they had acted rationally.
BOX 2: The Prisoner's Dilemma
My partner Kurt and I are suspects in a murder case. Placed in two adjacent interrogation rooms, we are unable to synchronize our stories. The police don't have sufficient evidence to convince a judge of our guilt, so they offer us a bargain: if I talk but Kurt doesn't, I go free and Kurt gets 10 years behind bars. The same goes for Kurt: if he talks and I keep my mouth shut, he goes free and I am locked up for 10 years. If neither of us talks, we both get one year, and if we both talk, we get five years each.

I know Kurt, and I know he is a sly psychopath. No matter what I choose to do, he knows that he will always get a milder sentence by snitching: either he gets five years instead of ten (if I talk), or he walks free instead of getting one year in prison (if I say nothing). The obvious strategy for Kurt is to talk to the police.
If I analyze the situation the same way Kurt does, we should both defect and get five years each, even though we could have settled for only one year each by keeping quiet. Thus, rational thinking leads to a bad result for both of us. That's the dilemma. Examples from real-life situations, for instance burglary cases (but also bicycle races, where two lone riders with a substantial lead face a similar situation), show that people usually cooperate. They seldom defect, even in a one-time game, where you cannot expect to gain anything later on.
In the repeated prisoner's dilemma things are different. Now you can develop strategies that depend on what the other player did in the previous round. In this situation it has been shown that cooperation can develop from a tit-for-tat rule: you start by cooperating, and then you simply do whatever the other player did in the previous round. In 1978, the famous game theoretician Robert Axelrod invited researchers to send him all kinds of strategies, which he let compete against each other in a computer simulation. The simple tit-for-tat rule won several times.
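A tiny round-robin in the spirit of Axelrod's tournament can be sketched in a few lines of Python. The field of three strategies and the 200-round game length below are illustrative choices, not Axelrod's original setup; the prison years come from the story in this box, so the strategy with the fewest total years wins.

```python
import itertools

# Years in prison from the story above (lower is better).
# "C" = keep quiet (cooperate), "D" = talk to the police (defect).
YEARS = {("C", "C"): (1, 1), ("C", "D"): (10, 0),
         ("D", "C"): (0, 10), ("D", "D"): (5, 5)}

# A strategy looks at the opponent's past moves and returns the next move.
def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(strategy_a, strategy_b, rounds=200):   # 200 rounds is an arbitrary choice here
    history_a, history_b, years_a, years_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(history_b), strategy_b(history_a)
        ya, yb = YEARS[(a, b)]
        years_a, years_b = years_a + ya, years_b + yb
        history_a.append(a)
        history_b.append(b)
    return years_a, years_b

strategies = {"tit_for_tat": tit_for_tat,
              "always_defect": always_defect,
              "always_cooperate": always_cooperate}

# Round-robin: every strategy meets every other and also a copy of itself.
# Fewest total years in prison wins.
total = {name: 0 for name in strategies}
for (name_a, a), (name_b, b) in itertools.combinations_with_replacement(strategies.items(), 2):
    years_a, years_b = play(a, b)
    total[name_a] += years_a
    total[name_b] += years_b

print(total)   # tit_for_tat collects the fewest years in this small field
```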

Even the classical prisoner's dilemma can now be seen in a different light (see BOX 2: The Prisoner's Dilemma). Experiments have repeatedly shown that prisoners seldom choose the rational Nash equilibrium; they almost always choose cooperation. Wolpert and colleagues can explain this via their persona theory without involving repeated trials. The results of the so-called Ultimatum Game can also be explained by the same idea.

But if you think that you can beat Gus Hansen by reading up on Wolpert and colleagues, you will be disappointed. Mathematical game theory is still far too simple to account for the cunning of poker players, who have developed a much larger bag of tricks and techniques than just the manipulation of a table image.


References:
• Wolpert, D. H. and Jamison, J., “The Strategic Choice of Preferences: The Persona Model”, Berkeley Electronic Journal of Theoretical Economics, 2011. http://www.bos.frb.org/economic/wp/wp2010/wp1010.pdf
• Wolpert, D. H. and Jamison, J., “Schelling Formalized: Strategic Choices of Non-Rational Behavior”, in Evolution and Rationality: Decisions, Cooperation, and Strategic Behavior, K. Binmore and S. Okasha (eds.), Cambridge University Press, in press.
