Tag: Game Theory

Mental Model: Prisoners’ Dilemma

The prisoners’ dilemma is the best-known strategy game in social science. The game shows why two entities might not cooperate even when it appears to be in their best (rational) interest to do so. What is rational for the individual in certain circumstances is not rational for the group: each party pursuing the strategy that is individually rational leads both to a worse outcome.

With applications to economics, politics, and business, the game illustrates the conflict that can sometimes arise between individual and group rationality.

From Greg Mankiw’s Economics textbook:

The prisoners’ dilemma is a story about two criminals who have been captured by the police. Let’s call them Mr Black and Mr Pink. The police have enough evidence to convict Mr Black and Mr Pink of a relatively minor crime, illegal possession of a handgun, so that each would spend a year in jail. The police also suspect that the two criminals have committed a jewelry robbery together, but they lack hard evidence to convict them of this major crime. The police question Mr Black and Mr Pink in separate rooms, and they offer each of them the following deal:

Right now we can lock you up for 1 year. If you confess to the jewelry robbery and implicate your partner, however, we’ll give you immunity and you can go free. Your partner will get 20 years in jail. But if you both confess to the crime, we won’t need your testimony and we can avoid the cost of a trial, so you will each get an intermediate sentence of 8 years.

If Mr Black and Mr Pink, heartless criminals that they are, care only about their own sentences, what would you expect them to do? Would they confess or remain silent? Each prisoner has two strategies: confess or remain silent. The sentence each prisoner gets depends on the strategy chosen by his or her partner in crime.

Consider first Mr Black’s decision. He reasons as follows:

I don’t know what Mr Pink is going to do. If he remains silent, my best strategy is to confess, since then I’ll go free rather than spending a year in jail. If he confesses, my best strategy is still to confess, since then I’ll spend 8 years in jail rather than 20. So, regardless of what Mr Pink does, I am better off confessing.

In the language of game theory, a strategy is called a dominant strategy if it is the best strategy for a player to follow regardless of the strategies pursued by other players. In this case, confessing is a dominant strategy for Mr Black. He spends less time in jail if he confesses, regardless of whether Mr Pink confesses or remains silent.

Now consider Mr Pink’s decision. He faces exactly the same choices as Mr Black, and he reasons in much the same way. Regardless of what Mr Black does, Mr Pink can reduce his time in jail by confessing. In other words, confessing is a dominant strategy for Mr Pink.

In the end, both Mr Black and Mr Pink confess, and both spend 8 years in jail. Yet, from their standpoint, this is a terrible outcome. If they had both remained silent, both of them would have been better off, spending only 1 year in jail on the gun charge. By each pursuing his own interests, the two prisoners together reach an outcome that is worse for each of them.
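Mr Black’s reasoning can be checked mechanically. Here is a minimal Python sketch using the sentences from Mankiw’s story (the table layout and function name are my own, not from the text):

```python
# Years in jail for Mr Black, indexed by (Black's move, Pink's move).
# The numbers come straight from the story above.
SENTENCE = {
    ("confess", "confess"): 8,
    ("confess", "silent"): 0,    # immunity for implicating his partner
    ("silent", "confess"): 20,
    ("silent", "silent"): 1,     # the gun charge only
}

def best_reply(pink_move):
    """Mr Black's sentence-minimizing move, given what Mr Pink does."""
    return min(("confess", "silent"), key=lambda m: SENTENCE[(m, pink_move)])

# Confessing is best against either choice, so it is a dominant strategy;
# yet mutual confession (8 years each) is worse than mutual silence (1 year each).
print(best_reply("silent"), best_reply("confess"))  # confess confess
```

The same table also makes the tragedy visible at a glance: the dominant-strategy outcome (8, 8) is strictly worse for both prisoners than the cooperative outcome (1, 1).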

To see how difficult it is to maintain cooperation, imagine that, before the police captured Mr Black and Mr Pink, the two criminals had made a pact not to confess. Clearly, this agreement would make them both better off if they both live up to it, because they would each spend only 1 year in jail. But would the two criminals in fact remain silent, simply because they had agreed to? Once they are being questioned separately, the logic of self-interest takes over and leads them to confess. Cooperation between the two prisoners is difficult to maintain because cooperation is individually irrational.

* * *

Michael J. Mauboussin writes:

The classic two-player example of game theory is the prisoners’ dilemma. We can recast the prisoners’ dilemma in a business context by considering a simple case of capacity addition. Say two competitors, A and B, are considering adding capacity. If competitor A adds capacity and B doesn’t, A gets an outsized payoff. Likewise, if B adds capacity and A doesn’t, then B gets the large payoff. If neither expands, A and B aren’t as well-off as if one alone had added capacity. But if both add capacity, they’re worse off than if they had done nothing.

* * *

Avinash Dixit offers:

Consider two firms, say Coca-Cola and Pepsi, selling similar products. Each must decide on a pricing strategy. They best exploit their joint market power when both charge a high price; each makes a profit of ten million dollars per month. If one sets a competitive low price, it wins a lot of customers away from the rival. Suppose its profit rises to twelve million dollars, and that of the rival falls to seven million. If both set low prices, the profit of each is nine million dollars. Here, the low-price strategy is akin to the prisoner’s confession, and the high-price akin to keeping silent. Call the former cheating, and the latter cooperation. Then cheating is each firm’s dominant strategy, but the result when both “cheat” is worse for each than that of both cooperating.
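Dixit’s numbers can be plugged into the same dominant-strategy test. A short sketch (the identifiers are mine; the profit figures are Dixit’s):

```python
# Monthly profit in millions, indexed by (own price, rival's price).
# Both high -> 10 each; undercutting -> 12 vs. 7; both low -> 9 each.
PROFIT = {
    ("high", "high"): 10,
    ("high", "low"): 7,
    ("low", "high"): 12,
    ("low", "low"): 9,
}

def is_dominant(move):
    """True if `move` earns at least as much as the alternative
    against every possible rival move."""
    other = "high" if move == "low" else "low"
    return all(PROFIT[(move, r)] >= PROFIT[(other, r)] for r in ("high", "low"))

print(is_dominant("low"))   # True: cheating (low price) dominates
print(is_dominant("high"))  # False
# Yet mutual cheating yields 9 each, worse than the 10 each from cooperating.
```

Because the check holds against every rival move, no amount of second-guessing the rival changes the conclusion; that is what makes the strategy dominant.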

* * *

Warren Buffett illustrates how the Prisoners’ Dilemma plays out in business in the 1985 Berkshire Hathaway annual report:

The domestic textile industry operates in a commodity business, competing in a world market in which substantial excess capacity exists. Much of the trouble we experienced was attributable, both directly and indirectly, to competition from foreign countries whose workers are paid a small fraction of the U.S. minimum wage. But that in no way means that our labor force deserves any blame for our closing. In fact, in comparison with employees of American industry generally, our workers were poorly paid, as has been the case throughout the textile business. In contract negotiations, union leaders and members were sensitive to our disadvantageous cost position and did not push for unrealistic wage increases or unproductive work practices. To the contrary, they tried just as hard as we did to keep us competitive. Even during our liquidation period they performed superbly. (Ironically, we would have been better off financially if our union had behaved unreasonably some years ago; we then would have recognized the impossible future that we faced, promptly closed down, and avoided significant future losses.)

Over the years, we had the option of making large capital expenditures in the textile operation that would have allowed us to somewhat reduce variable costs. Each proposal to do so looked like an immediate winner. Measured by standard return-on-investment tests, in fact, these proposals usually promised greater economic benefits than would have resulted from comparable expenditures in our highly-profitable candy and newspaper businesses.

But the promised benefits from these textile investments were illusory. Many of our competitors, both domestic and foreign, were stepping up to the same kind of expenditures and, once enough companies did so, their reduced costs became the baseline for reduced prices industry-wide. Viewed individually, each company’s capital investment decision appeared cost-effective and rational; viewed collectively, the decisions neutralized each other and were irrational (just as happens when each person watching a parade decides he can see a little better if he stands on tiptoes). After each round of investment, all the players had more money in the game and returns remained anemic.

Thus, we faced a miserable choice: huge capital investment would have helped to keep our textile business alive, but would have left us with terrible returns on ever-growing amounts of capital. After the investment, moreover, the foreign competition would still have retained a major, continuing advantage in labor costs. A refusal to invest, however, would make us increasingly non-competitive, even measured against domestic textile manufacturers. I always thought myself in the position described by Woody Allen in one of his movies: “More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness, the other to total extinction. Let us pray we have the wisdom to choose correctly.”

For an understanding of how the to-invest-or-not-to-invest dilemma plays out in a commodity business, it is instructive to look at Burlington Industries, by far the largest U.S. textile company both 21 years ago and now. In 1964 Burlington had sales of $1.2 billion against our $50 million. It had strengths in both distribution and production that we could never hope to match and also, of course, had an earnings record far superior to ours. Its stock sold at 60 at the end of 1964; ours was 13.

Burlington made a decision to stick to the textile business, and in 1985 had sales of about $2.8 billion. During the 1964-85 period, the company made capital expenditures of about $3 billion, far more than any other U.S. textile company and more than $200 per share on that $60 stock. A very large part of the expenditures, I am sure, was devoted to cost improvement and expansion. Given Burlington’s basic commitment to stay in textiles, I would also surmise that the company’s capital decisions were quite rational.

Nevertheless, Burlington has lost sales volume in real dollars and has far lower returns on sales and equity now than 20 years ago. Split 2-for-1 in 1965, the stock now sells at 34 — on an adjusted basis, just a little over its $60 price in 1964. Meanwhile, the CPI has more than tripled. Therefore, each share commands about one-third the purchasing power it did at the end of 1964. Regular dividends have been paid but they, too, have shrunk significantly in purchasing power.

This devastating outcome for the shareholders indicates what can happen when much brain power and energy are applied to a faulty premise. The situation is suggestive of Samuel Johnson’s horse: “A horse that can count to ten is a remarkable horse – not a remarkable mathematician.” Likewise, a textile company that allocates capital brilliantly within its industry is a remarkable textile company – but not a remarkable business.

My conclusion from my own experiences and from much observation of other businesses is that a good managerial record (measured by economic returns) is far more a function of what business boat you get into than it is of how effectively you row (though intelligence and effort help considerably, of course, in any business, good or bad). Some years ago I wrote: “When a management with a reputation for brilliance tackles a business with a reputation for poor fundamental economics, it is the reputation of the business that remains intact.” Nothing has since changed my point of view on that matter. Should you find yourself in a chronically-leaking boat, energy devoted to changing vessels is likely to be more productive than energy devoted to patching leaks.

* * *

Mauboussin adds:

Our discussion so far has focused on competition. But thoughtful strategic analysis also recognizes the role of co-evolution, or cooperation, in business. Not all business relationships are conflictual. Sometimes companies outside the purview of a firm’s competitive set can heavily influence its value creation prospects.

Consider the example of DVD makers (software) and DVD player makers (hardware). These companies do not compete with one another. But the more DVD titles that are available, the more attractive it will be for a consumer to buy a DVD player and vice versa. Another example is the Wintel standard—added features on Microsoft’s operating system required more powerful Intel microprocessors, and more powerful microprocessors could support updated operating systems. Complementors make the added value pie bigger. Competitors fight over a fixed pie.

* * *

Mankiw offers another real world example:

Consider an oligopoly with two members, called Iran and Saudi Arabia. Both countries sell crude oil. After prolonged negotiation, the countries agree to keep oil production low in order to keep the world price of oil high. After they agree on production levels, each country must decide whether to cooperate and live up to this agreement or to ignore it and produce at a higher level. The profits of the two countries depend on the strategies they choose.

Suppose you are the leader of Saudi Arabia. You might reason as follows:
I could keep production low as we agreed, or I could raise my production and sell more oil on world markets. If Iran lives up to the agreement and keeps its production low, then my country earns profit of $60 billion with high production and $50 billion with low production. In this case, Saudi Arabia is better off with high production. If Iran fails to live up to the agreement and produces at a high level, then my country earns $40 billion with high production and $30 billion with low production. Once again, Saudi Arabia is better off with high production. So, regardless of what Iran chooses to do, my country is better off reneging on our agreement and producing at a high level.

Producing at a high level is a dominant strategy for Saudi Arabia. Of course, Iran reasons in exactly the same way, and so both countries produce at a high level. The result is the inferior outcome (from both Iran and Saudi Arabia’s standpoint) with low profits in each country.
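The same logic can be verified with Mankiw’s payoffs. A minimal sketch (the game is symmetric, so one payoff table serves both countries; the function name is illustrative):

```python
# Profit in $ billions, indexed by (own move, rival's move).
# Symmetric game: the same table applies to Saudi Arabia and to Iran.
PROFIT = {
    ("high", "low"): 60,   # renege while the rival cooperates
    ("low", "low"): 50,    # both honor the agreement
    ("high", "high"): 40,  # both renege
    ("low", "high"): 30,   # cooperate while the rival reneges
}

def best_response(rival_move):
    """The profit-maximizing production level, given the rival's choice."""
    return max(("high", "low"), key=lambda m: PROFIT[(m, rival_move)])

# High production is each country's best reply to anything the rival does,
# so both end up at 40 each instead of the cooperative 50 each.
print(best_response("low"), best_response("high"))  # high high
```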

This example illustrates why oligopolies have trouble maintaining monopoly profits. The monopoly outcome is jointly rational for the oligopoly, but each oligopolist has an incentive to cheat. Just as self-interest drives the prisoners in the prisoners’ dilemma to confess, self-interest makes it difficult for the oligopoly to maintain the cooperative outcome with low production, high prices, and monopoly profits.

Other examples of prisoners’ dilemmas include arms races, advertising, and common resources (see the Tragedy of the Commons).

The Prisoners’ Dilemma is part of the Farnam Street latticework of Mental Models.

The Tragedy Of The Commons

What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others. — Aristotle

The rules pay you to do the wrong thing. — Garrett Hardin

The Tragedy of the Commons is a parable that illustrates why common resources get used more than is desirable from the standpoint of society as a whole.

Garrett Hardin introduces us to the Tragedy of the Commons:

Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy.

As a rational being, each herdsman seeks to maximize his gain. Explicitly or implicitly, more or less consciously, he asks, “What is the utility to me of adding one more animal to my herd?” This utility has one negative and one positive component.

1) The positive component is a function of the increment of one animal. Since the herdsman receives all the proceeds from the sale of the additional animal, the positive utility is nearly +1.

2) The negative component is a function of the additional overgrazing created by one more animal. Since, however, the effects of overgrazing are shared by all the herdsmen, the negative utility for any particular decision-making herdsman is only a fraction of 1.

Adding together the component partial utilities, the rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another. . . . But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit–in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.
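Hardin’s arithmetic can be made concrete. A toy sketch, assuming the sale of an extra animal is worth +1 to its owner while the overgrazing it causes costs the community 1 in total (Hardin says only that the shared cost is “a fraction of 1”; the exact magnitude here is my assumption):

```python
def net_utility_to_owner(num_herdsmen):
    """Owner's private payoff from adding one animal to a shared pasture."""
    private_gain = 1.0                  # owner keeps all the sale proceeds
    shared_cost = -1.0 / num_herdsmen   # overgrazing is split among everyone
    return private_gain + shared_cost

for n in (1, 10, 100):
    print(n, net_utility_to_owner(n))
# A lone herdsman internalizes the full cost (net 0.0) and stops; with 100
# herdsmen, each extra animal still nets its owner +0.99, so everyone keeps adding.
```

The more herdsmen share the commons, the closer the private payoff gets to the full +1, which is exactly why the logic “remorselessly generates tragedy” as the population grows.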

Greg Mankiw, in his Microeconomics text says:

Consider life in a small medieval town. Of the many economic activities that take place in the town, one of the most important is raising sheep. Many of the town’s families own flocks of sheep and support themselves by selling the sheep’s wool, which is used to make clothing.

As our story begins, the sheep spend much of their time grazing on the land surrounding the town, called the Town Commons. No family owns the land. Instead the town residents own the land collectively, and all the residents are allowed to graze their sheep on it. Collective ownership works well because land is plentiful. As long as everyone can get all the good grazing land they want, the Town Common is not a rival good and allowing residents’ sheep to graze for free causes no problems. Everyone in the town is happy.

As the years pass, the population of the town grows and so does the number of sheep grazing on the Town Commons. With a growing number of sheep and a fixed amount of land, the land starts to lose its ability to replenish itself. Eventually, the land is grazed so heavily that it becomes barren. With no grass left on the Town Common, raising sheep is impossible, and the town’s once prosperous wool industry disappears and, tragically, many families lose their source of livelihood.

What causes the tragedy? Why do the shepherds allow the sheep population to grow so large that it destroys the Town Common? The reason is that social and private incentives differ. Avoiding the destruction of the grazing land depends on the collective action of the shepherds. If the shepherds acted together, they could reduce the sheep population to a size that the Town Common could support. Yet no single family has an incentive to reduce the size of its own flock because each flock represents only a small part of the problem.

In essence, the Tragedy of the Commons arises because of an externality. When one family’s flock grazes on the common land, it reduces the quality of the land available for other families. Because people neglect this negative externality when deciding how many sheep to own, the result is an excessive number of sheep.

If the tragedy had been foreseen, the town could have solved the problem in various ways. It could have regulated the number of sheep in each family’s flock, internalized the externality by taxing sheep, or auctioned off a limited number of sheep grazing permits. That is, the medieval town could have dealt with the problem of overgrazing in the way that modern society deals with the problem of pollution.

In the case of land, however, there is a simpler solution. The town can divide up the land among town families. Each family can enclose its allotment of land with a fence and then protect it from excessive grazing. In this way, the land becomes a private good rather than a common resource. This outcome in fact occurred during the enclosure movement in England in the 17th century.

The Tragedy of the Commons is a story with a general lesson: when one person uses a common resource, he diminishes other people’s enjoyment of it. Because of this negative externality, common resources tend to be used excessively. The government can solve the problem by reducing use of the common resource through regulation or taxes. Alternatively, the government can sometimes turn the common resource into a private good.

This lesson has been known for thousands of years. The ancient Greek philosopher Aristotle pointed out the problem with common resources: ‘What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others.’

The Tragedy of the Commons is a Farnam Street Mental Model.

Defending a New Domain: The Pentagon’s Cyberstrategy

As someone interested in how the weak win wars, I found this article (pdf), by William Lynn, in the recent Foreign Affairs utterly fascinating.

…cyberwarfare is asymmetric. The low cost of computing devices means that U.S. adversaries do not have to build expensive weapons, such as stealth fighters or aircraft carriers, to pose a significant threat to U.S. military capabilities. A dozen determined computer programmers can, if they find a vulnerability to exploit, threaten the United States’ global logistics network, steal its operational plans, blind its intelligence capabilities, or hinder its ability to deliver weapons on target. Knowing this, many militaries are developing offensive capabilities in cyberspace, and more than 100 foreign intelligence organizations are trying to break into U.S. networks. Some governments already have the capacity to disrupt elements of the U.S. information infrastructure.

In cyberspace, the offense has the upper hand. The Internet was designed to be collaborative and rapidly expandable and to have low barriers to technological innovation; security and identity management were lower priorities. For these structural reasons, the U.S. government’s ability to defend its networks always lags behind its adversaries’ ability to exploit U.S. networks’ weaknesses. Adept programmers will find vulnerabilities and overcome security measures put in place to prevent intrusions. In an offense-dominant environment, a fortress mentality will not work. The United States cannot retreat behind a Maginot Line of firewalls or it will risk being overrun. Cyberwarfare is like maneuver warfare, in that speed and agility matter most. To stay ahead of its pursuers, the United States must constantly adjust and improve its defenses.

It must also recognize that traditional Cold War deterrence models of assured retaliation do not apply to cyberspace, where it is difficult and time consuming to identify an attack’s perpetrator. Whereas a missile comes with a return address, a computer virus generally does not. The forensic work necessary to identify an attacker may take months, if identification is possible at all. And even when the attacker is identified, if it is a nonstate actor, such as a terrorist group, it may have no assets against which the United States can retaliate. Furthermore, what constitutes an attack is not always clear. In fact, many of today’s intrusions are closer to espionage than to acts of war. The deterrence equation is further muddled by the fact that cyberattacks often originate from co-opted servers in neutral countries and that responses to them could have unintended consequences.

The Colonel Blotto Game: How Underdogs Can Win

If you’ve ever wondered why underdogs win or how to improve your odds of winning when you’re the underdog, this article on The Colonel Blotto Game is for you.

* * *

There is a rich tradition of celebrating wins by the weak—while forgetting those who lost—including the biblical story of David vs. Goliath. It is notable that “David shunned a traditional battle using a helmet and sword and chose instead to fight unconventionally with stones and a slingshot,” says Michael Mauboussin.

Luckily, David was around before Keynes said: “It is better to fail conventionally than to succeed unconventionally.” Turns out, if you’re an underdog, David was onto something.

Though it is not as well known as the Prisoners’ Dilemma, the Colonel Blotto Game can teach us a lot about strategic behavior and competition.

Underdogs can change the odds of winning simply by changing the basis of competition.

So what exactly is the Colonel Blotto Game and what can we learn from it?

In the Colonel Blotto game, two players concurrently allocate resources across n battlefields. The player with the greatest resources in each battlefield wins that battle, and the player with the most overall wins is the victor.

An extremely simple version of this game would consist of two players, A and B, allocating 100 soldiers to three battlefields. Each player’s goal is to create favorable mismatches versus his or her opponent.

According to Mauboussin, “The Colonel Blotto game is useful because by varying the game’s two main parameters, giving one player more resources or changing the number of battlefields, you can gain insight into the likely winners of competitive encounters.”

To illustrate this point, Malcolm Gladwell tells the story of Vivek Ranadivé:

When Vivek Ranadivé decided to coach his daughter Anjali’s basketball team, he settled on two principles. The first was that he would never raise his voice. This was National Junior Basketball—the Little League of basketball. The team was made up mostly of twelve-year-olds, and twelve-year-olds, he knew from experience, did not respond well to shouting. He would conduct business on the basketball court, he decided, the same way he conducted business at his software firm. He would speak calmly and softly, and convince the girls of the wisdom of his approach with appeals to reason and common sense.

The second principle was more important. Ranadivé was puzzled by the way Americans played basketball. He is from Mumbai. He grew up with cricket and soccer. He would never forget the first time he saw a basketball game. He thought it was mindless. Team A would score and then immediately retreat to its own end of the court. Team B would inbound the ball and dribble it into Team A’s end, where Team A was patiently waiting. Then the process would reverse itself. A basketball court was ninety-four feet long. But most of the time a team defended only about twenty-four feet of that, conceding the other seventy feet.

Occasionally, teams would play a full-court press—that is, they would contest their opponent’s attempt to advance the ball up the court. But they would do it for only a few minutes at a time. It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, and Ranadivé thought that that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent’s end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that made them so good?

Basically, the more dimensions the game has, the less certain the outcome becomes and the more likely underdogs are to win.

In other words, adding battlefields increases the number of interactions (dimensions) and improves the chances of an upset. When the basketball team cited by Malcolm Gladwell above started a full court press, it increased the number of dimensions and, in the process, substituted effort for skill.
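A toy calculation shows how adding battlefields can flip the outcome. In this sketch the favorite plays the “conventional” strategy of spreading resources evenly, while the underdog concedes a minority of battlefields and piles everything onto a bare majority. This is an illustration of the battlefield-count effect, not the full game-theoretic equilibrium (against a strategic favorite, both sides would randomize):

```python
def underdog_wins(weak, strong, n_fields):
    """True if an underdog who concentrates on a bare majority of battlefields
    beats a favorite who spreads resources evenly across all of them."""
    k = n_fields // 2 + 1                 # battlefields the underdog contests
    per_weak = weak / k                   # underdog's force on each contested field
    per_strong = strong / n_fields        # favorite's force on every field
    contested_won = k if per_weak > per_strong else 0
    return contested_won > n_fields / 2   # majority of fields wins the game

# A 60-vs-100 underdog loses when there are few battlefields, but wins
# once the number of dimensions is large enough:
for n in (3, 5, 9, 15):
    print(n, underdog_wins(60, 100, n))
# 3 False, 5 False, 9 True, 15 True
```

With 3 battlefields the underdog’s 30 per contested field loses to the favorite’s 33; with 9 battlefields the underdog’s 12 beats the favorite’s 11 on every field it contests. Same resources, different basis of competition.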

The political scientist Ivan Arreguín-Toft recently looked at every war fought in the past two hundred years between strong and weak combatants in his book How the Weak Win Wars. The Goliaths, he found, won in 71.5 percent of the cases. That is a remarkable fact.

Arreguín-Toft was analyzing conflicts in which one side was at least ten times as powerful—in terms of armed might and population—as its opponent, and even in those lopsided contests, the underdog won almost a third of the time.

In the Biblical story of David and Goliath, David initially put on a coat of mail and a brass helmet and girded himself with a sword: he prepared to wage a conventional battle of swords against Goliath. But then he stopped. “I cannot walk in these, for I am unused to it,” he said (in Robert Alter’s translation), and picked up those five smooth stones.

Arreguín-Toft wondered, what happened when the underdogs likewise acknowledged their weakness and chose an unconventional strategy? He went back and re-analyzed his data. In those cases, David’s winning percentage went from 28.5 to 63.6. When underdogs choose not to play by Goliath’s rules, they win, Arreguín-Toft concluded, “even when everything we think we know about power says they shouldn’t.”

Arreguín-Toft discovered another interesting point: over the past two centuries the weaker players have been winning at a higher and higher rate. For instance, strong actors prevailed in 88 percent of the conflicts from 1800 to 1849, but the rate dropped to nearly 50 percent from 1950 to 1999.

After reviewing and dismissing a number of possible explanations for these findings, Arreguín-Toft suggests that an analysis of strategic interaction best explains the results. Specifically, when the strong and weak actors go toe-to-toe (effectively, a low n), the weak actor loses roughly 80 percent of the time because “there is nothing to mediate or deflect a strong player’s power advantage.”

In contrast, when the weak actors choose to compete on a different strategic basis (effectively increasing the size of n), they lose less than 40 percent of the time “because the weak refuse to engage where the strong actor has a power advantage.” Weak actors have been winning more conflicts over the years because they see and imitate the successful strategies of other actors and have come to the realization that refusing to fight on the strong actor’s terms improves their chances of victory. This might explain what’s happening in Afghanistan.

In Afghanistan, the number of battlefields (dimensions) is high. Even though substantially outnumbered, the Taliban have increased the odds of “winning” by changing the basis of competition, much as Afghan fighters did previously against the Soviet superpower. It also explains why the strategy employed by Ranadivé’s basketball team, while not guaranteed to win, certainly increased the odds.

Mauboussin provides another great example:

A more concrete example comes from Division I college football. Texas Tech has adopted a strategy that has allowed it to win over 70 percent of its games in recent years despite playing a highly competitive schedule. The team’s success is particularly remarkable since few of the players were highly recruited or considered “first-rate material” by the professional scouts. Based on personnel alone, the team was weaker than many of its opponents.

Knowing that employing a traditional game plan would put his weaker team at a marked disadvantage, the coach offset the talent gap by introducing more complexity into the team’s offense via a large number of formations. These formations change the geometry of the game, forcing opponents to change their defensive strategies. It also creates new matchups (i.e., increasing n, the number of battlefields) that the stronger teams have difficulty winning. For example, defensive linemen have to drop back to cover receivers. The team’s coach explained that “defensive linemen really aren’t much good at covering receivers. They aren’t built to run around that much. And when they do, you have a bunch of people on the other team doing things they don’t have much experience doing.” This approach is considered unusual in the generally conservative game of college football.

While it’s easy to recall the underdogs who found winning strategies by increasing the number of competitive dimensions, we rarely hear about those who employed similar dimension-enhancing strategies and failed.

Another interesting question is why teams that are likely to lose stick with conventional strategies, which only increase their odds of failure.

According to Mauboussin:

What the analysis also reveals, however, is that nearly 80 percent of the losers in asymmetric conflicts never switch strategies. Part of the reason players don’t switch is that there is a cost: when personnel training and equipment are geared toward one strategy, it’s often costly to shift to another. New strategies are also stymied by leaders or organizational traditions. This type of inertia appears to be a consequential impediment to organizations embracing the strategic actions implied by the Colonel Blotto game.

Teams have an incentive to maintain a conventional strategy, even when it increases their odds of losing. Malcolm Gladwell explores:

The consistent failure of underdogs in professional sports to even try something new suggests, to me, that there is something fundamentally wrong with the incentive structure of the leagues. I think, for example, that the idea of ranking draft picks in reverse order of finish — as much as it sounds “fair” — does untold damage to the game. You simply cannot have a system that rewards anyone, ever, for losing. Economists worry about this all the time, when they talk about “moral hazard.” Moral hazard is the idea that if you insure someone against risk, you will make risky behavior more likely. So if you always bail out the banks when they take absurd risks and do stupid things, they are going to keep on taking absurd risks and doing stupid things. Bailouts create moral hazard. Moral hazard is also why your health insurance has a co-pay. If your insurer paid for everything, the theory goes, it would encourage you to go to the doctor when you really don’t need to. No economist in his right mind would ever endorse the football and basketball drafts the way they are structured now. They are a moral hazard in spades. If you give me a lottery pick for being an atrocious GM, where’s my incentive not to be an atrocious GM?

Key takeaways:

  • Underdogs improve their chances of winning by changing the basis for competition and, if possible, creating more dimensions.
  • We often fail to switch strategies because of a combination of biases, including social proof, status quo, commitment and consistency, and confirmation.

Malcolm Gladwell is a staff writer at the New Yorker and the author of The Tipping Point: How Little Things Make a Big Difference, Blink, Outliers and most recently, What the Dog Saw.

Michael Mauboussin is the author of More Than You Know: Finding Financial Wisdom in Unconventional Places and, more recently, Think Twice: Harnessing the Power of Counterintuition.

Moral Hypocrisy

From Jonathan Haidt’s book The Happiness Hypothesis:

The gap between action and perception is bridged by the art of impression management. If life itself is what you deem it, then why not focus your efforts on persuading others to believe that you are a virtuous and trustworthy cooperator?

Natural selection, like politics, works by the principle of survival of the fittest, and several researchers have argued that human beings evolved to play the game of life in a Machiavellian way. The Machiavellian version of tit for tat… is to do all you can to cultivate the reputation of a trustworthy yet vigilant partner, whatever reality may be.

The simplest way to cultivate a reputation for being fair is to really be fair, but life and psychology experiments sometimes force us to choose between appearance and reality. The findings are not pretty. … The tendency to value the appearance of morality over reality has been dubbed “moral hypocrisy”.

… Proving that people are selfish, or that they’ll sometimes cheat when they know they won’t be caught, seems like a good way to get an article into the Journal of Incredibly Obvious Results. What’s not so obvious is that, in nearly all these studies, people don’t think they are doing anything wrong. It’s the same in real life. From the person who cuts you off on the highway all the way to the Nazis who ran the concentration camps, most people think they are good people and that their actions are motivated by good reasons. Machiavellian tit for tat requires devotion to appearances, including protestations of one’s virtue even when one chooses vice. And such protestations are most effective when the person making them really believes them.

As Robert Wright puts it in his masterful book The Moral Animal, “Human beings are a species splendid in their array of moral equipment, tragic in their propensity to misuse it, and pathetic in their constitutional ignorance of the misuse.”

Social Dilemmas: When to Defect and When to Cooperate

Social dilemmas arise when an individual receives a higher payoff for defecting than for cooperating, so long as everyone else cooperates; yet when everyone defects, all are worse off. That is, each member has a clear and unambiguous incentive to make a choice that, if made by all members, produces a worse outcome.

A great example of a social dilemma: imagine yourself out for dinner with a group of friends. Before the meal, you all agree to share the cost equally. Looking at the menu, you see a lot of items that appeal to you but are outside your budget.

Pondering this, you realize that you’re only on the hook for 1/(number of friends at the dinner) of the bill. Now you can order what you like without paying the full cost.

But what if everyone at the table realizes the same thing? My guess is you’d all be stunned by the bill: a miniature tragedy of the commons.
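The arithmetic behind that realization can be made explicit with a small sketch. The menu prices, dollar-valued enjoyment, and party size below are invented for illustration; what they show is that ordering the expensive meal is a dominant strategy for each diner, yet everyone ends up worse off when all of them do it.

```python
PRICE = {"modest": 20, "lavish": 60}   # menu prices (assumed)
VALUE = {"modest": 25, "lavish": 50}   # enjoyment, in dollar terms (assumed)
N = 6                                  # diners splitting the bill evenly

def diner_payoff(my_choice, others_lavish):
    """Net payoff (enjoyment minus share of the bill) for one diner,
    given how many of the other N-1 diners order lavish."""
    others_modest = N - 1 - others_lavish
    total_bill = (PRICE[my_choice]
                  + others_lavish * PRICE["lavish"]
                  + others_modest * PRICE["modest"])
    return VALUE[my_choice] - total_bill / N

# Ordering lavish is dominant: whatever the other five diners do,
# it beats ordering modest.
assert all(diner_payoff("lavish", k) > diner_payoff("modest", k)
           for k in range(N))

print(diner_payoff("modest", 0))      # all cooperate: 5.0 each
print(diner_payoff("lavish", N - 1))  # all defect:  -10.0 each
```

Switching from modest to lavish raises your share of the bill by only (60 − 20)/6 ≈ $6.67 while raising your enjoyment by $25, which is exactly why the temptation applies to every diner at once.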

This is a very simple example, but you can map it to the business world by thinking about healthcare and insurance.

If that sounds a lot like game theory, you’re on the right track.

I came across an excellent paper[1] by Robyn Dawes and David Messick, which takes a closer look at social dilemmas.

A Psychological Analysis of Social Dilemmas

In the case of the public good, one strategy that has been employed is to create a moral sense of duty to support it—for instance, the public television station that one watches. The attempt is to reframe the decision as doing one’s duty rather than making a difference—again, in the wellbeing of the station watched. The injection of a moral element changes the calculation from “Will I make a difference” to “I must pay for the benefit I get.”

The final illustration, the shared meal and its more serious counterparts, requires yet another approach. Here there is no hierarchy, as in the organizational example, that can be relied upon to solve the problem. With the shared meal, all the diners need to be aware of the temptation that they have and there need to be mutually agreed-upon limits to constrain the diners. Alternatively, the rule needs to be changed so that everyone pays for what they ordered. The latter arrangement creates responsibility in that all know that they will pay for what they order. Such voluntary arrangements may be difficult to arrange in some cases. With the medical insurance, the insurance company may recognize the risk and insist on a principle of co-payments for medical services. This is a step in the direction of paying for one’s own meal, but it allows part of the “meal” to be shared and part of it to be paid for by the one who ordered it.

The fishing version is more difficult. To make those harvesting the fish pay for some of the costs of the catch would require some sort of taxation to deter the unbridled exploitation of the fishery. Taxation, however, leads to tax avoidance or evasion. But those who harvest the fish would have no incentive to report their catches accurately or at all, especially if they were particularly successful, which simultaneously means particularly successful—compared to others at least—in contributing to the problem of a subsequently reduced yield. Voluntary self-restraint would be punished as those with less of that personal quality would thrive while those with more would suffer. Conscience, as Hardin (1968) noted, would be self-eliminating. …

Relatively minor changes in the social environment can induce major changes in decision making because these minor changes can change the perceived appropriateness of a situation. One variable that has been shown to make such a difference is whether the decision maker sees herself as an individual or as a part of a group.

Footnotes
  • 1

Dawes RM, Messick DM (2000) Social Dilemmas. Int J Psychol 35(2):111–116
