Tag: Incentives

Several Uncomfortable Realities

Something to ponder.

A sobering excerpt from Vaclav Smil’s Global Catastrophes and Trends: The Next Fifty Years:

The first is that even the most assiduous deployment of the best available preventive measures (smart policing, clever informants, globe-spanning electronic intelligence, willingness to undertake necessary military action) will not be able to thwart all planned attacks.

The second reality is that the most dangerous form of terrorist attacks cannot be deterred because the political and ideological motivations for terrorist attacks that characterized Rapoport’s (2001) first three waves of terror have blended with religious zealotry and become one with the Muslim concept of martyrdom, providing the perpetrators with an irresistible reward: instant access to paradise. Murder by suicide has deep roots in Muslim history …

The third sobering consideration is that neither personal instability nor an individual’s hopelessness or overt personal defects, factors that come immediately to mind as the most likely drivers, are dependable predictors of candidates for suicidal martyrdom. Nor are such indicators as poverty, level of education, or religious devotion. Institutional manipulation of emotional commitment seems to be a key factor, and one not easily eliminated. Other obvious contributors are rapidly rising youth populations in countries governed by dictatorial regimes with limited economic opportunities, and the disenchantment of second-generation Muslim immigrants with their host societies. But none of these factors can offer any selective guidance to identify the most susceptible individuals and to prevent their murderous suicides.

The fourth consideration is that our understandable fixation on suicidal missions may be misplaced. A dirty bomb containing enough radioactive waste to contaminate several downtown blocks of a major city and cause mass panic (as anything invisible and nuclear is bound to do) can be positioned in a place calculated to have a maximum impact and then remotely detonated. And while Hizbullah’s more than 30 days of rocket attacks on Israel in the summer of 2006 were not particularly deadly, they paralyzed a large part of the country and demonstrated how more conventional weapons could be used in the service of terrorism.

This is a World of Incentives

I thought Warren Buffett said a lot of interesting things in his recent interview with Charlie Rose.

Here are some of the bits that stood out for me.

Fairness

BUFFETT: …I also think fairness is important and I think getting rid of promises that you can’t keep is important. I don’t think we should cut spending dramatically now. I don’t think that what I’m talking about on taxes solves the — the deficit gap at all. But I think fairness is important. I think having a sensible long-term plan is important to explain and I think having it be believable is terribly important because people don’t believe these out year things generally with Congress. They see too much of what’s happened.

The deficit as stimulus

BUFFETT: The deficit is our stimulus. You can — you can say a bridge someplace is part of that act, you can say cutting taxes is part of it as was the case in our stimulus act. But the stimulus is the government pouring more money out than it’s taking in. And we have a — a stimulus going on that’s 10 percent of GDP which we haven’t seen since World War II. So we have a huge stimulus going on. Nobody wants to call it a stimulus because that’s gotten to be a dirty word. But we have a big stimulus. So we do — in my view, whether we have a 10 percent of GDP deficit —

ROSE: Right.

BUFFETT: — which is a huge stimulus or a 12 percent or eight percent it doesn’t make much difference. I — I think that we pushed monetary policy to a level, we’ve pushed fiscal policy to the limit but fortunately the most important thing in terms of this country ever coming out of recessions has been the natural workings of capitalism and I think you’ve seen that for the last couple of years.

Following through

BUFFETT: What our leaders were saying to us then, the key players are saying we’ll do whatever it takes. And I believed it. I knew they had the power to do whatever it took and I believed they would do it.

Now, the problem about government now is that if they come out and get on the Sunday talk shows and say “I’ll do whatever it takes”, you know, people don’t believe them. And I mean, they — they — they’ve got to see action and — and here they see something like the raising the deficit limit used as a hostage for something of vital importance to the United States. And if you — you can use it as a hostage in terms of spending, you can use it as a hostage on funding on education or anything else. I mean, it isn’t limited about it; if you’ve got something that comes up like it.

Incentives

BUFFETT: But I just use it to illustrate that this is a world of incentives and we work on incentives in every way. If we work on education, in business, every other place. And what I try to think of the incentives to get somebody who comes up for re-election in a year to do something where the policy cycle goes out five years or ten years, how do you do it when the policy cycle exceeds the electoral cycle? You’ve got to make sure the electoral cycle is in the equation.

A Simple Checklist to Improve Decisions

We owe thanks to the publishing industry. Their ability to take a concept and fill an entire category with a shotgun approach is the reason that more people are talking about biases.

Unfortunately, talk alone will not eliminate them, but it is possible to take steps to counteract them. Reducing biases can make a huge difference in the quality of any decision, and it is easier than you think.

In a recent article for Harvard Business Review, Daniel Kahneman and his co-authors describe a simple way to detect bias and minimize its effects in the most common type of decision people make: determining whether to accept, reject, or pass on a recommendation.

The Munger two-step process for making decisions is a more complete framework, but Kahneman’s approach is a good way to help reduce biases in our decision-making.

If you’re short on time, here is a simple checklist that will get you started on the path toward improving your decisions:

Preliminary Questions: Ask yourself

1. Check for Self-interested Biases

  • Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?
  • Review the proposal with extra care, especially for overoptimism.

2. Check for the Affect Heuristic

  • Has the team fallen in love with its proposal?
  • Rigorously apply all the quality controls on the checklist.

3. Check for Groupthink

  • Were there dissenting opinions within the team?
  • Were they explored adequately?
  • Solicit dissenting views, discreetly if necessary.
Challenge Questions: Ask the recommenders

4. Check for Saliency Bias

  • Could the diagnosis be overly influenced by an analogy to a memorable success?
  • Ask for more analogies, and rigorously analyze their similarity to the current situation.

5. Check for Confirmation Bias

  • Are credible alternatives included along with the recommendation?
  • Request additional options.

6. Check for Availability Bias

  • If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?
  • Use checklists of the data needed for each kind of decision.

7. Check for Anchoring Bias

  • Do you know where the numbers came from? Can there be
  • …unsubstantiated numbers?
  • …extrapolation from history?
  • …a motivation to use a certain anchor?
  • Reanchor with figures generated by other models or benchmarks, and request new analysis.

8. Check for Halo Effect

  • Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?
  • Eliminate false inferences, and ask the team to seek additional comparable examples.

9. Check for Sunk-Cost Fallacy, Endowment Effect

  • Are the recommenders overly attached to a history of past decisions?
  • Consider the issue as if you were a new CEO.
Evaluation Questions: Ask about the proposal

10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect

  • Is the base case overly optimistic?
  • Have the team build a case taking an outside view; use war games.

11. Check for Disaster Neglect

  • Is the worst case bad enough?
  • Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.

12. Check for Loss Aversion

  • Is the recommending team overly cautious?
  • Realign incentives to share responsibility for the risk or to remove risk.

If you’re looking to dramatically improve your decision making, here is a great list of books to get started:

Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein

Think Twice: Harnessing the Power of Counterintuition by Michael J. Mauboussin

Think Again: Why Good Leaders Make Bad Decisions and How to Keep It from Happening to You by Sydney Finkelstein, Jo Whitehead, and Andrew Campbell

Predictably Irrational: The Hidden Forces That Shape Our Decisions by Dan Ariely

Thinking, Fast and Slow by Daniel Kahneman

Judgment in Managerial Decision Making by Max Bazerman

Defending a New Domain: The Pentagon’s Cyberstrategy

As someone interested in how the weak win wars, I found this article (PDF) by William Lynn in a recent issue of Foreign Affairs utterly fascinating.

…cyberwarfare is asymmetric. The low cost of computing devices means that U.S. adversaries do not have to build expensive weapons, such as stealth fighters or aircraft carriers, to pose a significant threat to U.S. military capabilities. A dozen determined computer programmers can, if they find a vulnerability to exploit, threaten the United States’ global logistics network, steal its operational plans, blind its intelligence capabilities, or hinder its ability to deliver weapons on target. Knowing this, many militaries are developing offensive capabilities in cyberspace, and more than 100 foreign intelligence organizations are trying to break into U.S. networks. Some governments already have the capacity to disrupt elements of the U.S. information infrastructure.

In cyberspace, the offense has the upper hand. The Internet was designed to be collaborative and rapidly expandable and to have low barriers to technological innovation; security and identity management were lower priorities. For these structural reasons, the U.S. government’s ability to defend its networks always lags behind its adversaries’ ability to exploit U.S. networks’ weaknesses. Adept programmers will find vulnerabilities and overcome security measures put in place to prevent intrusions. In an offense-dominant environment, a fortress mentality will not work. The United States cannot retreat behind a Maginot Line of firewalls or it will risk being overrun. Cyberwarfare is like maneuver warfare, in that speed and agility matter most. To stay ahead of its pursuers, the United States must constantly adjust and improve its defenses.

It must also recognize that traditional Cold War deterrence models of assured retaliation do not apply to cyberspace, where it is difficult and time consuming to identify an attack’s perpetrator. Whereas a missile comes with a return address, a computer virus generally does not. The forensic work necessary to identify an attacker may take months, if identification is possible at all. And even when the attacker is identified, if it is a nonstate actor, such as a terrorist group, it may have no assets against which the United States can retaliate. Furthermore, what constitutes an attack is not always clear. In fact, many of today’s intrusions are closer to espionage than to acts of war. The deterrence equation is further muddled by the fact that cyberattacks often originate from co-opted servers in neutral countries and that responses to them could have unintended consequences.

The Colonel Blotto Game: How Underdogs Can Win

If you’ve ever wondered why underdogs win or how to improve your odds of winning when you’re the underdog, this article on The Colonel Blotto Game is for you.

***

There is a rich tradition of celebrating wins by the weak—while forgetting those who lost—including the biblical story of David and Goliath. It is notable that “David shunned a traditional battle using a helmet and sword and chose instead to fight unconventionally with stones and a slingshot,” says Michael Mauboussin.

Luckily, David was around before Keynes said: “It is better for reputation to fail conventionally than to succeed unconventionally.” Turns out, if you’re an underdog, David was onto something.

Although it is not as well known as the Prisoner’s Dilemma, the Colonel Blotto game can teach us a lot about strategic behavior and competition.

Underdogs can change the odds of winning simply by changing the basis of competition.

So what exactly is the Colonel Blotto Game and what can we learn from it?

In the Colonel Blotto game, two players concurrently allocate resources across n battlefields. The player with the greatest resources in each battlefield wins that battle and the player with the most overall wins is the victor.

An extremely simple version of this game would consist of two players, A and B, allocating 100 soldiers to three battlefields. Each player’s goal is to create favorable mismatches versus his or her opponent.

According to Mauboussin, “The Colonel Blotto game is useful because by varying the game’s two main parameters, giving one player more resources or changing the number of battlefields, you can gain insight into the likely winners of competitive encounters.”
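To make the game’s mechanics concrete, here is a minimal simulation sketch in Python. It is my own illustration rather than Mauboussin’s model: the specific numbers (a 100-soldier favorite against an 80-soldier underdog) and the function names are assumptions chosen for clarity. The favorite plays the conventional strategy of spreading its force evenly; the underdog either mirrors that spread or concentrates everything on a randomly chosen bare majority of the battlefields.

    import random

    def blotto_round(alloc_a, alloc_b):
        """Return True if player B wins strictly more battlefields than A."""
        wins_a = sum(a > b for a, b in zip(alloc_a, alloc_b))
        wins_b = sum(b > a for a, b in zip(alloc_a, alloc_b))
        return wins_b > wins_a

    def even_spread(total, n):
        """Conventional play: defend every battlefield equally."""
        return [total / n] * n

    def guerrilla(total, n):
        """Unconventional play: abandon some battlefields and pile
        everything onto a randomly chosen bare majority of them."""
        targets = random.sample(range(n), n // 2 + 1)
        alloc = [0.0] * n
        for i in targets:
            alloc[i] = total / len(targets)
        return alloc

    def underdog_win_rate(underdog_strategy, n, trials=1_000):
        """Fraction of rounds an 80-soldier underdog beats a 100-soldier
        favorite who spreads evenly. Because this favorite is deterministic,
        the rate comes out 0.00 or 1.00; swap in a randomizing favorite
        to explore mixed play."""
        favorite = even_spread(100, n)
        wins = sum(blotto_round(favorite, underdog_strategy(80, n))
                   for _ in range(trials))
        return wins / trials

    for n in (1, 3, 9, 15):
        print(f"n={n:2d}  mirror: {underdog_win_rate(even_spread, n):.2f}  "
              f"concentrate: {underdog_win_rate(guerrilla, n):.2f}")

With n = 1 the underdog always loses the toe-to-toe contest, and mirroring the favorite loses at every n. But once there are three or more battlefields, concentrating 80 soldiers on a bare majority of them beats the favorite’s even spread every time (at n = 3, for example, the underdog puts 40 on each of two fields against the favorite’s 33.3). A real favorite would of course adapt, and Colonel Blotto has no pure-strategy equilibrium, so in practice both sides must randomize; the sketch only shows how refusing to fight the strong player’s game on every front can flip the outcome.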

To illustrate this point, Malcolm Gladwell tells the story of Vivek Ranadivé:

When Vivek Ranadivé decided to coach his daughter Anjali’s basketball team, he settled on two principles. The first was that he would never raise his voice. This was National Junior Basketball—the Little League of basketball. The team was made up mostly of twelve-year-olds, and twelve-year-olds, he knew from experience, did not respond well to shouting. He would conduct business on the basketball court, he decided, the same way he conducted business at his software firm. He would speak calmly and softly, and convince the girls of the wisdom of his approach with appeals to reason and common sense.

The second principle was more important. Ranadivé was puzzled by the way Americans played basketball. He is from Mumbai. He grew up with cricket and soccer. He would never forget the first time he saw a basketball game. He thought it was mindless. Team A would score and then immediately retreat to its own end of the court. Team B would inbound the ball and dribble it into Team A’s end, where Team A was patiently waiting. Then the process would reverse itself. A basketball court was ninety-four feet long. But most of the time a team defended only about twenty-four feet of that, conceding the other seventy feet.

Occasionally, teams would play a full-court press—that is, they would contest their opponent’s attempt to advance the ball up the court. But they would do it for only a few minutes at a time. It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, and Ranadivé thought that that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent’s end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that made them so good?

Basically, the more dimensions the game has, the less certain the outcome becomes and the more likely underdogs are to win.

In other words, adding battlefields increases the number of interactions (dimensions) and improves the chances of an upset. When the basketball team cited by Malcolm Gladwell above started a full-court press, it increased the number of dimensions and, in the process, substituted effort for skill.

The political scientist Ivan Arreguín-Toft recently looked at every war fought in the past two hundred years between strong and weak combatants in his book How the Weak Win Wars. The Goliaths, he found, won in 71.5 percent of the cases. That is a remarkable fact.

Arreguín-Toft was analyzing conflicts in which one side was at least ten times as powerful—in terms of armed might and population—as its opponent, and even in those lopsided contests, the underdog won almost a third of the time.

In the Biblical story of David and Goliath, David initially put on a coat of mail and a brass helmet and girded himself with a sword: he prepared to wage a conventional battle of swords against Goliath. But then he stopped. “I cannot walk in these, for I am unused to it,” he said (in Robert Alter’s translation), and picked up those five smooth stones.

Arreguín-Toft wondered: What happened when the underdogs likewise acknowledged their weakness and chose an unconventional strategy? He went back and re-analyzed his data. In those cases, David’s winning percentage went from 28.5 percent to 63.6 percent. When underdogs choose not to play by Goliath’s rules, they win, Arreguín-Toft concluded, “even when everything we think we know about power says they shouldn’t.”

Arreguín-Toft discovered another interesting point: over the past two centuries the weaker players have been winning at a higher and higher rate. For instance, strong actors prevailed in 88 percent of the conflicts from 1800 to 1849, but the rate dropped to very nearly 50 percent from 1950 to 1999.

After reviewing and dismissing a number of possible explanations for these findings, Arreguín-Toft suggests that an analysis of strategic interaction best explains the results. Specifically, when the strong and weak actors go toe-to-toe (effectively, a low n), the weak actor loses roughly 80 percent of the time because “there is nothing to mediate or deflect a strong player’s power advantage.”

In contrast, when the weak actors choose to compete on a different strategic basis (effectively increasing the size of n), they lose less than 40 percent of the time “because the weak refuse to engage where the strong actor has a power advantage.” Weak actors have been winning more conflicts over the years because they see and imitate the successful strategies of other actors and have come to the realization that refusing to fight on the strong actor’s terms improves their chances of victory. This might explain what’s happening in Afghanistan.

In Afghanistan, the number of battlefields (dimensions) is high. Even though substantially outnumbered, the Taliban have increased the odds of “winning” by changing the basis of competition, much as Afghan fighters did before them against the Soviet superpower. It also explains why the strategy employed by Ranadivé’s basketball team, while not guaranteed to win, certainly increased the odds.

Mauboussin provides another great example:

A more concrete example comes from Division I college football. Texas Tech has adopted a strategy that has allowed it to win over 70 percent of its games in recent years despite playing a highly competitive schedule. The team’s success is particularly remarkable since few of the players were highly recruited or considered “first-rate material” by the professional scouts. Based on personnel alone, the team was weaker than many of its opponents.

Knowing that employing a traditional game plan would put his weaker team at a marked disadvantage, the coach offset the talent gap by introducing more complexity into the team’s offense via a large number of formations. These formations change the geometry of the game, forcing opponents to change their defensive strategies. It also creates new matchups (i.e., increasing n, the number of battlefields) that the stronger teams have difficulty winning. For example, defensive linemen have to drop back to cover receivers. The team’s coach explained that “defensive linemen really aren’t much good at covering receivers. They aren’t built to run around that much. And when they do, you have a bunch of people on the other team doing things they don’t have much experience doing.” This approach is considered unusual in the generally conservative game of college football.

While it’s easy to recall the examples of underdogs who found winning strategies by increasing the number of competitive dimensions, it’s not as easy to recall those who employed similar dimension-enhancing strategies and failed.

Another interesting question is why teams that are likely to lose stick with conventional strategies, which only increase their odds of failure.

According to Mauboussin:

What the analysis also reveals, however, is that nearly 80 percent of the losers in asymmetric conflicts never switch strategies. Part of the reason players don’t switch is that there is a cost: when personnel training and equipment are geared toward one strategy, it’s often costly to shift to another. New strategies are also stymied by leaders or organizational traditions. This type of inertia appears to be a consequential impediment to organizations embracing the strategic actions implied by the Colonel Blotto game.

Teams have an incentive to maintain a conventional strategy, even when it increases their odds of losing. Malcolm Gladwell explores:

The consistent failure of underdogs in professional sports to even try something new suggests, to me, that there is something fundamentally wrong with the incentive structure of the leagues. I think, for example, that the idea of ranking draft picks in reverse order of finish — as much as it sounds “fair” — does untold damage to the game. You simply cannot have a system that rewards anyone, ever, for losing. Economists worry about this all the time, when they talk about “moral hazard.” Moral hazard is the idea that if you insure someone against risk, you will make risky behavior more likely. So if you always bail out the banks when they take absurd risks and do stupid things, they are going to keep on taking absurd risks and doing stupid things. Bailouts create moral hazard. Moral hazard is also why your health insurance has a co-pay. If your insurer paid for everything, the theory goes, it would encourage you to go to the doctor when you really don’t need to. No economist in his right mind would ever endorse the football and basketball drafts the way they are structured now. They are a moral hazard in spades. If you give me a lottery pick for being an atrocious GM, where’s my incentive not to be an atrocious GM?

Key takeaways:

  • Underdogs improve their chances of winning by changing the basis for competition and, if possible, creating more dimensions.
  • We often fail to switch strategies because of a combination of biases, including social proof, status quo, commitment and consistency, and confirmation.

Malcolm Gladwell is a staff writer at the New Yorker and the author of The Tipping Point: How Little Things Can Make a Big Difference, Blink, Outliers, and most recently, What the Dog Saw.

Michael Mauboussin is the author of More Than You Know: Finding Financial Wisdom in Unconventional Places and, more recently, Think Twice: Harnessing the Power of Counterintuition.

Can one person successfully play different roles that require different, and often competing, perspectives?

No, according to research by Max Bazerman, author of the best book on decision making I’ve ever read: Judgment in Managerial Decision Making.

Contrary to F. Scott Fitzgerald’s famous quote, “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function,” evidence suggests that even the most intelligent find it difficult to sustain opposing beliefs without the two influencing each other.

Why?

One reason is a bias from incentives. Another is bounded awareness. The auditor who desperately wants to retain a client’s business may have trouble adopting the perspective of a dispassionate referee when it comes time to prepare a formal evaluation of the client’s accounting practices.

* * * * *

In many situations, professionals are called upon to play dual roles that require different perspectives. For example, attorneys embroiled in pretrial negotiations may exaggerate their chances of winning in court to extract concessions from the other side. But when it comes time to advise the client on whether to accept a settlement offer, the client needs objective advice.

Professors, likewise, have to evaluate the performance of graduate students and provide them with both encouragement and criticism. But public criticism is less helpful when faculty serve as their students’ advocates in the job market. And, although auditors have a legal responsibility to judge the accuracy of their clients’ financial accounting, the way to win a client’s business is not by stressing one’s legal obligation to independence, but by emphasizing the helpfulness and accommodation one can provide.

Are these dual roles psychologically feasible? That is, can one person successfully play different roles that require different, and often competing, perspectives? No.

Abstract

This paper explores the psychology of conflict of interest by investigating how conflicting interests affect both public statements and private judgments. The results suggest that judgments are easily influenced by affiliation with interested partisans, and that this influence extends to judgments made with clear incentives for objectivity. The consistency we observe between public and private judgments indicates that participants believed their biased assessments. Our results suggest that the psychology of conflict of interest is at odds with the way economists and policy makers routinely think about the problem. We conclude by exploring implications of this finding for professional conduct and public policy.

Full Paper (PDF)
