Category: Decision Making

Too Many Decisions

The first thing you do in the morning is to make a decision. And those decisions pile up fast. Should I hit snooze? What clothes should I wear? What should I have for breakfast? What combination of choices from Starbucks will make my morning go smoother?

By the time you arrive at work, you’ve already made more decisions than most of our ancestors made in a day. Unfortunately, at least as far as the quality of your decisions is concerned, your day is just getting started.

Decisions take a lot of mental effort. And that’s a problem. Making choices reduces physical stamina, reduces persistence, reduces willpower, and even encourages procrastination.

John Tierney adapted part of his upcoming book, Willpower: Rediscovering the Greatest Human Strength, into a New York Times Magazine article: “Do You Suffer From Decision Fatigue?”

Our brains get tired of making decisions. The phenomenon has been dubbed “decision fatigue,” and it helps explain why, in the words of Tierney, “normally sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car.”

No matter how rational you are (or try to be), you can’t make decision after decision without paying a mental price. “It’s different,” Tierney writes, “from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy.”

The more choices you make, the harder they become. To save energy your brain starts to look for shortcuts. One shortcut is to be reckless and act impulsively (rather than rationally). The other shortcut is to do nothing, which saves as much energy as possible (and often creates bigger problems in the long run).

It turns out that glucose is a vital part of willpower. Tierney writes, “Your brain does not stop working when glucose is low. It stops doing some things and starts doing others. It responds more strongly to immediate rewards and pays less attention to long-term prospects.”

Glucose explains a lot. For instance, why people with phenomenally strong willpower in the rest of their lives struggle to lose weight. It also explains how someone can resist junk food all day but gorge on a bag of chips right before bed. We start the day with a clean slate and the best intentions. It’s fairly easy to resist fatty muffins at breakfast and skip the Snickers fix after lunch. But each of these small acts of resistance consumes glucose and lowers our willpower. Eventually we need to replenish it, and that requires glucose, which creates a catch-22: we need willpower to avoid eating, but to have willpower we need to eat.

Tierney continues, “when the brain’s regulatory powers weaken, frustrations seem more irritating than usual. Impulses to eat, drink, spend and say stupid things feel more powerful (and alcohol causes self-control to decline further)…ego-depleted humans become more likely to get into needless fights over turf.”

Although we have no way of knowing, it seems like a fairly safe bet that we make more decisions now than at any point in history. That is, we’re under more decision making strain and we’re starting to show cracks.

The internet and our ability to “multitask” isn’t helping, argues Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains: “A growing body of scientific evidence suggests that the Net, with its constant distractions and interruptions, is also turning us into scattered and superficial thinkers.” Carr argues that our continuously connected, constantly distracted lives (read: constantly making decisions) rob us of the opportunity for deep thinking, the very kind of thinking that difficult decisions require. By making thousands of trivial decisions every day, we rob ourselves of the ability to make the contemplative ones.

There are ways to improve our ability to make better decisions. Social psychologist Roy Baumeister has done research showing that people with the best self-control are the ones who structure their lives to conserve willpower. “They don’t,” Tierney suggests, “schedule endless back-to-back meetings. They avoid temptations like all-you-can-eat buffets, and they establish habits that eliminate the mental effort of making choices. Instead of deciding every morning whether or not to force themselves to exercise, they set up regular appointments to work out with a friend. Instead of counting on willpower to remain robust all day, they conserve it so that it’s available for emergencies and important decisions.” Wise advice we should all follow.

Organizations should start thinking carefully about how their employees actually end up spending their time and what they “waste” their precious mental energy on. If they’re filling out forms, trudging through a bureaucratic morass, or attending more than a few meetings a day, they’re likely using their mental energy on things that add little value to the organization.

Still Curious? John Tierney wrote a book about willpower and decision fatigue, Willpower: Rediscovering the Greatest Human Strength.

Promoting People In Organizations

In their 1978 paper “Performance Sampling in Social Matches,” researchers March and March examined what performance sampling implies for careers in organizations. They reached some interesting conclusions that matter to anyone working in one.

Considerable evidence exists documenting that individuals confronted with problems requiring the estimation of proportions act as though sample size were substantially irrelevant to the reliability of their estimates. We do this in hiring all the time. Yet we know that sample size matters.

On how this cognitive bias affects hiring, March and March offer some good insights, including the false record effect, the hero effect, and the disappointment effect.

False Record Effect

A group of managers of identical (moderate) ability will show considerable variation in their performance records in the short run. Some will be found at one end of the distribution and will be viewed as outstanding; others will be at the other end and will be viewed as ineffective. The longer a manager stays in a job, the less the probable difference between the observed record of performance and actual ability. Time on the job increases the expected sample of observations, reduces expected sampling error, and thus reduces the chance that the manager (of moderate ability) will either be promoted or exit.
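The false record effect is easy to reproduce with a quick simulation. This is just an illustrative sketch with made-up numbers (equal ability, a 50% yearly success rate, and two tenure lengths): identically able managers judged on short records look wildly different from one another, while long records converge on true ability.

```python
import random

random.seed(42)

# Managers of identical (moderate) ability: each year is a "success"
# with the same probability for everyone, so any difference between
# their records is pure sampling noise.
def observed_record(years, p_success=0.5):
    return sum(random.random() < p_success for _ in range(years)) / years

short_records = [observed_record(years=2) for _ in range(1000)]
long_records = [observed_record(years=20) for _ in range(1000)]

# With short tenures, records span the whole range: some managers look
# outstanding, others ineffective. With long tenures, records cluster
# near the true ability of 0.5.
print(max(short_records) - min(short_records))  # wide spread
print(max(long_records) - min(long_records))    # much narrower spread
```

None of these managers is actually better than any other; the "stars" in the short-tenure group are simply the lucky draws.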

Hero Effect

Within a group of managers of varying abilities, the faster the rate of promotion, the less likely it is to be justified. Performance records are produced by a combination of underlying ability and sampling variation. Managers who have good records are more likely to have high ability than managers who have poor records, but the reliability of the differentiation is small when records are short.

Disappointment Effect

On the average, new managers will be a disappointment. The performance records by which managers are evaluated are subject to sampling error. Since a manager is promoted to a new job on the basis of a good previous record, the proportion of promoted managers whose past records are better than their abilities will be greater than the proportion whose past records are poorer. As a result, on the average, managers will do less well in their new jobs than they did in their old ones, and observers will come to believe that higher level jobs are more difficult than lower level ones, even if they are not.

…The present results reinforce the idea that indistinguishability among managers is a joint property of the individuals being evaluated and the process by which they are evaluated. Performance sampling models show how careers may be the consequences of erroneous interpretations of variations in performance produced by equivalent managers. But they also indicate that the same pattern of careers could be the consequence of unreliable evaluation of managers who do, in fact, differ, or of managers who do, in fact, learn over the course of their experience.
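The disappointment effect is regression to the mean in disguise, and a small simulation makes it concrete. This is a sketch with invented distributions (ability and noise both standard normal, with the top tenth of records promoted):

```python
import random

random.seed(0)

n = 10_000
# True ability varies across managers; each observed performance
# figure is ability plus independent luck.
ability = [random.gauss(0, 1) for _ in range(n)]
past_record = [a + random.gauss(0, 1) for a in ability]
future_performance = [a + random.gauss(0, 1) for a in ability]

# Promote the top 10% by past record.
cutoff = sorted(past_record)[int(0.9 * n)]
promoted = [i for i in range(n) if past_record[i] >= cutoff]

past_avg = sum(past_record[i] for i in promoted) / len(promoted)
future_avg = sum(future_performance[i] for i in promoted) / len(promoted)

# The promoted group performs worse in the new job than the record
# that earned the promotion: part of that record was luck, and the
# luck doesn't repeat.
print(past_avg > future_avg)
```

The promoted managers are still better than average; they just can't live up to records that were partly luck, so on average they disappoint.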

But hold on a second before you stop promoting new managers (who, by definition, have a limited sample size).

I’m not sure that sample size alone is the right way to think about this.

Consider two people, Manager A and Manager B, who are up for promotion. Manager A has 10 years of experience and is an “all-star” (that is, great performance with little variation across observations). Manager B, on the other hand, has only 5 years of experience but has shown a lot of variance in performance.

If you had to hire someone, you’d likely pick A. But it’s important not to misinterpret the results of March and March; let’s dig a little deeper.

What if we add one more variable to our two managers?

Manager A’s job has been “easy” whereas Manager B took a very “tough” assignment.

With this in mind, it seems reasonable to conclude that Manager B’s variance in performance could be explained by the difficulty of their task. This could also explain the lack of variance in Manager A’s performance.

Some jobs are tougher than others.

If you don’t factor in degree of difficulty, you’re missing something big, and you’re sending a message to your workforce that discourages people from taking difficult assignments.

Measuring performance over a meaningful sample size is the key to distinguishing luck from skill. When in doubt, go with the person who has excelled across a wider range of difficulty.

Just for you: How Scarcity Factors Into Decisions

Rather than invest the time and effort necessary to ponder the pluses and minuses of every choice, we tend to rely on quick heuristics. These rules of thumb save cognitive processing and help us navigate a world full of choices. But our tendency to make near-automatic decisions exposes us to exploitation by individuals who understand how heuristics work.

In the study below the authors were interested in what role the scarcity principle—the notion that the less available an opportunity appears, the more valuable it becomes—plays in decision compliance and heuristics.

Previously, there were two main ways that we thought scarcity played a role in compliance situations.

First, something (a product, for example) can be described as being in short supply or a limited edition. This is why sales often say “while supplies last,” which gives the reader a slight nudge to purchase right now before missing out. Second, scarcity affects compliance when an opportunity is available for a limited time (e.g., “this weekend only”). While these principles don’t ensure that someone will purchase a product, they do increase the odds of a purchase.

The authors wanted to test another way that scarcity might factor into our decisions. They hypothesized that we naturally rely on a rule of thumb (heuristic) that says one should take advantage of a unique opportunity. Specifically, “the rule says that we should take advantage of opportunities that few others have access to. For example, if I believe I can purchase tickets to a play at a low price that is unavailable to most people, I am more likely to buy the tickets than if I believe many people have access to this same price.”

We’ve all seen examples of this type of marketing already through “friends and family” and “not available to the public” events. But do they work?

The authors certainly think so:

individuals are more likely to comply with a request when they believe the request represents a unique opportunity not available to most people. The effect appears to operate independently of a limited supply effect and is not the result of a perceived need to help the requester. Moreover, the effect is found even when the opportunity is determined purely by chance, suggesting that individuals are not responding to a sense that they have somehow earned the opportunity. Rather, the unique opportunity effect appears to be the result of heuristic processing, i.e., people relying on a rule of thumb that says they should grab an opportunity available to few others.

Source: Burger, J. M., &amp; Caldwell, D. F. (2011). When opportunity knocks: The effect of a perceived unique opportunity on compliance. Group Processes &amp; Intergroup Relations.

Abstract:

Four studies examined the effect of a perceived unique opportunity on compliance. In all four studies, participants who believed they had an opportunity available to few others were more likely to agree with a request than participants who believed the opportunity was widely available or participants who received no opportunity information. We attribute the effect to a widely held heuristic that one should take advantage of unique opportunities. Study results demonstrated that people respond to a perceived unique opportunity even when supplies are not limited and when the opportunity is the result of pure chance. The results of a mediation analysis supported the interpretation that the perceived uniqueness of the opportunity underlies the effect.

A Simple Checklist to Improve Decisions

We owe thanks to the publishing industry. Its ability to take a concept and fill an entire category with a shotgun approach is the reason more people are talking about biases.

Unfortunately, talk alone will not eliminate biases, but it is possible to take steps to counteract them. Reducing biases can make a huge difference in the quality of any decision, and it is easier than you think.

In a recent article for Harvard Business Review, Daniel Kahneman and co-authors describe a simple way to detect bias and minimize its effects in the most common type of decision people make: determining whether to accept, reject, or pass on a recommendation.

The Munger two-step process for making decisions is a more complete framework, but Kahneman’s approach is a good way to help reduce biases in our decision-making.

If you’re short on time here is a simple checklist that will get you started on the path towards improving your decisions:

Preliminary Questions: Ask yourself

1. Check for Self-interested Biases

  • Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?
  • Review the proposal with extra care, especially for overoptimism.

2. Check for the Affect Heuristic

  • Has the team fallen in love with its proposal?
  • Rigorously apply all the quality controls on the checklist.

3. Check for Groupthink

  • Were there dissenting opinions within the team?
  • Were they explored adequately?
  • Solicit dissenting views, discreetly if necessary.

Challenge Questions: Ask the recommenders

4. Check for Saliency Bias

  • Could the diagnosis be overly influenced by an analogy to a memorable success?
  • Ask for more analogies, and rigorously analyze their similarity to the current situation.

5. Check for Confirmation Bias

  • Are credible alternatives included along with the recommendation?
  • Request additional options.

6. Check for Availability Bias

  • If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?
  • Use checklists of the data needed for each kind of decision.

7. Check for Anchoring Bias

  • Do you know where the numbers came from?
  • Can there be unsubstantiated numbers? Extrapolation from history? A motivation to use a certain anchor?
  • Reanchor with figures generated by other models or benchmarks, and request new analysis.

8. Check for Halo Effect

  • Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?
  • Eliminate false inferences, and ask the team to seek additional comparable examples.

9. Check for Sunk-Cost Fallacy, Endowment Effect

  • Are the recommenders overly attached to a history of past decisions?
  • Consider the issue as if you were a new CEO.

Evaluation Questions: Ask about the proposal

10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect

  • Is the base case overly optimistic?
  • Have the team build a case taking an outside view; use war games.

11. Check for Disaster Neglect

  • Is the worst case bad enough?
  • Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.

12. Check for Loss Aversion

  • Is the recommending team overly cautious?
  • Realign incentives to share responsibility for the risk or to remove risk.

If you’re looking to dramatically improve your decision making here is a great list of books to get started:

Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein

Think Twice: Harnessing the Power of Counterintuition by Michael J. Mauboussin

Think Again: Why Good Leaders Make Bad Decisions and How to Keep It from Happening to You by Sydney Finkelstein, Jo Whitehead, and Andrew Campbell

Predictably Irrational: The Hidden Forces That Shape Our Decisions by Dan Ariely

Thinking, Fast and Slow by Daniel Kahneman

Judgment and Managerial Decision Making by Max Bazerman

Albert Bernstein on the Dinosaur Brain and How To Make Bad Decisions

I enjoyed reading Dinosaur Brains: Dealing with All Those Impossible People at Work by Albert Bernstein.

Near the end of the book, Bernstein playfully illuminates how too many decisions get made.

  1. Take an idea from some authority figure, maybe your boss, or an author;
  2. Tell everyone this idea is the basis for all the change you’re going to make;
  3. Do things the way you’ve always done them;
  4. If something changes, take credit for it. If something bad happens, point out that this just goes to show that the old way of doing things was better.

Sound familiar? It should.

Another approach, let’s call it the more rational approach, might look something like this:

  1. Understand the problem, or set a goal;
  2. Establish criteria (how will you know the problem is solved or you’ve reached your goal);
  3. Generate alternatives;
  4. Measure alternatives versus criteria and try a few of them;
  5. Evaluate;
  6. Choose the best solution.

This approach probably won’t get you promoted, but it will increase the odds of making better decisions.
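Steps 2 through 4 of the rational approach amount to a weighted decision matrix. Here is a minimal sketch; the criteria, weights, and alternatives are invented purely for illustration:

```python
# Step 2: establish criteria, with weights that sum to 1.
# Higher scores are better on every criterion.
criteria_weights = {"cost": 0.5, "speed": 0.3, "risk": 0.2}

# Step 3: generate alternatives, scored 1-10 against each criterion.
alternatives = {
    "option_a": {"cost": 7, "speed": 5, "risk": 8},
    "option_b": {"cost": 4, "speed": 9, "risk": 6},
}

# Step 4: measure alternatives versus criteria.
def weighted_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(
    alternatives,
    key=lambda a: weighted_score(alternatives[a], criteria_weights),
    reverse=True,
)
print(ranked)  # best alternative first
```

The point isn’t the arithmetic; it’s that writing the criteria and weights down before scoring forces the evaluation step instead of letting the dinosaur brain skip straight to a favorite.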

The Art and Science of High-Stakes Decisions

How can anyone make rational decisions in a world where knowledge is limited, time is pressing, and deep thought is often unattainable?

Some decisions are more difficult than others. Yet we’re often forced to make all of our decisions the way we make easy ones: on autopilot. 

One particular difficulty we have is with making decisions that help us avoid threats which are low probability, but high stakes. We are least prepared to make the decisions that matter the most.

Sure we can pick the right brand of peanut butter with ease. But life offers few opportunities to prepare for the type of decisions which could have catastrophic consequences. The kinds of decisions which could wipe us out if we mess up.

Shortly after 9/11, some well-known academics got together to discuss¹ how people make choices involving a low and ambiguous probability of a high-stakes loss.

High-stakes decisions involve two distinctive properties:

1) the possibility of a large loss (financial or emotional), and

2) high costs to reverse the decision once it’s made.

More importantly, these professors wanted to determine whether prescriptive guidelines for improving the decision-making process could be created, in an effort to help people make better decisions.

Whether we’re buying something at the grocery store or deciding whether to purchase earthquake insurance, we operate in the same way. The possibility of catastrophic outcomes does little to reduce our reliance on heuristics (rules of thumb). These serve us well most of the time, when the cost of an error is low, but they can be a poor technique for high-stakes decisions.

In order to make better high-stakes decisions, we need a better understanding of why we generally make poor decisions.

Here are several causes.

Poor understanding of probability.

Several studies show that people either use probability information insufficiently when it is made available to them or ignore it altogether. In one study, 78% of subjects failed to seek out probability information when choosing among several risky managerial decisions.

In the context of high-stakes decisions, the probability of a loss-causing event may seem so low that organizations and individuals consider it not worth worrying about. In doing so, they effectively treat the probability as zero, or close to it.

An excessive focus on short time horizons.

Many high-stakes decisions are not obvious to the decision-maker. In part, this is because people tend to focus on the immediate consequences and not the long-term consequences.

A CEO near retirement has incentives to skimp on insurance to report slightly higher profits before leaving (shareholders are unaware of the increased risk and appreciate the increased profits). Governments tend to under-invest in less visible things like infrastructure because they have short election cycles. The long-term consequences of short-term thinking can be disastrous.

The focus on the short term is one of the most widely documented failings of human decision making. People have difficulty weighing the future consequences of current actions over long periods of time. Garrett Hardin, the author of Filters Against Folly, suggests we look at things through three filters: literacy, numeracy, and ecolacy. In ecolacy, the key question is “And then what?” Asking it helps us avoid focusing solely on the short term.

Excessive attention to what’s available.

Decisions that require difficult trade-offs between attributes, or that are ambiguous about what a right answer looks like, often lead people to resolve choices by focusing on the information most easily brought to mind. And some things are difficult to bring to mind.

Constant exposure to low-probability events that never materialize makes them less available to the mind, leaves us less concerned than the risk warrants, and “proves” that our past decisions to ignore them were right.

People refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value. Kunreuther et al. (1993) suggest underreaction to threats of flooding may arise from “the inability of individuals to conceptualize floods that have never occurred… Men on flood plains appear to be very much prisoners of their experience… Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned.” Paradoxically, we feel more secure even as the risk may have increased.

Distortions under stress.

Most high-stakes decisions will be made under perceived (or real) stress. A large number of empirical studies find that stress focuses decision-makers on a selective set of cues when evaluating options and leads to greater reliance on simplifying heuristics. When we’re stressed, we’re less likely to think things through.

Over-reliance on social norms.

Most individuals have little experience with high-stakes decisions and are highly uncertain about how to resolve them (procedural uncertainty). In such cases—and combined with stress—the natural course of action is to mimic the behavior of others or follow established social norms. This is based on the psychological desire to fail conventionally.

The tendency to prefer the status quo.

What happens when people are presented with difficult choices and no obvious right answer? We tend to prefer making no decision at all; we default to the norm.

In high-stakes decisions, many options are better than the status quo, but they require trade-offs. Yet when faced with decisions that involve life-and-death trade-offs, people frequently remark, “I’d rather not think about it.”

Failures to learn.

Although individuals and organizations are eager to derive intelligence from experience, the inferences stemming from that eagerness are often misguided. The problems lie partly in errors in how people think, but even more so in properties of experience that confound learning from it. Experience may possibly be the best teacher, but it is not a particularly good teacher.

As an illustration, one study found that participants in an earthquake simulation tended to over-invest in mitigation when it was normatively ineffective but under-invest when it was normatively effective. The reason was a misinterpretation of feedback: when mitigation was ineffective, respondents attributed the persistence of damage to not having invested enough; by contrast, when it was effective, they attributed the absence of damage to a belief that earthquakes posed limited damage risk.

Gresham’s law of decision making.

Over time, bad decisions will tend to drive out good decisions in an organization.

Improving.

What can you do to improve your decision-making?

A few things:

1) learn more about judgment and decision making;

2) encourage decision makers to see events through alternative frames, such as gains versus losses and changes in the status quo;

3) adjust the time frame of decisions: while the probability of an earthquake at your plant may be 1/100 in any given year, the probability over the 25-year life of the plant is roughly 1/5; and

4) read Farnam Street!
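The arithmetic behind point 3 is worth making explicit. Assuming each year is independent, the chance of at least one event over n years is 1 - (1 - p)^n:

```python
# Reframe a small annual risk over a longer horizon, assuming
# independent years with the same annual probability.
def lifetime_probability(p_annual, years):
    return 1 - (1 - p_annual) ** years

p = lifetime_probability(p_annual=1 / 100, years=25)
print(round(p, 3))  # ~0.222, roughly a 1-in-5 chance
```

A 1-in-100 annual risk sounds ignorable; a 1-in-5 lifetime risk does not, even though they describe the same plant.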

Footnotes