Tag: Max Bazerman

3 Things Everyone Should Know About the Availability Heuristic

There are 3 things you should know about the availability heuristic:

  1. We often misjudge the frequency and magnitude of events that have happened recently.
  2. This happens, in part, because of the limitations on memory.
  3. We remember things better when they come in a vivid narrative.


There are two biases emanating from the availability heuristic (a.k.a. the availability bias): Ease of recall and retrievability.

Because of the availability bias, our perceptions of risk may be in error and we might worry about the wrong risks. This can have disastrous impacts.

Ease of recall refers to our tendency to judge events that are more easily recalled from memory as occurring with higher probability.

The availability heuristic distorts our understanding of real risks.

“The attention which we lend to an experience is proportional to its vivid or interesting character; and it is a notorious fact that what interests us most vividly at the time is, other things equal, what we remember best.”

— William James

When we make decisions we tend to be swayed by what we remember. What we remember is influenced by many things, including beliefs, expectations, emotions, and feelings, as well as things like frequency of exposure. Media coverage (e.g., Internet, radio, television) makes a big difference. When rare events occur they become very visible to us because they receive heavy coverage in the media. This means we are more likely to recall them, especially in the immediate aftermath of the event. However, recalling an event and estimating its real probability are two different things. If you’re in a car accident, for example, you are likely to rate the odds of getting into another car accident much higher than base rates would indicate.

Retrievability suggests that we are biased in assessments of frequency in part because of our memory structure limitations and our search mechanisms. It’s the way we remember that matters.

The retrievability and ease of recall biases indicate that the availability bias can substantially and unconsciously influence our judgment. We too easily assume that our recollections are representative and true and discount events that are outside of our immediate memory.


In Thinking, Fast and Slow, Kahneman writes:

People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media.


Nobel Prize-winning social scientist and artificial intelligence pioneer Herbert Simon wrote in Models of My Life:

I soon learned that one wins awards mainly for winning awards: an example of what Bob Merton calls the Matthew Effect. It is akin also to the phenomenon known in politics as “availability,” or name recognition. Once one becomes sufficiently well known, one’s name surfaces automatically as soon as an award committee assembles.

* * *

According to Harvard professor Max Bazerman:

Many life decisions are affected by the vividness of information. Although most people recognize that AIDS is a devastating disease, many individuals ignore clear data about how to avoid contracting AIDS. In the fall of 1991, however, sexual behavior in Dallas was dramatically affected by one vivid piece of data that may or may not have been true. In a chilling interview, a Dallas woman calling herself C.J. claimed she had AIDS and was trying to spread the disease out of revenge against the man who had infected her. After this vivid interview made the local news, attendance at Dallas AIDS seminars increased dramatically. Although C.J.’s possible actions were a legitimate cause for concern, it is clear that most of the health risks related to AIDS are not a result of one woman’s actions. There are many more important reasons to be concerned about AIDS. However, C.J.’s vivid report had a more substantial effect on many people’s behavior than the mountains of data available. The availability heuristic describes the inferences we make about event commonness based on the ease with which we can remember instances of that event.

While this example of vividness may seem fairly benign, it is not difficult to see how the availability bias could lead managers to make potentially destructive workplace decisions. The following came from the experience of one of our MBA students: As a purchasing agent, he had to select one of several possible suppliers. He chose the firm whose name was most familiar to him. He later found out that the salience of the name resulted from recent adverse publicity concerning the firm’s extortion of funds from client companies!

Managers conducting performance appraisals often fall victim to the availability heuristic. Working from memory, vivid instances of an employee’s behavior (either positive or negative) will be most easily recalled from memory, will appear more numerous than commonplace incidents, and will therefore be weighted more heavily in the performance appraisals. The recency of events is also a factor: Managers give more weight to performance during the three months prior to the evaluation than to the previous nine months of the evaluation period because it is more available in memory.

* * *

The availability bias has numerous implications for investors.

A study by Karlsson, Loewenstein, and Ariely (2008) showed that people are more likely to purchase insurance to protect themselves after a natural disaster they have just experienced than they are to purchase insurance against that type of disaster before it happens.

Bazerman adds:

This pattern may be sensible for some types of risks. After all, the experience of surviving a hurricane may offer solid evidence that your property is more vulnerable to hurricanes than you had thought or that climate change is increasing your vulnerability to hurricanes.

Robyn M. Dawes, in his book Everyday Irrationality, says:

What is a little less obvious is that people can make judgments of the ease with which instances can come to mind without actually recalling specific instances. We know, for example, whether we can recall the presidents of the United States–or rather how well we can recall their names; moreover, we know at which periods of history we are better at recalling them than at which other periods. We can make judgments without actually listing in our minds the names of the specific presidents.

This recall of ease of creating instances is not limited to actual experience, but extends to hypothetical experience as well. For example, subjects are asked to consider how many subcommittees of two people can be formed from a committee of eight, and either the same or other subjects are asked to estimate how many subcommittees of six can be formed from a committee of eight people. It is much easier to think about pairs of people than to think about sets of six people, with the result that the estimate of pairs tends to be much higher than the estimate of subsets of six. In point of logic, however, the number of subsets of two is identical to that of six; the formation of a particular subset of two people automatically involves the formation of a particular subset consisting of the remaining six. Because these unique subsets are paired together, there are the same number of each.

This availability to the imagination also creates a particularly striking irrationality, which can be termed the conjunction fallacy or compound probability fallacy. Often combinations of events or entities are easier to think about than their components, because the combination might make sense whereas the individual component does not. A classic example is that of a hypothetical woman named Linda who is said to have been a social activist majoring in philosophy as a college undergraduate. What is the probability that at age thirty she is a bank teller? Subjects judge the probability as very unlikely. But when asked whether she might be a bank teller active in a feminist movement, subjects judge this combination to be more likely than for her to be a bank teller.
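Dawes’s subcommittee point can be checked directly. The following Python sketch (illustrative only) enumerates both kinds of subcommittees and confirms the counts are identical:

```python
from itertools import combinations
from math import comb

members = range(8)  # a committee of eight people

# Enumerate every subcommittee of two and every subcommittee of six
pairs = list(combinations(members, 2))
sixes = list(combinations(members, 6))

# Choosing 2 people automatically determines the remaining 6,
# so the two counts must match: C(8, 2) == C(8, 6) == 28
assert len(pairs) == len(sixes) == comb(8, 2) == 28
```

Even though the counts are provably equal, pairs feel far more numerous because they are easier to imagine.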

* * *

Retrievability (based on memory structures)

We are better at retrieving words from memory by their initial letter than by a letter in a random position, such as the third (Tversky & Kahneman, 1973).

In 1984 Tversky and Kahneman demonstrated the retrievability bias again when they asked participants in their study to estimate the frequency of seven-letter words with the letter “n” in the sixth position. Their participants estimated such words to be less common than seven-letter words ending in the more memorable “ing”. This response is incorrect: every seven-letter word ending in “ing” also has an “n” in the sixth position. However, it’s much easier to recall seven-letter words ending in “ing”. As we saw with Dawes above, this is another example of the conjunction fallacy.
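The subset relation behind this word example is easy to verify. A short Python sketch using a small, made-up sample list (any English word list would show the same relation):

```python
# Illustrative sample; "cannons" has "n" sixth but does not end in "ing"
words = ["running", "jumping", "walking", "morning", "cannons", "pannier"]

ing_words = {w for w in words if len(w) == 7 and w.endswith("ing")}
n_sixth = {w for w in words if len(w) == 7 and w[5] == "n"}

# Every seven-letter "-ing" word necessarily has "n" in the sixth position,
# so the "-ing" words are a subset of the "n"-in-sixth-position words.
assert ing_words <= n_sixth
assert "cannons" in n_sixth and "cannons" not in ing_words
```

The “-ing” words can never outnumber the “n”-in-sixth-position words; it only feels that way because the “-ing” ending makes them easier to retrieve.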

Retail locations are chosen based on search as well, which explains why gas stations and retail stores are often “clumped” together. Consumers learn the location of a product and organize their minds accordingly. While you may not remember the names of all three gas stations on the same corner, your mind tells you that is where to go to find gas. Each station, all else equal, then has a one-in-three shot at your business, which is much better than the odds for gas stations you don’t visit because their location doesn’t resonate with your mind’s search. To maximize traffic, stores must find locations that consumers associate with a product.

* * *

Exposure Effect

People tend to develop a preference for things because they are familiar with them. This is called the exposure effect. According to Titchener (1910), the exposure effect leads people to experience a “glow or warmth, a sense of ownership, a feeling of intimacy.”

The exposure effect applies only to things that are perceived as neutral to positive. If you are repeatedly exposed to something perceived as a negative stimulus, it may in fact amplify negative feelings. For example, when someone is playing loud music, you tend to have a lot of patience at first. However, as time goes on you get increasingly aggravated as your exposure to the stimulus increases.

The more we are exposed to something the easier it is to recall in our minds. The exposure effect influences us in many ways. Think about brands, stocks, songs, companies, and even the old saying “the devil you know.”

* * *

The Von Restorff Effect

“One of these things doesn’t belong,” can accurately summarize the Von Restorff Effect (also known as the isolation effect and novelty effect). In our minds, things that stand out are more likely to be remembered and recalled because we give increased attention to distinctive items in a set.

For example, if I asked you to remember the following sequence of characters “RTASDT9RTGS” I suspect the most common character remembered would be the “9” because it stands out and thus your mind gives it more attention.

The Von Restorff Effect leads us to vivid evidence.

* * *

Vivid Evidence

According to William James in The Principles of Psychology:

An impression may be so exciting emotionally as almost to leave a scar upon the cerebral tissues; and thus originates a pathological delusion. For example: “A woman attacked by robbers takes all the men whom she sees, even her own son, for brigands bent on killing her. Another woman sees her child run over by a horse; no amount of reasoning, not even the sight of the living child, will persuade her that he is not killed.”

M. Taine wrote:

If we compare different sensations, images, or ideas, we find that their aptitudes for revival are not equal. A large number of them are obliterated, and never reappear throughout life; for instance, I drove through Paris a day or two ago, and though I saw plainly some sixty or eighty new faces, I cannot now recall any one of them; some extraordinary circumstance, a fit of delirium, or the excitement of hashish would be necessary to give me a chance at revival. On the other hand, there are sensations with a force of revival which nothing destroys or decreases. Though, as a rule, time weakens and impairs our strongest sensations, these reappear entire and intense, without having lost a particle of their detail, or any degree of their force. M. Brierre de Boismont, having suffered when a child from a disease of the scalp, asserts that ‘after fifty-five years have elapsed he can still feel his hair pulled out under the treatment of the ‘skull-cap.’–For my own part, after thirty years, I remember feature for feature the appearance of the theater to which I was taken for the first time. From the third row of boxes, the body of the theater appeared to me an immense well, red and flaming, swarming with heads; below, on the right, on a narrow floor, two men and a woman entered, went out, and re-entered, made gestures, and seemed to me like lively dwarfs: to my great surprise one of these dwarfs fell on his knees, kissed the lady’s hand, then hid behind a screen: the other, who was coming in, seemed angry, and raised his arm. I was then seven, I could understand nothing of what was going on; but the well of crimson velvet was so crowded, and bright, that after a quarter of an hour I was, as it were, intoxicated, and fell asleep.

Every one of us may find similar recollections in his memory, and may distinguish them in a common character. The primitive impression has been accompanied by an extraordinary degree of attention, either as being horrible or delightful, or as being new, surprising, and out of proportion to the ordinary run of life; this it is we express by saying that we have been strongly impressed; that we were absorbed, that we could not think of anything else; that our other sensations were effaced; that we were pursued all the next day by the resulting image; that it beset us, that we could not drive it away; that all distractions were feeble beside it. It is by force of this disproportion that impressions of childhood are so persistent; the mind being quite fresh, ordinary objects and events are surprising…

Whatever may be the kind of attention, voluntary or involuntary, it always acts alike; the image of an object or event is capable of revival, and of complete revival, in proportion to the degree of attention with which we have considered the object or event. We put this rule into practice at every moment in ordinary life.

An example from Freeman Dyson:

A striking example of availability bias is the fact that sharks save the lives of swimmers. Careful analysis of deaths in the ocean near San Diego shows that on average, the death of each swimmer killed by a shark saves the lives of ten others. Every time a swimmer is killed, the number of deaths by drowning goes down for a few years and then returns to the normal level. The effect occurs because reports of death by shark attack are remembered more vividly than reports of drownings.

Availability Bias is a Mental Model in the Farnam Street Mental Model Index

Ethical Breakdowns: Why Good People often Let Bad Things Happen

When Charlie Munger recommended reading Max Bazerman’s Judgment in Managerial Decision Making I had never heard of the HBS professor. A lot of reading later, I’m a huge fan.

In the HBR article below Bazerman covers some of the ground from his new book Blind Spots (see my notes).

These days, many of us are instructed to make decisions from a business perspective (thereby reducing or eliminating the ethical implications of our decisions). The Ford Pinto example below is very telling:

Consider an infamous case that, when it broke, had all the earmarks of conscious top-down corruption. The Ford Pinto, a compact car produced during the 1970s, became notorious for its tendency in rear-end collisions to leak fuel and explode into flames. More than two dozen people were killed or injured in Pinto fires before the company issued a recall to correct the problem. Scrutiny of the decision process behind the model’s launch revealed that under intense competition from Volkswagen and other small-car manufacturers, Ford had rushed the Pinto into production. Engineers had discovered the potential danger of ruptured fuel tanks in preproduction crash tests, but the assembly line was ready to go, and the company’s leaders decided to proceed. Many saw the decision as evidence of the callousness, greed, and mendacity of Ford’s leaders—in short, their deep unethicality.

But looking at their decision through a modern lens—one that takes into account a growing understanding of how cognitive biases distort ethical decision making—we come to a different conclusion. We suspect that few if any of the executives involved in the Pinto decision believed that they were making an unethical choice. Why? Apparently because they thought of it as purely a business decision rather than an ethical one.

Taking an approach heralded as rational in most business school curricula, they conducted a formal cost-benefit analysis—putting dollar amounts on a redesign, potential lawsuits, and even lives—and determined that it would be cheaper to pay off lawsuits than to make the repair. That methodical process colored how they viewed and made their choice. The moral dimension was not part of the equation. Such “ethical fading,” a phenomenon first described by Ann Tenbrunsel and her colleague David Messick, takes ethics out of consideration and even increases unconscious unethical behavior.

Continue Reading at HBR.

I recommend you purchase Judgment in Managerial Decision Making and Blind Spots.

Can one person successfully play different roles that require different, and often competing, perspectives?

No, according to research by Max Bazerman, author of the best book on decision making I’ve ever read: Judgment in Managerial Decision Making.

Contrary to F. Scott Fitzgerald’s famous quote, “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function,” evidence suggests that even the most intelligent find it difficult to sustain opposing beliefs without the two influencing each other.


One reason is a bias from incentives. Another is bounded awareness. The auditor who desperately wants to retain a client’s business may have trouble adopting the perspective of a dispassionate referee when it comes time to prepare a formal evaluation of the client’s accounting practices.

* * *

In many situations, professionals are called upon to play dual roles that require different perspectives. For example, attorneys embroiled in pretrial negotiations may exaggerate their chances of winning in court to extract concessions from the other side. But when it comes time to advise the client on whether to accept a settlement offer, the client needs objective advice.

Professors, likewise, have to evaluate the performance of graduate students and provide them with both encouragement and criticism. But public criticism is less helpful when faculty serve as their students’ advocates in the job market. And, although auditors have a legal responsibility to judge the accuracy of their clients’ financial accounting, the way to win a client’s business is not by stressing one’s legal obligation to independence, but by emphasizing the helpfulness and accommodation one can provide.

Are these dual roles psychologically feasible? That is, can one person successfully play different roles that require different, and often competing, perspectives? No.


This paper explores the psychology of conflict of interest by investigating how conflicting interests affect both public statements and private judgments. The results suggest that judgments are easily influenced by affiliation with interested partisans, and that this influence extends to judgments made with clear incentives for objectivity. The consistency we observe between public and private judgments indicates that participants believed their biased assessments. Our results suggest that the psychology of conflict of interest is at odds with the way economists and policy makers routinely think about the problem. We conclude by exploring implications of this finding for professional conduct and public policy.

Full Paper (PDF)


The Anatomy of a Decision: An Introduction to Decision Making

An Introduction to Decision Making

“The only proven way to raise your odds of making a good decision is to learn to use a good decision-making process—one that can get you the best solution with a minimal loss of time, energy, money, and composure.”

— John Hammond


This is an introduction to decision making.

A good decision-making process can literally change the world.

Consider the following example from Predictable Surprises: In 1962, when spy planes spotted Soviet missiles in Cuba, U.S. military leaders urged President Kennedy to authorize an immediate attack. Fresh from the bruising failure of the Bay of Pigs, Kennedy instead set up a structured decision-making process to evaluate his options. In a precursor of the Devil’s Advocacy method, Kennedy established two groups, each including government officials and outside experts, to develop and evaluate the two main options–attack Cuba or set up a blockade to prevent more missiles from reaching its shores. Based on the groups’ analysis and debate, Kennedy decided to establish a blockade. The Soviets backed down, and nuclear war was averted. Recently available documents suggest that if the United States had invaded Cuba the consequences would have been catastrophic: Soviet missiles that had not been located by U.S. intelligence could still have struck several U.S. cities.

The concept of a decision-making process can be found early in the history of thought. Decisions should be the result of rational and deliberate reasoning. Plato argued that human knowledge can be derived on the basis of reason alone, using deduction and self-evident propositions. Aristotle formalized logic with proofs by which someone could reasonably determine whether a conclusion was true or false. However, as we will discover, not all decisions are perfectly rational. Often we let our System 1 thinking–intuition–make decisions for us. Our intuition is based on long-term memory, acquired primarily over years of learning, and allows our mind to process and judge without conscious awareness. System 1 thinking, however, does not always lead to optimal solutions and often tricks our mind into thinking that consequences and second-order effects are either non-existent or less probable than reality would indicate.

In Predictable Surprises Max Bazerman writes:

Rigorous decision analysis combines a systematic assessment of the probabilities of future events with a hard-headed evaluation of the costs and benefits of particular outcomes. As such, it can be an invaluable tool in helping organizations overcome the biases that hinder them in estimating the likelihood of unpleasant events. Decision analysis begins with a clear definition of the decision to be made, followed by an explicit statement of objectives and explicit criteria for assessing the “goodness” of alternative courses of action, by which we mean the net cost or benefit as perceived by the decision-maker. The next steps involve identifying potential courses of action and their consequences. Because these elements often are laid out visually in a decision tree, this technique is known as “decision tree analysis.” Finally, the technique instructs decision-makers to explicitly assess and make trade-offs based on the potential costs and benefits of different courses of action.

To conduct a proper decision analysis, leaders must carefully quantify costs and benefits, their tolerance for accepting risk, and the extent of uncertainty associated with different potential outcomes. These assumptions are inherently subjective, but the process of quantification is nonetheless extremely valuable; it forces participants to express their assumptions and beliefs, thereby making them transparent and subject to challenge and improvement.

From Judgment in Managerial Decision Making by Max Bazerman:

The term judgment refers to the cognitive aspects of the decision-making process. To fully understand judgment, we must first identify the components of the decision-making process that require it.

Let’s look at six steps you should take, either implicitly or explicitly, when applying a “rational” decision-making process to each scenario.

1. Define the problem. (M)anagers often act without a thorough understanding of the problem to be solved, leading them to solve the wrong problem. Accurate judgment is required to identify and define the problem. Managers often err by (a) defining the problem in terms of a proposed solution, (b) missing a bigger problem, or (c) diagnosing the problem in terms of its symptoms. Your goal should be to solve the problem not just eliminate its temporary symptoms.

2. Identify the criteria. Most decisions require you to accomplish more than one objective. When buying a car, you may want to maximize fuel economy, minimize cost, maximize comfort, and so on. The rational decision maker will identify all relevant criteria in the decision-making process.

3. Weight the criteria. Different criteria will vary in importance to a decision maker. Rational decision makers will know the relative value they place on each of the criteria identified. The value may be specified in dollars, points, or whatever scoring system makes sense.

4. Generate alternatives. The fourth step in the decision-making process requires identification of possible courses of action. Decision makers often spend an inappropriate amount of search time seeking alternatives, thus creating a barrier to effective decision making. An optimal search continues only until the cost of the search outweighs the value of added information.

5. Rate each alternative on each criterion. How well will each of the alternative solutions achieve each of the defined criteria? This is often the most difficult stage of the decision-making process, as it typically requires us to forecast future events. The rational decision maker carefully assesses the potential consequences on each of the identified criteria of selecting each of the alternative solutions.

6. Compute the optimal decision. Ideally, after all of the first five steps have been completed, the process of computing the optimal decision consists of (a) multiplying the ratings in step 5 by the weight of each criterion, (b) adding up the weighted ratings across all of the criteria for each alternative, and (c) choosing the solution with the highest sum of weighted ratings.
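Steps 3 through 6 amount to a weighted-sum calculation. Here is a minimal Python sketch of the car-buying example; the criteria, weights, and ratings are made-up illustrative numbers, not data from the book:

```python
# Step 3: hypothetical weights for the identified criteria (sum to 1.0)
weights = {"fuel_economy": 0.5, "cost": 0.3, "comfort": 0.2}

# Step 5: hypothetical ratings for each alternative on each criterion, 0-10
ratings = {
    "Car A": {"fuel_economy": 8, "cost": 6, "comfort": 7},
    "Car B": {"fuel_economy": 6, "cost": 9, "comfort": 5},
    "Car C": {"fuel_economy": 7, "cost": 7, "comfort": 7},
}

def weighted_score(alternative):
    """Steps 6a-6b: multiply each rating by its weight and sum."""
    return sum(weights[c] * ratings[alternative][c] for c in weights)

# Step 6c: choose the alternative with the highest weighted score
best = max(ratings, key=weighted_score)
print(best, round(weighted_score(best), 2))  # Car A 7.2
```

The value of the exercise is less the arithmetic than the transparency: the weights and ratings are written down explicitly, so they can be challenged and revised.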

Hammond, Keeney, and Raiffa suggest 8 steps in their book Smart Choices:

1. Work on the right problem.
2. Identify all criteria.
3. Create imaginative alternatives.
4. Understand the consequences.
5. Grapple with your tradeoffs.
6. Clarify your uncertainties.
7. Think hard about your risk tolerance.
8. Consider linked decisions.

* * *

People, however, are not always perfectly logical machines. In Judgment in Managerial Decision Making, the distinction between System 1 and System 2 thinking becomes clear:

System 1 thinking refers to our intuitive system, which is typically fast, automatic, effortless, implicit, and emotional. We make most decisions in life using System 1 thinking. For instance, we usually decide how to interpret verbal language or visual information automatically and unconsciously. By contrast, System 2 refers to reasoning that is slower, conscious, effortful, explicit, and logical. System 2 thinking can be broken down into (1) define the problem; (2) identify the criteria; (3) weigh the criteria; (4) generate alternatives; (5) rate each alternative on each criterion; (6) compute the optimal decision.

In most situations, our System 1 thinking is quite sufficient; it would be impractical, for example, to logically reason through every choice we make while shopping for groceries. But System 2 logic should preferably influence our most important decisions.

* * *

When making a decision we are psychologically influenced either consciously or unconsciously. By exploring these biases and other elementary worldly wisdom, I hope to make you a better decision maker.

Following a rational decision process can help us focus on outcomes that are low in probability but high in potential costs. Without easily quantifiable costs, we often dismiss low probability events or fall prey to biases. We don’t want to be the fragilista.

Even rational decision-making processes like the one presented above make several assumptions. The first is that the decision maker is completely informed, meaning they know all the possible options and outcomes. The second is that the decision maker does not fall prey to any biases that might impact the rational decision.

In researching decision-making processes, it struck me as odd that few people question the information on which criteria are measured. For instance, if you are purchasing a car and use fuel efficiency as the sole criterion, you would need to make sure that fuel consumption was tested and measured the same way for all the cars under consideration. This second-order thinking can help you make better decisions.

If you want to make better decisions, you should read Judgment in Managerial Decision Making. Hands down that is the best book I’ve come across on decision making. If you know of a better one, please send me an email.

Stanovich’s book, What Intelligence Tests Miss: The Psychology of Rational Thought, proposes a whole range of cognitive abilities and dispositions, independent of intelligence, that have at least as much to do with whether we think and behave rationally as intelligence does.


Follow your curiosity to The best books on the psychology behind human decision making and Problem Solving 101.