
The Availability Bias: How to Overcome a Common Cognitive Distortion

“The attention which we lend to an experience is proportional to its vivid or interesting character, and it is a notorious fact that what interests us most vividly at the time is, other things equal, what we remember best.” —William James

The availability heuristic explains why winning an award makes you more likely to win another award. It explains why we sometimes avoid one thing out of fear and end up doing something else that’s objectively riskier. It explains why governments spend enormous amounts of money mitigating risks we’ve already faced. It explains why the five people closest to you have a big impact on your worldview. It explains why mountains of data indicating something is harmful don’t necessarily convince everyone to avoid it. It explains why it can seem as if everything is going well when the stock market is up. And it explains why bad publicity can still be beneficial in the long run.

Here’s how the availability heuristic works, how to overcome it, and how to use it to your advantage.

***

How the availability heuristic works

Before we explain the availability heuristic, let’s quickly recap the field it comes from.

Behavioral economics is a field of study bringing together knowledge from psychology and economics to reveal how real people behave in the real world. This is in contrast to the traditional economic view of human behavior, which assumed people always behave in accordance with rational, stable interests. The field largely began in the 1960s and 1970s with the work of psychologists Amos Tversky and Daniel Kahneman.

Behavioral economics posits that people often make decisions and judgments under uncertainty using imperfect heuristics, rather than by weighing up all of the relevant factors. Quick heuristics enable us to make rapid decisions without taking the time and mental energy to think through all the details.

Most of the time, they lead to satisfactory outcomes. However, they can bias us towards certain consistently irrational decisions that contradict what economics would tell us is the best choice. We usually don’t realize we’re using heuristics, and they’re hard to change even if we’re actively trying to be more rational.

One such cognitive shortcut is the availability heuristic, first studied by Tversky and Kahneman in 1973. We tend to judge the likelihood and significance of things based on how easily they come to mind. The more “available” a piece of information is to us, the more important it seems. The result is that we give greater weight to information we learned recently; a news article we read last night comes to mind more easily than a science class we took years ago. It’s too much work to comb through every piece of information that might be in our heads.

We also give greater weight to information that is shocking or unusual. Shark attacks and plane crashes strike us more than accidental drownings or car accidents, so we overestimate their odds.

If we’re presented with a set of similar things in which one differs from the rest, we’ll find the odd one easier to remember. For example, in the sequence of characters “RTASDT9RTGS,” the character most likely to be remembered is the “9,” because it stands out from the letters.

In Behavioral Law and Economics, Timur Kuran and Cass Sunstein write:

“Additional examples from recent years include mass outcries over Agent Orange, asbestos in schools, breast implants, and automobile airbags that endanger children. Their common thread is that people tended to form their risk judgments largely, if not entirely, on the basis of information produced through a social process, rather than personal experience or investigation. In each case, a public upheaval occurred as vast numbers of players reacted to each other’s actions and statements. In each, moreover, the demand for swift, extensive, and costly government action came to be considered morally necessary and socially desirable—even though, in most or all cases, the resulting regulations may well have produced little good, and perhaps even relatively more harm.”

Narratives are more memorable than disjointed facts. There’s a reason why cultures around the world teach important life lessons and values through fables, fairy tales, myths, proverbs, and stories.

Personal experience can also make information more salient. If you’ve recently been in a car accident, you may well view car accidents as more common in general than you did before. The base rates haven’t changed; you just have an unpleasant, vivid memory coming to mind whenever you get in a car. We too easily assume that our recollections are representative and true and discount events that are outside of our immediate memory. To give another example, you’re more likely to buy insurance against a natural disaster just after being affected by one than before one happens.

Anything that makes something easier to remember increases its impact on us. In an early study, Tversky and Kahneman asked subjects whether a random English word is more likely to begin with “K” or have “K” as its third letter. Since it’s typically easier to recall words beginning with a particular letter, people tended to assume the former was more common. The opposite is true: in typical English text, words with “K” in the third position are roughly twice as common.
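
If you want to check this yourself, here is a rough sketch in Python (corpus.txt is a placeholder for any sizable English text file, not something from the original study):

```python
# A rough sketch for checking the "K" question on real text.
# "corpus.txt" is a placeholder for any sizable English text file.
import re

with open("corpus.txt") as f:
    words = re.findall(r"[a-z]+", f.read().lower())

first = sum(1 for w in words if len(w) >= 3 and w[0] == "k")
third = sum(1 for w in words if len(w) >= 3 and w[2] == "k")
print(f"'K' as first letter: {first}")
print(f"'K' as third letter: {third}")
# In most running English text, the third-letter count wins, even
# though K-initial words (kitchen, king, kind...) are easier to recall.
```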

In Judgment Under Uncertainty: Heuristics and Biases, Tversky and Kahneman write:

“…one may estimate probability by assessing availability, or associative distance. Lifelong experience has taught us that instances of large classes are recalled better and faster than instances of less frequent classes, that likely occurrences are easier to imagine than unlikely ones, and that associative connections are strengthened when two events frequently co-occur.

…For example, one may assess the divorce rate in a given community by recalling divorces among one’s acquaintances; one may evaluate the probability that a politician will lose an election by considering various ways in which he may lose support; and one may estimate the probability that a violent person will ‘see’ beasts of prey in a Rorschach card by assessing the strength of association between violence and beasts of prey. In all of these cases, the assessment of the frequency of a class or the probability of an event is mediated by an assessment of availability.”

They go on to write:

“That associative bonds are strengthened by repetition is perhaps the oldest law of memory known to man. The availability heuristic exploits the inverse form of this law, that is, it uses strength of association as a basis for the judgment of frequency. In this theory, availability is a mediating variable, rather than a dependent variable as is typically the case in the study of memory.”

***

How the availability heuristic misleads us

“People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media.” —Daniel Kahneman, Thinking, Fast and Slow

To go back to the points made in the introduction of this post, winning an award can make you more likely to win another award because it gives you visibility, making your name come to mind more easily in connection with that kind of accolade. We sometimes avoid one thing in favor of something objectively riskier, like driving instead of taking a plane, because the dangers of the latter are more memorable. The five people closest to you can have a big impact on your worldview because you frequently encounter their attitudes and opinions, bringing them to mind when you make your own judgments. Mountains of data indicating something is harmful don’t always convince people to avoid it if those dangers aren’t salient, for instance because they haven’t personally experienced them. It can seem as if things are going well when the stock market is up because it’s a simple, visible, and therefore memorable indicator. Bad publicity can be beneficial in the long run if it means something, such as a controversial book, gets mentioned often and is more likely to be recalled.

These aren’t empirical rules, but they’re logical consequences of the availability heuristic, in the absence of mitigating factors.

We are what we remember, and our memories have a significant impact on our perception of the world. What we end up remembering is influenced by factors such as the following:

  • Our foundational beliefs about the world
  • Our expectations
  • The emotions a piece of information inspires in us
  • How many times we’re exposed to a piece of information
  • The source of a piece of information

There is no real link between how memorable something is and how likely it is to happen. In fact, the opposite is often true. Unusual events stand out more and receive more attention than commonplace ones. As a result, the availability heuristic skews our perception of risks in two key ways:

  • We overestimate the likelihood of unlikely events.
  • We underestimate the likelihood of likely events.

Overestimating the risk of unlikely events leads us to stay awake at night, turning our hair grey, worrying about things that have almost no chance of happening. We can end up wasting enormous amounts of time, money, and other resources trying to mitigate things that have, on balance, a small impact. Sometimes those mitigation efforts end up backfiring, and sometimes they make us feel safer than they should.

On the flipside, we can overestimate the chance of unusually good things happening to us. Looking at everyone’s highlights on social media, we can end up expecting our own lives to also be a procession of grand achievements and joys. But most people’s lives are mundane most of the time, and the highlights we see tend to be exceptional ones, not routine ones.

Underestimating the risk of likely events leads us to fail to prepare for predictable problems and occurrences. We’re so worn out from worrying about unlikely events that we don’t have the energy to think about what’s in front of us. If you’re stressed and anxious much of the time, you’ll have a hard time paying attention to warning signs when they really matter.

All of this is not to say that you shouldn’t prepare for the worst, or that unlikely things never happen (as Littlewood’s Law states, you can expect a one-in-a-million event at least once per month). Rather, we should be careful about preparing only for the extremes because those extremes are more memorable.
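
Littlewood’s figure is just arithmetic. A minimal sketch, using his own assumptions (one “event” registers about once a second, and we’re alert roughly eight hours a day):

```python
# Back-of-the-envelope arithmetic behind Littlewood's Law, using his
# assumptions: one "event" per second, eight alert hours per day.
events_per_second = 1
alert_hours_per_day = 8

events_per_day = events_per_second * 60 * 60 * alert_hours_per_day
days_per_million = 1_000_000 / events_per_day

print(f"Events per day: {events_per_day:,}")               # 28,800
print(f"Days to reach a million: {days_per_million:.0f}")  # ~35
# A one-in-a-million "miracle" should therefore turn up about monthly.
```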

***

How to overcome the availability heuristic

Knowing about a cognitive bias isn’t usually enough to overcome it. Even people like Kahneman who have studied behavioral economics for many years sometimes struggle with the same irrational patterns. But being aware of the availability heuristic is helpful for the times when you need to make an important decision and can step back to make sure it isn’t distorting your view. Here are five ways of mitigating the availability heuristic.

#1. Always consider base rates when making judgments about probability.
The base rate of something is its average prevalence within a particular population. For example, around 10% of the population is left-handed. If you had to guess the likelihood of a random person being left-handed, you would be correct to say 1 in 10 in the absence of other relevant information. When judging the probability of something, look at the base rate whenever possible.
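
As a toy illustration of letting the base rate do the work (a sketch using the ~10% figure from above, not part of the original post): the chance that a group contains at least one left-hander climbs quickly with group size, regardless of whether any vivid example comes to mind.

```python
# Toy base-rate calculation: probability that a group of n random
# people contains at least one left-hander, given a ~10% base rate.
BASE_RATE = 0.10

def p_at_least_one(n: int, p: float = BASE_RATE) -> float:
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 30):
    print(f"Group of {n:>2}: {p_at_least_one(n):.0%}")
# Group of 30: ~96%. The base rate, not a memorable anecdote, does the work.
```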

#2. Focus on trends and patterns.
The mental model of regression to the mean teaches us that extreme events tend to be followed by more moderate ones. Outlier events are often the result of luck and randomness. They’re not necessarily instructive. Whenever possible, base your judgments on trends and patterns—the longer term, the better. Track record is everything, even if outlier events are more memorable.
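
A minimal simulation makes the point (an illustrative sketch, not from the original post): when results mix stable skill with luck, the very best round-one performers look markedly less extreme the next time around.

```python
# Minimal regression-to-the-mean simulation: score = skill + luck.
import random

random.seed(42)
N = 10_000
skills = [random.gauss(100, 10) for _ in range(N)]
round1 = [s + random.gauss(0, 20) for s in skills]
round2 = [s + random.gauss(0, 20) for s in skills]

# Pick the top 1% of performers from round 1...
top = sorted(range(N), key=lambda i: round1[i], reverse=True)[:N // 100]

# ...and compare their average scores across the two rounds.
avg1 = sum(round1[i] for i in top) / len(top)
avg2 = sum(round2[i] for i in top) / len(top)
print(f"Top 1% in round 1:    {avg1:.1f}")  # extreme: skill plus good luck
print(f"Same people, round 2: {avg2:.1f}")  # noticeably closer to the mean
```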

#3. Take the time to think before making a judgment.
The whole point of heuristics is that they save the time and effort needed to parse a ton of information and make a judgment. But, as we always say, you can’t make a good decision without taking time to think. There’s no shortcut for that. If you’re making an important decision, the only way to get around the availability heuristic is to stop and go through the relevant information, rather than assuming whatever comes to mind first is correct.

#4. Keep track of information you might need to use in a judgment far off in the future.
Don’t rely on memory. In Judgment in Managerial Decision Making, Max Bazerman and Don Moore present the example of annual workplace performance appraisals. Managers tend to base their evaluations more on the prior three months than on the nine months before that; it’s much easier than remembering what happened over the course of an entire year. Managers also tend to give substantial weight to unusual one-off behavior, such as a serious mistake or a notable success, without considering the overall trend. In this case, noting down observations on someone’s performance throughout the entire year would lead to a more accurate appraisal.
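
To see how much recency can distort a rating, here’s a toy comparison with hypothetical numbers: the same twelve monthly ratings averaged evenly versus with the final quarter weighted heavily.

```python
# Toy illustration of recency bias in an annual appraisal.
# Hypothetical monthly ratings (1-10): a strong year, weak finish.
ratings = [8, 9, 8, 9, 8, 9, 8, 9, 8, 5, 5, 5]  # Jan..Dec

whole_year = sum(ratings) / len(ratings)

# Recency-biased: the last three months count triple.
weights = [1] * 9 + [3] * 3
biased = sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

print(f"Whole-year average:   {whole_year:.1f}")  # ~7.6
print(f"Recency-biased score: {biased:.1f}")      # ~6.7
```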

#5. Go back and revisit old information.
Even if you think you can recall everything important, it’s a good idea to go back and refresh your memory of relevant information before making a decision.

The availability heuristic is part of Farnam Street’s latticework of mental models.

Who’s in Charge of Our Minds? The Interpreter

One of the most fascinating discoveries of modern neuroscience is that the brain is a collection of distinct modules (grouped, highly connected neurons) performing specific functions rather than a unified system.

We’ll get to why this is so important when we introduce The Interpreter later on.

This modular organization of the human brain is considered one of the key properties that sets us apart from animals, so much so that it has displaced the theory that our distinctiveness stems from having disproportionately bigger brains for our body size.

As neuroscientist Dr. Michael Gazzaniga points out in his wonderful book Who’s in Charge? Free Will and the Science of the Brain, in terms of numbers of cells, the human brain is a proportionately scaled-up primate brain: it is what is expected for a primate of our size and does not possess relatively more neurons. Researchers have also found that the ratio between nonneuronal brain cells and neurons in human brain structures is similar to that found in other primates.

So it’s not the size of our brains or the number of neurons that matters; it’s the pattern of connectivity. As brains scaled up from insect to small mammal to larger mammal, they had to reorganize, for the simple reason that billions of neurons cannot all be connected to one another: some would be too far apart and too slow to communicate, and our brains would be gigantic and require a massive amount of energy to function.

Instead, our brain specializes and localizes. As Dr. Gazzaniga puts it, “Small local circuits, made of an interconnected group of neurons, are created to perform specific processing jobs and become automatic.” This is an important advance in our efforts to understand the mind.

Dr. Gazzaniga is most famous for his work studying split-brain patients, where many of the discoveries we’re talking about were refined and explored. Split-brain patients give us a natural controlled experiment to find out “what the brain is up to” — and more importantly, how it does its work. What Gazzaniga and his co-researchers found was fascinating.

Emergence

We experience our conscious mind as a single, unified thing. But if Gazzaniga and company are right, it most certainly isn’t. How could a “specialized and localized” modular brain give rise to the strong feeling of “oneness” we experience? It would seem there are too many things going on separately and locally:

Our conscious awareness is the mere tip of the iceberg of nonconscious processing. Below our level of awareness is the very busy nonconscious brain hard at work. Not hard for us to imagine are the housekeeping jobs the brain constantly performs to keep homeostatic mechanisms up and running, such as our heart beating, our lungs breathing, and our temperature just right. Less easy to imagine, but being discovered left and right over the past fifty years, are the myriads of nonconscious processes smoothly putt-putting along. Think about it.

To begin with, there are all the automatic visual and other sensory processes we have talked about. In addition, our minds are always being unconsciously biased by positive and negative priming processes, and influenced by category identification processes. In our social world, coalitionary bonding processes, cheater detection processes, and even moral judgment processes (to name only a few) are cranking away below our conscious mechanisms. With increasingly sophisticated testing methods, the number and diversity of identified processes is only going to multiply.

So what’s going on? Who’s controlling all this stuff? The idea is that the brain works more like traffic than a car. No one is controlling it!

This is due to a principle of complex systems called emergence, which explains why all of these “specialized and localized” processes can give rise to what seems like a unified mind.

The key to understanding emergence is to understand that there are different levels of organization. My favorite analogy is that of the car, which I have mentioned before. If you look at an isolated car part, such as a cam shaft, you cannot predict that the freeway will be full of traffic at 5:15 PM, Monday through Friday. In fact, you could not even predict that the phenomenon of traffic would occur if you just looked at a brake pad. You cannot analyze traffic at the level of car parts. Did the guy who invented the wheel ever visualize the 405 in Los Angeles on Friday evening? You cannot even analyze traffic at the level of the individual car. When you get a bunch of cars and drivers together, with the variables of location, time, weather, and society all in the mix, then at that level you can predict traffic. A new set of laws emerges that isn’t predicted from the parts alone.

Emergence, Gazzaniga goes on, is how to understand the brain. Sub-atomic particles, atoms, molecules, cells, neurons, modules, the mind, and a collection of minds (a society) are all different levels of organization, with their own laws that cannot necessarily be predicted from the properties of the level below.

The unified mind we experience emerges from the thousands of lower-level processes operating in parallel. Most of this activity is so automatic that we have no idea it’s going on. (Not only does the mind work bottom-up, but top-down processes also influence it. In other words, what you think influences what you see and hear.)

And when we do start consciously explaining what’s going on — or trying to — we start getting very interesting results. The part of our brain that seeks explanations and infers causality turns out to be a quirky little beast.

The Interpreter

Let’s say you were to see a snake and jump back, automatically and quickly. Did you choose that action? If asked, you’d almost certainly say so, but the truth is more complicated.

If you were to have asked me why I jumped, I would have replied that I thought I’d seen a snake. That answer certainly makes sense, but the truth is I jumped before I was conscious of the snake: I had seen it, but I didn’t know I had seen it. My explanation is from post hoc information I have in my conscious system: the facts are that I jumped and that I saw a snake. The reality, however, is that I jumped way before (in a world of milliseconds) I was conscious of the snake. I did not make a conscious decision to jump and then consciously execute it. When I answered that question, I was, in a sense, confabulating: giving a fictitious account of a past event, believing it to be true. The real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala. The reason I would have confabulated is that our human brains are driven to infer causality. They are driven to explain events, to make sense out of the scattered facts. The facts that my conscious brain had to work with were that I saw a snake, and I jumped. It did not register that I jumped before I was consciously aware of the snake.

Here’s how it works: A thing happens, we react, we feel something about it, and then we go on explaining it. Sensory information is fed into an explanatory module which Gazzaniga calls The Interpreter, and studying split-brain patients showed him that it resides in the left hemisphere of the brain.

With that knowledge, Gazzaniga and his team were able to do all kinds of clever things to show how ridiculous our Interpreter can often be, especially in split-brain patients.

Take this case of a split-brain patient unconsciously making up a nonsense story when his two hemispheres are shown different images and he is instructed to choose a related image from a group of pictures. Read carefully:

We showed a split-brain patient two pictures: A chicken claw was shown to his right visual field, so the left hemisphere only saw the claw picture, and a snow scene was shown to the left visual field, so the right hemisphere saw only that. He was then asked to choose a picture from an array of pictures placed in full view in front of him, which both hemispheres could see.

The left hand pointed to a shovel (which was the most appropriate answer for the snow scene) and the right hand pointed to a chicken (the most appropriate answer for the chicken claw). Then we asked why he chose those items. His left-hemisphere speech center replied, “Oh, that’s simple. The chicken claw goes with the chicken,” easily explaining what it knew. It had seen the chicken claw.

Then, looking down at his left hand pointing to the shovel, without missing a beat, he said, “And you need a shovel to clean out the chicken shed.” Immediately, the left brain, observing the left hand’s response without the knowledge of why it had picked that item, put it into a context that would explain it. It interpreted the response in a context consistent with what it knew, and all it knew was: chicken claw. It knew nothing about the snow scene, but it had to explain the shovel in his left hand. Well, chickens do make a mess, and you have to clean it up. Ah, that’s it! Makes sense.

What was interesting was that the left hemisphere did not say, “I don’t know,” which truly was the correct answer. It made up a post hoc answer that fit the situation. It confabulated, taking cues from what it knew and putting them together in an answer that made sense.

The left hand, responding to the snow scene Gazzaniga covertly showed the left visual field, pointed to the snow shovel. This all took place in the right hemisphere of the brain (think of it like an “X” — the right hemisphere controls the left side of the body and vice versa). But since it was a split-brain patient, the left hemisphere was not given any of the information about snow.

And yet, the left hemisphere is where the Interpreter resides! So what did the Interpreter do when asked to explain why the shovel was chosen, having no information about snow, only about chickens? It made up a story about shoveling chicken coops!

Gazzaniga goes on to explain several cases of being able to fool the left brain Interpreter over and over, and in often subtle ways.

***

This left-brain module is what we use to explain causality, seeking it for its own sake. The Interpreter, like all of our mental modules, is a wonderful adaptation that has led us to understand and explain causality and the world around us, to our great advantage. But as any good student of social psychology knows, if we have nothing solid to go on, we’ll simply make up a plausible story, leading to a narrative fallacy.

This leads to odd results that seem pretty maladaptive, like our tendency to gamble like idiots. (In his famous talk, The Psychology of Human Misjudgment, Charlie Munger calls this mis-gambling compulsion.) But outside of the artifice of the casino, the Interpreter works quite well.

But here’s the catch. In the words of Gazzaniga, “The interpreter is only as good as the information it gets.”

The interpreter receives the results of the computations of a multitude of modules. It does not receive the information that there are multitudes of modules. It does not receive the information about how the modules work. It does not receive the information that there is a pattern-recognition system in the right hemisphere. The interpreter is a module that explains events from the information it does receive.

[…]

The interpreter is receiving data from the domains that monitor the visual system, the somatosensory system, the emotions, and cognitive representations. But as we just saw above, the interpreter is only as good as the information it receives. Lesions or malfunctions in any one of these domain-monitoring systems leads to an array of peculiar neurological conditions that involve the formation of either incomplete or delusional understandings about oneself, other individuals, objects, and the surrounding environment, manifesting in what appears to be bizarre behavior. It no longer seems bizarre, however, once you understand that such behaviors are the result of the interpreter getting no, or bad, information.

This can account for a lot of the ridiculous behavior and ridiculous narratives we see around us. The Interpreter must deal with what it’s given, and as Gazzaniga’s work shows, it can be manipulated and tricked. He calls it “hijacking” — and when the Interpreter is hijacked, it makes pretty bad decisions and generates strange explanations.

Anyone who’s watched a friend acting hilariously when wearing a modern VR headset can see how easy it is to “hijack” one’s sensory perceptions even if the conscious brain “knows” that it’s not real. And of course, Robert Cialdini once famously described this hijacking process as a “click, whirr” reaction to social stimuli. It’s a powerful phenomenon.

***

What can we learn from this?

The story of the multi-modular mind and the Interpreter module shows us that the brain does not have a rational “central command station” — your mind is at the mercy of what it’s fed. The Interpreter is constantly weaving a story of what’s going on around us, applying causal explanations to the data it receives, doing the best job it can with what it’s got.

This is generally useful: a few thousand generations of data have honed our modules to understand the world well enough to keep us surviving and thriving. The job of the brain is to pass on our genes. But that doesn’t mean it’s always making optimal decisions in the modern world.

We must realize that our brain can be fooled; it can be tricked and played with, and we won’t always realize it immediately. Our Interpreter will weave a plausible story — that’s its job.

For this reason, Charlie Munger employs a “two-track” analysis: What are the facts, and where is my brain fooling me? We’re wise to follow suit.