
The Availability Bias: How to Overcome a Common Cognitive Distortion

“The attention which we lend to an experience is proportional to its vivid or interesting character, and it is a notorious fact that what interests us most vividly at the time is, other things equal, what we remember best.” —William James

The availability heuristic explains why winning an award makes you more likely to win another award. It explains why we sometimes avoid one thing out of fear and end up doing something else that’s objectively riskier. It explains why governments spend enormous amounts of money mitigating risks we’ve already faced. It explains why the five people closest to you have a big impact on your worldview. It explains why mountains of data indicating something is harmful don’t necessarily convince everyone to avoid it. It explains why it can seem as if everything is going well when the stock market is up. And it explains why bad publicity can still be beneficial in the long run.

Here’s how the availability heuristic works, how to overcome it, and how to use it to your advantage.

***

How the availability heuristic works

Before we explain the availability heuristic, let’s quickly recap the field it comes from.

Behavioral economics is a field of study bringing together knowledge from psychology and economics to reveal how real people behave in the real world. This is in contrast to the traditional economic view of human behavior, which assumed people always behave in accordance with rational, stable interests. The field largely began in the 1960s and 1970s with the work of psychologists Amos Tversky and Daniel Kahneman.

Behavioral economics posits that people often make decisions and judgments under uncertainty using imperfect heuristics, rather than by weighing up all of the relevant factors. Quick heuristics enable us to make rapid decisions without taking the time and mental energy to think through all the details.

Most of the time, they lead to satisfactory outcomes. However, they can bias us towards certain consistently irrational decisions that contradict what economics would tell us is the best choice. We usually don’t realize we’re using heuristics, and they’re hard to change even if we’re actively trying to be more rational.

One such cognitive shortcut is the availability heuristic, first studied by Tversky and Kahneman in 1973. We tend to judge the likelihood and significance of things based on how easily they come to mind. The more “available” a piece of information is to us, the more important it seems. The result is that we give greater weight to information we learned recently: a news article read last night comes to mind more easily than a science class taken years ago. It’s too much work to comb through every piece of information that might be in our heads.

We also give greater weight to information that is shocking or unusual. Shark attacks and plane crashes strike us more than accidental drownings or car accidents, so we overestimate their odds.

If we’re presented with a set of similar things, one of which differs from the rest, we’ll find the outlier easier to remember. For example, in the sequence of characters “RTASDT9RTGS,” the character most commonly remembered is the “9” because it stands out from the letters.

In Behavioural Law and Economics, Timur Kuran and Cass Sunstein write:

“Additional examples from recent years include mass outcries over Agent Orange, asbestos in schools, breast implants, and automobile airbags that endanger children. Their common thread is that people tended to form their risk judgments largely, if not entirely, on the basis of information produced through a social process, rather than personal experience or investigation. In each case, a public upheaval occurred as vast numbers of players reacted to each other’s actions and statements. In each, moreover, the demand for swift, extensive, and costly government action came to be considered morally necessary and socially desirable—even though, in most or all cases, the resulting regulations may well have produced little good, and perhaps even relatively more harm.”

Narratives are more memorable than disjointed facts. There’s a reason why cultures around the world teach important life lessons and values through fables, fairy tales, myths, proverbs, and stories.

Personal experience can also make information more salient. If you’ve recently been in a car accident, you may well view car accidents as more common in general than you did before. The base rates haven’t changed; you just have an unpleasant, vivid memory coming to mind whenever you get in a car. We too easily assume that our recollections are representative and true and discount events that are outside of our immediate memory. To give another example, you may be more likely to buy insurance against a natural disaster just after being affected by one than you were before it happened.

Anything that makes something easier to remember increases its impact on us. In an early study, Tversky and Kahneman asked subjects whether a random English word is more likely to begin with “K” or have “K” as the third letter. Seeing as it’s typically easier to recall words beginning with a particular letter, people tended to assume the former was more common. The opposite is true.
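If you want to see this for yourself, here’s a rough way to check the claim against whatever English word list you have on hand. The file path below is just a placeholder assumption, and the exact counts will vary by list:

```python
# Rough check of the "K" claim against an English word list.
# Assumes a plain-text word list (one word per line) at a hypothetical
# path such as /usr/share/dict/words; swap in whatever list you have.

def count_k_positions(path="/usr/share/dict/words"):
    first = third = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if len(word) >= 3:
                first += word[0] == "k"   # word begins with "k"
                third += word[2] == "k"   # "k" is the third letter
    return first, third

if __name__ == "__main__":
    starts_with_k, k_in_third = count_k_positions()
    print(f"Start with K: {starts_with_k}, K as third letter: {k_in_third}")
```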

In Judgment Under Uncertainty: Heuristics and Biases, Tversky and Kahneman write:

“…one may estimate probability by assessing availability, or associative distance. Lifelong experience has taught us that instances of large classes are recalled better and faster than instances of less frequent classes, that likely occurrences are easier to imagine than unlikely ones, and that associative connections are strengthened when two events frequently co-occur.

…For example, one may assess the divorce rate in a given community by recalling divorces among one’s acquaintances; one may evaluate the probability that a politician will lose an election by considering various ways in which he may lose support; and one may estimate the probability that a violent person will ‘see’ beasts of prey in a Rorschach card by assessing the strength of association between violence and beasts of prey. In all of these cases, the assessment of the frequency of a class or the probability of an event is mediated by an assessment of availability.”

They go on to write:

“That associative bonds are strengthened by repetition is perhaps the oldest law of memory known to man. The availability heuristic exploits the inverse form of this law, that is, it uses strength of association as a basis for the judgment of frequency. In this theory, availability is a mediating variable, rather than a dependent variable as is typically the case in the study of memory.”

***

How the availability heuristic misleads us

“People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media.” —Daniel Kahneman, Thinking, Fast and Slow

To go back to the points made in the introduction of this post, winning an award can make you more likely to win another award because it gives you visibility, making your name come to mind more easily in connection to that kind of accolade. We sometimes avoid one thing in favor of something objectively riskier, like driving instead of taking a plane, because the dangers of the latter are more memorable. The five people closest to you can have a big impact on your worldview because you frequently encounter their attitudes and opinions, bringing them to mind when you make your own judgments. Mountains of data indicating something is harmful don’t always convince people to avoid it if those dangers aren’t salient, such as if they haven’t personally experienced them. It can seem as if things are going well when the stock market is up because it’s a simple, visible, and therefore memorable indicator. Bad publicity can be beneficial in the long run if it means something, such as a controversial book, gets mentioned often and is more likely to be recalled.

These aren’t empirical rules, but they’re logical consequences of the availability heuristic, in the absence of mitigating factors.

We are what we remember, and our memories have a significant impact on our perception of the world. What we end up remembering is influenced by factors such as the following:

  • Our foundational beliefs about the world
  • Our expectations
  • The emotions a piece of information inspires in us
  • How many times we’re exposed to a piece of information
  • The source of a piece of information

There is no real link between how memorable something is and how likely it is to happen. In fact, the opposite is often true. Unusual events stand out more and receive more attention than commonplace ones. As a result, the availability heuristic skews our perception of risks in two key ways:

We overestimate the likelihood of unlikely events. And we underestimate the likelihood of likely events.

Overestimating the risk of unlikely events leads us to stay awake at night, turning our hair grey, worrying about things that have almost no chance of happening. We can end up wasting enormous amounts of time, money, and other resources trying to mitigate things that have, on balance, a small impact. Sometimes those mitigation efforts end up backfiring, and sometimes they make us feel safer than they should.

On the flip side, we can overestimate the chance of unusually good things happening to us. Looking at everyone’s highlights on social media, we can end up expecting our own lives to also be a procession of grand achievements and joys. But most people’s lives are mundane most of the time, and the highlights we see tend to be exceptional ones, not routine ones.

Underestimating the risk of likely events leads us to fail to prepare for predictable problems and occurrences. We’re so worn out from worrying about unlikely events that we don’t have the energy to think about what’s actually in front of us. And if you’re stressed and anxious much of the time, you’ll have a hard time noticing genuine warning signs when they appear.

None of this is to say that you shouldn’t prepare for the worst, or that unlikely things never happen (as Littlewood’s Law states, you can expect a one-in-a-million event roughly once a month). Rather, we should be careful about preparing only for the extremes just because those extremes are more memorable.

***

How to overcome the availability heuristic

Knowing about a cognitive bias isn’t usually enough to overcome it. Even people like Kahneman who have studied behavioral economics for many years sometimes struggle with the same irrational patterns. But being aware of the availability heuristic is helpful for the times when you need to make an important decision and can step back to make sure it isn’t distorting your view. Here are five ways of mitigating the availability heuristic.

#1. Always consider base rates when making judgments about probability.
The base rate of something is the average prevalence of it within a particular population. For example, around 10% of the population are left-handed. If you had to guess the likelihood of a random person being left-handed, you would be correct to say 1 in 10 in the absence of other relevant information. When judging the probability of something, look at the base rate whenever possible.
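As a toy illustration of leaning on the base rate, here’s a quick sketch using the 10% figure above; the group size of 30 is an arbitrary assumption for illustration:

```python
# Toy base-rate calculation using the ~10% left-handedness figure above.
# The group size of 30 is an arbitrary assumption.
base_rate = 0.10
group_size = 30

expected_lefties = base_rate * group_size                # expected count in the group
p_at_least_one = 1 - (1 - base_rate) ** group_size       # assumes independence

print(f"Expected left-handers in a group of {group_size}: {expected_lefties:.1f}")
print(f"Probability at least one is left-handed: {p_at_least_one:.2%}")
```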

#2. Focus on trends and patterns.
The mental model of regression to the mean teaches us that extreme events tend to be followed by more moderate ones. Outlier events are often the result of luck and randomness. They’re not necessarily instructive. Whenever possible, base your judgments on trends and patterns—the longer term, the better. Track record is everything, even if outlier events are more memorable.
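To see why outliers are rarely instructive, here’s a minimal simulation under the simplifying assumption that observed results are skill plus luck: the standouts of one period tend to look far more ordinary in the next.

```python
# Minimal regression-to-the-mean simulation (illustrative assumptions only):
# each "performer" has a fixed skill, and each period's result is skill + luck.
import random

random.seed(1)
skills = [random.gauss(0, 1) for _ in range(1000)]
period1 = [s + random.gauss(0, 1) for s in skills]
period2 = [s + random.gauss(0, 1) for s in skills]

# Take the top 5% of performers in period 1 and see how they do in period 2.
top = sorted(range(1000), key=lambda i: period1[i], reverse=True)[:50]
avg1 = sum(period1[i] for i in top) / len(top)
avg2 = sum(period2[i] for i in top) / len(top)

print(f"Top performers, period 1 average: {avg1:.2f}")
print(f"Same people, period 2 average:   {avg2:.2f}")  # typically much closer to the overall mean of 0
```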

#3. Take the time to think before making a judgment.
The whole point of heuristics is that they save the time and effort needed to parse a ton of information and make a judgment. But, as we always say, you can’t make a good decision without taking time to think. There’s no shortcut for that. If you’re making an important decision, the only way to get around the availability heuristic is to stop and go through the relevant information, rather than assuming whatever comes to mind first is correct.

#4. Keep track of information you might need to use in a judgment far off in the future.
Don’t rely on memory. In Judgment in Managerial Decision-Making, Max Bazerman and Don Moore present the example of workplace annual performance appraisals. Managers tend to base their evaluations more on the prior three months than the nine months before that. It’s much easier than remembering what happened over the course of an entire year. Managers also tend to give substantial weight to unusual one-off behavior, such as a serious mistake or notable success, without considering the overall trend. In this case, noting down observations on someone’s performance throughout the entire year would lead to a more accurate appraisal.

#5. Go back and revisit old information.
Even if you think you can recall everything important, it’s a good idea to go back and refresh your memory of relevant information before making a decision.

The availability heuristic is part of Farnam Street’s latticework of mental models.

13 Practical Ideas That Have Helped Me Make Better Decisions

This article is a collaboration between Mark Steed and myself. He did most of the work. Mark was a participant at the last Re:Think Decision Making event as well as a member of the Good Judgment Project. I asked him to put together something on making better predictions. This is the result.

We all face decisions. Sometimes we think hard about a specific decision; other times, we make decisions without thinking at all. If you’ve studied the genre, you’ve probably read Taleb, Tversky, Kahneman, Gladwell, Ariely, Munger, Tetlock, Mauboussin, and/or Thaler. These pioneers write a lot about “rationality” and “biases”.

Rationality dictates selecting the best choice among the available options. Cognitive and emotional biases creep in and can prevent us from identifying that “rational” choice. These biases can be baked into our DNA or formed through life experiences. The authors mentioned above consider biases extensively, and, lucky for us, their writings are eye-opening and entertaining.

Rather than rehash what brighter minds have discussed, I’ll focus on practical ideas that have helped me make better decisions. I think of this as a list of “lessons learned (so far)” from my work in asset management and as a forecaster for the Good Judgment Project. I’ve held back on submitting this given the breadth and depth of the FS readers, but, rather than expect perfection, I wanted to put something on the table because I suspect many of you have useful ideas that will help move the conversation forward.

1. This is a messy business. Studying decision science can easily motivate self-loathing. There are over one hundred cognitive biases that might prevent us from making calculated and “rational” decisions. What, you can’t create a decision tree with 124 decision nodes, complete with assorted probabilities, in a split second? I asked around, and it turns out not many people can. Since there is no way to eliminate all the potential cognitive biases, and I don’t possess the mental faculties of Mr. Spock or C-3PO, I might as well live with the fact that some decisions will be more elegant than others.

2. We live and work in dynamic environments. Dynamic environments adapt; static environments don’t. Financial markets, geopolitical events, team sports, and the like are dynamic “environments” because relationships between agents evolve and problems are often unpredictable: changes in one period are conditional on what happened in the previous period. Casinos, or rather the games inside them, are more representative of static environments. If you play roulette, your odds of winning are always the same, and it doesn’t matter what happened on the previous spin.

3. Good explanatory models are not necessarily good predictive models. Dynamic environments have a habit of desecrating rigid models. While blindly following an elegant model may be ill-advised, strong explanatory models are excellent guideposts when paired with sound judgment and intuition. Just as I’m not comfortable with the autopilot flying a plane without a human in the cockpit, I’m also not comfortable with a human flying a plane without the help of technology. It has been said before: people make models better, and models make people better.

4. Instinct is not always irrational. Rules of thumb, otherwise known as heuristics, can provide better results than more complicated analytical techniques. Gerd Gigerenzer is the thought leader here, and his book Risk Savvy: How to Make Good Decisions is worth reading. Much of the literature disparages heuristics, but he asserts that intuition can prove superior because optimization is sometimes mathematically impossible or exposed to sampling error. He often uses the example of Harry Markowitz, who won a Nobel Prize in Economics in 1990 for his work on Modern Portfolio Theory. Markowitz discovered a method for determining the “optimal” mix of assets, yet he did not follow his Nobel Prize-winning mean-variance theory for his own money; instead he used a 1/N heuristic, spreading his dollars equally across N investments. The conclusion was that the 1/N strategy would perform better than mean-variance optimization unless the optimization model had around 500 years of data to compete. Our intuition is more likely to be accurate if it is preceded by rigorous analysis and introspection. And simple rules are more effective at communicating winning strategies in complex environments: when coaching a child’s soccer team, it is far easier to teach a few basic principles than to articulate the nuances of every possible situation.
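For what it’s worth, here’s a minimal sketch of the 1/N idea; the asset names and return figures are made up for illustration, and capital is simply split equally across whatever is on the list.

```python
# 1/N allocation sketch; the asset names and return figures are made up.
capital = 100_000
assumed_returns = {"Asset A": 0.07, "Asset B": 0.02, "Asset C": -0.03, "Asset D": 0.10}

n = len(assumed_returns)
allocation = {name: capital / n for name in assumed_returns}                  # equal weights
portfolio_return = sum(r * (1 / n) for r in assumed_returns.values())         # weighted return

print(allocation)
print(f"Equal-weight portfolio return: {portfolio_return:.2%}")
```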

5. Decisions are not evaluated in ways that help us reduce mistakes in the future. Our tendency is to critique only the decisions where the desired outcome was not achieved, while uncritically accepting positive outcomes even when luck, or some other factor, produced the desired result. At the end of the day, I understand that all we care about is results, but a good process is more indicative of future success than a good result.

6. Success is ill-defined. In some cases, defining success is relatively straightforward: if the outcome is binary, either it happened or it did not, and success is easy to identify. It is more difficult in situations where the outcome can take a range of potential values, or when individuals differ on what those values should be.

7. We should care a lot more about calibration. Confidence, not just a decision, should be recorded (and to be clear, decisions should be recorded). Next time you have a major decision, ask yourself how confident you are that the desired outcome will be achieved. Are you 50% confident? 90%? Write it down. This helps with calibration. For all decisions in which you are 50% confident, half should be successes. And you should be right nine out of ten times for all decisions in which you are 90% confident. If you are 100% confident, you should never be wrong. If you don’t know anything about a specific subject then you should be no more confident than a coin flip. It’s amazing how we will assign high confidence to an event we know nothing about. Turns out this idea is pretty helpful. Let’s say someone brings an idea to you and you know nothing about it. Your default should be 50/50; you might as well flip a coin. Then you just need to worry about the costs/payouts.
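Here’s a minimal sketch of what that tracking might look like, assuming you’ve logged each decision’s stated confidence and whether it worked out; the records below are invented for illustration.

```python
# Calibration check sketch: group logged decisions by stated confidence
# and compare stated confidence to the observed success rate.
# The (confidence, outcome) records below are invented for illustration.
from collections import defaultdict

decisions = [(0.5, 1), (0.5, 0), (0.5, 1), (0.9, 1), (0.9, 1), (0.9, 0), (0.9, 1)]

buckets = defaultdict(list)
for confidence, outcome in decisions:
    buckets[confidence].append(outcome)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Stated {confidence:.0%} confident -> actually right {hit_rate:.0%} of {len(outcomes)} decisions")
```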

8. Probabilities are one thing, payouts are another. You might feel 50/50 about your chances, but you need to know your payout if you are right. This is where expected value comes in handy: it’s the probability of being right multiplied by the payout if you are right, plus the probability of being wrong multiplied by the cost; with a 50/50 chance, E = 0.5(x) + 0.5(y). Say someone on your team has an idea for a project and you decide there is a 50% chance it succeeds; if it does, you double your money, and if it doesn’t, you lose what you invested. If the project requires $10mm, the expected outcome is 0.5*20 + 0.5*0 = 10, or $10mm. If you repeat this process a number of times, approving only projects with a 2:1 payout and a 50% probability of success, you would likely end up with about the same amount you started with. A binary outcome with a 50/50 probability needs at least a double-or-nothing payout just to break even. This is even more helpful given #7 above. If you were tracking this employee’s calibration, you would have a sense of whether their forecasts are accurate. As a team member or manager, you would want to know if a specific employee is 90% confident all the time but only 50% accurate. More importantly, you would want to know if a certain team member is usually right when they express 90% or 100% confidence. Use a Brier score to track colleagues, but provide an environment that encourages discussion and openness.
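A quick sketch of both ideas, using the $10mm project numbers above and an invented set of forecasts for the Brier score:

```python
# Expected value for the $10mm project example above, plus a Brier score
# sketch for tracking forecasters (the forecast/outcome pairs are invented).
def expected_value(p_success, payout_if_right, payout_if_wrong):
    return p_success * payout_if_right + (1 - p_success) * payout_if_wrong

print(expected_value(0.5, 20, 0))   # -> 10.0, i.e. $10mm: break-even on a $10mm bet

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A colleague who says "90% confident" on everything but is right only half the time:
print(brier_score([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0]))   # 0.41
# Versus honest 50/50 forecasts on the same outcomes:
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))   # 0.25
```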

9. We really are overconfident. Starting from the assumption that we are probably only 50% accurate is not a bad idea. Phil Tetlock, a professor at UPenn, team leader for the Good Judgment Project, and author of Expert Political Judgment: How Good Is It? How Can We Know?, suggested political pundits are about 53% accurate in their political forecasts, while CXO Advisory tracks investment gurus and finds they are, in aggregate, about 48% accurate. These are experts making predictions about their core area of expertise. Consider the rate of divorce in the U.S., currently around 40%-50%, as additional evidence that sometimes we don’t know as much as we think. Experts are helpful in explaining a specific discipline, but they are less helpful in dynamic environments. If you need something fixed, like a car, a clock, or an appliance, experts can be very helpful. The same goes for tax and accounting advice. It’s not because this stuff is simple; it’s because the environment is static.

10. Improving estimates of probabilities and payouts is about polishing our 1) subject matter expertise and 2) cognitive processing abilities. Learning more about a given subject reduces uncertainty and lets us move away from the lazy 50/50 forecast. Say you travel to Arizona and get stung by a scorpion. Rather than assume a 50% probability of death, you can do a quick internet search and learn that no one has died from a scorpion sting in Arizona since the 1960s. Overly simplistic, but you get the picture. Second, data needs to be interpreted in a cogent way. Let’s say you work in asset management and one of your portfolio managers has made three investments that returned -5%, -12%, and 22%. What can you say about the manager (other than that two of the three investments lost money)? Does the information allow you to claim the portfolio manager is a bad manager? Does it allow you to confidently predict his or her average rate of return? Unless you’ve had some statistics, it might not be entirely clear what conclusions you can reliably draw. What if you flipped a coin three times and came up with tails on two of them? That wouldn’t seem so strange; two out of three is about 66%. If you tossed the coin one hundred times and got 66 tails, that would be a little more interesting. The more observations, the higher our confidence should be. A 95% confidence interval for the portfolio manager’s average return would run roughly from -43% to +46%. Is that enough to take action?
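Here’s how that interval can be computed for the three returns above, using a standard two-sided t-interval (one reasonable method among several; exact bounds depend on rounding):

```python
# 95% t-interval for the mean of the three returns mentioned above.
# With only three observations the interval is very wide:
# roughly -43% to +46% with this method.
from statistics import mean, stdev
from math import sqrt

returns = [-5.0, -12.0, 22.0]          # percent returns from the example
n = len(returns)
m = mean(returns)
se = stdev(returns) / sqrt(n)          # standard error of the mean
t_crit = 4.303                         # t critical value for 95%, df = n - 1 = 2

low, high = m - t_crit * se, m + t_crit * se
print(f"95% CI for average return: ({low:.0f}%, {high:.0f}%)")
```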

11. Bayesian analysis is more useful than we think. Bayesian updating helps us revise our estimates in light of true/false positives and true/false negatives: it gives the probability of a hypothesis given some observed data. For example, what’s the likelihood of X (this new hire will place in the top 10% of the firm) given Y (they graduated from an Ivy League school)? A certain percentage of employees are top performers, some Ivy League grads will be top performers (others not), and some non-Ivy-League grads will be top performers (others not). If I’m staring at a random employee trying to guess whether they are a top performer, all I have are the starting odds, and, if only the top 10% qualify, I know my chances are 1 in 10. But I can update my odds if supplied with information about their education. Here’s another example: what is the likelihood a project will be successful (X) given that it missed one of the first two milestones (Y)? There are lots of helpful resources online if you want to learn more, but think of it this way (hat tip to Kalid Azad at Better Explained): original odds x evidence adjustment = your new odds. The actual equation is more complicated, but that is the intuition behind it. Bayesian analysis has its naysayers. In the examples provided, the prior odds of success are known, or could easily be obtained, but this isn’t always true. Most of the time, subjective prior probabilities are required, and this type of tomfoolery is generally discouraged. There are ways around that, but no time to explain them here.
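Here’s a sketch of the Ivy League example with invented numbers, since the likelihoods aren’t given above: assume 10% of employees are top performers, 30% of top performers are Ivy grads, and 10% of everyone else are.

```python
# Bayesian update for the Ivy League example.
# All likelihood numbers below are invented for illustration.
p_top = 0.10                 # prior: 10% of employees are top performers
p_ivy_given_top = 0.30       # assumed: 30% of top performers are Ivy grads
p_ivy_given_not_top = 0.10   # assumed: 10% of everyone else are Ivy grads

p_ivy = p_ivy_given_top * p_top + p_ivy_given_not_top * (1 - p_top)
p_top_given_ivy = p_ivy_given_top * p_top / p_ivy

print(f"P(top performer | Ivy grad) = {p_top_given_ivy:.0%}")   # 25% with these assumptions
```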

12. A word about crowds. Is there a wisdom of crowds? Some say yes, others say no. My view is that crowds can be very useful if individual members are able to vote independently, or if the environment is such that there are few repercussions for voicing disagreement. Otherwise, the signaling effect of seeing how others are “voting” is too strong an evolutionary force to overcome with sheer rational willpower. Our earliest ancestors ran when the rest of the tribe ran; not doing so might have resulted in an untimely demise.

13. Analyze your own motives. Jonathan Haidt, author of The Righteous Mind: Why Good People Are Divided by Politics and Religion, is credited with teaching that logic isn’t used to find truth; it’s used to win arguments. Logic may not be the only path to truth (though I have no hard basis for that claim). Keep this in mind, as it bears on the role of intuition in decision-making.

Just a few closing thoughts.

We are pretty hard on ourselves. My process is to make the best decisions I can, realizing not all of them will be optimal. I have a method to track my decisions and score how accurate I am. Sometimes I use heuristics, but I try to keep them within my circle of competence, as Munger says. I don’t do lists of pros and cons, because I feel like I’m just trying to convince myself one way or the other.

If I have to make a big decision in an unfamiliar area, I try to learn as much as I can about the issue on my own and from experts, assess how much randomness could be present, formulate my thesis, look for contradictory information, try to build in downside protection (risking as little as possible), and watch for signals that may indicate a likely outcome. Many of my decisions have not worked out, but most of them have. As the world changes, so will my process, and I look forward to that.
