
The Power of Noticing: What the Best Leaders See

In The Power of Noticing: What the Best Leaders See, Harvard professor Max Bazerman examines how the failure to notice things leads to “poor personal decisions, organizational crises, and societal disasters.” He walks us through each of these, highlighting recent research on the information we’re prone to ignore, and presents a blueprint to help us become more aware of critical information we would otherwise have missed. The book prompts the questions, typically asked in hindsight but rarely in foresight, “How could that have happened?” and “Why didn’t I see it coming?”

Even the best of us fail to notice critical and readily available information in our environment, “due to the human tendency to wear blinders that focus us on a limited set of information.” This additional information, however, is essential to success, and Bazerman argues that “in the future, it will prove a defining quality of leadership.”

Noticing is a System 2 process.

In his best-selling book from 2011, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman discusses Stanovich and West’s distinction between System 1 and System 2 thinking. System 1 is our intuitive system: it is quick, automatic, effortless, implicit, and emotional. Most of our decisions occur in System 1. By contrast, System 2 thinking is slower and more conscious, effortful, explicit, and logical. My colleague Dolly Chugh of New York University notes that the frantic pace of managerial life requires that executives typically rely on System 1 thinking. Readers of this book doubtless are busy people who depend on System 1 when making many decisions. Unfortunately we are generally more affected by biases that restrict our awareness when we rely on System 1 thinking than when we use System 2 thinking.

Noticing important information in contexts where many people do not is generally a System 2 process.

Logic and other strategic-thinking tools, like game theory, are also generally System 2 processes. They require that we step away from the heat of the moment and think a few steps ahead, imagining how others will respond, something that “system 1 intuition typically fails to do adequately.”

So much of what Bazerman spends time on is moving toward System 2 thinking when making important judgments.

When you do so, you will find yourself noticing more pertinent information from your environment than you would have otherwise. Noticing what is not immediately in front of you is often counterintuitive and the province of System 2. Here, then, is the purpose and promise of this book: your broadened perspective as a result of System 2 thinking will guide you toward more effective decisions and fewer disappointments.

Rejecting What’s Available

Often the best decisions require that you look beyond what’s available and reject the presented options. Bazerman didn’t always think this way; he needed some help from his colleague Richard Zeckhauser. At a recent talk, Zeckhauser presented the audience with the “Cholesterol Problem”:

Your doctor has discovered that you have a high cholesterol level, namely 260. She prescribes one of many available statin drugs. She says this will generally drop your cholesterol about 30 percent. There may be side effects. Two months later you return to your doctor. Your cholesterol level is now at 195. Your only negative side effect is sweaty palms, which you experience once or twice a week for one or two hours. Your doctor asks whether you can live with this side effect. You say yes. She tells you to continue on the medicine. What do you say?

Bazerman, who has naturally problematic lipids, has a wide body of knowledge on the subject and isn’t known for his shyness. He went with the statin. He recounts what happened next:

Zeckhauser responded, “Why don’t you try one of the other statins instead?” I immediately realized that he was probably right. Rather than focusing on whether or not to stay on the current statin, broadening the question to include the option of trying other statins makes a great deal of sense. After all, there may well be equally effective statins that don’t cause sweaty palms or any other side effects. My guess is that many patients err by accepting one of two options that a doctor presents to them. It is easy to get stuck on an either/or choice, which I … fell victim to at Zeckhauser’s lecture. I made the mistake of accepting the choice as my colleague presented it. I could have and should have asked what all of the options were. But I didn’t. I too easily accepted the choice presented to me.

The Power of Noticing: What the Best Leaders See opens your eyes to what you’re missing.

Max Bazerman Offers Books for Leaders

Max Bazerman, the author of the best book on general decision making that I’ve ever read, Judgment in Managerial Decision Making, came out with seven book recommendations.1

I hadn’t heard of two of these, which I picked up.

1. Thinking, Fast and Slow by Daniel Kahneman

I think we’ve all heard of this one. Bazerman says:

The development of decision research is the most pronounced influence of the social sciences on professional education and societal change that we have witnessed in the last half-century. Kahneman is the greatest social scientist of our time, and Thinking, Fast and Slow provides an integrated history of the fields of behavioral decision research and behavioral economics, the role of our two different systems for processing information (System 1 vs. System 2), and the wonderful story of Kahneman’s relationship with Amos Tversky (Tversky would have shared Kahneman’s Nobel Prize had he not passed away at an early age).

2. Nudge: Improving Decisions About Health, Wealth and Happiness by Richard Thaler & Cass Sunstein

This is another one I think most of you have heard of, but it’s a classic. I once used this book as the foundation to make the case to a management team for hiring a group of behavioral psychologists. Along with Thinking, Fast and Slow, it is part of the ultimate behavioral economics reading list.

Nudge takes the study of how humans depart from rational decision making and turns this work into a prescriptive strategy for action. Over the last 40 years, we have learned a great deal about the systematic and predictable ways in which the human mind departs from rational action. Yet, we have observed dozens of studies that show the limits of trying to debias the human mind. Nudge highlights that we do not need to debias humans, we simply need to understand humans, and create decision architectures with a realistic understanding of the human to guide humans to wise decisions. Nudge has emerged as the bible of behavioral insight teams that are transforming the ways countries help to devise wise policies.

3. The Big Short: Inside the Doomsday Machine by Michael Lewis

Lewis is an amazing writer, with the talent to capture amazing features of how humans have the capacity to overcome common limitations. Moneyball (that would have been on the list, but I imposed a one book per author limit) was a fascinating look about how overcoming common human limits allowed baseball leaders to develop unique and effective leadership strategies. In The Big Short, Lewis shows how people can notice, even when most of us are failing to do so. Lewis shows that it was possible to notice vast problems with our economy by 2007, and tells the amazing account of those who did.

4. Eyewitness To Power: The Essence of Leadership Nixon to Clinton by David Gergen

This one looks fascinating.

David Gergen is an amazingly insightful intellect about so many things, including the nature of Presidential leadership. His writing is wonderful, and his ability to pull out the nuggets of effective leadership in his closing chapter is a lasting contribution. You will learn about four Presidents that have escaped you in the past, and in the process, learn some insights about leadership in your organization.

5. Moral Tribes: Emotion, Reason, and the Gap Between Us and Them by Joshua Greene

This book has been recommended to me by so many smart people that there must be something to it.

Joshua Greene is a wonderful mix of insightful philosopher, careful psychologist, and keen observer of human morality. If you have ever been confronted with the famous “trolley problem”, and want to learn more, Moral Tribes is the place to go. Whether you are a philosopher looking for a new path, a psychologist looking for insight from a new direction, or simply a human who wants to understand your own morality, this book is terrific.

6. Happy Money: The Science of Smarter Spending by Elizabeth Dunn & Michael Norton

For decades, the study of consumer behavior has been dominated by the question of how marketers can understand consumers to sell their products and services. Dunn and Norton use contemporary social science to provide insight into what consumers can do to make themselves, rather than marketers, happy.

7. The Art and Science of Negotiation by Howard Raiffa

The Art and Science of Negotiation is where it all began from an intellectual standpoint, where Raiffa provides insight into how to think systematically in a world where you cannot count on the other side to do so.

Footnotes
1. Source: http://250words.com/2014/03/max-bazerman-best-books-for-leaders/

Daniel Kahneman’s Favorite Approach For Making Better Decisions

Bob Sutton’s book, Scaling Up Excellence: Getting to More Without Settling for Less, contains an interesting section towards the end on looking back from the future, which talks about “a mind trick that goads and guides people to act on what they know and, in turn, amplifies their odds of success.”

We build on Nobel winner Daniel Kahneman’s favorite approach for making better decisions. This may sound weird, but it’s a form of imaginary time travel.

It’s called the premortem. And, while it may be Kahneman’s favorite, he didn’t come up with it. A fellow by the name of Gary Klein invented the premortem technique.

A premortem works something like this. When you’re on the verge of making a decision, not just any decision but a big decision, you call a meeting. At the meeting, you ask each member of your team to imagine that it’s a year later.

Split them into two groups. Have one group imagine that the effort was an unmitigated disaster. Have the other pretend it was a roaring success. Ask each member to work independently and generate reasons, or better yet, write a story, about why the success or failure occurred. Instruct them to be as detailed as possible, and, as Klein emphasizes, to identify causes that they wouldn’t usually mention “for fear of being impolite.” Next, have each person in the “failure” group read their list or story aloud, and record and collate the reasons. Repeat this process with the “success” group. Finally use the reasons from both groups to strengthen your … plan. If you uncover overwhelming and impassible roadblocks, then go back to the drawing board.

Premortems encourage people to use “prospective hindsight,” or, more accurately, to talk in “future perfect tense.” Instead of thinking, “we will devote the next six months to implementing a new HR software initiative,” for example, we travel to the future and think, “we have devoted six months to implementing a new HR software package.”

You imagine that a concrete success or failure has occurred and look “back from the future” to tell a story about the causes.

[…]

Pretending that a success or failure has already occurred—and looking back and inventing the details of why it happened—seems almost absurdly simple. Yet renowned scholars including Kahneman, Klein, and Karl Weick supply compelling logic and evidence that this approach generates better decisions, predictions, and plans. Their work suggests several reasons why. …

1. This approach helps people overcome blind spots

As … upcoming events become more distant, people develop more grandiose and vague plans and overlook the nitty-gritty daily details required to achieve their long-term goals.

2. This approach helps people bridge short-term and long-term thinking

Weick argues that this shift is effective, in part, because it is far easier to imagine the detailed causes of a single outcome than to imagine multiple outcomes and try to explain why each may have occurred. Beyond that, analyzing a single event as if it has already occurred rather than pretending it might occur makes it seem more concrete and likely to actually happen, which motivates people to devote more attention to explaining it.

3. Looking back dampens excessive optimism

As Kahneman and other researchers show, most people overestimate the chances that good things will happen to them and underestimate the odds that they will face failures, delays, and setbacks. Kahneman adds that “in general, organizations really don’t like pessimists” and that when naysayers raise risks and drawbacks, they are viewed as “almost disloyal.”

Max Bazerman, a Harvard professor, believes that we’re less prone to irrational optimism when we predict the fate of projects that are not our own. For example, when it comes to friends’ home renovation projects, most people estimate the costs will run 25 to 50 percent over budget. When it comes to our own projects, however, we expect they will be “completed on time and near the project costs.”

4. A premortem challenges the illusion of consensus

Most of the time, not everyone on a team agrees with the course of action. Even when you have enough cognitive diversity in the room, people still keep their mouths shut, because those in power tend to reward people who agree with them while punishing those who dare to speak up with a dissenting view.

The resulting corrosive conformity is evident when people don’t raise private doubts, known risks, and inconvenient facts. In contrast, as Klein explains, a premortem can create a competition where members feel accountable for raising obstacles that others haven’t. “The whole dynamic changes from trying to avoid anything that might disrupt harmony to trying to surface potential problems.”

Mental Model: Bias from Insensitivity to Sample Size

The widespread misunderstanding of randomness causes a lot of problems.

Today we’re going to explore a concept that causes a lot of human misjudgment. It’s called the bias from insensitivity to sample size, or, if you prefer, the law of small numbers.

Insensitivity to small sample sizes causes a lot of problems.

* * *

If I measured one person, who happened to be 6 feet tall, and then told you that everyone in the whole world was 6 feet tall, you’d intuitively realize this was a mistake. You’d say you can’t measure only one person and then draw such a conclusion; to do that you’d need a much larger sample.

And, of course, you’d be right.

While simple, this example is a key building block to our understanding of how insensitivity to sample size can lead us astray.

As Stuart Sutherland writes in Irrationality:

Before drawing conclusions from information about a limited number of events (a sample) selected from a much larger number of events (the population) it is important to understand something about the statistics of samples.

In Thinking, Fast and Slow, Daniel Kahneman writes, “A random event, by definition, does not lend itself to explanation, but collections of random events do behave in a highly regular fashion.” Kahneman continues, “extreme outcomes (both high and low) are more likely to be found in small than in large samples. This explanation is not causal.”

We all intuitively know that “the results of larger samples deserve more trust than smaller samples, and even people who are innocent of statistical knowledge have heard about this law of large numbers.”

The law of large numbers tells us that as the sample size grows, results converge toward a stable frequency. So if we’re flipping a fair coin and measuring the proportion of heads, we’d expect that proportion to approach 50% over a large sample of, say, 100 flips, but not necessarily over 2 or 4.
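To make the coin-flipping point concrete, here is a minimal simulation sketch (my own illustration, in Python using only the standard library; the sample sizes and trial count are arbitrary) that estimates how often a fair coin produces more than 60% heads at different sample sizes.

```python
import random

def prob_extreme_heads(n_flips, n_trials=50_000, threshold=0.6):
    """Estimate how often a fair coin shows more than `threshold` heads in n_flips flips."""
    extreme = 0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if heads / n_flips > threshold:
            extreme += 1
    return extreme / n_trials

for n in (2, 4, 100):
    print(f"n = {n:>3}: P(more than 60% heads) ≈ {prob_extreme_heads(n):.3f}")
```

With these settings, the “extreme” result shows up roughly a quarter to a third of the time for 2 or 4 flips, but only around 2% of the time for 100 flips.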

In our minds, we often fail to account for the uncertainty that comes with a given sample size.

While we all understand it intuitively, it’s hard for us to realize in the moment of processing and decision making that larger samples are better representations than smaller samples.

We understand the difference between a sample size of 6 and 6,000,000 fairly well but we don’t, intuitively, understand the difference between 200 and 3,000.

* * *

This bias comes in many forms.

In a telephone poll of 300 seniors, 60% support the president.

If you had to summarize the message of this sentence in exactly three words, what would they be? Almost certainly you would choose “elderly support president.” These words provide the gist of the story. The omitted details of the poll, that it was done on the phone with a sample of 300, are of no interest in themselves; they provide background information that attracts little attention. Of course, if the sample were extreme, say 6 people, you’d question it. Unless you’re fully mathematically equipped, however, you won’t intuitively question the sample size, and you may not react differently to a sample of, say, 150 and one of 3,000. That, in a nutshell, is exactly the meaning of the statement that “people are not adequately sensitive to sample size.”
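As a rough illustration of why the sample size matters here, the short sketch below (my own, using the standard normal-approximation formula rather than anything from the book) compares the approximate 95% margin of error on a 60% poll result at several sample sizes.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p observed in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (6, 150, 300, 3000):
    print(f"n = {n:>4}: 60% ± {margin_of_error(0.6, n):.1%}")
```

A sample of 6 leaves the estimate almost meaningless (roughly ±39 points), 150 and 300 give margins of about ±8 and ±6 points, and 3,000 tightens it to under ±2.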

Part of the problem is that we focus on the story over the reliability, or robustness, of the results.

System 1 thinking, that is, our intuition, is “not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. Unless the message is immediately negated, the associations that it evokes will spread as if the message were true.”

Considering sample size, unless it’s extreme, is not a part of our intuition.

Kahneman writes:

The exaggerated faith in small samples is only one example of a more general illusion – we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.

* * *

In engineering, for example, we can encounter this in the evaluation of precedent.

Steven Vick writes in Degrees of Belief: Subjective Probability and Engineering Judgment:

If something has worked before, the presumption is that it will work again without fail. That is, the probability of future success conditional on past success is taken as 1.0. Accordingly, a structure that has survived an earthquake would be assumed capable of surviving another earthquake of the same magnitude and distance, with the underlying presumption being that the operative causal factors must be the same. But the seismic ground motions are quite variable in their frequency content, attenuation characteristics, and many other factors, so that a precedent of a single earthquake represents a very small sample size.

Bayesian thinking tells us that a single success, absent other information, raises the likelihood of survival in the future.

In a way, this is related to robustness: the more you’ve had to handle while still surviving, the more robust you are.
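One way to make that Bayesian point concrete is Laplace’s rule of succession, which assumes a uniform prior over the unknown survival probability and updates it with each observed outcome. This is my own illustrative formalization, not Vick’s; the point is simply that one success moves the estimate up, but nowhere near 1.0.

```python
def rule_of_succession(successes, trials):
    """Posterior mean probability of success on the next trial,
    assuming a uniform prior over the unknown success probability."""
    return (successes + 1) / (trials + 2)

print(rule_of_succession(0, 0))  # no information yet: 0.50
print(rule_of_succession(1, 1))  # one survival:       ~0.67
print(rule_of_succession(9, 9))  # nine survivals:     ~0.91
```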

Let’s look at some other examples.

* * *

Hospital

Daniel Kahneman and Amos Tversky demonstrated our insensitivity to sample size with the following question:

A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50% of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50%, sometimes lower. For a period of 1 year, each hospital recorded the days on which more than 60% of the babies born were boys. Which hospital do you think recorded more such days?

  1. The larger hospital
  2. The smaller hospital
  3. About the same (that is, within 5% of each other)

Most people incorrectly choose 3. The correct answer is, however, 2.

In Judgment in Managerial Decision Making, Max Bazerman explains:

Most individuals choose 3, expecting the two hospitals to record a similar number of days on which 60 percent or more of the babies born are boys. People seem to have some basic idea of how unusual it is to have 60 percent of a random event occurring in a specific direction. However, statistics tells us that we are much more likely to observe 60 percent male babies in a smaller sample than in a larger sample. This effect is easy to understand. Think about which is more likely: getting more than 60 percent heads in three flips of a coin or getting more than 60 percent heads in 3,000 flips.
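A quick simulation makes the result visible. This is a sketch of my own (not from the book), counting how many days per simulated year each hospital sees more than 60 percent boys, assuming each birth is a boy with probability 0.5.

```python
import random

def extreme_days(births_per_day, days=365, n_years=200):
    """Average number of days per year on which more than 60% of births are boys."""
    total = 0
    for _ in range(n_years * days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            total += 1
    return total / n_years

print("Large hospital (45 births/day):", round(extreme_days(45)))
print("Small hospital (15 births/day):", round(extreme_days(15)))
```

With these assumptions, the small hospital typically records roughly twice as many such days as the large one (on the order of 55 versus 25 per year).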

* * *

Another interesting example comes from poker.

Over short periods of time, luck is more important than skill. The more luck contributes to the outcome, the larger the sample you’ll need to distinguish between someone’s skill and pure chance.

David Einhorn explains:

People ask me “Is poker luck?” and “Is investing luck?”

The answer is, not at all. But sample sizes matter. On any given day a good investor or a good poker player can lose money. Any stock investment can turn out to be a loser no matter how large the edge appears. Same for a poker hand. One poker tournament isn’t very different from a coin-flipping contest and neither is six months of investment results.

On that basis luck plays a role. But over time – over thousands of hands against a variety of players and over hundreds of investments in a variety of market environments – skill wins out.

As the number of hands played increases, skill plays a larger and larger role and luck plays less of a role.
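Here is a small simulation sketch of that idea (my own illustration, with made-up numbers): a “skilled” player wins each hand with probability 0.52 instead of 0.50. Over a handful of hands that edge is nearly invisible; over thousands of hands it dominates.

```python
import random

def p_skilled_ahead(n_hands, edge=0.02, n_trials=2_000):
    """Estimate the probability that a player who wins each hand with
    probability 0.5 + edge has more wins than losses after n_hands."""
    ahead = 0
    for _ in range(n_trials):
        wins = sum(random.random() < 0.5 + edge for _ in range(n_hands))
        if wins > n_hands - wins:
            ahead += 1
    return ahead / n_trials

for n in (10, 100, 1000, 5000):
    print(f"{n:>5} hands: skilled player is ahead about {p_skilled_ahead(n):.0%} of the time")
```

With a two-point edge, the skilled player is ahead only a little over 40% of the time after 10 hands (ties count against them here), around 60% after 100, and close to 100% after a few thousand.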

* * *

But this goes way beyond hospitals and poker. Baseball is another good example. Over a long season, odds are the best teams will rise to the top. In the short term, anything can happen. If you look at the standings 10 games into the season, odds are they will not be representative of where things will land after the full 162-game season. In the short term, luck plays too much of a role.

In Moneyball, Michael Lewis writes “In a five-game series, the worst team in baseball will beat the best about 15% of the time.”
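That figure is easy to sanity-check with the binomial distribution. In the sketch below I assume, purely for illustration, that the worst team wins any single game against the best team about 30 percent of the time; that assumption reproduces roughly the 15 percent series number.

```python
from math import comb

def series_win_prob(p_game, games=5):
    """Probability of winning a best-of-`games` series given a per-game win probability.
    Equivalent to winning a majority of `games` independent games."""
    wins_needed = games // 2 + 1
    return sum(comb(games, k) * p_game**k * (1 - p_game)**(games - k)
               for k in range(wins_needed, games + 1))

print(f"{series_win_prob(0.30):.0%}")  # ≈ 16% with an assumed 30% per-game chance
```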

* * *

If you promote people or work with colleagues you’ll also want to keep this bias in mind.

If you assume that performance at work is some combination of skill and luck, you can easily see that sample size is relevant to the reliability of performance.

Performance sampling works like anything else: the bigger the sample size, the greater the reduction in uncertainty and the more likely you are to make good decisions.

This has been studied by one of my favorite thinkers, James March. He calls it the false record effect.

He writes:

False Record Effect. A group of managers of identical (moderate) ability will show considerable variation in their performance records in the short run. Some will be found at one end of the distribution and will be viewed as outstanding; others will be at the other end and will be viewed as ineffective. The longer a manager stays in a job, the less the probable difference between the observed record of performance and actual ability. Time on the job increases the expected sample of observations, reduces expected sampling error, and thus reduces the chance that the manager (of moderate ability) will either be promoted or exit.

Hero Effect. Within a group of managers of varying abilities, the faster the rate of promotion, the less likely it is to be justified. Performance records are produced by a combination of underlying ability and sampling variation. Managers who have good records are more likely to have high ability than managers who have poor records, but the reliability of the differentiation is small when records are short.

(I realize promotions are a lot more complicated than I’m letting on. Some jobs, for example, are more difficult than others. It gets messy quickly and that’s part of the problem. Often when things get messy we turn off our brains and concoct the simplest explanation we can. Simple but wrong. I’m only pointing out that sample size is one input into the decision. I’m by no means advocating an “experience is best” approach, as that comes with a host of other problems.)
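To see the false record effect in miniature, here is a sketch of my own (the numbers are arbitrary): every manager has the same 50 percent chance of a “good” outcome in any period, yet with short records a sizable fraction of them look outstanding purely by chance.

```python
import random

def share_looking_outstanding(n_periods, n_managers=10_000, cutoff=0.75):
    """Simulate managers of identical ability (50% success per period) and return
    the share whose observed success rate meets the 'outstanding' cutoff."""
    outstanding = 0
    for _ in range(n_managers):
        successes = sum(random.random() < 0.5 for _ in range(n_periods))
        if successes / n_periods >= cutoff:
            outstanding += 1
    return outstanding / n_managers

for periods in (4, 12, 48):
    print(f"{periods:>2} periods on the job: "
          f"{share_looking_outstanding(periods):.1%} look outstanding by chance")
```

With these settings, roughly a third of identical-ability managers look outstanding after 4 periods, but almost none do after 48.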

* * *

This bias is also used against you in advertising.

The next time you see a commercial that says “4 out of 5 doctors recommend…,” remember that the result is meaningless without knowing the sample size. Odds are pretty good that the sample size is 5.
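In fact, even if doctors as a whole were split 50/50, a sample of five would produce “4 out of 5 in favor” surprisingly often. The quick check below is my own illustration.

```python
from math import comb

# Probability of seeing 4 or 5 "recommend" answers among 5 doctors
# if the true recommendation rate were only 50%.
p = sum(comb(5, k) * 0.5**5 for k in (4, 5))
print(f"{p:.1%}")  # ≈ 18.8%
```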

* * *

Large sample sizes are not a panacea. Things change. Systems evolve, and faith in old results can become unfounded as well.

The key, at all times, is to think.

This bias leads to a whole slew of things, such as:
– under-estimating risk
– over-estimating risk
– undue confidence in trends/patterns
– undue confidence in the lack of side-effects/problems

The Bias from insensitivity to sample size is part of the Farnam Street latticework of mental models.

Making Smart Choices: 8 Keys to Making Effective Decisions

Making decisions is a fundamental life skill. Expecting to make perfect decisions all of the time is unreasonable. When even an ounce of luck is involved, good decisions can have bad outcomes. So our goal should be to raise the odds of making a good decision. The best way to do that is to use a good decision-making process.

Smart Choices: A Practical Guide to Making Better Decisions contains an interesting decision-making framework: PrOACT.

We have found that even the most complex decision can be analysed and resolved by considering a set of eight elements. The first five—Problem, Objectives, Alternatives, Consequences, and Tradeoffs—constitute the core of our approach and are applicable to virtually any decision. The acronym for these—PrOACT—serves as a reminder that the best approach to decision situations is a proactive one. … The three remaining elements—uncertainty, risk tolerance, and linked decisions—help clarify decisions in volatile or evolving environments.

This framework can help you make better decisions. Of course, sometimes good decisions go wrong. A good decision, however, increases the odds of success.

There are eight keys to effective decision making.

Work on the right decision problem. … The way you frame your decision at the outset can make all the difference. To choose well, you need to state your decision problems carefully, acknowledging their complexity and avoiding unwarranted assumptions and option-limiting prejudices. …

Specify your objectives. … A decision is a means to an end. Ask yourself what you most want to accomplish and which of your interests, values, concerns, fears, and aspirations are most relevant to achieving your goal. … Decisions with multiple objectives cannot be resolved by focusing on any one objective.

Create imaginative alternatives. … Remember: your decision can be no better than your best alternative. …

Understand the consequences. … Assessing frankly the consequences of each alternative will help you to identify those that best meet your objectives—all your objectives. …

Grapple with your tradeoffs. Because objectives frequently conflict with one another, you’ll need to strike a balance. Some of this must sometimes be sacrificed in favor of some of that. …

Clarify your uncertainties. What could happen in the future and how likely is it that it will? …

Think hard about your risk tolerance. When decisions involve uncertainties, the desired consequence may not be the one that actually results. A much-deliberated bone marrow transplant may or may not halt cancer. …

Consider linked decisions. What you decide today could influence your choices tomorrow, and your goals for tomorrow should influence your choices today. Thus many important decisions are linked over time. …

Harvard professor Max Bazerman, who has written extensively on human misjudgment, suggests something very similar to this approach in his book Judgment in Managerial Decision Making when he explains the anatomy of decisions. Before we can fully understand judgment, we have to identify the components of the decision-making process that require it. Here are the six steps that Bazerman argues you should take, either implicitly or explicitly, when applying a rational decision-making process.

1. Define the problem. (M)anagers often act without a thorough understanding of the problem to be solved, leading them to solve the wrong problem. Accurate judgment is required to identify and define the problem. Managers often err by (a) defining the problem in terms of a proposed solution, (b) missing a bigger problem, or (c) diagnosing the problem in terms of its symptoms. Your goal should be to solve the problem not just eliminate its temporary symptoms.

2. Identify the criteria. Most decisions require you to accomplish more than one objective. When buying a car, you may want to maximize fuel economy, minimize cost, maximize comfort, and so on. The rational decision maker will identify all relevant criteria in the decision-making process.

3. Weight the criteria. Different criteria will vary in importance to a decision maker. Rational decision makers will know the relative value they place on each of the criteria identified. The value may be specified in dollars, points, or whatever scoring system makes sense.

4. Generate alternatives. The fourth step in the decision-making process requires identification of possible courses of action. Decision makers often spend an inappropriate amount of search time seeking alternatives, thus creating a barrier to effective decision making. An optimal search continues only until the cost of the search outweighs the value of added information.

5. Rate each alternative on each criterion. How well will each of the alternative solutions achieve each of the defined criteria? This is often the most difficult stage of the decision-making process, as it typically requires us to forecast future events. The rational decision maker carefully assesses the potential consequences on each of the identified criteria of selecting each of the alternative solutions.

6. Compute the optimal decision. Ideally, after all of the first five steps have been completed, the process of computing the optimal decision consists of (a) multiplying the ratings in step 5 by the weight of each criterion, (b) adding up the weighted ratings across all of the criteria for each alternative, and (c) choosing the solution with the highest sum of weighted ratings.
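As a concrete illustration of steps 3 through 6, here is a short sketch using the car example from step 2. The criteria, weights, and 1-10 ratings are invented for the example; nothing here comes from Bazerman’s book beyond the weighted-sum idea itself.

```python
# Hypothetical weights (step 3) and 1-10 ratings (step 5) for three car alternatives (step 4).
weights = {"fuel economy": 0.3, "cost": 0.4, "comfort": 0.3}
ratings = {
    "Car A": {"fuel economy": 8, "cost": 6, "comfort": 7},
    "Car B": {"fuel economy": 6, "cost": 9, "comfort": 4},
    "Car C": {"fuel economy": 7, "cost": 7, "comfort": 8},
}

def weighted_score(alternative):
    """Step 6(a)-(b): multiply each rating by its criterion weight and sum."""
    return sum(weights[criterion] * rating for criterion, rating in ratings[alternative].items())

scores = {car: weighted_score(car) for car in ratings}
for car, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{car}: {score:.1f}")
print("Choose:", max(scores, key=scores.get))  # step 6(c): highest weighted sum
```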

Rational decision frameworks, such as those suggested above, are a great starting place. On top of that, we need to consider our psychological biases. And keep a decision journal.

Insensitivity To Base Rates: An Introduction

In statistics, a base rate refers to the percentage of a population (e.g., grasshoppers, people who live in New York, newborn babies) that has a given characteristic. Given a random individual and no additional information, the base rate tells us the likelihood that they exhibit that characteristic. For instance, around 10% of people are left-handed. If you selected a random person and had no information related to their handedness, you could safely guess there to be a 1 in 10 chance of them being left-handed.

When we make estimations, we often fail to consider the influence of base rates. This is a common psychological bias and is related to the representativeness heuristic.

From Smart Choices: A Practical Guide to Making Better Decisions:

Donald Jones is either a librarian or a salesman. His personality can best be described as retiring. What are the odds that he is a librarian?

When we use this little problem in seminars, the typical response goes something like this: “Oh, it’s pretty clear that he’s a librarian. It’s much more likely that a librarian will be retiring; salesmen usually have outgoing personalities. The odds that he’s a librarian must be at least 90 percent.” Sounds good, but it’s totally wrong.

The trouble with this logic is that it neglects to consider that there are far more salesmen than male librarians. In fact, in the United States, salesmen outnumber male librarians 100 to 1. Before you even considered the fact that Donald Jones is “retiring,” therefore, you should have assigned only a 1 percent chance that Jones is a librarian. That is the base rate.

Now, consider the characteristic “retiring.” Suppose half of all male librarians are retiring, whereas only 5 percent of salesmen are. That works out to 10 retiring salesmen for every retiring librarian — making the odds that Jones is a librarian closer to 10 percent than to 90 percent. Ignoring the base rate can lead you wildly astray.
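The arithmetic in that passage is just Bayes’ rule. The sketch below reproduces it with the numbers given: a 100-to-1 base rate, and a 50% versus 5% chance of a “retiring” personality.

```python
def posterior_librarian(prior, p_retiring_given_librarian, p_retiring_given_salesman):
    """Bayes' rule: P(librarian | retiring personality)."""
    librarian = prior * p_retiring_given_librarian
    salesman = (1 - prior) * p_retiring_given_salesman
    return librarian / (librarian + salesman)

# Salesmen outnumber male librarians 100 to 1, so the prior is 1/101.
print(f"{posterior_librarian(1/101, 0.50, 0.05):.0%}")  # ≈ 9%, nowhere near 90%
```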

* * *

Charlie Munger instructs us how to think about base rates with the example of an employee who was caught stealing and claimed she had never done it before and would never do it again:

You find an isolated example of a little old lady in the See’s Candy Company, one of our subsidiaries, getting into the till. And what does she say? “I never did it before, I’ll never do it again. This is going to ruin my life. Please help me.” And you know her children and her friends, and she’d been around 30 years and standing behind the candy counter with swollen ankles. When you’re an old lady it isn’t that glorious a life. And you’re rich and powerful and there she is: “I never did it before, I’ll never do it again.” Well how likely is it that she never did it before? If you’re going to catch 10 embezzlements a year, what are the chances that any one of them — applying what Tversky and Kahneman called base rate information — will be somebody who only did it this once? And the people who have done it before and are going to do it again, what are they all going to say? Well in the history of the See’s Candy Company they always say, “I never did it before, and I’m never going to do it again.” And we cashier them. It would be evil not to, because terrible behavior spreads (Gresham’s law).

* * *

Max Bazerman, in Judgment in Managerial Decision Making, writes:

(Our tendency to ignore base rates) is even stronger when the specific information is vivid and compelling, as Kahneman and Tversky illustrated in one study from 1972. Participants were given a brief description of a person who enjoyed puzzles and was both mathematically inclined and introverted. Some participants were told that this description was selected from a set of seventy engineers and thirty lawyers. Others were told that the description came from a list of thirty engineers and seventy lawyers. Next, participants were asked to estimate the probability that the person described was an engineer. Even though people admitted that the brief description did not offer a foolproof means of distinguishing lawyers from engineers, most tended to believe the description was of an engineer. Their assessments were relatively impervious to differences in base rates of engineers (70 percent versus 30 percent of the sample group).

Participants do use base-rate data correctly when no other information is provided. In the absence of a personal description, people use the base rates sensibly and believe that a person picked at random from a group made up mostly of lawyers is most likely to be a lawyer. Thus, people understand the relevance of base-rate information, but tend to disregard such data when individuating data are also available.

Ignoring base rates has many unfortunate implications. … Similarly, unnecessary emotional distress is caused in the divorce process because of the failure of couples to create prenuptial agreements that facilitate the peaceful resolution of a marriage. The suggestion of a prenuptial agreement is often viewed as a sign of bad faith. However, in far too many cases, the failure to create prenuptial agreements occurs when individuals approach marriage with the false belief that the high base rate for divorce does not apply to them.

* * *

Of course, this applies to investing as well. A conversation with Sanjay Bakshi speaks to it:

One of the great lessons from studying history is to do with “base rates”. “Base rate” is a technical term for describing odds in terms of prior probabilities. The base rate of having a drunken-driving accident is higher than that of having an accident while sober.

So, what’s the base rate of investing in IPOs? When you buy a stock in an IPO, and if you flip it, you make money if it’s a hot IPO. If it’s not a hot IPO, you lose money. But what’s the base rate – the averaged out experience – the prior probability of the activity of subscribing for IPOs – in the long run?

If you do that calculation, you’ll find that the base rate of IPO investing (in fact, it’s not even investing … it’s speculating) sucks! [T]hat’s the case, not just in India, but in every market, in different time periods.

[…]

When you evaluate whether smoking is good for you or not, if you look at the average experience of 1,000 smokers and compare them with a 1,000 non-smokers, you’ll see what happens.

People don’t do that. They get influenced by individual stories like a smoker who lived till he was 95. Such a smoker will force many people to ignore base rates, and to focus on his story, to fool themselves into believing that smoking can’t be all that bad for them.

What is the base rate of investing in leveraged companies in bull markets?

[…]

This is what you learn by studying history. You know that the base rate of investing in an airline business sucks. There’s this famous joke about how to become a millionaire. You start with a billion, and then you buy an airline. That applies very well in this business. It applies in so many other businesses.

Take the paper industry as an example. Averaged out returns on capital for paper industry are bad for pretty good reasons. You are selling a commodity. It’s an extremely capital intensive business. There’s a lot of over-capacity. And if you understand microeconomics, you really are a price taker. There’s no pricing power for you. Extreme competition in such an environment is going to cause your returns on capital to be below what you would want to have.

It’s not hard to figure this out (although I took a while to figure it out myself). Look at the track record of paper companies around the world, and the airline companies around the world, or the IPOs around the world, or the textile companies around the world. Sure, there’ll be exceptions. But we need to focus on the average experience and not the exceptional ones. The metaphor I like to use here is that of a pond. You are the fisherman. If you want to catch a lot of fish, then you must go to a pond where there’s a lot of fish. You don’t want to go to fish in a pond where there’s very little fish. You may be a great fisherman, but unless you go to a pond where there’s a lot of fish, you are not going to find a lot of fish.

[…]

So one of the great lessons from studying history is to see what has really worked well and what has turned out to be a disaster – and to learn from both.

***

Bias from Insensitivity To Base Rates is part of the Farnam Street Latticework of Mental Models.
