Tag: Daniel Kahneman

Blog Posts, Book Reviews, and Abstracts: On Shallowness

We’re quite glad that you read Farnam Street, and we hope we’re always offering you a massive amount of value. (If not, email us and tell us what we can do more effectively.)

But there’s a message all of our readers should appreciate: Blog posts are not enough to generate the deep fluency you need to truly understand or get better at something. We offer a starting point, not an end point.

The same goes for book reviews, abstracts, CliffsNotes, and a good deal of short-form journalism.

This is a hard message for some who want a shortcut. They want the “gist” and the “high-level takeaways” without doing the work or eating any of the broccoli. They think that’s all it takes: check out a 5-minute read, and their decision making and understanding of the world will instantly improve. Most blogs, of course, encourage this kind of shallowness, because it makes you feel that the whole thing is pretty easy.

Here’s the problem: The world is more complex than that. It doesn’t actually work this way. The nuanced detail behind every “high level takeaway” gives you the context needed to use it in the real world. The exceptions, the edge cases, and the contradictions.

Let me give you an example.

A high-level takeaway from reading Kahneman’s Thinking, Fast and Slow would be that we are subject to something he and Amos Tversky call the Representativeness Heuristic. We create models of things in our head, and then fit our real-world experiences to the model, often over-fitting drastically. A very useful idea.

However, that’s not enough. There are so many follow-up questions. Where do we make the most mistakes? Why does our mind create these models? Where is this generally useful? What are the nuanced examples of where this tendency fails us? And so on. Just knowing about the Heuristic, knowing that it exists, won’t perform any work for you.

Or take the rise of the human species as laid out by Yuval Harari. It’s great to post on his theory: how myths laid the foundation for our success, how “natural” is probably a useless concept the way it’s typically used, and how biology is the great enabler.

But Harari’s book itself contains the relevant detail that fleshes all of this out. And further, his bibliography is full of resources that demand your attention to get even more backup. How did he develop that idea? You have to look to find out.

Why do all this? Because without the massive, relevant detail, your understanding is a house of cards.

What Farnam Street and a lot of other great resources give you is something like a brief map of the territory.

Welcome to Colonial Williamsburg! Check out the re-enactors, the museum, and the theatre. Over there is the Revolutionary City. Gettysburg is 4 hours north. Washington D.C. is closer to 2.5 hours.

Great – now you have a lay of the land. Time to dig in and actually learn about the American Revolution. (This book is awesome, if you actually want to do that.)

Going back to Kahneman, one of his and Tversky’s great findings was the concept of the Availability Heuristic. Basically, the mind operates on what it has close at hand.

As Kahneman puts it, “An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”

That means that in the moment of decision making, when you’re thinking hard on some complex problem you face, it’s unlikely that your mind is working all that successfully without the details. It doesn’t have anything to draw on. It’d be like a chess player who read a book about great chess players, but who hadn’t actually studied all of their moves. Not very effective.

The great difficulty, of course, is that we lack the time to dig deep into everything. Opportunity costs and trade-offs are quite real.

That’s why you must develop excellent filters. What’s worth learning this deeply? We think it’s the first-principles-style mental models. The great ideas from physical systems, biological systems, and human systems. The new-new thing you’re studying is probably either A. Wrong or B. Built on one of those great ideas anyway. Farnam Street, in a way, is just a giant filtering mechanism to get you started down the hill.

But don’t stop there. Don’t stop at the starting line. Resolve to increase your depth and stop thinking you can have it all in 5 minutes or less. Use our stuff, and whoever else’s stuff you like, as an entrée to the real thing.

Daniel Kahneman on Human Gullibility

“The premise of this book is that it is easier to recognize other people’s mistakes than our own.”

***

A simple article connecting two ideas from Daniel Kahneman’s Thinking, Fast and Slow on human gullibility and availability bias.

A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact. But it was psychologists who discovered that you do not have to repeat the entire statement of a fact or idea to make it appear true. People who were repeatedly exposed to the phrase “the body temperature of a chicken” were more likely to accept as true the statement that “the body temperature of a chicken is 144°” (or any other arbitrary number). The familiarity of one phrase in the statement sufficed to make the whole statement feel familiar, and therefore true. If you cannot remember the source of a statement, and have no way to relate it to other things you know, you have no option but to go with the sense of cognitive ease.

This is due, in part, to the fact that repetition causes familiarity and familiarity distorts our thinking.

People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media. Frequently mentioned topics populate the mind even as others slip away from awareness. In turn, what the media choose to report corresponds to their view of what is currently on the public’s mind. It is no accident that authoritarian regimes exert substantial pressure on independent media. Because public interest is most easily aroused by dramatic events and by celebrities, media feeding frenzies are common. For several weeks after Michael Jackson’s death, for example, it was virtually impossible to find a television channel reporting on another topic. In contrast, there is little coverage of critical but unexciting issues that provide less drama, such as declining educational standards or overinvestment of medical resources in the last year of life. (As I write this, I notice that my choice of “little-covered” examples was guided by availability. The topics I chose as examples are mentioned often; equally important issues that are less available did not come to my mind.)

Mental Model: Bias from Conjunction Fallacy

The bias from conjunction fallacy is a common reasoning error in which we believe that two events happening together are more probable than one of those events happening alone. Here’s why this happens and how we can overcome the fallacy.

***

Daniel Kahneman and Amos Tversky spent decades researching the patterns behind errors in human reasoning. The duo discovered a variety of logical fallacies that we tend to commit when facing information that appears vaguely familiar. These fallacies lead to bias — irrational behavior based on beliefs that are not always grounded in reality.

In his book Thinking, Fast and Slow, which summarizes his and Tversky’s life’s work, Kahneman introduces biases that stem from the conjunction fallacy – the false belief that a conjunction of two events is more probable than one of the events on its own.

What is Probability?

Probability can be a difficult concept. Most of us have an intuitive understanding of what probability is, but there is little consensus on what it actually means. It is just as vague and subjective a concept as democracy, beauty or freedom. However, this is not always troublesome – we can still easily discuss the notion with others. Kahneman reflects:

In all the years I spent asking questions about the probability of events, no one ever raised a hand to ask me, “Sir, what do you mean by probability?” as they would have done if I had asked them to assess a strange concept such as globability.

Everyone acted as if they knew how to answer my questions, although we all understood that it would be unfair to ask them for an explanation of what the word means.

While logicians and statisticians might disagree, to most of us probability is simply a tool that describes our degree of belief. For instance, we know that the sun will rise tomorrow and we consider it near impossible that there will be two suns up in the sky instead of one. In addition to the extremes, there are also events which lie somewhere in the middle of the probability spectrum, such as the degree of belief that it will rain tomorrow.

Despite its vagueness, probability has its virtues. Assigning probabilities helps us make the degree of belief actionable and also communicable to others. If we believe that the probability it will rain tomorrow is 90%, we are likely to carry an umbrella and suggest our family do so as well.

Probability, Base Rates, and Representativeness

Most of us are already familiar with representativeness and base rates. Consider the classic example of a jar containing x black marbles and y white marbles. It is a simple exercise to tell what the probability of drawing each color is if you know their base rates (proportions). Using base rates is the obvious approach for estimation when no other information is provided.

However, Kahneman showed that we have a tendency to ignore base rates in light of specific descriptions. He calls this phenomenon the Representativeness Bias. To illustrate representativeness bias, consider the example of seeing a person reading The New York Times on the New York subway. Which do you think would be a better bet about the reading stranger?

1) She has a PhD.
2) She does not have a college degree.

Representativeness would tell you to bet on the PhD, but this is not necessarily a good idea. You should seriously consider the second alternative, because many more non-graduates than PhDs ride the New York subway. While a larger proportion of PhDs may read The New York Times, the total number of New York Times readers with only high school degrees is likely to be much larger, even if the proportion of them who read the paper is slim.
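
To see why the base rate can dominate, here is a minimal back-of-the-envelope sketch in Python. Every number in it is hypothetical, chosen only to illustrate the arithmetic: even if PhDs read the Times at a much higher rate, the far larger pool of non-graduates can still supply more readers.

```python
# All figures below are made up for illustration; they are not survey data.
riders_phd = 50_000            # subway riders with a PhD
riders_no_degree = 2_000_000   # subway riders without a college degree

p_nyt_given_phd = 0.30         # assumed share of PhD riders who read the NYT
p_nyt_given_no_degree = 0.05   # assumed share of non-graduate riders who read the NYT

nyt_readers_phd = riders_phd * p_nyt_given_phd                    # 15,000
nyt_readers_no_degree = riders_no_degree * p_nyt_given_no_degree  # 100,000

# Among riders seen reading the Times, non-graduates outnumber PhDs ~6.7 to 1,
# so "no college degree" is the better bet despite the much lower reading rate.
print(nyt_readers_phd, nyt_readers_no_degree)
```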

In a series of similar experiments, Kahneman’s subjects failed to recognize the base rates in light of individual information. This is unsurprising. Kahneman explains:

On most occasions, people who act friendly are in fact friendly. A professional athlete who is very tall and thin is much more likely to play basketball than football. People with a PhD are more likely to subscribe to The New York Times than people who ended their education after high school. Young men are more likely than elderly women to drive aggressively.

While relying on representativeness might improve your overall accuracy, it will not always be the statistically optimal approach.

In his bestseller Moneyball, Michael Lewis tells the story of Billy Beane, general manager of the Oakland A’s, who recognized this fallacy and used it to his advantage.

When recruiting new players for the team, instead of relying on scouts, he relied heavily on statistics of past performance. This approach allowed him to build a team of great players who were passed over by other teams because they did not look the part. Needless to say, the team achieved excellent results at a low cost.

Conjunction Fallacy

While representativeness bias occurs when we fail to account for low base rates, the conjunction fallacy occurs when we assign a higher probability to a more specific event than to the more general event that contains it. This violates the laws of probability.

Consider the following study:

Participants were asked to rank four possible outcomes of the next Wimbledon tournament from most to least probable. Björn Borg was the dominant tennis player of the day when the study was conducted. These were the outcomes:

A. Borg will win the match.
B. Borg will lose the first set.
C. Borg will lose the first set but win the match.
D. Borg will win the first set but lose the match.

How would you order them?

Kahneman was surprised to see that most subjects ordered the chances by directly contradicting the laws of logic and probability. He explains:

The critical items are B and C. B is the more inclusive event and its probability must be higher than that of an event it includes. Contrary to logic, but not to representativeness or plausibility, 72% assigned B a lower probability than C.

If you thought about the problem carefully, you pictured the outcomes as nested sets: the outcomes in which Borg loses the first set include, as a subset, the outcomes in which he loses the first set and wins the match. Losing the first set will therefore always be at least as probable as losing the first set and winning the match.
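
A quick way to make the containment argument concrete is to simulate it. This is a minimal sketch in Python; the match probabilities are invented purely for illustration, and the point is only that the conjunction can never be observed more often than the single event that contains it.

```python
import random

random.seed(0)

def simulate_match():
    """Toy model of a match; the probabilities are invented for illustration."""
    lost_first_set = random.random() < 0.25   # assumed chance Borg drops the first set
    if lost_first_set:
        won_match = random.random() < 0.60    # assumed comeback probability
    else:
        won_match = random.random() < 0.95    # assumed win probability after taking set one
    return lost_first_set, won_match

trials = 100_000
results = [simulate_match() for _ in range(trials)]

p_b = sum(lost for lost, _ in results) / trials            # B: loses the first set
p_c = sum(lost and won for lost, won in results) / trials  # C: loses the first set AND wins

print(f"P(B) ~ {p_b:.3f}, P(C) ~ {p_c:.3f}")
# P(C) can never exceed P(B): every outcome counted in C is also counted in B.
```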

The Linda Problem

As discussed in our piece on the Narrative Fallacy, the best-known and most controversial of Kahneman and Tversky’s experiments involved a fictitious lady called Linda. The fictional character was created to illustrate the role heuristics play in our judgment and how it can be incompatible with logic. This is how they described Linda.

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Kahneman conducted a series of experiments in which he showed that representativeness tends to cloud our judgment and that we ignore base rates in light of stories. The Linda problem began with the task of ranking the following scenarios in order of likelihood.

Linda is a teacher in elementary school.
Linda works in a bookstore and takes yoga classes.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a member of the League of Women Voters.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.

Kahneman was startled to see that his subjects judged Linda being a bank teller and a feminist as more likely than her being just a bank teller. As explained earlier, doing so makes little sense. He went on to explore the phenomenon further:

In what we later described as “increasingly desperate” attempts to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:

Which alternative is more probable?

Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.

What is especially interesting about these results is that, even when aware of the biases in place, we do not discard them.

When I asked my large undergraduate class in some indignation, “Do you realize that you have violated an elementary logical rule?” someone in the back row shouted, “So what?” and a graduate student who made the same error explained herself by saying, “I thought you just asked for my opinion.”

The issue is not confined to students; it also affects professionals.

The naturalist Stephen Jay Gould described his own struggle with the Linda problem. He knew the correct answer, of course, and yet, he wrote, “a little homunculus in my head continues to jump up and down, shouting at me—‘but she can’t just be a bank teller; read the description.’”

Our brains simply seem to prefer consistency over logic.

The Role of Plausibility

Representativeness and the conjunction fallacy occur because we take a mental shortcut from the perceived plausibility of a scenario to its probability.

The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. Representativeness belongs to a cluster of closely related assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories.

Kahneman warns us about the effects of these biases on our perception of expert opinion and forecasting. He explains that we are more likely to believe scenarios that are illustrative rather than probable.

The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting. Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:

A massive flood somewhere in North America next year, in which more than 1,000 people drown

An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown

The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.

To appreciate the role of plausibility, he suggests we look at an example that comes without an accompanying description to tempt our intuition.

Which alternative is more probable?

Jane is a teacher.
Jane is a teacher and walks to work.

In this case, plausibility and coherence offer no quick answer to the probability question, so we readily conclude that the first statement is more probable. The rule is that in the absence of a competing intuition, logic prevails.

Taming Our Intuition

The first step toward thinking clearly is to question how you think. We should not simply believe whatever comes to mind; our beliefs must be constrained by logic. You don’t have to become an expert in probability to tame your intuition, but a grasp of a few simple concepts will help. There are two main rules worth repeating in light of representativeness bias:

1) The probabilities of all possible outcomes add up to 100%.

This means that if you believe that there’s a 90% chance it will rain tomorrow, there’s a 10% chance that it will not rain tomorrow.

However, since you believe that there is only a 90% chance that it will rain tomorrow, you cannot be 95% certain that it will rain tomorrow morning.

We typically make this type of error when we mean to say that, if it rains, there is a 95% probability it will happen in the morning. That is a different claim: under those premises, the probability of rain tomorrow morning is 0.9*0.95 = 85.5%.

This also means the probability that it rains tomorrow but not in the morning is 90.0% - 85.5% = 4.5%.
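
Here is a minimal sketch of that arithmetic in Python, using the same assumed numbers (a 90% chance of rain, and a 95% chance that any rain falls in the morning):

```python
p_rain = 0.90                # assumed probability of rain tomorrow
p_morning_given_rain = 0.95  # assumed probability the rain falls in the morning, given that it rains

p_rain_morning = p_rain * p_morning_given_rain   # 0.855 -> 85.5%
p_rain_not_morning = p_rain - p_rain_morning     # 0.045 -> 4.5%
p_no_rain = 1 - p_rain                           # 0.10

# The three mutually exclusive outcomes exhaust all possibilities, so they sum to 100%.
assert abs(p_rain_morning + p_rain_not_morning + p_no_rain - 1.0) < 1e-9
print(p_rain_morning, p_rain_not_morning, p_no_rain)
```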

2) The second rule is Bayes’ rule.

It allows us to correctly adjust our beliefs in light of the diagnosticity of the evidence. In its odds form, Bayes’ rule is:

posterior odds = prior odds * likelihood ratio, that is, P(H|E)/P(not-H|E) = [P(H)/P(not-H)] * [P(E|H)/P(E|not-H)]

In essence, the formula states that the posterior odds equal the prior odds times the likelihood ratio. Kahneman crystallizes two keys to disciplined Bayesian reasoning:

• Anchor your judgment of the probability of an outcome on a plausible base rate.
• Question the diagnosticity of your evidence.

Kahneman explains it with an example:

If you believe that 3% of graduate students are enrolled in computer science (the base rate), and you also believe that the description of Tom is 4 times more likely for a graduate student in computer science than in other fields, then Bayes’s rule says you must believe that the probability that Tom is a computer science student is now 11%.

“Four times as likely” here means that the description fits roughly 80% of computer science students but only 20% of students in other fields, a likelihood ratio of 4. We use these proportions to obtain the adjusted probability. (The calculation: 0.03*0.8 / (0.03*0.8 + (1-0.03)*(1-0.8)) = 11%.)
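
Here is a minimal sketch of that update in Python. It uses the numbers from the quote (a 3% base rate and the assumed 0.8 versus 0.2 split for how well the description fits), first in probability form and then in the odds form described above.

```python
def bayes_posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Kahneman's Tom example: 3% base rate, description 4x as likely for CS students (0.8 vs 0.2).
posterior = bayes_posterior(prior=0.03, p_evidence_given_h=0.8, p_evidence_given_not_h=0.2)
print(f"{posterior:.1%}")   # ~11.0%

# Equivalent odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = 0.03 / 0.97
posterior_odds = prior_odds * (0.8 / 0.2)
print(posterior_odds / (1 + posterior_odds))  # ~0.11, the same answer
```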

The easiest way to become better at making decisions is by making sure you question your assumptions and follow strong evidence. When evidence is anecdotal, adjust minimally, and trust the base rates. Odds are, you will be pleasantly surprised.

***

Want More? Check out our ever-growing collection of mental models and biases and get to work.

Avoiding Falling Victim to The Narrative Fallacy

The narrative fallacy leads us to see events as stories, with logical chains of cause and effect. Stories help us make sense of the world. However, if we’re not aware of the narrative fallacy it can lead us to believe we understand the world more than we really do.

***

A typical biography starts by describing the subject’s young life, trying to show how the ultimate painting began as just a sketch. In Walter Isaacson’s biography of Steve Jobs, for example, Isaacson illustrates that Jobs’s success was determined to a great degree by the childhood influence of his father. Paul Jobs was a careful, detail-oriented engineer and craftsman who would finish even the backs of fences and cabinets no one would see, and who, Jobs later found out, was not his biological father. The combination of his adoption and his craftsman father planted the seeds of Steve’s adult personality: his penchant for design detail, his need to prove himself, his messianic zeal. The recent movie starring Michael Fassbender especially plays up the latter cause: Jobs’s feeling of abandonment drove his success. Fassbender’s emotional portrayal earned him an Oscar nomination.

Nassim Taleb describes a memorable experience of a similar type in his book The Black Swan.

In Rome, Taleb is having an animated discussion with a professor who has read his first book, Fooled by Randomness, parts of which promote the idea that our minds create more cause-and-effect links than reality supports. The professor proceeds to congratulate Taleb on his great luck in being born in Lebanon:

… had you grown up in a Protestant society where people are told that efforts are linked to rewards and individual responsibility is emphasized, you would never have seen the world in such a manner. You were able to see luck and separate cause-and-effect because of your Eastern Orthodox Mediterranean heritage.

These types of stories strike a deep chord: They give us deep, affecting reasons on which to hang our understanding of reality. They help us make sense of our own lives. And, most importantly, they frequently cause us to believe we can predict the future. The problem is, most of them are a sham.

As flattered as he was by the professor’s praise, Nassim Taleb knew instantly that attributing his success to his background was a fallacy:

How do I know that this attribution to the background is bogus? I did my own empirical test by checking how many traders with my background who experienced the same war become skeptical empiricists, and found none out of twenty-six.

The professor who had just praised the idea that we overestimate our ability to understand cause-and-effect couldn’t stop himself from committing the very same error in conversation with Taleb himself.

Steve Jobs felt the same about the idea that his adoption had anything but a coincidental effect on his success:

There’s some notion that because I was abandoned, I worked very hard so I could do well and make my parents wish they had me back, or some such nonsense, but that’s ridiculous […] Knowing I was adopted may have made me feel more independent, but I have never felt abandoned. I’ve always felt special. My parents made me feel special.

The Narrative Fallacy

Such is the power of the Narrative Fallacy — the backward-looking mental tripwire that causes us to attribute a linear and discernable cause-and-effect chain to our knowledge of the past. As Nassim points out, there is a deep biological basis to the problem: we are inundated with so much sensory information that our brains have no other choice; we must put things in order so we can process the world around us. It’s implicit in how we understand the world. When the coffee cup falls, we need to know why it fell. (We knocked it over.) If someone gets the job instead of us, we need to know why they were deemed better. (They had more experience, they were more likeable.) Without a deep search for reasons, we would go around with blinders on, one thing simply happening after another. The world does not make sense without cause-and-effect.

This necessary mental function serves us well, in general. But we also must come to terms with the types of situations where our broadly useful “ordering” function causes us to make errors.

What You Don’t See

We fall for narrative regularly and in a wide variety of contexts. Sports are the most obvious example: Try to recall the last time you watched a profile of a famous athlete — the rise from obscurity, the rags-to-riches story, the dream-turned-reality. How did ESPN portray the success of the athlete?

If it was like most profiles, you’d most likely see some combination of the following: Parents or a coach that pushed him/her to strive for excellence; a natural gift for the sport, or at the very least, a strong inborn athleticism; an impactful life event and/or some form of adversity; and a hard work ethic.

The copy might read as follows,

It was clear from a young age that Steven was destined for greatness. He was taller than his whole class, had skills that none of his peers had, and a mother who never let him indulge laziness or sloth. Losing his father at a young age pushed him to work harder than ever, knowing he’d have to support his family. And once he met someone willing to tutor him, someone like Central High basketball coach Ed Johnson, the future was all but assured — Steven was going to be an NBA player come hell or high water.

If you were to read this story about how a tall, strong, fast, skilled young man with good coaching and a hard work ethic came to dominate the NBA, would you stop for even a second to question that those were the total causes of his success? If you’re like most people, the answer is no. We hear about similar stories over and over again.

The problem is, these stories are subject to a deep narrative fallacy. Here’s why: think again about the supposed causes of Steven’s success — work ethic, great parents, strong coaching, a formative life event. How many young men in the United States alone have the exact same background and yet failed to achieve their dreams of NBA stardom? The question answers itself: there are probably thousands of them.

That’s the problem with narrative: it lures us into believing that we can explain the past through cause-and-effect when we hear a story that supports our prior beliefs. To take the case of our fictitious basketball player, we have been conditioned to believe that hard work, pushy parents and coaches, and natural gifts lead to fame and success.

To be clear, many of these factors are contributive in important ways. There aren’t many short, slow guys with no work ethic and bad hand-eye coordination playing in the NBA. And passion goes a long way towards deserved success. But if it’s true that there are a thousand times as many similarly qualified men who do not end up playing professional basketball, then our diagnosis must be massively incomplete. There is no other sensible explanation.

What might we be missing? The list is endless, but some combination of luck, opportunism, and timing must have played into Steven’s and any other NBA player’s success. It is very difficult to understand cause-and-effect chains, and this is a simple example compared to a complex case like a war or an economic crisis, which would have had multiple causes working in a variety of directions and enough red herrings to make a proper analysis difficult.

When it comes to understanding success in basketball, the problem might seem benign. (Although if you’re an NBA executive, maybe not so benign.) But the rest of us deal with it in a practical way all the time. Taleb points out in his book that you could prove how narratives influence our decision making by giving a friend a great detective novel and then, before they got to the final reveal, asking them to give the odds of each individual suspect being the culprit. It’s almost certain that unless you allowed them to write down the odds as they went along, they’d add up to more than 100% — the better the novel, the higher the number. This is nonsense from a probability standpoint: the odds must add to 100%. But each narrative is so strong that we lose our bearings.

Of course, the more salient the given reasons, the more likely we are to attribute them as causes. By salient, we mean evidence which is mentally vivid and seemingly relevant to the situation — as in the case of the young basketball prodigy losing his father, a common, Disney-made plot point. We cling to these highly available images and they cause us to make avoidable errors in judgment, as with those of us afraid to board an airplane because of the thought of a harrowing airplane crash, one that is statistically unlikely to occur in many thousands of lifetimes.

Less is More

Daniel Kahneman makes this point wonderfully in his book Thinking, Fast and Slow, in a chapter titled “Linda: Less is More”.

The best-known and most controversial of our experiments involved a fictitious lady called Linda. Amos and I made up the Linda problem to provide conclusive evidence of the role of heuristics in judgment and of their incompatibility with logic. This is how we described Linda:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

[…]

In what we later described as an “increasingly desperate” attempt to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:

Which alternative is more probable?
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.

This again demonstrates the power of narrative. We are so willing to take a description and mentally categorize the person it describes — in this case, feminist bank teller is much more salient than simply bank teller — that we will violate probabilities and logic in order to uphold our first conclusion. The extra description makes our mental picture much more vivid, and we conclude that the vivid picture must be the correct one. This error very likely contributes to our tendency to stereotype based on limited sample sizes; a remnant of our primate days which probably served us well in a very different environment.

The point is made again by Kahneman as he explains how the corporate performance of companies included in famous business books like In Search of Excellence and Good to Great regressed to the mean after the books were written, a now well-known phenomenon:

You are probably tempted to think of causal explanations for these observations: perhaps the successful firms became complacent, the less successful firms tried harder. But this is the wrong way to think about what happened. The average gap must shrink, because the original gap was due in good part to luck, which contributed both to the success of the top firms and to the lagging performance of the rest. We have already encountered this statistical fact of life: regression to the mean.

Stories of how businesses rise and fall strike a chord with readers by offering what the human mind needs: a simple message of triumph and failure that identifies clear causes and ignores the determinative power of luck and the inevitability of regression. These stories induce and maintain an illusion of understanding, imparting lessons of enduring value to readers who are all too eager to believe them.

What’s the harm, you say? Aren’t we just making our lives a little more interesting with these stories? Very true. Stories serve many wonderful functions: teaching, motivating, inspiring. The problem though is that we too often believe our stories are predictive. We make them more real than they are. The writers of the business case-study books certainly believed that the explanations of success they put forth would be predictive of future success (the title Built to Last certainly implies as much), yet a good many of the companies soon became shells of their former selves – Citigroup, Hewlett Packard, Motorola, and Sony among them.

Is a good corporate culture helpful in generating success? Certainly! As it is with height and NBA success. But it’s far more difficult to determine cause and effect than simply recognizing the no-brainers. Just as many tall, talented, hard-working basketball players have failed to make it, many corporate cultures that met all of the Built to Last criteria have subsequently failed. The road to success was simply more complicated than the reductive narrative of the book would allow. Strategic choices, luck, circumstance, and the contributions of specific individual personalities may have all played a role. It’s hard to say. And unless we recognize the Narrative Fallacy for what it is, a simplified and often incorrect view of past causality, we carry an arrogance about our knowledge of the past and its usefulness in predicting the future.

Reason-Respecting Tendency

A close cousin of the narrative fallacy is what Charlie Munger refers to as Reason-Respecting Tendency in Poor Charlie’s Almanack. Here’s how Charlie describes the tendency:

There is in man, particularly one in an advanced culture, a natural love of accurate cognition and a joy in its exercise. This accounts for the widespread popularity of crossword puzzles, other puzzles, and bridge and chess columns, as well as all games requiring mental skill.

This tendency has an obvious implication. It makes man especially prone to learn well when a would-be teacher gives correct reasons for what is being taught, instead of simply laying out the desired belief ex-cathedra with no reasons given. Few practices, therefore, are wiser than not only thinking through reasons before giving orders but also communicating these reasons to the recipient of the order.

[…]

Unfortunately, Reason-Respecting Tendency is so strong that even a person’s giving of meaningless or incorrect reasons will increase compliance with his orders and requests. This has been demonstrated in psychology experiments wherein “compliance practitioners” successfully jump to the head of lines in front of copying machines by explaining their reason: “I have to make some copies.” This sort of unfortunate byproduct of Reason-Respecting Tendency is a conditioned reflex, based on a widespread appreciation of the importance of reasons. And, naturally, the practice of laying out various claptrap reasons is much used by commercial and cult “compliance practitioners” to help them get what they don’t deserve.

The deep structure of the mind is such that stories, reasons, and causes (the things that point an arrow in the direction of Why) are the ones that stick most deeply. Our need to look for cause-and-effect chains in anything we encounter is simply an extension of our inbuilt pattern recognition software, which can deepen and broaden as we learn new things. It has been shown, for example, that a master chess player cannot remember the pieces on a randomly assembled chessboard any better than a complete novice. But a master chess player can memorize the pieces on a board which represents an actual game in progress. If you take the pieces away, the master can replicate their positions with very high fidelity, whereas a novice cannot. The difference is that the pattern recognition software of the chess player has been developed to a high degree through deliberate practice — they have been in a thousand game situations just like it in the past. And while most of us may not be able to memorize chess games, we all have brains that perform the same function in other contexts.

Taleb hits on the same idea in The Black Swan:

Consider a collection of words glued together to constitute a 500-page book. If the words are purely random, picked up from the dictionary in a totally unpredictable way, you will not be able to summarize, transfer, or reduce the dimensions of that book without losing something significant from it. You need 100,000 words to carry the exact message of a random 100,000 words with you on your next trip to Siberia. Now consider the opposite: a book filled with the repetition of the following sentence: “The chairman of [insert here your company’s name] is a lucky fellow who happened to be in the right place at the right time and claims credit for the company’s success, without making a single allowance for luck,” running ten times per page for 500 pages. The entire book can be accurately compressed, as I have just done, into 34 words (out of 100,000); you could reproduce it with total fidelity out of such a kernel.

If we combine the ideas of Reason-Respecting Tendency and the mind’s deep craving for order, the interesting truth is that the best teaching, learning, and storytelling methods — those involving reasons and narrative, on which our brain can store information in a more useful and efficient way — are also the ones that cause us to make some of the worst mistakes. Our craving for order betrays us.

What Can We Do?

The first step, clearly, is to become aware of the problem. Once we understand our brain’s craving for narrative, we begin to see narratives every day, all the time, especially as we consume news. The key question we must ask ourselves is “Of the population of X subject to the same initial conditions, how many turned out similarly to Y? What hard-to-measure causes might have played a role?” This is what we did when we unraveled Steven’s narrative above. How many kids just like him, with the same stated conditions — tall, skilled, good parents, good coaches, etc. — achieved the same result? We don’t have to run an empirical test to understand that our narrative sense is providing some misleading cause-and-effect. Common sense tells us there are likely to be many more failures than successes in that pool, leading us to understand that there must have been other unrealized factors at play; luck being a crucial one. Some identified factors were necessary but not sufficient — height, talent, and coaching among them — and some factors might have been negligible or even negative. (Would it have helped or hurt Steven’s NBA chances if he had not lost his father? Impossible to say.)

Modern scientific thought is built on just this sort of edifice to solve the cause-and-effect problem. A thousand years ago, much of what we thought we knew was based on naïve backward-looking causality. (Steve put leeches on his skin and then survived the plague = leeches cure the plague.) Only when we learned to take the concept of [leeches = cure for plague] and call it a hypothesis did we begin to understand the physical world. Only by downgrading our naïve assumptions to the status of a hypothesis, which needs to be tested with rigorous experiment – give 100 plague victims leeches, let another 100 go leech-less, and tally the results – did we find a method to parse actual cause and effect.

And it is just as relevant to ask ourselves the inverse of the question posed above: “Of the population not subject to X, how many still ended up with the results of Y?” This is where we ask: which basketball players had intact families and easy childhoods, yet ended up in the NBA anyway? Which corporations lacked the traits described in Good to Great but achieved Greatness anyway? When we are willing to ask both types of questions and try our best to answer them, we can start to see which elements are simply part of the story rather than causal contributors.

A second way we can circumvent narrative is to simply avoid or reinterpret sources of information most subject to the bias. Turn the TV news off. Stop reading so many newspapers. Be skeptical of biographies, memoirs, and personal histories. Be careful of writers who are incredibly talented at painting a narrative, but claim to be writing facts. (I love Malcolm Gladwell’s books, but he would be an excellent example here.) We learned above that narrative is so powerful it can overcome basic logic, so we must be rigorous to some extent about what kinds of information we allow to pass through our filters. Strong narrative flow is exactly why we enjoy a fictional story, but when we enter the non-fiction world of understanding and decision making, the power of narrative is not always on our side. We want to use narrative to our advantage — to teach ourselves or others useful concepts — but be wary of where it can mislead.

One way to assess how narrative affects your decision-making is to start keeping a journal of your decisions or predictions in any arena that you consider important. It’s important to note the why behind your prediction or decision. If you’re going to invest in your cousin’s new social media startup — sure to succeed — explain to yourself exactly why you think it will work. Be detailed. Whether the venture succeeds or fails, you will now have a forward-looking document to refer to later, so that when you have the benefit of hindsight, you can evaluate your original assumptions instead of finding convenient reasons to justify the success or failure. The more you’re able to do this exercise, the more you’ll come to understand how complicated cause-and-effect factors are when we look ahead rather than behind.

Munger also gives us a few prescriptions in Poor Charlie’s Almanack after describing the Availability Bias, another kissing cousin of the Narrative Fallacy and the Reason-Respecting Tendency. We are wise to take heed of them as we approach our fight with narrative:

The main antidote to miscues from the Availability-Misweighing Tendency often involves procedures, including use of checklists, which are almost always helpful.

Another antidote is to behave somewhat like Darwin did when he emphasized disconfirming evidence. What should be done is to especially emphasize factors that don’t produce reams of easily available numbers, instead of drifting mostly or entirely into considering factors that do produce such numbers. Still another antidote is to find and hire some skeptical, articulate people with far-reaching minds to act as advocates for notions that are opposite to the incumbent notions. [Ed: Or simply ask a smart friend to do the same.]

One consequence of this tendency is that extra-vivid evidence, being so memorable and thus more available in cognition, should often consciously be underweighed, while less vivid evidence should be overweighed.

Munger’s prescriptions are almost certainly as applicable to solving the narrative problem as the closely related Availability problem, especially the issue we discussed earlier of vivid evidence.

Lastly, the final prescription comes from Taleb himself, the progenitor of the idea of our problem with narrative: when searching for real truth, favor experimentation over storytelling (data over anecdote), favor experience over history (which can be cherry-picked), and favor clinical knowledge over grand theories. Figure out what you know and what’s a guess, and become humble about your understanding of the past.

This recognition and respect of the power of our minds to invent and love stories can help us reduce our misunderstanding of the world.

Daniel Kahneman in Conversation with Michael Mauboussin on Intuition, Causality, Loss Aversion and More

Ever want to be a fly on the wall for a fascinating conversation? Well, here’s your chance. Santa Fe Institute Board of Trustees Chair Michael Mauboussin interviews Nobel Prize winner Daniel Kahneman.

The wide-ranging conversation covers disciplined intuition, causality, base rates, loss aversion, and so much more. You don’t want to miss this.

Here’s an excerpt from Kahneman I think you’ll enjoy. You can read the entire transcript here.

The Sources of Power is a very eloquent book on expert intuition with magnificent examples, and so he is really quite hostile to my point of view, basically.

We spent years working on that, on the question of when can intuitions be trusted? What’s the boundary between trustworthy and untrustworthy intuitions?

I would summarize the answer as saying there is one thing you should not do. People’s confidence in their intuition is not a good guide to their validity. Confidence is something else entirely, and maybe we can talk about confidence separately later, but confidence is not it.

What there is, if you want to know whether you can trust intuition, it really is like deciding on a painting, whether it’s genuine or not. You can look at the painting all you want, but asking about the provenance is usually the best guide about whether a painting is genuine or not.

Similarly for expertise and intuition, you have to ask not how happy the individual is with his or her own intuitions, but first of all, you have to ask about the domain. Is the domain one where there is enough regularity to support intuitions? That’s true in some medical domains, it certainly is true in chess, it is probably not true in stock picking, and so there are domains in which intuition can develop and others in which it cannot. Then you have to ask whether, if it’s a good domain, one in which there are regularities that can be picked up by the limited human learning machine. If there are regularities, did the individual have an opportunity to learn those regularities? That primarily has to do with the quality of the feedback.

Those are the questions that I think should be asked, so there is a wide domain where intuitions can be trusted, and they should be trusted, and in a way, we have no option but to trust them because most of the time, we have to rely on intuition because it takes too long to do anything else.

Then there is a wide domain where people have equal confidence but are not to be trusted, and that may be another essential point about expertise. People typically do not know the limits of their expertise, and that certainly is true in the domain of finances, of financial analysis and financial knowledge. There is no question that people who advise others about finances have expertise about finance that their advisees do not have. They know how to look at balance sheets, they understand what happens in conversations with analysts.

There is a great deal that they know, but they do not really know what is going to happen to a particular stock next year. They don’t know that, that is one of the typical things about expert intuition in that we know domains where we have it, there are domains where we don’t, but we feel the same confidence and we do not know the limits of our expertise, and that sometimes is quite dangerous.

***

Still curious? See our interview with Kahneman here.

How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas

Analogies draw a parallel between two different things; we use them to convey complex ideas, to communicate effectively, and to aid our reasoning. In this post, we explore how analogies work and how you can best utilize them.

***

John Pollack is a former presidential speechwriter. If anyone knows the power of words to move people to action, shape arguments, and persuade, it is he. One of the key tools he relies on is analogy.

In Shortcut: How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas, Pollack explores the powerful role of analogy in persuasion and creativity.

While they often operate unnoticed, analogies aren’t accidents, they’re arguments—arguments that, like icebergs, conceal most of their mass and power beneath the surface. In arguments, whoever has the best argument wins.

But analogies do more than just persuade others — they also play a role in innovation and decision making.

From the bloody Chicago slaughterhouse that inspired Henry Ford’s first moving assembly line, to the “domino theory” that led America into the Vietnam War, to the “bicycle for the mind” that Steve Jobs envisioned as a Macintosh computer, analogies have played a dynamic role in shaping the world around us.

Despite their importance, many people have only a vague sense of what an analogy actually is.

What is an Analogy?

In broad terms, an analogy is simply a comparison that asserts a parallel—explicit or implicit—between two distinct things, based on the perception of a shared property or relation. In everyday use, analogies actually appear in many forms. Some of these include metaphors, similes, political slogans, legal arguments, marketing taglines, mathematical formulas, biblical parables, logos, TV ads, euphemisms, proverbs, fables and sports clichés.

Because they are so disguised they play a bigger role than we consciously realize. Not only do analogies effectively make arguments, but they trigger emotions. And emotions make it hard to make rational decisions.

While we take analogies for granted, the ideas they convey are notably complex.

All day every day, in fact, we make or evaluate one analogy after the other, because some comparisons are the only practical way to sort a flood of incoming data, place it within the context of our experience, and make decisions accordingly.

Remember the powerful metaphor — that arguments are war. This shapes a wide variety of expressions like “your claims are indefensible,” “attacking the weak points,” and “You disagree, OK shoot.”

Or consider the Map and the Territory — Analogies give people the map but explain nothing of the territory.

Warren Buffett is one of the best at using analogies to communicate effectively. One of my favorites is his observation that “You never know who’s swimming naked until the tide goes out.” In other words, when times are good everyone looks amazing. When times suck, hidden weaknesses are exposed. The same could be said for analogies:

We never know what assumptions, deceptions, or brilliant insights they might be hiding until we look beneath the surface.

Most people underestimate the importance of a good analogy. As with many things in life, this lack of awareness comes at a cost. Ignorance is expensive.

Evidence suggests that people who tend to overlook or underestimate analogy’s influence often find themselves struggling to make their arguments or achieve their goals. The converse is also true. Those who construct the clearest, most resonant and apt analogies are usually the most successful in reaching the outcomes they seek.

The key to all of this is figuring out why analogies function so effectively and how they work. Once we know that, we should be able to craft better ones.

Don’t Think of an Elephant

Effective, persuasive analogies frame situations and arguments, often so subtly that we don’t even realize there is a frame, let alone one that might not work in our favor. Such conceptual frames, like picture frames, include some ideas, images, and emotions and exclude others. By setting a frame, a person or organization can, for better or worse, exert remarkable influence on the direction of their own thinking and that of others.

He who holds the pen frames the story. The first person to frame the story controls the narrative, and it takes a massive amount of energy to change the story’s direction. Sometimes even the way people come across information shapes it — stories that would be non-events if disclosed proactively become front-page stories because someone found out.

In Don’t Think of an Elephant, George Lakoff explores the issue of framing. The book famously begins with the instruction “Don’t think of an elephant.”

What’s the first thing we all do? Think of an elephant, of course. It’s almost impossible not to think of an elephant. When we stop consciously thinking about it, it floats away and we move on to other topics — like the new email that just arrived. But then again it will pop back into consciousness and bring some friends — associated ideas, other exotic animals, or even thoughts of the GOP.

“Every word, like elephant, evokes a frame, which can be an image or other kinds of knowledge,” Lakoff writes. This is why we want to control the frame rather than be controlled by it.

In Shortcut Pollack tells of Lakoff talking about an analogy that President George W. Bush made in the 2004 State of the Union address, in which he argued the Iraq war was necessary despite the international criticism. Before we go on, take Bush’s side here and think about how you would argue this point – how would you defend this?

In the speech, Bush proclaimed that “America will never seek a permission slip to defend the security of our people.”

As Lakoff notes, Bush could have said, “We won’t ask permission.” But he didn’t. Instead he intentionally used the analogy of permission slip and in so doing framed the issue in terms that would “trigger strong, more negative emotional associations that endured in people’s memories of childhood rules and restrictions.”

Commenting on this, Pollack writes:

Through structure mapping, we correlate the role of the United States to that of a young student who must appeal to their teacher for permission to do anything outside the classroom, even going down the hall to use the toilet.

But is seeking diplomatic consensus to avoid or end a war actually analogous to a child asking their teacher for permission to use the toilet? Not at all. Yet once this analogy has been stated (Farnam Street editorial: and tweeted), the debate has been framed. Those who would reject a unilateral, my-way-or-the-highway approach to foreign policy suddenly find themselves battling not just political opposition but people’s deeply ingrained resentment of childhood’s seemingly petty regulations and restrictions. On an even subtler level, the idea of not asking for a permission slip also frames the issue in terms of sidestepping bureaucratic paperwork, and who likes bureaucracy or paperwork.

Deconstructing Analogies

By deconstructing analogies, we can see how they function so effectively. Pollack argues they meet five essential criteria.

  1. Use the highly familiar to explain something less familiar.
  2. Highlight similarities and obscure differences.
  3. Identify useful abstractions.
  4. Tell a coherent story.
  5. Resonate emotionally.

Let’s explore how these work in greater detail, using the example of master thief Bruce Reynolds, who described the Great Train Robbery as his Sistine Chapel.

The Great Train Robbery

In the dark early hours of August 8, 1963, an intrepid gang of robbers hot-wired a six-volt battery to a railroad signal not far from the town of Leighton Buzzard, some forty miles north of London. Shortly, the engineer of an approaching mail train, spotting the red light ahead, slowed his train to a halt and sent one of his crew down the track, on foot, to investigate. Within minutes, the gang overpowered the train’s crew and, in less than twenty minutes, made off with the equivalent of more than $60 million in cash.

Years later, Bruce Reynolds, the mastermind of what quickly became known as the Great Train Robbery, described the spectacular heist as “my Sistine Chapel.”

Use the familiar to explain something less familiar

Reynolds exploits the public’s basic familiarity with the famous chapel in the Vatican City, which after Leonardo da Vinci’s Mona Lisa is perhaps the best-known work of Renaissance art in the world. Millions of people, even those who aren’t art connoisseurs, would likely share the cultural opinion that the paintings in the chapel represent “great art” (as compared to a smaller subset of people who might feel the same way about Jackson Pollock’s drip paintings, or Marcel Duchamp’s upturned urinal).

Highlight similarities and obscure differences

Reynolds’s analogy highlights, through implication, similarities between the heist and the chapel—both took meticulous planning and masterful execution. After all, stopping a train and stealing the equivalent of $60m—and doing it without guns—does require a certain artistry. At the same time, the analogy obscures important differences. By invoking the image of a holy sanctuary, Reynolds triggers a host of associations in the audience’s mind—God, faith, morality, and forgiveness, among others—that camouflage the fact that he’s describing an action few would consider morally commendable, even if the artistry involved in robbing that train was admirable.

Identify useful abstractions

The analogy offers a subtle but useful abstraction: Genius is genius and art is art, no matter what the medium. The logic? If we believe that genius and artistry can transcend genre, we must concede that Reynolds, whose artful, ingenious theft netted millions, is an artist.

Tell a coherent story

The analogy offers a coherent narrative. Calling the Great Train Robbery his Sistine Chapel offers the audience a simple story that, at least on the surface makes sense: Just as Michelangelo was called by God, the pope, and history to create his greatest work, so too was Bruce Reynolds called by destiny to pull off the greatest robbery in history. And if the Sistine Chapel endures as an expression of genius, so too must the Great Train Robbery. Yes, robbing the train was wrong. But the public perceived it as largely a victimless crime, committed by renegades who were nothing if not audacious. And who but the most audacious in history ever create great art? Ergo, according to this narrative, Reynolds is an audacious genius, master of his chosen endeavor, and an artist to be admired in public.

There is an important point here. The narrative need not be accurate. It is the feelings and ideas the analogy evokes that make it powerful. Within the structure of the analogy, the argument rings true. The framing is enough to establish it succinctly and subtly. That’s what makes it so powerful.

Resonate emotionally

The analogy resonates emotionally. To many people, mere mention of the Sistine Chapel brings an image to mind, perhaps the finger of Adam reaching out toward the finger of God, or perhaps just that of a lesser chapel with which they are personally familiar. Generally speaking, chapels are considered beautiful, and beauty is an idea that tends to evoke positive emotions. Such positive emotions, in turn, reinforce the argument that Reynolds is making—that there’s little difference between his work and that of a great artist.

Jumping to Conclusions

Daniel Kahneman explains the two thinking structures that govern the way we think: System 1 and System 2. In his book Thinking, Fast and Slow, he writes, “Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake are acceptable, and if the jump saves much time and effort.”

“A good analogy serves as an intellectual springboard that helps us jump to conclusions,” Pollack writes. He continues:

And once we’re in midair, flying through assumptions that reinforce our preconceptions and preferences, we’re well on our way to a phenomenon known as confirmation bias. When we encounter a statement and seek to understand it, we evaluate it by first assuming it is true and exploring the implications that result. We don’t even consider dismissing the statement as untrue unless enough of its implications don’t add up. And consider is the operative word. Studies suggest that most people seek out only information that confirms the beliefs they currently hold and often dismiss any contradictory evidence they encounter.

The ongoing battle between fact and fiction commonly takes place in our subconscious systems. In The Political Brain: The Role of Emotion in Deciding the Fate of the Nation, Drew Westen, an Emory University psychologist, writes: “Our brains have a remarkable capacity to find their way toward convenient truths—even if they are not all true.”

This also helps explain why getting promoted has almost nothing to do with your performance.

Remember Apollo Robbins? He’s a professional pickpocket. While he has unique skills, he succeeds largely through the choreography of people’s attention. “Attention,” he says “is like water. It flows. It’s liquid. You create channels to divert it, and you hope that it flows the right way.”

“Pickpocketing and analogies are in a sense the same,” Pollack concludes, “as the misleading analogy picks a listener’s mental pocket.”

And this is true whether someone else diverts our attention through a resonant but misleading analogy—“Judges are like umpires”—or we simply choose the wrong analogy all by ourselves.

Reasoning by Analogy

We rarely stop to see how much of our reasoning is done by analogy. In a 2005 article published in the Harvard Business Review, Giovanni Gavetti and Jan Rivkin wrote: “Leaders tend to be so immersed in the specifics of strategy that they rarely stop to think how much of their reasoning is done by analogy.” As a result, they miss things. They make connections that don’t exist. They don’t check assumptions. They miss useful insights. By contrast, “Managers who pay attention to their own analogical thinking will make better strategic decisions and fewer mistakes.”

***

Shortcut goes on to explore when to use analogies and how to craft them to maximize persuasion.