
What Sharks Can Teach Us About Survivorship Bias

Survivorship bias refers to the idea that we get a false representation of reality when we base our understanding only on the experiences of those who live to tell their story. Taking a look at how we misrepresent shark attacks highlights how survivorship bias distorts reality in other situations.

When asked what the deadliest shark is to humans, most people will say the great white. The lasting influence of the movie Jaws, reinforced by dozens of pop culture references and news reports, keeps that species top of mind when one considers the world’s most fearsome predators. While it is true that great white sharks do attack humans, albeit rarely, they also leave a lot of survivors. And they’re not after humans in particular; they usually just mistake us for seals, one of their key food sources.

We must be careful to not let a volume of survivors in one area blind us to the stories of a small number of survivors elsewhere. Most importantly, we need to ask ourselves what stories are not being told because no one is around to tell them. The experiences of the dead are necessary if we want an accurate understanding of the world.


Before we drill down into some interesting statistics, it’s important to understand that the great white is one of a group of large predatory sharks with many common characteristics. Great whites are often grouped with tiger and bull sharks: all three have similar habitats and instincts, and all are large, averaging over ten feet long.

Tiger and bull sharks rarely attack humans, and to someone being bitten by one of these huge creatures, there isn’t all that much difference between them. The Florida Museum’s International Shark Attack File explains that “positive identification of attacking sharks is very difficult since victims rarely make adequate observations of the attacker during the ‘heat’ of the interaction. Tooth remains are seldom found in wounds and diagnostic characters for many requiem sharks [a group that includes tiger and bull sharks] are difficult to discern even by trained professionals.”

The fatality rate in known attacks is 21.5% for the bull shark, 16% for the great white, and 26% for the tiger shark. But in sheer volume, attacks attributed to great whites outnumber the other two species three to one. So there are three times as many survivors to tell the story of their great white attack.
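A quick back-of-the-envelope calculation shows how lopsided the pool of storytellers becomes. The fatality rates below come from the statistics above; the attack counts are hypothetical, chosen only to respect the three-to-one ratio:

```python
# Survivor-story arithmetic. Fatality rates are from the text;
# the attack counts are illustrative (only the 3:1 ratio is given).
attacks = {"great white": 300, "bull": 100, "tiger": 100}
fatality_rate = {"great white": 0.16, "bull": 0.215, "tiger": 0.26}

survivors = {s: attacks[s] * (1 - fatality_rate[s]) for s in attacks}
for species, n in survivors.items():
    print(f"{species}: ~{n:.0f} survivors left to tell the tale")

# Great white survivors outnumber bull and tiger survivors combined,
# even though the great white's fatality rate is the lowest of the three.
assert survivors["great white"] > survivors["bull"] + survivors["tiger"]
```

The volume of attacks, not the lethality, is what floods the culture with great white stories.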


When it comes to our picture of which shark is most dangerous, there are other blind spots. Not all sharks behave like those three, swimming close to shore and encountering enough prey to develop a preference for fat seals over bony humans. Pelagic sharks live in the watery desert of the open ocean and have to eat pretty much whatever they can find. The oceanic white tip is a pelagic shark that is probably far more dangerous to humans; we just don’t come into contact with them as often.

There are only fifteen documented attacks by an oceanic white tip, with three of those being fatal. But since most attacks occur in the open ocean in more isolated situations (e.g., a couple of people on a boat versus five hundred people swimming at a beach), we really have no idea how dangerous oceanic white tips are. There could be hundreds of undocumented attacks that left behind no survivors to tell the tale.

One famous survivor story gives us a glimpse of how dangerous oceanic white tips might be. In 1945, a Japanese submarine torpedoed and sank the USS Indianapolis. For a multitude of reasons, partly because the Indianapolis was on a top secret mission and partly because of tragic incompetence, a rescue ship was not sent for four days. Those who survived the ship’s sinking then had to survive in the open ocean with little gear until rescue arrived. The water was full of sharks.

In Indianapolis: The True Story of the Worst Sea Disaster in US Naval History and the Fifty-Year Fight to Exonerate an Innocent Man, Lynn Vincent and Sara Vladic quote Boatswain’s Mate Second Class Eugene Morgan as he described part of his experience: “All the time, the sharks never let up. We had a cargo net that had Styrofoam things attached to keep it afloat. There were about fifteen sailors on this, and suddenly, ten sharks hit it and there was nothing left. This went on and on.” These sharks are believed to have been oceanic white tips. It’s unknown how many men died from shark attacks; many also perished from exposure, dehydration, injury, and exhaustion. Of the 1,195 crewmen originally aboard the ship, only 316 survived. It remains the greatest loss of life from a single ship in US naval history.

Because humans are rarely in the open ocean in large numbers, not only are attacks by this shark less common, there are also fewer survivor stories. The story of the USS Indianapolis is a rare, brutal case that provides a unique picture.


Our estimation of the shark that could do us the most harm is often formed by survivorship bias. We develop an inaccurate picture based on the stories of those who live to tell the tale of their shark attack. We don’t ask ourselves who didn’t survive, and so we miss out on the information we need to build an accurate picture of reality.

The point is not to shift our fear to oceanic white tips, which are, in fact, critically endangered. Our fear of sharks seems to make us indifferent to what happens to them, even though they are an essential part of the ocean ecosystem. We are also much more of a danger to sharks than they are to us. We kill them by the millions every year. Neither should we shift our fear to other, more lethal animals, which will likely result in the same indifference to their role in the ecosystem.

The point is rather to consider how well you make decisions when you only factor in the stories of the survivors. For instance, if you were to try to reduce instances of shark attacks or limit their severity, you would not likely get the results you are after if you only paid attention to the survivor stories. You need to ask who didn’t make it and try to figure out their stories as well. If you implement measures aimed only at great whites near beaches, your measures might not be effective against other predatory sharks. And if you conclude that swimmers are better off in the open ocean because sharks seem to attack only near beaches, you’d be completely wrong.


Survivorship bias crops up all over our lives and impedes us from accurately assessing danger. Replace “dangerous sharks” with “dangerous cities” or “dangerous vacation spots” and you can easily see how your picture of a certain location might be skewed based on the experiences of survivors. We can’t be afraid of a tale if no one lives to tell it. More survivors can make something seem more dangerous rather than less dangerous because the volume of stories makes them more memorable.

If fewer people survived shark attacks, we wouldn’t have survivor stories influencing our perception of how dangerous sharks are. In all likelihood, we would attribute some of the ocean deaths to other causes, like drowning, because it wouldn’t occur to us that sharks could be responsible.

Understanding survivorship bias prompts us to look for the stories of those who weren’t successful. A lack of visible survivors with memorable stories might mean we view other fields as far safer and easier than they are.

For example, a field of business where failures are public and talked about might seem riskier than one where people who fail quietly move on to other things. The failure of tech start-ups sometimes feels like daily news. We don’t often, however, hear about the real estate agent who has trouble making sales or who keeps getting outbid on offers. Nor do we hear much about architects who design terrible houses or construction companies that don’t complete projects.

Survivorship bias prompts us to associate more risk with industries that exhibit more public failures. But the failures from industries or businesses that aren’t shared are equally important. If we focus only on the survivor stories, we might think that being a real estate agent or an architect is safer than starting a technology company. It might be, but we can’t base our understanding of which career option is the best bet only on the widely shared stories of failure.

If we don’t factor survivorship bias into our thinking, we end up with a classic map-is-not-the-territory problem: the survivor stories become a poor navigational tool for the terrain.

Most of us know that we shouldn’t decide to become a writer based on the results achieved by J.K. Rowling and John Grisham. But even if we go out and talk to other writers, learn about their careers, or attend writing seminars given by published authors, we are still only talking to the survivors.

Yes, it’s super inspiring to know Stephen King got so many rejections early in his career that the stack of them was enough to pull a nail out of the wall. But what about the writers who got just as many rejections and never published anything? Not only can we learn a lot from them about the publishing industry, we need to consider their experiences if we want to anticipate and understand the challenges involved in being a writer.


Not recognizing survivorship bias can lead to faulty decision making. We don’t see the big picture and end up optimizing for a small slice of reality. We can’t completely overcome survivorship bias. The best we can do is acknowledge it, and when the stakes are high or the result important, stop and look for the stories of those who were unsuccessful. They have just as much, if not more, to teach us.

The next time you’re assessing risk, ask yourself: am I paying too much attention to the great white sharks and not enough to the oceanic white tips?

Avoiding Falling Victim to the Narrative Fallacy

The narrative fallacy leads us to see events as stories, with logical chains of cause and effect. Stories help us make sense of the world. However, if we’re not aware of the narrative fallacy it can lead us to believe we understand the world more than we really do.


A typical biography starts by describing the subject’s young life, trying to show how the ultimate painting began as just a sketch. In Walter Isaacson’s biography of Steve Jobs, for example, Isaacson argues that Jobs’s success was shaped to a great degree by the childhood influence of his father. Paul Jobs was a careful, detail-oriented engineer and craftsman who would finish the backs of fences and cabinets even though no one would see them, and who, Jobs later discovered, was not his biological father. The combination of his adoption and his craftsman father planted the seeds of Steve’s adult personality: his penchant for design detail, his need to prove himself, his messianic zeal. The recent movie starring Michael Fassbender especially plays up the adoption: Jobs’s feeling of abandonment drove his success. Fassbender’s portrayal earned him an Oscar nomination.

Nassim Taleb describes a memorable experience of a similar type in his book The Black Swan.

In Rome, Taleb is having an animated discussion with a professor who has read his first book, Fooled by Randomness, parts of which promote the idea that our minds create more cause-and-effect links than reality supports. The professor proceeds to congratulate Taleb on his great luck of being born in Lebanon:

… had you grown up in a Protestant society where people are told that efforts are linked to rewards and individual responsibility is emphasized, you would never have seen the world in such a manner. You were able to see luck and separate cause-and-effect because of your Eastern Orthodox Mediterranean heritage.

These types of stories strike a deep chord: They give us deep, affecting reasons on which to hang our understanding of reality. They help us make sense of our own lives. And, most importantly, they frequently cause us to believe we can predict the future. The problem is, most of them are a sham.

As flattered as he was by the professor’s praise, Nassim Taleb knew instantly that attributing his success to his background was a fallacy:

How do I know that this attribution to the background is bogus? I did my own empirical test by checking how many traders with my background who experienced the same war become skeptical empiricists, and found none out of twenty-six.

The professor who had just praised the idea that we overestimate our ability to understand cause-and-effect couldn’t stop himself from committing the very same error in conversation with Taleb himself.

Steve Jobs felt the same about the idea that his adoption had anything but a coincidental effect on his success:

There’s some notion that because I was abandoned, I worked very hard so I could do well and make my parents wish they had me back, or some such nonsense, but that’s ridiculous […] Knowing I was adopted may have made me feel more independent, but I have never felt abandoned. I’ve always felt special. My parents made me feel special.

The Narrative Fallacy

Such is the power of the Narrative Fallacy — the backward-looking mental tripwire that causes us to attribute a linear and discernible cause-and-effect chain to our knowledge of the past. As Nassim points out, there is a deep biological basis to the problem: we are inundated with so much sensory information that our brains have no other choice; we must put things in order so we can process the world around us. It’s implicit in how we understand the world. When the coffee cup falls, we need to know why it fell. (We knocked it over.) If someone gets the job instead of us, we need to know why they were deemed better. (They had more experience; they were more likeable.) Without a deep search for reasons, we would go around with blinders on, one thing simply happening after another. The world does not make sense without cause and effect.

This necessary mental function serves us well, in general. But we also must come to terms with the types of situations where our broadly useful “ordering” function causes us to make errors.

What You Don’t See

We fall for narrative regularly and in a wide variety of contexts. Sports are the most obvious example: Try to recall the last time you watched a profile of a famous athlete — the rise from obscurity, the rags-to-riches story, the dream-turned-reality. How did ESPN portray the success of the athlete?

If it was like most profiles, you’d most likely see some combination of the following: Parents or a coach that pushed him/her to strive for excellence; a natural gift for the sport, or at the very least, a strong inborn athleticism; an impactful life event and/or some form of adversity; and a hard work ethic.

The copy might read as follows,

It was clear from a young age that Steven was destined for greatness. He was taller than his whole class, had skills that none of his peers had, and a mother who never let him indulge laziness or sloth. Losing his father at a young age pushed him to work harder than ever, knowing he’d have to support his family. And once he met someone willing to tutor him, someone like Central High basketball coach Ed Johnson, the future was all but assured — Steven was going to be an NBA player come hell or high water.

If you were to read this story about how a tall, strong, fast, skilled young man with good coaching and a hard work ethic came to dominate the NBA, would you stop for even a second to question that those were the total causes of his success? If you’re like most people, the answer is no. We hear about similar stories over and over again.

The problem is, these stories are subject to a deep narrative fallacy. Here’s why: think again about the supposed causes of Steven’s success — work ethic, great parents, strong coaching, a formative life event. How many young men in the United States alone have the exact same background and yet failed to achieve their dreams of NBA stardom? The question answers itself: there are probably thousands of them.

That’s the problem with narrative: it lures us into believing that we can explain the past through cause-and-effect when we hear a story that supports our prior beliefs. To take the case of our fictitious basketball player, we have been conditioned to believe that hard work, pushy parents and coaches, and natural gifts lead to fame and success.

To be clear, many of these factors do contribute in important ways. There aren’t many short, slow guys with no work ethic and bad hand-eye coordination playing in the NBA. And passion goes a long way towards deserved success. But if it’s true that there are a thousand times as many similarly qualified men who do not end up playing professional basketball, then our diagnosis must be massively incomplete. There is no other sensible explanation.

What might we be missing? The list is endless, but some combination of luck, opportunism, and timing must have played into Steven’s and any other NBA player’s success. It is very difficult to understand cause-and-effect chains, and this is a simple example compared to a complex case like a war or an economic crisis, which would have had multiple causes working in a variety of directions and enough red herrings to make a proper analysis difficult.

When it comes to understanding success in basketball, the problem might seem benign. (Although if you’re an NBA executive, maybe not so benign.) But the rest of us deal with it in a practical way all the time. Taleb points out in his book that you could demonstrate how narratives influence our decision making by giving a friend a great detective novel and then, before they got to the final reveal, asking them to give the odds of each individual suspect being the culprit. It’s almost certain that unless you allowed them to write down the odds as they went along, they’d add up to more than 100% — the better the novel, the higher the number. This would be nonsense from a probability standpoint, since the odds must sum to 100%, but each narrative is so strong that we lose our bearings.
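Taleb’s thought experiment can be sketched with numbers. The per-suspect odds below are entirely made up, but they illustrate how individually compelling narratives can sum past certainty, and what a coherent assignment would look like:

```python
# Hypothetical subjective odds a reader might assign each suspect
# mid-novel; each narrative feels strong in isolation.
raw_odds = {"butler": 0.6, "heiress": 0.5, "gardener": 0.4, "detective": 0.3}

total = sum(raw_odds.values())
print(f"raw total: {total:.0%}")  # sums past 100%: impossible as probabilities

# A coherent probability assignment must sum to 100%;
# normalizing by the total restores that.
coherent = {suspect: p / total for suspect, p in raw_odds.items()}
assert abs(sum(coherent.values()) - 1.0) < 1e-9
```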

Of course, the more salient the given reasons, the more likely we are to attribute them as causes. By salient, we mean evidence which is mentally vivid and seemingly relevant to the situation — as in the case of the young basketball prodigy losing his father, a common, Disney-made plot point. We cling to these highly available images and they cause us to make avoidable errors in judgment, as with those of us afraid to board an airplane because of the thought of a harrowing airplane crash, one that is statistically unlikely to occur in many thousands of lifetimes.

Less is More

Daniel Kahneman makes this point wonderfully in his book Thinking, Fast and Slow, in a chapter titled “Linda: Less is More”.

The best-known and most controversial of our experiments involved a fictitious lady called Linda. Amos and I made up the Linda problem to provide conclusive evidence of the role of heuristics in judgment and of their incompatibility with logic. This is how we described Linda:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.


In what we later described as an “increasingly desperate” attempt to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:

Which alternative is more probable?
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.

This again demonstrates the power of narrative. We are so willing to take a description and mentally categorize the person it describes — in this case, feminist bank teller is much more salient than simply bank teller — that we will violate probabilities and logic in order to uphold our first conclusion. The extra description makes our mental picture much more vivid, and we conclude that the vivid picture must be the correct one. This error very likely contributes to our tendency to stereotype based on limited sample sizes; a remnant of our primate days which probably served us well in a very different environment.
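The rule being violated here is the conjunction rule of probability: a conjunction can never be more probable than either of its parts. A tiny enumeration with invented counts makes it concrete:

```python
# Invented population counts. Every feminist bank teller is, by
# definition, also counted among bank tellers, so the conjunction
# can never be the more probable option.
population = 100_000
bank_tellers = 1_000
feminist_bank_tellers = 50  # a strict subset of bank_tellers

p_teller = bank_tellers / population
p_feminist_teller = feminist_bank_tellers / population

# Conjunction rule: P(A and B) <= P(A), always.
assert p_feminist_teller <= p_teller
print(f"P(teller) = {p_teller:.3%}, P(feminist teller) = {p_feminist_teller:.3%}")
```

Whatever counts you plug in, the subset relationship guarantees the inequality; the vivid description changes our intuition, not the arithmetic.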

The point is made again by Kahneman as he explains how the corporate performance of companies included in famous business books like In Search of Excellence and Good to Great regressed to the mean after the books were written, a now well-known phenomenon:

You are probably tempted to think of causal explanations for these observations: perhaps the successful firms became complacent, the less successful firms tried harder. But this is the wrong way to think about what happened. The average gap must shrink, because the original gap was due in good part to luck, which contributed both to the success of the top firms and to the lagging performance of the rest. We have already encountered this statistical fact of life: regression to the mean.

Stories of how businesses rise and fall strike a chord with readers by offering what the human mind needs: a simple message of triumph and failure that identifies clear causes and ignores the determinative power of luck and the inevitability of regression. These stories induce and maintain an illusion of understanding, imparting lessons of enduring value to readers who are all too eager to believe them.
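Kahneman’s regression-to-the-mean point is easy to reproduce with a toy simulation: give every firm a fixed “skill” plus fresh “luck” each period, select the period-one winners, and watch their edge shrink in period two. All numbers here are invented for illustration:

```python
import random

random.seed(42)

# Each firm: fixed skill, plus luck redrawn every period (both invented).
n_firms = 1000
skill = [random.gauss(0, 1) for _ in range(n_firms)]

def performance():
    # Observed performance = skill + fresh luck.
    return [s + random.gauss(0, 1) for s in skill]

period1, period2 = performance(), performance()

# The period-1 top decile was selected partly on lucky draws...
top = sorted(range(n_firms), key=lambda i: period1[i], reverse=True)[:100]

def avg(xs):
    return sum(xs) / len(xs)

gap1 = avg([period1[i] for i in top]) - avg(period1)
gap2 = avg([period2[i] for i in top]) - avg(period2)

# ...so when luck is redrawn, their edge shrinks: regression to the mean.
print(f"top-decile edge: period 1 = {gap1:.2f}, period 2 = {gap2:.2f}")
assert gap2 < gap1
```

No complacency or extra effort is modeled here; the gap shrinks from the statistics of selection alone, which is exactly Kahneman’s point.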

What’s the harm, you say? Aren’t we just making our lives a little more interesting with these stories? Very true. Stories serve many wonderful functions: teaching, motivating, inspiring. The problem though is that we too often believe our stories are predictive. We make them more real than they are. The writers of the business case-study books certainly believed that the explanations of success they put forth would be predictive of future success (the title Built to Last certainly implies as much), yet a good many of the companies soon became shells of their former selves – Citigroup, Hewlett Packard, Motorola, and Sony among them.

Is a good corporate culture helpful in generating success? Certainly! As it is with height and NBA success. But it’s far more difficult to determine cause and effect than simply recognizing the no-brainers. Just as many tall, talented, hard-working basketball players have failed to make it, many corporate cultures which met all of the Built to Last criteria have subsequently failed. The road to success was simply more complicated than the reductive narrative of the book would allow. Strategic choices, luck, circumstance, and the contributions of specific individual personalities may have all played a role. It’s hard to say. And unless we recognize the Narrative Fallacy for what it is, a simplified and often incorrect view of past causality, we carry an arrogance about our knowledge of the past and its usefulness in predicting the future.

Reason-Respecting Tendency

A close cousin of the narrative fallacy is what Charlie Munger refers to as Reason-Respecting Tendency in Poor Charlie’s Almanack. Here’s how Charlie describes the tendency:

There is in man, particularly one in an advanced culture, a natural love of accurate cognition and a joy in its exercise. This accounts for the widespread popularity of crossword puzzles, other puzzles, and bridge and chess columns, as well as all games requiring mental skill.

This tendency has an obvious implication. It makes man especially prone to learn well when a would-be teacher gives correct reasons for what is being taught, instead of simply laying out the desired belief ex-cathedra with no reasons given. Few practices, therefore, are wiser than not only thinking through reasons before giving orders but also communicating these reasons to the recipient of the order.


Unfortunately, Reason-Respecting Tendency is so strong that even a person’s giving of meaningless or incorrect reasons will increase compliance with his orders and requests. This has been demonstrated in psychology experiments wherein “compliance practitioners” successfully jump to the head of lines in front of copying machines by explaining their reason: “I have to make some copies.” This sort of unfortunate byproduct of Reason-Respecting Tendency is a conditioned reflex, based on a widespread appreciation of the importance of reasons. And, naturally, the practice of laying out various claptrap reasons is much used by commercial and cult “compliance practitioners” to help them get what they don’t deserve.

The deep structure of the mind is such that stories, reasons, and causes (things that point an arrow in the direction of Why) are the ones that stick most deeply. Our need to look for cause-and-effect chains in anything we encounter is simply an extension of our inbuilt pattern-recognition software, which can deepen and broaden as we learn new things. It has been shown, for example, that a master chess player cannot remember the pieces on a randomly assembled chessboard any better than a complete novice. But a master chess player can memorize the pieces on a board that represents an actual game in progress: if you take the pieces away, the master can replicate their positions with very high fidelity, whereas a novice cannot. The difference is that the chess player’s pattern-recognition software has been developed to a high degree through deliberate practice — they have been in a thousand game situations just like it in the past. And while most of us may not be able to memorize chess games, we all have brains that perform the same function in other contexts.

Taleb hits on the same idea in The Black Swan:

Consider a collection of words glued together to constitute a 500-page book. If the words are purely random, picked up from the dictionary in a totally unpredictable way, you will not be able to summarize, transfer, or reduce the dimensions of that book without losing something significant from it. You need 100,000 words to carry the exact message of a random 100,000 words with you on your next trip to Siberia. Now consider the opposite: a book filled with the repetition of the following sentence: “The chairman of [insert here your company’s name] is a lucky fellow who happened to be in the right place at the right time and claims credit for the company’s success, without making a single allowance for luck,” running ten times per page for 500 pages. The entire book can be accurately compressed, as I have just done, into 34 words (out of 100,000); you could reproduce it with total fidelity out of such a kernel.

If we combine the ideas of Reason-Respecting Tendency and the mind’s deep craving for order, the interesting truth is that the best teaching, learning, and storytelling methods — those involving reasons and narrative, on which our brain can store information in a more useful and efficient way — are also the ones that cause us to make some of the worst mistakes. Our craving for order betrays us.

What Can We Do?

The first step, clearly, is to become aware of the problem. Once we understand our brain’s craving for narrative, we begin to see narratives every day, all the time, especially as we consume news. The key question we must ask ourselves is “Of the population of X subject to the same initial conditions, how many turned out similarly to Y? What hard-to-measure causes might have played a role?” This is what we did when we unraveled Steven’s narrative above. How many kids just like him, with the same stated conditions — tall, skilled, good parents, good coaches, etc. — achieved the same result? We don’t have to run an empirical test to understand that our narrative sense is providing some misleading cause-and-effect. Common sense tells us there are likely to be many more failures than successes in that pool, leading us to understand that there must have been other unrealized factors at play; luck being a crucial one. Some identified factors were necessary but not sufficient — height, talent, and coaching among them — and some factors might have been negligible or even negative. (Would it have helped or hurt Steven’s NBA chances if he had not lost his father? Impossible to say.)

Modern scientific thought is built on just this sort of edifice to solve the cause-and-effect problem. A thousand years ago, much of what we thought we knew was based on naïve backward-looking causality. (Steve put leeches on his skin and then survived the plague = leeches cure the plague.) Only when we learned to take the concept of [leeches = cure for plague] and call it a hypothesis did we begin to understand the physical world. Only by downgrading our naïve assumptions to the status of a hypothesis, which needs to be tested with rigorous experiment – give 100 plague victims leeches, let another 100 go leech-less, and tally the results – did we find a method to parse actual cause and effect.
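A minimal sketch of that experiment, with invented numbers: if leeches do nothing, the leeched and leech-less groups should differ only by chance, and tallying both groups makes that visible.

```python
import random

random.seed(7)

# Invented setup: leeches have no effect, so both groups share the
# same underlying survival probability.
true_survival = 0.30

def trial(n):
    # Count survivors among n patients, each surviving with probability 0.30.
    return sum(random.random() < true_survival for _ in range(n))

leeched_survivors = trial(100)
control_survivors = trial(100)

print(f"leeched: {leeched_survivors}/100, no leeches: {control_survivors}/100")

# With no real effect, the two tallies land close together; only a
# large, repeatable gap between groups would license a causal claim.
assert abs(leeched_survivors - control_survivors) < 25
```

The control group is what naïve backward-looking causality leaves out: without it, any survival at all looks like evidence for the leeches.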

And it is just as relevant to ask ourselves the inverse of the question posed above: “Of the population not subject to X, how many still ended up with the results of Y?” This is where we ask: which basketball players had intact families and easy childhoods, yet ended up in the NBA anyway? Which corporations lacked the traits described in Good to Great but achieved greatness anyway? When we are willing to ask both types of questions and try our best to answer them, we can start to see which elements are simply part of the story rather than causal contributors.

A second way we can circumvent narrative is to simply avoid or reinterpret sources of information most subject to the bias. Turn the TV news off. Stop reading so many newspapers. Be skeptical of biographies, memoirs, and personal histories. Be careful of writers who are incredibly talented at painting a narrative, but claim to be writing facts. (I love Malcolm Gladwell’s books, but he would be an excellent example here.) We learned above that narrative is so powerful it can overcome basic logic, so we must be rigorous to some extent about what kinds of information we allow to pass through our filters. Strong narrative flow is exactly why we enjoy a fictional story, but when we enter the non-fiction world of understanding and decision making, the power of narrative is not always on our side. We want to use narrative to our advantage — to teach ourselves or others useful concepts — but be wary of where it can mislead.

One way to assess how narrative affects your decision-making is to start keeping a journal of your decisions or predictions in any arena that you consider important. It’s important to note the why behind your prediction or decision. If you’re going to invest in your cousin’s new social media startup — sure to succeed — explain to yourself exactly why you think it will work. Be detailed. Whether the venture succeeds or fails, you will now have a forward-looking document to refer to later, so that when you have the benefit of hindsight, you can evaluate your original assumptions instead of finding convenient reasons to justify the success or failure. The more you’re able to do this exercise, the more you’ll come to understand how complicated cause-and-effect factors are when we look ahead rather than behind.

Munger also gives us a few prescriptions in Poor Charlie’s Almanack after describing the Availability Bias, another kissing cousin of the Narrative Fallacy and the Reason-Respecting Tendency. We are wise to take heed of them as we approach our fight with narrative:

The main antidote to miscues from the Availability-Misweighing Tendency often involves procedures, including use of checklists, which are almost always helpful.

Another antidote is to behave somewhat like Darwin did when he emphasized disconfirming evidence. What should be done is to especially emphasize factors that don’t produce reams of easily available numbers, instead of drifting mostly or entirely into considering factors that do produce such numbers. Still another antidote is to find and hire some skeptical, articulate people with far-reaching minds to act as advocates for notions that are opposite to the incumbent notions. [Ed: Or simply ask a smart friend to do the same.]

One consequence of this tendency is that extra-vivid evidence, being so memorable and thus more available in cognition, should often consciously be underweighed while less vivid evidence should be overweighed.

Munger’s prescriptions are almost certainly as applicable to the narrative problem as to the closely related Availability problem, especially the issue of vivid evidence we discussed earlier.

The final prescription comes from Taleb himself, the progenitor of the idea of our problem with narrative: when searching for real truth, favor experimentation over storytelling (data over anecdote), favor experience over history (which can be cherry-picked), and favor clinical knowledge over grand theories. Figure out what you know and what’s a guess, and become humble about your understanding of the past.

This recognition and respect of the power of our minds to invent and love stories can help us reduce our misunderstanding of the world.

The Paradox of Skill

Michael Mauboussin talking about his new book The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing with the WSJ:

The key is this idea called the paradox of skill. As people become better at an activity, the difference between the best and the average and the best and the worst becomes much narrower. As people become more skillful, luck becomes more important. That’s precisely what happens in the world of investing.

The reason that luck is so important isn’t that investing skill isn’t relevant. It’s that skill is very high and consistent. That said, over longer periods, skill has a much better chance of shining through.

In the short term you may experience good or bad luck [and that can overwhelm skill], but in the long term luck tends to even out and skill determines results.

WSJ: You say people generally aren’t very good at distinguishing the role of luck and skill in investing and other activities. Why not?

Our minds are really good at linking cause to effect. So if I show you an effect that is success, your mind is naturally going to say I need a cause for that. And often you are going to attribute it to the individual or skill rather than to luck.

Also, humans love narratives, they love stories. An essential element of a story is the notion of causality: This caused that, this person did that.

So when you put those two together, we are very poor at discriminating between the relative contributions of skill and luck in outcomes.
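Mauboussin’s claim has a simple statistical core: if outcomes are roughly skill plus luck, then as the spread in skill narrows, luck accounts for a growing share of the variance in results. A quick simulation makes this concrete (my own illustration, not from the book; the standard deviations are arbitrary):

```python
import random
import statistics

def share_of_variance_from_skill(skill_sd, luck_sd, n=100_000, seed=0):
    """Simulate outcome = skill + luck for n players and report
    what fraction of outcome variance is attributable to skill."""
    rng = random.Random(seed)
    skills = [rng.gauss(0, skill_sd) for _ in range(n)]
    lucks = [rng.gauss(0, luck_sd) for _ in range(n)]
    outcomes = [s + l for s, l in zip(skills, lucks)]
    return statistics.variance(skills) / statistics.variance(outcomes)

# Wide skill gap between players: skill dominates results.
print(share_of_variance_from_skill(skill_sd=3.0, luck_sd=1.0))  # ≈ 0.9

# Skill converges (everyone is very good): luck dominates.
print(share_of_variance_from_skill(skill_sd=0.5, luck_sd=1.0))  # ≈ 0.2
```

Nothing about luck changed between the two runs; only the dispersion of skill shrank, yet luck’s share of the outcome jumped from about 10% to about 80%. That is the paradox of skill in miniature.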

When Storytelling Leads To Unhappy Endings

“The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.”
— F. Scott Fitzgerald


John Kay, with an insightful piece in the Financial Times (available for free on his blog) commenting on the narrative fallacy.

We do not often, or easily, think in terms of probabilities, because there are not many situations in which this style of thinking is useful. Probability theory is a marvellous tool for games of chance – such as spinning a roulette wheel. The structure of the problem is comprehensively defined by the rules of the game. The set of outcomes is well defined and bounded, and we will soon know which outcome has occurred. But most of the problems we face in the business and financial worlds – or in our personal lives – are not like that. The rules are ill-defined, the range of outcomes is wider than we can easily imagine and often we do not fully comprehend what has happened even after the event. The real world is characterised by radical uncertainty – the things we do not know that we do not know.

We deal with that world by constructing simplifying narratives. We do this not because we are stupid, or irrational, or have forgotten probability 101, but because story-telling is the best means of making sense of complexity. The test of these narratives is whether they are believable.

And this part, which reminds me of Nassim Taleb’s comments:

The rise of quantitative finance has led people to squeeze many things into the framework of probability. The invention of subjective or personal probabilities proved to be a means of applying a well-established branch of mathematics to a new range of problems. This approach had the appearance of science, and enabled young turks to marginalise the war stories of innumerate old fogies. The old fogies may have known something after all, however.

Nassim Taleb: We Should Read Seneca, Not Jonah Lehrer

For those who didn’t follow him, Jonah Lehrer has a gift for turning science into a great story. His beautiful writing made it hard to resist the narrative fallacy.

The recent news about him fabricating quotes and generally offering a tenuous commitment to the truth caught me by surprise. But it raises a question we should have asked ourselves long ago: should we have avoided Lehrer and other pop-science journalists altogether?

Nassim Taleb argues yes.

In his book Antifragile, he writes:

We are built to be dupes for theories. But theories come and go; experience stays. Explanations change all the time, and have changed all the time in history (because of causal opacity, the invisibility of causes) with people involved in the incremental development of ideas thinking they always had a definitive theory; experience remains constant.

…what physicists call the phenomenology of the process is the empirical manifestation, without looking at how it glues to existing general theories. Take for instance the following statement, entirely evidence-based: If you build muscle, you can eat more without getting more fat deposits in your belly and can eat plenty of lamb chops without having to buy a new belt. Now in the past the theory to rationalize it was “Your metabolism is higher because muscles burn calories.” Currently I tend to hear “You become more insulin-sensitive and store less fat.” Insulin, shminsulin; metabolism, shmetabolism: another theory will emerge in the future and some other substance will come about, but the exact same effect will continue to prevail.

The same holds for the statement Lifting weights increases your muscle mass. In the past they used to say that weight lifting caused the “micro-tearing of muscles,” with subsequent healing and increase in size. Today some people discuss hormonal signaling or genes, tomorrow they will discuss something else. But the effect has held forever and will continue to do so.

On Facebook, Taleb writes:

When it comes to narratives, the brain seems to be the last province of the theoretician-charlatan. Add neurosomething to a field, and suddenly it rises in respectability and becomes more convincing as people now have the illusion of a strong causal link—yet the brain is too complex for that; it is both the most complex part of the human anatomy and the one that is the most susceptible to sucker-causation and charlatanism of the type “Proust Was A Neuroscientist”. Christopher Chabris and Daniel Simons brought to my attention in their book The Invisible Gorilla the evidence I had been looking for: whatever theory has a reference in it to the brain circuitry seems more “scientific” and more convincing, even when it is just randomized psycho-neuro-babble.

Taleb’s point, I think, is that most of Lehrer’s writing on science, while narratively sexy, derived from theories based on very little data. Most of these theories won’t be around, or even talked about, in 100 years. Seneca, on the other hand, explained things that are still true today. Lehrer is noise. Seneca is signal.


Still curious? A great way to start reading Seneca is to pick up Letters of a Stoic and Dialogues and Essays.

Tyler Cowen on The Dangers of Storytelling

“We tell ourselves stories in order to live.” – Joan Didion

Tyler Cowen with an excellent TED talk on the dangers of storytelling:

So if I’m thinking about this talk, I’m wondering, of course, what is it you take away from this talk? What story do you take away from Tyler Cowen? One story you might take away is the story of the quest. “Tyler came here, and he told us not to think so much in terms of stories.” That would be a story you could tell about this talk. It would fit a pretty well-known pattern. You might remember it. You could tell it to other people. “This weird guy came, and he said not to think in terms of stories. Let me tell you what happened today!” and you tell your story. Another possibility is you might tell a story of rebirth. You might say, “I used to think too much in terms of stories, but then I heard Tyler Cowen, and now I think less in terms of stories!” That too, is a narrative you will remember, you can tell to other people, and it may stick. You also could tell a story of deep tragedy. “This guy Tyler Cowen came and he told us not to think in terms of stories, but all he could do was tell us stories about how other people think too much in terms of stories.”

As a simple rule of thumb, just imagine every time you’re telling a good vs. evil story, you’re basically lowering your I.Q. by ten points or more.