Tag: Stories

Michael Mauboussin: Three Things to Consider in Order To Make an Effective Prediction

Michael Mauboussin commenting on Daniel Kahneman:

When asked which was his favorite paper of all time, Daniel Kahneman pointed to “On the Psychology of Prediction,” which he co-authored with Amos Tversky in 1973. Tversky and Kahneman basically said that there are three things to consider in order to make an effective prediction: the base rate, the individual case, and how to weight the two. In luck-skill language, if luck is dominant you should place most weight on the base rate, and if skill is dominant then you should place most weight on the individual case. And the activities in between get weightings that are a blend.

In fact, there is a concept called the “shrinkage factor” that tells you how much you should revert past outcomes to the mean in order to make a good prediction. A shrinkage factor of 1 means that the next outcome will be the same as the last outcome and indicates all skill, and a factor of 0 means the best guess for the next outcome is the average. Almost everything interesting in life is in between these extremes.

To make this more concrete, consider batting average and on-base percentage, two statistics from baseball. Luck plays a larger role in determining batting average than it does in determining on-base percentage. So if you want to predict a player’s performance (holding skill constant for a moment), you need a shrinkage factor closer to 0 for batting average than for on-base percentage.
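The blend Mauboussin describes can be written as a simple weighted average. The sketch below is illustrative only: the shrinkage factors and the player statistics are made-up numbers chosen to show the mechanics, not figures from Mauboussin's book.

```python
def predict_next(last_outcome, base_rate, shrinkage):
    """Blend the individual case with the base rate.

    shrinkage = 1.0 -> pure skill: predict the last outcome.
    shrinkage = 0.0 -> pure luck: predict the base rate (the mean).
    """
    return shrinkage * last_outcome + (1 - shrinkage) * base_rate

# Hypothetical numbers for illustration only.
league_avg_ba, league_avg_obp = 0.260, 0.320
player_ba, player_obp = 0.310, 0.400

# Batting average is luck-heavier, so its shrinkage factor sits closer
# to 0; on-base percentage is more skill-driven, so closer to 1.
ba_forecast = predict_next(player_ba, league_avg_ba, shrinkage=0.3)
obp_forecast = predict_next(player_obp, league_avg_obp, shrinkage=0.7)

print(round(ba_forecast, 3))   # reverts most of the way to the league mean
print(round(obp_forecast, 3))  # stays closer to the player's own record
```

With these assumed weights, the batting-average forecast reverts most of the way toward the league mean, while the on-base forecast stays closer to the player's own record — exactly the asymmetry Mauboussin's example describes.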

I’d like to add one more point that is not analytical but rather psychological. There is a part of the left hemisphere of your brain that is dedicated to sorting out causality. It takes in information and creates a cohesive narrative. It is so good at this function that neuroscientists call it the “interpreter.”

Now no one has a problem with the suggestion that future outcomes combine skill and luck. But once something has occurred, our minds quickly and naturally create a narrative to explain the outcome. Since the interpreter is about finding causality, it doesn’t do a good job of recognizing luck. Once something has occurred, our minds start to believe it was inevitable. This leads to what psychologists call “creeping determinism” – the sense that we knew all along what was going to happen. So while the single most important concept is knowing where you are on the luck-skill continuum, a related point is that your mind will not do a good job of recognizing luck for what it is.

Mauboussin is the author of The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing.

The Paradox of Skill

Michael Mauboussin talking about his new book The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing with the WSJ:

The key is this idea called the paradox of skill. As people become better at an activity, the difference between the best and the average, and between the best and the worst, becomes much narrower. As people become more skillful, luck becomes more important. That’s precisely what happens in the world of investing.

The reason that luck is so important isn’t that investing skill isn’t relevant. It’s that skill is uniformly high across investors, which leaves luck to drive the differences in results. That said, over longer periods, skill has a much better chance of shining through.

In the short term you may experience good or bad luck [and that can overwhelm skill], but in the long term luck tends to even out and skill determines results.

WSJ: You say people generally aren’t very good at distinguishing the role of luck and skill in investing and other activities. Why not?

Our minds are really good at linking cause to effect. So if I show you an effect that is success, your mind is naturally going to say I need a cause for that. And often you are going to attribute it to the individual or skill rather than to luck.

Also, humans love narratives, they love stories. An essential element of a story is the notion of causality: This caused that, this person did that.

So when you put those two together, we are very poor at discriminating between the relative contributions of skill and luck in outcomes.

Whoever tells the best story wins

From Everything Is Obvious: How Common Sense Fails Us:

Historical explanations, in other words, are neither causal explanations nor even really descriptions—at least not in the sense that we imagine them to be. Rather, they are stories. As the historian John Lewis Gaddis points out, they are stories that are constrained by certain historical facts and other observable evidence. Nevertheless, like a good story, historical explanations concentrate on what’s interesting, downplaying multiple causes and omitting all the things that might have happened but didn’t. As with a good story, they enhance drama by focusing the action around a few events and actors, thereby imbuing them with special significance or meaning. And like good stories, good historical explanations are also coherent, which means they tend to emphasize simple, linear determinism over complexity, randomness, and ambiguity. Most of all, they have a beginning, a middle, and an end, at which point everything—including the characters identified, the order in which the events are presented, and the manner in which both characters and events are described—all has to make sense.

So powerful is the appeal of a good story that even when we are trying to evaluate an explanation scientifically—that is, on the basis of how well it accounts for the data—we can’t help judging it in terms of its narrative attributes. In a range of experiments, for example, psychologists have found that simpler explanations are judged more likely to be true than complex explanations, not because simpler explanations actually explain more, but rather just because they are simpler. In one study, for example, when faced with a choice of explanations for a fictitious set of medical symptoms, a majority of respondents chose an explanation involving only one disease over an alternative explanation involving two diseases, even when the combination of the two diseases was statistically twice as likely as the single-disease explanation. Somewhat paradoxically, explanations are also judged to be more likely to be true when they have informative details added, even when the extra details are irrelevant or actually make the explanation less likely.

Still curious? Read the book.

The evolutionary function of religion

Excerpts from Jonathan Gottschall’s The Storytelling Animal on the evolutionary function of religion.

In his trailblazing book Darwin’s Cathedral, the biologist David Sloan Wilson proposes that religion emerged as a stable part of all human societies for a simple reason: it made them work better. Human groups that happened to possess a faith instinct so thoroughly dominated non-religious competitors that religious tendencies became deeply entrenched in our species.

Wilson argues that religion provides multiple benefits to groups. First, it binds members into a unified community; as the sociologist Émile Durkheim wrote, “Religion is a unified system of beliefs and practices … which unite into one single moral community called a Church all those who adhere to them.” Second, religion coordinates behavior within the group, setting up rules and norms, punishments and rewards. Third, religion provides a powerful incentive system that promotes group cooperation and suppresses selfishness. The science writer Nicholas Wade expresses the heart of Wilson’s idea succinctly: the evolutionary function of religion “is to bind people together and make them put the group’s interests ahead of their own.”

If you’re skeptical

Wilson points out that “elements of religion that appear irrational and dysfunctional often make perfectly good sense when judged by the only appropriate gold standard as far as evolutionary theory is concerned: what they cause people to do.” And what they generally cause people to do is to behave more decently toward members of the group (co-religionists) while vigorously asserting the group’s interests against competitors. As the German evolutionist Gustav Jäger argued in 1869, religion can be seen as “a weapon in the [Darwinian] struggle for survival.”

The dark side of religion

There are good things about religion, including the way its stories bind people into more harmonious collectives. But there is an obvious dark side to religion too: the way it is so readily weaponized. Religion draws co-religionists together and drives those of different faiths apart.

Still curious? After you read The Storytelling Animal, pick up a copy of Darwin’s Cathedral to better understand the role of religion in evolution.

“stories equip us with a mental file of dilemmas we might one day face”

From Jonathan Gottschall’s The Storytelling Animal:

In his groundbreaking book How the Mind Works, Steven Pinker argues that stories equip us with a mental file of dilemmas we might one day face, along with workable solutions. In the way that serious chess players memorize optimal responses to a wide variety of attacks and defenses, we equip ourselves for real life by absorbing fictional game plans.

But this model has flaws. As some critics have pointed out, fiction can make a terrible guide for real life. What if you actually tried to apply fictional solutions to your problems? You might end up running around like the comically insane Don Quixote or the tragically deluded Emma Bovary—both of whom go astray because they confuse literary fantasy with reality.

Nassim Taleb: We Should Read Seneca, Not Jonah Lehrer

For those who didn’t follow him, Jonah Lehrer has a gift for turning science into a great story. His beautiful writing made it hard to resist the narrative fallacy.

The recent news about him fabricating quotes and generally offering a tenuous commitment to the truth caught me by surprise. But it raises a question we should have asked ourselves long ago: should we have avoided Lehrer and other pop-science journalists altogether?

Nassim Taleb argues yes.

In his book Antifragile, he writes:

We are built to be dupes for theories. But theories come and go; experience stays. Explanations change all the time, and have changed all the time in history (because of causal opacity, the invisibility of causes) with people involved in the incremental development of ideas thinking they always had a definitive theory; experience remains constant.

…what physicists call the phenomenology of the process is the empirical manifestation, without looking at how it glues to existing general theories. Take for instance the following statement, entirely evidence-based: If you build muscle, you can eat more without getting more fat deposits in your belly and can eat plenty of lamb chops without having to buy a new belt. Now in the past the theory to rationalize it was “Your metabolism is higher because muscles burn calories.” Currently I tend to hear “You become more insulin-sensitive and store less fat.” Insulin, shminsulin; metabolism, shmetabolism: another theory will emerge in the future and some other substance will come about, but the exact same effect will continue to prevail.

The same holds for the statement Lifting weights increases your muscle mass. In the past they used to say that weight lifting caused the “micro-tearing of muscles,” with subsequent healing and increase in size. Today some people discuss hormonal signaling or genes, tomorrow they will discuss something else. But the effect has held forever and will continue to do so.

On Facebook, Taleb writes:

When it comes to narratives, the brain seems to be the last province of the theoretician-charlatan. Add neurosomething to a field, and suddenly it rises in respectability and becomes more convincing as people now have the illusion of a strong causal link—yet the brain is too complex for that; it is both the most complex part of the human anatomy and the one that is the most susceptible to sucker-causation and charlatanism of the type “Proust Was A Neuroscientist”. Christopher Chabris and Daniel Simons brought to my attention in their book The Invisible Gorilla the evidence I had been looking for: whatever theory has a reference in it to the brain circuitry seems more “scientific” and more convincing, even when it is just randomized psycho-neuro-babble.

Taleb’s point, I think, is that most of Lehrer’s writing on science, while narratively sexy, derived from theories based on very little data. Most of these theories won’t be around, or even talked about, in 100 years. Seneca, on the other hand, explained things that are still true today. Lehrer is noise. Seneca is signal.


Still curious? A great way to start reading Seneca is to pick up Letters of a Stoic and Dialogues and Essays.