Tag: Probability

Nate Silver: The Difference Between Risk and Uncertainty

Nate Silver elaborates on the difference between risk and uncertainty in The Signal and the Noise:

Risk, as first articulated by the economist Frank H. Knight in 1921, is something that you can put a price on. Say that you’ll win a poker hand unless your opponent draws to an inside straight: the chances of that happening are exactly 1 chance in 11. This is risk. It is not pleasant when you take a “bad beat” in poker, but at least you know the odds of it and can account for it ahead of time. In the long run, you’ll make a profit from your opponents making desperate draws with insufficient odds.

Uncertainty, on the other hand, is risk that is hard to measure. You might have some vague awareness of the demons lurking out there. You might even be acutely concerned about them. But you have no real idea how many of them there are or when they might strike. Your back-of-the-envelope estimate might be off by a factor of 100 or by a factor of 1,000; there is no good way to know. This is uncertainty. Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt.
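As a side note on the arithmetic: Silver’s “exactly 1 chance in 11” is consistent with a setup in which, with one card to come, you can see your two hole cards, your opponent’s two, and the four community cards. That setup is my assumption, since the excerpt doesn’t spell it out, but a few lines of Python make the count explicit:

```python
from fractions import Fraction

# Hedged sketch of where "exactly 1 chance in 11" can come from (the setup is
# an assumption, not spelled out in the quote): with one card to come, you can
# see your 2 hole cards, your opponent's 2, and 4 community cards.
unseen = 52 - 2 - 2 - 4   # assumed: both hands and four board cards are known
outs = 4                  # the four cards of the rank that fills the inside straight
print(Fraction(outs, unseen))  # 1/11
```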

When Storytelling Leads To Unhappy Endings

“The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.”
— F. Scott Fitzgerald

***

John Kay has an insightful piece in the Financial Times (available for free on his blog) commenting on the narrative fallacy.

We do not often, or easily, think in terms of probabilities, because there are not many situations in which this style of thinking is useful. Probability theory is a marvellous tool for games of chance – such as spinning a roulette wheel. The structure of the problem is comprehensively defined by the rules of the game. The set of outcomes is well defined and bounded, and we will soon know which outcome has occurred. But most of the problems we face in the business and financial worlds – or in our personal lives – are not like that. The rules are ill-defined, the range of outcomes is wider than we can easily imagine and often we do not fully comprehend what has happened even after the event. The real world is characterised by radical uncertainty – the things we do not know that we do not know.

We deal with that world by constructing simplifying narratives. We do this not because we are stupid, or irrational, or have forgotten probability 101, but because story-telling is the best means of making sense of complexity. The test of these narratives is whether they are believable.

And this part, which reminds me of Nassim Taleb’s comments:

The rise of quantitative finance has led people to squeeze many things into the framework of probability. The invention of subjective or personal probabilities proved to be a means of applying a well-established branch of mathematics to a new range of problems. This approach had the appearance of science, and enabled young turks to marginalise the war stories of innumerate old fogies. The old fogies may have known something after all, however.

Nassim Taleb: The Winner-Take-All Effect In Longevity

Nassim Taleb elaborates on the Copernican Principle, a concept first introduced on Farnam Street in How To Predict Everything.

For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day implies a longer life expectancy.

So the longer a technology lives, the longer it is expected to live. Let me illustrate the point. Say I have for sole information about a gentleman that he is 40 years old and I want to predict how long he will live. I can look at actuarial tables and find his age-adjusted life expectancy as used by insurance companies. The table will predict that he has an extra 44 years to go. Next year, when he turns 41 (or, equivalently, if I apply the reasoning today to another person currently 41), he will have a little more than 43 years to go. So every year that lapses reduces his life expectancy by about a year (actually, a little less than a year, so if his life expectancy at birth is 80, his life expectancy at 80 will not be zero, but another decade or so).

The opposite applies to nonperishable items. I am simplifying numbers here for clarity. If a book has been in print for forty years, I can expect it to be in print for another forty years. But, and that is the main difference, if it survives another decade, then it will be expected to be in print another fifty years. This, simply, as a rule, tells you why things that have been around for a long time are not “aging” like persons, but “aging” in reverse. Every year that passes without extinction doubles the additional life expectancy. This is an indicator of some robustness. The robustness of an item is proportional to its life!

This is the “winner-take-all” effect in longevity.
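A toy calculation makes the contrast concrete. The distributions below are illustrative stand-ins of my own (a Normal lifespan for the perishable, a Pareto power law for the nonperishable), not Taleb’s numbers; the quantity computed is the expected additional life given survival to age t, i.e. the integral of the survival curve beyond t divided by S(t):

```python
import math

# Toy illustration with made-up distributions (not Taleb's numbers): expected
# additional life given survival to age t, E[T - t | T > t] = integral of S(u)
# for u > t, divided by S(t).

def normal_survival(t, mu=80.0, sigma=10.0):
    """P(T > t) for a Normal(mu, sigma) lifespan -- a rough stand-in for the perishable."""
    return 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2)))

def pareto_survival(t, alpha=2.0, xm=1.0):
    """P(T > t) for a Pareto lifespan -- a Lindy-style stand-in for the nonperishable."""
    return 1.0 if t < xm else (xm / t) ** alpha

def expected_remaining(survival, t, horizon=10_000.0, step=0.25):
    """Numerically integrate the survival curve beyond t and divide by S(t)."""
    total, u = 0.0, t
    while u < horizon:
        total += survival(u) * step
        u += step
    return total / survival(t)

for age in (40, 60, 80):
    print(f"survived to {age}: perishable expects "
          f"{expected_remaining(normal_survival, age):5.1f} more years; "
          f"Lindy-style expects {expected_remaining(pareto_survival, age):5.1f} more")
# The perishable item's expected remaining life shrinks with age, while the
# power-law item's expected remaining life grows roughly in step with its age.
```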

The main argument against this idea is the counterexample — newspapers and traditional telephone lines come to mind. These technologies, widely considered inefficient and dying, have been around for a long time. Yet the Copernican Principle would suggest they will continue to live on for a long time.

These arguments miss the point of probability. The argument is not about a specific example, but rather about the life expectancy, which is, Taleb writes, “simply a probabilistically derived average.”

Perhaps an example, from Taleb, will help illustrate. If I were to ask you to guess the life expectancy of the average 40-year-old man, you would probably guess around 80 (at least that’s what the actuarial tables likely reveal). However, if I now add that the man is suffering from cancer, we would revisit the estimate and most likely revise it downward. “It would,” Taleb writes, “be a mistake to think that he has forty-four more years to live, like others in his age group who are cancer-free.”

“In general, the older the technology, not only the longer it is expected to last, but the more certainty I can attach to such a statement.”

***

If you liked this, you’ll love these three other Farnam Street articles:

The Copernican Principle: How To Predict Everything — Based on one of the most famous and successful prediction methods in the history of science.

Ten Commandments for Aspiring Superforecasters — The ten key themes that have been “experimentally demonstrated to boost accuracy” in the real world.

Philip Tetlock on The Art and Science of Prediction — How we can get better at the art and science of prediction, including what makes some people better at making predictions and how we can learn to improve our ability to guess the future.

Source

Predicting the Improbable

One natural human bias is that we tend to draw strong conclusions from few observations. This bias, misconceptions of chance, shows itself in many ways, including the gambler’s and hot hand fallacies. Such biases may lead public opinion and the media to call for dramatic swings in policies or regulation in response to highly improbable events. These biases are made even worse by our natural tendency to “do something.”

***

An event like an earthquake happens, making it more available in our mind.

We think the event is more probable than the evidence supports, so we run out and buy earthquake insurance. Over many years, as the earthquake fades from our mind (making it less available), we come to believe, paradoxically, that the risk is lower (based on recent evidence), so we cancel our policy. …

Some events are hard to predict. This becomes even more complicated when you consider not only predicting the event but also its timing. The article below points out that experts base their predictions on inference from observing the past and are just as prone to these biases as the rest of us.

Why do people over-infer from recent events?

There are two plausible but apparently contradictory intuitions about how people over-infer from observing recent events.

The gambler’s fallacy claims that people expect rapid reversion to the mean.

For example, upon observing three outcomes of red in roulette, gamblers tend to think that black is now due and tend to bet more on black (Croson and Sundali 2005).

The hot hand fallacy claims that upon observing an unusual streak of events, people tend to predict that the streak will continue. (See Misconceptions of Chance)

The term hot hand fallacy originates in basketball, where players who have scored several times in a row are believed to have a “hot hand”, i.e. to be more likely to score on their next attempt.

Recent behavioural theory has proposed a foundation to reconcile the apparent contradiction between the two types of over-inference. The intuition behind the theory can be explained with reference to the example of roulette play.

A person believing in the law of small numbers thinks that small samples should look like the parent distribution, i.e. that the sample should be representative of the parent distribution. Thus, the person believes that out of, say, 6 spins, 3 should be red and 3 should be black (ignoring green). If observed outcomes in the small sample differ from the 50:50 ratio, immediate reversal is expected. Thus, somebody observing red on 2 consecutive spins believes that black is “due” on the 3rd spin to restore the 50:50 ratio.

Now suppose such a person is uncertain about the fairness of the roulette wheel. Upon observing an improbable event (say, 6 reds in 6 spins), the person starts to doubt the fairness of the wheel, because a long streak does not correspond to what he believes a random sequence should look like. He then revises his model of the data-generating process and starts to believe the event on the streak is more likely. The upshot of the theory is that the same person may at first (when the streak is short) believe in reversion of the trend (the gambler’s fallacy) and later, when the streak is long, in continuation of the trend (the hot hand fallacy).
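To make that intuition concrete, here is a minimal numerical sketch with made-up numbers, in the spirit of Rabin’s (2002) “law of small numbers” model rather than the exact model the paper proposes: the agent believes each block of six spins is drawn without replacement from an urn that mirrors the wheel’s composition, and he is uncertain whether the wheel is fair or rigged toward red.

```python
from fractions import Fraction

# Toy model (my own numbers): the agent thinks every block of 6 spins is drawn
# WITHOUT replacement from an urn matching the wheel, the urn refreshing each
# block, and he is unsure whether the wheel is fair or rigged toward red.

BLOCK = 6
PRIORS = {                      # red balls in the 6-ball urn -> prior belief (assumed)
    3: Fraction(91, 100),       # fair wheel: 3 red, 3 black
    4: Fraction(3, 100),
    5: Fraction(3, 100),
    6: Fraction(3, 100),        # rigged wheel: all red
}

def streak_likelihood(k, reds_in_urn):
    """Agent's probability of k reds in a row under one hypothesis,
    drawing without replacement from urns that refresh every BLOCK spins."""
    like, reds, balls = Fraction(1), reds_in_urn, BLOCK
    for _ in range(k):
        if balls == 0:                       # new block: the urn refreshes
            reds, balls = reds_in_urn, BLOCK
        like *= Fraction(reds, balls)
        reds, balls = max(reds - 1, 0), balls - 1
    return like

def predicted_p_red(k):
    """Agent's predicted probability that the next spin is red after k reds in a row."""
    posterior = {r: p * streak_likelihood(k, r) for r, p in PRIORS.items()}
    total = sum(posterior.values())
    drawn = k % BLOCK                        # spins already taken from the current urn
    prob = Fraction(0)
    for r, weight in posterior.items():
        prob += (weight / total) * Fraction(max(r - drawn, 0), BLOCK - drawn)
    return float(prob)

for k in (2, 6):
    print(f"after {k} reds in a row, the agent's P(next spin is red) = {predicted_p_red(k):.2f}")
# A short streak pushes the prediction below 1/2 (gambler's fallacy: bet black);
# a long streak convinces the agent the wheel is rigged, pushing it toward 1 (hot hand).
```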


The Art and Science of High-Stakes Decisions

How can anyone make rational decisions in a world where knowledge is limited, time is pressing, and deep thought is often unattainable?

Some decisions are more difficult than others, and yet we often make them the same way we make easy decisions: on autopilot.

We have difficulty contemplating and taking protective action against low-probability, high-stakes threats. It almost seems perverse when you consider that we are least prepared to make the decisions that matter most.

Sure, we can pick between the store brand of peanut butter and the Kraft label, and we can no doubt surf the internet with relative ease, yet life seems to offer few opportunities to prepare for decisions where the consequences of a poor choice are catastrophic. If we pick the wrong type of peanut butter, we are generally not penalized too harshly. If we fail to purchase flood insurance, on the other hand, we can be financially and emotionally wiped out.

Shortly after the planes crashed into the towers in Manhattan, some well-known academics got together to discuss[1] how skilled people are at making choices involving a low and ambiguous probability of a high-stakes loss.

High-stakes decisions involve two distinctive properties: 1) the existence of a possible large loss (financial or emotional) and 2) high costs to reverse the decision once made. More importantly, these professors wanted to determine whether prescriptive guidelines for improving the decision-making process could be created to help people make better decisions.

Whether we’re buying something at the grocery store or making a decision to purchase earthquake insurance, we operate in the same way. The presence of potentially catastrophic costs of errors does little to reduce our reliance on heuristics (or rules of thumb). Such heuristics serve us well on a daily basis. For simple decisions, not only are heuristics generally right, but the costs of errors are small, such as being caught without an umbrella or regretting not picking up the Kraft peanut butter after discovering the store brand doesn’t taste as you remember. However, in high-stakes decisions, heuristics can often be a poor method of forecasting.

In order to make better high-stakes decisions, we need a better understanding of why we generally make poor decisions.

Here are several causes.

Poor understanding of probability.
Several studies show that people either use probability information insufficiently when it is made available to them or ignore it altogether. In one study, 78% of subjects failed to seek out probability information when evaluating several risky managerial decisions.

In the context of high-stakes decisions, the probability of an event causing a loss may seem sufficiently low that organizations and individuals consider it not worth worrying about. In doing so, they effectively treat the probability as zero, or close to it.

An excessive focus on short time horizons.
Many high-stakes decisions are not obvious to the decision-maker. In part, this is because people tend to focus on the immediate consequences and not the long-term consequences.

A CEO near retirement has incentives to skimp on insurance to report slightly higher profits before leaving (shareholders are unaware of the increased risk and appreciate the increased profits). Governments tend to under-invest in less visible things like infrastructure because they have short election cycles. The long-term consequences of short-term thinking can be disastrous.

The focus on the short term is one of the most widely documented failings of human decision making. People have difficulty considering the future consequences of current actions over long periods of time. Garrett Hardin, the author of Filters Against Folly, suggests we look at things through three filters (literacy, numeracy, and ecolacy). In ecolacy, the key question is “and then what?” Asking it helps us avoid focusing solely on the short term.

Excessive attention to what’s available
Decisions requiring difficult trade-offs between attributes, or entailing ambiguity about what a right answer looks like, often lead people to resolve choices by focusing on the information most easily brought to mind. Sometimes the relevant information is hard to bring to mind.

Constant exposure to low-risk events that never materialize leads us to be less concerned than the risk warrants (it makes these events less available) and “proves” that our past decisions to ignore low-risk events were right.

People refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value. Kunreuther et al. (1993) suggest that underreaction to threats of flooding may arise from “the inability of individuals to conceptualize floods that have never occurred… Men on flood plains appear to be very much prisoners of their experience… Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned.” Paradoxically, we feel more secure even as the “risk” may have increased.

Distortions under stress
Most high-stakes decisions will be made under perceived (or real) stress. A large number of empirical studies find that stress focuses decision-makers on a selective set of cues when evaluating options and leads to greater reliance on simplifying heuristics. When we’re stressed, we’re less likely to think things through.

Over-reliance on social norms
Most individuals have little experience with high-stakes decisions and are highly uncertain about how to resolve them (procedural uncertainty). In such cases—and combined with stress—the natural course of action is to mimic the behavior of others or follow established social norms. This is based on the psychological desire to fail conventionally.

The tendency to prefer the status-quo
What happens when people are presented with difficult choices and no obvious right answer? We tend to prefer making no decision at all; we choose the norm.

In high-stakes decisions many options are better than the status-quo and we must make trade-offs. Yet, when faced with decisions that involve life-and-death trade-offs, people frequently remark “I’d rather not think about it.”

Failures to learn
Although individuals and organizations are eager to derive intelligence from experience, the inferences stemming from that eagerness are often misguided. The problems lie partly in errors in how people think, but even more so in properties of experience that confound learning from it. Experience may possibly be the best teacher, but it is not a particularly good teacher.

As an illustration, one study found that participants in an earthquake simulation tended to over-invest in mitigation when it was normatively ineffective but under-invest when it was normatively effective. The reason was a misinterpretation of feedback: when mitigation was ineffective, respondents attributed the persistence of damage to the fact that they had not invested enough. By contrast, when it was effective, they attributed the absence of damage to a belief that earthquakes posed limited damage risk.

Gresham’s Law of Decision Making
Over time, bad decisions will tend to drive out good decisions in an organization.

Improving
What can you do to improve your decision-making?

A few things: 1) learn more about judgment and decision making; 2) encourage decision makers to see events through alternative frames, such as gains versus losses and changes in the status-quo; 3) adjust the time frame of decisions (while the probability of an earthquake at your plant may be 1/100 in any given year, the probability over the 25-year life of the plant is roughly 1/5); and 4) read Farnam Street!
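The arithmetic behind point 3), using the hypothetical 1-in-100 annual figure from the text and assuming independent years, is a one-liner:

```python
# Hypothetical numbers from the text: a 1-in-100 annual chance over a 25-year horizon,
# assuming years are independent.
annual_p = 1 / 100
years = 25
p_at_least_one = 1 - (1 - annual_p) ** years
print(f"P(at least one earthquake in {years} years) = {p_at_least_one:.2f}")  # ~0.22, roughly 1 in 5
```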

Footnotes
  • 1

    http://marketing.wharton.upenn.edu/ideas/pdf/Kahn/High%20Stakes%20Decision%20Making.pdf

The Famous Game Show Problem


Ahh, the famous game show problem (also known as The Monty Hall Problem).

This is a probability puzzle you’ve probably heard of:

Suppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, “Do you want to pick door #2?” Is it to your advantage to switch your choice of doors?

Marilyn Vos Savant, one of the “smartest” people in the world, offers her answer:

When you switch, you win 2/3 of the time and lose 1/3, but when you don’t switch, you only win 1/3 of the time and lose 2/3. You can try it yourself and see…

The winning odds of 1/3 on the first choice can’t go up to 1/2 just because the host opens a losing door. To illustrate this, let’s say we play a shell game. You look away, and I put a pea under one of three shells. Then I ask you to put your finger on a shell. The odds that your choice contains a pea are 1/3, agreed? Then I simply lift up an empty shell from the remaining other two. As I can (and will) do this regardless of what you’ve chosen, we’ve learned nothing to allow us to revise the odds on the shell under your finger.

The benefits of switching are readily proven by playing through the six games that exhaust all the possibilities. For the first three games, you choose #1 and “switch” each time; for the second three games, you choose #1 and “stay” each time, and the host always opens a loser.
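If enumerating the six games doesn’t convince you, a quick simulation (my own sketch, not part of Vos Savant’s column) arrives at the same answer:

```python
import random

# Quick simulation of the game as described: the host, who knows where the car
# is, always opens a losing door you didn't pick; you either stay with your
# first choice or switch to the remaining closed door.

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # the host opens a door that hides a goat and is not your pick
        # (which goat door he opens does not affect the result)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"win rate if you stay:   {play(switch=False):.3f}")   # about 1/3
print(f"win rate if you switch: {play(switch=True):.3f}")    # about 2/3
```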