
Predicting the Future with Bayes’ Theorem

In a recent podcast, we talked with professional poker player Annie Duke about thinking in probabilities, something good poker players do all the time. At the poker table or in life, it’s far more useful to think in probabilities than in absolutes, based on all the information available to you; doing so improves your decisions and your outcomes. Probabilistic thinking leads you to ask yourself: How confident am I in this prediction? What information would change this confidence?

Bayes’ Theorem

Bayes’ theorem is an accessible way of integrating probability thinking into our lives. Thomas Bayes was an English minister in the 18th century, whose most famous work, “An Essay toward Solving a Problem in the Doctrine of Chances,” was brought to the attention of the Royal Society in 1763—two years after his death—by his friend Richard Price. The essay did not contain the theorem as we now know it, but had the seeds of the idea. It looked at how we should adjust our estimates of probabilities when we encounter new data that influence a situation. Later development by French scholar Pierre-Simon Laplace and others helped codify the theorem and develop it into a useful tool for thinking.

Knowing the exact math of probability calculations is not the key to understanding Bayesian thinking. More critical is your ability and desire to assign probabilities of truth and accuracy to anything you think you know, and then being willing to update those probabilities when new information comes in. Here is a short example, found in Investing: The Last Liberal Art, of how it works:

Let’s imagine that you and a friend have spent the afternoon playing your favorite board game, and now, at the end of the game, you are chatting about this and that. Something your friend says leads you to make a friendly wager: that with one roll of the die from the game, you will get a 6. Straight odds are one in six, a 16 percent probability. But then suppose your friend rolls the die, quickly covers it with her hand, and takes a peek. “I can tell you this much,” she says; “it’s an even number.” Now you have new information and your odds change dramatically to one in three, a 33 percent probability. While you are considering whether to change your bet, your friend teasingly adds: “And it’s not a 4.” With this additional bit of information, your odds have changed again, to one in two, a 50 percent probability. With this very simple example, you have performed a Bayesian analysis. Each new piece of information affected the original probability, and that is Bayesian [updating].
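
To make the updating explicit, here is a minimal sketch in Python (not from the book, just an illustration) that conditions on each clue by filtering the sample space and recomputing the chance of a 6:

```python
# Bayesian updating on a die roll: each clue rules out outcomes, and the
# probability of a 6 is recomputed over whatever outcomes remain.
from fractions import Fraction

def prob_of_six(possible_outcomes):
    """P(roll is a 6), given the outcomes still consistent with the clues."""
    return Fraction(sum(1 for o in possible_outcomes if o == 6), len(possible_outcomes))

outcomes = {1, 2, 3, 4, 5, 6}
print(prob_of_six(outcomes))                    # 1/6 -- no information yet

outcomes = {o for o in outcomes if o % 2 == 0}  # clue 1: "it's an even number"
print(prob_of_six(outcomes))                    # 1/3

outcomes = {o for o in outcomes if o != 4}      # clue 2: "and it's not a 4"
print(prob_of_six(outcomes))                    # 1/2
```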

Both Nate Silver and Eliezer Yudkowsky have written about Bayes’ theorem in the context of medical testing, specifically mammograms. Imagine you live in a country with 100 million women under 40. Past trends have revealed that there is a 1.4% chance of a woman under 40 in this country getting breast cancer—so roughly 1.4 million women.

Mammograms will detect breast cancer about 75% of the time. They will give false positives—telling a woman she has breast cancer when she actually doesn’t—about 10% of the time. At first, you might focus only on the mammogram numbers and think that a 75% detection rate means a positive result is bad news. Let’s do the math.

If all the women under 40 get mammograms, then the false positive rate will give 10 million women under 40 the news that they have breast cancer. But because you know the first statistic, that only about 1.4 million women under 40 actually get breast cancer, you know that roughly 8.6 million of the women who tested positive are not actually going to have breast cancer!
That’s a lot of needless worrying, which leads to a lot of needless medical care. In order to remedy this poor understanding and make better decisions about using mammograms, we absolutely must consider prior knowledge when we look at the results, and try to update our beliefs with that knowledge in mind.
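
For the curious, here is a rough sketch of that arithmetic in Python, using the round numbers above (1.4% prevalence, 75% detection, 10% false positives); the posterior probability of cancer given a positive test comes out to roughly 10%:

```python
# Bayes' theorem applied to the mammogram example with the article's round numbers.
population  = 100_000_000
prevalence  = 0.014   # 1.4% of women under 40 get breast cancer
sensitivity = 0.75    # mammograms detect cancer 75% of the time
false_pos   = 0.10    # and give a false positive 10% of the time

with_cancer    = population * prevalence        # ~1.4 million women
without_cancer = population - with_cancer       # ~98.6 million women

true_positives  = with_cancer * sensitivity     # ~1.05 million
false_positives = without_cancer * false_pos    # ~9.86 million

# P(cancer | positive test) = true positives / all positives
posterior = true_positives / (true_positives + false_positives)
print(f"P(cancer | positive) = {posterior:.1%}")   # about 10%
```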

Weigh the Evidence

Often we ignore prior information, simply called “priors” in Bayesian-speak. We can blame this habit in part on the availability heuristic—we focus on what’s readily available. In this case, we focus on the newest information and the bigger picture gets lost. We fail to adjust the probability of old information to reflect what we have learned.

The big idea behind Bayes’ theorem is that we must continuously update our probability estimates as new information comes in. In his book The Signal and the Noise, Nate Silver gives a contemporary example, reminding us that new information is often most useful when we put it in the larger context of what we already know:

Bayes’ theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Skeptics react with glee, while true believers dismiss the new information.

A better response is to use Bayes’ theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming—but only a little.

The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’ theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way.
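
As a rough numerical sketch of that rule of thumb (the prior and likelihood ratios below are hypothetical, chosen only for illustration), Bayes’ rule in odds form multiplies the prior odds by a likelihood ratio, so weak evidence barely moves a strong prior while strong evidence moves it a lot:

```python
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio,
# where the ratio is P(evidence | hypothesis) / P(evidence | not hypothesis).
def update(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.90   # hypothetical prior confidence in a hypothesis

print(update(prior, 0.8))   # weak counter-evidence (ratio 0.8):  ~0.88, barely moves
print(update(prior, 0.1))   # strong counter-evidence (ratio 0.1): ~0.47, moves a lot
```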

Limitations of the Bayesian Approach

Don’t walk away thinking the Bayesian approach will enable you to predict everything! In addition to seeing the world as an ever-shifting array of probabilities, we must also remember the limitations of inductive reasoning. A high probability of something being true is not the same as saying it is true. A great example of this is from Bertrand Russell’s The Problems of Philosophy:

A horse which has been often driven along a certain road resists the attempt to drive him in a different direction. Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.

In the final analysis, though, picking up Bayesian reasoning can truly change your life, as observed in this Big Think video by Julia Galef of the Center for Applied Rationality:

After you’ve been steeped in Bayes’ rule for a little while, it starts to produce some fundamental changes to your thinking. For example, you become much more aware that your beliefs are grayscale. They’re not black and white; you have levels of confidence in your beliefs about how the world works that are less than 100 percent but greater than zero percent. Even more importantly, as you go through the world and encounter new ideas and new evidence, that level of confidence fluctuates as you encounter evidence for and against your beliefs.

So be okay with uncertainty, and use it to your advantage. Instead of holding on to outdated beliefs by rejecting new information, take in what comes your way through a system of evaluating probabilities.

Bayes’ Theorem is part of the Farnam Street latticework of mental models. Still Curious? Read Bayes and Deadweight: Using Statistics to Eject the Deadweight From Your Life next. 


Nate Silver: Confidence Kills Predictions

Best known for accurate election predictions, statistician Nate Silver is also the author of The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t. Heather Bell, Managing Editor of Journal of Indexes, recently spoke with Silver.

IU: What do you see as the common theme among bad predictions? What most often leads people astray?
Silver: A lot of it is overconfidence. People tend to underestimate what the uncertainty that is intrinsic to a problem actually is. If you have someone estimate what they think a confidence interval is that’s supposed to cover 90 percent of all outcomes, it usually only covers 50 percent. You have upside outcomes and downside outcomes in the market certainly more often than people realize.

There are a variety of reasons for this. Part of it is that we can sometimes get stuck in the recent past and examples that are most familiar to us, kind of what Daniel Kahneman called “the availability heuristic,” where we assume that the current trend will always perpetuate itself, when actually it can be an anomaly or a fluke, or where we always think that the period we’re living through is the “signal,” so to speak. That’s often not true—sometimes you’re living in the outlier period, like when you have a housing bubble period that you haven’t historically had before.

Overconfidence is the core linkage between most of the failures of predictions that we’ve looked at. Obviously, you can look at that in a more technical sense and see where sometimes people are fitting models where they don’t have as much data as they think, but the root of it comes down to a failure to understand that it’s tough to be objective and that we often come at a problem with different biases and perverse incentives—and if we don’t check those, we tend to get ourselves into trouble.
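
To see how stark that calibration gap can be, here is a small simulation (with made-up numbers, not anything from Silver’s data): a forecaster who centers the interval correctly but draws it too narrow ends up covering the truth only about half the time while calling it a 90 percent interval:

```python
# Simulated calibration check: a "90%" interval that is drawn too narrow
# actually covers the outcome only about 50% of the time.
import random
random.seed(0)

trials = 100_000
honest = narrow = 0
for _ in range(trials):
    outcome = random.gauss(0, 1)   # the true outcome, standard normal
    if abs(outcome) <= 1.645:      # a genuine 90% interval for this distribution
        honest += 1
    if abs(outcome) <= 0.674:      # an overconfident interval (really only ~50%)
        narrow += 1

print(f"honest 90% interval covers:    {honest / trials:.0%}")   # ~90%
print(f"overconfident interval covers: {narrow / trials:.0%}")   # ~50%
```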

IU: What standards or conditions must be met, in your opinion, for something to be considered “predictable”?
Silver: I tend not to think in terms of black and white absolutes. There are two ways to define “predictable,” I’d say. One is by asking, How well are we able to model the system? The other is more of a cosmic predictability: How intrinsically random is something over the long run?

I look at baseball as an example. Even the best teams only win about two-thirds of their games. Even the best hitters only get on base about 40 percent of the time. In that sense, baseball is highly unpredictable. In another sense though, baseball is very easy to measure relative to a lot of other things. It’s easy to set up models for it, and the statistics are of very high quality. A lot of smart people have worked on the problem. As a result, we are able to measure and quantify the uncertainty pretty accurately. We still can’t predict who’s going to win every game, but we are doing a pretty good job with that. Things are predictable in theory, but our capabilities are not nearly as strong.

Predictability is a tricky question, but I always say we almost always have some notion of what’s going to happen next, but it’s just never a perfect notion. The question is more, Where do you sit along that spectrum?

How Good Gamblers Think


From The Signal And The Noise:

Successful gamblers – and successful forecasters of any kind – do not think of the future in terms of no-lose bets, unimpeachable theories, and infinitely precise measurements. These are the illusions of the sucker, the sirens of his overconfidence. Successful gamblers, instead, think of the future as speckles of probability, flickering upward and downward like a stock market ticker to every new jolt of information. When their estimates of these probabilities diverge by a sufficient margin from the odds on offer, they may place a bet.

This sounds an awful lot like how Warren Buffett, Charlie Munger, and Benjamin Graham think about investing.

In his 1987 letter to shareholders, Warren Buffett explains Graham’s concept of Mr. Market:

Ben Graham, my friend and teacher, long ago described the mental attitude toward market fluctuations that I believe to be most conducive to investment success. He said that you should imagine market quotations as coming from a remarkably accommodating fellow named Mr. Market who is your partner in a private business. Without fail, Mr. Market appears daily and names a price at which he will either buy your interest or sell you his.

Even though the business that the two of you own may have economic characteristics that are stable, Mr. Market’s quotations will be anything but. For, sad to say, the poor fellow has incurable emotional problems. At times he feels euphoric and can see only the favorable factors affecting the business. When in that mood, he names a very high buy-sell price because he fears that you will snap up his interest and rob him of imminent gains. At other times he is depressed and can see nothing but trouble ahead for both the business and the world. On these occasions he will name a very low price, since he is terrified that you will unload your interest on him.

Mr. Market has another endearing characteristic: He doesn’t mind being ignored. If his quotation is uninteresting to you today, he will be back with a new one tomorrow. Transactions are strictly at your option. Under these conditions, the more manic-depressive his behavior, the better for you.

But, like Cinderella at the ball, you must heed one warning or everything will turn into pumpkins and mice: Mr. Market is there to serve you, not to guide you. It is his pocketbook, not his wisdom, that you will find useful. If he shows up some day in a particularly foolish mood, you are free to either ignore him or to take advantage of him, but it will be disastrous if you fall under his influence. Indeed, if you aren’t certain that you understand and can value your business far better than Mr. Market, you don’t belong in the game. As they say in poker, “If you’ve been in the game 30 minutes and you don’t know who the patsy is, you’re the patsy.”

And Charlie Munger’s take:

The model I like—to sort of simplify the notion of what goes on in a market for common stocks—is the pari-mutuel system at the racetrack. If you stop to think about it, a pari-mutuel system is a market. Everybody goes there and bets and the odds change based on what’s bet. That’s what happens in the stock market.

Any damn fool can see that a horse carrying a light weight with a wonderful win rate and a good post position etc., etc. is way more likely to win than a horse with a terrible record and extra weight and so on and so on. But if you look at the odds, the bad horse pays 100 to 1, whereas the good horse pays 3 to 2. Then it’s not clear which is statistically the best bet using the mathematics of Fermat and Pascal. The prices have changed in such a way that it’s very hard to beat the system.

And then the track is taking 17% off the top. So not only do you have to outwit all the other betters, but you’ve got to outwit them by such a big margin that on average, you can afford to take 17% of your gross bets off the top and give it to the house before the rest of your money can be put to work.

Given those mathematics, is it possible to beat the horses only using one’s intelligence? Intelligence should give some edge, because lots of people who don’t know anything go out and bet lucky numbers and so forth. Therefore, somebody who really thinks about nothing but horse performance and is shrewd and mathematical could have a very considerable edge, in the absence of the frictional cost caused by the house take.

Unfortunately, what a shrewd horseplayer’s edge does in most cases is to reduce his average loss over a season of betting from the 17% that he would lose if he got the average result to maybe 10%. However, there are actually a few people who can beat the game after paying the full 17%.

I used to play poker when I was young with a guy who made a substantial living doing nothing but bet harness races…. Now, harness racing is a relatively inefficient market. You don’t have the depth of intelligence betting on harness races that you do on regular races. What my poker pal would do was to think about harness races as his main profession. And he would bet only occasionally when he saw some mispriced bet available. And by doing that, after paying the full handle to the house—which I presume was around 17%—he made a substantial living.

You have to say that’s rare. However, the market was not perfectly efficient. And if it weren’t for that big 17% handle, lots of people would regularly be beating lots of other people at the horse races. It’s efficient, yes. But it’s not perfectly efficient. And with enough shrewdness and fanaticism, some people will get better results than others.

The stock market is the same way—except that the house handle is so much lower. If you take transaction costs—the spread between the bid and the ask plus the commissions—and if you don’t trade too actively, you’re talking about fairly low transaction costs. So that with enough fanaticism and enough discipline, some of the shrewd people are going to get way better results than average in the nature of things.

It is not a bit easy. And, of course, 50% will end up in the bottom half and 70% will end up in the bottom 70%. But some people will have an advantage. And in a fairly low transaction cost operation, they will get better than average results in stock picking.

How do you get to be one of those who is a winner—in a relative sense—instead of a loser?

Here again, look at the pari-mutuel system. I had dinner last night by absolute accident with the president of Santa Anita. He says that there are two or three betters who have a credit arrangement with them, now that they have off-track betting, who are actually beating the house. They’re sending money out net after the full handle—a lot of it to Las Vegas, by the way—to people who are actually winning slightly, net, after paying the full handle. They’re that shrewd about something with as much unpredictability as horse racing.

And the one thing that all those winning betters in the whole history of people who’ve beaten the pari-mutuel system have is quite simple. They bet very seldom.

It’s not given to human beings to have such talent that they can just know everything about everything all the time. But it is given to human beings who work hard at it—who look and sift the world for a mispriced bet—that they can occasionally find one.

And the wise ones bet heavily when the world offers them that opportunity. They bet big when they have the odds. And the rest of the time, they don’t. It’s just that simple.

That is a very simple concept. And to me it’s obviously right—based on experience not only from the pari-mutuel system, but everywhere else.

And yet, in investment management, practically nobody operates that way. We operate that way—I’m talking about Buffett and Munger. And we’re not alone in the world. But a huge majority of people have some other crazy construct in their heads. And instead of waiting for a near cinch and loading up, they apparently ascribe to the theory that if they work a little harder or hire more business school students, they’ll come to know everything about everything all the time.
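
As a rough sketch of the arithmetic behind that 17% handle (the implied-probability formula below is a standard pari-mutuel approximation, not something from Munger’s talk), the posted odds already carry the takeout, so a bettor whose estimate of a horse’s chances merely matches the crowd’s expects to lose about 17 cents on the dollar, whether on the favorite or the longshot:

```python
# Pari-mutuel arithmetic: the track's takeout is baked into the posted odds,
# so betting at the crowd's implied probability loses ~17% on average.
TAKEOUT = 0.17

def expected_value(true_win_prob, payout_odds):
    """Expected profit per $1 bet when a win pays 'payout_odds to 1'."""
    return true_win_prob * (1 + payout_odds) - 1

def breakeven_prob(payout_odds):
    """Win probability needed just to break even at the posted odds."""
    return 1 / (1 + payout_odds)

for odds in (1.5, 100):  # Munger's good horse at 3-to-2 and bad horse at 100-to-1
    implied = (1 - TAKEOUT) / (1 + odds)   # win probability implied by the pool
    print(f"{odds}-to-1: implied {implied:.3f}, break-even {breakeven_prob(odds):.3f}, "
          f"EV at the implied probability {expected_value(implied, odds):+.2f} per $1")
```

In other words, your estimate of the horse’s chances has to clear the break-even line by enough to pay the house its 17% first, which is part of why the winning bettors Munger describes bet very seldom.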

Silver concludes, “Finding patterns is easy in any kind of data-rich environment; that’s what mediocre gamblers do. The key is in determining whether the patterns represent signal or noise.”

Nate Silver: The Difference Between Risk and Uncertainty

Nate Silver elaborates on the difference between risk and uncertainty in The Signal and the Noise:

Risk, as first articulated by the economist Frank H. Knight in 1921, is something that you can put a price on. Say that you’ll win a poker hand unless your opponent draws to an inside straight: the chances of that happening are exactly 1 chance in 11. This is risk. It is not pleasant when you take a “bad beat” in poker, but at least you know the odds of it and can account for it ahead of time. In the long run, you’ll make a profit from your opponents making desperate draws with insufficient odds.

Uncertainty, on the other hand, is risk that is hard to measure. You might have some vague awareness of the demons lurking out there. You might even be acutely concerned about them. But you have no real idea how many of them there are or when they might strike. Your back-of-the-envelope estimate might be off by a factor of 100 or by a factor of 1,000; there is no good way to know. This is uncertainty. Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt.
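
A small sketch of what “putting a price on it” looks like when the probability is known (the pot size below is hypothetical):

```python
# Risk you can price: with a known 1-in-11 chance of the bad beat, the hand's
# long-run value is just probability times stakes.
p_bad_beat = 1 / 11
pot = 100                     # hypothetical pot you stand to win

expected_win = (1 - p_bad_beat) * pot
print(f"Expected value of the hand: ${expected_win:.2f}")   # about $90.91

# Uncertainty is the case where p_bad_beat itself is unknown, perhaps off by a
# factor of 100, so there is no defensible number to plug in at all.
```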

What makes predictions succeed or fail?

That’s the ambitious question that Nate Silver tries to answer in The Signal and the Noise.

The book appeals to me because it “takes a comprehensive look at prediction across 13 fields, ranging from sports betting to earthquake forecasting.” Despite our best efforts we’re not that great at prediction.

Silver published an excerpt of his book in the Times. While most disciplines are not good at making predictions, weather forecasters have managed to beat the odds and improve their accuracy over time. So what, if anything, can we learn from them?

The problem with weather is that our knowledge of its initial conditions is highly imperfect, both in theory and practice. A meteorologist at the National Oceanic and Atmospheric Administration told me that it wasn’t unheard-of for a careless forecaster to send in a 50-degree reading as 500 degrees. The more fundamental issue, though, is that we can observe our surroundings with only a certain degree of precision. No thermometer is perfect, and it isn’t physically possible to stick one into every molecule in the atmosphere.

Weather also has two additional properties that make forecasting even more difficult. First, weather is nonlinear, meaning that it abides by exponential rather than by arithmetic relationships. Second, it’s dynamic — its behavior at one point in time influences its behavior in the future. Imagine that we’re supposed to be taking the sum of 5 and 5, but we keyed in the second number as 6 by mistake. That will give us an answer of 11 instead of 10. We’ll be wrong, but not by much; addition, as a linear operation, is pretty forgiving. Exponential operations, however, extract a lot more punishment when there are inaccuracies in our data. If instead of taking 5^5 — which should be 3,125 — we take 5^6, we wind up with an answer of 15,625. This problem quickly compounds when the process is dynamic, because outputs at one stage of the process become our inputs in the next.
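
To make the excerpt’s arithmetic concrete, here is a toy illustration; the logistic map in the second half is a textbook chaotic system used only as a stand-in for a dynamic process, not something from the excerpt:

```python
# A keying error (6 instead of 5) barely matters under addition, matters a lot
# under exponentiation, and compounds when outputs become the next inputs.
print(5 + 5, "vs", 5 + 6)      # 10 vs 11        -- linear: off by 1
print(5 ** 5, "vs", 5 ** 6)    # 3125 vs 15625   -- exponential: off by a factor of 5

# Dynamic: iterate the same rule from two nearly identical starting points and
# watch the tiny difference grow until the trajectories bear no resemblance.
a, b = 0.400000, 0.400001
for step in range(1, 31):
    a, b = 4 * a * (1 - a), 4 * b * (1 - b)   # logistic map, a standard chaotic example
    if step % 10 == 0:
        print(f"step {step}: {a:.4f} vs {b:.4f}  (gap {abs(a - b):.4f})")
```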

Given how daunting the challenge was, it must have been tempting to give up on the idea of building a dynamic weather model altogether. A thunderstorm might have remained roughly as unpredictable as an earthquake. But by embracing the uncertainty of the problem, meteorologists began to make progress. “What may have distinguished [me] from those that preceded,” Lorenz later reflected in “The Essence of Chaos,” his 1993 book, “was the idea that chaos was something to be sought rather than avoided.”

Perhaps because chaos theory has been a part of meteorological thinking for nearly four decades, professional weather forecasters have become comfortable treating uncertainty the way a stock trader or poker player might. When weather.gov says that there’s a 20 percent chance of rain in Central Park, it’s because the National Weather Service recognizes that our capacity to measure and predict the weather is accurate only up to a point. “The forecasters look at lots of different models: Euro, Canadian, our model — there’s models all over the place, and they don’t tell the same story,” Ben Kyger, a director of operations for the National Oceanic and Atmospheric Administration, told me. “Which means they’re all basically wrong.” The National Weather Service forecasters who adjusted temperature gradients with their light pens were merely interpreting what was coming out of those models and making adjustments themselves. “I’ve learned to live with it, and I know how to correct for it,” Kyger said. “My whole career might be based on how to interpret what it’s telling me.”
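
One simplified way to see how disagreeing models can be turned into a probability (an illustration only, not the National Weather Service’s actual procedure) is to report the fraction of model runs that produce rain:

```python
# Hypothetical ensemble of model runs: the forecast probability of rain is just
# the share of runs in which it rained.
ensemble_rain = [False, True, False, False, False, False, False, False, True, False]
p_rain = sum(ensemble_rain) / len(ensemble_rain)
print(f"Chance of rain: {p_rain:.0%}")   # 20% from these made-up runs
```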

Despite their astounding ability to crunch numbers in nanoseconds, there are still things that computers can’t do, contends Hoke at the National Weather Service. They are especially bad at seeing the big picture when it comes to weather. They are also too literal, unable to recognize the pattern once it’s subjected to even the slightest degree of manipulation. Supercomputers, for instance, aren’t good at forecasting atmospheric details in the center of storms. One particular model, Hoke said, tends to forecast precipitation too far south by around 100 miles under certain weather conditions in the Eastern United States. So whenever forecasters see that situation, they know to forecast the precipitation farther north.

Still curious? Read The Signal and the Noise. While you’re at it, check out Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better and Expert Political Judgment: How Good Is It? How Can We Know?.