In this video, stats guru and political forecaster Nate Silver (author of The Signal and the Noise: Why So Many Predictions Fail but Some Don’t) reveals why most predictions fail, and shows how we can isolate a true “signal” from a universe of increasingly big and noisy data.
Best known for accurate election predictions, statistician Nate Silver is also the author of The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t. Heather Bell, Managing Editor of Journal of Indexes, recently spoke with Silver.
IU: What do you see as the common theme among bad predictions? What most often leads people astray?
Silver: A lot of it is overconfidence. People tend to underestimate the uncertainty that is intrinsic to a problem. If you ask someone to estimate a confidence interval that is supposed to cover 90 percent of all outcomes, it usually covers only 50 percent. You certainly have upside outcomes and downside outcomes in the market more often than people realize.
There are a variety of reasons for this. Part of it is that we can sometimes get stuck in the recent past and examples that are most familiar to us, kind of what Daniel Kahneman called “the availability heuristic,” where we assume that the current trend will always perpetuate itself, when actually it can be an anomaly or a fluke, or where we always think that the period we’re living through is the “signal,” so to speak. That’s often not true—sometimes you’re living in the outlier period, like when you have a housing bubble period that you haven’t historically had before.
Overconfidence is the core linkage between most of the failures of predictions that we’ve looked at. Obviously, you can look at that in a more technical sense and see where sometimes people are fitting models where they don’t have as much data as they think, but the root of it comes down to a failure to understand that it’s tough to be objective and that we often come at a problem with different biases and perverse incentives—and if we don’t check those, we tend to get ourselves into trouble.
IU: What standards or conditions must be met, in your opinion, for something to be considered “predictable”?
Silver: I tend not to think in terms of black-and-white absolutes. There are two ways to define “predictable,” I’d say. One is to ask, How well are we able to model the system? The other is more of a cosmic predictability: How intrinsically random is something over the long run?
I look at baseball as an example. Even the best teams only win about two-thirds of their games. Even the best hitters only get on base about 40 percent of the time. In that sense, baseball is highly unpredictable. In another sense though, baseball is very easy to measure relative to a lot of other things. It’s easy to set up models for it, and the statistics are of very high quality. A lot of smart people have worked on the problem. As a result, we are able to measure and quantify the uncertainty pretty accurately. We still can’t predict who’s going to win every game, but we are doing a pretty good job with that. Things are predictable in theory, but our capabilities are not nearly as strong.
Predictability is a tricky question, but I always say we almost always have some notion of what’s going to happen next, but it’s just never a perfect notion. The question is more, Where do you sit along that spectrum?
From The Signal And The Noise:
Successful gamblers – and successful forecasters of any kind – do not think of the future in terms of no-lose bets, unimpeachable theories, and infinitely precise measurements. These are the illusions of the sucker, the sirens of his overconfidence. Successful gamblers, instead, think of the future as speckles of probability, flickering upward and downward like a stock market ticker to every new jolt of information. When their estimates of these probabilities diverge by a sufficient margin from the odds on offer, they may place a bet.
This sounds an awful lot like how Warren Buffett, Charlie Munger, and Benjamin Graham think about investing.
Ben Graham, my friend and teacher, long ago described the mental attitude toward market fluctuations that I believe to be most conducive to investment success. He said that you should imagine market quotations as coming from a remarkably accommodating fellow named Mr. Market who is your partner in a private business. Without fail, Mr. Market appears daily and names a price at which he will either buy your interest or sell you his.
Even though the business that the two of you own may have economic characteristics that are stable, Mr. Market’s quotations will be anything but. For, sad to say, the poor fellow has incurable emotional problems. At times he feels euphoric and can see only the favorable factors affecting the business. When in that mood, he names a very high buy-sell price because he fears that you will snap up his interest and rob him of imminent gains. At other times he is depressed and can see nothing but trouble ahead for both the business and the world. On these occasions he will name a very low price, since he is terrified that you will unload your interest on him.
Mr. Market has another endearing characteristic: He doesn’t mind being ignored. If his quotation is uninteresting to you today, he will be back with a new one tomorrow. Transactions are strictly at your option. Under these conditions, the more manic-depressive his behavior, the better for you.
But, like Cinderella at the ball, you must heed one warning or everything will turn into pumpkins and mice: Mr. Market is there to serve you, not to guide you. It is his pocketbook, not his wisdom, that you will find useful. If he shows up some day in a particularly foolish mood, you are free to either ignore him or to take advantage of him, but it will be disastrous if you fall under his influence. Indeed, if you aren’t certain that you understand and can value your business far better than Mr. Market, you don’t belong in the game. As they say in poker, “If you’ve been in the game 30 minutes and you don’t know who the patsy is, you’re the patsy.”
And Charlie Munger’s take:
The model I like—to sort of simplify the notion of what goes on in a market for common stocks—is the pari-mutuel system at the racetrack. If you stop to think about it, a pari-mutuel system is a market. Everybody goes there and bets and the odds change based on what’s bet. That’s what happens in the stock market.
Any damn fool can see that a horse carrying a light weight with a wonderful win rate and a good post position etc., etc. is way more likely to win than a horse with a terrible record and extra weight and so on and so on. But if you look at the odds, the bad horse pays 100 to 1, whereas the good horse pays 3 to 2. Then it’s not clear which is statistically the best bet using the mathematics of Fermat and Pascal. The prices have changed in such a way that it’s very hard to beat the system.
And then the track is taking 17% off the top. So not only do you have to outwit all the other betters, but you’ve got to outwit them by such a big margin that on average, you can afford to take 17% of your gross bets off the top and give it to the house before the rest of your money can be put to work.
Given those mathematics, is it possible to beat the horses only using one’s intelligence? Intelligence should give some edge, because lots of people who don’t know anything go out and bet lucky numbers and so forth. Therefore, somebody who really thinks about nothing but horse performance and is shrewd and mathematical could have a very considerable edge, in the absence of the frictional cost caused by the house take.
Unfortunately, what a shrewd horseplayer’s edge does in most cases is to reduce his average loss over a season of betting from the 17% that he would lose if he got the average result to maybe 10%. However, there are actually a few people who can beat the game after paying the full 17%.
I used to play poker when I was young with a guy who made a substantial living doing nothing but bet harness races…. Now, harness racing is a relatively inefficient market. You don’t have the depth of intelligence betting on harness races that you do on regular races. What my poker pal would do was to think about harness races as his main profession. And he would bet only occasionally when he saw some mispriced bet available. And by doing that, after paying the full handle to the house—which I presume was around 17%—he made a substantial living.
You have to say that’s rare. However, the market was not perfectly efficient. And if it weren’t for that big 17% handle, lots of people would regularly be beating lots of other people at the horse races. It’s efficient, yes. But it’s not perfectly efficient. And with enough shrewdness and fanaticism, some people will get better results than others.
The stock market is the same way—except that the house handle is so much lower. If you take transaction costs—the spread between the bid and the ask plus the commissions—and if you don’t trade too actively, you’re talking about fairly low transaction costs. So that with enough fanaticism and enough discipline, some of the shrewd people are going to get way better results than average in the nature of things.
It is not a bit easy. And, of course, 50% will end up in the bottom half and 70% will end up in the bottom 70%. But some people will have an advantage. And in a fairly low transaction cost operation, they will get better than average results in stock picking.
How do you get to be one of those who is a winner—in a relative sense—instead of a loser?
Here again, look at the pari-mutuel system. I had dinner last night by absolute accident with the president of Santa Anita. He says that there are two or three betters who have a credit arrangement with them, now that they have off-track betting, who are actually beating the house. They’re sending money out net after the full handle—a lot of it to Las Vegas, by the way—to people who are actually winning slightly, net, after paying the full handle. They’re that shrewd about something with as much unpredictability as horse racing.
And the one thing that all those winning betters in the whole history of people who’ve beaten the pari-mutuel system have is quite simple. They bet very seldom.
It’s not given to human beings to have such talent that they can just know everything about everything all the time. But it is given to human beings who work hard at it—who look and sift the world for a mispriced bet—that they can occasionally find one.
And the wise ones bet heavily when the world offers them that opportunity. They bet big when they have the odds. And the rest of the time, they don’t. It’s just that simple.
That is a very simple concept. And to me it’s obviously right—based on experience not only from the pari-mutuel system, but everywhere else.
And yet, in investment management, practically nobody operates that way. We operate that way—I’m talking about Buffett and Munger. And we’re not alone in the world. But a huge majority of people have some other crazy construct in their heads. And instead of waiting for a near cinch and loading up, they apparently ascribe to the theory that if they work a little harder or hire more business school students, they’ll come to know everything about everything all the time.
Silver concludes, “Finding patterns is easy in any kind of data-rich environment; that’s what mediocre gamblers do. The key is in determining whether the patterns represent signal or noise.”
Thomas Bayes was an English minister in the first half of the 18th century, whose (now) most famous work, “An Essay towards Solving a Problem in the Doctrine of Chances,” was brought to the attention of the Royal Society in 1763 – two years after his death – by his friend Richard Price. The essay, the key to what we now know as Bayes’s theorem, concerned how we should adjust probabilities when we encounter new data.
In The Signal And The Noise, Nate Silver explains the theory:
[Richard] Price, in framing Bayes’s essay, gives the example of a person who emerges into the world (perhaps he is Adam, or perhaps he came from Plato’s cave) and sees the sun rise for the first time. At first, he does not know whether this is typical or some sort of freak occurrence. However, each day that he survives and the sun rises again, his confidence increases that it is a permanent feature of nature. Gradually, through this purely statistical form of inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches (although never exactly reaches) 100 percent.
The argument made by Bayes and Price is not that the world is intrinsically probabilistic or uncertain. Bayes was a believer in divine perfection; he was also an advocate of Isaac Newton’s work, which had seemed to suggest that nature follows regular and predictable laws. It is, rather, a statement—expressed both mathematically and philosophically—about how we learn about the universe: that we learn about it through approximation, getting closer and closer to the truth as we gather more evidence.
This contrasted with the more skeptical viewpoint of the Scottish philosopher David Hume, who argued that since we could not be certain that the sun would rise again, a prediction that it would was inherently no more rational than one that it wouldn’t. The Bayesian viewpoint, instead, regards rationality as a probabilistic matter. In essence, Bayes and Price are telling Hume, don’t blame nature because you are too daft to understand it: if you step out of your skeptical shell and make some predictions about its behavior, perhaps you will get a little closer to the truth.
Despite its name, the theorem in its modern form owes much of its development to the French mathematician and astronomer Pierre-Simon Laplace, who arrived at it independently and gave it the form we use today.
Laplace believed in scientific determinism: given the location of every particle in the universe and enough computing power, we could predict the universe perfectly. However, it was the disconnect between the perfection of nature and our human imperfections in measuring and understanding it that led to Laplace’s involvement in a theory based on probabilism.
Laplace was frustrated at the time by astronomical observations that appeared to show anomalies in the orbits of Jupiter and Saturn — they seemed to predict that Jupiter would crash into the sun while Saturn would drift off into outer space. These predictions were, of course, quite wrong, and Laplace devoted much of his life to developing far more accurate measurements of the planets’ orbits. The improvements he made relied on probabilistic inferences in lieu of exacting measurements, since instruments like the telescope were still very crude at the time. Laplace came to view probability as a waypoint between ignorance and knowledge. It seemed obvious to him that a more thorough understanding of probability was essential to scientific progress.
The Bayesian approach to probability is simple: take the odds of something happening and adjust for new information. This, of course, is most useful when you have strong prior knowledge; if your initial probability is off, the Bayesian approach is much less helpful.
In her book, The Theory That Would Not Die, Sharon Bertsch McGrayne lays out the Bayesian process:
We modify our opinions with objective information: Initial Beliefs + Recent Objective Data = A New and Improved Belief. … Each time the system is recalculated, the posterior becomes the prior of the new iteration. It was an evolving system, which each new bit of information pushed closer and closer to certitude.
Here is a short example, found in Investing: The Last Liberal Art, on how it works:
Let’s imagine that you and a friend have spent the afternoon playing your favorite board game, and now, at the end of the game, you are chatting about this and that. Something your friend says leads you to make a friendly wager: that with one roll of the die from the game, you will get a 6. Straight odds are one in six, a 16 percent probability. But then suppose your friend rolls the die, quickly covers it with her hand, and takes a peek. “I can tell you this much,” she says; “it’s an even number.” Now you have new information and your odds change dramatically to one in three, a 33 percent probability. While you are considering whether to change your bet, your friend teasingly adds: “And it’s not a 4.” With this additional bit of information, your odds have changed again, to one in two, a 50 percent probability. With this very simple example, you have performed a Bayesian analysis. Each new piece of information affected the original probability, and that is a Bayesian inference.
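To make the arithmetic concrete, here is a minimal Python sketch of the same die example: start with a uniform prior over the six faces and renormalize each time a new piece of evidence arrives. The `update` helper is my own convenience for this illustration, not something from the book.

```python
def update(prior, is_consistent):
    """Zero out the faces ruled out by the new evidence, then renormalize."""
    posterior = {face: (p if is_consistent(face) else 0.0) for face, p in prior.items()}
    total = sum(posterior.values())
    return {face: p / total for face, p in posterior.items()}

belief = {face: 1 / 6 for face in range(1, 7)}   # P(rolling a 6) = 1/6, about 16%
belief = update(belief, lambda f: f % 2 == 0)    # "it's an even number" -> 1/3, about 33%
belief = update(belief, lambda f: f != 4)        # "and it's not a 4"    -> 1/2, or 50%

print(round(belief[6], 2))  # 0.5
```

Each call feeds the previous posterior back in as the new prior, which is exactly the iterative process McGrayne describes.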
Knowing the exact math is not really the key to understanding Bayesian thinking, although being able to quantify is a huge advantage in thinking and life.
“Bayes’s theorem,” Silver continues, “is concerned with conditional probability. That is, it tells us the probability that a theory or hypothesis is true if some event has happened.”
When our priors are strong, they can be surprisingly resilient in the face of new evidence. One classic example of this is the presence of breast cancer among women in their forties. The chance that a woman will develop breast cancer in her forties is fortunately quite low — about 1.4 percent. But what is the probability if she has a positive mammogram?
Studies show that if a woman does not have cancer, a mammogram will incorrectly claim that she does only about 10 percent of the time. If she does have cancer, on the other hand, it will detect it about 75 percent of the time. When you see those statistics, a positive mammogram seems like very bad news indeed. But if you apply Bayes’s theorem to these numbers, you’ll come to a different conclusion: the chance that a woman in her forties has breast cancer given that she’s had a positive mammogram is still only about 10 percent. These false positives dominate the equation because very few young women have breast cancer to begin with. For this reason, many doctors recommend that women do not begin getting regular mammograms until they are in their fifties, when the prior probability of having breast cancer is higher.
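As a quick sanity check on those numbers, here is a small sketch of the calculation, P(cancer | positive) = P(positive | cancer) × P(cancer) / P(positive). The `posterior` function and its parameter names are mine, not from the book.

```python
def posterior(prior, p_true_positive, p_false_positive):
    """Probability of the hypothesis given one positive test result (Bayes's theorem)."""
    evidence = p_true_positive * prior + p_false_positive * (1 - prior)
    return p_true_positive * prior / evidence

# 1.4% base rate, 75% detection rate, 10% false-positive rate
p = posterior(prior=0.014, p_true_positive=0.75, p_false_positive=0.10)
print(f"{p:.1%}")  # roughly 10%, despite the positive mammogram
```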
When doing research for this post, I stumbled on Eliezer Yudkowsky’s intuitive explanation (building upon the mammogram example above):
The most common mistake is to ignore the original fraction of women with breast cancer, and the fraction of women without breast cancer who receive false positives, and focus only on the fraction of women with breast cancer who get positive results. For example, the vast majority of doctors in these studies seem to have thought that if around 80% of women with breast cancer have positive mammographies, then the probability of a woman with a positive mammography having breast cancer must be around 80%.
Figuring out the final answer always requires all three pieces of information – the percentage of women with breast cancer, the percentage of women without breast cancer who receive false positives, and the percentage of women with breast cancer who receive (correct) positives.
To see that the final answer always depends on the original fraction of women with breast cancer, consider an alternate universe in which only one woman out of a million has breast cancer. Even if mammography in this world detects breast cancer in 8 out of 10 cases, while returning a false positive on a woman without breast cancer in only 1 out of 10 cases, there will still be a hundred thousand false positives for every real case of cancer detected. The original probability that a woman has cancer is so extremely low that, although a positive result on the mammography does increase the estimated probability, the probability isn’t increased to certainty or even “a noticeable chance”; the probability goes from 1:1,000,000 to 1:100,000.
Similarly, in an alternate universe where only one out of a million women does not have breast cancer, a positive result on the patient’s mammography obviously doesn’t mean that she has an 80% chance of having breast cancer! If this were the case her estimated probability of having cancer would have been revised drastically downward after she got a positive result on her mammography – an 80% chance of having cancer is a lot less than 99.9999%! If you administer mammographies to ten million women in this world, around eight million women with breast cancer will get correct positive results, while one woman without breast cancer will get false positive results. Thus, if you got a positive mammography in this alternate universe, your chance of having cancer would go from 99.9999% up to 99.999987%. That is, your chance of being healthy would go from 1:1,000,000 down to 1:8,000,000.
These two extreme examples help demonstrate that the mammography result doesn’t replace your old information about the patient’s chance of having cancer; the mammography slides the estimated probability in the direction of the result.
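Running the same arithmetic on Yudkowsky’s two alternate universes (an 80 percent detection rate and a 10 percent false-positive rate, with only the prior changed) shows how completely the prior dominates. This is just an illustrative sketch, not code from the essay.

```python
def posterior(prior, p_true_positive=0.8, p_false_positive=0.1):
    """P(cancer | positive test) for a given prior and the test accuracies quoted above."""
    evidence = p_true_positive * prior + p_false_positive * (1 - prior)
    return p_true_positive * prior / evidence

print(posterior(1e-6))      # ~8e-06: still a long shot, on the order of 1 in 100,000
print(posterior(1 - 1e-6))  # ~0.99999987: near-certainty barely moves
```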
Part of the problem is the availability heuristic — we focus on what’s readily available. In this case, that’s the newest information, and the bigger picture gets lost; instead of updating the prior probability in light of the new result, we let the new result stand in for the whole answer.
The big idea behind Bayes’s theorem is that we must continually update our probability estimates as new information arrives.
Let’s take a look at another example, only this time we’ll do some basic algebra.
Consider a somber example: the September 11 attacks. Most of us would have assigned almost no probability to terrorists crashing planes into buildings in Manhattan when we woke up that morning. But we recognized that a terror attack was an obvious possibility once the first plane hit the World Trade Center. And we had no doubt we were being attacked once the second tower was hit. Bayes’s theorem can replicate this result.
For instance, say that before the first plane hit, our estimate of the possibility of a terror attack on tall buildings in Manhattan was just 1 chance in 20,000, or 0.005 percent. However, we would also have assigned a very low probability to a plane hitting the World Trade Center by accident. This figure can actually be estimated empirically: in the previous 25,000 days of aviation over Manhattan prior to September 11, there had been two such accidents: one involving the Empire State Building in 1945 and another at 40 Wall Street in 1946. That would make the possibility of such an accident about 1 chance in 12,500 on any given day. If you use Bayes’s theorem to run these numbers (see below), the probability we’d assign to a terror attack increased from 0.005 percent to 38 percent the moment that the first plane hit.
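The “(see below)” calculation is easy to reproduce in a few lines. The sketch assumes a plane strike is effectively certain once an attack is underway (the assumption needed to arrive at the 38 percent figure), and the variable names are mine.

```python
prior_attack = 1 / 20_000          # 0.005% chance of a terror attack on any given day
p_hit_given_attack = 1.0           # assume a strike is certain if an attack is underway
p_hit_given_accident = 1 / 12_500  # two accidental strikes in 25,000 days of aviation

posterior = (p_hit_given_attack * prior_attack) / (
    p_hit_given_attack * prior_attack
    + p_hit_given_accident * (1 - prior_attack)
)
print(f"{posterior:.0%}")  # about 38%
```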
Weigh the Evidence
Tim Harford adds:
Bayes’ theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Sceptics react with glee, while true believers dismiss the new information.
A better response is to use Bayes’ theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming – but only a little.
The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’ theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way.
Here is another example, this time from Quora. A reader poses the question, “What does it mean when a girl smiles at you every time she sees you?” Another reader, using Bayes’s theorem, replies:
The probability she likes you is
P(likes | smiles) = P(smiles | likes) × P(likes) / P(smiles)
P(likes | smiles) is what you want to know – the probability she likes you given the fact that she smiles at you.
P(smiles | likes) is the probability that she will smile given that she sees someone she likes.
P(likes) is the probability that she likes a random person.
P(smiles) is the probability that she will smile at a random person.
For example, suppose she just smiles at everyone. Then intuition says that the fact that she smiles at you doesn’t mean anything one way or another. Indeed, P(smiles | likes) = 1 and P(smiles) = 1, and we have
P(likes | smiles) = P(likes),
meaning that knowing that she smiles at you doesn’t change anything.
At the other extreme, suppose she smiles at everyone she likes, and only those she likes. Then P(smiles | likes) = 1 and P(smiles) = P(likes). Then we have
P(likes | smiles) = P(likes) / P(likes) = 1,
and she is certain to like you.
In the intermediate case, what you need to do is find the ratio of odds of smiling to people she likes to smiles in general, multiply by the percentage of people she likes, and there is your answer.
The more she smiles in general, the lower the chance she likes you. The more she smiles at people she likes, the better the chance. And of course the more people she likes, the better your chances are.
Of course, how to actually determine these values is a mystery I have never solved.
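For what it’s worth, the formula is trivial to play with in code. The numbers below are invented purely for illustration (a 10 percent base rate of “likes”); they are not part of the Quora answer.

```python
def p_likes_given_smiles(p_smiles_given_likes, p_likes, p_smiles):
    """Bayes's theorem applied to the smile question."""
    return p_smiles_given_likes * p_likes / p_smiles

print(p_likes_given_smiles(1.0, 0.10, 1.00))  # she smiles at everyone: 0.10, just the base rate
print(p_likes_given_smiles(1.0, 0.10, 0.10))  # she smiles only at people she likes: 1.0
print(p_likes_given_smiles(0.9, 0.10, 0.30))  # an intermediate case: 0.30
```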
In The Essential Buffett: Timeless Principles for the New Economy, Robert Hagstrom writes:
Bayesian analysis is an attempt to incorporate all available information into a process for making inferences, or decisions, about the underlying state of nature. Colleges and universities use Bayes’s theorem to help their students study decision making. In the classroom, the Bayesian approach is more popularly called the decision tree theory; each branch of the tree represents new information that, in turn, changes the odds in making decisions. “At Harvard Business School,” explains Charlie Munger, “the great quantitative thing that bonds the first-year class together is what they call decision tree theory. All they do is take high school algebra and apply it to real life problems. The students love it. They’re amazed to find that high school algebra works in life.”
Limitations of the Bayesian Approach
Besides seeing the world as an ever-shifting array of probabilities, we must also remember the limitations of inductive reasoning, such as the “sun rising every day” example given by Price and Bayes above.
The most useful example of this is explained by Nassim Taleb in The Black Swan:
Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
Don’t walk away thinking the Bayesian approach will enable you to predict everything. In fact, with the volume of information increasing exponentially, the future may be as unpredictable as ever, concludes Silver:
There is no reason to conclude that the affairs of man are becoming more predictable. The opposite may well be true. The same sciences that uncover the laws of nature are making the organization of society more complex.
In the final analysis, though, picking up Bayesian reasoning can truly change your life, as said well in this Big Think video by Julia Galef of the Center for Applied Rationality:
After you’ve been steeped in Bayes’ rule for a little while, it starts to produce some fundamental changes to your thinking. For example, you become much more aware that your beliefs are grayscale. They’re not black and white; you have levels of confidence in your beliefs about how the world works that are less than 100 percent but greater than zero percent. Even more importantly, as you go through the world and encounter new ideas and new evidence, that level of confidence fluctuates as you encounter evidence for and against your beliefs.
Bayes’s Theorem is part of the Farnam Street latticework of mental models.
Nate Silver elaborates on the difference between risk and uncertainty in The Signal and the Noise:
Risk, as first articulated by the economist Frank H. Knight in 1921, is something that you can put a price on. Say that you’ll win a poker hand unless your opponent draws to an inside straight: the chances of that happening are exactly 1 chance in 11. This is risk. It is not pleasant when you take a “bad beat” in poker, but at least you know the odds of it and can account for it ahead of time. In the long run, you’ll make a profit from your opponents making desperate draws with insufficient odds.
Uncertainty, on the other hand, is risk that is hard to measure. You might have some vague awareness of the demons lurking out there. You might even be acutely concerned about them. But you have no real idea how many of them there are or when they might strike. Your back-of-the-envelope estimate might be off by a factor of 100 or by a factor of 1,000; there is no good way to know. This is uncertainty. Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt.
Why do so many predictions fail? That’s the ambitious question that Nate Silver tries to answer in The Signal and the Noise.
The book appeals to me because it “takes a comprehensive look at prediction across 13 fields, ranging from sports betting to earthquake forecasting.” Despite our best efforts, we’re not that great at prediction.
Silver published an excerpt of his book in the Times. While most disciplines are not good at making predictions, weather forecasters have managed to beat the odds and improve their accuracy over time. So what, if anything, can we learn from them?
The problem with weather is that our knowledge of its initial conditions is highly imperfect, both in theory and practice. A meteorologist at the National Oceanic and Atmospheric Administration told me that it wasn’t unheard-of for a careless forecaster to send in a 50-degree reading as 500 degrees. The more fundamental issue, though, is that we can observe our surroundings with only a certain degree of precision. No thermometer is perfect, and it isn’t physically possible to stick one into every molecule in the atmosphere.
Weather also has two additional properties that make forecasting even more difficult. First, weather is nonlinear, meaning that it abides by exponential rather than by arithmetic relationships. Second, it’s dynamic — its behavior at one point in time influences its behavior in the future. Imagine that we’re supposed to be taking the sum of 5 and 5, but we keyed in the second number as 6 by mistake. That will give us an answer of 11 instead of 10. We’ll be wrong, but not by much; addition, as a linear operation, is pretty forgiving. Exponential operations, however, extract a lot more punishment when there are inaccuracies in our data. If instead of taking 5⁵ — which should be 3,125 — we instead take 5⁶, we wind up with an answer of 15,625. This problem quickly compounds when the process is dynamic, because outputs at one stage of the process become our inputs in the next.
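A two-line sketch makes the contrast plain, using the same numbers Silver does: the same keying error barely matters under addition but is amplified fivefold under exponentiation.

```python
correct, typo = 5, 6

print(5 + correct, 5 + typo)    # 10 vs 11     -> a linear operation keeps the error small
print(5 ** correct, 5 ** typo)  # 3125 vs 15625 -> an exponential operation multiplies it
```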
Given how daunting the challenge was, it must have been tempting to give up on the idea of building a dynamic weather model altogether. A thunderstorm might have remained roughly as unpredictable as an earthquake. But by embracing the uncertainty of the problem, their predictions started to make progress. “What may have distinguished [me] from those who preceded,” Lorenz later reflected in “The Essence of Chaos,” his 1993 book, “was the idea that chaos was something to be sought rather than avoided.”
Perhaps because chaos theory has been a part of meteorological thinking for nearly four decades, professional weather forecasters have become comfortable treating uncertainty the way a stock trader or poker player might. When weather.gov says that there’s a 20 percent chance of rain in Central Park, it’s because the National Weather Service recognizes that our capacity to measure and predict the weather is accurate only up to a point. “The forecasters look at lots of different models: Euro, Canadian, our model — there’s models all over the place, and they don’t tell the same story,” Ben Kyger, a director of operations for the National Oceanic and Atmospheric Administration, told me. “Which means they’re all basically wrong.” The National Weather Service forecasters who adjusted temperature gradients with their light pens were merely interpreting what was coming out of those models and making adjustments themselves. “I’ve learned to live with it, and I know how to correct for it,” Kyger said. “My whole career might be based on how to interpret what it’s telling me.”
Despite their astounding ability to crunch numbers in nanoseconds, there are still things that computers can’t do, contends Hoke at the National Weather Service. They are especially bad at seeing the big picture when it comes to weather. They are also too literal, unable to recognize the pattern once it’s subjected to even the slightest degree of manipulation. Supercomputers, for instance, aren’t good at forecasting atmospheric details in the center of storms. One particular model, Hoke said, tends to forecast precipitation too far south by around 100 miles under certain weather conditions in the Eastern United States. So whenever forecasters see that situation, they know to forecast the precipitation farther north.
Still curious? Read The Signal and the Noise. While you’re at it, check out Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better and Expert Political Judgment: How Good Is It? How Can We Know?