
Farnam Street

Mastering the best of what other people have already figured out


What makes predictions succeed or fail?

That’s the ambitious question that Nate Silver tries to answer in The Signal and the Noise.

The book appeals to me because it “takes a comprehensive look at prediction across 13 fields, ranging from sports betting to earthquake forecasting.” Despite our best efforts, we’re not that great at prediction.

Silver published an excerpt of his book in the Times. While most disciplines are not good at making predictions, weather forecasters have managed to beat the odds and improve their accuracy over time. So what, if anything, can we learn from them?

The problem with weather is that our knowledge of its initial conditions is highly imperfect, both in theory and practice. A meteorologist at the National Oceanic and Atmospheric Administration told me that it wasn’t unheard-of for a careless forecaster to send in a 50-degree reading as 500 degrees. The more fundamental issue, though, is that we can observe our surroundings with only a certain degree of precision. No thermometer is perfect, and it isn’t physically possible to stick one into every molecule in the atmosphere.

Weather also has two additional properties that make forecasting even more difficult. First, weather is nonlinear, meaning that it abides by exponential rather than by arithmetic relationships. Second, it’s dynamic: its behavior at one point in time influences its behavior in the future. Imagine that we’re supposed to be taking the sum of 5 and 5, but we keyed in the second number as 6 by mistake. That will give us an answer of 11 instead of 10. We’ll be wrong, but not by much; addition, as a linear operation, is pretty forgiving. Exponential operations, however, extract a lot more punishment when there are inaccuracies in our data. If instead of taking 5⁵ (which is 3,125) we instead take 5⁶, we wind up with an answer of 15,625. This problem quickly compounds when the process is dynamic, because outputs at one stage of the process become our inputs in the next.
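The arithmetic above can be checked directly, and the compounding effect can be sketched with the logistic map, a textbook chaotic system (the map itself is my illustration, not something from Silver's book; the numbers are made up):

```python
# Linear: keying in 6 instead of 5 changes the sum by exactly 1.
assert (5 + 5, 5 + 6) == (10, 11)

# Exponential: the same slip multiplies the result by 5.
assert (5 ** 5, 5 ** 6) == (3125, 15625)

# Dynamic + nonlinear: iterate the logistic map from two starting
# points that differ by one part in a million, and track how far
# apart the trajectories drift.
def diverge(x, y, steps=50, r=4.0):
    worst = abs(x - y)
    for _ in range(steps):
        x = r * x * (1 - x)   # each output becomes the next input
        y = r * y * (1 - y)
        worst = max(worst, abs(x - y))
    return worst

print(diverge(0.2, 0.200001))  # the millionth-of-a-unit gap grows by orders of magnitude
```

After a few dozen iterations the two trajectories bear no resemblance to each other, which is exactly why a tiny measurement error can wreck a weather forecast days out.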

Given how daunting the challenge was, it must have been tempting to give up on the idea of building a dynamic weather model altogether. A thunderstorm might have remained roughly as unpredictable as an earthquake. But by embracing the uncertainty of the problem, meteorologists’ predictions started to make progress. “What may have distinguished [me] from those who preceded,” Lorenz later reflected in “The Essence of Chaos,” his 1993 book, “was the idea that chaos was something to be sought rather than avoided.”

Perhaps because chaos theory has been a part of meteorological thinking for nearly four decades, professional weather forecasters have become comfortable treating uncertainty the way a stock trader or poker player might. When weather.gov says that there’s a 20 percent chance of rain in Central Park, it’s because the National Weather Service recognizes that our capacity to measure and predict the weather is accurate only up to a point. “The forecasters look at lots of different models: Euro, Canadian, our model — there’s models all over the place, and they don’t tell the same story,” Ben Kyger, a director of operations for the National Oceanic and Atmospheric Administration, told me. “Which means they’re all basically wrong.” The National Weather Service forecasters who adjusted temperature gradients with their light pens were merely interpreting what was coming out of those models and making adjustments themselves. “I’ve learned to live with it, and I know how to correct for it,” Kyger said. “My whole career might be based on how to interpret what it’s telling me.”
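One simple way to turn a pile of disagreeing models into a probability, sketched below with invented model names and outcomes (real ensemble forecasting weights and perturbs runs far more carefully), is to report the share of runs that predict rain:

```python
# Hypothetical ensemble: does each model run predict rain in Central Park?
ensemble = {
    "euro": True,
    "canadian": False,
    "us_model": False,
    "perturbed_run_1": False,
    "perturbed_run_2": True,
}

# The probabilistic forecast is the fraction of runs predicting rain.
chance_of_rain = sum(ensemble.values()) / len(ensemble)
print(f"{chance_of_rain:.0%} chance of rain")  # 40% chance of rain
```

A “20 percent chance of rain” is a statement of this kind: not a hedge, but an honest summary of how often rain shows up across plausible model runs.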

Despite their astounding ability to crunch numbers in nanoseconds, there are still things that computers can’t do, contends Hoke at the National Weather Service. They are especially bad at seeing the big picture when it comes to weather. They are also too literal, unable to recognize the pattern once it’s subjected to even the slightest degree of manipulation. Supercomputers, for instance, aren’t good at forecasting atmospheric details in the center of storms. One particular model, Hoke said, tends to forecast precipitation too far south by around 100 miles under certain weather conditions in the Eastern United States. So whenever forecasters see that situation, they know to forecast the precipitation farther north.
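The human correction Hoke describes amounts to a known-bias adjustment. A minimal sketch, where the function, the regime flag, and the coordinate convention (miles north as positive) are all my invention and only the 100-mile figure comes from the article:

```python
KNOWN_SOUTHWARD_BIAS_MILES = 100  # the offset forecasters have learned to expect

def corrected_precip_position(model_position_miles, biased_regime):
    """Shift the model's precipitation band back north when the
    known-biased weather regime applies; otherwise trust the model."""
    if biased_regime:
        return model_position_miles + KNOWN_SOUTHWARD_BIAS_MILES
    return model_position_miles

print(corrected_precip_position(500, biased_regime=True))   # 600
print(corrected_precip_position(500, biased_regime=False))  # 500
```

The point is not the arithmetic but the division of labor: the computer produces the raw forecast, and the human applies pattern knowledge the model lacks.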

Still curious? Read The Signal and the Noise. While you’re at it, check out Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better and Expert Political Judgment: How Good Is It? How Can We Know?.


© 2023 Farnam Street Media Inc. All Rights Reserved.