
Predicting the Future with Bayes’s Theorem

In a recent podcast, we talked with professional poker player Annie Duke about thinking in probabilities, something good poker players do all the time. At the poker table or in life, it's far more useful to think in probabilities than in absolutes, weighing all the information available to you; you'll make better decisions and get better outcomes. Probabilistic thinking leads you to ask yourself: how confident am I in this prediction, and what information would change that confidence?

Bayes’s Theorem

Bayes’s theorem is an accessible way of integrating probabilistic thinking into our lives. Thomas Bayes was an English minister in the 18th century whose most famous work, “An Essay towards Solving a Problem in the Doctrine of Chances,” was brought to the attention of the Royal Society in 1763—two years after his death—by his friend Richard Price. The essay did not contain the theorem as we now know it, but it held the seeds of the idea: how we should adjust our estimates of probabilities when we encounter new data that bear on a situation. Later development by the French scholar Pierre-Simon Laplace and others codified the theorem and turned it into a useful tool for thinking.

Knowing the exact math of probability calculations is not the key to understanding Bayesian thinking. More critical is your willingness to assign probabilities of truth and accuracy to anything you think you know, and then to update those probabilities when new information comes in. Here is a short example, found in Investing: The Last Liberal Art, of how it works:

Let’s imagine that you and a friend have spent the afternoon playing your favorite board game, and now, at the end of the game, you are chatting about this and that. Something your friend says leads you to make a friendly wager: that with one roll of the die from the game, you will get a 6. Straight odds are one in six, a 16 percent probability. But then suppose your friend rolls the die, quickly covers it with her hand, and takes a peek. “I can tell you this much,” she says; “it’s an even number.” Now you have new information and your odds change dramatically to one in three, a 33 percent probability. While you are considering whether to change your bet, your friend teasingly adds: “And it’s not a 4.” With this additional bit of information, your odds have changed again, to one in two, a 50 percent probability. With this very simple example, you have performed a Bayesian analysis. Each new piece of information affected the original probability, and that is Bayesian [updating].
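
For readers who like to see the arithmetic spelled out, here is a minimal sketch of the same die example in Python (the language and the function name are ours, not the book's). It simply enumerates the outcomes that remain possible and re-counts them each time a clue arrives, which is all the Bayesian updating in the story amounts to:

```python
from fractions import Fraction

def probability_of_six(outcomes):
    """Probability that the roll is a 6, given the outcomes still possible."""
    return Fraction(sum(1 for o in outcomes if o == 6), len(outcomes))

# Before any clues: all six faces are possible.
outcomes = [1, 2, 3, 4, 5, 6]
print(probability_of_six(outcomes))   # 1/6  (~16%)

# Clue 1: "it's an even number" -- discard the odd faces.
outcomes = [o for o in outcomes if o % 2 == 0]
print(probability_of_six(outcomes))   # 1/3  (~33%)

# Clue 2: "and it's not a 4" -- discard 4 as well.
outcomes = [o for o in outcomes if o != 4]
print(probability_of_six(outcomes))   # 1/2  (50%)
```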

Both Nate Silver and Eliezer Yudkowsky have written about Bayes’s theorem in the context of medical testing, specifically mammograms. Imagine you live in a country with 100 million women under 40. Past trends have revealed that there is a 1.4% chance of a woman under 40 in this country getting breast cancer—so roughly 1.4 million women.

Mammograms will detect breast cancer 75% of the time. They will give false positives—telling a woman she has breast cancer when she actually doesn’t—about 10% of the time. At first, you might focus just on the mammogram numbers and think that a 75% detection rate means a positive result is bad news. Let’s do the math.

If all the women under 40 get mammograms, then the false positive rate means roughly 10 million women under 40 will get the news that they have breast cancer. But because you know the first statistic, that only 1.4 million women under 40 actually get breast cancer, you know that roughly 8.6 million of the women who tested positive are not actually going to have breast cancer!
That’s a lot of needless worrying, which leads to a lot of needless medical care. In order to remedy this poor understanding and make better decisions about using mammograms, we absolutely must consider prior knowledge when we look at the results, and try to update our beliefs with that knowledge in mind.
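
To see how much the prior matters, here is a rough sketch of the same calculation in Python, using the article's numbers (1.4% prevalence, 75% detection rate, 10% false-positive rate); the variable names are ours. It asks the question that actually matters to a woman with a positive result: given the positive test, what is the chance she really has cancer?

```python
# The article's numbers: prevalence, sensitivity, and false-positive rate.
p_cancer = 0.014            # 1.4% of women under 40 develop breast cancer
p_pos_given_cancer = 0.75   # mammograms detect cancer 75% of the time
p_pos_given_healthy = 0.10  # 10% false-positive rate

# Total probability of a positive test (true positives + false positives).
p_positive = (p_pos_given_cancer * p_cancer
              + p_pos_given_healthy * (1 - p_cancer))

# Bayes's theorem: P(cancer | positive test).
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_positive

print(f"P(cancer | positive mammogram) = {p_cancer_given_pos:.1%}")
# Roughly 9.6%: most positive results come from healthy women,
# because healthy women vastly outnumber those with cancer.
```

Even with the article's rounded figures, the lesson is the same: a positive test moves the probability from about 1.4% to about 10%, not to 75%.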

Weigh the Evidence

Often we ignore prior information, simply called “priors” in Bayesian-speak. We can blame this habit in part on the availability heuristic—we focus on what’s readily available. In this case, we focus on the newest information and the bigger picture gets lost. We fail to adjust the probability of old information to reflect what we have learned.

The big idea behind Bayes’s theorem is that we must continuously update our probability estimates as new information arrives. In his book The Signal and the Noise, Nate Silver gives a contemporary example, reminding us that new information is often most useful when we put it in the larger context of what we already know:

Bayes’s theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Skeptics react with glee, while true believers dismiss the new information.

A better response is to use Bayes’s theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming—but only a little.

The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’s theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way.
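
One convenient rule-of-thumb form of that weighing is the odds version of Bayes’s theorem: posterior odds = prior odds × likelihood ratio. The sketch below is ours, with made-up illustrative numbers rather than anything from Silver’s book; it shows how evidence with a likelihood ratio close to 1 (weak evidence, like a decade of flat temperatures that the models could easily produce anyway) barely moves a strong prior, while strongly diagnostic evidence moves it a lot:

```python
def update_odds(prior_prob, likelihood_ratio):
    """Posterior probability via the odds form of Bayes's theorem.

    likelihood_ratio = P(observation | hypothesis true) /
                       P(observation | hypothesis false)
    """
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical numbers for illustration only.
prior = 0.95              # strong prior confidence in the hypothesis
weak_evidence_lr = 0.8    # observation only slightly favors "hypothesis false"
strong_evidence_lr = 0.1  # observation strongly favors "hypothesis false"

print(update_odds(prior, weak_evidence_lr))    # ~0.94 -- confidence drops a little
print(update_odds(prior, strong_evidence_lr))  # ~0.66 -- confidence drops a lot
```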

Limitations of the Bayesian Approach

Don’t walk away thinking the Bayesian approach will enable you to predict everything! In addition to seeing the world as an ever-shifting array of probabilities, we must also remember the limitations of inductive reasoning. A high probability of something being true is not the same as saying it is true. A great example of this is from Bertrand Russell’s The Problems of Philosophy:

A horse which has been often driven along a certain road resists the attempt to drive him in a different direction. Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.

In the final analysis, though, picking up Bayesian reasoning can truly change your life, as observed in this Big Think video by Julia Galef of the Center for Applied Rationality:

After you’ve been steeped in Bayes’s rule for a little while, it starts to produce some fundamental changes to your thinking. For example, you become much more aware that your beliefs are grayscale. They’re not black and white; you have levels of confidence in your beliefs about how the world works that are less than 100 percent but greater than zero percent. Even more importantly, as you go through the world and encounter new ideas and new evidence, that level of confidence fluctuates as you encounter evidence for and against your beliefs.

So be okay with uncertainty, and use it to your advantage. Instead of holding on to outdated beliefs by rejecting new information, take in what comes your way through a system of evaluating probabilities.

Bayes’s Theorem is part of the Farnam Street latticework of mental models. Still Curious? Read Bayes and Deadweight: Using Statistics to Eject the Deadweight From Your Life next. 


What Are You Doing About It? Reaching Deep Fluency with Mental Models

The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)

The whole idea is to take the world’s greatest, most useful ideas and make them work for you!

How hard can it be?

Nearly all of the models themselves are perfectly well understandable by the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayesian updating, multiplicative thinking, hindsight bias, or the bias from envy and jealousy, are all obviously true and part of the reality we live in.

There’s a bit of a problem we’re seeing though: People are reading the stuff, enjoying it, agreeing with it…but not taking action. It’s not becoming part of their standard repertoire.

Let’s say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes’s great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white. Great!

But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn’t,” then haven’t you really wasted your time?

Ironically, it’s this habit of “going halfway” instead of “going all the way,” like Sisyphus getting the boulder only halfway up the mountain, that is the biggest waste of time!

See, the common reason why people don’t truly “follow through” with all of this stuff is that they haven’t raised their knowledge to a “deep fluency” — they’re skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.

The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.

Let’s work through an example.

***

Say you’re just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we’ve landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.

This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:

“Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.”

Now, what most people do, the ones you’re trying to outperform, is say “Great idea! Thanks, Galbraith,” and then stop thinking about it.

Don’t do that!

The next step would be to push a bit further, to get beyond the sound bite: What’s the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?

The answers are out there: They’re in Daniel Kahneman, in Charlie Munger, and in Jon Elster. They’re available by searching through Farnam Street.

The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.

Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That’s fluency. And you must be careful not to fool yourself, because in the wise words of Feynman, “…you are the easiest person to fool.”

While that’s great work, you’re not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.

The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else’s that are obviously sound.

In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies — Darwin, and Munger.

Darwin had a rule, one we have written about before but will restate here: Make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.

As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Now we’re getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.

Sometimes when we get outside the heuristic/biases stuff, it’s less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.

But that’s also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you’ll come up with hundreds of them, and people might even look to you when they’re having problems doing it themselves!

Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they’ve learned.

For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That’s how it’s done.

Even if you can’t come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.

Sometimes it’s just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)

We did this very thing recently with Lee Kuan Yew’s Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!

But that’s exactly the point. Give the thing a name and a life and, like clockwork, you’ll start recalling it. The phrase “Lee Kuan Yew’s Rule” actually appears in my head when I’m approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I’d hoped.

Your goal should be to create about a thousand of those little tools in your head, attached to a deep fluency in the material from which they came.

***

I can hear the objection coming. Who has time for this stuff?

You do. It’s about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you’re already spending far more time than that making sub-optimal decisions and dealing with the fallout.

If you need help learning to manage your time right this second, check out our Productivity Seminar, one that’s changed some people’s lives entirely. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you’ll notice you do have an hour a day to spend on this Big Ideas stuff. It’s worth the 59 bucks.

If you don’t have 59 bucks, at least imitate Cal Newport and start scheduling your days, putting an hour in there for “Getting better at making all of my decisions.”

Once you find that solid hour (or more), start using it in the way outlined above, and let the world’s great knowledge actually start making an impact. Just do a little every day.

What you’ll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.

Unless and until you really understand this, you’ll continue spinning your wheels. So here’s your call to action. Go get to it!