Tag: Mental Models

Mental Models For a Pandemic

Mental models help us understand the world better, something which is especially valuable during times of confusion, like a pandemic. Here’s how to apply mental models to gain a more accurate picture of reality and keep a cool head.

***

It feels overwhelming when the world changes rapidly, abruptly, and extensively. The changes come so fast it can be hard to keep up—and the future, which a few months ago seemed reliable, now has so many unknown dimensions. In the face of such uncertainty, mental models are valuable tools for helping you think through significant disruptions such as a pandemic.

A mental model is simply a representation of how something works. Mental models are how we simplify complexity, why we consider some things more relevant than others, and how we reason. Using them increases your clarity of understanding, providing direction for the choices you need to make and the options you want to keep open.

Models for ourselves

During a pandemic, a useful model is “the map is not the territory.” In rapidly changing situations like a global health crisis, any reporting is an incomplete snapshot in time. Our maps are going to be inaccurate for many reasons: limited testing availability, poor reporting, ineffective information sharing, lack of expertise in analyzing the available information. The list goes on.

If past reporting hasn’t been completely accurate, then why would you assume current reporting is? You have to be careful when interpreting the information you receive, using it as a marker to scope out a range of what is happening in the territory.

In our current pandemic, we can easily spot our map issues. There aren’t enough tests available in most countries. Because COVID-19 causes mild or no symptoms in the majority of people who contract it, there are likely many people who have it but never meet the testing criteria. Therefore, we don’t know how many people have it.

When we look at country-level reporting, we can also see not all countries are reporting to the same standard. Sometimes this isn’t a matter of “better” or “worse”; there are just different ways of collating the numbers. Some countries don’t have the infrastructure for widespread data collection and sharing. Different countries also have different standards for what counts as a death caused by COVID-19.

In other nations, incentives affect reporting. Some countries downplay their infection rate so as to not create panic. Some governments avoid reporting because it undermines their political interests. Others are more worried about the information on the economic map than the health one.

Although it is important to be realistic about our maps, it doesn’t mean we shouldn’t seek to improve their quality. Paying attention to information from experts and ignoring unverified soundbites is one step to increasing the accuracy of our maps. The more accurate we can get them, the more likely it is that we’ll be able to unlock new possibilities that help us deal with the crisis and plan for the future.

There are two models that we can use to improve the effectiveness of the maps we do have: “compounding” and “probabilistic thinking.”

Compounding is exponential growth, something a lot of us tend to have a poor intuitive grasp of. We see the immediate linear relationships in the situation, like how one test diagnoses one person, while not understanding the compounding effects of that relationship. Increased testing can lead to an exponential decrease in virus transmission because each infected person usually passes the virus on to more than one other person.

One of the clearest stories to illustrate exponential growth is the story of the man who asked to be paid in rice. In this story, a servant is to be rewarded for his service. When asked how he wanted to be paid, he asks to be paid in rice, using a chessboard to determine the final amount. Starting with one grain, the amount of rice is to be doubled for each square. One grain on the first square looks pathetic. But halfway through the chessboard, the servant is making a good yearly living. And by the time the doubling reaches the sixty-fourth square, the servant is owed more rice than the whole world can produce.
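To make the doubling concrete, here is a minimal Python sketch of the chessboard arithmetic (the code is ours; only the story’s numbers are assumed):

```python
# One grain on the first square, doubling on each of the 64 squares.
total = 0
for square in range(1, 65):
    grains = 2 ** (square - 1)  # grains on this square
    total += grains
    if square in (1, 32, 64):
        print(f"square {square}: {grains:,} grains (total so far: {total:,})")

# square 1: 1 grain
# square 32 (halfway): ~2.1 billion grains
# square 64: ~9.2 quintillion grains, more rice than the world produces
```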

Improving our ability to think exponentially helps us understand how more testing can lead to both an exponential decrease in testing prices and an exponential increase in the production of those tests. It also makes clear just how far-reaching the impact of our actions can be if we don’t take precautions with the assumption that we could be infected.

Probabilistic thinking is also invaluable in helping us make decisions based on the incomplete information we have. In the absence of enough testing, for example, we need to use probabilistic thinking to make decisions on what actions to pursue. We ask ourselves questions like: Do I have COVID-19? If there’s a 1% chance I have it, is it worth visiting my grandparents?

Being able to evaluate reasonable probability has huge impacts on how we approach physical distancing. Combining the models of probabilistic thinking and “the map is not the territory” suggests our actions need to be guided by infection numbers much higher than the ones we have. We are likely to make significantly different social decisions if we estimate the probability of infection as being three people out of ten instead of one person out of one thousand.

Bayesian updating can also help clarify the physical distancing actions you should take. There’s a small probability of being part of a horrendous chain of events that might not just have poor direct consequences but also follow you for the rest of your life. When evaluating how responsibly you are limiting transmission, ask yourself: would you bet a loved one’s life on it?
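As a rough sketch of how Bayesian updating works, here is a toy calculation in Python. Every number below is a made-up assumption for illustration, not a real COVID-19 statistic:

```python
# Hypothetical inputs, for illustration only.
prior = 0.01                 # assumed baseline chance of being infected
p_cough_if_infected = 0.80   # assumed chance of a new cough when infected
p_cough_if_healthy = 0.10    # assumed chance of the same cough otherwise

# Bayes' rule: P(infected | cough)
numerator = p_cough_if_infected * prior
evidence = numerator + p_cough_if_healthy * (1 - prior)
posterior = numerator / evidence
print(f"updated probability of infection: {posterior:.1%}")  # about 7.5%
```

A new piece of evidence doesn’t replace the prior; it shifts it, and the size of the shift depends on how much more likely the evidence is under one hypothesis than the other.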

Which leads us to Hanlon’s Razor. It’s hard not to get angry at reports of beach parties during spring break or at the guy four doors down who has his friends over to hang out every night. For your own sanity, try using Hanlon’s Razor to evaluate their behavior. They are not being malicious and trying to kill people. They are just exceptionally and tragically ignorant.

Finally, on a day-to-day basis, trying to make small decisions with incomplete information, you can use inversion. You can look at the problem backwards. When the best way forward is far from clear, you ask yourself what you could do to make things worse, and then avoid doing those things.

Models for society

Applying mental models helps us understand the dynamics of the large-scale social response.

Currently we are seeing counterintuitive measures with first-order negatives (closing businesses) but second- and third-order positives (reduced transmission, less stress on the healthcare system). Second-order thinking is an invaluable tool at all times, including during a pandemic. It’s so important that we encourage the thinking, analysis, and decision-making that factors in the effects of the effects of the decisions we make.

In order to improve the maps that our leaders have to make decisions, we need to sort through the feedback loops providing the content. If we can improve not only the feedback but also the pace of iterations, we have a better chance of making good decisions.

For example, if we improve the rate of testing and the speed of the results, it would be a major game-changer. Imagine if knowing whether you had the virus or not was a $0.01 test that gave you a result in less than a minute. In that case, we could make different decisions about social openness, even in the absence of a vaccine (however, this may have invasive privacy implications, as tracking this would be quite difficult otherwise).

As we watch the pandemic and its consequences unfold, it becomes clear that leadership and authority are not the same thing. Our hierarchical instincts emerge strongly in times of crisis. Leadership vacuums, then, are devastating, and disasters expose the cracks in our hierarchies. However, we also see that people can display strong leadership without needing any authority. A pandemic provides opportunities for such leadership to emerge at community and local levels, providing alternate pathways for meeting the needs of many.

One critical model we can use to look at society during a pandemic is Ecosystems. When we think about ecosystems, we might imagine a variety of organisms interacting in a forest or the ocean. But our cities are also ecosystems, as is the earth as a whole. Understanding system dynamics can give us a lot of insight into what is happening in our societies, both at the micro and macro level.

One property of ecosystems that is useful to contemplate in situations like a pandemic is resilience—the speed at which an ecosystem recovers after a disturbance. There are many factors that contribute to resilience, such as diversity and adaptability. Looking at our global situation, one factor threatening to undermine our collective resilience is that our economy has rewarded razor-thin efficiency in the recent past. The problem with thin margins is they offer no buffer in the face of disruption. Therefore, ecosystems with thin margins are not at all resilient. Small disturbances can bring them down completely. And a pandemic is not a small disturbance.

Some argue that what we are facing now is a Black Swan: an unpredictable event beyond normal expectations with severe consequences. Most businesses are not ready to face one. You could argue that an economic recession is not a black swan, but the particular shape of this pandemic is testing the resiliency of our social and economic ecosystems regardless. The closing of shops and businesses, causing huge disruption, has exposed fragile supply chains. We just don’t see these types of events often enough, even if we know they’re theoretically possible. So we don’t prepare for them. We don’t or can’t create big enough personal and social margins of safety. Individuals and businesses don’t have enough money in the bank. We don’t have enough medical facilities and supplies. Instead, we have optimized for a narrow range of possibilities, compromising the resilience of systems we rely on.

Finally, as we look at the role national borders are playing during this pandemic, we can use the Thermodynamics model to gain insight into how to manage flows of people during and after restrictions. Insulation requires a lot of work, as we are seeing with our borders and the subsequent effect on our economies. It’s unsustainable for long periods of time. Just like how two objects of different temperatures that come into contact with each other eventually reach thermal equilibrium, people will mix with each other. All borders have openings of some sort. It’s important to extend planning to incorporate the realistic tendencies of reintegration.

Some final thoughts about the future

As we look for opportunities to move forward, both as individuals and societies, Cooperation provides a useful lens. Possibly more critical to evolution than competition, cooperation is a powerful force. It’s rampant throughout the biological world; even bacteria cooperate. As a species, we have been cooperating with each other for a long time. All of us have given up some independence for access to resources provided by others.

Pandemics are intensified because of connection. But we can use that same connectivity to mitigate some negative effects by leveraging our community networks to create cooperative interactions that fill gaps in the government response. We can also use the cooperation lens to create more resilient connections in the future.

Finally, we need to ask ourselves how we can improve our antifragility. How can we get to a place where we grow stronger through change and challenge? It’s not about getting “back to normal.” The normal that was our world in 2019 has proven to be fragile. We shouldn’t want to get back to a time when we were unprepared and vulnerable.

Existential threats are a reality of life on earth. One of the best lessons we can learn is to open our eyes and integrate planning for massive change into how we approach our lives. This will not be the last pandemic, no matter how careful we are. The goal now should not be about assigning blame or succumbing to hindsight bias to try to implement rules designed to prevent a similar situation in the future. We will be better off if we make changes aimed at increasing our resilience and embracing the benefits of challenge.

Still curious? Learn more by reading The Great Mental Models.

Using Models to Stay Calm in Charged Situations

When polarizing topics are discussed in meetings, passions can run high and cloud our judgment. Learn how mental models can help you see clearly from this real-life scenario.

***

Mental models can sometimes come off as an abstract concept. They are, however, actual tools you can use to navigate through challenging or confusing situations. In this article, we are going to apply our mental models to a common situation: a meeting with conflict.

A recent meeting with the school gave us an opportunity to use our latticework. Anyone with school-age kids has dealt with the bureaucracy of a school system and the other parents who interact with it. Call it what you will, most school environments have some formal interface between parents and the school administration that is aimed at addressing issues and ideas of importance to the school community.

The particular meeting was an intense one. At issue was the school’s communication around a potentially harmful leak in the heating system. Some parents felt the school had communicated reasonably about the problem and the potential consequences. Others felt their child’s life had been put in danger due to potential exposure to mold and asbestos. Some parents felt the school could have done a better job of soliciting feedback from students about their experiences during the previous week, and others felt the school administration had done a poor job of communicating potential risks to parents.

The first thing you’ll notice if you’re in a meeting like this is that emotions on all sides run high. After some discussion you might also notice a few more things, like how many people do the following:

  • Attribute malicious intent to the school administration
  • Generalize from a single anecdote about one child
  • React to the scariest interpretation instead of the most probable one
  • Talk past the perspectives of everyone else in the room

Any of these occurrences, when you hear them via statements from people around the table, are a great indication that using a few mental models might improve the dynamics of the situation.

The first mental model that is invaluable in situations like this is Hanlon’s Razor: don’t attribute to maliciousness that which is more easily explained by incompetence. (Hanlon’s Razor is one of the 9 general thinking concepts in The Great Mental Models Volume One.) When people feel victimized, they can get angry and lash out in an attempt to fight back against a perceived threat. When people feel accused of serious wrongdoing, they can get defensive and withhold information to protect themselves. Neither of these reactions is useful in a situation like this. Yes, sometimes people intentionally do bad things. But more often than not, bad things are the result of incompetence. In a school meeting situation, it’s safe to assume everyone at the table has the best interests of the students at heart. School staff and administrators usually go into teaching motivated by a deep love of education. They genuinely want their schools to be amazing places of learning, and they devote time and attention to improving the lives of their students.

It makes no sense to assume a school’s administration would deliberately withhold harmful information. Yes, it could happen. But in either case, you are going to obtain more valuable information if you assume poor decisions were the result of incompetence rather than maliciousness.

When we feel people are malicious toward us, we instinctively become a negatively coiled spring, waiting for the right moment to take them down a notch or two. Removing malice from the equation, you give yourself emotional breathing room to work toward better solutions and apply more models.

The next helpful model is relativity, adapted from the laws of physics. This model is about remembering that everyone’s perspective is different from yours. Understanding how others see the same situation can help you move toward a more meaningful dialogue with the people in the meeting. You can do this by looking around the room and asking yourself what is influencing people’s approaches to the situation.

In our school meeting, we see some people are afraid for their child’s health. Others are influenced by past dealings with the school administration. Authorities are worried about closing the school. Teachers are concerned about how missed time might impact their students’ learning. Administrators are trying to balance the needs of parents with their responsibility to follow the necessary procedures. Some parents are stressed because they don’t have care for their children when the school closes. There is a lot going on, and relativity gives us a lens to try to identify the dynamics impacting communication.

After understanding the different perspectives, it becomes easier to incorporate them into your thinking. You can defuse conflict by identifying what it is you think you hear. Often, just the feeling of being heard will help people start to listen and engage more objectively.

Now you can dive into some of the details. First up is probabilistic thinking. Before we worry about mold levels or sick children, let’s try to identify the base rates. What is the mold content in the air outside? How many children are typically absent due to sickness at this time of year? Reminding people that severity has to be evaluated against something in a situation like this can really help defuse stress and concern. If 10% of the student population is absent on any given day, and in the week leading up to these events 12% to 13% of the population was absent, then it turns out we are not actually dealing with a huge statistical anomaly.
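A quick way to check whether such a bump is an anomaly is to compare it against the variation the base rate alone would produce. Here is a minimal sketch, with a hypothetical school size:

```python
import math

# Hypothetical figures: 10% baseline absence, 500 students, 12% observed.
base_rate, students, observed_rate = 0.10, 500, 0.12

expected = students * base_rate
std_dev = math.sqrt(students * base_rate * (1 - base_rate))  # binomial spread
z = (students * observed_rate - expected) / std_dev
print(f"{students * observed_rate:.0f} absences vs {expected:.0f} expected, z = {z:.1f}")
# z is about 1.5: elevated, but within ordinary week-to-week variation
```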

Then you can evaluate the anecdotes with the model of the Law of Large Numbers in mind. Small sample sizes can be misleading. The larger your group for evaluation, the more relevant the conclusions. In a situation such as our school council meeting, small sample sizes only serve to ratchet up the emotion by implying that isolated incidents are the causal outcomes of recent events.

In reality, any one-off occurrence can often be explained in multiple ways. One or two children coming home with hives? There are a dozen reasonable explanations for that: allergies, dry skin, reaction to skin cream, symptom of an illness unrelated to the school environment, and so on. However, the more children that develop hives, the more statistically likely it is that the cause relates to the only common denominator between all the children: the school environment.

Even then, correlation does not equal causation. It might not be a recent leaky steam pipe; is it exam time? Are there other stressors in the culture? Other contaminants in the environment? The larger your sample size, the more likely you will obtain relevant information.
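A short simulation shows why small samples mislead. The 2% background rate below is an arbitrary assumption, chosen only for illustration:

```python
import random

random.seed(42)
true_rate = 0.02  # hypothetical background rate of hives from any cause

for n in (5, 50, 5000):
    cases = sum(random.random() < true_rate for _ in range(n))
    print(f"sample of {n:>4}: observed rate {cases / n:.1%} (true rate 2.0%)")
# Tiny samples swing wildly around the true rate; large ones settle near it.
```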

Finally, you can practice systems thinking and contribute to the discussion by identifying the other components in the system you are all dealing with. After all, a school council is just one part of a much larger system involving governments, school boards, legislators, administrators, teachers, students, parents, and the community. When you put your meeting into the bigger context of the entire system, you can identify the feedback loops: Who is responding to what information, and how quickly does their behavior change? When you do this, you can start to suggest some possible steps and solutions to remedy the situation and improve interactions going forward.

How is the information flowing? How fast does it move? How much time does each recipient have to adjust before receiving more information? Chances are, you aren’t going to know all this at the meeting. So you can ask questions. Does the principal have to get approval from the school board before sending out communications involving risk to students? Can teachers communicate directly with parents? What are the conditions for communicating possible risk? Will speculation increase the speed of a self-reinforcing feedback loop causing panic? What do parents need to know to make an informed decision about the welfare of their child? What does the school need to know to make an informed decision about the welfare of their students?

In meetings like the one described here, there is no doubt that communication is important. Using the meeting to discuss and debate ways of improving communication so that outcomes are generally better in the future is a valuable use of time.

A school meeting is one practical example of how having a latticework of mental models can be useful. Using mental models can help you defuse some of the emotions that create an unproductive dynamic. They can also help you bring forward valuable, relevant information to assist the different parties in improving their decision-making process going forward.

At the very least, you will walk away from the meeting with a much better understanding of how the world works, and you will have gained some strategies you can implement in the future to leverage this knowledge instead of fighting against it.

Prisoner’s Dilemma: What Game Are You Playing?

In this classic game theory experiment, you must decide: rat out another for personal benefit, or cooperate? The answer may be more complicated than you think.

***

What does it take to make people cooperate with each other when the incentives to act primarily out of self-interest are often so strong?

The Prisoner’s Dilemma is a thought experiment originating from game theory. Designed to analyze the ways in which we cooperate, it strips away the variations between specific situations where people are called to overcome the urge to be selfish. Political scientist Robert Axelrod lays down its foundations in The Evolution of Cooperation:

Under what conditions will cooperation emerge in a world of egoists without a central authority? This question has intrigued people for a long time. And for good reason. We all know that people are not angels and that they tend to look after themselves and their own first. Yet we also know that cooperation does occur and that our civilization is based on it. But in situations where each individual has an incentive to be selfish, how can cooperation ever develop?

…To make headway in understanding the vast array of specific situations which have this property, a way is needed to represent what is common to these situations without becoming bogged down in the details unique to each…the famous Prisoner’s Dilemma game.

The thought experiment goes as such: two criminals are in separate cells, unable to communicate, accused of a crime they both participated in. The police do not have enough evidence to convict either of them, though they are certain enough to wish to ensure both spend time in prison. So they offer the prisoners a deal. They can accuse each other of the crime, with the following conditions:

  • If both prisoners say the other did it, each will serve two years in prison.
  • If one prisoner says the other did it and the other stays silent, the accused will serve three years and the accuser zero.
  • If both prisoners stay silent, each will serve one year in prison.

In game theory, the altruistic behavior (staying silent) is called “cooperating,” while accusing the other is called “defecting.”

What should they do?

If they were able to communicate and they trusted each other, the rational choice is to stay silent; that way each serves less time in prison than they would otherwise. But how can each know the other won’t accuse them? After all, people tend to act out of self-interest. The cost of being the one to stay silent is too high. The expected outcome when the game is played is that both accuse the other and serve two years. (In the real world, we doubt it would end there. After they served their time, it’s not hard to imagine each of them still being upset. Two years is a lot of time for a spring to coil in a negative way. Perhaps they spend the rest of their lives sabotaging each other.)
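The conditions above can be written down directly, which makes the dilemma visible. A small Python sketch (the encoding is ours, not part of the original thought experiment):

```python
# Years in prison (lower is better): (my_choice, their_choice) -> my sentence
SENTENCE = {
    ("defect", "defect"): 2,        # both accuse each other
    ("defect", "cooperate"): 0,     # I accuse, they stay silent
    ("cooperate", "defect"): 3,     # I stay silent, they accuse
    ("cooperate", "cooperate"): 1,  # both stay silent
}

for theirs in ("defect", "cooperate"):
    best = min(("defect", "cooperate"), key=lambda mine: SENTENCE[(mine, theirs)])
    print(f"if the other prisoner plays {theirs}, my best reply is {best}")
# Defecting is better in both cases: a dominant strategy. So both defect,
# and each serves two years instead of the one year mutual silence gives.
```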

The Iterated Prisoner’s Dilemma

A more complex form of the thought experiment is the iterated Prisoner’s Dilemma, in which we imagine the same two prisoners being in the same situation multiple times. In this version of the experiment, they are able to adjust their strategy based on the previous outcome.

If we repeat the scenario, it may seem as if the prisoners will begin to cooperate. But this doesn’t make sense in game theory terms. When they know how many times the game will repeat, both have an incentive to accuse on the final round, seeing as there can be no retaliation. Knowing the other will surely accuse on the final round, both have an incentive to accuse on the penultimate round—and so on, back to the start.

Gregory Mankiw summarizes how difficult it is to model cooperation in Business Economics as follows:

To see how difficult it is to maintain cooperation, imagine that, before the police captured . . . the two criminals, [they] had made a pact not to confess. Clearly, this agreement would make them both better off if they both live up to it, because they would each spend only one year in jail. But would the two criminals in fact remain silent, simply because they had agreed to? Once they are being questioned separately, the logic of self-interest takes over and leads them to confess. Cooperation between the two prisoners is difficult to maintain because cooperation is individually irrational.

However, cooperative strategies can evolve if we model the game as having random or infinite iterations. If each prisoner knows they will likely interact with each other in the future, with no knowledge or expectation that their relationship will have a definite end, cooperation becomes significantly more likely. If we imagine that the prisoners will go to the same jail or will run in the same circles once released, we can understand how the incentive for cooperation might increase. If you’re a defector, running into the person you defected on is awkward at best, and leaves you sleeping with the fishes at worst.
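Here is a sketch of the iterated version, using the conventional Axelrod-style point payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 for a lone defector and a lone cooperator); higher is better here, unlike prison years:

```python
# (my_move, their_move) -> my points per round
PAYOFF = {("C", "C"): 3, ("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0}

def tit_for_tat(their_history):
    return their_history[-1] if their_history else "C"  # start nice, then mirror

def always_defect(their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)  # each sees the other's past
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): cooperation sustained
print(play(always_defect, always_defect))  # (200, 200): mutual defection
```

With no known final round, the mirrored strategy never faces the unraveling logic above, and mutual cooperation stays stable.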

Real-world Prisoner’s Dilemmas

We can use the Prisoner’s Dilemma as a means of understanding many real-world situations based on cooperation and trust. As individuals, being selfish tends to benefit us, at least in the short term. But when everyone is selfish, everyone suffers.

In The Prisoner’s Dilemma, Martin Peterson asks readers to imagine two car manufacturers, Row Cars and Col Motors. As the only two actors in their market, the price each sells cars at has a direct connection to the price the other sells cars at. If one opts to sell at a higher price than the other, they will sell fewer cars as customers transfer. If one sells at a lower price, they will sell more cars at a lower profit margin, gaining customers from the other. In Peterson’s example, if both set their prices high, both will make $100 million per year. Should one decide to set their prices lower, they will make $150 million while the other makes nothing. If both set low prices, both make $20 million. Peterson writes:

Imagine that you serve on the board of Row Cars. In a board meeting, you point out that irrespective of what Col Motors decides to do, it will be better for your company to opt for low prices. This is because if Col Motors sets its price low, then a profit of $20 million is better than $0, and if Col Motors sets its price high, then a profit of $150 million is better than $100 million.

Gregory Mankiw gives another real-world example in Microeconomics, detailed here:

Consider an oligopoly with two members, called Iran and Saudi Arabia. Both countries sell crude oil. After prolonged negotiation, the countries agree to keep oil production low in order to keep the world price of oil high. After they agree on production levels, each country must decide whether to cooperate and live up to this agreement or to ignore it and produce at a higher level. The profits of the two countries depend on the strategies they choose.

Suppose you are the leader of Saudi Arabia. You might reason as follows:

I could keep production low as we agreed, or I could raise my production and sell more oil on world markets. If Iran lives up to the agreement and keeps its production low, then my country earns a profit of $60 billion with high production and $50 billion with low production. In this case, Saudi Arabia is better off with high production. If Iran fails to live up to the agreement and produces at a high level, then my country earns $40 billion with high production and $30 billion with low production. Once again, Saudi Arabia is better off with high production. So, regardless of what Iran chooses to do, my country is better off reneging on our agreement and producing at a high level.

Producing at a high level is a dominant strategy for Saudi Arabia. Of course, Iran reasons in exactly the same way, and so both countries produce at a high level. The result is the inferior outcome (from both Iran and Saudi Arabia’s standpoint) with low profits in each country. This example illustrates why oligopolies have trouble maintaining monopoly profits. The monopoly outcome is jointly rational for the oligopoly, but each oligopolist has an incentive to cheat. Just as self-interest drives the prisoners in the prisoners’ dilemma to confess, self-interest makes it difficult for the oligopoly to maintain the cooperative outcome with low production, high prices, and monopoly profits.
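The payoff table Mankiw describes can be reconstructed from the quoted reasoning, assuming the game is symmetric. A sketch:

```python
# Profits in $ billions: (saudi_production, iran_production) -> Saudi profit.
# Iran's profits mirror these by symmetry.
PROFIT = {
    ("low", "low"): 50, ("high", "low"): 60,
    ("low", "high"): 30, ("high", "high"): 40,
}

for iran in ("low", "high"):
    best = max(("low", "high"), key=lambda saudi: PROFIT[(saudi, iran)])
    print(f"if Iran produces {iran}, Saudi Arabia's best reply is {best} "
          f"(${PROFIT[(best, iran)]}B)")
# High production dominates for both, so each earns $40B instead of the
# $50B that honoring the agreement would have delivered.
```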

Other examples of prisoners’ dilemmas include arms races, advertising, and common resources (see The Tragedy of the Commons). Understanding the Prisoner’s Dilemma is an important component of the dynamics of cooperation, an extremely useful mental model.

Thinking of life as an iterative game changes how you play. Positioning yourself for the future carries more weight than “winning” in the moment.

How to Use Occam’s Razor Without Getting Cut

Occam’s razor is one of the most useful, yet misunderstood, models in your mental toolbox for solving problems more quickly and efficiently. Here’s how to use it.

***

Occam’s razor (also known as the “law of parsimony”) is a problem-solving principle which serves as a useful mental model. A philosophical razor is a tool used to eliminate improbable options in a given situation. Occam’s is the best-known example.

Occam’s razor can be summarized as follows:

Among competing hypotheses, the one with the fewest assumptions should be selected.

The Basics

In simpler language, Occam’s razor states that the simplest explanation is preferable to one that is more complex. Simple theories are easier to verify. Simple solutions are easier to execute.

In other words, we should avoid looking for excessively complex solutions to a problem, and focus on what works given the circumstances. Occam’s razor can be used in a wide range of situations, as a means of making rapid decisions and establishing truths without empirical evidence. It works best as a mental model for making initial conclusions before the full scope of information can be obtained.

Science and math offer interesting lessons that demonstrate the value of simplicity. For example, the principle of minimum energy supports Occam’s razor. This facet of the second law of thermodynamics states that wherever possible, the use of energy is minimized. Physicists use Occam’s razor in the knowledge that they can rely on everything to use the minimum energy necessary to function. A ball at the top of a hill will roll down in order to be at the point of minimum potential energy. The same principle is present in biology. If a person repeats the same action on a regular basis in response to the same cue and reward, it will become a habit as the corresponding neural pathway is formed. From then on, their brain will use less energy to complete the same action.

The History of Occam’s Razor

The concept of Occam’s razor is credited to William of Ockham, a 14th-century friar, philosopher, and theologian. While he did not coin the term, his characteristic way of making deductions inspired other writers to develop the heuristic. Indeed, the concept of Occam’s razor is an ancient one. Aristotle produced the oldest known statement of the concept, saying, “We may assume the superiority, other things being equal, of the demonstration which derives from fewer postulates or hypotheses.”

Robert Grosseteste expanded on Aristotle’s writing in the 1200s, declaring

That is better and more valuable which requires fewer, other circumstances being equal…. For if one thing were demonstrated from many and another thing from fewer equally known premises, clearly that is better which is from fewer because it makes us know quickly, just as a universal demonstration is better than particular because it produces knowledge from fewer premises. Similarly, in natural science, in moral science, and in metaphysics the best is that which needs no premises and the better that which needs the fewer, other circumstances being equal.

Nowadays, Occam’s razor is an established mental model which can form a useful part of a latticework of knowledge.

Examples of the Use of Occam’s Razor

The Development of Scientific Theories

Occam’s razor is frequently used by scientists, in particular for theoretical matters. The simpler a hypothesis is, the more easily it can be proven or falsified. A complex explanation for a phenomenon involves many factors which can be difficult to test or lead to issues with the repeatability of an experiment. As a consequence, the simplest solution which is consistent with the existing data is preferred. However, it is common for new data to allow hypotheses to become more complex over time. Scientists opt for the simplest solution the current data permits, while remaining open to the possibility of future research allowing for greater complexity.

The version used by scientists can best be summarized as:

When you have two competing theories that make exactly the same predictions, the simpler one is better.

The use of Occam’s razor in science is also a matter of practicality. Obtaining funding for simpler hypotheses tends to be easier, as they are often cheaper to prove.

Albert Einstein referred to Occam’s razor when developing his theory of special relativity. He formulated his own version: “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Or, “Everything should be made as simple as possible, but not simpler.”

The physicist Stephen Hawking advocates for Occam’s razor in A Brief History of Time:

We could still imagine that there is a set of laws that determines events completely for some supernatural being, who could observe the present state of the universe without disturbing it. However, such models of the universe are not of much interest to us mortals. It seems better to employ the principle known as Occam’s razor and cut out all the features of the theory that cannot be observed.

Isaac Newton used Occam’s razor too when developing his theories. Newton stated: “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” He sought to make his theories, including the three laws of motion, as simple as possible, with only the necessary minimum of underlying assumptions.

Medicine

Modern doctors use a version of Occam’s razor, stating that they should look for the fewest possible causes to explain their patient’s multiple symptoms, and give preference to the most likely causes. A doctor we know often repeats the aphorism that “common things are common.” Interns are instructed, “when you hear hoofbeats, think horses, not zebras.” For example, a person displaying influenza-like symptoms during an epidemic would be considered more likely to be suffering from influenza than an alternative, rarer disease. Making minimal diagnoses reduces the risk of over-treating a patient, causing panic, or causing dangerous interactions between different treatments. This is of particular importance within the current medical model, where patients are likely to see numerous health specialists and communication between them can be poor.

Prison Abolition and Fair Punishment

Occam’s razor has long played a role in attitudes towards the punishment of crimes. In this context, it refers to the idea that people should be given the least punishment necessary for their crimes. This is to avoid the excessive penal practices which were popular in the past. For example, a 19th-century English convict could receive five years of hard labor for stealing a piece of food.

The concept of penal parsimony was pioneered by Jeremy Bentham, the founder of utilitarianism. He held that punishments should not cause more pain than they prevent. Life imprisonment for murder could be seen as justified in that it might prevent a great deal of potential pain, should the perpetrator offend again. On the other hand, long-term imprisonment of an impoverished person for stealing food causes substantial suffering without preventing any.

Bentham’s writings on the application of Occam’s razor to punishment led to the prison abolition movement and many modern ideas related to rehabilitation.

Exceptions and Issues

It is important to note that, like any mental model, Occam’s razor is not foolproof. Use it with care, lest you cut yourself. This is especially crucial when it comes to important or risky decisions. There are exceptions to any rule, and we should never blindly follow the results of applying a mental model which logic, experience, or empirical evidence contradict. When you hear hoofbeats behind you, in most cases you should think horses, not zebras—unless you are out on the African savannah.

Furthermore, simple is as simple does. A conclusion can’t rely just on its simplicity. It must be backed by empirical evidence. And when using Occam’s razor to make deductions, we must avoid falling prey to confirmation bias. In the case of the NASA moon landing conspiracy theory, for example, some people consider it simpler for the moon landing to have been faked, others for it to have been real. Lisa Randall best expressed the issues with the narrow application of Occam’s razor in her book, Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe:

Another concern about Occam’s Razor is just a matter of fact. The world is more complicated than any of us would have been likely to conceive. Some particles and properties don’t seem necessary to any physical processes that matter—at least according to what we’ve deduced so far. Yet they exist. Sometimes the simplest model just isn’t the correct one.

This is why it’s important to remember that opting for simpler explanations still requires work. They may be easier to falsify, but testing them still requires effort. And the simpler explanation, although it has a higher chance of being correct, is not always true.

Occam’s razor is not intended to be a substitute for critical thinking. It is merely a tool to help make that thinking more efficient. Harlan Coben has disputed many criticisms of Occam’s razor by stating that people fail to understand its exact purpose:

Most people oversimplify Occam’s razor to mean the simplest answer is usually correct. But the real meaning, what the Franciscan friar William of Ockham really wanted to emphasize, is that you shouldn’t complicate, that you shouldn’t “stack” a theory if a simpler explanation was at the ready. Pare it down. Prune the excess.

Remember, Occam’s razor is complemented by other mental models, including the fundamental attribution error, Hanlon’s razor, confirmation bias, the availability heuristic, and hindsight bias. The nature of mental models is that they tend to all interlock and work best in conjunction.

Externalities: Why We Can Never Do “One Thing”

No action exists in a vacuum. There are ripples that have consequences that we can and can’t see. Here are the three types of externalities that can help us guide our actions so they don’t come back to bite us.

***

An externality affects someone without them agreeing to it. As with unintended consequences, externalities can be positive or negative. Understanding the types of externalities and the impact they have in our lives can help us improve our decision making, and how we interact with the world.

Externalities provide useful mental models for understanding complex systems. They show us that systems don’t exist in isolation from other systems. Because externalities affect uninvolved third parties, they are a form of market failure: an inefficient allocation of resources.

We both create and are subject to externalities. Most are very minor but compound over time. They can inflict numerous second-order effects. Someone reclines their seat on an airplane. They get the benefit of comfort. The person behind bears the cost of discomfort by having less space. One family member leaves their dirty dishes in the sink. They get the benefit of using the plate. Someone else bears the cost of washing it later. We can’t expect to interact with any system without repercussions. Over time, even minor externalities can cause significant strain in our lives and relationships.

The First Law of Ecology

To understand externalities it is first useful to consider second-order consequences. In Filters Against Folly, Garrett Hardin describes what he considers to be the First Law of Ecology: We can never do one thing. Whenever we interact with a system, we need to ask, “And then what? What will the wider repercussions of our actions be?” There is bound to be at least one externality.

Hardin gives the example of the Prohibition Amendment in the U.S. In 1920, lawmakers banned the production and sale of alcoholic beverages throughout the entire country. This was in response to an extended campaign by those who believed alcohol was evil. It wasn’t enough to restrict its consumption—it needed to go.

The addition of 61 words to the American Constitution changed the social and legal landscape for over a decade. Policymakers presumably thought they could make the change and people would stop drinking. But Prohibition led to numerous externalities. Alcohol is an important part of many people’s lives. Few were willing to suddenly give it up without a fight. The demand was more than strong enough to ensure a black-market supply re-emerged.

Wealthy people stockpiled alcohol in their homes before the ban went into effect. Thousands of speakeasies and gin joints flourished. Walgreens grew from 20 stores to 500, in large part due to its sales of ‘medicinal’ whiskey. Former alcohol producers simply sold the ingredients for people to make their own. Gangsters like Al Capone made their fortunes smuggling and murdered their rivals in the process. Crime gangs undermined official institutions. Tax revenues plummeted. People lost their jobs. Prisons became overcrowded and bribery commonplace. Thousands died from crime and drinking unsafe homemade alcohol.

Policymakers did not fully ask, “And then what?” before legislating. Drinking did decrease during this time, on average by about half. But this fell far short of the total abstinence the ban aimed for. The second-order consequences outweighed any benefits.

As economist Gregory Mankiw explains in Principles of Microeconomics,

In the presence of externalities, society’s interest in a market outcome extends beyond the well-being of buyers and sellers who participate in the market; it also includes the well-being of bystanders who are affected indirectly…. The market equilibrium is not efficient when there are externalities. That is, the equilibrium fails to maximize the total benefit to society as a whole.

Negative Externalities

Negative externalities can occur during the production or consumption of a service or good. Pollution is a useful example. If a factory pollutes nearby water supplies, it causes harm without incurring costs. The costs to society are high and are not reflected in the price of whatever the factory makes. Economists often view environmental damage as another factor in a production process. But even if pollution is taxed, the harmful effects don’t go away.

Transport and manufacturing release toxins into the environment, harming our health and altering our climate. The reality, though, is that these externalities are hard to see, and it is often difficult to trace them back to their root causes. There’s also the question of whether we are responsible for externalities or not.

Imagine you’re driving down the road. As you go by an apartment, the noise disturbs someone who didn’t agree to it. Your car emits air pollution, which affects everyone living nearby. Each of these small externalities will affect people you don’t see and who didn’t choose them. They won’t receive any compensation from you. Are you really responsible for the externalities you cause? If you’re not being outright careless or malicious, isn’t it just part of life? How much responsibility do we have as individuals, anyway?

Calling something a negative externality can be a convenient way of abdicating responsibility.

Positive Externalities

A positive externality confers an unexpected benefit on a third party. The producer doesn’t agree to this, nor do they receive compensation for it.

Scientific research often leads to positive externalities. Research findings can have applications beyond their initial scope. The resulting information becomes part of our collective knowledge base. However, the researcher who makes a discovery cannot receive the full benefits. Nor do they necessarily feel entitled to them.

Blaise Pascal and Pierre de Fermat developed probability theory to solve a gambling dispute. Their work went on to inform numerous disciplines (like the field of calculus) and transform our understanding of the world. Probabilities are now a core part of how we think. Pascal and Fermat created a positive externality.

Someone who comes up with an equation cannot expect compensation each time it gets used. As a result, the incentives to invest the time and effort to discover new equations are reduced. Algorithms, patents, and copyright laws change this by allowing creators to protect and profit from their ideas for years before other people can freely use them. We all benefit, and researchers have an incentive to continue their work.

Network effects are an example of a positive externality. Silicon Valley understands this well. Each person who joins a network, like a marketplace app, increases the value to all other users. Those who own the network have an incentive to improve it to encourage new users. Everyone benefits from being able to communicate with more people. While we might not join a new network intending to improve it for other people, that is what normally happens. (On the flipside, network effects can also produce negative externalities, as too many members can decrease the value of a network.)

Positive externalities often lead to the “free rider” problem. When we enjoy something that we aren’t paying for, we tend not to value it. Not paying can remove the incentive to look after a resource and leads to a Tragedy of the Commons situation. As Aristotle put it, “For that which is common to the greatest number has the least care bestowed upon it.” A good portion of online content succumbs to the free rider problem. We enjoy it and yet we don’t pay for it. We expect it to be free and yet, if users weren’t willing to support sites like Farnam Street, they would likely fold, start publishing lower quality articles, or sell readers to advertisers who collect their data. The end result, as we see too frequently, is low-quality content funded by page-view advertising. (This is why we have a membership program. Members of our learning community create a positive externality for non-members by helping support the free content.)

Positional Externalities

Positional externalities are a form of second-order effects. They occur when our decisions alter the context of future perception or value.

For example, consider what happens when a person decides to start staying at the office an hour late. Perhaps they want a promotion and think it will endear them to managers. Parkinson’s Law states that tasks expand to fit the time allocated to them. What this person would otherwise get done by 5pm, now takes until 6pm. Staying late becomes their norm. Their co-workers notice and start to also stay late. Before long, staying at the office until 6pm becomes the standard for everyone. Anyone who leaves at 5pm is perceived as lazy. Now that 6pm is the norm, everyone suffers. They are forced to work more without deriving any real benefits. It’s a lose-lose situation for everyone.

Someone we know once made an investment with a nearly unlimited return by gaming the system. He worked for an investment firm that valued employees according to a perception of how hard they worked and not necessarily by their results. Each Monday he brought in a series of sport coats and left them in the office. He paid the cleaning staff $20 a week to change the coat hanging on his chair and to turn on his computer. No matter what happened, it appeared he was always the first one into the office even though he often didn’t show up from a “client meeting” until 10. When it came to bonus time, he’d get an enormous return on that $20 investment.

Purchasing luxury goods can create positional externalities. Veblen goods are items we value because of their scarcity and high cost. Diamonds, Lamborghinis, tailor-made suits — owning them is a status symbol, and they lose their value if they become cheaper or if too many people have them. As Luca Lambertini puts it in The Economics of Vertically Differentiated Markets,

The utility derived from consumption is a function of the quantity purchased relative to the average of the society or the reference group to whom the consumer compares.

In other words, a shiny new car seems more valuable if all your friends are driving battered old wrecks. If they have equally (or more) fancy cars, the value of yours drops. At some point, it seems worthless and it’s time to find a new one. In this way, the purchase of a Veblen good confers a positional externality on other people who own it too.

That utility can also be a matter of comparison. A person earning $40,000 a year while their friends earn $30,000 will be happier than one earning $60,000 when their friends earn $70,000. When someone’s salary increases, it raises the bar, giving others a new point of reference.

We can confer positional externalities on ourselves by changing our attitudes. Let’s say someone enjoys wine but is not a connoisseur. A $10 bottle and a $100 bottle make them equally happy. When they decide to go on a course and learn the subtleties and technicalities of fine wines, they develop an appreciation for the $100 wine and a distaste for the $10. They may no longer be able to enjoy a cheap drink because they raised their standards.

Conclusion

Externalities are everywhere. It’s easy to ignore the impact of our decisions—to recline an airplane seat, to stay late at the office, or drop litter. Eventually though, someone always ends up paying. Like the villagers in Hardin’s Tragedy of the Commons, who end up with no grass for their animals, we run the risk of ruining a good thing if we don’t take care of it. Keeping the three types of externalities in mind is a useful way to make decisions that won’t come back to bite you. Whenever we interact with a system, we should remember to ask Hardin’s question: and then what?

Poker, Speeding Tickets, and Expected Value: Making Decisions in an Uncertain World

You can train your brain to think like CEOs, professional poker players, investors, and others who make tricky decisions in an uncertain world by weighing probabilities.

All decisions involve potential tradeoffs and opportunity costs. The question is, how can we make the best possible choices when the factors involved are often so complicated and confusing? How can we determine which statistics and metrics are worth paying attention to? How do we think about averages?

Expected value is one of the simplest tools you can use to think better. While not a natural way of thinking for most people, it instantly turns the world into shades of grey by forcing us to weigh probabilities and outcomes. Once we’ve mastered it, our decisions become supercharged. We know which risks to take, when to quit projects, and when to go all in.

“Take the probability of loss times the amount of possible loss from the probability of gain times the amount of possible gain. That is what we’re trying to do. It’s imperfect but that’s what it’s all about.”

— Warren Buffett

Expected value refers to the long-run average of a random variable: each possible outcome, weighted by its probability.

If you flip a fair coin ten times, the heads-to-tails ratio will probably not be exactly equal. If you flip it one hundred times, the ratio will be closer to 50:50, though again not exactly. But for a huge number of iterations, you can expect heads to come up half the time and tails the other half. The law of large numbers dictates that the observed average will, in the long term, converge on the expected value, even if the first few flips seem unequal.

The more coin flips, the closer you get to the 50:50 ratio. If you bet a sum of money on a coin flip, the potential winnings on a fair coin have to be bigger than your potential loss to make the expected value positive.
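A few lines of Python make the convergence visible (the simulation is ours, for illustration):

```python
import random

random.seed(1)
for flips in (10, 100, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    print(f"{flips:>7} flips: {heads / flips:.3f} heads")
# Short runs stray from 0.500; the ratio closes in on 50:50 as flips grow.

# A $1 bet on a fair coin has EV = 0.5 * payout - 0.5 * 1,
# which is positive only when the payout exceeds the $1 at risk.
```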

We make many expected-value calculations without even realizing it. If we decide to stay up late and have a few drinks on a Tuesday, we regard the expected value of an enjoyable evening as higher than the expected costs the following day. If we decide to always leave early for appointments, we weigh the expected value of being on time against the frequent instances when we arrive early. When we take on work, we view the expected value in terms of income and other career benefits as higher than the cost in terms of time and/or sanity.

Likewise, anyone who reads a lot knows that most books they choose will have minimal impact on them, while a few books will change their lives and be of tremendous value. Looking at the required time and money as an investment, books have a positive expected value (provided we choose them with care and make use of the lessons they teach).

These decisions might seem obvious. But the math behind them would be somewhat complicated if we tried to sit down and calculate it. Who pulls out a calculator before deciding whether to open a bottle of wine (certainly not me) or walk into a bookstore?

The factors involved are impossible to quantify in a non-subjective manner – like trying to explain how to catch a baseball. We just have a feel for them. This expected-value analysis is unconscious – something to consider if you have ever labeled yourself as “bad at math.”

Parking Tickets

Another example of expected value is parking tickets. Let’s say that a parking spot costs $5 and the fine for not paying is $10. If you can expect to be caught only one-third of the time, why pay for parking? The expected cost of not paying is lower than the price of the spot, so the fine is a weak disincentive. You can park without paying three times, expect to pay only $10 in fines, and come out ahead of the $15 you’d spend on three parking spots. But if the fine is $100, not paying is only worthwhile if the probability of getting caught is lower than one in twenty. This is why fines tend to seem excessive. They cover the people who are not caught while giving everyone an incentive to pay.
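The arithmetic in miniature (the function name is ours, for illustration):

```python
def expected_cost_of_skipping(fine, p_caught):
    """Average cost per stop of not paying for parking."""
    return fine * p_caught

parking_fee = 5
print(expected_cost_of_skipping(10, 1 / 3))    # ~3.33: cheaper than the $5 fee
print(expected_cost_of_skipping(100, 1 / 20))  # 5.00: exactly break-even
# With a $100 fine, skipping only pays if the catch rate drops below 1 in 20.
```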

Consider speeding tickets. Here, the expected value can be more abstract, encompassing different factors. If speeding on the way to work saves 15 minutes, then a monthly $100 fine might seem worthwhile to some people. For most of us, though, a weekly fine would mean that speeding has a negative expected value. Add in other disincentives (such as the loss of your driver’s license), and speeding is not worth it. So the calculation is not just financial; it takes into account other tradeoffs as well.

The same goes for free samples and trial periods on subscription services. Many companies (such as Graze, Blue Apron, and Amazon Prime) offer generous free trials. How can they afford to do this? Again, it comes down to expected value. The companies know how much the free trials cost them. They also know the probability of someone paying afterward and the lifetime value of a customer. Basic math reveals why free trials are profitable. Say that a free trial costs the company $10 per person, and one in ten people then sign up for the paid service, going on to generate $150 in profits. The expected value is positive. If only one in twenty people sign up, the company needs to find a cheaper free trial or scrap it.
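The same calculation, using the trial numbers above:

```python
def profit_per_trial(trial_cost, conversion_rate, lifetime_profit):
    return conversion_rate * lifetime_profit - trial_cost

print(profit_per_trial(10, 1 / 10, 150))  # +5.0: the trial pays for itself
print(profit_per_trial(10, 1 / 20, 150))  # -2.5: cheapen the trial or scrap it
# Break-even conversion rate: trial_cost / lifetime_profit = 10 / 150, about 6.7%
```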

Similarly, expected value applies to services that offer a free “lite” version (such as Buffer and Spotify). The free tier costs them little or nothing, yet it increases the chance that someone will pay for the premium version. For the expected value to be positive, the combined cost of serving the people who never upgrade needs to be lower than the profit from the people who do.

Lottery tickets prove to be a poor deal when viewed through the lens of expected value. If a ticket costs $1 and there is a possibility of winning $500,000, it might seem as if the ticket’s expected value is positive. But it is almost always negative. If one million people purchase a ticket, the expected payout is only $0.50 per $1 ticket, a net loss of fifty cents. That difference is the profit that lottery companies make. Only on sporadic occasions, such as a rolled-over jackpot, does the expected value turn positive, and even then the probability of winning remains minuscule.
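
Roughly, assuming a single prize and exactly one million tickets sold:

```python
# Expected payout of a $1 ticket: one $500,000 prize across a million tickets sold.
tickets_sold, prize, ticket_price = 1_000_000, 500_000, 1
net_ev = prize / tickets_sold - ticket_price
print(net_ev)  # -0.5 -- on average, you lose fifty cents per ticket
```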

Failing to understand expected value is a common error in reasoning. Getting a grasp of it can help us overcome many limitations and cognitive biases.

“Constantly thinking in expected value terms requires discipline and is somewhat unnatural. But the leading thinkers and practitioners from somewhat varied fields have converged on the same formula: focus not on the frequency of correctness, but on the magnitude of correctness.”

— Michael Mauboussin

Expected Value and Poker

Let’s look at poker. How do professional poker players manage to win large sums of money and hold impressive track records? Well, we can be certain that the answer isn’t all luck, although there is some of that involved.

Professional players rely on mathematical mental models that create order among random variables. Although these models are basic, it takes extensive experience to create the fingerspitzengefühl (“fingertips feeling,” or instinct) necessary to use them.

A player needs to make correct calculations every minute of a game with an automaton-like mindset. Emotions and distractions can corrupt the accuracy of raw math.

In a game of poker, the expected value is the average return on each dollar invested in the pot. Each time a player bets or calls, they are weighing the probability of winning against the amount they must invest. If a player is risking $100 with a 1 in 5 probability of success, the pot must contain at least $500 for the call to break even: the expected winnings must at least equal the amount the player stands to lose. If the pot contains only $300 at the same 1 in 5 probability, the expected value is negative. The idea is that even when an individual call loses, a player who consistently makes positive-expected-value calls will profit in the long run.
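
Here is that break-even arithmetic, using the simplified model above (real pot-odds calculations also weigh factors such as implied odds and future betting):

```python
# Break-even pot size for a call, per the simplified model above.
def call_ev(p_win, pot, amount_risked):
    return p_win * pot - amount_risked

print(call_ev(1/5, 500, 100))  # 0.0 -- $500 is the minimum pot for a $100 call at 1 in 5
print(call_ev(1/5, 300, 100))  # -40.0 -- a losing call in the long run
```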

Expected-value analysis gives players a clear idea of probabilistic payoffs. Successful poker players can win millions one week, then make nothing or lose money the next, because even positive-expected-value bets lose some of the time. Even the best possible hands can lose through simple probability. With each move, players also need to use Bayesian updating to adapt their calculations, because sticking with a stale prior could prove disastrous. Casinos, meanwhile, make their fortunes from people who bet on situations with a negative expected value.

Expected Value and the Ludic Fallacy

In The Black Swan, Nassim Taleb explains the difference between everyday randomness and randomness in the context of a game or casino. Taleb coined the term “ludic fallacy” to refer to “the misuse of games to model real-life situations.” (Or, as the website logicallyfallacious.com puts it: the assumption that flawless statistical models apply to situations where they don’t actually apply.)

In Taleb’s words, gambling is “sterilized and domesticated uncertainty. In the casino, you know the rules, you can calculate the odds… The casino is the only human venture I know where the probabilities are known, Gaussian (i.e., bell-curve), and almost computable. You cannot expect the casino to pay out a million times your bet, or to change the rules abruptly during the game….”

Games like poker have a defined, calculable expected value: we know the outcomes, the cards, and the math. Most decisions are more complicated. If you decide to bet $100 that it will rain tomorrow, the expected value of the wager is incalculable; the factors involved are too numerous and complex to compute. Relevant factors do exist; you are more likely to win the bet if you live in England than if you live in the Sahara, for example. But that doesn’t rule out Black Swan events, nor does it give you the neat probabilities that exist in games. In short, there is a key distinction between Knightian risk, which is computable because we have enough information to calculate the odds, and Knightian uncertainty, which is non-computable because we don’t have enough information to calculate the odds accurately. (This distinction between risk and uncertainty is based on the writings of economist Frank Knight.) Poker falls into the former category; real life falls into the latter. If we take the concept too literally and plan only for the expected, we will run into serious problems.

As Taleb writes in Fooled By Randomness:

Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance. Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem or a brain teaser. Mother nature does not tell you how many holes there are on the roulette table, nor does she deliver problems in a textbook way (in the real world one has to guess the problem more than the solution).

The Monte Carlo Fallacy

Even in the domesticated environment of a casino, probabilistic thinking can go awry if expected value is forgotten. This famously occurred at the Monte Carlo Casino in 1913, when a group of gamblers lost millions after the roulette wheel landed on black 26 times in a row. That particular sequence is no more or less likely than any of the other 67,108,863 possible sequences of 26 spins, but the people present kept thinking, “It has to be red next time.” With each successive black, they judged red as more likely. In hindsight, what sense does that make? A roulette wheel does not remember the color it landed on last time. Red and black are (nearly) equally likely on every spin, regardless of previous spins; the green zero pockets are what tilt every even-money bet slightly in the house’s favor. For such a bet to break even, a winning spin would need to return at least double the stake; anything less, and the expected value is negative.
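
A small simulation makes the independence visible; it assumes a fair red/black wheel and, for simplicity, ignores the green zero:

```python
import random

# A fair red/black wheel has no memory: the chance of red immediately
# after a run of three blacks matches the overall chance of red.
random.seed(1)
spins = [random.choice("RB") for _ in range(1_000_000)]

after_three_blacks = [spins[i] for i in range(3, len(spins))
                      if spins[i-3:i] == ["B", "B", "B"]]
print(sum(s == "R" for s in after_three_blacks) / len(after_three_blacks))  # ~0.5
```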

“A lot of people start out with a 400-horsepower motor but only get 100 horsepower of output. It’s way better to have a 200-horsepower motor and get it all into output.”

— Warren Buffett

Given all the casinos and roulette tables in the world, the Monte Carlo incident had to happen at some point. Perhaps someday a roulette wheel will land on red 26 times in a row and the scene will repeat. The gamblers involved failed to consider the negative expected value of each bet they made. We know this mistake as the Monte Carlo fallacy (also called the “gambler’s fallacy” or “the fallacy of the maturity of chances”): the assumption that the outcomes of prior independent events influence future events that are equally independent. In other words, people assume that “a random process becomes less random and more predictable as it is repeated”1.

It’s a common error. People who play the lottery for years without success think that their chance of winning rises with each ticket, but the expected value is unchanged between iterations. Amos Tversky and Daniel Kahneman consider this kind of thinking a component of the representativeness heuristic, stating that the more we believe we control random events, the more likely we are to succumb to the Monte Carlo fallacy.

Magnitude over Frequency

Steven Crist, in Bet with the Best, offers an example of how an expected-value mindset can be applied. Consider a hypothetical race with four horses. If you’re trying to maximize return on investment, you might want to avoid the favorite: when most of the money is on the horse most likely to win, its payoff may be too small to justify the bet. Crist writes,

“The point of this exercise is to illustrate that even a horse with a very high likelihood of winning can be either a very good or a very bad bet, and that the difference between the two is determined by only one thing: the odds.”2

Everything comes down to payoffs. A horse with a 50% chance of winning might be a good bet, but it depends on the payoff. The same holds for a 100-to-1 longshot. It’s not the frequency of winning but the magnitude of the win that matters.
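
A sketch of how odds and probability interact (the specific odds are invented for illustration):

```python
# Whether a horse is a good bet depends on the payoff, not the win frequency.
def horse_ev(p_win, profit_per_dollar):
    # profit_per_dollar: profit on a winning $1 stake (e.g., 3 for odds of 3-to-1)
    return p_win * profit_per_dollar - (1 - p_win)

print(horse_ev(0.50, 1.0))  # 0.0 -- a 50% favorite at even money merely breaks even
print(horse_ev(0.50, 0.5))  # -0.25 -- the same favorite at 1-to-2 odds is a bad bet
print(horse_ev(0.02, 100))  # 1.02 -- a 2% longshot at 100-to-1 is a terrific bet
```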

Error Rates, Averages, and Variability

When Bill Gates walks into a room with 20 people in it, the average wealth per person quickly jumps past a billion dollars. It doesn’t matter whether the other 20 people are wealthy or not; Gates’s wealth is so far off the charts that it distorts the average, while the median barely moves.
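
A toy illustration with invented net worths:

```python
import statistics

# One extreme outlier drags the mean, not the median.
room = [50_000] * 20                        # twenty people with $50k each
room_with_gates = room + [100_000_000_000]  # one invented outlier

print(statistics.mean(room_with_gates))    # ~4,761,952,381 -- billions "per person"
print(statistics.median(room_with_gates))  # 50,000 -- unchanged
```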

An old joke tells of the man who drowned in a river that was, on average, three feet deep. If you’re deciding whether to cross a river and can’t swim, the range of depths matters a heck of a lot more than the average depth.

The Use of Expected Value: How to Make Decisions in an Uncertain World

Thinking in terms of expected value requires discipline and practice. And yet, the top performers in almost any field think in terms of probabilities. While this isn’t natural for most of us, once you implement the discipline of the process, you’ll see the quality of your thinking and decisions improve.

In poker, players can put a number on the likelihood of a particular outcome. In the vast majority of situations, we cannot predict the future with anything approaching that precision. So what use is expected value outside gambling? It turns out, quite a lot. Recognizing how expected value works puts any of us at an advantage: we can mentally run through various scenarios and understand how their probabilities and payoffs affect outcomes.

Expected value takes into account wild deviations. Averages are useful, but they have limits, as the man who tried to cross the river discovered. When making predictions about the future, we need to consider the range of outcomes. The greater the possible variance from the average, the more our decisions should account for a wider range of outcomes.

There’s a saying in the design world: when you design for the average, you design for no one. Large deviations can mean more risk, which is not always a bad thing. So expected-value calculations take the deviations into account. If we can make decisions with a positive expected value and the lowest possible risk, we are open to large benefits.

Investors use expected value to make decisions. Choices with a positive expected value and minimal risk of losing money are wise. Even if some losses occur, the net gain should be positive over time. In investing, unlike in poker, the potential losses and gains cannot be calculated in exact terms. Expected-value analysis reveals opportunities that people who just use probabilistic thinking often miss. A trade with a low probability of success can still carry a high expected value. That’s why it is crucial to have a large number of robust mental models. As useful as probabilistic thinking can be, it has far more utility when combined with expected value.
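
As a toy example with invented numbers, a trade that fails nine times out of ten can still have a positive expected value if the rare win is large enough:

```python
# A low-probability trade can still carry a high expected value.
def trade_ev(p_win, gain, loss):
    return p_win * gain - (1 - p_win) * loss

print(trade_ev(0.10, 20, 1))  # 1.1 -- positive EV per $1 risked, despite a 90% loss rate
```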

Understanding expected value is also an effective way to overcome the sunk-cost fallacy. Many of our decisions are based on non-recoverable past investments of time, money, or resources. These investments are irrelevant; we can’t recover them, so we shouldn’t factor them into new decisions. Sunk costs push us toward situations with a negative expected value. For example, consider a company that has invested considerable time and money in developing a new product. As the launch date nears, it receives irrefutable evidence that the product will be a failure: perhaps research shows that customers are uninterested, or a competitor launches a similar, better product. The sunk-cost fallacy would lead the company to release the product anyway. Even if it takes a loss. Even if the launch damages its reputation. After all, why waste the money spent on development? Here’s why: because the launch has a negative expected value, which will only worsen the losses. Escalating commitment only increases the sunk costs.
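
A sketch with hypothetical numbers: the sunk cost appears nowhere in the comparison, because it is the same under either choice.

```python
# Only future outcomes matter; the development spend is gone either way.
sunk_cost = 1_000_000                       # already spent, unrecoverable (hypothetical)
launch_ev = 0.1 * 200_000 - 0.9 * 300_000   # invented launch payoff/loss estimates
cancel_ev = 0

print(launch_ev)  # -250,000.0 -- cancelling beats launching, regardless of the $1M sunk
```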

When we try to justify a prior expense, calculating the expected value can prevent us from worsening the situation. The sunk costs fallacy robs us of our most precious resource: time. Each day we are faced with the choice between continuing and quitting numerous endeavors. Expected-value analysis reveals where we should continue, and where we should cut our losses and move on to a better use of time and resources. It’s an efficient way to work smarter, and not engage in unnecessary projects.

Thinking in terms of expected value will make you feel awkward when you first try it. That’s the hardest thing about it; you need to practice it a while before it becomes second nature. Once you get the hang of it, you’ll see that it’s valuable in almost every decision. That’s why the most rational people in the world constantly think about expected value. They’ve uncovered the key insight that the magnitude of correctness matters more than its frequency. And yet, human nature is such that we’re happier when we’re frequently right.

Footnotes
1. From https://rationalwiki.org/wiki/Gambler’s_fallacy, accessed 11 January 2018.

2. Steven Crist, “Crist on Value,” in Andrew Beyer et al., Bet with the Best: All New Strategies from America’s Leading Handicappers (New York: Daily Racing Form Press, 2001), 63–64.