Category: Mental Models

Mental Models For a Pandemic

Mental models help us understand the world better, something which is especially valuable during times of confusion, like a pandemic. Here’s how to apply mental models to gain a more accurate picture of reality and keep a cool head.

***

It feels overwhelming when the world changes rapidly, abruptly, and extensively. The changes come so fast it can be hard to keep up—and the future, which a few months ago seemed reliable, now has so many unknown dimensions. In the face of such uncertainty, mental models are valuable tools for helping you think through significant disruptions such as a pandemic.

A mental model is simply a representation of how something works. Mental models are how we simplify complexity, why we consider some things more relevant than others, and how we reason. Using them increases your clarity of understanding, providing direction for the choices you need to make and the options you want to keep open.

Models for ourselves

During a pandemic, a useful model is “the map is not the territory.” In rapidly changing situations like a global health crisis, any reporting is an incomplete snapshot in time. Our maps are going to be inaccurate for many reasons: limited testing availability, poor reporting, ineffective information sharing, lack of expertise in analyzing the available information. The list goes on.

If past reporting hasn’t been completely accurate, then why would you assume current reporting is? You have to be careful when interpreting the information you receive, using it as a marker to scope out a range of what is happening in the territory.

In our current pandemic, we can easily spot our map issues. There aren’t enough tests available in most countries. Because COVID-19 isn’t fatal for the majority of people who contract it, there are likely many people whose symptoms are too mild to meet the testing criteria. Therefore, we don’t know how many people have it.

When we look at country-level reporting, we can also see not all countries are reporting to the same standard. Sometimes this isn’t a matter of “better” or “worse”; there are just different ways of collating the numbers. Some countries don’t have the infrastructure for widespread data collection and sharing. Different countries also have different standards for what counts as a death caused by COVID-19.

In other nations, incentives affect reporting. Some countries downplay their infection rate so as to not create panic. Some governments avoid reporting because it undermines their political interests. Others are more worried about the information on the economic map than the health one.

Although it is important to be realistic about our maps, it doesn’t mean we shouldn’t seek to improve their quality. Paying attention to information from experts and ignoring unverified soundbites is one step to increasing the accuracy of our maps. The more accurate we can get them, the more likely it is that we’ll be able to unlock new possibilities that help us deal with the crisis and plan for the future.

There are two models that we can use to improve the effectiveness of the maps we do have: “compounding” and “probabilistic thinking.”

Compounding is exponential growth, something a lot of us tend to have a poor intuitive grasp of. We see the immediate linear relationships in the situation, like how one test diagnoses one person, while not understanding the compounding effects of that relationship. Increased testing can lead to an exponential decrease in virus transmission, because each infected person who is identified and isolated is stopped from passing the virus on to others, each of whom would usually have infected more than one other person in turn.

One of the clearest stories to illustrate exponential growth is the story of the man who asked to be paid in rice. In this story, a servant is to be rewarded for his service. When asked how he wanted to be paid, he asks to be paid in rice, using a chessboard to determine the final amount. Starting with one grain, the amount of rice is to be doubled for each square. One grain on the first square looks pathetic. But halfway through the chessboard, the servant is making a good yearly living. And by the time the sixty-fourth square is reached, the servant is owed more rice than the whole world can produce.
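
A few lines of Python make the story’s arithmetic concrete; nothing is assumed here beyond the story’s own rules (one grain on the first square, doubling on each square after):

```python
# Tally the grains of rice on a 64-square chessboard, doubling each square.
grains_on_square = 1
total_grains = 0
for square in range(1, 65):
    total_grains += grains_on_square
    if square in (32, 64):
        print(f"Through square {square}: {total_grains:,} grains")
    grains_on_square *= 2
```

Halfway through, the total is about 4.3 billion grains; by the final square it exceeds 18 quintillion, far beyond any harvest.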

Improving our ability to think exponentially helps us understand how more testing can lead to both an exponential decrease in testing prices and an exponential increase in the production of those tests. It also makes clear just how far-reaching the impact of our actions can be if we don’t take precautions on the assumption that we could be infected.

Probabilistic thinking is also invaluable in helping us make decisions based on the incomplete information we have. In the absence of enough testing, for example, we need to use probabilistic thinking to make decisions on what actions to pursue. We ask ourselves questions like: Do I have COVID-19? If there’s a 1% chance I have it, is it worth visiting my grandparents?

Being able to evaluate reasonable probability has huge impacts on how we approach physical distancing. Combining the models of probabilistic thinking and “the map is not the territory” suggests our actions need to be guided by infection numbers much higher than the reported ones. We are likely to make significantly different social decisions if we estimate the probability of infection as being three people out of ten instead of one person out of one thousand.
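
To see why the assumed base rate matters so much, here is a minimal sketch (illustrative rates only, and assuming each person’s infection status is independent) of the chance that a gathering includes at least one infected person:

```python
# Chance that at least one attendee is infected, for two assumed
# community infection rates (made-up numbers, independence assumed).
def p_any_infected(rate: float, group_size: int) -> float:
    return 1 - (1 - rate) ** group_size

for rate in (1 / 1000, 3 / 10):
    for size in (5, 20, 50):
        print(f"rate {rate:6.1%}, group of {size:2d}: "
              f"P(at least one infected) = {p_any_infected(rate, size):.1%}")
```

At one in a thousand, a dinner party of five is almost certainly safe; at three in ten, it very likely includes someone infectious.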

Bayesian updating can also help clarify the physical distancing actions you should take: as new information arrives, you revise your estimate of the risks you face and the risks you pose to others. There’s a small probability of being part of a horrendous chain of events, one that might not just have poor direct consequences but also follow you for the rest of your life. When evaluating how responsible you are being in limiting transmission, ask yourself: would you bet a loved one’s life on it?
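
As a sketch of what that updating looks like in numbers (every figure below is invented for illustration), Bayes’ rule revises a small prior sharply upward when new evidence arrives:

```python
# Bayes' rule with illustrative numbers: update the probability of being
# infected after learning of contact with a confirmed case.
prior = 0.01                 # assumed baseline chance of infection
p_contact_if_infected = 0.6  # assumed: most infections follow a known contact
p_contact_if_healthy = 0.05  # assumed: such contact is rare otherwise

p_contact = (p_contact_if_infected * prior
             + p_contact_if_healthy * (1 - prior))
posterior = p_contact_if_infected * prior / p_contact
print(f"P(infected | contact) = {posterior:.1%}")  # roughly 10.8%
```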

Which leads us to Hanlon’s Razor. It’s hard not to get angry at reports of beach parties during spring break or at the guy four doors down who has his friends over to hang out every night. For your own sanity, try using Hanlon’s Razor to evaluate their behavior. They are not being malicious and trying to kill people. They are just exceptionally and tragically ignorant.

Finally, on a day-to-day basis, trying to make small decisions with incomplete information, you can use inversion. You can look at the problem backwards. When the best way forward is far from clear, you ask yourself what you could do to make things worse, and then avoid doing those things.

Models for society

Applying mental models aids in understanding the dynamics of the large-scale social response.

Currently we are seeing counterintuitive measures with first-order negatives (closing businesses) but second- and third-order positives (reduced transmission, less stress on the healthcare system). Second-order thinking is an invaluable tool at all times, including during a pandemic. It’s so important that we encourage the thinking, analysis, and decision-making that factor in the effects of the effects of the decisions we make.

In order to improve the maps our leaders use to make decisions, we need to sort through the feedback loops providing the content. If we can improve not only the feedback but also the pace of iterations, we have a better chance of making good decisions.

For example, improving the rate of testing and the speed of the results would be a major game-changer. Imagine if knowing whether you had the virus were a $0.01 test that gave you a result in less than a minute. In that case, we could make different decisions about social openness, even in the absence of a vaccine (though tracking results at that scale could have invasive privacy implications).

As we watch the pandemic and its consequences unfold, it becomes clear that leadership and authority are not the same thing. Our hierarchical instincts emerge strongly in times of crisis. Leadership vacuums, then, are devastating, and disasters expose the cracks in our hierarchies. However, we also see that people can display strong leadership without needing any authority. A pandemic provides opportunities for such leadership to emerge at community and local levels, providing alternate pathways for meeting the needs of many.

One critical model we can use to look at society during a pandemic is Ecosystems. When we think about ecosystems, we might imagine a variety of organisms interacting in a forest or the ocean. But our cities are also ecosystems, as is the earth as a whole. Understanding system dynamics can give us a lot of insight into what is happening in our societies, both at the micro and macro level.

One property of ecosystems that is useful to contemplate in situations like a pandemic is resilience—the speed at which an ecosystem recovers after a disturbance. There are many factors that contribute to resilience, such as diversity and adaptability. Looking at our global situation, one factor threatening to undermine our collective resilience is that our economy has rewarded razor-thin margins in the name of efficiency in the recent past. The problem with thin margins is they offer no buffer in the face of disruption. Therefore, ecosystems with thin margins are not at all resilient. Small disturbances can bring them down completely. And a pandemic is not a small disturbance.

Some argue that what we are facing now is a Black Swan: an unpredictable event beyond normal expectations with severe consequences. Most businesses are not ready to face one. You could argue that an economic recession is not a black swan, but the particular shape of this pandemic is testing the resiliency of our social and economic ecosystems regardless. The closing of shops and businesses, causing huge disruption, has exposed fragile supply chains. We just don’t see these types of events often enough, even if we know they’re theoretically possible. So we don’t prepare for them. We don’t or can’t create big enough personal and social margins of safety. Individuals and businesses don’t have enough money in the bank. We don’t have enough medical facilities and supplies. Instead, we have optimized for a narrow range of possibilities, compromising the resilience of systems we rely on.

Finally, as we look at the role national borders are playing during this pandemic, we can use the Thermodynamics model to gain insight into how to manage flows of people during and after restrictions. Insulation requires a lot of work, as we are seeing with our borders and the subsequent effect on our economies. It’s unsustainable for long periods of time. Just like how two objects of different temperatures that come into contact with each other eventually reach thermal equilibrium, people will mix with each other. All borders have openings of some sort. It’s important to extend planning to incorporate the realistic tendencies of reintegration.

Some final thoughts about the future

As we look for ways to move forward, both as individuals and societies, Cooperation provides a useful lens. Possibly more critical to evolution than competition, cooperation is a powerful force. It’s rampant throughout the biological world; even bacteria cooperate. As a species, we have been cooperating with each other for a long time. All of us have given up some independence for access to resources provided by others.

Pandemics are intensified because of connection. But we can use that same connectivity to mitigate some negative effects by leveraging our community networks to create cooperative interactions that fill gaps in the government response. We can also use the cooperation lens to create more resilient connections in the future.

Finally, we need to ask ourselves how we can improve our antifragility. How can we get to a place where we grow stronger through change and challenge? It’s not about getting “back to normal.” The normal that was our world in 2019 has proven to be fragile. We shouldn’t want to get back to a time when we were unprepared and vulnerable.

Existential threats are a reality of life on earth. One of the best lessons we can learn is to open our eyes and integrate planning for massive change into how we approach our lives. This will not be the last pandemic, no matter how careful we are. The goal now should not be assigning blame, or succumbing to hindsight bias and implementing rules designed only to prevent an identical situation in the future. We will be better off if we make changes aimed at increasing our resilience and embracing the benefits of challenge.

Still curious? Learn more by reading The Great Mental Models.

Using Models to Stay Calm in Charged Situations

When polarizing topics are discussed in meetings, passions can run high and cloud our judgment. Learn how mental models can help you see clearly from this real-life scenario.

***

Mental models can sometimes come off as an abstract concept. They are, however, actual tools you can use to navigate through challenging or confusing situations. In this article, we are going to apply our mental models to a common situation: a meeting with conflict.

A recent meeting with the school gave us an opportunity to use our latticework. Anyone with school-age kids has dealt with the bureaucracy of a school system and the other parents who interact with it. Call it what you will, most school environments have some formal interface between parents and the school administration that is aimed at progressing issues and ideas of importance to the school community.

This particular meeting was an intense one. At issue was the school’s communication around a potentially harmful leak in the heating system. Some parents felt the school had communicated reasonably about the problem and the potential consequences. Others felt their child’s life had been put in danger due to potential exposure to mold and asbestos. Some parents felt the school could have done a better job of soliciting feedback from students about their experiences during the previous week, and others felt the school administration had done a poor job of communicating potential risks to parents.

The first thing you’ll notice if you’re in a meeting like this is that emotions on all sides run high. After some discussion, you might also notice some recurring patterns in what people say and how they reason.

These patterns, when you hear them in statements from people around the table, are a great indication that using a few mental models might improve the dynamics of the situation.

The first mental model that is invaluable in situations like this is Hanlon’s Razor: don’t attribute to maliciousness that which is more easily explained by incompetence. (Hanlon’s Razor is one of the 9 general thinking concepts in The Great Mental Models Volume One.) When people feel victimized, they can get angry and lash out in an attempt to fight back against a perceived threat. When people feel accused of serious wrongdoing, they can get defensive and withhold information to protect themselves. Neither of these reactions is useful in a situation like this. Yes, sometimes people intentionally do bad things. But more often than not, bad things are the result of incompetence. In a school meeting situation, it’s safe to assume everyone at the table has the best interests of the students at heart. School staff and administrators usually go into teaching motivated by a deep love of education. They genuinely want their schools to be amazing places of learning, and they devote time and attention to improving the lives of their students.

It makes no sense to assume a school’s administration would deliberately withhold harmful information. Yes, it could happen. But whether or not it did, you are going to obtain more valuable information if you start from the assumption that poor decisions were the result of incompetence rather than maliciousness.

When we feel people are malicious toward us, we instinctively become a negatively coiled spring, waiting for the right moment to take them down a notch or two. Removing malice from the equation, you give yourself emotional breathing room to work toward better solutions and apply more models.

The next helpful model is relativity, adapted from the laws of physics. This model is about remembering that everyone’s perspective is different from yours. Understanding how others see the same situation can help you move toward a more meaningful dialogue with the people in the meeting. You can do this by looking around the room and asking yourself what is influencing people’s approaches to the situation.

In our school meeting, we see some people are afraid for their child’s health. Others are influenced by past dealings with the school administration. Authorities are worried about closing the school. Teachers are concerned about how missed time might impact their students’ learning. Administrators are trying to balance the needs of parents with their responsibility to follow the necessary procedures. Some parents are stressed because they don’t have childcare when the school closes. There is a lot going on, and relativity gives us a lens to try to identify the dynamics impacting communication.

After understanding the different perspectives, it becomes easier to incorporate them into your thinking. You can defuse conflict by identifying what it is you think you hear. Often, just the feeling of being heard will help people start to listen and engage more objectively.

Now you can dive into some of the details. First up is probabilistic thinking. Before we worry about mold levels or sick children, let’s try to identify the base rates. What is the mold content in the air outside? How many children are typically absent due to sickness at this time of year? Reminding people that severity has to be evaluated against something in a situation like this can really help defuse stress and concern. If 10% of the student population is absent on any given day, and in the week leading up to these events 12% to 13% of the population was absent, then it turns out we are not actually dealing with a huge statistical anomaly.

Then you can evaluate the anecdotes with the model of the Law of Large Numbers in mind. Small sample sizes can be misleading. The larger your group for evaluation, the more relevant the conclusions. In a situation such as our school council meeting, small samples only serve to ratchet up the emotion by implying that isolated incidents are the causal outcomes of recent events.

In reality, any one-off occurrence can often be explained in multiple ways. One or two children coming home with hives? There are a dozen reasonable explanations for that: allergies, dry skin, reaction to skin cream, symptom of an illness unrelated to the school environment, and so on. However, the more children who develop hives, the more statistically likely it is that the cause relates to the only common denominator between all the children: the school environment.

Even then, correlation does not equal causation. It might not be a recent leaky steam pipe; is it exam time? Are there other stressors in the culture? Other contaminants in the environment? The larger your sample size, the more likely you will obtain relevant information.
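
A short simulation shows why sample size matters. Assuming the 10% base rate of absence used above (all numbers illustrative), small groups swing widely from day to day while large ones stay close to the base rate:

```python
import random

# Simulate daily absence rates against an assumed 10% base rate.
random.seed(1)
BASE_RATE = 0.10

for group_size in (25, 250, 2500):
    daily_rates = []
    for _ in range(20):  # twenty school days
        absent = sum(random.random() < BASE_RATE for _ in range(group_size))
        daily_rates.append(absent / group_size)
    print(f"group of {group_size:4d}: daily absence rate ranged "
          f"from {min(daily_rates):.1%} to {max(daily_rates):.1%}")
```

In a class of 25, a day with four absences (16%) is unremarkable noise; across a whole school, the same rate would be a genuine signal.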

Finally, you can practice systems thinking and contribute to the discussion by identifying the other components in the system you are all dealing with. After all, a school council is just one part of a much larger system involving governments, school boards, legislators, administrators, teachers, students, parents, and the community. When you put your meeting into the bigger context of the entire system, you can identify the feedback loops: Who is responding to what information, and how quickly does their behavior change? When you do this, you can start to suggest some possible steps and solutions to remedy the situation and improve interactions going forward.

How is the information flowing? How fast does it move? How much time does each recipient have to adjust before receiving more information? Chances are, you aren’t going to know all this at the meeting. So you can ask questions. Does the principal have to get approval from the school board before sending out communications involving risk to students? Can teachers communicate directly with parents? What are the conditions for communicating possible risk? Will speculation increase the speed of a self-reinforcing feedback loop causing panic? What do parents need to know to make an informed decision about the welfare of their child? What does the school need to know to make an informed decision about the welfare of their students?

In meetings like the one described here, there is no doubt that communication is important. Using the meeting to discuss and debate ways of improving communication so that outcomes are generally better in the future is a valuable use of time.

A school meeting is one practical example of how having a latticework of mental models can be useful. Using mental models can help you defuse some of the emotions that create an unproductive dynamic. They can also help you bring forward valuable, relevant information to assist the different parties in improving their decision-making process going forward.

At the very least, you will walk away from the meeting with a much better understanding of how the world works, and you will have gained some strategies you can implement in the future to leverage this knowledge instead of fighting against it.

Prisoner’s Dilemma: What Game Are You Playing?

In this classic game theory experiment, you must decide: rat out another for personal benefit, or cooperate? The answer may be more complicated than you think.

***

What does it take to make people cooperate with each other when the incentives to act primarily out of self-interest are often so strong?

The Prisoner’s Dilemma is a thought experiment originating from game theory. Designed to analyze the ways in which we cooperate, it strips away the variations between specific situations where people are called to overcome the urge to be selfish. Political scientist Robert Axelrod lays down its foundations in The Evolution of Cooperation:

Under what conditions will cooperation emerge in a world of egoists without a central authority? This question has intrigued people for a long time. And for good reason. We all know that people are not angels and that they tend to look after themselves and their own first. Yet we also know that cooperation does occur and that our civilization is based on it. But in situations where each individual has an incentive to be selfish, how can cooperation ever develop?

…To make headway in understanding the vast array of specific situations which have this property, a way is needed to represent what is common to these situations without becoming bogged down in the details unique to each…the famous Prisoner’s Dilemma game.

The thought experiment goes as such: two criminals are in separate cells, unable to communicate, accused of a crime they both participated in. The police do not have enough evidence to convict either of them without a confession, though they are certain enough of their guilt to want both to spend time in prison. So they offer the prisoners a deal. Each can accuse the other of the crime, with the following conditions:

  • If both prisoners say the other did it, each will serve two years in prison.
  • If one prisoner says the other did it and the other stays silent, the accused will serve three years and the accuser zero.
  • If both prisoners stay silent, each will serve one year in prison.

In game theory, the altruistic behavior (staying silent) is called “cooperating,” while accusing the other is called “defecting.”

What should they do?

If they were able to communicate and they trusted each other, the rational choice is to stay silent; that way each serves less time in prison than they would otherwise. But how can each know the other won’t accuse them? After all, people tend to act out of self-interest. The cost of being the one to stay silent is too high. The expected outcome when the game is played is that both accuse the other and serve two years. (In the real world, we doubt it would end there. After they served their time, it’s not hard to imagine each of them still being upset. Two years is a lot of time for a spring to coil in a negative way. Perhaps they spend the rest of their lives sabotaging each other.)
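
The payoffs are small enough to check exhaustively. This sketch, using the years of prison from the conditions above (lower is better), shows why accusing dominates:

```python
# Sentences in years, keyed by (my_move, their_move).
# "silent" = cooperate, "accuse" = defect.
SENTENCE = {
    ("silent", "silent"): 1,
    ("silent", "accuse"): 3,
    ("accuse", "silent"): 0,
    ("accuse", "accuse"): 2,
}

for their_move in ("silent", "accuse"):
    stay_silent = SENTENCE[("silent", their_move)]
    accuse = SENTENCE[("accuse", their_move)]
    print(f"If the other prisoner chooses {their_move}: "
          f"silent -> {stay_silent} years, accuse -> {accuse} years")
# Accusing is better in both cases, so both prisoners accuse: two years each.
```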

The Iterated Prisoner’s Dilemma

A more complex form of the thought experiment is the iterated Prisoner’s Dilemma, in which we imagine the same two prisoners being in the same situation multiple times. In this version of the experiment, they are able to adjust their strategy based on the previous outcome.

If we repeat the scenario, it may seem as if the prisoners will begin to cooperate. But this doesn’t make sense in game theory terms. When they know how many times the game will repeat, both have an incentive to accuse on the final round, seeing as there can be no retaliation. Knowing the other will surely accuse on the final round, both have an incentive to accuse on the penultimate round—and so on, back to the start.

In Business Economics, Gregory Mankiw summarizes how difficult it is to maintain cooperation:

To see how difficult it is to maintain cooperation, imagine that, before the police captured . . . the two criminals, [they] had made a pact not to confess. Clearly, this agreement would make them both better off if they both live up to it, because they would each spend only one year in jail. But would the two criminals in fact remain silent, simply because they had agreed to? Once they are being questioned separately, the logic of self-interest takes over and leads them to confess. Cooperation between the two prisoners is difficult to maintain because cooperation is individually irrational.

However, cooperative strategies can evolve if we model the game as having random or infinite iterations. If each prisoner knows they will likely interact with each other in the future, with no knowledge or expectation their relationship will have a definite end, cooperation becomes significantly more likely. If we imagine that the prisoners will go to the same jail or will run in the same circles once released, we can understand how the incentive for cooperation might increase. If you’re a defector, running into the person you defected on is awkward at best, and leaves you sleeping with the fishes at worst.
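
One way to see this is a toy simulation (not from the article; tit for tat is the simple retaliating strategy made famous by Axelrod’s tournaments). Each round, the game continues with an assumed 90% probability, so neither player knows when it will end:

```python
import random

random.seed(7)

# Sentences in years per round (lower is better), from the same conditions.
# "C" = stay silent (cooperate), "D" = accuse (defect).
SENTENCE = {("C", "C"): 1, ("C", "D"): 3, ("D", "C"): 0, ("D", "D"): 2}

def tit_for_tat(their_moves):
    # Cooperate first, then copy the other player's previous move.
    return their_moves[-1] if their_moves else "C"

def always_defect(their_moves):
    return "D"

def average_years(strategy_a, strategy_b, p_continue=0.9):
    moves_a, moves_b = [], []
    years_a = years_b = rounds = 0
    while rounds == 0 or random.random() < p_continue:
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        years_a += SENTENCE[(a, b)]
        years_b += SENTENCE[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
        rounds += 1
    return years_a / rounds, years_b / rounds

print("tit-for-tat vs tit-for-tat:    ", average_years(tit_for_tat, tit_for_tat))
print("tit-for-tat vs always-defect:  ", average_years(tit_for_tat, always_defect))
print("always-defect vs always-defect:", average_years(always_defect, always_defect))
```

Head to head, always-defect never loses to tit for tat, but a pair of reciprocators averages one year per round while a pair of defectors averages two. With no known final round, the unraveling logic has nowhere to start.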

Real-world Prisoner’s Dilemmas

We can use the Prisoner’s Dilemma as a means of understanding many real-world situations based on cooperation and trust. As individuals, being selfish tends to benefit us, at least in the short term. But when everyone is selfish, everyone suffers.

In The Prisoner’s Dilemma, Martin Peterson asks readers to imagine two car manufacturers, Row Cars and Col Motors. As the only two actors in their market, the price each sells cars at has a direct connection to the price the other sells cars at. If one opts to sell at a higher price than the other, they will sell fewer cars as customers transfer. If one sells at a lower price, they will sell more cars at a lower profit margin, gaining customers from the other. In Peterson’s example, if both set their prices high, both will make $100 million per year. Should one decide to set their prices lower, they will make $150 million while the other makes nothing. If both set low prices, both make $20 million. Peterson writes:

Imagine that you serve on the board of Row Cars. In a board meeting, you point out that irrespective of what Col Motors decides to do, it will be better for your company to opt for low prices. This is because if Col Motors sets its price low, then a profit of $20 million is better than $0, and if Col Motors sets its price high, then a profit of $150 million is better than $100 million.

Gregory Mankiw gives another real-world example in Microeconomics, detailed here:

Consider an oligopoly with two members, called Iran and Saudi Arabia. Both countries sell crude oil. After prolonged negotiation, the countries agree to keep oil production low in order to keep the world price of oil high. After they agree on production levels, each country must decide whether to cooperate and live up to this agreement or to ignore it and produce at a higher level. The following image shows how the profits of the two countries depend on the strategies they choose.
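
The figure itself is not reproduced here, but its payoffs can be reconstructed from the reasoning that follows (Iran’s numbers mirror Saudi Arabia’s, since, as Mankiw notes, Iran reasons in exactly the same way):

  • Both keep production low: $50 billion each.
  • Saudi Arabia high, Iran low: $60 billion for Saudi Arabia, $30 billion for Iran.
  • Saudi Arabia low, Iran high: $30 billion for Saudi Arabia, $60 billion for Iran.
  • Both produce high: $40 billion each.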

Suppose you are the leader of Saudi Arabia. You might reason as follows:

I could keep production low as we agreed, or I could raise my production and sell more oil on world markets. If Iran lives up to the agreement and keeps its production low, then my country earns a profit of $60 billion with high production and $50 billion with low production. In this case, Saudi Arabia is better off with high production. If Iran fails to live up to the agreement and produces at a high level, then my country earns $40 billion with high production and $30 billion with low production. Once again, Saudi Arabia is better off with high production. So, regardless of what Iran chooses to do, my country is better off reneging on our agreement and producing at a high level.

Producing at a high level is a dominant strategy for Saudi Arabia. Of course, Iran reasons in exactly the same way, and so both countries produce at a high level. The result is the inferior outcome (from both Iran and Saudi Arabia’s standpoint) with low profits in each country. This example illustrates why oligopolies have trouble maintaining monopoly profits. The monopoly outcome is jointly rational for the oligopoly, but each oligopolist has an incentive to cheat. Just as self-interest drives the prisoners in the prisoners’ dilemma to confess, self-interest makes it difficult for the oligopoly to maintain the cooperative outcome with low production, high prices, and monopoly profits.

Other examples of prisoners’ dilemmas include arms races, advertising, and common resources (see The Tragedy of the Commons). Understanding the Prisoner’s Dilemma is an important component of the dynamics of cooperation, an extremely useful mental model.

Thinking of life as an iterative game changes how you play. Positioning yourself for the future carries more weight than “winning” in the moment.

Survivorship Bias: The Tale of Forgotten Failures

Survivorship bias is a common logical error that distorts our understanding of the world. It happens when we assume that success tells the whole story and when we don’t adequately consider past failures.

There are thousands, even tens of thousands of failures for every big success in the world. But stories of failure are not as sexy as stories of triumph, so they rarely get covered and shared. As we consume one story of success after another, we forget the base rates and overestimate the odds of real success.

“See,” says he, “you who deny a providence, how many have been saved by their prayers to the Gods.”

“Ay,” says Diagoras, “I see those who were saved, but where are those painted who were shipwrecked?”

— Cicero

The Basics

A college dropout becomes a billionaire. Batuli Lamichhane, a chain-smoker, lives to the age of 118. Four young men are rejected by record labels and told “guitar groups are on the way out,” then go on to become the most successful band in history.

Bill Gates, Batuli Lamichhane, and the Beatles are oft-cited examples of people who broke the rules without the expected consequences. We like to focus on people like them—the result of a cognitive shortcut known as survivorship bias.

When we only pay attention to those who survive, we fail to account for base rates and end up misunderstanding how selection processes actually work. The base rate is the probability of a given result we can expect from a sample, expressed as a percentage. If you bet on a single number in American roulette, for example, you can expect to win one out of 38 games, or 2.63% of the time, which is the base rate. The problem arises when we mistake the winners for the rule and not the exception. People like Gates, Lamichhane, and the Beatles are anomalies at one end of a distribution curve. While there is much to learn from them, it would be a mistake to expect the same results from doing the same things.

A stupid decision that works out well becomes a brilliant decision in hindsight.

— Daniel Kahneman

Cause and Effect

Can we achieve anything if we try hard enough? Not necessarily. Survivorship bias leads to an erroneous understanding of cause and effect. People see correlation in mere coincidence. We all love to hear stories of those who beat the odds and became successful, holding them up as proof that the impossible is possible. We ignore failures in pursuit of a coherent narrative about success.

Few would think to write the biography of a businessperson who goes bankrupt and spends their entire life in debt. Or of a musician who tries again and again to get signed and is ignored by record labels. Or of someone who dreams of becoming an actor, moves to LA, and ends up returning a year later, defeated and broke. After all, who wants to hear that? We want the encouragement survivorship bias provides, and the subsequent belief in our own capabilities. The result is an inflated idea of how many people become successful.

The discouraging fact is that success is never guaranteed. Most businesses fail. Most people do not become rich or famous. Most leaps of faith go wrong. It does not mean we should not try, just that we should be realistic about the odds.

Beware of advice from the successful.

— Barnaby James

Survivorship Bias in Business

Survivorship bias is particularly common in the world of business. Companies which fail early on are ignored, while the rare successes are lauded for decades. Studies of market performance often exclude companies which collapse. This can distort statistics and make success seem more probable than it truly is. Just as history is written by the winners, so is much of our knowledge about business. Those who end up broke and chastened lack a real voice. They may be blamed for their failures by those who ignore the role coincidence plays in the upward trajectories of the successful.

Nassim Taleb writes of our tendency to ignore the failures: “We favor the visible, the embedded, the personal, the narrated, and the tangible; we scorn the abstract.” Business books laud the rule-breakers who ignore conventional advice and still create profitable enterprises. For most entrepreneurs, taking excessive risks and eschewing all norms is an ill-advised gamble. Many of the misfit billionaires who are widely celebrated succeeded in spite of their unusual choices, not because of them. We also ignore the role of timing, luck, connections and socio-economic background. A person from a prosperous family, with valuable connections, who founds a business at a lucrative time has a greater chance of survival, even if they drop out of college or do something unconventional. Someone with a different background, acting at an inopportune time, will have less of a chance.

In No Startup Hipsters: Build Scalable Technology Companies, Samir Rath and Teodora Georgieva write:

Almost every single generic presentation for startups starts with “Ninety Five percent of all startups fail”, but very rarely do we pause for a moment and think “what does this really mean?” We nod our heads in somber acknowledgement and with great enthusiasm turn to the heroes who “made it” — Zuckerberg, Gates, etc. to absorb pearls of wisdom and find the Holy Grail of building successful companies. Learning from the successful is a much deeper problem and can reduce the probability of success more than we might imagine.

Examining the lives of successful entrepreneurs teaches us very little. We would do far better to analyze the causes of failure, then act accordingly. Even better would be learning from both failures and successes.

Focusing on successful outliers does not account for base rates. As Rath and Georgieva go on to write:

After any process that picks winners, the non-survivors are often destroyed or hidden or removed from public view. The huge failure rate for start-ups is a classic example; if failures become invisible, not only do we fail to recognise that missing instances hold important information, but we may also fail to acknowledge that there is any missing information at all.

They describe how this leads us to base our choices on inaccurate assumptions:

Often, as we revel in stories of start-up founders who struggled their way through on cups of ramen before the tide finally turned on viral product launches, high team performance or strategic partnerships, we forget how many other founders did the same thing, in the same industry and perished…The problem we mention is compounded by biographical or autobiographical narratives. The human brain is obsessed with building a cause and effect narrative. The problem arises when this cognitive machinery misfires and finds patterns where there are none.

These success narratives are created both by those within successful companies and those outside. Looking back on their ramen days, founders may believe they had a plan all along. They always knew everything would work out. In truth, they may lack an idea of the cause and effect relationships underlying their progress. When external observers hear their stories, they may, in a quasi-superstitious manner, spot “signs” of the success to come. As Daniel Kahneman has written, the only true similarity is luck.

Consider What You Don’t See

When we read about survivorship bias, we usually come across the archetypical story of Abraham Wald, a statistician studying World War II airplanes. His research group at Columbia University was asked to figure out how to better protect airplanes from damage. The initial approach to the problem was to look at the planes coming back, seeing where they were hit the worst, then reinforcing that area.

However, Wald realized there was a missing, yet valuable, source of evidence: the planes that were hit and did not make it back. The planes that went down carried the more important information about which areas to reinforce, because they showed where a hit was fatal. Wald’s approach is an example of how to overcome survivorship bias. Don’t look just at what you can see. Consider all the things that started on the same path but didn’t make it. Try to figure out their story, as there is as much, if not more, to be learned from failure.

Considering survivorship bias when presented with examples of success is difficult. It is not instinctive to pause, reflect, and think through what the base rate odds of success are and whether you’re looking at an outlier or the expected outcome. And yet if you don’t know the real odds, if you don’t know if what you’re looking at is an example of survivorship bias, then you’ve got a blind spot.

Whenever you read about a success story in the media, think of all the people who tried to do what that person did and failed. Of course, understanding survivorship bias isn’t an excuse for not taking action, but rather an essential tool to help you cut through the noise and understand the world. If you’re going to do something, do it fully informed.

To learn more, consider reading Fooled By Randomness, or The Art of Thinking Clearly.

Illusion of Transparency: Your Poker Face is Better Than You Think

We tend to think that people can easily tell what we’re thinking and feeling. They can’t. Understanding the illusion of transparency bias can improve relationships, job performance, and more.

***

“A wonderful fact to reflect upon, that every human creature is constituted to be that profound secret and mystery to every other.” ― Charles Dickens, A Tale of Two Cities

When we experience strong emotions, we tend to think it’s obvious to other people, especially those who know us well. When we’re angry or tired or nervous or miserable, we may assume that anyone who looks at our face can spot it straight away.

That’s not true. Most of the time, other people can’t correctly guess what we’re thinking or feeling. Our emotions are not written all over our face all the time. The gap between our subjective experience and what other people pick up on is known as the illusion of transparency. It’s a fallacy that leads us to overestimate how easily we convey our emotions and thoughts.

For example, you arrive at the office exhausted after a night with too little sleep. You drift around all day, chugging espressos, feeling sluggish and unfocused. Everything you do seems to go wrong. At the end of the day, you sheepishly apologize to a coworker for being “useless all day.”

They look at you, slightly confused. ‘Oh,’ they say. ‘You seemed fine to me.’ Clearly, they’re just being polite. There’s no way your many minor mistakes during the day could have escaped their notice. It must be extra apparent considering your coworkers all show up looking fresh as a daisy every single day.

Or imagine that you have to give a talk in front of a big crowd and you’re terrified. As you step on stage, your hands shake, your voice keeps catching in your throat, you’re sweating and flushed. Afterward, you chat to someone from the audience and remark: ‘So that’s what a slow-motion panic attack looks like.’

‘Well, you seemed like a confident speaker,’ they say. ‘You didn’t look nervous at all. I wish I could be as good at public speaking.’ Evidently, they were sitting at the back or they have bad eyesight. Your shaking hands and nervous pauses were far too apparent. Especially compared to the two wonderful speakers who came after you.

No one cares

“Words are the source of misunderstandings.” ― Antoine de Saint-Exupéry, The Little Prince

The reality is that other people pay much less attention to you than you think. They’re often far too absorbed in their own subjective experiences to pick up on subtle cues related to the feelings of others. If you’re annoyed at your partner, they’re probably too busy thinking about what they need to do at work tomorrow or what they’re planning to cook for dinner to scrutinize your facial expressions. They’re not deliberately ignoring you, they’re just thinking about other things. While you’re having a bad day at work, your coworkers are probably distracted by their own deadlines and personal problems. You could fall asleep sitting up and many of them wouldn’t even notice. And when you give a talk in front of people, most of them are worrying about the next time they have to do any public speaking or when they can get a coffee.

In your own subjective experience, you’re in the eye of the storm. But what other people have to go on are things like your tone of voice, facial expressions, and body language. The clues these provide can be hard to read. Unless someone is trying their best to figure out what you’re thinking or feeling, they’re not going to be particularly focused on your body language. If you make even the slightest effort to conceal your inner state, you’re quite able to hide it altogether from everyone.

Our tendency to overestimate how much attention people are paying to us is a result of seeing our own perspective as the only perspective. If we’re feeling a strong emotion, we assume other people care about how we feel as much as we do. This egocentric bias leads to the spotlight effect—in social situations, we feel like there’s a spotlight shining on us. It’s not self-obsession; it’s natural. But overall, this internal self-focus is what makes you think other people can tell what you’re thinking.

Take the case of lying. Even if we try to err on the side of honesty, we all face situations where we feel we have no option except to tell a lie. Setting aside the ethics of the matter, most of us probably don’t feel good about lying. It makes us uncomfortable. It’s normal to worry that whoever you’re lying to will easily be able to tell. Again, unless you’re being very obvious, the chances of someone else picking up on it are smaller than you think. In one study, participants asked to lie to other participants estimated they’d be caught about half the time. In fact, people only guessed they were lying about a quarter of the time—a rate low enough for random chance to account for it.

Tactics

“Even if one is neither vain nor self-obsessed, it is so extraordinary to be oneself—exactly oneself and no one else—and so unique, that it seems natural that one should also be unique for someone else.” ― Simone de Beauvoir

Understanding how the illusion of transparency works can help you navigate otherwise challenging situations with ease.

Start with accepting that other people don’t usually know what you’re thinking and feeling. If you want someone to know your mental state, you need to tell them in the clearest terms possible. You can’t make assumptions. Being subtle about your feelings is not the best idea, especially in high-stakes situations. Err on the side of caution whenever possible by communicating plainly in words about your feelings or views.

Likewise, if you think you know how someone else feels, you should ask them to confirm. You shouldn’t assume you’ve got it right—you probably haven’t. If it’s important, you need to double check. The person who seems calm on the surface might be frenzied underneath. Some of us just appear unhappy to others all the time, no matter how we’re feeling. If you can’t pick up on someone’s mental state, they might not be vocalizing it because they think it’s obvious. So ask.

As Dylan Evans writes in Risk Intelligence: How To Live With Uncertainty,

The first and most basic remedy is simply to treat all your hunches about the thoughts and feelings of other people with a pinch of salt and to be similarly skeptical about their ability to read your mind. It can be hard to resist the feeling that someone is lying to you, or that your own honesty will shine through, but with practice it can be done.

The illusion of transparency doesn’t go away just because you know someone well. Even partners, family members and close friends have difficulty reading each other’s mental states. The problem compounds when we think they should be able to do this. We can easily become annoyed when they can’t. If you’re upset or angry and someone close to you doesn’t make any attempt to make you feel better, they are not necessarily ignoring you. They just haven’t noticed anything is wrong, or they may not know how you want them to respond. As Hanlon’s razor teaches us, it’s best not to assume malicious intent. Understanding this can help avoid arguments that spring up based on thinking we’re communicating clearly when we’re not.

“Much unhappiness has come into the world because of bewilderment and things left unsaid.” ― Fyodor Dostoevsky

Set yourself free

Knowing about the illusion of transparency can be liberating. Guess what? No one really cares. Or almost no one. If you’ve got food stuck between your teeth or you stutter during a speech or you’re exhausted at work, you might as well assume no one has noticed. Most of the time, they haven’t.

Back to public speaking: We get it all wrong when we think people can tell we’re nervous about giving a talk. In a study entitled “The illusion of transparency and the alleviation of speech anxiety,” Kenneth Savitsky and Thomas Gilovich tested how knowing about the effect could help people feel less scared about public speaking. When participants were asked to give a speech, their self-reported levels of nervousness were well above what audience members guessed they were experiencing. Inside, they felt like a nervous wreck. On the outside, they looked calm and collected.

But when speakers learned about the illusion of transparency beforehand, they were less concerned about audience perceptions and therefore less nervous. They ended up giving better speeches, according to both their own and audience assessments. It’s a lot easier to focus on what you’re saying if you’re not so worried about what everyone else is thinking.

The sun revolves around me, doesn’t it?

In psychology, anchoring refers to our tendency to make an estimated guess by selecting whatever information is easily available as our “anchor,” then adjusting from that point. Often, the adjustments are insufficient. This is exactly what happens when you try to guess the mental state of others. If we try to estimate how a friend feels, we take how we feel as our starting point, then adjust our guess from there.

According to the authors of a paper entitled “The Illusion of Transparency: Biased Assessments of Others’ Ability to Read One’s Emotional States,”

People are typically quite aware of their own internal states and tend to focus on them rather intently when they are strong. To be sure, people recognize that others are not privy to the same information as they are, and they attempt to adjust for this fact when trying to anticipate another’s perspective. Nevertheless, it can be hard to get beyond one’s own perspective even when one knows that.

This is similar to hindsight bias, where things seem obvious in retrospect, even if they weren’t beforehand. When you look back on an event, it’s hard to disentangle what you knew then from what you know now. You can only use your current position as an anchor, a perspective which is inevitably skewed.

If you’re trying to hide your mental state, you’re probably doing better than you think. Unless you’re talking to, say, a trained police interrogator or professional poker player, other people are easy to fool. They’re not looking that hard, so a mild effort to hide your emotions is likely to work well. People can’t read your mind, whether you’re trying to pretend you don’t hate the taste of a trendy new beer, or trying to conceal your true standing in a negotiation to gain more leverage.

The illusion of transparency explains why, even once you’re no longer a teenager, it still seems like few people understand you. It’s not that other people are indifferent or confused. Your feelings just aren’t as clear as you think. Often you can’t see beyond the confines of your own head and neither can anyone else. It’s best to make allowances for that.


How to Use Occam’s Razor Without Getting Cut

Occam’s razor is one of the most useful (yet misunderstood) models in your mental toolbox for solving problems more quickly and efficiently. Here’s how to use it.

***

Occam’s razor (also known as the “law of parsimony”) is a problem-solving principle which serves as a useful mental model. A philosophical razor is a tool used to eliminate improbable options in a given situation. Occam’s is the best-known example.

Occam’s razor can be summarized as follows:

Among competing hypotheses, the one with the fewest assumptions should be selected.

The Basics

In simpler language, Occam’s razor states that the simplest explanation is preferable to one that is more complex. Simple theories are easier to verify. Simple solutions are easier to execute.

In other words, we should avoid looking for excessively complex solutions to a problem, and focus on what works given the circumstances. Occam’s razor can be used in a wide range of situations, as a means of making rapid decisions and establishing truths without empirical evidence. It works best as a mental model for making initial conclusions before the full scope of information can be obtained.

Science and math offer interesting lessons that demonstrate the value of simplicity. For example, the principle of minimum energy supports Occam’s razor. This facet of the second law of thermodynamics states that wherever possible, the use of energy is minimized. Physicists use Occam’s razor in the knowledge that they can rely on physical systems to use the minimum energy necessary to function. A ball at the top of a hill will roll down in order to be at the point of minimum potential energy. The same principle is present in biology. If a person repeats the same action on a regular basis in response to the same cue and reward, it will become a habit as the corresponding neural pathway is formed. From then on, their brain will use less energy to complete the same action.

The History of Occam’s Razor

The concept of Occam’s razor is credited to William of Ockham, a 14th-century friar, philosopher, and theologian. While he did not coin the term, his characteristic way of making deductions inspired other writers to develop the heuristic. Indeed, the concept of Occam’s razor is an ancient one. Aristotle produced the oldest known statement of the concept, saying, “We may assume the superiority, other things being equal, of the demonstration which derives from fewer postulates or hypotheses.”

Robert Grosseteste expanded on Aristotle’s writing in the 1200s, declaring

That is better and more valuable which requires fewer, other circumstances being equal…. For if one thing were demonstrated from many and another thing from fewer equally known premises, clearly that is better which is from fewer because it makes us know quickly, just as a universal demonstration is better than particular because it produces knowledge from fewer premises. Similarly, in natural science, in moral science, and in metaphysics the best is that which needs no premises and the better that which needs the fewer, other circumstances being equal.

Nowadays, Occam’s razor is an established mental model which can form a useful part of a latticework of knowledge.


Examples of the Use of Occam’s Razor

The Development of Scientific Theories

Occam’s razor is frequently used by scientists, in particular for theoretical matters. The simpler a hypothesis is, the more easily it can be proven or falsified. A complex explanation for a phenomenon involves many factors which can be difficult to test or lead to issues with the repeatability of an experiment. As a consequence, the simplest solution which is consistent with the existing data is preferred. However, it is common for new data to allow hypotheses to become more complex over time. Scientists choose to opt for the simplest solution as the current data permits, while remaining open to the possibility of future research allowing for greater complexity.

The version used by scientists can best be summarized as:

When you have two competing theories that make exactly the same predictions, the simpler one is better.

The use of Occam’s razor in science is also a matter of practicality. Obtaining funding for simpler hypotheses tends to be easier, as they are often cheaper to prove.

Albert Einstein referred to Occam’s razor when developing his theory of special relativity. He formulated his own version: “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Or, “Everything should be made as simple as possible, but not simpler.”

The physicist Stephen Hawking advocates for Occam’s razor in A Brief History of Time:

We could still imagine that there is a set of laws that determines events completely for some supernatural being, who could observe the present state of the universe without disturbing it. However, such models of the universe are not of much interest to us mortals. It seems better to employ the principle known as Occam’s razor and cut out all the features of the theory that cannot be observed.

Isaac Newton used Occam’s razor too when developing his theories. Newton stated: “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” He sought to make his theories, including the three laws of motion, as simple as possible, with only the necessary minimum of underlying assumptions.

Medicine

Modern doctors use a version of Occam’s razor, stating that they should look for the fewest possible causes to explain their patient’s multiple symptoms, and give preference to the most likely causes. A doctor we know often repeats the aphorism that “common things are common.” Interns are instructed, “when you hear hoofbeats, think horses, not zebras.” For example, a person displaying influenza-like symptoms during an epidemic would be considered more likely to be suffering from influenza than an alternative, rarer disease. Making minimal diagnoses reduces the risk of over-treating a patient, causing panic, or causing dangerous interactions between different treatments. This is of particular importance within the current medical model, where patients are likely to see numerous health specialists and communication between them can be poor.

Prison Abolition and Fair Punishment

Occam’s razor has long played a role in attitudes towards the punishment of crimes. In this context, it refers to the idea that people should be given the least punishment necessary for their crimes. This is to avoid the excessive penal practices which were popular in the past. For example, a 19th-century English convict could receive five years of hard labor for stealing a piece of food.

The concept of penal parsimony was pioneered by Jeremy Bentham, the founder of utilitarianism. He held that punishments should not cause more pain than they prevent. Life imprisonment for murder could be seen as justified in that it might prevent a great deal of potential pain, should the perpetrator offend again. On the other hand, long-term imprisonment of an impoverished person for stealing food causes substantial suffering without preventing any.

Bentham’s writings on the application of Occam’s razor to punishment led to the prison abolition movement and many modern ideas related to rehabilitation.

Exceptions and Issues

It is important to note that, like any mental model, Occam’s razor is not foolproof. Use it with care, lest you cut yourself. This is especially crucial when it comes to important or risky decisions. There are exceptions to any rule, and we should never blindly follow the results of applying a mental model which logic, experience, or empirical evidence contradict. When you hear hoofbeats behind you, in most cases you should think horses, not zebras—unless you are out on the African savannah.

Furthermore, simple is as simple does. A conclusion can’t rely just on its simplicity. It must be backed by empirical evidence. And when using Occam’s razor to make deductions, we must avoid falling prey to confirmation bias. In the case of the NASA moon landing conspiracy theory, for example, some people consider it simpler for the moon landing to have been faked, others for it to have been real. Lisa Randall best expressed the issues with the narrow application of Occam’s razor in her book, Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe:

Another concern about Occam’s Razor is just a matter of fact. The world is more complicated than any of us would have been likely to conceive. Some particles and properties don’t seem necessary to any physical processes that matter—at least according to what we’ve deduced so far. Yet they exist. Sometimes the simplest model just isn’t the correct one.

This is why it’s important to remember that opting for simpler explanations still requires work. They may be easier to falsify, but falsifying them still requires effort. And the simpler explanation, although more likely to be correct, is not always true.

Occam’s razor is not intended to be a substitute for critical thinking. It is merely a tool to help make that thinking more efficient. Harlan Coben has disputed many criticisms of Occam’s razor by stating that people fail to understand its exact purpose:

Most people oversimplify Occam’s razor to mean the simplest answer is usually correct. But the real meaning, what the Franciscan friar William of Ockham really wanted to emphasize, is that you shouldn’t complicate, that you shouldn’t “stack” a theory if a simpler explanation was at the ready. Pare it down. Prune the excess.

Remember, Occam’s razor is complemented by other mental models, including the fundamental attribution error, Hanlon’s razor, confirmation bias, the availability heuristic, and hindsight bias. The nature of mental models is that they tend to all interlock and work best in conjunction.