Category: Mental Models

Descriptions Aren’t Prescriptions

When we look at a representation of reality, we can choose to either see it as descriptive, meaning it tells us what the world is currently like, or as prescriptive, meaning it tells us how the world should be. Descriptions teach us, but they also give us room to innovate. Prescriptions can get us stuck. One place this tension shows up is in language.

In one chapter of The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy, David Graeber describes his experience of learning Malagasy, the national language of Madagascar. While the language’s writing system came about in the fifteenth century, it wasn’t until the early nineteenth century that missionaries documented the rules of Malagasy grammar for the purpose of translating scripture.

Of course, the “rules” of Malagasy the missionaries recorded weren’t rules at all. They were reflections of how people spoke at that point in time, as far as outside observers could tell. Languages don’t usually come into existence when someone invents the rules for them. Instead, languages evolve and change over time as speakers make modifications or respond to new needs.

However, those early nineteenth-century records remained in place as the supposed “official” version of Malagasy. Children learned the old form of grammar in school, even as they spoke a somewhat different version of the language at home. For Graeber, learning to speak the version of Malagasy people actually understood in conversation was a challenge. Native speakers he hired would instruct him on the nineteenth-century grammatical principles, then turn and speak to each other in a whole other fashion.

When asked why they couldn’t teach him the version of the language they spoke, Graeber’s Malagasy teachers responded that they were just using slang. Asked why no one seemed to speak the official version, they said people were too lazy. Graeber writes, “Clearly the problem was that the entire population had failed to memorize their lessons properly. But what they were actually denying was the legitimacy of collective creativity, the free play of the system.” While the official rules stayed the same over the decades, the language itself kept evolving. People assumed the fault of not speaking “proper” Malagasy lay with them, not with the outdated dictionary and grammar. They mistook a description for a prescription. He writes:

It never seems to occur to anyone—until you point it out—that had the missionaries come and written their books two hundred years later, current usages would be considered the correct ones, and anyone speaking as they had two hundred years ago would themselves be assumed to be in error.

Graeber sees the same phenomenon playing out in other languages for which grammars and dictionaries only came into existence a century or two ago. Often, such languages were mostly spoken and, like Malagasy, no one made formal records until the need arose for people from elsewhere to make translations. Instead of treating those records as descriptive and outdated, those teaching the language treat them as prescriptive—despite knowing they’re not practical for everyday use.

***

Why don’t people talk “proper”?

So why can’t people just speak a language per the official rules? If someone has gone to all the effort of identifying and recording the rules and people received instruction on them in school, why not follow them? Why keep changing things up?

If languages didn’t evolve, it would make life a lot easier for historians looking at texts from the past. It would also simplify matters for people learning the language, for those coming from different areas, and even for speakers across generations. Yet all languages change all the time.

Graeber suggests the reason for this is that people like to play. We find it dull to speak according to the official rules of our language. We seek out novelty in our everyday lives and do whatever it takes to avoid boredom. Even if each person only plays a little bit once in a while, the results compound. Graeber explains that “this playing around will have cumulative effects.”

Languages still need conventions so people can understand each other. The higher the similarity between the versions of a language different people speak, the more they can communicate. At the same time, they cannot remain rigid. Trying to follow an unyielding set of strict rules will inevitably curtail the usefulness of a language and prevent it from developing in interesting and necessary ways. Languages need a balance: enough guidance to help everyone understand each other and provide an entry point for learners, and enough flexibility to keep updating the rules as actual usage changes.

As a result, languages call into question our idea of freedom: “It’s worth thinking about language for a moment, because one thing it reveals, probably better than any other example, is that there is a basic paradox in our very idea of freedom. On the one hand, rules are by their nature constraining. Speech codes, rules of etiquette, and grammatical rules, all have the effect of limiting what we can and cannot say.” On the other hand, no rules whatsoever mean no one can understand each other.

Languages need frameworks, but no amount of grammar classes or official dictionaries will prevent people from playing and having fun with their speech.

***

The dictionary is not the language

“The map is not the territory” means that any representation of reality has to be a simplification that may contain errors, become outdated, or reflect biases. Maps remove details that aren’t necessary for their intended use. Representations of complex systems may show expected behavior or ideal behavior. For example, the London Underground map doesn’t reflect the distances between stations because this information isn’t important to most commuters. If a map represented its territory without reducing anything, it would be identical to the territory and therefore would be useless. In fact, the simplest maps can be the most useful because they’re the easiest to understand and remember.

Sometimes maps are descriptive, and sometimes they’re prescriptive; often they’re a bit of both. We run into problems when we confuse one type for another and try to navigate an idealized territory or make the real territory fit an idealized image.

A language’s grammar and dictionary are a sort of map. They take a complex system—a language spoken by what could be tens of millions of people—and aim to represent it with something which is, by comparison, simple. The official rules are not the language itself, but they provide guidance for navigating it. Much like a map of a city needs periodic updates as parts are torn down, built up, renamed, destroyed, added, and so on, the official rules need updating as the language changes. Trying to learn Malagasy using grammar rules written two hundred years ago is like trying to navigate Antananarivo using a street map made two hundred years ago.

A map of a complex system, like a language, is meant to help us find our way by giving us a sense of how things looked at one point in time—it’s usually descriptive. It doesn’t necessarily tell us how that system should look, and we may run into problems if we try to make it conform to the map, ignoring the system’s own adaptive properties. Even if the cartographer never intended this, we can end up treating a map as a prescription. We try to make reality conform to the map. This is what occurs with languages. Graeber calls this the “grammar-book effect”:

People do not invent languages by writing grammars, they write grammars—at least, the first grammars to be written for any given language—by observing the tacit, largely unconscious rules that people seem to be employing when they speak. Yet once a book exists, and especially once it is employed in schoolrooms, people feel that the rules are not just descriptions of how people do talk, but prescriptions for how they should talk.

As we’ve seen, one reason the map is not the territory with language is because people feel compelled to play and experiment. When we encounter representations of systems involving people, we should keep in mind that while we may need rules for the sake of working together and understanding each other, we’re always pushing up against and reshaping those rules. We find it boring to follow a rigid prescription.

For instance, imagine some of the documents you might receive upon starting a role at a new company: process documents showing, step by step, how to do the main tasks you’ll be expected to perform. But when the person you’re replacing shows you how to do those same tasks, you notice they don’t follow the listed steps at all. When you ask why, they explain that the process documents were written before they started actually carrying out those tasks, and that they discovered more efficient ways afterward.

Why keep the process documents, then? Because for someone filling in or starting out, it might make sense to follow them. It’s the most defensible option. Once you truly know the territory, and won’t change something without first considering why it’s there, you can start to play with the rules. Those documents might be useful as a description, but they’re unlikely to remain a prescription for long.

The same is true for laws. Sometimes aspects of them merely describe how things were at one point in time, but we end up having to keep following them to the letter because they haven’t been updated. A law might have been written at a time when documents needed to be sent by letter, which meant allowing certain delays for delivery. Now they can be sent by email. If the law hasn’t been updated, those delay allowances turn from descriptions into prescriptions. Or a law might reflect what people were permitted to do at the time, but now we assume people should have the right to do that thing even if we have new evidence it’s not the best idea. We are less likely to change laws if we persist in viewing them as prescriptive.

***

Conclusion

Descriptions of reality are practical for helping us navigate it, while also giving us room to change things. Prescriptions are helpful for giving us ways of understanding each other and providing enough structure for shared conventions, but they can also become outdated or end up limiting flexibility. When you encounter a representation of something, it’s useful to consider which parts are descriptive and which parts are prescriptive. Remember that both prescriptions and descriptions can and should change over time.

***

The FS team were saddened to hear of David Graeber’s passing, shortly after we completed this article. We hope his books will continue to inspire and educate new readers for many years to come.

What Sharks Can Teach Us About Survivorship Bias

Survivorship bias refers to the idea that we get a false representation of reality when we base our understanding only on the experiences of those who live to tell their story. Taking a look at how we misrepresent shark attacks highlights how survivorship bias distorts reality in other situations.

When asked what the deadliest shark is to humans, most people will say the great white. The lasting influence of the movie Jaws, reinforced by dozens of pop culture references and news reports, keeps that species top of mind when one considers the world’s most fearsome predators. While it is true that great white sharks do attack humans (rarely), they also leave a lot of survivors. And they’re not after humans in particular. They usually just mistake us for seals, one of their key food sources.

We must be careful to not let a volume of survivors in one area blind us to the stories of a small number of survivors elsewhere. Most importantly, we need to ask ourselves what stories are not being told because no one is around to tell them. The experiences of the dead are necessary if we want an accurate understanding of the world.

***

Before we drill down into some interesting statistics, it’s important to understand that great whites are one of a group of large predatory sharks with many common characteristics. Great whites share similar habitats, physiology, and instincts with tiger and bull sharks. All three are large, with an average length of over ten feet.

Tiger and bull sharks rarely attack humans, and to someone being bitten by one of these huge creatures, there isn’t all that much difference between them. The Florida Museum’s International Shark Attack File explains that “positive identification of attacking sharks is very difficult since victims rarely make adequate observations of the attacker during the ‘heat’ of the interaction. Tooth remains are seldom found in wounds and diagnostic characters for many requiem sharks [a group that includes the tiger and bull] are difficult to discern even by trained professionals.”

The fatality rate in known attacks is 21.5% for the bull shark, 16% for the great white, and 26% for the tiger shark. But in sheer volume, attacks attributed to great whites outnumber the other two species three to one. So there are three times as many survivors to tell the story of their great white attack.
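To see how the volume effect plays out, here is a minimal sketch in Python. The attack counts are hypothetical, chosen only to match the rough three-to-one ratio mentioned above; the fatality rates are the ones quoted in the text.

```python
# Hypothetical attack counts reflecting the ~3:1 ratio in the text;
# fatality rates as quoted above.
species = {
    # name: (assumed attacks, fatality rate)
    "great white": (300, 0.16),
    "bull shark": (100, 0.215),
    "tiger shark": (100, 0.26),
}

for name, (attacks, fatality) in species.items():
    survivors = attacks * (1 - fatality)
    print(f"{name}: ~{survivors:.0f} survivors to tell the tale")

# great white: ~252 survivors to tell the tale
# bull shark:  ~78 survivors to tell the tale
# tiger shark: ~74 survivors to tell the tale
```

Even though the tiger shark is deadlier in any given attack, the great white generates more than three times as many survivor stories, which is exactly the imbalance our perception feeds on.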

***

When it comes to our picture of which shark is most dangerous, there are other blind spots. Not all sharks share the behaviors of those three, such as swimming close to shore and being around enough prey to develop a preference for fat seals over bony humans. Pelagic sharks live in the water desert that is the open ocean and have to eat pretty much whatever they can find. The oceanic white tip is a pelagic shark that is probably far more dangerous to humans—we just don’t come into contact with them as often.

There are only fifteen documented attacks by an oceanic white tip, with three of those being fatal. But since most attacks occur in the open ocean in more isolated situations (e.g., a couple of people on a boat versus five hundred people swimming at a beach), we really have no idea how dangerous oceanic white tips are. There could be hundreds of undocumented attacks that left behind no survivors to tell the tale.

One famous survivor story gives us a glimpse of how dangerous oceanic white tips might be. In 1945, a Japanese submarine torpedoed and sank the USS Indianapolis. For a multitude of reasons, partly because the Indianapolis was on a top-secret mission and partly because of tragic incompetence, a rescue ship was not sent for four days. Those who survived the ship’s sinking then had to survive in the open ocean with little gear until rescue arrived. The water was full of sharks.

In Indianapolis: The True Story of the Worst Sea Disaster in US Naval History and the Fifty-Year Fight to Exonerate an Innocent Man, Lynn Vincent and Sara Vladic quote Boatswain’s Mate Second Class Eugene Morgan as he described part of his experience: “All the time, the sharks never let up. We had a cargo net that had Styrofoam things attached to keep it afloat. There were about fifteen sailors on this, and suddenly, ten sharks hit it and there was nothing left. This went on and on.” These sharks are believed to have been oceanic white tips. It’s unknown how many men died from shark attacks; many also perished from exposure, dehydration, injury, and exhaustion. Of the 1,195 crewmen originally aboard, only 316 survived. It remains the greatest loss of life at sea from a single ship in US naval history.

Because humans are rarely in the open ocean in large numbers, not only are attacks by this shark less common, there are also fewer survivor stories. The story of the USS Indianapolis is a rare, brutal case that provides a unique picture.

***

Our estimation of the shark that could do us the most harm is often formed by survivorship bias. We develop an inaccurate picture based on the stories of those who live to tell the tale of their shark attack. We don’t ask ourselves who didn’t survive, and so we miss out on the information we need to build an accurate picture of reality.

The point is not to shift our fear to oceanic white tips, which are, in fact, critically endangered. Our fear of sharks seems to make us indifferent to what happens to them, even though they are an essential part of the ocean ecosystem. We are also much more of a danger to sharks than they are to us. We kill them by the millions every year. Neither should we shift our fear to other, more lethal animals, which will likely result in the same indifference to their role in the ecosystem.

The point is rather to consider how well you make decisions when you only factor in the stories of the survivors. For instance, if you were trying to reduce instances of shark attacks or limit their severity, you would be unlikely to get the results you are after if you only paid attention to the survivor stories. You need to ask who didn’t make it and try to figure out their stories as well. If you implement measures aimed only at great whites near beaches, your measures might not be effective against other predatory sharks. And if you conclude that swimmers are better off in the open ocean because sharks seem to only attack near beaches, you’d be completely wrong.

***

Survivorship bias crops up all over our lives and impedes us from accurately assessing danger. Replace “dangerous sharks” with “dangerous cities” or “dangerous vacation spots” and you can easily see how your picture of a certain location might be skewed based on the experiences of survivors. We can’t be afraid of a tale if no one lives to tell it. More survivors can make something seem more dangerous rather than less dangerous because the volume of stories makes them more memorable.

If fewer people survived shark attacks, we wouldn’t have survivor stories influencing our perception of how dangerous sharks are. In all likelihood, we would attribute some of the ocean deaths to other causes, like drowning, because it wouldn’t occur to us that sharks could be responsible.

Understanding survivorship bias prompts us to look for the stories of those who weren’t successful. A lack of visible survivors with memorable stories might mean we view some fields as far safer and easier than they are.

For example, a field of business where people who experience failures go on to do other things might seem riskier than one where people who fail are too ashamed to talk about it. The failure of tech start-ups sometimes feels like daily news. We don’t often, however, hear about the real estate agent who has trouble making sales or who keeps getting outbid on offers. Nor do we hear much about architects who design terrible houses or construction companies that don’t complete projects.

Survivorship bias prompts us to associate more risk with industries that exhibit more public failures. But the failures that never get shared are equally important. If we focus only on the survivor stories, we might think that being a real estate agent or an architect is safer than starting a technology company. It might be, but we can’t base our understanding of which career option is the best bet only on the stories of failure that happen to be widely shared.

If we don’t factor survivorship bias into our thinking, we end up in a classic “map is not the territory” problem. The survivor stories become a poor navigational tool for the terrain.

Most of us know that we shouldn’t become a writer based on the results achieved by J.K. Rowling and John Grisham. But even if we go out and talk to other writers, or learn about their careers, or attend writing seminars given by published authors, we are still only talking to the survivors.

Yes, it’s super inspiring to know Stephen King collected so many rejection slips early in his career that the nail he hung them on could no longer hold their weight. But what about the writers who got just as many rejections and never published anything? Not only can we learn a lot from them about the publishing industry, we need to consider their experiences if we want to anticipate and understand the challenges involved in being a writer.

***

Not recognizing survivorship bias can lead to faulty decision making. We don’t see the big picture and end up optimizing for a small slice of reality. We can’t completely overcome survivorship bias. The best we can do is acknowledge it, and when the stakes are high or the result important, stop and look for the stories of those who were unsuccessful. They have just as much, if not more, to teach us.

The next time you’re assessing risk, ask yourself: am I paying too much attention to the great white sharks and not enough to the oceanic white tips?

Mental Models For a Pandemic

Mental models help us understand the world better, something which is especially valuable during times of confusion, like a pandemic. Here’s how to apply mental models to gain a more accurate picture of reality and keep a cool head.

***

It feels overwhelming when the world changes rapidly, abruptly, and extensively. The changes come so fast it can be hard to keep up—and the future, which a few months ago seemed reliable, now has so many unknown dimensions. In the face of such uncertainty, mental models are valuable tools for helping you think through significant disruptions such as a pandemic.

A mental model is simply a representation of how something works. Mental models are how we simplify complexity, why we consider some things more relevant than others, and how we reason. Using them increases your clarity of understanding, providing direction for the choices you need to make and the options you want to keep open.

Models for ourselves

During a pandemic, a useful model is “the map is not the territory.” In rapidly changing situations like a global health crisis, any reporting is an incomplete snapshot in time. Our maps are going to be inaccurate for many reasons: limited testing availability, poor reporting, ineffective information sharing, lack of expertise in analyzing the available information. The list goes on.

If past reporting hasn’t been completely accurate, then why would you assume current reporting is? You have to be careful when interpreting the information you receive, using it as a marker to scope out a range of what is happening in the territory.

In our current pandemic, we can easily spot our map issues. There aren’t enough tests available in most countries. Because COVID-19 causes only mild symptoms in the majority of people who contract it, there are likely many people who get it but never meet the testing criteria. Therefore, we don’t know how many people have it.

When we look at country-level reporting, we can also see not all countries are reporting to the same standard. Sometimes this isn’t a matter of “better” or “worse”; there are just different ways of collating the numbers. Some countries don’t have the infrastructure for widespread data collection and sharing. Different countries also have different standards for what counts as a death caused by COVID-19.

In other nations, incentives affect reporting. Some countries downplay their infection rate so as to not create panic. Some governments avoid reporting because it undermines their political interests. Others are more worried about the information on the economic map than the health one.

Although it is important to be realistic about our maps, it doesn’t mean we shouldn’t seek to improve their quality. Paying attention to information from experts and ignoring unverified soundbites is one step to increasing the accuracy of our maps. The more accurate we can get them, the more likely it is that we’ll be able to unlock new possibilities that help us deal with the crisis and plan for the future.

There are two models that we can use to improve the effectiveness of the maps we do have: “compounding” and “probabilistic thinking.”

Compounding is exponential growth, something a lot of us tend to have a poor intuitive grasp of. We see the immediate linear relationships in the situation, like how one test diagnoses one person, while missing the compounding effects of that relationship. Increased testing can lead to an exponential decrease in virus transmission because each infected person usually passes the virus on to more than one other person.

One of the clearest illustrations of exponential growth is the story of the servant who asked to be paid in rice. In this story, a servant is to be rewarded for his service. When asked how he wants to be paid, he asks to be paid in rice, using a chessboard to determine the final amount. Starting with one grain, the amount of rice is to be doubled for each square. One grain on the first square looks pathetic. But halfway through the chessboard, the servant is earning a good yearly living. And by the time the doubling reaches the sixty-fourth square, the servant is owed more rice than the whole world can produce.
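The numbers are easy to check with a few lines of Python (the 25 mg figure for the weight of a grain of rice is an assumption for illustration):

```python
# One grain on the first square, doubling on each of the 64 squares.
grains_on_last_square = 2 ** 63     # the 64th square alone
total_grains = 2 ** 64 - 1          # sum of all 64 squares

print(f"{grains_on_last_square:,}") # 9,223,372,036,854,775,808
print(f"{total_grains:,}")          # 18,446,744,073,709,551,615

# At an assumed 25 mg per grain, the total is roughly 4.6e14 kg of
# rice -- hundreds of times the world's current annual production.
```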

Improving our ability to think exponentially helps us understand how more testing can lead to both an exponential decrease in testing prices and an exponential increase in the production of those tests. It also makes clear just how far-reaching the impact of our actions can be if we don’t take precautions with the assumption that we could be infected.

Probabilistic thinking is also invaluable in helping us make decisions based on the incomplete information we have. In the absence of enough testing, for example, we need to use probabilistic thinking to make decisions on what actions to pursue. We ask ourselves questions like: Do I have COVID-19? If there’s a 1% chance I have it, is it worth visiting my grandparents?

Being able to evaluate reasonable probability has huge impacts on how we approach physical distancing. Combining the models of probabilistic thinking and map is not the territory suggests our actions need to be guided by infection numbers much higher than the ones we have. We are likely to make significantly different social decisions if we estimate the probability of infection as being three people out of ten instead of one person out of one thousand.

Bayesian updating can also help clarify the physical distancing actions you should take. There’s a small probability of being part of a horrendous chain of events that might not just have poor direct consequences but also follow you for the rest of your life. When evaluating how responsibly you are limiting transmission, ask yourself: would you bet a loved one’s life on it?
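Here is what a single Bayesian update might look like in practice, sketched in Python. All of the probabilities are made up for illustration; the point is the mechanics of revising a belief as evidence arrives, not the specific numbers.

```python
# A minimal Bayesian update: how much should a mild symptom shift
# my estimate that I'm infected? All numbers are illustrative.
prior = 0.01                  # assumed baseline chance of infection
p_symptom_if_infected = 0.60  # assumed
p_symptom_if_healthy = 0.10   # assumed (colds, allergies, etc.)

# Bayes' rule: P(infected | symptom)
numerator = p_symptom_if_infected * prior
evidence = numerator + p_symptom_if_healthy * (1 - prior)
posterior = numerator / evidence

print(f"posterior: {posterior:.1%}")  # posterior: 5.7%
```

Even a weak signal multiplies the odds roughly sixfold here, yet infection remains unlikely. The value of the model is that it forces you to revise your estimate by the right amount, rather than jumping to either panic or dismissal.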

Which leads us to Hanlon’s Razor. It’s hard not to get angry at reports of beach parties during spring break or at the guy four doors down who has his friends over to hang out every night. For your own sanity, try using Hanlon’s Razor to evaluate their behavior. They are not being malicious and trying to kill people. They are just exceptionally and tragically ignorant.

Finally, on a day-to-day basis, trying to make small decisions with incomplete information, you can use inversion. You can look at the problem backwards. When the best way forward is far from clear, you ask yourself what you could do to make things worse, and then avoid doing those things.

Models for society

Applying mental models aids in understanding the dynamics of the large-scale social response.

Currently we are seeing counterintuitive measures with first-order negatives (closing businesses) but second- and third-order positives (reduced transmission, less stress on the healthcare system). Second-order thinking is an invaluable tool at all times, including during a pandemic. It’s so important that we encourage the thinking, analysis, and decision-making that factor in the effects of the effects of the decisions we make.

In order to improve the maps that our leaders have to make decisions, we need to sort through the feedback loops providing the content. If we can improve not only the feedback but also the pace of iterations, we have a better chance of making good decisions.

For example, if we improve the rate of testing and the speed of the results, it would be a major game-changer. Imagine if finding out whether or not you had the virus were as simple as a $0.01 test that gave you a result in less than a minute. In that case, we could make different decisions about social openness, even in the absence of a vaccine (though the tracking this would require could have invasive privacy implications).

As we watch the pandemic and its consequences unfold, it becomes clear that leadership and authority are not the same thing. Our hierarchical instincts emerge strongly in times of crisis. Leadership vacuums, then, are devastating, and disasters expose the cracks in our hierarchies. However, we also see that people can display strong leadership without needing any authority. A pandemic provides opportunities for such leadership to emerge at community and local levels, providing alternate pathways for meeting the needs of many.

One critical model we can use to look at society during a pandemic is Ecosystems. When we think about ecosystems, we might imagine a variety of organisms interacting in a forest or the ocean. But our cities are also ecosystems, as is the earth as a whole. Understanding system dynamics can give us a lot of insight into what is happening in our societies, both at the micro and macro level.

One property of ecosystems that is useful to contemplate in situations like a pandemic is resilience—the speed at which an ecosystem recovers after a disturbance. There are many factors that contribute to resilience, such as diversity and adaptability. Looking at our global situation, one factor threatening to undermine our collective resilience is that our economy has rewarded razor-thin efficiency in the recent past. The problem with thin margins is they offer no buffer in the face of disruption. Therefore, ecosystems with thin margins are not at all resilient. Small disturbances can bring them down completely. And a pandemic is not a small disturbance.

Some argue that what we are facing now is a Black Swan: an unpredictable event beyond normal expectations with severe consequences. Most businesses are not ready to face one. You could argue that an economic recession is not a black swan, but the particular shape of this pandemic is testing the resiliency of our social and economic ecosystems regardless. The closing of shops and businesses, causing huge disruption, has exposed fragile supply chains. We just don’t see these types of events often enough, even if we know they’re theoretically possible, so we don’t prepare for them. We don’t or can’t create big enough personal and social margins of safety. Individuals and businesses don’t have enough money in the bank. We don’t have enough medical facilities and supplies. Instead, we have optimized for a narrow range of possibilities, compromising the resilience of the systems we rely on.

Finally, as we look at the role national borders are playing during this pandemic, we can use the Thermodynamics model to gain insight into how to manage flows of people during and after restrictions. Insulation requires a lot of work, as we are seeing with our borders and the subsequent effect on our economies. It’s unsustainable for long periods of time. Just like how two objects of different temperatures that come into contact with each other eventually reach thermal equilibrium, people will mix with each other. All borders have openings of some sort. It’s important to extend planning to incorporate the realistic tendencies of reintegration.

Some final thoughts about the future

As we look for opportunities about how to move forward both as individuals and societies, Cooperation provides a useful lens. Possibly more critical to evolution than competition, cooperation is a powerful force. It’s rampant throughout the biological world; even bacteria cooperate. As a species, we have been cooperating with each other for a long time. All of us have given up some independence for access to resources provided by others.

Pandemics are intensified because of connection. But we can use that same connectivity to mitigate some negative effects by leveraging our community networks to create cooperative interactions that fill gaps in the government response. We can also use the cooperation lens to create more resilient connections in the future.

Finally, we need to ask ourselves how we can improve our antifragility. How can we get to a place where we grow stronger through change and challenge? It’s not about getting “back to normal.” The normal that was our world in 2019 has proven to be fragile. We shouldn’t want to get back to a time when we were unprepared and vulnerable.

Existential threats are a reality of life on earth. One of the best lessons we can learn is to open our eyes and integrate planning for massive change into how we approach our lives. This will not be the last pandemic, no matter how careful we are. The goal now should not be about assigning blame or succumbing to hindsight bias to try to implement rules designed to prevent a similar situation in the future. We will be better off if we make changes aimed at increasing our resilience and embracing the benefits of challenge.

Still curious? Learn more by reading The Great Mental Models.

Using Models to Stay Calm in Charged Situations

When polarizing topics are discussed in meetings, passions can run high and cloud our judgment. Learn how mental models can help you see clearly from this real-life scenario.

***

Mental models can sometimes come off as an abstract concept. They are, however, actual tools you can use to navigate through challenging or confusing situations. In this article, we are going to apply our mental models to a common situation: a meeting with conflict.

A recent meeting with the school gave us an opportunity to use our latticework. Anyone with school-age kids has dealt with the bureaucracy of a school system and the other parents who interact with it. Call it what you will, most school environments have some formal interface between parents and the school administration aimed at advancing issues and ideas of importance to the school community.

This particular meeting was an intense one. At issue was the school’s communication around a potentially harmful leak in the heating system. Some parents felt the school had communicated reasonably about the problem and the potential consequences. Others felt their child’s life had been put in danger due to potential exposure to mold and asbestos. Some parents felt the school could have done a better job of soliciting feedback from students about their experiences during the previous week, and others felt the school administration had done a poor job of communicating potential risks to parents.

The first thing you’ll notice if you’re in a meeting like this is that emotions on all sides run high. After some discussion you might also notice a few more things, like how many people do the following:

  • Assume the worst about the intentions of those on the other side
  • Treat their own perspective as the only reasonable one
  • Draw sweeping conclusions from one or two anecdotes
  • Ignore base rates and the broader system the school operates within

Hearing statements like these from people around the table is a great indication that using a few mental models might improve the dynamics of the situation.

The first mental model that is invaluable in situations like this is Hanlon’s Razor: don’t attribute to maliciousness that which is more easily explained by incompetence. (Hanlon’s Razor is one of the 9 general thinking concepts in The Great Mental Models Volume One.) When people feel victimized, they can get angry and lash out in an attempt to fight back against a perceived threat. When people feel accused of serious wrongdoing, they can get defensive and withhold information to protect themselves. Neither of these reactions is useful in a situation like this. Yes, sometimes people intentionally do bad things. But more often than not, bad things are the result of incompetence. In a school meeting situation, it’s safe to assume everyone at the table has the best interests of the students at heart. School staff and administrators usually go into teaching motivated by a deep love of education. They genuinely want their schools to be amazing places of learning, and they devote time and attention to improving the lives of their students.

It makes no sense to assume a school’s administration would deliberately withhold harmful information. Yes, it could happen. But you are going to obtain more valuable information if you assume poor decisions were the result of incompetence rather than maliciousness.

When we feel people are malicious toward us, we instinctively become a negatively coiled spring, waiting for the right moment to take them down a notch or two. By removing malice from the equation, you give yourself emotional breathing room to work toward better solutions and apply more models.

The next helpful model is relativity, adapted from the laws of physics. This model is about remembering that everyone’s perspective is different from yours. Understanding how others see the same situation can help you move toward a more meaningful dialogue with the people in the meeting. You can do this by looking around the room and asking yourself what is influencing people’s approaches to the situation.

In our school meeting, we see some people are afraid for their child’s health. Others are influenced by past dealings with the school administration. Authorities are worried about closing the school. Teachers are concerned about how missed time might impact their students’ learning. Administrators are trying to balance the needs of parents with their responsibility to follow the necessary procedures. Some parents are stressed because they don’t have care for their children when the school closes. There is a lot going on, and relativity gives us a lens to try to identify the dynamics impacting communication.

After understanding the different perspectives, it becomes easier to incorporate them into your thinking. You can defuse conflict by identifying what it is you think you hear. Often, just the feeling of being heard will help people start to listen and engage more objectively.

Now you can dive into some of the details. First up is probabilistic thinking. Before we worry about mold levels or sick children, let’s try to identify the base rates. What is the mold content in the air outside? How many children are typically absent due to sickness at this time of year? Reminding people that severity has to be evaluated against something can really help defuse stress and concern. If 10% of the student population is absent on any given day, and in the week leading up to these events 12% to 13% of the population was absent, then it turns out we are not actually dealing with a huge statistical anomaly.
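A quick back-of-the-envelope check shows how to put numbers on that intuition. The sketch below assumes a hypothetical school of 500 students; note that the simple binomial model here understates real-world variation, since absences are correlated (siblings, seasonal colds), so the bar for a true anomaly is even higher.

```python
# Is 12-13% absence an anomaly if the usual rate is 10%?
# Normal approximation to the binomial, hypothetical enrollment.
import math

n = 500           # assumed number of students
base_rate = 0.10  # typical daily absence rate
observed = 0.125  # midpoint of the 12-13% range above

std_error = math.sqrt(base_rate * (1 - base_rate) / n)
z_score = (observed - base_rate) / std_error
print(f"standard error: {std_error:.3f}, z-score: {z_score:.1f}")
# standard error: 0.013, z-score: 1.9
```

A z-score around two is noticeable but hardly proof of a crisis, and once you account for correlated absences, the effective spread is wider still.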

Then you can evaluate the anecdotes with the Law of Large Numbers in mind. Small sample sizes can be misleading. The larger your group for evaluation, the more relevant the conclusions. In a situation such as our school council meeting, small sample sizes only serve to ratchet up the emotion by implying that isolated incidents are the causal outcomes of recent events.

In reality, any one-off occurrence can often be explained in multiple ways. One or two children coming home with hives? There are a dozen reasonable explanations for that: allergies, dry skin, reaction to a skin cream, a symptom of an illness unrelated to the school environment, and so on. However, the more children develop hives, the more statistically likely it becomes that the cause relates to the one common denominator between all the children: the school environment.

Even then, correlation does not equal causation. It might not be a recent leaky steam pipe; is it exam time? Are there other stressors in the culture? Other contaminants in the environment? The larger your sample size, the more likely you will obtain relevant information.

Finally, you can practice systems thinking and contribute to the discussion by identifying the other components in the system you are all dealing with. After all, a school council is just one part of a much larger system involving governments, school boards, legislators, administrators, teachers, students, parents, and the community. When you put your meeting into the bigger context of the entire system, you can identify the feedback loops: Who is responding to what information, and how quickly does their behavior change? When you do this, you can start to suggest some possible steps and solutions to remedy the situation and improve interactions going forward.

How is the information flowing? How fast does it move? How much time does each recipient have to adjust before receiving more information? Chances are, you aren’t going to know all this at the meeting. So you can ask questions. Does the principal have to get approval from the school board before sending out communications involving risk to students? Can teachers communicate directly with parents? What are the conditions for communicating possible risk? Will speculation increase the speed of a self-reinforcing feedback loop causing panic? What do parents need to know to make an informed decision about the welfare of their child? What does the school need to know to make an informed decision about the welfare of their students?

In meetings like the one described here, there is no doubt that communication is important. Using the meeting to discuss and debate ways of improving communication so that outcomes are generally better in the future is a valuable use of time.

A school meeting is one practical example of how having a latticework of mental models can be useful. Using mental models can help you defuse some of the emotions that create an unproductive dynamic. They can also help you bring forward valuable, relevant information to assist the different parties in improving their decision-making process going forward.

At the very least, you will walk away from the meeting with a much better understanding of how the world works, and you will have gained some strategies you can implement in the future to leverage this knowledge instead of fighting against it.

Prisoner’s Dilemma: What Game Are You Playing?

In this classic game theory experiment, you must decide: rat out another for personal benefit, or cooperate? The answer may be more complicated than you think.

***

What does it take to make people cooperate with each other when the incentives to act primarily out of self-interest are often so strong?

The Prisoner’s Dilemma is a thought experiment originating from game theory. Designed to analyze the ways in which we cooperate, it strips away the variations between specific situations where people are called to overcome the urge to be selfish. Political scientist Robert Axelrod lays down its foundations in The Evolution of Cooperation:

Under what conditions will cooperation emerge in a world of egoists without a central authority? This question has intrigued people for a long time. And for good reason. We all know that people are not angels and that they tend to look after themselves and their own first. Yet we also know that cooperation does occur and that our civilization is based on it. But in situations where each individual has an incentive to be selfish, how can cooperation ever develop?

…To make headway in understanding the vast array of specific situations which have this property, a way is needed to represent what is common to these situations without becoming bogged down in the details unique to each…the famous Prisoner’s Dilemma game.

The thought experiment goes as such: two criminals are in separate cells, unable to communicate, accused of a crime they both participated in. The police do not have enough evidence to convict them both, though they are certain enough of their involvement to want to ensure both spend time in prison. So they offer the prisoners a deal. Each can accuse the other of the crime, with the following conditions:

  • If both prisoners say the other did it, each will serve two years in prison.
  • If one prisoner says the other did it and the other stays silent, the accused will serve three years and the accuser zero.
  • If both prisoners stay silent, each will serve one year in prison.

In game theory, the altruistic behavior (staying silent) is called “cooperating,” while accusing the other is called “defecting.”

What should they do?

If they were able to communicate and they trusted each other, the rational choice would be to stay silent; that way each serves less time in prison than they would otherwise. But how can each know the other won’t accuse them? After all, people tend to act out of self-interest. The cost of being the only one to stay silent is too high. The expected outcome when the game is played is that both accuse the other and serve two years. (In the real world, we doubt it would end there. After they served their time, it’s not hard to imagine each of them still being upset. Two years is a lot of time for a spring to coil in a negative way. Perhaps they spend the rest of their lives sabotaging each other.)
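The payoff structure is small enough to check by brute force. This minimal sketch in Python scores each pair of moves in prison years, taken from the conditions above, and shows why defecting dominates:

```python
# Prison years for (my move, their move), from the conditions above.
YEARS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"): (3, 0),
    ("defect", "cooperate"): (0, 3),
    ("defect", "defect"): (2, 2),
}

# Whatever the other prisoner does, defecting is the better reply.
for their_move in ("cooperate", "defect"):
    best = min(("cooperate", "defect"),
               key=lambda my_move: YEARS[(my_move, their_move)][0])
    print(f"if they {their_move}, my best reply is to {best}")

# if they cooperate, my best reply is to defect   (0 < 1 years)
# if they defect, my best reply is to defect      (2 < 3 years)
```

Since defection is each player’s best reply to anything the other does, both defect, and both end up worse off than if they had cooperated. That is the whole dilemma in four dictionary entries.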

The Iterated Prisoner’s Dilemma

A more complex form of the thought experiment is the iterated Prisoner’s Dilemma, in which we imagine the same two prisoners being in the same situation multiple times. In this version of the experiment, they are able to adjust their strategy based on the previous outcome.

If we repeat the scenario, it may seem as if the prisoners will begin to cooperate. But this doesn’t make sense in game theory terms. When they know how many times the game will repeat, both have an incentive to accuse on the final round, seeing as there can be no retaliation. Knowing the other will surely accuse on the final round, both have an incentive to accuse on the penultimate round—and so on, back to the start.

In Business Economics, Gregory Mankiw summarizes how difficult it is to maintain cooperation:

To see how difficult it is to maintain cooperation, imagine that, before the police captured . . . the two criminals, [they] had made a pact not to confess. Clearly, this agreement would make them both better off if they both live up to it, because they would each spend only one year in jail. But would the two criminals in fact remain silent, simply because they had agreed to? Once they are being questioned separately, the logic of self-interest takes over and leads them to confess. Cooperation between the two prisoners is difficult to maintain because cooperation is individually irrational.

However, cooperative strategies can evolve if we model the game as having random or infinite iterations. If each prisoner knows they will likely interact with each other in the future, with no knowledge or expectation that their relationship will have a definite end, cooperation becomes significantly more likely. If we imagine that the prisoners will go to the same jail or will run in the same circles once released, we can understand how the incentive for cooperation might increase. If you’re a defector, running into the person you defected on is awkward at best, and leaves you sleeping with the fishes at worst.
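You can watch this effect in a simulation. The sketch below, using the same prison-year payoffs as before, pits tit-for-tat (cooperate first, then copy the opponent’s last move) against unconditional defection, with a 90% chance of another round after each one; the continuation probability and seed are illustrative choices.

```python
# Repeated prisoner's dilemma with a random horizon.
# Lower totals are better: payoffs are prison years.
import random

YEARS = {("C", "C"): (1, 1), ("C", "D"): (3, 0),
         ("D", "C"): (0, 3), ("D", "D"): (2, 2)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, p_continue=0.9, seed=1):
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    total_a = total_b = 0
    while True:
        a, b = strat_a(hist_b), strat_b(hist_a)
        years_a, years_b = YEARS[(a, b)]
        total_a, total_b = total_a + years_a, total_b + years_b
        hist_a.append(a)
        hist_b.append(b)
        if rng.random() > p_continue:
            return total_a, total_b, len(hist_a)

print(play(tit_for_tat, tit_for_tat))    # cooperators: ~1 year per round
print(play(tit_for_tat, always_defect))  # both drift toward ~2 per round
```

The defector wins the first round but then locks both players into mutual punishment. When the game is likely to continue, a pair of tit-for-tat players serves roughly half the time per round that mutual defectors do, which is Axelrod’s point about how cooperation can emerge among egoists.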

Real-world Prisoner’s Dilemmas

We can use the Prisoner’s Dilemma as a means of understanding many real-world situations based on cooperation and trust. As individuals, being selfish tends to benefit us, at least in the short term. But when everyone is selfish, everyone suffers.

In The Prisoner’s Dilemma, Martin Peterson asks readers to imagine two car manufacturers, Row Cars and Col Motors. As the only two actors in their market, the price each sells cars at has a direct connection to the price the other sells cars at. If one opts to sell at a higher price than the other, they will sell fewer cars as customers transfer. If one sells at a lower price, they will sell more cars at a lower profit margin, gaining customers from the other. In Peterson’s example, if both set their prices high, both will make $100 million per year. Should one decide to set their prices lower, they will make $150 million while the other makes nothing. If both set low prices, both make $20 million. Peterson writes:

Imagine that you serve on the board of Row Cars. In a board meeting, you point out that irrespective of what Col Motors decides to do, it will be better for your company to opt for low prices. This is because if Col Motors sets its price low, then a profit of $20 million is better than $0, and if Col Motors sets its price high, then a profit of $150 million is better than $100 million.
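Laid out as a payoff matrix (Row Cars’ profit listed first in each cell), Peterson’s numbers make the dominant strategy easy to see:

```
                    Col Motors: high        Col Motors: low
Row Cars: high      $100M, $100M            $0,    $150M
Row Cars: low       $150M, $0               $20M,  $20M
```

Reading across, low prices beat high prices for Row Cars in both columns ($150M over $100M, and $20M over $0). The same logic holds for Col Motors by symmetry, so both end up at $20M apiece.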

Gregory Mankiw gives another real-world example in Microeconomics, detailed here:

Consider an oligopoly with two members, called Iran and Saudi Arabia. Both countries sell crude oil. After prolonged negotiation, the countries agree to keep oil production low in order to keep the world price of oil high. After they agree on production levels, each country must decide whether to cooperate and live up to this agreement or to ignore it and produce at a higher level. The following image shows how the profits of the two countries depend on the strategies they choose.

Suppose you are the leader of Saudi Arabia. You might reason as follows:

I could keep production low as we agreed, or I could raise my production and sell more oil on world markets. If Iran lives up to the agreement and keeps its production low, then my country earns a profit of $60 billion with high production and $50 billion with low production. In this case, Saudi Arabia is better off with high production. If Iran fails to live up to the agreement and produces at a high level, then my country earns $40 billion with high production and $30 billion with low production. Once again, Saudi Arabia is better off with high production. So, regardless of what Iran chooses to do, my country is better off reneging on our agreement and producing at a high level.

Producing at a high level is a dominant strategy for Saudi Arabia. Of course, Iran reasons in exactly the same way, and so both countries produce at a high level. The result is the inferior outcome (from both Iran and Saudi Arabia’s standpoint) with low profits in each country. This example illustrates why oligopolies have trouble maintaining monopoly profits. The monopoly outcome is jointly rational for the oligopoly, but each oligopolist has an incentive to cheat. Just as self-interest drives the prisoners in the prisoners’ dilemma to confess, self-interest makes it difficult for the oligopoly to maintain the cooperative outcome with low production, high prices, and monopoly profits.
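The image Mankiw refers to isn’t reproduced here, but the payoff table can be reconstructed from the numbers in the passage (Saudi Arabia’s profit listed first in each cell, with Iran’s payoffs mirrored by symmetry):

```
                        Iran: low production    Iran: high production
Saudi: low production   $50B, $50B              $30B, $60B
Saudi: high production  $60B, $30B              $40B, $40B
```

High production dominates for both countries, so they land at $40 billion each instead of the $50 billion each that honoring the agreement would have produced.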

Other examples of prisoners’ dilemmas include arms races, advertising, and common resources (see The Tragedy of the Commons). Understanding the Prisoner’s Dilemma is an important component of the dynamics of cooperation, an extremely useful mental model.

Thinking of life as an iterative game changes how you play. Positioning yourself for the future carries more weight than “winning” in the moment.

Survivorship Bias: The Tale of Forgotten Failures

Survivorship bias is a common logical error that distorts our understanding of the world. It happens when we assume that success tells the whole story and when we don’t adequately consider past failures.

There are thousands, even tens of thousands of failures for every big success in the world. But stories of failure are not as sexy as stories of triumph, so they rarely get covered and shared. As we consume one story of success after another, we forget the base rates and overestimate the odds of real success.

“See,” says he, “you who deny a providence, how many have been saved by their prayers to the Gods.”

“Ay,” says Diagoras, “I see those who were saved, but where are those painted who were shipwrecked?”

— Cicero

The Basics

A college dropout becomes a billionaire. Batuli Lamichhane, a chain-smoker, lives to the age of 118. Four young men are rejected by record labels and told “guitar groups are on the way out,” then go on to become the most successful band in history.

Bill Gates, Batuli Lamichhane, and the Beatles are oft-cited examples of people who broke the rules without the expected consequences. We like to focus on people like them—the result of a cognitive shortcut known as survivorship bias.

When we only pay attention to those who survive, we fail to account for base rates and end up misunderstanding how selection processes actually work. The base rate is the probability of a given result we can expect from a sample, expressed as a percentage. If you bet on a single number in American roulette, for example, you can expect to win one game out of 38, or 2.63% of the time; that is the base rate. The problem arises when we mistake the winners for the rule rather than the exception. People like Gates, Lamichhane, and the Beatles are anomalies at one end of a distribution curve. While there is much to learn from them, it would be a mistake to expect the same results from doing the same things.

A stupid decision that works out well becomes a brilliant decision in hindsight.

— Daniel Kahneman

Cause and Effect

Can we achieve anything if we try hard enough? Not necessarily. Survivorship bias leads to an erroneous understanding of cause and effect. People see correlation in mere coincidence. We all love to hear stories of those who beat the odds and became successful, holding them up as proof that the impossible is possible. We ignore failures in pursuit of a coherent narrative about success.

Few would think to write the biography of a business person who goes bankrupt and spends their entire life in debt. Or a musician who tried again and again to get signed and was ignored by record labels. Or of someone who dreams of becoming an actor, moves to LA, and ends up returning a year later, defeated and broke. After all, who wants to hear that? We want the encouragement survivorship bias provides, and the subsequent belief in our own capabilities. The result is an inflated idea of how many people become successful.

The discouraging fact is that success is never guaranteed. Most businesses fail. Most people do not become rich or famous. Most leaps of faith go wrong. That doesn’t mean we shouldn’t try, just that we should ground our expectations in reality.

Beware of advice from the successful.

— Barnaby James

Survivorship Bias in Business

Survivorship bias is particularly common in the world of business. Companies that fail early on are ignored, while the rare successes are lauded for decades. Studies of market performance often exclude companies that collapse. This can distort statistics and make success seem more probable than it truly is. Just as history is written by the winners, so is much of our knowledge about business. Those who end up broke and chastened lack a real voice. They may be blamed for their failures by those who ignore the role coincidence plays in the upward trajectories of the successful.

Nassim Taleb writes of our tendency to ignore the failures: “We favor the visible, the embedded, the personal, the narrated, and the tangible; we scorn the abstract.” Business books laud the rule-breakers who ignore conventional advice and still create profitable enterprises. For most entrepreneurs, taking excessive risks and eschewing all norms is an ill-advised gamble. Many of the misfit billionaires who are widely celebrated succeeded in spite of their unusual choices, not because of them. We also ignore the role of timing, luck, connections and socio-economic background. A person from a prosperous family, with valuable connections, who founds a business at a lucrative time has a greater chance of survival, even if they drop out of college or do something unconventional. Someone with a different background, acting at an inopportune time, will have less of a chance.

In No Startup Hipsters: Build Scalable Technology Companies, Samir Rath and Teodora Georgieva write:

Almost every single generic presentation for startups starts with “Ninety Five percent of all startups fail”, but very rarely do we pause for a moment and think “what does this really mean?” We nod our heads in somber acknowledgement and with great enthusiasm turn to the heroes who “made it” — Zuckerberg, Gates, etc. to absorb pearls of wisdom and find the Holy Grail of building successful companies. Learning from the successful is a much deeper problem and can reduce the probability of success more than we might imagine.

Examining the lives of successful entrepreneurs teaches us very little. We would do far better to analyze the causes of failure, then act accordingly. Even better would be learning from both failures and successes.

Focusing on successful outliers does not account for base rates. As Rath and Georgieva go on to write:

After any process that picks winners, the non-survivors are often destroyed or hidden or removed from public view. The huge failure rate for start-ups is a classic example; if failures become invisible, not only do we fail to recognise that missing instances hold important information, but we may also fail to acknowledge that there is any missing information at all.

They describe how this leads us to base our choices on inaccurate assumptions:

Often, as we revel in stories of start-up founders who struggled their way through on cups of ramen before the tide finally turned on viral product launches, high team performance or strategic partnerships, we forget how many other founders did the same thing, in the same industry and perished…The problem we mention is compounded by biographical or autobiographical narratives. The human brain is obsessed with building a cause and effect narrative. The problem arises when this cognitive machinery misfires and finds patterns where there are none.

These success narratives are created both by those within successful companies and those outside. Looking back on their ramen days, founders may believe they had a plan all along. They always knew everything would work out. In truth, they may lack an idea of the cause and effect relationships underlying their progress. When external observers hear their stories, they may, in a quasi-superstitious manner, spot “signs” of the success to come. As Daniel Kahneman has written, the only true similarity is luck.

Consider What You Don’t See

When we read about survivorship bias, we usually come across the archetypical story of Abraham Wald, a statistician studying World War II airplanes. His research group at Columbia University was asked to figure out how to better protect airplanes from damage. The initial approach to the problem was to look at the planes coming back, seeing where they were hit the worst, then reinforcing that area.

However, Wald realized there was a missing, yet valuable, source of evidence: the planes that were hit and did not make it back. The planes that went down carried the most important information about which areas needed reinforcing; the spots where returning planes were unscathed were precisely the spots where a hit was likely fatal. Wald’s approach is an example of how to overcome survivorship bias. Don’t look just at what you can see. Consider all the things that started on the same path but didn’t make it. Try to figure out their story, as there is as much, if not more, to be learned from failure.

Considering survivorship bias when presented with examples of success is difficult. It is not instinctive to pause, reflect, and think through what the base rate odds of success are and whether you’re looking at an outlier or the expected outcome. And yet if you don’t know the real odds, if you don’t know if what you’re looking at is an example of survivorship bias, then you’ve got a blind spot.

Whenever you read about a success story in the media, think of all the people who tried to do what that person did and failed. Of course, understanding survivorship bias isn’t an excuse for not taking action, but rather an essential tool to help you cut through the noise and understand the world. If you’re going to do something, do it fully informed.

To learn more, consider reading Fooled By Randomness, or The Art of Thinking Clearly.