Tag: Risk

The Precautionary Principle: Better Safe than Sorry?

Also known as the Precautionary Approach or Precautionary Action, the Precautionary Principle is a concept best summed up by the proverb “better safe than sorry” or the medical maxim to “first do no harm.”

While there is no single definition, it typically refers to acting to prevent harm by not doing anything that could have negative consequences, even if the possibility of those consequences is uncertain.

In this article, we will explore how the Precautionary Principle works, its strengths and drawbacks, the best way to use it, and how we can apply it in our own lives.

Guilty until proven innocent

Whenever we make even the smallest change within a complex system, we risk dramatic unintended consequences.

The interconnections and dependencies within systems make it almost impossible to predict outcomes—and seeing as they often require a reasonably precise set of conditions to function, our interventions can wreak havoc.

The Precautionary Principle reflects the reality of working with and within complex systems. It shifts the burden of proof from proving something was dangerous after the fact to proving it is safe before taking chances. It emphasizes waiting for more complete information before risking causing damage, especially if some of the possible impacts would be irreversible, hard to contain, or would affect people who didn’t choose to be involved.

The possibility of harm does not need to be specific to that particular circumstance; sometimes we can judge a category of actions as one that always requires precaution because we know it has a high risk of unintended consequences.

For example, invasive species (plants or animals that cause harm after being introduced into a new environment by humans) have repeatedly caused native species to become extinct. So it’s reasonable to exercise precaution and not introduce living things into new places without strong evidence they will be harmless.

Preventing risks and protecting resources

Best known for its use as a regulatory guideline in environmental law and public health, the Precautionary Principle originated with the German term “Vorsorgeprinzip,” applied to regulations for preventing air pollution. Konrad von Moltke, director of the Institute for European Environmental Policy, later translated it into English.

Seeing as the natural world is a highly complex system we have repeatedly disrupted in serious, permanent ways, the Precautionary Principle has become a guiding part of environmental policy in many countries.

For example, the Umweltbundesamt (German Environmental Protection Agency) explains that the Precautionary Principle has two core components in German environmental law today: preventing risks and protecting resources.

Preventing risks means legislators shouldn’t take actions where our knowledge of the potential for environmental damage is incomplete or uncertain but there is cause for concern. The burden of proof is on proving lack of harm, not on proving harm. Protecting resources means preserving things like water and soil in a form future generations can use.

To give another example, some countries invoke versions of the Precautionary Principle to justify bans on genetically modified foods—in some cases for good, in others until evidence of their safety is considered stronger. It is left to legislators to interpret and apply the Precautionary Principle within specific situations.

The flexibility of the Precautionary Principle is both a source of strength and a source of weakness. We live in a fast-moving world where regulation does not always keep up with innovation, meaning guidelines (as opposed to rules) can often prove useful.

Another reason the Precautionary Principle can be a practical addition to legislation is that science doesn’t necessarily move fast enough to protect us from potential risks, especially ones that shift harm elsewhere or take a long time to show up. For example, thousands of human-made substances are present in the food we eat, ranging from medications given to livestock to materials used in packaging. Proving that a new additive has health risks once it’s in the food supply could take decades because it’s incredibly difficult to isolate causative factors. So some regulators, including the Food and Drug Administration in America, require manufacturers to prove something is safe before it goes to market. This approach isn’t perfect, but it’s far safer than waiting to discover harm after we start eating something.

The Precautionary Principle forces us to ask a lot of difficult questions about the nature of risk, uncertainty, probability, the role of government, and ethics. It can also prompt us to question our intuitions surrounding the right decisions to make in certain situations.

When and how to use the Precautionary Principle

When handling risks, it is important to be aware of what we don’t or can’t know for sure. The Precautionary Principle is not intended to be a stifling justification for banning things—it’s a tool for handling particular kinds of uncertainty. Heuristics can guide us in making important decisions, but we still need to be flexible and treat each case as unique.

So how should we use the Precautionary Principle? Sven Ove Hansson suggests two requirements in How Extreme Is the Precautionary Principle? First, if there are competing priorities (beyond avoidance of harm), it should be combined with other decision-making principles. For example, the idea of “explore versus exploit” teaches us that we need to balance doubling down on existing options with trying out new ones. Second, the decision to take precautionary action should be based on the most up-to-date science, and there should be plans in place for how to update that decision if the science changes. That includes planning how often to reevaluate the evidence and how to assess its quality.
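To make the explore-versus-exploit idea concrete, here is a minimal sketch (not from Hansson’s paper) of an epsilon-greedy rule: mostly exploit the option that has looked best so far, but reserve a fixed fraction of decisions for exploring alternatives. The option payoffs, the exploration rate, and every name below are illustrative assumptions.

```python
import random

def epsilon_greedy(true_means, rounds=1000, epsilon=0.1, seed=0):
    """Illustrative explore/exploit rule: with probability epsilon try a random
    option (explore); otherwise pick the best-looking option so far (exploit)."""
    rng = random.Random(seed)
    totals = [0.0] * len(true_means)   # observed payoff per option
    counts = [0] * len(true_means)     # times each option was tried
    total_payoff = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon or not any(counts):
            choice = rng.randrange(len(true_means))  # explore a random option
        else:
            averages = [t / c if c else 0.0 for t, c in zip(totals, counts)]
            choice = max(range(len(averages)), key=averages.__getitem__)  # exploit
        payoff = rng.gauss(true_means[choice], 1.0)  # noisy outcome of the choice
        totals[choice] += payoff
        counts[choice] += 1
        total_payoff += payoff
    return total_payoff, counts

# Three hypothetical options whose true average payoffs are unknown to the chooser.
payoff, tries = epsilon_greedy([1.0, 1.5, 0.5])
print(round(payoff, 1), tries)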

When is it a good idea to use the Precautionary Principle? There are a few types of situations where it’s better to be safe rather than sorry if things are uncertain.

When the costs of waiting are low. As we’ve already seen, the Precautionary Principle is intended as a tool for handling uncertainty, rather than a justification for arbitrary bans. This means that if the safety of something is uncertain but the costs of waiting to learn more are low, it’s a good idea to use precaution.

When preserving optionality is a priority. The Precautionary Principle is most often invoked for potential risks that would cause irreversible, far-reaching, uncontainable harm. Seeing as we don’t know what the future holds, keeping our options open by avoiding choices that limit us gives us the most flexibility later on. The Precautionary Principle preserves optionality by ensuring we don’t restrict the resources we have available further down the line or leave messes for our future selves to clean up.

When the potential costs of a risk are far greater than the cost of preventative action. If a potential risk would be devastating or even ruinous, and it’s possible to protect against it, precautionary action is key. Sometimes winning is just staying in the game—and sometimes staying in the game boils down to not letting anything wipe you out.

For example, in 1963 the Swiss government pledged to provide bunker spaces to all citizens in the event of a nuclear attack or disaster. The country still maintains a national system of thousands of warning sirens and distributes potassium iodide tablets (used to reduce the effects of radiation) to people living near nuclear plants in case of an accident. Given the potential effects of an incident on Switzerland (regardless of how likely it is), these precautionary actions are considered worthwhile.

When alternatives are available. If there are alternative courses of action we know to be safe, it’s a good idea to wait for more information before adopting a new risky one.

When not to use the Precautionary Principle

As a third criterion for using the Precautionary Principle usefully, Sven Ove Hansson recommends it not be used when the likelihood or scale of a potential risk is too low for precautionary action to have any benefit. For example, if one person per year dies from an allergic reaction to a guinea pig bite, it’s probably not worth banning pet guinea pigs. We can add a few more examples of situations where it’s generally not a good idea to use the Precautionary Principle.

When the tradeoffs are substantial and known. The whole point of the Precautionary Principle is to avoid harm. If we know for sure that not taking an action will cause more damage than taking it possibly could, it’s not a good idea to use precaution.

For example, following the 2011 accident at Fukushima, Japan shut down all of its nuclear power plants. Seeing as nuclear power is cheaper than fossil fuels, this resulted in a sharp increase in electricity prices in parts of the country. According to the authors of the paper Be Cautious with the Precautionary Principle, the resulting increase in mortality among people who could no longer afford adequate heating was higher than the death toll from the accident itself.

When the risks are known and priced in. We all have different levels of risk appetite and we make judgments about whether certain activities are worth the risks involved. When a risk is priced in, that means people are aware of it and voluntarily decide it is worthwhile—or even desirable.

For example, riskier investments tend to have higher potential returns. Although they might not make sense for someone who doesn’t want to risk losing any money, they do make sense for those who consider the potential gains worth the potential losses.

When only a zero-risk option would be satisfying. It’s impossible to completely avoid risks, so it doesn’t make much sense to exercise precaution with the expectation that a 100% safe option will appear.

When taking risks could strengthen us. As individuals, we can sometimes be overly risk averse and too cautious—to the point where it makes us fragile. Our ancestors had the best chance of surviving if they overreacted, rather than underreacted, to risks. But for many of us today, the biggest risk we face can be the stress caused by worrying too much about improbable dangers. We can end up fearing the kinds of risks, like social rejection, that are unavoidable and that tend to make us stronger if we embrace them as inevitable. Never taking any risks is generally a far worse idea than taking sensible ones.

***

We all face decisions every day that involve balancing risk. The Precautionary Principle is a tool that helps us determine when a particular choice is worth taking a gamble on, or when we need to sit tight and collect more information.

What Sharks Can Teach Us About Survivorship Bias

Survivorship bias refers to the idea that we get a false representation of reality when we base our understanding only on the experiences of those who live to tell their story. Taking a look at how we misrepresent shark attacks highlights how survivorship bias distorts reality in other situations.

When asked what the deadliest shark is to humans, most people will say the great white. The lasting influence of the movie Jaws, reinforced by dozens of pop culture references and news reports, keeps that species top of mind when we think of the world’s most fearsome predators. While it is true that great white sharks do attack humans (rarely), they also leave a lot of survivors. And they’re not after humans in particular. They usually just mistake us for seals, one of their key food sources.

We must be careful to not let a volume of survivors in one area blind us to the stories of a small number of survivors elsewhere. Most importantly, we need to ask ourselves what stories are not being told because no one is around to tell them. The experiences of the dead are necessary if we want an accurate understanding of the world.

***

Before we drill down into some interesting statistics, it’s important to understand that great whites are one member of a class of sharks with many common characteristics. Great whites are closely related to tiger and bull sharks. They all have similar habitats, physiology, and instincts. They are also all large, with an average size over ten feet long.

Tiger and bull sharks rarely attack humans, and to someone being bitten by one of these huge creatures, there isn’t all that much difference between them. The Florida Museum’s International Shark Attack File explains that “positive identification of attacking sharks is very difficult since victims rarely make adequate observations of the attacker during the ‘heat’ of the interaction. Tooth remains are seldom found in wounds and diagnostic characters for many requiem sharks [of which the great white is one] are difficult to discern even by trained professionals.”

The fatality rate in known attacks is 21.5% for the bull shark, 16% for the great white, and 26% for the tiger shark. But in sheer volume, attacks attributed to great whites outnumber the other two species three to one. So there are three times as many survivors to tell the story of their great white attack.
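A rough back-of-the-envelope calculation shows how those figures translate into survivor stories. The fatality rates are the ones quoted above; the attack counts are hypothetical, chosen only to respect the roughly three-to-one ratio mentioned in the text.

```python
# Back-of-the-envelope: survivors per species, using the fatality rates quoted
# above and hypothetical attack counts in the rough three-to-one ratio.
species = {
    # name: (assumed documented attacks, fatality rate)
    "great white": (300, 0.16),
    "tiger shark": (100, 0.26),
    "bull shark":  (100, 0.215),
}

for name, (attacks, fatality_rate) in species.items():
    survivors = attacks * (1 - fatality_rate)
    print(f"{name:12s} ~{survivors:.0f} survivors left to tell the story")
```

With these assumed counts, roughly 250 people are left to tell a great white story, versus about 75 apiece for the other two species, so great white stories dominate our picture of the danger.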

***

When it comes to our picture of which shark is most dangerous, there are other blind spots. Not all sharks have the same behaviors as those three, such as swimming close to shore and being around enough prey to develop a preference for fat seals versus bony humans. Pelagic sharks live in the water desert that is the open ocean and have to eat pretty much whatever they can find. The oceanic white tip is a pelagic shark that is probably far more dangerous to humans—we just don’t come into contact with them as often.

There are only fifteen documented attacks by an oceanic white tip, with three of those being fatal. But since most attacks occur in the open ocean in more isolated situations (e.g., a couple of people on a boat versus five hundred people swimming at a beach), we really have no idea how dangerous oceanic white tips are. There could be hundreds of undocumented attacks that left behind no survivors to tell the tale.

One famous survivor story gives us a glimpse of how dangerous oceanic white tips might be. In 1945, a Japanese submarine torpedoed and sank the USS Indianapolis. For a multitude of reasons, partly due to the fact that the Indianapolis was on a top secret mission and partly due to tragic incompetence, a rescue ship was not sent for four days. Those who survived the ship’s sinking had to then try to survive in the open ocean with little gear until rescue arrived. The water was full of sharks.

In Indianapolis: The True Story of the Worst Sea Disaster in US Naval History and the Fifty-Year Fight to Exonerate an Innocent Man, Lynn Vincent and Sara Vladic quote Boatswain’s Mate Second Class Eugene Morgan as he described part of his experience: “All the time, the sharks never let up. We had a cargo net that had Styrofoam things attached to keep it afloat. There were about fifteen sailors on this, and suddenly, ten sharks hit it and there was nothing left. This went on and on.” These sharks are believed to have been oceanic white tips. It’s unknown how many men died from shark attacks; many also perished from exposure, dehydration, injury, and exhaustion. Of the 1,195 crewmen originally aboard the ship, only 316 survived. It remains the greatest loss of life at sea from a single ship in US naval history.

Because humans are rarely in the open ocean in large numbers, not only are attacks by this shark less common, there are also fewer survivor stories. The story of the USS Indianapolis is a rare, brutal case that provides a unique picture.

***

Our estimation of the shark that could do us the most harm is often formed by survivorship bias. We develop an inaccurate picture based on the stories of those who live to tell the tale of their shark attack. We don’t ask ourselves who didn’t survive, and so we miss out on the information we need to build an accurate picture of reality.

The point is not to shift our fear to oceanic white tips, which are, in fact, critically endangered. Our fear of sharks seems to make us indifferent to what happens to them, even though they are an essential part of the ocean ecosystem. We are also much more of a danger to sharks than they are to us. We kill them by the millions every year. Neither should we shift our fear to other, more lethal animals, which will likely result in the same indifference to their role in the ecosystem.

The point is rather to consider how well you make decisions when you only factor in the stories of the survivors. For instance, if you were to try to reduce instances of shark attacks or limit their severity, you are unlikely to get the results you are after if you only pay attention to the survivor stories. You need to ask who didn’t make it and try to figure out their stories as well. If you try to implement measures aimed only at great whites near beaches, your measures might not be effective against other predatory sharks. And if you conclude that swimmers are better off in the open ocean because sharks seem to only attack near beaches, you’d be completely wrong.

***

Survivorship bias crops up all over our lives and impedes us from accurately assessing danger. Replace “dangerous sharks” with “dangerous cities” or “dangerous vacation spots” and you can easily see how your picture of a certain location might be skewed based on the experiences of survivors. We can’t be afraid of a tale if no one lives to tell it. More survivors can make something seem more dangerous rather than less dangerous because the volume of stories makes them more memorable.

If fewer people survived shark attacks we wouldn’t have survivor stories influencing our perception about how dangerous sharks are. In all likelihood we would attribute some of the ocean deaths to other causes, like drowning, because it wouldn’t occur to us that sharks could be responsible.

Understanding survivorship bias prompts us to look for the stories of those who weren’t successful. A lack of visible survivors with memorable stories might mean we view other fields as far safer and easier than they are.

For example, a field of business where people who experience failures go on to do other things might seem riskier than one where people who fail are too ashamed to talk about it. The failure of tech start-ups sometimes feels like daily news. We don’t often, however, hear about the real estate agent who has trouble making sales or who keeps getting outbid on offers. Nor do we hear much about architects who design terrible houses or construction companies who don’t complete projects.

Survivorship bias prompts us to associate more risk with industries that exhibit more public failures. But the failures that aren’t shared are equally important. If we focus only on the survivor stories, we might think that being a real estate agent or an architect is safer than starting a technology company. It might be, but we can’t base our understanding of which career option is the best bet only on the widely shared stories of failure.

If we don’t factor survivorship bias into our thinking, we end up in a classic “map is not the territory” problem. The survivor stories become a poor navigational tool for the terrain.

Most of us know that we shouldn’t become a writer based on the results achieved by J.K. Rowling and John Grisham. But even if we go out and talk to other writers, or learn about their careers, or attend writing seminars given by published authors, we are still only talking to the survivors.

Yes, it’s super inspiring to know Stephen King got so many rejections early in his career that the stack of them was enough to pull a nail out of the wall. But what about the writers who got just as many rejections and never published anything? Not only can we learn a lot from them about the publishing industry, we need to consider their experiences if we want to anticipate and understand the challenges involved in being a writer.

***

Not recognizing survivorship bias can lead to faulty decision making. We don’t see the big picture and end up optimizing for a small slice of reality. We can’t completely overcome survivorship bias. The best we can do is acknowledge it, and when the stakes are high or the result important, stop and look for the stories of those who were unsuccessful. They have just as much, if not more, to teach us.

The next time you’re assessing risk, ask yourself: am I paying too much attention to the great white sharks and not enough to the oceanic white tips?

When Safety Proves Dangerous

Not everything we do with the aim of making ourselves safer has that effect. Sometimes, knowing there are measures in place to protect us from harm can lead us to take greater risks and cancel out the benefits. This is known as risk compensation. Understanding how it affects our behavior can help us make the best possible decisions in an uncertain world.

***

The world is full of risks. Every day we take endless chances, whether we’re crossing the road, standing next to someone with a cough on the train, investing in the stock market, or hopping on a flight.

From the moment we’re old enough to understand, people start teaching us crucial safety measures to remember: don’t touch that, wear this, stay away from that, don’t do this. And society is endlessly trying to mitigate the risks involved in daily life, from the ongoing efforts to improve car safety to signs reminding employees to wash their hands after using the toilet.

But the things we do to reduce risk don’t always make us safer. They can end up having the opposite effect. This is because we tend to change how we behave in response to our perceived safety level. When we feel safe, we take more risks. When we feel unsafe, we are more cautious.

Risk compensation means that efforts to protect ourselves can end up having a smaller effect than expected, no effect at all, or even a negative effect. Sometimes the danger is transferred to a different group of people, or a behavior modification creates new risks. Knowing how we respond to risk can help us avoid transferring danger to other more vulnerable individuals or groups.

Examples of Risk Compensation

There are many documented instances of risk compensation. One of the first comes from a 1975 paper by economist Sam Peltzman, entitled “The Effects of Automobile Safety Regulation.” Peltzman looked at the effects of new vehicle safety laws introduced several years earlier, finding that they led to no change in fatalities. While people in cars were less likely to die in accidents, pedestrians were at a higher risk. Why? Because drivers took more risks, knowing they were safer if they crashed.

Although Peltzman’s research has been both replicated and called into question over the years (there are many ways to interpret the same dataset), risk compensation is apparent in many other areas. As Andrew Zolli and Ann Marie Healy write in Resilience: Why Things Bounce Back, children who play sports involving protective gear (like helmets and knee pads) take more physical risks, and hikers who think they can be easily rescued are less cautious on the trails.

A study of taxi drivers in Munich, Germany, found that those driving vehicles with antilock brakes had more accidents than those without—unsurprising, considering they tended to accelerate faster and stop harder. Another study suggested that childproof lids on medicine bottles did not reduce poisoning rates. According to W. Kip Viscusi at Duke University, parents became more complacent with all medicines, including ones without the safer lids. Better ripcords on parachutes lead skydivers to pull them too late.

As defenses against natural disasters have improved, people have moved into riskier areas, and deaths from events like floods or hurricanes have not necessarily decreased. After helmets were introduced in American football, tackling fatalities actually increased for a few years, as players became more willing to lead with their heads (this changed with the adoption of new tackling standards). Bailouts and protective mechanisms for financial institutions may have contributed to the scale of the 2008 financial crisis, as they led to banks taking greater and greater risks. There are numerous other examples.

We can easily see risk compensation play out in our lives and those of people around us. Someone takes up a healthy habit, like going to the gym, then compensates by drinking more. Having an emergency fund in place can encourage us to take greater financial risks.

Risk Homeostasis

According to psychology professor Gerald Wilde, we all internally have a desired level of risk that varies depending on who we are and the context we are in. Our risk tolerance is like a thermostat—we take more risks if we feel too safe, and vice versa, in order to remain at our desired “temperature.” It all comes down to the costs and benefits we expect from taking on more or less risk.

The notion of risk homeostasis, although controversial, can help explain risk compensation. It means that introducing measures to make people safer will inevitably lead to changes in behavior that maintain the amount of risk we’d like to experience, like driving faster while wearing a seatbelt. A feedback loop communicating our perceived risk helps us keep things as dangerous as we wish them to be. We calibrate our actions to how safe we’d like to be, making adjustments if the balance swings too far in one direction or the other.
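Since risk homeostasis is essentially a feedback loop, a toy simulation can make the claim concrete. This is only a sketch of the thermostat analogy, not Wilde’s actual model, and every number is an illustrative assumption: a safety measure lowers perceived risk, and behavior then drifts until perceived risk returns toward the person’s target.

```python
def risk_thermostat(target=0.5, safety_cut=0.2, steps=20, gain=0.3):
    """Toy 'risk thermostat': after a safety measure lowers perceived risk,
    behavior adjusts until perceived risk drifts back toward the target."""
    extra_risk_taking = 0.0
    history = []
    for _ in range(steps):
        perceived = target - safety_cut + extra_risk_taking
        # Feeling safer than desired -> take a bit more risk (and vice versa).
        extra_risk_taking += gain * (target - perceived)
        history.append(round(perceived, 3))
    return history

print(risk_thermostat())  # perceived risk creeps back from 0.3 toward the 0.5 target
```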

What We Can Learn from Risk Compensation

We can learn many lessons from risk compensation and the research that has been done on the subject. First, safety measures are more effective the less visible they are. If people don’t know about a risk reduction, they won’t change their behavior to compensate for it. When we want to make something safer, it’s best to ensure changes go as unnoticed as possible.

Second, an effective way to reduce risk-taking behavior is to provide incentives for prudent behavior, giving people a reason to adjust their risk thermostat. Just because something seems to have become safer doesn’t mean the risk hasn’t been transferred elsewhere, putting a different group of people in danger, as when seat belt laws lead to more pedestrian fatalities. So, for instance, lower insurance premiums for careful drivers might prevent more fatalities than stricter road safety laws, because they give drivers a reason to improve their own behavior instead of shifting the risk onto others.

Third, we are biased towards intervention. When we want to improve a situation, our first instinct tends to be to step in and change something, anything. Sometimes it is wiser to do less, or even nothing. Changing something does not always make people safer; sometimes it just changes the nature of the danger.

Fourth, when we make a safety change, we may need to implement corresponding rules to avoid risk compensation. Football helmets made the sport more dangerous at first, but new rules about tackling helped cancel out the behavior changes because the league was realistic about the need for more than just physical protection.

Finally, making people feel less safe can actually improve their behavior. Serious injuries in car crashes are rarer when the roads are icy, even if minor incidents are more common, because drivers take more care. If we want to improve safety, we can make risks more visible through better education.

Risk compensation certainly doesn’t mean it’s not a good idea to take steps to make ourselves safer, but it does illustrate how we need to be aware of unintended consequences that occur when we interact with complex systems. We can’t always expect to achieve the changes we desire the first time around. Once we make a change, we should pay careful attention to its effects on the whole system. Sometimes it will take testing a few alternative approaches to bring us closer to the desired effect.

The Code of Hammurabi: The Best Rule To Manage Risk

Almost 4,000 years ago, King Hammurabi of Babylon, Mesopotamia, laid out one of the first sets of laws.

Hammurabi’s Code is among the oldest translatable writings. It consists of 282 laws, most concerning punishment. Each law takes into account the perpetrator’s status. The code also includes the earliest known construction laws, designed to align the incentives of builder and occupant to ensure that builders created safe homes:

  1. If a builder builds a house for a man and does not make its construction firm, and the house which he has built collapses and causes the death of the owner of the house, that builder shall be put to death.
  2. If it causes the death of the son of the owner of the house, they shall put to death a son of that builder.
  3. If it causes the death of a slave of the owner of the house, he shall give to the owner of the house a slave of equal value.
  4. If it destroys property, he shall restore whatever it destroyed, and because he did not make the house which he builds firm and it collapsed, he shall rebuild the house which collapsed at his own expense.
  5. If a builder builds a house for a man and does not make its construction meet the requirements and a wall falls in, that builder shall strengthen the wall at his own expense.

Hammurabi became ruler of Babylon in 1792 BC and held the position for 43 years. In the era of city-states, Hammurabi grew his modest kingdom (somewhere between 60 and 160 square kilometers) by conquering several neighboring states. Satisfied, then, with the size of the area he controlled, Hammurabi settled down to rule his people.

“This world of ours appears to be separated by a slight and precarious margin of safety from a most singular and unexpected danger.”

— Arthur Conan Doyle

Hammurabi was a fair leader (from the little we know about him) and concerned with the well-being of his people. He transformed the area, ordering the construction of irrigation ditches to improve agricultural productivity, as well as supplying cities with protective walls and fortresses. Hammurabi also renovated temples and religious sites.

By today’s standards, Hammurabi was a dictator. Far from abusing his power, however, he considered himself the “shepherd” of his people. Although the Babylonians kept slaves, slaves too had rights: they could marry people of any status, start businesses, and purchase their freedom, and they were protected from mistreatment.

At first glance, it might seem as if we have little to learn from Hammurabi. I mean, why bother learning about the ancient Babylonians? They were just barbaric farmers, right?

But we’re not as different from the Babylonians as it might appear. Our modern beliefs are not separate from those of people in Hammurabi’s time; they are a continuation of them. Early legal codes are the ancestors of the ones we now put our faith in.

Whether a country is a dictatorship or democracy, one of the keys to any effective legal system is the ability for anyone to understand its laws. We’re showing cracks in ours and we can learn from the simplicity of Hammurabi’s Code, which concerned itself with practical justice and not lofty principles. To even call it a set of laws is misleading. The ancient Babylonians did not appear to have an equivalent term.

Three important concepts are implicit in Hammurabi’s Code: reciprocity, accountability, and incentives.

We have no figures for how often Babylonian houses fell down before and after the implementation of the Code. We have no idea how many (if any) people were put to death as a result of failing to adhere to Hammurabi’s construction laws. But we do know that human self-preservation instincts are strong. More than strong, they underlie most of our behavior. Wanting to avoid death is the most powerful incentive we have. If we assume that people felt and thought the same way 4000 years ago, we can guess at the impact of the Code.

Imagine yourself as a Babylonian builder. Each time you construct a house, there is a risk it will collapse if you make any mistakes. So, what do you do? You allow for the widest possible margin of safety. You plan for any potential risks. You don’t cut corners or try to save a little bit of money. No matter what, you are not going to allow any known flaws in the construction. It wouldn’t be worth it. You want to walk away certain that the house is solid.

Now contrast that with modern engineers or builders.

They don’t have much skin in the game. The worst they face if they cause a death is a fine. We saw this in Hurricane Katrina: 1,600 people died due to flooding caused in part by the poor design of hurricane protection systems in New Orleans. Hindsight analysis showed that the city’s floodwalls, levees, pumps, and gates were poorly designed and maintained. The death toll was worse than it would otherwise have been. And yet, no one was held accountable.

Hurricane Katrina is regarded as a disaster that was part natural and part man-made. In recent months, in the Grenfell Tower fire in London, we saw the effects of negligent construction. At least 80 people died in a blaze that is believed to have started accidentally but that, according to expert analysis, was accelerated by the conscious use of cheap building materials that had failed safety tests.

The portions of Hammurabi’s Code that deal with construction laws, as brutal as they are (and as uncertain as we are of their short-term effects) illustrate an important concept: margins of safety. When we construct a system, ensuring that it can handle the expected pressures is insufficient.

A Babylonian builder would not have been content to make a house that was strong enough to handle just the anticipated stressors. A single Black Swan event — such as abnormal weather — could cause its collapse and in turn the builder’s own death, so builders had to allow for a generous margin of safety. The larger the better. In 59 mph winds, we do not want to be in a house built to withstand 60 mph winds.
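The margin-of-safety idea reduces to simple arithmetic: compare what a structure is built to bear with the worst load it might plausibly face. The wind ratings and loads below are invented purely for illustration.

```python
def safety_factor(rated_capacity, expected_load):
    """Ratio of what a structure is built to bear to what we expect it to face."""
    return rated_capacity / expected_load

# Hypothetical wind ratings. A house rated barely above the expected load
# leaves almost no margin; a generous rating absorbs surprises.
print(round(safety_factor(60, 59), 2))   # ~1.02 -- one bad storm from failure
print(round(safety_factor(120, 59), 2))  # ~2.03 -- a wide margin of safety
```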

But our current financial systems do not incentivize people to create wide margins of safety. Instead, they do the opposite — they encourage dangerous risk-taking.

Nassim Taleb referred to Hammurabi’s Code in a New York Times opinion piece in which he described a way to prevent bankers from threatening the public well-being. His solution? Stop offering bonuses for the risky behavior of people who will not be the ones paying the price if the outcome is bad. Taleb wrote:

…it’s time for a fundamental reform: Any person who works for a company that, regardless of its current financial health, would require a taxpayer-financed bailout if it failed should not get a bonus, ever. In fact, all pay at systemically important financial institutions — big banks, but also some insurance companies and even huge hedge funds — should be strictly regulated.

The issue, in Taleb’s opinion, is not the usual complaint of income inequality or overpay. Instead, he views bonuses as asymmetric incentives. They reward risks but do not punish the subsequent mistakes that cause “hidden risks to accumulate in the financial system and become a catalyst for disaster.” It’s a case of “heads, I win; tails, you lose.”

Bonuses encourage bankers to ignore the potential for Black Swan events, with the 2008 financial crisis being a prime (or rather, subprime) example. Rather than ignoring these events, banks should seek to minimize the harm caused.

Some career fields have a strict system of incentives and disincentives, both official and unofficial. Doctors get promotions and respect if they do their jobs well, and risk heavy penalties for medical malpractice. With the exception of experiments in which patients are fully informed of and consent to the risks, doctors don’t get a free pass for taking risks that cause harm to patients.

The same goes for military and security personnel. As Taleb wrote, “we trust the military and homeland security personnel with our lives, yet we don’t give them lavish bonuses. They get promotions and the honor of a job well done if they succeed, and the severe disincentive of shame if they fail.”

Hammurabi and his advisors were unconcerned with complex laws and legalese. Instead, they wanted the Code to produce results and to be understandable by everyone. And Hammurabi understood how incentives work — a lesson we’d be well served to learn.

When you align incentives of everyone in both positive and negative ways, you create a system that takes care of itself. Taleb describes Law 229 of Hammurabi’s Code as “the best risk-management rule ever.” Although barbaric to modern eyes, it took into account certain truisms. Builders typically know more about construction than their clients do and can take shortcuts in ways that aren’t obvious. After completing construction, a builder can walk away with a little extra profit, while the hapless client is unknowingly left with an unsafe house.

The little extra profit that builders can generate is analogous to the bonus system in some of today’s industries. It rewards those who take unwise risks, trick their customers, and harm other people for their own benefit. Hammurabi’s system had the opposite effect; it united the interests of the person getting paid and the person paying. Rather than the builder being motivated to earn as much profit as possible and the homeowner being motivated to get a safe house, they both shared the latter goal.

The Code illustrates the efficacy of using self-preservation as an incentive. We feel safer in airplanes that are flown by a person and not by a machine because, in part, we believe that pilots want to protect their own lives along with ours.

When we lack an incentive to protect ourselves, we are far more likely to risk the safety of other people. This is why bankers are willing to harm their customers if it means the bankers get substantial bonuses. This is why companies that market harmful products, such as fast food and tobacco, are content to play down the risks. Or why the British initiative to reduce the population of Indian cobras by paying for captured snakes backfired when people began breeding cobras to collect the reward. Or why Wells Fargo employees opened millions of fake accounts to reach sales targets.

Incentives backfire when there are no negative consequences for those who exploit them. External incentives are based on extrinsic motivation, which easily goes awry.

When we have real skin in the game—when we have upsides and downsides—we care about outcomes in a way that we wouldn’t otherwise. We act in a different way. We take our time. We use second-order thinking and inversion. We look for evidence or a way to disprove it.

Four thousand years ago, the Babylonians understood the power of incentives, yet we seem to have since forgotten about the flaws in human nature that make it difficult to resist temptation.

The Probability Distribution of the Future

The best colloquial definition of risk may be the following:

“Risk means more things can happen than will happen.”

We found it through the inimitable Howard Marks, but it’s a quote from Elroy Dimson of the London Business School. Doesn’t that capture it pretty well?

Another way to state it is: If there were only one thing that could happen, how much risk would there be, except in an extremely banal sense? You’d know the exact probability distribution of the future. If I told you there was a 100% probability that you’d get hit by a car today if you walked down the street, you simply wouldn’t do it. You wouldn’t call walking down the street a “risky gamble,” right? There’s no gamble at all.

But the truth is that in practical reality, there aren’t many 100% situations to bank on. Way more things can happen than will happen. That introduces great uncertainty into the future, no matter what type of future you’re looking at: An investment, your career, your relationships, anything.

How do we deal with this in a pragmatic way? The investor Howard Marks starts it this way:

Key point number one in this memo is that the future should be viewed not as a fixed outcome that’s destined to happen and capable of being predicted, but as a range of possibilities and, hopefully on the basis of insight into their respective likelihoods, as a probability distribution.

This is the most sensible way to think about the future: A probability distribution where more things can happen than will happen. Knowing that we live in a world of great non-linearity and with the potential for unknowable and barely understandable Black Swan events, we should never become too confident that we know what’s in store, but we can also appreciate that some things are a lot more likely than others. Learning to adjust probabilities on the fly as we get new information is called Bayesian updating.
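As a tiny worked example of the Bayesian updating just mentioned (with invented numbers): start with a prior probability for some event, then revise it when new evidence arrives using Bayes’ rule.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical numbers: we start out thinking there is a 30% chance of some
# event. New data arrives that we'd expect to see 70% of the time if the event
# were coming, but only 20% of the time otherwise.
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.70, p_evidence_if_false=0.20)
print(f"updated probability: {posterior:.0%}")  # 60%
```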

But.

Although the future is certainly a probability distribution, Marks makes another excellent point in the wonderful memo above: In reality, only one thing will happen. So you must make the decision: Are you comfortable if that one thing happens, whatever it might be? Even if it only has a 1% probability of occurring? Echoing the first lesson of biology, Warren Buffett stated that “In order to win, you must first survive.” You have to live long enough to play out your hand.

Which leads to an important second point: Uncertainty about the future does not necessarily equate with risk, because risk has another component: Consequences. The world is a place where “bad outcomes” are only “bad” if you know their (rough) magnitude. So in order to think about the future and about risk, we must learn to quantify.

It’s like the old saying (usually said right before something terrible happens): What’s the worst that could happen? Let’s say you propose to undertake a six-month project that will cost your company $10 million, and you know there’s a reasonable probability that it won’t work. Is that risky?

It depends on the consequences of losing $10 million, and the probability of that outcome. It’s that simple! (Simple, of course, does not mean easy.) A company with $10 billion in the bank might consider that a very low-risk bet even if it only had a 10% chance of succeeding.

In contrast, a company with only $10 million in the bank might consider it a high-risk bet even if it had only a 10% chance of failing. Maybe five $2 million projects with uncorrelated outcomes would make more sense to the latter company.

In the real world, risk = probability of failure x consequences. That concept, however, can be looked at through many lenses. Risk of what? Losing money? Losing my job? Losing face? Those things need to be thought through. When we observe others being “too risk averse,” we might want to think about which risks they’re truly avoiding. Sometimes the risk is not only financial. 
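To put rough numbers on “risk = probability of failure x consequences,” and on why the five smaller projects might make more sense, here is a sketch. The 10% failure chance echoes the example above; treating the five projects as fully independent is an assumption made purely for illustration.

```python
# Rough sketch of risk = probability of failure x consequences, reusing the
# 10% failure chance from the example above; independence of the five smaller
# projects is an assumption for illustration.

p_fail = 0.10
single_cost = 10_000_000                      # one big project

expected_loss_single = p_fail * single_cost   # $1,000,000
p_ruin_single = p_fail                        # 10% chance the whole budget is gone

n, cost_each = 5, 2_000_000                   # five independent $2M projects
expected_loss_split = n * p_fail * cost_each  # also $1,000,000
p_ruin_split = p_fail ** n                    # ~0.001% chance all five fail

print(f"one $10M bet:  expected loss ${expected_loss_single:,.0f}, "
      f"P(lose everything) = {p_ruin_single:.1%}")
print(f"five $2M bets: expected loss ${expected_loss_split:,.0f}, "
      f"P(lose everything) = {p_ruin_split:.4%}")
```

The expected loss is identical either way; what changes is the consequence side: the chance of losing everything drops from 10% to a fraction of a percent when the bet is split.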

***

Let’s cover one more under-appreciated but seemingly obvious aspect of risk, also pointed out by Marks: Knowing the outcome does not teach you about the risk of the decision.

This is an incredibly important concept:

If you make an investment in 2012, you’ll know in 2014 whether you lost money (and how much), but you won’t know whether it was a risky investment – that is, what the probability of loss was at the time you made it.

To continue the analogy, it may rain tomorrow, or it may not, but nothing that happens tomorrow will tell you what the probability of rain was as of today. And the risk of rain is a very good analogue (although I’m sure not perfect) for the risk of loss.

How many times do we see this simple dictum violated? Knowing that something worked out, we argue that it wasn’t that risky after all. But what if, in reality, we were simply fortunate? This is the Fooled by Randomness effect.

The way to think about it is the following: The worst thing that can happen to a young gambler is that he wins the first time he goes to the casino. He might convince himself he can beat the system.
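A quick simulation makes the point vivid: generate many outcomes from a bet whose odds we know are unfavorable, and notice that any single outcome, including a win, says almost nothing about the risk that was taken. The 47% win probability and everything else here are invented for illustration.

```python
import random

def first_timers_who_win(trials=100_000, win_prob=0.47, seed=1):
    """Simulate many first visits to a casino making a single bet we *know*
    is unfavorable (47% chance of winning). Count how many walk away winners."""
    rng = random.Random(seed)
    winners = sum(rng.random() < win_prob for _ in range(trials))
    return winners / trials

# Nearly half of first-time gamblers win, even though the bet is a loser,
# so a single outcome reveals almost nothing about the risk that was taken.
print(f"{first_timers_who_win():.0%} walk away winners")
```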

The truth is that most times we don’t know the probability distribution at all. Because the world is not a predictable casino game (treating it as if it were is an error Nassim Taleb calls the Ludic Fallacy), the best we can do is guess.

With intelligent estimations, we can work to get the rough order of magnitude right, understand the consequences if we’re wrong, and always be sure to never fool ourselves after the fact.

If you’re into this stuff, check out Howard Marks’ memos to his clients, or his excellent book, The Most Important Thing. Nate Silver also has an interesting similar idea about the difference between risk and uncertainty. And lastly, another guy who understands risk pretty well is Jason Zweig, whom we’ve interviewed on our podcast before.

***


The Psychology of Risk and Reward

An excerpt from The Aspirational Investor: Taming the Markets to Achieve Your Life’s Goals that I think you’d enjoy.

Most of us have a healthy understanding of risk in the short term.

When crossing the street, for example, you would no doubt speed up to avoid an oncoming car that suddenly rounds the corner.

Humans are wired to survive: it’s a basic instinct that takes command almost instantly, enabling our brains to resolve ambiguity quickly so that we can take decisive action in the face of a threat.

The impulse to resolve ambiguity manifests itself in many ways and in many contexts, even those less fraught with danger. Glance at the classic ambiguous-figure illusion for no more than a couple of seconds. What do you see?

Some observers perceive the profile of a young woman with flowing hair, an elegant dress, and a bonnet. Others see the image of a woman stooped in old age with a wart on her large nose. Still others—in the gifted minority—are able to see both of the images simultaneously.

What is interesting about this illusion is that our brains instantly decide what image we are looking at, based on our first glance. If your initial glance was toward the vertical profile on the left-hand side, you were all but destined to see the image of the elegant young woman: it was just a matter of your brain interpreting every line in the picture according to the mental image you had already formed, even though each line can be interpreted in two different ways. Conversely, if your first glance fell on the central dark horizontal line that emphasizes the mouth and chin, your brain quickly formed an image of the older woman.

Regardless of your interpretation, your brain wasn’t confused. It simply decided what the picture was and filled in the missing pieces. Your brain resolved ambiguity and extracted order from conflicting information.

What does this have to do with decision making? Every bit of information can be interpreted differently according to our perspective. Ashvin Chhabra applies this to investing; I suggest you reframe it in the context of decision making in general.

Every trade has a seller and a buyer: your state of mind is paramount. If you are in a risk-averse mental framework, then you are likely to interpret a further fall in stocks as additional confirmation of your sell bias. If instead your framework is positive, you will interpret the same event as a buying opportunity.

The challenge of investing is compounded by the fact that our brains, which excel at resolving ambiguity in the face of a threat, are less well equipped to navigate the long term intelligently. Since none of us can predict the future, successful investing requires planning and discipline.

Unfortunately, when reason is in apparent conflict with our instincts—about markets or a “hot stock,” for example—it is our instincts that typically prevail. Our “reptilian brain” wins out over our “rational brain,” as it so often does in other facets of our lives. And as we have seen, investors trade too frequently, and often at the wrong time.

One way our brains resolve conflicting information is to seek out safety in numbers. In the animal kingdom, this is called “moving with the herd,” and it serves a very important purpose: helping to ensure survival. Just as a buffalo will try to stay with the herd in order to minimize its individual vulnerability to predators, we tend to feel safer and more confident investing alongside equally bullish investors in a rising market, and we tend to sell when everyone around us is doing the same. Even the so-called smart money falls prey to a herd mentality: one study, aptly titled “Thy Neighbor’s Portfolio,” found that professional mutual fund managers were more likely to buy or sell a particular stock if other managers in the same city were also buying or selling.

This comfort is costly. The surge in buying activity and the resulting bullish sentiment are self-reinforcing, propelling markets higher and faster. That leads to overvaluation and the inevitable crash when sentiment reverses. As we shall see, such booms and busts are characteristic of all financial markets, regardless of size, location, or even the era in which they exist.

Even though the role of instinct and human emotions in driving speculative bubbles has been well documented in popular books, newspapers, and magazines for hundreds of years, these factors were virtually ignored in conventional financial and economic models until the 1970s.

This is especially surprising given that, in 1952, a young PhD student from the University of Chicago, Harry Markowitz, published two very important papers. The first, entitled “Portfolio Selection,” published in the Journal of Finance, led to the creation of what we call modern portfolio theory, together with the widespread adoption of its important ideas such as asset allocation and diversification. It earned Harry Markowitz a Nobel Prize in Economics.

The second paper, entitled “The Utility of Wealth” and published in the prestigious Journal of Political Economy, was about the propensity of people to hold insurance (safety) and to buy lottery tickets at the same time. It delved deeper into the psychological aspects of investing but was largely forgotten for decades.

The field of behavioral finance really came into its own through the pioneering work of two academic psychologists, Amos Tversky and Daniel Kahneman, who challenged conventional wisdom about how people make decisions involving risk. Their work garnered Kahneman the Nobel Prize in Economics in 2002. Behavioral finance and neuroeconomics are relatively new fields of study that seek to identify and understand human behavior and decision making with regard to choices involving trade-offs between risk and reward. Of particular interest are the human biases that prevent individuals from making fully rational financial decisions in the face of uncertainty.

As behavioral economists have documented, our propensity for herd behavior is just the tip of the iceberg. Kahneman and Tversky, for example, showed that people who were asked to choose between a certain loss and a gamble, in which they could either lose more money or break even, would tend to double down (that is, gamble to avoid the prospect of losses), a behavior the authors called “loss aversion.” Building on this work, Hersh Shefrin and Meir Statman, professors at Santa Clara University’s Leavey School of Business, have linked the propensity for loss aversion to investors’ tendency to hold losing investments too long and to sell winners too soon. They called this bias the disposition effect.
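With invented numbers, the choice looks like this: a sure loss versus a gamble with exactly the same expected value; loss aversion is the tendency to take the gamble anyway.

```python
# Illustrative version of the choice described above.
# Option A: lose $500 for certain.
# Option B: 50% chance of losing $1,000, 50% chance of breaking even.
sure_loss = -500
gamble_expected_value = 0.5 * (-1000) + 0.5 * 0

print(sure_loss, gamble_expected_value)  # both -500: same expected value,
                                         # yet most people choose the gamble
```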

The lengthy list of behaviorally driven market effects often converges in an investor’s tale of woe. Overconfidence causes investors to hold concentrated portfolios and to trade excessively, behaviors that can destroy wealth. The illusion of control causes investors to overestimate the probability of success and underestimate risk because of familiarity—for example, causing investors to hold too much employer stock in their 401(k) plans, resulting in under-diversification. Cognitive dissonance causes us to ignore evidence that is contrary to our opinions, leading to myopic investing behavior. And the representativeness bias leads investors to assess risk and return based on superficial characteristics—for example, by assuming that shares of companies that make products you like are good investments.

Several other key behavioral biases come into play in the realm of investing. Framing can cause investors to make a decision based on how the question is worded and the choices presented. Anchoring often leads investors to unconsciously create a reference point, say for securities prices, and then adjust decisions or expectations with respect to that anchor. This bias might impede your ability to sell a losing stock, for example, in the false hope that you can earn your money back. Similarly, the endowment bias might lead you to overvalue a stock that you own and thus hold on to the position too long. And regret aversion may lead you to avoid taking a tough action for fear that it will turn out badly. This can lead to decision paralysis in the wake of a market crash, even though, statistically, it is a good buying opportunity.

Behavioral finance has generated plenty of debate. Some observers have hailed the field as revolutionary; others bemoan the discipline’s seeming lack of a transcendent, unifying theory. This much is clear: behavioral finance treats biases as mistakes that, in academic parlance, prevent investors from thinking “rationally” and cause them to hold “suboptimal” portfolios.

But is that really true? In investing, as in life, the answer is more complex than it appears. Effective decision making requires us to balance our “reptilian brain,” which governs instinctive thinking, with our “rational brain,” which is responsible for strategic thinking. Instinct must integrate with experience.

Put another way, behavioral biases are nothing more than a series of complex trade-offs between risk and reward. When the stock market is taking off, for example, a failure to rebalance by selling winners is considered a mistake. The same goes for a failure to add to a position in a plummeting market. That’s because conventional finance theory assumes markets to be inherently stable, or “mean-reverting,” so most deviations from the historical rate of return are viewed as fluctuations that will revert to the mean, or self-correct, over time.

But what if a precipitous market drop is slicing into your peace of mind, affecting your sleep, your relationships, and your professional life? What if that assumption about markets reverting to the mean doesn’t hold true and you cannot afford to hold on for an extended period of time? In both cases, it might just be “rational” to sell and accept your losses precisely when investment theory says you should be buying. A concentrated bet might also make sense, if you possess the skill or knowledge to exploit an opportunity that others might not see, even if it flies in the face of conventional diversification principles.

Of course, the time to create decision rules for extreme market scenarios and concentrated bets is when you are building your investment strategy, not in the middle of a market crisis or at the moment a high-risk, high-reward opportunity from a former business partner lands on your desk and gives you an adrenaline jolt. A disciplined process for managing risk in relation to a clear set of goals will enable you to use the insights offered by behavioral finance to your advantage, rather than fall prey to the common pitfalls. This is one of the central insights of the Wealth Allocation Framework. But before we can put these insights to practical use, we need to understand the true nature of financial markets.