
The Probability Distribution of the Future

The best colloquial definition of risk may be the following:

“Risk means more things can happen than will happen.”

We found it through the inimitable Howard Marks, but it’s a quote from Elroy Dimson of the London Business School. Doesn’t that capture it pretty well?

Another way to state it is: If there were only one thing that could happen, how much risk would there be, except in an extremely banal sense? You’d know the exact probability distribution of the future. If I told you there was a 100% probability that you’d get hit by a car today if you walked down the street, you simply wouldn’t do it. You wouldn’t call walking down the street a “risky gamble,” right? There’s no gamble at all.

But the truth is that in practical reality, there aren’t many 100% situations to bank on. Way more things can happen than will happen. That introduces great uncertainty into the future, no matter what type of future you’re looking at: An investment, your career, your relationships, anything.

How do we deal with this in a pragmatic way? The investor Howard Marks starts it this way:

Key point number one in this memo is that the future should be viewed not as a fixed outcome that’s destined to happen and capable of being predicted, but as a range of possibilities and, hopefully on the basis of insight into their respective likelihoods, as a probability distribution.

This is the most sensible way to think about the future: A probability distribution where more things can happen than will happen. Knowing that we live in a world of great non-linearity and with the potential for unknowable and barely understandable Black Swan events, we should never become too confident that we know what’s in store, but we can also appreciate that some things are a lot more likely than others. Learning to adjust probabilities on the fly as we get new information is called Bayesian updating.
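Bayesian updating sounds fancier than it is: when new evidence arrives, you multiply your prior belief by how likely that evidence would be under each hypothesis, and renormalize. Here is a minimal sketch in Python; the scenario and every number in it (the prior, the likelihoods, the run of results) are illustrative assumptions, not anything from Marks:

```python
# A minimal sketch of Bayesian updating: revise the probability of a
# hypothesis each time new evidence arrives. Every number here (the prior,
# the likelihoods, the sequence of results) is an illustrative assumption.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothesis: "this fund manager has a real edge."
# Prior belief before seeing any results: 10%.
belief = 0.10

# Assume a winning year is 70% likely if the edge is real,
# but still 50% likely by pure luck if it is not.
for year, won in enumerate([True, True, False, True], start=1):
    if won:
        belief = bayes_update(belief, 0.70, 0.50)
    else:
        belief = bayes_update(belief, 0.30, 0.50)
    print(f"After year {year}: P(edge is real) = {belief:.2f}")
```

The point is not the particular numbers; it is that each piece of evidence nudges the probability rather than flipping it to 0% or 100%.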

But.

Although the future is certainly a probability distribution, Marks makes another excellent point in the wonderful memo above: In reality, only one thing will happen. So you must make the decision: Are you comfortable if that one thing happens, whatever it might be? Even if it only has a 1% probability of occurring? Echoing the first lesson of biology, Warren Buffett stated that “In order to win, you must first survive.” You have to live long enough to play out your hand.

Which leads to an important second point: Uncertainty about the future does not necessarily equate with risk, because risk has another component: Consequences. The world is a place where “bad outcomes” are only “bad” if you know their (rough) magnitude. So in order to think about the future and about risk, we must learn to quantify.

It’s like the old saying (usually uttered right before something terrible happens): What’s the worst that could happen? Let’s say you propose to undertake a six-month project that will cost your company $10 million, and you know there’s a reasonable probability that it won’t work. Is that risky?

It depends on the consequences of losing $10 million, and the probability of that outcome. It’s that simple! (Simple, of course, does not mean easy.) A company with $10 billion in the bank might consider that a very low-risk bet even if it only had a 10% chance of succeeding.

In contrast, a company with only $10 million in the bank might consider it a high-risk bet even if it only had a 10% chance of failing. Maybe five $2 million projects with uncorrelated outcomes would make more sense to the latter company.
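A back-of-the-envelope sketch makes the contrast concrete. The 10%-chance-of-success figure and the project sizes come from the example above; treating the five smaller projects as uncorrelated (independent) is the added assumption:

```python
# A back-of-the-envelope sketch of the example above. The 10% success rate and
# the project sizes come from the example; treating the five $2 million projects
# as independent is an assumption.

p_success = 0.10

# Option A: one $10 million project. Failure loses the whole bankroll.
p_ruin_single = 1 - p_success

# Option B: five independent $2 million projects. Total loss only if all five fail.
p_ruin_split = (1 - p_success) ** 5
p_any_success = 1 - p_ruin_split

print(f"One $10M bet:  P(lose everything) = {p_ruin_single:.0%}")
print(f"Five $2M bets: P(lose everything) = {p_ruin_split:.0%}, "
      f"P(at least one pays off) = {p_any_success:.0%}")
```

Same total capital at stake, but splitting the bet changes the probability of the worst consequence, losing everything, from 90% to roughly 59%. Whether that still counts as "risky" depends on what losing everything would mean to you.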

In the real world, risk = probability of failure x consequences. That concept, however, can be looked at through many lenses. Risk of what? Losing money? Losing my job? Losing face? Those things need to be thought through. When we observe others being “too risk averse,” we might want to think about which risks they’re truly avoiding. Sometimes risk is not only financial. 

***

Let’s cover one more under-appreciated but seemingly obvious aspect of risk, also pointed out by Marks: Knowing the outcome does not teach you about the risk of the decision.

This is an incredibly important concept:

If you make an investment in 2012, you’ll know in 2014 whether you lost money (and how much), but you won’t know whether it was a risky investment – that is, what the probability of loss was at the time you made it.

To continue the analogy, it may rain tomorrow, or it may not, but nothing that happens tomorrow will tell you what the probability of rain was as of today. And the risk of rain is a very good analogue (although I’m sure not perfect) for the risk of loss.

How many times do we see this simple dictum violated? Knowing that something worked out, we argue that it wasn’t that risky after all. But what if, in reality, we were simply fortunate? This is the Fooled by Randomness effect.

The way to think about it is the following: The worst thing that can happen to a young gambler is that he wins the first time he goes to the casino. He might convince himself he can beat the system.

The truth is that most times we don’t know the probability distribution at all. Because the world is not a predictable casino game — an error Nassim Taleb calls the Ludic Fallacy — the best we can do is guess.

With intelligent estimations, we can work to get the rough order of magnitude right, understand the consequences if we’re wrong, and always be sure to never fool ourselves after the fact.

If you’re into this stuff, check out Howard Marks’ memos to his clients, or check out his excellent book, The Most Important Thing. Nate Silver also has an interesting similar idea about the difference between risk and uncertainty. And lastly, another guy that understands risk pretty well is Jason Zweig, who we’ve interviewed on our podcast before.

***

If you liked this article you’ll love:

Nassim Taleb on the Notion of Alternative Histories — “The quality of a decision cannot be solely judged based on its outcome.”

The Four Types of Relationships — As Seneca said, “Time discovers truth.”

Certainty Is an Illusion

We all try to avoid uncertainty, even if it means being wrong. We take comfort in certainty and we demand it of others, even when we know it’s impossible.

Gerd Gigerenzer argues in Risk Savvy: How to Make Good Decisions that life would be pretty dull without uncertainty.

If we knew everything about the future with certainty, our lives would be drained of emotion. No surprise and pleasure, no joy or thrill—we knew it all along. The first kiss, the first proposal, the birth of a healthy child would be about as exciting as last year’s weather report. If our world ever turned certain, life would be mind-numbingly dull.

***
The Illusion of Certainty

We demand certainty of others. We ask our bankers, doctors, and political leaders (among others) to give it to us. What they deliver, however, is the illusion of certainty. We feel comfortable with this.

Many of us smile at old-fashioned fortune-tellers. But when the soothsayers work with computer algorithms rather than tarot cards, we take their predictions seriously and are prepared to pay for them. The most astounding part is our collective amnesia: Most of us are still anxious to see stock market predictions even if they have been consistently wrong year after year.

Technology changes how we see things – it amplifies the illusion of certainty.

When an astrologer calculates an expert horoscope for you and foretells that you will develop a serious illness and might even die at age forty-nine, will you tremble when the date approaches? Some 4 percent of Germans would; they believe that an expert horoscope is absolutely certain.

But when technology is involved, the illusion of certainty is amplified. Forty-four percent of people surveyed think that the result of a screening mammogram is certain. In fact, mammograms fail to detect about ten percent of cancers, and the younger the women being tested, the more error-prone the results, because their breasts are denser.

“Not understanding a new technology is one thing,” Gigerenzer writes, “believing that it delivers certainty is another.”

It’s best to remember Ben Franklin: “In this world nothing can be said to be certain, except death and taxes.”

***
The Security Blanket

Where does our need for certainty come from?

People with a high need for certainty are more prone to stereotypes than others and are less inclined to remember information that contradicts their stereotypes. They find ambiguity confusing and have a desire to plan out their lives rationally. First get a degree, a car, and then a career, find the most perfect partner, buy a home, and have beautiful babies. But then the economy breaks down, the job is lost, the partner has an affair with someone else, and one finds oneself packing boxes to move to a cheaper place. In an uncertain world, we cannot plan everything ahead. Here, we can only cross each bridge when we come to it, not beforehand. The very desire to plan and organize everything may be part of the problem, not the solution. There is a Yiddish joke: “Do you know how to make God laugh? Tell him your plans.”

To be sure, illusions have their function. Small children often need security blankets to soothe their fears. Yet for the mature adult, a high need for certainty can be a dangerous thing. It prevents us from learning to face the uncertainty pervading our lives. As hard as we try, we cannot make our lives risk-free the way we make our milk fat-free.

At the same time, a psychological need is not entirely to blame for the illusion of certainty. Manufacturers of certainty play a crucial role in cultivating the illusion. They delude us into thinking that our future is predictable, as long as the right technology is at hand.

***
Risk and Uncertainty

Two magnificently dressed young women sit upright on their chairs, calmly facing each other. Yet neither takes notice of the other. Fortuna, the fickle, wheel-toting goddess of chance, sits blindfolded on the left while human figures desperately climb, cling to, or tumble off the wheel in her hand. Sapientia, the calculating and vain deity of science, gazes into a hand-mirror, lost in admiration of herself. These two allegorical figures depict a long-standing polarity: Fortuna brings good or bad luck, depending on her mood, but science promises certainty.

Fortuna, the wheel-toting goddess of chance (left), facing Sapientia, the divine goddess of science (right).

This sixteenth-century woodcut was carved a century before one of the greatest revolutions in human thinking, the “probabilistic revolution,” colloquially known as the taming of chance. Its domestication began in the mid-seventeenth century. Since then, Fortuna’s opposition to Sapientia has evolved into an intimate relationship, not without attempts to snatch each other’s possessions. Science sought to liberate people from Fortuna’s wheel, to banish belief in fate, and replace chances with causes. Fortuna struck back by undermining science itself with chance and creating the vast empire of probability and statistics. After their struggles, neither remained the same: Fortuna was tamed, and science lost its certainty.

I explain more on the difference between risk and uncertainty here, but this diagram helps simplify things.
Diagram: certainty, risk, and uncertainty.

***
The Value of Heuristics

The twilight of uncertainty comes in different shades and degrees. Beginning in the seventeenth century, the probabilistic revolution gave humankind the skills of statistical thinking to triumph over Fortuna, but these skills were designed for the palest shade of uncertainty, a world of known risk, in short, risk. I use this term for a world where all alternatives, consequences, and probabilities are known. Lotteries and games of chance are examples. Most of the time, however, we live in a changing world where some of these are unknown: where we face unknown risks, or uncertainty. The world of uncertainty is huge compared to that of risk. … In an uncertain world, it is impossible to determine the optimal course of action by calculating the exact risks. We have to deal with “unknown unknowns.” Surprises happen. Even when calculation does not provide a clear answer, however, we have to make decisions. Thankfully we can do much better than frantically clinging to and tumbling off Fortuna’s wheel. Fortuna and Sapientia had a second brainchild alongside mathematical probability, which is often passed over: rules of thumb, known in scientific language as heuristics.

***
How Decisions Change Based on Risk/Uncertainty

When making decisions, two sets of mental tools are required:
1. RISK: If risks are known, good decisions require logic and statistical thinking.
2. UNCERTAINTY: If some risks are unknown, good decisions also require intuition and smart rules of thumb.

Most of the time we need a combination of the two.

***

Risk Savvy: How to Make Good Decisions is a great read throughout.

The Ability To Focus And Make The Best Move When There Are No Good Moves

"The indeterminate future is somehow one in which probability and statistics are the dominant modality for making sense of the world."
“The indeterminate future is somehow one in which probability and statistics are the dominant modality for making sense of the world.”

Decisions where outcomes (and therefore probabilities) are unknown are often the hardest. The default method of problem solving often falls short.

Sometimes you have to play the odds and sometimes you have to play the calculus.

There are several different frameworks one could use to get a handle on the indeterminate vs. determinate question. The math version is calculus vs. statistics. In a determinate world, calculus dominates. You can calculate specific things precisely and deterministically. When you send a rocket to the moon, you have to calculate precisely where it is at all times. It’s not like some iterative startup where you launch the rocket and figure things out step by step. Do you make it to the moon? To Jupiter? Do you just get lost in space? There were lots of companies in the ’90s that had launch parties but no landing parties.

But the indeterminate future is somehow one in which probability and statistics are the dominant modality for making sense of the world. Bell curves and random walks define what the future is going to look like. The standard pedagogical argument is that high schools should get rid of calculus and replace it with statistics, which is really important and actually useful. There has been a powerful shift toward the idea that statistical ways of thinking are going to drive the future.

With calculus, you can calculate things far into the future. You can even calculate planetary locations years or decades from now. But there are no specifics in probability and statistics—only distributions. In these domains, all you can know about the future is that you can’t know it. You cannot dominate the future; antitheories dominate instead. The Larry Summers line about the economy was something like, “I don’t know what’s going to happen, but anyone who says he knows what will happen doesn’t know what he’s talking about.” Today, all prophets are false prophets. That can only be true if people take a statistical view of the future.

— Peter Thiel

And this quote from The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers by Ben Horowitz:

I learned one important lesson: Startup CEOs should not play the odds. When you are building a company, you must believe there is an answer and you cannot pay attention to your odds of finding it. You just have to find it. It matters not whether your chances are nine in ten or one in a thousand; your task is the same. … I don’t believe in statistics. I believe in calculus.

People always ask me, “What’s the secret to being a successful CEO?” Sadly, there is no secret, but if there is one skill that stands out, it’s the ability to focus and make the best move when there are no good moves. It’s the moments where you feel most like hiding or dying that you can make the biggest difference as a CEO. In the rest of this chapter, I offer some lessons on how to make it through the struggle without quitting or throwing up too much.

… I follow the first principle of the Bushido—the way of the warrior: keep death in mind at all times. If a warrior keeps death in mind at all times and lives as though each day might be his last, he will conduct himself properly in all his actions. Similarly, if a CEO keeps the following lessons in mind, she will maintain the proper focus when hiring, training, and building her culture.

It’s interesting to me that the skill Horowitz singles out is one we can use to teach people how to think, and one that Tyler Cowen feels is in short supply. Cowen says:

The more information that’s out there, the greater the returns to just being willing to sit down and apply yourself. Information isn’t what’s scarce; it’s the willingness to do something with it.

Nassim Taleb on the Notion of Alternative Histories

We see what’s visible and available. Often this is nothing more than randomness, and yet we wrap a narrative around it. The trader who is rich must know what he is doing. A good outcome means we made the right decisions, right? Not so fast. If we were wise, we would not judge the quality of a decision by its outcome. There are alternative histories worth considering.

***

Writing in Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, Nassim Taleb hits on the notion of alternative histories.

Taleb argues that we should judge people by the costs of the alternative, that is, by what would have happened if history had played out another way. These “substitute courses of events are called alternative histories.”

Russian Roulette

Taleb writes:

Clearly, the quality of a decision cannot be solely judged based on its outcome, but such a point seems to be voiced only by people who fail (those who succeed attribute their success to the quality of their decision).

[…]

One can illustrate the strange concept of alternative histories as follows. Imagine an eccentric (and bored) tycoon offering you $10 million to play Russian roulette, i.e., to put a revolver containing one bullet in the six available chambers to your head and pull the trigger. Each realization would count as one history, for a total of six possible histories of equal probabilities. Five out of these six histories would lead to enrichment; one would lead to a statistic, that is, an obituary with an embarrassing (but certainly original) cause of death. The problem is that only one of the histories is observed in reality; and the winner of $10 million would elicit the admiration and praise of some fatuous journalist (the very same ones who unconditionally admire the Forbes 500 billionaires). Like almost every executive I have encountered during an eighteen-year career on Wall Street (the role of such executives in my view being no more than a judge of results delivered in a random manner), the public observes the external signs of wealth without even having a glimpse at the source (we call such source the generator.) Consider the possibility that the Russian roulette winner would be used as a role model by his family, friends, and neighbors.

While the remaining five histories are not observable, the wise and thoughtful person could easily make a guess as to their attributes. It requires some thoughtfulness and personal courage. In addition, in time, if the roulette-betting fool keeps playing the game, the bad histories will tend to catch up with him. Thus, if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim possibility of his surviving until his fiftieth birthday—but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).

[…]

The reader can see my unusual notion of alternative accounting: $10 million earned through Russian roulette does not have the same value as $10 million earned through the diligent and artful practice of dentistry. They are the same, can buy the same goods, except that one’s dependence on randomness is greater than the other.
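Taleb’s survivorship arithmetic is easy to check. A rough sketch, with an arbitrary cohort size chosen only for illustration:

```python
# A quick check of the survivorship arithmetic in the quote above: players who
# pull the trigger on a six-chamber revolver once a year from age 25 to 50.
# The cohort size is an arbitrary choice for illustration.
import random

YEARS = 25        # one pull per year between ages 25 and 50
CHAMBERS = 6
PLAYERS = 1_000

# Exact probability of surviving all 25 pulls: (5/6)^25, roughly 1%.
p_survive = (1 - 1 / CHAMBERS) ** YEARS
print(f"Exact survival probability: {p_survive:.1%}")

# Simulated cohort: count the "extremely rich" survivors.
survivors = sum(
    all(random.randrange(CHAMBERS) != 0 for _ in range(YEARS))
    for _ in range(PLAYERS)
)
print(f"Out of {PLAYERS:,} players, {survivors} survived all {YEARS} years")
```

Roughly one player in a hundred makes it through, so a cohort of a thousand yields Taleb’s handful of rich “role models” standing on top of a very large cemetery.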

Reality is different from roulette. In Taleb’s example the result is unknown but the odds are known; most of life means dealing with uncertainty, where the odds themselves are unknown. Bullets are infrequent, “like a revolver that would have hundreds, even thousands, of chambers instead of six.” After a while you forget about the bullet. We can’t see the chamber, and we generally think of risk in terms of what is visible.

Interestingly, this is the core of the black swan, which is really the problem of induction: no amount of evidence can allow the inference that something is true, whereas a single counterexample can refute a conclusion. This idea is also related to the “denigration of history,” where we think things that happen to others would not happen to us.


Decisions Under Uncertainty

"We confuse risk and uncertainty."
“We confuse risk and uncertainty.”

If you’re a knowledge worker you make decisions every day. In fact, whether you realize it or not, decisions are your job.

Decisions are how you make a living. Of course, not every decision is easy. Decisions tend to fall into different categories, and the way we approach a decision should vary with its category.

Here are a few basic categories that decisions fall into.

There are decisions where:

  1. Outcomes are known. In this case, the range of outcomes is known and the individual outcome is also known. This is the easiest way to make decisions. If I hold out my hand and drop a ball, it will fall to the ground. I know this with near certainty.
  2. Outcomes are unknown, but probabilities are known. In this case, the range of outcomes is known but the individual outcome is unknown. This is risk. Think of this as going to Vegas and gambling. Before you set foot at the table, all of the outcomes are known, as are the probabilities of each. No outcome surprises an objective third party.
  3. Outcomes are unknown and probabilities are unknown. In this case, the distribution of outcomes is unknown and the individual outcomes are necessarily unknown. This is uncertainty.

We often think we’re making decisions in #2 but we’re really operating in #3. The difference may seem trivial but it makes a world of difference.
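One way to feel the difference is to compare a gamble whose odds are printed on the box with a process whose odds you can only estimate from what you have seen so far. A minimal sketch; the payoffs and the 1-in-500 “surprise” event are illustrative assumptions, not from the text:

```python
# A minimal sketch contrasting #2 (risk: known odds) with #3 (uncertainty:
# odds estimated from experience). The payoffs and the 1-in-500 "surprise"
# event are illustrative assumptions.
import random

# --- #2: risk. The distribution is known, so the expected value is exact. ---
# A fair six-sided die that pays its face value in dollars:
expected_die = sum(range(1, 7)) / 6
print(f"Known-odds gamble, exact expected value: ${expected_die:.2f}")

# --- #3: uncertainty. We only see a sample and must estimate from it. ---
def hidden_process():
    """A process whose true odds we don't know: usually +$1, but 1 time
    in 500 it loses $1,000."""
    return -1000 if random.random() < 0.002 else 1

sample = [hidden_process() for _ in range(100)]
estimate = sum(sample) / len(sample)
print(f"Estimated value from 100 observations: ${estimate:.2f}")

# True expected value is about -$1 per play, but roughly 80% of the time a
# 100-observation sample never contains the rare loss, so the naive estimate
# comes out close to +$1.
```

Under risk (#2) the calculation is exact before you ever play. Under uncertainty (#3), a modest sample will often contain no trace of the rare outcome, so the estimate looks comfortably positive while the true expected value is negative.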

Decisions Under Uncertainty

Ignorance is a state of the world where some possible outcomes are unknown: when we’ve moved from #2 to #3.

One way to realize how ignorant we are is to look back, read some old newspapers, and see how often the world did something that wasn’t even imagined.

Some examples include the Arab Spring, the collapse of the Soviet Union, and the financial meltdown.

We’re prepared for a world much like #2 — the world of risk, with known outcomes and probabilities that can be estimated — yet we live in a world that more closely resembles #3.

Read part two of this series: Two types of ignorance.

References: Ignorance: Lessons from the Laboratory of Literature (Joy and Zeckhauser).

Five Rules to Help You Learn to Love Volatility

Nassim Taleb’s book Antifragile: Things That Gain from Disorder is having a profound impact on how I see the world.

In this adapted piece from Antifragile, which appeared in the Wall Street Journal, Taleb offers five policy rules that can help us establish antifragility as a principle of our socioeconomic life.

***

Rule 1: Think of the economy as being more like a cat than a washing machine.

We are victims of the post-Enlightenment view that the world functions like a sophisticated machine, to be understood like a textbook engineering problem and run by wonks. In other words, like a home appliance, not like the human body. If this were so, our institutions would have no self-healing properties and would need someone to run and micromanage them, to protect their safety, because they cannot survive on their own.

By contrast, natural or organic systems are antifragile: They need some dose of disorder in order to develop. Deprive your bones of stress and they become brittle. This denial of the antifragility of living or complex systems is the costliest mistake that we have made in modern times. Stifling natural fluctuations masks real problems, causing the explosions to be both delayed and more intense when they do take place. As with the flammable material accumulating on the forest floor in the absence of forest fires, problems hide in the absence of stressors, and the resulting cumulative harm can take on tragic proportions.

And yet our economic policy makers have often aimed for maximum stability, even for eradicating the business cycle. “No more boom and bust,” as voiced by the U.K. Labour leader Gordon Brown, was the policy pursued by Alan Greenspan in order to “smooth” things out, thus micromanaging us into the current chaos. Mr. Greenspan kept trying to iron out economic fluctuations by injecting cheap money into the system, which eventually led to monstrous hidden leverage and real-estate bubbles. On this front there is now at least a glimmer of hope, in the U.K. rather than the U.S., alas: Mervyn King, governor of the Bank of England, has advocated the idea that central banks should intervene only when an economy is truly sick and should otherwise defer action.

Promoting antifragility doesn’t mean that government institutions should avoid intervention altogether. In fact, a key problem with overzealous intervention is that, by depleting resources, it often results in a failure to intervene in more urgent situations, like natural disasters. So in complex systems, we should limit government (and other) interventions to important matters: The state should be there for emergency-room surgery, not nanny-style maintenance and overmedication of the patient—and it should get better at the former.

In social policy, when we provide a safety net, it should be designed to help people take more entrepreneurial risks, not to turn them into dependents. This doesn’t mean that we should be callous to the underprivileged. In the long run, bailing out people is less harmful to the system than bailing out firms; we should have policies now that minimize the possibility of being forced to bail out firms in the future, with the moral hazard this entails.

Rule 2: Favor businesses that benefit from their own mistakes, not those whose mistakes percolate into the system.

Some businesses and political systems respond to stress better than others. The airline industry is set up in such a way as to make travel safer after every plane crash. A tragedy leads to the thorough examination and elimination of the cause of the problem. The same thing happens in the restaurant industry, where the quality of your next meal depends on the failure rate in the business—what kills some makes others stronger. Without the high failure rate in the restaurant business, you would be eating Soviet-style cafeteria food for your next meal out.

These industries are antifragile: The collective enterprise benefits from the fragility of the individual components, so nothing fails in vain. These businesses have properties similar to evolution in the natural world, with a well-functioning mechanism to benefit from evolutionary pressures, one error at a time.

By contrast, every bank failure weakens the financial system, which in its current form is irremediably fragile: Errors end up becoming large and threatening. A reformed financial system would eliminate this domino effect, allowing no systemic risk from individual failures. A good starting point would be reducing the amount of debt and leverage in the economy and turning to equity financing. A firm with highly leveraged debt has no room for error; it has to be extremely good at predicting future revenues (and black swans). And when one leveraged firm fails to meet its obligations, other borrowers who need to renew their loans suffer as the chastened lenders lose their appetite to extend credit. So debt tends to make failures spread through the system.

A firm with equity financing can survive drops in income, however. Consider the abrupt deflation of the technology bubble during 2000. Because technology firms were relying on equity rather than debt, their failures didn’t ripple out into the wider economy. Indeed, their failures helped to strengthen the technology sector.

Rule 3: Small is beautiful, but it is also efficient.

Experts in business and government are always talking about economies of scale. They say that increasing the size of projects and institutions brings cost savings. But the “efficient,” when too large, isn’t so efficient. Size produces visible benefits but also hidden risks; it increases exposure to the probability of large losses. Projects of $100 million seem rational, but they tend to have much higher percentage overruns than projects of, say, $10 million. Great size in itself, when it exceeds a certain threshold, produces fragility and can eradicate all the gains from economies of scale. To see how large things can be fragile, consider the difference between an elephant and a mouse: The former breaks a leg at the slightest fall, while the latter is unharmed by a drop several multiples of its height. This explains why we have so many more mice than elephants.

So we need to distribute decisions and projects across as many units as possible, which reinforces the system by spreading errors across a wider range of sources. In fact, I have argued that government decentralization would help to lower public deficits. A large part of these deficits comes from underestimating the costs of projects, and such underestimates are more severe in large, top-down governments. Compare the success of the bottom-up mechanism of canton-based decision making in Switzerland to the failures of authoritarian regimes in Soviet Russia and Baathist Iraq and Syria.

Rule 4: Trial and error beats academic knowledge.

Things that are antifragile love randomness and uncertainty, which also means—crucially—that they can learn from errors. Tinkering by trial and error has traditionally played a larger role than directed science in Western invention and innovation. Indeed, advances in theoretical science have most often emerged from technological development, which is closely tied to entrepreneurship. Just think of the number of famous college dropouts in the computer industry.

But I don’t mean just any version of trial and error. There is a crucial requirement to achieve antifragility: The potential cost of errors needs to remain small; the potential gain should be large. It is the asymmetry between upside and downside that allows antifragile tinkering to benefit from disorder and uncertainty.

Perhaps because of the success of the Manhattan Project and the space program, we greatly overestimate the influence and importance of researchers and academics in technological advancement. These people write books and papers; tinkerers and engineers don’t, and are thus less visible. Consider Britain, whose historic rise during the Industrial Revolution came from tinkerers who gave us innovations like iron making, the steam engine and textile manufacturing. The great names of the golden years of English science were hobbyists, not academics: Charles Darwin, Henry Cavendish, William Parsons, the Rev. Thomas Bayes. Britain saw its decline when it switched to the model of bureaucracy-driven science.

America has emulated this earlier model, in the invention of everything from cybernetics to the pricing formulas for derivatives. They were developed by practitioners in trial-and-error mode, drawing continuous feedback from reality. To promote antifragility, we must recognize that there is an inverse relationship between the amount of formal education that a culture supports and its volume of trial-and-error by tinkering. Innovation doesn’t require theoretical instruction, what I like to compare to “lecturing birds on how to fly.”

Rule 5: Decision makers must have skin in the game.

At no time in the history of humankind have more positions of power been assigned to people who don’t take personal risks. But the idea of incentive in capitalism demands some comparable form of disincentive. In the business world, the solution is simple: Bonuses that go to managers whose firms subsequently fail should be clawed back, and there should be additional financial penalties for those who hide risks under the rug. This has an excellent precedent in the practices of the ancients. The Romans forced engineers to sleep under a bridge once it was completed.

Because our current system is so complex, it lacks elementary clarity: No regulator will know more about the hidden risks of an enterprise than the engineer who can hide exposures to rare events and be unharmed by their consequences. This rule would have saved us from the banking crisis, when bankers who loaded their balance sheets with exposures to small probability events collected bonuses during the quiet years and then transferred the harm to the taxpayer, keeping their own compensation.

***

In these five rules, I have sketched out only a few of the more obvious policy conclusions that we might draw from a proper appreciation of antifragility. But the significance of antifragility runs deeper. It is not just a useful heuristic for socioeconomic matters but a crucial property of life in general. Things that are antifragile only grow and improve under adversity. This dynamic can be seen not just in economic life but in the evolution of all things, from cuisine, urbanization and legal systems to our own existence as a species on this planet.

We all know that the stressors of exercise are necessary for good health, but people don’t translate this insight into other domains of physical and mental well-being. We also benefit, it turns out, from occasional and intermittent hunger, short-term protein deprivation, physical discomfort and exposure to extreme cold or heat. Newspapers discuss post-traumatic stress disorder, but nobody seems to account for post-traumatic growth. Walking on smooth surfaces with “comfortable” shoes injures our feet and back musculature: We need variations in terrain.

Modernity has been obsessed with comfort and cosmetic stability, but by making ourselves too comfortable and eliminating all volatility from our lives, we do to our bodies and souls what Mr. Greenspan did to the U.S. economy: We make them fragile. We must instead learn to gain from disorder.

***

Still curious? Buy the book. It’ll change the way you see the world.
