
The Precautionary Principle: Better Safe than Sorry?

Also known as the Precautionary Approach or Precautionary Action, the Precautionary Principle is a concept best summed up by the proverb “better safe than sorry” or the medical maxim to “first do no harm.”

While there is no single definition, it typically refers to preventing harm by holding off on actions that could have negative consequences, even when the likelihood of those consequences is uncertain.

In this article, we will explore how the Precautionary Principle works, its strengths and drawbacks, the best way to use it, and how we can apply it in our own lives.

Guilty until proven innocent

Whenever we make even the smallest change within a complex system, we risk dramatic unintended consequences.

The interconnections and dependencies within systems make it almost impossible to predict outcomes—and seeing as they often require a reasonably precise set of conditions to function, our interventions can wreak havoc.

The Precautionary Principle reflects the reality of working with and within complex systems. It shifts the burden of proof from proving something was dangerous after the fact to proving it is safe before taking chances. It emphasizes waiting for more complete information before risking causing damage, especially if some of the possible impacts would be irreversible, hard to contain, or would affect people who didn’t choose to be involved.

The possibility of harm does not need to be specific to that particular circumstance; sometimes we can judge a category of actions as one that always requires precaution because we know it has a high risk of unintended consequences.

For example, invasive species (plants or animals that cause harm after being introduced into a new environment by humans) have repeatedly caused native species to become extinct. So it’s reasonable to exercise precaution and not introduce living things into new places without strong evidence they will be harmless.

Preventing risks and protecting resources

Best known for its use as a regulatory guideline in environmental law and public health, the Precautionary Principle originated with the German term “Vorsorgeprinzip,” applied to regulations for preventing air pollution. Konrad von Moltke, director of the Institute for European Environmental Policy, later translated it into English.

Seeing as the natural world is a highly complex system we have repeatedly disrupted in serious, permanent ways, the Precautionary Principle has become a guiding part of environmental policy in many countries.

For example, the Umweltbundesamt (German Environmental Protection Agency) explains that the Precautionary Principle has two core components in German environmental law today: preventing risks and protecting resources.

Preventing risks means legislators shouldn’t take actions where our knowledge of the potential for environmental damage is incomplete or uncertain but there is cause for concern. The burden of proof is on proving lack of harm, not on proving harm. Protecting resources means preserving things like water and soil in a form future generations can use.

To give another example, some countries invoke versions of the Precautionary Principle to justify bans on genetically modified foods—in some cases for good, in others until evidence of their safety is considered stronger. It is left to legislators to interpret and apply the Precautionary Principle within specific situations.

The flexibility of the Precautionary Principle is both a source of strength and a source of weakness. We live in a fast-moving world where regulation does not always keep up with innovation, meaning guidelines (as opposed to rules) can often prove useful.

Another reason the Precautionary Principle can be a practical addition to legislation is that science doesn’t necessarily move fast enough to protect us from potential risks, especially ones that shift harm elsewhere or take a long time to show up. For example, thousands of human-made substances are present in the food we eat, ranging from medications given to livestock to materials used in packaging. Proving that a new additive has health risks once it’s in the food supply could take decades because it’s incredibly difficult to isolate causative factors. So some regulators, including the Food and Drug Administration in America, require manufacturers to prove something is safe before it goes to market. This approach isn’t perfect, but it’s far safer than waiting to discover harm after we start eating something.

The Precautionary Principle forces us to ask a lot of difficult questions about the nature of risk, uncertainty, probability, the role of government, and ethics. It can also prompt us to question our intuitions surrounding the right decisions to make in certain situations.

When and how to use the Precautionary Principle

When handling risks, it is important to be aware of what we don’t or can’t know for sure. The Precautionary Principle is not intended to be a stifling justification for banning things—it’s a tool for handling particular kinds of uncertainty. Heuristics can guide us in making important decisions, but we still need to be flexible and treat each case as unique.

So how should we use the Precautionary Principle? Sven Ove Hansson suggests two requirements in How Extreme Is the Precautionary Principle? First, if there are competing priorities (beyond avoidance of harm), it should be combined with other decision-making principles. For example, the idea of “explore versus exploit” teaches us that we need to balance doubling down on existing options with trying out new ones. Second, the decision to take precautionary action should be based on the most up-to-date science, and there should be plans in place for how to update that decision if the science changes. That includes planning how often to reevaluate the evidence and how to assess its quality.

When is it a good idea to use the Precautionary Principle? There are a few types of situations where it’s better to be safe than sorry if things are uncertain.

When the costs of waiting are low. As we’ve already seen, the Precautionary Principle is intended as a tool for handling uncertainty, rather than a justification for arbitrary bans. This means that if the safety of something is uncertain but the costs of waiting to learn more are low, it’s a good idea to use precaution.

When preserving optionality is a priority. The Precautionary Principle is most often invoked for potential risks that would cause irreversible, far-reaching, uncontainable harm. Seeing as we don’t know what the future holds, keeping our options open by avoiding limiting choices gives us the most flexibility later on. The Precautionary Principle preserves optionality by ensuring we don’t restrict the resources we have available further down the line or leave messes for our future selves to clean up.

When the potential costs of a risk are far greater than the cost of preventative action. If a potential risk would be devastating or even ruinous, and it’s possible to protect against it, precautionary action is key. Sometimes winning is just staying in the game—and sometimes staying in the game boils down to not letting anything wipe you out.

For example, in 1963 the Swiss government pledged to provide bunker spaces to all citizens in the event of a nuclear attack or disaster. The country still maintains a national system of thousands of warning sirens and distributes potassium iodide tablets (used to reduce the effects of radiation) to people living near nuclear plants in case of an accident. Given the potential effects of an incident on Switzerland (regardless of how likely it is), these precautionary actions are considered worthwhile.
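To make the “staying in the game” arithmetic concrete, here is a minimal Python sketch of our own (the 1% per-round chance of ruin and the number of rounds are made-up figures, not from any study). It shows how a risk that looks negligible on any single occasion becomes close to a sure thing if you keep taking it, which is why cheap protection against ruin can be worth having.

    import random

    random.seed(7)

    RUIN_PROBABILITY = 0.01   # hypothetical: a 1% chance each round that the risk wipes you out
    ROUNDS = 500              # how many times the risk is taken
    TRIALS = 10_000           # simulated "lives"

    survived = 0
    for _ in range(TRIALS):
        if all(random.random() > RUIN_PROBABILITY for _ in range(ROUNDS)):
            survived += 1

    print(f"Chance of never being wiped out: {survived / TRIALS:.1%}")
    # Analytically, 0.99 ** 500 is roughly 0.7%: a risk that seems trivial in any
    # single round becomes near-certain ruin when it is taken again and again.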

When alternatives are available. If there are alternative courses of action we know to be safe, it’s a good idea to wait for more information before adopting a new risky one.

When not to use the Precautionary Principle

As a third criterion for using the Precautionary Principle usefully, Sven Ove Hansson recommends it not be used when the likelihood or scale of a potential risk is too low for precautionary action to have any benefit. For example, if one person per year dies from an allergic reaction to a guinea pig bite, it’s probably not worth banning pet guinea pigs. We can add a few more examples of situations where it’s generally not a good idea to use the Precautionary Principle.

When the tradeoffs are substantial and known. The whole point of the Precautionary Principle is to avoid harm. If we know for sure that not taking an action will cause more damage than taking it possibly could, it’s not a good idea to use precaution.

For example, following the 2011 accident at Fukushima, Japan shut down all of its nuclear power plants. Seeing as nuclear power is cheaper than fossil fuels, this resulted in a sharp increase in electricity prices in parts of the country. According to the authors of the paper Be Cautious with the Precautionary Principle, the resulting increase in mortality from people being unable to spend as much on heating was higher than the fatalities from the actual accident.

When the risks are known and priced in. We all have different levels of risk appetite and we make judgments about whether certain activities are worth the risks involved. When a risk is priced in, that means people are aware of it and voluntarily decide it is worthwhile—or even desirable.

For example, riskier investments tend to have higher potential returns. Although they might not make sense for someone who doesn’t want to risk losing any money, they do make sense for those who consider the potential gains worth the potential losses.

When only a zero-risk option would be satisfying. It’s impossible to completely avoid risks, so it doesn’t make much sense to exercise precaution with the expectation that a 100% safe option will appear.

When taking risks could strengthen us. As individuals, we can sometimes be overly risk averse and too cautious—to the point where it makes us fragile. Our ancestors had the best chance of surviving if they overreacted, rather than underreacted, to risks. But for many of us today, the biggest risk we face can be the stress caused by worrying too much about improbable dangers. We can end up fearing the kinds of risks, like social rejection, that are unavoidable and that tend to make us stronger if we embrace them as inevitable. Never taking any risks is generally a far worse idea than taking sensible ones.

***

We all face decisions every day that involve balancing risk. The Precautionary Principle is a tool that helps us determine when a particular choice is worth taking a gamble on, or when we need to sit tight and collect more information.

The Availability Bias: How to Overcome a Common Cognitive Distortion

“The attention which we lend to an experience is proportional to its vivid or interesting character, and it is a notorious fact that what interests us most vividly at the time is, other things equal, what we remember best.” —William James

The availability heuristic explains why winning an award makes you more likely to win another award. It explains why we sometimes avoid one thing out of fear and end up doing something else that’s objectively riskier. It explains why governments spend enormous amounts of money mitigating risks we’ve already faced. It explains why the five people closest to you have a big impact on your worldview. It explains why mountains of data indicating something is harmful don’t necessarily convince everyone to avoid it. It explains why it can seem as if everything is going well when the stock market is up. And it explains why bad publicity can still be beneficial in the long run.

Here’s how the availability heuristic works, how to overcome it, and how to use it to your advantage.

***

How the availability heuristic works

Before we explain the availability heuristic, let’s quickly recap the field it comes from.

Behavioral economics is a field of study bringing together knowledge from psychology and economics to reveal how real people behave in the real world. This is in contrast to the traditional economic view of human behavior, which assumed people always behave in accordance with rational, stable interests. The field largely began in the 1960s and 1970s with the work of psychologists Amos Tversky and Daniel Kahneman.

Behavioral economics posits that people often make decisions and judgments under uncertainty using imperfect heuristics, rather than by weighing up all of the relevant factors. Quick heuristics enable us to make rapid decisions without taking the time and mental energy to think through all the details.

Most of the time, they lead to satisfactory outcomes. However, they can bias us towards certain consistently irrational decisions that contradict what economics would tell us is the best choice. We usually don’t realize we’re using heuristics, and they’re hard to change even if we’re actively trying to be more rational.

One such cognitive shortcut is the availability heuristic, first studied by Tversky and Kahneman in 1973. We tend to judge the likelihood and significance of things based on how easily they come to mind. The more “available” a piece of information is to us, the more important it seems. The result is that we give greater weight to information we learned recently because a news article we read last night comes to mind more easily than a science class we took years ago. It’s too much work to try to comb through every piece of information that might be in our heads.

We also give greater weight to information that is shocking or unusual. Shark attacks and plane crashes strike us more than accidental drownings or car accidents, so we overestimate their odds.

If we’re presented with a set of similar things with one that differs from the rest, we’ll find it easier to remember. For example, in the sequence of characters “RTASDT9RTGS,” the character most likely to be remembered is the “9” because it stands out from the letters.

In Behavioural Law and Economics, Timur Kuran and Cass Sunstein write:

“Additional examples from recent years include mass outcries over Agent Orange, asbestos in schools, breast implants, and automobile airbags that endanger children. Their common thread is that people tended to form their risk judgments largely, if not entirely, on the basis of information produced through a social process, rather than personal experience or investigation. In each case, a public upheaval occurred as vast numbers of players reacted to each other’s actions and statements. In each, moreover, the demand for swift, extensive, and costly government action came to be considered morally necessary and socially desirable—even though, in most or all cases, the resulting regulations may well have produced little good, and perhaps even relatively more harm.”

Narratives are more memorable than disjointed facts. There’s a reason why cultures around the world teach important life lessons and values through fables, fairy tales, myths, proverbs, and stories.

Personal experience can also make information more salient. If you’ve recently been in a car accident, you may well view car accidents as more common in general than you did before. The base rates haven’t changed; you just have an unpleasant, vivid memory coming to mind whenever you get in a car. We too easily assume that our recollections are representative and true and discount events that are outside of our immediate memory. To give another example, you may be more likely to buy insurance against a natural disaster just after being impacted by one than you were before it happened.

Anything that makes something easier to remember increases its impact on us. In an early study, Tversky and Kahneman asked subjects whether a random English word is more likely to begin with “K” or have “K” as the third letter. Seeing as it’s typically easier to recall words beginning with a particular letter, people tended to assume the former was more common. The opposite is true.

In Judgment Under Uncertainty: Heuristics and Biases, Tversky and Kahneman write:

“…one may estimate probability by assessing availability, or associative distance. Lifelong experience has taught us that instances of large classes are recalled better and faster than instances of less frequent classes, that likely occurrences are easier to imagine than unlikely ones, and that associative connections are strengthened when two events frequently co-occur.

…For example, one may assess the divorce rate in a given community by recalling divorces among one’s acquaintances; one may evaluate the probability that a politician will lose an election by considering various ways in which he may lose support; and one may estimate the probability that a violent person will ‘see’ beasts of prey in a Rorschach card by assessing the strength of association between violence and beasts of prey. In all of these cases, the assessment of the frequency of a class or the probability of an event is mediated by an assessment of availability.”

They go on to write:

“That associative bonds are strengthened by repetition is perhaps the oldest law of memory known to man. The availability heuristic exploits the inverse form of this law, that is, it uses strength of association as a basis for the judgment of frequency. In this theory, availability is a mediating variable, rather than a dependent variable as is typically the case in the study of memory.”

***

How the availability heuristic misleads us

“People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media.” —Daniel Kahneman, Thinking, Fast and Slow

To go back to the points made in the introduction of this post, winning an award can make you more likely to win another award because it gives you visibility, making your name come to mind more easily in connection to that kind of accolade. We sometimes avoid one thing in favor of something objectively riskier, like driving instead of taking a plane, because the dangers of the latter are more memorable. The five people closest to you can have a big impact on your worldview because you frequently encounter their attitudes and opinions, bringing them to mind when you make your own judgments. Mountains of data indicating something is harmful don’t always convince people to avoid it if those dangers aren’t salient, such as if they haven’t personally experienced them. It can seem as if things are going well when the stock market is up because it’s a simple, visible, and therefore memorable indicator. Bad publicity can be beneficial in the long run if it means something, such as a controversial book, gets mentioned often and is more likely to be recalled.

These aren’t empirical rules, but they’re logical consequences of the availability heuristic, in the absence of mitigating factors.

We are what we remember, and our memories have a significant impact on our perception of the world. What we end up remembering is influenced by factors such as the following:

  • Our foundational beliefs about the world
  • Our expectations
  • The emotions a piece of information inspires in us
  • How many times we’re exposed to a piece of information
  • The source of a piece of information

There is no real link between how memorable something is and how likely it is to happen. In fact, the opposite is often true. Unusual events stand out more and receive more attention than commonplace ones. As a result, the availability heuristic skews our perception of risks in two key ways:

We overestimate the likelihood of unlikely events. And we underestimate the likelihood of likely events.

Overestimating the risk of unlikely events leads us to stay awake at night, turning our hair grey, worrying about things that have almost no chance of happening. We can end up wasting enormous amounts of time, money, and other resources trying to mitigate things that have, on balance, a small impact. Sometimes those mitigation efforts end up backfiring, and sometimes they make us feel safer than they should.

On the flipside, we can overestimate the chance of unusually good things happening to us. Looking at everyone’s highlights on social media, we can end up expecting our own lives to also be a procession of grand achievements and joys. But most people’s lives are mundane most of the time, and the highlights we see tend to be exceptional ones, not routine ones.

Underestimating the risk of likely events leads us to fail to prepare for predictable problems and occurrences. We’re so worn out from worrying about unlikely events, we don’t have the energy to think about what’s in front of us. If you’re stressed and anxious much of the time, you’ll have a hard time paying attention to those signals when they really matter.

All of this is not to say that you shouldn’t prepare for the worst. Or that unlikely things never happen (as Littlewood’s Law states, you can expect a one-in-a-million event at least once per month). Rather, we should be careful about only preparing for the extremes because those extremes are more memorable.
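For the curious, the usual back-of-envelope version of Littlewood’s Law runs like this (the one-event-per-second and eight-alert-hours-a-day figures are the conventional assumptions behind the claim, not measurements):

    # Littlewood's rough arithmetic: call anything with one-in-a-million odds a "miracle"
    # and assume you register about one event per second for eight alert hours a day.
    events_per_day = 60 * 60 * 8                      # ~28,800 noticeable events per day
    days_to_a_million = 1_000_000 / events_per_day
    print(f"Days to experience a million events: {days_to_a_million:.0f}")   # ~35 days
    # So even at million-to-one odds, you'd expect a "miracle" roughly once a month.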

***

How to overcome the availability heuristic

Knowing about a cognitive bias isn’t usually enough to overcome it. Even people like Kahneman who have studied behavioral economics for many years sometimes struggle with the same irrational patterns. But being aware of the availability heuristic is helpful for the times when you need to make an important decision and can step back to make sure it isn’t distorting your view. Here are five ways of mitigating the availability heuristic.

#1. Always consider base rates when making judgments about probability.
The base rate of something is the average prevalence of it within a particular population. For example, around 10% of the population are left-handed. If you had to guess the likelihood of a random person being left-handed, you would be correct to say 1 in 10 in the absence of other relevant information. When judging the probability of something, look at the base rate whenever possible.
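As a quick illustration (our own sketch, not from Tversky and Kahneman), the few lines of Python below compare two constant guesses against a simulated population with the 10% base rate. The 30% figure stands in for a hypothetical estimate inflated by a few vivid memories of left-handed friends.

    import random

    random.seed(42)

    BASE_RATE = 0.10          # roughly 10% of people are left-handed
    INFLATED_GUESS = 0.30     # hypothetical guess skewed by a few memorable examples

    # Simulate a population in which each person is left-handed with the base-rate probability.
    population = [1 if random.random() < BASE_RATE else 0 for _ in range(100_000)]

    def mean_squared_error(guess, outcomes):
        """Average squared error of a constant probability guess against the actual outcomes."""
        return sum((guess - actual) ** 2 for actual in outcomes) / len(outcomes)

    print(f"Error guessing the base rate ({BASE_RATE}): {mean_squared_error(BASE_RATE, population):.4f}")
    print(f"Error guessing the inflated figure ({INFLATED_GUESS}): {mean_squared_error(INFLATED_GUESS, population):.4f}")
    # The base-rate guess scores noticeably better: absent other information,
    # the prevalence in the population is the estimate to beat.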

#2. Focus on trends and patterns.
The mental model of regression to the mean teaches us that extreme events tend to be followed by more moderate ones. Outlier events are often the result of luck and randomness. They’re not necessarily instructive. Whenever possible, base your judgments on trends and patterns—the longer term, the better. Track record is everything, even if outlier events are more memorable.
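Here is a small simulation of our own (the numbers are made up) showing why outlier events are rarely instructive: if observed performance is stable skill plus transient luck, the people who top the rankings in one round drift back towards the average in the next.

    import random

    random.seed(1)

    # Toy model: observed performance = stable skill + transient luck.
    people = []
    for _ in range(10_000):
        skill = random.gauss(100, 10)
        round_one = skill + random.gauss(0, 10)   # luck in round one
        round_two = skill + random.gauss(0, 10)   # fresh luck in round two
        people.append((round_one, round_two))

    # The "outliers": the top 1% of round-one performers.
    top = sorted(people, key=lambda p: p[0], reverse=True)[:100]

    avg_one = sum(p[0] for p in top) / len(top)
    avg_two = sum(p[1] for p in top) / len(top)
    print(f"Top 1% in round one, average score: {avg_one:.1f}")
    print(f"Same people in round two, average:  {avg_two:.1f}")
    # The round-two average sits much closer to the overall mean of about 100:
    # the luck that pushed these people to the top does not repeat itself.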

#3. Take the time to think before making a judgment.
The whole point of heuristics is that they save the time and effort needed to parse a ton of information and make a judgment. But, as we always say, you can’t make a good decision without taking time to think. There’s no shortcut for that. If you’re making an important decision, the only way to get around the availability heuristic is to stop and go through the relevant information, rather than assuming whatever comes to mind first is correct.

#4. Keep track of information you might need to use in a judgment far off in the future.
Don’t rely on memory. In Judgment in Managerial Decision-Making, Max Bazerman and Don Moore present the example of workplace annual performance appraisals. Managers tend to base their evaluations more on the prior three months than the nine months before that. It’s much easier than remembering what happened over the course of an entire year. Managers also tend to give substantial weight to unusual one-off behavior, such as a serious mistake or notable success, without considering the overall trend. In this case, noting down observations on someone’s performance throughout the entire year would lead to a more accurate appraisal.

#5. Go back and revisit old information.
Even if you think you can recall everything important, it’s a good idea to go back and refresh your memory of relevant information before making a decision.

The availability heuristic is part of Farnam Street’s latticework of mental models.

Better Thinking & Incentives: Lessons From Shakespeare

At Farnam Street, we aim to master the best of what other people have figured out. Not surprisingly, it’s quite a lot. The past is full of useful lessons that have much to teach us. Sometimes, we just need to remember what we’re looking for and why.

Life can be overwhelming. It seems like there’s a new technology, a new hack, a new way of doing things, or a new way we need to be every five minutes. Figuring out what to pay attention to is hard. It’s also a task we take seriously at Farnam Street. If we want to be a signal in the noise, we have to find other signals ourselves.

That’s why we spend a lot of time in the past. We like reading about history, and we like to look for timeless ideas. Learning information that is going to stay relevant for one hundred years is a better time investment than trying to digest information that will expire next week.

However, the past is a big place containing a lot of information. So it’s always appreciated when we find a source that has curated some timeless lessons from the past for us. In his book How to Think Like Shakespeare, professor Scott Newstok dives into history to pull out some of what humanity has already learned about better thinking and applying incentives.

***

Better thinking and education

“Doing and thinking are reciprocal practices.”

How do we get better at thinking? When you think about something, hopefully you learn more about it. But then the challenge becomes doing something with what you’ve learned. Often, we don’t want our knowledge to stay theoretical. We’ve learned something in order to do something. We want to put our knowledge into practice somehow.

The good news is, doing and thinking reinforce and augment each other. It’s a subtle but powerful feedback loop. You learn something. Armed with that new information, you do something. Informed by the results of your doing, you learn something new.

Throughout his book, Newstok weaves in many ideas on how to think better and how to engage with information. One of the ways to think better is to complement thinking with doing. For centuries, we’ve had the concept of “craft,” loosely understood as the knowledge one attains by doing. Newstok explains that the practice of any craft “requires—well, practice. Its difficult-to-codify habits are best transmitted in person, through modeling, observation, imitation, [and] correction adjustment.” You develop a deeper understanding when you apply your knowledge to creating something tangible. Crafting a piece of furniture is similar to crafting a philosophical argument in the sense that actually doing the work is what really develops knowledge. “Incorporating this body of knowledge, learning how to improvise within constraints, [and] appreciating how limited resources shape solutions to problems” lies at the core of mastery.

The application of what you’ve ingested in order to really learn it reminds us of the Feynman Learning Technique. To really master a subject, teach it to a novice. When you break down what you think you know into a teachable format, you begin to truly know something.

Newstok writes, “It’s human to avoid the hard work of thinking, reading, and writing. But we all fail when technology becomes a distraction from, or, worse, a substitute for, the interminable yet rewarding task of confronting the object under study.” Basically, it’s human to be lazy. It’s easier to cruise around on social media than put your ideas into action.

Better thinking takes strength. You have to be able to tune out the noise and walk away from the quick dopamine hits to put the effort into attempting to do something with your thoughts. You also need strength to confront the results and figure out how to do better next time. And even if your job is figuring out how to be better on social media, focusing on the relationship between doing and thinking will produce better results than undirected consumption.

The time and space to do something with our thoughts is how we transform what we learn into something we know.

Admittedly, knowing something often requires courage. First, the courage to admit what you don’t know, and second, the courage to be the least smart person in the room. But when you master a subject, the rewards are incredible. You have flexibility and understanding and options to keep learning.

***

Applying incentives

“If you create an incentive to hit the target, it’s all the less likely you will do so.”

Newstok explains how the wrong incentives do far more damage than diminishing our motivation to attain a goal. Applying bad incentives can diminish the effectiveness of an entire system. You get what you measure, because measuring something incentivizes you to do it.

He explores the problem of incentives in the American education system. The priority is on the immediate utility of information because the incentive is to pass tests. For students, passing tests is the path to higher education, where they can pass more tests and get validated as being a person who knows something. For teachers, students passing tests is the path to higher rankings, more students, and more funding.

Newstok suggests we don’t need to worry so much about being right and feeding the continual assessment pressure this attitude creates. Why? Because we don’t know exactly what we will need to know in the future. He writes, “When Shakespeare was born there wasn’t yet a professional theater in London. His ‘useless’ Latin drills prepared him for a job that didn’t yet exist.…Why are we wasting precious classroom hours on fleeting technical skills—skills that will become obsolete before graduates enter the workforce?” It seems that a better approach is to incentivize teaching tools that will give students the flexibility to develop their thinking in response to changes around them.

Considering the proper application of incentives in relation to future goals has ramifications in all organizations, not just schools.

A common problem in many organizations is that the opportunities to accrue further reward and compensation can only come by climbing ever higher in the pyramid. Thus people are incentivized to get into management, something they may have no interest in and may not be any good at. Not everyone who invents amazing widgets should manage a group of widget inventors. By not incentivizing alternate paths, the organization ends up losing the amazing widget inventors, handicapping itself by diminishing its adaptability.

We’ve written before about another common problem in so many offices: compensation is tied to visibility, physical presence, or volume of output and not to quality of contribution. To be fair, quality is harder to measure. But it is really more about organizational attitude. Do you want people to be busy typing or busy thinking? We all say we want thinkers. We rarely give anyone the time to think. In this case, we end up with organizations that are only able to produce more of the same.

And paying people by, say, profit-sharing can be great, as it incentivizes collaboration and commitment to the health of the organization. But even this needs to be managed so that the incentives don’t end up prioritizing short-term money at the expense of long-term success—much like students learning only to pass tests at the expense of their future knowledge and resiliency.

Newstok suggests instead that “we all need practice in curiosity, intellectual agility, the determination to analyze, commitment to resourceful communication, historically and culturally situated reflectiveness, [and] the confidence to embrace complexity. In short: the ambition to create something better, in whatever field.” We don’t need to be incentivized for immediate performance. Rather, we need incentives to explore what might need to be known to face future challenges and respond to future opportunities.

***

The most fascinating thing about Newstok’s book is that it rests on ideas that are hundreds of years old. The problems he explores are not new, and the answers he presents to the challenges of better thinking and aligning incentives are based on perspectives provided in history books.

So maybe the ultimate lesson is the reminder that not every problem needs to be approached as a blank slate. Humanity has developed some wisdom and insight on a few topics. Before we reinvent the wheel, it’s worth looking back to leverage what we’ve already figured out.

Your Thinking Rate Is Fixed

You can’t force yourself to think faster. If you try, you’re likely to end up making much worse decisions. Here’s how to improve the actual quality of your decisions instead of chasing hacks to speed them up.

If you’re a knowledge worker, as an ever-growing proportion of people are, the product of your job is decisions.

Much of what you do day to day consists of trying to make the right choices among competing options, meaning you have to process large amounts of information, discern what’s likely to be most effective for moving towards your desired goal, and try to anticipate potential problems further down the line. And all the while, you’re operating in an environment of uncertainty where anything could happen tomorrow.

When the product of your job is your decisions, you might find yourself wanting to be able to make more decisions more quickly so you can be more productive overall.

Chasing speed is a flawed approach. Because decisions—at least good ones—don’t come out of thin air. They’re supported by a lot of thinking.

While experience and education can grant you the pattern-matching abilities to make some kinds of decisions using intuition, you’re still going to run into decisions that require you to sit and consider the problem from multiple angles. You’re still going to need to schedule time to do nothing but think. Otherwise making more decisions will make you less productive overall, not more, because your decisions will suck.

Here’s a secret that might sound obvious but can actually transform the way you work: you can’t force yourself to think faster. Our brains just don’t work that way. The rate at which you make mental discernments is fixed.

Sure, you can develop your ability to do certain kinds of thinking faster over time. You can learn new methods for decision-making. You can develop your mental models. You can build your ability to focus. But if you’re trying to speed up your thinking so you can make an extra few decisions today, forget it.

***

Beyond the “hurry up” culture

Management consultant Tom DeMarco writes in Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency that many knowledge work organizations have a culture where the dominant message at all times is to hurry up.

Everyone is trying to work faster at all times, and they pressure everyone around them to work faster, too. No one wants to be perceived as a slacker. The result is that managers put pressure on their subordinates through a range of methods. DeMarco lists the following examples:

  • “Turning the screws on delivery dates (aggressive scheduling)
  • Loading on extra work
  • Encouraging overtime
  • Getting angry when disappointed
  • Noting one subordinate’s extraordinary effort and praising it in the presence of others
  • Being severe about anything other than superb performance
  • Expecting great things of all your workers
  • Railing against any apparent waste of time
  • Setting an example yourself (with the boss laboring so mightily there is certainly no time for anyone else to goof off)
  • Creating incentives to encourage desired behavior or results.”

All of these things increase pressure in the work environment and repeatedly reinforce the “hurry up!” message. They make managers feel like they’re moving things along faster. That way if work isn’t getting done, it’s not their fault. But, DeMarco writes, they don’t lead to meaningful changes in behavior that make the whole organization more productive. Speeding up often results in poor decisions that create future problems.

The reason more pressure doesn’t mean better productivity is that the rate at which we think is fixed.

We can’t force ourselves to start making faster decisions right now just because we’re faced with an unrealistic deadline. DeMarco writes, “Think rate is fixed. No matter what you do, no matter how hard you try, you can’t pick up the pace of thinking.”

If you’re doing a form of physical labor, you can move your body faster when under pressure. (Of course, if it’s too fast, you’ll get injured or won’t be able to sustain it for long.)

If you’re a knowledge worker, you can’t pick up the pace of mental discriminations just because you’re under pressure. Chances are good that you’re already going as fast as you can. Because guess what? You can’t voluntarily slow down your thinking, either.

***

The limits of pressure

Faced with added stress and unable to accelerate our brains instantaneously, we can do any of three things:

  • “Eliminate wasted time.
  • Defer tasks that are not on the critical path.
  • Stay late.”

Even if those might seem like positive things, they’re less advantageous than they appear at first glance. Their effects are marginal at best. The smarter and more qualified the knowledge worker, the less time they’re likely to be wasting anyway. Most people don’t enjoy wasting time. What you’re more likely to end up eliminating is valuable slack time for thinking.

Deferring non-critical tasks doesn’t save any time overall; it just pushes work forward—to the point where those tasks do become critical. Then something else gets deferred.

Staying late might work once in a while. Again, though, its effects are limited. If we keep doing it night after night, we run out of energy, our personal lives suffer, and we make worse decisions as a result.

None of the outcomes of increasing pressure result in more or better decisions. None of them speed up the rate at which people think. Even if an occasional, tactical increase in pressure (whether it comes from the outside or we choose to apply it to ourselves) can be effective, ongoing pressure increases are unsustainable in the long run.

***

Think rate is fixed

It’s incredibly important to truly understand the point DeMarco makes in this part of Slack: the rate at which we process information is fixed.

When you’re under pressure, the quality of your decisions plummets. You miss possible angles, you don’t think ahead, you do what makes sense now, you panic, and so on. Often, you make a snap judgment then grasp for whatever information will support it for the people you work with. You don’t have breathing room to stress-test your decisions.

The clearer you can think, the better your decisions will be. Trying to think faster can only cloud your judgment. It doesn’t matter how many decisions you make if they’re not good ones. As DeMarco reiterates throughout the book, you can be efficient without being effective.

Try making a list of the worst decisions you’ve made so far in your career. There’s a good chance most of them were made under intense pressure or without taking much time over them.

At Farnam Street, we write a lot about how to make better decisions, and we share a lot of tools for better thinking. We made a whole course on decision-making. But none of these resources are meant to immediately accelerate your thinking. Many of them require you to actually slow down a whole lot and spend more time on your decisions. They improve the rate at which you can do certain kinds of thinking, but it’s not going to be an overnight process.

***

Upgrading your brain

Some people read one of our articles or books about mental models and complain that it’s not an effective approach because it didn’t lead to an immediate improvement in their thinking. That’s unsurprising; our brains don’t work like that. Integrating new, better approaches takes a ton of time and repetition, just like developing any other skill. You have to keep on reflecting and making course corrections.

At the end of the day, your brain is going to go where it wants to go. You’re going to think the way you think. However much you build awareness of how the world works and learn how to reorient, you’re still, to use Jonathan Haidt’s metaphor from The Righteous Mind, a tiny rider atop a gigantic elephant. None of us can reshape how we think overnight.

Making good decisions is hard work. There’s a limit to how many decisions you can make in a day before you need a break. On top of that, many knowledge workers are in fields where the most relevant information has a short half-life. Making good decisions requires constant learning and verifying what you think you know.

If you want to make better decisions, you need to do everything you can to reduce the pressure you’re under. You need to let your brain take whatever time it needs to think through the problem at hand. You need to get out of a reactive mode, recognize when you need to pause, and spend more time looking at problems.

A good metaphor is installing an update to the operating system on your laptop. Would you rather install an update that fixes bugs and improves existing processes, or one that just makes everything run faster? Obviously, you’d prefer the former. The latter would just lead to more crashes. The same is true for updating your mental operating system.

Stop trying to think faster. Start trying to think better.

How Julia Child Used First Principles Thinking

There’s a big difference between knowing how to follow a recipe and knowing how to cook. If you can master the first principles within a domain, you can see much further than those who are just following recipes. That’s what Julia Child, “The French Chef”, did throughout her career.

Following a recipe might get you the results you want, but it doesn’t teach you anything about how cooking works at the foundational level. Or what to do when something goes wrong. Or how to come up with your own recipes when you open the fridge on a Wednesday night and realize you forgot to go grocery shopping. Or how to adapt recipes for your own dietary needs.

Adhering to recipes will only get you so far, and it certainly won’t result in you coming up with anything new or creative.

People who know how to cook understand the basic principles that make food taste, look, and smell good. They have confidence in troubleshooting and solving problems as they go—or adjusting to unexpected outcomes. They can glance at an almost barren kitchen and devise something delicious. They know how to adapt to a guest with a gluten allergy or a child who doesn’t like green food. Sure, they might consult a recipe when it makes sense to do so. But they’re not dependent on it, and they can change it up based on their particular circumstances.

There’s a reason many cooking competition shows feature a segment where contestants need to design their own recipe from a limited assortment of ingredients. Effective improvisation shows the judges that someone can actually cook, not just follow recipes.

We can draw a strong parallel from cooking to thinking. If you want to learn how to think for yourself, you can’t just follow what someone else came up with. You need to understand first principles if you want to be able to solve complex problems or think in a unique, creative fashion. First principles are the building blocks of knowledge, the foundational understanding acquired from breaking something down into its most essential concepts.

One person who exemplifies first principles thinking is Julia Child, an American educator who charmed audiences with her classes, books, and TV shows. First principles thinking enabled Julia to both master her own struggles with cooking and then teach the world to do the same. In Something from the Oven, Laura Shapiro tells the charming story of how Julia did it. Here’s what we can learn about better thinking from the “French Chef.”

***

Gustave Flaubert wrote that “talent is a long patience,” something which was all too true for Julia. She wasn’t born with an innate skill for or even love of cooking. Her starting point was falling in love with her future husband, Paul Child, in Ceylon in 1944 when both were working for the Office of Strategic Services. Paul adored food, and his delight in it inspired Julia. When they each returned to their separate homes after the war, she decided she would learn to cook. Things got off to a bad start, as Shapiro explains:

“At first she tried to teach herself at home, but it was frustrating to bushwhack her way through one dish after another. She never knew whether she would find success or failure when she opened the oven door, and worst of all, she didn’t know why this recipe worked and that one didn’t.”

Seeking expert guidance, Julia started taking cooking classes three times a week at a Beverly Hills cooking school. Even that didn’t help much, however, and after she married Paul a year later, her experiments in their Washington, DC kitchen continued to go awry. Only when the couple moved to Paris did an epiphany strike. Julia’s encounters with French cooking instilled in her an understanding of the need for first principles thinking. Trying to follow recipes without comprehending their logic wasn’t going to produce delicious results. She needed to learn how food actually worked.

In 1949, at the age of 37, she enrolled in classes at the famous Cordon Bleu school of cooking. It changed her forever:

“Learning to cook at the Cordon Bleu meant breaking down every dish into its smallest individual steps and doing each laborious and exhausting procedure by hand. In time Child could bone a duck while leaving the skin intact, extract the guts of a chicken through a hole she made in the neck, make a ham mousse by pounding the ham to a pulp with a mortar and pestle, and turn out a swath of elaborate dishes from choucroute garnie to vol-au-vent financière. None of this came effortlessly but she could do it. She had the brains, the considerable physical strength it demanded, and her vast determination. Most important, she could understand for the first time the principles governing how and why a recipe worked as it did.”

Julia had found her calling. After six months of Cordon Bleu classes, she continued studying independently for a year. She immersed herself in French cooking, filled her home with equipment, and befriended two women who shared her passion, Simone Beck and Louisette Bertholle. In the early 1950s, they opened a tiny school together, with a couple of students working out of Julia’s kitchen. She was “adamant that the recipes used in class be absolutely reliable, and she tested every one of them for what she called ‘scientific workability.’” By this, Julia meant that the recipes needed to make sense per her understanding of the science of cooking. If they didn’t agree with the first principles she knew, they were out.

***

When Paul transferred to Marseille, Julia was sad to leave her school. But she and her friends continued their collaboration, working at a distance on a French cookery book aimed at Americans. For what would become Mastering the Art of French Cooking, Julia focused on teaching first principles in a logical order, not copying down mere recipes.

She’d grown frustrated at opening recipe books to see instructions she knew couldn’t work because they contradicted the science of cooking—for example, recipes calling for temperatures she knew would burn a particular ingredient, or omitting key ingredients like baking soda, without which a particular effect would be impossible. It was clear no one had bothered to test anything before they wrote it down, and she was determined not to make the same mistake.

Mastering the Art of French Cooking came out in 1961. Shapiro writes, “The reviews were excellent, there was a gratifying burst of publicity all across the country, and the professional food world acknowledged a new star in Julia Child. What nobody knew for sure was whether everyday homemakers in the nation that invented the TV dinner would buy the book.” Though the book was far from a flop, it was the TV show it inspired that catapulted Julia and her approach to cooking to stardom.

The French Chef first aired in 1963 and was an enormous success from the start. Viewers adored how Julia explained why she did what she did and how it worked. They also loved her spontaneous capacity to adapt to unanticipated outcomes. It was usually only possible to shoot one take so Julia needed to keep going no matter what happened.

Her show appealed to every kind of person because it could make anyone a better cook—or at least help them understand the process better. Not only was Julia “a striking image of unaffected good nature,” the way she taught really worked. Viewers and readers who followed her guidance discovered a way of cooking that made them feel in control.

Julia “believed anybody could cook with distinction from scratch and that’s what she was out to prove.” Many of the people who watched The French Chef were women who needed a new way to think about cooking. As gender roles were being redefined and more women entered the workforce, it no longer seemed like something they were obligated by birth to do. At the same time, treating it as an undesirable chore was no more pleasant than treating it as a duty. Julia taught them another way. Cooking could be an intellectual, creative, enjoyable activity. Once you understood how it actually worked, you could learn from mistakes instead of repeating them again and again.

Shapiro explains that “Child was certainly not the first TV chef. The genre was almost as old as TV itself. But she was the first to make it her own and have an enduring societal impact.”

***

If you can master the first principles within a domain, you can see much further than those who are just following recipes. That’s what Julia managed to do, and it’s part of why she stood out from the other TV chefs of her time—and still stands out today. By mastering first principles, you can find better ways of doing things, instead of having to stick to conventions. If Julia thought a modern piece of equipment worked better than a traditional one or that part of a technique was a pointless custom, she didn’t hesitate to make changes as she saw fit. Once you know the why of something, it is easy to modify the how to achieve your desired result.

The lessons of first principles in cooking are the same for the first principles in any domain. Looking for first principles is just a way of thinking. It’s a commitment to understanding the foundation that something is built on and giving yourself the freedom to adapt, develop, and create. Once you know the first principles, you can keep learning more advanced concepts as well as innovating for yourself.

Learning Through Play

Play is an essential way of learning about the world. Doing things we enjoy without a goal in mind leads us to find new information, better understand our own capabilities, and find unexpected beauty around us. Arithmetic is one example of an area we can explore through play.

Every parent knows that children need space for unstructured play that helps them develop their creativity and problem-solving skills. Free-form experimentation leads to the rapid acquisition of information about the world. When children play together, they expand their social skills and strengthen the ability to regulate their emotions. Young animals, such as elephants, dogs, ravens, and crocodiles, also develop survival skills through play.

The benefits of play don’t disappear as soon as you become an adult. Even if we engage our curiosity in different ways as we grow up, a lot of learning and exploration still comes from analogous activities: things we do for the sheer fun of it.

When the pressure mounts to be productive every minute of the day, we have much to gain from doing all we can to carve out time to play. Take away prescriptions and obligations, and we gravitate towards whatever interests us the most. Just like children and baby elephants, we can learn important lessons through play. It can also give us a new perspective on topics we take for granted—such as the way we represent numbers.

***

Playing with symbols

The book Arithmetic, in addition to being a clear and engaging history of the subject, is a demonstration of how insights and understanding can be combined with enjoyment and fun. The best place to start the book is at the afterword, where author and mathematics professor Paul Lockhart writes, “I especially hope that I have managed to get across the idea of viewing your mind as a playground—a place to create beautiful things for your own pleasure and amusement and to marvel at what you’ve made and at what you have yet to understand.”

Arithmetic, the branch of math dealing with the manipulation and properties of numbers, can be very playful. After all, there are many ways to add and multiply numbers that in themselves can be represented in various ways. When we see six cows in a field, we represent that amount with the symbol 6. The Romans used VI. And there are many other ways that unfortunately can’t be typed on a standard English keyboard. If two more cows wander into the field, the usual method of counting them is to add 2 to 6 and conclude there are now 8 cows. But we could just as easily add 2 + 3 + 3. Or turn everything into fractions with a base of 2 and go from there.

One of the most intriguing parts of the book is when Lockhart encourages us to step away from how we commonly label numbers so we can have fun experimenting with them. He says, “The problem with familiarity is not so much that it breeds contempt, but that it breeds loss of perspective.” So we don’t get too hung up on our symbols such as 4 and 5, Lockhart shows us how any symbols can be used to complete some of the main arithmetic tasks such as comparing and grouping. He shows how completely random symbols can represent amounts and gives insight into how they can be manipulated.

When we start to play with the representations, we connect to the underlying reasoning behind what we are doing. We could be counting for the purposes of comparison, and we could also be interested in learning the patterns produced by our actions. Lockhart explains that “every number can be represented in a variety of ways, and we want to choose a form that is as useful and convenient as possible.” We can thus choose our representations of numbers based on curiosity versus what is conventional. It’s easy to extrapolate this thinking to broader life situations. How often do we assume certain parameters are fixed just because that is what has always been done? What else could we accomplish if we let go of convention and focused instead on function?

***

Stepping away from requirements

We all use the Hindu-Arabic number system, which utilizes groups of tens. Ten singles are ten, ten tens are a hundred, and so on. It has a consistent logic to it, and it is a pervasive way of grouping numbers as they increase. But Lockhart explains that grouping numbers by ten is as arbitrary as the symbols we use to represent numbers. He explains how a society might group by fours or sevens. One of the most interesting ideas, though, comes when he’s explaining the groupings:

“You might think there is no question about it; we chose four as our grouping size, so that’s that. Of course we will group our groups into fours—as opposed to what? Grouping things into fours and then grouping our groups into sixes? That would be insane! But it happens all the time. Inches are grouped into twelves to make feet, and then three feet make a yard. And the old British monetary system had twelve pence to the shilling and twenty shillings to the pound.”
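If you want to play with this yourself, here is a short Python sketch of our own (not Lockhart’s) that regroups the same quantity by tens, by fours, and by the mixed groupings of yards, feet, and inches and of old pounds, shillings, and pence.

    def to_base(n, base):
        """Rewrite a non-negative integer as digits under an arbitrary grouping size."""
        if n == 0:
            return [0]
        digits = []
        while n > 0:
            n, remainder = divmod(n, base)
            digits.append(remainder)
        return digits[::-1]

    def to_mixed_radix(n, group_sizes):
        """Group by a different size at each level, smallest unit first (e.g. inches, then feet)."""
        parts = []
        for size in group_sizes:
            n, remainder = divmod(n, size)
            parts.append(remainder)
        parts.append(n)
        return parts[::-1]                    # largest unit first

    print(to_base(100, 10))                   # [1, 0, 0]      the familiar "100"
    print(to_base(100, 4))                    # [1, 2, 1, 0]   the same quantity grouped by fours
    print(to_mixed_radix(100, [12, 3]))       # [2, 2, 4]      100 inches = 2 yards, 2 feet, 4 inches
    print(to_mixed_radix(500, [12, 20]))      # [2, 1, 8]      500 pence = 2 pounds, 1 shilling, 8 pence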

By reminding us of the options available in such a simple, everyday activity as counting, Lockhart opens a mental door. What other ways might we go about our tasks and solve our problems? It’s a reminder that most of our so-called requirements are ones that we impose on ourselves.

If we think back to being children, we often played with things in ways that were different from what they were intended for. Pots became drums and tape strung around the house became lasers. A byproduct of this type of play is usually learning—we learn what things are normally used for by playing with them. But that’s not the intention behind a child’s play. The fun comes first, and thus they don’t restrain themselves to convention.

***

Have fun with the unfamiliar

There are advantages and disadvantages to all counting systems. For Lockhart, the only way to discover what those are is to play around with them. And it is in the playing that we may learn more than arithmetic. For example, he says: “In fact, getting stuck (say on 7 + 8 for instance) is one of the best things that can happen to you because it gives you an opportunity to reinvent and to appreciate exactly what it is that you are doing.” In the case of adding two numbers, we “are rearranging numerical information for comparison purposes.”

The larger point is that getting stuck on anything can be incredibly useful. It forces you to stop and consider what it is you are really trying to achieve. Getting stuck can help you identify the first principles in your situation. In getting unstuck, we learn lessons that resonate and help us to grow.

Lockhart says of arithmetic that we need to “not let our familiarity with a particular system blind us to its arbitrariness.” We don’t have to use the symbol 2 to represent how many cows there are in a field, just as we don’t have to group sixty minutes into one hour. We may find those representations useful, but we also may not. There are some people in the world with so much money that the numbers that represent their wealth are almost nonsensical, and most people find the clock manipulation that is the annual flip to daylight saving time to be annoying and stressful.

Playing around with arithmetic can teach the broader lesson that we don’t have to keep using systems that no longer serve us well. Yet how many of us have a hard time letting go of the ineffective simply because it’s familiar?

Which brings us back to play. Play is often the exploration of the unfamiliar. After all, if you knew what the result would be, it likely wouldn’t be considered play. When we play we take chances, we experiment, and we try new combinations just to see what happens. We do all of this in the pursuit of fun because it is the novelty that brings us pleasure and makes play rewarding.

Lockhart makes a similar point about arithmetic:

“The point of studying arithmetic and its philosophy is not merely to get good at it but also to gain a larger perspective and to expand our worldview . . . Plus, it’s fun. Anyway, as connoisseurs of arithmetic, we should always be questioning and critiquing, examining and playing.”

***

We suggest that playing need not be confined to arithmetic. If you happen to enjoy playing with numbers, then go for it. Lockhart’s book gives great inspiration on how to have fun with numbers. Playing is inherently valuable and doesn’t need to be productive. Children and animals have no purpose for play; they merely do what’s fun. It just so happens that unstructured, undirected play often has incredibly powerful byproducts.

Play can lead to new ideas and innovations. It can also lead to personal growth and development, not to mention a better understanding of the world. And, by its definition, play leads to fun. Which is the best part. Arithmetic is just one example of an unexpected area we can approach with the spirit of play.