Category: Thinking

When Safety Proves Dangerous

Not everything we do with the aim of making ourselves safer has that effect. Sometimes, knowing there are measures in place to protect us from harm can lead us to take greater risks and cancel out the benefits. This is known as risk compensation. Understanding how it affects our behavior can help us make the best possible decisions in an uncertain world.

***

The world is full of risks. Every day we take endless chances, whether we’re crossing the road, standing next to someone with a cough on the train, investing in the stock market, or hopping on a flight.

From the moment we’re old enough to understand, people start teaching us crucial safety measures to remember: don’t touch that, wear this, stay away from that, don’t do this. And society is endlessly trying to mitigate the risks involved in daily life, from the ongoing efforts to improve car safety to signs reminding employees to wash their hands after using the toilet.

But the things we do to reduce risk don’t always make us safer. They can end up having the opposite effect. This is because we tend to change how we behave in response to our perceived safety level. When we feel safe, we take more risks. When we feel unsafe, we are more cautious.

Risk compensation means that efforts to protect ourselves can end up having a smaller effect than expected, no effect at all, or even a negative effect. Sometimes the danger is transferred to a different group of people, or a behavior change creates new risks. Knowing how we respond to risk can help us avoid transferring danger to other, more vulnerable individuals or groups.

Examples of Risk Compensation

There are many documented instances of risk compensation. One of the first comes from a 1975 paper by economist Sam Peltzman, entitled “The Effects of Automobile Safety Regulation.” Peltzman looked at the effects of new vehicle safety laws introduced several years earlier, finding that they led to no change in fatalities. While people in cars were less likely to die in accidents, pedestrians were at a higher risk. Why? Because drivers took more risks, knowing they were safer if they crashed.

Although Peltzman’s research has been both replicated and called into question over the years (there are many ways to interpret the same dataset), risk compensation is apparent in many other areas. As Andrew Zolli and Ann Marie Healy write in Resilience: Why Things Bounce Back, children who play sports involving protective gear (like helmets and knee pads) take more physical risks, and hikers who think they can be easily rescued are less cautious on the trails.

A study of taxi drivers in Munich, Germany, found that those driving vehicles with antilock brakes had more accidents than those without—unsurprising, considering they tended to accelerate faster and brake harder. Another study suggested that childproof lids on medicine bottles did not reduce poisoning rates: according to W. Kip Viscusi at Duke University, parents became more complacent with all medicines, including ones without the safer lids. And better ripcords on parachutes have led skydivers to pull them too late.

As defenses against natural disasters have improved, people have moved into riskier areas, and deaths from events like floods or hurricanes have not necessarily decreased. After helmets were introduced in American football, tackling fatalities actually increased for a few years, as players became more willing to lead with their heads (this changed with the adoption of new tackling standards). Bailouts and protective mechanisms for financial institutions may have contributed to the scale of the 2008 financial crisis, as they led banks to take greater and greater risks. There are numerous other examples.

We can easily see risk compensation play out in our lives and those of people around us. Someone takes up a healthy habit, like going to the gym, then compensates by drinking more. Having an emergency fund in place can encourage us to take greater financial risks. Wearing a face mask during a pandemic might mean you’re more willing to hang out in crowded places.

Risk Homeostasis

According to psychology professor Gerald Wilde, we all internally have a desired level of risk that varies depending on who we are and the context we are in. Our risk tolerance is like a thermostat—we take more risks if we feel too safe, and vice versa, in order to remain at our desired “temperature.” It all comes down to the costs and benefits we expect from taking on more or less risk.

The notion of risk homeostasis, although controversial, can help explain risk compensation. It suggests that measures meant to make people safer will inevitably lead to changes in behavior that maintain the amount of risk we’d like to experience, like driving faster while wearing a seatbelt. A feedback loop communicating our perceived risk helps us keep things as dangerous as we wish them to be. We calibrate our actions to how safe we’d like to feel, making adjustments whenever our perceived risk swings too far in one direction or the other.
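The thermostat analogy can be made concrete with a toy simulation. This is only an illustrative sketch of a proportional feedback loop; the numbers, the update rule, and the `simulate` function are assumptions for demonstration, not anything from Wilde’s research.

```python
# Toy model of risk homeostasis: behavior adjusts so that perceived
# risk tracks a fixed target, like a thermostat tracks a set point.
# All quantities and the update rule here are illustrative assumptions.

def simulate(target_risk=0.5, safety_measure=0.0, steps=50, rate=0.2):
    """Return perceived risk after behavior adapts to a safety measure.

    Perceived risk = risk-taking behavior minus the safety measure's
    effect. Each step, behavior shifts to close the gap between the
    target and perceived risk (simple proportional feedback).
    """
    behavior = target_risk  # initial level of risk-taking
    for _ in range(steps):
        perceived = behavior - safety_measure
        behavior += rate * (target_risk - perceived)
    return behavior - safety_measure

# With no safety measure, perceived risk sits at the target.
# Adding a safety measure pushes risk-taking up until perceived risk
# returns to the same target: the measure is compensated away.
print(round(simulate(safety_measure=0.0), 3))  # prints 0.5
print(round(simulate(safety_measure=0.3), 3))  # prints 0.5
```

In this sketch the safety measure changes behavior but not the final perceived risk, which is exactly the compensation effect described above.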

What We Can Learn from Risk Compensation

We can learn many lessons from risk compensation and the research that has been done on the subject. First, safety measures are more effective the less visible they are. If people don’t know about a risk reduction, they won’t change their behavior to compensate for it. When we want to make something safer, it’s best to ensure changes go as unnoticed as possible.

Second, an effective way to reduce risk-taking is to provide incentives for prudent behavior, giving people a reason to adjust their risk thermostat. Just because something seems to have become safer doesn’t mean the risk hasn’t simply transferred elsewhere, putting a different group of people in danger, as when seatbelt laws led to more pedestrian fatalities. Lower insurance premiums for careful drivers, for instance, might prevent more fatalities than stricter road safety laws, because they encourage drivers to make positive changes to their behavior instead of shifting the risk onto others.

Third, we are biased towards intervention. When we want to improve a situation, our first instinct tends to be to step in and change something, anything. Sometimes it is wiser to do less, or even nothing. Changing something does not always make people safer; sometimes it just changes the nature of the danger.

Fourth, when we make a safety change, we may need to implement corresponding rules to avoid risk compensation. Football helmets made the sport more dangerous at first, but new rules about tackling helped cancel out the behavior changes because the league was realistic about the need for more than just physical protection.

Finally, making people feel less safe can actually improve their behavior. Serious injuries in car crashes are rarer when the roads are icy, even if minor incidents are more common, because drivers take more care. If we want to improve safety, we can make risks more visible through better education.

Risk compensation certainly doesn’t mean it’s a bad idea to take steps to make ourselves safer, but it does illustrate the need to be aware of the unintended consequences that arise when we interact with complex systems. We can’t always expect to achieve the changes we desire the first time around. Once we make a change, we should pay careful attention to its effects on the whole system. Sometimes it will take testing a few alternate approaches to bring us closer to the desired effect.

Rethinking Fear

Fear is a state no one wants to embrace, yet for many of us it’s the background music to our lives. But by making friends with fear and understanding why it exists, we can become less vulnerable to harm—and less afraid. Read on to learn a better approach to fear.

***

In The Gift of Fear: Survival Signals That Protect Us From Violence, author Gavin de Becker argues that we all have an intuitive sense of when we are in danger. Drawing upon his experience as a high-stakes security specialist, he explains how we can protect ourselves by paying better attention to our gut feelings and not letting denial lead us to harm. Our intuition, honed by evolution and by a lifetime of experience, deserves more respect.

By telling us to value our intuition, de Becker isn’t telling anyone to live in fear permanently, always alert for possible risks. Quite the opposite. De Becker writes that we misunderstand the value of fear when we think that being constantly hypervigilant will keep us safe. Being afraid all the time doesn’t protect us from danger. Instead, he explains, by trusting that our gut feelings are accurate and learning key signals that portend risk, we can actually feel calmer and safer:

Far too many people are walking around in a constant state of vigilance, their intuition misinformed about what really poses danger. It needn’t be so. When you honor accurate intuitive signals and evaluate them without denial (believing that either the favorable or unfavorable outcome is possible), you need not be wary, for you will come to trust that you’ll be notified if there is something worthy of your attention. Fear will gain credibility because it won’t be applied wastefully.

When we walk around terrified all the time, we can’t pick out the signal from the noise. If you’re constantly scared, you can’t correctly notice when there is something genuine to fear. True fear is a momentary signal, not an ongoing state. De Becker writes that “if one feels fear of all people all the time, there is no signal reserved for the times when it’s really needed.”

What we fear the most is rarely what ends up happening. Fixating on particular dangers blinds us to others. We focus on checking the road for snakes and end up getting knocked over by a car. De Becker writes that what matters is that we’re receptive to fear, not that we’re watching out for what scares us the most (though of course, different things pose different risks to different people, and we should evaluate accordingly). After all, “we are far more open to signals when we don’t focus on the expectation of specific signals.”

Fear vs. anxiety

Fear is not the same as anxiety. Although people experiencing anxiety are often afraid of both the anxiety and what they presume to be its cause, these two states have different triggers. De Becker explains one of the key factors that differentiates the two:

Anxiety, unlike real fear, is always caused by uncertainty. It is caused, ultimately, by predictions in which you have little confidence. When you predict that you will be fired from your job and you are certain the prediction is correct, you don’t have anxiety about being fired. You might have anxiety about the things you can’t predict with certainty, such as the ramifications of losing the job. Predictions in which you have high confidence free you to respond, adjust, feel sadness, accept, prepare, or to do whatever is needed. Accordingly, anxiety is reduced by improving your prediction, thus increasing your certainty.

Understand that when we’re anxious, it’s because we’re uncertain. The solution, then, isn’t worrying more—it’s doing all we can either to find clarity or to accept that uncertainty is part of life.

Using fear

What can we learn from de Becker’s call to rethink fear? We learn that we’ll be in a better position if we can face possible threats with a calm mind, alert to our internal signals but not anticipating every possible bad thing that could happen. While being told to stop panicking never helped anyone, we benefit by understanding that being overwhelmed by fear will hurt us more. Our imaginary fears harm us more than reality ever does.

If this approach sounds familiar, it’s because it echoes ideas from Stoic philosophy. Much like de Becker, the Stoics urged us to be realistic about the fact that bad things can and will happen to us throughout our lives. No one can escape that. Once we’ve faced that reality, some of the shock goes away and we can think about how to prepare. After all, catastrophe and tragedy are part of the journey, not an unexpected detour. Being aware and accepting of the inevitable terrible things that will happen is actually a critical tool in mitigating both their severity and impact.

We don’t need to live in fear to stay safe. A better approach is to be aware of the risks we face, accept that some are unknown or unpredictable, and do all we can to be prepared for any serious or imminent dangers. Then we can focus our energy on maintaining a calm mind and trusting that our intuition will protect us.

“We are more often frightened than hurt; and we suffer more from imagination than from reality.”

— Seneca

 

The Stoics also taught us that we should view terrible events as survivable. It would do us well to give ourselves more credit—we’ve all survived occurrences that once seemed like the worst-case scenario, and we can survive many more.

Bad Arguments and How to Avoid Them

Productive arguments serve two purposes: to open our minds to truths we couldn’t see, and to help others do the same. Here’s how to avoid common pitfalls and argue like a master.

***

We’re often faced with situations in which we need to argue a point, whether we’re pitching an investor or competing for a contract. When being powerfully persuasive matters, it’s important that we don’t use bad arguments that prevent useful debate instead of furthering it. To do this, it’s useful to know some common ways people remove the possibility of a meaningful discussion. While it can be a challenge to keep our cool and not sink to using bad arguments when responding to a Twitter troll or during a heated confrontation over Thanksgiving dinner, we can benefit from knowing what to avoid when the stakes are high.

“If the defendant be a man of straw, who is to pay the costs?” 

— Charles Dickens

 

To start, let’s define three common types of bad arguments, or logical fallacies: “straw man,” “hollow man,” and “iron man.”

Straw man arguments

A straw man argument is a misrepresentation of an opinion or viewpoint, designed to be as easy as possible to refute. Just as a person made of straw would be easier to fight than a real human, a straw man argument is easy to knock to the ground. And just as it might look a bit like a real person from a distance, a straw man argument has the rough outline of the actual position; to an outside observer, it may even seem similar. But it lacks any substance or strength, because its sole purpose is to be easy to refute. A straw man isn’t simply an argument you happen to find inconvenient or challenging; it’s a misrepresentation, which means that even a valid rebuttal of it is irrelevant to the real discussion.

It’s important not to confuse a straw man argument with a simplified summary of a complex argument. When we’re having a debate, we may sometimes need to explain an opponent’s position back to them to ensure we understand it. Such an explanation will necessarily be a briefer version. But it is only a straw man if the simplification is used to make the argument easier to attack, rather than to facilitate clearer understanding.

There are a number of common tactics used to construct straw man arguments. One is per fas et nefas (which means “through right and wrong” in Latin) and involves refuting one of the reasons for an opponent’s argument, then claiming that discredits everything they’ve said. Often, this type of straw man argument will focus on an irrelevant or unimportant detail, selecting the weakest part of the argument. Even though they have no response to the rest of the discourse, they purport to have disproven it in its entirety. As Doug Walton, professor of philosophy at the University of Winnipeg, puts it, “The straw man tactic is essentially to take some small part of an arguer’s position and then treat it as if that represented his larger position, even though it is not really representative of that larger position. It is a form of generalizing from one aspect to a larger, broader position, but not in a representative way.”

Oversimplifying an argument makes it easier to attack by removing any important nuance. An example is the “peanut butter argument,” which states life cannot have evolved through natural selection because we do not see the spontaneous appearance of new life forms inside sealed peanut butter jars. The argument claims evolutionary theory asserts life emerged through a simple combination of matter and heat, both of which are present in a jar of peanut butter. It is a straw man because it uses an incorrect statement about evolution as being representative of the whole theory. The defender of evolution gets trapped into explaining a position they didn’t even have: why life doesn’t spontaneously develop inside a jar of peanut butter.

Another tactic is to exaggerate a line of reasoning to the point of absurdity, making it easier to refute. An example would be claiming that a politician who is not opposed to immigration must therefore favor open borders with no restrictions on who can enter the country. Since that is a weak view few people hold, the politician then feels obligated to defend border controls and risks losing control of the debate or being branded a hypocrite.

“The light obtained by setting straw men on fire is not what we mean by illumination.”

— Adam Gopnik

 

Straw man arguments sometimes respond with points that are only loosely relevant and don’t refute the actual claim—for example, responding to the point that wind turbines are a more environmentally friendly means of generating energy than fossil fuels by saying, “But wind turbines are ugly.” The objection has a loose connection to the topic, yet the way wind turbines look doesn’t discredit their benefits for power generation. A person who raises an objection like that is likely doing so because they have no rebuttal for the actual assertion.

Quoting an argument out of context is another straw man tactic. “Quote mining” is the practice of stripping out any part of a source that proves contradictory, often using ellipses to bridge the gaps. For instance, film posters and book blurbs will sometimes take quotes from bad reviews out of context to make them seem positive. So, “It’s amazing how bad this film is” becomes “Amazing,” and “The perfect book for people who wish to be bored to tears” becomes “The perfect book.” Reviewers face an uphill battle in trying not to write anything that could be quoted out of context in this manner.

Hollow man arguments

A hollow man argument is similar to a straw man. The difference is that it is a weak case attributed to a non-existent group: someone fabricates a viewpoint that is easy to refute, then claims it is held by a group they disagree with. Arguing against an opponent who doesn’t exist is a pretty easy way to win any debate. People who use hollow man arguments often favor vague, non-specific language, without citing sources or stating who their opponent actually is.

Hollow man arguments slip into debate because they’re a lazy way of making a strong point without risking anyone refuting you or needing to be accountable for the actual strength of a line of reasoning. In Why We Argue (And How We Should): A Guide to Political Disagreement, Scott F. Aikin and Robert B. Talisse write that “speakers commit the hollow man when they respond critically to arguments that nobody on the opposing side has ever made. The act of erecting a hollow man is an argumentative failure because it distracts attention away from the actual reasons and argument given by one’s opposition. . . . It is a full-bore fabrication of the opposition.”

An example of a hollow man argument would be the claim that animal rights activists want humans and non-human animals to have a perfectly equal legal standing, meaning that dogs would have to start wearing clothes to avoid being arrested for public indecency. This is a hollow man because no one has said that all laws applying to humans should also apply to dogs.

“The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum.”

— Noam Chomsky

 

Iron man arguments

An iron man argument is one constructed in such a way that it is resistant to attacks by a challenger. Iron man arguments are difficult to avoid because they have a lot of overlap with legitimate debate techniques. The distinction is whether the person using them is doing so to prevent opposition altogether or if they are open to changing their minds and listening to an opposer. Being proven wrong is painful, which is why we often unthinkingly resort to shielding ourselves from it using iron man arguments.

Someone using an iron man argument often makes their own stance so vague that nothing anyone says about it can weaken it. They’ll make liberal use of caveats, jargon, and imprecise terms. This means they can claim anyone who disagrees didn’t understand them, or they’ll rephrase their contention repeatedly. You could compare this to the language used in the average horoscope or in a fortune cookie. It’s so vague that it’s hard to disagree with or label it as incorrect because it can’t be incorrect. It’s like boxing with a wisp of steam.

An example would be a politician who answers a difficult question about their policies by saying, “I think it’s important that we take the best possible actions to benefit the people of this country. Our priority in this situation is to implement policies that have a positive impact on everyone in society.” They’ve answered the question, just without saying anything that anyone could disagree with.

Why bad arguments are harmful

What is the purpose of debate? Most of us, if asked, would say it’s about helping someone with an incorrect, harmful idea see the light. It’s an act of kindness. It’s about getting to the truth.

But the way we tend to engage in debate contradicts our supposed intentions.

Much of the time, we’re really debating because we want to prove we’re right and our opponent is wrong. Our interest is not in getting to the truth. We don’t even consider the possibility that our opponent might be correct or that we could learn something from them.

As decades of psychological research indicate, our brains are always looking to save energy, and one way they do so is by resisting changes of mind. It’s much easier to cling to our existing beliefs by whatever means possible and ignore anything that challenges them. Bad arguments let us engage in what looks like a debate without any risk of having to question what we stand for.

We debate for other reasons, too. Sometimes we’re out to entertain ourselves. Or we want to prove we’re smarter than someone else. Or we’re secretly addicted to the shot of adrenaline we get from picking a fight. And that’s what we’re doing—fighting, not arguing. In these cases, it’s no surprise that shoddy tactics like using straw man or hollow man arguments emerge.

It’s never fun to admit we’re wrong about anything or to have to change our minds. But it is essential if we want to get smarter and see the world as it is, not as we want it to be. Any time we engage in debate, we need to be honest about our intentions. What are we trying to achieve? Are we open to changing our minds? Are we listening to our opponent? Only when we’re out to have a balanced discussion with the possibility of changing our minds can a debate be productive, avoiding the use of logical fallacies.

Bad arguments are harmful to everyone involved in a debate. They don’t get us anywhere because we’re not tackling an opponent’s actual viewpoint. This means we have no hope of convincing them. Worse, this sort of underhand tactic is likely to make an opponent feel frustrated and annoyed by the deliberate misrepresentation of their beliefs. They’re forced to listen to a refutation of something they don’t even believe in the first place, which insults their intelligence. Feeling attacked like this only makes them hold on tighter to their actual belief. It may even make them less willing to engage in any sort of debate in the future.

And if you’re a chronic constructor of bad arguments, as many of us are, it leads people to avoid challenging you or starting discussions. Which means you don’t get to learn from them or have your views questioned. In formal situations, using bad arguments makes it look like you don’t really have a strong point in the first place.

How to avoid using bad arguments

If you want to have useful, productive debates, it’s vital to avoid using bad arguments.

The first thing we need to do to avoid constructing bad arguments is to accept it’s something we’re all susceptible to. It’s easy to look at a logical fallacy and think of all the people we know who use it. It’s much harder to recognize it in ourselves. We don’t always realize when the point we’re making isn’t that strong.

Bad arguments are almost unavoidable if we haven’t taken the time to research both sides of the debate. The map is not the territory—that is, our perception of an opinion is not the opinion itself. The most useful thing we can do is attempt to see the territory. That brings us to steel man arguments and the ideological Turing test.

Steel man arguments

The most powerful way to avoid using bad arguments, and to discourage their use by others, is to follow the principle of charity: argue against the strongest, most persuasive version of your opponent’s position. To do this, we suspend disbelief and set aside our own opinions long enough to understand where they’re coming from. We recognize the good sides of their case and engage with its strengths. Ask questions to clarify anything you don’t understand. Be curious about the other person’s perspective. You might not change their mind, but you will at least learn something and hopefully reduce conflict in the process.

“It is better to debate a question without settling it than to settle a question without debating it.”

— Joseph Joubert

 

In Intuition Pumps and Other Tools for Thinking, the philosopher Daniel Dennett offers some general guidelines for using the principle of charity, formulated by social psychologist and game theorist Anatol Rapoport:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
  3. You should mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

An argument that is the strongest version of an opponent’s viewpoint is known as a steel man. It’s purposefully constructed to be as difficult as possible to attack. The idea is that we can only say we’ve won a debate when we’ve fought with a steel man, not a straw one. Seeing as we’re biased towards tackling weaker versions of an argument, often without realizing it, this lets us err on the side of caution.

As challenging as this might be, it serves a bigger purpose. Steel man arguments help us understand a new perspective, however ludicrous it might seem in our eyes, so we’re better positioned to respond to it and to connect with its holder in the future. It shows a challenger we are empathetic and willing to listen, regardless of personal opinion. The point is to see the strengths, not the weaknesses. If we’re open-minded rather than combative, we can learn a lot.

“He who knows only his side of the case knows little of that.”

— John Stuart Mill

 

An exercise in steel manning, the ideological Turing test, proposes that we cannot say we understand an opponent’s position unless we would be able to argue in favor of it so well that an observer would not be able to tell which opinion we actually hold. In other words, we shouldn’t hold opinions we can’t argue against. The ideological Turing test is a great thought experiment to establish whether you understand where an opponent is coming from.

We don’t have time to do this for every single thing we disagree with, but when a debate is extremely important to us, the ideological Turing test is a helpful way to ensure we’re fully prepared for a high-stakes discussion.

How to handle other people using bad arguments

“You could not fence with an antagonist who met rapier thrust with blow of battle axe.”

— L.M. Montgomery

 

Let’s say you’re in the middle of a debate with someone with a different opinion than yours. You’re responding to the steel man version of their explanation, staying calm and measured. But what do you do if your opponent starts using bad arguments against you? What if they’re not listening to you?

The first thing you can do when someone uses a bad argument against you is the simplest: point it out. Explain what they’re doing and why it isn’t helpful. There’s not much point in merely telling them they’re using a straw man argument or some other logical fallacy; if they’re not familiar with the concept, it will just seem like alienating jargon. Nor is there much point in wielding it as a “gotcha!”, which will likewise foster tension. It’s best to define the concept, then reiterate your actual beliefs and how they differ from the bad argument being attacked.

Edward Damer writes in Attacking Faulty Reasoning, “It is not always possible to know whether an opponent has deliberately distorted your argument or has simply failed to understand or interpret it in the way that you intended. For this reason, it might be helpful to recapitulate the basic outline . . . or [ask] your opponent to summarize it for you.”

If this doesn’t work, you can continue to repeat your original point and make no attempt to defend the bad argument. Should your opponent prove unwilling to recognize their use of a bad argument (and you’re 100% certain that’s what they’re doing), it’s worth considering if there is any point in continuing the debate. The reality is that most of the debates we have are not rationally thought out; they’re emotionally driven. This is even more pertinent when we’re arguing with people we have a complex relationship with. Sometimes, it’s better to walk away.

Conclusion

The bad arguments discussed here are incredibly common logical fallacies in debates. We often use them without realizing it or experience them without recognizing it. But these types of debates are unproductive and unlikely to help anyone learn. If we want our arguments to create buy-in and not animosity, we need to avoid making bad ones.

Why We Focus on Trivial Things: The Bikeshed Effect

Bikeshedding is a metaphor to illustrate the strange tendency we have to spend excessive time on trivial matters, often glossing over important ones. Here’s why we do it, and how to stop.

***

How can we stop wasting time on unimportant details? From meetings at work that drag on forever without achieving anything to weeks-long email chains that don’t solve the problem at hand, we seem to spend an inordinate amount of time on the inconsequential. Then, when an important decision needs to be made, we hardly have any time to devote to it.

To answer this question, we first have to recognize why we get bogged down in the trivial. Then we must look at strategies for changing our dynamics towards generating both useful input and time to consider it.

The Law of Triviality

You’ve likely heard of Parkinson’s Law, which states that work expands to fill the time allocated to it. But you might not have heard of the lesser-known Parkinson’s Law of Triviality, also formulated by British naval historian and author Cyril Northcote Parkinson in the 1950s.

The Law of Triviality states that the amount of time spent discussing an issue in an organization is inversely correlated with its actual importance in the scheme of things. Major, complex issues get the least discussion, while simple, minor ones get the most.

Parkinson’s Law of Triviality is also known as “bike-shedding,” after the story Parkinson uses to illustrate it. He asks readers to imagine a financial committee meeting to discuss a three-point agenda. The points are as follows:

  1. A proposal for a £10 million nuclear power plant
  2. A proposal for a £350 bike shed
  3. A proposal for a £21 annual coffee budget

What happens? The committee ends up running through the nuclear power plant proposal in little time. It’s too advanced for anyone to really dig into the details, and most of the members don’t know much about the topic in the first place. One member who does is unsure how to explain it to the others. Another member proposes a redesign, but it seems like such a huge task that the rest of the committee declines to consider it.

The discussion soon moves to the bike shed. Here, the committee members feel much more comfortable voicing their opinions. They all know what a bike shed is and what it looks like. Several members begin an animated debate over the best possible material for the roof, weighing out options that might enable modest savings. They discuss the bike shed for far longer than the power plant.

At last, the committee moves on to item three: the coffee budget. Suddenly, everyone’s an expert. They all know about coffee and have a strong sense of its cost and value. Before anyone realizes what is happening, they spend longer discussing the £21 coffee budget than the power plant and the bike shed combined! In the end, the committee runs out of time and decides to meet again to complete their analysis. Everyone walks away feeling satisfied, having contributed to the conversation.

Why this happens

Bike-shedding happens because the simpler a topic is, the more people will have an opinion on it and thus more to say about it. When something is outside of our circle of competence, like a nuclear power plant, we don’t even try to articulate an opinion.

But when something is just about comprehensible to us, even if we don’t have anything of genuine value to add, we feel compelled to say something, lest we look stupid. What idiot doesn’t have anything to say about a bike shed? Everyone wants to show that they know about the topic at hand and have something to contribute.

With any issue, we shouldn’t be according equal importance to every opinion anyone adds. We should emphasize the inputs from those who have done the work to have an opinion. And when we decide to contribute, we should be putting our energy into the areas where we have something valuable to add that will improve the outcome of the decision.

Strategies for avoiding bike-shedding

The main thing you can do to avoid bike-shedding is for your meeting to have a clear purpose. In The Art of Gathering: How We Meet and Why It Matters, Priya Parker, who has decades of experience designing high-stakes gatherings, says that any successful gathering (including a business meeting) needs to have a focused and particular purpose. “Specificity,” she says, “is a crucial ingredient.”

Why is having a clear purpose so critical? Because you use it as the lens to filter all other decisions about your meeting, including who to have in the room.

With that in mind, we can see that it’s probably not a great idea to discuss building a nuclear power plant and a bike shed in the same meeting. There’s not enough specificity there.

The key is to recognize that not all the available input on an issue needs to be considered. The most informed opinions are the most relevant. This is one reason why big meetings with lots of people present, most of whom don’t need to be there, are such a waste of time in organizations. Everyone wants to participate, but not everyone has anything meaningful to contribute.

When it comes to choosing your list of invitees, Parker writes, “if the purpose of your meeting is to make a decision, you may want to consider having fewer cooks in the kitchen.” If you don’t want bike-shedding to occur, avoid inviting contributions from those who are unlikely to have relevant knowledge and experience. Getting the result you want—a thoughtful, educated discussion about that power plant—depends on having the right people in the room.

It also helps to have a designated individual in charge of making the final judgment. When we make decisions by committee with no one in charge, reaching a consensus can be almost impossible. The discussion drags on and on. The individual can decide in advance how much importance to accord to the issue (for instance, by estimating how much its success or failure could help or harm the company’s bottom line). They can set a time limit for the discussion to create urgency. And they can end the meeting by verifying that it has indeed achieved its purpose.

Any issue that invites a lot of discussion from different people might not be the most important one at hand. Avoid descending into unproductive triviality by having clear goals for your meeting and getting the best people to the table for a productive, constructive discussion.

Standing on the Shoulders of Giants

Innovation doesn’t occur in a vacuum. Doers and thinkers from Shakespeare to Jobs liberally “stole” inspiration from the doers and thinkers who came before. Here’s how to do it right.

***

“If I have seen further,” Isaac Newton wrote in a 1675 letter to fellow scientist Robert Hooke, “it is by standing on the shoulders of giants.”

It can be easy to look at great geniuses like Newton and imagine that their ideas and work came solely out of their minds, that they spun it from their own thoughts—that they were true originals. But that is rarely the case.

Innovative ideas have to come from somewhere. No matter how unique or unprecedented a work seems, dig a little deeper and you will always find that the creator stood on someone else’s shoulders. They mastered the best of what other people had already figured out, then made that expertise their own. With each iteration, they could see a little further, and they were content in the knowledge that future generations would, in turn, stand on their shoulders.

Standing on the shoulders of giants is a necessary part of creativity, innovation, and development. It doesn’t make what you do less valuable. Embrace it.

Everyone gets a lift up

Ironically, Newton’s turn of phrase wasn’t even entirely his own. The phrase can be traced back to the twelfth century, when the author John of Salisbury wrote that philosopher Bernard of Chartres compared people to dwarves perched on the shoulders of giants and said that “we see more and farther than our predecessors, not because we have keener vision or greater height, but because we are lifted up and borne aloft on their gigantic stature.”

Mary Shelley put it this way in the nineteenth century, in a preface for Frankenstein: “Invention, it must be humbly admitted, does not consist in creating out of void but out of chaos.”

There are giants in every field. Don’t be intimidated by them. They offer an exciting perspective. As the film director Jim Jarmusch advised, “Nothing is original. Steal from anywhere that resonates with inspiration or fuels your imagination. Devour old films, new films, music, books, paintings, photographs, poems, dreams, random conversations, architecture, bridges, street signs, trees, clouds, bodies of water, light, and shadows. Select only things to steal from that speak directly to your soul. If you do this, your work (and theft) will be authentic. Authenticity is invaluable; originality is non-existent. And don’t bother concealing your thievery—celebrate it if you feel like it. In any case, always remember what Jean-Luc Godard said: ‘It’s not where you take things from—it’s where you take them to.’”

That might sound demoralizing. Some might think, “My song, my book, my blog post, my startup, my app, my creation—surely they are original? Surely no one has done this before!” But that’s likely not the case. It’s also not a bad thing. Filmmaker Kirby Ferguson states in his TED Talk: “Admitting this to ourselves is not an embrace of mediocrity and derivativeness—it’s a liberation from our misconceptions, and it’s an incentive to not expect so much from ourselves and to simply begin.”

There lies the important fact. Standing on the shoulders of giants enables us to see further, not merely as far as before. When we build upon prior work, we often improve upon it and take humanity in new directions. However original your work seems to be, the influences are there—they might just be uncredited or not obvious. As we know from social proof, copying is a natural human tendency. It’s how we learn and figure out how to behave.

In Antifragile: Things That Gain from Disorder, Nassim Taleb describes the type of antifragile inventions and ideas that have lasted throughout history. He describes himself heading to a restaurant (the likes of which have been around for at least 2,500 years), in shoes similar to those worn at least 5,300 years ago, to use silverware designed by the Mesopotamians. During the evening, he drinks wine based on a 6,000-year-old recipe, from glasses invented 2,900 years ago, followed by cheese unchanged through the centuries. The dinner is prepared with one of our oldest tools, fire, and using utensils much like those the Romans developed.

Much about our societies and cultures has undeniably changed and continues to change at an ever-faster rate. But we continue to stand on the shoulders of those who came before in our everyday life, using their inventions and ideas, and sometimes building upon them.

Not invented here syndrome

When we discredit what came before or try to reinvent the wheel or refuse to learn from history, we hold ourselves back. After all, many of the best ideas are the oldest. “Not Invented Here Syndrome” is a term for situations when we avoid using ideas, products, or data created by someone else, preferring instead to develop our own (even if it is more expensive, time-consuming, and of lower quality).

The syndrome can also manifest as reluctance to outsource or delegate work. People might think their output is intrinsically better if they do it themselves, becoming overconfident in their own abilities. After all, who likes getting told what to do, even by someone who knows better? Who wouldn’t want to be known as the genius who (re)invented the wheel?

Developing a new solution for a problem is more exciting than using someone else’s ideas. But new solutions, in turn, create new problems. Some people joke that, for example, the largest Silicon Valley companies are in fact just impromptu incubators for people who will eventually set up their own business, firm in the belief that what they create themselves will be better.

The syndrome is also a case of the sunk cost fallacy. If a company has spent a lot of time and money getting a square wheel to work, they may be resistant to buying the round ones that someone else comes out with. The opportunity costs can be tremendous. Not Invented Here Syndrome detracts from an organization or individual’s core competency, and results in wasting time and talent on what are ultimately distractions. Better to use someone else’s idea and be a giant for someone else.

Why Steve Jobs stole his ideas

“Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it. They just saw something. It seemed obvious to them after a while; that’s because they were able to connect experiences they’ve had and synthesize new things.” 

— Steve Jobs

In The Runaway Species: How Human Creativity Remakes the World, Anthony Brandt and David Eagleman trace the path that led to the creation of the iPhone and track down the giants upon whose shoulders Steve Jobs perched. We often hail Jobs as a revolutionary figure who changed how we use technology. Few who were around in 2007 could have failed to notice the buzz created by the release of the iPhone. It seemed so new, a total departure from anything that had come before. The truth is a little messier.

The first touchscreen came about almost half a century before the iPhone, developed by E.A. Johnson for air traffic control. Other engineers built upon his work and developed usable models, filing a patent in 1975. Around the same time, the University of Illinois was developing touchscreen terminals for students. Prior to touchscreens, light pens used similar technology. The first commercial touchscreen computer came out in 1983, soon followed by graphics boards, tablets, watches, and video game consoles. Casio released a touchscreen pocket computer in 1987 (remember, this is still a full twenty years before the iPhone).

However, early touchscreen devices were frustrating to use, with very limited functionality, often short battery lives, and minimal use cases for the average person. As touchscreen devices developed in complexity and usability, they laid down the groundwork for the iPhone.

Likewise, the iPod built upon the work of Kane Kramer, who took inspiration from the Sony Walkman. Kramer designed a small portable music player in the 1970s. The IXI, as he called it, looked similar to the iPod but arrived too early for a market to exist, and Kramer lacked the marketing skills to create one. When pitching to investors, Kramer described the potential for immediate delivery, digital inventory, taped live performances, back catalog availability, and the promotion of new artists and microtransactions. Sound familiar?

Steve Jobs stood on the shoulders of the many unseen engineers, students, and scientists who worked for decades to build the technology he drew upon. Although Apple has a long history of merciless lawsuits against those they consider to have stolen their ideas, many were not truly their own in the first place. Brandt and Eagleman conclude that “human creativity does not emerge from a vacuum. We draw on our experience and the raw materials around us to refashion the world. Knowing where we’ve been, and where we are, points the way to the next big industries.”

How Shakespeare got his ideas

“Nothing will come of nothing.”

— William Shakespeare, King Lear

Most, if not all, of Shakespeare’s plays draw heavily upon prior works—so much so that some question whether he would have survived today’s copyright laws.

Hamlet took inspiration from Gesta Danorum, a twelfth-century work on Danish history by Saxo Grammaticus, consisting of sixteen Latin books. Although it is doubtful whether Shakespeare had access to the original text, scholars find the parallels undeniable and believe he may have read another play based on it, from which he drew inspiration. In particular, the account of the plight of Prince Amleth (whose name contains the same letters as Hamlet) involves similar events.

Holinshed’s Chronicles, a co-authored account of British history from the late sixteenth century, tells stories that mimic the plot of Macbeth, including the three witches. Holinshed’s Chronicles itself was a mélange of earlier texts, which transferred their biases and fabrications to Shakespeare. It also likely inspired King Lear.

Parts of Antony and Cleopatra are copied verbatim from Plutarch’s Life of Mark Antony. Arthur Brooke’s 1562 poem The Tragicall Historye of Romeus and Juliet was an undisguised template for Romeo and Juliet. Once again, there are more giants behind the scenes—Brooke copied a 1559 poem by Pierre Boaistuau, who in turn drew from a 1554 story by Matteo Bandello, who in turn drew inspiration from a 1530 work by Luigi da Porto. The list continues, with Plutarch, Chaucer, and the Bible acting as inspirations for many major literary, theatrical, and cultural works.

Yet what Shakespeare did with the works he sometimes copied, sometimes learned from, is remarkable. Take a look at any of the original texts and, despite the mimicry, you will find that they cannot compare to his plays. Many of the originals were dry, unengaging, and lacking any sort of poetic language. J.J. Munro wrote in 1908 that The Tragicall Historye of Romeus and Juliet “meanders on like a listless stream in a strange and impossible land; Shakespeare’s sweeps on like a broad and rushing river, singing and foaming, flashing in sunlight and darkening in cloud, carrying all things irresistibly to where it plunges over the precipice into a waste of waters below.”

Despite bordering on plagiarism at times, he overhauled the stories with an exceptional use of the English language, bringing drama and emotion to dreary chronicles or poems. He had a keen sense for the changes required to restructure plots, creating suspense and intensity in their stories. Shakespeare saw far further than those who wrote before him, and with their help, he ushered in a new era of the English language.

Of course, it’s not just Newton, Jobs, and Shakespeare who found a (sometimes willing, sometimes not) shoulder to stand upon. Facebook is presumed to have built upon Friendster. Cormac McCarthy’s books often replicate older history texts, with one character coming straight from Samuel Chamberlain’s My Confession. John Lennon borrowed from diverse musicians, once writing in a letter to the New York Times that though the Beatles copied black musicians, “it wasn’t a rip off. It was a love in.”

In The Ecstasy of Influence, Jonathan Lethem points to many other instances of influences in classic works. In 1916, journalist Heinz von Lichberg published a story of a man who falls in love with his landlady’s daughter and begins a love affair, culminating in her death and his lasting loneliness. The title? Lolita. It’s hard to imagine Nabokov hadn’t read it, but aside from the plot and the name, the style of language in his version is nowhere to be found in the original.

The list continues. The point is not to be flippant about plagiarism but to cultivate sensitivity to the elements of value in a previous work, as well as the ability to build upon those elements. If we restrict the flow of ideas, everyone loses out.

The adjacent possible

What’s this about? Why can’t people come up with their own ideas? Why do so many people come up with a brilliant idea but never profit from it? The answer lies in what scientist Stuart Kauffman calls “the adjacent possible.” Quite simply, each new innovation or idea opens up the possibility of additional innovations and ideas. At any time, there are limits to what is possible, yet those limits are constantly expanding.

In Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson compares this process to being in a house where opening a door creates new rooms. Each time we open the door to a new room, new doors appear and the house grows. Johnson compares it to the formation of life, beginning with basic fatty acids. The first fatty acids to form were not capable of turning into living creatures. When they self-organized into spheres, the groundwork formed for cell membranes, and a new door opened to genetic codes, chloroplasts, and mitochondria. When dinosaurs evolved a new bone that meant they had more manual dexterity, they opened a new door to flight. When our distant ancestors evolved opposable thumbs, dozens of new doors opened to the use of tools, writing, and warfare. According to Johnson, the history of innovation has been about exploring new wings of the adjacent possible and expanding what we are capable of.

A new idea—like those of Newton, Jobs, and Shakespeare—is only possible because a previous giant opened a new door and made their work possible. They in turn opened new doors and expanded the realm of possibility. Technology, art, and other advances are only possible if someone else has laid the groundwork; nothing comes from nothing. Shakespeare could write his plays because other people had developed the structures and language that formed his tools. Newton could advance science because of the preliminary discoveries that others had made. Jobs built Apple out of the debris of many prior devices and technological advances.

The questions we all have to ask ourselves are these: What new doors can I open, based on the work of the giants that came before me? What opportunities can I spot that they couldn’t? Where can I take the adjacent possible? If you think all the good ideas have already been found, you are very wrong. Other people’s good ideas open new possibilities, rather than restricting them.

As time passes, the giants just keep getting taller and more willing to let us hop onto their shoulders. Their expertise is out there in books and blog posts, open-source software and TED talks, podcast interviews, and academic papers. Whatever we are trying to do, we have the option to find a suitable giant and see what can be learned from them. In the process, knowledge compounds, and everyone gets to see further as we open new doors to the adjacent possible.

Unlikely Optimism: The Conjunctive Events Bias

When certain events need to take place to achieve a desired outcome, we’re overly optimistic that those events will happen. Here’s why we should temper those expectations.

***

Why are we so optimistic in our estimation of the cost and schedule of a project? Why are we so surprised when something inevitably goes wrong? If we want to get better at executing our plans successfully, we need to be aware of how the conjunctive events bias can throw us way off track.

We often overestimate the likelihood of conjunctive events—occurrences that must happen in conjunction with one another. The probability of a series of conjunctive events happening is lower than the probability of any individual event. This is often very hard for us to wrap our heads around. But if we don’t try, we risk seriously underestimating the time, money, and effort required to achieve our goals.

The Most Famous Bank Teller

In Thinking, Fast and Slow, Daniel Kahneman gives a now-classic example of the conjunctive events bias. Students at several major universities received a description of a woman. They were told that Linda is 31, single, intelligent, a philosophy major, and concerned with social justice. Students were then asked to estimate which of the following statements is more likely true:

  • Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

The majority of students (85% to 95%) chose the latter statement, seeing the conjunctive events (that she is both a bank teller and a feminist activist) as more probable. Two events together seemed more likely than one event. It’s perfectly possible that Linda is a feminist bank teller. It’s just not more probable for her to be a feminist bank teller than it is for her to be a bank teller. After all, the first statement does not exclude the possibility of her being a feminist; it just does not mention it.

The logic underlying the Linda example can be summed up as follows: The extension rule in probability theory states that if B is a subset of A, B cannot be more probable than A. Likewise, the probability of A and B cannot be higher than the probability of A or B. Broader categories are always more probable than their subsets. It’s more likely a randomly selected person is a parent than it is that they are a father. It’s more likely someone has a pet than they have a cat. It’s more likely someone likes coffee than they like cappuccinos. And so on.
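
The extension rule can be checked with simple arithmetic. Here is a minimal sketch in Python; the probabilities assigned to Linda are invented purely for illustration:

```python
# Extension rule: P(A and B) can never exceed P(A).
# The numbers below are hypothetical, chosen only to illustrate the point.
p_bank_teller = 0.05           # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.6  # assumed P(feminist | bank teller), deliberately generous

# The probability of the conjunction is the product, so it is always
# less than or equal to the probability of either event alone.
p_both = p_bank_teller * p_feminist_given_teller

print(p_bank_teller)        # 0.05
print(round(p_both, 3))     # 0.03 -- smaller, however likely "feminist" seems
```

No matter how high we push the conditional probability of “feminist,” the conjunction can at best equal, never exceed, the single-event probability.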

It’s not that we always think conjunctive events are more likely. If the second option in the Linda problem were “Linda is a bank teller and likes to ski,” maybe we’d all pick just the bank teller option, because we don’t have any information that makes the addition a good choice. The point here is that given what we know about Linda, we think it’s likely she’s a feminist. Therefore, we are willing to add almost anything to the Linda package if it appears with “feminist.” This willingness to create a narrative with pieces that clearly don’t fit is the real danger of the conjunctive events bias.

“Plans are useless, but planning is indispensable.” 

— Dwight D. Eisenhower

Why the best laid plans often fail

The conjunctive events bias makes us underestimate the effort required to accomplish complex plans. Most plans don’t work out. Things almost always take longer than expected. There are always delays due to dependencies. As Max Bazerman and Don Moore explain in Judgment in Managerial Decision Making, “The overestimation of conjunctive events offers a powerful explanation for the problems that typically occur with projects that require multistage planning. Individuals, businesses, and governments frequently fall victim to the conjunctive events bias in terms of timing and budgets. Home remodeling, new product ventures, and public works projects seldom finish on time.”

Plans don’t work because completing a sequence of tasks requires a great deal of cooperation from multiple events. As a system becomes increasingly complex, the chance of failure increases. A plan can be thought of as a system, so a change in one component will very likely affect the functionality of other parts. The more components you have, the more chances that something will go wrong in one of them, causing delays, setbacks, and failures in the rest of the system. Even if the chance of an individual component failing is slight, a large number of components will increase the overall probability of failure.
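
This compounding of failure chances is easy to quantify. A minimal sketch, assuming (purely for illustration) that each step succeeds independently 95% of the time:

```python
# If a plan has n independent steps and each succeeds with probability p,
# the whole plan succeeds with probability p ** n.
p_step = 0.95  # hypothetical per-step success rate

for n in (1, 5, 10, 20, 50):
    print(n, round(p_step ** n, 2))
# Even with 95%-reliable steps, a 50-step plan succeeds
# less than one time in ten.
```

Under these assumptions, a five-step plan still succeeds about three times out of four, but by twenty steps the odds are already worse than a coin flip.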

Imagine you’re building a house. Things start off well. The existing structure comes down on schedule. Construction continues and the framing goes up, and you are excited to see the progress. The contractor reassures you that all trades and materials are lined up and ready to go. What is more likely:

  • The building permits get delayed
  • The building permits get delayed and the electrical goes in on schedule

You know a bit about the electrical schedule. You know nothing about the permits. But you bucket them together optimistically, erroneously linking one with the other. So you don’t worry about the building permits and never imagine that their delay will impact the electrical work. When the permits do get delayed, you have to pay the electrician for the week he can’t work, and then wait for him to finish another job before he can resume yours.

Thus, the more steps involved in a plan, the greater the chance of failure, as we assign probabilities to events that aren’t related at all. That is especially true as more people get involved, bringing their individual biases and misconceptions of chance.

In Seeking Wisdom: From Darwin to Munger, Peter Bevelin writes:

A project is composed of a series of steps where all must be achieved for success. Each individual step has some probability of failure. We often underestimate the large number of things that may happen in the future or all opportunities for failure that may cause a project to go wrong. Humans make mistakes, equipment fails, technologies don’t work as planned, unrealistic expectations, biases including sunk cost-syndrome, inexperience, wrong incentives, changing requirements, random events, ignoring early warning signals are reasons for delays, cost overruns, and mistakes. Often we focus too much on the specific base project case and ignore what normally happens in similar situations (base rate frequency of outcomes—personal and others). Why should some project be any different from the long-term record of similar ones? George Bernard Shaw said: “We learn from history that man can never learn anything from history.”

The more independent steps that are involved in achieving a scenario, the more opportunities for failure and the less likely it is that the scenario will happen. We often underestimate the number of steps, people, and decisions involved.

We can’t pretend that knowing about the conjunctive events bias will automatically stop us from falling for it. When we are planning something whose success matters to us, however, it’s useful to run through our assumptions with this bias in mind. Sometimes, assigning frequencies instead of probabilities can also show us where our assumptions might be leading us astray. In the housing example above, asking how often building permits are delayed in every hundred houses, versus how often permits are delayed and the electrical goes in on time for the same hundred, makes it much easier to see that the first option is the more frequent one.
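
The frequency framing can be sketched numerically. The delay and on-time rates below are hypothetical, chosen only to show the shape of the comparison:

```python
# Frequency framing of the housing example; all rates are hypothetical.
houses = 100
p_permit_delay = 0.30        # assumed: permits delayed in 30% of projects
p_electrical_on_time = 0.80  # assumed: electrical on schedule 80% of the time

delayed = round(houses * p_permit_delay)                     # option 1
delayed_and_on_time = round(delayed * p_electrical_on_time)  # option 2, a subset of option 1

print(delayed)               # 30 of 100 houses
print(delayed_and_on_time)   # 24 of 100 houses -- necessarily fewer
```

Counting houses rather than multiplying abstract probabilities makes the subset relationship concrete: the conjunction can only ever describe some of the houses already counted in option one.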

It is also extremely useful to keep a decision journal for our major decisions, so that we can be more realistic in our estimates of the time and resources we need for future plans. The more realistic we are, the higher our chances of accomplishing what we set out to do.

The conjunctive events bias teaches us to be more pessimistic about plans and to consider the worst-case scenario, not just the best. We may assume things will always run smoothly, but disruption is the rule rather than the exception.