Category: Thinking

Why We Focus on Trivial Things: The Bikeshed Effect

Bikeshedding is a metaphor for our strange tendency to spend excessive time on trivial matters while glossing over important ones. Here’s why we do it, and how to stop.

***

How can we stop wasting time on unimportant details? From meetings at work that drag on forever without achieving anything to weeks-long email chains that don’t solve the problem at hand, we seem to spend an inordinate amount of time on the inconsequential. Then, when an important decision needs to be made, we hardly have any time to devote to it.

To answer this question, we first have to recognize why we get bogged down in the trivial. Then we must look at strategies for changing the dynamics so that we generate both useful input and the time to consider it.

The Law of Triviality

You’ve likely heard of Parkinson’s Law, which states that tasks expand to fill the amount of time allocated to them. But you might not have heard of the lesser-known Parkinson’s Law of Triviality, also coined by British naval historian and author Cyril Northcote Parkinson in the 1950s.

The Law of Triviality states that the amount of time spent discussing an issue in an organization is inversely correlated to its actual importance in the scheme of things. Major, complex issues get the least discussion while simple, minor ones get the most discussion.

Parkinson’s Law of Triviality is also known as “bike-shedding,” after the story Parkinson uses to illustrate it. He asks readers to imagine a financial committee meeting to discuss a three-point agenda. The points are as follows:

  1. A proposal for a £10 million nuclear power plant
  2. A proposal for a £350 bike shed
  3. A proposal for a £21 annual coffee budget

What happens? The committee ends up running through the nuclear power plant proposal in little time. It’s too advanced for anyone to really dig into the details, and most of the members don’t know much about the topic in the first place. One member who does is unsure how to explain it to the others. Another member proposes a redesign, but it seems like such a huge task that the rest of the committee declines to consider it.

The discussion soon moves to the bike shed. Here, the committee members feel much more comfortable voicing their opinions. They all know what a bike shed is and what it looks like. Several members begin an animated debate over the best possible material for the roof, weighing out options that might enable modest savings. They discuss the bike shed for far longer than the power plant.

At last, the committee moves on to item three: the coffee budget. Suddenly, everyone’s an expert. They all know about coffee and have a strong sense of its cost and value. Before anyone realizes what is happening, they spend longer discussing the £21 coffee budget than the power plant and the bike shed combined! In the end, the committee runs out of time and decides to meet again to complete their analysis. Everyone walks away feeling satisfied, having contributed to the conversation.

Why this happens

Bike-shedding happens because the simpler a topic is, the more people will have an opinion on it and thus more to say about it. When something is outside of our circle of competence, like a nuclear power plant, we don’t even try to articulate an opinion.

But when something is just about comprehensible to us, even if we don’t have anything of genuine value to add, we feel compelled to say something, lest we look stupid. What idiot doesn’t have anything to say about a bike shed? Everyone wants to show that they know about the topic at hand and have something to contribute.

With any issue, we shouldn’t be according equal importance to every opinion anyone adds. We should emphasize the inputs from those who have done the work to have an opinion. And when we decide to contribute, we should be putting our energy into the areas where we have something valuable to add that will improve the outcome of the decision.

Strategies for avoiding bike-shedding

The main thing you can do to avoid bike-shedding is for your meeting to have a clear purpose. In The Art of Gathering: How We Meet and Why It Matters, Priya Parker, who has decades of experience designing high-stakes gatherings, says that any successful gathering (including a business meeting) needs to have a focused and particular purpose. “Specificity,” she says, “is a crucial ingredient.”

Why is having a clear purpose so critical? Because you use it as the lens to filter all other decisions about your meeting, including who to have in the room.

With that in mind, we can see that it’s probably not a great idea to discuss building a nuclear power plant and a bike shed in the same meeting. There’s not enough specificity there.

The key is to recognize that not all of the available input on an issue needs considering; the most informed opinions are the most relevant. This is one reason why big meetings with lots of people present, most of whom don’t need to be there, are such a waste of time in organizations. Everyone wants to participate, but not everyone has anything meaningful to contribute.

When it comes to choosing your list of invitees, Parker writes, “if the purpose of your meeting is to make a decision, you may want to consider having fewer cooks in the kitchen.” If you don’t want bike-shedding to occur, avoid inviting contributions from those who are unlikely to have relevant knowledge and experience. Getting the result you want—a thoughtful, educated discussion about that power plant—depends on having the right people in the room.

It also helps to have a designated individual in charge of making the final judgment. When we make decisions by committee with no one in charge, reaching a consensus can be almost impossible. The discussion drags on and on. The individual can decide in advance how much importance to accord to the issue (for instance, by estimating how much its success or failure could help or harm the company’s bottom line). They can set a time limit for the discussion to create urgency. And they can end the meeting by verifying that it has indeed achieved its purpose.

Any issue that invites a lot of discussion from different people might not be the most important one at hand. Avoid descending into unproductive triviality by setting clear goals for your meeting and getting the best people to the table for a productive, constructive discussion.

Standing on the Shoulders of Giants

Innovation doesn’t occur in a vacuum. Doers and thinkers from Shakespeare to Jobs liberally “stole” inspiration from the doers and thinkers who came before. Here’s how to do it right.

***

“If I have seen further,” Isaac Newton wrote in a 1675 letter to fellow scientist Robert Hooke, “it is by standing on the shoulders of giants.”

It can be easy to look at great geniuses like Newton and imagine that their ideas and work came solely out of their minds, that they spun them from their own thoughts—that they were true originals. But that is rarely the case.

Innovative ideas have to come from somewhere. No matter how unique or unprecedented a work seems, dig a little deeper and you will always find that the creator stood on someone else’s shoulders. They mastered the best of what other people had already figured out, then made that expertise their own. With each iteration, they could see a little further, and they were content in the knowledge that future generations would, in turn, stand on their shoulders.

Standing on the shoulders of giants is a necessary part of creativity, innovation, and development. It doesn’t make what you do less valuable. Embrace it.

Everyone gets a lift up

Ironically, Newton’s turn of phrase wasn’t even entirely his own. The phrase can be traced back to the twelfth century, when the author John of Salisbury wrote that philosopher Bernard of Chartres compared people to dwarves perched on the shoulders of giants and said that “we see more and farther than our predecessors, not because we have keener vision or greater height, but because we are lifted up and borne aloft on their gigantic stature.”

Mary Shelley put it this way in the nineteenth century, in a preface for Frankenstein: “Invention, it must be humbly admitted, does not consist in creating out of void but out of chaos.”

There are giants in every field. Don’t be intimidated by them. They offer an exciting perspective. As the film director Jim Jarmusch advised, “Nothing is original. Steal from anywhere that resonates with inspiration or fuels your imagination. Devour old films, new films, music, books, paintings, photographs, poems, dreams, random conversations, architecture, bridges, street signs, trees, clouds, bodies of water, light, and shadows. Select only things to steal from that speak directly to your soul. If you do this, your work (and theft) will be authentic. Authenticity is invaluable; originality is non-existent. And don’t bother concealing your thievery—celebrate it if you feel like it. In any case, always remember what Jean-Luc Godard said: ‘It’s not where you take things from—it’s where you take them to.’”

That might sound demoralizing. Some might think, “My song, my book, my blog post, my startup, my app, my creation—surely they are original? Surely no one has done this before!” But that’s likely not the case. It’s also not a bad thing. Filmmaker Kirby Ferguson states in his TED Talk: “Admitting this to ourselves is not an embrace of mediocrity and derivativeness—it’s a liberation from our misconceptions, and it’s an incentive to not expect so much from ourselves and to simply begin.”

Therein lies the important fact. Standing on the shoulders of giants enables us to see further, not merely as far as before. When we build upon prior work, we often improve upon it and take humanity in new directions. However original your work seems to be, the influences are there—they might just be uncredited or not obvious. As we know from social proof, copying is a natural human tendency. It’s how we learn and figure out how to behave.

In Antifragile: Things That Gain from Disorder, Nassim Taleb describes the type of antifragile inventions and ideas that have lasted throughout history. He describes himself heading to a restaurant (the likes of which have been around for at least 2,500 years), in shoes similar to those worn at least 5,300 years ago, to use silverware designed by the Mesopotamians. During the evening, he drinks wine based on a 6,000-year-old recipe, from glasses invented 2,900 years ago, followed by cheese unchanged through the centuries. The dinner is prepared with one of our oldest tools, fire, and using utensils much like those the Romans developed.

Much about our societies and cultures has undeniably changed and continues to change at an ever-faster rate. But we continue to stand on the shoulders of those who came before in our everyday life, using their inventions and ideas, and sometimes building upon them.

Not invented here syndrome

When we discredit what came before, try to reinvent the wheel, or refuse to learn from history, we hold ourselves back. After all, many of the best ideas are the oldest. “Not Invented Here Syndrome” is a term for situations in which we avoid using ideas, products, or data created by someone else, preferring instead to develop our own (even if ours is more expensive, time-consuming, and of lower quality).

The syndrome can also manifest as reluctance to outsource or delegate work. People might think their output is intrinsically better if they do it themselves, becoming overconfident in their own abilities. After all, who likes getting told what to do, even by someone who knows better? Who wouldn’t want to be known as the genius who (re)invented the wheel?

Developing a new solution for a problem is more exciting than using someone else’s ideas. But new solutions, in turn, create new problems. Some people joke that, for example, the largest Silicon Valley companies are in fact just impromptu incubators for people who will eventually set up their own business, firm in the belief that what they create themselves will be better.

The syndrome is also a case of the sunk cost fallacy. If a company has spent a lot of time and money getting a square wheel to work, it may be resistant to buying the round ones someone else comes out with. The opportunity costs can be tremendous. Not Invented Here Syndrome detracts from an organization’s or individual’s core competency and results in wasting time and talent on what are ultimately distractions. Better to use someone else’s idea and, in turn, be a giant for someone else.

Why Steve Jobs stole his ideas

“Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it. They just saw something. It seemed obvious to them after a while; that’s because they were able to connect experiences they’ve had and synthesize new things.” 

— Steve Jobs

In The Runaway Species: How Human Creativity Remakes the World, Anthony Brandt and David Eagleman trace the path that led to the creation of the iPhone and track down the giants upon whose shoulders Steve Jobs perched. We often hail Jobs as a revolutionary figure who changed how we use technology. Few who were around in 2007 could have failed to notice the buzz created by the release of the iPhone. It seemed so new, a total departure from anything that had come before. The truth is a little messier.

The first touchscreen came about almost half a century before the iPhone, developed by E.A. Johnson for air traffic control. Other engineers built upon his work and developed usable models, filing a patent in 1975. Around the same time, the University of Illinois was developing touchscreen terminals for students. Prior to touchscreens, light pens used similar technology. The first commercial touchscreen computer came out in 1983, soon followed by graphics boards, tablets, watches, and video game consoles. Casio released a touchscreen pocket computer in 1987 (remember, this is still a full twenty years before the iPhone).

However, early touchscreen devices were frustrating to use, with very limited functionality, often short battery life, and minimal use cases for the average person. As touchscreen devices developed in complexity and usability, they laid the groundwork for the iPhone.

Likewise, the iPod built upon the work of Kane Kramer, who took inspiration from the Sony Walkman. Kramer designed a small portable music player in the 1970s. The IXI, as he called it, looked similar to the iPod but arrived too early for a market to exist, and Kramer lacked the marketing skills to create one. When pitching to investors, Kramer described the potential for immediate delivery, digital inventory, taped live performances, back catalog availability, and the promotion of new artists and microtransactions. Sound familiar?

Steve Jobs stood on the shoulders of the many unseen engineers, students, and scientists who worked for decades to build the technology he drew upon. Although Apple has a long history of merciless lawsuits against those they consider to have stolen their ideas, many were not truly their own in the first place. Brandt and Eagleman conclude that “human creativity does not emerge from a vacuum. We draw on our experience and the raw materials around us to refashion the world. Knowing where we’ve been, and where we are, points the way to the next big industries.”

How Shakespeare got his ideas

“Nothing will come of nothing.”

— William Shakespeare, King Lear

Most, if not all, of Shakespeare’s plays draw heavily upon prior works—so much so that some question whether he would have survived today’s copyright laws.

Hamlet took inspiration from Gesta Danorum, a twelfth-century work on Danish history by Saxo Grammaticus, consisting of sixteen Latin books. Although it is doubtful whether Shakespeare had access to the original text, scholars find the parallels undeniable and believe he may have read another play based on it, from which he drew inspiration. In particular, the account of the plight of Prince Amleth (a name containing the same letters as Hamlet) involves similar events.

Holinshed’s Chronicles, a co-authored account of British history from the late sixteenth century, tells stories that mimic the plot of Macbeth, including the three witches. Holinshed’s Chronicles itself was a mélange of earlier texts, which transferred their biases and fabrications to Shakespeare. It also likely inspired King Lear.

Parts of Antony and Cleopatra are copied verbatim from Plutarch’s Life of Mark Antony. Arthur Brooke’s 1562 poem The Tragicall Historye of Romeus and Juliet was an undisguised template for Romeo and Juliet. Once again, there are more giants behind the scenes—Brooke copied a 1559 poem by Pierre Boaistuau, who in turn drew from a 1554 story by Matteo Bandello, who in turn drew inspiration from a 1530 work by Luigi da Porto. The list continues, with Plutarch, Chaucer, and the Bible acting as inspirations for many major literary, theatrical, and cultural works.

Yet what Shakespeare did with the works he sometimes copied, sometimes learned from, is remarkable. Take a look at any of the original texts and, despite the mimicry, you will find that they cannot compare to his plays. Many of the originals were dry, unengaging, and lacking any sort of poetic language. J.J. Munro wrote in 1908 that The Tragicall Historye of Romeus and Juliet “meanders on like a listless stream in a strange and impossible land; Shakespeare’s sweeps on like a broad and rushing river, singing and foaming, flashing in sunlight and darkening in cloud, carrying all things irresistibly to where it plunges over the precipice into a waste of waters below.”

Despite bordering on plagiarism at times, he overhauled the stories with an exceptional use of the English language, bringing drama and emotion to dreary chronicles or poems. He had a keen sense for the changes required to restructure plots, creating suspense and intensity in their stories. Shakespeare saw far further than those who wrote before him, and with their help, he ushered in a new era of the English language.

Of course, it’s not just Newton, Jobs, and Shakespeare who found a (sometimes willing, sometimes not) shoulder to stand upon. Facebook is presumed to have built upon Friendster. Cormac McCarthy’s books often replicate older history texts, with one character coming straight from Samuel Chamberlain’s My Confession. John Lennon borrowed from diverse musicians, once writing in a letter to the New York Times that though the Beatles copied black musicians, “it wasn’t a rip off. It was a love in.”

In The Ecstasy of Influence, Jonathan Lethem points to many other instances of influence in classic works. In 1916, journalist Heinz von Lichberg published a story of a man who falls in love with his landlady’s daughter and begins a love affair, culminating in her death and his lasting loneliness. The title? Lolita. It’s hard to imagine Nabokov hadn’t read it, yet aside from the plot and the name, the style of language in his version is entirely absent from the original.

The list continues. The point is not to be flippant about plagiarism but to cultivate sensitivity to the elements of value in a previous work, as well as the ability to build upon those elements. If we restrict the flow of ideas, everyone loses out.

The adjacent possible

What’s this about? Why can’t people come up with their own ideas? Why do so many people come up with a brilliant idea but never profit from it? The answer lies in what scientist Stuart Kauffman calls “the adjacent possible.” Quite simply, each new innovation or idea opens up the possibility of additional innovations and ideas. At any time, there are limits to what is possible, yet those limits are constantly expanding.

In Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson compares this process to being in a house where opening a door creates new rooms. Each time we open the door to a new room, new doors appear and the house grows. Johnson compares it to the formation of life, beginning with basic fatty acids. The first fatty acids to form were not capable of turning into living creatures. When they self-organized into spheres, the groundwork formed for cell membranes, and a new door opened to genetic codes, chloroplasts, and mitochondria. When dinosaurs evolved a new bone that meant they had more manual dexterity, they opened a new door to flight. When our distant ancestors evolved opposable thumbs, dozens of new doors opened to the use of tools, writing, and warfare. According to Johnson, the history of innovation has been about exploring new wings of the adjacent possible and expanding what we are capable of.

A new idea—like those of Newton, Jobs, and Shakespeare—is only possible because a previous giant opened a new door and made their work possible. They in turn opened new doors and expanded the realm of possibility. Technology, art, and other advances are only possible if someone else has laid the groundwork; nothing comes from nothing. Shakespeare could write his plays because other people had developed the structures and language that formed his tools. Newton could advance science because of the preliminary discoveries that others had made. Jobs built Apple out of the debris of many prior devices and technological advances.

The questions we all have to ask ourselves are these: What new doors can I open, based on the work of the giants that came before me? What opportunities can I spot that they couldn’t? Where can I take the adjacent possible? If you think all the good ideas have already been found, you are very wrong. Other people’s good ideas open new possibilities, rather than restricting them.

As time passes, the giants just keep getting taller and more willing to let us hop onto their shoulders. Their expertise is out there in books and blog posts, open-source software and TED talks, podcast interviews, and academic papers. Whatever we are trying to do, we have the option to find a suitable giant and see what can be learned from them. In the process, knowledge compounds, and everyone gets to see further as we open new doors to the adjacent possible.

Unlikely Optimism: The Conjunctive Events Bias

When certain events need to take place to achieve a desired outcome, we’re overly optimistic that those events will happen. Here’s why we should temper those expectations.

***

Why are we so optimistic in our estimation of the cost and schedule of a project? Why are we so surprised when something inevitably goes wrong? If we want to get better at executing our plans successfully, we need to be aware of how the conjunctive events bias can throw us way off track.

We often overestimate the likelihood of conjunctive events—occurrences that must happen in conjunction with one another. The probability of a series of conjunctive events happening is lower than the probability of any individual event. This is often very hard for us to wrap our heads around. But if we don’t try, we risk seriously underestimating the time, money, and effort required to achieve our goals.

The Most Famous Bank Teller

In Thinking, Fast and Slow, Daniel Kahneman gives a now-classic example of the conjunctive events bias. Students at several major universities received a description of a woman. They were told that Linda is 31, single, intelligent, a philosophy major, and concerned with social justice. Students were then asked to estimate which of the following statements was more likely to be true:

  • Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

The majority of students (85% to 95%) chose the latter statement, seeing the conjunctive events (that she is both a bank teller and a feminist activist) as more probable. Two events together seemed more likely than one event. It’s perfectly possible that Linda is a feminist bank teller. It’s just not more probable for her to be a feminist bank teller than it is for her to be a bank teller. After all, the first statement does not exclude the possibility of her being a feminist; it just does not mention it.

The logic underlying the Linda example can be summed up as follows: The extension rule in probability theory states that if B is a subset of A, B cannot be more probable than A. Likewise, the probability of A and B cannot be higher than the probability of A or B. Broader categories are always more probable than their subsets. It’s more likely a randomly selected person is a parent than it is that they are a father. It’s more likely someone has a pet than they have a cat. It’s more likely someone likes coffee than they like cappuccinos. And so on.
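In symbols, the rule looks like this. The numbers below are made up purely to illustrate the arithmetic; Kahneman’s study measured rankings, not these probabilities:

```latex
% Extension rule: a conjunction is never more probable than either conjunct.
P(A \cap B) \le \min\bigl(P(A),\, P(B)\bigr)

% Illustrative, assumed numbers for the Linda problem:
% let P(\text{teller}) = 0.05 and P(\text{feminist} \mid \text{teller}) = 0.6.
P(\text{teller} \cap \text{feminist})
  = P(\text{teller}) \, P(\text{feminist} \mid \text{teller})
  = 0.05 \times 0.6 = 0.03 < 0.05 = P(\text{teller})
```

However strongly the description suggests Linda is a feminist, adding the conjunction can only shrink the probability, never raise it.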

It’s not that we always think conjunctive events are more likely. If the second option in the Linda problem were “Linda is a bank teller and likes to ski,” maybe we’d all pick the plain bank teller option, because we have no information suggesting she likes to ski. The point here is that, given what we know about Linda, we think it’s likely she’s a feminist. Therefore, we are willing to add almost anything to the Linda package if it appears alongside “feminist.” This willingness to build a narrative out of pieces that don’t actually belong together is the real danger of the conjunctive events bias.

“Plans are useless, but planning is indispensable.” 

— Dwight D. Eisenhower

Why the best-laid plans often fail

The conjunctive events bias makes us underestimate the effort required to accomplish complex plans. Most plans don’t work out. Things almost always take longer than expected. There are always delays due to dependencies. As Max Bazerman and Don Moore explain in Judgment in Managerial Decision Making, “The overestimation of conjunctive events offers a powerful explanation for the problems that typically occur with projects that require multistage planning. Individuals, businesses, and governments frequently fall victim to the conjunctive events bias in terms of timing and budgets. Home remodeling, new product ventures, and public works projects seldom finish on time.”

Plans don’t work out because completing a sequence of tasks requires a great deal of cooperation between multiple events. As a system becomes increasingly complex, the chance of failure increases. A plan can be thought of as a system, so a change in one component will very likely affect the functioning of other parts. The more components you have, the more chances that something will go wrong in one of them, causing delays, setbacks, and failures in the rest of the system. Even if the chance of any individual component failing is slight, a large number of components increases the probability of failure.
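A minimal sketch makes this concrete. The 95% per-step reliability is an assumed figure chosen for illustration, not a number from the text:

```python
# A minimal sketch: how quickly the odds of total success decay when a
# plan is a conjunction of many steps. Assumes the steps are independent
# and that each succeeds with the same (made-up) probability.

def plan_success_probability(p_step: float, n_steps: int) -> float:
    """Probability that all n independent steps succeed."""
    return p_step ** n_steps

for n in (1, 5, 10, 20, 50):
    p = plan_success_probability(0.95, n)
    print(f"{n:>2} steps at 95% each -> {p:.0%} chance the whole plan succeeds")
```

Even with each step 95% reliable, a twenty-step plan succeeds only about a third of the time, and a fifty-step plan less than one time in ten.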

Imagine you’re building a house. Things start off well. The existing structure comes down on schedule. Construction continues and the framing goes up, and you are excited to see the progress. The contractor reassures you that all trades and materials are lined up and ready to go. What is more likely:

  • The building permits get delayed
  • The building permits get delayed and the electrical goes in on schedule

You know a bit about the electrical schedule. You know nothing about the permits. But you bucket them in optimistically, erroneously linking one with the other. So you don’t worry about the building permits and never imagine that their delay will impact the electrical. When the permits do get delayed, you have to pay the electrician for the week he can’t work and then wait for him to finish another job before he can resume yours.

Thus, the more steps involved in a plan, the greater the chance of failure, as we assign linked probabilities to events that aren’t related at all. That is especially true as more people get involved, bringing their individual biases and misconceptions of chance.

In Seeking Wisdom: From Darwin to Munger, Peter Bevelin writes:

A project is composed of a series of steps where all must be achieved for success. Each individual step has some probability of failure. We often underestimate the large number of things that may happen in the future or all opportunities for failure that may cause a project to go wrong. Humans make mistakes, equipment fails, technologies don’t work as planned, unrealistic expectations, biases including sunk cost-syndrome, inexperience, wrong incentives, changing requirements, random events, ignoring early warning signals are reasons for delays, cost overruns, and mistakes. Often we focus too much on the specific base project case and ignore what normally happens in similar situations (base rate frequency of outcomes—personal and others). Why should some project be any different from the long-term record of similar ones? George Bernard Shaw said: “We learn from history that man can never learn anything from history.”

The more independent steps that are involved in achieving a scenario, the more opportunities for failure and the less likely it is that the scenario will happen. We often underestimate the number of steps, people, and decisions involved.

We can’t pretend that knowing about the conjunctive events bias will automatically stop us from exhibiting it. When, however, we are planning something whose successful outcome matters to us, it’s useful to run through our assumptions with this bias in mind. Sometimes, assigning frequencies instead of probabilities can also show us where our assumptions might be leading us astray. In the housing example above, asking how often building permits get delayed in every hundred builds, versus how often permits get delayed and the electrical still goes in on time for the same hundred, makes it far easier to see that the first option is the more frequent one.
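Here is that comparison as a quick sketch; the delay and on-time rates below are hypothetical numbers chosen only to show the arithmetic:

```python
# Hypothetical frequencies, purely to illustrate the reframing:
# out of 100 comparable builds, how many hit a permit delay at all,
# versus a permit delay AND on-schedule electrical work?

houses = 100
p_permit_delay = 0.30        # assumed: 30 in 100 builds see a permit delay
p_electrical_on_time = 0.70  # assumed, and assumed independent of the delay

delayed = houses * p_permit_delay                     # ~30 of 100 houses
delayed_and_on_time = delayed * p_electrical_on_time  # ~21 of 100 houses

print(f"Permit delay alone:                {delayed:.0f} in {houses}")
print(f"Permit delay AND on-time electric: {delayed_and_on_time:.0f} in {houses}")
```

Whatever rates you assume, the second count can never exceed the first: the conjunction is a subset of the single event.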

It is also extremely useful to keep a decision journal for our major decisions, so that we can be more realistic in our estimates of the time and resources we need for future plans. The more realistic we are, the higher our chances of accomplishing what we set out to do.

The conjunctive events bias teaches us to be more pessimistic about plans and to consider the worst-case scenario, not just the best. We may assume things will always run smoothly, but disruption is the rule rather than the exception.

Chesterton’s Fence: A Lesson in Second Order Thinking

A core component of making great decisions is understanding the rationale behind previous decisions. If we don’t understand how we got “here,” we run the risk of making things much worse.

***

When we seek to intervene in any system created by someone else, it’s not enough to view their decisions and choices simply as the consequences of first-order thinking; if we do, we can inadvertently create serious problems. Before changing anything, we should wonder whether they were using second-order thinking. Their reasons for making certain choices might be more complex than they seem at first. It’s best to assume they knew things we don’t or had experience we can’t fathom, so we don’t go for quick fixes and end up making things worse.

Second-order thinking is the practice of not just considering the consequences of our decisions but also the consequences of those consequences. Everyone can manage first-order thinking, which is just considering the immediate anticipated result of an action. It’s simple and quick, usually requiring little effort. By comparison, second-order thinking is more complex and time-consuming. The fact that it is difficult and unusual is what makes the ability to do it such a powerful advantage.

Second-order thinking will get you extraordinary results, and so will learning to recognize when other people are using second-order thinking. To understand exactly why this is the case, let’s consider Chesterton’s Fence, described by G. K. Chesterton himself as follows:

There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

***

Chesterton’s Fence is a heuristic inspired by a passage in the writer and polymath G. K. Chesterton’s 1929 book, The Thing. It’s best known as one of John F. Kennedy’s favored sayings, as well as a principle Wikipedia encourages its editors to follow. In the book, Chesterton describes the classic case of the reformer who notices something, such as a fence, and fails to see the reason for its existence. However, before deciding to remove it, they must figure out why it exists in the first place. If they do not do this, they are likely to do more harm than good with its removal. In its most concise version, Chesterton’s Fence states the following:

Do not remove a fence until you know why it was put up in the first place.

Chesterton went on to explain why this principle holds true, writing that fences don’t grow out of the ground, nor do people build them in their sleep or during a fit of madness. He explained that fences are built by people who carefully planned them out and “had some reason for thinking [the fence] would be a good thing for somebody.” Until we establish that reason, we have no business taking an ax to it. The reason might not be a good or relevant one; we just need to be aware of what the reason is. Otherwise, we may end up with unintended consequences: second- and third-order effects we don’t want, spreading like ripples on a pond and causing damage for years.

Elsewhere, in his essay collection Heretics, Chesterton makes a similar point, detailed here:

Suppose that a great commotion arises in the street about something, let us say a lamp-post, which many influential persons desire to pull down. A grey-clad monk, who is the spirit of the Middle Ages, is approached upon the matter, and begins to say, in the arid manner of the Schoolmen, “Let us first of all consider, my brethren, the value of Light. If Light be in itself good—” At this point he is somewhat excusably knocked down. All the people make a rush for the lamp-post, the lamp-post is down in ten minutes, and they go about congratulating each other on their un-mediaeval practicality. But as things go on they do not work out so easily. Some people have pulled the lamp-post down because they wanted the electric light; some because they wanted old iron; some because they wanted darkness, because their deeds were evil. Some thought it not enough of a lamp-post, some too much; some acted because they wanted to smash municipal machinery; some because they wanted to smash something. And there is war in the night, no man knowing whom he strikes. So, gradually and inevitably, to-day, to-morrow, or the next day, there comes back the conviction that the monk was right after all, and that all depends on what is the philosophy of Light. Only what we might have discussed under the gas-lamp, we now must discuss in the dark.

As simple as Chesterton’s Fence is as a principle, it teaches us an important lesson. Many of the problems we face in life occur when we intervene in systems without an awareness of what the consequences could be. We can easily forget that this applies to subtraction as much as to addition. If a fence exists, there is likely a reason for it. It may be an illogical or inconsequential reason, but it is a reason nonetheless.


“Before I built a wall I’d ask to know
What I was walling in or walling out,
And to whom I was like to give offence.”

— Robert Frost, “Mending Wall”

Chesterton also alluded to the all-too-common belief that previous generations were bumbling fools, stumbling around, constructing fences wherever they fancied. Should we fail to respect their judgment and make no attempt to understand it, we run the risk of creating new, unexpected problems. By and large, people do not do things for no reason. We’re all lazy at heart. We don’t like to waste time and resources on useless fences. Not understanding something does not mean it must be pointless.

Take the case of supposedly hierarchy-free companies. Someone came along and figured that having management and an overall hierarchy is an imperfect system. It places additional stress on those at the bottom and can even be damaging to their health. It leaves room for abuse of power and manipulative company politics. It makes it unlikely that good ideas from those at the bottom will get listened to.

However, despite the numerous problems inherent in hierarchical companies, doing away with this structure altogether belies a lack of awareness of the reasons why it is so ubiquitous. Someone needs to make decisions and be held responsible for their consequences. During times of stress or disorganization, people naturally tend to look to leaders for direction. Without a formal hierarchy, people often form an invisible one, which is far more complex to navigate and can lead to the most charismatic or domineering individual taking control, rather than the most qualified.

It is certainly admirable that hierarchy-free companies are taking the enormous risk inherent in breaking the mold and trying something new. However, their approach ignores Chesterton’s Fence and doesn’t address why hierarchies exist within companies in the first place. Removing them does not necessarily lead to a fairer, more productive system.

Yes, doing things the way they’ve always been done means getting what we’ve always got. There’s certainly nothing positive about resisting all change. Things become out of date and redundant with time. Sometimes an outside perspective is ideal for shaking things up and finding new ways. Even so, we can’t let ourselves be overconfident about the redundancy of things we see as pointless.

Or, to paraphrase Rory Sutherland, the peacock’s tail is not about efficiency. In fact, its whole value lies in its inefficiency. It signals a bird is healthy enough to waste energy growing it and has the strength to carry it around. Peahens use the tails of peacocks as guidance for choosing which mates are likely to have the best genes to pass on to their offspring. If an outside observer were to somehow swoop in and give peacocks regular, functional tails, it would be more energy efficient and practical, but it would deprive them of the ability to advertise their genetic potential.

***

All of us, at one point or another, make some attempt to change a habit to improve our lives. If you’re engaging in a bad habit, it’s admirable to try to eliminate it—except part of why many attempts to do so fail is that bad habits do not appear out of nowhere. No one wakes up one day and decides they want to start smoking or drinking every night or watching television until the early hours of the morning. Bad habits generally evolve to serve an unfulfilled need: connection, comfort, distraction, take your pick.

Attempting to remove the habit and leave everything else untouched does not eliminate the need and can simply lead to a replacement habit that might be just as harmful or even worse. Because of this, more successful approaches often involve replacing a bad habit with a good, benign, or less harmful one—or dealing with the underlying need. In other words, that fence went up for a reason, and it can’t come down without something either taking its place or removing the need for it to be there in the first place.

To give a further example: in a classic post from 2009 on his website, serial entrepreneur Steve Blank describes a decision he has repeatedly seen in startups. They grow to the point where it makes sense to hire a Chief Financial Officer. Eager to make an immediate difference, the new CFO starts looking for ways to cut costs so they can point to how they’re saving the company money. They take a look at the free snacks and sodas offered to employees and calculate how much they cost per year—perhaps a few thousand dollars. It seems like a waste of money, so they decide to do away with free sodas or start charging a few cents for them. After all, they’re paying people enough. They can buy their own sodas.

Blank writes that, in his experience, the outcome is always the same. The original employees who helped the company grow initially notice the change and realize things are not how they were before. Of course they can afford to buy their own sodas. But suddenly having to is just an unmissable sign that the company’s culture is changing, which can be enough to prompt the most talented people to jump ship. Attempting to save a relatively small amount of money ends up costing far more in employee turnover. The new CFO didn’t consider why that fence was up in the first place.

***

Chesterton’s Fence is not an admonishment of anyone who tries to make improvements; it is a call to be aware of second-order thinking before intervening. It reminds us that we don’t always know better than those who made decisions before us, and we can’t see all the nuances to a situation until we’re intimate with it. Unless we know why someone made a decision, we can’t safely change it or conclude that they were wrong.

The first step before modifying an aspect of a system is to understand it. Observe it in full. Note how it interconnects with other aspects, including ones that might not seem linked to the part you want to change. Learn how it works, and then propose your change.

Using Models to Stay Calm in Charged Situations

When polarizing topics are discussed in meetings, passions can run high and cloud our judgment. Learn how mental models can help you see clearly from this real-life scenario.

***

Mental models can sometimes come off as abstract concepts. They are, however, actual tools you can use to navigate challenging or confusing situations. In this article, we are going to apply our mental models to a common situation: a meeting with conflict.

A recent meeting with the school gave us an opportunity to use our latticework. Anyone with school-age kids has dealt with the bureaucracy of a school system and the other parents who interact with it. Call it what you will, all school environments usually have some formal interface between parents and the school administration aimed at advancing the issues and ideas that matter to the school community.

This particular meeting was an intense one. At issue was the school’s communication around a potentially harmful leak in the heating system. Some parents felt the school had communicated reasonably about the problem and the potential consequences. Others felt their child’s life had been put in danger through potential exposure to mold and asbestos. Some felt the school could have done a better job of soliciting feedback from students about their experiences during the previous week, and others felt the administration had done a poor job of communicating the potential risks to parents.

The first thing you’ll notice if you’re in a meeting like this is that emotions on all sides run high. After some discussion you might also notice a few more things, like how many people do the following:

  • Assume the school acted with malicious intent
  • Argue only from their own perspective
  • Draw sweeping conclusions from one or two anecdotes
  • Ignore the base rates and the larger system the school operates in

Any of these occurrences, when you hear them via statements from people around the table, are a great indication that using a few mental models might improve the dynamics of the situation.

The first mental model that is invaluable in situations like this is Hanlon’s Razor: don’t attribute to maliciousness that which is more easily explained by incompetence. (Hanlon’s Razor is one of the 9 general thinking concepts in The Great Mental Models Volume One.) When people feel victimized, they can get angry and lash out in an attempt to fight back against a perceived threat. When people feel accused of serious wrongdoing, they can get defensive and withhold information to protect themselves. Neither of these reactions is useful in a situation like this. Yes, sometimes people intentionally do bad things. But more often than not, bad things are the result of incompetence. In a school meeting situation, it’s safe to assume everyone at the table has the best interests of the students at heart. School staff and administrators usually go into teaching motivated by a deep love of education. They genuinely want their schools to be amazing places of learning, and they devote time and attention to improving the lives of their students.

It makes no sense to assume a school’s administration would deliberately withhold harmful information. Yes, it could happen. But in either case, you are going to obtain more valuable information if you assume poor decisions were the result of incompetence rather than maliciousness.

When we feel people are being malicious toward us, we instinctively become a negatively coiled spring, waiting for the right moment to take them down a notch or two. By removing malice from the equation, you give yourself emotional breathing room to work toward better solutions and apply more models.

The next helpful model is relativity, adapted from the laws of physics. This model is about remembering that everyone’s perspective is different from yours. Understanding how others see the same situation can help you move toward a more meaningful dialogue with the people in the meeting. You can do this by looking around the room and asking yourself what is influencing people’s approaches to the situation.

In our school meeting, we see some people are afraid for their child’s health. Others are influenced by past dealings with the school administration. Authorities are worried about closing the school. Teachers are concerned about how missed time might impact their students’ learning. Administrators are trying to balance the needs of parents with their responsibility to follow the necessary procedures. Some parents are stressed because they don’t have childcare when the school closes. There is a lot going on, and relativity gives us a lens for identifying the dynamics impacting communication.

After understanding the different perspectives, it becomes easier to incorporate them into your thinking. You can defuse conflict by identifying what it is you think you hear. Often, just the feeling of being heard will help people start to listen and engage more objectively.

Now you can dive into some of the details. First up is probabilistic thinking. Before we worry about mold levels or sick children, let’s try to identify the base rates. What is the mold content in the air outside? How many children are typically absent due to sickness at this time of year? Reminding people that severity has to be evaluated against a baseline can really help defuse stress and concern. If 10% of the student population is absent on any given day, and in the week leading up to these events 12% to 13% of the population was absent, then it turns out we are not actually dealing with a huge statistical anomaly.

Then you can evaluate the anecdotes with the Law of Large Numbers in mind. Small sample sizes can be misleading; the larger the group you evaluate, the more reliable the conclusions. In a situation such as our school council meeting, small samples only serve to ratchet up the emotion by implying that isolated incidents are the direct outcomes of recent events.

In reality, any one-off occurrence can often be explained in multiple ways. One or two children coming home with hives? There are a dozen reasonable explanations for that: allergies, dry skin, a reaction to skin cream, a symptom of an illness unrelated to the school environment, and so on. However, the more children who develop hives, the more statistically likely it is that the cause relates to the one common denominator between all the children: the school environment.

Even then, correlation does not equal causation. It might not be a recent leaky steam pipe; is it exam time? Are there other stressors in the culture? Other contaminants in the environment? The larger your sample size, the more likely you will obtain relevant information.
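A small simulation can show why. Assume, purely for illustration, a 5% background rate of hives in any given week, unrelated to the school:

```python
# Simulating how sample size changes the reliability of an observed rate.
import random

random.seed(42)
TRUE_RATE = 0.05  # assumed background rate of hives, school or no school

def observed_rate(sample_size: int) -> float:
    """Fraction of a random sample showing hives this week."""
    hits = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return hits / sample_size

for n in (5, 20, 100, 1000):
    samples = [observed_rate(n) for _ in range(3)]
    print(f"n={n:>4}: " + ", ".join(f"{r:.0%}" for r in samples))
```

Typical output shows the tiny samples swinging wildly (often 0% or 20% for groups of five), while the samples of a thousand settle near the true 5%. That is the Law of Large Numbers at work.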

Finally, you can practice systems thinking and contribute to the discussion by identifying the other components in the system you are all dealing with. After all, a school council is just one part of a much larger system involving governments, school boards, legislators, administrators, teachers, students, parents, and the community. When you put your meeting into the bigger context of the entire system, you can identify the feedback loops: Who is responding to what information, and how quickly does their behavior change? When you do this, you can start to suggest some possible steps and solutions to remedy the situation and improve interactions going forward.

How is the information flowing? How fast does it move? How much time does each recipient have to adjust before receiving more information? Chances are, you aren’t going to know all this at the meeting. So you can ask questions. Does the principal have to get approval from the school board before sending out communications involving risk to students? Can teachers communicate directly with parents? What are the conditions for communicating possible risk? Will speculation increase the speed of a self-reinforcing feedback loop causing panic? What do parents need to know to make an informed decision about the welfare of their child? What does the school need to know to make an informed decision about the welfare of their students?

In meetings like the one described here, there is no doubt that communication is important. Using the meeting to discuss and debate ways of improving communication so that outcomes are generally better in the future is a valuable use of time.

A school meeting is one practical example of how having a latticework of mental models can be useful. Using mental models can help you defuse some of the emotions that create an unproductive dynamic. They can also help you bring forward valuable, relevant information to assist the different parties in improving their decision-making process going forward.

At the very least, you will walk away from the meeting with a much better understanding of how the world works, and you will have gained some strategies you can implement in the future to leverage this knowledge instead of fighting against it.

The Illusory Truth Effect: Why We Believe Fake News, Conspiracy Theories and Propaganda

When a “fact” tastes good and is repeated enough, we tend to believe it, no matter how false it may be. Understanding the illusory truth effect can keep us from being bamboozled.

***

A recent Verge article looked at some of the unsavory aspects of working as a Facebook content moderator—one of the people who spend their days cleaning up the social network’s most toxic content. One strange detail stands out. The moderators The Verge spoke to reported that they and their coworkers often found themselves believing fringe, often hatemongering conspiracy theories they would have dismissed under normal circumstances. Others described experiencing paranoid thoughts and intense fears for their safety.

An overnight switch from skepticism to fervent belief in conspiracy theories is not unique to content moderators. In a Nieman Lab article, Laura Hazard Owen explains that researchers who study the spread of disinformation online can find themselves struggling to be sure of their own beliefs and needing to make an active effort to counteract what they see. Some of the most fervent, passionate conspiracy theorists admit that they first fell into the rabbit hole when they tried to debunk the beliefs they now hold. There’s an explanation for why this happens: the illusory truth effect.

The illusory truth effect

Facts do not cease to exist because they are ignored.

— Aldous Huxley

Not everything we believe is true. We may act like it is and it may be uncomfortable to think otherwise, but it’s inevitable that we all hold a substantial number of beliefs that aren’t objectively true. It’s not about opinions or different perspectives. We can pick up false beliefs for the simple reason that we’ve heard them a lot.

If I say that the moon is made of cheese, no one reading this is going to believe that, no matter how many times I repeat it. That statement is too ludicrous. But what about something a little more plausible? What if I said that moon rock has the same density as cheddar cheese? And what if I wasn’t the only one saying it? What if you’d also seen a tweet touting this amazing factoid, perhaps also heard it from a friend at some point, and read it in a blog post?

Unless you’re a geologist, a lunar fanatic, or otherwise in possession of an unusually good radar for moon rock-related misinformation, there is a not insignificant chance you would end up believing a made-up fact like that, without thinking to verify it. You might repeat it to others or share it online. This is how the illusory truth effect works: we all have a tendency to believe something is true after being exposed to it multiple times. The more times we’ve heard something, the truer it seems. The effect is so powerful that repetition can persuade us to believe information we know is false in the first place. Ever thought a product was stupid but somehow you ended up buying it on a regular basis? Or you thought that new manager was okay, but now you participate in gossip about her?

The illusory truth effect is the reason why advertising works and why propaganda is one of the most powerful tools for controlling how people think. It’s why the speech of politicians can be bizarre and multiple-choice tests can cause students problems later on. It’s why fake news spreads and retractions of misinformation don’t work. In this post, we’re going to look at how the illusory truth effect works, how it shapes our perception of the world, and how we can avoid it.

The discovery of the illusory truth effect

Rather than love, than money, than fame, give me truth.

— Henry David Thoreau

The illusory truth effect was first described in a 1977 paper entitled “Frequency and the Conference of Referential Validity,” by Lynn Hasher and David Goldstein of Temple University and Thomas Toppino of Villanova University. In the study, the researchers presented a group of students with 60 statements and asked them to rate how certain they were that each was either true or false. The statements came from a range of subjects and were all intended to be not too obscure, but unlikely to be familiar to study participants. Each statement was objective—it could be verified as either correct or incorrect and was not a matter of opinion. For example, “the largest museum in the world is the Louvre in Paris” was true.

Students rated their certainty three times, with two weeks between evaluations. Some of the statements were repeated each time, while others were not. With each repetition, students became surer that the statements they labeled as true really were true. It seemed that they were using familiarity as a gauge for how confident they were in their beliefs.

An important detail is that the researchers did not repeat the first and last 10 items on each list. They felt students would be most likely to remember these and be able to research them before the next round of the study. While the study was not conclusive evidence of the existence of the illusory truth effect, subsequent research has confirmed its findings.

Why the illusory truth effect happens

The sad truth is the truth is sad.

— Lemony Snicket

Why does repetition of a fact make us more likely to believe it, and to be more certain of that belief? As with other cognitive shortcuts, the typical explanation is that it’s a way our brains save energy. Thinking is hard work—remember that the human brain uses up about 20% of an individual’s energy, despite accounting for just 2% of their body weight.

The illusory truth effect comes down to processing fluency. When a thought is easier to process, it requires our brains to use less energy, which leads us to prefer it. The students in Hasher’s original study recognized the repeated statements, even if not consciously. That means that processing them was easier for their brains.

Processing fluency seems to have a wide impact on our perception of truthfulness. Rolf Reber and Norbert Schwarz, in their article “Effects of Perceptual Fluency on Judgments of Truth,” found that statements presented in an easy-to-read color are judged as more likely to be true than ones presented in a less legible way. In their article “Birds of a Feather Flock Conjointly (?): Rhyme as Reason in Aphorisms,” Matthew S. McGlone and Jessica Tofighbakhsh found that aphorisms that rhyme (like “what sobriety conceals, alcohol reveals”), even if someone hasn’t heard them before, seem more accurate than non-rhyming versions. Once again, they’re easier to process.

Fake news

One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken.

Carl Sagan

The illusory truth effect is one factor in why fabricated news stories sometimes gain traction and have a wide impact. When this happens, our knee-jerk reaction can be to assume that anyone who believes fake news must be unusually gullible or outright stupid. Evan Davis writes in Post Truth, “Never before has there been a stronger sense that fellow citizens have been duped and that we are all suffering the consequences of their intellectual vulnerability.” As Davis goes on to write, this assumption isn’t helpful for anyone. We can’t begin to understand why people believe seemingly ludicrous news stories until we consider some of the psychological reasons why this might happen.

Fake news falls under the umbrella of “information pollution,” which also includes news items that misrepresent information, take it out of context, parody it, fail to check facts or do background research, or take claims from unreliable sources at face value. Some of this news gets published on otherwise credible, well-respected news sites due to simple oversight. Some goes on parody sites that never purport to tell the truth, yet are occasionally mistaken for serious reporting. Some shows up on sites that replicate the look and feel of credible sources, using similar web design and web addresses. And some fake news comes from sites dedicated entirely to spreading misinformation, without any pretense of being anything else.

A lot of information pollution falls somewhere between the extremes that tend to get the most attention. It’s the result of people being overworked or in a hurry and unable to do the due diligence that reliable journalism requires. It’s what happens when we hastily tweet something or mention it in a blog post without realizing it’s not quite true. It extends to miscited quotes, doctored photographs, fiction masquerading as memoir, and misleading statistics.

The signal-to-noise ratio is so skewed that we have a hard time figuring out what to pay attention to and what to ignore. No one has time to verify everything they read online. No one. (And no, offline media certainly isn’t perfect either.) Our information-processing capabilities are finite, and the more we consume, the harder it becomes to assess the value of any of it.

Moreover, we’re often far outside our circle of competence, reading about topics in which we lack the expertise to assess accuracy in any meaningful way. This drip-drip of information pollution is not harmless. Like air pollution, it builds up over time, and the more we’re exposed to it, the more likely we are to pick up false beliefs that are then hard to shift. For instance, many people believe that crime, especially violent crime, rises year after year: in a 2016 study by Pew Research, 57% of Americans believed crime had worsened since 2008, even though violent crime had actually fallen by nearly a fifth over that period. This false belief may stem from the fact that violent crime receives a disproportionate amount of media coverage, giving it wide and repeated exposure.

When people are asked to rate the apparent truthfulness of news stories, they rate ones they have read multiple times as more truthful than those they haven’t. Danielle C. Polage, in her article “Making Up History: False Memories of Fake News Stories,” explains that a false story someone has been exposed to more than once can seem more credible than a true one they’re seeing for the first time. In experimental settings, people also misattribute their previous exposure to stories, believing they read a news item from another source when they actually saw it in an earlier part of the study. Even when people know the story is part of the experiment, they sometimes think they’ve also read it elsewhere. The repetition is all that matters.

Given enough exposure to contradictory information, there is almost no knowledge that we won’t question.

Propaganda

If a lie is only printed often enough, it becomes a quasi-truth, and if such a truth is repeated often enough, it becomes an article of belief, a dogma, and men will die for it.

Isa Blagden

Propaganda and fake news are similar: both rely on repetition to change what people believe and value.

Propaganda has a lot in common with advertising, except instead of selling a product or service, it’s about convincing people of the validity of a particular cause. Propaganda isn’t necessarily malicious; sometimes the cause is improved public health or boosting patriotism to encourage military enrollment. But often propaganda is used to undermine political processes to further narrow, radical, and aggressive agendas.

During World War II, the graphic designer Abram Games served as the official war poster artist for the British government. Games’s work is iconic and era-defining for its punchy, brightly colored visual style. His army recruitment posters often featured a single figure rendered in a proud, strong, admirable pose, accompanied by just a few words of text. They conveyed to anyone who saw them the sorts of positive qualities they would supposedly gain through military service. Whether this was true or not was another matter. Through repeated exposure to the posters, Games instilled the image the army wanted to project in the minds of viewers, shaping their beliefs and behaviors.

Today, propaganda is more likely to be a matter of quantity over quality. It’s not about a few artistic posters. It’s about saturating the intellectual landscape with content that supports a group’s agenda. With so many demands on our attention, old techniques are too weak.

Researchers Christopher Paul and Miriam Matthews of the RAND Corporation call this method of bombarding people with fabricated information the “firehose of falsehood” model. While their report focuses on modern Russian propaganda, the techniques it describes are not confined to Russia. They exploit the illusory truth effect alongside other cognitive shortcuts. Firehose propaganda has four distinct features:

  • High-volume and multi-channel
  • Rapid, continuous, and repetitive
  • Makes no commitment to objective reality
  • Makes no commitment to consistency

Firehose propaganda is predicated on exposing people to the same messages as frequently as possible. It involves a large volume of content repeated again and again across numerous channels: news sites, videos, radio, social media, television, and so on. These days, as the report describes, this can also include internet users who are paid to post repeatedly in forums, chat rooms, comment sections, and on social media, disputing legitimate information and spreading misinformation. It is the sheer volume that obliterates the truth. Research into the illusory truth effect suggests that we are further persuaded by information heard from multiple sources, hence the efficacy of funneling propaganda through a range of channels.

Since repetition leads to belief in many cases, firehose propaganda doesn’t need to pay attention to the truth or even to be consistent. A source doesn’t need to be credible for us to end up believing its messages. Fact-checking is of little help, because it adds yet another repetition of the false claim, and we feel compelled to correct obviously untrue propagandistic material rather than ignore it.

Firehose propaganda does more than spread fake news. It nudges us towards paranoia, mistrust, suspicion, and contempt for expertise, all of which make future propaganda more effective. Unlike those constrained by the truth, propagandists can move fast because they’re making up some or all of what they claim, which means their version gains a foothold in our minds first. First impressions are powerful. Familiarity breeds trust.

How to combat the illusory truth effect

So how can we protect ourselves from believing false news and being manipulated by propaganda due to the illusory truth effect? The best route is to be far more selective. The information we consume is like the food we eat. If it’s junk, our thinking will reflect that.

We don’t need to spend as much time reading the news as most of us do. As with many other things in life, more can be less. The vast majority of the news we read is just information pollution. It doesn’t do us any good.

One of the best solutions is to quit the news altogether. This frees up time and energy to engage with timeless wisdom that will improve your life. Try it for a couple of weeks. And if you aren’t convinced, read a few days’ worth of newspapers from 1978. You’ll see how little the news of the day really matters in the long run.

If you can’t quit the news habit, stick to reliable, well-known news sources that have a reputation to uphold. Steer clear of dubious sources whenever you can: even if you treat them as entertainment, you might still end up absorbing their claims. Research unfamiliar sources before trusting them. Be cautious of sites funded entirely by advertising (or that pay their journalists based on views), and, where possible, support reader-funded news sources you get value from. Prioritize sites that treat their journalists well and don’t expect them to churn out dozens of thoughtless articles per day. And don’t rely on unsourced news in social media posts, especially from people speaking outside their circle of competence.

Avoid treating the news as entertainment to passively consume on the bus or while waiting in line. Be mindful about it: if you want to inform yourself on a topic, set aside designated time to learn about it from multiple trustworthy sources. Don’t assume breaking news is better; it can take time for the full details of a story to come out, and people may be quick to fill in the gaps with misinformation. Accept that you can’t be informed about everything, and that most of it isn’t important anyway. Pay attention when news items provoke outrage or other strong emotions, as this may be a sign of manipulation. Be aware, too, that correcting false information can further fuel the illusory truth effect by adding to the repetition.

We can’t stop the illusory truth effect from existing. But we can recognize that it is a reality and seek to prevent ourselves from succumbing to it in the first place.

Conclusion

Our memories are imperfect. We are easily led astray by the illusory truth effect, which can direct what we believe and even change our understanding of the past. It’s not about intelligence: this happens to all of us. The effect is too powerful for us to override simply by learning the truth. Cognitively, our brains make little distinction between a genuine memory and a false one. They are designed to save energy, and it’s crucial we accept that.

We can’t just pull back and assume the illusory truth effect only applies to other people. It applies to everyone. We’re all responsible for our own beliefs, and we can’t pin the blame on the media, social media algorithms, or anything else. When we put effort into thinking about and questioning the information we’re exposed to, we’re less vulnerable to the illusory truth effect. Knowing about the effect is the best way to spot when it’s distorting our worldview. Before we use information as the basis for important decisions, it’s worth verifying whether it’s true or merely something we’ve heard many times.

Truth is a precarious thing, not because it doesn’t objectively exist, but because the incentives to warp it can be so strong. It’s up to each of us to seek it out.