
How Description Leads to Understanding

Describing something with accuracy forces you to learn more about it. In this way, description can be a tool for learning.

Accurate description requires the following:

  1. Observation
  2. Curiosity about what you are witnessing
  3. Suspending assumptions about cause and effect

It can be difficult to stick with describing something completely and accurately. It’s hard to overcome the tendency to draw conclusions based on partial information or to leave assumptions unexplored.

***

Some systems, like the ocean ecosystem, are complex. They have many moving parts with multiple dependencies that interact in complicated ways. Trying to figure them out is daunting, and it can seem saner not to bother trying—except that complex systems are everywhere. We live our lives as part of many of them, and addressing global challenges requires understanding their many dimensions.

One way to begin understanding complex systems is by describing them in detail: mapping out their parts, their multiple interactions, and how they change through time. Complex systems are often complicated—that is, they have many moving parts that can be hard to identify and define. But the overriding feature of complex systems is that they cannot be managed from the top down. They display emergent properties and unpredictable adaptations that we cannot identify in advance. Far from being inaccessible, though, such systems can teach us a lot when we describe what we observe.

For example, Jane Jacobs’s comprehensive description of the interactions along city sidewalks in The Death and Life of Great American Cities led to insight about how cities actually work. Her work also emphasized the multidimensionality of city systems by demonstrating via description that attempting to manage a city from the top down would stifle its adaptive capabilities and negatively impact the city itself.

Another book that uses description to illuminate complicated and intricate relationships is The Sea Around Us by Rachel Carson. In it she chronicles events in the oceans, from the cycles of plankton growth to the movement of waves, in accessible, evocative descriptions. It’s no trouble to conjure up vivid images based on her words. But as the book progresses, her descriptions of the parts coalesce into an appreciation for how multidimensional the sea system is.

Carson’s descriptions come through multiple lenses. She describes the sea through the behavior of animals and volcanoes. She explores its vertical structure, from the surface to the depths to the ocean floor. She looks at the oceans through the lenses of their currents and their relationships to wind. In total the book describes the same entity, the seas that cover the majority of the earth’s surface, through thirteen different descriptive lenses. Although the parts are broken down into their basics, the comprehensive view Carson employs allows the reader to easily grasp how complicated the sea system is.

None of the lenses she uses impart complete information. Trying to appreciate how interconnected the parts of the system are by looking at just her description of tides or minerals is impossible. It’s only when the lenses are combined that a complete picture of the ecosystem emerges.

The book demonstrates the value in description, even if you cannot conclude causation in the specifics you’re describing.

One noticeable omission from the book is the role of plate tectonics in the movement of the ocean floor and associated phenomena like volcanoes. Plate tectonic theory is a scientific baby and was not yet widely accepted when Carson updated her original text in 1961. But not knowing plate tectonic theory doesn’t undermine her descriptions of life at the bottom of the oceans, or the impact of volcanoes, or the changing shape of the undersea shelves that attach to the continents. Although the reader is invited to contemplate the why behind what she is describing, we are also encouraged to be in the moment, observing the ocean through Carson’s words.

***

The book is not an argument for a particular way of interacting with the sea. It doesn’t need to make one. Carson’s descriptions offer their own evidence of how trying to change or manage the sea system would be extremely difficult because they reveal the multitude of connections between various sea phenomena.

Describing the whole from so many different angles illuminates the complex. By chronicling microinteractions, such as those between areas of hot and cold water or high and low pressure, we can see how changes in one aspect produce cascading change. We also get a sense of the adaptability of the organisms that live in the oceans, such as their ability to survive at depths that have no light (and therefore no plants that rely on photosynthesis) and to adjust their biochemistry to take advantage of seasonal variations in temperature that affect water density and salt content.

The reader walks away from the book appreciating the challenge in describing in detail something as complicated as the ocean ecosystem. The book is full of observations and short on judgments, an approach that encourages us to develop our own curiosity about the sea around us.


The Precautionary Principle: Better Safe than Sorry?

Also known as the Precautionary Approach or Precautionary Action, the Precautionary Principle is a concept best summed up by the proverb “better safe than sorry” or the medical maxim to “first do no harm.”

While there is no single definition, it typically refers to acting to prevent harm by not doing anything that could have negative consequences, even if the possibility of those consequences is uncertain.

In this article, we will explore how the Precautionary Principle works, its strengths and drawbacks, the best way to use it, and how we can apply it in our own lives.

Guilty until proven innocent

Whenever we make even the smallest change within a complex system, we risk dramatic unintended consequences.

The interconnections and dependencies within systems make it almost impossible to predict outcomes—and seeing as they often require a reasonably precise set of conditions to function, our interventions can wreak havoc.

The Precautionary Principle reflects the reality of working with and within complex systems. It shifts the burden of proof from proving something was dangerous after the fact to proving it is safe before taking chances. It emphasizes waiting for more complete information before risking causing damage, especially if some of the possible impacts would be irreversible, hard to contain, or would affect people who didn’t choose to be involved.

The possibility of harm does not need to be specific to that particular circumstance; sometimes we can judge a category of actions as one that always requires precaution because we know it has a high risk of unintended consequences.

For example, invasive species (plants or animals that cause harm after being introduced into a new environment by humans) have repeatedly caused native species to become extinct. So it’s reasonable to exercise precaution and not introduce living things into new places without strong evidence it will be harmless.

Preventing risks and protecting resources

Best known for its use as a regulatory guideline in environmental law and public health, the Precautionary Principle originated with the German term “Vorsorgeprinzip,” applied to regulations for preventing air pollution. Konrad von Moltke, director of the Institute for European Environmental Policy, later translated it into English.

Seeing as the natural world is a highly complex system we have repeatedly disrupted in serious, permanent ways, the Precautionary Principle has become a guiding part of environmental policy in many countries.

For example, the Umweltbundesamt (German Environmental Protection Agency) explains that the Precautionary Principle has two core components in German environmental law today: preventing risks and protecting resources.

Preventing risks means legislators shouldn’t take actions where our knowledge of the potential for environmental damage is incomplete or uncertain but there is cause for concern. The burden of proof is on proving lack of harm, not on proving harm. Protecting resources means preserving things like water and soil in a form future generations can use.

To give another example, some countries invoke versions of the Precautionary Principle to justify bans on genetically modified foods—in some cases for good, in others until evidence of their safety is considered stronger. It is left to legislators to interpret and apply the Precautionary Principle in specific situations.

The flexibility of the Precautionary Principle is both a source of strength and a source of weakness. We live in a fast-moving world where regulation does not always keep up with innovation, meaning guidelines (as opposed to rules) can often prove useful.

Another reason the Precautionary Principle can be a practical addition to legislation is that science doesn’t necessarily move fast enough to protect us from potential risks, especially ones that shift harm elsewhere or take a long time to show up. For example, thousands of human-made substances are present in the food we eat, ranging from medications given to livestock to materials used in packaging. Proving that a new additive has health risks once it’s in the food supply could take decades because it’s incredibly difficult to isolate causative factors. So some regulators, including the Food and Drug Administration in America, require manufacturers to prove something is safe before it goes to market. This approach isn’t perfect, but it’s far safer than waiting to discover harm after we start eating something.

The Precautionary Principle forces us to ask a lot of difficult questions about the nature of risk, uncertainty, probability, the role of government, and ethics. It can also prompt us to question our intuitions surrounding the right decisions to make in certain situations.

When and how to use the Precautionary Principle

When handling risks, it is important to be aware of what we don’t or can’t know for sure. The Precautionary Principle is not intended to be a stifling justification for banning things—it’s a tool for handling particular kinds of uncertainty. Heuristics can guide us in making important decisions, but we still need to be flexible and treat each case as unique.

So how should we use the Precautionary Principle? Sven Ove Hansson suggests two requirements in How Extreme Is the Precautionary Principle? First, if there are competing priorities (beyond avoidance of harm), it should be combined with other decision-making principles. For example, the idea of “explore versus exploit” teaches us that we need to balance doubling down on existing options with trying out new ones. Second, the decision to take precautionary action should be based on the most up-to-date science, and there should be plans in place for how to update that decision if the science changes. That includes planning how often to re-evaluate the evidence and how to assess its quality.
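That “explore versus exploit” tension has a standard algorithmic form, and seeing it once makes the balance concrete. Below is a minimal epsilon-greedy sketch in Python (our illustration of the general idea, not anything from Hansson’s paper; the values and function name are hypothetical): most of the time you take the best-known option, and a small fraction of the time you deliberately try another so your estimates can improve.

```python
import random

def epsilon_greedy(estimated_values, epsilon=0.1, rng=random):
    """Balance exploration against exploitation when picking an option.

    With probability epsilon, pick a random option (explore); otherwise
    pick the option with the best estimate so far (exploit).
    """
    if rng.random() < epsilon:
        return rng.randrange(len(estimated_values))
    return max(range(len(estimated_values)), key=lambda i: estimated_values[i])

# Three options whose values we've estimated from experience so far;
# roughly 90% of calls return option 1, the rest explore at random.
choice = epsilon_greedy([0.4, 0.7, 0.55], epsilon=0.1)
```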

When is it a good idea to use the Precautionary Principle? There are a few types of situations where it’s better to be safe than sorry when things are uncertain.

When the costs of waiting are low. As we’ve already seen, the Precautionary Principle is intended as a tool for handling uncertainty, rather than a justification for arbitrary bans. This means that if the safety of something is uncertain but the costs of waiting to learn more are low, it’s a good idea to use precaution.

When preserving optionality is a priority. The Precautionary Principle is most often invoked for potential risks that would cause irreversible, far-reaching, uncontainable harm. Seeing as we don’t know what the future holds, keeping our options open by avoiding choices that limit us gives us the most flexibility later on. The Precautionary Principle preserves optionality by ensuring we don’t restrict the resources available to us further down the line or leave messes for our future selves to clean up.

When the potential costs of a risk are far greater than the cost of preventative action. If a potential risk would be devastating or even ruinous, and it’s possible to protect against it, precautionary action is key. Sometimes winning is just staying in the game—and sometimes staying in the game boils down to not letting anything wipe you out.

For example, in 1963 the Swiss government pledged to provide bunker spaces to all citizens in the event of a nuclear attack or disaster. The country still maintains a national system of thousands of warning sirens and distributes potassium iodide tablets (used to reduce the effects of radiation) to people living near nuclear plants in case of an accident. Given the potential effects of an incident on Switzerland (regardless of how likely it is), these precautionary actions are considered worthwhile.
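To make this criterion concrete, here is a minimal sketch (our illustration, not from the article; the numbers and function are hypothetical) of the comparison it implies. Ruinous outcomes are handled separately because no expected payoff compensates for being wiped out:

```python
def precaution_warranted(p_harm, harm_cost, prevention_cost, ruinous=False):
    """Crude test of whether precautionary action pays off.

    p_harm:          rough probability the harm occurs (often very uncertain)
    harm_cost:       estimated cost if the harm does occur
    prevention_cost: cost of acting now to prevent it
    ruinous:         True if the harm would be irreversible or unrecoverable
    """
    if ruinous:
        # Expected-value arithmetic breaks down for ruin: staying in the
        # game dominates, so take the precaution if it is feasible at all.
        return True
    # Otherwise weigh expected loss against the certain cost of prevention.
    return p_harm * harm_cost > prevention_cost

# Swiss-bunker-style reasoning: even a small probability of a catastrophic
# loss can justify a comparatively modest ongoing cost.
print(precaution_warranted(p_harm=0.001, harm_cost=10_000_000, prevention_cost=5_000))  # True
```

The structure matters more than the numbers: once an outcome is ruinous, the decision stops being an expected-value calculation at all.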

When alternatives are available. If there are alternative courses of action we know to be safe, it’s a good idea to wait for more information before adopting a new risky one.

When not to use the Precautionary Principle

As a third criterion for using the Precautionary Principle usefully, Sven Ove Hansson recommends that it not be used when the likelihood or scale of a potential risk is too low for precautionary action to have any benefit. For example, if one person per year dies from an allergic reaction to a guinea pig bite, it’s probably not worth banning pet guinea pigs. We can add a few more examples of situations where it’s generally not a good idea to use the Precautionary Principle.

When the tradeoffs are substantial and known. The whole point of the Precautionary Principle is to avoid harm. If we know for sure that not taking an action will cause more damage than taking it possibly could, it’s not a good idea to use precaution.

For example, following the 2011 accident at Fukushima, Japan shut down all of its nuclear power plants. Seeing as nuclear power is cheaper than fossil fuels, this resulted in a sharp increase in electricity prices in parts of the country. According to the authors of the paper Be Cautious with the Precautionary Principle, the resulting increase in mortality among people unable to spend as much on heating was higher than the death toll from the accident itself.

When the risks are known and priced in. We all have different levels of risk appetite and we make judgments about whether certain activities are worth the risks involved. When a risk is priced in, that means people are aware of it and voluntarily decide it is worthwhile—or even desirable.

For example, riskier investments tend to have higher potential returns. Although they might not make sense for someone who doesn’t want to risk losing any money, they do make sense for those who consider the potential gains worth the potential losses.

When only a zero-risk option would be satisfactory. It’s impossible to avoid risk completely, so it doesn’t make much sense to exercise precaution in the expectation that a 100% safe option will appear.

When taking risks could strengthen us. As individuals, we can sometimes be overly risk averse and too cautious—to the point where it makes us fragile. Our ancestors had the best chance of surviving if they overreacted, rather than underreacted, to risks. But for many of us today, the biggest risk we face can be the stress caused by worrying too much about improbable dangers. We can end up fearing the kinds of risks, like social rejection, that are unavoidable and that tend to make us stronger if we embrace them as inevitable. Never taking any risks is generally a far worse idea than taking sensible ones.

***

We all face decisions every day that involve balancing risk. The Precautionary Principle is a tool that helps us determine when a particular choice is worth taking a gamble on, or when we need to sit tight and collect more information.

Advice for Young Scientists—and Curious People in General

The Nobel Prize-winning biologist Peter Medawar (1915–1987) is best known for work that made the first organ transplants and skin grafts possible. Medawar was also a lively, witty writer who penned numerous books on science and philosophy.

In 1979, he published Advice to a Young Scientist, a book brimming with both practical advice and philosophical guidance for anyone “engaged in exploratory activities.” Here, we summarize some of Medawar’s key insights from the book.

***

Application, diligence, a sense of purpose

“There is no certain way of telling in advance if the daydreams of a life dedicated to the pursuit of truth will carry a novice through the frustration of seeing experiments fail and of making the dismaying discovery that some of one’s favourite ideas are groundless.”

If you want to make progress in any area, you need to be willing to give up your best ideas from time to time. Science proceeds because researchers do all they can to disprove their hypotheses rather than prove them right. Medawar notes that he twice spent two whole years trying to corroborate groundless hypotheses. The key to being a good scientist is the capacity to take no for an answer—when necessary. Additionally:

“…one does not need to be terrifically brainy to be a good scientist…there is nothing in experimental science that calls for great feats of ratiocination or a preternatural gift for deductive reasoning. Common sense one cannot do without, and one would be the better for owning some of those old-fashioned virtues which have fallen into disrepute. I mean application, diligence, a sense of purpose, the power to concentrate, to persevere and not be cast down by adversity—by finding out after long and weary inquiry, for example, that a dearly loved hypothesis is in large measure mistaken.”

The truth is, any measure of risk-taking comes with the possibility of failure. Learning from failure to continue exploring the unknown is a broadly useful mindset.

***

How to make important discoveries

“It can be said with marked confidence that any scientist of any age who wants to make important discoveries must study important problems. Dull or piffling problems yield dull or piffling answers.”

A common piece of advice for people early on in their careers is to pursue what they find most interesting. Medawar disagrees, explaining that “almost any problem is interesting if it is studied in sufficient depth.” He advises scientists to look for important problems, meaning ones with answers that matter to humankind.

When choosing an area of research, Medawar cautions against mistaking a fashion (“some new histochemical procedure or technical gimmick”) for a movement (“such as molecular genetics or cellular immunology”). Movements lead somewhere; fashions generally don’t.

***

Getting started

Whenever we begin some new endeavor, it can be tempting to think we need to know everything there is to know about it before we even begin. Often, this becomes a form of procrastination. Only once we try something and our plans make contact with reality can we know what we need to know. Medawar believes it’s unnecessary for scientists to spend an enormous amount of time learning techniques and supporting disciplines before beginning research:

“As there is no knowing in advance where a research enterprise may lead and what kind of skills it will require as it unfolds, this process of ‘equipping oneself’ has no predeterminable limits and is bad psychological policy….The great incentive to learning a new skill or supporting discipline is needing to use it.”

The best way to learn what we need to know is by getting started, then picking up new knowledge as it proves itself necessary. When there’s an urgent need, we learn faster and avoid unnecessary learning. The same can be true for too much reading:

“Too much book learning may crab and confine the imagination, and endless poring over the research of others is sometimes psychologically a research substitute, much as reading romantic fiction may be a substitute for real-life romance….The beginner must read, but intently and choosily and not too much.”

We don’t talk about this much at Farnam Street, but it is entirely possible to read too much. Reading becomes counterproductive when it serves as a substitute for doing the real thing, at least if doing is the reader’s actual goal. Medawar explains that it is “psychologically most important to get results, even if they are not original.” It’s important to build confidence by doing something concrete and seeing a visible manifestation of our labors. For Medawar, the best scientists begin with the understanding that they can never know everything and that learning needs to be a lifelong process.

***

The secrets to effective collaboration

“Scientific collaboration is not at all like cooks elbowing each other from the pot of broth; nor is it like artists working on the same canvas, or engineers working out how to start a tunnel simultaneously from both sides of a mountain in such a way that the contractors do not miss each other in the middle and emerge independently at opposite ends.”

Instead, scientific collaboration is about researchers creating the right environment to develop and expand upon each other’s ideas. A good collaboration is greater than the sum of its parts and results in work that isn’t attributable to a single person.

For scientists who find their collaborators infuriating from time to time, Medawar advises being self-aware. We all have faults, and we too are probably almost intolerable to work with sometimes.

When collaboration becomes contentious, Medawar maintains that we should give away our best ideas.

Scientists sometimes face conflict over the matter of credit. If several researchers are working on the same problem, whichever one finds the solution (or a solution) first gets the credit, no matter how close the others were. This is a problem most creative fields don’t face: “The twenty years Wagner spent on composing the first three operas of The Ring were not clouded by the fear that someone else might nip ahead of him with Götterdämmerung.” Once a scientific idea becomes established, it becomes public property. So the only chance of ownership a researcher has comes by being the first.

However, Medawar advocates for being open about ideas and doing away with secrecy because “anyone who shuts his door keeps out more than he lets out.” He goes on to write, “The agreed house rule of the little group of close colleagues I have always worked with has always been ‘Tell everyone everything you know,’ and I don’t know anyone who came to any harm by falling in with it.”

***

How to handle moral dilemmas

“A scientist will normally have contractual obligations to his employer and has always a special and unconditionally binding obligation to the truth.”

Medawar writes that many scientists, at some point in their career, find themselves grappling with the conflict between a contractual obligation and their own conscience. However, the “time to grapple is before a moral dilemma arises.” If we think an enterprise might lead somewhere damaging, we shouldn’t start on it in the first place.

We should know our values and aim to do work in accordance with them.

***

The first rule is never to fool yourself

“I cannot give any scientist of any age better advice than this: the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” All scientists make mistakes sometimes. Medawar advises, when this happens, to issue a swift correction. To do so is far more respectable and beneficial for the field than trying to cover it up. Echoing the previous advice to always be willing to take no for an answer, Medawar warns about falling in love with a hypothesis and believing it is true without evidence.

“A scientist who habitually deceives himself is well on the way toward deceiving others.”

***

The best creative environment

“To be creative, scientists need libraries and laboratories and the company of other scientists; certainly a quiet and untroubled life is a help. A scientist’s work is in no way deepened or made more cogent by privation, anxiety, distress, or emotional harassment. To be sure, the private lives of scientists may be strangely and comically mixed up, but not in ways that have any special bearing on the nature and quality of their work.”

Creativity arises from tranquility, not from disarray. It is supported by a safe environment, one in which you can share and question openly and be heard with compassion and a desire to understand.

***

A final piece of advice:

“A scientist who wishes to keep his friends and not add to the number of his enemies must not be forever scoffing and criticizing and so earn a reputation for habitual disbelief; but he owes it to his profession not to acquiesce in or appear to condone folly, superstition, or demonstrably unsound belief. The recognition and castigation of folly will not win him friends, but it may gain him some respect.”

We Are What We Remember

Memory is an intrinsic part of our life experience. It is critical for learning, and without memories we would have no sense of self. Understanding why some memories stick better than others, as well as accepting their fluidity, helps us reduce conflict and better appreciate just how much our memories impact our lives.

***

“Which of our memories are true and which are not is something we may never know. It doesn’t change who we are.”

Memories can be so vivid. Let’s say you are spending time with your sibling and reflecting on your past when suddenly a memory pops up. Even though it’s about events that occurred twenty years ago, it seems like it happened yesterday. The sounds and smells pop into your mind. You remember what you were wearing, the color of the flowers on the table. You chuckle and share your memory with your sibling. But they stare at you and say, “That’s not how I remember it at all.” What?

Memory discrepancies happen all the time, but we have a hard time accepting that our memories are rarely accurate. Because we’ve been conditioned to think of our memories like video recordings or data stored in the cloud, we assert that our rememberings are the correct ones. Anyone who remembers the situation differently must be wrong.

Memories are never an exact representation of a moment in the past. They are not copied with perfect fidelity, and they change over time. Some of our memories may not even be ours, but rather something we saw in a film or a story someone else told us. We mix and combine memories, especially older ones, all the time. It can be hard to accept the malleable nature of memories and the fact that they are not just sitting in our brains waiting to be retrieved. In Adventures in Memory, writer Hilde Østby and neuropsychologist Ylva Østby present a fascinating journey through all aspects of memory. Their stories and investigations provide great insight into how memory works, how our capacity for memory is an integral part of the human condition, and how a better understanding of memory helps us avoid the conflicts we create when we insist that what we remember is right.

***

Memory and learning

“One thing that aging doesn’t diminish is the wisdom we have accumulated over a lifetime.”

Our memories, dynamic and changing though they may be, are with us for the duration of our lives. Unless you’ve experienced brain trauma, you learn new things and store at least some of what you learn in memory.

Memory is an obvious component of learning, but we don’t often think of it that way. When we learn something new, it’s against the backdrop of what we already know. All the knowledge we pick up over the years is stored in memory. The authors suggest that “how much you know in a broad sense determines what you understand of the new things you learn.” It’s easier to remember something if it can hook into context you already have: the more you know, the more a new memory can attach to. Thus, what we already know, what we remember, impacts what we learn.

The Østbys explain that the strongest memory networks are created “when we learn something truly meaningful and make an effort to understand it.” They describe someone who is passionate about diving and thus “will more easily learn new things about diving than about something she’s never been interested in before.” Because the diver already knows a lot about diving, and because she loves it and is motivated to learn more, new knowledge about diving will easily attach itself to the memory network she already has about the subject.

One of the conclusions the Østbys reach from studying people who seem to have amazing memories, as measured by the sheer amount they can recall with accuracy, is “that many people who rely on their memories don’t use mnemonic techniques, nor do they cram. They’re just passionate about what they do.” The more meaningful the topic and the more we are invested in truly learning, the higher the chances that we will convert new information into lasting memory. Also, the more we learn, the more we will remember. There doesn’t seem to be a limit on how much we can put into memory.

***

How we build our narratives

The experience of being a human is inseparable from our ability to remember. You can’t build relationships without memories. You can’t prepare for the future if you don’t remember the past.

The memories we hold on to early on have a huge impact on the ones we retain as we progress through life. “When memories enter our brain,” the Østbys explain, “they attach themselves to similar memories: ones from the same environment, or that involve the same feeling, the same music, or the same significant moment in history. Memories seldom swim around without connections.” Thus, a memory is significantly more likely to stick around if it can attach itself to something. A new experience that has very little in common with the narrative we’ve constructed of ourselves is harder to retain in memory.

As we get older, our new memories tend to reinforce what we already think of ourselves. “Memory is self-serving,” the Østbys write. “Memories are linked to what concerns you, what you feel, what you want.”

Why is it so much easier to remember the details of a vacation or a fight we’ve had with our partner than the details of a physics lesson or the plot of a classic novel? “The fate of a memory is mostly determined by how much it means to us. Personal memories are important to us. They are tied to our hopes, our values, and our identities. Memories that contribute meaningfully to our personal autobiography prevail in our minds.” We need not beat ourselves up because we have a hard time remembering names or birthdays. Rather, we can accept that the triggers for the creation of a memory and its retention are related to how it speaks to the narrative we maintain about ourselves. This view of memory suggests that to better retain information, we can try to make knowing that information part of our identity. We don’t try to remember physics equations for the sake of it, but rather because in our personal narrative, we are someone who knows a lot about physics.

***

Memory, imagination, and fluidity

Our ability to imagine is based, in part, on our ability to remember. The connection works on two levels.

The first, the Østbys write, is that “our memories are the fuel for our imagination.” What we remember about the past informs a lot of what we can imagine about the future. Whether it’s snippets from movies we’ve seen or activities we’ve done, it’s our ability to remember the experiences we’ve had that provides the foundation for our imagination.

Second, there is a physical connection between memory and imagination. “The process that gives us vivid memories is the same as the one that we use to imagine the future.” We use the same parts of the brain when we immerse ourselves in an event from our past as we do when we create a vision for our future. Thus, one of the conclusions of Adventures in Memory is that “as far as our brains are concerned, the past and future are almost the same.” In terms of how they can feel to us, memories and the products of imagination are not that different.

The interplay between past and future, between memory and imagination, impacts the formation of memories themselves. Memory “is a living organism,” the Østbys explain, “always absorbing images, and when new elements are added, they are sewn into the original memory as seamlessly as only our imagination can do.”

One of the most important lessons from the book is to change up the analogies we use to understand memory. Memories are not like movies, exactly the same no matter how many times you watch them. Nor are they like files stored in a computer, unchanging data saved for when we might want to retrieve it. Memories, like the rest of our biology, are fluid.

“Memory is more like live theater, where there are constantly new productions of the same pieces,” the Østbys write. “Each and every one of our memories is a mix of fact and fiction. In most memories the central story is based on true events, but it’s still reconstructed every time we recall it. In these reconstructions, we fill in the gaps with probable facts. We subconsciously pick up details from a sort-of memory prop room.”

Understanding our memory more like a theater production, where the version you see in London’s West End isn’t going to be exactly the same as the one you see on Broadway, helps us let go of attaching a judgment of accuracy to what we remember. It’s okay to find out when reminiscing with friends that you have different memories of the same day. It’s also acceptable that two people will have different memories of the events leading to their divorce, or that business partners will have different memories of the terms they agreed to at the start of the partnership. The more you get used to the fluidity of your memories, the more the differences in recollections become sources of understanding instead of points of contention. What people communicate about what they remember can give you insight into their attitudes, beliefs, and values.

***

Conclusion

New memories build on the ones that are already there. The more we know, the easier it is to remember the new things we learn. But we have to be careful and recognize that our tendency is to reinforce the narrative we’ve already built. Brand new information is harder to retain, but sometimes we need to make the effort.

Finally, memories are important not only for learning and remembering but also because they form the basis of what we can imagine and create. In so many ways, we are what we remember. Accepting that our vivid memories can be very different from those of others who were in the same situation helps us reduce the conflict that comes with insisting that our memories must always be correct.

When Technology Takes Revenge

While runaway cars and vengeful stitched-together humans may be the stuff of science fiction, technology really can take revenge on us. Seeing technology as part of a complex system can help us avoid costly unintended consequences. Here’s what you need to know about revenge effects.

***

By many metrics, technology keeps making our lives better. We live longer, healthier, richer lives with more options than ever before for things like education, travel, and entertainment. Yet there is often a sense that we have lost control of our technology in many ways, and thus we end up victims of its unanticipated impacts.

Edward Tenner argues in Why Things Bite Back: Technology and the Revenge of Unintended Consequences that we often have to deal with “revenge effects.” Tenner coined this term to describe the ways in which technologies can solve one problem while creating worse problems, spawning new types of problems, or shifting the harm elsewhere. In short, they bite back.

Although Why Things Bite Back was written in the late 1990s and many of its specific examples and details are now dated, it remains an interesting lens for considering issues we face today. The revenge effects Tenner describes haunt us still. As the world becomes more complex and interconnected, it’s easy to see that the potential for unintended consequences will increase.

Thus, when we introduce a new piece of technology, it would be wise to consider whether we are interfering with a wider system. If that’s the case, we should consider what might happen further down the line. However, as Tenner makes clear, once the factors involved get complex enough, we cannot anticipate them with any accuracy.

Neither Luddite nor alarmist in nature, the notion of revenge effects can help us better understand the impact of intervening in complex systems. But we need to be careful. Although second-order thinking is invaluable, it cannot predict the future with total accuracy. Understanding revenge effects is primarily a reminder of the value of caution, not of specific risks.

***

Types of revenge effects

There are four types of revenge effects:

  1. Repeating effects: occur when more efficient processes end up forcing us to do the same things more often, meaning they don’t free up more of our time. Better household appliances have led to higher standards of cleanliness, meaning people end up spending the same amount of time—or more—on housework.
  2. Recomplicating effects: occur when processes become more and more complex as the technology behind them improves. Tenner gives the now-dated example of phone numbers becoming longer with the move away from rotary phones. A modern example might be lighting systems that need to be operated through an app, meaning a visitor cannot simply flip a switch.
  3. Regenerating effects: occur when attempts to solve a problem end up creating additional risks. Targeting pests with pesticides can make them increasingly resistant to harm or kill off their natural predators. Widespread use of antibiotics to control certain conditions has led to resistant strains of bacteria that are harder to treat.
  4. Rearranging effects: occur when costs are transferred elsewhere so risks shift and worsen. Air conditioning units on subways cool down the trains—while releasing extra heat and making the platforms warmer. Vacuum cleaners can throw dust mite pellets into the air, where they remain suspended and are more easily breathed in. Shielding beaches from waves transfers the water’s force elsewhere.

***

Recognizing unintended consequences

The more we try to control our tools, the more they can retaliate.

Revenge effects occur when the technology for solving a problem ends up making it worse due to unintended consequences that are almost impossible to predict in advance. A smartphone might make it easier to work from home, but always being accessible means many people end up working more.

Things go wrong because technology does not exist in isolation. It interacts with complex systems, meaning any problems spread far from where they begin. We can never merely do one thing.

Tenner writes: “Revenge effects happen because new structures, devices, and organisms react with real people in real situations in ways we could not foresee.” He goes on to add that “complexity makes it impossible for anyone to understand how the system might act: tight coupling spreads problems once they begin.”

Prior to the Industrial Revolution, technology typically consisted of tools that served as an extension of the user. They were not, Tenner argues, prone to revenge effects because they did not function as parts in an overall system like modern technology. He writes that “a machine can’t appear to have a will of its own unless it is a system, not just a device. It needs parts that interact in unexpected and sometimes unstable and unwanted ways.”

Revenge effects often involve the transformation of defined, localized risks into nebulous, gradual ones involving the slow accumulation of harm. Compared to visible disasters, these are much harder to diagnose and deal with.

Large localized accidents, like a plane crash, tend to prompt the creation of greater safety standards, making us safer in the long run. Small cumulative ones don’t.

Cumulative problems, compared to localized ones, aren’t easy to measure, or even to muster concern about. Tenner points to the difference between reactions in the 1990s to the risk of nuclear disasters compared to global warming. While both are revenge effects, “the risk from thermonuclear weapons had an almost built-in maintenance compulsion. The deferred consequences of climate change did not.”

Many revenge effects are the result of efforts to improve safety. “Our control of the acute has indirectly promoted chronic problems,” Tenner writes. Both X-rays and smoke alarms cause a small number of cancers each year. Although they save many more lives and avoiding them would be far riskier, we don’t get the benefits without a cost. The widespread removal of asbestos has reduced fire safety, and disturbing the material is often more harmful than leaving it in place.

***

Not all effects exact revenge

A revenge effect is not a side effect—a cost that goes along with a benefit. Sanitizing a public water supply has significant positive health outcomes. It also has the side effect of necessitating an organizational structure that can manage and monitor that supply.

Rather, a revenge effect must actually reverse the benefit for at least a small subset of users. For example, the greater ease of typing on a laptop compared to a typewriter has led to an increase in carpal tunnel syndrome and similar health consequences. It turns out that the physical effort required to press typewriter keys and move the carriage protected workers from some of the harmful effects of long periods of time spent typing.

Likewise, a revenge effect is not just a tradeoff—a benefit we forgo in exchange for some other benefit. As Tenner writes:

If legally required safety features raise airline fares, that is a tradeoff. But suppose, say, requiring separate seats (with child restraints) for infants, and charging a child’s fare for them, would lead many families to drive rather than fly. More children could in principle die from transportation accidents than if the airlines had continued to permit parents to hold babies on their laps. This outcome would be a revenge effect.

***

In support of caution

In the conclusion of Why Things Bite Back, Tenner writes:

We seem to worry more than our ancestors, surrounded though they were by exploding steamboat boilers, raging epidemics, crashing trains, panicked crowds, and flaming theaters. Perhaps this is because the safer life imposes an ever increasing burden of attention. Not just in the dilemmas of medicine but in the management of natural hazards, in the control of organisms, in the running of offices, and even in the playing of games there are, not necessarily more severe, but more subtle and intractable problems to deal with.

While Tenner does not proffer explicit guidance for dealing with the phenomenon he describes, one main lesson we can draw from his analysis is that revenge effects are to be expected, even if they cannot be predicted. This is because “the real benefits usually are not the ones that we expected, and the real perils are not those we feared.”

Chains of cause and effect within complex systems are stranger than we can often imagine. We should expect the unexpected, rather than expecting particular effects.

While we cannot anticipate all consequences, we can prepare for their existence and factor it into our estimation of the benefits of new technology. Indeed, we should avoid becoming overconfident about our ability to see the future, even when we use second-order thinking. As much as we might prepare for a variety of impacts, revenge effects may be dependent on knowledge we don’t yet possess. We should expect larger revenge effects the more we intensify something (e.g., making cars faster means worse crashes).

Before we intervene in a system, assuming it can only improve things, we should be aware that our actions can do the opposite or do nothing at all. Our estimations of benefits are likely to be more realistic if we are skeptical at first.

If we bring more caution to our attempts to change the world, we are better able to avoid being bitten.


The Observer Effect: Seeing Is Changing

The act of looking at something changes it, an effect that holds true for people, animals, even atoms. Here’s how the observer effect distorts our world and how we can get a more accurate picture.

***

We often forget to factor in the distortion of observation when we evaluate someone’s behavior. We see what they are doing as representative of their whole life. But the truth is, we all change how we act when we expect to be seen. Are you ever on your best behavior when you’re alone in your house? To get better at understanding other people, we need to consider the observer effect: observing things changes them, and some phenomena only exist when observed.

The observer effect is not universal. The moon continues to orbit whether or not we have a telescope pointed at it. But both things and people can change under observation. So, before you judge someone’s behavior, it’s worth asking whether they are changing because you are looking at them or whether their behavior is natural. People, at least, are invariably affected by observation: being watched makes us act differently.

“I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers.”

— Isaac Asimov

The observer effect in science

The observer effect pops up in many scientific fields.

In physics, Erwin Schrödinger’s famous cat highlights the power of observation. In his best-known thought experiment, Schrödinger asked us to imagine a cat placed in a box with a radioactive atom that might or might not kill it within an hour. Until the box is opened, the cat exists in a state of superposition (a combination of two states at the same time)—that is, the cat is both alive and dead. Only observing it shifts the cat permanently into one of the two states. The observation removes the cat from superposition and commits it to just one.

(Although Schrödinger meant this as a critique of the then-prevailing interpretation of quantum superposition, intended to demonstrate the absurdity of applying it to everyday objects, it has caught on in popular culture as a thought experiment about the observer effect.)
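For readers who like to see the arithmetic, here is a toy sketch of the collapse idea (our simplification, not anything from Schrödinger; the two-state model and names are hypothetical). The probability of each observed outcome is the squared magnitude of that state’s amplitude, and measurement leaves exactly one state behind:

```python
import random

def measure(amplitudes, rng=random):
    """Collapse a two-state superposition, Born-rule style.

    amplitudes: (a_alive, a_dead), with |a_alive|**2 + |a_dead|**2 == 1.
    Returns the single state an observer finds; once measured,
    the superposition is gone.
    """
    p_alive = abs(amplitudes[0]) ** 2
    return "alive" if rng.random() < p_alive else "dead"

# An equal superposition: both outcomes coexist until we look.
superposed = (2 ** -0.5, 2 ** -0.5)  # equal weight on "alive" and "dead"
print(measure(superposed))  # prints "alive" or "dead", 50/50
```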

In biology, when researchers want to observe animals in their natural habitat, it is paramount that they find a way to do so without disturbing those animals. Otherwise, the behavior they see is unlikely to be natural, because most animals (including humans) change their behavior when they are being observed. For instance, Dr. Cristian Damsa and his colleagues concluded in their paper “Heisenberg in the ER” that being observed makes psychiatric patients a third less likely to require sedation. Doctors and nurses wash their hands more when they know their hygiene is being tracked. And other studies have shown that zoo animals exhibit certain behaviors only in the presence of visitors, such as hypervigilance and repeatedly looking at the visitors.

In general, we change our behavior when we expect to be seen. Philosopher Jeremy Bentham knew this when he designed the panopticon prison in the eighteenth century, building upon an idea by his brother Samuel. The prison was designed with its cells circling a central watchtower, so inmates could never tell whether they were being watched. Bentham expected this would lead to better behavior without the need for many staff. The design never caught on for actual prisons, but the modern prevalence of CCTV is often compared to the panopticon. We never know when we’re being watched, so we act as if it’s all the time.

The observer effect, however, is twofold. Observing changes what occurs, but observing also changes our perceptions of what occurs. Let’s take a look at that next.

“How much does one imagine, how much observe? One can no more separate those functions than divide light from air, or wetness from water.”

— Elspeth Huxley

Observer bias

The effects of observation get more complex when we consider how each of us filters what we see through our own biases, assumptions, preconceptions, and other distortions. There’s a reason, after all, why double-blinding (ensuring that neither tester nor subject receives information that might influence their behavior) is the gold standard in research involving living things. Observer bias occurs when we alter what we see, either by noticing only what we expect or by behaving in ways that influence what occurs. Without intending to do so, researchers may encourage certain results, leading to changes in the ultimate outcomes.

A researcher falling prey to observer bias is more likely to make erroneous interpretations, leading to inaccurate results. For instance, in a trial for an anti-anxiety drug where researchers know which subjects receive a placebo and which receive the actual drug, they may report that the latter group seems calmer because that’s what they expect.
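To see why blinding works, here is a minimal sketch of double-blind assignment (our illustration, not a real trial protocol; all names are hypothetical). The observer records outcomes against opaque codes, and the code-to-treatment key stays with a third party until scoring is finished:

```python
import random

def assign_double_blind(participants, treatments=("drug", "placebo"), seed=None):
    """Randomly assign participants to treatment arms behind opaque codes.

    Returns (assignments, key): assignments maps participant -> code, which
    is all the observer ever sees; key maps code -> treatment and is held
    by a third party until the outcomes have been scored.
    """
    rng = random.Random(seed)
    assignments, key = {}, {}
    for i, person in enumerate(participants):
        code = f"ARM-{i:03d}"               # opaque label; reveals nothing
        key[code] = rng.choice(treatments)  # a real trial would balance arms
        assignments[person] = code
    return assignments, key

# The researcher records "ARM-002 seems calmer" without knowing whether
# ARM-002 received the drug or the placebo, so expectations can't steer
# the measurements; the key is revealed only after scoring.
assignments, key = assign_double_blind(["p1", "p2", "p3", "p4"], seed=7)
```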

The truth is, we often see what we expect to see. Our biases lead us to factor in irrelevant information when evaluating the actions of others. We also bring our past into the present and let that color our perceptions as well—so, for example, if someone has really hurt you before, you are less likely to see anything good in what they do.

The actor-observer bias

Another factor in the observer effect, and one we all fall victim to, is our tendency to attribute the behavior of others to innate personality traits. Yet we tend to attribute our own behavior to external circumstances. This is known as the actor-observer bias.

For example, a student who gets a poor grade on a test claims they were tired that day or the wording on the test was unclear. Conversely, when that same student observes a peer who performed badly on a test on which they performed well, the student judges their peer as incompetent or ill-prepared. If someone is late to a meeting with a friend, they rush in apologizing for the bad traffic. But if the friend is late, they label them as inconsiderate. When we see a friend having an awesome time in a social media post, we assume their life is fun all of the time. When we post about ourselves having an awesome time, we see it as an anomaly in an otherwise non-awesome life.

We have different levels of knowledge about ourselves and others. Because observation focuses on what is displayed, not on what preceded or motivated it, we see the full context for our own behavior but only the final outcome for other people. We need to take the time to learn the context of others’ lives before we pass judgment on their actions.

Conclusion

We can use the observer effect to our benefit. If we want to change a behavior, finding some way to ensure someone else observes it can be effective. For instance, going to the gym with a friend means they know if we don’t go, making it more likely that we stick with it. Tweeting about our progress on a project can help keep us accountable. Even installing software on our laptop that tracks how often we check social media can reduce our usage.

But if we want to get an accurate view of reality, it is important we consider how observing it may distort the results. The value of knowing about the observer effect in everyday life is that it can help us factor in the difference that observation makes. If we want to gain an accurate picture of the world, it pays to consider how we take that picture. For instance, you cannot assume that an employee’s behavior in a meeting translates to their work, or that the way your kids act at home is the same as in the playground. We all act differently when we know we are being watched.