Tag: Behavior

The Observer Effect: Seeing Is Changing

The act of looking at something changes it – an effect that holds true for people, animals, even atoms. Here’s how the observer effect distorts our world and how we can get a more accurate picture.

***

We often forget to factor in the distortion of observation when we evaluate someone’s behavior. We see what they are doing as representative of their whole life. But the truth is, we all change how we act when we expect to be seen. Are you ever on your best behavior when you’re alone in your house? To get better at understanding other people, we need to consider the observer effect: observing things changes them, and some phenomena only exist when observed.

The observer effect is not universal. The moon continues to orbit whether we have a telescope pointed at it or not. But both things and people can change under observation. So, before you judge someone’s behavior, it’s worth asking whether they are changing because you are looking at them or whether their behavior is natural. People, in particular, are invariably affected by observation: being watched makes us act differently.

“I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers.”

— Isaac Asimov

The observer effect in science

The observer effect pops up in many scientific fields.

In physics, Erwin Schrödinger’s famous cat highlights the power of observation. In his best-known thought experiment, Schrödinger asked us to imagine a cat placed in a box with a radioactive atom that might or might not kill it within an hour. Until the box is opened, the cat exists in a state of superposition (occupying two states at the same time)—that is, the cat is both alive and dead. Only when the box is opened and the cat observed does it shift permanently to one of the two states. The observation removes the cat from superposition and commits it to just one.

(Although Schrödinger meant this as a reductio ad absurdum of the Copenhagen interpretation of quantum superposition – he wanted to demonstrate the absurdity of the idea – it has caught on in popular culture as a thought experiment illustrating the observer effect.)

In biology, when researchers want to observe animals in their natural habitat, it is paramount that they find a way to do so without disturbing those animals. Otherwise, the behavior they see is unlikely to be natural, because most animals (including humans) change their behavior when they are being observed. For instance, Dr. Cristian Damsa and his colleagues concluded in their paper “Heisenberg in the ER” that being observed makes psychiatric patients a third less likely to require sedation. Doctors and nurses wash their hands more when they know their hygiene is being tracked. And other studies have shown that zoo animals only exhibit certain behaviors in the presence of visitors, such as being hypervigilant of their presence and repeatedly looking at them.

In general, we change our behavior when we expect to be seen. Philosopher Jeremy Bentham knew this when he designed the panopticon prison in the eighteenth century, building on an idea from his brother Samuel. The prison was constructed with its cells circling a central watchtower, so inmates could never tell whether they were being watched. Bentham expected this would lead to better behavior without the need for a large staff. The panopticon never caught on as an actual prison design, but the modern prevalence of CCTV is often compared to it: we never know when we’re being watched, so we act as if it’s all the time.

The observer effect, however, is twofold. Observing changes what occurs, but observing also changes our perceptions of what occurs. Let’s take a look at that next.

“How much does one imagine, how much observe? One can no more separate those functions than divide light from air, or wetness from water.”

— Elspeth Huxley

Observer bias

The effects of observation get more complex when we consider how each of us filters what we see through our own biases, assumptions, preconceptions, and other distortions. There’s a reason, after all, why double-blinding (ensuring that neither tester nor subject receives any information that might influence their behavior) is the gold standard in research involving living things. Observer bias occurs when we alter what we see, either by noticing only what we expect or by behaving in ways that influence what occurs. Without intending to do so, researchers may encourage certain results, changing the ultimate outcomes.
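The mechanics of double-blinding can be made concrete with a small sketch. Everything here is invented for illustration (the subject names, the `S000`-style labels, and the `double_blind_assign` helper are assumptions, not a real trial protocol): subjects receive neutral codes, and the key linking codes to trial arms stays sealed away from both experimenters and subjects until the study is unblinded.

```python
import random

# A minimal, hypothetical sketch of double-blind assignment.
# Subjects get neutral codes; the code-to-arm key is kept sealed.
def double_blind_assign(subjects, seed=0):
    rng = random.Random(seed)
    n = len(subjects)
    arms = ["drug"] * (n // 2) + ["placebo"] * (n - n // 2)
    rng.shuffle(arms)                                  # random, balanced allocation
    labels = {s: f"S{i:03d}" for i, s in enumerate(subjects)}
    key = {labels[s]: arm for s, arm in zip(subjects, arms)}
    return labels, key                                 # researchers see only labels

labels, key = double_blind_assign(["ana", "ben", "cara", "dan"])
print(labels["ana"])   # → S000 (a neutral code, not an arm name)
```

Because the people scoring outcomes work only with the codes, their expectations cannot leak into either the subjects’ behavior or their own interpretations.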

A researcher falling prey to observer bias is more likely to make erroneous interpretations, leading to inaccurate results. For instance, in a trial of an anti-anxiety drug where researchers know which subjects receive a placebo and which receive the actual drug, they may report that the latter group seems calmer because that’s what they expect to see.

The truth is, we often see what we expect to see. Our biases lead us to factor in irrelevant information when evaluating the actions of others. We also bring our past into the present and let that color our perceptions as well—so, for example, if someone has really hurt you before, you are less likely to see anything good in what they do.

The actor-observer bias

Another factor in the observer effect, and one we all fall victim to, is our tendency to attribute the behavior of others to innate personality traits. Yet we tend to attribute our own behavior to external circumstances. This is known as the actor-observer bias.

For example, a student who gets a poor grade on a test claims they were tired that day or the wording on the test was unclear. Conversely, when that same student observes a peer who performed badly on a test on which they performed well, the student judges their peer as incompetent or ill-prepared. If someone is late to a meeting with a friend, they rush in apologizing for the bad traffic. But if the friend is late, they label them as inconsiderate. When we see a friend having an awesome time in a social media post, we assume their life is fun all of the time. When we post about ourselves having an awesome time, we see it as an anomaly in an otherwise non-awesome life.

We have different levels of knowledge about ourselves and others. Because observation focuses on what is displayed, not what preceded or motivated it, we see the full context for our own behavior but only the final outcome for other people. We need to take the time to learn the context of others’ lives before we pass judgment on their actions.

Conclusion

We can use the observer effect to our benefit. If we want to change a behavior, finding some way to ensure someone else observes it can be effective. For instance, going to the gym with a friend means they know if we don’t go, making it more likely that we stick with it. Tweeting about our progress on a project can help keep us accountable. Even installing software on our laptop that tracks how often we check social media can reduce our usage.

But if we want to get an accurate view of reality, it is important we consider how observing it may distort the results. The value of knowing about the observer effect in everyday life is that it can help us factor in the difference that observation makes. If we want to gain an accurate picture of the world, it pays to consider how we take that picture. For instance, you cannot assume that an employee’s behavior in a meeting translates to their work, or that the way your kids act at home is the same as in the playground. We all act differently when we know we are being watched.

When Safety Proves Dangerous

Not everything we do with the aim of making ourselves safer has that effect. Sometimes, knowing there are measures in place to protect us from harm can lead us to take greater risks and cancel out the benefits. This is known as risk compensation. Understanding how it affects our behavior can help us make the best possible decisions in an uncertain world.

***

The world is full of risks. Every day we take endless chances, whether we’re crossing the road, standing next to someone with a cough on the train, investing in the stock market, or hopping on a flight.

From the moment we’re old enough to understand, people start teaching us crucial safety measures to remember: don’t touch that, wear this, stay away from that, don’t do this. And society is endlessly trying to mitigate the risks involved in daily life, from the ongoing efforts to improve car safety to signs reminding employees to wash their hands after using the toilet.

But the things we do to reduce risk don’t always make us safer. They can end up having the opposite effect. This is because we tend to change how we behave in response to our perceived safety level. When we feel safe, we take more risks. When we feel unsafe, we are more cautious.

Risk compensation means that efforts to protect ourselves can end up having a smaller effect than expected, no effect at all, or even a negative effect. Sometimes the danger is transferred to a different group of people, or a behavior modification creates new risks. Knowing how we respond to risk can help us avoid transferring danger to other more vulnerable individuals or groups.

Examples of Risk Compensation

There are many documented instances of risk compensation. One of the first comes from a 1975 paper by economist Sam Peltzman, entitled “The Effects of Automobile Safety Regulation.” Peltzman looked at the effects of new vehicle safety laws introduced several years earlier, finding that they led to no change in fatalities. While people in cars were less likely to die in accidents, pedestrians were at a higher risk. Why? Because drivers took more risks, knowing they were safer if they crashed.

Although Peltzman’s research has been both replicated and called into question over the years (there are many ways to interpret the same dataset), risk compensation is apparent in many other areas. As Andrew Zolli and Ann Marie Healy write in Resilience: Why Things Bounce Back, children who play sports involving protective gear (like helmets and knee pads) take more physical risks, and hikers who think they can be easily rescued are less cautious on the trails.

A study of taxi drivers in Munich, Germany, found that those driving vehicles with antilock brakes had more accidents than those without—unsurprising, considering they tended to accelerate faster and stop harder. Another study suggested that childproof lids on medicine bottles did not reduce poisoning rates. According to W. Kip Viscusi at Duke University, parents became more complacent with all medicines, including ones without the safer lids. Better ripcords on parachutes lead skydivers to pull them too late.

As defenses against natural disasters have improved, people have moved into riskier areas, and deaths from events like floods or hurricanes have not necessarily decreased. After helmets were introduced in American football, tackling fatalities actually increased for a few years, as players were more willing to lead with their heads (this changed with the adoption of new tackling standards). Bailouts and protective mechanisms for financial institutions may have contributed to the scale of the 2008 financial crisis, as they led banks to take greater and greater risks. There are numerous other examples.

We can easily see risk compensation play out in our lives and those of people around us. Someone takes up a healthy habit, like going to the gym, then compensates by drinking more. Having an emergency fund in place can encourage us to take greater financial risks. Wearing a face mask during a pandemic might mean you’re more willing to hang out in crowded places.

Risk Homeostasis

According to psychology professor Gerald Wilde, we all internally have a desired level of risk that varies depending on who we are and the context we are in. Our risk tolerance is like a thermostat—we take more risks if we feel too safe, and vice versa, in order to remain at our desired “temperature.” It all comes down to the costs and benefits we expect from taking on more or less risk.

The notion of risk homeostasis, although controversial, can help explain risk compensation. It means that enforcing measures to make people safer will inevitably lead to changes in behavior that maintain the amount of risk we’d like to experience, like driving faster while wearing a seatbelt. A feedback loop communicating our perceived risk helps us keep things as dangerous as we wish them to be. We calibrate our actions to how safe we’d like to be, making adjustments if it swings too far in one direction or the other.
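Wilde’s thermostat metaphor translates naturally into a toy feedback loop. This is only a sketch with illustrative numbers (the `step` function, the 0.5 target, and the gain are assumptions, not anything from Wilde’s work): a safety measure cuts baseline risk, and the loop sheds caution until perceived risk drifts back to the preferred level.

```python
def step(caution, base_risk, target=0.5, gain=0.5):
    """One tick of a risk 'thermostat': perceived risk above the target
    raises caution; feeling safer than the target lowers it."""
    perceived = base_risk * (1 - caution)       # caution reduces exposure
    return caution + gain * (perceived - target), perceived

caution = 0.5        # equilibrium when base_risk = 1.0 (perceived risk = target)
base_risk = 0.7      # a safety measure cuts baseline risk by 30%
for _ in range(30):
    caution, perceived = step(caution, base_risk)

print(round(caution, 2))    # caution falls to ~0.29 ...
print(round(perceived, 2))  # ... and perceived risk drifts back to the 0.5 target
```

The safety measure still helps, but less than its face value suggests: the behavioral response claws back part of the gain, which is exactly the pattern seen in the seatbelt and antilock-brake examples.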

What We Can Learn from Risk Compensation

We can learn many lessons from risk compensation and the research that has been done on the subject. First, safety measures are more effective the less visible they are. If people don’t know about a risk reduction, they won’t change their behavior to compensate for it. When we want to make something safer, it’s best to ensure changes go as unnoticed as possible.

Second, an effective way to reduce risk-taking behavior is to provide incentives for prudent behavior, giving people a reason to adjust their risk thermostat. Just because something seems to have become safer doesn’t mean the risk hasn’t transferred elsewhere, putting a different group of people in danger, as when seat belt laws led to more pedestrian fatalities. So, for instance, lower insurance premiums for careful drivers might prevent more fatalities than stricter road safety laws, because the premiums give drivers a reason to make positive changes to their behavior instead of shifting the risk elsewhere.

Third, we are biased towards intervention. When we want to improve a situation, our first instinct tends to be to step in and change something, anything. Sometimes it is wiser to do less, or even nothing. Changing something does not always make people safer; sometimes it just changes the nature of the danger.

Fourth, when we make a safety change, we may need to implement corresponding rules to avoid risk compensation. Football helmets made the sport more dangerous at first, but new rules about tackling helped cancel out the behavior changes because the league was realistic about the need for more than just physical protection.

Finally, making people feel less safe can actually improve their behavior. Serious injuries in car crashes are rarer when the roads are icy, even if minor incidents are more common, because drivers take more care. If we want to improve safety, we can make risks more visible through better education.

Risk compensation certainly doesn’t mean it’s a bad idea to take steps to make ourselves safer, but it does illustrate the need to be aware of the unintended consequences that arise when we interact with complex systems. We can’t always expect to achieve the changes we desire the first time around. Once we make a change, we should pay careful attention to its effects on the whole system. Sometimes it takes testing a few alternative approaches to bring us closer to the desired effect.

The Positive Side of Shame

Recently, shame has gotten a bad rap. It’s been branded as toxic and destructive. But shame can be used as a tool to effect positive change.

***

A computer science PhD candidate uncovers significant privacy-violating security flaws in large companies, then shares them with the media to attract negative coverage. Google begins marking unencrypted websites as unsafe, showing a red cross in the URL bar. A nine-year-old girl posts pictures of her school’s abysmal lunches on a blog, leading the local council to step in.

What do each of the aforementioned stories have in common? They’re all examples of shame serving as a tool to encourage structural changes.

Shame, like all emotions, exists because it conferred a meaningful survival advantage for our ancestors. It is a universal experience. The body language associated with shame — inverted shoulders, averted eyes, pursed lips, bowed head, and so on — occurs across cultures. Even blind people exhibit the same body language, indicating it is innate, not learned. We would not waste our time and energy on shame if it wasn’t necessary for survival.

Shame enforces social norms. For our ancestors, the ability to maintain social cohesion was a matter of life or death. Take the almost ubiquitous social rule that states stealing is wrong. If a person is caught stealing, they are likely to feel some degree of shame. While this behavior may not threaten anyone’s survival today, in the past it could have been a sign that a group’s ability to cooperate was in jeopardy. Living in small groups in a harsh environment meant full cooperation was essential.

Through the lens of evolutionary biology, shame evolved to encourage adherence to beneficial social norms. This is backed up by the fact that shame is more prevalent in collectivist societies where people spend little to no time alone than it is in individualistic societies where people live more isolated lives.

Jennifer Jacquet argues in Is Shame Necessary?: New Uses For An Old Tool that we’re not quite through with shame yet. In fact, if we adapt it for the current era, it can help us to solve some of the most pressing problems we face. Shame gives the weak greater power. The difference is that we must shift shame from individuals to institutions, organizations, and powerful individuals. Jacquet states that her book “explores the origins and future of shame. It aims to examine how shaming—exposing a transgressor to public disapproval—a tool many of us find discomforting, might be retrofitted to serve us in new ways.”

Guilt vs. shame

Jacquet begins the book with the story of Sam LaBudde, a young man who in the 1980s became determined to expose practices in the tuna-fishing industry that led to the deaths of dolphins. Tuna is often caught with purse seines, large nets that close around a shoal of fish. Because dolphins tend to swim alongside tuna, they are easily caught in the nets, where they either die or suffer serious injuries.

LaBudde got a job on a tuna-fishing boat and covertly filmed dolphins dying from their injuries. For months, he hid his true intentions from the crew, spending each day both dreading and hoping for the death of a dolphin. The footage went the 1980s equivalent of viral, showing up in the media all over the world and attracting the attention of major tuna companies.

Still a child at the time, Jacquet was horrified to learn of the consequences of the tuna her family ate. She recalls it as one of her first experiences of shame related to consumption habits. Jacquet persuaded her family to boycott canned tuna altogether. So many others did the same that companies launched the “dolphin-safe” label, which ostensibly indicated compliance with guidelines intended to reduce dolphin deaths. Jacquet returned to eating tuna and thought no more of it.

The campaign to end dolphin deaths in the tuna-fishing industry was futile, however, because it was built upon guilt rather than shame. Jacquet writes, “Guilt is a feeling whose audience and instigator is oneself, and its discomfort leads to self-regulation.” Hearing about dolphin deaths made consumers feel guilty about their fish-buying habits, which conflicted with their ethical values. Those who felt guilty could deal with it by purchasing supposedly dolphin-safe tuna—provided they had the means to potentially pay more and the time to research their choices. A better approach might have been for the videos to focus on tuna companies, giving the names of the largest offenders and calling for specific change in their policies.

But individuals changing their consumption habits did not stop dolphins from dying. It failed to bring about a structural change in the industry. This, Jacquet later realized, was part of a wider shift in environmental action. She explains that it became more about consumers’ choices:

As the focus shifted from supply to demand, shame on the part of corporations began to be overshadowed by guilt on the part of consumers—as the vehicle for solving social and environmental problems. Certification became more and more popular and its rise quietly suggested that responsibility should fall more to the individual consumer rather than to political society. . . . The goal became not to reform entire industries but to alleviate the consciences of a certain sector of consumers.

Shaming, as Jacquet defines it, is about the threat of exposure, whereas guilt is personal. Shame is about the possibility of an audience. Imagine someone were to send a print-out of your internet search history from the last month to your best friend, mother-in-law, partner, or boss. You might not have experienced any guilt making the searches, but even the idea of them being exposed is likely shame-inducing.

Switching the focus of the environmental movement from shame to guilt was, at best, a distraction. It put the responsibility on individuals, even though small actions like turning off the lights count for little. Guilt is a more private emotion, one that arises regardless of exposure. It’s what you feel when you’re not happy about something you did, whereas shame is what you feel when someone finds out. Jacquet writes, “A 2013 research paper showed that just ninety corporations (some of them state-owned) are responsible for nearly two-thirds of historic carbon dioxide and methane emissions; this reminds us that we don’t all share the blame for greenhouse gas emissions.” Guilt doesn’t work because it doesn’t change the system. Taking this into account, Jacquet believes it is time for us to bring back shame, “a tool that can work more quickly and at larger scales.”

The seven habits of effective shaming

So, if you want to use shame as a force for good, as an individual or as part of a group, how can you do so in an effective manner? Jacquet offers seven pointers.

Firstly, “The audience responsible for the shaming should be concerned with the transgression.” It should be something that impacts them so they are incentivized to use shaming to change it. If it has no effect on their lives, they will have little reason to shame. The audience must be the victim. For instance, smoking rates are shrinking in many countries. Part of this may relate to the tendency of non-smokers to shame smokers. The more the former group grows, the greater their power to shame. This works because second-hand smoke impacts their health too, as do indirect tolls like strain on healthcare resources and having to care for ill family members. As Jacquet says, “Shaming must remain relevant to the audience’s norms and moral framework.”

Second, “There should be a big gap between the desired and actual behavior.” The smaller the gap, the less effective the shaming will be. A mugger stealing a handbag from an elderly lady is one thing. A fraudster defrauding thousands of retirees out of their savings is quite another. We are predisposed to fairness in general and become quite riled up when unfairness is significant. In particular, Jacquet observes, we take greater offense when it is the fault of a small group, such as a handful of corporations being responsible for the majority of greenhouse gas emissions. It’s also a matter of contrast. Jacquet cites her own research, which finds that “the degree of ‘bad’ relative to the group matters when it comes to bad apples.” The greater the contrast between the behavior of those being shamed and the rest of the group, the stronger the annoyance will be. For instance, the worse the level of pollution for a corporation is, the more people will shame it.

Third, “Formal punishment should be missing.” Shaming is most effective when it is the sole possible avenue for punishment and the transgression would otherwise go ignored. This ignites our sense of fury at injustice. Jacquet points out that the reason shaming works so well in international politics is that it is often a replacement for formal methods of punishment. If a nation commits major human rights abuses, it is difficult for another nation to use the law to punish them, as they likely have different laws. But revealing and drawing attention to the abuses may shame the nation into stopping, as they do not want to look bad to the rest of the world. When shame is the sole tool we have, we use it best.

Fourth, “The transgressor should be sensitive to the source of shaming.” The shamee must consider themselves subject to the same social norms as the shamer. Shaming an organic grocery chain for stocking unethically produced meat would be far more effective than shaming a fast-food chain for the same thing. If the transgressor sees themselves as subject to different norms, they are unlikely to be concerned.

Fifth, “The audience should trust the source of the shaming.” The shaming must come from a respectable, trustworthy, non-hypocritical source. If it does not, its impact is likely to be minimal. A news outlet that only shames one side of the political spectrum on a cross-spectrum issue isn’t going to have much impact.

Sixth, “Shaming should be directed where possible benefits are greatest.” We all have a limited amount of attention and interest in shaming. It should only be applied where it can have the greatest possible benefits and used sparingly, on the most serious transgressions. Otherwise, people will become desensitized, and the shaming will be ineffective. Wherever possible, we should target shaming at institutions, not individuals. Effective shaming focuses on the powerful, not the weak.

Seventh, “Shaming should be scrupulously implemented.” Shaming needs to be carried out consistently. The threat can be more useful than the act itself, which is why it may need implementing on a regular basis. For instance, an annual report on the companies guilty of the most pollution is more meaningful than a one-off report. Companies know to anticipate it and preemptively change their behavior. Jacquet explains that “shame’s performance is optimized when people reform their behavior in response to its threat and remain part of the group. . . . Ideally, shaming creates some friction but ultimately heals without leaving a scar.”

To summarize, Jacquet writes: “When shame works without destroying anyone’s life, when it leads to reform and reintegration rather than fight or flight, or, even better, when it acts as a deterrent against bad behavior, shaming is performing optimally.”

***

Due to our negative experiences with shame on a personal level, we may be averse to viewing it in the light Jacquet describes: as an important and powerful tool. But “shaming, like any tool, is on its own amoral and can be used to any end, good or evil.” The way we use it is what matters.

According to Jacquet, we should not use shame to target transgressions that have minimal impact or are the fault of individuals with little power. We should use it when the outcome will be a broader benefit for society and when formal means of punishment have been exhausted. It’s important the shaming be proportional and done intentionally, not as a means of vindication.

Is Shame Necessary? is a thought-provoking read and a reminder of the power we have as individuals to contribute to meaningful change in the world. One way is to rethink how we view shame.

Choosing your Choice Architect(ure)

“Nothing will ever be attempted
if all possible objections must first be overcome.”

— Samuel Johnson

***

In Nudge, Richard Thaler and Cass Sunstein coin the terms “choice architecture” and “choice architect.” For them, if you have the ability to influence the choices other people make, you are a choice architect.

Considering the number of interactions we have every day, it is easy to argue that we are all choice architects at some point. The inverse is also true: we are all constantly wandering through someone else’s choice architecture.

Let’s take a look at a few of the principles of good choice architecture, so we can get a better idea of when someone is trying to nudge us. We can then weigh this information when making our own decisions.

Defaults

Thaler and Sunstein start with a discussion on “defaults” that are commonly offered to us:

For reasons we have discussed, many people will take whatever option requires the least effort, or the path of least resistance. Recall the discussion of inertia, status quo bias, and the ‘yeah, whatever’ heuristic. All these forces imply that if, for a given choice, there is a default option — an option that will obtain if the chooser does nothing — then we can expect a large number of people to end up with that option, whether or not it is good for them. And as we have also stressed, these behavioral tendencies toward doing nothing will be reinforced if the default option comes with some implicit or explicit suggestion that it represents the normal or even the recommended course of action.

When making decisions, people will often take the option that requires the least effort, the path of least resistance. This makes sense: it’s not just a matter of laziness; we only have so many hours in a day. Unless you feel particularly strongly about a choice, if putting little to no effort toward it moves you forward (or at least doesn’t noticeably set you back), that is what you are likely to do. Loss aversion plays a role as well: if we feel the consequences of making a poor choice are high, we may simply decide to do nothing.

Inertia is another reason: If the ship is currently sailing forward, it can often take a lot of time and effort just to slightly change course.

You have likely seen many examples of inertia at play in your work environment, and this isn’t necessarily a bad thing.

Sometimes we need that ship to just steadily move forward. The important bit is to realize when this is factoring into your decisions, or more specifically, when this knowledge is being used to nudge you into making specific choices.

Let’s think about some of your monthly recurring bills. While you might not be reading that magazine or going to the gym, you’re still paying for the ability to use that good or service. If you weren’t being auto-renewed monthly, what is the chance that you would put the effort into renewing that subscription or membership? Much lower, right? Publishers and gym owners know this, and they know you don’t want to go through the hassle of cancelling either, so they make that difficult, too. (They understand well our tendency to want to travel the path of least resistance and avoid conflict.)

This is also where they will imply that the default option is the recommended course of action. It sounds like this:

“We’re sorry to hear you no longer want the magazine, Mr. Smith. You know, more than half of the Fortune 500 companies have a monthly subscription to magazine X, but we understand if it’s not something you’d like to do at the moment.”

or

“Mr. Smith, we are sorry to hear that you want to cancel your membership at GymX. We understand if you can’t make your health a priority at this point, but we’d love to see you back sometime soon. We see this all the time; these days everyone is so busy. But I’m happy to say we are noticing a shift where people are starting to make time for themselves, especially in your demographic…”

(Just cancel them. You’ll feel better. We promise.)

The Structure of Complex Choices

We live in a world of reviews. Product reviews, corporate reviews, movie reviews… When was the last time you bought a phone or a car before checking the reviews? When was the last time that you hired an employee without checking out their references? 

Thaler and Sunstein call this Collaborative Filtering and explain it as follows:

You use the judgements of other people who share your tastes to filter through the vast number of books or movies available in order to increase the likelihood of picking one you like. Collaborative filtering is an effort to solve a problem of choice architecture. If you know what people like you tend to like, you might well be comfortable in selecting products you don’t know, because people like you tend to like them. For many of us, collaborative filtering is making difficult choices easier.

While collaborative filtering does a great job of making difficult choices easier, we have to remember that companies know we will use this tool and will try to manipulate it. We just have to look at the information critically, compare multiple sources, and take some time to review the reviewers.
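To make the idea concrete, here is a minimal sketch of collaborative filtering in Python. The data, the similarity measure (counting shared items rated within a point of each other), and all names are hypothetical, chosen only to illustrate the "people like you tend to like them" logic, not any production recommender.

```python
def recommend(ratings, user):
    """Suggest unseen items liked by the rater most similar to `user`.

    ratings maps each person to their {item: score} dict. Similarity
    here is a toy measure: the number of shared items on which two
    raters score within one point of each other.
    """
    mine = ratings[user]

    def similarity(other):
        shared = set(mine) & set(ratings[other])
        return sum(1 for item in shared
                   if abs(mine[item] - ratings[other][item]) <= 1)

    closest = max((u for u in ratings if u != user), key=similarity)
    # Recommend what the taste-twin rated highly and `user` hasn't seen.
    return sorted(item for item, score in ratings[closest].items()
                  if item not in mine and score >= 4)

ratings = {
    "you":   {"Movie A": 5, "Movie B": 4},
    "alice": {"Movie A": 5, "Movie B": 4, "Movie C": 5},
    "bob":   {"Movie A": 1, "Movie B": 2, "Movie D": 5},
}
picks = recommend(ratings, "you")  # alice shares your tastes, bob doesn't
```

Real systems use far richer similarity measures, but the structure is the same: find raters who agree with you, then borrow their judgments.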

These techniques are useful for decisions of a certain scale and complexity: when the alternatives are understood and few enough in number. Once the options grow past a certain point, however, we need additional tools to make the right decision.

One strategy to use is what Amos Tversky (1972) called ‘elimination by aspects.’ Someone using this strategy first decides what aspect is most important (say, commuting distance), establishes a cutoff level (say, no more than a thirty-minute commute), then eliminates all the alternatives that do not come up to this standard. The process is repeated, attribute by attribute (no more than $1,500 per month; at least two bedrooms; dogs permitted), until either a choice is made or the set is narrowed down enough to switch over to a compensatory evaluation of the ‘finalists.’

This is a very useful tool if you have a good idea of which attributes are of most value to you.
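Elimination by aspects is mechanical enough to sketch in a few lines of Python. The apartments, cutoffs, and attribute names below are invented purely to mirror Tversky's example; the point is the structure of the procedure, not the data.

```python
def eliminate_by_aspects(options, criteria):
    """Filter options attribute by attribute, in priority order.

    criteria is an ordered list of (attribute, predicate) pairs,
    most important aspect first. Options failing a cutoff are
    eliminated; we stop once one finalist (or none) remains.
    """
    remaining = list(options)
    for attribute, passes in criteria:
        remaining = [o for o in remaining if passes(o[attribute])]
        if len(remaining) <= 1:
            break  # a choice has effectively been made
    return remaining

apartments = [
    {"name": "A", "commute_min": 25, "rent": 1400, "bedrooms": 2},
    {"name": "B", "commute_min": 40, "rent": 1200, "bedrooms": 3},
    {"name": "C", "commute_min": 20, "rent": 1600, "bedrooms": 2},
]

criteria = [
    ("commute_min", lambda v: v <= 30),    # no more than a 30-minute commute
    ("rent",        lambda v: v <= 1500),  # no more than $1,500 per month
    ("bedrooms",    lambda v: v >= 2),     # at least two bedrooms
]

finalists = eliminate_by_aspects(apartments, criteria)
```

Note how much the outcome depends on the ordering of the criteria: putting rent first would eliminate a different apartment. That ordering is exactly what choice architects try to influence.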

When using these techniques, we have to be mindful that the companies trying to sell us goods have spent a lot of time and money figuring out which attributes matter most to us as well.

For example, if you were to shop for an SUV, you would notice that they all now seem to share a specific set of attributes (engine options, towing options, seating options, storage options). Manufacturers are trying to nudge you not to eliminate them from your list. This forces you to do deeper research or, better yet for them, to walk into dealerships, where salespeople will try to inflate the importance of those attributes (which they do best).

They also try to call things new names as a means to differentiate themselves and get onto your list. What do you mean our competitors don’t have FLEXfuel?

Incentives

Incentives are so ubiquitous in our lives that it’s very easy to overlook them. Unfortunately, overlooking them can lead us to make poor decisions.

Thaler and Sunstein believe this is tied to how salient the incentive is.

The most important modification that must be made to a standard analysis of incentives is salience. Do the choosers actually notice the incentives they face? In free markets, the answer is usually yes, but in important cases the answer is no.

Consider the example of members of an urban family deciding whether to buy a car. Suppose their choices are to take taxis and public transportation or to spend ten thousand dollars to buy a used car, which they can park on the street in front of their home. The only salient costs of owning this car will be the weekly stops at the gas station, occasional repair bills, and a yearly insurance bill. The opportunity cost of the ten thousand dollars is likely to be neglected. (In other words, once they purchase the car, they tend to forget about the ten thousand dollars and stop treating it as money that could have been spent on something else.) In contrast, every time the family uses a taxi the cost will be in their face, with the meter clicking every few blocks. So behavioral analysis of the incentives of car ownership will predict that people will underweight the opportunity costs of car ownership, and possibly other less salient aspects such as depreciation, and may overweight the very salient costs of using a taxi.

The problems here are relatable and easily solved: if the family above had written down all the numbers for taxis, public transportation, and car ownership, it would have been much harder for them to underweight the less salient costs of any of their choices. (At least if cost is the attribute they value most.)
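Here is what "writing down all the numbers" might look like for the family in Thaler and Sunstein's example. Every figure is hypothetical, chosen only to show how making the hidden costs explicit puts the salient and non-salient items on equal footing.

```python
# Hypothetical annual figures for the car-vs-taxi decision.
car_price = 10_000
years_kept = 5
forgone_return = 0.05   # annual return the $10,000 could have earned elsewhere

car_annual = (
    car_price / years_kept          # depreciation -- rarely "felt" after purchase
    + car_price * forgone_return    # opportunity cost of the money tied up
    + 1_200                         # gas: weekly fill-ups (salient)
    + 800                           # occasional repair bills (salient)
    + 1_000                         # yearly insurance bill (salient)
)

taxi_annual = 10 * 52 * 12          # ~10 rides a week at $12 each -- very salient

print(f"Car:  ${car_annual:,.0f} per year")
print(f"Taxi: ${taxi_annual:,.0f} per year")
```

Once depreciation and opportunity cost appear as line items next to the taxi meter, the comparison is honest; leave them off, and the car looks far cheaper than it is.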

***

This isn’t an exhaustive list of all the daily nudges we face, but it’s a good start, and some important, translatable themes emerge:

  • Realize when you are wandering around someone’s choice architecture.
  • Do your homework.
  • Develop strategies to help you make decisions when you are being nudged.

 

Still Interested? Buy, and most importantly read, the whole book. Also, check out our other post on some of the Biases and Blunders covered in Nudge.

The Fundamental Attribution Error: Why Predicting Behavior is so Hard

The Fundamental Attribution Error refers to a cognitive bias: our belief that the way people behave in one situation carries consistently over to the way they behave in other situations.

We tend to assume that the way people behave is the result of their innate characteristics and overrate the influence of their personality. We underrate the influence of circumstances and how they can impact people’s behavior.

In this post, we’ll look at how the Fundamental Attribution Error works, how it misleads us, and how we can avoid this error. We’ll draw upon the work of noted psychologists and experts in the field and consider what ‘character’ really means in this context.

Read on to learn more about one of the biggest reasoning errors you might be making.

***
“Psychologists refer to the inappropriate use of dispositional
explanation as the fundamental attribution error, that is,
explaining situation-induced behavior as caused by
enduring character traits of the agent.”
— Jon Elster

***

Think of a person you know well, perhaps a partner or close friend. How would you define their ‘character’? What traits would you say are fundamentally them? 

Now try imagining that person in different situations. How might they act if their flight to a conference was delayed by six hours? What would they do if they came home and found a sick stray animal on their doorstep? What would they do if they dropped their phone down a gutter?

You can probably imagine with ease how the person you have in mind would behave. We all do this; we make assertions about a person’s character, then we expect those things to carry over to every area of their lives. We label someone as ‘moral’ or ‘honest’ or ‘naive’ or any of countless labels. Then we expect that someone we label as ‘honest’ in one area will be honest in every area. Or that someone who is ‘naive’ about one thing is naive about everything.

Old-time folk psychology supports the notion that character is consistent. As social and political theorist Jon Elster writes in his wonderful book Explaining Social Behavior, folk wisdom suggests that predicting behavior is easy. Simply figure out someone’s character and you’ll know how to predict or explain everything about them: 

“People are often assumed to have personality traits (introvert, timid, etc.) as well as virtues (honesty, courage, etc.) or vices (the seven deadly sins, etc.). In folk psychology, these features are assumed to be stable over time and across situations. Proverbs in all languages testify to this assumption. “Who tells one lie will tell a hundred.” “Who lies also steals.” “Who steals an egg will steal an ox.” “Who keeps faith in small matters, does so in large ones.” “Who is caught red-handed once will always be distrusted.” If folk psychology is right, predicting and explaining behavior should be easy.

“A single action will reveal the underlying trait or disposition and allow us to predict behavior on an indefinite number of other occasions when the disposition could manifest itself. The procedure is not tautological, as it would be if we took cheating on an exam as evidence of dishonesty and then used the trait of dishonesty to explain the cheating. Instead, it amounts to using cheating on an exam as evidence for a trait (dishonesty) that will also cause the person to be unfaithful to a spouse. If one accepts the more extreme folk theory that all virtues go together, the cheating might also be used to predict cowardice in battle or excessive drinking.”

Believing that a single action can ‘speak volumes’ about someone’s character is a natural and tempting way to approach understanding others. If you’ve spent much time dating, you’ve probably received advice about small things that could indicate a prospective partner is not a great person, like how they speak to wait staff or even how they speak to their Alexa. Yet this advice rarely holds up in practice. It’s impossible to know whether someone will be a good partner based on a single action.

The problem is, we’re often wrong when we think we know someone’s character and can use it to make predictions. Character, as a concept, is hard to pin down in any area.

***

Appearances can be deceiving

In fact, our tendency to pick up on small details as indicators of someone’s character can backfire. We see that someone seems good in one area and assume that quality carries across to others. Imagine you’re interviewing a financial advisor. He shows up on time. He’s wearing a nice suit. He buys you lunch. He’s polite and friendly.

Will he handle your money correctly? You might think, based on the aforementioned factors, that he will. But in reality, his ability to manage his time or pick out a well-fitting suit has no relation to his money management skills. The shiny cuff links are not a sign of overall ‘good character.’

Appearances can be deceiving. The study of history shows us that behavior in one context does not always correlate to behavior in another. Our actions are as much the product of circumstances as of anything innate. 

Case in point: US President Lyndon Johnson. He was a bully and a liar. As a young man, he stole an election. But he also fought like hell to pass the Civil Rights Act, thereby outlawing discrimination based on race, religion, sex and other factors. Almost no other politician could have done that. Clearly, we cannot categorically say Johnson was a good or bad person. He had both positive and negative attributes depending on the context he was in. 

Another powerful and complex man was Henry Ford, of Ford Motors. We owe him a lot. He streamlined production of the modern automobile and made it affordable to the masses. He paid fairer wages to his employees and treated them better than was standard at the time. But Ford was also known for his antisemitism.

Jon Elster goes on to give some examples from the music industry regarding impulsivity versus discipline:

“The jazz musician Charlie Parker was characterized by a doctor who knew him as “a man living from moment to moment. A man living for the pleasure principle, music, food, sex, drugs, kicks, his personality arrested at an infantile level.” Another great jazz musician, Django Reinhardt, had an even more extreme present-oriented attitude in his daily life, never saving any of his substantial earnings, but spending them on whims or on expensive cars, which he quickly proceeded to crash. In many ways he was the incarnation of the stereotype of “the Gypsy.” 

Yet you do not become a musician of the caliber of Parker and Reinhardt if you live in the moment in all respects. Proficiency takes years of utter dedication and concentration. In Reinhardt’s case, this was dramatically brought out when he damaged his left hand severely in a fire and retrained himself so that he could achieve more with two fingers than anyone else with four. If these two musicians had been impulsive and carefree across the board — if their “personality” had been consistently “infantile” — they could never have become such consummate artists.”

***

Once you notice the fundamental attribution error, you can see it everywhere. Hiring is difficult because we cannot expect a person’s behavior in an interview to carry over to their behavior on the job. An autistic person, for instance, might struggle to explain themselves in an interview but be incredible at their work. Likewise, a parent may refuse to believe their child acts out at school because they are well behaved at home. A religious teacher may preach honesty while cheating on their spouse.  

Jon Elster describes a social psychology experiment that demonstrates how our sense of the right way to behave in one situation can evaporate in another:

“In another experiment, theology students were told to prepare themselves to give a brief talk in a nearby building. One-half were told to build the talk around the Good Samaritan parable(!), whereas the others were given a more neutral topic. One group was told to hurry since the people in the other building were waiting for them, whereas another was told that they had plenty of time. On their way to the other building, subjects came upon a man slumping in the doorway, apparently in distress. Among the students who were told they were late, only 10 percent offered assistance; in the other group, 63 percent did so. The group that had been told to prepare a talk on the Good Samaritan was not more likely to behave as one. Nor was the behavior of the students correlated with answers to a questionnaire intended to measure whether their interest in religion was due to the desire for personal salvation or to a desire to help others. The situational factor — being hurried or not — had much greater explanatory power than any dispositional factor.”

The people involved in the experiment no doubt wanted to be Good Samaritans and thought of themselves as good people. But the incentive of avoiding being late and facing the shame of people waiting for them overrode that. So much for character!

As Elster writes, “Behavior is often no more stable than the situations that shape it.” We can’t disregard the notion of character entirely, of course. Elster’s point concerns specific tendencies, which do not carry from situation to situation; general ones might. We need to understand character as the result of specific interactions between people and situations. We should pay attention to the interplay between the situation, the incentives, and the person instead of ascribing broad character traits. The result is a much better understanding of human nature.

Want More? Check out our ever-growing database of mental models.

 

Our Genes and Our Behavior

“But now we are starting to show genetic influence on individual differences using DNA. DNA is a game changer; it’s a lot harder to argue with DNA than it is with a twin study or an adoption study.”
— Robert Plomin

***

It’s not controversial to say that our genetics help explain our physical traits. Tall parents will, on average, have tall children. Overweight parents will, on average, have overweight children. Irish parents have Irish-looking kids. This is true to the point of banality, and only the willfully ignorant would dispute it.

It’s slightly more controversial to talk about genes influencing behavior. For a long time, it was denied entirely. For most of the 20th century, the “experts” in human behavior had decided that “nurture” beat “nature” with a score of 100-0. Particularly influential was the child’s early life — the way their parents treated them in the womb and throughout early childhood. (Thanks Freud!)

So, where are we at now?

Genes and Behavior

Developmental scientists and behavioral scientists eventually got to work with twin studies and adoption studies, which tended to show that certain traits were almost certainly heritable and not reliant on environment, thanks to the natural controlled experiments of twins separated at birth. (This eventually provided fodder for Judith Rich Harris’s wonderful work on development and personality.)

All throughout, the geneticists, starting with Gregor Mendel and his peas, kept on working. As behavioral geneticist Robert Plomin explains, the genetic camp split early on. Some people wanted to understand the gene itself in detail, using very simple traits to figure it out (eye color, long or short wings, etc.) and others wanted to study the effect of genes on complex behavior, generally:

People realized these two views of genetics could come together. Nonetheless, the two worlds split apart because Mendelians became geneticists who were interested in understanding genes. They would take a convenient phenotype, a dependent measure, like eye color in flies, just something that was easy to measure. They weren’t interested in the measure, they were interested in how genes work. They wanted a simple way of seeing how genes work.

By contrast, the geneticists studying complex traits—the Galtonians—became quantitative geneticists. They were interested in agricultural traits or human traits, like cardiovascular disease or reading ability, and would use genetics only insofar as it helped them understand that trait. They were behavior centered, while the molecular geneticists were gene centered. The molecular geneticists wanted to know everything about how a gene worked. For almost a century these two worlds of genetics diverged.

Eventually, the two began to converge. One camp (the gene people) figured out that once we could sequence the genome, they might be able to understand more complicated behavior by looking directly at genes in specific people with unique DNA, and contrasting them against one another.

The reason why this whole gene-behavior game is hard is because, as Plomin makes clear, complex traits like intelligence are not like eye color. There’s no “smart gene” — it comes from the interaction of thousands of different genes and can occur in a variety of combinations. Basic Mendel-style counting (the sort of dominant/recessive eye color gene thing you learned in high school biology) doesn’t work in analyzing the influence of genes on complex traits:

The word gene wasn’t invented until 1903. Mendel did his work in the mid-19th century. In the early 1900s, when Mendel was rediscovered, people finally realized the impact of what he did, which was to show the laws of inheritance of a single gene. At that time, these Mendelians went around looking for Mendelian 3:1 segregation ratios, which was the essence of what Mendel showed, that inheritance was discrete. Most of the socially, behaviorally, or agriculturally important traits aren’t either/or traits, like a single-gene disorder. Huntington’s disease, for example, is a single-gene dominant disorder, which means that if you have that mutant form of the Huntington’s gene, you will have Huntington’s disease. It’s necessary and sufficient. But that’s not the way complex traits work.

The importance of genetics is hard to overstate, but until the right technology came along, we could only observe it indirectly. A study might have shown that 50% of the variance in cognitive ability was due to genetics, but we had no idea which specific genes, in which combinations, actually produced smarter people.

But the Moore’s-law-style improvement in genetic sequencing means that we can now map entire genomes cheaply and quickly. With that, geneticists have a lot of data to work with and a lot of correlations to begin sussing out. The good thing about finding strong correlations between genes and human traits is that we know which side is causative: the gene! Obviously, your reading ability doesn’t cause you to have certain DNA; it must be the other way around. So “Big Data”-style screening is extremely useful, once we get a little better at it.

***

The problem is that, so far, the successes have been modest. There are millions of base-pair variants to check. As Plomin points out, we can account for only about 20% of the genetic influence on something as simple as height, which we know is about 90% heritable. Complex traits like schizophrenia are going to take a lot of work:

We’ve got to be able to figure out where the so-called missing heritability is, that is, the gap between the DNA variants that we are able to identify and the estimates we have from twin and adoption studies. For example, height is about 90 percent heritable, meaning, of the differences between people in height, about 90 percent of those differences can be explained by genetic differences. With genome-wide association studies, we can account for 20 percent of the variance of height, or a quarter of the heritability of height. That’s still a lot of missing heritability, but 20 percent of the variance is impressive.

With schizophrenia, for example, people say they can explain 15 percent of the genetic liability. The jury is still out on how that translates into the real world. What you want to be able to do is get this polygenic score for schizophrenia that would allow you to look at the entire population and predict who’s going to become schizophrenic. That’s tricky because the studies are case-control studies based on extreme, well-diagnosed schizophrenics, versus clean controls who have no known psychopathology. We’ll know soon how this polygenic score translates to predicting who will become schizophrenic or not.
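The "missing heritability" arithmetic in the passage above can be made explicit. Using only the height figures Plomin cites:

```python
heritability = 0.90    # twin/adoption estimate: share of height variance due to genes
gwas_variance = 0.20   # share of variance genome-wide association studies can identify

share_found = gwas_variance / heritability   # fraction of the heritability located
missing = heritability - gwas_variance       # the "missing heritability" gap

print(f"Heritability located so far: {share_found:.0%}")   # roughly a quarter
print(f"Missing heritability: {missing:.0%} of total variance")
```

So identified variants explain about 20 percentage points of a 90-point heritable signal, which is why Plomin calls 20% both "impressive" and "a lot of missing heritability" at once.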

It brings up an interesting question that gets us back to the beginning of the piece: if we know that genetics influences some complex behavioral traits (and we do), and if, with the continuing progress of science and technology, we can sequence a baby’s genome and predict to a certain extent their reading level, facility with math, facility with social interaction, and so on, do we do it?

Well, we can’t until there is general recognition that genes do indeed influence behavior and do have predictive power for how children perform. So far, the track record of getting educators to see that it’s all quite real is pretty bad. As with the Freudians before, there’s a resistance to the “nature” aspect of the debate, probably influenced by some strong ideologies:

If you look at the books and the training that teachers get, genetics doesn’t get a look-in. Yet if you ask teachers, as I’ve done, why they think children are so different in their ability to learn to read, they know that genetics is important. When it comes to governments and educational policymakers, the knee-jerk reaction is that if kids aren’t doing well, you blame the teachers and the schools; if that doesn’t work, you blame the parents; if that doesn’t work, you blame the kids because they’re just not trying hard enough. An important message for genetics is that you’ve got to recognize that children are different in their ability to learn. We need to respect those differences because they’re genetic. Not that we can’t do anything about it.

It’s like obesity. The NHS is thinking about charging people to be fat because, like smoking, they say it’s your fault. Weight is not as heritable as height, but it’s highly heritable. Maybe 60 percent of the differences in weight are heritable. That doesn’t mean you can’t do anything about it. If you stop eating, you won’t gain weight, but given the normal life in a fast-food culture, with our Stone Age brains that want to eat fat and sugar, it’s much harder for some people.

We need to respect the fact that genetic differences are important, not just for body mass index and weight, but also for things like reading disability. I know personally how difficult it is for some children to learn to read. Genetics suggests that we need to have more recognition that children differ genetically, and to respect those differences. My grandson, for example, had a great deal of difficulty learning to read. His parents put a lot of energy into helping him learn to read. We also have a granddaughter who taught herself to read. Both of them now are not just learning to read but reading to learn.

Genetic influence is just influence; it’s not deterministic like a single gene. At government levels—I’ve consulted with the Department for Education—I don’t think they’re as hostile to genetics as I had feared, they’re just ignorant of it. Education just doesn’t consider genetics, whereas teachers on the ground can’t ignore it. I never get static from them because they know that these children are different when they start. Some just go off on very steep trajectories, while others struggle all the way along the line. When the government sees that, they tend to blame the teachers, the schools, or the parents, or the kids. The teachers know. They’re not ignoring this one child. If anything, they’re putting more energy into that child.

It’s frustrating for Plomin because he knows that eventually DNA mapping will get good enough that real, and helpful, predictions will be possible. We’ll be able to target kids early enough to make real differences — earlier than problems actually manifest — and hopefully change the course of their lives for the better. But so far, no dice.

Education is the last backwater of anti-genetic thinking. It’s not even anti-genetic. It’s as if genetics doesn’t even exist. I want to get people in education talking about genetics because the evidence for genetic influence is overwhelming. The things that interest them—learning abilities, cognitive abilities, behavior problems in childhood—are the most heritable things in the behavioral domain. Yet it’s like Alice in Wonderland. You go to educational conferences and it’s as if genetics does not exist.

I’m wondering about where the DNA revolution will take us. If we are explaining 10 percent of the variance of GCSE scores with a DNA chip, it becomes real. People will begin to use it. It’s important that we begin to have this conversation. I’m frustrated at having so little success in convincing people in education of the possibility of genetic influence. It is ignorance as much as it is antagonism.

Here’s one call for more reality recognition.

***

Still Interested? Check out a book by John Brockman of Edge.org with a curated collection of articles published on genetics.