
Mental Model: Bias from Conjunction Fallacy

Daniel Kahneman and Amos Tversky spent decades researching the patterns behind errors in human reasoning. Over the course of their work they identified a variety of logical fallacies we tend to commit when facing information that appears vaguely familiar. These fallacies lead to bias – irrational behavior based on beliefs that are not always grounded in reality.

In his book Thinking, Fast and Slow, which summarizes his and Tversky’s life’s work, Kahneman introduces biases that stem from the conjunction fallacy – the false belief that a conjunction of two events is more probable than one of the events on its own.

What is Probability?

Probability can be a difficult concept. Most of us have an intuitive understanding of what probability is, but there is little consensus on what it actually means. It is just as vague and subjective a concept as democracy, beauty or freedom. However, this is not always troublesome – we can still easily discuss the notion with others. Kahneman reflects:

In all the years I spent asking questions about the probability of events, no one ever raised a hand to ask me, “Sir, what do you mean by probability?” as they would have done if I had asked them to assess a strange concept such as globability.

Everyone acted as if they knew how to answer my questions, although we all understood that it would be unfair to ask them for an explanation of what the word means.

While logicians and statisticians might disagree, probability to most of us is simply a tool that describes our degree of belief. For instance, we know that the sun will rise tomorrow and we consider it near impossible that there will be two suns up in the sky instead of one. In addition to the extremes, there are also events which lie somewhere in the middle on the probability spectrum, such as the degree of belief that it will rain tomorrow.

Despite its vagueness, probability has its virtues. Assigning probabilities helps us make the degree of belief actionable and also communicable to others. If we believe that the probability it will rain tomorrow is 90%, we are likely to carry an umbrella and suggest our family do so as well.

Probability, Base Rates and Representativeness

Most of us are already familiar with representativeness and base rates. Consider the classic example of a jar holding x black marbles and y white marbles. It is a simple exercise to tell the probability of drawing each color if you know their base rates (proportions). Using base rates is the obvious approach for estimation when no other information is provided.

However, Kahneman showed that we have a tendency to ignore base rates in light of specific descriptions. He calls this phenomenon the representativeness bias. To illustrate it, consider the example of seeing a person reading The New York Times on the New York subway. Which do you think would be a better bet about the reading stranger?

1) She has a PhD.
2) She does not have a college degree.

Representativeness would tell you to bet on the PhD, but this is not necessarily a good idea. You should seriously consider the second alternative, because many more non-graduates than PhDs ride the New York subway. While a larger proportion of PhDs may read The New York Times, the total number of Times readers without a college degree is likely to be much larger, even though the proportion of them who read it is slim.
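To see how base rates can swamp proportions, here is a minimal sketch with made-up numbers – the rider counts and readership rates below are illustrative assumptions, not data:

```python
# Illustrative assumptions, not real data.
phd_riders = 50_000            # assumed PhD holders riding the subway
non_grad_riders = 2_000_000    # assumed riders without a college degree

p_nyt_given_phd = 0.30         # assumed share of PhDs who read the Times
p_nyt_given_non_grad = 0.05    # assumed share of non-graduates who do

phd_readers = phd_riders * p_nyt_given_phd                 # 15,000
non_grad_readers = non_grad_riders * p_nyt_given_non_grad  # 100,000

# Despite the much higher readership *rate* among PhDs, a random Times
# reader on the subway is far more likely to be a non-graduate.
print(phd_readers, non_grad_readers)
```

Whatever proportions you assume, once the base-rate gap is large enough, the less “representative” group dominates.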

In a series of similar experiments, Kahneman’s subjects failed to account for base rates in light of individualized information. This is unsurprising. Kahneman explains:

On most occasions, people who act friendly are in fact friendly. A professional athlete who is very tall and thin is much more likely to play basketball than football. People with a PhD are more likely to subscribe to The New York Times than people who ended their education after high school. Young men are more likely than elderly women to drive aggressively.

While following representativeness might improve your overall accuracy, it will not always be the statistically optimal approach.

In his bestseller Moneyball, Michael Lewis tells the story of Billy Beane, general manager of the Oakland A’s, who recognized this fallacy and used it to his advantage. When recruiting new players, instead of relying on scouts he relied heavily on statistics of past performance. This approach allowed him to build a team of great players who were passed over by other teams because they did not look the part. Needless to say, the team achieved excellent results at a low cost.

Conjunction Fallacy

While representativeness bias occurs when we fail to account for low base rates, the conjunction fallacy occurs when we assign a higher probability to a more specific event than to a more general event that contains it. This violates the laws of probability.

Consider the following study:

Participants were asked to rank four possible outcomes of the next Wimbledon tournament from most to least probable. Björn Borg was the dominant tennis player of the day when the study was conducted. These were the outcomes:

A. Borg will win the match.
B. Borg will lose the first set.
C. Borg will lose the first set but win the match.
D. Borg will win the first set but lose the match.

How would you order them?

Kahneman was surprised to see that most subjects ordered the chances by directly contradicting the laws of logic and probability. He explains:

The critical items are B and C. B is the more inclusive event and its probability must be higher than that of an event it includes. Contrary to logic, but not to representativeness or plausibility, 72% assigned B a lower probability than C.

If you thought about the problem carefully, you pictured something like the following: the outcomes in which Borg loses the first set include, as a subset, the outcomes in which he loses the first set but wins the match. Losing the first set will always, by definition, be at least as probable as losing the first set and winning the match.
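A quick simulation makes the inclusion argument concrete. This is only a sketch: the per-set win probability is an arbitrary assumption, and the conclusion holds for any value you pick.

```python
import random

P_SET = 0.7        # assumed probability that Borg wins any given set
TRIALS = 100_000

random.seed(42)
lose_first = 0           # event B: Borg loses the first set
lose_first_and_win = 0   # event C: Borg loses the first set but wins the match

for _ in range(TRIALS):
    # Best-of-five match: playing all five sets and taking the majority
    # winner is equivalent to stopping once a player reaches three sets.
    sets = [random.random() < P_SET for _ in range(5)]  # True = Borg wins the set
    if not sets[0]:
        lose_first += 1
        if sum(sets) >= 3:
            lose_first_and_win += 1

print("P(B) =", lose_first / TRIALS)          # around 0.30
print("P(C) =", lose_first_and_win / TRIALS)  # smaller: C is a subset of B
```

Every trial counted in C is also counted in B, so P(C) can never exceed P(B) – exactly the rule the subjects violated.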

The Linda Problem

As discussed in our piece on the Narrative Fallacy, the best-known and most controversial of Kahneman and Tversky’s experiments involved a fictitious lady called Linda. The character was created to illustrate the role heuristics play in our judgment and how they can be incompatible with logic. This is how they described Linda:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Kahneman conducted a series of experiments in which he showed that representativeness tends to cloud our judgment and that we ignore base rates in light of stories. The Linda problem started off with the task of ranking eight scenarios in order of likelihood:

Linda is a teacher in elementary school.
Linda works in a bookstore and takes yoga classes.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a member of the League of Women Voters.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.

Kahneman was startled to see that his subjects judged Linda being a bank teller and a feminist as more likely than her being just a bank teller. As explained earlier, this makes little sense. He went on to explore the phenomenon further:

In what we later described as “increasingly desperate” attempts to eliminate the error, we introduced large groups of people to Linda and asked them this simple question:

Which alternative is more probable?

Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

This stark version of the problem made Linda famous in some circles, and it earned us years of controversy. About 85% to 90% of undergraduates at several major universities chose the second option, contrary to logic.

What is especially interesting about these results is that, even when aware of the biases in place, we do not discard them.

When I asked my large undergraduate class in some indignation, “Do you realize that you have violated an elementary logical rule?” someone in the back row shouted, “So what?” and a graduate student who made the same error explained herself by saying, “I thought you just asked for my opinion.”

The issue is not confined to students; it also affects professionals.

The naturalist Stephen Jay Gould described his own struggle with the Linda problem. He knew the correct answer, of course, and yet, he wrote, “a little homunculus in my head continues to jump up and down, shouting at me—‘but she can’t just be a bank teller; read the description.’”

Our brains simply seem to prefer consistency over logic.

The Role of Plausibility

Representativeness and the conjunction fallacy occur because we take a mental shortcut from the perceived plausibility of a scenario to its probability.

The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories.

Kahneman warns us about the effects of these biases on our perception of expert opinion and forecasting. He explains that we are more likely to believe scenarios that are illustrative rather than probable.

The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting. Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:

A massive flood somewhere in North America next year, in which more than 1,000 people drown

An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown

The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.

To appreciate the role of plausibility, he suggests we look at an example without an accompanying description.

Which alternative is more probable?

Jane is a teacher.
Jane is a teacher and walks to work.

In this case there is no description to lend plausibility or coherence, so nothing competes with the quick answer to the probability question, and we easily conclude that the first option is more likely. The rule is that in the absence of a competing intuition, logic prevails.

Taming Our Intuition

The first step toward thinking clearly is to question how you think. We should not simply believe whatever comes to mind; our beliefs must be constrained by logic. You don’t have to become an expert in probability to tame your intuition, but a grasp of a few simple concepts will help. Two rules are worth repeating in light of representativeness bias:

1) The probabilities of an event and its complement add up to 100%.

This means that if you believe there’s a 90% chance it will rain tomorrow, there’s a 10% chance that it will not rain tomorrow.

It also means that, since you believe there is only a 90% chance of rain tomorrow, you cannot be 95% certain it will rain tomorrow morning – rain tomorrow morning is a special case of rain tomorrow, so it cannot be more probable.

We typically make this type of error when we mean to say that, if it rains, there’s a 95% probability it will happen in the morning. That’s a different claim, and under those premises the probability of rain tomorrow morning is 0.9 × 0.95 = 85.5%.

This also means the probability that it rains tomorrow but not in the morning is 90.0% − 85.5% = 4.5%.
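The bookkeeping is easier to see in code. A minimal sketch of the arithmetic above:

```python
p_rain = 0.90                  # belief that it rains tomorrow
p_no_rain = 1 - p_rain         # complement rule: 0.10

p_morning_given_rain = 0.95    # "if it rains, it happens in the morning"

# Unconditional probability of rain tomorrow morning:
p_rain_morning = p_rain * p_morning_given_rain   # 0.855

# Probability it rains tomorrow, but not in the morning:
p_rain_not_morning = p_rain - p_rain_morning     # 0.045

print(p_rain_morning, p_rain_not_morning)
```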

2) The second rule is Bayes’ rule.

It allows us to correctly adjust our beliefs according to the diagnosticity of the evidence. In odds form, Bayes’ rule follows the formula:

posterior odds = prior odds × likelihood ratio

In essence, the formula states that the posterior odds equal the prior odds times the likelihood ratio. Kahneman crystallizes two keys to disciplined Bayesian reasoning:

• Anchor your judgment of the probability of an outcome on a plausible base rate.
• Question the diagnosticity of your evidence.

Kahneman explains it with an example:

If you believe that 3% of graduate students are enrolled in computer science (the base rate), and you also believe that the description of Tom is 4 times more likely for a graduate student in computer science than in other fields, then Bayes’s rule says you must believe that the probability that Tom is a computer science student is now 11%.

Four times as likely means, for instance, that roughly 80% of computer science students fit Tom’s description while only 20% of students in other fields do. We use these proportions to obtain the adjusted probability. (The calculation goes as follows: 0.03 × 0.8 / (0.03 × 0.8 + 0.97 × 0.2) ≈ 11%.)
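Kahneman’s Tom example translates directly into code. This sketch simply restates the calculation above as a reusable function:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# 3% base rate of computer science students; the description fits 80% of
# them and 20% of everyone else (the 4:1 likelihood ratio).
print(posterior(0.03, 0.8, 0.2))  # ~0.11, i.e. 11%
```

Notice how much work the 3% base rate does: even evidence four times more likely under the hypothesis lifts the probability to only 11%.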

The easiest way to become better at making decisions is by making sure you question your assumptions and follow strong evidence. When evidence is anecdotal, adjust minimally and trust the base rates. Odds are, you will be pleasantly surprised.

***

Want More? Check out our ever-growing collection of mental models and biases and get to work.

Mental Model: Commitment and Consistency Bias

“The difficulty lies not in the new ideas,
but in escaping the old ones, which ramify,
for those brought up as most of us have been,
into every corner of our minds.”

— John Maynard Keynes

***

Ben Franklin tells an interesting little story in his autobiography. Facing opposition to being reelected Clerk of the General Assembly, he sought to gain favor with the member so vocally opposing him:

Having heard that he had in his library a certain very scarce and curious book, I wrote a note to him, expressing my desire of perusing that book, and requesting he would do me the favour of lending it to me for a few days. He sent it immediately, and I return’d it in about a week with another note, expressing strongly my sense of the favour.

When we next met in the House, he spoke to me (which he had never done before), and with great civility; and he ever after manifested a readiness to serve me on all occasions, so that we became great friends, and our friendship continued to his death.

This is another instance of the truth of an old maxim I had learned, which says, “He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged.”

The man, having lent Franklin a rare and valuable book, sought to stay consistent with his past actions. He wouldn’t, of course, lend a book to an unworthy man, would he?

***

Positive Self Image

Scottish philosopher and economist Adam Smith said in The Theory of Moral Sentiments:

The opinion which we entertain of our own character depends entirely on our judgments concerning our past conduct. It is so disagreeable to think ill of ourselves, that we often purposely turn away our view from those circumstances which might render that judgment unfavorable.

Even when it acts against our best interest our tendency is to be consistent with our prior commitments, ideas, thoughts, words, and actions. As a byproduct of confirmation bias, we rarely seek disconfirming evidence of what we believe. This, after all, makes it easier to maintain our positive self-image.

Part of the reason this happens is our desire to appear and feel like we’re right. We also want to show people our conviction. This shouldn’t come as a surprise. Society values consistency and conviction even when it is wrong.

We associate consistency with intellectual and personal strength, rationality, honesty, and stability. On the other hand, the person who is perceived as inconsistent is also seen as confused, two-faced, even mentally ill in certain extreme circumstances.

A politician who wavers, for example, gets labelled a flip-flopper and can lose an election over it (John Kerry). A CEO who risks everything on a bet no one else believes in, and wins, is hailed as a hero (Elon Musk).

But it’s not just our words and actions that nudge our subconscious, but also how other people see us. There is a profound truth behind Eminem’s lyrics: I am, whatever you say I am. If I wasn’t, then why would I say I am?

If you think I’m talented, I become more talented in your eyes – in part because your labelling me as talented filters the way you see me. You start seeing more of my genius and less of my normal-ness, simply by way of staying consistent with your own words.

In his book Outliers, Malcolm Gladwell talks about how teachers simply identifying students as smart not only affected how the teachers saw their work but, more importantly, affected the opportunities teachers gave the students. Smarter students received better opportunities, which, we can reason, offers them better experiences. This in turn makes them better. It’s almost a self-fulfilling prophecy.

And the more we invest in our beliefs about ourselves or others – think money, effort, or pain – the more sunk costs we have and the harder it becomes to change our minds. It doesn’t matter if we’re right. It doesn’t matter if the Ikea bookshelf sucks; we’re going to love it.

In Too Much Invested to Quit, psychologist Allan Teger says something similar of the Vietnam War:

The longer the war continued, the more difficult it was to justify the additional investments in terms of the value of possible victory. On the other hand, the longer the war continued, the more difficult it became to write off the tremendous losses without having anything to show for them.

***

As a consequence, there are few rules we abide by more faithfully than “Don’t make any promises that you can’t keep.” Generally speaking, this is a great rule that keeps society together by ensuring that our commitments are, for the most part, real and reliable.

Aside from the benefits of preserving our public image, being consistent is simply easier and leads to a more predictable and consistent life. By being consistent in our habits and with previous decisions, we significantly reduce the need to think and can go on “auto-pilot” for most of our lives.

However beneficial these biases are, they too deserve deeper understanding and caution. Sometimes our drive to appear consistent can lure us into choices we otherwise would consider against our best interests. This is the essence of a harmful bias as opposed to a benign one: We are hurting ourselves and others by committing it. 

A Slippery Slope

Part of why commitment can be so dangerous is that it works like a slippery slope – a single slip can send you sliding all the way down. Compliance with even tiny requests, which initially appear insignificant, therefore has a good chance of leading to full commitment later.

People whose job it is to persuade us know this.

Among the more blunt techniques on the spectrum are those reported by a used-car sales manager in Robert Cialdini’s book Influence. The dealer knows the power of commitment and that if we comply a little now, we are likely to comply fully later on. His advice to other sellers goes as follows:

“Put ’em on paper. Get the customer’s OK on paper. Get the money up front. Control ’em. Control the deal. Ask ’em if they would buy the car right now if the price is right. Pin ’em down.”

This technique will be obvious to most of us. However, there are also more subtle ways to make us comply without us noticing.

A great example of a subtle compliance practitioner is Jo-Ellan Dimitrius, the woman currently reputed to be the best consultant in the business of jury selection.

When screening potential jurors before a trial, she asks an artful question:

“If you were the only person who believed in my client’s innocence, could you withstand the pressure of the rest of the jury to change your mind?”

It’s unlikely that any self-respecting prospective juror would answer negatively. And having made that implicit promise, the juror, once selected, is unlikely to give in to pressure exerted by the rest of the jury.

Innocent questions and requests like this can be a great springboard for initiating a cycle of compliance.

The Lenient Policy

A great case study in compliance is the set of tactics Chinese soldiers used on American prisoners during the Korean War. The Chinese were particularly effective in getting Americans to inform on one another. In fact, nearly all American prisoners in the Chinese camps are said to have collaborated with the enemy in one way or another.

This was striking, since such behavior was rarely observed among American war prisoners during WWII. What secret tactics explained the success of the Chinese?

Unlike the North Koreans, the Chinese did not treat their captives harshly. Instead they pursued what they called a “lenient policy,” which was, in reality, a clever series of psychological assaults.

In their exploits the Chinese relied heavily on commitment and consistency tactics to receive the compliance they desired. At first, the Americans were not too collaborative, as they had been trained to provide only name, rank, and serial number, but the Chinese were patient.

They started with seemingly small but frequent requests to repeat statements like “The United States is not perfect” and “In a Communist country, unemployment is not a problem.” Once these requests had been complied with, the weight of the requests grew. Someone who had just agreed that the United States was not perfect would be encouraged to expand on his thoughts about specific imperfections. Later he might be asked to write up and read out a list of these imperfections in a discussion group with other prisoners. “After all, it’s what you really believe, isn’t it?”

The Chinese would then broadcast the essay readings not only to the whole camp, but to other camps and even to the American forces in South Korea. Suddenly the soldier would find himself a “collaborator” of the enemy.

The awareness that the essays did not contradict his beliefs could even change his self-image to be consistent with the new “collaborator” label, often resulting in more cooperation with the enemy.

It is not surprising that very few American soldiers were able to avoid such “collaboration” altogether.

Foot in the Door

Small requests growing into bigger requests, as applied by the Chinese to American soldiers, is also called the Foot-in-the-Door Technique. It was first documented by two researchers, Freedman and Fraser, in an experiment in which a fake volunteer worker asked homeowners to allow a public-service billboard to be installed on their front lawns.

To give them a better idea of how it would look, the homeowners were even shown a photograph depicting an attractive house almost completely obscured by an ugly sign reading DRIVE CAREFULLY. While the request was quite understandably denied by 83 percent of residents, one particular group reacted favorably.

Two weeks earlier, a different “volunteer worker” had asked the respondents in this group to display a much smaller sign reading BE A SAFE DRIVER. The request was so negligible that nearly all of them complied. However, its effects turned out to be enormous: 76 percent of this group later complied with the bigger, much less reasonable request (the big ugly sign).

At first, even the researchers themselves were baffled by the results and repeated the experiment with similar setups. The effect persisted. Finally, they proposed that the subjects must have distorted their own views about themselves as a result of their initial actions:

What may occur is a change in the person’s feelings about getting involved or taking action. Once he has agreed to a request, his attitude may change, he may become, in his own eyes, the kind of person who does this sort of thing, who agrees to requests made by strangers, who takes action on things he believes in, who cooperates with good causes.

The rule is that once someone has moved our self-image to where they want it, we will naturally comply with requests that fit the new self-view. We must therefore be very careful about agreeing to even the smallest requests. Not only can doing so lead us to comply with larger requests later, it can also make us more willing to do favors that are only remotely connected to the earlier ones.

Even Cialdini, someone who knows this bias inside-out, admits to his fear that his behavior will be affected by consistency bias:

It scares me enough that I am rarely willing to sign a petition anymore, even for a position I support. Such an action has the potential to influence not only my future behavior but also my self-image in ways I may not want.

Further, once a person’s self-image is altered, all sorts of subtle advantages become available to someone who wants to exploit that new image.

Give It, Take It Away Later

Have you ever witnessed a deal that is a little too good to be true only to later be disappointed? You had already made up your mind, had gotten excited and were ready to pay or sign until a calculation error was discovered. Now with the adjusted price, the offer did not look all that great.

It is likely that the error was not an accident – this technique, also called low-balling, is often used by compliance professionals in sales. Cialdini, having observed the phenomenon among car dealers, tested its effects on his own students.

In an experiment with colleagues, he asked two groups of students to take part in a 7:00 AM study on “thinking processes.” When researchers called the first group, they disclosed the 7:00 AM start time immediately. Unsurprisingly, only 24 percent wanted to participate.

However, for the other group of students, the researchers threw a low-ball. They first asked simply whether the students wanted to take part in a study of thinking processes; 56 percent replied positively. Only after they had agreed was the 7:00 AM meeting time revealed.

These students were given the opportunity to opt out, but none of them did. In fact, driven by their commitment, 95 percent of the low-balled students showed up to the Psychology Building at 7:00 AM as they had promised.

Do you recognize the similarities between the experiment and the sales situation?

The script of low-balling tends to be the same:

First, an advantage is offered that induces a favorable decision in the manipulator’s direction. Then, after the decision has been made, but before the bargain is sealed, the original advantage is deftly removed (i.e., the price is raised, the time is changed, etc.).

It would seem surprising that anyone would buy under these circumstances, yet many do. Often the self-created justifications provide so many new reasons for the decision that even when the dealer pulls away the original favorable rationale, like a low price, the decision is not changed. We stick with our old decision even in the face of new information!

Of course not everyone complies, but that’s not the point. The effect is strong enough to hold for a good number of buyers, students or anyone else whose rate of compliance we may want to raise.

The Way Out

The first real defense against consistency bias is awareness of the phenomenon and of the harm that a certain rigidity in our decisions can cause us.

Robert Cialdini suggests two approaches to recognizing when consistency biases are unduly creeping into our decision making. The first is to listen to our stomachs. Stomach signs show up when we realize that the request being pushed is something we don’t want to do.

He recalls a time when a beautiful young woman tried to sell him a membership he most certainly did not need by using the tactics displayed above. He writes:

I remember quite well feeling my stomach tighten as I stammered my agreement. It was a clear call to my brain, “Hey, you’re being taken here!” But I couldn’t see a way out. I had been cornered by my own words. To decline her offer at that point would have meant facing a pair of distasteful alternatives: If I tried to back out by protesting that I was not actually the man-about-town I had claimed to be during the interview, I would come off a liar; trying to refuse without that protest would make me come off a fool for not wanting to save $1,200. I bought the entertainment package, even though I knew I had been set up. The need to be consistent with what I had already said snared me.

But eventually he came up with the perfect counterattack for later episodes, one that allowed him to get out of such situations gracefully.

Whenever my stomach tells me I would be a sucker to comply with a request merely because doing so would be consistent with some prior commitment I was tricked into, I relay that message to the requester. I don’t try to deny the importance of consistency; I just point out the absurdity of foolish consistency. Whether, in response, the requester shrinks away guiltily or retreats in bewilderment, I am content. I have won; an exploiter has lost.

The second approach concerns the signs felt in our heart, and it is best used when it is not really clear whether the initial commitment was wrongheaded.

Imagine you have recognized that your initial assumptions about a particular deal were not correct. The car is not extraordinarily cheap and the experiment is not as fun if you have to wake up at 6 AM to make it. Here it helps to ask one simple question:

“Knowing what I know, if I could go back in time, would I make the same commitment?”

Ask it frequently enough and the answer might surprise you.

***

Want More? Check out our ever-growing library of mental models and biases.

The Many Ways our Memory Fails Us (Part 3)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

In the first two parts of our series on memory, we covered four major “sins” committed by our memories: Absent-Mindedness, Transience, Misattribution, and Blocking, using Daniel Schacter’s The Seven Sins of Memory as our guide.

We’re going to finish it off today with three other sins: Suggestibility, Bias, and Persistence, hopefully leaving us with a full understanding of our memory and where it fails us from time to time.

***

Suggestibility

As its name suggests, the sin of suggestibility refers to our tendency to incorporate misleading information from outside sources into our own memories:

Suggestibility in memory refers to an individual’s tendency to incorporate misleading information from external sources — other people, written materials or pictures, even the media — into personal recollections. Suggestibility is closely related to misattribution in the sense that the conversion of suggestions into inaccurate memories must involve misattribution. However, misattribution often occurs in the absence of overt suggestion, making suggestibility a distinct sin of memory.

Suggestibility is such a difficult phenomenon because memories pulled from outside sources feel as real as our own. Take the case of a “false veteran” that Schacter describes in the book:

On May 31, 2000, a front-page story in the New York Times described the baffling case of Edward Daly, a Korean War veteran who made up elaborate — but imaginary — stories about his battle exploits, including his involvement in a terrible massacre in which he had not actually participated. While weaving his delusional tale, Daly talked to veterans who had participated in the massacre and “reminded” them of his heroic deeds. His suggestions infiltrated their memories. “I know that Daly was there,” pleaded one veteran. “I know that. I know that.”

The key word here is infiltrated. This brings to mind the wonderful Christopher Nolan movie Inception, about a group of experts who seek to infiltrate the minds of sleeping targets in order to change their memories. The movie is fictional but there is a subtle reality to the idea: With enough work, an idea that is merely suggested to us in one context can seem like our own idea or our own memory.

Take suggestive questioning, a problem with criminal investigations. The investigator talks to an eyewitness and, hoping to jog their memory, asks a series of leading questions, arriving at the answer he was hoping for. But is it genuine? Not always.

Schacter describes a psychology experiment wherein participants see a video of a robbery and then are fed misleading suggestions about the robbery soon after, such as the idea that the victim of the robbery was wearing a white apron. Amazingly, even when people could recognize that the apron idea was merely suggested to them, many people still regurgitated the suggested idea!

Previous experiments had shown that suggestive questions produce memory distortion by creating source memory problems like those in the previous chapter: participants misattribute information presented only in suggestive questions to the original videotape. [The psychologist Philip] Higham’s results provide an additional twist. He found that when people took a memory test just minutes after receiving the misleading question, and thus still correctly recalled that the “white apron” was suggested by the experimenter, they sometimes insisted nevertheless that the attendant wore a white apron in the video itself. In fact, they made this mistake just as often as people who took the memory test two days after receiving misleading suggestions, and who had more time to forget that the white apron was merely suggested. The findings testify to the power of misleading suggestions: they can create false memories of an event even when people recall that the misinformation was suggested.

The problem of overconfidence also plays a role in suggestion and memory errors. Take an experiment where subjects are shown a man entering a department store and then told he murdered a security guard. After being shown a photo lineup (which did not contain the gunman), some were told they chose correctly and some were told they chose incorrectly. Guess which group was more confident and trustful of their memories afterwards?

It was, of course, the group that received reinforcement. Not only were they more confident, but they felt they had better command of the details of the gunman’s appearance, even though they were as wrong as the group that received no positive feedback. This has vast practical applications. (Consider a jury taking into account the testimony of a very confident eyewitness, reinforced by police with an agenda.)

***

One more interesting idea in reference to suggestibility: Like the DiCaprio-led clan in the movie Inception, psychologists have been able to successfully “implant” false memories of childhood in many subjects based merely on suggestion alone. This should make you think carefully about what you think you remember about the distant past:

[The psychologist Ira] Hyman asked college students about various childhood experiences that, according to their parents, had actually happened, and also asked about a false event that, their parents confirmed, had never happened. For instance, students were asked: “When you were five you were at the wedding reception of some friends of the family and you were running around with some other kids, when you bumped into the table and spilled the punch bowl on the parents of the bride.” Participants accurately remembered almost all of the true events, but initially reported no memory of the false events.

However, approximately 20 to 40 percent of participants in different experimental conditions eventually came to describe some memory of the false event in later interviews. In one experiment, more than half of the participants who produced false memories described them as “clear” recollections that included specific details of the central event, such as remembering exactly where or how one spilled the punch. Just under half reported “partial” false memories, which included some details but no specific memory of the central event.

Such is the power of suggestion.

The Sin of Bias

The problem of bias will be familiar to regular readers. In some form or another, we’re subject to mental biases every single day; most are benign, some are harmful, and most are not hard to understand. Biases specific to memory are especially worth studying because they’re so easy and natural to fall into. Because we trust our memory so deeply, they often go unquestioned. But we might want to be careful:

The sin of bias refers to distorting influences of our present knowledge, beliefs, and feelings on new experiences, or our later memories of them. In the stifling psychological climate of 1984, the Ministry of Truth used memory as a pawn in the service of party rule. Much in the same manner, biases in remembering past experiences reveal how memory can serve as a pawn for the ruling masters of our cognitive systems.

There are four biases we’re subject to in this realm: Consistency and change bias, hindsight bias, egocentric bias, and stereotyping bias.

Consistency and Change Bias

The first is the consistency bias: we rewrite our memories of the past based on how we feel in the present. Experiment after experiment has shown this to be true. It’s probably something of a coping mechanism: if we saw the past with complete accuracy, we might not be such happy individuals.

We often re-write the past so that it seems we’ve always felt like we feel now, that we always believed what we believe now:

This consistency bias has turned up in several different contexts. Recalling past experiences of pain, for instance, is powerfully influenced by current pain level. When patients afflicted by chronic pain are experiencing high levels of pain in the present, they are biased to recall similarly high levels of pain in the past; when present pain isn’t so bad, past pain experiences seem more benign, too. Attitudes towards political and social issues also reflect consistency bias. People whose views on political issues have changed over time often recall incorrectly past attitudes as highly similar to present ones. In fact, memories of past political views are sometimes more closely related to present views than what they actually believed in the past.

Think about your stance five or ten years ago on some major issue, like sentencing for drug-related crime. Can you recall specifically what you believed? Most people believe they have stayed consistent on the issue. But easily performed experiments show that a large percentage of people who think “all is the same” have actually changed their tune significantly over time. Such is the bias towards consistency.

This affects relationships fairly significantly: Schacter shows that our current feelings about our partner color our memories of our past feelings.

Consider a study that followed nearly four hundred Michigan couples through the first years of their marriage. In those couples who expressed growing unhappiness over the four years of the study, men mistakenly recalled the beginnings of their marriages as negative even though they said they were happy at the time. Such biases can lead to a dangerous “downward spiral,” noted the researchers who conducted the study. “The worse your current view of your partner is, the worse your memories are, which only further confirms your negative attitudes.”

In other contexts, we sometimes lean in the other direction: we think things have changed more than they really have. We remember the past as much better, or much worse, than it actually was.

Schacter discusses a twenty-year study that followed a group of women between 1969 and 1989, assessing how they felt about their marriages throughout. It turns out their recollections were constantly on the move, but the false recollections did seem to serve a purpose: keeping the marriage alive.

When reflecting back on the first ten years of their marriages, wives showed a change bias: They remembered their initial assessments as worse than they actually were. The bias made their present feelings seem an improvement by comparison, even though the wives actually felt more negatively ten years into the marriage than they had at the beginning. When they had been married for twenty years and reflected back on their second ten years of marriage, the women now showed a consistency bias: they mistakenly recalled that feelings from ten years earlier were similar to their present ones. In reality, however, they felt more negatively after twenty years of marriage than after ten. Both types of bias helped women cope with their marriages. 

The purpose of all this is to reduce our cognitive dissonance: That mental discomfort we get when we have conflicting ideas. (“I need to stay married” / “My marriage isn’t working” for example.)

Hindsight Bias

We won’t go into hindsight bias too extensively, because we have covered it before and the idea is familiar to most. Simply put, once we know the outcome of an event, our memory of the past is forever altered. As with consistency bias, we use the lens of the present to see the past. It’s the idea that we “knew it all along” — when we really didn’t.

A large part of hindsight bias has to do with the narrative fallacy and our own natural wiring in favor of causality. We really like to know why things happen, and when given a clear causal link in the present (Say, we hear our neighbor shot his wife because she cheated on him), the lens of hindsight does the rest (I always knew he was a bad guy!). In the process, we forget that we must not have thought he was such a bad guy, since we let him babysit our kids every weekend. That is hindsight bias. We’re all subject to it unless we start examining our past with more detail or keeping a written record.

Egocentric Bias

The egocentric bias is our tendency to see the past in a way that makes us, the rememberer, look better than we really are or really should. We are not neutral observers of our own past; we are highly biased and motivated to see ourselves in a certain light.

The self’s preeminent role in encoding and retrieval, combined with a powerful tendency for people to view themselves positively, creates fertile ground of memory biases that allow people to remember past experiences in a self-enhancing light. Consider, for example, college students who were led to believe that introversion is a desirable personality trait that predicts academic success, and then searched their memories for incidents in which they behaved in an introverted or extroverted manner. Compared with students who were led to believe that extroversion is a desirable trait, the introvert-success students more quickly generated memories in which they behaved like introverts than like extroverts. The memory search was biased by a desire to see the self positively, which led students to select past incidents containing the desired trait.

The egocentric bias occurs constantly and in almost any situation where it possibly can: It’s similar to what’s been called overconfidence in other arenas. We want to see ourselves in a positive light, and so we do. We mine our brain for evidence of our excellent qualities. We have positive maintaining illusions that keep our spirits up.

This is generally a good thing for our self-esteem, but as any divorced couple knows, it can also cause us to have a very skewed version of the past.

Bias from Stereotyping

In our series on the development of human personality, we discussed the idea of stereotyping as something human beings do constantly and automatically; the much-maligned concept is central to how we comprehend the world.

Stereotyping exists because it saves energy and space — it allows us to consolidate much of what we learn into categories with broadly accurate descriptions. As we learn new things, we either slot them into existing categories, create new categories, or slightly modify old categories (the one we like the least, because it requires the most work). This is no great insight.

But what is interesting is the degree to which stereotyping colors our memories themselves:

If I tell you that Julian, an artist, is creative, temperamental, generous, and fearless, you are more likely to recall the first two attributes, which fit the stereotype of an artist, than the latter two attributes, which do not. If I tell you that he is a skinhead, and list some of his characteristics, you’re more likely to remember that he is rebellious and aggressive than that he is lucky and modest. This congruity bias is especially likely to occur when people hold strong stereotypes about a particular group. A person with strong racial prejudices, for example, would be more likely to remember stereotypical features of an African American’s behavior than a less prejudiced person, and less likely to remember behaviors that don’t fit the stereotype.

Not only that, but when things happen which contradict our expectations, we are capable of distorting the past in such a way to make it come in line. When we try to remember a tale after we know how it ends, we’re more likely to distort the details of the story in such a way that the whole thing makes sense and fits our understanding of the world. This is related to the narrative fallacy and hindsight bias discussed above.

***

The final sin which Schacter discusses in his book is Persistence, the often difficult reality that some memories, especially negative ones, persist a lot longer than we wish. We’re not going to cover it here, but suggest you check out the book in its entirety to get the scoop.

And with that, we’re going to wrap up our series on the human memory. Take what you’ve learned, digest it, and then keep pushing deeper in your quest to understand human nature and the world around you.

The Fundamental Attribution Error: Why Predicting Behavior is so Hard


“Psychologists refer to the inappropriate use of dispositional
explanation as the fundamental attribution error, that is,
explaining situation-induced behavior as caused by
enduring character traits of the agent.”
— Jon Elster

***

The problem with any concept of “character” driving behavior is that “character” is pretty hard to pin down. We call someone “moral” or “honest,” we call them “courageous” or “naive” or any other number of names. The implicit connotation is that someone “honest” in one area will be “honest” in most others, or someone “moral” in one situation is going to be “moral” elsewhere.

Old-time folk psychology supports the notion, of course. As Jon Elster points out in his wonderful book Explaining Social Behavior, folk wisdom would have us believe that much of this “predicting and understanding behavior” thing is pretty darn easy! Simply ascertain character, and use that as a basis to predict or explain action.

People are often assumed to have personality traits (introvert, timid, etc.) as well as virtues (honesty, courage, etc.) or vices (the seven deadly sins, etc.). In folk psychology, these features are assumed to be stable over time and across situations. Proverbs in all languages testify to this assumption. “Who tells one lie will tell a hundred.” “Who lies also steals.” “Who steals an egg will steal an ox.” “Who keeps faith in small matters, does so in large ones.” “Who is caught red-handed once will always be distrusted.” If folk psychology is right, predicting and explaining behavior should be easy.

A single action will reveal the underlying trait or disposition and allow us to predict behavior on an indefinite number of other occasions when the disposition could manifest itself. The procedure is not tautological, as it would be if we took cheating on an exam as evidence of dishonesty and then used the trait of dishonesty to explain the cheating. Instead, it amounts to using cheating on an exam as evidence for a trait (dishonesty) that will also cause the person to be unfaithful to a spouse. If one accepts the more extreme folk theory that all virtues go together, the cheating might also be used to predict cowardice in battle or excessive drinking. 

This is a very natural and tempting way to approach the understanding of people. We like to think of actions that “speak volumes” about others’ character, thus using that as a basis to predict or understand their behavior in other realms.

For example, let’s say you were interviewing a financial advisor. He shows up on time, in a nice suit, and buys lunch. He says all the right words. Will he handle your money correctly?

Almost all of us would be led to believe he would, reasoning that his sharp appearance, timeliness, and generosity point towards his “good character”.

But what the study of history shows us is that appearances are flawed, and behavior in one context often does not correlate with behavior in other contexts. Judging character becomes complex when we appreciate the situational nature of our actions. U.S. President Lyndon Johnson was an arrogant bully and a liar who stole an election when he was young. He also fought like hell to pass the Civil Rights Act, something almost no other politician could have done.

Henry Ford standardized and streamlined the modern automobile and made it affordable to the masses, while paying “better than fair” wages to his employees and generally treating them well and with respect, something many “Titans” of business had trouble with in his day. He was also a notorious anti-Semite! If it’s true that “He who is moral in one respect is also moral in all respects,” then what are we to make of this?

Jon Elster has some other wonderful examples coming from the world of music, regarding impulsivity versus discipline:

The jazz musician Charlie Parker was characterized by a doctor who knew him as “a man living from moment to moment. A man living for the pleasure principle, music, food, sex, drugs, kicks, his personality arrested at an infantile level.” Another great jazz musician, Django Reinhardt, had an even more extreme present-oriented attitude in his daily life, never saving any of his substantial earnings, but spending them on whims or on expensive cars, which he quickly proceeded to crash. In many ways he was the incarnation of the stereotype of “the Gypsy.” Yet you do not become a musician of the caliber of Parker and Reinhardt if you live in the moment in all respects. Proficiency takes years of utter dedication and concentration. In Reinhardt’s case, this was dramatically brought out when he damaged his left hand severely in a fire and retrained himself so that he could achieve more with two fingers than anyone else with four. If these two musicians had been impulsive and carefree across the board — if their “personality” had been consistently “infantile” — they could never have become such consummate artists.

Once we realize this truth, it seems obvious. We begin seeing it everywhere. Dan Ariely wrote a book about situational dishonesty and cheating which we have written about before. Judith Rich Harris based her theory of child development on the idea that children do not behave the same elsewhere as they do at home, misleading parents into thinking they were molding their children. Good interviewing and hiring is a notoriously difficult problem because we are consistently misled into thinking that what we learn in the interview process is representative of the interviewee’s general competence. Books have been written about the Halo Effect, a similar idea that good behavior in one area creates a “halo” around all behavior.

The reason we see this everywhere is because it’s how the world works!

Our tendency to assume otherwise is called the Fundamental Attribution Error: the mistaken belief that behavior in one context carries over with any consistency into other areas.

Studying the error leads us to conclude that we have a natural tendency to:

A. Over-rate some general consideration of “character” and,
B. Under-rate the “power of the situation”, and its direct incentives, to compel a variety of behavior.

Elster describes a social psychology experiment that effectively demonstrates how quickly any thought of “morality” can be lost in the right situation:

In another experiment, theology students were told to prepare themselves to give a brief talk in a nearby building. One-half were told to build the talk around the Good Samaritan parable(!), whereas the others were given a more neutral topic. One group was told to hurry since the people in the other building were waiting for them, whereas another was told that they had plenty of time. On their way to the other building, subjects came upon a man slumping in the doorway, apparently in distress. Among the students who were told they were late, only 10 percent offered assistance; in the other group, 63 percent did so. The group that had been told to prepare a talk on the Good Samaritan was not more likely to behave as one. Nor was the behavior of the students correlated with answers to a questionnaire intended to measure whether their interest in religion was due to the desire for personal salvation or to a desire to help others. The situational factor — being hurried or not — had much greater explanatory power than any dispositional factor.

So with a direct incentive in front of them — not wanting to be late when people were waiting for them, which could cause shame — the idea of being a Good Samaritan was thrown right out the window! So much for good character.

What we need to appreciate is that, in the words of Elster, “Behavior is often no more stable than the situations that shape it.” A shy young boy on the playground might be the most outgoing and aggressive boy in his group of friends. A moral authority in the realm of a religious institution might well cheat on their taxes. A woman who treats her friends poorly might treat her family with reverence and care.

We can’t throw the baby out with the bathwater, of course. Elster refers to contingent response tendencies that would carry from situation to situation, but they tend to be specific rather than general. If we break down character into specific interactions between person and types of situations, we can understand things a little more accurately.

Instead of calling someone a “liar,” we might understand that they lie on their taxes but are honest with their spouse. Instead of calling someone a “hard worker,” we might come to understand that they drive hard in work situations, but simply cannot be bothered to work around the house. And so on. We should pay attention to the interplay between the situation, the incentives, and the nature of the person, rather than just assuming that a broad character trait applies in all situations.

This carries two corollaries:

A. As we learn to think more accurately, we get one step closer to understanding human nature as it really is. We can better understand the people with whom we coexist.

B. We might better understand ourselves! Imagine if you could be the rare individual whose positive traits truly did carry over into all, or at least all important, situations. You would be traveling an uncrowded road.

***

Want More? Check out our ever-growing database of mental models.

The Many Ways Our Memory Fails Us (Part 2)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

In part one, we began a conversation about the trappings of the human memory, using Daniel Schacter’s excellent The Seven Sins of Memory as our guide. (We’ve also covered some reasons why our memory is pretty darn good.) We covered transience — the loss of memory due to time — and absent-mindedness — memories that were never encoded at all or were not available when needed. Let’s keep going with a couple more whoppers: Blocking and Misattribution.

Blocking

Blocking is the phenomenon in which something is indeed encoded in our memory and should be easily available in the given situation, but simply will not come to mind. We’re most familiar with blocking as the always frustrating “It’s on the tip of my tongue!”

Unsurprisingly, blocking occurs most frequently when it comes to names and indeed occurs more frequently as we get older:

Twenty-year-olds, forty-year-olds, and seventy-year-olds kept diaries for a month in which they recorded spontaneously occurring retrieval blocks that were accompanied by the “tip of the tongue” sensation. Blocking occurred occasionally for the names of objects (for example, algae) and abstract words (for example, idiomatic). In all three groups, however, blocking occurred most frequently for proper names, with more blocks for people than for other proper names such as countries or cities. Proper name blocks occurred more frequently in the seventy-year-olds than in either of the other two groups.

This is not the worst sin our memory commits — excepting the times when we forget an important person’s name (which is admittedly not fun), blocking doesn’t have the terrible practical consequences some of the other memory issues do. But the reason blocking occurs does tell us something interesting about memory, something we intuitively know from other domains: We have a hard time learning things by rote or by force. We prefer associations and connections to form strong, lasting, easily available memories.

Why are names blocked from us so frequently, even more than objects, places, descriptions, and other nouns? For example, Schacter mentions experiments in which researchers show that we more easily forget a man’s name than his occupation, even if they’re the same word! (Baker/baker or Potter/potter, for example.)

It’s because relative to a descriptive noun like “baker,” which calls to mind all sorts of connotations, images, and associations, a person’s name has very little attached to it. We have no easy associations to make — it doesn’t tell us anything about the person or give us much to hang our hat on. It doesn’t really help us form an image or impression. And so we basically remember it by rote, which doesn’t always work that well.

Most models of name retrieval hold that activation of phonological representations [sound associations] occurs only after activation of conceptual and visual representations. This idea explains why people can often retrieve conceptual information about an object or person whom they cannot name, whereas the reverse does not occur. For example, diary studies indicate that people frequently recall a person’s occupation without remembering his name, but no instances have been documented in which a name is recalled without any conceptual knowledge about the person. In experiments in which people named pictures of famous individuals, participants who failed to retrieve the name “Charlton Heston” could often recall that he was an actor. Thus, when you block on the name “John Baker” you may very well recall that he is an attorney who enjoys golf, but it is highly unlikely that you would recall Baker’s name and fail to recall any of his personal attributes.

A person’s name is the weakest piece of information we have about them in our people-information lexicon, and thus the least accessible at any time and the most susceptible to blocking just when we need it. It gets worse if it’s a name we haven’t needed to recall frequently or recently, as we can all probably attest. (This also applies to the other types of words we block on less frequently — objects, places, etc.)

The only real way to avoid blocking problems is to create stronger associations when we learn names, or even re-encode names we already know by increasing their salience with a vivid image, even a silly one. (If you ever meet anyone named Baker…you know what to do.)

But the most important idea here is that information gains salience in our brain based on what it brings to mind. 

As for whether blocking occurs in the sense implied by Freud’s idea of repressed memories, Schacter is non-committal; it seems the issue was not settled at the time of writing.

Misattribution

The memory sin of misattribution has fairly serious consequences. Misattribution happens all the time, and it is a peculiar sin in which we do remember something, but the memory is wrong, or possibly not even our own at all:

Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time and place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it–without awareness–from something we read or heard.

The most familiar, but benign, experience we’ve all had with misattribution is the curious case of deja vu. As of the writing of his book, Schacter felt there was no convincing explanation for why deja vu occurs, but we know that the brain is capable of thinking it’s recalling an event that happened previously, even if it hasn’t.

In the case of deja vu, it’s simply a bit of an annoyance. But misattribution causes more serious trouble elsewhere.

The major one is eyewitness testimony, which we now know is notoriously unreliable. It turns out that when eyewitnesses claim they “know what they saw!” it’s unlikely they remember as well as they claim. It’s not their fault and it’s not a lie — they genuinely think they recall the details of the situation perfectly well. But their brains are tricking them, just like deja vu. How bad is the eyewitness testimony problem? It used to be pretty bad.

…consider two facts. First, according to estimates made in the late 1980s, each year in the United States more than seventy-five thousand criminal trials were decided on the basis of eyewitness testimony. Second, a recent analysis of forty cases in which DNA evidence established the innocence of wrongly imprisoned individuals revealed that thirty-six of them (90 percent) involved mistaken eyewitness identification. There are no doubt other such mistakes that have not been rectified.

What happens is that, in any situation where our memory stores away information, it doesn’t have the horsepower to do it with complete accuracy. There are just too many variables to sort through. So we remember the general aspects of what happened, and we remember some details, depending on how salient they were.

We recall that we met John, Jim, and Todd, who were all part of the sales team for John Deere. We might recall that John was the young one with glasses, Jim was the older bald one, and Todd talked the most. We might remember specific moments or details of the conversation which stuck out.

But we don’t get it all perfectly, and if it was an unmemorable meeting, with the transience of time, we start to lose the details. Tying the general aspects together with the specific details is a process called memory binding, and binding failures are often the source of misattribution errors.

Let’s say we remember for sure that we curled our hair this morning. All of our usual cues tell us that we did — our hair is curly, it’s part of our morning routine, we remember thinking it needed to be done, etc. But…did we turn the curling iron off? We remember that we did, but is that yesterday’s memory or today’s?

This is a memory binding error. Our brain didn’t sufficiently “link up” the curling event and the turning off of the curler, so we’re left to wonder. The binding issue leads to other errors, like the memory conjunction error, in which the binding process does occur but makes a mistake. We misattribute the strong familiarity:

Having met Mr. Wilson and Mr. Albert during your business meeting, you reply confidently the next day when an associate asks you the name of the company vice president: “Mr. Wilbert.” You remembered correctly pieces of the two surnames but mistakenly combined them into a new one. Cognitive psychologists have developed experimental procedures in which people exhibit precisely these kinds of erroneous conjunctions between features of different words, pictures, sentences, or even faces. Thus, having studied spaniel and varnish, people sometimes claim to remember Spanish.

What’s happening is a misattribution. We know we saw the syllables span- and -nish, and our memory tells us we must have seen Spanish. But we didn’t.

Back to the eyewitness testimony problem, what’s happening is we’re combining a general familiarity with a lack of specific recall, and our brain is recombining those into a misattribution. We recall a tall-ish man with some sort of facial hair, and then we’re shown 6 men in a lineup, and one is tall-ish with facial hair, and our brain tells us that must be the guy. We make a relative judgment: Which person here is closest to what I think I saw? Unfortunately, like the Spanish/varnish issue, we never actually saw the person we’ve identified as the perp.

None of this occurs with much conscious involvement, of course. It’s happening subconsciously, which is why good procedures are needed to overcome the problem. In the case of suspect lineups, the solution is to show the witness each suspect, one after another, and have them give a thumbs up or thumbs down immediately. This takes away the relative comparison and makes us consciously compare the suspect in front of us with our memory of the perpetrator.
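To see why the procedure matters, consider a minimal sketch of the two judgment styles. This is purely illustrative Python — the names and similarity scores are invented, not from Schacter: a simultaneous lineup invites a relative judgment, which always picks somebody, while the sequential, thumbs-up-or-down procedure holds each person against memory on an absolute standard and can pick nobody.

```python
# Hypothetical similarity (0 to 1) between each lineup member and the
# witness's memory of the perpetrator. All names and scores are invented.
lineup = {"A": 0.41, "B": 0.38, "C": 0.55, "D": 0.30, "E": 0.44, "F": 0.35}

def relative_judgment(scores):
    """Simultaneous lineup: pick whoever looks closest to the memory.
    Someone always gets picked, even if nobody is a good match."""
    return max(scores, key=scores.get)

def absolute_judgment(scores, threshold=0.8):
    """Sequential lineup: judge each person on their own, yes or no.
    Returns None when nobody clears the bar."""
    for person, score in scores.items():
        if score >= threshold:
            return person
    return None

print(relative_judgment(lineup))  # "C" -- the closest match gets picked
print(absolute_judgment(lineup))  # None -- no one matches well enough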

The good news about this error is that people can be encouraged to search their memories more carefully. But it’s far from foolproof, even when we’re getting a very strong indication that we remember something.

And what helps prevent us from making too many errors is something Schacter calls the distinctiveness heuristic. If a distinctive thing supposedly happened, we usually reason we’d have a good memory of it. And usually this is a very good heuristic to have. (Remember, salience always encourages memory formation.) As we discussed in Part One, a salient artifact gives us something to tie a memory to. If I meet someone wearing a bright rainbow-colored shirt, I’m a lot more likely to recall some details about them, simply because they stuck out.

***

As an aside, misattribution allows us one other interesting insight into the human brain: Our “people information” remembering is a specific, distinct module, one that can falter on its own, without harming any other modules. Schacter discusses a man with a delusion that many of the normal people around him were film stars. He even identified made-up, famous-sounding names (like Sharon Sugar) as belonging to famous people, although he couldn’t put his finger on who they were.

But the man did not falsely recognize other things. Made-up cities or made-up words did not trip up his brain in the strange way people did. This (and other data) tells us that our ability to recognize people is a distinct “module” our brain uses, supporting one of Judith Rich Harris’s ideas about human personality that we’ve discussed: The “people-information lexicon” we develop throughout our lives is a uniquely important mental module.

***

One final misattribution is something called cryptomnesia — essentially the opposite of deja vu. It’s when we perceive something as new and novel even though we’ve encountered it before. Accidental plagiarizing can even result from cryptomnesia. (Try telling that to your school teachers!) Cryptomnesia falls into the same bucket as other misattributions in that we fail to recollect the source of the information we’re recalling — the information and the event where we first encountered it are not bound together properly. Let’s say we “invent” the melody to a song which already exists. The melody sounds wonderful and familiar, so we like it. But we mistakenly think it’s new.

In the end, Schacter reminds us to think carefully about the memories we “know” are true, and to try to remember specifics when possible:

We often need to sort out ambiguous signals, such as feelings of familiarity or fleeting images, that may originate in specific past experiences, or arise from subtle influences in the present. Relying on judgment and reasoning to come up with plausible attributions, we sometimes go astray. When misattribution combines with another of memory’s sins — suggestibility — people can develop detailed and strongly held recollections of complex events that never occurred.

And with that, we will leave it here for now. Next time we’ll delve into suggestibility and bias, two more memory sins with a range of practical outcomes.

Multiplicative Systems: Understanding The Power of Multiplying by Zero

We all learned in math class that anything times zero is zero. But if your thinking stops there, you miss all the practical applications that understanding multiplicative systems can give you in life.

***

Let’s run through a little elementary arithmetic. Try to do it in your head: What’s 1,506,789 x 9,809 x 5.56 x 0?

Hopefully, you didn’t have to whip out the old TI-84 to solve that one. It’s a zero.

This leads us to a mental model called Multiplicative Systems, and understanding it can get to the heart of a lot of issues.

The Weakest Link in the Chain

Suppose you were trying to become the best basketball player in the world. You’ve got the following things going for you:

1. God-given talent. You’re 6’9″, quick, skillful, can leap out of the gym, and have been the best player in a competitive city for as long as you can remember.

2. Support. You live in a city that reveres basketball and you’re raised by parents who care about your goals.

3. A proven track record. You were the player of the year in a very competitive Division 1 college conference.

4. A clear path forward. You’re selected as the second overall pick in the NBA Draft by the Boston Celtics.

Sounds like you have a shot, right? As good a shot as anyone could have? What odds would you put on this person becoming one of the best players in the world? Pretty high?

Let’s add one more piece of information:

5. You’ve developed a cocaine habit.

What are your odds now?

This little exercise isn’t an academic one; it’s the sad case of Leonard “Len” Bias, a young basketball prodigy who died of a cocaine overdose in 1986, shortly after the Boston Celtics selected him in the NBA draft. Many call Bias the best basketball player who never played professionally.

What the story of Len Bias illustrates so well is the truth that anything times zero must still be zero, no matter how large the string of numbers preceding it. In some facets of life, all of your hard work, dedication to improvement, and good fortune may still be worth nothing if there is a weak link in the chain.

Something all engineers learn very early on is that a system is no stronger than its weakest component. Take, for example, the case of a nuclear power plant. We have a very good understanding of how to make a nuclear power plant quite safe, nearly indestructible, which it must be considering the magnitude of a potential failure.

But in reality, what is the weakest link in the chain for most nuclear power plants? The human beings running them. We’re part of the system! And since we’ve yet to perfect the human being, we have yet to perfect the nuclear power plant. How could it be otherwise?

An additive system does not work this way. In an additive system, the components add together to create the final outcome. Going back to our arithmetic, let’s say the equation was additive rather than multiplicative: 1,506,789 plus 9,809 plus 5.56 plus 0. The answer is 1,516,603.56 — still a pretty big number!
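If it helps to make the contrast concrete, here’s a quick sketch in Python using the same numbers (nothing here but the standard library):

```python
from math import prod

components = [1_506_789, 9_809, 5.56, 0]

# In an additive system, one zero component barely dents the total.
print(f"{sum(components):,.2f}")   # 1,516,603.56

# In a multiplicative system, a single zero wipes out everything.
print(f"{prod(components):,.2f}")  # 0.00
```

The sketch is the whole point of the model in two lines: the sum shrugs off a zero, while the product collapses to it.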

Think of an additive system as something like a great Thanksgiving dinner. You’ve got a great turkey, some whipped potatoes, a mass of stuffing, and a lump of homemade cranberry sauce, and you’re hanging with your family. Awesome!

Let’s say the potatoes get burnt in the oven, and they’re inedible. Problem? Sure, but dinner still works out just fine. Someone shows up with a pie for dessert? Great! But it won’t change the dinner all that much.

The interaction of the parts makes the dinner range from good to great. Take some parts away or add new ones in, and you get a different outcome, but not a binary, win/lose one. The meal still happens. Additive systems and multiplicative systems react differently when components are added or taken away.

Most businesses, for example, operate in multiplicative systems. But too often they think they’re operating in additive ones: Ever notice how some businesses will add one feature after another to their products but fail at basic customer service, so you leave, never to return? That’s a business that thinks it’s in an additive system, when it really needs to be resolving the big fat zero in the middle of the equation instead of adding more stuff.

***

Financial systems are, of course, multiplicative. General Motors, founded in 1908 by William Durant and C.S. Mott, came to dominate the American car market to the tune of 50% market share through a series of brilliant innovations and management practices, and for many years it was the dominant and most admired corporation in America. Even today, after more than a century of competition, no American carmaker produces more automobiles than General Motors.

And yet, the original shareholders of GM ended up with a zero in 2009 as the company went into bankruptcy after years of financial mismanagement. It didn’t matter that they had several generations of leadership behind them: all of it comes to naught in a multiplicative system.

***

On a smaller scale, take the case of a young corporate climber who feels they just can’t get ahead. They seem to have all their ducks in a row: great resume, great background, great experience…the problem is that they suck at dealing with other people and treat others like stepping stones. That’s a zero that can negate all of the big numbers preceding it. The rest doesn’t matter.

And so we arrive at the “must be true” conclusion that understanding when you’re in an additive system versus a multiplicative system, and which components need absolute reliability for the system to work, is a critical model to have in your head. Multiplicative thinking is a model related to the greater idea of systems thinking, another mental model well worth acquiring.

***

Multiplicative Systems is another FS Mental Model.