Category: Thinking

Remembering More of Everything: The Memory Palace

“When information goes ‘in one ear and out the other,’
it’s often because it doesn’t have anything to stick to.”

— Joshua Foer

***

That’s a quote from the book Moonwalking with Einstein, the fascinating account of Joshua Foer’s journey investigating memory.

What starts as a routine piece of journalism ends with his participation in the USA Memory Championship. While interviewing contestants for the article, he was told that anyone could have a memory like these champions if they trained properly. Intrigued, Foer decided to give it a try.

Foer's journey started with research into memory and its physical effects on the brain. Scientists had recently discovered that the brain is much like a muscle: making it work can make it grow, by creating new pathways at a cellular level. Did that make the brains of these "mental athletes" physically different from yours or mine?

Foer found research in which MRI was used to compare the memory specialists' brains to those of a control group. There was no structural difference between the two. However, during the act of memorizing, the regions of the brain that "lit up" were completely different.

Surprisingly, when the mental athletes were learning new information, they were engaging regions of the brain known to be involved in two specific tasks: visual memory and spatial navigation.

It turns out the mental athletes were purposefully converting the information they were memorizing into images, and then placing these images into a mentally constructed “palace” — thus the involvement of visual memory and spatial navigation.

Foer goes into great (and fascinating) detail regarding the science of memory (some of which we've covered before). However, let's explore the specific techniques that Foer learned while studying the memory athletes.

The Memory Palace

The Memory Palace is a device that has been used since the time of the ancient Greeks to encode memories for easy retrieval. This was a time before smart devices; if you wanted information at your fingertips, you had to put that information in your head. You'd do it through a process the modern memory athletes call elaborative encoding.

The general idea with most memory techniques is to change whatever boring thing is being inputted into your memory into something that is so colorful, so exciting, and so different from anything you’ve seen before that you can’t possibly forget it.

The memory palace technique is about converting your memories into images placed in a familiar mental location. The idea is that you can mentally walk through your palace, looking at your memories to recall them.

They can be big or small, indoors or outdoors, real or imaginary, so long as there’s some semblance of order that links one locus to the next, and so long as they are intimately familiar.

The idea is to give your memories something to hang on to. We are pretty terrible at remembering things, especially when those memories float freely in our heads. But our spatial memory is actually pretty decent, and when we give our memories some needed structure, we provide that missing order and context. Creating a multi-sensory experience in your head is the other part of the trick.

‘Now, it’s very important to try to remember this image multisensorily.’ The more associative hooks a new piece of information has, the more securely it gets embedded into the network of things you already know, and the more likely it is to remain in memory.

Try to animate your image so that you watch it move. Try to think of what it might smell like or feel like and make it as vivid as possible. This is you processing your image. Let’s look at a specific example to illustrate why this works.

***

Say your memory palace is your childhood home. Take a moment to conjure images and memories of that place. We are going to stick to the outside of the house. Mentally walk from the road to your front porch, trying to remember as many details as possible.

Let’s imagine that your spouse has asked you to pick up a few steaks from the grocery store for dinner. Now put the steaks, exactly how they look in the grocery store, on your front porch.

Got it?

Okay, now let's try to make the steaks into something more memorable. How about a cow sitting on your front porch, not as a cow would but as a person would. Let's have it chewing exaggeratedly, but on bubble gum instead of grass. Now the cow is periodically blowing gigantic bubbles, so big that you're worried they might pop. Maybe think about what that bubble gum would smell like, or the strange smell of bubble gum mixed with cow. What would the cow's skin feel like? What would it feel like to pick bubble gum off the cow's face?

Four hours from now, when you leave work to head home, you'll remember you had to pick something up from the grocery store. Take a trip to your memory palace, walk up the drive, and gaze at your front porch. What are you more likely to remember? The packaged steaks, which you see all the time? Or the gum-chewing cow we created?

A professional memory athlete will put objects in multiple places within their palaces and have more than one palace in their repertoire. Some will even design their own fictional palaces in great detail, built specifically as places to hang memories.

The Memory Palace is a great way to recall a variety of things, but you will still hit a hard ceiling, and that ceiling falls far short of the Herculean strings of digits some memory competitors can remember.

What’s the trick? It turns out that there is a whole different tool just for recalling numbers.

PAO: Person – Action – Object

In this system, every two-digit number from 00 to 99 is processed into a single image of a person performing an action on an object.

The number 34 might be Frank Sinatra (a person) crooning (an action) into a microphone (an object). Likewise, 13 might be David Beckham kicking a soccer ball. The number 79 could be Superman flying with a cape. Any six-digit number, like say 34-13-79, can then be turned into a single image by combining the person from the first number with the action from the second and the object from the third – in this case, it would be Frank Sinatra kicking a cape.

As you can see, this is still about storing very vivid and memorable images. I don't know about you, but I've never thought about Frank Sinatra kicking a cape before. It becomes a very powerful tool when you realize that you can use your 'stock images' as a sort of algorithm to generate a unique image for every number between 0 and 999,999.
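To make the mechanics concrete, here is a minimal sketch in Python. The three stock images come straight from the examples above; the table layout and helper function are illustrative assumptions, since every practitioner builds a personal table of all 100 entries.

    # A toy PAO table; a real practitioner memorizes all 100 entries (00-99).
    PAO = {
        34: ("Frank Sinatra", "crooning into", "a microphone"),
        13: ("David Beckham", "kicking", "a soccer ball"),
        79: ("Superman", "flying with", "a cape"),
    }

    def encode(n: int) -> str:
        """Turn a six-digit number into one composite image: person from
        the first pair, action from the second, object from the third."""
        first, second, third = n // 10000, (n // 100) % 100, n % 100
        return f"{PAO[first][0]} {PAO[second][1]} {PAO[third][2]}"

    print(encode(341379))  # Frank Sinatra kicking a cape

One memorized table of 100 images is enough to give every number up to 999,999 its own unique, vivid scene.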

You may look at PAO and think that it’s a very clever way to memorize numbers, a party trick, but not necessarily useful for most of us from day to day. Maybe true, but Foer shares a great insight into the residual effects of training your memory.

I’m convinced that remembering more is only the most obvious benefit of the many months I spent training my memory. What I had really trained my brain to do, as much as to memorize, was to be more mindful, and to pay attention to the world around me. Remembering can only happen if you decide to take notice.

This reminds us of the importance of being mindful and paying attention to life. Foer takes it further, arguing that when we look at it critically, memory is a huge component of almost every aspect of our life.

How we perceive the world and how we act in it are products of how and what we remember. We’re all just a bundle of habits shaped by our memories. And to the extent that we control our lives, we do so by gradually altering those habits, which is to say the networks of our memory… Our ability to find humor in the world, to make connections between previously unconnected notions, to create new ideas, to share in a common culture: All these essentially human acts depend on memory. Now more than ever, as the role of memory in our culture erodes at a faster pace than ever before, we need to cultivate our ability to remember. Our memories make us who we are.

We are a culmination of our experiences; how we process this information and encode it into something meaningful is intrinsically tied to our memory. Understanding how it works, and how to use tools or tricks to make it better, is a worthy endeavour.

Foer's personal account in Moonwalking with Einstein is a great starting point for your own mental journey. While you're waiting for it to arrive, start reading our four-part series on our memory's advantages and weaknesses, starting here.

The Need for Biological Thinking to Solve Complex Problems

“Biological thinking and physics thinking are distinct, and often complementary, approaches to the world, and ones that are appropriate for different kinds of systems.”

***

How should we think about complexity? Should we approach it with biological thinking or physics thinking? The answer, of course, is that it depends. It's important to have both tools at your disposal.

These are the questions that Samuel Arbesman explores in his fascinating book Overcomplicated: Technology at the Limits of Comprehension.

[B]iological systems are generally more complicated than those in physics. In physics, the components are often identical—think of a system of nothing but gas particles, for example, or a single monolithic material, like a diamond. Beyond that, the types of interactions can often be uniform throughout an entire system, such as satellites orbiting a planet.

Biology is different and there is something meaningful to be learned from a biological approach to thinking.

In biology, there are a huge number of types of components, such as the diversity of proteins in a cell or the distinct types of tissues within a single creature; when studying, say, the mating behavior of blue whales, marine biologists may have to consider everything from their DNA to the temperature of the oceans. Not only is each component in a biological system distinctive, but it is also a lot harder to disentangle from the whole. For example, you can look at the nucleus of an amoeba and try to understand it on its own, but you generally need the rest of the organism to have a sense of how the nucleus fits into the operation of the amoeba, how it provides the core genetic information involved in the many functions of the entire cell.

Arbesman makes an interesting point here when it comes to how we should look at technology. As the interconnections and complexity of technology increase, it increasingly resembles a biological system rather than a physics one. There is another difference.

[B]iological systems are distinct from many physical systems in that they have a history. Living things evolve over time. While the objects of physics clearly do not emerge from thin air—astrophysicists even talk about the evolution of stars—biological systems are especially subject to evolutionary pressures; in fact, that is one of their defining features. The complicated structures of biology have the forms they do because of these complex historical paths, ones that have been affected by numerous factors over huge amounts of time. And often, because of the complex forms of living things, where any small change can create unexpected effects, the changes that have happened over time have been through tinkering: modifying a system in small ways to adapt to a new environment.

Biological systems are generally hacks that evolved to be good enough for a certain environment. They are far from pretty, top-down designed systems. And to accommodate an ever-changing environment, they are rarely optimal at the micro-level, preferring to optimize for survival over any one particular attribute. And it's not the survival of the individual that's optimized; it's the survival of the species.

Technologies can appear robust until they are confronted with some minor disturbance, causing a catastrophe. The same thing can happen to living things. For example, humans can adapt incredibly well to a large array of environments, but a tiny change in a person’s genome can cause dwarfism, and two copies of that mutation invariably cause death. We are of a different scale and material from a particle accelerator or a computer network, and yet these systems have profound similarities in their complexity and fragility.

Biological thinking, with a focus on details and diversity, is a necessary tool to deal with complexity.

The way biologists, particularly field biologists, study the massively complex diversity of organisms, taking into account their evolutionary trajectories, is therefore particularly appropriate for understanding our technologies. Field biologists often act as naturalists—collecting, recording, and cataloging what they find around them—but even more than that, when confronted with an enormously complex ecosystem, they don't immediately try to understand it all in its totality. Instead, they recognize that they can study only a tiny part of such a system at a time, even if imperfectly. They'll look at the interactions of a handful of species, for example, rather than examine the complete web of species within a single region. Field biologists are supremely aware of the assumptions they are making, and know they are looking at only a sliver of the complexity around them at any one moment.

[…]

When we’re dealing with different interacting levels of a system, seemingly minor details can rise to the top and become important to the system as a whole. We need “Field biologists” to catalog and study details and portions of our complex systems, including their failures and bugs. This kind of biological thinking not only leads to new insights, but might also be the primary way forward in a world of increasingly interconnected and incomprehensible technologies.

Waiting and observing isn’t enough.

Biologists will often be proactive, and inject the unexpected into a system to see how it reacts. For example, when biologists are trying to grow a specific type of bacteria, such as a variant that might produce a particular chemical, they will resort to a process known as mutagenesis. Mutagenesis is what it sounds like: actively trying to generate mutations, for example by irradiating the organisms or exposing them to toxic chemicals.

When systems are too complex for human understanding, often we need to insert randomness to discover the tolerances and limits of the system. One plus one doesn’t always equal two when you’re dealing with non-linear systems. For biologists, tinkering is the way to go.

As Stewart Brand noted about legacy systems, “Teasing a new function out of a legacy system is not done by command but by conducting a series of cautious experiments that with luck might converge toward the desired outcome.”

When Physics and Biology Meet

This doesn’t mean we should abandon the physics approach, searching for underlying regularities in complexity. The two systems complement one another rather than compete.

Arbesman recommends asking the following questions:

When attempting to understand a complex system, we must determine the proper resolution, or level of detail, at which to look at it. How fine-grained a level of detail are we focusing on? Do we focus on the individual enzyme molecules in a cell of a large organism, or do we focus on the organs and blood vessels? Do we focus on the binary signals winding their way through circuitry, or do we examine the overall shape and function of a computer program? At a larger scale, do we look at the general properties of a computer network, and ignore the individual machines and decisions that make up this structure?

When we need to abstract away a lot of the details, we lean more on physics thinking. Think about it from an organizational perspective. The new employee at the lowest level is focused on the specific details of their job, whereas the executive is focused on systems, strategy, culture, and flow: how things interact and reinforce one another. The details of the new employee's job are lost on the executive.

We can’t use one system, whether biological or physics, exclusively. That’s a sure way to fragile thinking. Rather, we need to combine them.

In his novel Cryptonomicon, Neal Stephenson makes exactly this point in describing the structure of the pantheon of Greek gods:

And yet there is something about the motley asymmetry of this pantheon that makes it more credible. Like the Periodic Table of the Elements or the family tree of the elementary particles, or just about any anatomical structure that you might pull up out of a cadaver, it has enough of a pattern to give our minds something to work on and yet an irregularity that indicates some kind of organic provenance—you have a sun god and a moon goddess, for example, which is all clean and symmetrical, and yet over here is Hera, who has no role whatsoever except to be a literal bitch goddess, and then there is Dionysus who isn’t even fully a god—he’s half human—but gets to be in the Pantheon anyway and sit on Olympus with the Gods, as if you went to the Supreme Court and found Bozo the Clown planted among the justices.

There is a balance and we need to find it.

Gradually Getting Closer to the Truth

You can use a big idea without a physics-like need for exact precision. The key is to move gradually closer to reality by updating.

Consider this excerpt from Philip Tetlock and Dan Gardner in Superforecasting:

The superforecasters are a numerate bunch: many know about Bayes’ theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.
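That core insight fits in a few lines of code. A minimal sketch, with made-up probabilities purely for illustration:

    # Bayes' rule: update a prior in proportion to the weight of the evidence.
    def bayes_update(prior: float, p_if_true: float, p_if_false: float) -> float:
        """Posterior probability of a hypothesis after one piece of evidence."""
        joint_true = prior * p_if_true
        joint_false = (1 - prior) * p_if_false
        return joint_true / (joint_true + joint_false)

    # Start at 30% confidence; assume each supporting observation is twice
    # as likely if the hypothesis is true (0.6 vs. 0.3).
    p = 0.30
    for _ in range(3):
        p = bayes_update(p, 0.6, 0.3)
        print(round(p, 3))  # 0.462, then 0.632, then 0.774

No single observation settles the question; confidence ratchets up (or down) gradually, in proportion to the evidence.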

So they know the numbers. This numerate filter is the second of Garrett Hardin's three filters for thinking about problems.

Hardin writes:

The numerate temperament is one that habitually looks for approximate dimensions, ratios, proportions, and rates of change in trying to grasp what is going on in the world.

[…]

Just as "literacy" is used here to mean more than merely reading and writing, so also will "numeracy" be used to mean more than measuring and counting. Examination of the origins of the sciences shows that many major discoveries were made with very little measuring and counting. The attitude science requires of its practitioners is respect, bordering on reverence, for ratios, proportions, and rates of change.

Rough and ready back-of-the-envelope calculations are often sufficient to reveal the outline of a new and important scientific discovery … In truth, the essence of many of the major insights of science can be grasped with no more than a child's ability to measure, count, and calculate.

 

We can find another example in investing. Charlie Munger, commenting at the 1996 Berkshire Hathaway Annual Meeting, said: "Warren often talks about these discounted cash flows, but I've never seen him do one. If it isn't perfectly obvious that it's going to work out well if you do the calculation, then he tends to go on to the next idea." Buffett agreed: "It's true. If (the value of a company) doesn't just scream out at you, it's too close."

Precision is easy to teach, but it misses the point.

The Many Ways our Memory Fails Us (Part 3)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

In the first two parts of our series on memory, we covered four major “sins” committed by our memories: Absent-Mindedness, Transience, Misattribution, and Blocking, using Daniel Schacter’s The Seven Sins of Memory as our guide.

We’re going to finish it off today with three other sins: Suggestibility, Bias, and Persistence, hopefully leaving us with a full understanding of our memory and where it fails us from time to time.

***

Suggestibility

As its name suggests, the sin of suggestibility refers to our brain's tendency to incorporate misleading information from outside sources into our own memories:

Suggestibility in memory refers to an individual’s tendency to incorporate misleading information from external sources — other people, written materials or pictures, even the media — into personal recollections. Suggestibility is closely related to misattribution in the sense that the conversion of suggestions into inaccurate memories must involve misattribution. However, misattribution often occurs in the absence of overt suggestion, making suggestibility a distinct sin of memory.

Suggestibility is such a difficult phenomenon because memories we've pulled from outside sources seem as real as our own. Take the case of a "false veteran" that Schacter describes in the book:

On May 31, 2000, a front-page story in the New York Times described the baffling case of Edward Daly, a Korean War veteran who made up elaborate — but imaginary — stories about his battle exploits, including his involvement in a terrible massacre in which he had not actually participated. While weaving his delusional tale, Daly talked to veterans who had participated in the massacre and “reminded” them of his heroic deeds. His suggestions infiltrated their memories. “I know that Daly was there,” pleaded one veteran. “I know that. I know that.”

The key word here is infiltrated. This brings to mind the wonderful Christopher Nolan movie Inception, about a group of experts who seek to infiltrate the minds of sleeping targets in order to change their memories. The movie is fictional but there is a subtle reality to the idea: With enough work, an idea that is merely suggested to us in one context can seem like our own idea or our own memory.

Take suggestive questioning, a problem with criminal investigations. The investigator talks to an eyewitness and, hoping to jog their memory, asks a series of leading questions, arriving at the answer he was hoping for. But is it genuine? Not always.

Schacter describes a psychology experiment wherein participants see a video of a robbery and then are fed misleading suggestions about the robbery soon after, such as the idea that the victim of the robbery was wearing a white apron. Amazingly, even when people could recognize that the apron idea was merely suggested to them, many people still regurgitated the suggested idea!

Previous experiments had shown that suggestive questions produce memory distortion by creating source memory problems like those in the previous chapter: participants misattribute information presented only in suggestive questions to the original videotape. [The psychologist Philip] Higham's results provide an additional twist. He found that when people took a memory test just minutes after receiving the misleading question, and thus still correctly recalled that the "white apron" was suggested by the experimenter, they sometimes insisted nevertheless that the attendant wore a white apron in the video itself. In fact, they made this mistake just as often as people who took the memory test two days after receiving misleading suggestions, and who had more time to forget that the white apron was merely suggested. The findings testify to the power of misleading suggestions: they can create false memories of an event even when people recall that the misinformation was suggested.

The problem of overconfidence also plays a role in suggestion and memory errors. Take an experiment where subjects were shown a man entering a department store and then told he murdered a security guard. After being shown a photo lineup (which did not contain the gunman), some were told they chose correctly and some were told they chose incorrectly. Guess which group was more confident and trusting of their memories afterward?

It was, of course, the group that received reinforcement. Not only were they more confident, but they felt they had better command of the details of the gunman’s appearance, even though they were as wrong as the group that received no positive feedback. This has vast practical applications. (Consider a jury taking into account the testimony of a very confident eyewitness, reinforced by police with an agenda.)

***

One more interesting idea in reference to suggestibility: Like the DiCaprio-led clan in the movie Inception, psychologists have been able to successfully "implant" false childhood memories in many subjects on the strength of suggestion alone. This should make you think carefully about what you think you remember about the distant past:

[The psychologist Ira] Hyman asked college students about various childhood experiences that, according to their parents, had actually happened, and also asked about a false event that, their parents confirmed, had never happened. For instance, students were asked: “When you were five you were at the wedding reception of some friends of the family and you were running around with some other kids, when you bumped into the table and spilled the punch bowl on the parents of the bride.” Participants accurately remembered almost all of the true events, but initially reported no memory of the false events.

However, approximately 20 to 40 percent of participants in different experimental conditions eventually came to describe some memory of the false event in later interviews. In one experiment, more than half of the participants who produced false memories described them as "clear" recollections that included specific details of the central event, such as remembering exactly where or how one spilled the punch. Just under half reported "partial" false memories, which included some details but no specific memory of the central event.

Such is the "power of suggestion."

The Sin of Bias

The problem of bias will be familiar to regular readers. In some form or another, we're subject to mental biases every single day; most are benign, some are harmful, and most are not hard to understand. Biases specific to memory are worth studying because they're so easy and natural to fall into. Because we trust our memory so deeply, they often go unquestioned. But we might want to be careful:

The sin of bias refers to distorting influences of our present knowledge, beliefs, and feelings on new experiences, or our later memories of them. In the stifling psychological climate of 1984, the Ministry of Truth used memory as a pawn in the service of party rule. Much in the same manner, biases in remembering past experiences reveal how memory can serve as a pawn for the ruling masters of our cognitive systems.

There are four biases we’re subject to in this realm: Consistency and change bias, hindsight bias, egocentric bias, and stereotyping bias.

Consistency and Change Bias

The first is consistency bias: We re-write our memories of the past based on how we feel in the present. In one experiment after another, this has been shown to be true. It's probably something of a coping mechanism: If we saw the past with complete accuracy, we might not be such happy individuals.

We often re-write the past so that it seems we’ve always felt like we feel now, that we always believed what we believe now:

This consistency bias has turned up in several different contexts. Recalling past experiences of pain, for instance, is powerfully influenced by current pain level. When patients afflicted by chronic pain are experiencing high levels of pain in the present, they are biased to recall similarly high levels of pain in the past; when present pain isn’t so bad, past pain experiences seem more benign, too. Attitudes towards political and social issues also reflect consistency bias. People whose views on political issues have changed over time often recall incorrectly past attitudes as highly similar to present ones. In fact, memories of past political views are sometimes more closely related to present views than what they actually believed in the past.

Think about your stance five or ten years ago on some major issue, like sentencing for drug-related crime. Can you recall specifically what you believed? Most people believe they have stayed consistent on the issue. But easily performed experiments show that a large percentage of people who think "all is the same" have actually changed their tune significantly over time. Such is the bias towards consistency.

This affects relationships fairly significantly: Schacter shows that our current feelings about our partner color our memories of our past feelings.

Consider a study that followed nearly four hundred Michigan couples through the first years of their marriage. In those couples who expressed growing unhappiness over the four years of the study, men mistakenly recalled the beginnings of their marriages as negative even though they said they were happy at the time. "Such biases can lead to a dangerous 'downward spiral,'" noted the researchers who conducted the study. "The worse your current view of your partner is, the worse your memories are, which only further confirms your negative attitudes."

In other contexts, we sometimes lean in the other direction: We think things have changed more than they really have. We remember the past as much better, or much worse, than it is today.

Schacter discusses a twenty-year study done with a group of women between 1969 and 1989, assessing how they felt about their marriages throughout. Turns out, their recollections of the past were constantly on the move, but the false recollection did seem to serve a purpose: Keeping the marriage alive.

When reflecting back on the first ten years of their marriages, wives showed a change bias: They remembered their initial assessments as worse than they actually were. The bias made their present feelings seem an improvement by comparison, even though the wives actually felt more negatively ten years into the marriage than they had at the beginning. When they had been married for twenty years and reflected back on their second ten years of marriage, the women now showed a consistency bias: they mistakenly recalled that feelings from ten years earlier were similar to their present ones. In reality, however, they felt more negatively after twenty years of marriage than after ten. Both types of bias helped women cope with their marriages. 

The purpose of all this is to reduce our cognitive dissonance: That mental discomfort we get when we have conflicting ideas. (“I need to stay married” / “My marriage isn’t working” for example.)

Hindsight Bias

We won’t go into hindsight bias too extensively, because we have covered it before and the idea is familiar to most. Simply put, once we know the outcome of an event, our memory of the past is forever altered. As with consistency bias, we use the lens of the present to see the past. It’s the idea that we “knew it all along” — when we really didn’t.

A large part of hindsight bias has to do with the narrative fallacy and our own natural wiring in favor of causality. We really like to know why things happen, and when given a clear causal link in the present (say, we hear our neighbor shot his wife because she cheated on him), the lens of hindsight does the rest (I always knew he was a bad guy!). In the process, we forget that we must not have thought he was such a bad guy, since we let him babysit our kids every weekend. That is hindsight bias. We're all subject to it unless we start examining our past in more detail or keeping a written record.

Egocentric Bias

The egocentric bias is our tendency to see the past in such a way that we, the rememberer, look better than we really are or really should. We are not neutral observers of our own past; we are instead highly biased and motivated to see ourselves in a certain light.

The self's preeminent role in encoding and retrieval, combined with a powerful tendency for people to view themselves positively, creates fertile ground for memory biases that allow people to remember past experiences in a self-enhancing light. Consider, for example, college students who were led to believe that introversion is a desirable personality trait that predicts academic success, and then searched their memories for incidents in which they behaved in an introverted or extroverted manner. Compared with students who were led to believe that extroversion is a desirable trait, the introvert-success students more quickly generated memories in which they behaved like introverts than like extroverts. The memory search was biased by a desire to see the self positively, which led students to select past incidents containing the desired trait.

The egocentric bias occurs constantly and in almost any situation where it possibly can: It’s similar to what’s been called overconfidence in other arenas. We want to see ourselves in a positive light, and so we do. We mine our brain for evidence of our excellent qualities. We have positive maintaining illusions that keep our spirits up.

This is generally a good thing for our self-esteem, but as any divorced couple knows, it can also cause us to have a very skewed version of the past.

Bias from Stereotyping

In our series on the development of human personality, we discussed the idea of stereotyping as something human beings do constantly and automatically; the much-maligned concept is central to how we comprehend the world.

Stereotyping exists because it saves energy and space — it allows us to consolidate much of what we learn into categories with broadly accurate descriptions. As we learn new things, we either slot them into existing categories, create new categories, or slightly modify old categories (the one we like the least, because it requires the most work). This is no great insight.

But what is interesting is the degree to which stereotyping colors our memories themselves:

If I tell you that Julian, an artist, is creative, temperamental, generous, and fearless, you are more likely to recall the first two attributes, which fit the stereotype of an artist, than the latter two attributes, which do not. If I tell you that he is a skinhead, and list some of his characteristics, you’re more likely to remember that he is rebellious and aggressive than that he is lucky and modest. This congruity bias is especially likely to occur when people hold strong stereotypes about a particular group. A person with strong racial prejudices, for example, would be more likely to remember stereotypical features of an African American’s behavior than a less prejudiced person, and less likely to remember behaviors that don’t fit the stereotype.

Not only that, but when things happen which contradict our expectations, we are capable of distorting the past in such a way to make it come in line. When we try to remember a tale after we know how it ends, we’re more likely to distort the details of the story in such a way that the whole thing makes sense and fits our understanding of the world. This is related to the narrative fallacy and hindsight bias discussed above.

***

The final sin which Schacter discusses in his book is Persistence, the often difficult reality that some memories, especially negative ones, persist a lot longer than we wish. We’re not going to cover it here, but suggest you check out the book in its entirety to get the scoop.

And with that, we’re going to wrap up our series on the human memory. Take what you’ve learned, digest it, and then keep pushing deeper in your quest to understand human nature and the world around you.

What’s So Significant About Significance?


One of my favorite studies of all time took the 50 most common ingredients from a cookbook and searched the literature for a connection to cancer: 72% had a study linking them to increased or decreased risk of cancer. (Here’s the link for the interested.)

Meta-analyses (studies examining multiple studies) quashed the effect pretty seriously, but how many of those single studies were probably reported on in multiple media outlets, permanently causing changes in readers’ dietary habits? (We know from studying juries that people are often unable to “forget” things that are subsequently proven false or misleading — misleading data is sticky.)

The phrase "statistically significant" is one of the more unfortunately misleading ones of our time. The word significant in the statistical sense — meaning distinguishable from random chance — does not carry the same meaning in common parlance, in which we mean distinguishable from something that does not matter. We'll get to what that means.

Confusing the two gets at the heart of a lot of misleading headlines and it’s worth a brief look into why they don’t mean the same thing, so you can stop being scared that everything you eat or do is giving you cancer.

***

The term statistical significance is used to denote when an effect is found to be extremely unlikely to have occurred by chance. In order to make that determination, we have to propose a null hypothesis to be rejected. Let’s say we propose that eating an apple a day reduces the incidence of colon cancer. The “null hypothesis” here would be that eating an apple a day does nothing to the incidence of colon cancer — that we’d be equally likely to get colon cancer if we ate that daily apple.

When we analyze the data of our study, we're technically not looking to say "Eating an apple a day prevents colon cancer" — that's a bit of a misconception. What we're actually doing is an inversion: we want the data to provide us with sufficient weight to reject the idea that apples have no effect on colon cancer.

And even when that happens, it’s not an all-or-nothing determination. What we’re actually saying is “It would be extremely unlikely for the data we have, which shows a daily apple reduces colon cancer by 50%, to have popped up by chance. Not impossible, but very unlikely.” The world does not quite allow us to have absolute conviction.

How unlikely? The currently accepted standard in many fields is 5% — there is a less than 5% chance the data would come up this way randomly. That immediately tells you that something like 1 in 20 "significant" results could be a fluke, but alas, that is where we're at. (The problem with the 5% p-value, and the associated problem of p-hacking, have been subject to some intense debate, but we won't deal with that here.)

We'll get to why "significance can be insignificant," and why that's so important, in a moment. But let's make sure we're fully on board with the importance of sorting chance events from real ones with another illustration, this one outlined by Jordan Ellenberg in his wonderful book How Not to Be Wrong. Pay close attention:

Suppose we’re in null hypothesis land, where the chance of death is exactly the same (say, 10%) for the fifty patients who got your drug and the fifty who got [a] placebo. But that doesn’t mean that five of the drug patients die and five of the placebo patients die. In fact, the chance that exactly five of the drug patients die is about 18.5%; not very likely, just as it’s not very likely that a long series of coin tosses would yield precisely as many heads as tails. In the same way, it’s not very likely that exactly the same number of drug patients and placebo patients expire during the course of the trial. I computed:

13.3% chance equally many drug and placebo patients die
43.3% chance fewer placebo patients than drug patients die
43.3% chance fewer drug patients than placebo patients die

Seeing better results among the drug patients than the placebo patients says very little, since this isn’t at all unlikely, even under the null hypothesis that your drug doesn’t work.

But things are different if the drug patients do a lot better. Suppose five of the placebo patients die during the trial, but none of the drug patients do. If the null hypothesis is right, both classes of patients should have a 90% chance of survival. But in that case, it’s highly unlikely that all fifty of the drug patients would survive. The first of the drug patients has a 90% chance; now the chance that not only the first but also the second patient survives is 90% of that 90%, or 81%–and if you want the third patient to survive as well, the chance of that happening is only 90% of that 81%, or 72.9%. Each new patient whose survival you stipulate shaves a little off the chances, and by the end of the process, where you’re asking about the probability that all fifty will survive, the slice of probability that remains is pretty slim:

(0.9) x (0.9) x (0.9) x … fifty times! … x (0.9) x (0.9) = 0.00515 …

Under the null hypothesis, there’s only one chance in two hundred of getting results this good. That’s much more compelling. If I claim I can make the sun come up with my mind, and it does, you shouldn’t be impressed by my powers; but if I claim I can make the sun not come up, and it doesn’t, then I’ve demonstrated an outcome very unlikely under the null hypothesis, and you’d best take notice.

So you see, all this null hypothesis stuff is pretty important because what you want to know is if an effect is really “showing up” or if it just popped up by chance.
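Ellenberg's numbers are easy to check. A quick sketch using only Python's standard library, with the parameters from the excerpt (50 patients per arm, a 10% chance of death under the null):

    from math import comb

    n, p = 50, 0.10  # patients per arm; chance of death under the null

    def pmf(k: int) -> float:
        """Probability that exactly k of the n patients die."""
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    print(pmf(5))                                  # ~0.185: exactly five deaths
    print(sum(pmf(k) ** 2 for k in range(n + 1)))  # ~0.133: equal deaths in both arms
    print(0.9 ** 50)                               # ~0.00515: all fifty survive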

A final illustration should make it clear:

Imagine you were flipping coins with a particular strategy of getting more heads, and after 30 flips you had 18 heads and 12 tails. Would you call it a miracle? Probably not — you’d realize immediately that it’s perfectly possible for an 18/12 ratio to happen by chance. You wouldn’t write an article in U.S. News and World Report proclaiming you’d figured out coin flipping.

Now let's say instead you flipped the coin 30,000 times and got 18,000 heads and 12,000 tails… well, then your case for statistical significance would be pretty tight. It would be next to impossible to get that result by chance; your strategy must have something to it. The null hypothesis of "My coin flipping technique is no better than the usual one" would be easy to reject! (The p-value here would be orders of magnitude less than 5%, by the way.)
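A sketch of the same comparison, computing the one-sided tail (the chance of doing at least that well by luck) with the standard library:

    from math import comb

    def p_at_least(heads: int, flips: int) -> float:
        """Chance of at least `heads` heads in `flips` fair coin flips."""
        return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

    print(p_at_least(18, 30))  # ~0.18: 18 of 30 heads is unremarkable

    # 18,000 of 30,000 sits about 35 standard deviations above the expected
    # 15,000; the same exact sum works there too, but the tail probability
    # is astronomically small.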

That’s what this whole business is about.

***

Now that we’ve got this idea down, we come to the big question that statistical significance cannot answer: Even if the result is distinguishable from chance, does it actually matter?

Statistical significance cannot tell you whether the result is worth paying attention to — even if you get the p-value down to a minuscule number, increasing your confidence that what you saw was not due to chance. 

In How Not to Be Wrong, Ellenberg provides a perfect example:

A 1995 study published in a British journal indicated that a new birth control pill doubled the risk of venous thrombosis (a potentially deadly blood clot) in its users. Predictably, 1.5 million British women freaked out, and some meaningfully large percentage of them stopped taking the pill. In 1996, 26,000 more babies were born than in the previous year, and there were 13,600 more abortions. Whoops!

So what, right? Lots of mothers’ lives were saved, right?

Not really. The initial probability of a woman getting a venous thrombosis on any old birth control pill was about 1 in 7,000, or roughly 0.014%. That means the "Killer Pill," even if it was indeed doubling thrombosis risk, only increased that risk to 2 in 7,000, or roughly 0.029%! Is that worth rearranging your life for? Probably not.

Ellenberg makes the excellent point that, at least in the case of health, the null hypothesis is unlikely to be right in most cases! The body is a complex system — of course what we put in it affects how it functions in some direction or another. It’s unlikely to be absolute zero.

But numerical and scale-based thinking, indispensable for anyone looking to not be a sucker, tells us that we must distinguish between small and meaningless effects (like the connection between almost all individual foods and cancer so far) and real ones (like the connection between smoking and lung cancer).

And now we arrive at the problem of “significance” — even if an effect is really happening, it still may not matter!  We must learn to be wary of “relative” statistics (i.e., “the risk has doubled”), and look to favor “absolute” statistics, which tell us whether the thing is worth worrying about at all.
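A minimal sketch of the relative-versus-absolute distinction, using the pill numbers from above:

    baseline = 1 / 7000   # thrombosis risk on the old pill
    new_risk = 2 / 7000   # risk on the "killer" pill

    relative = new_risk / baseline   # what the headline reports
    absolute = new_risk - baseline   # what you should care about

    print(f"relative increase: {relative:.1f}x")                  # 2.0x
    print(f"absolute increase: {absolute:.4%}")                   # 0.0143%
    print(f"extra cases per 7,000 users: {absolute * 7000:.0f}")  # 1

The relative number sounds terrifying; the absolute number tells you whether the effect deserves your attention at all.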

So we have two important ideas:

A. Just like coin flips, many results are perfectly possible by chance. We use the concept of “statistical significance” to figure out how likely it is that the effect we’re seeing is real and not just a random illusion, like seeing 18 heads in 30 coin tosses.

B. Even if it is really happening, it still may be unimportant – an effect so insignificant in real terms that it’s not worth our attention.

These effects should combine to raise our level of skepticism when hearing about groundbreaking new studies! (A third and equally important problem is the fact that correlation is not causation, a common problem in many fields of science including nutritional epidemiology. Just because x is associated with y does not mean that x is causing y.)

Tread carefully and keep your thinking cap on.

***

Still Interested? Read Ellenberg’s great book to get your head working correctly, and check out our posts on Bayesian updating, another very useful statistical tool, and learn a little about how we distinguish science from pseudoscience.

Daniel Dennett’s Most Useful Critical Thinking Tools

We recently discussed some wonderful mental tools from the great Richard Feynman. Let’s get some more good ones from another giant, Daniel Dennett.

Dennett is one of the great thinkers in the world; he’s been at the forefront of cognitive science and evolutionary science for over 50 years, trying to figure out how the mind works and why we believe the things we believe. He’s written a number of amazing books on evolution, religion, consciousness, and free will. (He’s also subject to some extreme criticism due to his atheist bent, as with Dawkins.)

His most recent book is the wise and insightful Intuition Pumps and Other Tools for Critical Thinking, where he lays out a series of short essays (some very short — less than a page) with mental shortcuts, tools, analogies, and metaphors for thinking about a variety of topics, mostly those topics he is best known for.

Some people don’t like the disconnected nature of the book, but that’s precisely its usefulness: Like what we do here at Farnam Street, Dennett is simply trying to add tools to your toolkit. You are free to, in the words of Bruce Lee, “Absorb what is useful, discard what is useless and add what is specifically your own.”

***

The book opens with 12 of Dennett’s best “tools for critical thinking” — a bag of mental tricks to improve your ability to engage critically and rationally with the world.

Let’s go through a few of the best ones. You’ll be familiar with some and unfamiliar with others, agree with some and not with others. But if you adopt Bruce Lee’s advice, you should come away with something new and useful.

Making mistakes

Mistakes are not just opportunities for learning; they are, in an important sense, the only opportunity for learning or making something truly new. Before there can be learning, there must be learners. There are only two non-miraculous ways for learners to come into existence: they must either evolve or be designed and built by learners that evolved. Biological evolution proceeds by a grand, inexorable process of trial and error–and without the errors the trials wouldn’t accomplish anything. As Gore Vidal once said, “It is not enough to succeed. Others must fail.”

[…]

The chief trick to making good mistakes is not to hide them–especially not from yourself. Instead of turning away in denial when you make a mistake, you should become a connoisseur of your own mistakes, turning them over in your mind as if they were works of art, which in a way they are. The fundamental reaction to any mistake ought to be this: “Well, I won’t do that again!”

Reductio ad absurdum

The crowbar of rational inquiry, the great lever that enforces consistency, is reductio ad absurdum–literally, reduction (of the argument) to absurdity. You take the assertion or conjecture at issue and see if you can pry any contradictions (or just preposterous implications) out of it. If you can, that proposition has to be discarded or sent back to the shop for retooling. We do this all the time without bothering to display the underlying logic: “If that’s a bear, then bears have antlers!” or “He won’t get here in time for supper unless he can fly like Superman.”

Rapoport’s Rules

Just how charitable are you supposed to be when criticizing the views of an opponent? […] The best antidote I know for [the] tendency to caricature one’s opponent is a list of rules promulgated by the social psychologist and game theorist Anatol Rapoport (creator of the winning Tit-for-Tat strategy in Robert Axelrod’s legendary prisoner’s dilemma tournament).

How to compose a successful critical commentary:

1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything that you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.

Sturgeon’s Law

The science-fiction writer Ted Sturgeon, speaking at the World Science Fiction Convention in Philadelphia in September 1953, said,

When people talk about the mystery novel, they mention The Maltese Falcon and The Big Sleep. When they talk about the western, they say there's The Way West and Shane. But when they talk about science fiction, they call it "that Buck Rogers stuff," and they say "ninety percent of science fiction is crud." Well, they're right. Ninety percent of science fiction is crud. But then ninety percent of everything is crud, and it's the ten percent that isn't crud that's important, and the ten percent of science fiction that isn't crud is as good as or better than anything being written anywhere.

This advice is often ignored by ideologues intent on destroying the reputation of analytic philosophy, evolutionary psychology, sociology, cultural anthropology, macroeconomics, plastic surgery, improvisational theater, television sitcoms, philosophical theology, massage therapy, you name it. Let’s stipulate at the outset that there is a great deal of deplorable, stupid, second-rate stuff out there, of all sorts.

Occam’s Razor

Attributed to William of Ockham (or Occam), the fourteenth-century logician and philosopher, this thinking tool is actually a much older rule of thumb. A Latin name for it is lex parsimoniae, the law of parsimony. It is usually put into English as the maxim "Do not multiply entities beyond necessity." The idea is straightforward: Don't concoct a complicated, extravagant theory if you've got a simpler one (containing fewer ingredients, fewer entities) that handles the phenomenon just as well. If exposure to extremely cold air can account for all the symptoms of frostbite, don't postulate unobserved "snow germs" or "arctic microbes." Kepler's laws explain the orbits of the planets; we have no need to hypothesize pilots guiding the planets from control panels hidden under the surface.

Occam’s Broom

The molecular biologist Sidney Brenner recently invented a delicious play on Occam’s Razor, introducing the new term Occam’s Broom, to describe the process in which inconvenient facts are whisked under the rug by intellectually dishonest champions of one theory or another. This is our first boom crutch, an anti-thinking tool, and you should keep your eyes peeled for it. The practice is particularly insidious when used by propagandists who direct their efforts at the lay public, because like Sherlock Holmes’ famous clue about the dog that didn’t bark in the night, the absence of a fact that has been swept off the scene by Occam’s Broom is unnoticeable except by experts. 

Jootsing

…It is even harder to achieve what Doug Hofstadter calls jootsing, which stands for "jumping out of the system." This is an important tactic not just in science and philosophy, but also in the arts. Creativity, that ardently sought but only rarely found virtue, often is a heretofore unimagined violation of the rules of the system from which it springs. It might be the system of classical harmony in music, the rules for meter and rhyme in sonnets (or limericks, even), or the canons of good taste or good form in some genre of art. Or it might be the assumptions and principles of some theory or research program. Being creative is not just a matter of casting about for something novel–anybody can do that, since novelty can be found in any random juxtaposition of stuff–but of making the novelty jump out of some system, a system that has become somewhat established, for good reasons.

When an artistic tradition reaches the point where literally “anything goes,” those who want to be creative have a problem: there are no fixed rules to rebel against, no complacent expectations to shatter, nothing to subvert, no background against which to create something that is both surprising and yet meaningful. It helps to know the tradition if you want to subvert it. That’s why so few dabblers or novices succeed in coming up with anything truly creative.

Rathering (Anti-thinking tool)

Rathering is a way of sliding you swiftly and gently past a false dichotomy. The general form of a rathering is "It is not the case that blahblahblah, as orthodoxy would have you believe; it is rather that suchandsuchandsuch–which is radically different." Some ratherings are just fine; you really must choose between the two alternatives on offer; in these cases, you are not being offered a false, but rather a genuine, inescapable dichotomy. But some ratherings are little more than sleight of hand, due to the fact that the word "rather" implies–without argument–that there is an important incompatibility between the claims flanking it.

The “Surely” Operator

When you're reading or skimming argumentative essays, especially by philosophers, here is a quick trick that may save you much time and effort, especially in this age of simple searching by computer: look for "surely" in the document, and check each occurrence. Not always, not even most of the time, but often the word "surely" is as good as a blinking light in locating a weak point in the argument….Why? Because it marks the very edge of what the author is actually sure about and hopes readers will also be sure about. (If the author were really sure all the readers would agree, it wouldn't be worth mentioning.)
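Dennett's trick is easy to automate. A toy sketch (the function name and sample text are my own invention):

    import re

    def flag_surelys(text: str, window: int = 40) -> list[str]:
        """Return each occurrence of 'surely' with surrounding context."""
        return [
            text[max(m.start() - window, 0) : m.end() + window]
            for m in re.finditer(r"\bsurely\b", text, re.IGNORECASE)
        ]

    essay = "Surely no one still doubts that minds are just brains."
    for hit in flag_surelys(essay):
        print("...", hit, "...")

Each hit marks a spot where the author is leaning on the reader's agreement rather than arguing for it.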

The Deepity

A “deepity” is a proposition that seems both important and true–and profound–but that achieves this effect by being ambiguous. On one reading it is manifestly false, but it would be earth-shaking if it were true; on the other reading it is true but trivial. The unwary listener picks up on the glimmer of truth from the second reading, and the devastating importance from the first reading, and thinks, Wow! That’s a deepity.

Here is an example. (Better sit down: this is heavy stuff.)

Love is just a word.

[…]

Richard Dawkins recently alerted me to a fine deepity by Rowan Williams, the Archbishop of Canterbury, who described his faith as a

silent waiting on the truth, pure sitting and breathing in the presence of a question mark.

***

Still Interested? Check out Dennett's book for a lot more of these interesting tools for critical thinking, many of them non-intuitive. I guarantee you'll generate food for thought as you go along. Also, try checking out 11 Rules for Critical Thinking and learn how to be Eager to be Wrong.