A Discussion on the Work of Daniel Kahneman

Edge.org asked the likes of Christopher Chabris, Nicholas Epley, Jason Zweig, William Poundstone, Cass Sunstein, Phil Rosenzweig, Richard Thaler & Sendhil Mullainathan, Nassim Nicholas Taleb, Steven Pinker, and Rory Sutherland among others: “How has Kahneman’s work influenced your own? What step did it make possible?”

Kahneman’s work is summarized in the international best-seller Thinking, Fast and Slow.

Here are some select excerpts that I found interesting.

Christopher Chabris (author of The Invisible Gorilla)

There’s an overarching lesson I have learned from the work of Danny Kahneman, Amos Tversky, and their colleagues who collectively pioneered the modern study of judgment and decision-making: Don’t trust your intuition.

Jennifer Jacquet

After what I see as years of hard work, experiments of admirable design, lucid writing, and quiet leadership, Kahneman, a man who spent the majority of his career in departments of psychology, earned the highest prize in economics. This was a reminder that some of the best insights into economic behavior could be (and had been) gleaned outside of the discipline.

Jason Zweig (author of Your Money and Your Brain)

… nothing amazed me more about Danny than his ability to detonate what we had just done.

Anyone who has ever collaborated with him tells a version of this story: You go to sleep feeling that Danny and you had done important and incontestably good work that day. You wake up at a normal human hour, grab breakfast, and open your email. To your consternation, you see a string of emails from Danny, beginning around 2:30 a.m. The subject lines commence in worry, turn darker, and end around 5 a.m. expressing complete doubt about the previous day’s work.

You send an email asking when he can talk; you assume Danny must be asleep after staying up all night trashing the chapter. Your cellphone rings a few seconds later. “I think I figured out the problem,” says Danny, sounding remarkably chipper. “What do you think of this approach instead?”

The next thing you know, he sends a version so utterly transformed that it is unrecognizable: It begins differently, it ends differently, it incorporates anecdotes and evidence you never would have thought of, it draws on research that you’ve never heard of. If the earlier version was close to gold, this one is hewn out of something like diamond: The raw materials have all changed, but the same ideas are somehow illuminated with a sharper shift of brilliance.

The first time this happened, I was thunderstruck. How did he do that? How could anybody do that? When I asked Danny how he could start again as if we had never written an earlier draft, he said the words I’ve never forgotten: “I have no sunk costs.”

William Poundstone (author of Are You Smart Enough to Work at Google?)

As a writer of nonfiction I’m often in the position of trying to connect the dots—to draw grand conclusions from small samples. Do three events make a trend? Do three quoted sources justify a conclusion? Both are maxims of journalism. I try to keep in mind Kahneman and Tversky’s Law of Small Numbers. It warns that small samples aren’t nearly so informative, in our uncertain world, as intuition counsels.

Cass R. Sunstein (author of Why Nudge?)

These ideas are hardly Kahneman’s most well-known, but they are full of implications, and we have only started to understand them.

1. The outrage heuristic. People’s judgments about punishment are a product of outrage, which operates as a shorthand for more complex inquiries that judges and lawyers often think relevant. When people decide about appropriate punishment, they tend to ask a simple question: How outrageous was the underlying conduct? It follows that people are intuitive retributivists, and also that utilitarian thinking will often seem uncongenial and even outrageous.

2. Scaling without a modulus. Remarkably, it turns out that people often agree on how outrageous certain misconduct is (on a scale of 1 to 8), but also remarkably, their monetary judgments are all over the map. The reason is that people do not have a good sense of how to translate their judgments of outrage onto the monetary scale. As Kahneman shows, some work in psychophysics explains the problem: People are asked to “scale without a modulus,” and that is an exceedingly challenging task. The result is uncertainty and unpredictability. These claims have implications for numerous questions in law and policy, including the award of damages for pain and suffering, administrative penalties, and criminal sentences.

3. Rhetorical asymmetry. In our work on jury awards, we found that deliberating juries typically produce monetary awards against corporate defendants that are higher, and indeed much higher, than the median award of the individual jurors before deliberation began. Kahneman’s hypothesis is that in at least a certain category of cases, those who argue for higher awards have a rhetorical advantage over those who argue for lower awards, leading to a rhetorical asymmetry. The basic idea is that in light of social norms, one side, in certain debates, has an inherent advantage – and group judgments will shift accordingly. A similar rhetorical asymmetry can be found in groups of many kinds, in both private and public sectors, and it helps to explain why groups move.

4. Predictably incoherent judgments. We found that when people make moral or legal judgments in isolation, they produce a pattern of outcomes that they would themselves reject, if only they could see that pattern as a whole. A major reason is that human thinking is category-bound. When people see a case in isolation, they spontaneously compare it to other cases that are mainly drawn from the same category of harms. When people are required to compare cases that involve different kinds of harms, judgments that appear sensible when the problems are considered separately often appear incoherent and arbitrary in the broader context. In my view, Kahneman’s idea of predictable incoherence has yet to be adequately appreciated; it bears both on fiscal policy and on regulation.

Phil Rosenzweig

For years, there were (as the old saying has it) two kinds of people: those relatively few of us who were aware of the work of Danny Kahneman and Amos Tversky, and the much more numerous who were not. Happily, the balance is now shifting, and more of the general public has been able to hear directly a voice that is in equal measures wise and modest.

Sendhil Mullainathan (author of Scarcity: Why Having Too Little Means So Much)

… Kahneman and Tversky’s early work opened this door exactly because it was not what most people think it was. Many think of this work as an attack on rationality (often defined in some narrow technical sense). That misconception still exists among many, and it misses the entire point of their exercise. Attacks on rationality had been around well before Kahneman and Tversky—many people recognized that the simplifying assumptions of economics were grossly over-simplifying. Of course humans do not have infinite cognitive abilities. We are also not as strong as gorillas, as fast as cheetahs, and cannot swim like sea lions. But we do not therefore say that there is something wrong with humans. That we have limited cognitive abilities is both true and no more helpful to doing good social science than to acknowledge our weakness as swimmers. Pointing it out did not open any new doors.

Kahneman and Tversky’s work did not just attack rationality, it offered a constructive alternative: a better description of how humans think. People, they argued, often use simple rules of thumb to make judgments, which incidentally is a pretty smart thing to do. But this is not the insight that left us one step from doing behavioral economics. The breakthrough idea was that these rules of thumb could be catalogued. And once understood they can be used to predict where people will make systematic errors. Those two words are what made behavioral economics possible.

Nassim Taleb (author of Antifragile)

Here is an insight Danny K. triggered that changed the course of my work. I figured out a nontrivial problem in randomness and its underestimation a decade ago while reading the following sentence in a 1986 paper by Kahneman and Miller:

A spectator at a weight lifting event, for example, will find it easier to imagine the same athlete lifting a different weight than to keep the achievement constant and vary the athlete’s physique.

This idea of varying one side, not the other, also applies to mental simulations of future (random) events, when people engage in projections of different counterfactuals. Authors and managers have a tendency to take one variable as fixed, sort of a numeraire, and perturbate the other, as a default in mental simulations. One side is going to be random, not the other.

It hit me that the mathematical consequence is vastly more severe than it appears. Kahneman and colleagues focused on the bias in the choice of which variable is treated as random. But the paper set off in my mind the following realization: now what if we were to go one step beyond and perturbate both? The response would be nonlinear. I had never considered the effect of such nonlinearity before, nor seen it explicitly made in the literature on risk and counterfactuals. And you never encounter one single random variable in real life; there are many things moving together.

Increasing the number of random variables compounds the number of counterfactuals and causes more extremes—particularly in fat-tailed environments (i.e., Extremistan): imagine perturbating by producing a lot of scenarios and, in one of the scenarios, increasing the weights of the barbell and decreasing the bodyweight of the weightlifter. This compounding would produce an extreme event of sorts. Extreme, or tail events (Black Swans) are therefore more likely to be produced when both variables are random, that is real life. Simple.
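Taleb's compounding point can be sketched with a quick simulation (the base numbers and 10% noise scale are purely illustrative, not from the text): perturb only the barbell weight while holding the lifter's physique fixed, then perturb both sides at once, and compare the extreme scenarios for the weight-to-bodyweight ratio.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Multiplicative (lognormal) noise at roughly 10% scale -- illustrative only.
weight_noise = rng.normal(0.0, 0.10, n)
body_noise = rng.normal(0.0, 0.10, n)

base_weight, base_body = 100.0, 80.0  # kg, hypothetical

# Scenario A: perturb only the weight; the lifter's physique stays fixed.
ratio_one = base_weight * np.exp(weight_noise) / base_body

# Scenario B: perturb both variables at once.
ratio_both = base_weight * np.exp(weight_noise) / (base_body * np.exp(body_noise))

# The 99.9th-percentile scenario is noticeably more extreme when both
# variables are random -- extra sources of randomness compound the tails.
print(np.quantile(ratio_one, 0.999), np.quantile(ratio_both, 0.999))
```

The same comparison holds in the lower tail: letting both sides vary widens both extremes relative to varying one side alone.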

Now, in the real world we never face one variable without something else with it. In academic experiments, we do. This is the serious difference between the laboratory (or the casino’s “ludic” setup) and real life, between academia and real life. And such difference is, sort of, tractable.

… Say you are the manager of a fertilizer plant. You try to issue various projections of the sales of your product—like the weights in the weightlifter’s story. But you also need to keep in mind that there is a second variable to perturbate: what happens to the competition—you do not want them to be lucky, invent better products, or find cheaper technologies. So not only do you need to predict your fate (with errors) but also that of the competition (also with errors). And the variances from these errors add arithmetically when one focuses on differences.
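The last claim, that the error variances add when you focus on differences, is just Var(X − Y) = Var(X) + Var(Y) for independent errors. A minimal simulation with made-up forecast-error sizes (standard deviations of 3 and 4 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Your sales-forecast error and the competitor-forecast error,
# modeled as independent noise (the scales are illustrative).
your_error = rng.normal(0.0, 3.0, n)
competitor_error = rng.normal(0.0, 4.0, n)

# What you actually care about is the difference between your fate
# and the competition's. For independent errors the variances add:
# 3**2 + 4**2 = 25.
difference = your_error - competitor_error
print(difference.var())  # close to 25
```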

Rory Sutherland

When I met Danny in London in 2009 he diffidently said that the only hope he had for his work was that “it might lead to a better kind of gossip”—where people discuss each other’s motivations and behaviour in slightly more intelligent terms. To someone from an industry where a new flavour-variant of toothpaste is presented as being an earth-changing event, this seemed an incredibly modest aspiration for such important work.

However, if this was his aim, he has surely succeeded. When I meet people, I now use what I call “the Kahneman heuristic”. You simply ask people “Have you read Danny Kahneman’s book?” If the answer is yes, you know (p>0.95) that the conversation will be more interesting, wide-ranging and open-minded than otherwise.

And it then occurred to me that his aim—for better conversations—was perhaps not modest at all. Multiplied a millionfold it may be very important indeed. In the social sciences, I think it is fair to say, the good ideas are not always influential and the influential ideas are not always good. Kahneman’s work is now both good and influential.

Mindwise: How We Understand What Others Think, Believe, Feel, and Want

“The main problem is that we think we understand the minds of others, and even our own mind, better than we actually do.”

Our lives are guided by our inferences about what others think, believe, feel, and want. Despite the fact that I make such inferences countless times a day, I’m sometimes terrible at it. Understanding the minds of others is one of the keys to social success. With that in mind, I read Nicholas Epley’s new book Mindwise: How We Understand What Others Think, Believe, Feel, and Want.

While we can understand what others think, believe, and feel, sometimes we’re wrong. The book’s goal is to bring “your brain’s greatest ability out of the shadows and into the light of scientific inspection.”

That ability is our sixth sense.

I am going to tell you about the kind of mind reading you do intuitively every day of your life, dozens of times a day, when you infer what others are thinking, feeling, wanting, or intending. The kind that enables you to build and maintain the intimate relationships that make life worth living, to maintain a desired reputation in the eyes of others, to work effectively in teams, and to outwit and outlast your competitors. The kind that forms the foundation of all social interaction, creating the web of presumptions and assumptions that enables large societies to function.

This sixth sense is always on. A great example is the feeling you get when a co-worker calls in sick and you’re confident they’re lying. In this way Epley believes we’re all mind readers.

It’s easy to understand why. You and I are members of one of the most social species on the planet. No human being succeeds in life alone. Getting along and getting ahead requires coordinating with others, either in cooperation as friends, spouses, teammates, and coworkers, or in competition as adversaries, opponents, or rivals.

We’re so good at this sixth sense that it operates at an almost unconscious level. As philosopher Jerry Fodor has written, “Commonsense psychology works so well, it disappears.” Some of us, however, are better at mind reading and social understanding than others.

That we cannot read anyone’s mind perfectly does not mean we are never accurate, of course, but our mistakes are especially interesting because they are a major source of wreckage in our relationships, careers, and lives, leading to needless conflict and misunderstanding. Our mistakes lead to ineffective solutions to some of society’s biggest problems, and they can send nations into needless wars with the worst of consequences.

Our mistakes are somewhat predictable and therefore, argues Epley, correctable. They happen in two ways:

Our mistakes come from the two most basic questions that underlie any social interaction. First, does “it” have a mind? And second, what state is that other mind in?

We can make mistakes with the first question by failing to engage our mind-reading ability when we should, thereby failing to consider the mind of another and running the risk of treating him or her like a relatively mindless animal or object. These mistakes are at the heart of dehumanization. But we can also make mistakes by engaging our ability when we shouldn’t, thereby attributing a mind to something that is actually mindless.

Once we’re trying to read the minds of others, we can make mistakes with the second question by misunderstanding others’ thoughts, beliefs, attitudes, or emotions, thereby misunderstanding what state another mind is in. Our most common mistakes come from excessive egocentrism, overreliance on stereotypes, and an all-too-easy assumption that others’ minds match their actions …

All of these mistakes have the same basic consequence of leading us to think that others’ minds are more simplistic than they actually are.

Let’s take a closer look at when we “fail to recognize the fully human mind of another person,” which is the essence of dehumanization. This can happen “any time you fail to attend to the mind of another person, because this can also lead you to believe that another person has weaker mental capacities than you do: a lesser mind.” In the book Epley describes how doctors used to believe that children could not feel pain, how employers often think of employees as mindless (which leads bosses to overestimate the importance of money and underestimate intrinsic incentives like autonomy, pride, and mastery), and how we generally lack consideration for other people in certain social settings.

Enemies think of each other as unfeeling savages. Consider the story of how Brut Champagne got its name.

When the French began making champagne for the British, the champagne makers quickly learned that the Brits preferred much drier champagne than the French did. In fact, the French found this version to be unpalatable. They named this inferior champagne brut sauvage, for who could have such unsophisticated preferences other than a savage brut? The joke was eventually on the French: brut is now the most popular variety of champagne in the world.

The Lens Problem
We have a lens problem. The lens shapes what we see. And we react to what we see. “I’m right, and you’re biased.”

The lens in your eye filters light onto your retina, allowing you to see the world before your eye. Likewise, our own minds serve as a lens made up of beliefs, attitudes, and knowledge through which we perceive the world. If you look at an object through two different lenses, such as a telescope versus a microscope, then the very same object will look very different.

… This is a problem because you look through a lens rather than at it directly, which can make it hard to tell that your vision is being affected by it. Similarly, research makes it very clear that people have a hard time recognizing the ways in which their own perceptions are biased by the interpretive lens of beliefs, attitudes, and knowledge that they view it through. This handicaps our ability to understand the minds of others in two ways. First, people tend to overestimate the extent to which others believe, think, and feel as they do. This kind of egocentrism is a chronic mistake. Second, when people find out that others perceive the world differently than they do, the inability to recognize one’s own bias leads people to think that others are the ones who are biased. The fingerprints of the lens problem are at the center of almost any difference of opinion.

Remember Kathryn Schulz on what happens when someone disagrees with us?

Perspective Taking
Most of us are taught that we should put ourselves in the shoes of others to better understand their thoughts and feelings, but this may not be the best strategy.

Everyone from Dale Carnegie to Barack Obama has suggested that the true way to understand other people is to honestly put yourself in another person’s shoes. My research, however, suggests that this does little or nothing to increase how accurately you understand the minds of others. The main problem with this solution to social misunderstanding is that it relies completely on being able to use knowledge that a person already has in his or her head to understand another’s perspective. But if you have a mistaken understanding of another person to begin with, then no amount of perspective taking is going to make your judgment systematically more accurate. When we ask husbands and wives, for instance, to predict each other’s attitudes, those we tell to adopt the other person’s perspective as honestly as they can actually become a little less accurate than those who do not adopt the other person’s perspective. In conflict, we find in our research that opposing sides tend to misunderstand each other even more when we ask them to honestly adopt the other side’s perspective. Perspective taking can have many beneficial consequences in social life, but systematically making people understand each other better does not seem to be one of them.

Epley prefers “perspective getting” to increase understanding.

If you actually want to understand the mind of another person, you have to get that person’s perspective as directly as you possibly can. You do that in one of two ways, either by being the other person or by having the other person tell you honestly and openly what’s actually on his or her mind. Court judges understand, for instance, what waterboarding feels like when they actually experience it directly, as the journalist Christopher Hitchens did, or when they listen to another person’s honest report of the experience directly, as you can by reading Christopher Hitchens’ account of his experience in Vanity Fair.

The Mind is a Beautiful Thing

You’ve never actually seen a belief, smelled an attitude, or poked a feeling. No intention has ever walked past you on the sidewalk. You can’t weigh a want. Like atoms before electron microscopes, minds are inferred rather than observed. They exist only as a theory each of us uses to explain both our own and other people’s behaviour. … But what a marvelous theory it is. Human beings have been explaining one another for millennia without ever referencing a single neuron because the sense we’ve evolved is of such practical value. Mental concepts like attitudes, beliefs, intentions, and preferences are so highly correlated with whatever is actually going on in the brain that we can use our theory about other people’s minds to predict their behaviour.

And in part, this ability to reason about the minds of others is what makes us human. We live in groups, large or small, and key to those social relationships is understanding people’s thoughts, beliefs, emotions, etc. The bigger the group, the harder this is. Not only do you have to keep track of more people but you have to keep track of more possibilities.

Mindwise is a fascinating exploration of understanding other people.