
Defensive Decision Making: What IS Best v. What LOOKS Best

“It wasn’t the best decision we could make,” said one of my old bosses, “but it was the most defensible.”

What she meant was that she wanted to choose option A but ended up choosing option B because it was the defensible default. She realized that if she chose option A and something went wrong, it would be hard to explain because it was outside the norm. On the other hand, if she chose option A and everything went right, she’d get virtually no upside. A good outcome was merely expected, but a bad outcome would have significant consequences for her. The decision she landed on wasn’t the one she would have made if she owned the entire company. Since she didn’t, she wanted to protect her downside. When incentives are asymmetric like this, defensive decisions protect the person making them.

My friend and advertising legend Rory Sutherland calls defensive decisions the Heathrow Option. Americans might think of it as the IBM Option. There’s a story behind this:

A while ago, British Airways noticed that personal assistants were reluctant to book their bosses on flights from London City Airport to JFK. They almost always picked Heathrow, which was farther away and harder to get to. Rory’s explanation was that while “flying from London City might be better on average,” “because it was a non-standard option, if anything were to go wrong, you were much more likely to get it in the neck.”

Of course, if you book your boss to fly out of Heathrow—the default—and the flight is delayed, they’ll blame the airline and not you. But if you opted for London City Airport, they’d blame you.

At first glance, it might seem like defensive decision making is irrational. It’s actually perfectly rational when you consider the asymmetry involved. This asymmetry also offers insight into why cultures rarely change.

Some decisions place decision makers in situations where the outcomes offer little upside and massive downside. In these cases, a great outcome might carry a 1% upside, a good outcome is neutral, and a poor outcome carries at least a 20% downside, if it doesn’t get you fired.
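
To see why the defensive choice is rational for the individual, here is a back-of-the-envelope sketch using the rough payoffs above; the success rate is my own assumption, purely for illustration.

```python
# Illustrative only: expected *personal* payoff of picking the non-default option,
# using the rough payoffs from the paragraph above and an assumed success rate.
p_success = 0.9     # assume the non-default choice works out 9 times in 10
upside = 0.01       # ~1% personal upside when it works
downside = -0.20    # ~20% personal downside when it doesn't

expected = p_success * upside + (1 - p_success) * downside
print(f"Expected personal payoff: {expected:+.1%}")  # -> -1.1%: the default wins
```

Even when the non-default option succeeds nine times out of ten, the expected personal payoff is negative, so the defensible default is the individually rational pick.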

It’s easy to see why people opt for the default choice in these cases. If you do something that’s different—and thus hard to defend—and it works out, you’ve risked a lot for very little gain. If you do something that’s different and it doesn’t work out, you might find yourself unemployed.

This asymmetry explains why your boss, who has nice rhetoric about challenging norms and thinking outside the box, is likely to continue with the status quo rather than change things. After all, why would they risk looking like a fool by doing something different? It’s much easier to protect themselves. Defaults give people a possible out, a way to avoid being held accountable for their decisions if things go wrong. You can distance yourself from your decision and perhaps be safe from the consequences of a poor outcome.

Doing the safe thing is not the same as doing the right thing. Often, the problem with the safe thing is that there is no growth, no innovation; it’s churning out more of the same. In the short term the default may look like the better choice for your job security, but over the long game it costs you. When you are unwilling to take risks, you stop recognizing opportunities. If you aren’t willing to put yourself out there for a 1% gain, how do you grow? After all, 1% upsides are far more common than 50% upsides. And if you become afraid of any downside at all, what level of risk would ever be acceptable? It’s not that choosing the default makes you a bad person. But a lifetime of opting for the default limits your opportunities and your potential.

And for anyone who owns a company, a staff full of default decision makers is a death knell. You get amazing results when people have the space to take risks and not be penalized for every downside.


Rory Sutherland Offers 4 Interesting Reads


I asked Rory Sutherland (Vice Chairman, Ogilvy & Mather) what books stood out for him last year. I’ve had the privilege of chatting with Rory a few times now, and I think you’ll agree that, like most farnamstreeters, he’s not only exceptionally smart but also an awesome person.

I think you’ll enjoy his reply:

Gerd Gigerenzer’s Risk Savvy: How to Make Good Decisions is a wonderful book; the concept of defensive decision-making which he describes within it is alone worth the cover price. As an additional bonus, you get a very valuable lesson in the interpretation of statistics, a field of mathematics which – I think it is now almost universally agreed – is given too little time and attention in schools.

Pathological Altruism, edited by Barbara Oakley et al., is a wonderfully broad book, but built around a single insight: just as apparently self-interested acts can have benign consequences, the reverse is also true. We tend to think that altruism is something to be maximised, but in fact it needs to be calibrated. A very important book.

Peter Thiel’s Zero to One: Notes on Startups, or How to Build the Future is an excellent book from someone who seems to understand what Fitzgerald called “the whole equation” of a business: in this case it isn’t movies but technology. A very enjoyable book of just the right length.

Finally I immensely enjoyed the manuscript of Richard Thaler’s upcoming book Misbehaving: The Making of Behavioral Economics. I have not laughed so much in ages as when reading his chapter describing how the Economics Faculty of the University of Chicago tried to agree on the allocation of offices in their new building. No, it did not go well.

A Discussion on the Work of Daniel Kahneman

Edge.org asked the likes of Christopher Chabris, Nicholas Epley, Jason Zweig, William Poundstone, Cass Sunstein, Phil Rosenzweig, Richard Thaler & Sendhil Mullainathan, Nassim Nicholas Taleb, Steven Pinker, and Rory Sutherland among others: “How has Kahneman’s work influenced your own? What step did it make possible?”

Kahneman’s work is summarized in the international best-seller Thinking, Fast and Slow.

Here are some select excerpts that I found interesting.

Christopher Chabris (author of The Invisible Gorilla)

There’s an overarching lesson I have learned from the work of Danny Kahneman, Amos Tversky, and their colleagues who collectively pioneered the modern study of judgment and decision-making: Don’t trust your intuition.

Jennifer Jacquet

After what I see as years of hard work, experiments of admirable design, lucid writing, and quiet leadership, Kahneman, a man who spent the majority of his career in departments of psychology, earned the highest prize in economics. This was a reminder that some of the best insights into economic behavior could be (and had been) gleaned outside of the discipline.

Jason Zweig (author of Your Money and Your Brain)

… nothing amazed me more about Danny than his ability to detonate what we had just done.

Anyone who has ever collaborated with him tells a version of this story: You go to sleep feeling that Danny and you had done important and incontestably good work that day. You wake up at a normal human hour, grab breakfast, and open your email. To your consternation, you see a string of emails from Danny, beginning around 2:30 a.m. The subject lines commence in worry, turn darker, and end around 5 a.m. expressing complete doubt about the previous day’s work.

You send an email asking when he can talk; you assume Danny must be asleep after staying up all night trashing the chapter. Your cellphone rings a few seconds later. “I think I figured out the problem,” says Danny, sounding remarkably chipper. “What do you think of this approach instead?”

The next thing you know, he sends a version so utterly transformed that it is unrecognizable: It begins differently, it ends differently, it incorporates anecdotes and evidence you never would have thought of, it draws on research that you’ve never heard of. If the earlier version was close to gold, this one is hewn out of something like diamond: The raw materials have all changed, but the same ideas are somehow illuminated with a sharper shaft of brilliance.

The first time this happened, I was thunderstruck. How did he do that? How could anybody do that? When I asked Danny how he could start again as if we had never written an earlier draft, he said the words I’ve never forgotten: “I have no sunk costs.”

William Poundstone (author of Are You Smart Enough to Work at Google?)

As a writer of nonfiction I’m often in the position of trying to connect the dots—to draw grand conclusions from small samples. Do three events make a trend? Do three quoted sources justify a conclusion? Both are maxims of journalism. I try to keep in mind Kahneman and Tversky’s Law of Small Numbers. It warns that small samples aren’t nearly so informative, in our uncertain world, as intuition counsels.

Cass R. Sunstein (Author, Why Nudge?)

These ideas are hardly Kahneman’s most well-known, but they are full of implications, and we have only started to understand them.

1. The outrage heuristic. People’s judgments about punishment are a product of outrage, which operates as a shorthand for more complex inquiries that judges and lawyers often think relevant. When people decide about appropriate punishment, they tend to ask a simple question: How outrageous was the underlying conduct? It follows that people are intuitive retributivists, and also that utilitarian thinking will often seem uncongenial and even outrageous.

2. Scaling without a modulus. Remarkably, it turns out that people often agree on how outrageous certain misconduct is (on a scale of 1 to 8), but also remarkably, their monetary judgments are all over the map. The reason is that people do not have a good sense of how to translate their judgments of outrage onto the monetary scale. As Kahneman shows, some work in psychophysics explains the problem: People are asked to “scale without a modulus,” and that is an exceedingly challenging task. The result is uncertainty and unpredictability. These claims have implications for numerous questions in law and policy, including the award of damages for pain and suffering, administrative penalties, and criminal sentences.

3. Rhetorical asymmetry. In our work on jury awards, we found that deliberating juries typically produce monetary awards against corporate defendants that are higher, and indeed much higher, than the median award of the individual jurors before deliberation began. Kahneman’s hypothesis is that in at least a certain category of cases, those who argue for higher awards have a rhetorical advantage over those who argue for lower awards, leading to a rhetorical asymmetry. The basic idea is that in light of social norms, one side, in certain debates, has an inherent advantage – and group judgments will shift accordingly. A similar rhetorical asymmetry can be found in groups of many kinds, in both private and public sectors, and it helps to explain why groups move.

4. Predictably incoherent judgments. We found that when people make moral or legal judgments in isolation, they produce a pattern of outcomes that they would themselves reject, if only they could see that pattern as a whole. A major reason is that human thinking is category-bound. When people see a case in isolation, they spontaneously compare it to other cases that are mainly drawn from the same category of harms. When people are required to compare cases that involve different kinds of harms, judgments that appear sensible when the problems are considered separately often appear incoherent and arbitrary in the broader context. In my view, Kahneman’s idea of predictable incoherence has yet to be adequately appreciated; it bears both on fiscal policy and on regulation.

Phil Rosenzweig

For years, there were (as the old saying has it) two kinds of people: those relatively few of us who were aware of the work of Danny Kahneman and Amos Tversky, and the much more numerous who were not. Happily, the balance is now shifting, and more of the general public has been able to hear directly a voice that is in equal measures wise and modest.

Sendhil Mullainathan (Author of Scarcity: Why Having Too Little Means So Much)

… Kahneman and Tversky’s early work opened this door exactly because it was not what most people think it was. Many think of this work as an attack on rationality (often defined in some narrow technical sense). That misconception still exists among many, and it misses the entire point of their exercise. Attacks on rationality had been around well before Kahneman and Tversky—many people recognized that the simplifying assumptions of economics were grossly over-simplifying. Of course humans do not have infinite cognitive abilities. We are also not as strong as gorillas, as fast as cheetahs, and cannot swim like sea lions. But we do not therefore say that there is something wrong with humans. That we have limited cognitive abilities is both true and no more helpful to doing good social science than to acknowledge our weakness as swimmers. Pointing it out did not open any new doors.

Kahneman and Tversky’s work did not just attack rationality, it offered a constructive alternative: a better description of how humans think. People, they argued, often use simple rules of thumb to make judgments, which incidentally is a pretty smart thing to do. But this is not the insight that left us one step from doing behavioral economics. The breakthrough idea was that these rules of thumb could be catalogued. And once understood they can be used to predict where people will make systematic errors. Those two words are what made behavioral economics possible.

Nassim Taleb (Author of Antifragile)

Here is an insight Danny K. triggered that changed the course of my work. I figured out a nontrivial problem in randomness and its underestimation a decade ago while reading the following sentence in a 1986 paper by Kahneman and Miller:

A spectator at a weight lifting event, for example, will find it easier to imagine the same athlete lifting a different weight than to keep the achievement constant and vary the athlete’s physique.

This idea of varying one side, not the other also applies to mental simulations of future (random) events, when people engage in projections of different counterfactuals. Authors and managers have a tendency to take one variable for fixed, sort-of a numeraire, and perturbate the other, as a default in mental simulations. One side is going to be random, not the other.

It hit me that the mathematical consequence is vastly more severe than it appears. Kahneman and colleagues focused on the bias that variable of choice is not random. But the paper set off in my mind the following realization: now what if we were to go one step beyond and perturbate both? The response would be nonlinear. I had never considered the effect of such nonlinearity earlier nor seen it explicitly made in the literature on risk and counterfactuals. And you never encounter one single random variable in real life; there are many things moving together.

Increasing the number of random variables compounds the number of counterfactuals and causes more extremes—particularly in fat-tailed environments (i.e., Extremistan): imagine perturbating by producing a lot of scenarios and, in one of the scenarios, increasing the weights of the barbell and decreasing the bodyweight of the weightlifter. This compounding would produce an extreme event of sorts. Extreme, or tail events (Black Swans) are therefore more likely to be produced when both variables are random, that is real life. Simple.
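
As an illustration of the compounding (a sketch of my own, not Taleb’s): even with thin-tailed Gaussian variables, letting both sides move produces many more outcomes beyond a fixed “extreme” threshold than letting only one side move; in fat-tailed environments the effect is stronger still.

```python
# A minimal sketch: perturbing one variable vs. perturbing both.
# "margin" is a hypothetical outcome driven by two quantities, e.g. a lifter's
# strength minus the barbell's weight. The threshold defining an "extreme"
# outcome is held fixed in both cases.
import random

random.seed(42)
N = 100_000
SIGMA = 1.0
THRESHOLD = 3 * SIGMA

def margin(perturb_both: bool) -> float:
    weight = random.gauss(0, SIGMA)                        # always random
    strength = random.gauss(0, SIGMA) if perturb_both else 0.0
    return strength - weight

for both in (False, True):
    extremes = sum(abs(margin(both)) > THRESHOLD for _ in range(N))
    label = "both variables random" if both else "one variable random"
    print(f"{label}: {extremes} outcomes beyond the threshold out of {N:,}")
```

With one variable random, roughly 0.3% of outcomes cross the threshold; with both random, it is closer to 3%, an order of magnitude more tail events from the same threshold.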

Now, in the real world we never face one variable without something else with it. In academic experiments, we do. This marks the serious difference between the laboratory (or the casino’s “ludic” setup) and real life, the difference between academia and real life. And such difference is, sort of, tractable.

… Say you are the manager of a fertilizer plant. You try to issue various projections of the sales of your product—like the weights in the weightlifter’s story. But you also need to keep in mind that there is a second variable to perturbate: what happens to the competition—you do not want them to be lucky, to invent better products or cheaper technologies. So not only do you need to predict your own fate (with errors) but also that of the competition (also with errors). And the variances from these errors add arithmetically when one focuses on differences.
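
That last sentence is just the familiar rule that independent errors add in variance when the quantity of interest is a difference: Var(X − Y) = Var(X) + Var(Y). A quick numerical check, as an illustration of my own:

```python
# Quick check (illustration only) that independent forecast errors add in
# variance when what you care about is a difference, e.g. our sales minus
# the competition's.
import random
import statistics

random.seed(0)
ours = [random.gauss(0, 1.0) for _ in range(200_000)]    # our forecast error
theirs = [random.gauss(0, 1.0) for _ in range(200_000)]  # competitor's error
diff = [a - b for a, b in zip(ours, theirs)]

print(round(statistics.variance(diff), 2))  # ~2.0, i.e. 1.0 + 1.0
```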

Rory Sutherland

When I met Danny in London in 2009 he diffidently said that the only hope he had for his work was that “it might lead to a better kind of gossip”—where people discuss each other’s motivations and behaviour in slightly more intelligent terms. To someone from an industry where a new flavour-variant of toothpaste is presented as being an earth-changing event, this seemed an incredibly modest aspiration for such important work.

However, if this was his aim, he has surely succeeded. When I meet people, I now use what I call “the Kahneman heuristic”. You simply ask people “Have you read Danny Kahneman’s book?” If the answer is yes, you know (p>0.95) that the conversation will be more interesting, wide-ranging and open-minded than otherwise.

And it then occurred to me that his aim—for better conversations—was perhaps not modest at all. Multiplied a millionfold, it may be very important indeed. In the social sciences, I think it is fair to say, the good ideas are not always influential and the influential ideas are not always good. Kahneman’s work is now both good and influential.

Decision Making Psychology with Rory Sutherland

Below are three excerpts from a great interview with Rory Sutherland on decision making psychology.

Understanding Human Behavior

That attempt to model economic behaviour as though it were Newtonian physics was responsible for many past mistakes. This is closer to weather forecasting than to conventional physics as a science. But it is still a science and can still make progress like a science. And the great news is that we are starting from such a low base. If our ability to understand and predict human behaviour only improves by a few percent a decade, the benefits will be immense. And even a tiny reduction in misdirected effort (by abandoning daft, ineffectual sunk-cost-plagued endeavours such as the war on drugs or, at a more modest level, badly conceived choice-architectures in a new range of cars) can be economically transformative.

The Physical Fallacy

The problem we all face is “The physical fallacy”. All of us, even those in the social sciences, have an innate bias where we are happier fixing problems with stuff, rather than with psychological solutions – building faster trains rather than putting wifi on existing trains, to use my oft-cited example. But as Benjamin Franklin (no mean decision scientist himself) remarked, “There are two ways of being happy: We must either diminish our wants or augment our means – either may do. The result is the same and it is for each man to decide for himself and to do that which happens to be easier.”

There is no reason to prefer one solution over another simply because it involves solid matter rather than grey matter. This is an interesting area where the advertising industry and the environmental movement (rarely seen as natural bedfellows) sometimes find common ground. Intangible value is the best kind of value – since the materials needed to create it are not in short supply.

Marketing and Advertising

If you need to understand why marketing and advertising (and reputation and brands) are important to the functioning of markets, Akerlof’s paper “The Market for Lemons” is essential reading. So too is his excellent and underread book “Identity Economics” written with Rachel Kranton. The problem is not with economics as practiced by great economists – it is the unquestioning adherence to the dumber assumptions of Basic Economics 101 as unthinkingly absorbed by the product of a thousand business schools.

You are particularly made aware of the pernicious influence of bad economics if you work in advertising. Even when advertising demonstrably works and is highly cost-effective, people in finance and on the boards of companies don’t seem to like it very much. Since they have a mental model of the world in which everyone has perfect information, they have of course constructed in their heads a vision of the world in which marketing shouldn’t exist.

The Noise Bottleneck: When More Information is Harmful


When consuming information, we strive for more signal and less noise. The problem is a cognitive illusion: we feel like the more information we consume the more signal we receive.

While this is probably true on an absolute basis, Nassim Taleb argues in this excerpt from Antifragile that it is not true on a relative basis. He calls this the noise bottleneck.

Taleb argues that as you consume more data and the ratio of noise to signal increases, the less you know about what’s going on and the more inadvertent trouble you are likely to cause.

***

The Institutionalization Of Neuroticism

Imagine someone of the type we call neurotic in common parlance. He is wiry, looks contorted, and speaks with an uneven voice. His neck moves around when he tries to express himself. When he has a small pimple his first reaction is to assume that it is cancerous, that the cancer is of the lethal type, and that it has already spread. His hypochondria is not just in the medical department: he incurs a small setback in business and reacts as if bankruptcy were both near and certain. In the office, he is tuned to every single possible detail, systematically transforming every molehill into a mountain. The last thing you want in life is to be in the same car with him when stuck in traffic on your way to an important appointment. The expression overreact was designed with him in mind: he does not have reactions, just overreactions.

Compare him to someone with the opposite temperament, imperturbable, with the calm under fire that is considered necessary to become a leader, military commander or a mafia godfather. Usually unruffled and immune to small information —they can impress you with their self-control in difficult circumstances. For a sample of a composed, calm and pondered voice, listen to an interview of “Sammy the Bull” Salvatore Gravano who was involved in the murder of nineteen people (all competing mobsters). He speaks with minimal effort. In the rare situations when he is angry, unlike with the neurotic fellow, everyone knows it and takes it seriously.

The supply of information to which we are exposed under modernity is transforming humans from the equable second fellow to the neurotic first. For the purpose of our discussion, the second fellow only reacts to real information, the first largely to noise. The difference between the two fellows will show us the difference between noise and signal. Noise is what you are supposed to ignore; signal what you need to heed.

Indeed, we have been loosely mentioning “noise” earlier in the book; time to be precise about it. In science, noise is a generalization beyond the actual sound to describe random information that is totally useless for any purpose, and that you need to clean up to make sense of what you are listening to. Consider, for example, elements in an encrypted message that have absolutely no meaning, just randomized letters to confuse the spies, or the hiss you hear on a telephone line and that you try to ignore in order to just focus on the voice of your interlocutor.

Noise and Signal

If you want to accelerate someone’s death, give him a personal doctor.

One can see from the tonsillectomy story that access to data increases intervention —as with neuroticism. Rory Sutherland signaled to me that those with a personal doctor on staff should be particularly vulnerable to naive interventionism, hence iatrogenics; doctors need to justify their salaries and prove to themselves that they have some work ethic, something “doing nothing” doesn’t satisfy (Editor’s note: the same forces apply to leaders, managers, etc.). Indeed at the time of writing the personal doctor of the late singer Michael Jackson is being sued for something that is equivalent to overintervention-to-stifle-antifragility (but it will take the law courts a while before they become familiar with the concept). Conceivably, the same happened to Elvis Presley. So with overmedicated politicians and heads of state.

Likewise those in corporations or in policymaking (like Fragilista Greenspan) endowed with a sophisticated statistics department and therefore getting a lot of “timely” data are capable of overreacting and mistaking noise for information —Greenspan kept an eye on such fluctuations as the sales of vacuum cleaners in Cleveland “to get a precise idea about where the economy is going”, and, of course micromanaged us into chaos.

In business and economic decision-making, data causes severe side effects —data is now plentiful thanks to connectivity; and the share of spuriousness in the data increases as one gets more immersed into it. A not well discussed property of data: it is toxic in large quantities —even in moderate quantities.

The previous two chapters showed how you can use and take advantage of noise and randomness; but noise and randomness can also use and take advantage of you, particularly when totally unnatural —the data you get on the web or thanks to the media.

The more frequently you look at data, the more noise you are disproportionally likely to get (rather than the valuable part, called the signal); hence the higher the noise-to-signal ratio. And there is a confusion that is not psychological at all, but inherent in the data itself. Say you look at information on a yearly basis, for stock prices or the fertilizer sales of your father-in-law’s factory, or inflation numbers in Vladivostok. Assume further that for what you are observing, at the yearly frequency the ratio of signal to noise is about one to one (say half noise, half signal) —it means that about half of changes are real improvements or degradations, the other half comes from randomness. This ratio is what you get from yearly observations. But if you look at the very same data on a daily basis, the composition would change to 95% noise, 5% signal. And if you observe data on an hourly basis, as people immersed in the news and market price variations do, the split becomes 99.5% noise to 0.5% signal. That is two hundred times more noise than signal —which is why anyone who listens to news (except when very, very significant events take place) is one step below sucker.
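
To see roughly where figures like these come from, here is a crude model of my own (not Taleb’s): assume the true signal accumulates linearly with time while noise grows with the square root of time, and calibrate the two to be equal at a one-year horizon. The daily figure comes out close to the excerpt’s; the hourly one lands in the same order of magnitude, which is all this sketch is meant to show.

```python
# A crude illustration of how the signal share shrinks as you observe more often.
# Assumptions (mine, not from the book): signal grows linearly with the horizon,
# noise grows with the square root of the horizon, and the two are equal at one year.
import math

HOURS_PER_YEAR = 365 * 24

def signal_share(horizon_hours: float) -> float:
    t = horizon_hours / HOURS_PER_YEAR  # horizon as a fraction of a year
    signal = t
    noise = math.sqrt(t)
    return signal / (signal + noise)

for label, hours in [("yearly", HOURS_PER_YEAR), ("daily", 24), ("hourly", 1)]:
    print(f"{label:>6}: ~{signal_share(hours):.0%} signal")  # ~50%, ~5%, ~1%
```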

There is a biological story with information. I have been repeating that in a natural environment, a stressor is information. So too much information would be too much stress, exceeding the threshold of antifragility. In medicine, we are discovering the healing powers of fasting, as the avoidance of too many hormonal rushes that come with the ingestion of food. Hormones convey information to the different parts of our system and too much of it confuses our biology. Here again, as with the story of the news received at too high a frequency, too much information becomes harmful. And in Chapter x (on ethics) I will show how too much data (particularly when sterile) causes statistics to be completely meaningless.

Now let’s add the psychological to this: we are not made to understand the point, so we overreact emotionally to noise. The best solution is to only look at very large changes in data or conditions, never small ones.

Just as we are not likely to mistake a bear for a stone (but likely to mistake a stone for a bear), it is almost impossible for someone rational with a clear, uninfected mind, one who is not drowning in data, to mistake a vital signal, one that matters for his survival, for noise. Significant signals have a way to reach you. In the tonsillectomies, the best filter would have been to only consider the children who are very ill, those with periodically recurring throat inflammation.

There was even more noise coming from the media and its glorification of the anecdote. Thanks to it, we are living more and more in virtual reality, separated from the real world, a little bit more every day, while realizing it less and less. Consider that every day, 6,200 persons die in the United States, many of preventable causes. But the media only reports the most anecdotal and sensational cases (hurricanes, freak incidents, small plane crashes) giving us a more and more distorted map of real risks. In an ancestral environment, the anecdote, the “interesting” is information; no longer today. Likewise, by presenting us with explanations and theories the media induces an illusion of understanding the world.

And the understanding of events (and risks) on the part of members of the press is so retrospective that they would put the security checks after the plane ride, or what the ancients call post bellum auxilium, send troops after the battle. Owing to domain dependence, we forget the need to check our map of the world against reality. So we are living in a more and more fragile world, while thinking it is more and more understandable.

To conclude, the best way to mitigate interventionism is to ration the supply of information, as naturalistically as possible. This is hard to accept in the age of the internet. It has been very hard for me to explain that the more data you get, the less you know what’s going on, and the more iatrogenics you will cause.

***

The noise bottleneck is really a paradox. We think the more information we consume, the more signal we’ll receive. Only the mind doesn’t work like that. When the volume of information increases, our ability to separate the relevant from the irrelevant becomes compromised. We place too much emphasis on irrelevant data and lose sight of what’s really important.

Still Curious? Read The Pot Belly of Ignorance.
