Tag: Loss aversion

Elon Musk on Regulators

The Federal Aviation Administration had a meeting with Elon Musk they won’t forget. Musk met with the agency to discuss approvals for the work one of his companies, SpaceX, was doing. The meeting reads like an episode of Dilbert: the FAA responded in the kind of double-speak that only governments seem to master. So what did Musk do? He told one of their experts he was wrong.

“His manager sent me this long email about how he had been in the shuttle program and in charge of 20 launches or something like that and how dare I say that the other guy was wrong,” Musk says in Ashlee Vance’s book Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future.

“Not only is he wrong,” Musk says, “let me rearticulate the reasons. We’re trying to have a really big impact in the space industry. If the rules are such that you can’t make progress, then you have to fight the rules.”

And then he nails the fundamental problem with regulators.

There is a fundamental problem with regulators. If a regulator agrees to change a rule and something bad happens, they can easily lose their career. Whereas if they change a rule and something good happens, they don’t even get a reward. So, it’s very asymmetric. It’s then very easy to understand why regulators resist changing the rules. It’s because there’s a big punishment on one side and no reward on the other. How would any rational person behave in such a scenario?

The asymmetry he’s talking about is loss aversion. And it doesn’t stop at regulators; it extends into other areas as well. The same principle applies to most CEOs, managers, and leaders. If you want to predict behavior, take a close look at the incentives.
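To make the asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python. The probabilities and payoffs are hypothetical numbers chosen purely for illustration (nothing here comes from Musk or from any study); the point is only that when one side of the bet is a career-ending loss and the other side carries no reward at all, saying no is the individually rational move even when a rule change is very likely to turn out well.

```python
# Illustrative only: hypothetical payoffs for a regulator deciding
# whether to change a rule. The regulator captures none of the upside
# of a good outcome and all of the downside of a bad one.

p_good = 0.95          # assumed chance the rule change works out well
reward_if_good = 0.0   # no reward or credit for a good outcome
penalty_if_bad = -1.0  # a bad outcome can cost a career

ev_change = p_good * reward_if_good + (1 - p_good) * penalty_if_bad
ev_status_quo = 0.0    # doing nothing is never punished

print(f"Expected payoff of changing the rule: {ev_change:+.3f}")
print(f"Expected payoff of the status quo:    {ev_status_quo:+.3f}")
# Even with a 95% chance of success, changing the rule is a losing bet
# for the individual regulator, so the rational answer is no.
```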

As Keynes said: “Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally.”

 

Daniel Kahneman in Conversation with Michael Mauboussin on Intuition, Causality, Loss Aversion and More

Ever want to be a fly on the wall for a fascinating conversation? Well, here’s your chance. Santa Fe Institute Board of Trustees Chair Michael Mauboussin interviews Nobel Prize winner Daniel Kahneman. The wide-ranging conversation covers disciplined intuition, causality, base rates, loss aversion, and much more. You don’t want to miss this.

Here’s an excerpt from Kahneman I think you’ll enjoy. You can read the entire transcript here.

The Sources of Power [by Gary Klein] is a very eloquent book on expert intuition with magnificent examples, and so he is really quite hostile to my point of view, basically.

We spent years working on that, on the question of when can intuitions be trusted? What’s the boundary between trustworthy and untrustworthy intuitions?

I would summarize the answer as saying there is one thing you should not do. People’s confidence in their intuition is not a good guide to their validity. Confidence is something else entirely, and maybe we can talk about confidence separately later, but confidence is not it.

What there is, if you want to know whether you can trust intuition, it really is like deciding on a painting, whether it’s genuine or not. You can look at the painting all you want, but asking about the provenance is usually the best guide about whether a painting is genuine or not.

Similarly for expertise and intuition, you have to ask not how happy the individual is with his or her own intuitions, but first of all, you have to ask about the domain. Is the domain one where there is enough regularity to support intuitions? That’s true in some medical domains, it certainly is true in chess, it is probably not true in stock picking, and so there are domains in which intuition can develop and others in which it cannot. Then you have to ask whether, if it’s a good domain, one in which there are regularities that can be picked up by the limited human learning machine. If there are regularities, did the individual have an opportunity to learn those regularities? That primarily has to do with the quality of the feedback.

Those are the questions that I think should be asked, so there is a wide domain where intuitions can be trusted, and they should be trusted, and in a way, we have no option but to trust them because most of the time, we have to rely on intuition because it takes too long to do anything else.

Then there is a wide domain where people have equal confidence but are not to be trusted, and that may be another essential point about expertise. People typically do not know the limits of their expertise, and that certainly is true in the domain of finances, of financial analysis and financial knowledge. There is no question that people who advise others about finances have expertise about finance that their advisees do not have. They know how to look at balance sheets, they understand what happens in conversations with analysts.

There is a great deal that they know, but they do not really know what is going to happen to a particular stock next year. They don’t know that, that is one of the typical things about expert intuition in that we know domains where we have it, there are domains where we don’t, but we feel the same confidence and we do not know the limits of our expertise, and that sometimes is quite dangerous.

***

Focusing Illusions


My favorite chapter in the book Rapt: Attention and the Focused Life by Winifred Gallagher is called ‘Decisions: Focusing Illusions.’ It’s a really great summary of how focusing on the wrong things affects the weights we use to make decisions. There is a lot of great content packed into this chapter, but I’ll highlight a few points.

***
Bounded Rationality

According to the principle of ‘bounded rationality,’ which (Daniel) Kahneman first applied to economic decisions and more recently to choices concerning quality of life, we are reasonable-enough beings but sometimes liable to focus on the wrong things. Our thinking gets befuddled not so much by our emotions as by our ‘cognitive illusions,’ or mistaken intuitions, and other flawed, fragmented mental constructs.

***
Loss/Risk Aversion

If you’re pondering a choice that involves risk, you might focus too much on the threat of possible loss, thereby obscuring an even likelier potential benefit. Where this common scenario is concerned, research shows that we aren’t so much risk-averse as loss-averse, in that we’re generally much more sensitive to what we might have to give up than to what we might gain.

***
The Focusing Illusion

The key to understanding why you pay more attention to your thoughts about living than to life itself is neatly summed up by what Kahneman proudly calls his ‘fortune cookie maxim’ (a.k.a. the focusing illusion): ‘Nothing in life is as important as you think it is while you are thinking about it.’ Why? ‘Because you’re thinking about it!’

In one much-cited illustration of the focusing illusion, Kahneman asked some people if they would be happier if they lived in California. Because the climate is often delightful there, most subjects thought so. For the same reason, even Californians assume they’re happier than people who live elsewhere. When Kahneman actually measured their well-being, however, Michiganders and others were just as contented as Californians. The reason is that 99 percent of the stuff of life – relationships, work, home, recreation – is the same no matter where you are, and once you settle in a place, no matter how salubrious, you don’t think about its climate very much. If you’re prompted to evaluate it, however, the weather immediately looms large, simply because you’re paying attention to it. This illusion inclines you to accentuate the difference between Place A and Place B, making it seem to matter much more than it really does, which is marginal.

To test the fortune cookie rule, you have only to ask yourself how happy you are. The question automatically summons your remembering self, which will focus on any recent change in your life – marriage or divorce, new job or home. You’ll then think about this novel event, which in turn will increase its import and influence your answer. If you’re pleased that you’ve just left the suburbs for the city, say, you’ll decide that life is pretty good. If you regret the move, you’ll be dissatisfied in general. Fifteen years on, however, the change that looms so large now will pale next to a more recent event – a career change, perhaps, or becoming a grandparent – which will draw your focus and, simply because you’re thinking about it, bias your evaluation of your general well-being.

***
The Effects of Adaptation

Like focusing too much on the opinions of your remembering self, overlooking the effects of adaptation – the process of becoming used to a situation – can obstruct wise decisions about how to live. As Kahneman says, ‘when planning for the future, we don’t consider that we will stop paying attention to a thing.’

The tendency to stop focusing on a particular event or experience over time, no matter how wonderful or awful, helps explain why the differences in well-being between groups of people in very different circumstances tend to be surprisingly small – sometimes astoundingly so. The classic examples are paraplegics and lottery winners, who respectively aren’t nearly as miserable or happy as you’d think. ‘That’s where attention comes in,’ says Kahneman. ‘People think that if they win the lottery, they’ll be happy forever. Of course, they will not. For a while, they are happy because of the novelty, and because they think about winning all the time. Then they adapt and stop paying attention to it.’ Similarly, he says, ‘Everyone is surprised by how happy paraplegics can be, but they are not paraplegic full-time. They do other things. They enjoy their meals, their friends, the newspaper. It has to do with the allocation of attention.’

Like couples who’ve just fallen in love, professionals starting a career, or children who go to camp for the first time, paraplegics and lottery winners initially pay a lot of attention to their new situation. Then, like everybody else, they get used to it and shift their focus to the next big thing. Their seemingly blasé attitude surprises us, because when we imagine ourselves in their place, we focus on how we’d feel at the moment of becoming paralyzed or wildly rich, when such an event utterly monopolizes one’s focus. We forget that we, too, would get used to wealth, a wheelchair, and most other things under the sun, then turn our attention elsewhere.

***
Good Enough

Finally, don’t worry if the choice you made wasn’t the absolute best, as long as it meets your needs. Offering the single most important lesson from his research, (Barry) Schwartz says, ‘Good enough is almost always good enough. If you have that attitude, many problems about decisions and much paralysis melt away.’

Blindness to the Benefits of Ambiguity

“Decision makers,” write Stefan Trautmann and Richard Zeckhauser in their paper Blindness to the Benefits of Ambiguity, “often prove to be blind to the learning opportunities offered by ambiguous probabilities. Such decision makers violate rational decision making and forgo significant expected payoffs.”

Trautmann and Zeckhauser argue that we often don’t recognize the benefits in commonly occurring ambiguous situations. In part, this is because we often treat repeated decisions involving ambiguity as one-shot decisions. In doing so, we ignore the opportunity for learning when we encounter ambiguity in decisions that offer repeat choices.

To put this in context, the authors offer the following example:

A patient is prescribed a drug for high cholesterol. It is successful, lowering her total cholesterol from 230 to 190, and her only side effect is a mild case of sweaty palms. The physician is likely to keep the patient on this drug as long as her cholesterol stays low. Yet, there are many medications for treating cholesterol. Another might lower her cholesterol even more effectively or impose no side effects. Trying an alternative would seem to make sense, since the patient is likely to be on a cholesterol medication for the rest of her life.

In situations of ambiguity with repeated choices, we often gravitate towards the first decision that offers a positive payoff. Once we’ve found a positive payoff, we’re likely to stick with that decision when given the opportunity to make the same choice again, rather than experiment in an attempt to optimize payoffs. We ignore the opportunity for learning and favor the status quo. Another way to think of this is uncertainty avoidance (or ambiguity aversion).

Few individuals recognize that ambiguity offers the opportunity for learning. If a choice situation is to be repeated, ambiguity brings benefits, since one can change one’s choice if one learns the ambiguous choice is superior.

“We observe,” they offer, “that people’s lack of a clear understanding of learning under ambiguity leads them to adopt non-Bayesian rules.”
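To see what that learning opportunity is worth, here is a small simulation sketch in Python. The payoffs, distributions, and number of rounds are my own illustrative assumptions, not figures from Trautmann and Zeckhauser; the sketch only shows that when the same choice repeats, briefly sampling an ambiguous option and keeping it if it proves better beats never trying it at all.

```python
import random

# Illustrative sketch of learning under ambiguity with repeated choices.
# A known option pays 1.0 every round; an ambiguous option has an
# unknown average payoff. One strategy never tries the ambiguous option;
# the other samples it for a few rounds and switches only if it looked
# better than the known option.

ROUNDS = 100        # how many times the same choice recurs
SAMPLES = 10        # rounds spent trying the ambiguous option
KNOWN_PAYOFF = 1.0

def total_payoff(ambiguous_mean, explore):
    if not explore:
        return KNOWN_PAYOFF * ROUNDS            # status quo every round
    trial = [random.gauss(ambiguous_mean, 0.5) for _ in range(SAMPLES)]
    estimate = sum(trial) / SAMPLES             # what the trial suggested
    remaining = ROUNDS - SAMPLES
    if estimate > KNOWN_PAYOFF:                 # keep the better-looking option
        rest = ambiguous_mean * remaining
    else:
        rest = KNOWN_PAYOFF * remaining
    return sum(trial) + rest

random.seed(0)
trials = 10_000
explorer = stayer = 0.0
for _ in range(trials):
    true_mean = random.uniform(0.5, 1.5)        # the ambiguous option's unknown quality
    explorer += total_payoff(true_mean, explore=True)
    stayer += total_payoff(true_mean, explore=False)

print(f"Never try the ambiguous option: {stayer / trials:6.1f}")
print(f"Sample it, switch if better:    {explorer / trials:6.1f}")
```

The exploring strategy gives up a little during the trial rounds but more than earns it back once it knows which option is better; that gap is the learning benefit the authors argue we routinely forgo by treating repeated choices as one-shot decisions.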

Another example of how this manifests itself in the real world:

In the summer of 2010, the consensus estimate is that there are five applicants for every job opening, yet major employers who expect to hire significant numbers of workers once the economy turns up are sitting by the sidelines and having current workers do overtime. The favorability of the hiring situation is unprecedented in recent years. Thus, it would seem to make sense to hire a few workers, see how they perform relative to the norm. If the finding is much better, suggesting that the ability to select in a very tough labor market and among five applicants is a big advantage, then hire many more. This situation, where the payoff to the first-round decision is highly ambiguous, but perhaps well worthwhile once learning is taken into account, is a real world exemplar of the laboratory situations investigated in this paper.

According to Tolstoi, happy families are all alike, while every unhappy family is unhappy in its own way. A similar observation seems to hold true for situations involving ambiguity: There is only one way to capitalize correctly on learning opportunities under ambiguity, but there are many ways to violate reasonable learning strategies.

From an evolutionary perspective, why would learning avoidance persist if the benefits from learning are large?

Psychological findings suggest that negative experiences are crucial to learning, while good experiences have virtually no pedagogic power. In the current setting, ambiguous options would need to be sampled repeatedly in order to obtain sufficient information on whether to switch from the status quo. Both bad and good outcomes would be experienced along the way, but only good ones could trigger switching. Bad outcomes would also weigh much more heavily, leading people to require too much positive evidence before shifting to ambiguous options. In individual decision situations, losses often weigh 2 to 3 times as much as gains.

In addition, if one does not know what returns would have come from an ambiguous alternative, one cannot feel remorse from not having chosen it. Blame from others also plays an important role. In principal-agent relationships, bad outcomes often lead to criticism, and possibly legal consequences because of responsibility and accountability. Therefore, agents, such as financial advisors or medical practitioners may experience an even higher asymmetry from bad and good payoffs. Most people, for that reason, have had many fewer positive learning experiences with ambiguity than rational sampling would provide.

It might be a good idea to try a new brand the next time you’re at the store rather than just making the same choice over and over. Who knows, you might discover you like it better.

Why Do People Choke When the Stakes Are High?

Loss Aversion.

In sports, on a game show, or just on the job, what causes people to choke when the stakes are high? A new study by researchers at the California Institute of Technology (Caltech) suggests that when there are high financial incentives to succeed, people can become so afraid of losing their potentially lucrative reward that their performance suffers.

It is a somewhat unexpected conclusion. After all, you would think that the more people are paid, the harder they will work, and the better they will do their jobs — until they reach the limits of their skills. That notion tends to hold true when the stakes are low, says Vikram Chib, a postdoctoral scholar at Caltech and lead author on a paper published in the May 10 issue of the journal Neuron. Previous research, however, has shown that if you pay people too much, their performance actually declines.

Some experts have attributed this decline to too much motivation: they think that, faced with the prospect of earning an extra chunk of cash, you might get so excited that you will fail to do the task properly. But now, after looking at brain-scan data of volunteers performing a specific motor task, the Caltech team says that what actually happens is that you become worried about losing your potential prize. The researchers also found that the more someone is afraid of loss, the worse they perform.


Making Good Citizenship Fun — Richard Thaler

Interesting article by Richard Thaler on encouraging good citizenship by making the desired behavior more fun:

Lotteries are just one way to provide positive reinforcement. Their power comes from the fact that the chance of winning the prize is overvalued. Of course you can simply pay people for doing the right thing, but if the payment is small, it could well backfire. …

An alternative to lotteries is a frequent-flyer-type reward program, where the points can be redeemed for something fun. A free goodie can be a better inducement than cash since it offers that rarest of commodities, a guilt-free pleasure. This sort of reward system has been successfully used in England to encourage recycling. In the Royal Borough of Windsor and Maidenhead outside of London, citizens could sign up for a rewards program in which they earned points depending on the weight of the material they recycled. The points were good for discounts at merchants in the area. Recycling increased by 35 percent.