Tag: Behavioral Psychology

Maria Konnikova on How We Get Conned

There’s a scene in the classic Paul Newman film The Sting in which Johnny Hooker (played by a young Robert Redford) tries to get Henry Gondorff (played by Newman) to finally tell him when they’re going to pull the big con. His response tells the tale:

You gotta keep his con even after you take his money. He can’t know you took him.

It’s this same subject that our friend Maria Konnikova — whom we interviewed a few years ago upon the release of her book Mastermind: How to Think Like Sherlock Holmes — has mined with her new book The Confidence Game: Why We Fall for It…Every Time.

It’s a good question: Why do we fall for it every time? Confidence games (cons for short) are a wonderful arena to study the Psychology of Human Misjudgment.

In fact, you could call a good con artist — you have to love the term artist here — a master of human psychology. They are, after all, in the game of manipulating people into parting with their money. They are so good that a successful con is a lot like a magic trick:

When we step into a magic show, we come in actively wanting to be fooled. We want deception to cover our eyes and make our world a tiny bit more fantastical, more awesome than it was before. And the magician, in many ways, uses the exact same approaches as the confidence man—only without the destruction of the con’s end game. “Magic is a kind of a conscious, willing con,” Michael Shermer, a science historian and writer who has devoted many decades to debunking claims about the supernatural and the pseudoscientific, told me one December afternoon. “You’re not being foolish to fall for it. If you don’t fall for it, the magician is doing something wrong.”

Shermer, the founder of the Skeptics Society and Skeptic magazine, has thought extensively about how the desire to embrace magic so often translates into susceptibility to its less savory forms. “Take the Penn and Teller cup and balls. I can explain it to you and it still would work. It’s not just knowing the secret; it’s not a trick. It’s the whole skill and art of presentation. There’s a whole narrative—and that’s why it’s effective.” At their root, magic tricks and confidence games share the same fundamental principle: a manipulation of our beliefs. Magic operates at the most basic level of visual perception, manipulating how we see—and don’t see—and experience reality. It changes for an instant what we think possible, quite literally taking advantage of our eyes’ and brains’ foibles to create an alternative version of the world. The con does the same thing, but can go much deeper. Tricks like three-card monte are identical to a magician’s routine—except the intent is more nefarious.

Psychology and show magic have more in common than you’d think: As Shermer says, there are many magic tricks that you can explain ahead of time and they will still work, and still baffle. But…wait…how?

The link between everyday psychological manipulation and show magic is so close that the magician Harry Houdini spent a good portion of his later life trying to sniff out cons in the form of mediums, mystics, and soothsayers. Even he couldn’t totally shake free of the illusions:

Mysticism, [Houdini] argued, was a game as powerful as it was dangerous. “It is perfectly rational to suppose that I may be deceived once or twice by a new illusion,” he wrote, “but if my mind, which has been so keenly trained for years to invent mysterious effects, can be deceived, how much more susceptible must the ordinary observer be?”

Such is the power of the illusion. The same, of course, goes for the mental tricks in our psychological make-up. A great example is the gambling casino: Leaving out the increasingly rare exceptions, who ever walks in thinking they have a mathematical edge over the house? Who would be surprised to find out the casino is deliberately manipulating them into losing money with social proof, deprival super-reaction, commitment bias, over-confidence bias, and other tricks? Most intelligent folks aren’t shocked or surprised by the concept of a house edge. And yet casinos continue to do healthy business. We participate in the magic trick. In a perverse sense, we allow ourselves to be conned.
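To make that “mathematical edge” concrete, here is a back-of-the-envelope sketch (my own illustration, not from the book) of the expected value of an even-money roulette bet:

```python
# Rough illustration of the house edge (my own numbers, not Konnikova's).
# An American roulette wheel has 18 red, 18 black, and 2 green pockets,
# so a $1 bet on red wins with probability 18/38.
p_win = 18 / 38
expected_value = p_win * (+1) + (1 - p_win) * (-1)
print(f"Expected value per $1 bet on red: {expected_value:+.4f}")  # about -0.0526

# On average the player hands the house roughly 5.3 cents per dollar wagered,
# and yet, as noted above, we line up to play anyway.
```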

In some ways, confidence artists like Demara have it easy. We’ve done most of the work for them; we want to believe in what they’re telling us. Their genius lies in figuring out what, precisely, it is we want, and how they can present themselves as the perfect vehicle for delivering on that desire.

The Beginning of a Con: The “Put-Up” & The “Mark”

Who makes a good mark for a con artist? Essentially, it could be anyone. Context trumps character. Konnikova wisely refrains from trying to pinpoint exactly who is easiest to con: The truth is, in the right time and place, we can all get hit by a good enough con man. In fact, con artists themselves often make great marks. This is probably linked, in part, to over-confidence. (In fact, you might call conning a con man an…Over-confidence game?)

The con artist starts by getting to know us at a deep level. Konnikova argues that con artists combine excellent judgment of character with a honed ability to show the mark exactly what he wants to see. An experienced con artist has been steeped in positive and negative feedback about what works and what doesn’t. Through that practiced evolution, he’s learned what works. That’s why we end up letting him in, even if we’re on guard:

A con artist looks at everyone at that fine level. When it comes to the put-up, accuracy matters—and con men don’t just want to know how someone looks to them. They want to correctly reflect how they want to be seen.

What’s more, confidence artists can use what they’re learning as they go in order to get us to give up even more. We are more trusting of people who seem more familiar and more similar to us, and we open up to them in ways we don’t to strangers. It makes a certain sense: those like us and those we know or recognize are unlikely to want to hurt us. And they’re more likely to understand us.

There are a few things at play here. The con is triggering a bias from liking/loving, which we all have in us. By getting us committed and then drawing us in slowly, they also trigger commitment bias — in fact, Konnikova explains that the term Confidence Game itself comes from a basic trust exercise: Get into a conversation with a mark, commit them to saying that they trust you, then ask them if they’ll let you hold their wallet as a show of that trust. Robert Cialdini — the psychology professor who wrote the wonderfully useful book Influence — would certainly not be surprised to see that this little con worked pretty frequently. (Maria smartly points out the connection between con artists and Cialdini’s work in the book.)

The “Play,” the “Rope,” the “Tale,” and the “Convincer”

Once the con artist decides that we’re a mark, the fun begins.

After the mark is chosen, it is time to set the actual con in motion: the play, the moment when you first hook a victim and begin to gain her trust. And that is accomplished, first and foremost, through emotion. Once our emotions have been captured, once the con artist has cased us closely enough to identify what it is we want, feeling, at least in the moment, takes over from thinking.

[…]

What visceral states do is create an intense attentional focus. We tune out everything else and tune in to the in-the-moment emotional cues. It’s similar to the feeling of overwhelming hunger or thirst—or the need to go to the bathroom—when you suddenly find yourself unable to think about anything else. In those moments, you’re less likely to deliberate, more likely to just say yes to something without fully internalizing it, and generally more prone to lapses that are outside the focus of your immediate attention.

When it comes to the context of a good con, emotion rules the day. People in financial straits, or who find themselves in stressful or unusual situations, are the easiest to con. This is probably because these situations trigger what Danny Kahneman would call System 1 thinking: fast, snap judgments, often very bad ones. Under the influence of stress, we’re not slowing down and thinking things through. In fact, many people won’t even admit to having been conned after the fact because they feel so ashamed of their lack of judgment in the critical moments. (Cult conversions use some of the same tactics.)

Now begins the “Tale”

A successful story does two things well. It relies on the narrative itself rather than any overt arguments or logical appeals to make the case on its own, and it makes us identify with its characters. We’re not expecting to be persuaded or asked to do something. We’re expecting to experience something inherently pleasant, that is, an interesting tale. And even if we’re not relating to the story as such, the mere process of absorbing it can create a bond between us and the teller—a bond the teller can then exploit.

It’s always harder to argue with a story, be it sad or joyful. I can dismiss your hard logic, but not how you feel. Give me a list of reasons, and I can argue with it. Give me a good story, and I can no longer quite put my finger on what, if anything, should raise my alarm bells. After all, nothing alarming is ever said explicitly, only implied.

This is, of course, the con artist preying on our inherent bias for narrative. It’s how we make sense of the world, but as Cialdini knows so well, it can be used for nefarious purposes to trigger a click, whirr automatic reaction in which our brain doesn’t realize it’s being tricked. Sustaining the illusion, the con artist reinforces the narrative we’ve been building in our heads:

One of the key elements of the convincer, the next stage of the confidence game, is that it is, well, convincing: the convincer makes it seem like you’re winning and everything is going according to plan. You’re getting money on your investment. Your wrinkles are disappearing and your weight, dropping. That doctor really seems to know what he’s doing. That wine really is exceptional, and that painting, exquisite. You sure know how to find the elusive deal. The horse you bet on, both literal and figurative, is coming in a winner.

The “Breakdown” and the “Send”

And now comes the break-down. We start to lose. How far can the grifter push us before we balk? How much of a beating can we take? Things don’t completely fall apart yet—that would lose us entirely, and the game would end prematurely — but cracks begin to show. We lose some money. Something doesn’t go according to plan. One fact seems to be off. A figure is incorrectly labeled. A wine bottle is “faulty.” The crucial question: do we notice, or do we double down? High off the optimism of the convincer, certain that good fortune is ours, we often take the second route. When we should be cutting our losses, we instead recommit—and that is entirely what the breakdown is meant to accomplish.

A host of biases are being triggered at this point, turning our brains into mush. We’re starting to lose a little, but we feel if we hang in long enough, we can probably at least come out even, or ahead. (Deprival super-reaction tendency, so common at the roulette table, and sunk-cost fallacies.) We’ve already put our trust in this nice fellow, so any new problems can probably be rationalized as something we “knew could happen all along,” so no reason to worry. (Commitment & consistency, hindsight bias.) And of course, this is where the con artist really has us. It’s called The Send.

The send is that part of the con where the victim is recommitted, that is, asked to invest increasingly greater time and resources into the con artist’s scheme—and in the touch, the con finally comes to its fruition and the mark is completely, irrevocably fleeced.

The End of the Line

Of course, all things eventually come to an end.

The blow-off is often the final step of the con, the grifter’s smooth disappearance after the game has played out. Sometimes, though, the mark may not be so complacent. If that happens, there’s always one more step that can be taken: the fix, when a grifter puts off the involvement of law enforcement to prevent marks from making their complaints official.

As in that scene from The Sting, the ideal con ends without trouble for the con man: ideally, the mark won’t even know it was a con. But if he does, Konnikova makes an interesting point that the blow-off and the fix often end up being unnecessary, for reputational reasons. This self-preservation mechanism is one reason so many frauds never come to light, and why there are so few prosecutions relative to the amount of fraud really going on:

The blow-off is the easiest part of the game, and the fix hardly ever employed. The Drake fraud persisted for decades—centuries, in fact—because people were too sheepish about coming forward after all that time. Our friend Fred Demara was, time and time again, not actually prosecuted for his transgressions. People didn’t even want to be associated with him, let alone show who they were publicly by suing him. The navy had only one thing to say: go quietly—leave, don’t make a scene, and never come back.

Besides the reputational issue, there are clearly elements of Pavlovian mere association at play. Who wants to be reminded of their own stupidity? Much easier to sweep it away as soon as possible, never to be reminded again.

***

The Confidence Game is an enjoyable read with tales of cons and con artists throughout history – a good reminder of our own fallibility in the face of a good huckster and the power of human misjudgment.

Dan Ariely on How and Why We Cheat

Three years ago, Dan Ariely, a psychology and behavioral economics professor at Duke, put out a book called The (Honest) Truth About Dishonesty: How We Lie to Everyone–Especially Ourselves. I first read the book around the time it was released, and I recently revisited it to see how it held up to my initial impressions.

It was even better. In fact, this is one of the most useful books I have ever come across, and my copy is now marked, flagged, and underlined. Let’s get in deep.

Why We Cheat

We’re Cheaters All

Dan is both an astute researcher and a good writer; he knows how to get to the point, and his points matter. His books, which include Predictably Irrational and The Upside of Irrationality, are not filled with fluff. We’ve mentioned his demonstrations of pluralistic ignorance here before.

In The Honest Truth, Ariely doesn’t just explore where cheating comes from; he digs into which situations make us more likely to cheat than others. Those discussions are what make the book eminently practical, rather than just a meditation on cheating. It’s a how-to guide to understanding our own dishonesty.

Ariely was led down that path because of a friend of his who had worked with Enron:

It was, of course, possible that John and everyone else involved with Enron was deeply corrupt, but I began to think that there may have been a different type of dishonesty at work–one that relates more to wishful blindness and is practiced by people like John, you, and me. I started wondering if the problem of dishonesty goes deeper than just a few bad apples and if this kind of wishful blindness takes place in other companies as well. I also wondered if my friends and I would have behaved similarly if we had been the ones consulting for Enron.

This is a beautiful setup that led him to a lot of interesting conclusions in his years of subsequent research. Here’s (some of) what Dan found.

  1. Cheating was standard, but only a little. Ariely and his co-researchers ran the same experiment in many different variations, and with many different topics to investigate. Nearly every time, he found evidence of a standard level of cheating. In other experiments, the outcome was the same. A little cheating was everywhere. People generally did not grab all they could, but only as much as they could justify psychologically.
  2. Increasing the cheating reward or moderately altering the risk of being caught didn’t affect the outcomes much. In Ariely’s experience, the cheating stayed steady: A little bit of stretching every time.
  3. The more abstracted from the cheating we are, the more we cheat. This was an interesting one–it turns out the less “connected” we feel to our dishonesty, the more we’re willing to do it. This ranges from being more willing to cheat to earn tokens exchangeable for real money than to earn actual money, to being more willing to “tap” a golf ball to improve its lie than actually pick it up and move it with our hands.
  4. A nudge not to cheat works better before we cheat than after. In other words, we need to strengthen our morals just before we’re tempted to cheat, not after. And even more interesting, when Ariely took his findings to the IRS and other organizations who could benefit from being cheated less, they barely let him in the door! The incentives in organizations are interesting.
  5. We think we’re more honest than everyone else. Ariely showed this pretty conclusively by studying golfers and asking them how much they thought others cheated and how much they thought they cheated themselves. It was a rout: They consistently underestimated their own dishonesty versus others’. I wasn’t surprised by this finding.
  6. We underestimate how blinded we can become to incentives. In a brilliant chapter called “Blinded by our Motivations,” Ariely discusses how incentives skew our judgment and our moral compass. He shows how pharma reps are masters of this game–and yet we allow it to continue. If we take Ariely seriously, the laws against conflicts of interest need to be stronger.
  7. Related to (6), disclosure does not seem to decrease incentive-caused bias. This reminds me of Charlie Munger’s statement, “I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. Never a year passes that I don’t get some surprise that pushes my limit a little farther.” Ariely has discussed incentive-caused bias in teacher evaluation before.
  8. We cheat more when our willpower is depleted. This doesn’t come as a total surprise: Ariely found that when we’re tired and have exerted a lot of mental or physical energy, especially in resisting other temptations, we tend to increase our cheating. (Or perhaps more accurately, decrease our non-cheating.)
  9. We cheat ourselves, even if we have direct incentive not to. Ariely was able to demonstrate that even with a strong financial incentive to honestly assess our own abilities, we still think we cheat less than we do, and we hurt ourselves in the process.
  10. Related to (9), we can delude ourselves into believing we were honest all along. This goes to show the degree to which we can damage ourselves by our cheating as much as others. Ariely also discusses how good we are at pounding our own conclusions into our brain even if no one else is being persuaded, as Munger has mentioned before.
  11. We cheat more when we believe the world “owes us one.” This section of the book should feel disturbingly familiar to anyone. When we feel like we’ve been cheated or wronged “over here,” we let the universe make it up to us “over there.” (By cheating, of course.) Think about the last time you got cut off in traffic, stiffed on proper change, and then unloaded on by your boss. Didn’t you feel more comfortable reaching for what wasn’t yours afterwards? Only fair, right?
  12. Unsurprisingly, cheating has a social contagion aspect. If we see someone who we identify with and whose group we feel we belong to cheating, it makes us (much) more likely to cheat. This has wide-ranging social implications.
  13. Finally, nudging helps us cheat less. If we’re made more aware of our moral compass through specific types of reminders and nudges, we can decrease our own cheating. Perhaps most important is to keep ourselves out of situations where we’ll be tempted to cheat or act dishonestly, and to take pre-emptive action if it’s unavoidable.

There’s much more in the book, and we highly recommend you read it for that as well as Dan’s general theory on cheating. The final chapter on the steps that old religions have taken to decrease dishonesty among their followers is a fascinating bonus. (Reminded me of Nassim Taleb’s retort that heavy critics of religion, like Dawkins, take it too literally and under-appreciate the social value of its rules and customs. It’s also been argued that religion has an evolutionary basis.)

Check out the book, and while you’re at it, pick up his other two: Predictably Irrational, and The Upside of Irrationality.

Biases and Blunders

Nudge: Improving Decisions About Health, Wealth, and Happiness

You would be hard pressed to come across a reading list on behavioral economics that doesn’t mention Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard Thaler and Cass Sunstein.

It is a fascinating look at how we can create environments or ‘choice architecture’ to help people make better decisions. But one of the reasons it’s been so influential is because it helps us understand why people sometimes make bad decisions in the first place. If we really want to understand how we can nudge people into making better choices, it’s important to understand why they often make such poor ones.

Let’s take a look at how Thaler and Sunstein explain some of our common mistakes in a chapter aptly called ‘Biases and Blunders.’

Anchoring and Adjustment

Humans have a tendency to put too much emphasis on one piece of information when making decisions. When we overweight one piece of information and make assumptions based on it, we call that an anchor. Say I borrow a 400-page book from a friend and I think to myself: the last book I read was about 300 pages and I read it in 5 days, so I’ll let my friend know I’ll have her book back to her in 7 days. The problem is, I’ve compared only one factor related to my reading and made a decision without taking into account many other factors that could affect the outcome. For example, is the new book on a topic I will digest at the same rate? Will I have the same time for reading over those 7 days? I have looked at the number of pages, but is the number of words per page similar?

As Thaler and Sunstein explain:

This process is called ‘anchoring and adjustment.’ You start with some anchor, the number you know, and adjust in the direction you think is appropriate. So far, so good. The bias occurs because the adjustments are typically insufficient.
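As a toy illustration of insufficient adjustment (my own numbers, not from Nudge), compare the anchored estimate in the book-borrowing example above with one that accounts for the factors the anchor ignores:

```python
# Anchoring-and-adjustment toy example (illustrative numbers only).
pages_last, days_last = 300, 5       # the anchor: my last book's pace
pages_new = 400                      # the borrowed book

anchor_pace = pages_last / days_last            # 60 pages/day
naive_estimate = pages_new / anchor_pace        # ~6.7 days -> "I'll say 7"

# Factors the anchored estimate ignores (assumed values):
density_ratio = 1.3    # the new book has ~30% more words per page
free_time_ratio = 0.5  # I have about half my usual reading time this week

adjusted_pace = anchor_pace / density_ratio * free_time_ratio
adjusted_estimate = pages_new / adjusted_pace   # ~17 days

print(round(naive_estimate, 1), round(adjusted_estimate, 1))  # 6.7 17.3
```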

Availability Heuristic

This is the tendency of our mind to overweight information that is recent and readily available. What did you think about the last time you read about a plane crash? Did you start imagining yourself in a plane crash? Imagine how much it would weigh on your mind if you were set to fly the next day.

We assess the likelihood of risks by asking how readily examples come to mind. If people can easily think of relevant examples, they are far more likely to be frightened and concerned than if they cannot.

Accessibility and salience are closely related to availability, and they are important as well. If you have personally experienced a serious earthquake, you’re more likely to believe that an earthquake is likely than if you read about it in a weekly magazine. Thus, vivid and easily imagined causes of death (for example, tornadoes) often receive inflated estimates of probability, and less-vivid causes (for example, asthma attacks) receive low estimates, even if they occur with a far greater frequency (here, by a factor of twenty). Timing counts too: more recent events have a greater impact on our behavior, and on our fears, than earlier ones.

Representativeness Heuristic

Use of the representativeness heuristic can cause serious misperceptions of patterns in everyday life. When events are determined by chance, such as a sequence of coin tosses, people expect the resulting string of heads and tails to be representative of what they think of as random. Unfortunately, people do not have accurate perceptions of what random sequences look like. When they see the outcomes of random processes, they often detect patterns that they think have great meaning but in fact are just due to chance.

It would seem we have issues with randomness. Our brains automatically want to see patterns where none may exist. Try a coin-toss experiment on yourself: flip a coin and keep track of heads and tails. At some point you will hit ‘a streak’ of either heads or tails, and you will notice that you experience a sort of cognitive dissonance; you know that ‘a streak’ is statistically probable at some point, but you can’t help thinking the next toss has to break it because, for some reason, in your head it’s not right. That unwillingness to accept randomness, our need for a pattern, often clouds our judgement when making decisions.
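If you would rather see the streaks than take our word for it, here is a quick simulation (my own sketch, not from the book) of how often a run of six or more identical tosses shows up in 100 fair coin flips:

```python
import random

# Simulate the longest run of identical faces in a sequence of fair coin flips.
def longest_streak(n_flips: int) -> int:
    flips = [random.choice("HT") for _ in range(n_flips)]
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

trials = [longest_streak(100) for _ in range(10_000)]
# Roughly 80% of 100-flip sequences contain a streak of 6 or more,
# even though each individual flip is perfectly random.
print(sum(s >= 6 for s in trials) / len(trials))
```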

Unrealistic Optimism

We have touched upon optimism bias in the past. Optimism truly is a double-edged sword. On one hand it is extremely important to be able to look past a bad moment and tell yourself that it will get better. Optimism is one of the great drivers of human progress.

On the other hand, if you never take those rose-coloured glasses off, you will make mistakes and take risks that could have been avoided. When assessing the possible negative outcomes associated with risky behaviour we often think ‘it won’t happen to me.’ This is a brain trick: We are often insensitive to the base rate.

Unrealistic optimism is a pervasive feature of human life; it characterizes most people in most social categories. When they overestimate their personal immunity from harm, people may fail to take sensible preventive steps. If people are running risks because of unrealistic optimism, they might be able to benefit from a nudge.

Loss Aversion

When they have to give something up, they are hurt more than they are pleased if they acquire the very same thing.

We are familiar with loss aversion in the context described above but Thaler and Sunstein take the concept a step further and explain how it plays a role in ‘default choices.’ Loss aversion can make us so fearful of making the wrong decision that we don’t make any decision. This explains why so many people settle for default options.

The combination of loss aversion with mindless choosing implies that if an option is designated as the ‘default,’ it will attract a large market share. Default options thus act as powerful nudges. In many contexts defaults have some extra nudging power because consumers may feel, rightly or wrongly, that default options come with an implicit endorsement from the default setter, be it the employer, government, or TV scheduler.

Of course, this is not the only reason default options are so popular. “Anchoring,” which we mentioned above, plays a role here. Our mind anchors immediately to the default option, especially in unfamiliar territory for us.

We also have the tendency towards inertia, given that mental effort is tantamount to physical effort – thinking hard requires physical resources. If we don’t know the difference between two 401(k) plans and they both seem similar, why expend the mental effort to switch away from the default investment option? You may not have that thought consciously; it often happens as a “click, whirr.”

State of Arousal

Our preferred definition requires recognizing that people’s state of arousal varies over time. To simplify things we will consider just the two endpoints: hot and cold. When Sally is very hungry and appetizing aromas are emanating from the kitchen, we can say she is in a hot state. When Sally is thinking abstractly on Tuesday about the right number of cashews she should consume before dinner on Saturday, she is in a cold state. We will call something ‘tempting’ if we consume more of it when hot than when cold. None of this means that decisions made in a cold state are always better. For example, sometimes we have to be in a hot state to overcome our fears about trying new things. Sometimes dessert really is delicious, and we do best to go for it. Sometimes it is best to fall in love. But it is clear that when we are in a hot state, we can often get into a lot of trouble.

For most of us, however, self-control issues arise because we underestimate the effect of arousal. This is something the behavioral economist George Loewenstein (1996) calls the ‘hot-cold empathy gap.’ When in a cold state, we do not appreciate how much our desires and our behavior will be altered when we are under the influence of arousal. As a result, our behavior reflects a certain naivete about the effects that context can have on choice.

The concept of arousal is analogous to mood. At the risk of stating the obvious, our mood can play a definitive role in our decision making. We all know it, but how many among us truly use that insight to make better decisions?

This is one reason we advocate decision journals when it comes to meaningful decisions (probably no need to log your cashew calculations); a big part of tracking your decisions is noting your mood when you make them. A zillion contextual clues go into your state of arousal, but taking a quick pause to note which state you’re in as you make a decision can make a difference over time.

Mood is also affected by chemicals. This one may be familiar to you coffee (or tea) addicts out there. Do you recall the last time you felt terrible or uncertain about a decision when you were tired, only to feel confident and spunky about the same topic after a cup of java?

Or, how about alcohol? There’s a reason it’s called a “social lubricant” – our decision making changes when we’ve consumed enough of it.

Lastly, the connection between sleep and mood goes deep. Need we say more?

Peer Pressure

Peer pressure is another tricky nudge that can be either positive or negative. We can be nudged to make better decisions when we think that our peer group is doing the same. If we think our neighbors conserve more energy or recycle more, we start making a better effort to reduce our consumption and recycle. If we think the people around us are eating better and exercising more, we tend to do the same. Information we get from peer groups can also help us make better decisions because of ‘collaborative filtering’: the choices of our peer groups help us filter out and narrow down our own choices. If friends who share your views and tastes recommend book X, then you may like it as well. (Google, Amazon, and Netflix are built on this principle.)
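For the curious, here is a minimal sketch of the collaborative-filtering idea (my own toy example; real systems at Amazon or Netflix are vastly more sophisticated): the preferences of the peer most similar to you are used to suggest something you have not tried yet.

```python
# Toy collaborative filtering: recommend an unseen item favored by the
# most similar peer (illustrative data and scoring only).
ratings = {
    "you":   {"book_a": 5, "book_b": 4, "book_c": 1},
    "alice": {"book_a": 5, "book_b": 5, "book_c": 1, "book_d": 5},
    "bob":   {"book_a": 1, "book_b": 2, "book_c": 5, "book_d": 1},
}

def similarity(u: str, v: str) -> float:
    shared = set(ratings[u]) & set(ratings[v])
    # Smaller rating gaps on shared items -> higher (less negative) score.
    return -sum(abs(ratings[u][b] - ratings[v][b]) for b in shared)

peers = sorted((similarity("you", p), p) for p in ratings if p != "you")
closest = peers[-1][1]                                   # 'alice'
unseen = set(ratings[closest]) - set(ratings["you"])     # {'book_d'}
recommendation = max(unseen, key=lambda b: ratings[closest][b])
print(closest, recommendation)  # alice book_d
```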

However, if we are all reading the same book because we constantly see people with it, but none of us actually like it, then we all lose. We run off the mountain with the other lemmings.

Social influences come in two basic categories. The first involves information. If many people do something or think something, their actions and their thoughts convey information about what might be best for you to do or think. The second involves peer pressure. If you care about what other people think about you (perhaps in the mistaken belief that they are paying some attention to what you are doing), then you might go along with the crowd to avoid their wrath or curry their favor.

An important problem here is ‘pluralistic ignorance’ – that is, ignorance, on the part of all or most, about what other people think. We may follow a practice or a tradition not because we like it, or even think it defensible, but merely because we think that most other people like it. Many social practices persist for this reason, and a small shock, or nudge, can dislodge them.

How do we beat social influence? It’s very difficult, and not always desirable: if you are about to enter a building that a lot of people are running away from, there’s a better-than-good chance you should run too. But this useful instinct can lead us astray.

A simple algorithm, when you feel yourself acting out of social proof, is to ask yourself: Would I still do this if no one else were doing it?

***

For more, check out Nudge.

Bias from Self-Interest — Self Deception and Denial to Reduce Pain or Increase Pleasure; Regret Avoidance (Tolstoy effect)

We can ignore reality, but we cannot ignore the consequences of reality.

Bias from self-interest affects everything from how we see and filter information to how we avoid pain. It affects our self-preservation instincts and helps us rationalize our choices. In short, it permeates everything.

***

Our Self-Esteem

Our self-esteem can be a very important aspect of personal well-being, adjustment and happiness. It has been reported that people with higher self-esteem are happier with their lives, have fewer interpersonal problems, achieve at a higher and more consistent level and give in less to peer pressure.

The strong motivation to preserve a positive and consistent self-image is more than evident in our lives.

We attribute success to our own abilities and failures to environmental factors and we continuously rate ourselves as better than average on any subjective measure – ethics, beauty, and ability to get along with others.

Look around – these positive illusions appear to be the rule rather than the exception in well-adjusted people.

However, sometimes life is harsh on us and gives few if any reasons for self-love.

We get fired, a relationship ends, and we end up making decisions that are not well aligned with our inner selves. And so we come up with ways to repair our damaged self-image.

Under the influence of bias from self-interest, we may find ourselves drifting away from facts and spinning them to the point they become acceptable. While the tendency is mostly harmless and episodic, there are cases when it grows extreme.

The imperfect and confusing realities of our lives can activate strong responses, which help us preserve ourselves and our fragile self-images. Usually amplified by love, death, or chemical dependency, a strong self-serving bias may leave a person with little capacity to assess the situation objectively.

In his speech, The Psychology of Human Misjudgment, Charlie Munger reflects on the extreme tendencies that serious criminals display in Tolstoy’s novels and beyond. Their defense mechanisms can be divided into two distinct types – they are either in denial of committing the crime at all or they think that the crime is justifiable in light of their hardships.

Munger calls these two cases the Tolstoy effect.

Avoiding Reality by Denying It

Denial occurs when we encounter a serious thought about reality but decide to ignore it.

Imagine one day you notice a strange, dark spot on your skin. You feel a sudden sense of anxiety, but soon go on with your day and forget about it. Weeks later, it has not gone away and has slowly become darker and you eventually decide to take action and visit the doctor.

In such cases, small doses of denial might serve us well. We have time to absorb the information slowly and figure out the next steps for action, in case our darkest fears come true. However, once denial becomes a prolonged way of coping with troubling matters, causing our problems to amplify, we are bound to suffer the consequences.

The consequences can be different. The mildest one is a simple inability to move on with our lives.

Charlie Munger was startled to see a case of persistent denial in a family friend:

This first really hit me between the eyes when a friend of our family had a super-athlete, super-student son who flew off a carrier in the north Atlantic and never came back, and his mother, who was a very sane woman, just never believed that he was dead.

The case made him realize that denial is often amplified by intense feelings of love and death. We’re denying to avoid pain.

While denial of the death of someone close is usually harmless and understandable, it can become a significant problem when we deny an issue that is detrimental to ourselves and others.

Good examples of such issues are physical dependencies, such as alcoholism or drug addiction.

Munger advises staying away from any opportunity to slip into an addiction since the psychological effects are most damaging. The reality distortion that happens in the minds of drug addicts leads them to believe that they have remained in a respectable condition and with reasonable prospects even as their condition keeps deteriorating.

Rationalizing Our Choices

A less severe case of distortion, but no less foolish, is our tendency to rationalize the choices we have made.

Most of us have a positive concept of ourselves and we believe ourselves to be competent, moral and smart.

We can go to great lengths to preserve this self-image. No doubt we have all engaged in behaviors that are less than consistent with our inner self-image and then used phrases, such as “not telling the truth is not lying”, “I didn’t have the time” and “others are even worse” to justify our less than ideal actions.

This tendency can in part be explained by the engine that drives self-justification: cognitive dissonance. It is the state of tension that occurs whenever we hold two opposing facts in our heads, such as “smoking is bad” and “I smoke two packs a day”.

Dissonance bothers us under any circumstances, but it becomes particularly unbearable when our self-concept is threatened by it. After all, we spend our lives trying to lead lives that are consistent and meaningful. This drive “to save face” is so powerful that it often overrules and contradicts the pure effects of rewards and punishments as assumed by economic theory or observed in simple animal behavioral research.

The most obvious way to quiet the dissonance is to quit. However, a smoker who has tried to quit and failed can also quiet the other belief – namely, that smoking is not all that bad. That is the simple, failure-free option: it lets her feel good about herself and requires hardly any effort. And having suspended our moral compass once, having found rationales for the bad but fixable choices, we give ourselves permission to repeat them in the future and continue the vicious cycle.

The Vicious Cycle of Self-Justification


Carol Tavris and Elliot Aronson in their book Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts explain the vicious cycle of choices with an analogy of a pyramid.

Consider the case of two reasonably honest students at the beginning of the term. They face the temptation to cheat on an important test. One of them gives in and the other does not. How do you think they will feel about cheating a week later?

Most likely their initially torn opinions will have polarized in light of their initial choices. Now take this effect and amplify it over the term. By the time they are through with the term, two things will have happened:
1) They will be very far from each other in their beliefs
2) They will be convinced that they have always felt strongly about the issue and their side of the argument

Just like those students, we are often at the top of the choice pyramid, facing a decision whose consequences are morally ambiguous. This first choice then starts a process of entrapment of action – justification – further action, which increases the intensity of our commitment.


Over time our choices reinforce themselves, and toward the bottom of the pyramid we find ourselves rolling toward increasingly extreme views.

Consider the famous Stanley Milgram experiment, in which two-thirds of the 3,000 subjects administered a life-threatening level of electric shock to another person. While this study is often used to illustrate our obedience to authority, it also demonstrates the effects of self-justification.

Simply imagine someone asking you, as a favor to science, to inflict 500V of potentially deadly and incredibly painful shock on another person. Chances are most of us would refuse under any circumstances.

Now suppose the researcher tells you he is interested in the effects of punishment on learning, and that you will have to inflict hardly noticeable electric impulses on another person. You are even encouraged to try the lowest level of 10V yourself to feel that the pain is barely noticeable.

Once you have gone along, the experimenter suddenly asks you to increase the shock to 20V, which seems like a small increase, so you agree without thinking much. Then the cascade continues – if you gave a 20V shock, what is the harm in giving 30V? Suddenly you find yourself unable to draw the line, so you simply go along with the instructions.

When people are asked in advance whether they would administer shocks of up to 450V, nearly nobody believes they would. Yet when facing the choice under pressing circumstances, two-thirds of Milgram’s subjects did.

The implications here are powerful – if we don’t actively draw the line ourselves, our habits and circumstances will decide for us.

Making Smarter Choices

We will all do dumb things. We can’t help it. We are wired that way. However, we are not doomed to live in denial or keep striving to justify our actions. We always have the choice to correct our tendencies, once we recognize them.

A better understanding of our minds serves as the first step towards breaking the self-justification habit. It takes time, self-reflection and willingness to become more mindful about our behavior and reasons for our behavior, but it is well worth the effort.

The authors of Mistakes Were Made (But Not by Me) give the example of the conservative columnist William Safire, who wrote a column criticizing Hillary Clinton, then First Lady, for her efforts to conceal the identity of her health care task force. A few years later Dick Cheney, a conservative whom Safire admired, made a similar move by insisting on keeping his energy task force secret.

The alarm bell in Safire’s head rang, and he admits that the temptation to rationalize the occasion and apply a double standard was enormous. However, he recognized the dissonance at work and ended up writing a similar column about Cheney.

We know that Safire’s ability to spot his own dissonance and do the fair thing is rare. People will bend over backward to reduce dissonance in a way that is favorable to them and their team. Resisting that urge is not easy to do, but it is much better than letting the natural psychological tendencies cripple the integrity of our behaviors. There are ways to make fairness easier.

Making Things Easier

On the personal level, Charlie Munger suggests we should face two simple facts. Firstly, fixable, but unfixed bad performance is bad character and tends to create more of itself and cause more damage — a sort of Gresham’s Law. And, secondly, in demanding places like athletic teams, excuses and bad behavior will not get us far.

On the institutional level, Munger advises building a fair, meritocratic, demanding culture plus personnel handling methods that build up morale. His second piece of advice is the severance of the worst offenders, when possible.

Munger expands on the second point by noting that we cannot, of course, let go of our own children; we must instead try to fix them as best we can. He gives a real-life example of a child who had the habit of taking candy from the stock of his father’s employer, with the excuse that he intended to replace it later. The father said words that never left the child:

“Son, it would be better for you to simply take all you want and call yourself a thief every time you do it.”

Turns out the child in this example grew up to be the dean of the University of Southern California Business School, where Munger delivered the speech.

If we are effective, the lessons we teach our children will serve them well throughout their lives.

***

There is so much more to touch on with bias from self-interest, including its relation to hierarchy, how it distorts information, how it feeds our desire for self-preservation and scarcity, how it impacts group preservation, its relationship to territory, and so on.

Bias From Self-Interest is part of the Farnam Street latticework of mental models

The Psychology of Risk and Reward


An excerpt from Ashvin Chhabra’s The Aspirational Investor: Taming the Markets to Achieve Your Life’s Goals that I think you’d enjoy.

Most of us have a healthy understanding of risk in the short term.

When crossing the street, for example, you would no doubt speed up to avoid an oncoming car that suddenly rounds the corner.

Humans are wired to survive: it’s a basic instinct that takes command almost instantly, enabling our brains to resolve ambiguity quickly so that we can take decisive action in the face of a threat.

The impulse to resolve ambiguity manifests itself in many ways and in many contexts, even those less fraught with danger. Glance at the (above) picture for no more than a couple of seconds. What do you see?

Some observers perceive the profile of a young woman with flowing hair, an elegant dress, and a bonnet. Others see the image of a woman stooped in old age with a wart on her large nose. Still others—in the gifted minority—are able to see both of the images simultaneously.

What is interesting about this illusion is that our brains instantly decide what image we are looking at, based on our first glance. If your initial glance was toward the vertical profile on the left-hand side, you were all but destined to see the image of the elegant young woman: it was just a matter of your brain interpreting every line in the picture according to the mental image that you already formed, even though each line can be interpreted in two different ways. Conversely, if your first glance fell on the central dark horizontal line that emphasizes the mouth and chin, your brain quickly formed an image of the older woman.

Regardless of your interpretation, your brain wasn’t confused. It simply decided what the picture was and filled in the missing pieces. Your brain resolved ambiguity and extracted order from conflicting information.

What does this have to do with decision making? Every bit of information can be interpreted differently according to our perspective. Ashvin Chhabra directs us to investing. I suggest you reframe this in the context of decision making in general.

Every trade has a seller and a buyer: your state of mind is paramount. If you are in a risk-averse mental framework, then you are likely to interpret a further fall in stocks as additional confirmation of your sell bias. If instead your framework is positive, you will interpret the same event as a buying opportunity.

The challenge of investing is compounded by the fact that our brains, which excel at resolving ambiguity in the face of a threat, are less well equipped to navigate the long term intelligently. Since none of us can predict the future, successful investing requires planning and discipline.

Unfortunately, when reason is in apparent conflict with our instincts—about markets or a “hot stock,” for example—it is our instincts that typically prevail. Our “reptilian brain” wins out over our “rational brain,” as it so often does in other facets of our lives. And as we have seen, investors trade too frequently, and often at the wrong time.

One way our brains resolve conflicting information is to seek out safety in numbers. In the animal kingdom, this is called “moving with the herd,” and it serves a very important purpose: helping to ensure survival. Just as a buffalo will try to stay with the herd in order to minimize its individual vulnerability to predators, we tend to feel safer and more confident investing alongside equally bullish investors in a rising market, and we tend to sell when everyone around us is doing the same. Even the so-called smart money falls prey to a herd mentality: one study, aptly titled “Thy Neighbor’s Portfolio,” found that professional mutual fund managers were more likely to buy or sell a particular stock if other managers in the same city were also buying or selling.

This comfort is costly. The surge in buying activity and the resulting bullish sentiment is self-reinforcing, propelling markets to react even faster. That leads to overvaluation and the inevitable crash when sentiment reverses. As we shall see, such booms and busts are characteristic of all financial markets, regardless of size, location, or even the era in which they exist.

Even though the role of instinct and human emotions in driving speculative bubbles has been well documented in popular books, newspapers, and magazines for hundreds of years, these factors were virtually ignored in conventional financial and economic models until the 1970s.

This is especially surprising given that, in 1952, a young PhD student from the University of Chicago, Harry Markowitz, published two very important papers. The first, entitled “Portfolio Selection,” published in the Journal of Finance, led to the creation of what we call modern portfolio theory, together with the widespread adoption of its important ideas such as asset allocation and diversification. It earned Harry Markowitz a Nobel Prize in Economics.

The second paper, entitled “The Utility of Wealth” and published in the prestigious Journal of Political Economy, was about the propensity of people to hold insurance (safety) and to buy lottery tickets at the same time. It delved deeper into the psychological aspects of investing but was largely forgotten for decades.

The field of behavioral finance really came into its own through the pioneering work of two academic psychologists, Amos Tversky and Daniel Kahneman, who challenged conventional wisdom about how people make decisions involving risk. Their work garnered Kahneman the Nobel Prize in Economics in 2002. Behavioral finance and neuroeconomics are relatively new fields of study that seek to identify and understand human behavior and decision making with regard to choices involving trade-offs between risk and reward. Of particular interest are the human biases that prevent individuals from making fully rational financial decisions in the face of uncertainty.

As behavioral economists have documented, our propensity for herd behavior is just the tip of the iceberg. Kahneman and Tversky, for example, showed that people who were asked to choose between a certain loss and a gamble, in which they could either lose more money or break even, would tend to choose the double down (that is, gamble to avoid the prospect of losses), a behavior the authors called “loss aversion.” Building on this work, Hersh Shefrin and Meir Statman, professors at Santa Clara University’s Leavey School of Business, have linked the propensity for loss aversion to investors’ tendency to hold losing investments too long and to sell winners too soon. They called this bias the disposition effect.

The lengthy list of behaviorally driven market effects often converges in an investor’s tale of woe. Overconfidence causes investors to hold concentrated portfolios and to trade excessively, behaviors that can destroy wealth. The illusion of control causes investors to overestimate the probability of success and underestimate risk because of familiarity—for example, causing investors to hold too much employer stock in their 401(k) plans, resulting in under-diversification. Cognitive dissonance causes us to ignore evidence that is contrary to our opinions, leading to myopic investing behavior. And the representativeness bias leads investors to assess risk and return based on superficial characteristics—for example, by assuming that shares of companies that make products you like are good investments.

Several other key behavioral biases come into play in the realm of investing. Framing can cause investors to make a decision based on how the question is worded and the choices presented. Anchoring often leads investors to unconsciously create a reference point, say for securities prices, and then adjust decisions or expectations with respect to that anchor. This bias might impede your ability to sell a losing stock, for example, in the false hope that you can earn your money back. Similarly, the endowment bias might lead you to overvalue a stock that you own and thus hold on to the position too long. And regret aversion may lead you to avoid taking a tough action for fear that it will turn out badly. This can lead to decision paralysis in the wake of a market crash, even though, statistically, it is a good buying opportunity.

Behavioral finance has generated plenty of debate. Some observers have hailed the field as revolutionary; others bemoan the discipline’s seeming lack of a transcendent, unifying theory. This much is clear: behavioral finance treats biases as mistakes that, in academic parlance, prevent investors from thinking “rationally” and cause them to hold “suboptimal” portfolios.

But is that really true? In investing, as in life, the answer is more complex than it appears. Effective decision making requires us to balance our “reptilian brain,” which governs instinctive thinking, with our “rational brain,” which is responsible for strategic thinking. Instinct must integrate with experience.

Put another way, behavioral biases are nothing more than a series of complex trade-offs between risk and reward. When the stock market is taking off, for example, a failure to rebalance by selling winners is considered a mistake. The same goes for a failure to add to a position in a plummeting market. That’s because conventional finance theory assumes markets to be inherently stable, or “mean-reverting,” so most deviations from the historical rate of return are viewed as fluctuations that will revert to the mean, or self-correct, over time.

But what if a precipitous market drop is slicing into your peace of mind, affecting your sleep, your relationships, and your professional life? What if that assumption about markets reverting to the mean doesn’t hold true and you cannot afford to hold on for an extended period of time? In both cases, it might just be “rational” to sell and accept your losses precisely when investment theory says you should be buying. A concentrated bet might also make sense, if you possess the skill or knowledge to exploit an opportunity that others might not see, even if it flies in the face of conventional diversification principles.

Of course, the time to create decision rules for extreme market scenarios and concentrated bets is when you are building your investment strategy, not in the middle of a market crisis or at the moment a high-risk, high-reward opportunity from a former business partner lands on your desk and gives you an adrenaline jolt. A disciplined process for managing risk in relation to a clear set of goals will enable you to use the insights offered by behavioral finance to your advantage, rather than fall prey to the common pitfalls. This is one of the central insights of the Wealth Allocation Framework. But before we can put these insights to practical use, we need to understand the true nature of financial markets.

Books Everyone Should Read on Psychology and Behavioral Economics


Earlier this year, a prominent friend of mine was tasked with coming up with a list of behavioral economics book recommendations for the military leaders of a G7 country, and I was on the limited email list asked for input.

Yikes.

While I read a lot and I’ve offered up books to sports teams and Fortune 100 management teams, I’ve never contributed to something as broad as educating a nation’s military leaders. While I have a huge behavioral economics reading list, this wasn’t where I started.

Not only did I want to contribute, but I wanted to choose books that these military leaders wouldn’t normally have come across in everyday life. Books they were unlikely to have read. Books that offered perspective.

Given that I couldn’t talk to them outright, I was really trying to answer the question ‘what would I like to communicate to military leaders through non-fiction books?’ There were no easy answers.

I needed to offer something timeless. Not so outside the box that they wouldn’t approach it, and not so hard to find that those purchasing the books would give up and move on to the next one on the list. The books couldn’t be so big that the commitment to read them would be intimidating. On top of that, you need a book that starts strong because, in my experience of dealing with C-level executives, they stop paying attention after about 20 pages if it’s not relevant or challenging them in the right way.

In short, there is no one-size-fits-all choice, but to make the biggest impact you have to consider all of these factors.

While the justifications for why people chose the books below are confidential, I can tell you what books were on the final email that I saw. I left one book off the list, which I thought was a little too controversial to post.

These books have nothing to do with the military per se; rather, they deal with enduring concepts like ecology, intuition, game theory, strategy, biology, second-order thinking, and behavioral psychology. In short, these books would benefit most people who want to improve their ability to think, which is why I’m sharing them with you.

If you’re so inclined you can try to guess which ones I recommended in the comments. Read wisely.

In no order and with no attribution:

  1. Risk Savvy: How to Make Good Decisions by Gerd Gigerenzer
  2. The Righteous Mind: Why Good People Are Divided by Politics and Religion by Jonathan Haidt
  3. The Checklist Manifesto: How to Get Things Right by Atul Gawande
  4. The Darwin Economy: Liberty, Competition, and the Common Good by Robert H. Frank
  5. David and Goliath: Underdogs, Misfits, and the Art of Battling Giants by Malcolm Gladwell
  6. Predictably Irrational, Revised and Expanded Edition: The Hidden Forces That Shape Our Decisions by Dan Ariely
  7. Thinking, Fast and Slow by Daniel Kahneman
  8. The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life by Robert Trivers
  9. The Hour Between Dog and Wolf: Risk Taking, Gut Feelings and the Biology of Boom and Bust by John Coates
  10. Adapt: Why Success Always Starts with Failure by Tim Harford
  11. The Lessons of History by Will & Ariel Durant
  12. Poor Charlie’s Almanack
  13. Passions Within Reason: The Strategic Role of the Emotions by Robert H. Frank
  14. The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t by Nate Silver
  15. Sex at Dawn: How We Mate, Why We Stray, and What It Means for Modern Relationships by Christopher Ryan & Cacilda Jetha
  16. The Red Queen: Sex and the Evolution of Human Nature by Matt Ridley
  17. Introducing Evolutionary Psychology by Dylan Evans & Oscar Zarate
  18. Filters Against Folly: How To Survive Despite Economists, Ecologists, and the Merely Eloquent by Garrett Hardin
  19. Games of Strategy (Fourth Edition) by Avinash Dixit, Susan Skeath & David H. Reiley, Jr.
  20. The Theory of Political Coalitions by William H. Riker
  21. The Evolution of War and its Cognitive Foundations (PDF) by John Tooby & Leda Cosmides.
  22. Fight the Power: Lanchester’s Laws of Combat in Human Evolution by Dominic D.P. Johnson & Niall J. MacKay.