Tag: Cognitive Biases

Nonsense: A Handbook of Logical Fallacies

Robert Gula in Nonsense: A Handbook of Logical Fallacies:

Let’s not call them laws; and, since they’re not particularly original, I won’t attach my name to them. They are merely a description of patterns that seem to characterize the ways that people tend to respond and think. For example, people:

  1. Tend to believe what they want to believe;
  2. Tend to project their own biases or experiences upon situations;
  3. Tend to generalize from a specific event;
  4. Tend to get personally involved in the analysis of an issue and tend to let their feelings overcome a sense of objectivity;
  5. Are not good listeners;
  6. Are eager to rationalize;
  7. Are often unable to distinguish what is relevant from what is irrelevant;
  8. Are easily diverted from the specific issue at hand;
  9. Are usually unwilling to explore thoroughly the ramifications of a topic; tend to oversimplify;
  10. Often judge from appearances. They observe something, misinterpret what they observe, and make terrible errors in judgment;
  11. Often simply don’t know what they are talking about, especially in matters of general discussion. They rarely think carefully before they speak, but they allow their feelings, prejudices, biases, likes, dislikes, hopes, and frustrations to supersede careful thinking.
  12. Rarely act according to a set of consistent standards. Rarely do they examine the evidence and then form a conclusion. Rather, they tend to do whatever they want to do and to believe whatever they want to believe and then post hoc find whatever evidence will support their actions or beliefs and conveniently ignore any counter-evidence. They often think selectively: in evaluating a situation they are eager to find reasons to support what they want to support and they are just as eager to ignore or disregard reasons that don’t support what they want.
  13. Do not have a clear conceptual understanding of words employed in the discussion and consequently often do not say what they mean and often do not mean what they say.

…The above comments may seem jaundiced. They are not meant to be. They are not even meant to be critical or judgmental. They merely suggest that it is a natural human tendency to be subjective rather than objective and that the untrained mind will usually take the path of least resistance. But the path of least resistance rarely follows the path of rationality and logic.

Still curious? Read Nonsense: A Handbook of Logical Fallacies.

Daniel Kahneman: Debunking the Myth of Intuition

In a SPIEGEL interview, Nobel Prize-winning psychologist Daniel Kahneman discusses the innate weakness of human thought, deceptive memories and the misleading power of intuition.

By studying human intuition, or System 1, you seem to have learned to distrust this intuition…

I wouldn’t put it that way. Our intuition works very well for the most part. But it’s interesting to examine where it fails.

Experts, for example, have gathered a lot of experience in their respective fields and, for this reason, are convinced that they have very good intuition about their particular field. Shouldn’t we be able to rely on that?

It depends on the field. In the stock market, for example, the predictions of experts are practically worthless. Anyone who wants to invest money is better off choosing index funds, which simply follow a certain stock index without any intervention by gifted stock pickers. Year after year, they perform better than 80 percent of the investment funds managed by highly paid specialists. Nevertheless, intuitively, we want to invest our money with somebody who appears to understand, even though the statistical evidence is plain that they are very unlikely to do so. Of course, there are fields in which expertise exists. This depends on two things: whether the domain is inherently predictable, and whether the expert has had sufficient experience to learn the regularities. The world of stocks is inherently unpredictable.

So, all the experts’ complex analyses and calculations are worthless and no better than simply betting on the index?

The experts are even worse because they’re expensive.

So it’s all about selling snake oil?

It’s more complicated because the person who sells snake oil knows that there is no magic, whereas many people on Wall Street seem to believe that they understand. That’s the illusion of validity …

… which earns them millions in bonuses.

There is no need to be cynical. You may be cynical about the whole banking system, but not about the individuals. Many believe they are building real value.

[…]

Do we generally put too much faith in experts?

I’m not claiming that the predictions of experts are fundamentally worthless. … Take doctors. They’re often excellent when it comes to short-term predictions. But they’re often quite poor in predicting how a patient will be doing in five or 10 years. And they don’t know the difference. That’s the key.

How can you tell whether a prediction is any good?

In the first place, be suspicious if a prediction is presented with great confidence. That says nothing about its accuracy. You should ask whether the environment is sufficiently regular and predictable, and whether the individual has had enough experience to learn this environment.

According to your most recent book “Thinking, Fast and Slow,” when in doubt, it’s better to trust a computer algorithm.

When it comes to predictions, algorithms often just happen to be better.

Why should that be the case?

Well, the results are unequivocal. Hundreds of studies have shown that wherever we have sufficient information to build a model, it will perform better than most people.

How can a simple procedure be superior to human reasoning?

Well, even models are sometimes useless. A computer will be just as unreliable at predicting stock prices as a human being. And the political situation in 20 years is probably completely unpredictable; the world is simply too complex. However, computer models are good where things are relatively regular. Human judgment is easily influenced by circumstances and moods: Give a radiologist the same X-ray twice, and he’ll often interpret it differently the second time. But with an algorithm, if you give it the same information twice, it will return the same answer twice.
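Kahneman’s consistency point is easy to picture in code. The sketch below is purely illustrative, with invented feature values, weights, and a noise level (none of it comes from the interview or the studies he cites): a fixed linear scoring rule returns the same answer every time it sees the same case, while a judge whose score drifts with mood does not.

```python
import random

def model_score(case, weights):
    """A fixed linear scoring rule: identical inputs always give identical outputs."""
    return sum(w * x for w, x in zip(weights, case))

def human_judgment(case, weights, mood_sd=0.3):
    """The same rule plus mood noise: identical inputs can give different answers."""
    return sum(w * x for w, x in zip(weights, case)) + random.gauss(0, mood_sd)

case = [0.7, 0.2, 0.9]      # invented feature values for a single case
weights = [0.5, 0.3, 0.2]   # invented weights for how much each feature matters

# The model gives the same answer both times; the "judge" usually does not.
print(model_score(case, weights), model_score(case, weights))
print(human_judgment(case, weights), human_judgment(case, weights))
```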

IBM has developed a supercomputer called “Watson” that is supposed to quickly supply medical diagnoses by analyzing the description of symptoms and the patient’s history. Is this the medicine of the future?

I think so. There’s no magic involved.

Continue Reading

Daniel Kahneman Answers

In one of the more in-depth and wide-ranging Q&A sessions the Freakonomics blog has run, Daniel Kahneman, whose new book is called Thinking, Fast and Slow, answered 22 questions posted by readers.

Three of the questions that caught my attention:

Q. As you found, humans will take huge, irrational risks to avoid taking a loss. Couldn’t that explain why so many Penn State administrators took the huge risk of not disclosing a sexual assault?

A. In such a case, the loss associated with bringing the scandal into the open now is large, immediate and easy to imagine, whereas the disastrous consequences of procrastination are both vague and delayed. This is probably how many cover-up attempts begin. If people were certain that cover-ups would have very bad personal consequences (as happened in this case), we may see fewer cover-ups in future. From that point of view, the decisive reaction of the board of the University is likely to have beneficial consequences down the road.

Q. Problems in healthcare quality may be getting worse before they get better, and there are countless difficult decisions that will have to be made to ensure long-term system improvement. But on a daily basis, doctors and nurses and patients are each making a variety of decisions that shape healthcare on a smaller but more tangible level. How can the essence of Thinking, Fast and Slow be extracted and applied to the individual decisions that patients and providers make so that the quality of healthcare is optimized?

A. I don’t believe that you can expect the choices of patients and providers to change without changing the situation in which they operate. The incentives of fee-for-service are powerful, and so is the social norm that health is priceless (especially when paid for by a third party). Where the psychology of behavior change and the nudges of behavioral economics come into play is in planning for a transition to a better system. The question that must be asked is, “How can we make it easy for physicians and patients to change in the desired direction?”, which is closely related to, “Why don’t they already want the change?” Quite often, when you raise this question, you may discover that some inexpensive tweaks in the context will substantially change behavior. (For example, we know that people are more likely to pay their taxes if they believe that other people pay their taxes.)

Q. How can I identify my incentives and values so I can create a personal program of behavioral conditioning that associates incentives with the behavior likely to achieve long-term goals?

A. The best sources I know for answers to the question are included in the book Nudge by Richard Thaler and Cass Sunstein, in the book Mindless Eating by Brian Wansink, and in the books of the psychologist Robert Cialdini.

From Daniel Kahneman Answers Your Questions

Putting people and things into categories

Putting people and things into categories is something we all do. It’s a useful shortcut, but it also reveals our biases. And it plays a role in everything from ethnic violence to childhood development.

The Browser’s excellent five-books interview with Susan Gelman:

People have all kinds of cognitive biases, ways that we look at the world that are not quite in tune with reality, shortcuts that we use to make sense of the world. Essentialism is one of those and seems to be really pervasive. It’s how we think about everyday categories around us, like women or dogs or gold, or social categories, like different races or ethnicities. We tend to think that if we have a word for these categories, that it’s real and based in nature, that it’s not constructed by humans, but is really out there. We think that it has some deep, underlying basis and that if we look hard enough, we’ll be able to learn something about that deep underlying something that all members of the category have in common. That’s why it’s called essentialism, because that underlying something, that makes a Jew a member of that category, for example, is the essence.

Essentialism has a lot of positive implications. You could say it’s one of the motivators for science. One of the reasons why we keep looking and digging for non-obvious similarities within a category is that we have this optimistic belief that the world has a lot of structure to it.

Gelman recommends reading:

The Mismeasure of Man: This is a classic book. It was published in 1981 and got a lot of attention when it came out. Gould just does this beautiful job of laying out the “biology as destiny” idea – and then ripping it to shreds. It’s a historical view, he’s talking about the foundations – he wasn’t trying to capture current day psychology. You can think about it as how intelligence is viewed as this single thing that has an underlying essence.

The Bad Seed: It’s really essentialism personified. What makes it essentialism is that this girl, who outwardly seems very sweet and innocent, in actuality is bad to the core. So there’s this appearance/reality distinction that is a big piece of essentialism. … It’s a fiction, but it’s one that resonates with people. This is not supposed to be a work of science fiction.

How Pleasure Works: He’s a world-class scientist, and he’s also very good at taking sophisticated scientific ideas and portraying them to a broad audience. This book is a wonderful example of that. He’s really interested in how pleasure works, and he says, upfront, that his view is rooted in essentialism. So he says that we like what we like, not, as you might think, because of what it presents to our senses. It’s not just how something tastes or how it looks. Instead, it’s all filtered through our beliefs about what the item is, and that that has to do with essentialism. For example, two cups of water might look identical, but if I’m told that one of them came from a cold, pure mountain spring, and the other came out of a tap in New York City, I’m going to like the one that I think came out of the mountain spring more.

The Edge of Islam: This is definitely the most challenging book on my list. It’s not an easy read. Janet McIntosh is a cultural-linguistic anthropologist and she did her fieldwork in a little town in Kenya where there are two ethnic groups that she looked at, the Swahili and the Giriama. What’s really cool about it is that she shows how essentialism works in a culture that’s really different from a middle-class, developed world context.

Mindset: The New Psychology of Success: Praise, for example, turns out to be a really bad thing, because it fosters a fixed mindset. If you praise someone for how they do, then they lose interest in that activity. This is based on experiments with kids. They’re less likely to continue with the activity that they’re praised for, because they’re vulnerable then. They don’t want to try it again and maybe they won’t be as good. Then they’ll have a negative self-view. Then there is a whole thing about effort. If you have a fixed mindset, you think, “Well I’d better not put in a lot of effort, because if I put in a lot of effort and I still don’t do well, it really means I’m no good.” But if you have a growth mindset, you think, “Well, I’d better put in more effort and I’ll do better and I’ll learn and I’ll grow.” The last piece of the whole thing is that she’s found that if you make people aware of these differences and you give them enough input about it, you can move people from a fixed mindset to a growth mindset.

Continue Reading

Susan Gelman is professor of psychology at the University of Michigan. She won the Eleanor Maccoby Book Prize for her most recent book, The Essential Child.

The Ben Franklin Effect

Ben Franklin discovered that a person who has done someone a favor is more likely to do that person another favor than they would be had they received a favor from them. Or, as Franklin put it: “He that has once done you a Kindness will be more ready to do you another, than he whom you yourself have obliged.” This simple technique can be used to win someone’s favor or to create a sense of indebtedness in others.

Cognitive dissonance is a state of tension that occurs whenever a person holds two cognitions (ideas, beliefs, attitudes, or opinions) that are psychologically inconsistent.

Dissonance produces an uncomfortable mental state that the mind needs to resolve. In resolving dissonance, our minds tend toward self-justification, which makes it hard to admit mistakes.

Dissonance is a very powerful effect. In the case of the Ben Franklin effect, the dissonance is caused by the subjects’ negative attitude toward the other person, contrasted with the knowledge that they did that person a favor. The easiest way to rationalize why we did someone a favor is to say “that person is not so bad after all.”

In his autobiography, Franklin explains how he used this technique to win over a rival legislator while serving in the Pennsylvania legislature in the 18th century:

“Having heard that he had in his library a certain very scarce and curious book, I wrote a note to him, expressing my desire of perusing that book, and requesting he would do me the favour of lending it to me for a few days. He sent it immediately, and I return’d it in about a week with another note, expressing strongly my sense of the favour. When we next met in the House, he spoke to me (which he had never done before), and with great civility; and he ever after manifested a readiness to serve me on all occasions, so that we became great friends, and our friendship continued to his death.”

Still curious? Read more about The Reciprocation Bias.

Google’s Quest to Build a Better Boss

The HR department’s long reliance on gut instinct may be coming to a close.

Recently, Google applied its engineering (data-driven) mindset to building better bosses, and the counterintuitive findings suggest that promoting the best technical person is a bad idea.

Not content to just learn what makes a good boss, Google is using this information to make bad bosses better: “We were able to have a statistically significant improvement in manager quality for 75 percent of our worst-performing managers.”

But Mr. Bock’s group found that technical expertise — the ability, say, to write computer code in your sleep — ranked dead last among Google’s big eight. What employees valued most were even-keeled bosses who made time for one-on-one meetings, who helped people puzzle through problems by asking questions, not dictating answers, and who took an interest in employees’ lives and careers.

“In the Google context, we’d always believed that to be a manager, particularly on the engineering side, you need to be as deep or deeper a technical expert than the people who work for you,” Mr. Bock says. “It turns out that that’s absolutely the least important thing. It’s important, but pales in comparison. Much more important is just making that connection and being accessible.”

They’ve even published a list of cognitive biases.

Continue Reading