A Simple Checklist to Improve Decisions

We owe thanks to the publishing industry. Their ability to take a concept and fill an entire category with a shotgun approach is the reason that more people are talking about biases.

Unfortunately, talk alone will not eliminate them, but it is possible to take steps to counteract them. Reducing biases can make a huge difference in the quality of any decision, and it is easier than you think.

In a recent article for Harvard Business Review, Daniel Kahneman and his co-authors describe a simple way to detect bias and minimize its effects in the most common type of decision people make: determining whether to accept, reject, or pass on a recommendation.

The Munger two-step process for making decisions is a more complete framework, but Kahneman’s approach is a good way to help reduce biases in our decision-making.

If you’re short on time, here is a simple checklist that will get you started on the path toward improving your decisions (a rough code sketch of the checklist appears after the list):

Preliminary Questions: Ask yourself

1. Check for Self-interested Biases

  • Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?
  • Review the proposal with extra care, especially for overoptimism.

2. Check for the Affect Heuristic

  • Has the team fallen in love with its proposal?
  • Rigorously apply all the quality controls on the checklist.

3. Check for Groupthink

  • Were there dissenting opinions within the team?
  • Were they explored adequately?
  • Solicit dissenting views, discreetly if necessary.

Challenge Questions: Ask the recommenders

4. Check for Saliency Bias

  • Could the diagnosis be overly influenced by an analogy to a memorable success?
  • Ask for more analogies, and rigorously analyze their similarity to the current situation.

5. Check for Confirmation Bias

  • Are credible alternatives included along with the recommendation?
  • Request additional options.

6. Check for Availability Bias

  • If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?
  • Use checklists of the data needed for each kind of decision.

7. Check for Anchoring Bias

  • Do you know where the numbers came from? Can there be unsubstantiated numbers, extrapolation from history, or a motivation to use a certain anchor?
  • Reanchor with figures generated by other models or benchmarks, and request new analysis.

8. Check for Halo Effect

  • Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?
  • Eliminate false inferences, and ask the team to seek additional comparable examples.

9. Check for Sunk-Cost Fallacy, Endowment Effect

  • Are the recommenders overly attached to a history of past decisions?
  • Consider the issue as if you were a new CEO.

Evaluation Questions: Ask about the proposal

10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect

  • Is the base case overly optimistic?
  • Have the team build a case taking an outside view; use war games.

11. Check for Disaster Neglect

  • Is the worst case bad enough?
  • Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.

12. Check for Loss Aversion

  • Is the recommending team overly cautious?
  • Realign incentives to share responsibility for the risk or to remove risk.
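
Here is a minimal sketch, in Python, of how the twelve checks above could be kept as a reusable review template. The structure, the dictionary keys, and the unanswered helper are hypothetical illustrations of mine, not something from the HBR article:

```python
# A hypothetical sketch (not from the HBR article): the twelve checks expressed
# as data, so a review can confirm that every question was explicitly answered
# before a recommendation is accepted.

CHECKLIST = {
    "self_interest": "Any reason to suspect errors motivated by self-interest?",
    "affect": "Has the team fallen in love with its proposal?",
    "groupthink": "Were dissenting opinions explored adequately?",
    "saliency": "Is the diagnosis overly influenced by a memorable success?",
    "confirmation": "Are credible alternatives included with the recommendation?",
    "availability": "What information would you want in a year, and can you get it now?",
    "anchoring": "Do you know where the numbers came from?",
    "halo": "Is success in one area assumed to transfer to another?",
    "sunk_cost": "Is the team overly attached to a history of past decisions?",
    "overconfidence": "Is the base case overly optimistic?",
    "disaster_neglect": "Is the worst case bad enough?",
    "loss_aversion": "Is the recommending team overly cautious?",
}


def unanswered(answers: dict) -> list:
    """Return the checklist questions that were not explicitly answered."""
    return [question for key, question in CHECKLIST.items() if not answers.get(key)]


# Example: a review that has only addressed two of the twelve checks so far.
answers = {"self_interest": "No obvious incentive issues", "anchoring": "Benchmarked externally"}
for question in unanswered(answers):
    print("Still open:", question)
```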

If you’re looking to dramatically improve your decision-making, here is a great list of books to get you started:

Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein

Think Twice: Harnessing the Power of Counterintuition by Michael J. Mauboussin

Think Again: Why Good Leaders Make Bad Decisions and How to Keep It from Happening to You by Sydney Finkelstein, Jo Whitehead, and Andrew Campbell

Predictably Irrational: The Hidden Forces That Shape Our Decisions by Dan Ariely

Thinking, Fast and Slow by Daniel Kahneman

Judgment in Managerial Decision Making by Max Bazerman

James March: The Ambiguities of Experience

In his book, The Ambiguities of Experience, James March explores the role of experience in creating intelligence.

Folk wisdom both trumpets the significance of experience and warns of its inadequacies.

On one hand, experience is thought to be the best teacher. On the other hand, experience is described as the teacher of fools, of those unable or unwilling to learn from accumulated knowledge. There is no need to learn everything yourself.

The disagreement between those aphorisms reflects profound questions about the human pursuit of intelligence through learning from experience.

“Since experience in organizations often suffers from weak signals, substantial noise, and small samples, it is quite likely that realized history will deviate considerably from underlying reality.”

— James March

March convincingly argues that although individuals and organizations are eager to derive intelligence from experience, the inferences stemming from that eagerness are often misguided.

The problems lie partly in errors in how people think, but even more so in properties of experience that confound learning from it. ‘Experience,’ March concludes, ‘may possibly be the best teacher, but it is not a particularly good teacher.’

Here are some of my notes from the book:

  • Intelligence normally entails two interrelated but somewhat different components. The first involves effective adaptation to an environment. The second: the elegance of interpretations of the experiences of life.
  • Since experience in organizations often suffers from weak signals, substantial noise, and small samples, it is quite likely that realized history will deviate considerably from the underlying reality.
  • Agencies write standards because experience is a poor teacher.
  • Constant exposure to danger without its realization leaves human beings less concerned about what once terrified them, and therefore experience can have the paradoxical effect of having people learn to feel more immune than they should to the unlikely dangers that surround them.
  • Generating an explanation of history involves transforming the ambiguities and complexities of experience into a form that is elaborate enough to elicit interest, simple enough to be understood, and credible enough to be accepted. The art of storytelling involves a delicate balancing of those three criteria.
  • Humans have limited capabilities to store and recall history. They are sensitive to reconstructed memories that serve current beliefs and desires. They conserve belief by being less critical of evidence that seems to confirm prior beliefs than of evidence that seems to disconfirm them. They destroy both observations and beliefs in order to make them consistent. They prefer simple causalities, ideas that place causes and effects close to one another and that match big effects with big causes. 
  • The key effort is to link experience with a pre-existent accepted storyline so as to achieve a subjective sense of the understanding.
  • Experience is rooted in a complicated causal system that can be described adequately only by a description that is too complex for the human mind. The more accurately reality is reflected, the less comprehensible the story, and the more comprehensible the story, the less realistic it is.
  • Storytellers have their individual sources and biases, but they have to gain acceptance of their stories by catering to their audiences.
  • Despite the complications in extracting reality from experience, or perhaps because of them, there is a tendency for the themes of stories of management to converge over time.
  • Organizational stories and models are built particularly around four main mythic themes: rationality (the idea that the human spirit finds definitive expression through taking and justifying action in terms of its future consequences for prior values); hierarchy (the idea that problems and actions can be decomposed into nested sets of subproblems and sub-actions such that interactions among them can be organized within a hierarchy); individual leader significance (the idea that any story of history must be related to a human project in order to be meaningful and that organizational history is produced by the intentions of specific human leaders); and historical efficiency (the idea that history follows a path leading to a unique equilibrium defined by antecedent conditions and produced by competition).
  • Underlying many of these myths is a grand myth of human significance: the idea that humans can, through their individual and collective intelligent actions, influence the course of history to their advantage.
  • The myth of human significance produces the cruelties and generosities stemming from the human inclination to assign credit and blame for events to human intention.
  • There is an overwhelming tendency in American life to lionize or pillory the people who stand at the helms of our large institutions, to offer praise or level blame for outcomes over which they may have little control.
  • An experienced scholar is less inclined to claim originality than is a beginner.
  • …processes of adaptation can eliminate sources of error but are inefficient in doing so.
  • Knowledge is lost through turnover, forgetting, and misfiling, which assure that at any point there is considerable ignorance. Something that was once known is no longer known. In addition, knowledge is lost through its incomplete accessibility.
  • A history of success leads managers to a systematic overestimation of the prospects for success in novel endeavors. If managers attribute their success to talent when it is, in fact, a joint consequence of talent and good fortune, successful managers will come to believe that they have capabilities for beating the odds in the future as they apparently have had in the past.
  • In a competitive world of promises, winning projects are systematically projects in which hopes exceed reality.
  • The history of organizations cycling between centralization and decentralization is a tribute, in part, to the engineering difficulty of finding an enduring balance between the short-run and local costs and the long-run and more global benefits of boundaries.
  • The vividness of direct experience leads learners to exaggerate the information content of personal experience relative to other information.
  • The ambiguities of experience take many forms but can be summarized in terms of five attributes: 1) the causal structure of experience is complex; 2) experience is noisy; 3) history includes numerous examples of endogeneity, cases in which the properties of the world are affected by actions adapting to it; 4) history as it is known is constructed by participants and observers; 5) history is miserly in providing experience. It offers only small samples and thus large sampling error in the inferences formed.
  • Experience often appears to increase significantly the confidence of successful managers in their capabilities without greatly expanding their understanding.

Still interested? Want to know more? Buy the book. Read Does Experience Make You an Expert? next.

Taleb: The Fooled by Randomness Effect and the Internet Diet?

In this brief article, Nassim Taleb (of Black Swan fame) touches on information, complexity, the randomness effect, over-confidence, and signal and noise.

THE DEGRADATION OF PREDICTABILITY — AND KNOWLEDGE

I used to think that the problem of information is that it turns homo sapiens into fools — we gain disproportionately in confidence, particularly in domains where information is wrapped in a high degree of noise (say, epidemiology, genetics, economics, etc.). So we end up thinking that we know more than we do, which, in economic life, causes foolish risk taking. When I started trading, I went on a news diet and I saw things with more clarity. I also saw how people built too many theories based on sterile news, the fooled by randomness effect. But things are a lot worse. Now I think that, in addition, the supply and spread of information turns the world into Extremistan (a world I describe as one in which random variables are dominated by extremes, with Black Swans playing a large role in them). The Internet, by spreading information, causes an increase in interdependence, the exacerbation of fads (bestsellers like Harry Potter and runs on the banks become planetary). Such world is more “complex”, more moody, much less predictable.

So consider the explosive situation: more information (particularly thanks to the Internet) causes more confidence and illusions of knowledge while degrading predictability.

Look at this current economic crisis that started in 2008: there are about a million persons on the planet who identify themselves in the field of economics. Yet just a handful realized the possibility and depth of what could have taken place and protected themselves from the consequences. At no time in the history of mankind have we lived under so much ignorance (easily measured in terms of forecast errors) coupled with so much intellectual hubris. At no point have we had central bankers missing elementary risk metrics, like debt levels, that even the Babylonians understood well.

I recently talked to a scholar of rare wisdom and erudition, Jon Elster, who upon exploring themes from social science, integrates insights from all authors in the corpus of the past 2500 years, from Cicero and Seneca, to Montaigne and Proust. He showed me how Seneca had a very sophisticated understanding of loss aversion. I felt guilty for the time I spent on the Internet. Upon getting home I found in my mail a volume of posthumous essays by bishop Pierre-Daniel Huet called Huetiana, put together by his admirers c. 1722. It is so saddening to realize that, being born close to four centuries after Huet, and having done most of my reading with material written after his death, I am not much more advanced in wisdom than he was — moderns at the upper end are no wiser than their equivalent among the ancients; if anything, much less refined.

So I am now on an Internet diet, in order to understand the world a bit better — and make another bet on horrendous mistakes by economic policy makers. I am not entirely deprived of the Internet; this is just a severe diet, with strict rationing. True, technologies are the greatest things in the world, but they have way too monstrous side effects — and ones rarely seen ahead of time. And since spending time in the silence of my library, with little informational pollution, I can feel harmony with my genes; I feel I am growing again.

Related: Noise Vs. Signal

Source

Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments

We tend to feel we’re more able and smarter than we really are. We think we’re above-average drivers and above-average investors, and that we make better decisions than everyone else.

According to a recent study, this occurs, in part, because the unskilled “suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.”

“Ignorance more frequently begets confidence than does knowledge.”

— Charles Darwin

The study goes on to make several key points:

  • In many domains in life, success and satisfaction depend on knowledge, wisdom, or savvy in knowing which rules to follow and which strategies to pursue.
  • People differ widely in the knowledge and strategies they apply in these domains with varying levels of success. Some of the knowledge and theories that people apply to their actions are sound and meet with favorable results.
  • When people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden: Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it.

The authors come to the conclusion that the skills needed to be competent in a domain are often the same skills needed to accurately evaluate competence in that domain. The better we are at something, the better we’re able to judge ourselves at it. Because of this, incompetent individuals often exaggerate their ability more than competent ones do.

Hindsight Bias: Why You’re Not As Smart As You Think You Are

Hindsight bias occurs when we look backward in time and see events as more predictable than they were at the time a decision was made. This bias, also known as the “knew-it-all-along effect,” typically involves those annoying “I told you so” people who never really told you anything.

For instance, consider driving in the car with your partner and coming to a T in the road. Your partner decides to turn right, and four miles down the road, when you realize you are lost, you think, “I knew we should have taken that left.”

Hindsight bias can offer a number of benefits in the short run. For instance, it can be flattering to believe that our judgment is better than it actually is. And, of course, hindsight bias allows us to participate in one of our favorite pastimes — criticizing the decisions of others for their lack of foresight.

“Judgments about what is good and what is bad, what is worthwhile and what is a waste of talent, what is useful and what is less so, are judgments that seldom can be made in the present. They can safely be made only by posterity.”

— Tulving

Aside from its effect on how objectively we reflect on our decisions, hindsight bias also has several practical implications. For example, consider a reviewer who is asked to evaluate a paper but already knows the results of a previous review, or a physician asked for a second opinion who already knows the conclusion of the first. The results of these judgments will likely be biased to some degree. Once we know an outcome, it becomes easy to find some plausible explanation for it.

Hindsight bias makes us less accountable for our decisions, less critical of ourselves, and more over-confident in our ability to make decisions.

One of the most interesting things I discovered when researching hindsight bias was its impact on our legal system and on the perceptions of jurors.

* * *

Harvard Professor Max Bazerman offers:

The processes that give rise to anchoring and overconfidence are also at play with the hindsight bias. According to this explanation, knowledge of an event’s outcome works as an anchor by which individuals interpret their prior judgments of the event’s likelihood. Due to the selective accessibility of the confirmatory information during information retrieval, adjustments to anchors are inadequate. Consequently, hindsight knowledge biases our perceptions of what we remember knowing in foresight. Furthermore, to the extent that various pieces of data about the event vary in support of the actual outcome, evidence that is consistent with the known outcome may become cognitively more salient and thus more available in memory. This tendency will lead an individual to justify a claimed foresight in view of “the facts provided.” Finally, the relevance of a particular piece of data may later be judged important to the extent to which it is representative of the final observed outcome.

In Cognitive Illusions, Rudiger Pohl offered the following explanations of hindsight bias:

Most prominent among the proposed explanations are cognitive accounts which assume that hindsight bias results from an inability to ignore the solution. Among the early approaches are the following three: (1) Fischhoff (1975) assumed an immediate and irreversible assimilation of the solution into one’s knowledge base. As a consequence, the reconstructed estimate will be biased towards the solution. (2) Tversky and Kahneman (1974) proposed a cognitive heuristic for the anchoring effect, named anchoring and insufficient adjustment. The same mechanism may apply here, if the solution is assumed to serve as an “anchor” in the reconstruction process. The reconstruction starts from this anchor and is then adjusted in the direction of one’s knowledge base. However, this adjustment process may stop too early, for example at the point where the first plausible value is reached, thus leading to a biased reconstruction. (3) Hell (1988) argued that the relative trace strengths of the original estimate and of the solution might predict the amount of hindsight bias. The stronger the trace strength of the solution relative to that of the original estimate, the larger hindsight bias should be.

Pohl also offers an evolutionary explanation of hindsight bias:

Finally, some authors argued that hindsight bias is not necessarily a bothersome consequence of a “faulty” information-processing system, but that it may rather represent an unavoidable by-product of an evolutionarily evolved function, namely adaptive learning. According to this view, hindsight bias is seen as the consequence of our most valuable ability to update previously held knowledge. This may be seen as a necessary process in order to prevent memory overload and thus to maintain normal cognitive functioning. Besides, updating allows us to keep our knowledge more coherent and to draw better inferences.

Ziva Kunda, in Social Cognition, offers the following explanation of why hindsight bias occurs:

Preceding events take on new meaning and importance as they are made to cohere with the known outcome. Now that we know that our friends have filed for divorce, any ambiguous behavior we have seen is reinterpreted as indicative of tension, any disagreement gains significance, and any signs of affection seem irrelevant. It now seems obvious that their marriage was doomed from the start…Moreover, having adjusted our interpretations in light of current knowledge, it is difficult to imagine how things could have happened differently.

When making likelihood judgments, we often rely on the availability heuristic: The more difficult it is for us to imagine an outcome, the more unlikely it seems. Therefore, the difficulty we experience imagining how things might have turned out differently makes us all the more convinced that the outcomes that did occur were bound to have occurred.

Hindsight bias has large implications for criminal trials. In Jury Selection, Hale Starr and Mark McCormick offer the following:

The effects of hindsight bias – which result in being held to a higher standard – are most critical for both criminal and civil defendants. The defense is more susceptible to the hindsight bias since their actions are generally the ones being evaluated for reasonableness in foresight-foreseeability. When jurors perceive that the results of particular actions were “reasonably” more likely after the outcome is known, defendants are judged as having been capable of knowing more than they knew at the time the action was taken and therefore as capable of preventing the “bad” outcome.

In post-verdict surveys, jurors unknowingly demonstrate some of the effects of hindsight bias:

“I can’t understand why the managers didn’t try to get more information or use the information they had available. They should have known there would be safety problems at the plant.”

“The defendants should have known people would remove the safety shield around the tire. There should have been warnings so people wouldn’t do that.”

“Even though he was a kid, he should have known that once he showed the others who had been drinking that he had a gun, things would get out of hand. He should have known guns invited violence.”

Jurors influenced by the hindsight bias look at the evidence presented and determine that the defendants knew or should have known their actions were unsafe, unwise, or created a dangerous situation. Hindsight bias often results in the judgment that the event was “an accident or tragedy waiting to happen.”

* * *

Protection Against Hindsight Bias

In Principles of Forecasting, Jon Scott Armstrong offers the following advice on how to protect yourself:

The surest protection against (hindsight bias) is disciplining ourselves to make explicit predictions, showing what we did in fact know (sounds like a decision journal). That record can also provide us with some protection against those individuals who are wont to second-guess us, producing exaggerated claims of what we should have known (and perhaps should have told them). If these observers look to this record, it may show them that we are generally less proficient as forecasters than they would like, while protecting us against charges of having blown a particular assignment. Having an explicit record can also protect us against overconfidence in our own forecasting ability: If we feel that we “knew all along” what was going to happen, then it is natural enough to think that we will have similar success in the future. Unfortunately, an exaggerated perception of a surprise-free past may portend a surprise-full future.

Documenting the reasons we made a forecast makes it possible for us to know not only how well the forecast did, but also where it went astray. For example, subsequent experiences may show that we used wrong (or misunderstood) inputs. In that case, we can, in principle, rerun the forecasting process with better inputs and assess the accuracy of our (retrospectively) revised forecasts. Perhaps we did have the right theory and procedures, but were applying them to a mistaken picture of then-current conditions…Of course inputs are also subject to hindsight bias, hence we need to record them explicitly as well. The essence of making sense out of outcome knowledge is reinterpreting the processes and conditions that produced the reported event.
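
To make the record-keeping concrete, here is a minimal sketch of what such an explicit prediction record (a decision journal entry) could look like. The ForecastRecord fields, the file name, and the example values are hypothetical illustrations, not something prescribed by Armstrong:

```python
# A hypothetical sketch of a decision-journal entry: record the forecast,
# the reasoning, and the inputs *before* the outcome is known, so later
# reviews compare the result against what was actually known in foresight.

import json
from dataclasses import dataclass, asdict, field
from datetime import date


@dataclass
class ForecastRecord:
    decision: str                                # what is being decided
    forecast: str                                # the explicit prediction made in foresight
    reasoning: str                               # why we expect that outcome
    inputs: list = field(default_factory=list)   # data and assumptions relied on
    made_on: str = ""                            # date the prediction was recorded
    outcome: str = ""                            # filled in later, once the result is known


record = ForecastRecord(
    decision="Enter the new market in Q3",       # hypothetical example
    forecast="Break even within 18 months",
    reasoning="Comparable launches broke even in 12-24 months",
    inputs=["market-size estimate", "pricing study"],
    made_on=str(date.today()),
)

# Appending to a plain-text log preserves the foresight view on record.
with open("decision_journal.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```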

Hindsight Bias is part of the Farnam Street latticework of mental models.