
Is Everything Obvious Once You Know The Answer?

Reading Duncan Watts’s new book Everything is Obvious: Once You Know The Answer can make you uncomfortable.

Common sense is particularly well adapted to handling the complexity of everyday situations. We get into trouble when we apply our common sense to situations outside the realm of everyday life.

Applying common sense in these areas, Watts argues, “turns out to suffer from a number of errors that systematically mislead us. Yet because of the way we learn from experience—even experiences that are never repeated or that take place in other times and places—the failings of commonsense reasoning are rarely apparent to us.”

We think we have the answers but we don’t. Most real-world problems are more complex than we think. “When policy makers sit down, say, to design some scheme to alleviate poverty, they invariably rely on their own common-sense ideas about why it is that poor people are poor, and therefore how best to help them.” This is where we get into trouble. “A quick look at history,” Watts argues, “suggests that when common sense is used for purposes beyond the everyday, it can fail spectacularly.”

According to Watts, commonsense reasoning suffers from three types of errors, which reinforce one another. First, our mental model of individual behaviour is systematically flawed. Second, our mental model of complex systems (collective behaviour) is equally flawed. Lastly—and most interesting, in my view—“we learn less from history than we think we do, and that misperception in turn skews our perception of the future.”

Whenever something interesting happens—a book by an unknown author rocketing to the top of the best-seller list, an unknown search engine increasing in value more than 100,000 times in less than 10 years, the housing bubble collapsing—we instinctively want to know why. We look for an explanation. “In this way,” Watts says, “we deceive ourselves into believing that we can make predictions that are impossible.”

“By providing ready explanations for whatever particular circumstances the world throws at us, commonsense explanations give us the confidence to navigate from day to day and relieve us of the burden of worrying about whether what we think we know is really true, or is just something we happen to believe.”

Once we know the outcome, our brains weave a clever story based on the aspects of the situation that seem relevant (at least, relevant in hindsight). We convince ourselves that we fully understand things that we don’t.

Is Netflix successful, as Reed Hastings argues, because of their culture? Which aspects of their culture make them successful? Do companies with a similar culture exist that fail? “The paradox of common sense, then, is that even as it helps us make sense of the world, it can actively undermine our ability to understand it.”

The key to improving your ability to make decisions, then, is to figure out what kinds of predictions we can make and how we can improve our accuracy.

One problem with making predictions is knowing which variables to look at and how to weigh them. Even if we get the variables and their relative importance right, these predictions still depend on how much the future will resemble the past. As Warren Buffett says, “the rearview mirror is always clearer than the windshield.”

Relying on historical data is problematic because big strategic decisions come along so rarely. “If you could make millions, or even hundreds, of such bets,” Watts argues, “it would make sense to go with the historical probability. But when facing a decision about whether or not to lead the country into war, or to make some strategic acquisition, you cannot count on getting more than one attempt. … making one-off strategic decisions is therefore ill suited to statistical models or crowd wisdom.”

Watts finds it ironic that organizations using the best practices in strategy planning can also be the most vulnerable to planning errors. This is the strategy paradox.

Michael Raynor, author of The Strategy Paradox, argues that the main cause of strategic failure is not bad strategy but great strategy that happens to be wrong. Bad strategy is characterized by lack of vision, muddled leadership, and inept execution, which is more likely to lead to mediocrity than colossal failure. Great strategy, on the other hand, is marked by clarity of vision, bold leadership, and laser-focused execution. Great strategy can lead to great successes, as it did with the iPod, but it can also lead to enormous failures, as it did with Betamax. “Whether great strategy succeeds or fails therefore depends entirely on whether the initial vision happens to be right or not. And that is not just difficult to know in advance, but impossible.” Raynor argues that the solution is to develop methods for planning that account for strategic uncertainty. (I’ll eventually get around to reviewing The Strategy Paradox—it was a great read.)

Rather than trying to predict an impossible future, another idea is to react to changing circumstances as rapidly as possible, dropping alternatives that are not working no matter how promising they seem and diverting resources to those that are succeeding. This sounds an awful lot like evolution (variation and selection).

Watts and Raynor’s solution to our inability to predict the future echoes Peter Palchinsky’s principles. The Palchinsky principles, as described by Tim Harford in Adapt (review), are “first, seek out new ideas and try new things; second, when trying something new, do it on a scale where failure is survivable; third, seek out feedback and learn from your mistakes as you go along.”

Of course, this experimental approach has limits. The US can’t fight a war in half of Iraq with one strategy and in the other half with a different approach just to see which one works best. Watts says, “for decisions like these, it’s unlikely that an experimental approach will be of much help.”

In the end, Watts concludes that planners need to learn to behave more “like what the development economist William Easterly calls searchers.” As Easterly put it:

A Planner thinks he already knows the answer; he thinks of poverty as a technical engineering problem that his answers will solve. A Searcher admits he doesn’t know the answers in advance; he believes that poverty is a complicated tangle of political, social, historical, institutional, and technological factors…and hopes to find answers to individual problems by trial and error…A Planner believes outsiders know enough to impose solutions. A Searcher believes only insiders have enough knowledge to find solutions, and that most solutions must be homegrown.

Still curious? Read Everything is Obvious: Once You Know The Answer.

Future Babble: Why expert predictions fail and why we believe them anyway

Future Babble has come out to mixed reviews. I think the book would interest anyone seeking wisdom.

Here are some of my notes:

First, a little background: predictions fail because the world is too complicated to be predicted with accuracy and because we’re wired to avoid uncertainty. However, we shouldn’t blindly believe experts. The world is divided into two types of thinkers: foxes and hedgehogs. The fox knows many things, whereas the hedgehog knows one big thing. Foxes beat hedgehogs when it comes to making predictions.

  • What we should ask is why, in a non-linear world, we would think oil prices can be predicted. Practically since the dawn of the oil industry in the nineteenth century, experts have been forecasting the price of oil. They’ve been wrong ever since. And yet this dismal record hasn’t caused us to give up on the enterprise of forecasting oil prices.
  • One of psychology’s fundamental insights, wrote psychologist Daniel Gilbert, is that judgements are generally the products of non-conscious systems that operate quickly, on the basis of scant evidence, and in a routine manner, and then pass their hurried approximations to consciousness, which slowly and deliberately adjusts them. … (One consequence of this is that) appearance equals reality. In the ancient environment in which our brains evolved, that was a good rule, which is why it became hard-wired into the brain and remains there to this day. (As an example of this,) psychologists have shown that people often stereotype “baby-faced” adults as innocent, helpless, and needy.
  • We have a hard time with randomness. If we try, we can understand it intellectually, but as countless experiments have shown, we don’t get it intuitively. This is why someone who plunks one coin after another into a slot machine without winning will have a strong and growing sense—the gambler’s fallacy—that a jackpot is “due,” even though every result is random and therefore unconnected to what came before. … and people believe that a sequence of random coin tosses that goes “THTHHT” is far more likely than the sequence “THTHTH” even though they are equally likely.
  • People are particularly disinclined to see randomness as the explanation for an outcome when their own actions are involved. Gamblers rolling dice tend to concentrate and throw harder for higher numbers, softer for lower. Psychologists call this the “illusion of control.” … they also found the illusion is stronger when it involves prediction. In a sense, the “illusion of control” should be renamed the “illusion of prediction.”
  • Overconfidence is a universal human trait closely related to an equally widespread phenomenon known as “optimism bias.” Ask smokers about the risk of getting lung cancer from smoking and they’ll say it’s high. But their risk? Not so high. … The evolutionary advantage of this bias is obvious: It encourages people to take action and makes them more resilient in the face of setbacks.
  • … How could so many experts have been so wrong? … A crucial component of the answer lies in psychology. For all the statistics and reasoning involved, the experts derived their judgements, to one degree or another, from what they felt to be true. And in doing so they were fooled by a common bias. … This tendency to take current trends and project them into the future is the starting point of most attempts to predict. Very often, it’s also the end point. That’s not necessarily a bad thing. After all, tomorrow typically is like today. Current trends do tend to continue. But not always. Change happens. And the further we look into the future, the more opportunity there is for current trends to be modified, bent, or reversed. Predicting the future by projecting the present is like driving with no hands. It works while you are on a long stretch of straight road, but even a gentle curve is trouble, and a sharp turn always ends in a flaming wreck.
  • When people attempt to judge how common something is—or how likely it is to happen in the future—they attempt to think of an example of that thing. If an example is recalled easily, it must be common. If it’s harder to recall, it must be less common. … Again, this is not a conscious calculation. The “availability heuristic” is a tool of the unconscious mind.
  • “deviating too far from consensus leaves one feeling potentially ostracized from the group, with the risk that one may be terminated.” (Robert Shiller) … It’s tempting to think that only ordinary people are vulnerable to conformity, that esteemed experts could not be so easily swayed. Tempting, but wrong. As Shiller demonstrated, “groupthink” is very much a disease that can strike experts. In fact, psychologist Irving Janis coined the term “groupthink” to describe expert behavior. In his 1972 classic, Victims of Groupthink, Janis investigated four high-level disasters (the defence of Pearl Harbour, the Bay of Pigs invasion, and the escalation of the wars in Korea and Vietnam) and demonstrated that conformity among highly educated, skilled, and successful people working in their fields of expertise was a root cause in each case.
  • (On corporate use of scenario planning)… Scenarios are not predictions, emphasizes Peter Schwartz, the guru of scenario planning. “They are tools for testing choices.” The idea is to have a clever person dream up a number of very different futures, usually three or four. … Managers then consider the implications of each, forcing them out of the rut of the status quo and into thinking about what they would do if confronted with real change. The ultimate goal is to make decisions that would stand up well in a wide variety of contexts. No one denies there may be some value in such exercises. But how much value? The consultants who offer scenario-planning services are understandably bullish, but ask them for evidence and they typically point to examples of scenarios that accurately foreshadowed the future. That is silly, frankly. For one thing, it contradicts their claim that scenarios are not predictions; for another, all the misses would have to be considered, and the misses vastly outnumber the hits. … Consultants also cite the enormous popularity of scenario planning as proof of its enormous value… Lack of evidence aside, there are more disturbing reasons to be wary of scenarios. Remember that what drives the availability heuristic is not how many examples the mind can recall but how easily they are recalled. … And what are scenarios? Vivid, colourful, dramatic stories. Nothing could be easier to remember or recall. And so being exposed to a dramatic scenario about (whatever)… will make the depicted events feel much more likely to happen.
  • (on not having control) At its core, torture is a process of psychological destruction, and that process almost always begins with the torturer explicitly telling the victim he is powerless. “I decide when you can eat and sleep. I decide when you suffer, how you suffer, if it will end. I decide if you live or die.” … Knowing what will happen in the future is a form of control, even if we cannot change what will happen. … Uncertainty is potent… people who experienced the mild-but-unpredictable shocks experienced much more fear than those who got the strong-but-predictable shocks.
  • Our profound aversion to uncertainty helps explain what would otherwise be a riddle: Why do people pay so much attention to dark and scary predictions? Why do gloomy forecasts so often outnumber optimistic predictions, take up more media space, and sell more books? Part of this predilection for gloom is simply an outgrowth of what is sometimes called negativity bias: our attention is drawn more swiftly by bad news or images, and we are more likely to remember them than cheery information…. People whose brains gave priority to bad news were much less likely to be eaten by lions or die some other unpleasant death. … (Negative) predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds, we want to believe the expert predicting a dark future because being certain of disaster is less tormenting than merely suspecting it. Certainty is always preferable to uncertainty, even when what’s certain is disaster.
  • Researchers have also shown that financial advisors who express considerable confidence in their stock forecasts are more trusted than those who are less confident, even when their objective records are the same. … This “confidence heuristic,” like the availability heuristic, isn’t necessarily a conscious decision path. We may not actually say to ourselves “she’s so sure of herself she must be right”…
  • (on our love for stories) Confirmation bias also plays a critical role for the very simple reason that none of us is a blank slate. Every human brain is a vast warehouse of beliefs and assumptions about the world and how it works. Psychologists call these “schemas.” We love stories that fit our schemas; they’re the cognitive equivalent of beautiful music. But a story that doesn’t fit – a story that contradicts basic beliefs – is dissonant.
  • … What makes this mass delusion possible is the different emphasis we put on predictions that hit and those that miss. We ignore misses, even when they lie scattered by the dozen at our feet; we celebrate hits, even when we have to hunt for them and pretend there was more to them than luck.
  • By giving us the sense that we should have predicted what is now the present, or even that we actually did predict it when we did not, it strongly suggests that we can predict the future. This is an illusion, and yet it seems only logical – which makes it a particularly persuasive illusion.

If you like the notes, you should buy Future Babble. Like the book summaries? Check out my notes from Adapt: Why Success Always Starts With Failure, The Ambiguities of Experience, and On Leadership.


A Simple Checklist to Improve Decisions

We owe thanks to the publishing industry. Their ability to take a concept and fill an entire category with a shotgun approach is the reason that more people are talking about biases.

Unfortunately, talk alone will not eliminate them, but it is possible to take steps to counteract them. Reducing biases can make a huge difference in the quality of any decision, and it is easier than you think.

In a recent article for Harvard Business Review, Daniel Kahneman (and others) describe a simple way to detect bias and minimize its effects in the most common type of decisions people make: determining whether to accept, reject, or pass on a recommendation.

The Munger two-step process for making decisions is a more complete framework, but Kahneman’s approach is a good way to help reduce biases in our decision-making.

If you’re short on time, here is a simple checklist that will get you started on the path towards improving your decisions:

Preliminary Questions: Ask yourself

1. Check for Self-interested Biases

  • Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?
  • Review the proposal with extra care, especially for overoptimism.

2. Check for the Affect Heuristic

  • Has the team fallen in love with its proposal?
  • Rigorously apply all the quality controls on the checklist.

3. Check for Groupthink

  • Were there dissenting opinions within the team?
  • Were they explored adequately?
  • Solicit dissenting views, discreetly if necessary.

Challenge Questions: Ask the recommenders

4. Check for Saliency Bias

  • Could the diagnosis be overly influenced by an analogy to a memorable success?
  • Ask for more analogies, and rigorously analyze their similarity to the current situation.

5. Check for Confirmation Bias

  • Are credible alternatives included along with the recommendation?
  • Request additional options.

6. Check for Availability Bias

  • If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?
  • Use checklists of the data needed for each kind of decision.

7. Check for Anchoring Bias

  • Do you know where the numbers came from? Can there be…
    • unsubstantiated numbers?
    • extrapolation from history?
    • a motivation to use a certain anchor?
  • Reanchor with figures generated by other models or benchmarks, and request new analysis.

8. Check for Halo Effect

  • Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?
  • Eliminate false inferences, and ask the team to seek additional comparable examples.

9. Check for Sunk-Cost Fallacy, Endowment Effect

  • Are the recommenders overly attached to a history of past decisions?
  • Consider the issue as if you were a new CEO.

Evaluation Questions: Ask about the proposal

10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect

  • Is the base case overly optimistic?
  • Have the team build a case taking an outside view; use war games.

11. Check for Disaster Neglect

  • Is the worst case bad enough?
  • Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.

12. Check for Loss Aversion

  • Is the recommending team overly cautious?
  • Realign incentives to share responsibility for the risk or to remove risk.

If you’re looking to dramatically improve your decision making, here is a great list of books to get started:

Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein

Think Twice: Harnessing the Power of Counterintuition by Michael J. Mauboussin

Think Again: Why Good Leaders Make Bad Decisions and How to Keep It from Happening to You by Sydney Finkelstein, Jo Whitehead, and Andrew Campbell

Predictably Irrational: The Hidden Forces That Shape Our Decisions by Dan Ariely

Thinking, Fast and Slow by Daniel Kahneman

Judgment and Managerial Decision Making by Max Bazerman

Predicting the Improbable

One natural human bias is that we tend to draw strong conclusions from few observations. This bias, the misconception of chance, shows itself in many ways, including the gambler’s fallacy and the hot-hand fallacy. Such biases may induce public opinion and the media to call for dramatic swings in policy or regulation in response to highly improbable events. These biases are made even worse by our natural tendency to “do something.”

***

An event like an earthquake happens, making it more available in our minds.

We think the event is more probable than the evidence supports, so we run out and buy earthquake insurance. Over many years, as the earthquake fades from our minds (making it less available), we believe, paradoxically, that the risk is lower (based on recent evidence), so we cancel our policy. …

Some events are hard to predict. This becomes even more complicated when you consider predicting not only the event but also its timing. The article below points out that experts base their predictions on inferences from observing the past and are just as prone to biases as the rest of us.

Why do people over-infer from recent events?

There are two plausible but apparently contradictory intuitions about how people over-infer from observing recent events.

The gambler’s fallacy claims that people expect rapid reversion to the mean.

For example, upon observing three outcomes of red in roulette, gamblers tend to think that black is now due and tend to bet more on black (Croson and Sundali 2005).

The hot hand fallacy claims that upon observing an unusual streak of events, people tend to predict that the streak will continue. (See Misconceptions of Chance)

The term “hot hand” originates from basketball, where players who have scored several times in a row are believed to be more likely to score on their next attempt.

Recent behavioural theory has proposed a foundation to reconcile the apparent contradiction between the two types of over-inference. The intuition behind the theory can be explained with reference to the example of roulette play.

A person believing in the law of small numbers thinks that small samples should look like the parent distribution, i.e. that the sample should be representative of the parent distribution. Thus, the person believes that out of, say, 6 spins, 3 should be red and 3 should be black (ignoring green). If the outcomes observed in the small sample differ from the 50:50 ratio, immediate reversal is expected. Thus, somebody observing red on the first 2 of 6 spins believes that black is “due” on the 3rd spin to restore the 50:50 ratio.

Now suppose such a person is uncertain about the fairness of the roulette wheel. Upon observing an improbable event (say, 6 reds in 6 spins), the person starts to doubt the fairness of the wheel, because a long streak does not correspond to what he believes a random sequence should look like. He then revises his model of the data-generating process and starts to believe that the streak is more likely to continue. The upshot of the theory is that the same person may at first (when the streak is short) believe in reversion of the trend (the gambler’s fallacy) and later – when the streak is long – in continuation of the trend (the hot hand fallacy).
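To make the streak arithmetic concrete, here is a minimal simulation sketch (my own illustration, not from the article; it assumes the standard European wheel layout of 18 red, 18 black, and 1 green pocket). It shows that a run of 6 reds is rare, yet the chance of red on the next spin is unchanged, which is exactly the independence the gambler’s fallacy denies.

```python
import random

# Standard European wheel: 18 red, 18 black, 1 green pocket.
P_RED = 18 / 37

def spin() -> str:
    """One independent spin of a fair wheel."""
    r = random.random()
    if r < P_RED:
        return "red"
    if r < 2 * P_RED:
        return "black"
    return "green"

def red_after_streak(streak_len: int, spins: int = 1_000_000) -> float:
    """Estimate P(red on the next spin | the previous `streak_len` spins were red)."""
    streak = hits = total = 0
    for _ in range(spins):
        outcome = spin()
        if streak >= streak_len:      # the point where a gambler feels black is "due"
            total += 1
            hits += outcome == "red"
        streak = streak + 1 if outcome == "red" else 0
    return hits / total

if __name__ == "__main__":
    print(f"P(6 reds in a row)        = {P_RED ** 6:.4f}")    # about 0.013, rare
    print(f"P(red | 6 reds just seen) ~ {red_after_streak(6):.3f}")
    print(f"P(red on any single spin) = {P_RED:.3f}")          # ~0.486, the same
```

On a fair wheel the last two printed probabilities agree (up to sampling noise); only if the person abandons the fair-wheel model, as the theory above describes, does the streak start to look informative.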


The Art and Science of High-Stakes Decisions

How can anyone make rational decisions in a world where knowledge is limited, time is pressing, and deep thought is often unattainable?

Some decisions are more difficult than others, and yet we often make them the same way we make easy decisions: on autopilot.

We have difficulty contemplating, and taking protective action against, low-probability, high-stakes threats. It almost seems perverse when you consider that we are least prepared to make the decisions that matter most.

Sure, we can pick between the store brand of peanut butter and the Kraft label, and we can no doubt surf the internet with relative ease, yet life seems to offer few opportunities to prepare for decisions where the consequences of a poor choice are catastrophic. If we pick the wrong type of peanut butter, we are generally not penalized too harshly. If we fail to purchase flood insurance, on the other hand, we can be financially and emotionally wiped out.

Shortly after the planes crashed into the towers in Manhattan, some well-known academics got together to discuss[1] how skilled people are at making choices involving a low and ambiguous probability of a high-stakes loss.

High-stakes decisions involve two distinctive properties: 1) the existence of a possible large loss (financial or emotional), and 2) high costs to reverse the decision once it is made. More importantly, these professors wanted to determine whether prescriptive guidelines for improving the decision-making process could be created to help people make better decisions.

Whether we’re buying something at the grocery store or making a decision to purchase earthquake insurance, we operate in the same way. The presence of potentially catastrophic costs of errors does little to reduce our reliance on heuristics (or rules of thumb). Such heuristics serve us well on a daily basis. For simple decisions, not only are heuristics generally right, but the costs of errors are small, such as being caught without an umbrella or regretting not picking up the Kraft peanut butter after discovering the store brand doesn’t taste the way you remember. However, in high-stakes decisions, heuristics can often be a poor method of forecasting.

In order to make better high-stakes decisions, we need a better understanding of why we generally make poor decisions.

Here are several causes.

Poor understanding of probability.
Several studies show that people either use probability information insufficiently when it is made available to them or ignore it altogether. In one study, 78% of subjects failed to seek out probability information when evaluating several risky managerial decisions.

In the context of high-stakes decisions, the probability of an event causing loss may seem sufficiently low that organizations and individuals consider it not worth worrying about. In doing so, they effectively treat the probability as zero, or close to it.

An excessive focus on short time horizons.
Many high-stakes decisions are not obvious to the decision-maker. In part, this is because people tend to focus on the immediate consequences and not the long-term consequences.

A CEO near retirement has incentives to skimp on insurance to report slightly higher profits before leaving (shareholders are unaware of the increased risk and appreciate the increased profits). Governments tend to under-invest in less visible things like infrastructure because they have short election cycles. The long-term consequences of short-term thinking can be disastrous.

The focus on the short term is one of the most widely documented failings of human decision making. People have difficulty considering the future consequences of current actions over long periods of time. Garrett Hardin, the author of Filters Against Folly, suggests we look at things through three filters (literacy, numeracy, and ecolacy). In ecolacy, the key question is “and then what?” Asking “and then what?” helps us avoid focusing solely on the short term.

Excessive attention to what’s available
Decisions that require difficult trade-offs between attributes, or that entail ambiguity about what a right answer looks like, often lead people to resolve choices by focusing on the information most easily brought to mind. And sometimes the relevant information is difficult to bring to mind.

Constant exposure to low-probability risks that never materialize leaves us less concerned than the threat probably warrants (it makes these events less available) and “proves” that our past decisions to ignore such risks were right.

People refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value. Kunreuther et al. (1993) suggest that underreaction to threats of flooding may arise from “the inability of individuals to conceptualize floods that have never occurred… Men on flood plains appear to be very much prisoners of their experience… Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned.” Paradoxically, we feel more secure even as the “risk” may have increased.
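As a rough illustration of what “actuarially fair” means here (the numbers below are hypothetical, not taken from Kunreuther et al.), the fair premium is simply the expected annual loss: the probability of the flood times the damage it would cause.

```python
# Hypothetical figures for illustration only; not from Kunreuther et al. (1993).
flood_probability_per_year = 0.01   # a 1-in-100-year flood plain
damage_if_flooded = 200_000         # loss in dollars if the flood occurs

# An actuarially fair premium equals the expected annual loss.
fair_premium = flood_probability_per_year * damage_if_flooded   # $2,000 per year

# A heavily subsidized policy might charge well under that...
subsidized_premium = 0.4 * fair_premium                         # $800 per year

print(f"Actuarially fair premium: ${fair_premium:,.0f}/year")
print(f"Subsidized premium:       ${subsidized_premium:,.0f}/year")
# ...and yet, as the passage notes, people who have never experienced a flood
# often decline the coverage anyway.
```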

Distortions under stress
Most high-stakes decisions will be made under perceived (or real) stress. A large number of empirical studies find that stress focuses decision-makers on a selective set of cues when evaluating options and leads to greater reliance on simplifying heuristics. When we’re stressed, we’re less likely to think things through.

Over-reliance on social norms
Most individuals have little experience with high-stakes decisions and are highly uncertain about how to resolve them (procedural uncertainty). In such cases—and combined with stress—the natural course of action is to mimic the behavior of others or follow established social norms. This is based on the psychological desire to fail conventionally.

The tendency to prefer the status-quo
What happens when people are presented with difficult choices and no obvious right answer? We tend to prefer making no decision at all; we choose the norm.

In high-stakes decisions, many options are better than the status-quo, and we must make trade-offs. Yet, when faced with decisions that involve life-and-death trade-offs, people frequently remark, “I’d rather not think about it.”

Failures to learn
Although individuals and organizations are eager to derive intelligence from experience, the inferences stemming from that eagerness are often misguided. The problems lie partly in errors in how people think, but even more so in properties of experience that confound learning from it. Experience may possibly be the best teacher, but it is not a particularly good teacher.

As an illustration, one study found that participants in an earthquake simulation tended to over-invest in mitigation when it was normatively ineffective but under-invest when it was normatively effective. The reason was a misinterpretation of feedback: when mitigation was ineffective, respondents attributed the persistence of damage to the fact that they had not invested enough; by contrast, when it was effective, they attributed the absence of damage to a belief that earthquakes posed limited damage risk.

Gresham’s Law of Decision Making
Over time, bad decisions will tend to drive out good decisions in an organization.

Improving
What can you do to improve your decision-making?

A few things: 1) learn more about judgment and decision making; 2) encourage decision makers to see events through alternative frames, such as gains versus losses and changes in the status-quo; 3) adjust the time frame of decisions—while the probability of an earthquake at your plant may be 1/100 in any given year, the probability over the 25-year life of the plant is roughly 1/5; and 4) read Farnam Street!
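The time-frame adjustment in point 3 is just the compounding of a small annual probability. Here is the arithmetic as a quick sketch (assuming, for illustration, that years are independent):

```python
# Chance of at least one earthquake over the plant's life, assuming a 1/100
# probability in any given year and independence across years.
annual_probability = 1 / 100
years = 25

prob_at_least_one = 1 - (1 - annual_probability) ** years
print(f"P(at least one earthquake in {years} years) = {prob_at_least_one:.1%}")
# Prints about 22%, i.e., roughly 1 chance in 5 rather than 1 in 100.
```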

Footnotes
[1] http://marketing.wharton.upenn.edu/ideas/pdf/Kahn/High%20Stakes%20Decision%20Making.pdf

James March: The Ambiguities of Experience

In his book, The Ambiguities of Experience, James March explores the role of experience in creating intelligence.

Folk wisdom both trumpets the significance of experience and warns of its inadequacies.

On one hand, experience is thought to be the best teacher. On the other hand, experience is described as the teacher of fools, of those unable or unwilling to learn from accumulated knowledge. There is no need to learn everything yourself.

The disagreement between those aphorisms reflects profound questions about the human pursuit of intelligence through learning from experience.

“Since experience in organizations often suffers from weak signals, substantial noise, and small samples, it is quite likely that realized history will deviate considerably from underlying reality.”

— James March

March convincingly argues that although individuals and organizations are eager to derive intelligence from experience, the inferences stemming from that eagerness are often misguided.

The problems lie partly in errors in how people think, but even more so in properties of experience that confound learning from it. ‘Experience,’ March concludes, ‘may possibly be the best teacher, but it is not a particularly good teacher.’

Here are some of my notes from the book:

  • Intelligence normally entails two interrelated but somewhat different components. The first involves effective adaptation to an environment. The second: the elegance of interpretations of the experiences of life.
  • Since experience in organizations often suffers from weak signals, substantial noise, and small samples, it is quite likely that realized history will deviate considerably from the underlying reality.
  • Agencies write standards because experience is a poor teacher.
  • Constant exposure to danger without its realization leaves human beings less concerned about what once terrified them, and therefore experience can have the paradoxical effect of having people learn to feel more immune than they should to the unlikely dangers that surround them.
  • Generating an explanation of history involves transforming the ambiguities and complexities of experience into a form that is elaborate enough to elicit interest, simple enough to be understood, and credible enough to be accepted. The art of storytelling involves a delicate balancing of those three criteria.
  • Humans have limited capabilities to store and recall history. They are sensitive to reconstructed memories that serve current beliefs and desires. They conserve belief by being less critical of evidence that seems to confirm prior beliefs than of evidence that seems to disconfirm them. They destroy both observations and beliefs in order to make them consistent. They prefer simple causalities, ideas that place causes and effects close to one another and that match big effects with big causes. 
  • The key effort is to link experience with a pre-existent, accepted storyline so as to achieve a subjective sense of understanding.
  • Experience is rooted in a complicated causal system that can be described adequately only by a description that is too complex for the human mind. The more accurately reality is reflected, the less comprehensible the story, and the more comprehensible the story, the less realistic it is.
  • Storytellers have their individual sources and biases, but they have to gain acceptance of their stories by catering to their audiences.
  • Despite the complications in extracting reality from experience, or perhaps because of them, there is a tendency for the themes of stories of management to converge over time.
  • Organizational stories and models are built particularly around four main mythic themes: rationality (the idea that the human spirit finds definitive expression through taking and justifying action in terms of its future consequences for prior values); hierarchy (the idea that problems and actions can be decomposed into nested sets of subproblems and sub-actions such that interactions among them can be organized within a hierarchy); individual leader significance (the idea that any story of history must be related to a human project in order to be meaningful and that organizational human history is produced by the intentions of specific human leaders); and historical efficiency (the idea that history follows a path leading to a unique equilibrium defined by antecedent conditions and produced by competition).
  • Underlying many of these myths is a grand myth of human significance: the idea that humans can, through their individual and collective intelligent actions, influence the course of history to their advantage.
  • The myth of human significance produces the cruelties and generosities stemming from the human inclination to assign credit and blame for events to human intention.
  • There is an overwhelming tendency in American life to lionize or pillory the people who stand at the helms of our large institutions – to offer praise or level blame for outcomes over which they may have little control.
  • An experienced scholar is less inclined to claim originality than is a beginner.
  • …processes of adaptation can eliminate sources of error but are inefficient in doing so.
  • Knowledge is lost through turnover, forgetting, and misfiling, which assure that at any point there is considerable ignorance. Something that was once known is no longer known. In addition, knowledge is lost through its incomplete accessibility.
  • A history of success leads managers to a systematic overestimation of the prospects for success in novel endeavors. If managers attribute their successes to talent when they are, in fact, a joint consequence of talent and good fortune, successful managers will come to believe that they have capabilities for beating the odds in the future as they apparently have had in the past.
  • In a competitive world of promises, winning projects are systematically projects in which hopes exceed reality.
  • The history of organizations cycling between centralization and decentralization is a tribute, in part, to the engineering difficulty of finding an enduring balance between the short-run and local costs and the long-run and more global benefits of boundaries.
  • The vividness of direct experience leads learners to exaggerate the information content of personal experience relative to other information.
  • The ambiguities of experience take many forms but can be summarized in terms of five attributes: 1) the causal structure of experience is complex; 2) experience is noisy; 3) history includes numerous examples of endogeneity, causes in which the properties of the world are affected by actions adapting to it; 4) history as it is known is constructed by participants and observers; 5) history is miserly in providing experience. It offers only small samples and thus large sampling error in the inferences formed.
  • Experience often appears to increase significantly the confidence of successful managers in their capabilities without greatly expanding their understanding.

Still interested? Want to know more? Buy the book. Read Does Experience Make You an Expert? next.