We build on Nobel winner Daniel Kahneman’s favorite approach for making better decisions. This may sound weird, but it’s a form of imaginary time travel.
It’s called the premortem. And, while it may be Kahneman’s favorite, he didn’t come up with it. A fellow by the name of Gary Klein invented the premortem technique.
A premortem works something like this. When you’re on the verge of making a decision, not just any decision but a big decision, you call a meeting. At the meeting, you ask each member of your team to imagine that it’s a year later.
Split them into two groups. Have one group imagine that the effort was an unmitigated disaster. Have the other pretend it was a roaring success. Ask each member to work independently and generate reasons, or better yet, write a story, about why the success or failure occurred. Instruct them to be as detailed as possible, and, as Klein emphasizes, to identify causes that they wouldn’t usually mention “for fear of being impolite.” Next, have each person in the “failure” group read their list or story aloud, and record and collate the reasons. Repeat this process with the “success” group. Finally, use the reasons from both groups to strengthen your … plan. If you uncover overwhelming and impassable roadblocks, then go back to the drawing board.
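As a rough illustration of the record-and-collate step, here is a minimal sketch in Python. The group structure, function name, and all of the example reasons are hypothetical, invented purely to show the mechanics of tallying independently generated reasons:

```python
# Hypothetical sketch of the premortem collation step: each participant
# independently lists reasons, then the facilitator records and tallies them.
from collections import Counter

def collate_premortem(failure_reasons, success_reasons):
    """Count how often each reason is raised across participants in each group."""
    failure_tally = Counter(r for person in failure_reasons for r in person)
    success_tally = Counter(r for person in success_reasons for r in person)
    return failure_tally, success_tally

# Example: imagined participants per group (all reasons are made up).
failures = [["scope creep", "no executive sponsor"],
            ["scope creep", "vendor lock-in"],
            ["no executive sponsor"]]
successes = [["clear milestones"],
             ["clear milestones", "early user testing"]]

f, s = collate_premortem(failures, successes)
print(f.most_common())  # failure causes, most frequently cited first
print(s.most_common())  # success causes, most frequently cited first
```

The point of tallying is simply to surface which obstacles (or success factors) multiple people raised independently, so the team can prioritize them when strengthening the plan.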
Premortems encourage people to use “prospective hindsight,” or, more accurately, to talk in “future perfect tense.” Instead of thinking, “we will devote the next six months to implementing a new HR software initiative,” for example, we travel to the future and think, “we have devoted six months to implementing a new HR software package.”
You imagine that a concrete success or failure has occurred and look “back from the future” to tell a story about the causes.
Pretending that a success or failure has already occurred—and looking back and inventing the details of why it happened—seems almost absurdly simple. Yet renowned scholars including Kahneman, Klein, and Karl Weick supply compelling logic and evidence that this approach generates better decisions, predictions, and plans. Their work suggests several reasons why. …
1. This approach helps people overcome blind spots
As … upcoming events become more distant, people develop more grandiose and vague plans and overlook the nitty-gritty daily details required to achieve their long-term goals.
2. This approach helps people bridge short-term and long-term thinking
Weick argues that this shift is effective, in part, because it is far easier to imagine the detailed causes of a single outcome than to imagine multiple outcomes and try to explain why each may have occurred. Beyond that, analyzing a single event as if it has already occurred rather than pretending it might occur makes it seem more concrete and likely to actually happen, which motivates people to devote more attention to explaining it.
3. Looking back dampens excessive optimism
As Kahneman and other researchers show, most people overestimate the chances that good things will happen to them and underestimate the odds that they will face failures, delays, and setbacks. Kahneman adds that “in general, organizations really don’t like pessimists” and that when naysayers raise risks and drawbacks, they are viewed as “almost disloyal.”
Max Bazerman, a Harvard professor, believes that we’re less prone to irrational optimism when we predict the fate of projects that are not our own. For example, when it comes to friends’ home renovation projects, most people estimate the costs will run 25 to 50 percent over budget. When it comes to our own projects, however, we assume they will be “completed on time and near the project costs.”
4. A premortem challenges the illusion of consensus
Most of the time, not everyone on a team agrees with the chosen course of action. Even when there is plenty of cognitive diversity in the room, people keep their mouths shut, because those in power tend to reward people who agree with them and punish those who dare to voice a dissenting view.
The resulting corrosive conformity is evident when people don’t raise private doubts, known risks, and inconvenient facts. In contrast, as Klein explains, a premortem can create a competition where members feel accountable for raising obstacles that others haven’t. “The whole dynamic changes from trying to avoid anything that might disrupt harmony to trying to surface potential problems.”
To improve performance, we need to do two things. The down arrow represents what we have to reduce: errors. The up arrow represents what we have to increase: insights. Performance improvement depends on doing both.
We tend to look for ways to eliminate errors. That’s the down arrow. But even if we eliminate all errors, we haven’t created any insights. …
Ideally, reducing mistakes would at least help us gain insights, but I don’t believe that’s how it works. I suspect the relation between the arrows runs the other way: when we put too much energy into eliminating mistakes, we’re less likely to gain insights. Having insights is a different matter from preventing mistakes.
When I showed this slide in my seminars, I got a lot of head nods. The participants agreed that their organizations were all about the down arrow. They felt frustrated by organizations that stifled their attempts to do a good job. Their organizations hammered home the message of reducing mistakes, perhaps because it is easier for managers to cut down on mistakes than to encourage insights. Mistakes are visible, costly, and embarrassing.
Klein’s book is really about three things: (1) what sparks an insight; (2) what prevents us from grasping insights; and (3) how we can improve the flow of insights.
Wallas’s model “is still the most common explanation of how insight works. If you do any exploration into the field of insight, you can’t go far without bumping into Wallas, who is the epitome of a British freethinking intellectual,” writes Gary Klein in Seeing What Others Don’t: The Remarkable Ways We Gain Insights.
The Creativity Question, published in 1976, preserves Wallas’s “Stages of Control” and presents his model of insight: (1) preparation; (2) incubation; (3) illumination; and (4) verification.
During the preparation stage we investigate a problem, applying ourselves to an analysis that is hard, conscious, systematic, but fruitless.
Then we shift to the incubation stage, in which we stop consciously thinking about the problem and let our unconscious mind take over. Wallas quoted the German physicist Hermann von Helmholtz, who in 1891, near the end of his career, offered some reflections on how this incubation stage feels. Helmholtz explained that after he had worked hard on a problem, “happy ideas come unexpectedly without effort, like an inspiration. So far as I am concerned, they have never come to me when my mind was fatigued, or when I was at my working table. They came particularly readily during the slow ascent of wooded hills on a sunny day.”
Wallas advised his readers to take this incubation stage seriously. We should seek out mental relaxation and stop thinking about the problem. We should avoid anything that might interfere with the free working of the unconscious mind, such as reading serious materials.
Next comes the illumination stage, when insight bursts forth with conciseness, suddenness, and immediate certainty. Wallas believed that the insight, the “happy idea,” was the culmination of a train of unconscious associations. These associations had to mature outside of conscious scrutiny until they were ready to surface. Wallas claimed that people could sometimes sense that an insight was brewing in their minds. The insight starts to make its appearance in fringe consciousness, giving people an intimation that the flash of illumination is nearby. At this point the insight might drift away and not evolve into consciousness. Or it might get interrupted by an intrusion that causes it to miscarry. That’s why if people feel this intimation arising while reading, they often stop and gaze out into space, waiting for the insight to appear. Wallas warned of the danger of trying to put the insight into words too quickly, before it was fully formed.
Finally, during the verification stage we test whether the idea is valid. If the insight is about a topic such as mathematics, we may need to consciously work out the details during this final stage.
Wallas noted that none of these stages exist in isolation.
In the daily stream of thought these four different stages constantly overlap each other as we explore different problems. An economist reading a Blue Book, a physiologist watching an experiment, or a business man going through his morning’s letters, may at the same time be “incubating” on a problem which he proposed to himself a few days ago, be accumulating knowledge in “preparation” for a second problem, and be “verifying” his conclusions on a third problem. Even in exploring the same problem, the mind may be unconsciously incubating on one aspect of it, while it is consciously employed in preparing for or verifying another aspect. And it must always be remembered that much very important thinking, done for instance by a poet exploring his own memories, or by a man trying to see clearly his emotional relation to his country or his party, resembles musical composition in that the stages leading to success are not very easily fitted into a “problem and solution” scheme. Yet, even when success in thought means the creation of something felt to be beautiful and true rather than the solution of a prescribed problem, the four stages of Preparation, Incubation, Illumination, and the Verification of the final result can generally be distinguished from each other. (The Creativity Question)
If you talk to anyone about insight today, most people are familiar with the model Wallas proposed.
“It’s a very satisfying explanation that has a ring of plausibility,” writes Klein, “until we examine it more closely.”
Klein points to many counterexamples where people had insights that came unexpectedly, without a preparation stage. A lot of people aren’t wrestling with a problem when they come up with an accidental insight.
According to Wallas, when we’re stuck and need to find an insight that will get us past an impasse, we should start with deliberate preparation. … One flaw in Wallas’s method is that his sample of cases was skewed. He only studied success stories. He didn’t consider all the cases in which people prepared very hard but got nowhere. (Seeing What Others Don’t: The Remarkable Ways We Gain Insights)
Specific preparation doesn’t always lead to insights. And the people who gain insights may or may not follow the Wallas model, so perhaps it is incomplete.
We need another theory.
Sometimes shifts in thinking are not about making minor adjustments or adding details. Sometimes we fundamentally shift core beliefs. This allows us to see the problem in a new way and may lead to insight. We shift from a poor story to a better one. These are discontinuous discoveries.
Insights shift us toward a new story, a new set of beliefs that are more accurate, more comprehensive, and more useful. Our insights transform us in several ways. They change how we understand, act, see, feel, and desire. They change how we understand. They transform our thinking; our new story gives us a different viewpoint. They change how we act. … Insights transform how we see; we look for different things in keeping with our new story. (Seeing What Others Don’t)
In Wolf Hall, Hilary Mantel makes an important observation, “Insight cannot be taken back. You cannot return to the moment you were in before.” After insight, everything is different.
Insights are unique in some other ways:
When they do appear, they are coherent and unambiguous. They don’t come as part of a set of possible answers. When we have the insight, we think, “Oh yes, that’s it.” We feel a sense of closure. This sense of closure produces a feeling of confidence in the insight. (Seeing What Others Don’t)
The Difference Between Insight and Intuition
Intuition is the use of patterns we’ve already learned, whereas insight is the discovery of new patterns. (Seeing What Others Don’t)
The Role of Stories
Stories are a way we frame and organize the details of a situation. There are other types of frames besides stories, such as maps and even organizational wiring diagrams that show where people stand in a hierarchy. … These kinds of stories organize all kinds of details about a situation and depend on a few core beliefs we can call “anchors,” because they are fairly stable and anchor the way we interpret the other details. (Seeing What Others Don’t)
And anchors can change. They change when we get new information. They change when we shift our beliefs.
What causes us to change our story? Klein proposes five strategies: connections, coincidences, curiosities, contradictions, and creative desperation.
In all of these cases we change our story.
When faced with creative desperation, we try to find a weak belief that is trapping us. We want to jettison this belief so that we can escape from fixation and from impasse. In contrast, when using a contradiction strategy, we center on the weak belief. We take it seriously instead of explaining it away or trying to jettison it. We use it to rebuild our story. (Seeing What Others Don’t)
When we’re desperate, we’re more likely to attack a weak anchor and give something a try.
In times of desperation, we actively search for an assumption we can reverse. We don’t seek to imagine the implications if the assumption was valid. Rather, we try to improve the situation by eliminating the assumption. (Seeing What Others Don’t)
But changing our story is not the only way to gain insight. We can also add new anchors.
The Triple Path
In the end, Klein came up with the Triple Path Model of insight, which tries to capture the similarities between the strategies.
The connection path is different from the desperation path or the contradiction path. We’re not attacking or building on weak anchors. When we make connections or notice coincidences or curiosities, we add a new anchor to our beliefs and then work out the implications. Usually the new anchor comes from a new piece of information we receive.
I’ve combined the connections, coincidences, and curiosities in the Triple Path Model. They have the same dynamic: to build on a new potential anchor. They have the same trigger: our thinking is stimulated when we notice the new anchor. Coincidences and curiosities aren’t insights in themselves; they start us on the path to identifying a new anchor that we connect to the other beliefs we hold. Connections, coincidences, and curiosities have the same activity: to combine the new anchor with others. This path to insight doesn’t force us to abandon other anchors. It lets us build a new story that shifts our understanding. This path has a different motivation, a different trigger, and a different activity from the contradiction and the creative desperation paths. Nevertheless, like the other two paths, the outcome is the same: an unexpected shift in the story.
Each of the three paths, the contradiction path, the connection path, and the creative desperation path, gets sparked in a different way. And each operates in a different fashion: to embrace an anomaly that seems like a weak anchor in a frame, to overturn that weak anchor, or to add a new anchor. Future work on insight is likely to uncover other paths to insight besides the three shown in the diagram. (Seeing What Others Don’t)
Other people weren’t wrong; they were just following one path.
The Triple Path Model shows why earlier accounts aren’t wrong as much as they are incomplete. They restrict themselves to a single path. Researchers and theorists such as Wallas who describe insight as escaping from fixation and impasses are referring to the creative desperation path. Researchers who emphasize seeing associations and combinations of ideas are referring to the connection path. Researchers who describe insight as reformulating the problem or restructuring how people think have gravitated to the contradiction path. None of them are wrong. The Triple Path Model of insight illustrates why people seem to be talking past each other. It’s because they’re on different paths. (Seeing What Others Don’t)
Gary Klein’s book Streetlights and Shadows looks at commonly held maxims for decision-making and overturns them, revealing cases where these practices break down.
The book is an impressive look at why these problems occur and how we can become more resilient decision makers.
Here are ten beliefs that we hold about systems and decision making, along with a proposed replacement for each.
1. Teaching people procedures helps them perform tasks more skillfully.
Replacement: In complex situations people will need judgment skills to follow procedures effectively and go beyond them when necessary.
2. Decision biases distort our thinking.
Replacement: Decision biases reflect our thinking. Rather than discouraging people from using heuristics, we should help them build expertise so they can use their heuristics more effectively.
2(a). Successful decision makers rely on logic and statistics instead of intuition.
Replacement: We need to blend systematic analysis and intuition.
3. To make a decision, generate several options and compare them to pick the best one.
Replacement: Good decision makers use their experience to recognize effective options and evaluate them through mental simulation.
4. We can reduce uncertainty by gathering more information. Too much information can get in our way.
Replacement: In complex environments, what we need isn’t the right information but the right way to understand the information we have.
5. It’s bad to jump to conclusions – wait to see the evidence.
Replacement: Speculate, but test your speculations instead of committing to them.
6. To get people to learn, give them feedback on the consequences of their actions.
Replacement: We can’t just give feedback; we have to find ways to make it understandable.
7. To make sense of a situation, we draw inferences from the data.
Replacement: We make sense of data by fitting them into stories and other frames, but the reverse also happens: our frames determine what counts as data.
8. The starting point for any project is a clear description of the goal.
Replacement: When facing wicked problems, we should redefine goals as we try to reach them.
9. Our plans will succeed more often if we identify the biggest risks and then find ways to eliminate them.
Replacement: We should cope with risk in complex situations by relying on resilience engineering rather than attempting to identify and prevent risks.
10. Leaders can create common ground by assigning roles and setting ground rules in advance.
Replacement: All team members are responsible for continually monitoring common ground for breakdowns and repairing the breakdown when necessary.