We often think we can rely on common sense. But in a complex world, common sense is not always sufficient.
Duncan Watts's new book Everything is Obvious: Once You Know The Answer explores the limits of common sense. Watts writes that common sense is well adapted to handling the complexity of everyday situations. We get into trouble when we project it onto situations outside that realm.
Applying common sense in these areas, Watts argues, “turns out to suffer from a number of errors that systematically mislead us. Yet because of the way we learn from experience—even experiences that are never repeated or that take place in other times and places—the failings of common sense reasoning are rarely apparent to us.”
We think we have the answers but we don’t. Most real-world problems are more complex than we think. “When policy makers sit down, say, to design some scheme to alleviate poverty, they invariably rely on their own common-sense ideas about why it is that poor people are poor, and therefore how best to help them.” This is where we get into trouble. “A quick look at history,” Watts argues, “suggests that when common sense is used for purposes beyond the everyday, it can fail spectacularly.”
According to Watts, common sense reasoning suffers from three types of errors, which reinforce one another. First, our mental model of individual behaviour is systematically flawed. Second, our mental model of complex systems (collective behaviour) is equally flawed. Lastly, and most interesting in my view, is that “we learn less from history than we think we do, and that this misperception in turn skews our perception of the future.”
Whenever something interesting happens—a book by an unknown author rocketing to the top of the best-seller list, an unknown search engine increasing in value more than 100,000 times in less than 10 years, the housing bubble collapsing—we instinctively want to know why. We look for an explanation. “In this way,” Watts says, “we deceive ourselves into believing that we can make predictions that are impossible.”
“By providing ready explanations for whatever particular circumstances the world throws at us, common sense explanations give us the confidence to navigate from day to day and relieve us of the burden of worrying about whether what we think we know is really true, or is just something we happen to believe.”
Once we know the outcome, our brains weave a clever story based on the aspects of the situation that seem relevant (at least, relevant in hindsight). We convince ourselves that we fully understand things that we don’t.
Is Netflix successful, as Reed Hastings argues, because of its culture? Which aspects of that culture make it successful? Are there companies with a similar culture that failed? “The paradox of common sense, then, is that even as it helps us make sense of the world, it can actively undermine our ability to understand it.”
The key to improving your ability to make decisions, then, is to figure out what kinds of predictions we can make and how we can improve their accuracy.
One problem with making predictions is knowing which variables to look at and how to weigh them. Even if we get the variables and the relative importance of one factor to another correct, these predictions are only as good as the assumption that the future will resemble the past. As Warren Buffett says, “the rearview mirror is always clearer than the windshield.”
Relying on historical data is also problematic because big strategic decisions are rare. “If you could make millions, or even hundreds, of such bets,” Watts argues, “it would make sense to go with the historical probability. But when facing a decision about whether or not to lead the country into war, or to make some strategic acquisition, you cannot count on getting more than one attempt. … making one-off strategic decisions is therefore ill suited to statistical models or crowd wisdom.”
Watts finds it ironic that the organizations that follow best practices in strategic planning can also be the most vulnerable to planning errors. This is the strategy paradox.
Michael Raynor, author of The Strategy Paradox, argues that the main cause of strategic failure is not bad strategy but great strategy that happens to be wrong. Bad strategy is characterized by lack of vision, muddled leadership, and inept execution, which is more likely to lead to mediocrity than to colossal failure. Great strategy, on the other hand, is marked by clarity of vision, bold leadership, and laser-focused execution. Great strategy can lead to great successes, as it did with the iPod, but it can also lead to enormous failures, as it did with Betamax. “Whether great strategy succeeds or fails therefore depends entirely on whether the initial vision happens to be right or not. And that is not just difficult to know in advance, but impossible.” Raynor argues that the solution is to develop planning methods that account for strategic uncertainty. (I’ll eventually get around to reviewing The Strategy Paradox; it was a great read.)
Rather than trying to predict an impossible future, another idea is to react to changing circumstances as rapidly as possible, dropping alternatives that are not working no matter how promising they seem and diverting resources to those that are succeeding. This sounds an awful lot like evolution (variation and selection).
Watts and Raynor’s solution to our inability to predict the future echoes Peter Palchinsky’s principles. The Palchinsky Principles, as Tim Harford summarizes them in Adapt (review), are “first, seek out new ideas and try new things; second, when trying something new do it on a scale where failure is survivable; third, seek out feedback and learn from your mistakes as you go along.”
Of course, this experimental approach has limits. The US can’t fight a war in one half of Iraq with one strategy and in the other half with a different one to see which works best. Watts writes, “for decisions like these, it’s unlikely that an experimental approach will be of much help.”
In the end, Watts concludes that planners need to learn to behave more “like what the development economist William Easterly calls searchers.” As Easterly put it:
A Planner thinks he already knows the answer; he thinks of poverty as a technical engineering problem that his answers will solve. A Searcher admits he doesn’t know the answers in advance; he believes that poverty is a complicated tangle of political, social, historical, institutional, and technological factors…and hopes to find answers to individual problems by trial and error…A Planner believes outsiders know enough to impose solutions. A Searcher believes only insiders have enough knowledge to find solutions, and that most solutions must be homegrown.
Still curious? Read Everything is Obvious: Once You Know The Answer.