The Wisdom of Crowds and The Expert Squeeze

As networks harness the wisdom of crowds, the ability of experts to add value in their predictions is steadily declining. This is the expert squeeze.

In Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin, the first guest on my podcast, The Knowledge Project, explains the expert squeeze and its implications for how we make decisions.

As networks harness the wisdom of crowds and computing power grows, the ability of experts to add value in their predictions is steadily declining. I call this the expert squeeze, and evidence for it is mounting. Despite this trend, we still pine for experts—individuals with special skill or know-how—believing that many forms of knowledge are technical and specialized. We openly defer to people in white lab coats or pinstripe suits, believing they hold the answers, and we harbor misgivings about computer-generated outcomes or the collective opinion of a bunch of tyros.

The expert squeeze means that people stuck in old habits of thinking are failing to use new means to gain insight into the problems they face. Knowing when to look beyond experts requires a totally fresh point of view, and one that does not come naturally. To be sure, the future for experts is not all bleak. Experts retain an advantage in some crucial areas. The challenge is to know when and how to use them.

The Value of Experts

So how can we manage this in our role as the decision maker? The first step is to classify the problem.

The figure above, The Value of Experts, helps to guide this process. The second column from the left covers problems that have rules-based solutions with limited possible outcomes. Here, someone can investigate the problem based on past patterns and write down rules to guide decisions. Experts do well with these tasks, but once the principles are clear and well defined, computers are cheaper and more reliable. Think of tasks such as credit scoring or simple forms of medical diagnosis. Experts agree about how to approach these problems because the solutions are transparent and for the most part tried and true.
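To see why computers win once the rules are written down, here is a minimal sketch of a rules-based scorer. All of the fields and thresholds are invented for illustration, not drawn from any real credit model:

```python
# A minimal sketch of a rules-based scorer, in the spirit of the credit
# scoring example. Every field and threshold is invented for illustration;
# real models are larger, but they have this same shape: transparent rules
# that a computer applies more cheaply and consistently than an expert.

def credit_score(applicant: dict) -> int:
    """Return a toy score in [0, 100] from simple, explicit rules."""
    score = 50
    if applicant["on_time_payment_rate"] >= 0.95:
        score += 20
    if applicant["debt_to_income"] < 0.35:
        score += 15
    if applicant["years_of_history"] >= 5:
        score += 10
    if applicant["recent_defaults"] > 0:
        score -= 30
    return max(0, min(100, score))

applicant = {
    "on_time_payment_rate": 0.97,
    "debt_to_income": 0.28,
    "years_of_history": 7,
    "recent_defaults": 0,
}
print(credit_score(applicant))  # 95; approve against a fixed cutoff such as 70
```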

[…]

Now let’s go to the opposite extreme, the column on the far right that deals with probabilistic fields with a wide range of outcomes. Here there are no simple rules. You can only express possible outcomes in probabilities, and the range of outcomes is wide. Examples include economic and political forecasts. The evidence shows that collectives outperform experts in solving these problems.
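The statistical mechanism behind that claim is worth a quick simulation: when estimates are independent and roughly unbiased, individual errors partially cancel in the average, so the crowd's collective guess lands closer to the truth than the typical individual's. The numbers below are arbitrary:

```python
# Wisdom-of-crowds sketch: 1,000 noisy, independent estimates of an unknown
# quantity. The crowd's average beats the average individual because
# independent errors partially cancel. All numbers are arbitrary.
import random

random.seed(42)
truth = 100.0
estimates = [random.gauss(truth, 20) for _ in range(1000)]

crowd_error = abs(sum(estimates) / len(estimates) - truth)
avg_individual_error = sum(abs(e - truth) for e in estimates) / len(estimates)

print(f"average individual error: {avg_individual_error:.1f}")  # around 16
print(f"error of the crowd's average: {crowd_error:.1f}")       # well under 1
```

The catch, and the reason diversity matters later in this piece, is the independence assumption: when everyone's errors point the same way, averaging cancels nothing.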

[…]

The middle two columns are the remaining province for experts. Experts do well with rules-based problems with a wide range of outcomes because they are better than computers at eliminating bad choices and making creative connections between bits of information.

Once you’ve classified the problem, you can turn to the best method for solving it.

… computers and collectives remain underutilized guides for decision making across a host of realms including medicine, business, and sports. That said, experts remain vital in three capacities. First, experts must create the very systems that replace them. … Of course, the experts must stay on top of these systems, improving the market or equation as need be.

Next, we need experts for strategy. I mean strategy broadly, including not only day-to-day tactics but also the ability to troubleshoot by recognizing interconnections as well as the creative process of innovation, which involves combining ideas in novel ways. Decisions about how best to challenge a competitor, which rules to enforce, or how to recombine existing building blocks to create novel products or experiences are jobs for experts.

Finally, we need people to deal with people. A lot of decision making involves psychology as much as it does statistics. A leader must understand others, make good decisions, and encourage others to buy in to the decision.

So what practical steps can you take to make the expert squeeze work for you instead of against you? Mauboussin offers three tips.

1. Match the problem you face with the most appropriate solution.

What we know is that experts do a poor job in many settings, suggesting that you should try to supplement expert views with other approaches.

2. Seek diversity.

(Philip) Tetlock’s work shows that while expert predictions are poor overall, some are better than others. What distinguishes predictive ability is not who the experts are or what they believe, but rather how they think. Borrowing from Archilochus— through Isaiah Berlin— Tetlock sorted experts into hedgehogs and foxes. Hedgehogs know one big thing and try to explain everything through that lens. Foxes tend to know a little about a lot of things and are not married to a single explanation for complex problems. Tetlock finds that foxes are better predictors than hedgehogs. Foxes arrive at their decisions by stitching “together diverse sources of information,” lending credence to the importance of diversity. Naturally, hedgehogs are periodically right— and often spectacularly so— but do not predict as well as foxes over time. For many important decisions, diversity is the key at both the individual and collective levels.

3. Use technology to sidestep the squeeze when possible.

Flooded with candidates and aware of the futility of most interviews, Google decided to create algorithms to identify attractive potential employees. First, the company asked seasoned employees to fill out a three-hundred-question survey, capturing details about their tenure, their behavior, and their personality. The company then compared the survey results to measures of employee performance, seeking connections. Among other findings, Google executives recognized that academic accomplishments did not always correlate with on-the-job performance. This novel approach enabled Google to sidestep problems with ineffective interviews and to start addressing the discrepancy.
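The excerpt doesn't describe Google's statistics in any detail, but at bottom, comparing survey results to performance measures and "seeking connections" means measuring the association between survey items and ratings. A sketch of that idea with fabricated data and a hypothetical GPA item:

```python
# Sketch of "compare survey answers to performance, seeking connections":
# correlate one survey item with an on-the-job rating. The data is
# fabricated and the GPA item hypothetical; requires Python 3.10+.
from statistics import correlation

gpa         = [3.9, 3.2, 3.7, 2.9, 3.5, 3.1, 3.8, 3.0]  # academic accomplishment
performance = [3.1, 3.4, 2.8, 3.6, 3.0, 3.5, 2.9, 3.3]  # manager ratings

print(f"r = {correlation(gpa, performance):+.2f}")
# A weak or negative r for an academic item, as in this toy data, is the
# kind of connection-hunting result that led Google to conclude academic
# accomplishment did not always predict on-the-job performance.
```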

Learning the difference between when experts help or hurt can go a long way toward avoiding stupidity. This starts with identifying the type of problem you’re facing and then considering the various approaches to solve the problem with pros and cons.

Still curious? Follow up by reading Generalists vs. Specialists, Think Twice: Harnessing the Power of Counterintuition, and reviewing the work of Philip Tetlock.

Daniel Kahneman’s Favorite Approach For Making Better Decisions

Bob Sutton’s book, Scaling Up Excellence: Getting to More Without Settling for Less, contains an interesting section towards the end on looking back from the future, which talks about “a mind trick that goads and guides people to act on what they know and, in turn, amplifies their odds of success.”

We build on Nobel winner Daniel Kahneman’s favorite approach for making better decisions. This may sound weird, but it’s a form of imaginary time travel.

It’s called the premortem. And, while it may be Kahneman’s favorite, he didn’t come up with it. A fellow by the name of Gary Klein invented the premortem technique.

A premortem works something like this. When you’re on the verge of making a decision, not just any decision but a big decision, you call a meeting. At the meeting you ask each member of your team to imagine that it’s a year later.

Split them into two groups. Have one group imagine that the effort was an unmitigated disaster. Have the other pretend it was a roaring success. Ask each member to work independently and generate reasons, or better yet, write a story, about why the success or failure occurred. Instruct them to be as detailed as possible, and, as Klein emphasizes, to identify causes that they wouldn’t usually mention “for fear of being impolite.” Next, have each person in the “failure” group read their list or story aloud, and record and collate the reasons. Repeat this process with the “success” group. Finally, use the reasons from both groups to strengthen your … plan. If you uncover overwhelming and impassable roadblocks, then go back to the drawing board.

Premortems encourage people to use “prospective hindsight,” or, more accurately, to talk in “future perfect tense.” Instead of thinking, “we will devote the next six months to implementing a new HR software initiative,” for example, we travel to the future and think “we have devoted six months to implementing a new HR software package.”

You imagine that a concrete success or failure has occurred and look “back from the future” to tell a story about the causes.

Pretending that a success or failure has already occurred—and looking back and inventing the details of why it happened—seems almost absurdly simple. Yet renowned scholars including Kahneman, Klein, and Karl Weick supply compelling logic and evidence that this approach generates better decisions, predictions, and plans. Their work suggests several reasons why. …

1. This approach helps people overcome blind spots

As … upcoming events become more distant, people develop more grandiose and vague plans and overlook the nitty-gritty daily details required to achieve their long-term goals.

2. This approach helps people bridge short-term and long-term thinking

Weick argues that this shift is effective, in part, because it is far easier to imagine the detailed causes of a single outcome than to imagine multiple outcomes and try to explain why each may have occurred. Beyond that, analyzing a single event as if it has already occurred rather than pretending it might occur makes it seem more concrete and likely to actually happen, which motivates people to devote more attention to explaining it.

3. Looking back dampens excessive optimism

As Kahneman and other researchers show, most people overestimate the chances that good things will happen to them and underestimate the odds that they will face failures, delays, and setbacks. Kahneman adds that “in general, organizations really don’t like pessimists” and that when naysayers raise risks and drawbacks, they are viewed as “almost disloyal.”

Max Bazerman, a Harvard professor, believes that we’re less prone to irrational optimism when we predict the fate of projects that are not our own. For example, when it comes to friends’ home renovation projects, most people estimate that costs will run 25 to 50 percent over budget. When it comes to our own projects, however, we assume they will be “completed on time and near the project costs.”

4. A premortem challenges the illusion of consensus

Most of the time, not everyone on a team agrees with the course of action. Even when there is enough cognitive diversity in the room, people keep their mouths shut, because those in power tend to reward people who agree with them while punishing those who have the courage to dissent.

The resulting corrosive conformity is evident when people don’t raise private doubts, known risks, and inconvenient facts. In contrast, as Klein explains, a premortem can create a competition where members feel accountable for raising obstacles that others haven’t. “The whole dynamic changes from trying to avoid anything that might disrupt harmony to trying to surface potential problems.”

Nate Silver: Confidence Kills Predictions

Best known for accurate election predictions, statistician Nate Silver is also the author of The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t. Heather Bell, Managing Editor of the Journal of Indexes, recently spoke with Silver.

IU: What do you see as the common theme among bad predictions? What most often leads people astray?
Silver: A lot of it is overconfidence. People tend to underestimate the uncertainty that is intrinsic to a problem. If you have someone estimate a confidence interval that’s supposed to cover 90 percent of all outcomes, it usually only covers 50 percent. You have upside outcomes and downside outcomes in the market certainly more often than people realize.

There are a variety of reasons for this. Part of it is that we can sometimes get stuck in the recent past and examples that are most familiar to us, kind of what Daniel Kahneman called “the availability heuristic,” where we assume that the current trend will always perpetuate itself, when actually it can be an anomaly or a fluke, or where we always think that the period we’re living through is the “signal,” so to speak. That’s often not true—sometimes you’re living in the outlier period, like when you have a housing bubble period that you haven’t historically had before.

Overconfidence is the core linkage between most of the failures of predictions that we’ve looked at. Obviously, you can look at that in a more technical sense and see where sometimes people are fitting models where they don’t have as much data as they think, but the root of it comes down to a failure to understand that it’s tough to be objective and that we often come at a problem with different biases and perverse incentives—and if we don’t check those, we tend to get ourselves into trouble.
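Silver's coverage point is easy to check in a simulation: a forecaster who understates the spread of outcomes produces "90 percent" intervals that capture far less than 90 percent of results. The overconfidence factor of 2.5 below is an arbitrary illustration:

```python
# Overconfident intervals: the forecaster believes outcomes are 2.5 times
# tamer than they really are, so a nominal 90% interval covers about half.
import random

random.seed(0)
TRUE_SD = 10.0
CLAIMED_SD = TRUE_SD / 2.5   # assumed overconfidence factor, for illustration
Z90 = 1.645                  # half-width multiplier for a 90% normal interval

trials = 100_000
hits = sum(
    abs(random.gauss(0, TRUE_SD)) <= Z90 * CLAIMED_SD for _ in range(trials)
)
print(f"nominal coverage: 90%, actual: {hits / trials:.0%}")  # roughly 49%
```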

IU: What standards or conditions must be met, in your opinion, for something to be considered “predictable”?
Silver: I tend not to think in terms of black and white absolutes. There are two ways to define “predictable,” I’d say. One is by asking, How well are we able to model the system? The other is more of a cosmic predictability: How intrinsically random is something over the long run?

I look at baseball as an example. Even the best teams only win about two-thirds of their games. Even the best hitters only get on base about 40 percent of the time. In that sense, baseball is highly unpredictable. In another sense though, baseball is very easy to measure relative to a lot of other things. It’s easy to set up models for it, and the statistics are of very high quality. A lot of smart people have worked on the problem. As a result, we are able to measure and quantify the uncertainty pretty accurately. We still can’t predict who’s going to win every game, but we are doing a pretty good job with that. Things are predictable in theory, but our capabilities are not nearly as strong.

Predictability is a tricky question, but I always say we almost always have some notion of what’s going to happen next, but it’s just never a perfect notion. The question is more, Where do you sit along that spectrum?
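Silver's two senses of "predictable" can be made concrete with baseball's own numbers: even if a team's true per-game win probability of two-thirds were known exactly, a minimal sketch shows how widely its 162-game record would still swing:

```python
# A well-modeled but intrinsically random system: simulate 10,000 seasons
# for a team whose true win probability is exactly two-thirds.
import random

random.seed(1)
P_WIN, GAMES, SEASONS = 2 / 3, 162, 10_000
records = sorted(
    sum(random.random() < P_WIN for _ in range(GAMES)) for _ in range(SEASONS)
)
lo, hi = records[SEASONS // 20], records[-(SEASONS // 20)]
print(f"middle 90% of seasons: {lo} to {hi} wins out of {GAMES}")
# The model is exact, yet the record still swings by roughly 20 wins:
# predictable in the modeling sense, random in the "cosmic" sense.
```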

What Matters More in Decisions: Analysis or Process?

We all make decisions. Some of them are large and many of them are small. Few of us understand that the process we use to make those decisions is more important than the analysis we put into the decision.

***

Think of the last major decision you made.

Maybe it was an acquisition, a large purchase, or perhaps it was whether to launch a new product.

Odds are three things went into that decision: (1) It probably relied on the insights of a few key executives; (2) it involved some sort of fact gathering and analysis; and (3) it was likely enveloped in some sort of decision process—whether formal or informal—that translated the analysis into a decision.

Now how would you rate the quality of your organization’s strategic decisions?

If you’re like most executives, the answer wouldn’t be positive:

In a recent McKinsey Quarterly survey of 2,207 executives, only 28 percent said that the quality of strategic decisions in their companies was generally good, 60 percent thought that bad decisions were about as frequent as good ones, and the remaining 12 percent thought good decisions were altogether infrequent.

How could it be otherwise? Product launches are frequently behind schedule and over budget. Strategic plans often ignore even the anticipated response of competitors. Mergers routinely fail to live up to the promises made in press releases.

The persistence of problems across time and organizations, both large and small, indicates that we can make better decisions.

“I have no use whatsoever for projections or forecasts. They create an illusion of apparent precision. The more meticulous they are, the more concerned you should be. We never look at projections.”

— Warren Buffett

The best place to start, if we’re trying to improve the quality of our decisions, is to look at how organizations make them. One interesting thing about bureaucracies is that they develop processes to limit the damage the worst people at every level can do. Yes, this also limits the positive impact people can have. When it comes to decisions, organizations default to gathering data and running analysis.

The widespread belief is that analysis reduces biases. But does it?

Is putting your faith in analysis any better than using your gut? What does the evidence say? Is there a better way?

Dan Lovallo and Olivier Sibony set out to find the answer.

Lovallo is a professor at the University of Sydney and Sibony is a director at McKinsey & Company. Together they studied 1,048 “major” business decisions over five years. The results are surprising.

Most business decisions were not made on “gut calls” but rather on rigorous analysis. And yet they were poor decisions. In short, most people did all the legwork we think we’re supposed to do: they delivered large quantities of detailed analysis.

Yet this wasn’t enough. “Our research indicates that, contrary to what one might assume, good analysis in the hands of managers who have good judgment won’t naturally yield good decisions.”

“[Projections] are put together by people who have an interest in a particular outcome, have a subconscious bias, and its apparent precision makes it fallacious. They remind me of Mark Twain’s saying, ‘A mine is a hole in the ground owned by a liar.’ Projections in America are often a lie, although not an intentional one, but the worst kind because the forecaster often believes them himself.”

— Charlie Munger

***

Lovallo and Sibony didn’t only look at the analysis; they also asked executives about the process used to make decisions.

Did they, for example, “explicitly explore and discuss major uncertainties or discuss viewpoints that contradicted the senior leader’s”?

So what matters more, process or analysis? After comparing the results, they determined that “process mattered more than analysis—by a factor of six.”

This finding does not mean that analysis is unimportant, as a closer look at the data reveals: almost no decisions in our sample made through a very strong process were backed by very poor analysis. Why? Because one of the things an unbiased decision-making process will do is ferret out poor analysis. The reverse is not true; superb analysis is useless unless the decision process gives it a fair hearing.

To illustrate the weakness of how most organizations make decisions, Sibony used an interesting analogy: the legal system.

Imagine walking into a courtroom where the trial consists of a prosecutor presenting PowerPoint slides. In 20 pretty compelling charts, he demonstrates why the defendant is guilty. The judge then challenges some of the facts of the presentation, but the prosecutor has a good answer to every objection. So the judge decides, and the accused man is sentenced.

That wouldn’t be due process, right? So if you would find this process shocking in a courtroom, why is it acceptable when you make an investment decision? Now of course, this is an oversimplification, but this process is essentially the one most companies follow to make a decision. They have a team arguing only one side of the case. The team has a choice of what points it wants to make and what way it wants to make them. And it falls to the final decision maker to be both the challenger and the ultimate judge. Building a good decision-making process is largely ensuring that these flaws don’t happen.

Simply understanding your cognitive biases doesn’t make you immune to them. It’s not enough. A disciplined decision process is the best place to improve the quality of decisions and guard against common decision-making biases.

Still curious? Read the ultimate guide to making smart decisions.

Footnotes

1. The inspiration for this post comes from Chip and Dan Heath in Decisive.

How Good Gamblers Think

From The Signal And The Noise:

Successful gamblers – and successful forecasters of any kind – do not think of the future in terms of no-lose bets, unimpeachable theories, and infinitely precise measurements. These are the illusions of the sucker, the sirens of his overconfidence. Successful gamblers, instead, think of the future as speckles of probability, flickering upward and downward like a stock market ticker to every new jolt of information. When their estimates of these probabilities diverge by a sufficient margin from the odds on offer, they may place a bet.

This sounds an awful lot like how Warren Buffett, Charlie Munger, and Benjamin Graham think about investing.

In his 1987 letter to shareholders, Warren Buffett explains Graham’s concept of Mr. Market:

Ben Graham, my friend and teacher, long ago described the mental attitude toward market fluctuations that I believe to be most conducive to investment success. He said that you should imagine market quotations as coming from a remarkably accommodating fellow named Mr. Market who is your partner in a private business. Without fail, Mr. Market appears daily and names a price at which he will either buy your interest or sell you his.

Even though the business that the two of you own may have economic characteristics that are stable, Mr. Market’s quotations will be anything but. For, sad to say, the poor fellow has incurable emotional problems. At times he feels euphoric and can see only the favorable factors affecting the business. When in that mood, he names a very high buy-sell price because he fears that you will snap up his interest and rob him of imminent gains. At other times he is depressed and can see nothing but trouble ahead for both the business and the world. On these occasions he will name a very low price, since he is terrified that you will unload your interest on him.

Mr. Market has another endearing characteristic: He doesn’t mind being ignored. If his quotation is uninteresting to you today, he will be back with a new one tomorrow. Transactions are strictly at your option. Under these conditions, the more manic-depressive his behavior, the better for you.

But, like Cinderella at the ball, you must heed one warning or everything will turn into pumpkins and mice: Mr. Market is there to serve you, not to guide you. It is his pocketbook, not his wisdom, that you will find useful. If he shows up some day in a particularly foolish mood, you are free to either ignore him or to take advantage of him, but it will be disastrous if you fall under his influence. Indeed, if you aren’t certain that you understand and can value your business far better than Mr. Market, you don’t belong in the game. As they say in poker, “If you’ve been in the game 30 minutes and you don’t know who the patsy is, you’re the patsy.”

And Charlie Munger’s take:

The model I like—to sort of simplify the notion of what goes on in a market for common stocks—is the pari-mutuel system at the racetrack. If you stop to think about it, a pari-mutuel system is a market. Everybody goes there and bets and the odds change based on what’s bet. That’s what happens in the stock market.

Any damn fool can see that a horse carrying a light weight with a wonderful win rate and a good post position etc., etc. is way more likely to win than a horse with a terrible record and extra weight and so on and so on. But if you look at the odds, the bad horse pays 100 to 1, whereas the good horse pays 3 to 2. Then it’s not clear which is statistically the best bet using the mathematics of Fermat and Pascal. The prices have changed in such a way that it’s very hard to beat the system.

And then the track is taking 17% off the top. So not only do you have to outwit all the other betters, but you’ve got to outwit them by such a big margin that on average, you can afford to take 17% of your gross bets off the top and give it to the house before the rest of your money can be put to work.

Given those mathematics, is it possible to beat the horses only using one’s intelligence? Intelligence should give some edge, because lots of people who don’t know anything go out and bet lucky numbers and so forth. Therefore, somebody who really thinks about nothing but horse performance and is shrewd and mathematical could have a very considerable edge, in the absence of the frictional cost caused by the house take.

Unfortunately, what a shrewd horseplayer’s edge does in most cases is to reduce his average loss over a season of betting from the 17% that he would lose if he got the average result to maybe 10%. However, there are actually a few people who can beat the game after paying the full 17%.

I used to play poker when I was young with a guy who made a substantial living doing nothing but bet harness races…. Now, harness racing is a relatively inefficient market. You don’t have the depth of intelligence betting on harness races that you do on regular races. What my poker pal would do was to think about harness races as his main profession. And he would bet only occasionally when he saw some mispriced bet available. And by doing that, after paying the full handle to the house—which I presume was around 17%—he made a substantial living.

You have to say that’s rare. However, the market was not perfectly efficient. And if it weren’t for that big 17% handle, lots of people would regularly be beating lots of other people at the horse races. It’s efficient, yes. But it’s not perfectly efficient. And with enough shrewdness and fanaticism, some people will get better results than others.

The stock market is the same way—except that the house handle is so much lower. If you take transaction costs—the spread between the bid and the ask plus the commissions—and if you don’t trade too actively, you’re talking about fairly low transaction costs. So that with enough fanaticism and enough discipline, some of the shrewd people are going to get way better results than average in the nature of things.

It is not a bit easy. And, of course, 50% will end up in the bottom half and 70% will end up in the bottom 70%. But some people will have an advantage. And in a fairly low transaction cost operation, they will get better than average results in stock picking.

How do you get to be one of those who is a winner—in a relative sense—instead of a loser?

Here again, look at the pari-mutuel system. I had dinner last night by absolute accident with the president of Santa Anita. He says that there are two or three betters who have a credit arrangement with them, now that they have off-track betting, who are actually beating the house. They’re sending money out net after the full handle—a lot of it to Las Vegas, by the way—to people who are actually winning slightly, net, after paying the full handle. They’re that shrewd about something with as much unpredictability as horse racing.

And the one thing that all those winning betters in the whole history of people who’ve beaten the pari-mutuel system have is quite simple. They bet very seldom.

It’s not given to human beings to have such talent that they can just know everything about everything all the time. But it is given to human beings who work hard at it—who look and sift the world for a mispriced bet—that they can occasionally find one.

And the wise ones bet heavily when the world offers them that opportunity. They bet big when they have the odds. And the rest of the time, they don’t. It’s just that simple.

That is a very simple concept. And to me it’s obviously right—based on experience not only from the pari-mutuel system, but everywhere else.

And yet, in investment management, practically nobody operates that way. We operate that way—I’m talking about Buffett and Munger. And we’re not alone in the world. But a huge majority of people have some other crazy construct in their heads. And instead of waiting for a near cinch and loading up, they apparently ascribe to the theory that if they work a little harder or hire more business school students, they’ll come to know everything about everything all the time.
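Silver's gamblers and Munger's rare-but-heavy bettors are running the same two calculations: estimate the edge a bet offers against the posted odds, then size the stake when the edge is real. Here is a rough sketch of both. Neither author names a formula; the Kelly criterion below is one standard formalization of "bet heavily when you have the odds," the win probabilities are invented, and treating the 17% handle as a flat haircut on the payout is a simplification of real pari-mutuel pricing:

```python
# Edge and bet sizing. edge_per_dollar() is the expected profit on a $1
# stake at fractional odds b: p * (1 + b) - 1, optionally after a flat
# house take (a simplification of how a real pari-mutuel pool works).
# kelly_fraction() is the Kelly criterion, one standard formalization of
# "bet heavily when you have the odds"; it is not named in the passage.

def edge_per_dollar(p_win: float, odds: float, take: float = 0.0) -> float:
    """Expected profit on a $1 stake at fractional odds, after the take."""
    return p_win * (1 + odds) * (1 - take) - 1

def kelly_fraction(p_win: float, odds: float) -> float:
    """Fraction of bankroll to stake; zero when there is no edge."""
    return max(0.0, edge_per_dollar(p_win, odds) / odds)

# Munger's two horses, with invented win probabilities chosen so the
# posted prices are roughly fair before the take:
for name, p, b in [("good horse at 3-to-2 ", 0.40, 1.5),
                   ("bad horse at 100-to-1", 0.01, 100.0)]:
    print(f"{name}: edge {edge_per_dollar(p, b):+.2f}, "
          f"after 17% take {edge_per_dollar(p, b, 0.17):+.2f}")

# Sizing only the genuinely mispriced bet:
print(f"60% shot at even money: stake {kelly_fraction(0.60, 1.0):.0%} of bankroll")
print(f"no edge at even money:  stake {kelly_fraction(0.50, 1.0):.0%}")
```

On those assumed probabilities, neither horse survives the handle, which is Munger's point: the edge has to be large enough to pay the house before any bet, let alone a heavy one, makes sense.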

Silver concludes, “Finding patterns is easy in any kind of data-rich environment; that’s what mediocre gamblers do. The key is in determining whether the patterns represent signal or noise.”