Tag: Decision Making

Double Loop Learning: Download New Skills and Information into Your Brain

We’re taught single loop learning from the time we are in grade school, but there’s a better way. Double loop learning is the quickest and most efficient way to learn anything that you want to “stick.”

***

So, you’ve done the work necessary to have an opinion, learned the mental models, and considered how you make decisions. But how do you now implement these concepts and figure out which ones work best in your situation? How do you know what’s effective and what’s not? One solution to this dilemma is double loop learning.

We can think of double loop learning as learning based on Bayesian updating — the modification of goals, rules, or ideas in response to new evidence and experience. It might sound like another piece of corporate jargon, but double loop learning cultivates creativity and innovation for both organizations and individuals.
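For readers who want the underlying math, Bayesian updating is just Bayes' rule applied over and over (this is the standard statement of the rule, not anything specific to Argyris or to double loop learning):

$$P(\text{belief} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{belief}) \cdot P(\text{belief})}{P(\text{evidence})}$$

Each round of evidence turns the prior belief into a posterior, and the posterior becomes the prior for the next round. Double loop learning asks us to make the same move with our goals and rules.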

“Every reaction is a learning process; every significant experience alters your perspective.”

— Hunter S. Thompson

Single Loop Learning

The first time we aim for a goal, follow a rule, or make a decision, we are engaging in single loop learning. This is where many people get stuck and keep making the same mistakes. If we question our approaches and make honest self-assessments, we shift into double loop learning. It’s similar to the Orient stage in John Boyd’s OODA loop. In this stage, we assess our biases, question our mental models, and look for areas where we can improve. We collect data, seek feedback, and gauge our performance. In short, we can’t learn from experience without reflection. Only reflection allows us to distill the experience into something we can learn from.

In Teaching Smart People How to Learn, business theorist Chris Argyris compares single loop learning to a typical thermostat. It operates in a homeostatic loop, always seeking to return the room to the temperature at which the thermostat is set. A thermostat might keep the temperature steady, but it doesn’t learn. By contrast, double loop learning would entail the thermostat’s becoming more efficient over time. Is the room at the optimum temperature? What’s the humidity like today and would a lower temperature be more comfortable? The thermostat would then test each idea and repeat the process. (Sounds a lot like Nest.)
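A minimal sketch in code may make the contrast concrete. This is my own illustration, not Argyris's (the class names and the humidity rule are invented for the example): the single loop only closes the gap between the room and a fixed setpoint, while the double loop also revises the setpoint itself in response to feedback.

```python
class SingleLoopThermostat:
    """Single loop: the goal (the setpoint) is fixed and never questioned."""

    def __init__(self, setpoint=21.0):
        self.setpoint = setpoint

    def act(self, room_temp):
        if room_temp < self.setpoint:
            return "heat"
        if room_temp > self.setpoint:
            return "cool"
        return "idle"


class DoubleLoopThermostat(SingleLoopThermostat):
    """Double loop: periodically question whether the goal itself is right."""

    def reflect(self, comfort_feedback, humidity):
        # Revise the setpoint in light of new evidence, e.g. occupants
        # reporting that they feel too warm on humid days.
        if comfort_feedback == "too warm" and humidity > 0.6:
            self.setpoint -= 0.5
        elif comfort_feedback == "too cold":
            self.setpoint += 0.5
```

The inner loop (act) runs exactly as before; the outer loop (reflect) is the extra, reflective pass that single loop learning never makes.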

Double Loop Learning

Double loop learning is part of action science — the study of how we act in difficult situations. Individuals and organizations need to learn if they want to succeed (or even survive). But few of us pay much attention to exactly how we learn and how we can optimize the process.

Even smart, well-educated people can struggle to learn from experience. We all know someone who’s been at the office for 20 years and claims to have 20 years of experience, but they really have one year repeated 20 times.

Not learning can actually make you worse off. The world is dynamic and always changing. If you’re standing still, then you won’t adapt. Forget moving ahead; you have to get better just to stay in the same relative spot, and not getting better means you’re falling behind.

Many of us are so focused on solving problems as they arise that we don’t take the time to reflect on them after we’ve dealt with them, and this omission dramatically limits our ability to learn from the experiences. Of course, we want to reflect, but we’re busy and we have more problems to solve — not to mention that reflecting on our idiocy is painful and we’re predisposed to avoid pain and protect our egos.

Reflection, however, is an example of an approach I call first-order negative, second-order positive. It has very visible short-term costs — it takes time and honest self-assessment of our shortcomings — but pays off in spades in the future. The problem is that the payoff is not visible today, so slowing down now in order to go faster later seems like a bad idea to many. And with the payoff so far in the future, it's hard to connect it to the reflection we do today.

The Learning Dilemma: How Success Becomes an Impediment

Argyris wrote that many skilled people excel at single loop learning. It’s what we learn in academic situations. But if we are accustomed only to success, double loop learning can ignite defensive behavior. Argyris found this to be the reason learning can be so difficult. It’s not because we aren’t competent, but because we resist learning out of a fear of seeming incompetent. Smart people aren’t used to failing, so they struggle to learn from their mistakes and often respond by blaming someone else. As Argyris put it, “their ability to learn shuts down precisely at the moment they need it the most.”

In the same way that a muscle strengthens at the point of failure, we learn best after dramatic errors.

The problem is that single loop processes can be self-fulfilling. Consider managers who assume their employees are inept. They deal with this by micromanaging and making every decision themselves. Their employees have no opportunity to learn, so they become discouraged. They don’t even try to make their own decisions. This is a self-perpetuating cycle. For double loop learning to happen, the managers would have to let go a little. Allow someone else to make minor decisions. Offer guidance instead of intervention. Leave room for mistakes. In the long run, everyone would benefit. The same applies to teachers who think their students are going to fail an exam. The teachers become condescending and assign simple work. When the exam rolls around, guess what? Many of the students do badly. The teachers think they were right, so the same thing happens the next semester.

Many of the leaders Argyris studied blamed any problems on “unclear goals, insensitive and unfair leaders, and stupid clients” rather than making useful assessments. Complaining might be cathartic, but it doesn’t let us learn. Argyris explained that this defensive reasoning happens even when we want to improve. Single loop learning just happens to be a way of minimizing effort. We would go mad if we had to rethink our response every time someone asked how we are, for example. So everyone develops their own “theory of action—a set of rules that individuals use to design and implement their own behavior as well as to understand the behavior of others.” Most of the time, we don’t even consider our theory of action. It’s only when asked to explain it that the divide between how we act and how we think we act becomes apparent. Identifying the gap between our espoused theory of action and what we are actually doing is the hard part.

The Key to Double Loop Learning: Push to the Point of Failure

The first step Argyris identified is to stop getting defensive. Justification gets us nowhere. Instead, he advocates collecting and analyzing relevant data. What conclusions can we draw from experience? How can we test them? What evidence do we need to prove a new idea is correct?

The next step is to change our mental models. Break apart paradigms. Question where conventions came from. Pivot and make reassessments if necessary.

Problem-solving isn’t a linear process. We can’t make one decision and then sit back and await success.

Argyris found that many professionals are skilled at teaching others, yet find it difficult to recognize the problems they themselves cause (see Galilean Relativity). It’s easy to focus on other people; it’s much harder to look inward and face complex challenges. Doing so brings up guilt, embarrassment, and defensiveness. As John Gray put it, “If there is anything unique about the human animal, it is that it has the ability to grow knowledge at an accelerating rate while being chronically incapable of learning from experience.”

When we repeat a single loop process, it becomes a habit. Each repetition requires less and less effort. We stop questioning or reconsidering it, especially if it does the job (or appears to). While habits are essential in many areas of our lives, they don’t serve us well if we want to keep improving. For that, we need to push the single loop to the point of failure, to strengthen how we act in the double loop. It’s a bit like the Feynman technique — we have to dismantle what we know to see how solid it truly is.

“Fail early and get it all over with. If you learn to deal with failure… you can have a worthwhile career. You learn to breathe again when you embrace failure as a part of life, not as the determining moment of life.”

— Rev. William L. Swig

One example is the typical five-day, 9-to-5 work week. Most organizations stick to it year after year. They don’t reconsider the efficacy of a schedule designed for Industrial Revolution factory workers. This is single loop learning. It’s just the way things are done, but not necessarily the smartest way to do things.

The decisions made early on in an organization have the greatest long-term impact. Changing them in the months, years, or even decades that follow becomes a non-option. How to structure the work week is one such initial decision that becomes invisible. As G.K. Chesterton put it, “The things we see every day are the things we never see at all.” Sure, a 9-to-5 schedule might not be causing any obvious problems. The organization might be perfectly successful. But that doesn’t mean things cannot improve. It’s the equivalent of a child continuing to crawl because it gets them around. Why try walking if crawling does the job? Why look for another option if the current one is working?

A growing number of organizations are realizing that conventional work weeks might not be the most effective way to structure work time. They are using double loop learning to test other structures. Some organizations are trying shorter work days or four-day work weeks or allowing people to set their own schedules. Managers then keep track of how the tested structures affect productivity and profits. Over time, it becomes apparent whether the new schedule is better than the old one.

37Signals is one company using double loop learning to restructure their work week. CEO Jason Fried began experimenting a few years ago. He tried out a four-day, 32-hour work week. He gave employees the whole of June off to explore new ideas. He cut back on meetings and created quiet spaces for focused work. Rather than following conventions, 37Signals became a laboratory looking for ways of improving. Over time, what worked and what didn’t became obvious.

Double loop learning is about data-backed experimentation, not aimless tinkering. If a new idea doesn’t work, it’s time to try something else.

In an op-ed for The New York Times, Camille Sweeney and Josh Gosfield give the example of David Chang. Double loop learning turned his failing noodle bar into an award-winning empire.

After apprenticing as a cook in Japan, Mr. Chang started his own restaurant. Yet his early efforts were ineffective. He found himself overworked and struggling to make money. He knew his cooking was excellent, so how could he make it profitable? Many people would have quit or continued making irrelevant tweaks until the whole endeavor failed. Instead, Mr. Chang shifted from single to double loop learning. A process of making honest self-assessments began. One of his foundational beliefs was that the restaurant should serve only noodles, but he decided to change the menu to reflect his skills. In time, it paid off; “the crowds came, rave reviews piled up, awards followed and unimaginable opportunities presented themselves.” This is what double loop learning looks like in action: questioning everything and starting from scratch if necessary.

Josh Waitzkin’s approach (as explained in The Art of Learning) is similar. After reaching the heights of competitive chess, Waitzkin turned his focus to martial arts. He began with tai chi chuan. Martial arts and chess are, on the surface, completely different, but Waitzkin used double loop learning for both. He progressed quickly because he was willing to lose matches if doing so meant he could learn. He noticed that other martial arts students had a tendency to repeat their mistakes, letting fruitless habits become ingrained. Like the managers Argyris worked with, students grew defensive when challenged. They wanted to be right, even if it prevented their learning. In contrast, Waitzkin viewed practice as an experiment. Each session was an opportunity to test his beliefs. He mastered several martial arts, earning a black belt in jujitsu and winning a world championship in tai ji tui shou.

Argyris found that organizations learn best when people know how to communicate. (No surprise there.) Leaders need to listen actively and open up exploratory dialogues so that problematic assumptions and conventions can be revealed. Argyris identified some key questions to consider.

  • What is the current theory in use?
  • How does it differ from proposed strategies and goals?
  • What unspoken rules are being followed, and are they detrimental?
  • What could change, and how?
  • Forget the details; what’s the bigger picture?

Meaningful learning doesn’t happen without focused effort. Double loop learning is the key to turning experience into improvements, information into action, and conversations into progress.

Earning Your Stripes: My Conversation with Patrick Collison [The Knowledge Project #32]


On this episode of the Knowledge Project, I chat with Patrick Collison, co-founder and CEO of the leading online payment processing company, Stripe. If you’ve purchased anything online recently, there’s a good chance that Stripe facilitated the transaction.

What is now an organization with over a thousand employees, handling billions of dollars of online purchases every year, began as a small side experiment while Patrick and his brother John were in college.

During our conversation, Patrick shares the details of their unlikely journey and some of the hard-earned wisdom he picked up along the way. I hope you have something handy to write with because the nuggets per minute in this episode are off the charts. Patrick was so open and generous with his responses that I’m really excited for you to hear what he has to say.

Here are just a few of the things we cover:

  • The biggest (and most valuable) mistakes Patrick made in the early days of Stripe and how they helped him get better
  • The characteristics that Patrick looks for in a new hire to fit and contribute to the Stripe company culture
  • What compelled him and his brother to move forward with the early concept of Stripe, even though on paper it was doomed to fail from the start
  • The gaps Patrick saw in the market that dozens of other processing companies were missing — and how he capitalized on them
  • The lessons Patrick learned from scaling Stripe from two employees (him and his brother) to nearly 1,000 today
  • How he evaluates the upsides and potential dangers of speculative positions within the company
  • How his Irish upbringing influenced his ability to argue and disagree without taking offense (and how we can all be a little more “Irish”)
  • The power of finding the right peer group in your social and professional circles and how impactful and influential it can be in determining where you end up.
  • The 4 ways Patrick has modified his decision-making process over the last 5 years and how it’s helped him develop as a person and as a business leader (this part alone is worth the listen)
  • Patrick’s unique approach to books and how he chooses what he’s going to spend his time reading
  • …life in Silicon Valley, Baumol’s cost disease, and so, so much more.

Patrick truly is one of the warmest, most humble, and most down-to-earth people I’ve had the pleasure of speaking with, and I thoroughly enjoyed our conversation. I hope you will too!


Transcript

Normally only members of our learning community have access to transcripts; however, we pick one or two a year to make available to everyone. Here’s the complete transcript of the interview with Patrick.

If you liked this, check out other episodes of The Knowledge Project.

***


Go Fast and Break Things: The Difference Between Reversible and Irreversible Decisions

Reversible vs. irreversible decisions. We often think that collecting as much information as possible will help us make the best decisions. Sometimes that’s true, but sometimes it hamstrings our progress. Other times it can be flat out dangerous.

***

Many of the most successful people adopt simple, versatile decision-making heuristics to remove the need for deliberation in particular situations.

One heuristic might be defaulting to saying no, as Steve Jobs did. Or saying no to any decision that requires a calculator or computer, as Warren Buffett does. Or it might mean reasoning from first principles, as Elon Musk does. Jeff Bezos, the founder of Amazon.com, has another one we can add to our toolbox. He asks himself, is this a reversible or irreversible decision?

If a decision is reversible, we can make it fast and without perfect information. If a decision is irreversible, we had better slow down the decision-making process and ensure that we consider ample information and understand the problem as thoroughly as we can.

Bezos used this heuristic to make the decision to found Amazon. He recognized that if Amazon failed, he could return to his prior job. He would still have learned a lot and would not regret trying. The decision was reversible, so he took a risk. The heuristic served him well and continues to pay off when he makes decisions.

Decisions Amidst Uncertainty

Let’s say you decide to try a new restaurant after reading a review online. Having never been there before, you cannot know if the food will be good or if the atmosphere will be dreary. But you use the incomplete information from the review to make a decision, recognizing that it’s not a big deal if you don’t like the restaurant.

In other situations, the uncertainty is a little riskier. You might decide to take a particular job, not knowing what the company culture is like or how you will feel about the work after the honeymoon period ends.

Reversible decisions can be made fast and without obsessing over finding complete information. We can be prepared to extract wisdom from the experience with little cost if the decision doesn’t work out. Frequently, it’s not worth the time and energy required to gather more information and look for flawless answers. Although your research might make your decision 5% better, you might miss an opportunity.

Treating decisions as reversible is not an excuse to act recklessly or stay ill-informed; rather, it reflects a belief that we should adapt our decision-making framework to the type of decision we are making. Reversible decisions don’t need to be made the same way as irreversible decisions.

The ability to make decisions fast is a competitive advantage. One major advantage that start-ups have is that they can move with velocity, whereas established incumbents typically move with speed. The difference between the two is meaningful and often means the difference between success and failure.

Speed is measured as distance over time. If we’re headed from New York to LA on an airplane and we take off from JFK and circle around New York for three hours, we’re moving with a lot of speed, but we’re not getting anywhere. Speed doesn’t care if you are moving toward your goals or not. Velocity, on the other hand, measures displacement over time. To have velocity, you need to be moving toward your goal.
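A back-of-the-envelope example (illustrative numbers, not from the article) shows how different the two measures can be:

```python
hours = 3.0
distance_flown_km = 2400.0        # three hours of circling above New York
displacement_toward_la_km = 0.0   # no progress toward Los Angeles

speed = distance_flown_km / hours             # 800 km/h of raw motion
velocity = displacement_toward_la_km / hours  # 0 km/h toward the goal

print(f"speed: {speed} km/h, velocity toward LA: {velocity} km/h")
```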

This heuristic explains why start-ups making quick decisions have an advantage over incumbents. That advantage is magnified by environmental factors, such as the pace of change. The faster the pace of environmental change, the more an advantage will accrue to people making quick decisions because those people can learn faster.

Decisions provide us with data, which can then make our future decisions better. The faster we can cycle through the OODA loop, the better. This framework isn’t a one-off to apply to certain situations; it is a heuristic that needs to be an integral part of a decision-making toolkit.

With practice, we also get better at recognizing bad decisions and pivoting, rather than sticking with past choices due to the sunk cost fallacy. Equally important, we can stop viewing mistakes or small failures as disastrous and instead view them as pure information that will inform future decisions.

“A good plan, violently executed now, is better than a perfect plan next week.”

— General George Patton

Bezos compares decisions to doors. Reversible decisions are doors that open both ways. Irreversible decisions are doors that allow passage in only one direction; if you walk through, you are stuck there. Most decisions are the former and can be reversed (even though we can never recover the invested time and resources). Going through a reversible door gives us information: we know what’s on the other side.

In his shareholder letter, Bezos writes[1]:

Some decisions are consequential and irreversible or nearly irreversible – one-way doors – and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that – they are changeable, reversible – they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.

As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention. We’ll have to figure out how to fight that tendency.

Bezos gives the example of the launch of one-hour delivery to those willing to pay extra. This service launched less than four months after the idea was first developed. In 111 days, the team “built a customer-facing app, secured a location for an urban warehouse, determined which 25,000 items to sell, got those items stocked, recruited and onboarded new staff, tested, iterated, designed new software for internal use – both a warehouse management system and a driver-facing app – and launched in time for the holidays.”

As further guidance, Bezos considers 70% certainty to be the cut-off point where it is appropriate to make a decision. That means acting once we have 70% of the required information, instead of waiting longer. Making a decision at 70% certainty and then course-correcting is a lot more effective than waiting for 90% certainty.
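Putting the two ideas together, a rough sketch of the heuristic might look like the function below. This is my own paraphrase of the two-door rule and the 70% threshold, not Bezos’s actual process.

```python
def how_to_decide(reversible: bool, information_fraction: float) -> str:
    """Hypothetical sketch: reversibility and information level set the tempo."""
    if not reversible:
        # Type 1 ("one-way door"): slow, deliberate, consultative.
        return "deliberate methodically before committing"
    if information_fraction >= 0.70:
        # Type 2 ("two-way door") with roughly 70% of the picture: act now.
        return "decide now and course-correct later"
    return "gather a little more information, then decide quickly"
```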

In Blink: The Power of Thinking Without Thinking, Malcolm Gladwell explains why decision-making under uncertainty can be so effective. We usually assume that more information leads to better decisions — if a doctor proposes additional tests, we tend to believe they will lead to a better outcome. Gladwell disagrees: “In fact, you need to know very little to find the underlying signature of a complex phenomenon. All you need is evidence of the ECG, blood pressure, fluid in the lungs, and an unstable angina. That’s a radical statement.”

In medicine, as in many areas, more information does not necessarily ensure improved outcomes. To illustrate this, Gladwell gives the example of a man arriving at a hospital with intermittent chest pains. His vital signs show no risk factors, yet his lifestyle does and he had heart surgery two years earlier. If a doctor looks at all the available information, it may seem that the man needs admitting to the hospital. But the additional factors, beyond the vital signs, are not important in the short term. In the long run, he is at serious risk of developing heart disease. Gladwell writes,

… the role of those other factors is so small in determining what is happening to the man right now that an accurate diagnosis can be made without them. In fact, … that extra information is more than useless. It’s harmful. It confuses the issues. What screws up doctors when they are trying to predict heart attacks is that they take too much information into account.

We can all learn from Bezos’s approach, which has helped him to build an enormous company while retaining the tempo of a start-up. Bezos uses his heuristic to fight the stasis that sets in within many large organizations. It is about being effective, not about following the norm of slow decisions.

Once you understand that reversible decisions are in fact reversible you can start to see them as opportunities to increase the pace of your learning. At a corporate level, allowing employees to make and learn from reversible decisions helps you move at the pace of a start-up. After all, if someone is moving with speed, you’re going to pass them when you move with velocity.

***


End Notes

[1] https://www.sec.gov/Archives/edgar/data/1018724/000119312516530910/d168744dex991.htm

The Return of a Decision-Making Jedi [The Knowledge Project #28]

Michael Mauboussin


Michael Mauboussin returns for a fascinating encore interview on the Knowledge Project, a show that explores ideas, methods, and mental models that will help you expand your mind, live deliberately, and master the best of what other people have already figured out.

In my conversation with Michael, we geek out on decision making, luck vs. skill, work/life balance, and so much more.

Mauboussin was actually the very first guest on the podcast when it was still very much an experiment. I enjoyed it so much, I decided to continue with the show. (If you missed his last interview, you can listen to it here, or if you’re a member of The Learning Community, you can download a transcript.)

Michael is one of my very favorite people to talk to, and I couldn’t wait to pick up right where we left off.

In this interview, Michael and I dive deep into some of the topics we care most about here at Farnam Street, including:

  • The concept of “base rates” and how they can help us make far better decisions and avoid the pain and consequences of making poor choices.
  • How to know where you land on the luck/skill continuum and why it matters
  • Michael’s advice on creating a systematic decision-making process in your organization to improve outcomes.
  • The two most important elements of any decision-making process
  • How to train your intuition to be one of your most powerful assets instead of a dangerous liability
  • The three tests Michael uses in his company to determine the health and financial stability of his environment
  • Why “algorithm aversion” is creating such headaches in many organizations and how to help your teams overcome it, so you can make more rapid progress
  • The most significant books that he’s read since we last spoke, his reading habits, and the strategies he uses to get the most out of every book
  • The importance of sleep in Michael’s life to make sure his body and mind are running at peak efficiency
  • His greatest failures and what he learned from them
  • How Michael and his wife raised their kids and the unique parenting style they adopted
  • How Michael defines happiness and the decisions he makes to maximize the joy in his life

Any one of those insights alone is worth a listen, so I think you’re really going to enjoy this interview.


Transcript

An edited transcript is available to members of our learning community or for purchase separately ($7).

More Episodes

A complete list of all of our podcast episodes.

***


What You Can Learn from Fighter Pilots About Making Fast and Accurate Decisions

“What is strategy? A mental tapestry of changing intentions for harmonizing and focusing our efforts as a basis for realizing some aim or purpose in an unfolding and often unforeseen world of many bewildering events and many contending interests.”

— John Boyd

What techniques do people use in the most extreme situations to make decisions? What can we learn from them to help us make more rational and quick decisions?

If these techniques work in the most drastic scenarios, they have a good chance of working for us. This is why military mental models can have such wide, useful applications outside their original context.

Military mental models are constantly tested in the laboratory of conflict. If they weren’t agile, versatile, and effective, they would quickly be replaced by others. Military leaders and strategists invest a great deal of time in developing and teaching decision-making processes.

One strategy that I’ve found repeatedly effective is the OODA loop.

Developed by strategist and U.S. Air Force Colonel John Boyd, the OODA loop is a practical concept designed to be the foundation of rational thinking in confusing or chaotic situations. OODA stands for Observe, Orient, Decide, and Act.

Boyd developed the strategy for fighter pilots. However, like all good mental models, it can be extended into other fields. We used it at the intelligence agency I used to work at. I know lawyers, police officers, doctors, businesspeople, politicians, athletes, and coaches who use it.

Fighter pilots have to work fast. Taking a second too long to make a decision can cost them their lives. As anyone who has ever watched Top Gun knows, pilots have a lot of decisions and processes to juggle when they’re in dogfights (close-range aerial battles). Pilots move at high speeds and need to avoid enemies while tracking them and keeping a contextual knowledge of objectives, terrains, fuel, and other key variables.

Dogfights are nasty. I’ve talked to pilots who’ve been in them. They want the fights to be over as quickly as possible. The longer they go, the higher the chances that something goes wrong. Pilots need to rely on their creativity and decision-making abilities to survive. There is no game plan to follow, no schedule or to-do list. There is only the present moment when everything hangs in the balance.

Forty-Second Boyd

Boyd was no armchair strategist. He developed his ideas during his own time as a fighter pilot. He earned the nickname “Forty-Second Boyd” for his ability to win any fight in under 40 seconds.

In a tribute written after Boyd’s death, General C.C. Krulak described him as “a towering intellect who made unsurpassed contributions to the American art of war. Indeed, he was one of the central architects of the reform of military thought…. From John Boyd we learned about competitive decision making on the battlefield—compressing time, using time as an ally.”

Reflecting Robert Greene’s maxim that everything is material, Boyd spent his career observing people and organizations. How do they adapt to changeable environments in conflicts, business, and other situations?

Over time, he deduced that these situations are characterized by uncertainty. Dogmatic, rigid theories are unsuitable for chaotic situations. Rather than trying to rise through the military ranks, Boyd focused on using his position as colonel to compose a theory of the universal logic of war.

Boyd was known to ask his mentees the pointed question, “Do you want to be someone, or do you want to do something?” In his own life, he certainly focused on the latter path and, as a result, left us ideas with tangible value. The OODA loop is just one of many.

The Four Parts of the OODA Loop

Let’s break down the four parts of the OODA loop and see how they fit together.

OODA stands for Observe, Orient, Decide, Act. The description of it as a loop is crucial. Boyd intended the four steps to be repeated again and again until a conflict finishes. Although most depictions of the OODA loop portray it as a superficial idea, there is a lot of depth to it. Using it should be simple, but it has a rich basis in interdisciplinary knowledge.

1: Observe
The first step in the OODA Loop is to observe. At this stage, the main focus is to build a comprehensive picture of the situation with as much accuracy as possible.

A fighter pilot needs to consider: What is immediately affecting me? What is affecting my opponent? What could affect us later on? Can I make any predictions, and how accurate were my prior ones? A pilot’s environment changes rapidly, so these observations need to be broad and fluid.

And information alone is not enough. The observation stage requires awareness of the overarching meaning of the information. It also necessitates separating the information which is relevant for a particular decision from that which is not. You have to add context to the variables.

The observation stage is vital in decision-making processes.

For example, faced with a patient in an emergency ward, a doctor needs to start by gathering as much foundational knowledge as possible. That might be the patient’s blood pressure, pulse, age, underlying health conditions, and reason for admission. At the same time, the doctor needs to discard irrelevant information and figure out which facts are relevant for this precise situation. Only by putting the pieces together can she make a fast decision about the best way to treat the patient. The more experienced a doctor is, the more factors she is able to take into account, including subtle ones, such as a patient’s speech patterns, his body language, and the absence (rather than presence) of certain signs.

2: Orient

Orientation, the second stage of the OODA loop, is frequently misunderstood or skipped because it is less intuitive than the other stages. Boyd referred to it as the schwerpunkt, a German term which loosely translates to “the main emphasis.” In this context, to orient is to recognize the barriers that might interfere with the other parts of the process.

Without an awareness of these barriers, the subsequent decision cannot be a fully rational one. Orienting is all about connecting with reality, not with a false version of events filtered through the lens of cognitive biases and shortcuts.

“Orientation isn’t just a state you’re in; it’s a process. You’re always orienting.”

— John Boyd

Including this step, rather than jumping straight to making a decision, gives us an edge over the competition. Even if we are at a disadvantage to begin with, having fewer resources or less information, Boyd maintained that the Orient step ensures that we can outsmart an opponent.

For Western nations, cyber-crime is a huge threat — mostly because for the first time ever, they can’t outsmart, outspend, or out-resource the competition. Boyd has some lessons for them.

Boyd believed that four main barriers prevent us from seeing information in an unbiased manner:

  1. Our cultural traditions
  2. Our genetic heritage
  3. Our ability to analyze and synthesize
  4. The influx of new information — it is hard to make sense of observations when the situation keeps changing

Boyd was one of the first people to discuss the importance of building a toolbox of mental models, prior to Charlie Munger’s popularization of the concept among investors.

Boyd believed in “destructive deduction” — taking note of incorrect assumptions and biases and then replacing them with fundamental, versatile mental models. Only then can we begin to garner a reality-oriented picture of the situation, which will inform subsequent decisions.

Boyd employed a brilliant metaphor for this — a snowmobile. In one talk, he described how a snowmobile comprises elements of different devices. The caterpillar treads of a tank, skis, the outboard motor of a boat, the handlebars of a bike — each of those elements is useless alone, but combining them creates a functional vehicle.

As Boyd put it: “A loser is someone (individual or group) who cannot build snowmobiles when facing uncertainty and unpredictable change; whereas a winner is someone (individual or group) who can build snowmobiles, and employ them in an appropriate fashion, when facing uncertainty and unpredictable change.”

To orient ourselves, we have to build a metaphorical snowmobile by combining practical concepts from different disciplines.

Although Boyd is regarded as a military strategist, he didn’t confine himself to any particular discipline. His theories encompass ideas drawn from various disciplines, including mathematical logic, biology, psychology, thermodynamics, game theory, anthropology, and physics. Boyd described his approach as a “scheme of pulling things apart (analysis) and putting them back together (synthesis) in new combinations to find how apparently unrelated ideas and actions can be related to one another.”

3. Decide

No surprises here. Having gathered information and oriented ourselves, we have to make an informed decision. The previous two steps should have generated a plethora of ideas, so this is the point where we choose the most relevant option.

Boyd cautioned against first-conclusion bias, explaining that we cannot keep making the same decision again and again. This part of the loop needs to be flexible and open to Bayesian updating. In some of his notes, Boyd described this step as the hypothesis stage. The implication is that we should test the decisions we make at this point in the loop, spotting their flaws and including any issues in future observation stages.

4. Act

While technically a decision-making process, the OODA loop is all about action. The ability to act upon rational decisions is a serious advantage.

The other steps are mere precursors. Once a decision is made, it’s time to act on it. Also known as the test stage, this is when we experiment to see how good our decision was. Did we observe the right information? Did we use the best possible mental models? Did we get swayed by biases and other barriers? Can we disprove the prior hypothesis? Whatever the outcome, we then cycle back to the first part of the loop and begin observing again.
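As a structural sketch only, the loop can be written as a cycle in which the outcome of each action becomes the raw material for the next observation. The helper functions here are invented placeholders; Boyd specified a way of thinking, not an algorithm.

```python
def observe(situation):
    # Gather the information that is actually relevant to this decision.
    return {"facts": situation}

def orient(observations, mental_models):
    # Filter the observations through our models, checking for bias.
    return {"picture": observations, "models": mental_models}

def decide(picture):
    # Choose the most promising option and treat it as a hypothesis.
    return {"hypothesis": picture}

def act(decision):
    # Test the hypothesis in the real world and record what happens.
    return {"outcome": decision}

def ooda(situation, mental_models, cycles=3):
    for _ in range(cycles):
        picture = orient(observe(situation), mental_models)
        outcome = act(decide(picture))
        situation = outcome  # feed the result back into the next observation
    return situation

ooda({"opponent": "unknown"}, ["pot odds", "inversion", "second-order effects"])
```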

Why the OODA Loop Works

The OODA loop has four key benefits.

1. Speed

Fighter pilots must make many decisions in fast succession. They don’t have time to list pros and cons or to consider every available avenue. Once the OODA loop becomes part of their mental toolboxes, they should be able to cycle through it in a matter of seconds.

Speed is a crucial element of military decision making. Using the OODA loop in everyday life, we probably have a little more time than a fighter pilot would. But Boyd emphasized the value of being decisive, taking initiative, and staying autonomous. These are universal assets and apply to many situations.

Take the example of modern growth hacker marketing.

“The ability to operate at a faster tempo or rhythm than an adversary enables one to fold the adversary back inside himself so that he can neither appreciate nor keep up with what is going on. He will become disoriented and confused…”

— John Boyd

The key advantage growth hackers have over traditional marketers is speed. They observe (look at analytics, survey customers, perform a/b tests, etc.) and orient themselves (consider vanity versus meaningful metrics, assess interpretations, and ground themselves in the reality of a market) before making a decision and then acting. The final step serves to test their ideas and they have the agility to switch tactics if the desired outcome is not achieved.

Meanwhile, traditional marketers are often trapped in lengthy campaigns which do not offer much in the way of useful metrics. Growth hackers can adapt and change their techniques every single day depending on what works. They are not confined by stagnant ideas about what worked before.

So, although they may have a small budget and fewer people to assist them, their speed gives them an advantage. Just as Boyd could defeat any opponent in under 40 seconds (even starting at a position of disadvantage), growth hackers can grow companies and sell products at extraordinary rates, starting from scratch.

2. Comfort With Uncertainty
Uncertainty does not always equate to risk. A fighter pilot is in a precarious situation, where there will be gaps in their knowledge. They cannot read the mind of the opponent and might have incomplete information about the weather conditions and surrounding environment. They can, however, take into account key factors such as the opponent’s nationality, the type of airplane they are flying, and what their maneuvers reveal about their intentions and level of training.

If the opponent uses an unexpected strategy, is equipped with a new type of weapon or airplane, or behaves in an irrational, ideologically motivated way, the pilot must accept the accompanying uncertainty. However, Boyd belabored the point that uncertainty is irrelevant if we have the right filters in place.

If we don’t, we can end up stuck at the observation stage, unable to decide or act. But if we do have the right filters, we can factor uncertainty into the observation stage. We can leave a margin of error. We can recognize the elements which are within our control and those which are not.

Three key principles supported Boyd’s ideas. In his presentations, he referred to Gödel’s Proof, Heisenberg’s Uncertainty Principle, and the Second Law of Thermodynamics.

Gödel’s theorems indicate that any mental model we have of reality will omit certain information and that Bayesian updating must be used to bring it in line with reality. Our understanding of science illustrates this.

In the past, people’s conception of reality missed crucial concepts such as criticality, relativity, the laws of thermodynamics, and gravity. As we have discovered these concepts, we have updated our view of the world. Yet we would be foolish to think that we now know everything and our worldview is complete. Other key principles remain undiscovered. The same goes for fighter pilots — their understanding of what is going on during a battle will always have gaps. Identifying this fundamental uncertainty gives it less power over us.

The second concept Boyd referred to is Heisenberg’s Uncertainty Principle. In its simplest form, this principle describes a limit on the precision with which pairs of physical properties can be known. We cannot know both the exact position and the exact momentum (roughly, the velocity) of a particle at the same time; the more precisely we pin down one, the less precisely we can know the other. Although the principle was formulated for particles, Boyd’s habit of combining disciplines led him to apply it, as a metaphor, to planes. If a pilot focuses too hard on where an enemy plane is, they will lose track of where it is going, and vice versa; trying harder to track both variables only adds inaccuracy. Boyd extended the idea to myriad situations where excessive observation proves detrimental. Reality is imprecise.
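In symbols, the principle is usually stated for position and momentum, with ħ the reduced Planck constant:

$$\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}$$

The smaller the uncertainty in position, the larger the unavoidable uncertainty in momentum, and vice versa; this is the trade-off Boyd borrowed as a metaphor.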

Finally, Boyd made use of the Second Law of Thermodynamics. In a closed system, entropy always increases and everything moves towards chaos. Energy spreads out and becomes disorganized.

Although Boyd’s notes do not specify the exact applications, his inference appears to be that a fighter pilot must be an open system or they will fail. They must draw “energy” (information) from outside themselves or the situation will become chaotic. They should also aim to cut their opponent off, forcing them to become a closed system. Drawing on his studies, Boyd developed his Energy Maneuverability theory, which recast maneuvers in terms of the energy they used.
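The core of Energy-Maneuverability theory is usually summarized by the specific excess power equation (a standard form of the result; Boyd’s own notation may differ):

$$P_s = \frac{(T - D)\,V}{W}$$

where T is thrust, D is drag, V is airspeed, and W is aircraft weight. A maneuver that keeps P_s positive gains energy; one that drives it negative bleeds energy away.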

“Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.”

— Sun Tzu

3. Unpredictability

Using the OODA loop should enable us to act faster than an opponent, thereby seeming unpredictable. While they are still deciding what to do, we have already acted. This resets their own loop, moving them back to the observation stage. Keep doing this, and they are either rendered immobile or forced to act without making a considered decision. So, they start making mistakes, which can be exploited.

Boyd recommended making unpredictable changes in speed and direction, and wrote, “we should operate at a faster tempo than our adversaries or inside our adversaries[’] time scales. … Such activity will make us appear ambiguous (non predictable) [and] thereby generate confusion and disorder among our adversaries.” He even helped design planes better equipped to make those unpredictable changes.

For the same reason that you can’t run the same play 70 times in a football game, rigid military strategies often become useless after a few uses, or even one iteration, as opponents learn to recognize and counter them. The OODA loop can be endlessly used because it is a formless strategy, unconnected to any particular maneuvers.

We know that Boyd was influenced by Sun Tzu (he owned seven thoroughly annotated copies of The Art of War), and he drew many ideas from the ancient strategist. Sun Tzu depicts war as a game of deception where the best strategy is that which an opponent cannot pre-empt. Apple has long used this strategy as a key part of their product launches. Meticulously planned, their launches are shrouded in secrecy and the goal is for no one outside the company to see a product prior to the release.

When information has been leaked, the company has taken serious legal action as well as firing associated employees. We are never sure what Apple will put out next (just search for “Apple product launch 2017” and you will see endless speculation based on few facts). As a consequence, Apple can stay ahead of their rivals.

Once a product launches, rival companies scramble to emulate it. But by the time their technology is ready for release, Apple is on to the next thing and has taken most of the market share. Although this approach is inexpensive compared to the drawn-out product launches other companies use, Apple’s unpredictability makes us pay attention. Stock prices rise the day after, tickets to launches sell out in seconds, and the media reports launches as if they were news events, not marketing events.

4. Testing

A notable omission in Boyd’s work is any sort of specific instructions for how to act or which decisions to make. This is presumably due to his respect for testing. He believed that ideas should be tested and then, if necessary, discarded.

“We can’t just look at our own personal experiences or use the same mental recipes over and over again; we’ve got to look at other disciplines and activities and relate or connect them to what we know from our experiences and the strategic world we live in.”

— John Boyd

Boyd’s OODA is a feedback loop, with the outcome of actions leading back to observations. Even in Aerial Attack Study, his comprehensive manual of maneuvers, Boyd did not describe any particular one as superior. He encouraged pilots to have the widest repertoire possible so they could select the best option in response to the maneuvers of an opponent.

We can incorporate testing into our decision-making processes by keeping track of outcomes in decision journals. Boyd’s notes indicate that he may have done just that during his time as a fighter pilot, building up the knowledge that went on to form Aerial Attack Study. Rather than guessing how our decisions lead to certain outcomes, we can get a clear picture to aid us in future orientation stages. Over time, our decision journals will reveal what works and what doesn’t.

Applying the OODA Loop

In sports, there is an adage that carries over to business quite well: “Speed kills.” If you are able to be nimble, able to assess the ever-changing environment and adapt quickly, you’ll always carry the advantage over your opponent.

Start applying the OODA loop to your day-to-day decisions and watch what happens. You’ll start to notice things that you would have been oblivious to before. Before jumping to your first conclusion, you’ll pause to consider your biases, take in additional information, and be more thoughtful of consequences.

As with anything you practice, if you do it right, the more you do it, the better you’ll get. You’ll start making better decisions more quickly. You’ll see more rapid progress. And as John Boyd would prescribe, you’ll start to DO something in your life, and not just BE somebody.

***


Poker, Speeding Tickets, and Expected Value: Making Decisions in an Uncertain World

“Take the probability of loss times the amount of possible loss from the probability of gain times the amount of possible gain. That is what we’re trying to do. It’s imperfect but that’s what it’s all about.”

— Warren Buffett

By weighing probabilities, you can train your brain to think like the CEOs, professional poker players, and investors who make tricky decisions in an uncertain world.

All decisions involve potential tradeoffs and opportunity costs. The question is, how can we make the best possible choices when the factors involved are often so complicated and confusing? How can we determine which statistics and metrics are worth paying attention to? How do we think about averages?

Expected value is one of the simplest tools you can use to think better. While not a natural way of thinking for most people, it instantly turns the world into shades of grey by forcing us to weigh probabilities and outcomes. Once we’ve mastered it, our decisions become supercharged. We know which risks to take, when to quit projects, when to go all in, and more.

Expected value refers to the long-run average of a random variable.

If you flip a fair coin ten times, the heads-to-tails ratio will probably not be exactly equal. If you flip it one hundred times, the ratio will be closer to 50:50, though still not exact. But over a very large number of flips, you can expect heads to come up half the time and tails the other half. The law of large numbers dictates that the observed average will, in the long run, converge to the expected value, even if the first few flips look lopsided.

The more coin flips, the closer you get to the 50:50 ratio. If you bet a sum of money on a coin flip, the potential winnings on a fair coin have to be bigger than your potential loss to make the expected value positive.
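A quick simulation makes the convergence visible, and the final comment shows the payout arithmetic (both the simulation and the $2.10 payout figure are illustrative, not from the article):

```python
import random

random.seed(1)
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: {heads / n:.4f} heads")   # drifts toward 0.5000

# A $1 bet on heads that returns $2.10 when it wins has expected value
# 0.5 * 2.10 - 1 = +$0.05 per flip; any total payout below $2 makes it negative.
```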

We make many expected-value calculations without even realizing it. If we decide to stay up late and have a few drinks on a Tuesday, we regard the expected value of an enjoyable evening as higher than the expected costs the following day. If we decide to always leave early for appointments, we weigh the expected value of being on time against the frequent instances when we arrive early. When we take on work, we view the expected value in terms of income and other career benefits as higher than the cost in terms of time and/or sanity.

Likewise, anyone who reads a lot knows that most books they choose will have minimal impact on them, while a few books will change their lives and be of tremendous value. Looking at the required time and money as an investment, books have a positive expected value (provided we choose them with care and make use of the lessons they teach).

These decisions might seem obvious. But the math behind them would be somewhat complicated if we tried to sit down and calculate it. Who pulls out a calculator before deciding whether to open a bottle of wine (certainly not me) or walk into a bookstore?

The factors involved are impossible to quantify in a non-subjective manner – like trying to explain how to catch a baseball. We just have a feel for them. This expected-value analysis is unconscious – something to consider if you have ever labeled yourself as “bad at math.”

Parking Tickets

Another example of expected value is parking tickets. Let’s say that a parking spot costs $5 and the fine for not paying is $10. If you can expect to be caught only one-third of the time, why pay for parking? The expected cost of skipping payment (one-third of $10, about $3.33) is less than the $5 fee, so the fine is a weak disincentive. You can park without paying three times and expect to pay only $10 in fines, instead of $15 for three parking spots. But if the fine is $100, paying becomes worthwhile whenever the probability of getting caught is higher than one in twenty. This is why fines tend to seem excessive: they have to cover the people who are not caught while still giving everyone an incentive to pay.
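The arithmetic behind the example, spelled out:

```python
fee, fine, p_caught = 5, 10, 1 / 3

expected_cost_of_skipping = p_caught * fine      # about $3.33 per visit
print(expected_cost_of_skipping < fee)           # True: the $10 fine is too weak

# Raise the fine to $100 and paying the $5 fee is worthwhile whenever the
# chance of being caught exceeds fee / fine = 5 / 100 = 1 in 20.
break_even_probability = fee / 100               # 0.05
```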

Consider speeding tickets. Here, the expected value can be more abstract, encompassing different factors. If speeding on the way to work saves 15 minutes, then a monthly $100 fine might seem worthwhile to some people. For most of us, though, a weekly fine would mean that speeding has a negative expected value. Add in other disincentives (such as the loss of your driver’s license), and speeding is not worth it. So the calculation is not just financial; it takes into account other tradeoffs as well.

The same goes for free samples and trial periods on subscription services. Many companies (such as Graze, Blue Apron, and Amazon Prime) offer generous free trials. How can they afford to do this? Again, it comes down to expected value. The companies know how much the free trials cost them. They also know the probability of someone’s paying afterwards and the lifetime value of a customer. Basic math reveals why free trials are profitable. Say that a free trial costs the company $10 per person, and one in ten people then sign up for the paid service, going on to generate $150 in profits. The expected value is positive. If only one in twenty people sign up, the company needs to find a cheaper free trial or scrap it.
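In code, using the numbers from the example above:

```python
trial_cost = 10           # what the free trial costs the company per person
conversion_rate = 1 / 10  # share of trial users who go on to pay
lifetime_profit = 150     # profit generated by a paying customer

ev_per_trial = conversion_rate * lifetime_profit - trial_cost
print(ev_per_trial)       # +5.0: each trial is worth $5 on average

# At a 1-in-20 conversion rate: 0.05 * 150 - 10 = -2.5, so the trial
# must get cheaper or be scrapped.
```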

Similarly, expected value applies to services that offer a free “lite” version (such as Buffer and Spotify). Doing so costs them a small amount or even nothing. Yet it increases the chance of someone’s deciding to pay for the premium version. For the expected value to be positive, the combined cost of the people who never upgrade needs to be lower than the profit from the people who do pay.

Lottery tickets prove useless when viewed through the lens of expected value. If a ticket costs $1 and there is a possibility of winning $500,000, it might seem as if the expected value of the ticket is positive. But it is almost always negative. If one million people each purchase a ticket for a single $500,000 prize, the expected value of a ticket is $0.50. The difference between that and the $1 price is the profit the lottery makes. Only on sporadic occasions is the expected value positive, even though the probability of winning remains minuscule.
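The same calculation for the lottery example:

```python
ticket_price = 1
prize = 500_000
tickets_sold = 1_000_000

expected_value = prize / tickets_sold        # $0.50 of prize per $1 ticket
house_take = ticket_price - expected_value   # $0.50 kept, on average, per ticket
print(expected_value, house_take)
```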

Failing to understand expected value is a common source of poor judgment. Getting a grasp of it can help us overcome many limitations and cognitive biases.

“Constantly thinking in expected value terms requires discipline and is somewhat unnatural. But the leading thinkers and practitioners from somewhat varied fields have converged on the same formula: focus not on the frequency of correctness, but on the magnitude of correctness.”

— Michael Mauboussin

Expected Value and Poker

Let’s look at poker. How do professional poker players manage to win large sums of money and hold impressive track records? Well, we can be certain that the answer isn’t all luck, although there is some of that involved.

Professional players rely on mathematical mental models that create order among random variables. Although these models are basic, it takes extensive experience to create the fingerspitzengefühl (“fingertips feeling,” or instinct) necessary to use them.

A player needs to make correct calculations every minute of a game with an automaton-like mindset. Emotions and distractions can corrupt the accuracy of the raw math.

In a game of poker, the expected value is the average return on each dollar invested in the pot. Each time a player bets or calls, they weigh the probability of winning against the amount they must put at risk. If a player is risking $100 with a 1-in-5 probability of success, the pot must contain at least $500 for the call to be safe: a one-in-five chance at $500 is worth $100 in expected winnings, against $80 in expected losses. If the pot contains only $300 at the same probability, the expected value is negative ($60 in expected winnings against $80 in expected losses). The idea is that even if this tactic is unsuccessful at times, in the long run, the player will profit.
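One common way to run the numbers, treating “the pot” as what is already on the table before the call:

```python
def call_ev(pot, call, p_win):
    """Expected value of calling: win the pot with probability p_win,
    lose the call amount otherwise."""
    return p_win * pot - (1 - p_win) * call

print(call_ev(pot=500, call=100, p_win=0.2))   # +20.0: profitable in the long run
print(call_ev(pot=300, call=100, p_win=0.2))   # -20.0: a losing call on average
```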

Expected-value analysis gives players a clear idea of probabilistic payoffs. Successful poker players can win millions one week, then make nothing or lose money the next, depending on the probability of winning. Even the best possible hands can lose due to simple probability. With each move, players also need to use Bayesian updating to adapt their calculations, because sticking with a prior figure could prove disastrous. Casinos make their fortunes from people who bet on situations with a negative expected value.

Expected Value and the Ludic Fallacy

In The Black Swan, Nassim Taleb explains the difference between everyday randomness and randomness in the context of a game or casino. Taleb coined the term “ludic fallacy” to refer to “the misuse of games to model real-life situations.” (Or, as the website logicallyfallacious.com puts it: the assumption that flawless statistical models apply to situations where they don’t actually apply.)

In Taleb’s words, gambling is “sterilized and domesticated uncertainty. In the casino, you know the rules, you can calculate the odds… ‘The casino is the only human venture I know where the probabilities are known, Gaussian (i.e., bell-curve), and almost computable.’ You cannot expect the casino to pay out a million times your bet, or to change the rules abruptly during the game….”

Games like poker have a defined, calculable expected value. That’s because we know the outcomes, the cards, and the math. Most decisions are more complicated. If you decide to bet $100 that it will rain tomorrow, the expected value of the wager is incalculable. The factors involved are too numerous and complex to compute. Relevant factors do exist; you are more likely to win the bet if you live in England than if you live in the Sahara, for example. But that doesn’t rule out Black Swan events, nor does it give you the neat probabilities which exist in games. In short, there is a key distinction between Knightian risks, which are computable because we have enough information to calculate the odds, and Knightian uncertainty, which is non-computable because we don’t have enough information to calculate odds accurately. (This distinction between risk and uncertainty is based on the writings of economist Frank Knight.) Poker falls into the former category. Real life is in the latter. If we take the concept literally and only plan for the expected, we will run into some serious problems.

As Taleb writes in Fooled By Randomness:

Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance. Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem or a brain teaser. Mother nature does not tell you how many holes there are on the roulette table, nor does she deliver problems in a textbook way (in the real world one has to guess the problem more than the solution).

The Monte Carlo Fallacy

Even in the domesticated environment of a casino, probabilistic thinking can go awry if the principle of expected value is forgotten. This famously occurred at the Monte Carlo Casino in 1913. A group of gamblers lost millions when the roulette wheel landed on black 26 times in a row. A run of 26 blacks is no more or less likely than any of the other 67,108,863 possible red/black sequences of that length, but the people present kept thinking, “It has to be red next time.” They saw the likelihood of the wheel landing on red as higher each time it landed on black. In hindsight, what sense does that make? A roulette wheel does not remember the color it landed on last time. The likelihood of red or black is the same on every spin, regardless of the previous one (and, because of the green zero, it is actually slightly under 50%). An even-money bet pays out only double the stake when it wins, so the expected value of each spin is negative.
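A quick simulation makes the point; this sketch assumes a European wheel with a single green zero, and the streak length is arbitrary:

```python
import random

# A European roulette wheel: 18 red, 18 black, 1 green zero. Spins are independent.
POCKETS = ["red"] * 18 + ["black"] * 18 + ["green"]

spins = [random.choice(POCKETS) for _ in range(1_000_000)]

# Compare the overall chance of red with the chance of red right after five blacks in a row.
p_red_overall = sum(s == "red" for s in spins) / len(spins)
after_streak = [spins[i] for i in range(5, len(spins)) if spins[i - 5:i] == ["black"] * 5]
p_red_after_streak = sum(s == "red" for s in after_streak) / len(after_streak)

print(f"P(red) on any spin:    {p_red_overall:.3f}")       # ~0.486 (18/37)
print(f"P(red) after 5 blacks: {p_red_after_streak:.3f}")  # ~0.486, the streak changes nothing
```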

“A lot of people start out with a 400-horsepower motor but only get 100 horsepower of output. It’s way better to have a 200-horsepower motor and get it all into output.”

— Warren Buffett

Given all the casinos and roulette tables in the world, the Monte Carlo incident had to happen at some point. Perhaps some day a roulette wheel will land on red 26 times in a row and the incident will repeat. The gamblers involved did not consider the negative expected value of each bet they made. We know this mistake as the Monte Carlo fallacy (or the “gambler’s fallacy” or “the fallacy of the maturity of chances”) – the assumption that the outcomes of prior independent events influence the outcomes of future independent events. In other words, people assume that “a random process becomes less random and more predictable as it is repeated”1.

It’s a common error. People who play the lottery for years without success think that their chance of winning rises with each ticket, but the expected value is unchanged between iterations. Amos Tversky and Daniel Kahneman consider this kind of thinking a component of the representativeness heuristic, stating that the more we believe we control random events, the more likely we are to succumb to the Monte Carlo fallacy.

Magnitude over Frequency

Steven Crist, in his contribution to the book Bet with the Best, offers an example of how an expected-value mindset can be applied. Consider a hypothetical race with four horses. If you’re trying to maximize return on investment, even the horse most likely to win may be worth avoiding if the odds on offer are too short. Crist writes,

“The point of this exercise is to illustrate that even a horse with a very high likelihood of winning can be either a very good or a very bad bet, and that the difference between the two is determined by only one thing: the odds.”2

Everything comes down to payoffs. A horse with a 50% chance of winning might be a good bet, but it depends on the payoff. The same holds for a 100-to-1 longshot. It’s not the frequency of winning but the magnitude of the win that matters.
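A short sketch with invented probabilities and odds (not Crist’s actual figures) shows how the payoff, not the win rate, decides whether a bet is worth making:

```python
def bet_ev(win_probability, odds, stake=1.0):
    """Expected profit on a win bet paying `odds`-to-1 (odds=3 pays $3 profit per $1 staked)."""
    return win_probability * odds * stake - (1 - win_probability) * stake

# A favorite with a 50% chance of winning:
print(bet_ev(0.50, odds=0.5))   # -0.25 -> wins often, loses money at 1-to-2
print(bet_ev(0.50, odds=1.5))   #  0.25 -> the same horse is a good bet at 3-to-2

# A longshot with only a 2% chance of winning:
print(bet_ev(0.02, odds=100))   #  1.02 -> rarely wins, but the payoff makes it attractive
```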

Error Rates, Averages, and Variability

When Bill Gates walks into a room with 20 people, the average wealth per person in the room quickly goes beyond a billion dollars. It doesn’t matter if the 20 people are wealthy or not; Gates’s wealth is off the charts and distorts the results.

An old joke tells of the man who drowns in a river which is, on average, three feet deep. If you’re deciding whether to cross a river and can’t swim, the range of depths matters a heck of a lot more than the average depth.
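A small sketch with made-up numbers shows both effects: a single outlier drags the mean away from anything representative, and an average hides the extreme that actually matters:

```python
from statistics import mean, median

# Twenty people with ordinary savings, plus one extreme outlier (all figures invented).
wealth = [50_000] * 20 + [90_000_000_000]
print(f"mean:   ${mean(wealth):,.0f}")    # roughly $4.3 billion, which describes nobody in the room
print(f"median: ${median(wealth):,.0f}")  # $50,000, much closer to the typical person

# The river: an average depth of three feet says nothing about the part that drowns you.
depths_ft = [1, 2, 2, 3, 8, 2]
print(f"average depth: {mean(depths_ft):.0f} ft, deepest point: {max(depths_ft)} ft")
```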

The Use of Expected Value: How to Make Decisions in an Uncertain World

Thinking in terms of expected value requires discipline and practice. And yet, the top performers in almost any field think in terms of probabilities. While this isn’t natural for most of us, once you build the discipline into your process, you’ll see the quality of your thinking and decisions improve.

In poker, players can predict the likelihood of a particular outcome. In the vast majority of cases, we cannot predict the future with anything approaching accuracy. So what use is expected value outside gambling? It turns out, quite a lot. Recognizing how expected value works puts any of us at an advantage. We can mentally leap through various scenarios and understand how they affect outcomes.

Expected value takes into account wild deviations. Averages are useful, but they have limits, as the man who tried to cross the river discovered. When making predictions about the future, we need to consider the range of outcomes. The greater the possible variance from the average, the wider the range of outcomes our decisions should account for.

There’s a saying in the design world: when you design for the average, you design for no one. Large deviations can mean more risk, which is not always a bad thing. So expected-value calculations take into account the deviations. If we can make decisions with a positive expected value and the lowest possible risk, we open ourselves to substantial benefits.

Investors use expected value to make decisions. Choices with a positive expected value and minimal risk of losing money are wise. Even if some losses occur, the net gain should be positive over time. In investing, unlike in poker, the potential losses and gains cannot be calculated in exact terms. Expected-value analysis reveals opportunities that people who just use probabilistic thinking often miss. A trade with a low probability of success can still carry a high expected value. That’s why it is crucial to have a large number of robust mental models. As useful as probabilistic thinking can be, it has far more utility when combined with expected value.
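As a hedged illustration with invented numbers, the sketch below compares two positions with the same expected value but very different downside risk, plus a trade that loses more often than it wins yet still carries a high expected value:

```python
def expected_value(outcomes):
    """Sum of probability * profit over all mutually exclusive outcomes."""
    return sum(p * profit for p, profit in outcomes)

# Two positions with identical expected value but very different risk profiles.
steady = [(1.00, 100)]                  # a sure, modest gain
volatile = [(0.50, 300), (0.50, -100)]  # big swings either way
print(expected_value(steady), expected_value(volatile))  # 100.0 and 100.0

# A trade that fails 80% of the time, yet has a high expected value.
asymmetric = [(0.20, 1_000), (0.80, -50)]
print(expected_value(asymmetric))  # 160.0 -> wrong most of the time, still worth taking
```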

Understanding expected value is also an effective way to overcome the sunk costs fallacy. Many of our decisions are based on non-recoverable past investments of time, money, or resources. These investments are irrelevant; we can’t recover them, so we shouldn’t factor them into new decisions. Sunk costs push us toward situations with a negative expected value. For example, consider a company that has invested considerable time and money in the development of a new product. As the launch date nears, they receive irrefutable evidence that the product will be a failure. Perhaps research shows that customers are uninterested, or a competitor launches a similar, better product. The sunk costs fallacy would lead them to release their product anyway. Even if they take a loss. Even if it damages their reputation. After all, why waste the money they spent developing the product? Here’s why: because the launch has a negative expected value, which will only worsen their losses. An escalation of commitment will only increase sunk costs.
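A sketch of that launch decision with invented figures; the point is that the money already spent appears in neither branch of the comparison:

```python
# Hypothetical figures. The development money is already gone in both scenarios,
# so it cannot change which option is better going forward.
sunk_development_cost = 2_000_000

# Forward-looking expected values of the two remaining options.
launch_ev = 0.10 * 500_000 + 0.90 * (-800_000)  # small chance it sells, likely an expensive flop
shelve_ev = 0.0                                  # write it off and spend nothing more

print(f"launch: {launch_ev:,.0f}")  # -670,000 -> launching only deepens the loss
print(f"shelve: {shelve_ev:,.0f}")  # 0 -> the better choice, despite the sunk $2 million
```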

When we try to justify a prior expense, calculating the expected value can prevent us from worsening the situation. The sunk costs fallacy robs us of our most precious resource: time. Each day we face the choice between continuing and quitting numerous endeavors. Expected-value analysis reveals where we should persist and where we should cut our losses and move on to a better use of time and resources. It’s an efficient way to work smarter and avoid pouring effort into unnecessary projects.

Thinking in terms of expected value will make you feel awkward when you first try it. That’s the hardest thing about it; you need to practice it a while before it becomes second nature. Once you get the hang of it, you’ll see that it’s valuable in almost every decision. That’s why the most rational people in the world constantly think about expected value. They’ve uncovered the key insight that the magnitude of correctness matters more than its frequency. And yet, human nature is such that we’re happier when we’re frequently right.

Footnotes

1. From https://rationalwiki.org/wiki/Gambler’s_fallacy, accessed on 11 January 2018.

2. Steven Crist, “Crist on Value,” in Andrew Beyer et al., Bet with the Best: All New Strategies From America’s Leading Handicappers (New York: Daily Racing Form Press, 2001), 63-64.