Double Loop Learning: Download New Skills and Information into Your Brain

We’re taught single loop learning from the time we are in grade school, but there’s a better way. Double loop learning is the quickest and most efficient way to learn anything that you want to “stick.”

***

So, you’ve done the work necessary to have an opinion, learned the mental models, and considered how you make decisions. But how do you now implement these concepts and figure out which ones work best in your situation? How do you know what’s effective and what’s not? One solution to this dilemma is double loop learning.

We can think of double loop learning as learning based on Bayesian updating — the modification of goals, rules, or ideas in response to new evidence and experience. It might sound like another piece of corporate jargon, but double loop learning cultivates creativity and innovation for both organizations and individuals.

“Every reaction is a learning process; every significant experience alters your perspective.”

— Hunter S. Thompson

Single Loop Learning

The first time we aim for a goal, follow a rule, or make a decision, we are engaging in single loop learning. This is where many people get stuck and keep making the same mistakes. If we question our approaches and make honest self-assessments, we shift into double loop learning. It’s similar to the Orient stage in John Boyd’s OODA loop. In this stage, we assess our biases, question our mental models, and look for areas where we can improve. We collect data, seek feedback, and gauge our performance. In short, we can’t learn from experience without reflection. Only reflection allows us to distill the experience into something we can learn from.

In Teaching Smart People How to Learn, business theorist Chris Argyris compares single loop learning to a typical thermostat. It operates in a homeostatic loop, always seeking to return the room to the temperature at which the thermostat is set. A thermostat might keep the temperature steady, but it doesn’t learn. By contrast, double loop learning would entail the thermostat’s becoming more efficient over time. Is the room at the optimum temperature? What’s the humidity like today and would a lower temperature be more comfortable? The thermostat would then test each idea and repeat the process. (Sounds a lot like Nest.)
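
To make the contrast concrete, here is a minimal sketch in Python (the function names and numbers are hypothetical, not from Argyris): the single loop only corrects deviations from a fixed setpoint, while the double loop also questions whether the setpoint itself should change.

```python
# Minimal sketch of single vs. double loop control (illustrative only).

def single_loop(temp, setpoint):
    """Single loop: correct deviations, never question the goal."""
    if temp < setpoint:
        return "heat"
    if temp > setpoint:
        return "cool"
    return "idle"

def double_loop(temp, setpoint, comfort_feedback):
    """Double loop: first ask whether the setpoint itself should change."""
    # Reflection step: use outside feedback (occupants too warm or too cold)
    # to revise the governing variable before acting on it.
    if comfort_feedback == "too warm":
        setpoint -= 1
    elif comfort_feedback == "too cold":
        setpoint += 1
    return setpoint, single_loop(temp, setpoint)

print(single_loop(19, 21))              # 'heat'
print(double_loop(19, 21, "too warm"))  # (20, 'heat')
```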

Double Loop Learning

Double loop learning is part of action science — the study of how we act in difficult situations. Individuals and organizations need to learn if they want to succeed (or even survive). But few of us pay much attention to exactly how we learn and how we can optimize the process.

Even smart, well-educated people can struggle to learn from experience. We all know someone who’s been at the office for 20 years and claims to have 20 years of experience, but they really have one year repeated 20 times.

Not learning can actually make you worse off. The world is dynamic and always changing. If you’re standing still, then you won’t adapt. Forget moving ahead; you have to get better just to stay in the same relative spot, and not getting better means you’re falling behind.

Many of us are so focused on solving problems as they arise that we don’t take the time to reflect on them after we’ve dealt with them, and this omission dramatically limits our ability to learn from the experiences. Of course, we want to reflect, but we’re busy and we have more problems to solve — not to mention that reflecting on our idiocy is painful and we’re predisposed to avoid pain and protect our egos.

Reflection, however, is an example of an approach I call first-order negative, second-order positive. It’s got very visible short-term costs — it takes time and honest self-assessment about our shortcomings — but pays off in spades in the future. The problem is that the future is not visible today, so slowing down today to go faster at some future point seems like a bad idea to many. Plus, with the payoff so far in the future, it’s hard to connect it to the reflection we do today.

The Learning Dilemma: How Success Becomes an Impediment

Argyris wrote that many skilled people excel at single loop learning. It’s what we learn in academic situations. But if we are accustomed only to success, double loop learning can ignite defensive behavior. Argyris found this to be the reason learning can be so difficult. It’s not because we aren’t competent, but because we resist learning out of a fear of seeming incompetent. Smart people aren’t used to failing, so they struggle to learn from their mistakes and often respond by blaming someone else. As Argyris put it, “their ability to learn shuts down precisely at the moment they need it the most.”

In the same way that a muscle strengthens at the point of failure, we learn best after dramatic errors.

The problem is that single loop processes can be self-fulfilling. Consider managers who assume their employees are inept. They deal with this by micromanaging and making every decision themselves. Their employees have no opportunity to learn, so they become discouraged. They don’t even try to make their own decisions. This is a self-perpetuating cycle. For double loop learning to happen, the managers would have to let go a little. Allow someone else to make minor decisions. Offer guidance instead of intervention. Leave room for mistakes. In the long run, everyone would benefit. The same applies to teachers who think their students are going to fail an exam. The teachers become condescending and assign simple work. When the exam rolls around, guess what? Many of the students do badly. The teachers think they were right, so the same thing happens the next semester.

Many of the leaders Argyris studied blamed any problems on “unclear goals, insensitive and unfair leaders, and stupid clients” rather than making useful assessments. Complaining might be cathartic, but it doesn’t let us learn. Argyris explained that this defensive reasoning happens even when we want to improve. Single loop learning just happens to be a way of minimizing effort. We would go mad if we had to rethink our response every time someone asked how we are, for example. So everyone develops their own “theory of action—a set of rules that individuals use to design and implement their own behavior as well as to understand the behavior of others.” Most of the time, we don’t even consider our theory of action. It’s only when asked to explain it that the divide between how we act and how we think we act becomes apparent. Identifying the gap between our espoused theory of action and what we are actually doing is the hard part.

The Key to Double Loop Learning: Push to the Point of Failure

The first step Argyris identified is to stop getting defensive. Justification gets us nowhere. Instead, he advocates collecting and analyzing relevant data. What conclusions can we draw from experience? How can we test them? What evidence do we need to prove a new idea is correct?

The next step is to change our mental models. Break apart paradigms. Question where conventions came from. Pivot and make reassessments if necessary.

Problem-solving isn’t a linear process. We can’t make one decision and then sit back and await success.

Argyris found that many professionals are skilled at teaching others, yet find it difficult to recognize the problems they themselves cause (see Galilean Relativity). It’s easy to focus on other people; it’s much harder to look inward and face complex challenges. Doing so brings up guilt, embarrassment, and defensiveness. As John Gray put it, “If there is anything unique about the human animal, it is that it has the ability to grow knowledge at an accelerating rate while being chronically incapable of learning from experience.”

When we repeat a single loop process, it becomes a habit. Each repetition requires less and less effort. We stop questioning or reconsidering it, especially if it does the job (or appears to). While habits are essential in many areas of our lives, they don’t serve us well if we want to keep improving. For that, we need to push the single loop to the point of failure, to strengthen how we act in the double loop. It’s a bit like the Feynman technique — we have to dismantle what we know to see how solid it truly is.

“Fail early and get it all over with. If you learn to deal with failure… you can have a worthwhile career. You learn to breathe again when you embrace failure as a part of life, not as the determining moment of life.”

— Rev. William L. Swig

One example is the typical five-day, 9-to-5 work week. Most organizations stick to it year after year. They don’t reconsider the efficacy of a schedule designed for Industrial Revolution factory workers. This is single loop learning. It’s just the way things are done, but not necessarily the smartest way to do things.

The decisions made early on in an organization have the greatest long-term impact. Changing them in the months, years, or even decades that follow becomes a non-option. How to structure the work week is one such initial decision that becomes invisible. As G.K. Chesterton put it, “The things we see every day are the things we never see at all.” Sure, a 9-to-5 schedule might not be causing any obvious problems. The organization might be perfectly successful. But that doesn’t mean things cannot improve. It’s the equivalent of a child continuing to crawl because it gets them around. Why try walking if crawling does the job? Why look for another option if the current one is working?

A growing number of organizations are realizing that conventional work weeks might not be the most effective way to structure work time. They are using double loop learning to test other structures. Some organizations are trying shorter work days or four-day work weeks or allowing people to set their own schedules. Managers then keep track of how the tested structures affect productivity and profits. Over time, it becomes apparent whether the new schedule is better than the old one.

37signals is one company using double loop learning to restructure their work week. CEO Jason Fried began experimenting a few years ago. He tried out a four-day, 32-hour work week. He gave employees the whole of June off to explore new ideas. He cut back on meetings and created quiet spaces for focused work. Rather than following conventions, 37signals became a laboratory looking for ways of improving. Over time, what worked and what didn’t became obvious.

Double loop learning is about data-backed experimentation, not aimless tinkering. If a new idea doesn’t work, it’s time to try something else.

In an op-ed for The New York Times, Camille Sweeney and Josh Gosfield give the example of David Chang. Double loop learning turned his failing noodle bar into an award-winning empire.

After apprenticing as a cook in Japan, Mr. Chang started his own restaurant. Yet his early efforts were ineffective. He found himself overworked and struggling to make money. He knew his cooking was excellent, so how could he make it profitable? Many people would have quit or continued making irrelevant tweaks until the whole endeavor failed. Instead, Mr. Chang shifted from single to double loop learning. He began a process of honest self-assessment. One of his foundational beliefs was that the restaurant should serve only noodles, but he decided to change the menu to reflect his skills. In time, it paid off; “the crowds came, rave reviews piled up, awards followed and unimaginable opportunities presented themselves.” This is what double loop learning looks like in action: questioning everything and starting from scratch if necessary.

Josh Waitzkin’s approach (as explained in The Art of Learning) is similar. After reaching the heights of competitive chess, Waitzkin turned his focus to martial arts. He began with tai chi chuan. Martial arts and chess are, on the surface, completely different, but Waitzkin used double loop learning for both. He progressed quickly because he was willing to lose matches if doing so meant he could learn. He noticed that other martial arts students had a tendency to repeat their mistakes, letting fruitless habits become ingrained. Like the managers Argyris worked with, students grew defensive when challenged. They wanted to be right, even if it prevented their learning. In contrast, Waitzkin viewed practice as an experiment. Each session was an opportunity to test his beliefs. He mastered several martial arts, earning a black belt in jujitsu and winning a world championship in tai ji tui shou.

Argyris found that organizations learn best when people know how to communicate. (No surprise there.) Leaders need to listen actively and open up exploratory dialogues so that problematic assumptions and conventions can be revealed. Argyris identified some key questions to consider.

  • What is the current theory in use?
  • How does it differ from proposed strategies and goals?
  • What unspoken rules are being followed, and are they detrimental?
  • What could change, and how?
  • Forget the details; what’s the bigger picture?

Meaningful learning doesn’t happen without focused effort. Double loop learning is the key to turning experience into improvements, information into action, and conversations into progress.

Smarter, Not Harder: How to Succeed at Work

We each have 96 energy blocks each day to spend however we’d like. Using this energy blocking system will ensure you’re spending each block wisely to make the most progress on your most important goals.

Warren Buffett “ruled out paying attention to almost anything but business—art, literature, science, travel, architecture—so that he could focus on his passion,” wrote Alice Schroeder in her book The Snowball. This isn’t unique to Warren Buffett. Almost all of the successful people I know follow a similar approach to focusing their efforts.

The key to better outcomes is not working harder. Most of us already work long hours. We take work home, we’re always on, we tackle anything we’re asked to do, and we do it to the best of our ability. It doesn’t seem to matter how many things we check off our to-do lists or how many hours we work, though; our performance doesn’t seem to improve.

While we like to think of exceptionally successful people as being more talented than we are, the more I looked around, the more I discovered that was rarely the case. One of the reasons we think that talent is the explanation is that it gives us a pass. We’re not as talented as those super-successful people are, so of course we don’t have the same results they have. The problem with this explanation is that it’s wrong. Talent matters, of course, but not as much as you think.

As I looked around, I noticed that the most successful people I know have one thing in common: they are masters at eliminating the unnecessary from their lives. The French writer Antoine de Saint-Exupéry hit on the same idea, writing in his memoir, “Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away.” This principle, it turns out, is the key to success.

Incredibly successful people focus their time on just a few priorities and obsess over doing things right. This is simple but not easy.

Here’s one method to help you choose what to focus on and how to use your time (it’s a mix of time blocking and a variation of Warren Buffett’s two-list system):

Step 1: Change how you think about your day. Think of your day as having 96 blocks of energy, with each block being a 15-minute chunk of time (four blocks per hour × 24 hours = 96). A week has 672 blocks, and a year (52 weeks) has 34,944.

Not all of those blocks are direct productivity blocks — they can’t be unless we’re androids. Given that we’re human, we need to allocate some blocks to activities that humans require for good health, like sleeping. Sleeping for eight hours uses 32 blocks of your 96-block day. Let’s say that another 32 blocks go toward family, friends, commuting, and general life stuff. That leaves 32 blocks for you to apply your energy toward keeping your job and doing something amazing.
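
As a quick check on that arithmetic, here is a small sketch; the 32/32/32 split is the one assumed above, and your own numbers will differ.

```python
# Energy-block arithmetic for one day (numbers follow the split above).
BLOCKS_PER_HOUR = 4                      # 15-minute blocks
blocks_per_day = BLOCKS_PER_HOUR * 24    # 96

sleep = 8 * BLOCKS_PER_HOUR              # 32 blocks for eight hours of sleep
life = 32                                # family, friends, commuting, life stuff
work = blocks_per_day - sleep - life     # 32 blocks left to apply deliberately

print(blocks_per_day, sleep, life, work)      # 96 32 32 32
print(blocks_per_day * 7)                     # 672 blocks in a week
print(blocks_per_day * 7 * 52)                # 34944 blocks in a 52-week year
```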

Think you can get more done by sleeping less? Think again. Sleep has a way of affecting your other blocks. If you get enough sleep, the other 64 blocks are amplified. If you don’t get enough, their efficacy is reduced. Almost every successful person I know makes sleep a priority. Some go as far as getting ChiliPads to regulate their bed’s temperature and going to bed at exactly the same time every night; others use the same wind-down routine every night. Almost all of them go to bed early (or at least before 12) and wake up early to get a start on the day.

Step 2: Write a list of all the goals you have. When I did this, I stopped at 100 and I could have kept going. I would venture to guess that if you sat alone for half an hour, you’d come up with just as many. Writing them down not only frees up your mind from keeping track of them but also gives you a visual representation of just how many things you want to do.

Step 3: Circle your top three goals. Take your time; there’s no need to rush. It’s hard to narrow them down, which is why so few of us think about these things consciously.

Step 4: Eliminate everything else. This is where things get interesting. When it comes to the 32 blocks of work time you have to allocate, everything that’s not on your top-three list should be dropped. You can pick up the “everything-else” list after you’ve achieved a goal, but until then it’s what Warren Buffett calls your “avoid-at-all-costs” list.

The Power of Focus

Let’s look at an example. Say we’re working on 10 projects. We have priorities that we try to focus on, but we also give the other projects a decent effort. Let’s say we allocate our 32 blocks of energy to our 10 projects as follows:

1. 10
2. 5
3. 5
4. 3
5. 2
6. 2
7. 2
8. 1
9. 1
10. 1

Not bad, eh? But if we do the above exercise, it will look more like this:

1. 16
2. 8
3. 8

Focus directs your energy toward your goals. The more focused you are, the more energy goes toward what you’re working on.
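
To see the difference in numbers, here is a small sketch comparing the two allocations above (the block counts are the ones from the lists):

```python
# Compare the two ways of allocating 32 work blocks (counts from the lists above).
scattered = [10, 5, 5, 3, 2, 2, 2, 1, 1, 1]   # ten projects
focused = [16, 8, 8]                          # top three projects only

assert sum(scattered) == sum(focused) == 32   # same total energy either way

print(scattered[0], "->", focused[0])         # 10 -> 16 blocks on the top project
print(sum(scattered) / len(scattered))        # 3.2 blocks per project on average
print(sum(focused) / len(focused))            # ~10.7 blocks per project on average
```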

Eliminating things that you care about is hard. You have to make tradeoffs. If you can’t make those tradeoffs, you’re not going to get far. The cost of not being focused is high.

The direction you’re going in is important to the extent that you’re applying energy to it. If you’re focusing your energy on 10 goals, you’re not focused, and instead of having a few completed projects, you have numerous unfinished projects. Like Sisyphus, you’re constantly getting halfway up the mountain but never reaching the top. I can’t think of a bigger waste of time.

It’s not about working harder to get better results. You have only so much energy to apply. Pick what matters. Eliminate the rest.

Pain Plus Reflection Equals Progress

Our most painful moments are also our most important. Rather than run from pain, we need to identify it, accept it, and learn how to use it to better ourselves.

***

Our images of learning are filled with positive thoughts about how we learn from others. We read memoirs from the titans of industry, read op-ed pieces from thought leaders, and generally try to soak up as much as we can. With all this attention placed on learning and improving and knowing, it might surprise you that we’re missing one of the most obvious sources of learning: ourselves.

Pain is something we all try to avoid, both instinctively and consciously. But if you want to do amazing things in life, you need to change your relationship with pain. Ray Dalio, the longtime leader of Bridgewater, the largest hedge fund in the world, argues that pain “is a signal that you need to find solutions so you can progress.” Only by exploring it and reflecting on it can we start to learn and evolve. “After seeing how much more effective it is to face the painful realities that are caused by your problems, mistakes, and weaknesses, I believe you won’t want to operate any other way,” Dalio writes in his book Principles.

There is an adage that says if you’re not failing, then you’re not really pushing the limits of what’s possible. Sometimes when we push, we fall, and sometimes we break through. When we fall, the key is to reflect on the failures. Doing that in the moment, however, is often a very painful process that goes against our human operating system.

Our painful moments are important moments. When we confront something painful, we are left with a choice between an ugly and painful truth or a beautiful delusion. Many of us opt for the latter and it slows our progress.

We’ve known about this problem for a long time: We’ve watched others make mistakes and fail to learn from them. They are blind to the mistakes that are so clear to us. They run from the pain that could be the source of learning. They become comfortable operating without pain. They become comfortable protecting the version of themselves that existed yesterday, not the version of themselves that’s better than they were yesterday.

Rather than run from pain, we need to identify it, accept it, and learn how to use it to better ourselves. For us to adapt, we need to learn from the uncomfortable moments. We need to value a tough-love approach, where people show us what we’re missing and help us get better.

You have a choice: You can prefer that the people around you fail to point out your blind spots, or you can prefer that they do. If you want them to, it’s going to be uncomfortable. It’s going to be awkward. It’s going to hurt. Embracing this approach, however, means that you will learn faster and go further. It’s a great example of first-order negative, second-order positive. That means the first step is the hardest and it hurts, but after that, you reap the benefits.

Of course, many of us prefer to tell ourselves that we have no weaknesses. That the world is wrong, and we are right. We hide our weaknesses not only from others but also from ourselves. Being open about weaknesses means being open about who we are in the moment. It doesn’t mean that’s who we are forever. But we can’t improve what we can’t see.

Many of the people I talk to on our podcast have endured setbacks that seemed catastrophic at the time. Ray Dalio punched his boss in the face. Annie Duke lost millions. Countless others have been divorced, fired, or otherwise in a position where they felt unable to go on. I’ve been there too. It’s in these moments, however, that a meaningful part of life happens. Life is about what you do in the painful moments. The choices you make. The path you choose.

The easy path means being the same person you were yesterday. It’s easy and comfortable to convince yourself that the world should work differently than it does, that you have nothing to learn from the pain. The harder path is to embrace the pain and ask yourself what you could have done differently or better or what your blind spot was. It’s harder because you stop living in the bubble of your own creation and start living in reality.

The people who choose the easy path have a very hard life, whereas those who choose the harder path have an easier life. If we don’t learn to embrace being uncomfortable, we will need to learn how to embrace irrelevance, and that will be much harder.

Deductive vs Inductive Reasoning: Make Smarter Arguments, Better Decisions, and Stronger Conclusions

You can’t prove truth, but using deductive and inductive reasoning, you can get close. Learn the difference between the two types of reasoning and how to use them when evaluating facts and arguments.

***

As odd as it sounds, in science, law, and many other fields, there is no such thing as proof — there are only conclusions drawn from facts and observations. Scientists cannot prove a hypothesis, but they can collect evidence that points to its being true. Lawyers cannot prove that something happened (or didn’t), but they can provide evidence that seems irrefutable.

The question of what makes something true is more relevant than ever in this era of alternative facts and fake news. This article explores truth — what it means and how we establish it. We’ll dive into inductive and deductive reasoning as well as a bit of history.

“Contrariwise,” continued Tweedledee, “if it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic.”

— Lewis Carroll, Through the Looking-Glass

The essence of reasoning is a search for truth. Yet truth isn’t always as simple as we’d like to believe it is.

For as far back as we can imagine, philosophers have debated whether absolute truth exists. Although we’re still waiting for an answer, this doesn’t have to stop us from improving how we think by understanding a little more.

In general, we can consider something to be true if the available evidence seems to verify it. The more evidence we have, the stronger our conclusion can be. When it comes to samples, size matters. As my friend Peter Kaufman says:

What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history….

In some areas, it is necessary to accept that truth is subjective. For example, ethicists accept that it is difficult to establish absolute truths concerning whether something is right or wrong, as standards change over time and vary around the world.

When it comes to reasoning, a correctly phrased statement can be considered to have objective truth. Some statements have an objective truth that we cannot ascertain at present. For example, we do not have proof for the existence or non-existence of aliens, although proof does exist somewhere.

Deductive and inductive reasoning are both based on evidence.

Several types of evidence are used in reasoning to point to a truth:

  • Direct or experimental evidence — This relies on observations and experiments, which should be repeatable with consistent results.
  • Anecdotal or circumstantial evidence — Overreliance on anecdotal evidence can be a logical fallacy because it is based on the assumption that two coexisting factors are linked even though alternative explanations have not been explored. The main use of anecdotal evidence is for forming hypotheses which can then be tested with experimental evidence.
  • Argumentative evidence — We sometimes draw conclusions based on facts. However, this evidence is unreliable when the facts are not directly testing a hypothesis. For example, seeing a light in the sky and concluding that it is an alien aircraft would be argumentative evidence.
  • Testimonial evidence — When an individual presents an opinion, it is testimonial evidence. Once again, this is unreliable, as people may be biased and there may not be any direct evidence to support their testimony.

“The weight of evidence for an extraordinary claim must be proportioned to its strangeness.”

— Laplace, Théorie analytique des probabilités (1812)

Reasoning by Induction

The fictional character Sherlock Holmes is a master of induction. He is a careful observer who processes what he sees to reach the most likely conclusion in the given set of circumstances. Although he pretends that his knowledge is of the black-or-white variety, it often isn’t. It is true induction, coming up with the strongest possible explanation for the phenomena he observes.

Consider his description of how, upon first meeting Watson, he reasoned that Watson had just come from Afghanistan:

“Observation with me is second nature. You appeared to be surprised when I told you, on our first meeting, that you had come from Afghanistan.”
“You were told, no doubt.”

“Nothing of the sort. I knew you came from Afghanistan. From long habit the train of thoughts ran so swiftly through my mind, that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, ‘Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.’ The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished.”

(From Sir Arthur Conan Doyle’s A Study in Scarlet)

Inductive reasoning involves drawing conclusions from facts, using logic. We draw these kinds of conclusions all the time. If someone we know to have good literary taste recommends a book, we may assume that means we will enjoy the book.

Induction can be strong or weak. If an inductive argument is strong, the truth of the premises makes the conclusion likely. If an inductive argument is weak, the premises lend the conclusion little support.

There are several key types of inductive reasoning:

  • Generalized — Draws a conclusion from a generalization. For example, “All the swans I have seen are white; therefore, all swans are probably white.”
  • Statistical — Draws a conclusion based on statistics. For example, “95 percent of swans are white” (an arbitrary figure, of course); “therefore, a randomly selected swan will probably be white.”
  • Sample — Draws a conclusion about one group based on a different, sample group. For example, “There are ten swans in this pond and all are white; therefore, the swans in my neighbor’s pond are probably also white.”
  • Analogous — Draws a conclusion based on shared properties of two groups. For example, “All Aylesbury ducks are white. Swans are similar to Aylesbury ducks. Therefore, all swans are probably white.”
  • Predictive — Draws a conclusion based on a prediction made using a past sample. For example, “I visited this pond last year and all the swans were white. Therefore, when I visit again, all the swans will probably be white.”
  • Causal inference — Draws a conclusion based on a causal connection. For example, “All the swans in this pond are white. I just saw a white bird in the pond. The bird was probably a swan.”

The entire legal system is designed to be based on sound reasoning, which in turn must be based on evidence. Lawyers often use inductive reasoning to draw a relationship between facts for which they have evidence and a conclusion.

The initial facts are often based on generalizations and statistics, with the implication that a conclusion is most likely to be true, even if that is not certain. For that reason, evidence can rarely be considered certain. For example, a fingerprint taken from a crime scene would be said to be “consistent with a suspect’s prints” rather than being an exact match. Implicit in that statement is the assertion that it is statistically unlikely that the prints are not the suspect’s.

Inductive reasoning also involves Bayesian updating. A conclusion can seem to be true at one point until further evidence emerges and a hypothesis must be adjusted. Bayesian updating is a technique used to modify the probability of a hypothesis’s being true as new evidence is supplied. When inductive reasoning is used in legal situations, Bayesian thinking is used to update the likelihood of a defendant’s being guilty beyond a reasonable doubt as evidence is collected. If we imagine a simplified, hypothetical criminal case, we can picture the utility of Bayesian inference combined with inductive reasoning.

Let’s say someone is murdered in a house where five other adults were present at the time. One of them is the primary suspect, and there is no evidence of anyone else entering the house. The initial probability of the prime suspect’s having committed the murder is 20 percent. Other evidence will then adjust that probability. If the four other people testify that they saw the suspect committing the murder, the suspect’s prints are on the murder weapon, and traces of the victim’s blood were found on the suspect’s clothes, jurors may consider the probability of that person’s guilt to be close enough to 100 percent to convict. Reality is more complex than this, of course. The conclusion is never certain, only highly probable.
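
A rough sketch of how that updating might run, using invented likelihood ratios for each piece of evidence (none of these figures come from the article; they only illustrate the mechanics):

```python
# Bayesian updating for the hypothetical case above (all figures invented).
# Prior: one of five adults present, so P(guilty) starts at 0.20.

def update(prior, likelihood_ratio):
    """Posterior after one piece of evidence, via the odds form of Bayes's rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.20
evidence = [("eyewitness testimony", 30),      # each ratio = P(E|guilty) / P(E|innocent)
            ("prints on the weapon", 20),
            ("victim's blood on clothes", 50)]
for label, lr in evidence:
    p = update(p, lr)
    print(f"After {label}: P(guilty) = {p:.4f}")
# The probability climbs toward, but never reaches, certainty.
```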

One key distinction between deductive and inductive reasoning is that the latter accepts that a conclusion is uncertain and may change in the future. A conclusion is either strong or weak, not right or wrong. We tend to use this type of reasoning in everyday life, drawing conclusions from experiences and then updating our beliefs.

Everyday inductive reasoning is not always correct, but it is often useful. For example, superstitious beliefs often originate from inductive reasoning. If an athlete performed well on a day when they wore their socks inside out, they may conclude that the inside-out socks brought them luck. If future successes happen when they again wear their socks inside out, the belief may strengthen. Should that not be the case, they may update their belief and recognize that it is incorrect.

Another example (let’s set aside the question of whether turkeys can reason): A farmer feeds a turkey every day, so the turkey assumes that the farmer cares for its wellbeing. Only when Thanksgiving rolls around does that assumption prove incorrect.

The issue with overusing inductive reasoning is that cognitive shortcuts and biases can warp the conclusions we draw. Our world is not always as predictable as inductive reasoning suggests, and we may selectively draw upon past experiences to confirm a belief. Someone who reasons inductively that they have bad luck may recall only unlucky experiences to support that hypothesis and ignore instances of good luck.

In The 12 Secrets of Persuasive Argument, the authors write:

In inductive arguments, focus on the inference. When a conclusion relies upon an inference and contains new information not found in the premises, the reasoning is inductive. For example, if premises were established that the defendant slurred his words, stumbled as he walked, and smelled of alcohol, you might reasonably infer the conclusion that the defendant was drunk. This is inductive reasoning. In an inductive argument the conclusion is, at best, probable. The conclusion is not always true when the premises are true. The probability of the conclusion depends on the strength of the inference from the premises. Thus, when dealing with inductive reasoning, pay special attention to the inductive leap or inference, by which the conclusion follows the premises.

… There are several popular misconceptions about inductive and deductive reasoning. When Sherlock Holmes made his remarkable “deductions” based on observations of various facts, he was usually engaging in inductive, not deductive, reasoning.

In Inductive Reasoning, Aiden Feeney and Evan Heit write:

…inductive reasoning … corresponds to everyday reasoning. On a daily basis we draw inferences such as how a person will probably act, what the weather will probably be like, and how a meal will probably taste, and these are typical inductive inferences.

[…]

[I]t is a multifaceted cognitive activity. It can be studied by asking young children simple questions involving cartoon pictures, or it can be studied by giving adults a variety of complex verbal arguments and asking them to make probability judgments.

[…]

[I]nduction is related to, and it could be argued is central to, a number of other cognitive activities, including categorization, similarity judgment, probability judgment, and decision making. For example, much of the study of induction has been concerned with category-based induction, such as inferring that your next door neighbor sleeps on the basis that your neighbor is a human animal, even if you have never seen your neighbor sleeping.

“A very great deal more truth can become known than can be proven.”

— Richard Feynman

Reasoning by Deduction

Deduction begins with a broad truth (the major premise), such as the statement that all men are mortal. This is followed by the minor premise, a more specific statement, such as that Socrates is a man. A conclusion follows: Socrates is mortal. If the major premise is true and the minor premise is true, the conclusion cannot be false.

Deductive reasoning is black and white; a conclusion is either true or false and cannot be partly true or partly false. We decide whether a deductive argument holds by assessing the strength of the link between the premises and the conclusion. If all men are mortal and Socrates is a man, there is no way he can not be mortal, for example. If the premises are true, there is no situation in which the conclusion can be false.

In science, deduction is used to reach conclusions believed to be true. A hypothesis is formed; then evidence is collected to support it. If observations support its truth, the hypothesis is confirmed. Statements are structured in the form of “if A equals B, and C is A, then C is B.” If A does not equal B, then C will not equal B. Science also involves inductive reasoning when broad conclusions are drawn from specific observations; data leads to conclusions. If the data shows a tangible pattern, it will support a hypothesis.
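
As a toy illustration of that “if A is B, and C is A, then C is B” structure, here is a sketch that treats the classic syllogism as a membership check (the data is invented):

```python
# Toy sketch of a deductive syllogism as a membership check (invented data).
mortal_kinds = {"human", "dog", "swan"}     # major premise: all humans are mortal
kind_of = {"Socrates": "human"}             # minor premise: Socrates is a human

def is_mortal(name):
    """If both premises hold, the conclusion follows necessarily."""
    return kind_of[name] in mortal_kinds

print(is_mortal("Socrates"))                # True: the conclusion cannot be false
```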

For example, having seen ten white swans, we could use inductive reasoning to conclude that all swans are white. This hypothesis is easier to disprove than to prove: it is not guaranteed to be true, but it holds given the existing evidence and the absence of any observed counterexample. By combining both types of reasoning, science moves closer to the truth. In general, the more outlandish a claim is, the stronger the evidence supporting it must be.

We should be wary of deductive reasoning that appears to make sense without pointing to a truth. Someone could say “A dog has four paws. My pet has four paws. Therefore, my pet is a dog.” The conclusion sounds logical but isn’t, because the premises don’t guarantee it: having four paws is not unique to dogs, so the conclusion does not follow.

The History of Reasoning

The discussion of reasoning and what constitutes truth dates back to Plato and Aristotle.

Plato (429–347 BC) believed that all things are divided into the visible and the intelligible. Intelligible things can be known through deduction (with observation being of secondary importance to reasoning) and are true knowledge.

Aristotle took an inductive approach, emphasizing the need for observations to support knowledge. He believed that we can reason only from discernible phenomena. From there, we use logic to infer causes.

Debate about reasoning remained much the same until the time of Isaac Newton. Newton’s innovative work was based on observations, but also on concepts that could not be explained by a physical cause (such as gravity). In his Principia, Newton outlined four rules for reasoning in the scientific method:

  1. “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” (We refer to this rule as Occam’s Razor.)
  2. “Therefore, to the same natural effects we must, as far as possible, assign the same causes.”
  3. “The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.”
  4. “In experimental philosophy, we are to look upon propositions collected by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, ’till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.”

In 1843, philosopher John Stuart Mill published A System of Logic, which further refined our understanding of reasoning. Mill believed that science should be based on a search for regularities among events. If a regularity is consistent, it can be considered a law. Mill described five methods for identifying causes by noting regularities. These methods are still used today:

  • Direct method of agreement — If two instances of a phenomenon have a single circumstance in common, the circumstance is the cause or effect.
  • Method of difference — If a phenomenon occurs in one experiment and does not occur in another, and the experiments are the same except for one factor, that is the cause, part of the cause, or the effect.
  • Joint method of agreement and difference — If two instances of a phenomenon have one circumstance in common, and two instances in which it does not occur have nothing in common except the absence of that circumstance, then that circumstance is the cause, part of the cause, or the effect.
  • Method of residue — When you subtract any part of a phenomenon known to be caused by a certain antecedent, the remaining residue of the phenomenon is the effect of the remaining antecedents.
  • Method of concomitant variations — If a phenomenon varies when another phenomenon varies in a particular way, the two are connected.

Karl Popper was the next theorist to make a serious contribution to the study of reasoning. Popper is well known for his focus on disconfirming evidence and disproving hypotheses. Beginning with a hypothesis, we use deductive reasoning to make predictions. A hypothesis will be based on a theory — a set of independent and dependent statements. If a prediction fails, the theory is falsified; if predictions hold, the theory is corroborated but not proven. Popper’s theory of falsification (disproving something) is based on the idea that we cannot prove a hypothesis; we can only show that certain predictions are false. This process requires rigorous testing to identify any anomalies, and Popper does not accept theories that cannot be physically tested. Any phenomenon not present in tests cannot be the foundation of a theory, according to Popper. The phenomenon must also be consistent and reproducible. Popper’s theories acknowledge that theories that are accepted at one time are likely to later be disproved. Science is always changing as more hypotheses are modified or disproved and we inch closer to the truth.

Conclusion

In How to Deliver a TED Talk, Jeremey Donovan writes:

No discussion of logic is complete without a refresher course in the difference between inductive and deductive reasoning. By its strictest definition, inductive reasoning proves a general principle—your idea worth spreading—by highlighting a group of specific events, trends, or observations. In contrast, deductive reasoning builds up to a specific principle—again, your idea worth spreading—through a chain of increasingly narrow statements.

Logic is an incredibly important skill, and because we use it so often in everyday life, we benefit by clarifying the methods we use to draw conclusions. Knowing what makes an argument sound is valuable for making decisions and understanding how the world works. It helps us to spot people who are deliberately misleading us through unsound arguments. Understanding reasoning is also helpful for avoiding fallacies and for negotiating.

The Pygmalion Effect: Proving Them Right

The Pygmalion Effect is a powerful secret weapon. Without even realizing it, we can nudge others towards success. In this article, discover how expectations can influence performance for better or worse.

How Expectations Influence Performance

Many people believe that their pets or children are of unusual intelligence or can understand everything they say. Some people have stories of abnormal feats. In the late 19th century, one man made such claims about his horse and appeared to have evidence. William Von Osten was a teacher and horse trainer. He believed that animals could learn to read or count. Von Osten’s initial attempts with dogs and a bear were unsuccessful, but when he began working with an unusual horse, he changed our understanding of psychology. Known as Clever Hans, the animal could answer questions, with 90% accuracy, by tapping his hoof. He could add, subtract, multiply, divide, and tell the time and the date.

Clever Hans could also read and understand questions written or asked in German. Crowds flocked to see the horse, and the scientific community soon grew interested. Researchers studied the horse, looking for signs of trickery. Yet they found none. The horse could answer questions asked by anyone, even if Von Osten was absent. This indicated that no signaling was at play. For a while, the world believed the horse was truly clever.

Then psychologist Oskar Pfungst turned his attention to Clever Hans. Assisted by a team of researchers, he uncovered two anomalies. When blinkered or behind a screen, the horse could not answer questions. Likewise, he could respond only if the questioner knew the answer. From these observations, Pfungst deduced that Clever Hans was not making any mental calculations. Nor did he understand numbers or language in the human sense. Although Von Osten had intended no trickery, the act was false.

Instead, Clever Hans had learned to detect subtle, yet consistent nonverbal cues. When someone asked a question, Clever Hans responded to their body language with a degree of accuracy many poker players would envy. For example, when someone asked Clever Hans to make a calculation, he would begin tapping his hoof. Once he reached the correct answer, the questioner would show involuntary signs. Pfungst found that many people tilted their head at this point. Clever Hans would recognize this behavior and stop. When blinkered or when the questioner did not know the answer, the horse didn’t have a clue. When he couldn’t see the cues, he had no answer.

The Pygmalion Effect

Von Osten died in 1909 and Clever Hans disappeared from the record. But his legacy lives on in a particular branch of psychology.

The case of Clever Hans is of less interest than the research it went on to provoke. Psychologists working in the decades that followed began to study how the expectations of others affect us. If a questioner’s expectation that Clever Hans would answer correctly could, through unwitting cues, make it so, could the same thing occur elsewhere?

Could we be, at times, responding to subtle cues? Decades of research have provided consistent, robust evidence that the answer is yes. It comes down to the concepts of the self-fulfilling prophecy and the Pygmalion effect.

The Pygmalion effect is a psychological phenomenon wherein high expectations lead to improved performance in a given area. Its name comes from the story of Pygmalion, a mythical Greek sculptor. Pygmalion carved a statue of a woman and then became enamored with it. Unable to love a human, Pygmalion appealed to Aphrodite, the goddess of love. She took pity and brought the statue to life. The couple married and went on to have a daughter, Paphos.

False Beliefs Come True Over Time

In the same way Pygmalion’s fixation on the statue brought it to life, our focus on a belief or assumption can do the same. The flipside is the Golem effect, wherein low expectations lead to decreased performance. Both effects come under the category of self-fulfilling prophecies. Whether the expectation comes from us or others, the effect manifests in the same way.

The Pygmalion effect has profound ramifications in schools and organizations and with regard to social class and stereotypes. By some estimations, it is the result of our brains’ poorly distinguishing between perception and expectation. Although many people purport to want to prove their critics wrong, we often merely end up proving our supporters right.

Understanding the Pygmalion effect is a powerful way to positively affect those around us, from our children and friends to employees and leaders. If we don’t take into account the ramifications of our expectations, we may miss out on the dramatic benefits of holding high standards.

The concept of a self-fulfilling prophecy is attributed to sociologist Robert K. Merton. In 1948, Merton published the first paper on the topic. In it, he described the phenomenon as a false belief that becomes true over time. Once this occurs, it creates a feedback loop. We assume we were always correct because it seems so in hindsight. Merton described a self-fulfilling prophecy as self-hypnosis through our own propaganda.

As with many psychological concepts, people had a vague awareness of its existence long before research confirmed anything. Renowned orator and theologian Jacques-Bénigne Bossuet declared in the 17th century that “The greatest weakness of all weaknesses is to fear too much to appear weak.”

Even Sigmund Freud was aware of self-fulfilling prophecies. In A Childhood Memory of Goethe, Freud wrote: “If a man has been his mother’s undisputed darling he retains throughout life the triumphant feeling, the confidence in success, which not seldom brings actual success with it.”

The IQ of Students

Research by Robert Rosenthal and Lenore Jacobson examined the influence of teachers’ expectations on students’ performance. Their subsequent paper is one of the most cited and discussed psychological studies ever conducted.

Rosenthal and Jacobson began by testing the IQ of elementary school students. Teachers were told that the IQ test showed around one-fifth of their students to be unusually intelligent. For ethical reasons, they did not label an alternate group as unintelligent and instead used unlabeled classmates as the control group. It will doubtless come as no surprise that the “gifted” students were chosen at random. They should not have had a significant statistical advantage over their peers. As the study period ended, all students had their IQs retested. Both groups showed an improvement. Yet those who were described as intelligent experienced much greater gains in their IQ points. Rosenthal and Jacobson attributed this result to the Pygmalion effect. Teachers paid more attention to “gifted” students, offering more support and encouragement than they would otherwise. Picked at random, those children ended up excelling. Sadly, no follow-up studies were ever conducted, so we do not know the long-term impact on the children involved.

Prior to studying the effect on children, Rosenthal performed preliminary research on animals. Students were given rats from two groups, one described as “maze dull” and the other as “maze bright.” Researchers claimed that the former group could not learn to properly negotiate a maze, but the latter could with ease. As you might expect, the groups of rats were the same. Like the gifted and nongifted children, they were chosen at random. Yet by the time the study finished, the “maze-bright” rats appeared to have learned faster. The students considered them tamer and more pleasant to work with than the “maze-dull” rats.

In general, authority figures have the power to influence how the people subordinate to them behave by holding high expectations. Whether consciously or not, leaders facilitate changes in behavior, such as by giving people more responsibility or setting stretch goals. Like the subtle cues that allowed Clever Hans to make calculations, these small changes in treatment can promote learning and growth. If a leader thinks an employee is competent, they will treat them as such. The employee then gets more opportunities to develop their competence, and their performance improves in a positive feedback loop. This works both ways. When we expect an authority figure to be competent or successful, we tend to be attentive and supportive. In the process, we bolster their performance, too. Students who act interested in lectures create interesting lecturers.

In Pygmalion in Management, J. Sterling Livingston writes,

Some managers always treat their subordinates in a way that leads to superior performance. But most … unintentionally treat their subordinates in a way that leads to lower performance than they are capable of achieving. The way managers treat their subordinates is subtly influenced by what they expect of them. If managers’ expectations are high, productivity is likely to be excellent. If their expectations are low, productivity is likely to be poor. It is as though there were a law that caused subordinates’ performance to rise or fall to meet managers’ expectations.

The Pygmalion effect shows us that our reality is negotiable and can be manipulated by others — on purpose or by accident. What we achieve, how we think, how we act, and how we perceive our capabilities can be influenced by the expectations of those around us. Those expectations may be the result of biased or irrational thinking, but they have the power to affect us and change what happens. While cognitive biases distort only what we perceive, self-fulfilling prophecies alter what happens.

Of course, the Pygmalion effect works only when we are physically capable of achieving what is expected of us. After Rosenthal and Jacobson published their initial research, many people were entranced by the implication that we are all capable of more than we think. Although that can be true, we have no indication that any of us can do anything if someone believes we can. Instead, the Pygmalion effect seems to involve us leveraging our full capabilities and avoiding the obstacles created by low expectations.

Clever Hans truly was an intelligent horse, but he was smart because he could read almost imperceptible nonverbal cues, not because he could do math. So, he did have unusual capabilities, as shown by the fact that few other animals have done what he did.

We can’t do anything just because someone expects us to. Overly high expectations can also be stressful. When someone sets the bar too high, we can get discouraged and not even bother trying. Stretch goals and high expectations are beneficial, up to the point of diminishing returns. Research by McClelland and Atkinson indicates that the Pygmalion effect drops off if we see our chance of success as being less than 50%. If an endeavor seems either certain or completely uncertain, the Pygmalion effect does not hold. When we are stretched but confident, high expectations can help us achieve more.

Check Your Assumptions

In Self-Fulfilling Prophecy: A Practical Guide to Its Use in Education, Robert T. Tauber describes an exercise in which people are asked to list their assumptions about people with certain descriptions. These included a cheerleader, “a minority woman with four kids at the market using food stamps,” and a “person standing outside smoking on a cold February day.” An anonymous survey of undergraduate students revealed mostly negative assumptions. Tauber asks the reader to consider how being exposed to these types of assumptions might affect someone’s day-to-day life.

The expectations people have of us affect us in countless subtle ways each day. Although we rarely notice it (unless we are on the receiving end of overt racism, sexism, and other forms of bias), those expectations dictate the opportunities we are offered, how we are spoken to, and the praise and criticism we receive. Individually, these knocks and nudges have minimal impact. In the long run, they might dictate whether we succeed or fail or fall somewhere on the spectrum in between.

The important point to note about the Pygmalion effect is that it creates a literal change in what occurs. There is nothing mystical about the effect. When we expect someone to perform well in any capacity, we treat them in a different way. Teachers tend to show more positive body language towards students they expect to be gifted. They may teach them more challenging material, offer more chances to ask questions, and provide personalized feedback. As Carl Sagan declared, “The visions we offer our children shape the future. It matters what those visions are. Often they become self-fulfilling prophecies. Dreams are maps.”

A perfect illustration is the case of James Sweeney and George Johnson, as described in Pygmalion in Management. Sweeney was a teacher at Tulane University, where Johnson worked as a porter. Aware of the Pygmalion effect, Sweeney had a hunch that he could teach anyone to be a competent computer operator. He began his experiment, offering Johnson lessons each afternoon. Other university staff were dubious, especially as Johnson appeared to have a low IQ. But the Pygmalion effect won out and the former porter eventually became responsible for training new computer operators.

The Pygmalion effect is a powerful secret weapon. Who wouldn’t want to help their children get smarter, help employees and leaders be more competent, and generally push others to do well? That’s possible if we raise our standards and see others in the best possible light. It is not necessary to actively attempt to intervene. Without even realizing it, we can nudge others towards success. If that sounds too good to be true, remember that the effect holds up for everything from rats to CEOs.


The Value of Probabilistic Thinking: Spies, Crime, and Lightning Strikes


Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass. It is one of the best tools we have to improve the accuracy of our decisions. In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes. When we know these, our decisions can be more precise and effective.

Are you going to get hit by lightning or not?

Why we need the concept of probabilities at all is worth thinking about. Things either are or are not, right? We either will get hit by lightning today or we won’t. The problem is, we just don’t know until we live out the day, which doesn’t help us at all when we make our decisions in the morning. The future is far from determined and we can better navigate it by understanding the likelihood of events that could impact us.

Our lack of perfect information about the world is what gives rise to all of probability theory and its usefulness. We know now that the future is inherently unpredictable because not all variables can be known, and even the smallest imaginable error in our data very quickly throws off our predictions. The best we can do is estimate the future by generating realistic, useful probabilities. So how do we do that?

Probability is everywhere, down to the very bones of the world. The probabilistic machinery in our minds—the cut-to-the-quick heuristics made so famous by the psychologists Daniel Kahneman and Amos Tversky—evolved in a time before computers, factories, traffic, middle managers, and the stock market. It served us in a time when human life was about survival, and it still serves us well in that capacity.

But what about today—a time when, for most of us, survival is not so much the issue? We want to thrive. We want to compete, and win. Mostly, we want to make good decisions in complex social systems that were not part of the world in which our brains evolved their (quite rational) heuristics.

For this, we need to consciously add a layer of probability awareness. What is it, and how can we use it to our advantage?

There are three important aspects of probability that we need to explain so you can integrate them into your thinking to get into the ballpark and improve your chances of catching the ball:

  1. Bayesian thinking
  2. Fat-tailed curves
  3. Asymmetries

Thomas Bayes and Bayesian thinking: Bayes was an English minister in the first half of the 18th century, whose most famous work, “An Essay Towards Solving a Problem in the Doctrine of Chances,” was brought to the attention of the Royal Society by his friend Richard Price in 1763—two years after Bayes’s death. The essay, the key to what we now know as Bayes’s Theorem, concerned how we should adjust probabilities when we encounter new data.

The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new. As much of it as possible. Bayesian thinking allows us to use all relevant prior information in making decisions. Statisticians might call it a base rate, taking in outside information about past situations like the one you’re in.

Consider the headline “Violent Stabbings on the Rise.” Without Bayesian thinking, you might become genuinely afraid because your chances of being a victim of assault or murder are higher than they were a few months ago. But a Bayesian approach will have you putting this information into the context of what you already know about violent crime.

You know that violent crime has been declining to its lowest rates in decades. Your city is safer now than it has been since this measurement started. Let’s say your chance of being a victim of a stabbing last year was one in 10,000, or 0.01%. The article states, with accuracy, that violent crime has doubled. It is now two in 10,000, or 0.02%. Is that worth being terribly worried about? The prior information here is key. When we factor it in, we realize that our safety has not really been compromised.
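
To see the scale of that update at a glance, here is a minimal Python sketch of the same arithmetic, using the made-up one-in-10,000 figure from the example and putting the relative change next to the absolute one:

```python
# Prior: last year's base rate of being stabbed, from the example above.
prior_risk = 1 / 10_000        # 0.01%

# The headline: violent stabbings have doubled.
new_risk = prior_risk * 2      # 0.02%

absolute_increase = new_risk - prior_risk
print(f"Old risk: {prior_risk:.2%}  New risk: {new_risk:.2%}")
print(f"Absolute increase: {absolute_increase:.2%} (one extra victim per 10,000 people)")
```

Doubling a tiny risk still leaves a tiny risk; the prior base rate is what keeps the headline in proportion.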

Conversely, if we look at the diabetes statistics in the United States, our application of prior knowledge would lead us to a different conclusion. Here, a Bayesian analysis indicates you should be concerned. In 1958, 0.93% of the population was diagnosed with diabetes. In 2015 it was 7.4%. When you look at the intervening years, the climb in diabetes diagnosis is steady, not a spike. So the prior relevant data, or priors, indicate a trend that is worrisome.

It is important to remember that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, the weight that new evidence carries against a prior is called the likelihood ratio, or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually, some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know. When making uncertain decisions, it’s nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation?
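
To make that updating cycle concrete, here is a minimal sketch of Bayes’s rule applied to a single piece of evidence. The scenario and every number in it (the prior and the two likelihoods) are invented purely for illustration; the ratio of the two likelihoods is the likelihood ratio, or Bayes factor, mentioned above.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a claim after seeing one piece of evidence.

    prior: probability assigned to the claim before the new evidence.
    p_evidence_if_true / p_evidence_if_false: how likely that evidence is
    under each hypothesis; their ratio is the likelihood ratio (Bayes factor).
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator


# Hypothetical numbers: we start out 10% worried that our city has become
# dangerous, then read an alarming headline that we judge to be twice as
# likely to appear if the city really is dangerous (likelihood ratio of 2).
posterior = bayes_update(
    prior=0.10,
    p_evidence_if_true=0.8,   # P(headline | city is dangerous)
    p_evidence_if_false=0.4,  # P(headline | city is safe)
)
print(f"Updated probability the city is dangerous: {posterior:.0%}")  # about 18%
```

The alarming evidence moves the estimate, but the strong prior keeps it from swinging to panic, which is exactly the cycle of challenging and validating described above.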

Now we need to look at fat-tailed curves: Many of us are familiar with the bell curve, that nice, symmetrical wave that captures the relative frequency of so many things from height to exam scores. The bell curve is great because it’s easy to understand and easy to use. Its technical name is “normal distribution.” If we know we are in a bell curve situation, we can quickly identify our parameters and plan for the most likely outcomes.

Fat-tailed curves are different. Take a look.

[Figure: a normal (bell curve) distribution compared with a fat-tailed distribution]

At first glance they seem similar enough. Common outcomes cluster together, creating a wave. The difference is in the tails. In a bell curve the extremes are predictable. There can only be so much deviation from the mean. In a fat-tailed curve there is no real cap on extreme events.

The more extreme events that are possible, the longer the tails of the curve get and the higher the probability that one of them will actually occur. Any one extreme event is still unlikely, but the sheer number of options means that we can’t rely on the most common outcomes as representing the average. Crazy things are definitely going to happen, and we have no way of identifying when.

Think of it this way. In a bell curve type of situation, like displaying the distribution of height or weight in a human population, there are outliers on the spectrum of possibility, but the outliers have a fairly well defined scope. You’ll never meet a man who is ten times the size of an average man. But in a curve with fat tails, like wealth, the central tendency does not work the same way. You may regularly meet people who are ten, 100, or 10,000 times wealthier than the average person. That is a very different type of world.
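
To see that difference in the tails for yourself, the short simulation below draws samples from a normal distribution (height-like) and from a Pareto distribution (wealth-like) and compares the extremes to the average. The specific parameters are arbitrary choices for illustration, not figures from this article.

```python
import random

random.seed(42)
N = 100_000

# Height-like: a normal distribution, mean 175 cm, standard deviation 7 cm.
heights = [random.gauss(175, 7) for _ in range(N)]

# Wealth-like: a fat-tailed Pareto distribution (shape 1.16, scale $10,000).
wealth = [10_000 * random.paretovariate(1.16) for _ in range(N)]

for name, data in [("height (cm)", heights), ("wealth ($)", wealth)]:
    mean = sum(data) / len(data)
    biggest = max(data)
    print(f"{name}: mean = {mean:,.0f}, max = {biggest:,.0f}, "
          f"max/mean = {biggest / mean:,.1f}x")

# Typical result: the tallest sample is only about 1.2x the average height,
# while the richest sample can be hundreds or even thousands of times the
# average wealth -- the signature of a fat tail.
```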

Let’s re-approach the example of the risks of violence we discussed in relation to Bayesian thinking. Suppose you hear that you have a greater risk of slipping on the stairs and cracking your head open than of being killed by a terrorist. The statistics, the priors, seem to back it up: 1,000 people slipped on the stairs and died last year in your country and only 500 died of terrorism. Should you be more worried about stairs or terror events?

Some use examples like these to prove that terror risk is low—since the recent past shows very few deaths, why worry?[1] The problem is in the fat tails: The risk of terror violence is more like wealth, while stair-slipping deaths are more like height and weight. In the next ten years, how many events are possible? How fat is the tail?

The important thing is not to sit down and imagine every possible scenario in the tail (by definition, it is impossible) but to deal with fat-tailed domains in the correct way: by positioning ourselves to survive or even benefit from the wildly unpredictable future, by being the only ones thinking correctly and planning for a world we don’t fully understand.

Asymmetries: Finally, you need to think about something we might call “metaprobability”—the probability that your probability estimates themselves are any good.

This massively misunderstood concept has to do with asymmetries. If you look at nicely polished stock pitches made by professional investors, nearly every time an idea is presented, the investor looks their audience in the eye and states they think they’re going to achieve a rate of return of 20% to 40% per annum, if not higher. Yet exceedingly few of them ever attain that mark, and it’s not because they don’t have any winners. It’s because they get so many so wrong. They are consistently overconfident in their probabilistic estimates. (For reference, the general stock market has returned no more than 7% to 8% per annum in the United States over a long period, before fees.)

Another common asymmetry is people’s ability to estimate the effect of traffic on travel time. How often do you leave “on time” and arrive 20% early? Almost never? How often do you leave “on time” and arrive 20% late? All the time? Exactly. Your estimation errors are asymmetric, skewing in a single direction. This is often the case with probabilistic decision-making.[2]
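
As a rough sketch of that one-way skew, the simulation below models a trip as a free-flow driving time (which is roughly what we plan around) plus a right-skewed congestion delay. All of the numbers are invented purely to illustrate the asymmetry.

```python
import random

random.seed(0)
TRIPS = 10_000
planned = 30.0  # minutes: the free-flow, "no traffic" time we tend to plan around

early, late = 0, 0
for _ in range(TRIPS):
    # Free-flow time varies only a little and can't be much faster than planned,
    # but congestion adds a right-skewed delay: usually small, occasionally huge.
    free_flow = random.gauss(30, 1)
    delay = random.lognormvariate(0.7, 1.0)
    actual = free_flow + delay
    if actual <= planned * 0.8:    # arrived at least 20% early
        early += 1
    elif actual >= planned * 1.2:  # arrived at least 20% late
        late += 1

print(f"20% early: {early / TRIPS:.1%}   20% late: {late / TRIPS:.1%}")
# With these invented numbers, arriving 20% early almost never happens, while
# arriving 20% late happens on a meaningful share of trips: the errors skew
# in one direction, just as the traffic example suggests.
```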

Far more probability estimates are wrong on the “over-optimistic” side than the “under-optimistic” side. You’ll rarely read about an investor who aimed for 25% annual return rates who subsequently earned 40% over a long period of time. You can throw a dart at the Wall Street Journal and hit the names of lots of investors who aim for 25% per annum with each investment and end up closer to 10%.

The spy world

Successful spies are very good at probabilistic thinking. High-stakes survival situations tend to make us evaluate our environment with as little bias as possible.

When Vera Atkins was second in command of the French unit of the Special Operations Executive (SOE), a British intelligence organization reporting directly to Winston Churchill during World War II[3], she had to make hundreds of decisions by figuring out the probable accuracy of inherently unreliable information.

Atkins was responsible for the recruitment and deployment of British agents into occupied France. She had to decide who could do the job, and where the best sources of intelligence were. These were literal life-and-death decisions, and all were based in probabilistic thinking.

First, how do you choose a spy? Not everyone can go undercover in high-stress situations and make the contacts necessary to gather intelligence. The result of failure in France in WWII was not getting fired; it was death. What factors of personality and experience show that a person is right for the job? Even today, with advancements in psychology, interrogation, and polygraphs, it’s still a judgment call.

For Vera Atkins in the 1940s, it was very much a process of assigning weight to the various factors and coming up with a probabilistic assessment of who had a decent chance of success. Who spoke French? Who had the confidence? Who was too tied to family? Who had the problem-solving capabilities? From recruitment to deployment, her development of each spy was a series of continually updated, educated estimates.

Getting an intelligence officer ready to go is only half the battle. Where do you send them? If your information was so great that you knew exactly where to go, you probably wouldn’t need an intelligence mission. Choosing a target is another exercise in probabilistic thinking. You need to evaluate the reliability of the information you have and the networks you have set up. Intelligence is not evidence. There is no chain of command or guarantee of authenticity.

The stuff coming out of German-occupied France was at the level of grainy photographs, handwritten notes that passed through many hands on the way back to HQ, and unverifiable wireless messages sent quickly, sometimes sporadically, and with the operator under incredible stress. When deciding what to use, Atkins had to consider the relevancy, quality, and timeliness of the information she had.

She also had to make decisions based not only on what had happened, but what possibly could. Trying to prepare for every eventuality means that spies would never leave home, but they must somehow prepare for a good deal of the unexpected. After all, their jobs are often executed in highly volatile, dynamic environments. The women and men Atkins sent over to France worked in three primary occupations: organizers were responsible for recruiting locals, developing the network, and identifying sabotage targets; couriers moved information all around the country, connecting people and networks to coordinate activities; and wireless operators had to set up heavy communications equipment, disguise it, get information out of the country, and be ready to move at a moment’s notice. All of these jobs were dangerous. The full scope of the threats was never completely identifiable. There were so many things that could go wrong, so many possibilities for discovery or betrayal, that it was impossible to plan for them all. The average life expectancy in France for one of Atkins’ wireless operators was six weeks.

Finally, the numbers suggest an asymmetry in the estimation of the probability of success of each individual agent. Of the 400 agents that Atkins sent over to France, 100 were captured and killed. This is not meant to pass judgment on her skills or smarts. Probabilistic thinking can only get you in the ballpark. It doesn’t guarantee 100% success.

There is no doubt that Atkins relied heavily on probabilistic thinking to guide her decisions in the challenging quest to disrupt German operations in France during World War II. It is hard to evaluate the success of an espionage career, because it is a job that comes with a lot of loss. Atkins was extremely successful in that her network conducted valuable sabotage to support the Allied cause during the war, but the loss of life was significant.

Conclusion

Successfully thinking in shades of probability means roughly identifying what matters, coming up with a sense of the odds, checking our assumptions, and then making a decision. Done well, it lets us act with a higher level of certainty in complex, unpredictable situations. We can never know the future with exact precision. Probabilistic thinking is an extremely useful tool for evaluating how the world will most likely look so that we can strategize effectively.


References:

[1] Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012.

[2] Bernstein, Peter L. Against the Gods: The Remarkable Story of Risk. New York: John Wiley and Sons, 1996. (This book includes an excellent discussion in Chapter 13 on the idea of the scope of events in the past as relevant to figuring out the probability of events in the future, drawing on the work of Frank Knight and John Maynard Keynes.)

[3] Helm, Sarah. A Life in Secrets: The Story of Vera Atkins and the Lost Agents of SOE. London: Abacus, 2005.