Deductive vs Inductive Reasoning: Make Smarter Arguments, Better Decisions, and Stronger Conclusions

You can’t prove truth, but using deductive and inductive reasoning, you can get close. Learn the difference between the two types of reasoning and how to use them when evaluating facts and arguments.

***

As odd as it sounds, in science, law, and many other fields, there is no such thing as proof — there are only conclusions drawn from facts and observations. Scientists cannot prove a hypothesis, but they can collect evidence that points to its being true. Lawyers cannot prove that something happened (or didn’t), but they can provide evidence that seems irrefutable.

The question of what makes something true is more relevant than ever in this era of alternative facts and fake news. This article explores truth — what it means and how we establish it. We’ll dive into inductive and deductive reasoning as well as a bit of history.

“Contrariwise,” continued Tweedledee, “if it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic.”

— Lewis Carroll, Through the Looking-Glass

The essence of reasoning is a search for truth. Yet truth isn’t always as simple as we’d like to believe it is.

For as far back as we can imagine, philosophers have debated whether absolute truth exists. Although we’re still waiting for an answer, this doesn’t have to stop us from improving how we think by understanding a little more.

In general, we can consider something to be true if the available evidence seems to verify it. The more evidence we have, the stronger our conclusion can be. When it comes to samples, size matters. As my friend Peter Kaufman says:

What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history….

In some areas, it is necessary to accept that truth is subjective. For example, ethicists accept that it is difficult to establish absolute truths concerning whether something is right or wrong, as standards change over time and vary around the world.

When it comes to reasoning, a correctly phrased statement can be considered to have objective truth. Some statements have an objective truth that we cannot ascertain at present. For example, we do not currently have proof of the existence or non-existence of aliens, although a definite answer exists; we simply cannot access it yet.

Deductive and inductive reasoning are both based on evidence.

Several types of evidence are used in reasoning to point to a truth:

  • Direct or experimental evidence — This relies on observations and experiments, which should be repeatable with consistent results.
  • Anecdotal or circumstantial evidence — Overreliance on anecdotal evidence can be a logical fallacy because it is based on the assumption that two coexisting factors are linked even though alternative explanations have not been explored. The main use of anecdotal evidence is for forming hypotheses which can then be tested with experimental evidence.
  • Argumentative evidence — We sometimes draw conclusions based on facts. However, this evidence is unreliable when the facts do not directly test a hypothesis. For example, seeing a light in the sky and concluding that it is an alien aircraft would be argumentative evidence.
  • Testimonial evidence — When an individual presents an opinion, it is testimonial evidence. Once again, this is unreliable, as people may be biased and there may not be any direct evidence to support their testimony.

“The weight of evidence for an extraordinary claim must be proportioned to its strangeness.”

— Laplace, Théorie analytique des probabilités (1812)

Reasoning by Induction

The fictional character Sherlock Holmes is a master of induction. He is a careful observer who processes what he sees to reach the most likely conclusion in the given set of circumstances. Although he pretends that his knowledge is of the black-or-white variety, it often isn’t. It is true induction, coming up with the strongest possible explanation for the phenomena he observes.

Consider his description of how, upon first meeting Watson, he reasoned that Watson had just come from Afghanistan:

“Observation with me is second nature. You appeared to be surprised when I told you, on our first meeting, that you had come from Afghanistan.”
“You were told, no doubt.”

“Nothing of the sort. I knew you came from Afghanistan. From long habit the train of thoughts ran so swiftly through my mind, that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, ‘Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.’ The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished.”

(From Sir Arthur Conan Doyle’s A Study in Scarlet)

Inductive reasoning involves drawing probable conclusions from specific facts and observations, using logic. We draw these kinds of conclusions all the time. If someone we know to have good literary taste recommends a book, we may assume that means we will enjoy the book.

Induction can be strong or weak. If an inductive argument is strong, the truth of the premises makes the conclusion likely. If an inductive argument is weak, the premises provide little real support for the conclusion, even if they are true.

There are several key types of inductive reasoning:

  • Generalized — Draws a conclusion from a generalization. For example, “All the swans I have seen are white; therefore, all swans are probably white.”
  • Statistical — Draws a conclusion based on statistics. For example, “95 percent of swans are white” (an arbitrary figure, of course); “therefore, a randomly selected swan will probably be white.”
  • Sample — Draws a conclusion about one group based on a sample from a different group. For example, “There are ten swans in this pond and all are white; therefore, the swans in my neighbor’s pond are probably also white.”
  • Analogous — Draws a conclusion based on shared properties of two groups. For example, “All Aylesbury ducks are white. Swans are similar to Aylesbury ducks. Therefore, all swans are probably white.”
  • Predictive — Draws a conclusion based on a prediction made using a past sample. For example, “I visited this pond last year and all the swans were white. Therefore, when I visit again, all the swans will probably be white.”
  • Causal inference — Draws a conclusion based on a causal connection. For example, “All the swans in this pond are white. I just saw a white bird in the pond. The bird was probably a swan.”
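As the statistical example above suggests, a consistent sample only makes a generalization probable, never certain. A minimal sketch (assuming, arbitrarily, that 95 percent of swans are white) shows how often a sample of a given size would look uniformly white even when the generalization “all swans are white” is false:

```python
# If 95 percent of swans are white, how likely is a random sample
# to contain only white swans? The larger the sample, the less likely
# a false generalization is to survive it.
p_white = 0.95

for sample_size in (1, 10, 50):
    p_all_white = p_white ** sample_size
    print(f"Sample of {sample_size:2d} swans: P(all white) = {p_all_white:.3f}")
```

With ten swans, the sample looks uniformly white about 60 percent of the time; with fifty, under 8 percent. This is why, when it comes to samples, size matters.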

The entire legal system is designed to be based on sound reasoning, which in turn must be based on evidence. Lawyers often use inductive reasoning to draw a relationship between facts for which they have evidence and a conclusion.

The initial facts are often based on generalizations and statistics, with the implication that a conclusion is most likely to be true, even if that is not certain. For that reason, evidence can rarely be considered certain. For example, a fingerprint taken from a crime scene would be said to be “consistent with a suspect’s prints” rather than being an exact match. Implicit in that statement is the assertion that it is statistically unlikely that the prints are not the suspect’s.

Inductive reasoning also involves Bayesian updating. A conclusion can seem to be true at one point until further evidence emerges and a hypothesis must be adjusted. Bayesian updating is a technique used to modify the probability of a hypothesis’s being true as new evidence is supplied. When inductive reasoning is used in legal situations, Bayesian thinking is used to update the likelihood of a defendant’s being guilty beyond a reasonable doubt as evidence is collected. If we imagine a simplified, hypothetical criminal case, we can picture the utility of Bayesian inference combined with inductive reasoning.

Let’s say someone is murdered in a house where five other adults were present at the time. One of them is the primary suspect, and there is no evidence of anyone else entering the house. The initial probability of the prime suspect’s having committed the murder is 20 percent. Other evidence will then adjust that probability. If the four other people testify that they saw the suspect committing the murder, the suspect’s prints are on the murder weapon, and traces of the victim’s blood were found on the suspect’s clothes, jurors may consider the probability of that person’s guilt to be close enough to 100 percent to convict. Reality is more complex than this, of course. The conclusion is never certain, only highly probable.
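The hypothetical case above can be run as an explicit Bayesian update. The likelihood figures below are invented purely for illustration (they are not forensic statistics); the point is how each piece of evidence moves the probability from the 20 percent prior toward near-certainty:

```python
def bayes_update(prior, p_evidence_given_guilt, p_evidence_given_innocence):
    """Return P(guilt | evidence) via Bayes' theorem."""
    numerator = p_evidence_given_guilt * prior
    denominator = numerator + p_evidence_given_innocence * (1 - prior)
    return numerator / denominator

# Prior: one of five adults present, so 20 percent.
p_guilt = 0.20

# Illustrative likelihoods, assumed for this sketch:
# (P(evidence | guilty), P(evidence | innocent))
evidence = [
    (0.9, 0.05),  # eyewitness testimony from the other four
    (0.8, 0.02),  # prints on the murder weapon
    (0.7, 0.01),  # victim's blood on the suspect's clothes
]

for p_e_guilt, p_e_innocent in evidence:
    p_guilt = bayes_update(p_guilt, p_e_guilt, p_e_innocent)
    print(f"Updated P(guilt) = {p_guilt:.4f}")
```

After all three updates the probability exceeds 99.9 percent, yet it never reaches 100: the conclusion stays probable, not certain, which is exactly the character of inductive reasoning.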

One key distinction between deductive and inductive reasoning is that the latter accepts that a conclusion is uncertain and may change in the future. A conclusion is either strong or weak, not right or wrong. We tend to use this type of reasoning in everyday life, drawing conclusions from experiences and then updating our beliefs.

A conclusion is either strong or weak, not right or wrong.

Everyday inductive reasoning is not always correct, but it is often useful. For example, superstitious beliefs often originate from inductive reasoning. If an athlete performed well on a day when they wore their socks inside out, they may conclude that the inside-out socks brought them luck. If future successes happen when they again wear their socks inside out, the belief may strengthen. Should that not be the case, they may update their belief and recognize that it is incorrect.

Another example (let’s set aside the question of whether turkeys can reason): A farmer feeds a turkey every day, so the turkey assumes that the farmer cares for its wellbeing. Only when Thanksgiving rolls around does that assumption prove incorrect.

The issue with overusing inductive reasoning is that cognitive shortcuts and biases can warp the conclusions we draw. Our world is not always as predictable as inductive reasoning suggests, and we may selectively draw upon past experiences to confirm a belief. Someone who reasons inductively that they have bad luck may recall only unlucky experiences to support that hypothesis and ignore instances of good luck.

In The 12 Secrets of Persuasive Argument, the authors write:

In inductive arguments, focus on the inference. When a conclusion relies upon an inference and contains new information not found in the premises, the reasoning is inductive. For example, if premises were established that the defendant slurred his words, stumbled as he walked, and smelled of alcohol, you might reasonably infer the conclusion that the defendant was drunk. This is inductive reasoning. In an inductive argument the conclusion is, at best, probable. The conclusion is not always true when the premises are true. The probability of the conclusion depends on the strength of the inference from the premises. Thus, when dealing with inductive reasoning, pay special attention to the inductive leap or inference, by which the conclusion follows the premises.

… There are several popular misconceptions about inductive and deductive reasoning. When Sherlock Holmes made his remarkable “deductions” based on observations of various facts, he was usually engaging in inductive, not deductive, reasoning.

In Inductive Reasoning, Aiden Feeney and Evan Heit write:

…inductive reasoning … corresponds to everyday reasoning. On a daily basis we draw inferences such as how a person will probably act, what the weather will probably be like, and how a meal will probably taste, and these are typical inductive inferences.

[…]

[I]t is a multifaceted cognitive activity. It can be studied by asking young children simple questions involving cartoon pictures, or it can be studied by giving adults a variety of complex verbal arguments and asking them to make probability judgments.

[…]

[I]nduction is related to, and it could be argued is central to, a number of other cognitive activities, including categorization, similarity judgment, probability judgment, and decision making. For example, much of the study of induction has been concerned with category-based induction, such as inferring that your next door neighbor sleeps on the basis that your neighbor is a human animal, even if you have never seen your neighbor sleeping.

“A very great deal more truth can become known than can be proven.”

— Richard Feynman

Reasoning by Deduction

Deduction begins with a broad truth (the major premise), such as the statement that all men are mortal. This is followed by the minor premise, a more specific statement, such as that Socrates is a man. A conclusion follows: Socrates is mortal. If the major premise is true and the minor premise is true, the conclusion cannot be false.

Deductive reasoning is black and white; a conclusion is either true or false and cannot be partly true or partly false. We decide whether a deductive argument holds by assessing the strength of the link between the premises and the conclusion. If all men are mortal and Socrates is a man, there is no way he can not be mortal; there is no situation in which the premises are true and the conclusion is false.

In science, deduction is used to reach conclusions believed to be true. A hypothesis is formed; then evidence is collected to support it. If observations support its truth, the hypothesis is confirmed. Statements are structured in the form of “if A equals B, and C is A, then C is B.” If either premise turns out to be false, the conclusion is no longer guaranteed. Science also involves inductive reasoning when broad conclusions are drawn from specific observations; data leads to conclusions. If the data shows a tangible pattern, it will support a hypothesis.
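The “if A equals B, and C is A, then C is B” structure can be made concrete with sets. A toy sketch of the classic Socrates syllogism, using membership and subset relations:

```python
# Major premise: all men are mortal (men is a subset of mortals).
mortals = {"Socrates", "Plato", "Fido the dog"}
men = {"Socrates", "Plato"}
assert men <= mortals  # the major premise holds in this toy model

# Minor premise: Socrates is a man.
assert "Socrates" in men

# Conclusion: Socrates is mortal. Given true premises, this cannot fail.
assert "Socrates" in mortals

# The converse does not follow: being mortal does not make something
# a man ("Fido the dog" is mortal but not a man).
assert "Fido the dog" in mortals and "Fido the dog" not in men
```

The final assertion is the same trap as the four-pawed pet below: membership in the broader set never implies membership in the narrower one.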

For example, having seen ten white swans, we could use inductive reasoning to conclude that all swans are white. This hypothesis is easier to disprove than to prove, and its premises are not necessarily true; it stands only as long as researchers cannot find a situation in which it fails. By combining both types of reasoning, science moves closer to the truth. In general, the more outlandish a claim is, the stronger the evidence supporting it must be.

We should be wary of deductive reasoning that appears to make sense without pointing to a truth. Someone could say “A dog has four paws. My pet has four paws. Therefore, my pet is a dog.” The conclusion sounds logical but isn’t, because the first premise does not state that only dogs have four paws.

The History of Reasoning

The discussion of reasoning and what constitutes truth dates back to Plato and Aristotle.

Plato (429–347 BC) believed that all things are divided into the visible and the intelligible. Intelligible things can be known through deduction (with observation being of secondary importance to reasoning) and are true knowledge.

Aristotle took an inductive approach, emphasizing the need for observations to support knowledge. He believed that we can reason only from discernable phenomena. From there, we use logic to infer causes.

Debate about reasoning remained much the same until the time of Isaac Newton. Newton’s innovative work was based on observations, but also on concepts that could not be explained by a physical cause (such as gravity). In his Principia, Newton outlined four rules for reasoning in the scientific method:

  1. “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” (We refer to this rule as Occam’s Razor.)
  2. “Therefore, to the same natural effects we must, as far as possible, assign the same causes.”
  3. “The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.”
  4. “In experimental philosophy, we are to look upon propositions collected by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, ’till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.”

In 1843, philosopher John Stuart Mill published A System of Logic, which further refined our understanding of reasoning. Mill believed that science should be based on a search for regularities among events. If a regularity is consistent, it can be considered a law. Mill described five methods for identifying causes by noting regularities. These methods are still used today:

  • Direct method of agreement — If two instances of a phenomenon have a single circumstance in common, the circumstance is the cause or effect.
  • Method of difference — If a phenomenon occurs in one experiment and does not occur in another, and the experiments are the same except for one factor, that is the cause, part of the cause, or the effect.
  • Joint method of agreement and difference — If two instances of a phenomenon have one circumstance in common, and two instances in which it does not occur have nothing in common except the absence of that circumstance, then that circumstance is the cause, part of the cause, or the effect.
  • Method of residue — When you subtract any part of a phenomenon known to be caused by a certain antecedent, the remaining residue of the phenomenon is the effect of the remaining antecedents.
  • Method of concomitant variations — If a phenomenon varies when another phenomenon varies in a particular way, the two are connected.

Karl Popper was the next theorist to make a serious contribution to the study of reasoning. Popper is well known for his focus on disconfirming evidence and disproving hypotheses. Beginning with a hypothesis, we use deductive reasoning to make predictions. A hypothesis will be based on a theory — a set of independent and dependent statements. If a prediction turns out to be false, the theory must be false; if it holds, the theory is not proven but merely survives for now. Popper’s theory of falsification (disproving something) is based on the idea that we cannot prove a hypothesis; we can only show that certain predictions are false. This process requires rigorous testing to identify any anomalies, and Popper does not accept theories that cannot be physically tested. Any phenomenon not present in tests cannot be the foundation of a theory, according to Popper. The phenomenon must also be consistent and reproducible. Popper’s theories acknowledge that theories that are accepted at one time are likely to later be disproved. Science is always changing as more hypotheses are modified or disproved and we inch closer to the truth.

Conclusion

In How to Deliver a TED Talk, Jeremey Donovan writes:

No discussion of logic is complete without a refresher course in the difference between inductive and deductive reasoning. By its strictest definition, inductive reasoning proves a general principle—your idea worth spreading—by highlighting a group of specific events, trends, or observations. In contrast, deductive reasoning builds up to a specific principle—again, your idea worth spreading—through a chain of increasingly narrow statements.

Logic is an incredibly important skill, and because we use it so often in everyday life, we benefit by clarifying the methods we use to draw conclusions. Knowing what makes an argument sound is valuable for making decisions and understanding how the world works. It helps us to spot people who are deliberately misleading us through unsound arguments. Understanding reasoning is also helpful for avoiding fallacies and for negotiating.


The Pygmalion Effect: Proving Them Right

The Pygmalion Effect is a powerful secret weapon. Without even realizing it, we can nudge others towards success. In this article, discover how expectations can influence performance for better or worse.

How Expectations Influence Performance

Many people believe that their pets or children are of unusual intelligence or can understand everything they say. Some people have stories of abnormal feats. In the late 19th century, one man made exactly such claims about his horse and appeared to have the evidence to back them up. Wilhelm von Osten was a teacher and horse trainer. He believed that animals could learn to read or count. Von Osten’s initial attempts with dogs and a bear were unsuccessful, but when he began working with an unusual horse, he changed our understanding of psychology. Known as Clever Hans, the animal could answer questions, with 90% accuracy, by tapping his hoof. He could add, subtract, multiply, divide, and tell the time and the date.

Clever Hans could also read and understand questions written or asked in German. Crowds flocked to see the horse, and the scientific community soon grew interested. Researchers studied the horse, looking for signs of trickery. Yet they found none. The horse could answer questions asked by anyone, even if Von Osten was absent. This indicated that no signaling was at play. For a while, the world believed the horse was truly clever.

Then psychologist Oskar Pfungst turned his attention to Clever Hans. Assisted by a team of researchers, he uncovered two anomalies. When blinkered or behind a screen, the horse could not answer questions. Likewise, he could respond only if the questioner knew the answer. From these observations, Pfungst deduced that Clever Hans was not making any mental calculations. Nor did he understand numbers or language in the human sense. Although Von Osten had intended no trickery, the act was false.

Instead, Clever Hans had learned to detect subtle, yet consistent nonverbal cues. When someone asked a question, Clever Hans responded to their body language with a degree of accuracy many poker players would envy. For example, when someone asked Clever Hans to make a calculation, he would begin tapping his hoof. Once he reached the correct answer, the questioner would show involuntary signs. Pfungst found that many people tilted their head at this point. Clever Hans would recognize this behavior and stop. When blinkered or when the questioner did not know the answer, the horse didn’t have a clue. When he couldn’t see the cues, he had no answer.

The Pygmalion Effect

Von Osten died in 1909, and Clever Hans disappeared from the record. But his legacy lives on in a particular branch of psychology.

The case of Clever Hans is of less interest than the research it went on to provoke. Psychologists working in the decades following began to study how the expectations of others affect us. If someone expected Clever Hans to answer a question and ensured that he knew it, could the same thing occur elsewhere?

Could we be, at times, responding to subtle cues? Decades of research have provided consistent, robust evidence that the answer is yes. It comes down to the concepts of the self-fulfilling prophecy and the Pygmalion effect.

The Pygmalion effect is a psychological phenomenon wherein high expectations lead to improved performance in a given area. Its name comes from the story of Pygmalion, a mythical Greek sculptor. Pygmalion carved a statue of a woman and then became enamored with it. Unable to love a human, Pygmalion appealed to Aphrodite, the goddess of love. She took pity and brought the statue to life. The couple married and went on to have a daughter, Paphos.

False Beliefs Come True Over Time

In the same way Pygmalion’s fixation on the statue brought it to life, our focus on a belief or assumption can do the same. The flipside is the Golem effect, wherein low expectations lead to decreased performance. Both effects come under the category of self-fulfilling prophecies. Whether the expectation comes from us or others, the effect manifests in the same way.

The Pygmalion effect has profound ramifications in schools and organizations and with regard to social class and stereotypes. By some estimations, it is the result of our brains’ poorly distinguishing between perception and expectation. Although many people purport to want to prove their critics wrong, we often merely end up proving our supporters right.

Understanding the Pygmalion effect is a powerful way to positively affect those around us, from our children and friends to employees and leaders. If we don’t take into account the ramifications of our expectations, we may miss out on the dramatic benefits of holding high standards.

The concept of a self-fulfilling prophecy is attributed to sociologist Robert K. Merton. In 1948, Merton published the first paper on the topic. In it, he described the phenomenon as a false belief that becomes true over time. Once this occurs, it creates a feedback loop. We assume we were always correct because it seems so in hindsight. Merton described a self-fulfilling prophecy as self-hypnosis through our own propaganda.

As with many psychological concepts, people had a vague awareness of its existence long before research confirmed anything. Renowned orator and theologian Jacques Benigne Bossuet declared in the 17th century that “The greatest weakness of all weaknesses is to fear too much to appear weak.”

Even Sigmund Freud was aware of self-fulfilling prophecies. In A Childhood Memory of Goethe, Freud wrote: “If a man has been his mother’s undisputed darling he retains throughout life the triumphant feeling, the confidence in success, which not seldom brings actual success with it.”

The IQ of Students

Research by Robert Rosenthal and Lenore Jacobson examined the influence of teachers’ expectations on students’ performance. Their subsequent paper is one of the most cited and discussed psychological studies ever conducted.

Rosenthal and Jacobson began by testing the IQ of elementary school students. Teachers were told that the IQ test showed around one-fifth of their students to be unusually intelligent. For ethical reasons, they did not label an alternate group as unintelligent and instead used unlabeled classmates as the control group. It will doubtless come as no surprise that the “gifted” students were chosen at random. They should not have had a significant statistical advantage over their peers. As the study period ended, all students had their IQs retested. Both groups showed an improvement. Yet those who were described as intelligent experienced much greater gains in their IQ points. Rosenthal and Jacobson attributed this result to the Pygmalion effect. Teachers paid more attention to “gifted” students, offering more support and encouragement than they would otherwise. Picked at random, those children ended up excelling. Sadly, no follow-up studies were ever conducted, so we do not know the long-term impact on the children involved.

Prior to studying the effect on children, Rosenthal performed preliminary research on animals. Students were given rats from two groups, one described as “maze dull” and the other as “maze bright.” Researchers claimed that the former group could not learn to properly negotiate a maze, but the latter could with ease. As you might expect, the groups of rats were the same. Like the gifted and nongifted children, they were chosen at random. Yet by the time the study finished, the “maze-bright” rats appeared to have learned faster. The students considered them tamer and more pleasant to work with than the “maze-dull” rats.

In general, authority figures have the power to influence how the people subordinate to them behave by holding high expectations. Whether consciously or not, leaders facilitate changes in behavior, such as by giving people more responsibility or setting stretch goals. Like the subtle cues that allowed Clever Hans to make calculations, these small changes in treatment can promote learning and growth. If a leader thinks an employee is competent, they will treat them as such. The employee then gets more opportunities to develop their competence, and their performance improves in a positive feedback loop. This works both ways. When we expect an authority figure to be competent or successful, we tend to be attentive and supportive. In the process, we bolster their performance, too. Students who act interested in lectures create interesting lecturers.

In Pygmalion in Management, J. Sterling Livingston writes,

Some managers always treat their subordinates in a way that leads to superior performance. But most … unintentionally treat their subordinates in a way that leads to lower performance than they are capable of achieving. The way managers treat their subordinates is subtly influenced by what they expect of them. If managers’ expectations are high, productivity is likely to be excellent. If their expectations are low, productivity is likely to be poor. It is as though there were a law that caused subordinates’ performance to rise or fall to meet managers’ expectations.

The Pygmalion effect shows us that our reality is negotiable and can be manipulated by others — on purpose or by accident. What we achieve, how we think, how we act, and how we perceive our capabilities can be influenced by the expectations of those around us. Those expectations may be the result of biased or irrational thinking, but they have the power to affect us and change what happens. While cognitive biases distort only what we perceive, self-fulfilling prophecies alter what happens.

Of course, the Pygmalion effect works only when we are physically capable of achieving what is expected of us. After Rosenthal and Jacobson published their initial research, many people were entranced by the implication that we are all capable of more than we think. Although that can be true, we have no indication that any of us can do anything if someone believes we can. Instead, the Pygmalion effect seems to involve us leveraging our full capabilities and avoiding the obstacles created by low expectations.

Clever Hans truly was an intelligent horse, but he was smart because he could read almost imperceptible nonverbal cues, not because he could do math. So, he did have unusual capabilities, as shown by the fact that few other animals have done what he did.

We can’t do anything just because someone expects us to. Overly high expectations can also be stressful. When someone sets the bar too high, we can get discouraged and not even bother trying. Stretch goals and high expectations are beneficial, up to the point of diminishing returns. Research by McClelland and Atkinson indicates that the Pygmalion effect drops off if we see our chance of success as being less than 50%. If an endeavor seems either certain or completely uncertain, the Pygmalion effect does not hold. When we are stretched but confident, high expectations can help us achieve more.

Check Your Assumptions

In Self-Fulfilling Prophecy: A Practical Guide to Its Use in Education, Robert T. Tauber describes an exercise in which people are asked to list their assumptions about people with certain descriptions. These included a cheerleader, “a minority woman with four kids at the market using food stamps,” and a “person standing outside smoking on a cold February day.” An anonymous survey of undergraduate students revealed mostly negative assumptions. Tauber asks the reader to consider how being exposed to these types of assumptions might affect someone’s day-to-day life.

The expectations people have of us affect us in countless subtle ways each day. Although we rarely notice it (unless we are on the receiving end of overt racism, sexism, and other forms of bias), those expectations dictate the opportunities we are offered, how we are spoken to, and the praise and criticism we receive. Individually, these knocks and nudges have minimal impact. In the long run, they might dictate whether we succeed or fail or fall somewhere on the spectrum in between.

The important point to note about the Pygmalion effect is that it creates a literal change in what occurs. There is nothing mystical about the effect. When we expect someone to perform well in any capacity, we treat them in a different way. Teachers tend to show more positive body language towards students they expect to be gifted. They may teach them more challenging material, offer more chances to ask questions, and provide personalized feedback. As Carl Sagan declared, “The visions we offer our children shape the future. It matters what those visions are. Often they become self-fulfilling prophecies. Dreams are maps.”

A perfect illustration is the case of James Sweeney and George Johnson, as described in Pygmalion in Management. Sweeney was a teacher at Tulane University, where Johnson worked as a porter. Aware of the Pygmalion effect, Sweeney had a hunch that he could teach anyone to be a competent computer operator. He began his experiment, offering Johnson lessons each afternoon. Other university staff were dubious, especially as Johnson appeared to have a low IQ. But the Pygmalion effect won out, and the former porter eventually became responsible for training new computer operators.

The Pygmalion effect is a powerful secret weapon. Who wouldn’t want to help their children get smarter, help employees and leaders be more competent, and generally push others to do well? That’s possible if we raise our standards and see others in the best possible light. It is not necessary to actively attempt to intervene. Without even realizing it, we can nudge others towards success. If that sounds too good to be true, remember that the effect holds up for everything from rats to CEOs.

Members of our Learning Community can discuss this article here.

The Value of Probabilistic Thinking: Spies, Crime, and Lightning Strikes

Probabilistic Thinking (c) 2018 Farnam Street Media Inc. All rights reserved. May not be used without written permission.

Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass. It is one of the best tools we have to improve the accuracy of our decisions. In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes. When we know these, our decisions can be more precise and effective.

Are you going to get hit by lightning or not?

Why we need the concept of probabilities at all is worth thinking about. Things either are or are not, right? We either will get hit by lightning today or we won’t. The problem is, we just don’t know until we live out the day, which doesn’t help us at all when we make our decisions in the morning. The future is far from determined and we can better navigate it by understanding the likelihood of events that could impact us.

Our lack of perfect information about the world gives rise to all of probability theory, and its usefulness. We know now that the future is inherently unpredictable because not all variables can be known and even the smallest error imaginable in our data very quickly throws off our predictions. The best we can do is estimate the future by generating realistic, useful probabilities. So how do we do that?

Probability is everywhere, down to the very bones of the world. The probabilistic machinery in our minds—the cut-to-the-quick heuristics made so famous by the psychologists Daniel Kahneman and Amos Tversky—was evolved by the human species in a time before computers, factories, traffic, middle managers, and the stock market. It served us in a time when human life was about survival, and still serves us well in that capacity.

But what about today—a time when, for most of us, survival is not so much the issue? We want to thrive. We want to compete, and win. Mostly, we want to make good decisions in complex social systems that were not part of the world in which our brains evolved their (quite rational) heuristics.

For this, we need to consciously add a layer of probability awareness. What is it, and how can we use it to our advantage?

There are three important aspects of probability that we need to explain so you can integrate them into your thinking to get into the ballpark and improve your chances of catching the ball:

  1. Bayesian thinking
  2. Fat-tailed curves
  3. Asymmetries

Thomas Bayes and Bayesian thinking: Bayes was an English minister in the first half of the 18th century whose most famous work, “An Essay Towards Solving a Problem in the Doctrine of Chances,” was brought to the attention of the Royal Society by his friend Richard Price in 1763—two years after his death. The essay, the key to what we now know as Bayes’s Theorem, concerned how we should adjust probabilities when we encounter new data.

The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new. As much of it as possible. Bayesian thinking allows us to use all relevant prior information in making decisions. Statisticians might call it a base rate, taking in outside information about past situations like the one you’re in.

Consider the headline “Violent Stabbings on the Rise.” Without Bayesian thinking, you might become genuinely afraid because your chances of being a victim of assault or murder are higher than they were a few months ago. But a Bayesian approach will have you putting this information into the context of what you already know about violent crime.

You know that violent crime has been declining to its lowest rates in decades. Your city is safer now than it has been since this measurement started. Let’s say your chance of being a victim of a stabbing last year was one in 10,000, or 0.01%. The article states, with accuracy, that violent crime has doubled. It is now two in 10,000, or 0.02%. Is that worth being terribly worried about? The prior information here is key. When we factor it in, we realize that our safety has not really been compromised.

Conversely, if we look at the diabetes statistics in the United States, our application of prior knowledge would lead us to a different conclusion. Here, a Bayesian analysis indicates you should be concerned. In 1958, 0.93% of the population was diagnosed with diabetes. In 2015 it was 7.4%. When you look at the intervening years, the climb in diabetes diagnosis is steady, not a spike. So the prior relevant data, or priors, indicate a trend that is worrisome.

It is important to remember that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, this is called the likelihood ratio or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually, some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know. When making uncertain decisions, it’s nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation?
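The updating mechanics are simple enough to sketch in a few lines of Python. This is a minimal illustration with invented numbers, not figures from the examples above: prior odds multiplied by the Bayes factor give the posterior odds, which convert back to a probability.

```python
def bayes_update(prior, bayes_factor):
    """Update a probability estimate: prior odds * Bayes factor = posterior odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Invented numbers: a 25% prior, then evidence that is three times
# more likely if the belief is true than if it is false.
print(round(bayes_update(0.25, 3.0), 3))  # 0.5

# Evidence equally likely either way (factor of 1) changes nothing.
print(round(bayes_update(0.25, 1.0), 3))  # 0.25
```

Notice that weak evidence (a factor near 1) barely moves the estimate, while strong evidence shifts it sharply: priors are challenged gradually, exactly as the cycle described above suggests.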

Now we need to look at fat-tailed curves: Many of us are familiar with the bell curve, that nice, symmetrical wave that captures the relative frequency of so many things from height to exam scores. The bell curve is great because it’s easy to understand and easy to use. Its technical name is “normal distribution.” If we know we are in a bell curve situation, we can quickly identify our parameters and plan for the most likely outcomes.

Fat-tailed curves are different. Take a look.

[Figure: a normal (bell) curve and a fat-tailed curve compared; the fat-tailed curve’s tail extends much further before flattening out]

At first glance they seem similar enough. Common outcomes cluster together, creating a wave. The difference is in the tails. In a bell curve the extremes are predictable. There can only be so much deviation from the mean. In a fat-tailed curve there is no real cap on extreme events.

The more extreme events that are possible, the longer the tails of the curve get. Any one extreme event is still unlikely, but the sheer number of options means that we can’t rely on the most common outcomes as representing the average. The more extreme events that are possible, the higher the probability that one of them will occur. Crazy things are definitely going to happen, and we have no way of identifying when.

Think of it this way. In a bell curve type of situation, like displaying the distribution of height or weight in a human population, there are outliers on the spectrum of possibility, but the outliers have a fairly well defined scope. You’ll never meet a man who is ten times the size of an average man. But in a curve with fat tails, like wealth, the central tendency does not work the same way. You may regularly meet people who are ten, 100, or 10,000 times wealthier than the average person. That is a very different type of world.
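A quick simulation makes the contrast concrete. The distributions and parameters below are my own illustrative choices (heights modeled as a normal distribution, wealth as a Pareto distribution), not figures from the text:

```python
import random

random.seed(42)
N = 100_000

# Thin-tailed, height-like: a normal distribution (mean 175 cm, sd 7 cm).
heights = [random.gauss(175, 7) for _ in range(N)]

# Fat-tailed, wealth-like: a Pareto distribution.
wealth = [random.paretovariate(1.16) for _ in range(N)]

# The tallest person in the sample is barely above the average height...
print(max(heights) / (sum(heights) / N))  # a ratio close to 1

# ...but the richest outlier dwarfs the average wealth.
print(max(wealth) / (sum(wealth) / N))
```

The exact numbers vary with the seed, but the pattern is stable: the normal sample’s maximum stays within a few standard deviations of the mean, while the Pareto sample’s maximum routinely exceeds the mean by a factor of hundreds. That is the “very different type of world” of the fat tail.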

Let’s re-approach the example of the risks of violence we discussed in relation to Bayesian thinking. Suppose you hear that you had a greater risk of slipping on the stairs and cracking your head open than being killed by a terrorist. The statistics, the priors, seem to back it up: 1,000 people slipped on the stairs and died last year in your country and only 500 died of terrorism. Should you be more worried about stairs or terror events?

Some use examples like these to prove that terror risk is low—since the recent past shows very few deaths, why worry?[1] The problem is in the fat tails: The risk of terror violence is more like wealth, while stair-slipping deaths are more like height and weight. In the next ten years, how many events are possible? How fat is the tail?

The important thing is not to sit down and imagine every possible scenario in the tail (by definition, it is impossible) but to deal with fat-tailed domains in the correct way: by positioning ourselves to survive or even benefit from the wildly unpredictable future, by being the only ones thinking correctly and planning for a world we don’t fully understand.

Asymmetries: Finally, you need to think about something we might call “metaprobability”: the probability that your probability estimates themselves are any good.

This massively misunderstood concept has to do with asymmetries. If you look at nicely polished stock pitches made by professional investors, nearly every time an idea is presented, the investor looks their audience in the eye and states they think they’re going to achieve a rate of return of 20% to 40% per annum, if not higher. Yet exceedingly few of them ever attain that mark, and it’s not because they don’t have any winners. It’s because they get so many so wrong. They are consistently overconfident in their probabilistic estimates. (For reference, the general stock market has returned no more than 7% to 8% per annum in the United States over a long period, before fees.)

Another common asymmetry is people’s ability to estimate the effect of traffic on travel time. How often do you leave “on time” and arrive 20% early? Almost never? How often do you leave “on time” and arrive 20% late? All the time? Exactly. Your estimation errors are asymmetric, skewing in a single direction. This is often the case with probabilistic decision-making.[2]

Far more probability estimates are wrong on the “over-optimistic” side than the “under-optimistic” side. You’ll rarely read about an investor who aimed for 25% annual return rates who subsequently earned 40% over a long period of time. You can throw a dart at the Wall Street Journal and hit the names of lots of investors who aim for 25% per annum with each investment and end up closer to 10%.

The spy world

Successful spies are very good at probabilistic thinking. High-stakes survival situations tend to make us evaluate our environment with as little bias as possible.

When Vera Atkins was second in command of the French unit of the Special Operations Executive (SOE), a British intelligence organization reporting directly to Winston Churchill during World War II[3], she had to make hundreds of decisions by figuring out the probable accuracy of inherently unreliable information.

Atkins was responsible for the recruitment and deployment of British agents into occupied France. She had to decide who could do the job, and where the best sources of intelligence were. These were literal life-and-death decisions, and all were based in probabilistic thinking.

First, how do you choose a spy? Not everyone can go undercover in high-stress situations and make the contacts necessary to gather intelligence. The result of failure in France in WWII was not getting fired; it was death. What factors of personality and experience show that a person is right for the job? Even today, with advancements in psychology, interrogation, and polygraphs, it’s still a judgment call.

For Vera Atkins in the 1940s, it was very much a process of assigning weight to the various factors and coming up with a probabilistic assessment of who had a decent chance of success. Who spoke French? Who had the confidence? Who was too tied to family? Who had the problem-solving capabilities? From recruitment to deployment, her development of each spy was a series of continually updated, educated estimates.

Getting an intelligence officer ready to go is only half the battle. Where do you send them? If your information was so great that you knew exactly where to go, you probably wouldn’t need an intelligence mission. Choosing a target is another exercise in probabilistic thinking. You need to evaluate the reliability of the information you have and the networks you have set up. Intelligence is not evidence. There is no chain of command or guarantee of authenticity.

The stuff coming out of German-occupied France was at the level of grainy photographs, handwritten notes that passed through many hands on the way back to HQ, and unverifiable wireless messages sent quickly, sometimes sporadically, and with the operator under incredible stress. When deciding what to use, Atkins had to consider the relevancy, quality, and timeliness of the information she had.

She also had to make decisions based not only on what had happened, but on what possibly could. Trying to prepare for every eventuality means that spies would never leave home, but they must somehow prepare for a good deal of the unexpected. After all, their jobs are often executed in highly volatile, dynamic environments.

The women and men Atkins sent over to France worked in three primary occupations: organizers were responsible for recruiting locals, developing the network, and identifying sabotage targets; couriers moved information all around the country, connecting people and networks to coordinate activities; and wireless operators had to set up heavy communications equipment, disguise it, get information out of the country, and be ready to move at a moment’s notice. All of these jobs were dangerous. The full scope of the threats was never completely identifiable. There were so many things that could go wrong, so many possibilities for discovery or betrayal, that it was impossible to plan for them all. The average life expectancy in France for one of Atkins’ wireless operators was six weeks.

Finally, the numbers suggest an asymmetry in the estimation of the probability of success of each individual agent. Of the 400 agents that Atkins sent over to France, 100 were captured and killed. This is not meant to pass judgment on her skills or smarts. Probabilistic thinking can only get you in the ballpark. It doesn’t guarantee 100% success.

There is no doubt that Atkins relied heavily on probabilistic thinking to guide her decisions in the challenging quest to disrupt German operations in France during World War II. It is hard to evaluate the success of an espionage career, because it is a job that comes with a lot of loss. Atkins was extremely successful in that her network conducted valuable sabotage to support the allied cause during the war, but the loss of life was significant.

Conclusion

Successfully thinking in shades of probability means roughly identifying what matters, coming up with a sense of the odds, doing a check on our assumptions, and then making a decision. We can act with a higher level of certainty in complex, unpredictable situations. We can never know the future with exact precision. Probabilistic thinking is an extremely useful tool to evaluate how the world will most likely look so that we can effectively strategize.

Members can discuss this post on the Learning Community Forum

References:

[1] Taleb, Nassim Nicholas. Antifragile. New York: Random House, 2012.

[2] Bernstein, Peter L. Against the Gods: The Remarkable Story of Risk. New York: John Wiley and Sons, 1996. (This book includes an excellent discussion in Chapter 13 on the idea of the scope of events in the past as relevant to figuring out the probability of events in the future, drawing on the work of Frank Knight and John Maynard Keynes.)

[3] Helm, Sarah. A Life in Secrets: The Story of Vera Atkins and the Lost Agents of SOE. London: Abacus, 2005.

Earning Your Stripes: My Conversation with Patrick Collison [The Knowledge Project #32]

Subscribe on iTunes | Stitcher | Spotify | Android | Google Play

On this episode of the Knowledge Project, I chat with Patrick Collison, co-founder and CEO of the leading online payment processing company, Stripe. If you’ve purchased anything online recently, there’s a good chance that Stripe facilitated the transaction.

What is now an organization with over a thousand employees, handling billions of dollars of online purchases every year, began as a small side experiment while Patrick and his brother John were in college.

During our conversation, Patrick shares the details of their unlikely journey and some of the hard-earned wisdom he picked up along the way. I hope you have something handy to write with because the nuggets per minute in this episode are off the charts. Patrick was so open and generous with his responses that I’m really excited for you to hear what he has to say.

Here are just a few of the things we cover:

  • The biggest (and most valuable) mistakes Patrick made in the early days of Stripe and how they helped him get better
  • The characteristics that Patrick looks for in a new hire to fit and contribute to the Stripe company culture
  • What compelled him and his brother to move forward with the early concept of Stripe, even though on paper it was doomed to fail from the start
  • The gaps Patrick saw in the market that dozens of other processing companies were missing — and how he capitalized on them
  • The lessons Patrick learned from scaling Stripe from two employees (him and his brother) to nearly 1,000 today
  • How he evaluates the upsides and potential dangers of speculative positions within the company
  • How his Irish upbringing influenced his ability to argue and disagree without taking offense (and how we can all be a little more “Irish”)
  • The power of finding the right peer group in your social and professional circles and how impactful and influential it can be in determining where you end up.
  • The 4 ways Patrick has modified his decision-making process over the last 5 years and how it’s helped him develop as a person and as a business leader (this part alone is worth the listen)
  • Patrick’s unique approach to books and how he chooses what he’s going to spend his time reading
  • …life in Silicon Valley, Baumol’s cost disease, and so, so much more.

Patrick truly is one of the warmest, most humble, and most down-to-earth people I’ve had the pleasure to speak with, and I thoroughly enjoyed our conversation. I hope you will too!

Listen

Transcript

Normally only members of our learning community have access to transcripts; however, we pick one or two a year to make available to everyone. Here’s the complete transcript of the interview with Patrick.

If you liked this, check out other episodes of The Knowledge Project.

***

Members can discuss this podcast on the Learning Community Forum

The Nerds Were Right. Math Makes Life Beautiful.

Math has long been the language of science, engineering, and finance, but can math help you feel calm on a turbulent flight? Get a date? Make better decisions? Here are some heroic ways math shows up in our everyday life.

***

Sounds intellectually sophisticated, doesn’t it? Other than sounding really smart at after-work cocktails, what could be the benefit of understanding where math and physics permeate your life?

Well, what if I told you that math and physics can help you make better decisions by aligning with how the world works? What if I told you that math can help you get a date? Help you solve problems? What if I told you that knowing the basics of math and physics can help make you less afraid and confused? And, perhaps most important, they can help make life more beautiful. Seriously.

If you’ve ever been on a plane when turbulence has hit, you know how unnerving that can be. Most people get freaked out by it, and no matter how much we fly, most of us have a turbulence threshold. When the sides of the plane are shaking, noisily holding themselves together, and the people beside us are white with fear, hands clenched on their armrests, even the calmest of us will ponder the wisdom of jetting 38,000 feet above the ground in a metal tube moving at 1,000 km an hour.

Considering that most planes don’t fall from the sky on account of turbulence isn’t that comforting in the moment. Aren’t there always exceptions to the rule? But what if you understood why, or could explain the physics involved to the freaked-out person beside you? That might help.

In Storm in a Teacup: The Physics of Everyday Life, Helen Czerski spends a chapter describing the gas laws. Covering subjects from the making of popcorn to the deep dives of sperm whales, her amazingly accessible prose describes how the movement of gas is fundamental to the functioning of pretty much everything on earth, including our lungs. She reveals air to be not the static clear thing that we perceive when we bother to look, but rivers of molecules in constant collision, pushing and moving, giving us both storms and cloudless skies.

So when you appreciate air this way, as a continually flowing and changing collection of particles, turbulence is suddenly less scary. Planes are moving through a substance that is far from uniform. Of course, there are going to be pockets of more or less dense air molecules. Of course, they will have minor impacts on the plane as it moves through these slightly different pressure areas. Given that the movement of air can create hurricanes, it’s amazing that most flights are as smooth as they are.

You know what else is really scary? Approaching someone for a date or a job. Rejection sucks. It makes us feel awful, and therefore the threat of it often stops us from taking risks. You know the scene. You’re out at a bar with some friends. A group of potential dates is across the way. Do you risk the cringingly icky feeling of rejection and approach the person you find most attractive, or do you just throw out a lot of eye contact and hope that person approaches you?

Most men go with the former, as difficult as it is. Women will often opt for the latter. We could discuss social conditioning, with the roles that our culture expects each of us to follow. But this post is about math and physics, which actually turn out to be a lot better in providing guidance to optimize our chances of success in the intimidating bar situation.

In The Mathematics of Love, Hannah Fry explains the Gale-Shapley matching algorithm, which essentially proves that “If you put yourself out there, start at the top of the list, and work your way down, you’ll always end up with the best possible person who’ll have you. If you sit around and wait for people to talk to you, you’ll end up with the least bad person who approaches you. Regardless of the type of relationship you’re after, it pays to take the initiative.”

The math may be complicated, but the principle isn’t. Your chances of ending up with what you want — say, the guy with the amazing smile or that lab director job in California — dramatically increase if you make the first move. Fry says, “aim high, and aim frequently. The math says so.” Why argue with that?
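Fry’s point can be made concrete with the algorithm itself. Below is a compact sketch of Gale-Shapley stable matching; the names and preference lists are purely hypothetical, invented for illustration. Proposers end up with the best partner who will have them, which is exactly the asymmetry Fry describes:

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Stable matching: proposers propose in preference order; reviewers
    hold the best offer seen so far. Returns {proposer: reviewer}."""
    # Precompute each reviewer's ranking of proposers (lower index = preferred).
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                           # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])        # current partner is bumped
            engaged[r] = p
        else:
            free.append(p)                 # rejected; try the next preference
    return {p: r for r, p in engaged.items()}

# Hypothetical preference lists, purely for illustration.
suitors = {"A": ["X", "Y"], "B": ["X", "Y"]}
reviewers = {"X": ["B", "A"], "Y": ["A", "B"]}
print(gale_shapley(suitors, reviewers))  # {'B': 'X', 'A': 'Y'}
```

Here both suitors prefer X, but X prefers B, so A is rejected once and moves down the list. Because the proposers keep taking the initiative, each ends up with the best match still willing to have them.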

Understanding more physics can also free us from the panic-inducing, heart-pounding fear that we are making the wrong decisions. Not because physics always points out the right decision, but because it can lead us away from this unproductive, subjective, binary thinking. How? By giving us the tools to ask better questions.

Consider this illuminating passage from Czerski:

We live in the middle of the timescales, and sometimes it’s hard to take the rest of time seriously. It’s not just the difference between now and then, it’s the vertigo you get when you think about what “now” actually is. It could be a millionth of a second, or a year. Your perspective is completely different when you’re looking at incredibly fast events or glacially slow ones. But the difference hasn’t got anything to do with how things are changing; it’s just a question of how long they take to get there. And where is “there”? It is equilibrium, a state of balance. Left to itself, nothing will ever shift from this final position because it has no reason to do so. At the end, there are no forces to move anything, because they’re all balanced. The physical world, all of it, only ever has one destination: equilibrium.

How can this change your decision-making process?

You might start to consider whether you are speeding up the goal of equilibrium (working with force) or trying to prevent equilibrium (working against force). One option isn’t necessarily worse than the other. But the second one is significantly more work.

So then you will understand how much effort is going to be required on your part. Love that house with the period Georgian windows? Great. But know that you will have to spend more money fighting to counteract the desire of the molecules on both sides of the window to achieve equilibrium in varying temperatures than you will if you go with the modern bungalow with the double-paned windows.
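Czerski’s claim that equilibrium is the only destination can be sketched numerically. This toy model (with constants invented purely for illustration) moves a temperature a fixed fraction of the way toward the ambient value at each step, a discrete version of Newton’s law of cooling:

```python
def step(temp, ambient, k=0.1):
    """Move temp a fraction k of the way toward ambient (toward equilibrium)."""
    return temp + k * (ambient - temp)

t = 40.0                # a heated room; 0 degrees outside the window
for _ in range(50):
    t = step(t, 0.0)    # no heating applied: equilibrium wins
print(round(t, 1))      # close to 0 after 50 steps
```

Holding the room at 40 would mean re-adding the lost heat at every step. That constant effort is the extra cost of fighting equilibrium, whether through single-pane Georgian windows or anywhere else.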

And finally, curiosity. Being curious about the world helps us find solutions to problems by bringing new knowledge to bear on old challenges. Math and physics are actually powerful tools for investigating the possibilities of what is out there.

Fry writes that “Mathematics is about abstracting away from reality, not replicating it. And it offers real value in the process. By allowing yourself to view the world from an abstract perspective, you create a language that is uniquely able to capture and describe the patterns and mechanisms that would otherwise remain hidden.”

Physics is very similar. Czerski says, “Seeing what makes the world tick changes your perspective. The world is a mosaic of physical patterns, and once you’re familiar with the basics, you start to see how those patterns fit together.”

Math and physics enhance your curiosity. These subjects allow us to dive into the unknown without being waylaid by charlatans or sidetracked by the impossible. They allow us to tackle the mysteries of life one at a time, opening up the possibilities of the universe.

As Czerski says, “Knowing about some basic bits of physics [and math!] turns the world into a toybox.” A toybox full of powerful and beautiful things.

Inertia: The Force That Holds the Universe Together

Inertia is the force that holds the universe together. Literally. Without it, things would fall apart. It’s also what keeps us locked in destructive habits, and resistant to change.

***

“If it were possible to flick a switch and turn off inertia, the universe would collapse in an instant to a clump of matter,” write Peter and Neal Garneau in In the Grip of the Distant Universe: The Science of Inertia.

“…death is the destination we all share. No one has ever escaped it. And that is as it should be, because death is very likely the single best invention of life. It’s life’s change agent; it clears out the old to make way for the new … Your time is limited, so don’t waste it living someone else’s life.”

— Steve Jobs

Inertia is the force that holds the universe together. Literally. Without it, matter would lack the electric forces necessary to form its current arrangement. Inertia is counteracted by the heat and kinetic energy of moving particles; subtract that energy, and everything cools to −459.67 degrees Fahrenheit (absolute zero). Yet we know so little about inertia and how to leverage it in our daily lives.

The Basics

The German astronomer Johannes Kepler (1571–1630) coined the word “inertia.” The etymology of the term is telling. Kepler obtained it from the Latin for “unskillfulness, ignorance; inactivity or idleness.” True to its origin, inertia keeps us in bed on a lazy Sunday morning (we need to apply activation energy to overcome this state).

Inertia refers to resistance to change — in particular, resistance to changes in motion. Inertia may manifest in physical objects or in the minds of people.

We learn the principle of inertia early on in life. We all know that it takes a force to get something moving, to change its direction, or to stop it.

Our intuitive sense of how inertia works enables us to exercise a degree of control over the world around us. Learning to drive offers further lessons. Without external physical forces, a car would keep moving in a straight line in the same direction. It takes a force (energy) to get a car moving and overcome the inertia that kept it still in a parking space. Changing direction to round a corner or make a U-turn requires further energy. Inertia is why a car does not stop the moment the brakes are applied.

The heavier a vehicle is, the harder it is to overcome inertia and make it stop. A light bicycle stops with ease, while an eight-carriage passenger train needs a good mile to halt. Similarly, the faster we run, the longer it takes to stop. Running in a straight line is much easier than twisting through a crowded sidewalk, changing direction to dodge people.

Any object that can be rotated, such as a wheel, has rotational inertia. This tells us how hard it is to change the object’s speed around the axis. Rotational inertia depends on the mass of the object and its distribution relative to the axis.
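For a collection of point masses, that dependence is captured by the formula I = Σ mᵢrᵢ², which a few lines of Python (with masses and distances invented for illustration) can demonstrate:

```python
def rotational_inertia(masses, radii):
    """I = sum(m * r^2) for point masses at distances r from the axis."""
    return sum(m * r ** 2 for m, r in zip(masses, radii))

# Same total mass, distributed differently (units: kg and m).
close = rotational_inertia([1.0, 1.0], [0.5, 0.5])  # 0.5
far = rotational_inertia([1.0, 1.0], [1.0, 1.0])    # 2.0

# Doubling the distance from the axis quadruples the inertia.
print(far / close)  # 4.0
```

Because the distance is squared, where the mass sits matters more than how much of it there is: a flywheel with its mass at the rim is far harder to spin up than a solid disc of the same weight.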

The principle of inertia is formalized in Newton’s first law of motion, a fundamental principle of physics. Newton summarized it this way: “The vis insita, or innate force of matter, is a power of resisting by which every body, as much as in it lies, endeavors to preserve its present state, whether it be of rest or of moving uniformly forward in a straight line.”

When developing his first law, Newton drew upon the work of Galileo Galilei. In a 1624 letter to Francesco Ingoli, Galileo outlined the principle of inertia:

I tell you that if natural bodies have it from Nature to be moved by any movement, this can only be a circular motion, nor is it possible that Nature has given to any of its integral bodies a propensity to be moved by straight motion. I have many confirmations of this proposition, but for the present one alone suffices, which is this.

I suppose the parts of the universe to be in the best arrangement so that none is out of its place, which is to say that Nature and God have perfectly arranged their structure… Therefore, if the parts of the world are well ordered, the straight motion is superfluous and not natural, and they can only have it when some body is forcibly removed from its natural place, to which it would then return by a straight line.

In 1786, Immanuel Kant elaborated further: “All change of matter has an external cause. (Every body remains in its state of rest or motion in the same direction and with the same velocity, if not compelled by an external cause to forsake this state.) … This mechanical law can only be called the law of inertia (lex inertiæ)….”

Now that we understand the principle, let’s look at some of the ways we can understand it better and apply it to our advantage.

Decision Making and Cognitive Inertia

We all experience cognitive inertia: the tendency to stick to existing ideas, beliefs, and habits even when they no longer serve us well. Few people are truly able to revise their opinions in light of disconfirmatory information. Instead, we succumb to confirmation bias and seek out verification of existing beliefs. It’s much easier to keep thinking what we’ve always been thinking than to reflect on the chance that we might be wrong and update our views. It takes work to overcome cognitive dissonance, just as it takes effort to stop a car or change its direction.

When the environment changes, clinging to old beliefs can be harmful or even fatal. Whether we fail to perceive the changes or fail to respond to them, the result is the same. Even when it’s obvious to others that we must change, it’s not obvious to us. It’s much easier to see something when you’re not directly involved. If I ask you how fast you’re moving right now, you’d likely say zero, but you’re moving about 67,000 miles per hour around the sun. Perspective is everything, and the perspective that matters is the one that most closely lines up with reality.

“Sometimes you make up your mind about something without knowing why, and your decision persists by the power of inertia. Every year it gets harder to change.”

— Milan Kundera, The Unbearable Lightness of Being

Cognitive inertia is the reason that changing our habits can be difficult. The default is always the path of least resistance, which is easy to accept and harder to question. Consider your bank, for example. Perhaps you know that there are better options at other banks. Or perhaps you’ve had issues with your bank that took ages to sort out. Yet very few people actually switch banks, and many of us stick with the first account we ever opened. After all, moving away from the status quo would require a lot of effort: researching alternatives, transferring balances, closing accounts, etc. And what if something goes wrong? Sounds risky. The switching costs are high, so we stick to the status quo.

Sometimes inertia helps us. After all, questioning everything would be exhausting. But in many cases, it is worthwhile to overcome inertia and set something in motion, or change direction, or halt it.

The important thing about inertia is that it is only the initial push that is difficult. After that, progress tends to be smoother. Ernest Hemingway had a trick for overcoming inertia in his writing. Knowing that getting started was always the hardest part, he chose to finish work each day at a point where he had momentum (rather than when he ran out of ideas). The next day, he could pick up from there. In A Moveable Feast, Hemingway explains:

I always worked until I had something done and I always stopped when I knew what was going to happen next. That way I could be sure of going on the next day.

Later on in the book, he describes another method, which was to write just one sentence:

Do not worry. You have always written before and you will write now. All you have to do is write one true sentence. Write the truest sentence that you know. So, finally I would write one true sentence and go on from there. It was easy then because there was always one true sentence that I knew or had seen or had heard someone say. If I started to write elaborately, or like someone introducing or presenting something, I found that I could cut that scrollwork or ornament out and throw it away and start with the first true simple declarative sentence I had written.

We can learn a lot from Hemingway’s approach to tackling inertia and apply it in areas beyond writing. As with physics, the momentum from getting started can carry us a long way. We just need to muster the required activation energy and get going.

Status Quo Bias: “When in Doubt, Do Nothing”

Cognitive inertia also manifests in the form of status quo bias. When making decisions, we are rarely rational. Faced with competing options and information, we often opt for the default because it’s easy. Doing something other than what we’re already doing requires mental energy that we would rather conserve. In many areas, this helps us avoid decision fatigue.

Many of us eat the same meals most of the time, wear similar outfits, and follow routines. This tendency usually serves us well. But the status quo is not necessarily the optimum solution. Indeed, it may be outright harmful or at least unhelpful if something has changed in the environment or we want to optimize our use of time.

“The great enemy of any attempt to change men’s habits is inertia. Civilization is limited by inertia.”

— Edward L. Bernays, Propaganda

In a paper entitled “If you like it, does it matter if it’s real?” Felipe De Brigard[1] offers a powerful illustration of status quo bias, built on one of philosophy’s best-known thought experiments: Robert Nozick’s “experience machine.” Nozick asked us to imagine that scientists have created a virtual reality machine capable of simulating any pleasurable experience. We are offered the opportunity to plug ourselves in and live out the rest of our lives in permanent but simulated enjoyment. (The experience machine would later inspire the Matrix film series.) Presented with the thought experiment, most people balk and claim they would prefer reality. But what if we flip the narrative? De Brigard suspected that we reject the experience machine not because it is fake, but because it contradicts the status quo, the life we are accustomed to.

In an experiment, he asked participants to imagine themselves woken by the doorbell on a Saturday morning. A man in black, introducing himself as Mr. Smith, is at the door. He claims to have vital information. Mr. Smith explains that there has been an error and you are in fact connected to an experience machine. Everything you have lived through so far has been a simulation. He offers a choice: stay plugged in, or return to an unknown real life. Unsurprisingly, far fewer people chose to return to reality than Nozick’s original framing would predict: once the simulation is the status quo, leaving it is what feels costly. The aversive element is not the experience machine itself, but the departure from the status quo it represents.

Conclusion

Inertia is a pervasive, problematic force. It’s the pull that keeps us clinging to old ways and prevents us from trying new things. But as we have seen, it is also a necessary one. Without it, the universe would collapse. Inertia is what enables us to maintain patterns of functioning, sustain relationships, and get through the day without questioning everything. We can overcome inertia much like Hemingway did — by recognizing its influence and taking the necessary steps to create that all-important initial momentum.

***


End Notes

[1] Felipe De Brigard, “If you like it, does it matter if it’s real?,” Philosophical Psychology. https://www.tandfonline.com/doi/abs/10.1080/09515080903532290