Category: Mental Models

The Surprising Power of The Long Game

It’s easy to overestimate the importance of luck in success and underestimate the importance of investing in success every single day. Too often, we convince ourselves that success was just luck. We tell ourselves the schoolteacher who left millions was just lucky. No. She wasn’t. She was playing a different game than you were. She was playing the long game.

The long game isn’t particularly notable and sometimes it’s not even noticeable. It’s boring. But when someone chooses to play the long game from an early age, the results can be extraordinary. The long game changes how you conduct your personal and business affairs.

There is an old saying that I think of often, but I’m not sure where it comes from: If you do what everyone else is doing, you shouldn’t be surprised to get the same results everyone else is getting.

Ignoring the effect of luck on outcomes — the proverbial lottery ticket — doing what everyone else is doing pretty much ensures that you’re going to be average. Not average in the world, but average relative to people in similar circumstances. There are a lot of ways not to be average; one of them is the tradeoff between the long game and the short game.

What starts small compounds into something more. The longer you play the long game, the easier it is to play and the greater the rewards. The longer you play the short game, the harder it becomes to change and the bigger the bill facing you when you do want to change.

The Short Game

The short game is putting off anything that seems hard in favor of something that seems easy or fun. The short game offers visible and immediate benefits. The short game is seductive.

  • Why do your homework when you can go out and play?
  • Why save up to pay cash for a phone when you can put it on your credit card?
  • Why go to the gym when you can go drinking with your friends?
  • Why invest in your relationship with your partner today when you can work a little bit extra in the office?
  • Why learn something boring that doesn’t change when you can learn something sexy that impresses people?
  • Why bust your butt at work to do the work before the meeting when you can read the executive summary and pretend like everyone else?

The effects of the short game multiply the longer you play. On any given day the impact is small but as days turn into months and years the result is enormous. People who play the short game don’t realize the costs until they become too large to ignore.

The problem with the short game is that the costs are small and never seem to matter much on any given day. Doing your homework today won’t give you straight A’s. Saving $5 today won’t make you a millionaire. Going to the gym and eating healthy today won’t make you fit. Reading a book won’t make you smart. Going to sleep on time tonight won’t make you healthier tomorrow. Sure, we might try these things when we’re motivated, but since the results are not immediate, we revert to the short game.

As the weeks turn into months and the months into years, the short game compounds into disastrous results. It’s not the one-day tradeoff that matters but its accumulation.

Playing the long game means suffering a little today. And why would we want to suffer today when we can suffer tomorrow? But if our intention is always to change tomorrow, then tomorrow never comes. All we have is today.

The Long Game

The long game is the opposite of the short game: it means paying a small price today to make tomorrow’s tomorrow easier. If we can do this long enough to see the results, it feeds on itself.

From the outside, the long game looks pretty boring:

  • Saving money and investing it for tomorrow
  • Leaving the party early to go get some sleep
  • Investing time in your relationship today so you have a foundation when something happens
  • Doing your homework before you go out to play
  • Going to the gym rather than watching Netflix

… and countless other examples.

In its simplest form, the long game isn’t really debatable. Everyone agrees, for example, we should spend less than we make and invest the difference. Playing the long game is a slight change, one that seems insignificant at the moment, but one that becomes the difference between financial freedom and struggling to make next month’s rent.

The first step to the long game is the hardest. The first step is visibly negative. You have to be willing to suffer today in order to not suffer tomorrow. This is why the long game is hard to play. People rarely see the small steps when they’re looking for enormous outcomes, but deserving enormous outcomes is mostly the result of a series of small steps that culminate into something visible.

Conclusion

In everything you do, you’re either playing a short-term or a long-term game. You can’t opt out, and you can’t play a long-term game in everything; you need to pick what matters to you. But in everything you do, time amplifies the difference between long- and short-term games. The question you need to think about is when and where to play a long-term game. A good place to start is with things that compound: knowledge, relationships, and finances.

 

This article is an expansion of something I originally touched on here

Winner Takes it All: How Markets Favor the Few at the Expense of the Many

Markets tend to favor unequal distributions of market share and profits, with a few leaders emerging in any industry. Winner-take-all markets are hard to disrupt and suppress the entry of new players by locking in market share for leading players.

***

In almost any market, crowds of competitors fight for business within their niche. But over time, with few exceptions, a small number of companies come to dominate the industry.

These are the names we all know. The logos we see every day. The brands which shape the world with every decision they make. Even those which are not household names have a great influence on our lives. Operating behind the scenes, they quietly grow more powerful each year, often sowing the seeds of their own destruction in the process.

A winner-take-all market doesn’t mean there is only one company in the market. Rather, when we say a winner takes all, what we mean is that a single company receives the majority of available profits. A few others have at best a modest share. The rest fight over a minuscule remnant and tend not to survive long.

In a winner-take-all market, the winners have tremendous power to dictate outcomes. Winner-take-all markets occur in many different areas. We can apply the concept to all situations which involve unequal distributions.

Unequal Distribution

As a general rule, resources are never distributed evenly among people. In almost every situation, a small number of people or organizations are the winners.

Most of the books sold each year are written by a handful of authors. Most internet traffic is to a few websites. The top 100 websites get more traffic than ranks 100-999 combined (welcome to power laws). Most citations in any field refer to the same few papers and researchers. Most clicks on Google searches are on the first result. Each of these is an instance of a winner-take-all market.

Wealth is a prime example of this type of market. The Pareto Principle states that in a given nation, 20% of the people own 80% of the wealth (the actual figures are closer to 15% and 85%). However, the Pareto Principle goes deeper than that. We can look at the richest 20%, then calculate the wealth of the richest 20% of that group. Once again, the Pareto Principle applies. So roughly 4% own 64% of the wealth. Keep repeating that calculation and we end up with about 9 people. By some estimates, this tiny group has as much wealth as the poorest half of the world.
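The recursive arithmetic above is easy to verify with a few lines of code. (The 80/20 split and the number of steps are the illustrative assumptions from the text, not measured figures.)

```python
def pareto_recurse(top_fraction=0.2, wealth_fraction=0.8, steps=2):
    """Apply the 80/20 rule to the top slice repeatedly: the richest 20%
    of the richest 20% hold 80% of 80% of the wealth, and so on."""
    population_share, wealth_share = 1.0, 1.0
    for _ in range(steps):
        population_share *= top_fraction
        wealth_share *= wealth_fraction
    return population_share, wealth_share

people, wealth = pareto_recurse(steps=2)
print(f"The top {people:.0%} of people hold {wealth:.0%} of the wealth")
# → The top 4% of people hold 64% of the wealth
```

Raising `steps` keeps shrinking the group while its share of the total stays large, which is exactly the power-law pattern the article describes.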

“With limited time or opportunity to experiment, we intentionally narrow our choices to those at the top.”

— Seth Godin

The Perks of Being the Best

There are tremendous benefits to being the best in any particular area. Top performers might be only slightly more skilled than the people one level below them, yet they receive an exponential payoff. A small difference in relative performance—an athlete who can run 100 meters a few hundredths of a second faster, a leader who can make better decisions, an opera singer who can go a little higher—can mean the difference between a lucrative career and relative obscurity. The people at the tops of their fields get it all. They are the winners in that particular market. And once someone is regarded as the best, they tend to retain that status. It takes a monumental effort for a newcomer to rise to such a position. Every day new people do make it to the top, but it’s a lot easier to stay there than to get there.

Top performers don’t just earn the most. They also tend to receive the majority of media coverage and win most awards. They have the most leverage when it comes to choosing their work. These benefits are exponential, following a power law distribution. A silver medalist might get 10 times the benefits the bronze medalist does. But the gold medalist will receive 10 times the benefits of the silver. If a company is risking millions over a lawsuit, they will want the best possible lawyer, no matter the cost. And a surgeon who is 10% better than average can charge more than 10% higher fees. When someone or something is the best, we hear about it. The winners take all the attention. It’s one reason why the careers of Nobel Prize winners tend to go downhill after they receive the award: devoting their time to the media, giving talks, or writing books becomes too lucrative to pass up, and producing original research falls by the wayside.

Leverage

One reason the best are rewarded more now than ever is leverage. Up until recently, if you were a nanosecond faster than someone else, there was no real advantage. Now there is. Small differences in performance translate into large differences in real-world benefits. A gold medalist in the Olympics, even one who wins by a nanosecond, is disproportionately rewarded for a very small edge.

Now we all live in a world of leverage, through capital, technology, and productivity. Leveraged workers can outperform unleveraged ones by orders of magnitude. When you’re leveraged, judgment becomes far more important. That small difference in ability can be put to better use. Software engineers can create billions of dollars of value through code. Ten coders working 10 times harder, but thinking slightly less effectively, will have little to show for it. Just as with winner-take-all markets, the inputs don’t match the outputs.

Feedback Loops

Economist Sherwin Rosen looked at unequal distribution in The Economics of Superstars. Rosen found that the demand for classical music and live comedy is high and continues to grow. Yet each area only employs about two hundred full-time performers. These top-performing comedians and musicians take most of the market. Meanwhile, thousands of others struggle for any recognition. Performers regarded as second best within a field earn considerably less than the top performers, even though the average person cannot discern any difference.

In Success and Luck, Robert H. Frank explains the self-perpetuating nature of winner-take-all markets:

Is the Mona Lisa special? Is Kim Kardashian? They’re both famous, but sometimes things are famous just for being famous. Although we often try to explain their success by scrutinising their objective qualities, they are in fact often no more special than many of their less renowned counterparts…Success often results from positive feedback loops that amplify tiny initial variations into enormous differences in final outcomes.

Winner-take-all markets are increasingly dictated by feedback loops. Feedback loops develop when the output becomes the input. Consider books. More people will buy a best-selling book because it’s a best-selling book. More people will listen to a song that tops the charts. More people will go to see an Oscar-winning film. These feedback loops serve to magnify initial luck or manipulation. Some writers will purchase thousands of copies of their own book to push it onto best-seller lists. Once it makes it onto the list, the feedback loop will begin and possibly keep it there longer than it merits.1
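A toy simulation makes the dynamic concrete. Every parameter here (two books, a 5% initial edge, the number of buyers, the amplification exponent) is invented purely for illustration; the shape of the result is the point: the output of one round becomes the input to the next, and a tiny head start snowballs.

```python
def feedback_loop(sales, new_buyers=1000, rounds=40, amplification=3):
    """Each round, new buyers choose a book with probability proportional
    to its current sales raised to a power: popularity feeds on popularity."""
    for _ in range(rounds):
        weights = [s ** amplification for s in sales]
        total = sum(weights)
        sales = [s + new_buyers * w / total for s, w in zip(sales, weights)]
    return sales

# Two near-identical books; the first starts with a tiny edge.
final = feedback_loop([10.5, 10.0])
print(round(final[0] / sum(final), 2))  # the slightly luckier book ends up with nearly all the sales
```

With any amplification above 1, the early leader's share of each new round of buyers exceeds its current share, so the gap only widens — the deterministic skeleton of the luck-amplifying loops Frank describes.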

It’s hard to establish what sets off these feedback loops. In many cases, the answer is simple: luck. Although many people and organizations create narratives to explain their achievements, luck plays a large role. This is a combination of hindsight bias and the narrative fallacy. In retrospect, becoming the winner in the market seems inevitable. In truth, luck plays a substantial role in the creation of winner-take-all markets. A combination of timing, location and connections serves to create winners. Their status is never inevitable, no matter what they might tell those who ask.

In some cases, governments deliberately strive to create positive feedback loops. Drug patents are one example. These create a powerful incentive for companies to invest in research and development. Releasing a new, patented drug is a lucrative enterprise. As the only company in that particular market, a company can set the price to whatever it wishes. Until the patent runs out, that company is the winner. This is exactly how the market plays out. In 2016, the highest-grossing drug company earned $71 billion. The three runners up each earned around $50 billion. From there on, the other drug companies have a comparatively small share of the market.

Profit enables companies to invest in more research and development, pay employees more, and invest in their communities. A positive feedback loop forms. Talented researchers join successful teams. They gather valuable data. Developing new drugs becomes easier. Drug companies gain greater and greater market power over time. A few winners end up with almost total control. They become the names we trust and hold their position, absorbing any risks or scandals. New effective drugs benefit society on the whole, improving our well-being. This winner-take-all market has its upsides. Issues emerge when patent holders set prices above the means of the people who need the drugs most.

Once the patent runs out on a drug (generally after 12 years) any other firm can produce an identical product. Prices soon fall as other companies enter the market. The feedback loop breaks, and the winner no longer takes all. Even so, the former winner will retain a large share of the market. People tend to be unwilling to switch to a new brand of drug, even if it has the same effects.

Ironically, winner-take-all markets tend to perpetuate themselves by attracting more losers. When we look at founders in Silicon Valley or actors in LA, we don’t see the failures. Survivorship bias means we only see those who succeed. Attracted by the thought of winning, growing numbers of people flock to try their luck in the market. Most fail, overconfident and misled. The rewards become even more concentrated. More people are attracted and the cycle continues.

DeBeers Diamonds

In the market for diamonds, there is one main winner: DeBeers. This international corporation controls most of the global diamond market, including mining, trading and retail. For around a century, DeBeers had a complete monopoly. Diamonds are a scarce Veblen good with minimal practical use. The value depends on our perception.

Prior to the late 19th century, the global production of diamonds totaled a couple of pounds a year. Demand barely existed, so no one had much interest in supplying it. However, the discovery of several large mines increased production from pounds to tons. Those who stood to profit recognized that diamonds have no intrinsic value. They needed to create a perception of scarcity. DeBeers began taking control of the market in 1888, quickly forming a monopoly. It had an ambitious vision for the diamond market. DeBeers wanted to promote the stones as irreplaceable. Other gemstones have basically the same properties—hard, shiny rocks which make nice jewelry. As Edward Jay Epstein wrote in 1982:

The diamond invention is far more than a monopoly for fixing diamond prices; it is a mechanism for converting tiny crystals of carbon into universally recognized tokens of wealth, power, and romance. To achieve this goal, De Beers had to control demand as well as supply. Both women and men had to be made to perceive diamonds not as marketable precious stones but as an inseparable part of courtship and married life.

Their ensuing role as winners in the diamond market is all down to clever marketing. Slogans such as “diamonds are forever” have cemented the monopoly. Note that the slogan applies to all diamonds, not their particular brand. Imagine if Apple made adverts declaring “phones are forever”. Or if McDonald’s made adverts saying “fast food is forever.” That’s how powerful DeBeers is. It can promote the entire market, knowing it will be the one to benefit. Throughout the twentieth century, DeBeers gave famous actresses diamond rings, pitched stories featuring the stones to magazines and incorporated their products into images of the British royal family. As their advertising agency, N. W. Ayer, explained, “There was no direct sale to be made. There was no brand name to be impressed on the public mind. There was simply an idea—the eternal emotional value surrounding the diamond…. The substantial diamond gift can be made a more widely sought symbol of personal and family success—an expression of socioeconomic achievement.”

The Impact of Technology

In our interconnected, globalized world, a few large firms continue to grow in power. Modern technology enables firms like Walmart to open branches all over the world. Without the barriers once associated with communication and supply networks, large firms can take over the local market anywhere they open. Small businesses have a very hard time competing.

When a new market appears, entrepreneurs rush to create products, services or technology. There is a flurry of activity for a few months or years. With time, customers gravitate toward the two or three companies they prefer. Starved of revenue, the other competitors shut down. Technology has exacerbated the growth of winner-take-all markets.

We are seeing this at the moment with ride-hailing services. In a once-crowded marketplace, two giant winners remain to take all the profits. It’s hard to say exactly why Uber and Lyft triumphed over numerous similar services. But it’s unlikely they will lose their market share anytime soon.

The same occurred with search engines. Google has now eliminated any meaningful competition. As their profits soar each year, even their nearest competitors—Yahoo, Bing—struggle. We can see from the example of Google how winner-take-all markets can self-perpetuate. Google is on top, so it gets the best employees, and has high research and development budgets. Google can afford to take risks and accumulate growing mountains of user data. Any losses or failures get absorbed. Consistent growth holds the trust of shareholders. Google essentially uses a form of Nassim Taleb’s barbell strategy. As Taleb writes in The Black Swan:

True, the Web produces acute concentration. A large number of users visit just a few sites, such as Google, which, at the time of this writing, has total market dominance. At no time in history has a company grown so dominant so quickly—Google can service people from Nicaragua to southwestern Mongolia to the American West Coast, without having to worry about phone operators, shipping, delivery, and manufacturing. This is the ultimate winner-take-all case study. People forget, though, that before Google, Alta Vista dominated the search-engine market. I am prepared to revise the Google metaphor by replacing it with a new name for future editions of this book.

The role of data is particularly important. The more data a company has on its customers, the better equipped it is to release new products and market existing ones. Facebook has a terrifying amount of information about its users, so it can keep updating the social network to make it addictive and to lock people in. Newer or less popular social networks are working with less data and cannot compete for attention. A positive feedback loop forms for the entrenched companies. Facebook has a lot of data, and it can use that data to make the site more appealing. In turn, this more attractive Facebook leads people to spend more time clicking and generates even more data.2

Winner-take-all markets can be the result of lock-in. When the costs of switching between one supplier and another are too high to be worthwhile, consumers become locked in. Microsoft is a winner in the software market because most of the world is locked in to their products. As it stands, it would be nearly impossible for anyone to erode the market share Windows possesses. As Windows is copyrighted, no one can replicate it. Threatened by inconvenience, we become loyal to avoid incurring switching costs.

Marc Andreessen described the emergence of winner-take-all technology markets in 2013:

In normal markets, you can have Pepsi and Coke. In technology markets, in the long run, you tend to only have one…. The big companies, though, in technology tend to have 90 percent market share. So we think that generally, these are winner-take-all markets. Generally, number one is going to get like 90 percent of the profits. Number two is going to get like 10 percent of the profits, and numbers three through 10 are going to get nothing.

Leaders in certain areas are becoming winners and taking all because they can leverage small advantages, thanks to technology. In the past, an amazing teacher, singer, accountant, artist or stock broker could only reach a small number of people in their community. As their status grew, they would often charge more and choose to see fewer people, meaning their expertise became even more scarce. Now, however, those same top performers can reach a limitless audience through blogs, podcasts, videos, online courses and so on.

Think of it another way. For most of history we were limited to learning from the people in our community. Say you wanted to learn how to draw. You had access to your community art teacher. The odds they were the best art teacher in the world were extremely slim. Now, however, you can go on the internet and access the best teacher in the world.

For most of history, comedians (or rather, their predecessors such as vaudeville performers) and musicians performed live. There was a distinct limit to how many shows they could do a year and how many people could attend each. So, there were many people at the top of each field, as many as needed to meet audience demand for performers. Now that we are no longer confined to live performances, we gravitate towards a few exceptional entertainers. Or consider the example of sports. Athletes were paid far more modest wages until TV allowed them to leverage their skills and reach millions of homes.

Having more information available offers us further incentives to pay attention only to the winners. Online, we can filter by popularity, look at aggregate reviews, select the first search option, or go with other people’s preferences. With too many options, we google ‘best Chinese restaurant near me’ or ‘best horror film 2016.’ Sorting through all the options is too time-consuming, so the best stay as the best.

“In order to win, you must first survive.”

— Warren Buffett

The Downsides of Winner-Take-All Markets

There are some serious downsides to winner-take-all markets. Economic growth and innovation rely on the emergence of new startups and entrepreneurs with disruptive ideas. When the gale of creative destruction stops blowing, industries stagnate. When a handful of winners control a market, they may discourage newcomers who cannot compete with established giants’ budgets and power over the industry. According to some estimates, startups are failing faster and more frequently than in the past. Investors prefer established companies with secure short-term returns. Even when a startup succeeds, it tends to get acquired by a larger company. Apple, Amazon, Facebook, and others regularly acquire promising startups.

Winner-take-all markets tend to discourage collaboration and cooperation. The winners have incentive to keep their knowledge and new data to themselves. Patents and copyright are liberally used to suppress any serious competition. Skilled workers are snapped up the second they leave education, and have powerful inducements to stay working for the winners. The result is a prisoner’s dilemma-style situation. Although collaboration may be best for everyone, each individual organization benefits from being selfish. As a result, no one collaborates, they just compete.

The result is what Warren Buffett calls a “moat”—a substantial barrier against competition. Business moats come in many forms. Apple’s superior brand identity is a moat, for example. It has taken enormous investments of resources to build and newer companies cannot compete. No number of Facebook adverts or billboards could replicate the kind of importance Apple has in our cultural consciousness. For other winners, the moat could be the ability to provide a product or service at a lower price than competitors, as with Amazon and Alibaba. Each of these has a great deal of market power and can influence prices. If Amazon drops their prices, competitors have no choice but to do the same and make less profit. If Apple decides to raise their prices, we are unlikely to buy our phones and laptops elsewhere and will pay a premium. As Greg Mankiw writes in Principles of Microeconomics, “Market power can cause markets to be inefficient because it keeps the price and quantity away from the equilibrium of supply and demand.”

Luckily for us, winners tend to sow the seeds of their own destruction—but we’ll save that for another article.


Footnotes
  • 1

    For related thoughts see activation energy and escape velocity.

  • 2

    An argument could be made that data should be anonymized and made available to the public as a means to ensure competition.

Predicting the Future with Bayes’ Theorem

In a recent podcast, we talked with professional poker player Annie Duke about thinking in probabilities, something good poker players do all the time. At the poker table or in life, it’s really useful to think in probabilities versus absolutes based on all the information you have available to you. You can improve your decisions and get better outcomes. Probabilistic thinking leads you to ask yourself, how confident am I in this prediction? What information would impact this confidence?

Bayes’ Theorem

Bayes’ theorem is an accessible way of integrating probability thinking into our lives. Thomas Bayes was an English minister in the 18th century, whose most famous work, “An Essay towards Solving a Problem in the Doctrine of Chances,” was brought to the attention of the Royal Society in 1763—two years after his death—by his friend Richard Price. The essay did not contain the theorem as we now know it, but had the seeds of the idea. It looked at how we should adjust our estimates of probabilities when we encounter new data that influence a situation. Later development by French scholar Pierre-Simon Laplace and others helped codify the theorem and develop it into a useful tool for thinking.

Knowing the exact math of probability calculations is not the key to understanding Bayesian thinking. More critical is your ability and desire to assign probabilities of truth and accuracy to anything you think you know, and then being willing to update those probabilities when new information comes in. Here is a short example, found in Investing: The Last Liberal Art, of how it works:

Let’s imagine that you and a friend have spent the afternoon playing your favorite board game, and now, at the end of the game, you are chatting about this and that. Something your friend says leads you to make a friendly wager: that with one roll of the die from the game, you will get a 6. Straight odds are one in six, a 16 percent probability. But then suppose your friend rolls the die, quickly covers it with her hand, and takes a peek. “I can tell you this much,” she says; “it’s an even number.” Now you have new information and your odds change dramatically to one in three, a 33 percent probability. While you are considering whether to change your bet, your friend teasingly adds: “And it’s not a 4.” With this additional bit of information, your odds have changed again, to one in two, a 50 percent probability. With this very simple example, you have performed a Bayesian analysis. Each new piece of information affected the original probability, and that is Bayesian [updating].
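The whole exercise can be written as a few lines of code: represent what you know as the set of still-possible outcomes, and let each piece of evidence shrink that set. (A sketch, using exact fractions to keep the arithmetic clean.)

```python
from fractions import Fraction

def update(outcomes, condition):
    """Keep only the outcomes consistent with the new evidence."""
    return [o for o in outcomes if condition(o)]

def prob_of_six(outcomes):
    """P(the roll is a 6), given the outcomes still possible."""
    return Fraction(sum(1 for o in outcomes if o == 6), len(outcomes))

outcomes = [1, 2, 3, 4, 5, 6]
print(prob_of_six(outcomes))                      # 1/6 before any evidence

outcomes = update(outcomes, lambda o: o % 2 == 0)
print(prob_of_six(outcomes))                      # 1/3 after "it's an even number"

outcomes = update(outcomes, lambda o: o != 4)
print(prob_of_six(outcomes))                      # 1/2 after "it's not a 4"
```

Each filtering step is a Bayesian update: the probability is always computed relative to what remains possible, not the original six faces.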

Both Nate Silver and Eliezer Yudkowsky have written about Bayes’ theorem in the context of medical testing, specifically mammograms. Imagine you live in a country with 100 million women under 40. Past trends have revealed that there is a 1.4% chance of a woman under 40 in this country getting breast cancer—so roughly 1.4 million women.

Mammograms will detect breast cancer 75% of the time. They will give out false positives—saying a woman has breast cancer when she actually doesn’t—about 10% of the time. At first, you might focus just on the mammogram numbers and think that a 75% success rate means a positive result is bad news. Let’s do the math.

If all the women under 40 get mammograms, the 10% false positive rate will give roughly 10 million women who don’t have breast cancer the news that they do. But because you know the first statistic—that only 1.4 million women under 40 actually get breast cancer, of whom the test catches about 75%—you know that roughly 90% of the women who tested positive are not actually going to have breast cancer!
That’s a lot of needless worrying, which leads to a lot of needless medical care. To remedy this poor understanding and make better decisions about using mammograms, we absolutely must consider prior knowledge when we look at the results, and try to update our beliefs with that knowledge in mind.
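Plugging the example’s figures into Bayes’ theorem directly shows just how weak a positive result is on its own. (A sketch; the 1.4%, 75%, and 10% numbers are the ones used in the example above.)

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    # Total probability of testing positive: true positives + false positives.
    p_positive = prior * sensitivity + (1 - prior) * false_positive_rate
    return prior * sensitivity / p_positive

p = posterior(prior=0.014, sensitivity=0.75, false_positive_rate=0.10)
print(f"P(cancer | positive mammogram) = {p:.1%}")
# → P(cancer | positive mammogram) = 9.6%
```

A test that sounds "75% accurate" leaves the posterior probability under 10%, because the prior is so low: the false positives from the healthy majority swamp the true positives.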

Weigh the Evidence

Often we ignore prior information, simply called “priors” in Bayesian-speak. We can blame this habit in part on the availability heuristic—we focus on what’s readily available. In this case, we focus on the newest information and the bigger picture gets lost. We fail to adjust the probability of old information to reflect what we have learned.

The big idea behind Bayes’ theorem is that we must continuously update our probability estimates on an as-needed basis. In his book The Signal and the Noise, Nate Silver gives a contemporary example, reminding us that new information is often most useful when we put it in the larger context of what we already know:

Bayes’ theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Skeptics react with glee, while true believers dismiss the new information.

A better response is to use Bayes’ theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming—but only a little.

The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’ theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way.

Limitations of the Bayesian

Don’t walk away thinking the Bayesian approach will enable you to predict everything! In addition to seeing the world as an ever-shifting array of probabilities, we must also remember the limitations of inductive reasoning. A high probability of something being true is not the same as saying it is true. A great example of this is from Bertrand Russell’s The Problems of Philosophy:

A horse which has been often driven along a certain road resists the attempt to drive him in a different direction. Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.

In the final analysis, though, picking up Bayesian reasoning can truly change your life, as observed in this Big Think video by Julia Galef of the Center for Applied Rationality:

After you’ve been steeped in Bayes’ rule for a little while, it starts to produce some fundamental changes to your thinking. For example, you become much more aware that your beliefs are grayscale. They’re not black and white. You have levels of confidence in your beliefs about how the world works that are less than 100 percent but greater than zero percent. Even more importantly, as you go through the world and encounter new ideas and new evidence, that level of confidence fluctuates as you encounter evidence for and against your beliefs.

So be okay with uncertainty, and use it to your advantage. Instead of holding on to outdated beliefs by rejecting new information, take in what comes your way through a system of evaluating probabilities.

Bayes’ Theorem is part of the Farnam Street latticework of mental models. Still Curious? Read Bayes and Deadweight: Using Statistics to Eject the Deadweight From Your Life next. 


The Disproportional Power of Anecdotes

Humans, it seems, have an innate tendency to overgeneralize from small samples. How many times have you been caught in an argument where the only proof offered is anecdotal? Perhaps your co-worker saw this bratty kid make a mess in the grocery store while the parents appeared to do nothing. “They just let that child pull things off the shelves and create havoc! My parents would never have allowed that. Parents are so permissive now.” Hmm. Is it true that most parents commonly allow young children to cause trouble in public? It would be a mistake to assume so based on the evidence presented, but a lot of us would go with it anyway. Your co-worker did.

Our propensity to confuse the “now” with “what always is,” as if the immediate world before our eyes consistently represents the entire universe, leads us to bad conclusions and bad decisions. We don’t bother asking questions and verifying validity. So we make mistakes and allow ourselves to be easily manipulated.

Political polling is a good example. It’s actually really hard to design and conduct a good poll. Matthew Mendelsohn and Jason Brent, in their article “Understanding Polling Methodology,” say:

Public opinion cannot be understood by using only a single question asked at a single moment. It is necessary to measure public opinion along several different dimensions, to review results based on a variety of different wordings, and to verify findings on the basis of repetition. Any one result is filled with potential error and represents one possible estimation of the state of public opinion.

This makes sense. But it’s amazing how often we forget.

We see a headline screaming out about the state of affairs and we dive right in, instant believers, without pausing to question the validity of the methodology. How many people did they sample? How did they select them? Most polling aims for random sampling, but there is pre-selection at work immediately, depending on the medium the pollsters use to reach people.

Truly random samples of people are hard to come by. In order to poll people, you have to be able to reach them. The more complicated this is, the more expensive the poll becomes, which acts as a deterrent to thoroughness. The internet can offer high accessibility for a relatively low cost, but it’s a lot harder to verify the integrity of the demographics. And if you go the telephone route, as a lot of polling does, are you already distorting the true randomness of your sample? Are the people who answer “unknown” numbers already different from those who ignore them?

Polls are meant to generalize larger patterns of behavior based on small samples. You need to put a lot of effort in to make sure that sample is truly representative of the population you are trying to generalize about. Otherwise, erroneous information is presented as truth.

Why does this matter?

It matters because generalization is a widespread human bias, which means a lot of our understanding of the world actually is based on extrapolations made from relatively small sample sizes. Consequently, our individual behavior is shaped by potentially incomplete or inadequate facts that we use to make the decisions that are meant to lead us to success. This bias also shapes a fair degree of public policy and government legislation. We don’t want people who make decisions that affect millions to be dependent on captivating bullshit. (A further concern is that once you are invested, other biases kick in).

Some really smart people are perpetual victims of the problem.

Joseph Henrich, Steven J. Heine, and Ara Norenzayan wrote an article called “The weirdest people in the world?” It’s about how many scientific psychology studies use college students who are predominantly Western, Educated, Industrialized, Rich, and Democratic (WEIRD), and then draw conclusions about the entire human race from these outliers. They reviewed scientific literature from domains such as “visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans.”

Uh-oh. This is a double whammy. “It’s not merely that researchers frequently make generalizations from a narrow subpopulation. The concern is that this particular subpopulation is highly unrepresentative of the species.”

This is why it can be dangerous to make major life decisions based on small samples, like anecdotes or a one-off experience. The small sample may be an outlier in the greater range of possibilities. You could be correcting for a problem that doesn’t exist or investing in an opportunity that isn’t there.

This tendency of mistaken extrapolation from small samples can have profound consequences.

Are you a fan of the San Francisco 49ers? They exist, in part, because of our tendency to over-generalize. In the 19th century in Western America and Canada, a few findings of gold along some creek beds led to a massive rush as entire populations flocked to these regions in the hope of getting rich. San Francisco grew from 200 residents in 1846 to about 36,000 only six years later. The gold rush provided enormous impetus toward California becoming a state, and the corresponding infrastructure developments touched off momentum that long outlasted the mining of gold.

But for most of the actual rushers, those hoping for gold based on the anecdotes that floated east, there wasn’t much to show for their decision to head west. The Canadian Encyclopedia states, “If the nearly $29 million (figure unadjusted) in gold that was recovered during the heady years of 1897 to 1899 [in the Klondike] was divided equally among all those who participated in the gold rush, the amount would fall far short of the total they had invested in time and money.”

How did this happen? Because those miners took anecdotes as being representative of a broader reality. Quite literally, they learned mining from rumor, and didn’t develop any real knowledge. Most people fought for claims along the creeks, where easy gold had been discovered, while rejecting the bench claims on the hillsides above, which often had just as much gold.

You may be thinking that these men must have been desperate if they packed themselves up, heading into unknown territory, facing multiple dangers along the way, to chase a dream of easy money. But most of us aren’t that different. How many times have you invested in a “hot stock” on a tip from one person, only to have the company go under within a year? Ultimately, the smaller the sample size, the greater the role chance plays in determining the outcome.

If you want to limit the capriciousness of chance in your quest for success, increase your sample size when making decisions. You need enough information to be able to plot the range of possibilities, identify the outliers, and define the average.

So next time you hear the words “the polls say,” “studies show,” or “you should buy this,” ask questions before you take action. Think about the population that is actually being represented before you start modifying your understanding. Accept the limits of small sample sizes from large populations. And don’t give power to anecdotes.

5 Mental Models to Remove (Some of) the Confusion from Parenting

Just a few days ago, I saw a three-year-old wandering around at 10:30 at night and wondered if he was lost or jet-lagged. The parent came over and explained that they believed in children setting their own sleep schedule.

Interesting.

The problem with this approach is that it may work, or it may not. It may work for your oldest, but not your youngest. And therein lies the problem with the majority of the parenting advice available. It’s all tactics, no principles.

Few topics provoke more unsolicited advice than parenting. The problem is, no matter how good the advice, it might not work for your child. Parenting is the ultimate “the map is not the territory” situation. There are so many maps out there, and often when we try to use them to navigate the territory that is each individual child, we end up lost and confused. As in other situations, when the map doesn’t match the territory, better to get rid of the map and pay attention to what you are experiencing on the ground. The territory is the reality.

We’ve all dealt with the seemingly illogical behavior of children. Take trying to get your child to sleep through the night—often the first, and most important, challenge. Do you sleep beside them and slowly work your way out of the room? Do you let them “cry it out?” Do you put them in your bed? Do you feed them on demand, or not until morning? Soft music or no music? The options are endless, and each of them has a decently researched book to back it up.

When any subsequent children come along, the problem is often exacerbated. You stick to what worked the first time, because it worked, but this little one is different. Now you’re in a battle of wills, and it’s hard to change your tactics at 3:00 a.m. Parenting is often a rinse and repeat of this scenario: ideas you have about how it should be, combined with what experience is telling you that it is, overlaid with too many options and chronic exhaustion.

This is where mental models can help. As in any other area of your life, developing some principles or models that help you see how the world works will give you options for relevant and useful solutions. Mental models are amazing tools that can be applied across our lives. Here are five principle-based models you can apply to almost any family, situation, or child. These are ones I use often, but don’t let this limit you—so many more apply!

Adaptation

Adaptation is a concept from evolutionary biology. It describes the development of genetic traits that are successful relative to their performance in a specific environment—that is, relative to organisms’ survival in the face of competitive pressures. As Geerat Vermeij explains in Nature: An Economic History, “Adaptation is as good as it has to be; it need not be the best that could be designed. Adaptation depends on context.”

In terms of parenting, this is a big one: the model we can use to stop criticizing ourselves for our inevitable parenting mistakes, to get out of pointless comparisons with our peers, and to give us the freedom to make changes depending on the situation we find ourselves in.

Species adapt. It is a central feature of the theory of evolution—the ability of a species to survive and thrive in the face of changing environmental conditions. So why not apply this basic biological idea to parenting? Too often we see changing as a weakness. We’re certain that if we aren’t absolutely consistent with our children, they will grow up to be entitled underachievers or something. Or we put pressure on ourselves to be perfect, and strive for an ideal that requires an insane amount of work and sacrifice that may actually be detrimental to our overall success.

We can get out of this type of thinking if we reframe ‘changing’ as ‘adapting’. It’s okay to have different rules in the home versus a public space. I am always super grateful when a parent pacifies a screaming child with a cookie, especially on an airplane or in a restaurant. They probably don’t use the same strategy at home, but they adapt to the different environment. It’s also okay to have two children in soccer, and the third in music. Adapting to their interests will offer a much better return on investment on all those lessons.

No doubt your underlying goals for your children are consistent, like the desire of an individual to survive. How you meet those goals is where the adaptability comes in. Give yourself the freedom to respond to the individual characteristics of your children—and the specific needs of the moment—by trying different behaviors to see what works. And, just as with adaptation in the biological sense, you only need to be as good as you have to be to get the outcomes that are important to you, not be the best parent that ever was.

Velocity

There is a difference between speed and velocity. With speed you move, but with velocity you move somewhere. You have direction.

As many have said of parenting, the days are long but the years are short. It’s hard to focus on your direction when homework needs to be done and dinner needs to get made before one child goes off in the carpool to soccer while you rush the other one to art class. Every day begins at a dead run and ends with you collapsing into bed only to go through it all again tomorrow. Between their activities and social lives, and your need to work and have time for yourself, there is no doubt that you move with considerable speed throughout your day.

But it’s useful to sometimes ask, ‘Where am I going?’ Take a moment to make sure it’s not all speed and no direction.

When it comes to time with your kids, what does the goal state look like? How do you move in that direction? If you have speed but no direction, you have no frame of reference for your choices. You might ask, did I spend enough time with them today? But ten minutes or two hours isn’t going to impact your velocity if you don’t know where you are headed.

When you factor in a goal of movement, it helps you decide what to do when you have time with them. What is it you want out of it? What kind of memories do you want them to have? What kind of parent do you want to be and what kind of children do you want to raise? The answers are different for everyone, but knowing the direction you wish to go helps you evaluate the decisions you make. And it might have the added benefit of cutting out some unnecessary activity and slowing you down.

Algebraic Equivalence

“He got more pancakes than I did!” Complaints about fairness are common among siblings. They watch each other like hawks, counting everything from presents to hugs to make sure everyone gets the same. What can you do? You can drive yourself mad running out to buy an extra whatever, or you can teach your children the difference between ‘same’ and ‘equal’.

If you haven’t solved for x in a while, it doesn’t really matter. In algebra, symbols are used to represent unknown numbers that can be solved for given other relevant information. The general point about algebraic equivalence is that it teaches us that two things need not be the same in order to be equal.

For example, x + y = 5. Here are some of the options for the values of x and y:

3 + 2

4 + 1

2.5 + 2.5

1.8 + 3.2

And those are just the simple ones. What is useful is this idea of abstracting to see the full scope of possibilities. Then you can demonstrate that what is on each side of those little parallel lines doesn’t have to look the same to have equal value. When it comes to the pancakes, it’s better to focus on an equal feeling of fullness than on the number of pancakes on the plate.

In a deeper way, algebraic equivalence helps us deal with one accusation that all parents get at one time or another: “You love my sibling more than me.” It’s not true, but our default usually is to say, “No, I love you both the same.” This can be confusing for children, because, after all, they are not the same as their sibling, and you likely interact with them differently, so how can the love be the same?

Using algebraic equivalence as a model shifts it. You can respond instead that you love them both equally. Even though what’s on either side of the equation is different, it is equal. Swinging the younger child up in the air is equivalent to asking the older one about her school project. Appreciating one’s sense of humor is equivalent to respecting the other’s organizational abilities. They may be different, but the love is equal.

Seizing the middle

In chess, the middle is the key territory to hold. As explained on Wikipedia: “The center is the most important part of the chessboard, as pieces from the center can easily move to either flank with great speed. However, amateurs often prefer to concentrate on the king’s side of the board. This is an incorrect mindset.”

In parenting, seizing the middle means you must forget trying to control every single move. It’s impossible anyway. Instead, focus on trying to control what I think of as the middle territory. I don’t mind losing a few battles on the fringes, if I’m holding my ground in the area that will allow me to respond quickly to problems.

The other night my son and I got into perhaps our eighth fight of the week on the state of his room. The continual explosion makes it hard to walk in there, plus he loses things all the time, which is an endless source of frustration to both of us. I’ve explained that I hate buying replacements only to have them turn up in the morass months later.

So I got cranky and got on his case again, and he felt bad and cried again. When I went to the kitchen to find some calm, I realized that my strategy was all wrong. I was focused on the pawn in the far column of the chess board instead of what the pieces were doing right in front of me.

My thinking then went like this: what is the territory I want to be present in? Continuing the way I was would lead to a clean room, maybe. But by focusing on this flank I was sacrificing control of the middle. Eventually he was going to tune me out because no one wants to feel bad about their shortcomings every day. Is it worth saving a pawn if it leaves your queen vulnerable?

The middle territory with our kids is mutual respect and trust. If I want my son to come to me for help when life gets really complicated, which I do, then I need to focus on behaviors that will allow me to have that strategic influence throughout my relationship with him. Making him feel like crap every day, because his shirts are mixed in with his pants or because all the Pokemon cards are on the floor, isn’t going to cut it. Make no mistake, seizing the middle is not about throwing out all the rules. This is about knowing which battles to fight, so you can keep the middle territory of the trust and respect of your child.

Inversion

Sometimes it’s not about providing solutions, but removing obstacles. Psychologist Kurt Lewin observes in his work on force field analysis[1] that reaching any goal has two components: augmenting the forces for, and removing the forces against. When it comes to parenting, we need to ask ourselves not only what we could be doing more of, but also what we could be doing less of.

When my friend was going on month number nine of her baby waking up four times a night, she felt at her wits’ end. Out of desperation, she decided to invert the problem. She had been trying different techniques and strategies, thinking that there was something she wasn’t doing right. When nothing seemed to be working, she stopped trying to add elements like new tactics, and changed her strategy. She looked instead for obstacles to remove. Was there anything preventing the baby from sleeping through the night?

The first night she made it darker. No effect. The second night she made it warmer. Her son has slept through the night ever since. It wasn’t her parenting skills or the adherence to a particular sleep philosophy that was causing him to wake up so often. Her baby was cold. Once she removed that obstacle with a space heater, the problem was resolved.

We do this all the time, trying to fix problems by throwing new parenting philosophies at the situation. What can I do better? More time, more money, more lessons, more stuff. But it can be equally valuable to look for what you could be doing less of. In so doing, you may enrich your relationships with your children immeasurably.

Parenting is inherently complex: the territory changes almost overnight. Different environments, different children—figuring out how to raise your kids plays out against a backdrop of some fast-paced evolution. Some tactics are great, and once in a while a technique fits the situation perfectly. But when your tactics fail, or your experience seems to provide no obvious direction, a principle-based mental models approach to parenting can give you the insight to find solutions as you go.

[1] Lewin’s original work on force field analysis can be found in Lewin, Kurt. Field Theory in Social Science. New York: Harper and Row, 1951.

Deductive vs Inductive Reasoning: Make Smarter Arguments, Better Decisions, and Stronger Conclusions

You can’t prove truth, but using deductive and inductive reasoning, you can get close. Learn the difference between the two types of reasoning and how to use them when evaluating facts and arguments.

***

As odd as it sounds, in science, law, and many other fields, there is no such thing as proof — there are only conclusions drawn from facts and observations. Scientists cannot prove a hypothesis, but they can collect evidence that points to its being true. Lawyers cannot prove that something happened (or didn’t), but they can provide evidence that seems irrefutable.

The question of what makes something true is more relevant than ever in this era of alternative facts and fake news. This article explores truth — what it means and how we establish it. We’ll dive into inductive and deductive reasoning as well as a bit of history.

“Contrariwise,” continued Tweedledee, “if it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic.”

— Lewis Carroll, Through the Looking-Glass

The essence of reasoning is a search for truth. Yet truth isn’t always as simple as we’d like to believe it is.

For as far back as we can imagine, philosophers have debated whether absolute truth exists. Although we’re still waiting for an answer, this doesn’t have to stop us from improving how we think by understanding a little more.

In general, we can consider something to be true if the available evidence seems to verify it. The more evidence we have, the stronger our conclusion can be. When it comes to samples, size matters. As my friend Peter Kaufman says:

What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history….

In some areas, it is necessary to accept that truth is subjective. For example, ethicists accept that it is difficult to establish absolute truths concerning whether something is right or wrong, as standards change over time and vary around the world.

When it comes to reasoning, a correctly phrased statement can be considered to have objective truth. Some statements have an objective truth that we cannot ascertain at present. For example, we do not have proof for the existence or non-existence of aliens, although proof does exist somewhere.

Deductive and inductive reasoning are both based on evidence.

Several types of evidence are used in reasoning to point to a truth:

  • Direct or experimental evidence — This relies on observations and experiments, which should be repeatable with consistent results.
  • Anecdotal or circumstantial evidence — Overreliance on anecdotal evidence can be a logical fallacy because it is based on the assumption that two coexisting factors are linked even though alternative explanations have not been explored. The main use of anecdotal evidence is for forming hypotheses which can then be tested with experimental evidence.
  • Argumentative evidence — We sometimes draw conclusions based on facts. However, this evidence is unreliable when the facts are not directly testing a hypothesis. For example, seeing a light in the sky and concluding that it is an alien aircraft would be argumentative evidence.
  • Testimonial evidence — When an individual presents an opinion, it is testimonial evidence. Once again, this is unreliable, as people may be biased and there may not be any direct evidence to support their testimony.

“The weight of evidence for an extraordinary claim must be proportioned to its strangeness.”

— Laplace, Théorie analytique des probabilités (1812)

Reasoning by Induction

The fictional character Sherlock Holmes is a master of induction. He is a careful observer who processes what he sees to reach the most likely conclusion in the given set of circumstances. Although he pretends that his knowledge is of the black-or-white variety, it often isn’t. It is true induction, coming up with the strongest possible explanation for the phenomena he observes.

Consider his description of how, upon first meeting Watson, he reasoned that Watson had just come from Afghanistan:

“Observation with me is second nature. You appeared to be surprised when I told you, on our first meeting, that you had come from Afghanistan.”
“You were told, no doubt.”

“Nothing of the sort. I knew you came from Afghanistan. From long habit the train of thoughts ran so swiftly through my mind, that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, ‘Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.’ The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished.”

(From Sir Arthur Conan Doyle’s A Study in Scarlet)

Inductive reasoning involves drawing probable conclusions from specific observations. We draw these kinds of conclusions all the time. If someone we know to have good literary taste recommends a book, we may assume that means we will enjoy the book.

Induction can be strong or weak. In a strong inductive argument, the truth of the premises makes the conclusion likely. In a weak one, the premises lend little support to the conclusion.

There are several key types of inductive reasoning:

  • Generalized — Draws a conclusion from a generalization. For example, “All the swans I have seen are white; therefore, all swans are probably white.”
  • Statistical — Draws a conclusion based on statistics. For example, “95 percent of swans are white” (an arbitrary figure, of course); “therefore, a randomly selected swan will probably be white.”
  • Sample — Draws a conclusion about one group based on a different, sample group. For example, “There are ten swans in this pond and all are white; therefore, the swans in my neighbor’s pond are probably also white.”
  • Analogous — Draws a conclusion based on shared properties of two groups. For example, “All Aylesbury ducks are white. Swans are similar to Aylesbury ducks. Therefore, all swans are probably white.”
  • Predictive — Draws a conclusion based on a prediction made using a past sample. For example, “I visited this pond last year and all the swans were white. Therefore, when I visit again, all the swans will probably be white.”
  • Causal inference — Draws a conclusion based on a causal connection. For example, “All the swans in this pond are white. I just saw a white bird in the pond. The bird was probably a swan.”

The entire legal system is designed to be based on sound reasoning, which in turn must be based on evidence. Lawyers often use inductive reasoning to draw a relationship between facts for which they have evidence and a conclusion.

The initial facts are often based on generalizations and statistics, with the implication that a conclusion is most likely to be true, even if that is not certain. For that reason, evidence can rarely be considered certain. For example, a fingerprint taken from a crime scene would be said to be “consistent with a suspect’s prints” rather than being an exact match. Implicit in that statement is the assertion that it is statistically unlikely that the prints are not the suspect’s.

Inductive reasoning also involves Bayesian updating. A conclusion can seem to be true at one point until further evidence emerges and a hypothesis must be adjusted. Bayesian updating is a technique used to modify the probability of a hypothesis’s being true as new evidence is supplied. When inductive reasoning is used in legal situations, Bayesian thinking is used to update the likelihood of a defendant’s being guilty beyond a reasonable doubt as evidence is collected. If we imagine a simplified, hypothetical criminal case, we can picture the utility of Bayesian inference combined with inductive reasoning.

Let’s say someone is murdered in a house where five other adults were present at the time. One of them is the primary suspect, and there is no evidence of anyone else entering the house. The initial probability of the prime suspect’s having committed the murder is 20 percent. Other evidence will then adjust that probability. If the four other people testify that they saw the suspect committing the murder, the suspect’s prints are on the murder weapon, and traces of the victim’s blood were found on the suspect’s clothes, jurors may consider the probability of that person’s guilt to be close enough to 100 percent to convict. Reality is more complex than this, of course. The conclusion is never certain, only highly probable.
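That sequence of updates can be sketched as repeated applications of Bayes’ rule in odds form. The 20 percent prior comes from the scenario above (one suspect out of five people present); the likelihood ratios for each piece of evidence are invented purely for illustration.

```python
# A toy version of the jury example: start from a 20% prior and update
# the odds once per piece of evidence. Likelihood ratios are assumed values.

def apply_evidence(prob, likelihood_ratio):
    """One Bayesian update: convert to odds, multiply, convert back."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

p = 0.20  # prior: one suspect out of five people present

# How much more likely each observation is if the suspect is guilty
# than if they are innocent (all assumed for illustration):
for lr in (30,   # four eyewitnesses
           20,   # prints on the murder weapon
           15):  # victim's blood on the clothing
    p = apply_evidence(p, lr)

print(f"probability of guilt: {p:.4%}")
```

Each independent piece of evidence multiplies the odds, so even a modest prior climbs toward certainty quickly, while never actually reaching it, which matches the essay’s point that the conclusion is only ever highly probable.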

One key distinction between deductive and inductive reasoning is that the latter accepts that a conclusion is uncertain and may change in the future. A conclusion is either strong or weak, not right or wrong. We tend to use this type of reasoning in everyday life, drawing conclusions from experiences and then updating our beliefs.

A conclusion is either strong or weak, not right or wrong.

Everyday inductive reasoning is not always correct, but it is often useful. For example, superstitious beliefs often originate from inductive reasoning. If an athlete performed well on a day when they wore their socks inside out, they may conclude that the inside-out socks brought them luck. If future successes happen when they again wear their socks inside out, the belief may strengthen. Should that not be the case, they may update their belief and recognize that it is incorrect.

Another example (let’s set aside the question of whether turkeys can reason): A farmer feeds a turkey every day, so the turkey assumes that the farmer cares for its wellbeing. Only when Thanksgiving rolls around does that assumption prove incorrect.

The issue with overusing inductive reasoning is that cognitive shortcuts and biases can warp the conclusions we draw. Our world is not always as predictable as inductive reasoning suggests, and we may selectively draw upon past experiences to confirm a belief. Someone who reasons inductively that they have bad luck may recall only unlucky experiences to support that hypothesis and ignore instances of good luck.

In The 12 Secrets of Persuasive Argument, the authors write:

In inductive arguments, focus on the inference. When a conclusion relies upon an inference and contains new information not found in the premises, the reasoning is inductive. For example, if premises were established that the defendant slurred his words, stumbled as he walked, and smelled of alcohol, you might reasonably infer the conclusion that the defendant was drunk. This is inductive reasoning. In an inductive argument the conclusion is, at best, probable. The conclusion is not always true when the premises are true. The probability of the conclusion depends on the strength of the inference from the premises. Thus, when dealing with inductive reasoning, pay special attention to the inductive leap or inference, by which the conclusion follows the premises.

… There are several popular misconceptions about inductive and deductive reasoning. When Sherlock Holmes made his remarkable “deductions” based on observations of various facts, he was usually engaging in inductive, not deductive, reasoning.

In Inductive Reasoning, Aidan Feeney and Evan Heit write:

…inductive reasoning … corresponds to everyday reasoning. On a daily basis we draw inferences such as how a person will probably act, what the weather will probably be like, and how a meal will probably taste, and these are typical inductive inferences.

[…]

[I]t is a multifaceted cognitive activity. It can be studied by asking young children simple questions involving cartoon pictures, or it can be studied by giving adults a variety of complex verbal arguments and asking them to make probability judgments.

[…]

[I]nduction is related to, and it could be argued is central to, a number of other cognitive activities, including categorization, similarity judgment, probability judgment, and decision making. For example, much of the study of induction has been concerned with category-based induction, such as inferring that your next door neighbor sleeps on the basis that your neighbor is a human animal, even if you have never seen your neighbor sleeping.

“A very great deal more truth can become known than can be proven.”

— Richard Feynman

Reasoning by Deduction

Deduction begins with a broad truth (the major premise), such as the statement that all men are mortal. This is followed by the minor premise, a more specific statement, such as that Socrates is a man. A conclusion follows: Socrates is mortal. If the major premise is true and the minor premise is true, the conclusion cannot be false.

Deductive reasoning is black and white; a conclusion is either true or false and cannot be partly true or partly false. We decide whether a deductive argument is valid by assessing the strength of the link between the premises and the conclusion. If all men are mortal and Socrates is a man, there is no way he can fail to be mortal. In a valid argument there is no situation in which the premises are true and the conclusion false, so if the premises hold, the conclusion must hold too.

In science, deduction is used to reach conclusions believed to be true. A hypothesis is formed; then evidence is collected to support it. If observations support its truth, the hypothesis is confirmed. Statements are structured in the form of “if A equals B, and C is A, then C is B.” If A does not equal B, the argument no longer guarantees that C is B. Science also involves inductive reasoning when broad conclusions are drawn from specific observations; data leads to conclusions. If the data shows a tangible pattern, it will support a hypothesis.

For example, having seen ten white swans, we could use inductive reasoning to conclude that all swans are white. This hypothesis is easier to disprove than to prove (a single non-white swan refutes it), and the conclusion is not necessarily true; it merely holds given the existing evidence and the absence, so far, of any counterexample. By combining both types of reasoning, science moves closer to the truth. In general, the more outlandish a claim is, the stronger the evidence supporting it must be.

We should be wary of deductive reasoning that appears to make sense without pointing to a truth. Someone could say “A dog has four paws. My pet has four paws. Therefore, my pet is a dog.” The conclusion sounds logical but isn’t, because the first premise says only that dogs have four paws, not that all four-pawed animals are dogs.
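
The contrast between the valid Socrates syllogism and this fallacious one can be sketched with sets, modeling “all X are Y” as a subset relation. The example sets below are invented for illustration:

```python
# A toy sketch of both arguments. "All X are Y" means X is a subset of Y.
# The animal and name sets are hypothetical.

mortals = {"Socrates", "Plato", "Fido"}
men = {"Socrates", "Plato"}

# Valid: all men are mortal (men is a subset of mortals), and Socrates
# is a man, so Socrates must be mortal.
assert men <= mortals and "Socrates" in men
assert "Socrates" in mortals  # the conclusion cannot fail

# Invalid: all dogs have four paws, my pet has four paws, therefore my
# pet is a dog. Membership in the larger set does not imply membership
# in the subset.
four_pawed = {"Fido", "Whiskers"}
dogs = {"Fido"}
my_pet = "Whiskers"
assert dogs <= four_pawed and my_pet in four_pawed
assert my_pet not in dogs  # yet the "conclusion" is false
```

The valid form moves from the subset down to a member of the subset; the fallacious form tries to move from the larger set back into the subset, which nothing guarantees.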

The History of Reasoning

The discussion of reasoning and what constitutes truth dates back to Plato and Aristotle.

Plato (429–347 BC) believed that all things are divided into the visible and the intelligible. Intelligible things can be known through deduction (with observation being of secondary importance to reasoning) and are true knowledge.

Aristotle took an inductive approach, emphasizing the need for observations to support knowledge. He believed that we can reason only from discernible phenomena. From there, we use logic to infer causes.

Debate about reasoning remained much the same until the time of Isaac Newton. Newton’s innovative work was based on observations, but also on concepts that could not be explained by a physical cause (such as gravity). In his Principia, Newton outlined four rules for reasoning in the scientific method:

  1. “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” (We refer to this rule as Occam’s Razor.)
  2. “Therefore, to the same natural effects we must, as far as possible, assign the same causes.”
  3. “The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.”
  4. “In experimental philosophy, we are to look upon propositions collected by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, ’till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.”

In 1843, philosopher John Stuart Mill published A System of Logic, which further refined our understanding of reasoning. Mill believed that science should be based on a search for regularities among events. If a regularity is consistent, it can be considered a law. Mill described five methods for identifying causes by noting regularities. These methods are still used today:

  • Direct method of agreement — If two instances of a phenomenon have a single circumstance in common, the circumstance is the cause or effect.
  • Method of difference — If a phenomenon occurs in one experiment and does not occur in another, and the experiments are the same except for one factor, that is the cause, part of the cause, or the effect.
  • Joint method of agreement and difference — If two instances of a phenomenon have one circumstance in common, and two instances in which it does not occur have nothing in common except the absence of that circumstance, then that circumstance is the cause, part of the cause, or the effect.
  • Method of residue — When you subtract any part of a phenomenon known to be caused by a certain antecedent, the remaining residue of the phenomenon is the effect of the remaining antecedents.
  • Method of concomitant variations — If a phenomenon varies when another phenomenon varies in a particular way, the two are connected.
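
The first two of Mill’s methods can be sketched as set operations over observed circumstances. The food-poisoning scenario below is a hypothetical illustration, not from the text:

```python
# A minimal sketch of Mill's first two methods, treating each instance of
# a phenomenon as the set of circumstances present when it occurred.

def method_of_agreement(instances):
    """Return the circumstances common to every instance in which the
    phenomenon occurred; by Mill's first method, the cause (or effect)
    lies among them."""
    common = set(instances[0])
    for instance in instances[1:]:
        common &= set(instance)
    return common

def method_of_difference(occurred, did_not_occur):
    """Return the circumstances present when the phenomenon occurred but
    absent from an otherwise similar instance in which it did not."""
    return set(occurred) - set(did_not_occur)

# Hypothetical outbreak: what did all the sick diners have in common?
sick_diners = [
    {"oysters", "salad", "wine"},
    {"oysters", "steak", "water"},
    {"oysters", "salad", "water"},
]
print(method_of_agreement(sick_diners))                       # {'oysters'}
print(method_of_difference({"oysters", "salad"}, {"salad"}))  # {'oysters'}
```

Both methods isolate the oysters as the suspect circumstance, in the first case because it is the only thing every sick diner shared, in the second because it is the only difference between a sick diner and a healthy one.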

Karl Popper was the next theorist to make a serious contribution to the study of reasoning. Popper is well known for his focus on disconfirming evidence and disproving hypotheses. Beginning with a hypothesis, we use deductive reasoning to make predictions. A hypothesis will be based on a theory, a set of independent and dependent statements. If a prediction proves false, the theory is false; if it holds, the theory is corroborated but never proven. Popper’s theory of falsification (disproving something) is based on the idea that we cannot prove a hypothesis; we can only show that certain predictions are false. This process requires rigorous testing to identify any anomalies, and Popper does not accept theories that cannot be physically tested. Any phenomenon not present in tests cannot be the foundation of a theory, according to Popper. The phenomenon must also be consistent and reproducible. Popper’s theories acknowledge that theories accepted at one time are likely to be disproved later. Science is always changing as more hypotheses are modified or disproved and we inch closer to the truth.

Conclusion

In How to Deliver a TED Talk, Jeremey Donovan writes:

No discussion of logic is complete without a refresher course in the difference between inductive and deductive reasoning. By its strictest definition, inductive reasoning proves a general principle—your idea worth spreading—by highlighting a group of specific events, trends, or observations. In contrast, deductive reasoning builds up to a specific principle—again, your idea worth spreading—through a chain of increasingly narrow statements.

Logic is an incredibly important skill, and because we use it so often in everyday life, we benefit by clarifying the methods we use to draw conclusions. Knowing what makes an argument sound is valuable for making decisions and understanding how the world works. It helps us to spot people who are deliberately misleading us through unsound arguments. Understanding reasoning is also helpful for avoiding fallacies and for negotiating.
