
The Inner Game: Why Trying Too Hard Can Be Counterproductive

The standard way of learning is far from being the fastest or most enjoyable. It’s slow, makes us second-guess ourselves, and interferes with our natural learning process. Here we explore a better way to learn and enjoy the process.

***

It’s the final moment before an important endeavor—a speech, a performance, a presentation, an interview, a date, or perhaps a sports match. Up until now, you’ve felt good and confident about your abilities. But suddenly, something shifts. You feel a wave of self-doubt. You start questioning how well you prepared. The urge to run away and sabotage the whole thing starts bubbling to the surface.

As hard as you try to overcome your inexplicable insecurity, something tells you that you’ve already lost. And indeed, things don’t go well. You choke up, forget what you were meaning to say, long to just walk out, or make silly mistakes. None of this comes as a surprise—you knew beforehand that something had gone wrong in your mind. You just don’t know why.

Conversely, perhaps you’ve been in a situation where you knew you’d succeeded before you even began. You felt confident and in control. Your mind could focus with ease, impervious to self-doubt or distraction. Obstacles melted away, and abilities you never knew you possessed materialized.

This phenomenon—winning or losing something in your mind before you win or lose it in reality—is what tennis player and coach W. Timothy Gallwey first called “the Inner Game” in his book The Inner Game of Tennis. Gallwey wrote the book in the 1970s when people viewed sport as a purely physical matter. Athletes focused on their muscles, not their mindsets. Today, we know that psychology is in fact of the utmost importance.

Gallwey recognized that physical ability was not the full picture in any sport. In tennis, success is very psychological because there are really two games going on: the Inner Game and the Outer Game. If a player doesn’t pay attention to how they play the Inner Game—against their insecurities, their wandering mind, their self-doubt and uncertainty—they will never be as good as they have the potential to be. The Inner Game is fought against your own self-defeating tendencies, not against your actual opponent. Gallwey writes in the introduction:

Every game is composed of two parts, an outer game, and an inner game. . . . It is the thesis of this book that neither mastery nor satisfaction can be found in the playing of any game without giving some attention to the relatively neglected skills of the inner game. This is the game that takes place in the mind of the player, and it is played against such obstacles as lapses in concentration, nervousness, self-doubt, and self-condemnation. In short, it is played to overcome all habits of mind which inhibit excellence in performance. . . . Victories in the inner game may provide no additions to the trophy case, but they bring valuable rewards which are more permanent and which can contribute significantly to one’s success, off the court as well as on.

Ostensibly, The Inner Game of Tennis is a book about tennis. But dig beneath the surface, and it teems with techniques and insights we can apply to any challenge. The book is really about overcoming the internal obstacles we create that prevent us from succeeding. You don’t need to be interested in tennis or even know anything about it to benefit from this book.

One of the most important insights Gallwey shares is that a major cause of losing the Inner Game is trying too hard and interfering with our own natural learning capabilities. Let’s look at how we can win the Inner Game in our own lives by recognizing the importance of not forcing things.

The Two Sides of You

Gallwey was not a psychologist. But his experience as both a tennis player and a coach for other players gave him a deep understanding of how human psychology influences playing. The tennis court was his laboratory. As is evident throughout The Inner Game of Tennis, he studied himself, his students, and opponents with care. He experimented and tested out theories until he uncovered the best teaching techniques.

When we’re learning something new, we often internally talk to ourselves. We give ourselves instructions. When Gallwey noticed this in his students, he wondered who was talking to whom. From his observations, he drew his key insight: the idea of Self 1 and Self 2.

Self 1 is the conscious self. Self 2 is the subconscious. The two are always in dialogue.

If both selves can communicate in harmony, the game will go well. More often, this isn’t what happens. Self 1 gets judgmental and critical, trying to instruct Self 2 in what to do. The trick is to quiet Self 1 and let Self 2 follow the natural learning process we are all born with, the same process that enables us to learn as small children. This capacity is within us—we just need to avoid impeding it. As Gallwey explains:

Now we are ready for the first major postulate of the Inner Game: within each player the kind of relationship that exists between Self 1 and Self 2 is the prime factor in determining one’s ability to translate his knowledge of technique into effective action. In other words, the key to better tennis—or better anything—lies in improving the relationship between the conscious teller, Self 1, and the natural capabilities of Self 2.

Self 1 tries to instruct Self 2 using words. But Self 2 responds best to images and internalizing the physical experience of carrying out the desired action.

In short, if we let ourselves lose touch with our ability to feel our actions, by relying too heavily on instructions, we can seriously compromise our access to our natural learning processes and our potential to perform.

Stop Trying So Hard

Gallwey writes that “great music and art are said to arise from the quiet depths of the unconscious, and true expressions of love are said to come from a source which lies beneath words and thoughts. So it is with the greatest efforts in sports; they come when the mind is as still as a glass lake.”

What’s the most common piece of advice you’re likely to receive for getting better at something? Try harder. Work harder. Put more effort in. Pay more attention to what you’re doing. Do more.

Yet what do we experience when we are performing at our best? The exact opposite. Everything becomes effortless. We act without thinking or even giving ourselves time to think. We stop judging our actions as good or bad and observe them as they are. Colloquially, we call this being in the zone. In psychology, it’s known as “flow” or a “peak experience.”

Compare this to the typical tennis lesson. As Gallwey describes it, the teacher wants the student to feel that the cost of the lesson was worthwhile. So they give detailed, continuous feedback. Every time they spot the slightest flaw, they highlight it. The result is that the student does indeed feel the lesson fee is justifiable. They’re now aware of dozens of errors they need to fix—so they book more classes.

In his early days as a tennis coach, Gallwey took this approach. Over time, he saw that when he stepped back and gave his students less feedback, not more, they improved faster. Players would correct obvious mistakes without any guidance. On some deeper level, they knew the correct way to play tennis. They just needed to overcome the habits of the mind getting in the way. Whatever impeded them was not a lack of information. Gallwey writes:

I was beginning to learn what all good pros and students of tennis must learn: that images are better than words, showing better than telling, too much instruction worse than none, and that trying too hard often produces negative results.

There are numerous instances outside of sports when we can see how trying too hard can backfire. Consider a manager who feels the need to constantly micromanage their employees and direct every detail of their work, not allowing any autonomy or flexibility. As a result, the employees lose interest in ever taking initiative or directing their own work. Instead of getting the perfect work they want, the manager receives lackluster efforts.

Or consider a parent who wants their child to do well at school, so they control their studying schedule, limit their non-academic activities, and offer enticing rewards for good grades. It may work in the short term, but in the long run, the child doesn’t learn to motivate themselves or develop an intrinsic desire to study. Once their parent is no longer breathing down their neck, they don’t know how to learn.

Positive Thinking Backfires

Not only are we often advised to try harder to improve our skills, we’re also encouraged to think positively. According to Gallwey, when it comes to winning the Inner Game, this is the wrong approach altogether.

To quiet Self 1, we need to stop attaching judgments to our performance, either positive or negative. Thinking of, say, a tennis serve as “good” or “bad” shuts down Self 2’s intuitive sense of what to do. Gallwey noticed that “judgment results in tightness and tightness interferes with the fluidity required for accurate and quick movement. Relaxation produces smooth strokes and results from accepting your strokes as they are, even if erratic.”

In order to let Self 2’s sense of the correct action take over, we need to learn to see our actions as they are. We must focus on what is happening, not what is right or wrong. Once we can see clearly, we can tap into our inbuilt learning process, as Gallwey explains:

But to see things as they are, we must take off our judgmental glasses, whether they’re dark or rose-tinted. This action unlocks a process of natural development, which is as surprising as it is beautiful. . . . The first step is to see your strokes as they are. They must be perceived clearly. This can be done only when personal judgment is absent. As soon as a stroke is seen clearly and accepted as it is, a natural and speedy process of change begins.

It’s hard to let go of judgments when we can’t or won’t trust ourselves. Gallwey noticed early on that negative assessments—telling his students what they had done wrong—didn’t seem to help them. He tried only making positive assessments—telling them what they were doing well. Eventually, Gallwey recognized that attaching any sort of judgment to how his students played tennis was detrimental.

Positive and negative evaluations are two sides of the same coin. To say something is good is to imply that its inverse is bad. When Self 1 hears praise, Self 2 picks up on the underlying criticism.

Clearly, positive and negative evaluations are relative to each other. It is impossible to judge one event as positive without seeing other events as not positive or negative. There is no way to stop just the negative side of the judgmental process.

The trick may be to get out of the binary of good or bad completely by doing more showing and asking questions like “Why did the ball go that way?” or “What are you doing differently now than you did last time?” Sometimes, getting people to articulate how they are doing by observing their own performance removes the judgments and focuses on developmental possibilities. When we have the right image in mind, we move toward it naturally. Value judgments get in the way of that process.

The Inner Game Way of Learning

We’re all constantly learning and picking up new skills. But few of us pay much attention to how we learn and whether we’re doing it in the best possible way. Often, what we think of as “learning” primarily involves berating ourselves for our failures and mistakes, arguing with ourselves, and not using the most effective techniques. In short, we try to brute-force ourselves into adopting a capability. Gallwey describes the standard way of learning as such:

Step 1: Criticize or judge past behavior.

Step 2: Tell yourself to change, instructing with word commands repeatedly.

Step 3: Try hard; make yourself do it right.

Step 4: Critical judgment about results leading to Self 1 vicious cycle.

The standard way of learning is far from being the fastest or most enjoyable. It’s slow, it makes us feel awful about ourselves, and it interferes with our natural learning process. Instead, Gallwey advocates following the Inner Game way of learning.

First, we must observe our existing behavior without attaching any judgment to it. We must see what is, not what we think it should be. Once we are aware of what we are doing, we can move onto the next step: picturing the desired outcome. Gallwey advocates images over outright commands because he believes visualizing actions is the best way to engage Self 2’s natural learning capabilities. The next step is to trust Self 2 and “let it happen!” Once we have the right image in mind, Self 2 can take over—provided we do not interfere by trying too hard to force our actions. The final step is to continue “nonjudgmental, calm observation of the results” in order to repeat the cycle and keep learning. It takes nonjudgmental observation to unlearn bad habits.

Conclusion

Towards the end of the book, Gallwey writes:

Clearly, almost every human activity involves both the outer and inner games. There are always external obstacles between us and our external goals, whether we are seeking wealth, education, reputation, friendship, peace on earth or simply something to eat for dinner. And the inner obstacles are always there; the very mind we use in obtaining our external goals is easily distracted by its tendency to worry, regret, or generally muddle the situation, thereby causing needless difficulties within.

Whatever we’re trying to achieve, it would serve us well to pay more attention to the internal, not just the external. If we can overcome the instinct to get in our own way and be more comfortable trusting in our innate abilities, the results may well be surprising.

Survivorship Bias: The Tale of Forgotten Failures

Survivorship bias is a common logical error that distorts our understanding of the world. It happens when we assume that success tells the whole story and when we don’t adequately consider past failures.

There are thousands, even tens of thousands of failures for every big success in the world. But stories of failure are not as sexy as stories of triumph, so they rarely get covered and shared. As we consume one story of success after another, we forget the base rates and overestimate the odds of real success.

“See,” says he, “you who deny a providence, how many have been saved by their prayers to the Gods.”

“Ay,” says Diagoras, “I see those who were saved, but where are those painted who were shipwrecked?”

— Cicero

The Basics

A college dropout becomes a billionaire. Batuli Lamichhane, a chain-smoker, lives to the age of 118. Four young men are rejected by record labels and told “guitar groups are on the way out,” then go on to become the most successful band in history.

Bill Gates, Batuli Lamichhane, and the Beatles are oft-cited examples of people who broke the rules without the expected consequences. We like to focus on people like them—the result of a cognitive shortcut known as survivorship bias.

When we only pay attention to those who survive, we fail to account for base rates and end up misunderstanding how selection processes actually work. The base rate is the probability of a given result we can expect from a sample, expressed as a percentage. If you bet on a single number in American roulette, for example, you can expect to win one spin in 38, or about 2.63% of the time; that is the base rate. The problem arises when we mistake the winners for the rule and not the exception. People like Gates, Lamichhane, and the Beatles are anomalies at one end of a distribution curve. While there is much to learn from them, it would be a mistake to expect the same results from doing the same things.
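
As a rough illustration (not from the article), here is a minimal Python sketch of what a base rate is: it estimates the roughly 1-in-38 chance of a single-number roulette bet by simulation. The chosen number, pocket layout, and trial count are assumptions made purely for the example.

```python
import random

# American roulette: 38 pockets (0, 00, and 1-36). A single-number bet wins
# only when the ball lands on that exact pocket.
POCKETS = ["0", "00"] + [str(n) for n in range(1, 37)]

def single_number_bet_wins(number: str = "17") -> bool:
    """Simulate one spin; the number 17 is an arbitrary illustrative choice."""
    return random.choice(POCKETS) == number

trials = 100_000
wins = sum(single_number_bet_wins() for _ in range(trials))

print(f"Theoretical base rate: {1 / 38:.4f} (about 2.63%)")
print(f"Simulated win rate:    {wins / trials:.4f}")

# Survivorship bias enters when we only ever hear from the rare winners and
# quietly forget the base rate that everyone at the table faced.
```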

A stupid decision that works out well becomes a brilliant decision in hindsight.

— Daniel Kahneman

Cause and Effect

Can we achieve anything if we try hard enough? Not necessarily. Survivorship bias leads to an erroneous understanding of cause and effect. People see correlation in mere coincidence. We all love to hear stories of those who beat the odds and became successful, holding them up as proof that the impossible is possible. We ignore failures in pursuit of a coherent narrative about success.

Few would think to write the biography of a business person who goes bankrupt and spends their entire life in debt. Or of a musician who tries again and again to get signed and is ignored by record labels. Or of someone who dreams of becoming an actor, moves to LA, and ends up returning a year later, defeated and broke. After all, who wants to hear that? We want the encouragement survivorship bias provides, and the subsequent belief in our own capabilities. The result is an inflated idea of how many people become successful.

The discouraging fact is that success is never guaranteed. Most businesses fail. Most people do not become rich or famous. Most leaps of faith go wrong. It does not mean we should not try, just that we should be realistic about the odds.

Beware of advice from the successful.

— Barnaby James

Survivorship Bias in Business

Survivorship bias is particularly common in the world of business. Companies which fail early on are ignored, while the rare successes are lauded for decades. Studies of market performance often exclude companies which collapse. This can distort statistics and make success seem more probable than it truly is. Just as history is written by the winners, so is much of our knowledge about business. Those who end up broke and chastened lack a real voice. They may be blamed for their failures by those who ignore the role coincidence plays in the upward trajectories of the successful.

Nassim Taleb writes of our tendency to ignore the failures: “We favor the visible, the embedded, the personal, the narrated, and the tangible; we scorn the abstract.” Business books laud the rule-breakers who ignore conventional advice and still create profitable enterprises. For most entrepreneurs, taking excessive risks and eschewing all norms is an ill-advised gamble. Many of the misfit billionaires who are widely celebrated succeeded in spite of their unusual choices, not because of them. We also ignore the role of timing, luck, connections and socio-economic background. A person from a prosperous family, with valuable connections, who founds a business at a lucrative time has a greater chance of survival, even if they drop out of college or do something unconventional. Someone with a different background, acting at an inopportune time, will have less of a chance.

In No Startup Hipsters: Build Scalable Technology Companies, Samir Rath and Teodora Georgieva write:

Almost every single generic presentation for startups starts with “Ninety Five percent of all startups fail”, but very rarely do we pause for a moment and think “what does this really mean?” We nod our heads in somber acknowledgement and with great enthusiasm turn to the heroes who “made it” — Zuckerberg, Gates, etc. to absorb pearls of wisdom and find the Holy Grail of building successful companies. Learning from the successful is a much deeper problem and can reduce the probability of success more than we might imagine.

Examining the lives of successful entrepreneurs teaches us very little. We would do far better to analyze the causes of failure, then act accordingly. Even better would be learning from both failures and successes.

Focusing on successful outliers does not account for base rates. As Rath and Georgieva go on to write:

After any process that picks winners, the non-survivors are often destroyed or hidden or removed from public view. The huge failure rate for start-ups is a classic example; if failures become invisible, not only do we fail to recognise that missing instances hold important information, but we may also fail to acknowledge that there is any missing information at all.

They describe how this leads us to base our choices on inaccurate assumptions:

Often, as we revel in stories of start-up founders who struggled their way through on cups of ramen before the tide finally turned on viral product launches, high team performance or strategic partnerships, we forget how many other founders did the same thing, in the same industry and perished…The problem we mention is compounded by biographical or autobiographical narratives. The human brain is obsessed with building a cause and effect narrative. The problem arises when this cognitive machinery misfires and finds patterns where there are none.

These success narratives are created both by those within successful companies and those outside. Looking back on their ramen days, founders may believe they had a plan all along. They always knew everything would work out. In truth, they may lack an idea of the cause and effect relationships underlying their progress. When external observers hear their stories, they may, in a quasi-superstitious manner, spot “signs” of the success to come. As Daniel Kahneman has written, the only true similarity is luck.

Consider What You Don’t See

When we read about survivorship bias, we usually come across the archetypical story of Abraham Wald, a statistician studying World War II airplanes. His research group at Columbia University was asked to figure out how to better protect airplanes from damage. The initial approach to the problem was to look at the planes coming back, seeing where they were hit the worst, then reinforcing that area.

However, Wald realized there was a missing yet valuable source of evidence: the planes that were hit and did not make it back. The planes that went down, the ones that did not survive, carried better information about which areas were most important to reinforce. Wald’s approach is an example of how to overcome survivorship bias. Don’t look just at what you can see. Consider all the things that started on the same path but didn’t make it. Try to figure out their story, as there is as much, if not more, to be learned from failure.
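
To make Wald’s reasoning concrete, here is a hedged simulation sketch in Python. All of the numbers are invented for illustration; the sections, the hit distribution, and the per-section survival odds have nothing to do with Wald’s actual data. The point is only that the pattern of hits on returning planes can be the mirror image of the pattern that matters.

```python
import random
from collections import Counter

# Hypothetical setup: hits land uniformly on four sections, but a hit to the
# engine or cockpit downs the plane far more often than a hit to the fuselage
# or wings. None of these numbers come from Wald's actual study.
SECTIONS = ["engine", "cockpit", "fuselage", "wings"]
P_DOWNED_BY_HIT = {"engine": 0.8, "cockpit": 0.7, "fuselage": 0.1, "wings": 0.15}

def fly_mission() -> tuple[str, bool]:
    """Return (section hit, whether the plane made it back)."""
    section = random.choice(SECTIONS)
    survived = random.random() > P_DOWNED_BY_HIT[section]
    return section, survived

returned_hits, all_hits = Counter(), Counter()
for _ in range(100_000):
    section, survived = fly_mission()
    all_hits[section] += 1
    if survived:
        returned_hits[section] += 1

# Returning planes show few engine/cockpit hits -- not because those areas are
# rarely hit, but because planes hit there rarely come back.
print("Hits seen on returning planes:", returned_hits)
print("Hits across all planes:       ", all_hits)
```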

Considering survivorship bias when presented with examples of success is difficult. It is not instinctive to pause, reflect, and think through what the base rate odds of success are and whether you’re looking at an outlier or the expected outcome. And yet if you don’t know the real odds, if you don’t know if what you’re looking at is an example of survivorship bias, then you’ve got a blind spot.

Whenever you read about a success story in the media, think of all the people who tried to do what that person did and failed. Of course, understanding survivorship bias isn’t an excuse for not taking action, but rather an essential tool to help you cut through the noise and understand the world. If you’re going to do something, do it fully informed.

To learn more, consider reading Fooled by Randomness or The Art of Thinking Clearly.

The Lies We Tell

We make up stories in our minds and then against all evidence, defend them tooth and nail. Understanding why we do this is the key to discovering truth and making wiser decisions.

***

Our brains are quirky.

When I put my hand on a hot stove, I have instantly created awareness of a cause and effect relationship—“If I put my hand on a hot stove, it will hurt.” I’ve learned something fundamental about the world. Our brains are right to draw that conclusion. It’s a linear relationship, cause and effect are tightly coupled, feedback is near immediate, and there aren’t many other variables at play.

The world isn’t always this easy to understand. When cause and effect aren’t obvious, we still draw conclusions. Nobel Prize winning psychologist Daniel Kahneman offers an example of how our brains look for, and assume, causality:

“After spending a day exploring beautiful sights in the crowded streets of New York, Jane discovered that her wallet was missing.”

That’s all you get. No background on Jane, or any particulars about where she went. Kahneman presented this miniature story to his test subjects hidden among several other statements. When Kahneman later offered a surprise recall test, “the word pickpocket was more strongly associated with the story than the word sights, even though the latter was actually in the sentence while the former was not.” 1

What happened here?

There’s a bug in the evolutionary code that makes up our brains. We have a hard time distinguishing between when cause and effect is clear, as with the hot stove or chess, and when it’s not, as in the case of Jane and her wallet. We don’t like not knowing. We also love a story.

Our minds create plausible stories. In the case of Jane, many test subjects thought a pickpocket had taken her wallet, but there are other possible scenarios. More people lose wallets than have them stolen. But our patterns of beliefs take over, such as how we feel about New York or crowds, and we construct cause and effect relationships. We tell ourselves stories that are convincing, cheap, and often wrong. We don’t think about how these stories are created, whether they’re right, or how they persist. And we’re often uncomfortable when someone asks us to explain our reasoning.

Imagine a meeting where we are discussing Jane and her wallet, not unlike any meeting you have this week to figure out what happened and what decisions your organization needs to make next.

You start the meeting by saying “Jane’s wallet was stolen. Here’s what we’re going to do in response.”

But one person in the meeting, Micky, Jane’s second cousin, asks you to explain the situation.

You volunteer what you know. “After spending a day exploring beautiful sights in the crowded streets of New York, Jane discovered that her wallet was missing.” And you quickly launch into improved security measures.

Micky, however, tells herself a different story, because just last week a friend of hers left his wallet at a store. And she knows Jane can sometimes be absentminded. The story she tells herself is that Jane probably lost her wallet in New York. So she asks you, “What makes you think the wallet was stolen?”

The answer is obvious to you. You feel your heart rate start to rise. Frustration sets in.

You tell yourself that Micky is an idiot. This is so obvious. Jane was out. In New York. In a crowd. And we need to put in place something to address this wallet issue so that it doesn’t happen again. You think to yourself that she’s slowing the group down and we need to act now.

What else is happening? It’s likely you looked at the evidence again and couldn’t really explain how you drew your conclusion. Rather than have an honest conversation about the story you told yourself and the story Micky is telling herself, the meeting gets tense and goes nowhere.

The next time you catch someone asking you about your story and you can’t explain it in a falsifiable way, pause, and hit reset. Take your ego out of it. What you really care about is finding the truth, even if that means the story you told yourself is wrong.

Footnotes

1. Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

Complexity Bias: Why We Prefer Complicated to Simple

Complexity bias is a logical fallacy that leads us to give undue credence to complex concepts.

Faced with two competing hypotheses, we are likely to choose the more complex one. That’s usually the option with the most assumptions and regressions. As a result, when we need to solve a problem, we may ignore simple solutions — thinking “that will never work” — and instead favor complex ones.

To understand complexity bias, we need first to establish the meaning of three key terms associated with it: complexity, simplicity, and chaos.

The Cambridge Dictionary defines complexity as “the state of having many parts and being difficult to understand or find an answer to.” The definition of simplicity is the inverse: “something [that] is easy to understand or do.” Chaos is defined as “a state of total confusion with no order.”

“Life is really simple, but we insist on making it complicated.”

— Confucius

Complex systems contain individual parts that combine to form a collective that often can’t be predicted from its components. Consider humans. We are complex systems. We’re made of about 100 trillion cells and yet we are so much more than the aggregation of our cells. You’d never predict what we’re like or who we are from looking at our cells.

Complexity bias is our tendency to look at something that is easy to understand, or to look at it when we are in a state of confusion, and to view it as having many parts that are difficult to understand.

We often find it easier to face a complex problem than a simple one.

A person who feels tired all the time might insist that their doctor check their iron levels while ignoring the fact that they are unambiguously sleep deprived. Someone experiencing financial difficulties may stress over the technicalities of their telephone bill while ignoring the large sums of money they spend on cocktails.

Marketers make frequent use of complexity bias.

They do this by incorporating confusing language or insignificant details into product packaging or sales copy. Most people who buy “ammonia-free” hair dye, or a face cream which “contains peptides,” don’t fully understand the claims. Terms like these often mean very little, but we see them and imagine that they signify a product that’s superior to alternatives.

How many of you know what probiotics really are and how they interact with gut flora?

Meanwhile, we may also see complexity where only chaos exists. This tendency manifests in many forms, such as conspiracy theories, superstition, folklore, and logical fallacies. The distinction between complexity and chaos is not a semantic one. When we imagine that something chaotic is in fact complex, we are seeing it as having an order and more predictability than is warranted. In fact, there is no real order, and prediction is incredibly difficult at best.

Complexity bias is interesting because the majority of cognitive biases occur in order to save mental energy. For example, confirmation bias enables us to avoid the effort associated with updating our beliefs. We stick to our existing opinions and ignore information that contradicts them. Availability bias is a means of avoiding the effort of considering everything we know about a topic. It may seem like the opposite is true, but complexity bias is, in fact, another cognitive shortcut. By opting for impenetrable solutions, we sidestep the need to understand. Of the fight-or-flight responses, complexity bias is the flight response. It is a means of turning away from a problem or concept and labeling it as too confusing. If you think something is harder than it is, you surrender your responsibility to understand it.

“Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.”

— Andy Benoit

Faced with too much information on a particular topic or task, we see it as more complex than it is. Often, understanding the fundamentals will get us most of the way there. Software developers often find that 90% of the code for a project takes about half the allocated time. The remaining 10% takes the other half. Writing — and any other sort of creative work — is much the same. When we succumb to complexity bias, we are focusing too hard on the tricky 10% and ignoring the easy 90%.

Research has revealed our inherent bias towards complexity.

In a 1989 paper entitled “Sensible reasoning in two tasks: Rule discovery and hypothesis evaluation,” Hilary F. Farris and Russell Revlin evaluated the topic. In one study, participants were asked to establish an arithmetic rule. They received a set of three numbers (such as 2, 4, 6) and tried to generate a hypothesis by asking the experimenter if other number sequences conformed to the rule. Farris and Revlin wrote, “This task is analogous to one faced by scientists, with the seed triple functioning as an initiating observation, and the act of generating the triple is equivalent to performing an experiment.”

The actual rule was simple: list any three ascending numbers.

The participants could have said anything from “1, 2, 3” to “3, 7, 99” and been correct. It should have been easy for the participants to guess this, but most of them didn’t. Instead, they came up with complex rules for the sequences. (Also see Falsification of Your Best Loved Ideas.)
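
A short, hypothetical sketch of the task’s logic in Python may help. The “complex hypothesis” below (even numbers increasing by two) is my own illustrative guess, not one reported by Farris and Revlin; it fits the seed triple just as well as the real rule, which is why only tests that could falsify it are informative.

```python
def true_rule(a, b, c) -> bool:
    """The experimenter's actual rule: any three ascending numbers."""
    return a < b < c

def complex_hypothesis(a, b, c) -> bool:
    """An illustrative over-complicated guess: even numbers increasing by 2."""
    return a % 2 == 0 and b == a + 2 and c == b + 2

seed = (2, 4, 6)
print(true_rule(*seed), complex_hypothesis(*seed))  # True True: both fit the seed

# Confirming tests (triples the complex guess predicts) can never separate the
# two rules; a falsifying test (allowed by the real rule, forbidden by the
# guess) does the job immediately.
for triple in [(4, 6, 8), (10, 12, 14), (1, 2, 3), (3, 7, 99)]:
    print(triple, "rule:", true_rule(*triple), "guess:", complex_hypothesis(*triple))
```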

A paper by Helena Matute looked at how intermittent reinforcement leads people to see complexity in chaos. Three groups of participants were placed in rooms and told that a loud noise would play from time to time. The volume, length, and pattern of the sound were identical for each group. Group 1 (Control) was told to sit and listen to the noises. Group 2 (Escape) was told that there was a specific action they could take to stop the noises. Group 3 (Yoked) was told the same as Group 2, but in their case, there was actually nothing they could do.

Matute wrote:

Yoked participants received the same pattern and duration of tones that had been produced by their counterparts in the Escape group. The amount of noise received by Yoked and Control subjects depends only on the ability of the Escape subjects to terminate the tones. The critical factor is that Yoked subjects do not have control over reinforcement (noise termination) whereas Escape subjects do, and Control subjects are presumably not affected by this variable.

The result? Not one member of the Yoked group realized that they had no control over the sounds. Many members came to repeat particular patterns of “superstitious” behavior. Indeed, the Yoked and Escape groups had very similar perceptions of task controllability. Faced with randomness, the participants saw complexity.

Does that mean the participants were stupid? Not at all. We all exhibit the same superstitious behavior when we believe we can influence chaotic or simple systems.

Funnily enough, animal studies have revealed much the same. In particular, consider B.F. Skinner’s well-known research on the effects of random rewards on pigeons. Skinner placed hungry pigeons in cages equipped with a random-food-delivery mechanism. Over time, the pigeons came to believe that their behavior affected the food delivery. Skinner described this as a form of superstition. One bird spun in counterclockwise circles. Another butted its head against a corner of the cage. Other birds swung or bobbed their heads in specific ways. Although there is some debate as to whether “superstition” is an appropriate term to apply to birds, Skinner’s research shed light on the human tendency to see things as being more complex than they actually are.

Skinner wrote (in “‘Superstition’ in the Pigeon,” Journal of Experimental Psychology, 38):

The bird behaves as if there were a causal relation between its behavior and the presentation of food, although such a relation is lacking. There are many analogies in human behavior. Rituals for changing one’s fortune at cards are good examples. A few accidental connections between a ritual and favorable consequences suffice to set up and maintain the behavior in spite of many unreinforced instances. The bowler who has released a ball down the alley but continues to behave as if he were controlling it by twisting and turning his arm and shoulder is another case in point. These behaviors have, of course, no real effect upon one’s luck or upon a ball half way down an alley, just as in the present case the food would appear as often if the pigeon did nothing—or, more strictly speaking, did something else.

The world around us is a chaotic, entropic place. But it is rare for us to see it that way.

In Living with Complexity, Donald A. Norman offers a perspective on why we need complexity:

We seek rich, satisfying lives, and richness goes along with complexity. Our favorite songs, stories, games, and books are rich, satisfying, and complex. We need complexity even while we crave simplicity… Some complexity is desirable. When things are too simple, they are also viewed as dull and uneventful. Psychologists have demonstrated that people prefer a middle level of complexity: too simple and we are bored, too complex and we are confused. Moreover, the ideal level of complexity is a moving target, because the more expert we become at any subject, the more complexity we prefer. This holds true whether the subject is music or art, detective stories or historical novels, hobbies or movies.

As an example, Norman asks readers to contemplate the complexity we attach to tea and coffee. Most people in most cultures drink tea or coffee each day. Both are simple beverages, made from water and coffee beans or tea leaves. Yet we choose to attach complex rituals to them. Even those of us who would not consider ourselves to be connoisseurs have preferences. Offer to make coffee for a room full of people, and we can be sure that each person will want it made in a different way.

Coffee and tea start off as simple beans or leaves, which must be dried or roasted, ground and infused with water to produce the end result. In principle, it should be easy to make a cup of coffee or tea. Simply let the ground beans or tea leaves [steep] in hot water for a while, then separate the grounds and tea leaves from the brew and drink. But to the coffee or tea connoisseur, the quest for the perfect taste is long-standing. What beans? What tea leaves? What temperature water and for how long? And what is the proper ratio of water to leaves or coffee?

The quest for the perfect coffee or tea maker has been around as long as the drinks themselves. Tea ceremonies are particularly complex, sometimes requiring years of study to master the intricacies. For both tea and coffee, there has been a continuing battle between those who seek convenience and those who seek perfection.

Complexity, in this way, can enhance our enjoyment of a cup of tea or coffee. It’s one thing to throw some instant coffee in hot water. It’s different to select the perfect beans, grind them ourselves, calculate how much water is required, and use a fancy device. The question of whether this ritual makes the coffee taste better or not is irrelevant. The point is the elaborate surrounding ritual. Once again, we see complexity as superior.

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.”

— Edsger W. Dijkstra

The Problem with Complexity

Imagine a person who sits down one day and plans an elaborate morning routine. Motivated by the routines of famous writers they have read about, they lay out their ideal morning. They decide they will wake up at 5 a.m., meditate for 15 minutes, drink a liter of lemon water while writing in a journal, read 50 pages, and then prepare coffee before planning the rest of their day.

The next day, they launch into this complex routine. They try to keep at it for a while. Maybe they succeed at first, but entropy soon sets in and the routine gets derailed. Sometimes they wake up late and do not have time to read. Their perceived ideal routine has many different moving parts. Their actual behavior ends up being different each day, depending on random factors.

Now imagine that this person is actually a famous writer. A film crew asks to follow them around on a “typical day.” On the day of filming, they get up at 7 a.m., write some ideas, make coffee, cook eggs, read a few news articles, and so on. This is not really a routine; it is just a chaotic morning based on reactive behavior. When the film is posted online, people look at the morning and imagine they are seeing a well-planned routine rather than the randomness of life.

This hypothetical scenario illustrates the issue with complexity: it is unsustainable without effort.

The more individual constituent parts a system has, the greater the chance of its breaking down. Charlie Munger once said that “Where you have complexity, by nature you can have fraud and mistakes.” Any complex system — be it a morning routine, a business, or a military campaign — is difficult to manage. Addressing one of the constituent parts inevitably affects another (see the Butterfly Effect). Unintended and unexpected consequences are likely to occur.

As Daniel Kahneman and Amos Tversky wrote in 1974 (in Judgment Under Uncertainty: Heuristics and Biases): “A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the likelihood of failure in each component is slight, the probability of an overall failure can be high if many components are involved.”
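
The arithmetic behind that point is worth spelling out. In the sketch below (Python, with a made-up 1% per-component failure rate), the chance of an overall failure climbs quickly as components are added, even though each component is individually reliable.

```python
# If a system fails when ANY of its n essential components fails, and each
# component independently fails with probability p, then:
#   P(system fails) = 1 - (1 - p) ** n
# Illustrative numbers only: a 1% per-component failure rate.
p = 0.01
for n in (1, 10, 50, 100):
    print(f"{n:>3} components -> P(failure) = {1 - (1 - p) ** n:.1%}")
# 1 -> 1.0%, 10 -> ~9.6%, 50 -> ~39.5%, 100 -> ~63.4%
```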

This is why complexity is less common than we think. It is unsustainable without constant maintenance, self-organization, or adaptation. Chaos tends to disguise itself as complexity.

“Human beings are pattern-seeking animals. It’s part of our DNA. That’s why conspiracy theories and gods are so popular: we always look for the wider, bigger explanations for things.”

— Adrian McKinty, The Cold Cold Ground

Complexity Bias and Conspiracy Theories

A musician walks barefoot across a zebra-crossing on an album cover. People decide he died in a car crash and was replaced by a lookalike. A politician’s eyes look a bit odd in a blurry photograph. People conclude that he is a blood-sucking reptilian alien taking on a human form. A photograph shows an indistinct shape beneath the water of a Scottish lake. The area floods with tourists hoping to glimpse a surviving prehistoric creature. A new technology overwhelms people. So, they deduce that it is the product of a government mind-control program.

Conspiracy theories are the ultimate symptom of our desire to find complexity in the world. We don’t want to acknowledge that the world is entropic. Disasters happen and chaos is our natural state. The idea that hidden forces animate our lives is an appealing one. It seems rational. But as we know, we are all much less rational and logical than we think. Studies have shown that a high percentage of people believe in some sort of conspiracy. It’s not a fringe concept. According to research by Joseph E. Uscinski and Joseph M. Parent, about one-third of Americans believe the notion that Barack Obama’s birth certificate is fake. Similar numbers are convinced that 9/11 was an inside job orchestrated by George Bush. Beliefs such as these are present in all types of people, regardless of class, age, gender, race, socioeconomic status, occupation, or education level.

Conspiracy theories are invariably far more complex than reality. Although education does reduce the chances of someone’s believing in conspiracy theories, one in five Americans with postgraduate degrees still hold conspiratorial beliefs.

Uscinski and Parent found that, just as uncertainty led Skinner’s pigeons to see complexity where only randomness existed, a sense of losing control over the world around us increases the likelihood of our believing in conspiracy theories. Faced with natural disasters and political or economic instability, we are more likely to concoct elaborate explanations. In the face of horrific but chaotic events such as Hurricane Katrina, or the recent Grenfell Tower fire, many people decide that secret institutions are to blame.

Take the example of the “Paul McCartney is dead” conspiracy theory. Since the 1960s, a substantial number of people have believed that McCartney died in a car crash and was replaced by a lookalike, usually said to be a Scottish man named William Campbell. Of course, conspiracy theorists declare, The Beatles wanted their most loyal fans to know this, so they hid clues in songs and on album covers.

The beliefs surrounding the Abbey Road album are particularly illustrative of the desire to spot complexity in randomness and chaos. A police car is parked in the background — an homage to the officers who helped cover up the crash. A car’s license plate reads “LMW 28IF” — naturally, a reference to McCartney being 28 if he had lived (although he was 27) and to Linda McCartney (whom he had not met yet). Matters were further complicated once The Beatles heard about the theory and began to intentionally plant “clues” in their music. The song “I’m So Tired” does in fact feature backwards mumbling about McCartney’s supposed death. The 1960s were certainly a turbulent time, so is it any wonder that scores of people pored over album art or played records backwards, looking for evidence of a complex hidden conspiracy?

As Henry Louis Gates Jr. wrote, “Conspiracy theories are an irresistible labor-saving device in the face of complexity.”

Complexity Bias and Language

We have all, at some point, had a conversation with someone who speaks like philosopher Theodor Adorno wrote: using incessant jargon and technical terms even when simpler synonyms exist and would be perfectly appropriate. We have all heard people say things which we do not understand, but which we do not question for fear of sounding stupid.

Jargon is an example of how complexity bias affects our communication and language usage. When we use jargon, especially out of context, we are putting up unnecessary semantic barriers that reduce the chances of someone’s challenging or refuting us.

In an article for The Guardian, James Gingell describes his work translating scientific jargon into plain, understandable English:

It’s quite simple really. The first step is getting rid of the technical language. Whenever I start work on refining a rough-hewn chunk of raw science into something more pleasant I use David Dobbs’ (rather violent) aphorism as a guiding principle: “Hunt down jargon like a mercenary possessed, and kill it.” I eviscerate acronyms and euthanise decrepit Latin and Greek. I expunge the esoteric. I trim and clip and pare and hack and burn until only the barest, most easily understood elements remain.

[…]

Jargon…can be useful for people as a shortcut to communicating complex concepts. But it’s intrinsically limited: it only works when all parties involved know the code. That may be an obvious point but it’s worth emphasising — to communicate an idea to a broad, non-specialist audience, it doesn’t matter how good you are at embroidering your prose with evocative imagery and clever analogies, the jargon simply must go.

Gingell writes that even the most intelligent scientists struggle to differentiate between thinking (and speaking and writing) like a scientist, and thinking like a person with minimal scientific knowledge.

Unnecessarily complex language is not just annoying. It’s outright harmful. The use of jargon in areas such as politics and economics does real harm. People without the requisite knowledge to understand it feel alienated and removed from important conversations. It leads people to believe that they are not intelligent enough to understand politics, or not educated enough to comprehend economics. When a politician talks of fiscal charters or rolling four-quarter growth measurements in a public statement, they are sending a crystal clear message to large numbers of people whose lives will be shaped by their decisions: this is not about you.

Complexity bias is a serious issue in politics. For those in the public eye, complex language can be a means of minimizing the criticism of their actions. After all, it is hard to dispute something you don’t really understand. Gingell considers jargon to be a threat to democracy:

If we can’t fully comprehend the decisions that are made for us and about us by the government, then how can we possibly revolt or react in an effective way? Yes, we have a responsibility to educate ourselves more on the big issues, but I also think it’s important that politicians and journalists meet us halfway.

[…]

Economics and economic decisions are more important than ever now, too. So we should implore our journalists and politicians to write and speak to us plainly. Our democracy depends on it.

In his essay “Politics and the English Language,” George Orwell wrote:

In our time, political speech and writing are largely the defence of the indefensible. … Thus, political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements.

An example of the problems with jargon is the Sokal affair. In 1996, Alan Sokal (a physics professor) submitted a fabricated scientific paper entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity.” The paper had absolutely no relation to reality and argued that quantum gravity is a social and linguistic construct. Even so, the paper was published in a respected journal. Sokal’s paper consisted of convoluted, essentially meaningless claims, such as this paragraph:

Secondly, the postmodern sciences deconstruct and transcend the Cartesian metaphysical distinctions between humankind and Nature, observer and observed, Subject and Object. Already quantum mechanics, earlier in this century, shattered the ingenious Newtonian faith in an objective, pre-linguistic world of material objects “out there”; no longer could we ask, as Heisenberg put it, whether “particles exist in space and time objectively.”

(If you’re wondering why no one called him out, or more specifically why we have a bias to not call BS out, check out pluralistic ignorance).

Jargon does have its place. In specific contexts, it is absolutely vital. But in everyday communication, its use is a sign that we wish to appear complex and therefore more intelligent. Great thinkers throughout the ages have stressed the crucial importance of using simple language to convey complex ideas. Many of the ancient thinkers whose work we still reference today — people like Plato, Marcus Aurelius, Seneca, and Buddha — were known for their straightforward communication and their ability to convey great wisdom in a few words.

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.”

— Ernst F. Schumacher

How Can We Overcome Complexity Bias?

The most effective tool we have for overcoming complexity bias is Occam’s razor. Also known as the principle of parsimony, this is a problem-solving principle used to eliminate improbable options in a given situation. Occam’s razor suggests that the simplest solution or explanation, the one requiring the fewest assumptions, is usually the best one to work with. When we don’t have enough empirical evidence to disprove a hypothesis, avoiding unfounded assumptions and unnecessary complexity helps us make quick decisions and establish truths.

An important point to note is that Occam’s razor does not state that the simplest hypothesis is the correct one, but states rather that it is the best option before the establishment of empirical evidence. It is also useful in situations where empirical data is difficult or impossible to collect. While complexity bias leads us towards intricate explanations and concepts, Occam’s razor can help us to trim away assumptions and look for foundational concepts.

Returning to Skinner’s pigeons, had they known of Occam’s razor, they would have realized that there were two main possibilities:

  • Their behavior affects the food delivery.

Or:

  • Their behavior is irrelevant because the food delivery is random or on a timed schedule.

Using Occam’s razor, the head-bobbing, circle-turning pigeons would have realized that the first hypothesis involves numerous assumptions, including:

  • There is a particular behavior they must enact to receive food.
  • The delivery mechanism can somehow sense when they enact this behavior.
  • The required behavior is different from behaviors that would normally give them access to food.
  • The delivery mechanism is consistent.

And so on. Occam’s razor would dictate that because the second hypothesis is the simplest, involving the fewest assumptions, it is most likely the correct one.

Many geniuses are really good at eliminating unnecessary complexity. Einstein, for instance, was a master at sifting the essential from the non-essential. Steve Jobs was the same.


Do Algorithms Beat Us at Complex Decision Making?

Decision-making algorithms are undoubtedly controversial. When a decision will have a major influence on their lives, most people would prefer that a human make it. But what if algorithms really can make better decisions?

***

Algorithms are all the rage these days. AI researchers are taking more and more ground from humans in areas like rules-based games, visual recognition, and medical diagnosis. However, the idea that algorithms make better predictive decisions than humans in many fields is a very old one.

In 1954, the psychologist Paul Meehl published a controversial book with a boring-sounding name: Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence.

The controversy? After reviewing the data, Meehl claimed that mechanical, data-driven algorithms could better predict human behavior than trained clinical psychologists — and with much simpler criteria. He was right.

The passing of time has not been friendly to humans in this game: studies continue to show that algorithms do a better job than experts in a range of fields. In Thinking, Fast and Slow, Daniel Kahneman details a selection of fields in which human judgment has proven inferior to algorithms:

The range of predicted outcomes has expanded to cover medical variables such as the longevity of cancer patients, the length of hospital stays, the diagnosis of cardiac disease, and the susceptibility of babies to sudden infant death syndrome; economic measures such as the prospects of success for new businesses, the evaluation of credit risks by banks, and the future career satisfaction of workers; questions of interest to government agencies, including assessments of the suitability of foster parents, the odds of recidivism among juvenile offenders, and the likelihood of other forms of violent behavior; and miscellaneous outcomes such as the evaluation of scientific presentations, the winners of football games, and the future prices of Bordeaux wine.

The connection between them? Says Kahneman: “Each of these domains entails a significant degree of uncertainty and unpredictability.” He called them “low-validity environments,” and in those environments, simple algorithms matched or outplayed humans and their “complex” decision-making criteria, essentially every time.

***

A typical case is described in Michael Lewis’ book on the relationship between Daniel Kahneman and Amos Tversky, The Undoing Project. He writes of work done at the Oregon Research Institute on radiologists and their x-ray diagnoses:

The Oregon researchers began by creating, as a starting point, a very simple algorithm, in which the likelihood that an ulcer was malignant depended on the seven factors doctors had mentioned, equally weighted. The researchers then asked the doctors to judge the probability of cancer in ninety-six different individual stomach ulcers, on a seven-point scale from “definitely malignant” to “definitely benign.” Without telling the doctors what they were up to, they showed them each ulcer twice, mixing up the duplicates randomly in the pile so the doctors wouldn’t notice they were being asked to diagnose the exact same ulcer they had already diagnosed. […] The researchers’ goal was to see if they could create an algorithm that would mimic the decision making of doctors.

This simple first attempt, [Lewis] Goldberg assumed, was just a starting point. The algorithm would need to become more complex; it would require more advanced mathematics. It would need to account for the subtleties of the doctors’ thinking about the cues. For instance, if an ulcer was particularly big, it might lead them to reconsider the meaning of the other six cues.

But then UCLA sent back the analyzed data, and the story became unsettling. (Goldberg described the results as “generally terrifying”.) In the first place, the simple model that the researchers had created as their starting point for understanding how doctors rendered their diagnoses proved to be extremely good at predicting the doctors’ diagnoses. The doctors might want to believe that their thought processes were subtle and complicated, but a simple model captured these perfectly well. That did not mean that their thinking was necessarily simple, only that it could be captured by a simple model.

More surprisingly, the doctors’ diagnoses were all over the map: The experts didn’t agree with each other. Even more surprisingly, when presented with duplicates of the same ulcer, every doctor had contradicted himself and rendered more than one diagnosis: These doctors apparently could not even agree with themselves.

[…]

If you wanted to know whether you had cancer or not, you were better off using the algorithm that the researchers had created than you were asking the radiologist to study the X-ray. The simple algorithm had outperformed not merely the group of doctors; it had outperformed even the single best doctor.
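
To make the idea concrete, here is a minimal sketch (in Python) of the kind of equally weighted “improper” model the excerpt describes: a handful of cues, each given the same weight, combined into a single score. The cue names and values below are invented for illustration; they are not the study’s actual seven factors or data.

```python
# A minimal sketch of an equally weighted "improper linear model",
# in the spirit of the study described above. The cue names and
# values are hypothetical; they are not the study's actual factors or data.

def malignancy_score(cues):
    """Equal weights: average the cue values, each scaled to 0-1."""
    return sum(cues.values()) / len(cues)

# One hypothetical ulcer, described by seven invented cues.
ulcer = {
    "size": 0.8,
    "border_irregularity": 0.6,
    "crater_depth": 0.4,
    "location_risk": 0.7,
    "fold_pattern": 0.5,
    "mass_effect": 0.3,
    "patient_age_factor": 0.9,
}

print(f"Predicted malignancy score: {malignancy_score(ulcer):.2f}")
```

Feed the same ulcer in twice and you get exactly the same score, which is precisely the consistency the doctors in the study lacked.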

The fact that doctors (and psychiatrists, and wine experts, and so forth) cannot even agree with themselves is a problem called decision-making “noise”: Given the same set of data twice, we make two different decisions. Noise. Internal contradiction.

Algorithms win, at least partly, because they don’t do this: The same inputs generate the same outputs every single time. They don’t get distracted, they don’t get bored, they don’t get mad, they don’t get annoyed. Basically, they don’t have off days. And they don’t fall prey to the litany of biases that humans do, like the representativeness heuristic.

The algorithm doesn’t even have to be a complex one. As the radiology example above demonstrates, simple rules can work just as well as complex ones. Kahneman himself addresses this in Thinking, Fast and Slow when discussing Robyn Dawes’s research on the superiority of simple algorithms that use a few equally weighted predictive variables:

The surprising success of equal-weighting schemes has an important practical implication: it is possible to develop useful algorithms without prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula: Frequency of lovemaking minus frequency of quarrels.

You don’t want your result to be a negative number.

The important conclusion from this research is that an algorithm that is constructed on the back of an envelope is often good enough to compete with an optimally weighted formula, and certainly good enough to outdo expert judgment. This logic can be applied in many domains, ranging from the selection of stocks by portfolio managers to the choices of medical treatments by doctors or patients.
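
To see just how far “back of the envelope” can go, here is the marital-stability formula Kahneman quotes from Dawes, written out in a few lines of Python. The weekly counts are invented for illustration.

```python
# Dawes's back-of-the-envelope predictor, as quoted by Kahneman:
# marital stability = frequency of lovemaking minus frequency of quarrels.
# The weekly counts below are invented for illustration.

def marital_stability(lovemaking_per_week, quarrels_per_week):
    return lovemaking_per_week - quarrels_per_week

print(marital_stability(3, 1))   #  2: positive, the number you want
print(marital_stability(1, 4))   # -3: "you don't want your result to be a negative number"
```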

Stock selection, certainly a “low-validity environment”, is an excellent example of the phenomenon.

As John Bogle pointed out to the world in the 1970s, a point that has only grown stronger with time, the vast majority of human stock-pickers cannot outperform a simple S&P 500 index fund: an investment fund that operates on strict algorithmic rules about which companies to buy and sell, and in what quantities. The rules of the index aren’t complex, and many people have tried to improve on them with less success than might be imagined.

***

Another interesting area where this holds is interviewing and hiring, a notoriously difficult “low-validity” environment. Even elite firms often don’t do it that well, as has been well documented.

Fortunately, if we take heed of the psychologists’ advice, there are rules for operating in a low-validity environment that can work very well. In Thinking, Fast and Slow, Kahneman recommends fixing your hiring process by doing the following (or some close variant) in order to replicate the success of the algorithms:

Suppose you need to hire a sales representative for your firm. If you are serious about hiring the best possible person for the job, this is what you should do. First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Don’t overdo it — six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of questions for each trait and think about how you will score it, say on a 1-5 scale. You should have an idea of what you will call “very weak” or “very strong.”

These preparations should take you half an hour or so, a small investment that can make a significant difference in the quality of the people you hire. To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. To evaluate each candidate, add up the six scores. […] Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better–try to resist your wish to invent broken legs to change the ranking. A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as “I looked into his eyes and liked what I saw.”
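
Kahneman’s procedure is mechanical enough to write down in a few lines of code. Here is a minimal sketch in Python, assuming six traits scored 1–5 from factual questions; apart from the three traits Kahneman names, the trait names, candidates, and scores are hypothetical placeholders, not recommendations for any particular role.

```python
# A minimal sketch of Kahneman's structured hiring procedure: six traits,
# each scored 1-5 independently from factual questions, summed, and the
# highest total hired. Trait names beyond the three Kahneman mentions,
# and all candidates and scores, are hypothetical.

TRAITS = [
    "technical_proficiency", "engaging_personality", "reliability",  # named by Kahneman
    "communication", "initiative", "attention_to_detail",            # placeholder traits
]

def total_score(scores):
    """Add up the six 1-5 trait scores for one candidate."""
    return sum(scores[trait] for trait in TRAITS)

candidates = {
    "Candidate A": {"technical_proficiency": 4, "engaging_personality": 3, "reliability": 5,
                    "communication": 4, "initiative": 3, "attention_to_detail": 4},
    "Candidate B": {"technical_proficiency": 3, "engaging_personality": 5, "reliability": 3,
                    "communication": 4, "initiative": 4, "attention_to_detail": 3},
}

# Firmly resolve to hire the highest total, even if you "like" someone else better.
ranked = sorted(candidates.items(), key=lambda item: total_score(item[1]), reverse=True)
for name, scores in ranked:
    print(name, total_score(scores))
```

The point of the exercise is the discipline, not the code: score each trait independently, add them up, and commit to the ranking before your intuition gets a vote.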

In the battle of man vs. algorithm, unfortunately, man often loses. That is precisely the promise of Artificial Intelligence. So if we’re going to be smart humans, we must learn to be humble in situations where our intuitive judgment simply is not as good as a set of simple rules.

Blog Posts, Book Reviews, and Abstracts: On Shallowness

We’re quite glad that you read Farnam Street, and we hope we’re always offering you a massive amount of value. (If not, email us and tell us what we can do more effectively.)

But there’s a message all of our readers should appreciate: Blog posts are not enough to generate the deep fluency you need to truly understand or get better at something. We offer a starting point, not an end point.

This goes just as well for book reviews, abstracts, CliffsNotes, and a good deal of short-form journalism.

This is a hard message for those who want a shortcut. They want the “gist” and the “high-level takeaways” without doing the work or eating any of the broccoli. They think that’s all it takes: Check out a 5-minute read, and their decision making and understanding of the world will instantly improve. Most blogs, of course, encourage this kind of shallowness, because it makes you feel that the whole thing is pretty easy.

Here’s the problem: The world is more complex than that. It doesn’t actually work this way. The nuanced detail behind every “high-level takeaway” gives you the context needed to use it in the real world: the exceptions, the edge cases, and the contradictions.

Let me give you an example.

A high-level takeaway from reading Kahneman’s Thinking, Fast and Slow would be that we are subject to something he and Amos Tversky called the Representativeness Heuristic. We create models of things in our heads and then fit our real-world experiences to the model, often over-fitting drastically. A very useful idea.

However, that’s not enough. There are so many follow-up questions. Where do we make the most mistakes? Why does our mind create these models? Where is this tendency generally useful? What are the nuanced examples of where it fails us? And so on. Just knowing that the heuristic exists won’t do any work for you.

Or take the rise of the human species as laid out by Yuval Harari. It’s great to read a post on his theory: how myths laid the foundation for our success, how “natural” is probably a useless concept the way it’s typically used, and how biology is the great enabler.

But Harari’s book itself contains the relevant detail that fleshes all of this out. And further, his bibliography is full of resources that demand your attention if you want even more backing. How did he develop that idea? You have to look to find out.

Why do all this? Because without the massive, relevant detail, your understanding is a house of cards.

What Farnam Street and a lot of other great resources give you is something like a brief map of the territory.

Welcome to Colonial Williamsburg! Check out the re-enactors, the museum, and the theatre. Over there is the Revolutionary City. Gettysburg is 4 hours north. Washington D.C. is closer to 2.5 hours.

Great – now you have the lay of the land. Time to dig in and actually learn about the American Revolution. (This book is awesome, if you actually want to do that.)

Going back to Kahneman, one of his and Tversky’s great findings was the concept of the Availability Heuristic. Basically, the mind operates on what it has close at hand.

As Kahneman puts it, “An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”

That means that in the moment of decision making, when you’re thinking hard about some complex problem you face, it’s unlikely that your mind is working all that successfully without the details. It doesn’t have anything to draw on. It’d be like a chess player who read a book about great chess players but hadn’t actually studied their moves. Not very effective.

The great difficulty, of course, is that we lack the time to dig deep into everything. Opportunity costs and trade-offs are quite real.

That’s why you must develop excellent filters. What’s worth learning this deeply? We think it’s the first-principles style mental models: the great ideas from physical systems, biological systems, and human systems. The new-new thing you’re studying is probably either (a) wrong or (b) built on one of those great ideas anyway. Farnam Street, in a way, is just a giant filtering mechanism to get you started down the hill.

But don’t stop there. Don’t stop at the starting line. Resolve to increase your depth and stop thinking you can have it all in 5 minutes or less. Use our stuff, and whoever else’s stuff you like, as an entrée to the real thing.