Efficiency is the Enemy

There’s a good chance most of the problems in your life and work come down to insufficient slack. Here’s how slack works and why you need more of it.

Imagine if you, as a budding productivity enthusiast, one day gained access to a time machine and decided to take a trip back several decades to the office of one of your old-timey business heroes. Let’s call him Tony.

You disguise yourself as a janitor and figure a few days of observation should be enough to reveal the secret of that CEO’s incredible productivity and shrewd decision-making. You want to learn the habits and methods that enabled him to transform an entire industry for good.

Arriving at the (no doubt smoke-filled) office, you’re a little surprised to find it’s far from a hive of activity. In fact, the people you can see around seem to be doing next to nothing. Outside your hero’s office, his secretary lounges at her desk (and, let’s face it, the genders wouldn’t have been the other way around). Let’s call her Gloria. She doesn’t appear busy at all. You observe for half an hour as she reads, tidies her desk, and chats with other secretaries who pass by. They don’t seem busy either. Confused as to why Tony would squander money on idle staff, you stick around for a few more hours.

With a bit more observation, you realize your initial impression was entirely wrong. Gloria does indeed do nothing much of the time. But every so often, a request, instruction, or alert comes from Tony and she leaps into action. Within minutes, she answers the call, sends the letter, reschedules the appointment, or finds the right document. Any time he has a problem, she solves it right away. There’s no to-do list, no submitting a ticket, no waiting for a reply to an email for either Tony or Gloria.

As a result, Tony’s day goes smoothly and efficiently. Every minute of his time goes on the most important part of his work—making decisions—and not on dealing with trivial inconveniences like waiting in line at the post office.

All that time Gloria spends doing nothing isn’t wasted time. It’s slack: excess capacity allowing for responsiveness and flexibility. The slack time is important because it means she never has a backlog of tasks to complete. She can always deal with anything new straight away. Gloria’s job is to ensure Tony is as busy as he needs to be. It’s not to be as busy as possible.

If you ever find yourself stressed, overwhelmed, sinking into stasis despite wanting to change, or frustrated when you can’t respond to new opportunities, you need more slack in your life.

In Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency, Tom DeMarco explains that most people and organizations fail to recognize the value of slack. Although the book is now around twenty years old, its primary message is timeless and worth revisiting.

***

The enemy of efficiency

“You’re efficient when you do something with minimum waste. And you’re effective when you’re doing the right something.”

Many organizations are obsessed with efficiency. They want to be sure every resource is utilized to its fullest capacity and everyone is sprinting around every minute of the day doing something. They hire expert consultants to sniff out the faintest whiff of waste.

As individuals, many of us are also obsessed with the mirage of total efficiency. We schedule every minute of our day, pride ourselves on forgoing breaks, and berate ourselves for the slightest moment of distraction. We view sleep, sickness, and burnout as unwelcome weaknesses and idolize those who never seem to succumb to them. This view, however, fails to recognize that efficiency and effectiveness are not the same thing.

Total efficiency is a myth. Let’s return to Gloria and Tony. Imagine if Tony decided to assign her more work to ensure she spends a full eight hours a day busy. Would that be more efficient? Not really. Slack time enables her to respond to his requests right away, thus being effective at her job. If Gloria is already occupied, Tony will have to wait and whatever he’s doing will get held up. Both of them would be less effective as a result.

Any time we eliminate slack, we create a build-up of work. DeMarco writes, “As a practical matter, it is impossible to keep everyone in the organization 100 percent busy unless we allow for some buffering at each employee’s desk. That means there is an inbox where work stacks up.”
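
The build-up is easy to see in a toy queue simulation, a sketch of my own rather than anything from the book (the arrival probability and burst sizes are invented for illustration). At roughly 80 percent utilization the inbox stays shallow; let average arrivals exactly match capacity and the backlog climbs without limit:

    import random

    random.seed(3)

    def average_backlog(arrival_prob, ticks=200_000):
        # Tasks arrive in bursts of 1-3 with probability arrival_prob per
        # tick; one task is completed per tick whenever the inbox isn't empty.
        backlog = total = 0
        for _ in range(ticks):
            if random.random() < arrival_prob:
                backlog += random.randint(1, 3)  # bursty arrivals, mean 2
            if backlog:
                backlog -= 1
            total += backlog
        return total / ticks

    print(average_backlog(0.40))  # ~80% utilization: the inbox stays small
    print(average_backlog(0.50))  # 100% utilization: the backlog keeps climbing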

Many of us have come to expect work to involve no slack time because of the negative way we perceive it. In a world of manic efficiency, slack often comes across as laziness or a lack of initiative. Without slack time, however, we know we won’t be able to get through new tasks straight away, and if someone insists we should, we have to drop whatever we were previously doing. One way or another, something gets delayed. The increase in busyness may well be futile:

“It’s possible to make an organization more efficient without making it better. That’s what happens when you drive out slack. It’s also possible to make an organization a little less efficient and improve it enormously. In order to do that, you need to reintroduce enough slack to allow the organization to breathe, reinvent itself, and make necessary change.”

***

Defining slack

DeMarco defines slack as “the degree of freedom required to effect change. Slack is the natural enemy of efficiency and efficiency is the natural enemy of slack.” Elsewhere, he writes: “Slack represents operational capacity sacrificed in the interests of long-term health.”

To illustrate the concept, DeMarco asks the reader to imagine one of those puzzle games consisting of eight numbered tiles in a box, with one empty space so you can slide them around one at a time. The objective is to shuffle the tiles into numerical order. That empty space is the equivalent of slack. If you remove it, the game is technically more efficient, but “something else is lost. Without the open space, there is no further possibility of moving tiles at all. The layout is optimal as it is, but if time proves otherwise, there is no way to change it.”
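
A minimal Python sketch makes the point concrete (the board encoding and the helper function are my own illustration, not DeMarco’s). With the empty space, several tiles can move; fill that space with a ninth tile and no move is possible at all:

    # A 3x3 sliding-tile board; 0 marks the empty space (the slack).
    def legal_moves(board):
        # Return board positions holding tiles that could slide into the gap.
        if 0 not in board:
            return []  # no slack: the layout is frozen exactly as it is
        gap = board.index(0)
        row, col = divmod(gap, 3)
        moves = []
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            r, c = row + dr, col + dc
            if 0 <= r < 3 and 0 <= c < 3:
                moves.append(r * 3 + c)
        return moves

    with_slack = [1, 2, 3, 4, 0, 5, 6, 7, 8]  # one empty space in the middle
    no_slack = [1, 2, 3, 4, 9, 5, 6, 7, 8]    # fully "efficient": every cell used

    print(len(legal_moves(with_slack)))  # 4 -- four tiles can move
    print(len(legal_moves(no_slack)))    # 0 -- no further change is possible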

Having a little bit of wiggle room allows us to respond to changing circumstances, to experiment, and to do things that might not work.

Slack consists of excess resources. It might be time, money, people on a job, or even expectations. Slack is vital because it prevents us from getting locked into our current state, unable to respond or adapt because we just don’t have the capacity.

Not having slack is taxing. Scarcity weighs on our minds and uses up energy that could go toward doing the task at hand better. It amplifies the impact of failures and unintended consequences.

Too much slack is bad because resources get wasted and people get bored. But, on the whole, an absence of slack is a problem far more often than an excess of it. If you give yourself too much slack time when scheduling a project that goes more smoothly than expected, you probably won’t spend the spare time sitting like a lemon. Maybe you’ll recuperate from an earlier project that took more effort than anticipated. Maybe you’ll tinker with some on-hold projects. Maybe you’ll be able to review why this one went well and derive lessons for the future. And maybe slack time is just your reward for doing a good job already! You deserve breathing room.

Slack also allows us to handle the inevitable shocks and surprises of life. If every hour in our schedules is accounted for, we can’t slow down to recover from a minor cold, shift a bit of focus to learning a new skill for a while, or absorb a couple of hours of technical difficulties.

In general, you need more slack than you expect. Unless you have a lot of practice, your estimations of how long things will take or how difficult they are will almost always be on the low end. Most of us treat best-case scenarios as if they are the most likely scenarios and will inevitably come to pass, but they rarely do.

You also need to keep a vigilant eye on how fast you use up your slack so you can replenish it in time. For example, you might want to review your calendar once per week to check it still has white space each day and you haven’t allowed meetings to fill up your slack time. Think of the forms of slack that are more important to you, then check up on them regularly. If you find you’re running out of slack, take action.

Once in a while, you might need to forgo slack to reap the benefits of constraints. Lacking slack in the short term or in a particular area can force you to be more inventive. If you find yourself struggling to come up with a creative solution, try consciously reducing your slack. For example, give yourself five minutes to brainstorm ideas or ask yourself what you might do if your budget were slashed by 90%.

Most of the time, though, it’s critical to guard your slack with care. It’s best to assume you’ll always tend toward using it up—or other people will try to steal it from you. Set clear boundaries in your work and keep an eye on tasks that might inflate.

***

Slack and change

In the past, people and organizations could sometimes get by without much slack—at least for a while. Now, even as slack keeps becoming more and more vital for survival, we’re keener than ever to eliminate it in the name of efficiency. Survival requires constant change and reinvention, which “require a commodity that is absent in our time as it has never been before. That commodity—the catalytic ingredient of change—is slack.” DeMarco goes on to write:

“Slack is the time when reinvention happens. It is time when you are not 100 percent busy doing the operational business of your firm. Slack is the time when you are 0 percent busy. Slack at all levels is necessary to make the organization work effectively and to grow. It is the lubricant of change. Good companies excel in creative use of slack. And bad ones only obsess about removing it.”

Only when we are 0 percent busy can we step back and look at the bigger picture of what we’re doing. Slack allows us to think ahead. To consider whether we’re on the right trajectory. To contemplate unseen problems. To mull over information. To decide if we’re making the right trade-offs. To do things that aren’t scalable or that might not have a chance to prove profitable for a while. To walk away from bad deals.

***

Slack and productivity

The irony is that we achieve far more in the long run when we have slack. We are more productive when we don’t try to be productive all the time.

DeMarco explains that the amount of work each person in an organization has is never static: “Things change on a day-to-day basis. This results in new unevenness of the tasks, with some people incurring additional work (their buffers build up), while others become less loaded, since someone ahead of them in the work chain is slower to generate their particular kind of work to pass along.” An absence of slack is unsustainable. Inevitably, we end up needing additional resources, which have to come from somewhere.

Being comfortable with sometimes being 0 percent busy means we think about whether we’re doing the right thing. This is in contrast to grabbing the first task we see so no one thinks we’re lazy. The expectation of “constant busyness means efficiency” creates pressure to always look occupied and keep a buffer of work on hand. If we see our buffer shrinking and we want to keep busy, the only possible solution is to work slower.

Trying to eliminate slack causes work to expand. There’s never any free time because we always fill it.

Amos Tversky said the secret to doing good research is to always be a little underemployed; you waste years by not being able to waste hours. Those wasted hours are necessary to figure out if you’re headed in the right direction.

“Jootsing”: The Key to Creativity

Creativity can seem like a mysterious process. But many of the most creative people understand that you can actually break it down into a simple formula, involving what researcher Douglas Hofstadter calls “jootsing.” Here’s how understanding systems can help us think more creatively.

“Art is limitation; the essence of every picture is the frame. If you draw a giraffe, you must draw him with a long neck. If in your bold creative way you hold yourself free to draw a giraffe with a short neck, you will really find that you are not free to draw a giraffe.” —G.K. Chesterton

We can break the creative process down into the following three steps:

  1. Gain a deep understanding of a particular system and its rules.
  2. Step outside of that system and look for something surprising that subverts its rules.
  3. Use what you find as the basis for making something new and creative.

It may not be simple to do, but it is reliable and repeatable.

In Intuition Pumps and Other Tools for Thinking, philosopher Daniel C. Dennett describes this process of understanding a system in order to step outside of it as “jootsing,” using a term coined by Douglas Hofstadter. “Jootsing” means “jumping out of the system.”

Dennett explains that jootsing is the method behind creativity in science, philosophy, and the arts: “Creativity, that ardently sought but only rarely found virtue, often is a heretofore unimagined violation of the rules of the system from which it springs.” The rules within a system could be things like the idea that a painting must have a frame, a haiku must only have seventeen syllables, or a depiction of landscape must have a blue sky. But galleries hang paintings without frames all the time. Haiku without seventeen syllables win international contests. And landscape paintings don’t need to contain a sky, let alone a blue one.

***

Creativity, as Dennett describes it, is not about pure novelty. The concept of jootsing shows us that constraints and restrictions are essential for creativity.

Breaking rules you don’t know exist is not a statement. It’s a common refrain that much of modern art could be the work of a five-year-old. Yet while a five-year-old could produce a random combination of elements that looks similar to a famous work of modern art, it would not be creative in the same way, because the child would not be jootsing. They would have no understanding of the system they were subverting.

Limitations are essential because they give us a starting point and a shape to work against.

While amateurs may attempt to start from scratch when trying to make something creative in a new area, professionals know they must first get in touch with the existing territory. Before even contemplating their own work, they take the time to master the conventional ways of doing things, to know what the standards are, and to become well-versed in the types of work considered exemplary. Doing so can take years or even the best part of a career. Dennett summarizes: “It helps to know the tradition if you want to subvert it. That’s why so few dabblers or novices succeed in coming up with anything truly creative.”

***

Understanding a system first is necessary for creativity for two reasons. First, it provides something comprehensible to use as a starting point, and second, it makes it possible to come up with something more interesting or useful. If you try to start a creative effort from nothing, you’ll end up with mere chaos.

Dennett writes: “Sit down at a piano and try to come up with a good new melody and you soon discover how hard it is. All the keys are available, in any combination you choose, but until you can find something to lean on, some style or genre or pattern to lay down and exploit a bit, or allude to, before you twist it, you will come up with nothing but noise.”

Creativity often begins with accidents that end up showing a new possibility or reveal that violating a particular rule isn’t as harmful as expected. Elsewhere in the book, Dennett suggests that any computer model intended to generate creativity must include mistakes and randomness, “junk lying around that your creative process can bump into, noises that your creative process can’t help overhearing.”

***

Most of us say we want to be creative—and we want the people we work with and for to be creative. The concept of jootsing reveals why we often end up preventing that from happening. Creativity is impossible without in some way going against rules that exist for a good reason.

Psychologists Jacob Getzels and Philip Jackson studied creativity in the 1950s. Their findings, replicated across many subsequent studies, became known as the Getzels-Jackson effect: “The vast majority—98 percent—of teachers say creating is so important that it should be taught daily, but when tested, they nearly always favor less creative children over more creative children.”

Kevin Ashton, in How to Fly a Horse, explains why. Teachers favor less creative children “because people who are more creative also tend to be more playful, unconventional, and unpredictable, and all of this makes them harder to control. No matter how much we say we value creation, deep down, most of us value control more. And so we fear change and favor familiarity. Rejecting is a reflex.” Ashton notes that the Getzels-Jackson effect is also present in the organizations we are a part of in adulthood. When the same tests are applied to decision-makers and authority figures in business, science, and government, the results are the same: they all say they value creation, but it turns out they don’t value creators.

***

If you want people to be creative, you can’t complain or punish them when they question a system that is “typically so entrenched that it is as invisible as the air you breathe,” as Dennett says. You need to permit a lot of exploration, including ideas that don’t work out. Not everything outside of a system proves worth pursuing. And often the rules that are most beneficial to break are those that seem the most load-bearing, as if meddling with them will cause the whole system to collapse. It might—or it might make it much better.

You also need to permit the making of mistakes if you want to foster creativity, because that often ends up leading to new discoveries. Dennett writes, “The exploitation of accidents is the key to creativity, whether what is being made is a new genome, a new behavior, or a new melody.” Most accidents never end up being profitable or valuable in a measurable way. But they’re necessary because they’re part of the process of developing something new. Accidents fuel creativity.

In the book Loonshots, Safi Bahcall explores, among other ideas, how to nurture and develop those seemingly crazy ideas that turn out to be paradigm-shifting innovations. He gives many examples of now ubiquitous technologies that were initially laughed at, rejected, or buried. He notes that it’s not easy to immediately buy in to radical developments, and if we want to have environments where creating is possible, then we have to give creativity space and understanding. “It’s worth keeping in mind,” he says, “that revving the creative engine to fire at higher speeds . . . means more ideas and more experiments, which also means, inevitably, more failed experiments.”

As individuals, if we want to be creative, we need to give ourselves space to play and experiment without a set agenda. Amos Tversky famously said that the secret to doing good work is being a little underemployed so you always have hours in the day to waste as you wish. During that wasted time, you’ll likely have your best, most creative ideas.

If your schedule is crammed with only room for what’s productive in an obvious way, you’ll have a hard time seeing outside of the existing system.

Complexity Bias: Why We Prefer Complicated to Simple

Complexity bias is a cognitive bias that leads us to give undue credence to complex concepts.

Faced with two competing hypotheses, we are likely to choose the most complex one. That’s usually the option with the most assumptions and regressions. As a result, when we need to solve a problem, we may ignore simple solutions — thinking “that will never work” — and instead favor complex ones.

To understand complexity bias, we need first to establish the meaning of three key terms associated with it: complexity, simplicity, and chaos.

The Cambridge Dictionary defines complexity as “the state of having many parts and being difficult to understand or find an answer to.” The definition of simplicity is the inverse: “something [that] is easy to understand or do.” Chaos is defined as “a state of total confusion with no order.”

“Life is really simple, but we insist on making it complicated.”

— Confucius

Complex systems contain individual parts that combine to form a collective that often can’t be predicted from its components. Consider humans. We are complex systems. We’re made of about 100 trillion cells and yet we are so much more than the aggregation of our cells. You’d never predict what we’re like or who we are from looking at our cells.

Complexity bias is our tendency to take something that is easy to understand, or that simply confuses us, and view it as being made up of many parts that are difficult to understand.

We often find it easier to face a complex problem than a simple one.

A person who feels tired all the time might insist that their doctor check their iron levels while ignoring the fact that they are unambiguously sleep deprived. Someone experiencing financial difficulties may stress over the technicalities of their telephone bill while ignoring the large sums of money they spend on cocktails.

Marketers make frequent use of complexity bias.

They do this by incorporating confusing language or insignificant details into product packaging or sales copy. Most people who buy “ammonia-free” hair dye, or a face cream which “contains peptides,” don’t fully understand the claims. Terms like these often mean very little, but we see them and imagine that they signify a product that’s superior to alternatives.

How many of you know what probiotics really are and how they interact with gut flora?

Meanwhile, we may also see complexity where only chaos exists. This tendency manifests in many forms, such as conspiracy theories, superstition, folklore, and logical fallacies. The distinction between complexity and chaos is not a semantic one. When we imagine that something chaotic is in fact complex, we are seeing it as having an order and more predictability than is warranted. In fact, there is no real order, and prediction is incredibly difficult at best.

Complexity bias is interesting because the majority of cognitive biases occur in order to save mental energy. For example, confirmation bias enables us to avoid the effort associated with updating our beliefs. We stick to our existing opinions and ignore information that contradicts them. Availability bias is a means of avoiding the effort of considering everything we know about a topic. It may seem like the opposite is true, but complexity bias is, in fact, another cognitive shortcut. By opting for impenetrable solutions, we sidestep the need to understand. Of the fight-or-flight responses, complexity bias is the flight response. It is a means of turning away from a problem or concept and labeling it as too confusing. If you think something is harder than it is, you surrender your responsibility to understand it.

“Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.”

— Andy Benoit

Faced with too much information on a particular topic or task, we see it as more complex than it is. Often, understanding the fundamentals will get us most of the way there. Software developers often find that 90% of the code for a project takes about half the allocated time. The remaining 10% takes the other half. Writing — and any other sort of creative work — is much the same. When we succumb to complexity bias, we are focusing too hard on the tricky 10% and ignoring the easy 90%.

Research has revealed our inherent bias towards complexity.

In a 1989 paper entitled “Sensible reasoning in two tasks: Rule discovery and hypothesis evaluation,” Hilary F. Farris and Russell Revlin evaluated the topic. In one study, participants were asked to establish an arithmetic rule. They received a set of three numbers (such as 2, 4, 6) and tried to generate a hypothesis by asking the experimenter if other number sequences conformed to the rule. Farris and Revlin wrote, “This task is analogous to one faced by scientists, with the seed triple functioning as an initiating observation, and the act of generating the triple is equivalent to performing an experiment.”

The actual rule was simple: list any three ascending numbers.

The participants could have said anything from “1, 2, 3” to “3, 7, 99” and been correct. It should have been easy for the participants to guess this, but most of them didn’t. Instead, they came up with complex rules for the sequences. (Also see Falsification of Your Best Loved Ideas.)
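
A small sketch shows why the task is such a trap (the “even numbers increasing by two” hypothesis is a typical over-complex guess of my own invention, not one reported in the paper). The seed triple confirms both the simple true rule and the complicated one, so only a test that violates the complicated hypothesis is informative:

    # The experimenter's actual rule: any three ascending numbers.
    def true_rule(triple):
        a, b, c = triple
        return a < b < c

    # A typically over-complex hypothesis: "even numbers increasing by two."
    def complex_hypothesis(triple):
        a, b, c = triple
        return a % 2 == 0 and b == a + 2 and c == b + 2

    seed = (2, 4, 6)
    print(true_rule(seed), complex_hypothesis(seed))    # True True

    # Triples generated from the complex hypothesis only ever confirm it.
    # The informative test is one that violates it -- and still passes:
    probe = (1, 2, 3)
    print(true_rule(probe), complex_hypothesis(probe))  # True False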

A paper by Helena Matute looked at how intermittent reinforcement leads people to see complexity in chaos. Three groups of participants were placed in rooms and told that a loud noise would play from time to time. The volume, length, and pattern of the sound were identical for each group. Group 1 (Control) was told to sit and listen to the noises. Group 2 (Escape) was told that there was a specific action they could take to stop the noises. Group 3 (Yoked) was told the same as Group 2, but in their case, there was actually nothing they could do.

Matute wrote:

Yoked participants received the same pattern and duration of tones that had been produced by their counterparts in the Escape group. The amount of noise received by Yoked and Control subjects depends only on the ability of the Escape subjects to terminate the tones. The critical factor is that Yoked subjects do not have control over reinforcement (noise termination) whereas Escape subjects do, and Control subjects are presumably not affected by this variable.

The result? Not one member of the Yoked group realized that they had no control over the sounds. Many members came to repeat particular patterns of “superstitious” behavior. Indeed, the Yoked and Escape groups had very similar perceptions of task controllability. Faced with randomness, the participants saw complexity.

Does that mean the participants were stupid? Not at all. We all exhibit the same superstitious behavior when we believe we can influence chaotic or simple systems.

Funnily enough, animal studies have revealed much the same. In particular, consider B.F. Skinner’s well-known research on the effects of random rewards on pigeons. Skinner placed hungry pigeons in cages equipped with a random-food-delivery mechanism. Over time, the pigeons came to believe that their behavior affected the food delivery. Skinner described this as a form of superstition. One bird spun in counterclockwise circles. Another butted its head against a corner of the cage. Other birds swung or bobbed their heads in specific ways. Although there is some debate as to whether “superstition” is an appropriate term to apply to birds, Skinner’s research shed light on the human tendency to see things as being more complex than they actually are.

Skinner wrote (in “‘Superstition’ in the Pigeon,” Journal of Experimental Psychology, 38):

The bird behaves as if there were a causal relation between its behavior and the presentation of food, although such a relation is lacking. There are many analogies in human behavior. Rituals for changing one’s fortune at cards are good examples. A few accidental connections between a ritual and favorable consequences suffice to set up and maintain the behavior in spite of many unreinforced instances. The bowler who has released a ball down the alley but continues to behave as if he were controlling it by twisting and turning his arm and shoulder is another case in point. These behaviors have, of course, no real effect upon one’s luck or upon a ball half way down an alley, just as in the present case the food would appear as often if the pigeon did nothing—or, more strictly speaking, did something else.
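
A rough simulation of the setup captures the mechanism (the behavior names, tick counts, and 15-tick feeding interval are invented for illustration; all that matters is that food arrives independently of behavior):

    import random

    random.seed(42)
    behaviors = ["spin", "head-bob", "corner-butt", "head-swing"]
    credited = {b: 0 for b in behaviors}

    # Food arrives every 15 ticks, no matter what the bird is doing.
    for tick in range(1, 10_001):
        action = random.choice(behaviors)  # the bird acts at random
        if tick % 15 == 0:
            credited[action] += 1  # whatever it happened to be doing gets "credit"

    # Pure chance makes some behaviors look luckier than others, and that
    # spurious correlation is all a superstition needs to get started.
    print(credited)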

The world around us is a chaotic, entropic place. But it is rare for us to see it that way.

In Living with Complexity, Donald A. Norman offers a perspective on why we need complexity:

We seek rich, satisfying lives, and richness goes along with complexity. Our favorite songs, stories, games, and books are rich, satisfying, and complex. We need complexity even while we crave simplicity… Some complexity is desirable. When things are too simple, they are also viewed as dull and uneventful. Psychologists have demonstrated that people prefer a middle level of complexity: too simple and we are bored, too complex and we are confused. Moreover, the ideal level of complexity is a moving target, because the more expert we become at any subject, the more complexity we prefer. This holds true whether the subject is music or art, detective stories or historical novels, hobbies or movies.

As an example, Norman asks readers to contemplate the complexity we attach to tea and coffee. Most people in most cultures drink tea or coffee each day. Both are simple beverages, made from water and coffee beans or tea leaves. Yet we choose to attach complex rituals to them. Even those of us who would not consider ourselves to be connoisseurs have preferences. Offer to make coffee for a room full of people, and we can be sure that each person will want it made in a different way.

Coffee and tea start off as simple beans or leaves, which must be dried or roasted, ground and infused with water to produce the end result. In principle, it should be easy to make a cup of coffee or tea. Simply let the ground beans or tea leaves [steep] in hot water for a while, then separate the grounds and tea leaves from the brew and drink. But to the coffee or tea connoisseur, the quest for the perfect taste is long-standing. What beans? What tea leaves? What temperature water and for how long? And what is the proper ratio of water to leaves or coffee?

The quest for the perfect coffee or tea maker has been around as long as the drinks themselves. Tea ceremonies are particularly complex, sometimes requiring years of study to master the intricacies. For both tea and coffee, there has been a continuing battle between those who seek convenience and those who seek perfection.

Complexity, in this way, can enhance our enjoyment of a cup of tea or coffee. It’s one thing to throw some instant coffee in hot water. It’s different to select the perfect beans, grind them ourselves, calculate how much water is required, and use a fancy device. The question of whether this ritual makes the coffee taste better or not is irrelevant. The point is the elaborate surrounding ritual. Once again, we see complexity as superior.

“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.”

— Edsger W. Dijkstra

The Problem with Complexity

Imagine a person who sits down one day and plans an elaborate morning routine. Motivated by the routines of famous writers they have read about, they lay out their ideal morning. They decide they will wake up at 5 a.m., meditate for 15 minutes, drink a liter of lemon water while writing in a journal, read 50 pages, and then prepare coffee before planning the rest of their day.

The next day, they launch into this complex routine. They try to keep at it for a while. Maybe they succeed at first, but entropy soon sets in and the routine gets derailed. Sometimes they wake up late and do not have time to read. Their perceived ideal routine has many different moving parts. Their actual behavior ends up being different each day, depending on random factors.

Now imagine that this person is actually a famous writer. A film crew asks to follow them around on a “typical day.” On the day of filming, they get up at 7 a.m., write some ideas, make coffee, cook eggs, read a few news articles, and so on. This is not really a routine; it is just a chaotic morning based on reactive behavior. When the film is posted online, people look at the morning and imagine they are seeing a well-planned routine rather than the randomness of life.

This hypothetical scenario illustrates the issue with complexity: it is unsustainable without effort.

The more individual constituent parts a system has, the greater the chance of its breaking down. Charlie Munger once said that “Where you have complexity, by nature you can have fraud and mistakes.” Any complex system — be it a morning routine, a business, or a military campaign — is difficult to manage. Addressing one of the constituent parts inevitably affects another (see the Butterfly Effect). Unintended and unexpected consequences are likely to occur.

As Daniel Kahneman and Amos Tversky wrote in 1974 (in Judgment Under Uncertainty: Heuristics and Biases): “A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the likelihood of failure in each component is slight, the probability of an overall failure can be high if many components are involved.”
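
The arithmetic behind that quote: if each of n independent components fails with probability p, the chance that at least one fails is 1 - (1 - p)^n. A quick sketch with illustrative numbers:

    # If each of n independent components fails with probability p,
    # P(at least one failure) = 1 - (1 - p)**n.
    def overall_failure(p, n):
        return 1 - (1 - p) ** n

    # Components that are individually 99% reliable, in increasing numbers:
    for n in (10, 100, 500):
        print(n, f"{overall_failure(0.01, n):.0%}")  # ~10%, ~63%, ~99%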

This is why complexity is less common than we think. It is unsustainable without constant maintenance, self-organization, or adaptation. Chaos tends to disguise itself as complexity.

“Human beings are pattern-seeking animals. It’s part of our DNA. That’s why conspiracy theories and gods are so popular: we always look for the wider, bigger explanations for things.”

— Adrian McKinty, The Cold Cold Ground

Complexity Bias and Conspiracy Theories

A musician walks barefoot across a zebra-crossing on an album cover. People decide he died in a car crash and was replaced by a lookalike. A politician’s eyes look a bit odd in a blurry photograph. People conclude that he is a blood-sucking reptilian alien taking on a human form. A photograph shows an indistinct shape beneath the water of a Scottish lake. The area floods with tourists hoping to glimpse a surviving prehistoric creature. A new technology overwhelms people. So, they deduce that it is the product of a government mind-control program.

Conspiracy theories are the ultimate symptom of our desire to find complexity in the world. We don’t want to acknowledge that the world is entropic. Disasters happen and chaos is our natural state. The idea that hidden forces animate our lives is an appealing one. It seems rational. But as we know, we are all much less rational and logical than we think. Studies have shown that a high percentage of people believe in some sort of conspiracy. It’s not a fringe concept. According to research by Joseph E. Uscinski and Joseph M. Parent, about one-third of Americans believe the notion that Barack Obama’s birth certificate is fake. Similar numbers are convinced that 9/11 was an inside job orchestrated by George Bush. Beliefs such as these are present in all types of people, regardless of class, age, gender, race, socioeconomic status, occupation, or education level.

Conspiracy theories are invariably far more complex than reality. Although education does reduce the chances of someone’s believing in conspiracy theories, one in five Americans with postgraduate degrees still hold conspiratorial beliefs.

Uscinski and Parent found that, just as uncertainty led Skinner’s pigeons to see complexity where only randomness existed, a sense of losing control over the world around us increases the likelihood of our believing in conspiracy theories. Faced with natural disasters and political or economic instability, we are more likely to concoct elaborate explanations. In the face of horrific but chaotic events such as Hurricane Katrina, or the recent Grenfell Tower fire, many people decide that secret institutions are to blame.

Take the example of the “Paul McCartney is dead” conspiracy theory. Since the 1960s, a substantial number of people have believed that McCartney died in a car crash and was replaced by a lookalike, usually said to be a Scottish man named William Campbell. Of course, conspiracy theorists declare, The Beatles wanted their most loyal fans to know this, so they hid clues in songs and on album covers.

The beliefs surrounding the Abbey Road album are particularly illustrative of the desire to spot complexity in randomness and chaos. A police car is parked in the background — an homage to the officers who helped cover up the crash. A car’s license plate reads “LMW 28IF” — naturally, a reference to McCartney being 28 if he had lived (although he was 27) and to Linda McCartney (whom he had not met yet). Matters were further complicated once The Beatles heard about the theory and began to intentionally plant “clues” in their music. The song “I’m So Tired” does in fact feature backwards mumbling about McCartney’s supposed death. The 1960s were certainly a turbulent time, so is it any wonder that scores of people pored over album art or played records backwards, looking for evidence of a complex hidden conspiracy?

As Henry Louis Gates Jr. wrote, “Conspiracy theories are an irresistible labor-saving device in the face of complexity.”

Complexity Bias and Language

We have all, at some point, had a conversation with someone who speaks like philosopher Theodor Adorno wrote: using incessant jargon and technical terms even when simpler synonyms exist and would be perfectly appropriate. We have all heard people say things which we do not understand, but which we do not question for fear of sounding stupid.

Jargon is an example of how complexity bias affects our communication and language usage. When we use jargon, especially out of context, we are putting up unnecessary semantic barriers that reduce the chances of someone’s challenging or refuting us.

In an article for The Guardian, James Gingell describes his work translating scientific jargon into plain, understandable English:

It’s quite simple really. The first step is getting rid of the technical language. Whenever I start work on refining a rough-hewn chunk of raw science into something more pleasant I use David Dobbs’ (rather violent) aphorism as a guiding principle: “Hunt down jargon like a mercenary possessed, and kill it.” I eviscerate acronyms and euthanise decrepit Latin and Greek. I expunge the esoteric. I trim and clip and pare and hack and burn until only the barest, most easily understood elements remain.

[…]

Jargon…can be useful for people as a shortcut to communicating complex concepts. But it’s intrinsically limited: it only works when all parties involved know the code. That may be an obvious point but it’s worth emphasising — to communicate an idea to a broad, non-specialist audience, it doesn’t matter how good you are at embroidering your prose with evocative imagery and clever analogies, the jargon simply must go.

Gingell writes that even the most intelligent scientists struggle to differentiate between thinking (and speaking and writing) like a scientist, and thinking like a person with minimal scientific knowledge.

Unnecessarily complex language is not just annoying. It’s outright harmful. The use of jargon in areas such as politics and economics does real harm. People without the requisite knowledge to understand it feel alienated and removed from important conversations. It leads people to believe that they are not intelligent enough to understand politics, or not educated enough to comprehend economics. When a politician talks of fiscal charters or rolling four-quarter growth measurements in a public statement, they are sending a crystal clear message to large numbers of people whose lives will be shaped by their decisions: this is not about you.

Complexity bias is a serious issue in politics. For those in the public eye, complex language can be a means of minimizing the criticism of their actions. After all, it is hard to dispute something you don’t really understand. Gingell considers jargon to be a threat to democracy:

If we can’t fully comprehend the decisions that are made for us and about us by the government then how can we possibly revolt or react in an effective way? Yes, we have a responsibility to educate ourselves more on the big issues, but I also think it’s important that politicians and journalists meet us halfway.

[…]

Economics and economic decisions are more important than ever now, too. So we should implore our journalists and politicians to write and speak to us plainly. Our democracy depends on it.

In his essay “Politics and the English Language,” George Orwell wrote:

In our time, political speech and writing are largely the defence of the indefensible. … Thus, political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements.

An example of the problems with jargon is the Sokal affair. In 1996, Alan Sokal (a physics professor) submitted a fabricated scientific paper entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity.” The paper had absolutely no relation to reality and argued that quantum gravity is a social and linguistic construct. Even so, the paper was published in a respected journal. Sokal’s paper consisted of convoluted, essentially meaningless claims, such as this paragraph:

Secondly, the postmodern sciences deconstruct and transcend the Cartesian metaphysical distinctions between humankind and Nature, observer and observed, Subject and Object. Already quantum mechanics, earlier in this century, shattered the ingenious Newtonian faith in an objective, pre-linguistic world of material objects “out there”; no longer could we ask, as Heisenberg put it, whether “particles exist in space and time objectively.”

(If you’re wondering why no one called him out, or more specifically why we have a bias against calling out BS, check out pluralistic ignorance.)

Jargon does have its place. In specific contexts, it is absolutely vital. But in everyday communication, its use is a sign that we wish to appear complex and therefore more intelligent. Great thinkers throughout the ages have stressed the crucial importance of using simple language to convey complex ideas. Many of the ancient thinkers whose work we still reference today — people like Plato, Marcus Aurelius, Seneca, and Buddha — were known for their straightforward communication and their ability to convey great wisdom in a few words.

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.”

— Ernst F. Schumacher

How Can We Overcome Complexity Bias?

The most effective tool we have for overcoming complexity bias is Occam’s razor. Also known as the principle of parsimony, this is a problem-solving principle used to eliminate improbable options in a given situation. Occam’s razor suggests that the simplest solution or explanation is usually the correct one. When we don’t have enough empirical evidence to disprove a hypothesis, we should avoid making unfounded assumptions or adding unnecessary complexity when we make decisions or try to establish truths.

An important point to note is that Occam’s razor does not state that the simplest hypothesis is the correct one, but states rather that it is the best option before the establishment of empirical evidence. It is also useful in situations where empirical data is difficult or impossible to collect. While complexity bias leads us towards intricate explanations and concepts, Occam’s razor can help us to trim away assumptions and look for foundational concepts.

Returning to Skinner’s pigeons, had they known of Occam’s razor, they would have realized that there were two main possibilities:

  • Their behavior affects the food delivery.

Or:

  • Their behavior is irrelevant because the food delivery is random or on a timed schedule.

Using Occam’s razor, the head-bobbing, circle-turning pigeons would have realized that the first hypothesis involves numerous assumptions, including:

  • There is a particular behavior they must enact to receive food.
  • The delivery mechanism can somehow sense when they enact this behavior.
  • The required behavior is different from behaviors that would normally give them access to food.
  • The delivery mechanism is consistent.

And so on. Occam’s razor would dictate that because the second hypothesis is the simplest, involving the fewest assumptions, it is most likely the correct one.

Many geniuses are really good at eliminating unnecessary complexity. Einstein, for instance, was a master at sifting the essential from the non-essential. Steve Jobs was the same.

The Psychology of Risk and Reward

An excerpt from The Aspirational Investor: Taming the Markets to Achieve Your Life’s Goals that I think you’d enjoy.

Most of us have a healthy understanding of risk in the short term.

When crossing the street, for example, you would no doubt speed up to avoid an oncoming car that suddenly rounds the corner.

Humans are wired to survive: it’s a basic instinct that takes command almost instantly, enabling our brains to resolve ambiguity quickly so that we can take decisive action in the face of a threat.

The impulse to resolve ambiguity manifests itself in many ways and in many contexts, even those less fraught with danger. Glance at the (above) picture for no more than a couple of seconds. What do you see?

Some observers perceive the profile of a young woman with flowing hair, an elegant dress, and a bonnet. Others see the image of a woman stooped in old age with a wart on her large nose. Still others—in the gifted minority—are able to see both of the images simultaneously.

What is interesting about this illusion is that our brains instantly decide what image we are looking at, based on our first glance. If your initial glance was toward the vertical profile on the left-hand side, you were all but destined to see the image of the elegant young woman: it was just a matter of your brain interpreting every line in the picture according to the mental image that you had already formed, even though each line can be interpreted in two different ways. Conversely, if your first glance fell on the central dark horizontal line that emphasizes the mouth and chin, your brain quickly formed an image of the older woman.

Regardless of your interpretation, your brain wasn’t confused. It simply decided what the picture was and filled in the missing pieces. Your brain resolved ambiguity and extracted order from conflicting information.

What does this have to do with decision making? Every bit of information can be interpreted differently according to our perspective. Ashvin Chhabra directs us to investing. I suggest you reframe this in the context of decision making in general.

Every trade has a seller and a buyer: your state of mind is paramount. If you are in a risk-averse mental framework, then you are likely to interpret a further fall in stocks as additional confirmation of your sell bias. If instead your framework is positive, you will interpret the same event as a buying opportunity.

The challenge of investing is compounded by the fact that our brains, which excel at resolving ambiguity in the face of a threat, are less well equipped to navigate the long term intelligently. Since none of us can predict the future, successful investing requires planning and discipline.

Unfortunately, when reason is in apparent conflict with our instincts—about markets or a “hot stock,” for example—it is our instincts that typically prevail. Our “reptilian brain” wins out over our “rational brain,” as it so often does in other facets of our lives. And as we have seen, investors trade too frequently, and often at the wrong time.

One way our brains resolve conflicting information is to seek out safety in numbers. In the animal kingdom, this is called “moving with the herd,” and it serves a very important purpose: helping to ensure survival. Just as a buffalo will try to stay with the herd in order to minimize its individual vulnerability to predators, we tend to feel safer and more confident investing alongside equally bullish investors in a rising market, and we tend to sell when everyone around us is doing the same. Even the so-called smart money falls prey to a herd mentality: one study, aptly titled “Thy Neighbor’s Portfolio,” found that professional mutual fund managers were more likely to buy or sell a particular stock if other managers in the same city were also buying or selling.

This comfort is costly. The surge in buying activity and the resulting bullish sentiment is self-reinforcing, propelling markets to react even faster. That leads to overvaluation and the inevitable crash when sentiment reverses. As we shall see, such booms and busts are characteristic of all financial markets, regardless of size, location, or even the era in which they exist.

Even though the role of instinct and human emotions in driving speculative bubbles has been well documented in popular books, newspapers, and magazines for hundreds of years, these factors were virtually ignored in conventional financial and economic models until the 1970s.

This is especially surprising given that, in 1952, a young PhD student from the University of Chicago, Harry Markowitz, published two very important papers. The first, entitled “Portfolio Selection,” published in the Journal of Finance, led to the creation of what we call modern portfolio theory, together with the widespread adoption of its important ideas such as asset allocation and diversification. It earned Harry Markowitz a Nobel Prize in Economics.

The second paper, entitled “The Utility of Wealth” and published in the prestigious Journal of Political Economy, was about the propensity of people to hold insurance (safety) and to buy lottery tickets at the same time. It delved deeper into the psychological aspects of investing but was largely forgotten for decades.

The field of behavioral finance really came into its own through the pioneering work of two academic psychologists, Amos Tversky and Daniel Kahneman, who challenged conventional wisdom about how people make decisions involving risk. Their work garnered Kahneman the Nobel Prize in Economics in 2002. Behavioral finance and neuroeconomics are relatively new fields of study that seek to identify and understand human behavior and decision making with regard to choices involving trade-offs between risk and reward. Of particular interest are the human biases that prevent individuals from making fully rational financial decisions in the face of uncertainty.

As behavioral economists have documented, our propensity for herd behavior is just the tip of the iceberg. Kahneman and Tversky, for example, showed that people who were asked to choose between a certain loss and a gamble, in which they could either lose more money or break even, would tend to choose the gamble (that is, to double down to avoid the prospect of losses), a behavior the authors called “loss aversion.” Building on this work, Hersh Shefrin and Meir Statman, professors at Santa Clara University’s Leavey School of Business, have linked the propensity for loss aversion to investors’ tendency to hold losing investments too long and to sell winners too soon. They called this bias the disposition effect.
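
A toy calculation makes the structure of that choice clear (the dollar amounts and the 50/50 odds are made up for illustration; Kahneman and Tversky used several framings). The sure loss and the gamble have the same expected value, yet most subjects prefer the gamble:

    import random

    random.seed(0)
    sure_loss = -500

    def gamble():
        # 50/50: lose twice as much, or break even.
        return random.choice([-1000, 0])

    trials = 100_000
    average = sum(gamble() for _ in range(trials)) / trials
    print(sure_loss, round(average))  # both about -500 on average,
                                      # yet most subjects take the gamble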

The lengthy list of behaviorally driven market effects often converges in an investor’s tale of woe. Overconfidence causes investors to hold concentrated portfolios and to trade excessively, behaviors that can destroy wealth. The illusion of control causes investors to overestimate the probability of success and underestimate risk because of familiarity—for example, causing investors to hold too much employer stock in their 401(k) plans, resulting in under-diversification. Cognitive dissonance causes us to ignore evidence that is contrary to our opinions, leading to myopic investing behavior. And the representativeness bias leads investors to assess risk and return based on superficial characteristics—for example, by assuming that shares of companies that make products you like are good investments.

Several other key behavioral biases come into play in the realm of investing. Framing can cause investors to make a decision based on how the question is worded and the choices presented. Anchoring often leads investors to unconsciously create a reference point, say for securities prices, and then adjust decisions or expectations with respect to that anchor. This bias might impede your ability to sell a losing stock, for example, in the false hope that you can earn your money back. Similarly, the endowment bias might lead you to overvalue a stock that you own and thus hold on to the position too long. And regret aversion may lead you to avoid taking a tough action for fear that it will turn out badly. This can lead to decision paralysis in the wake of a market crash, even though, statistically, it is a good buying opportunity.

Behavioral finance has generated plenty of debate. Some observers have hailed the field as revolutionary; others bemoan the discipline’s seeming lack of a transcendent, unifying theory. This much is clear: behavioral finance treats biases as mistakes that, in academic parlance, prevent investors from thinking “rationally” and cause them to hold “suboptimal” portfolios.

But is that really true? In investing, as in life, the answer is more complex than it appears. Effective decision making requires us to balance our “reptilian brain,” which governs instinctive thinking, with our “rational brain,” which is responsible for strategic thinking. Instinct must integrate with experience.

Put another way, behavioral biases are nothing more than a series of complex trade-offs between risk and reward. When the stock market is taking off, for example, a failure to rebalance by selling winners is considered a mistake. The same goes for a failure to add to a position in a plummeting market. That’s because conventional finance theory assumes markets to be inherently stable, or “mean-reverting,” so most deviations from the historical rate of return are viewed as fluctuations that will revert to the mean, or self-correct, over time.

But what if a precipitous market drop is slicing into your peace of mind, affecting your sleep, your relationships, and your professional life? What if that assumption about markets reverting to the mean doesn’t hold true and you cannot afford to hold on for an extended period of time? In both cases, it might just be “rational” to sell and accept your losses precisely when investment theory says you should be buying. A concentrated bet might also make sense, if you possess the skill or knowledge to exploit an opportunity that others might not see, even if it flies in the face of conventional diversification principles.

Of course, the time to create decision rules for extreme market scenarios and concentrated bets is when you are building your investment strategy, not in the middle of a market crisis or at the moment a high-risk, high-reward opportunity from a former business partner lands on your desk and gives you an adrenaline jolt. A disciplined process for managing risk in relation to a clear set of goals will enable you to use the insights offered by behavioral finance to your advantage, rather than fall prey to the common pitfalls. This is one of the central insights of the Wealth Allocation Framework. But before we can put these insights to practical use, we need to understand the true nature of financial markets.

Leonard Mlodinow: The Three Laws of Probability

“These three laws, simple as they are, form much of the basis of probability theory. Properly applied, they can give us much insight into the workings of nature and the everyday world.”

***

In his book, The Drunkard’s Walk, Leonard Mlodinow outlines the three key “laws” of probability.

The first law of probability is the most basic of all. But before we get to that, let’s look at this question.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?

Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.

To Kahneman and Tversky’s surprise, 87 percent of the subjects in the study believed that the probability of Linda being a bank teller and active in the feminist movement was higher than the probability of Linda being a bank teller.

1. The probability that two events will both occur can never be greater than the probability that each will occur individually.

This is the conjunction fallacy.

Mlodinow explains:

Why not? Simple arithmetic: the chances that event A will occur = the chances that events A and B will occur + the chance that event A will occur and event B will not occur.
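
That identity is easy to check by brute force. In this minimal simulation, the 5 percent and 60 percent probabilities are invented for illustration and the two events are simulated as independent; the inequality holds regardless of how the events are related:

    import random

    random.seed(1)
    trials = 100_000
    count_a = count_a_and_b = 0
    for _ in range(trials):
        a = random.random() < 0.05  # event A: "bank teller" (invented probability)
        b = random.random() < 0.60  # event B: "active feminist" (invented)
        if a:
            count_a += 1
            if b:
                count_a_and_b += 1

    # P(A) = P(A and B) + P(A and not-B), so P(A and B) can never exceed P(A).
    print(count_a / trials, count_a_and_b / trials)  # ~0.05 vs ~0.03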

The interesting thing that Kahneman and Tversky discovered was that we don’t tend to make this mistake unless we know something about the subject.

“For example,” Mlodinow muses, “suppose Kahneman and Tversky had asked which of these statements seems most probable:”

Linda owns an International House of Pancakes franchise.
Linda had a sex-change operation and is now known as Larry.
Linda had a sex-change operation, is now known as Larry, and owns an International House of Pancakes franchise.

In this case, it’s unlikely you would choose the last option.

Via The Drunkard’s Walk:

If the details we are given fit our mental picture of something, then the more details in a scenario, the more real it seems and hence the more probable we consider it to be—even though any act of adding less-than-certain details to a conjecture makes the conjecture less probable.

Or as Kahneman and Tversky put it, “A good story is often less probable than a less satisfactory… .”

2. If two possible events, A and B, are independent, then the probability that both A and B will occur is equal to the product of their individual probabilities.

Via The Drunkard’s Walk:

Suppose a married person has on average roughly a 1 in 50 chance of getting divorced each year. On the other hand, a police officer has about a 1 in 5,000 chance each year of being killed on the job. What are the chances that a married police officer will be divorced and killed in the same year? According to the above principle, if those events were independent, the chances would be roughly 1⁄50 × 1⁄5,000, which equals 1⁄250,000. Of course the events are not independent; they are linked: once you die, darn it, you can no longer get divorced. And so the chance of that much bad luck is actually a little less than 1 in 250,000.

Why multiply rather than add? Suppose you make a pack of trading cards out of the pictures of those 100 guys you’ve met so far through your Internet dating service, those men who in their Web site photos often look like Tom Cruise but in person more often resemble Danny DeVito. Suppose also that on the back of each card you list certain data about the men, such as honest (yes or no) and attractive (yes or no). Finally, suppose that 1 in 10 of the prospective soul mates rates a yes in each case. How many in your pack of 100 will pass the test on both counts? Let’s take honest as the first trait (we could equally well have taken attractive). Since 1 in 10 cards lists a yes under honest, 10 of the 100 cards will qualify. Of those 10, how many are attractive? Again, 1 in 10, so now you are left with 1 card. The first 1 in 10 cuts the possibilities down by 1⁄10, and so does the next 1 in 10, making the result 1 in 100. That’s why you multiply. And if you have more requirements than just honest and attractive, you have to keep multiplying, so . . . well, good luck.
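
Both calculations in that passage reduce to a few lines of code (the card data below is randomly generated, so the counts will wobble around the expected 10 and 1):

    import random

    random.seed(7)
    # 100 prospective soul mates; each trait is an independent 1-in-10 "yes".
    cards = [
        {"honest": random.random() < 0.1, "attractive": random.random() < 0.1}
        for _ in range(100)
    ]

    honest = [c for c in cards if c["honest"]]     # first cut: about 10 remain
    both = [c for c in honest if c["attractive"]]  # second cut: about 1 remains
    print(len(honest), len(both))

    # The married-police-officer figure works the same way:
    print(1 / 50 * 1 / 5000)  # 1/250,000 = 4e-06, assuming independence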

And there are situations where probabilities should be added. That’s the next law.

“These occur when we want to know the chances of either one event or another occurring, as opposed to the earlier situation, in which we wanted to know the chance of one event and another event happening.”

3. If an event can have a number of different and distinct possible outcomes, A, B, C, and so on, then the probability that either A or B will occur is equal to the sum of the individual probabilities of A and B, and the sum of the probabilities of all the possible outcomes (A, B, C, and so on) is 1 (that is, 100 percent).

Via The Drunkard’s Walk:

When you want to know the chances that two independent events, A and B, will both occur, you multiply; if you want to know the chances that either of two mutually exclusive events, A or B, will occur, you add. Back to our airline: when should the gate attendant add the probabilities instead of multiplying them? Suppose she wants to know the chances that either both passengers or neither passenger will show up. In this case she should add the individual probabilities, which according to what we calculated above, would come to 55 percent.
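
The airline setup isn’t included in this excerpt, but the quoted 55 percent works out if each of two booked passengers independently shows up with probability 2/3 (an assumption inferred from the arithmetic, not stated here):

    # Show-up probability of 2/3 per passenger is assumed because it
    # reproduces the quoted 55 percent; the book's setup isn't in this excerpt.
    p_show = 2 / 3

    both_show = p_show * p_show           # independent events: multiply
    neither_shows = (1 - p_show) ** 2     # multiply again

    # "Both" and "neither" are mutually exclusive outcomes, so they add:
    print(both_show + neither_shows)      # 5/9 = 0.555..., about 55 percent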

These three simple laws form the basis of probability. “Properly applied,” Mlodinow writes, “they can give us much insight into the workings of nature and the everyday world.” We use them all the time, we just don’t use them properly.