Tag: Decision Making

The Best of Farnam Street 2019

We read for the same reasons we have conversations — to enrich our lives.

Reading helps us to think, feel, and reflect — not only upon ourselves and others but upon our ideas, and our relationship with the world. Reading deepens our understanding and helps us live consciously.

Of the 31 articles we published on FS this year, here are the top ten as measured by a combination of page views, responses, and feeling.

How Not to Be Stupid — Stupidity is overlooking or dismissing conspicuously crucial information. Here are seven situational factors that compromise your cognitive ability and result in increased odds of stupidity.

The Danger of Comparing Yourself to Others — When you stop comparing yourself to others and turn your focus inward, you start being better at what really matters: being you.

Yes, It’s All Your Fault: Active vs. Passive Mindsets — The hard truth is that most things in your life – good and bad – are your fault. The sooner you realize that, the better things will be. Here’s how to cultivate an active mindset and take control of your life.

Getting Ahead By Being Inefficient — Inefficient does not mean ineffective, and it is certainly not the same as lazy. You get things done – just not in the most efficient way possible. You’re a bit sloppy, and use more energy. But don’t feel bad about it. There is real value in not being the best.

How to Do Great Things — If luck is the cause of a person’s success, why are so many so lucky time and time again? Learn how to create your own luck by being intelligently prepared.

The Anatomy of a Great Decision — Making better decisions is one of the best skills we can develop. Good decisions save time, money, and stress. Here, we break down what makes a good decision and what we can do to improve our decision-making processes.

The Importance of Working With “A” Players — Building a team is more complicated than collecting talent. I once tried to solve a problem by putting a bunch of PhDs in a room. While comments like that sounded good and got me a lot of projects above my level, they were rarely effective at delivering actual results.

Compounding Knowledge — The filing cabinet of knowledge stored in Warren Buffett’s brain has helped make him the most successful investor of our time. But it takes much more than simply reading a lot. In this article, learn how to create your own “snowball effect” to compound what you know into opportunity.

An Investment Approach That Works — There are as many investment strategies as there are investment opportunities. Some are good; many are terrible. Here’s the one that I lean on the most when I’m looking for low risk and above average returns.

Resonance: How to Open Doors For Other People — Opening doors for other people is a critical concept to understand in life. Read this article to learn more about how to show people that you care.

More interesting things you might have missed

Thank you

As we touched on in the annual letter, it’s been a wonderful year at FS. We are looking forward to a wider variety of content on the blog in 2020 with a mix of deep dives and pieces exploring new subjects.

Thank you for an amazing 2019 and we look forward to learning new things with you in 2020.

Still curious? You can find the top five podcast episodes in 2019 here. Our Best of Farnam Street archive can be found here.

Elastic: Flexible Thinking in a Constantly Changing World

The less rigid we are in our thinking, the more open minded, creative and innovative we become. Here’s how to develop the power of an elastic mind.

***

Society is changing fast. Do we need to change how we think in order to survive?

In his book Elastic: Flexible Thinking in a Constantly Changing World, Leonard Mlodinow argues that the speed of technological and cultural development requires us to embrace types of thinking beyond the rational, logical style of analysis that tends to be emphasized in our society. He also offers good news: we already have the diverse cognitive capabilities necessary to respond effectively to novel challenges. He calls this “elastic thinking.”

Mlodinow explains elastic thinking as:

“the capacity to let go of comfortable ideas and become accustomed to ambiguity and contradiction; the capability to rise above conventional mind-sets and to reframe the questions we ask; the ability to abandon our ingrained assumptions and open ourselves to new paradigms; the propensity to rely on imagination as much as on logic and to generate and integrate a wide variety of ideas; and the willingness to experiment and be tolerant of failure.”

In simpler terms, elastic thinking is about letting your brain make connections without direction.

Let’s explore why elastic thinking is useful and how we can get better at it.

***

First of all, let’s throw out the metaphor that our brain is exactly like a computer. Sure, it can perform similar analytic functions. But our brains are capable of insight that is neither analytical nor programmable. Before we can embrace the other types of thinking our brains have an innate capacity for, we need to accept that analytic thinking—generally described as the application of systematic, logical analysis—has limitations.

As Mlodinow explains,

“Analytical thought is the form of reflection that has been most prized in modern society. Best suited to analyzing life’s more straightforward issues, it is the kind of thinking we focus on in our schools. We quantify our ability in it through IQ tests and college entrance examinations, and we seek it in our employees. But although analytical thinking is powerful, like scripted processing, it proceeds in a linear fashion…and often fails to meet the challenges of novelty and change.”

Although incredibly useful in a variety of daily situations, analytical thinking may not be best for solving problems whose answers require new ways of doing things.

For those types of problems, elastic thinking is most useful. This is the kind of thinking that enjoys wandering outside the box and generating ideas that fly in and out of left field. “Ours is a far more complex process than occurs in a computer, an insect brain, or even the brains of other mammals,” Mlodinow elaborates. “It allows us to face the world armed with a capability for an astonishing breadth of conceptual analysis.”

Think of it this way: when you come to a river and need to cross it, your analytic thinking comes in handy. It scans the environment to evaluate your options. Where might the water be lowest? Where is it moving the fastest, and thus where is the most dangerous crossing point? What kind of materials are on hand to assist in your crossing? How might others have solved this problem?

This particular river might be new for you, but the concept of crossing one likely isn’t, so you can easily rely on the logical steps of an analytical thinking process.

Elastic thinking is about generating new or novel ideas. When contemplating how best to cross a river, it was this kind of thinking that took us from log bridges to suspension bridges and from rowboats to steamboats. Elastic thinking involves us putting together many disparate ideas to form a new way of doing things.

We don’t need to abandon analytical thinking altogether. We just need to recognize that it has its limitations. If the way we are doing things doesn’t seem to be getting us the results we want, that might be a sign that more elastic thinking is called for.

Why Elasticity?

Mlodinow writes that “humans tend to be attracted to both novelty and change.”

Throughout our history we have willingly lined up and paid to be shocked and amazed. From magic shows and roller coasters to the circus and movies, our entertainment industries never seem to run out of audiences. Our propensity to engage with the new isn’t confined to entertainment, either. Think back to the great technological expositions around the turn of the twentieth century, which displayed the cutting edge of invention, offered visions of the future, and attracted millions of visitors. Or, going further back, think of the pilgrimages people made to see new architectural wonders, often churches and cathedrals, at a time when travel was difficult.

Mlodinow contends these types of actions display a quality “that makes us human…our ability and desire to adapt, to explore, and to generate new ideas.” Part of the reason that novelty attracts us is that we get a hit of feel-good dopamine when we are confronted with something new (and non-threatening). Thus, in terms of our evolutionary history, our tendency to explore and learn was rewarded with a boost of pleasure, which then led to more exploration.

He is careful to explain that exploring doesn’t necessarily mean signing up to go to Mars. We explore when we try something new. “When you socialize with strangers, you are exploring the possibility of new relationships.…When you go on a job interview even though you are employed, you are exploring a new career move.”

The relation of exploration to elasticity is that exploration requires elastic thinking. Exploration, by definition, is venturing into parts unknown, where we might be confronted with any manner of novel experiences. It’s hard to logically analyze something of which you have no knowledge or experience. It is this attraction to novelty that contributed to our ability to think elastically.

The Value of Emotions in Decision-Making

You can’t make a decision without tapping into your emotions.

Mlodinow suggests that “we tend to praise analytical thought as being objective, untinged by the distortions of human feelings, and therefore tending towards accuracy. But though many praise analytical thought for its detachment from emotion, one could also criticize it as not being inspired by emotion, as elastic thinking is.”

He tells the story of EVR, a man who had brain surgery to remove a benign tumor. After the surgery, EVR couldn’t make decisions. He passed IQ tests and tests about current affairs and ethics. But his life slowly fell apart because he couldn’t make a decision.

“In hindsight, the problem in diagnosing EVR was that all the exams were focused on his capability for analytical thinking. They revealed nothing wrong because his knowledge and logical reasoning skills were intact. His deficit would have been more apparent had they given him a test of elastic thinking—or watched him eat a brownie, or kicked him in the shin, or probed his emotions in some other manner.”

EVR had his orbitofrontal cortex removed—a big part of the brain’s reward system. According to Mlodinow, “Without it, EVR could not experience conscious pleasure. That left him with no motivation to make choices or to formulate and attempt to achieve goals. And that explains why decisions such as where to eat caused him problems: We make such decisions based on our goals, such as enjoying the food or the atmosphere, and he had no goals.”

Our ability to feel emotions is therefore a large and valuable component of our biological decision-making process. As Mlodinow explains, “Evolution endowed us with emotions like pleasure and fear in order that we may evaluate the positive or negative implications of circumstances and events.” Without emotion, we have no motivation to make decisions. What is new would have the same effect as what is old. This state of affairs would not be terribly useful for responding to change. Although we are attracted to novelty, not everything new is good. It is our emotional capabilities that can help us navigate whether the change is positive and determine how we can best deal with it.

Mlodinow contends that “emotions are an integral ingredient in our ability to face the challenges of our environment.” Our inclination toward novelty can be exploited, however, and today we must address the many drains on our emotions and thus our cognitive abilities. Chronic distractions that manipulate our emotional responses take energy to deal with, leaving us emotionally spent, with less capacity to process new experiences and information and an unclear picture of what might benefit us and what we should avoid.

Frozen Thoughts

Mlodinow explains that “frozen thinking” occurs when you have a fixed orientation that determines the way you frame or approach a problem.

Frozen thinking most likely occurs when you are an expert in your field. Mlodinow argues that “it is ironic that frozen thinking is a particular risk if you are an expert at something. When you are an expert, your deep knowledge is obviously of great value in facing the usual challenges of your profession, but your immersion in that body of conventional wisdom can impede you from creating or accepting new ideas, and hamper you when confronted with novelty and change.”

When you cling to the idea that the way things are is the way they always are going to be, you close off your brain from noticing new opportunities. In most jobs, this might translate into missed opportunities or an inability to find solutions under changing parameters. But there are some professions where the consequences can be significantly more dire. For instance, as Mlodinow discusses, if you’re a doctor, frozen thinking can lead to major errors in diagnosis.

Frozen thinking is incompatible with elastic thinking. So if you want to make sure you aren’t just regurgitating more of the same while the world evolves around you, augment your elastic thinking.

The ‘How’ of Elastic Thinking

Our brains are amazing. In order to tap into our innate elastic thinking abilities, we really just have to get out of our own way and stop trying to force a particular thinking process.

“The default network governs our interior mental life—the dialogue we have with ourselves, both consciously and subconsciously. Kicking into gear when we turn away from the barrage of sensory input produced by the outside world, it looks toward our inner selves. When that happens, the neural networks of our elastic thought can rummage around the huge database of knowledge and memories and feelings that is stored in the brain, combining concepts that we normally would not recognize. That’s why resting, daydreaming, and other quiet activities such as taking a walk can be powerful ways to generate ideas.”

Mlodinow emphasizes that elastic thinking will happen when we give ourselves quiet space to let the brain do its thing.

“The associative processes of elastic thinking do not thrive when the conscious mind is in a focused state. A relaxed mind explores novel ideas; an occupied mind searches for the most familiar ideas, which are usually the least interesting. Unfortunately, as our default networks are sidelined more and more, we have less unfocused time for our extended internal dialogue to proceed. As a result, we have diminished opportunity to string together those random associations that lead to new ideas and realizations.”

Here are some suggestions for how to develop elastic thinking:

  • Cultivate a “beginner’s mind” by questioning situations as if you have no experience in them.
  • Introduce discord by pursuing relationships and ideas that challenge your beliefs.
  • Recognize the value of diversity.
  • Generate lots of ideas and don’t be bothered that most of them will be bad.
  • Develop a positive mood.
  • Relax when you see yourself becoming overly analytical.

The main lesson is that fruitful elastic thinking doesn’t need to be directed. Like children and unstructured play, sometimes we have to give our brains the opportunity to just be. We also have to be willing to stop distracting ourselves all the time. Often it seems that we are afraid of our own thoughts, or we assume that to be quiet is to be bored, so we search for distractions that keep our brains occupied. To encourage elastic thinking in our society, we have to wean ourselves off the constant stimuli provided by screens.

Mlodinow explains that you can prime your brain for insights by cultivating the kind of mindset that generates them. Don’t force your thinking or apply an analytical approach to the situation. “The challenge of insight is the analogous issue of freeing yourself from narrow, conventional thinking.”

When it comes to developing and exploring the possibilities of elastic thinking, it is perhaps best to remember that, as Mlodinow writes, “the thought processes we use to create what are hailed as great masterpieces of art and science are not fundamentally different from those we use to create our failures.”

Externalities: Why We Can Never Do “One Thing”

No action exists in a vacuum. Every action creates ripples with consequences we can and can’t see. Here are the three types of externalities that can help guide our actions so they don’t come back to bite us.

***

An externality affects someone without them agreeing to it. As with unintended consequences, externalities can be positive or negative. Understanding the types of externalities and the impact they have in our lives can help us improve our decision making, and how we interact with the world.

Externalities provide useful mental models for understanding complex systems. They show us that systems don’t exist in isolation from other systems. Because externalities affect uninvolved third parties, they are a form of market failure: an inefficient allocation of resources.

We both create and are subject to externalities. Most are very minor but compound over time. They can inflict numerous second-order effects. Someone reclines their seat on an airplane. They get the benefit of comfort. The person behind bears the cost of discomfort by having less space. One family member leaves their dirty dishes in the sink. They get the benefit of using the plate. Someone else bears the cost of washing it later. We can’t expect to interact with any system without repercussions. Over time, even minor externalities can cause significant strain in our lives and relationships.

The First Law of Ecology

To understand externalities it is first useful to consider second-order consequences. In Filters Against Folly, Garrett Hardin describes what he considers to be the First Law of Ecology: We can never do one thing. Whenever we interact with a system, we need to ask, “And then what? What will the wider repercussions of our actions be?” There is bound to be at least one externality.

Hardin gives the example of the Prohibition Amendment in the U.S. In 1920, lawmakers banned the production and sale of alcoholic beverages throughout the entire country. This was in response to an extended campaign by those who believed alcohol was evil. It wasn’t enough to restrict its consumption—it needed to go.

The addition of 61 words to the American Constitution changed the social and legal landscape for over a decade. Policymakers presumably thought they could make the change and people would stop drinking. But Prohibition led to numerous externalities. Alcohol is an important part of many people’s lives. Few were willing to suddenly give it up without a fight. The demand was more than strong enough to ensure a black-market supply re-emerged.

Wealthy people stockpiled alcohol in their homes before the ban went into effect. Thousands of speakeasies and gin joints flourished. Walgreens grew from 20 stores to 500, in large part due to its sales of ‘medicinal’ whiskey. Former alcohol producers simply sold the ingredients for people to make their own. Gangsters like Al Capone made their fortunes smuggling, murdering their rivals in the process. Crime gangs undermined official institutions. Tax revenues plummeted. People lost their jobs. Prisons became overcrowded and bribery commonplace. Thousands died from crime and drinking unsafe homemade alcohol.

Policymakers did not fully ask, “And then what?” before legislating. Drinking did decrease during this time, on average by about half. But this fell far short of the total ban supporters had hoped for. The second-order consequences outweighed any benefits.

As economist Gregory Mankiw explains in Principles of Microeconomics,

In the presence of externalities, society’s interest in a market outcome extends beyond the well-being of buyers and sellers who participate in the market; it also includes the well-being of bystanders who are affected indirectly…. The market equilibrium is not efficient when there are externalities. That is, the equilibrium fails to maximize the total benefit to society as a whole.

Negative Externalities

Negative externalities can occur during the production or consumption of a service or good. Pollution is a useful example. If a factory pollutes nearby water supplies, it causes harm without incurring costs. The costs to society are high and are not reflected in the price of whatever the factory makes. Economists often view environmental damage as another factor in a production process. But even if pollution is taxed, the harmful effects don’t go away.
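A standard textbook-style sketch can make this concrete. The linear demand and cost curves and all the numbers below are illustrative assumptions, not anything from the article; the point is just that an external cost drives a wedge between the market outcome and the social optimum:

```python
# Toy linear market with a per-unit external cost (e.g., pollution).
# Demand: P = d - Q.  Private marginal cost: P = s + Q.
# All numbers here are made up for illustration.

def equilibrium_quantity(d, s, external_cost=0.0):
    """Quantity where demand meets marginal cost (private plus external)."""
    return (d - s - external_cost) / 2

market_q = equilibrium_quantity(100, 20)                     # ignores the harm
optimal_q = equilibrium_quantity(100, 20, external_cost=10)  # counts it

print(market_q, optimal_q)  # 40.0 35.0 -- the market overproduces
```

A per-unit tax equal to the external cost (a Pigouvian tax) shifts the private cost curve up and moves the market quantity to the social optimum, though, as the article notes, taxing pollution does not make its harms disappear.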

Transport and manufacturing release toxins into the environment, harming our health and altering our climate. The reality, though, is that these externalities are hard to see, and it is often difficult to trace them back to their root causes. There’s also the question of whether we are responsible for externalities or not.

Imagine you’re driving down the road. As you go by an apartment, the noise disturbs someone who didn’t agree to it. Your car emits air pollution, which affects everyone living nearby. Each of these small externalities will affect people you don’t see and who didn’t choose them. They won’t receive any compensation from you. Are you really responsible for the externalities you cause? If you’re not being outright careless or malicious, isn’t it just part of life? How much responsibility do we have as individuals, anyway?

Calling something a negative externality can be a convenient way of abdicating responsibility.

Positive Externalities

A positive externality confers an unexpected benefit on a third party. The third party doesn’t ask for it, and the producer receives no compensation for it.

Scientific research often leads to positive externalities. Research findings can have applications beyond their initial scope. The resulting information becomes part of our collective knowledge base. However, the researcher who makes a discovery cannot receive the full benefits. Nor do they necessarily feel entitled to them.

Blaise Pascal and Pierre de Fermat developed probability theory to solve a gambling dispute. Their work went on to inform numerous disciplines (like the field of calculus) and transform our understanding of the world. Probabilities are now a core part of how we think. Pascal and Fermat created a positive externality.
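The dispute Pascal and Fermat tackled is known as the “problem of points”: if a game is interrupted, how should the stakes be split? Their answer was to split in proportion to each player’s probability of winning had play continued. A minimal recursive sketch, assuming each round is a fair 50/50 toss:

```python
# Problem of points: split the stakes by each player's chance of winning
# the interrupted game. Each round is assumed to be a fair coin flip.

def first_player_win_prob(a, b):
    """Probability the first player wins, needing `a` more rounds
    while the opponent needs `b` more."""
    if a == 0:
        return 1.0  # first player has already won
    if b == 0:
        return 0.0  # opponent has already won
    # The next round goes either way with probability 1/2.
    return 0.5 * (first_player_win_prob(a - 1, b) + first_player_win_prob(a, b - 1))

# Classic case: first player needs 1 more round, opponent needs 2.
# The fair split gives the first player 3/4 of the stakes.
print(first_player_win_prob(1, 2))  # 0.75
```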

Someone who comes up with an equation cannot expect compensation each time it gets used. As a result, the incentives to invest the time and effort to discover new equations are reduced. Patents and copyright laws change this by allowing creators to protect and profit from their ideas for years before other people can freely use them. We all benefit, and researchers have an incentive to continue their work.

Network effects are an example of a positive externality. Silicon Valley understands this well. Each person who joins a network, like a marketplace app, increases its value for all other users. Those who own the network have an incentive to improve it to attract new users. Everyone benefits from being able to communicate with more people. While we might not join a new network intending to improve it for other people, that is what normally happens. (On the flip side, network effects can also produce negative externalities, as too many members can decrease the value of a network.)
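A toy model can make both directions of this externality concrete. The quadratic-benefit/cubic-congestion forms and the constants below are illustrative assumptions, not a claim about any real network:

```python
# Toy network-value curve: pairwise connections create benefit (a positive
# externality of each new member), while crowding creates a cost (a negative
# one). Functional forms and constants are made up for illustration.

def network_value(n, benefit=1.0, congestion=0.001):
    pairwise_links = n * (n - 1) / 2  # each pair of members can interact
    return benefit * pairwise_links - congestion * n ** 3

print(network_value(50))   # 1100.0 -- still growing
print(network_value(100))  # 3950.0
print(network_value(600))  # negative: congestion has swamped the benefit
```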

Positive externalities often lead to the “free rider” problem. When we enjoy something that we aren’t paying for, we tend not to value it. Not paying can remove the incentive to look after a resource and leads to a Tragedy of the Commons situation. As Aristotle put it, “For that which is common to the greatest number has the least care bestowed upon it.” A good portion of online content succumbs to the free rider problem. We enjoy it and yet we don’t pay for it. We expect it to be free and yet, if users weren’t willing to support sites like Farnam Street, they would likely fold, start publishing lower quality articles, or sell readers to advertisers who collect their data. The end result, as we see too frequently, is low-quality content funded by page-view advertising. (This is why we have a membership program. Members of our learning community create a positive externality for non-members by helping support the free content.)

Positional Externalities

Positional externalities are a form of second-order effect. They occur when one person’s choices change the context in which others’ choices are perceived or valued.

For example, consider what happens when a person decides to start staying at the office an hour late. Perhaps they want a promotion and think it will endear them to managers. Parkinson’s Law states that tasks expand to fit the time allocated to them. What this person would otherwise get done by 5pm, now takes until 6pm. Staying late becomes their norm. Their co-workers notice and start to also stay late. Before long, staying at the office until 6pm becomes the standard for everyone. Anyone who leaves at 5pm is perceived as lazy. Now that 6pm is the norm, everyone suffers. They are forced to work more without deriving any real benefits. It’s a lose-lose situation for everyone.

Someone we know once made an investment with a nearly unlimited return by gaming the system. He worked for an investment firm that valued employees according to a perception of how hard they worked and not necessarily by their results. Each Monday he brought in a series of sport coats and left them in the office. He paid the cleaning staff $20 a week to change the coat hanging on his chair and to turn on his computer. No matter what happened, it appeared he was always the first one into the office even though he often didn’t show up from a “client meeting” until 10. When it came to bonus time, he’d get an enormous return on that $20 investment.

Purchasing luxury goods can create positional externalities. Veblen goods are items we value because of their scarcity and high cost. Diamonds, Lamborghinis, tailor-made suits — owning them is a status symbol, and they lose their value if they become cheaper or if too many people have them. As Luca Lambertini puts it in The Economics of Vertically Differentiated Markets,

“The utility derived from consumption is a function of the quantity purchased relative to the average of the society or the reference group to whom the consumer compares.” In other words, a shiny new car seems more valuable if all your friends are driving battered old wrecks. If they have equally (or more) fancy cars, the value of yours drops. At some point, it seems worthless and it’s time to find a new one. In this way, the purchase of a Veblen good confers a positional externality on other people who own it too.

That utility can also be a matter of comparison. A person earning $40,000 a year while their friends earn $30,000 will be happier than one earning $60,000 when their friends earn $70,000. When someone’s salary increases, it raises the bar, giving others a new point of reference.
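The salary comparison above can be sketched with a toy utility function. The linear form and the weight on the relative term are assumptions chosen purely for illustration:

```python
# Toy relative-utility function: satisfaction depends on how your income
# compares with your reference group, not just on its absolute level.
# The linear form and the weight of 2 are illustrative assumptions.

def relative_utility(own_income, peer_average, relative_weight=2.0):
    return own_income + relative_weight * (own_income - peer_average)

ahead = relative_utility(40_000, peer_average=30_000)   # earns less, but leads
behind = relative_utility(60_000, peer_average=70_000)  # earns more, but trails

print(ahead, behind)  # 60000.0 40000.0 -- the lower earner is happier
```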

We can confer positional externalities on ourselves by changing our attitudes. Let’s say someone enjoys wine but is not a connoisseur. A $10 bottle and a $100 bottle make them equally happy. When they decide to go on a course and learn the subtleties and technicalities of fine wines, they develop an appreciation for the $100 wine and a distaste for the $10. They may no longer be able to enjoy a cheap drink because they raised their standards.

Conclusion

Externalities are everywhere. It’s easy to ignore the impact of our decisions—to recline an airplane seat, to stay late at the office, or drop litter. Eventually though, someone always ends up paying. Like the villagers in Hardin’s Tragedy of the Commons, who end up with no grass for their animals, we run the risk of ruining a good thing if we don’t take care of it. Keeping the three types of externalities in mind is a useful way to make decisions that won’t come back to bite you. Whenever we interact with a system, we should remember to ask Hardin’s question: and then what?

Defensive Decision Making: What IS Best vs. What LOOKS Best

“It wasn’t the best decision we could make,” said one of my old bosses, “but it was the most defensible.”

What she meant was that she wanted to choose option A but ended up choosing option B because it was the defensible default. She realized that if she chose option A and something went wrong, it would be hard to explain because it was outside of normal. On the other hand, if she chose option A and everything went right, she’d get virtually no upside. A good outcome was merely expected, but a bad outcome would have significant consequences for her. The decision she landed on wasn’t the one she would have made if she owned the entire company. Since she didn’t, she wanted to protect her downside. In asymmetrical organizations, defensive decisions like this one protect the person making the decision.

My friend and advertising legend Rory Sutherland calls defensive decisions the Heathrow Option. Americans might think of it as the IBM Option. There’s a story behind this:

A while ago, British Airways noticed that personal assistants were reluctant to book their bosses on flights from London City Airport to JFK. They almost always picked Heathrow, which was farther away and harder to get to. Rory believed this was because “flying from London City might be better on average,” but “because it was a non-standard option, if anything were to go wrong, you were much more likely to get it in the neck.”

Of course, if you book your boss to fly out of Heathrow—the default—and the flight is delayed, they’ll blame the airline and not you. But if you opted for the London City airport, they’d blame you.

At first glance, it might seem like defensive decision making is irrational. It’s actually perfectly rational when you consider the asymmetry involved. This asymmetry also offers insight into why cultures rarely change.

Some decisions place the decisionmakers in situations where outcomes offer little upside and massive downside. In these cases, it can seem like great outcomes carry a 1% upside, good outcomes are neutral, and poor outcomes carry at least 20% downside—if they don’t get you fired.
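Those rough figures make the expected-value arithmetic easy to sketch. The probabilities and payoffs below are illustrative assumptions, not numbers from the article:

```python
# Toy expected-value comparison of the "default" choice vs a non-standard one.
# Payoffs represent career consequences to the decision maker; all numbers
# are made up for illustration.

def expected_value(outcomes):
    """outcomes: iterable of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Default: good outcomes are merely expected, bad ones are blamed elsewhere.
default = [(0.9, 0.0), (0.1, 0.0)]

# Non-standard: a small credit if it works, a large hit if it doesn't.
non_standard = [(0.9, 0.01), (0.1, -0.20)]

print(expected_value(default))       # 0.0
print(expected_value(non_standard))  # about -0.011: the default is rational
```

Even with a 90% chance of success, the tiny upside cannot cover the large downside, which is why defensive decision making is perfectly rational for the individual even when it is costly for the organization.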

It’s easy to see why people opt for the default choice in these cases. If you do something that’s different—and thus hard to defend—and it works out, you’ve risked a lot for very little gain. If you do something different and it doesn’t work out, you might find yourself unemployed.

This asymmetry explains why your boss, who has nice rhetoric about challenging norms and thinking outside the box, is likely to continue with the status quo rather than change things. After all, why would they risk looking like a fool by doing something different? It’s much easier to protect themselves. Defaults give people a possible out, a way to avoid being held accountable for their decisions if things go wrong. You can distance yourself from your decision and perhaps be safe from the consequences of a poor outcome.

Doing the safe thing is not the same as doing the right thing. Often, the problem with the safe thing is that there is no growth, no innovation. It’s churning out more of the same. So while the default may seem better for your job security in the short term, in the long game there’s a cost. When you are unwilling to take risks, you stop recognizing opportunities. If you aren’t willing to put yourself out there for a 1% gain, how do you grow? Small upsides are far more common than large ones, and if you refuse every downside, no level of risk will ever feel acceptable. It’s not that choosing the default makes you a bad person. But a lifetime of opting for the default limits your opportunities and your potential.

And for anyone who owns a company, a staff full of default decision makers is a death knell. You get amazing results when people have the space to take risks and not be penalized for every downside.


The Decision Matrix: How to Prioritize What Matters

The decisions we spend the most time on are rarely the most important ones. Not all decisions need the same process. Sometimes, trying to impose the same process on all decisions leads to difficulty identifying which ones are most important, bogging us down and stressing us out.

I remember once struggling at the intelligence agency shortly after I received a promotion. I was being asked to make too many decisions. I had no way to sort through them to figure out which ones mattered, and which ones were inconsequential.

The situation built slowly over a period of weeks. My employees were scared to make decisions because their previous boss had hung them out to dry when things went wrong. My boss, a political high flyer, also liked to delegate down the riskiest decisions. As a result, I had more decisions to make than capacity to make them. I was working longer and longer to keep up with the volume of decisions. Worse, I followed the same process for all of them. I was focusing on the most urgent decisions at the cost of the most important ones.

It was clear to me that I wasn’t the right person to make all of the decisions. I needed a quick and flexible framework to categorize decisions into the ones I should be making and the ones I should be delegating. I figured most of the urgent decisions could be made by the team because they were easily reversible and not very consequential. In fact, they were only becoming urgent because the team wasn’t making the decisions in the first place. And because I was rushing through these decisions in an effort to put more time into the important decisions, I was making worse choices than the team would have.

As I was walking home one night, I came up with an idea that I used from the next day on, with pretty good success. I call it the Decision Matrix. It’s a decision-making version of the Eisenhower Matrix, which helps you distinguish between what’s important and what’s urgent. It’s so simple you can draw it on a napkin, and once you get it, you get it.

While it won’t make the decisions for you, it will help you quickly identify which decisions you should focus on.

The Decision Matrix

My strategy for triaging was simple. I separated decisions into four possibilities based on the type of decision I was making.

  1. Irreversible and inconsequential
  2. Irreversible and consequential
  3. Reversible and inconsequential
  4. Reversible and consequential

The great thing about the matrix is that it can help you quickly delegate decisions. You do have to do a bit of mental work before you start, such as defining and communicating what counts as consequential and reversible, as well as where the lines blur.
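The four quadrants above can be sketched as a tiny triage function. The categories come from the matrix itself; the routing labels ("delegate", "experiment", "focus") are my own illustrative shorthand for how the article applies each quadrant, not part of the original framework:

```python
# A minimal sketch of the Decision Matrix triage.

def triage(reversible: bool, consequential: bool) -> str:
    """Route a decision based on the two axes of the Decision Matrix."""
    if not consequential:
        # Both inconsequential quadrants: a training ground for the team.
        return "delegate"
    if reversible:
        # Reversible and consequential: run experiments, gather information.
        return "experiment"
    # Irreversible and consequential: the decisions to focus on yourself.
    return "focus"

print(triage(reversible=True, consequential=False))   # delegate
print(triage(reversible=False, consequential=True))   # focus
```

Note that reversibility only matters once a decision is consequential; that one conditional is the whole insight of the matrix.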

The Decision Matrix in Practice

This matrix became a powerful ally to help me manage time and make sure I wasn’t bogged down in decisions where I wasn’t the best person to decide.

I delegated both types of inconsequential decisions. Inconsequential decisions are the perfect training ground to develop judgment. This saved me a ton of time. Before this, people would come to me with decisions that were relatively easy to make, with fairly predictable results. The problem wasn’t making the decision—that took seconds in most cases. The problem was the 30 minutes the person spent presenting the decision to me. I saved at least 5–7 hours a week by implementing this one change.

I invested some of that time meeting with the people making these decisions once a week. I wanted to know what types of decisions they made, how they thought about them, and how the results were going. We tracked old decisions as well, so they could see their judgment improving (or not).

Consequential decisions are a different beast. Reversible and consequential decisions are my favorite. These decisions trick you into thinking they are one big important decision. In reality, reversible and consequential decisions are the perfect decisions to run experiments and gather information. The team or individual would decide the experiments we were going to run, the results that would indicate we were on the right path, and who would be responsible for execution. They’d then present their findings.

Consequential and irreversible decisions are the ones that you really need to focus on. All of the time I saved from using this matrix didn’t allow me to sip drinks on the beach. Rather, I invested it in the most important decisions, the ones I couldn’t justify delegating. I also had another rule that proved helpful: unless the decision needed to be made on the spot, as some operational decisions do, I would take a 30-minute walk first.

The key to successfully employing this in practice was to make sure everyone was on the same page about what “consequential” and “reversible” meant. At first, people checked with me, but later, as the terms became clear, they just started deciding.

While the total volume of decisions we made as a team didn’t change, how they were allocated within the team changed. I estimate that I was personally making 75% fewer decisions. But the real kicker was that the quality of all the decisions we made improved dramatically. People started feeling connected to their work again, productivity improved, and sick days (a proxy for how engaged people were) dropped.

Give the Decision Matrix a try—especially if you’re bogged down and fighting to manage your time, it may change your working life.

Still Curious? Read The Eisenhower Matrix: Master Productivity and Eliminate Noise next. 


Double Loop Learning: Download New Skills and Information into Your Brain

We’re taught single loop learning from the time we are in grade school, but there’s a better way. Double loop learning is the quickest and most efficient way to learn anything that you want to “stick.”

***

So, you’ve done the work necessary to have an opinion, learned the mental models, and considered how you make decisions. But how do you now implement these concepts and figure out which ones work best in your situation? How do you know what’s effective and what’s not? One solution to this dilemma is double loop learning.

We can think of double loop learning as learning based on Bayesian updating — the modification of goals, rules, or ideas in response to new evidence and experience. It might sound like another piece of corporate jargon, but double loop learning cultivates creativity and innovation for both organizations and individuals.
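The Bayesian-updating framing can be illustrated with a toy calculation: a belief about whether the current approach works is revised after each new piece of evidence. All the numbers here are illustrative assumptions, not anything from the text:

```python
def bayes_update(prior: float,
                 p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior via Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start fairly confident the current approach works...
belief = 0.8
# ...then observe three failures in a row. Each failure is assumed unlikely
# (20%) if the approach works, but likely (70%) if it doesn't.
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.2, p_evidence_if_false=0.7)

print(round(belief, 3))  # confidence drops sharply: time to revise the goal itself
```

Single loop learning keeps the prior fixed and only adjusts tactics; the double loop treats the goal itself as a hypothesis that the evidence is allowed to overturn.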

“Every reaction is a learning process; every significant experience alters your perspective.”

— Hunter S. Thompson

Single Loop Learning

The first time we aim for a goal, follow a rule, or make a decision, we are engaging in single loop learning. This is where many people get stuck and keep making the same mistakes. If we question our approaches and make honest self-assessments, we shift into double loop learning. It’s similar to the Orient stage in John Boyd’s OODA loop. In this stage, we assess our biases, question our mental models, and look for areas where we can improve. We collect data, seek feedback, and gauge our performance. In short, we can’t learn from experience without reflection. Only reflection allows us to distill the experience into something we can learn from.

In Teaching Smart People How to Learn, business theorist Chris Argyris compares single loop learning to a typical thermostat. It operates in a homeostatic loop, always seeking to return the room to the temperature at which the thermostat is set. A thermostat might keep the temperature steady, but it doesn’t learn. By contrast, double loop learning would entail the thermostat’s becoming more efficient over time. Is the room at the optimum temperature? What’s the humidity like today and would a lower temperature be more comfortable? The thermostat would then test each idea and repeat the process. (Sounds a lot like Nest.)
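Argyris’s thermostat analogy can be sketched in code. A single-loop thermostat only corrects toward a fixed setpoint; a double-loop version also questions the setpoint itself. The class names and the humidity rule are illustrative assumptions, not anything Argyris specified:

```python
# Single loop: correct the error against a fixed goal.
class SingleLoopThermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, room_temp: float) -> str:
        # The goal itself is never questioned.
        if room_temp < self.setpoint:
            return "heat"
        if room_temp > self.setpoint:
            return "cool"
        return "idle"

# Double loop: also revise the goal in light of new evidence.
class DoubleLoopThermostat(SingleLoopThermostat):
    def update_setpoint(self, humidity: float) -> None:
        # Illustrative rule: on humid days a slightly lower temperature
        # feels more comfortable, so the setpoint itself is adjusted.
        if humidity > 0.6:
            self.setpoint -= 1.0

    def act_with_reflection(self, room_temp: float, humidity: float) -> str:
        self.update_setpoint(humidity)   # question the goal (double loop)
        return self.act(room_temp)       # then correct toward it (single loop)
```

The single-loop device only ever asks “how do I hit the setpoint?”; the double-loop one first asks “is this the right setpoint?” and only then corrects toward it.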

Double Loop Learning

Double loop learning is part of action science — the study of how we act in difficult situations. Individuals and organizations need to learn if they want to succeed (or even survive). But few of us pay much attention to exactly how we learn and how we can optimize the process.

Even smart, well-educated people can struggle to learn from experience. We all know someone who’s been at the office for 20 years and claims to have 20 years of experience, but they really have one year repeated 20 times.

Not learning can actually make you worse off. The world is dynamic and always changing. If you’re standing still, then you won’t adapt. Forget moving ahead; you have to get better just to stay in the same relative spot, and not getting better means you’re falling behind.

Many of us are so focused on solving problems as they arise that we don’t take the time to reflect on them after we’ve dealt with them, and this omission dramatically limits our ability to learn from the experiences. Of course, we want to reflect, but we’re busy and we have more problems to solve — not to mention that reflecting on our idiocy is painful and we’re predisposed to avoid pain and protect our egos.

Reflection, however, is an example of an approach I call first-order negative, second-order positive. It’s got very visible short-term costs — it takes time and honest self-assessment about our shortcomings — but pays off in spades in the future. The problem is that the future is not visible today, so slowing down today to go faster at some future point seems like a bad idea to many. Plus with the payoff being so far in the future, it’s hard to connect to the reflection today.

The Learning Dilemma: How Success Becomes an Impediment

Argyris wrote that many skilled people excel at single loop learning. It’s what we learn in academic situations. But if we are accustomed only to success, double loop learning can ignite defensive behavior. Argyris found this to be the reason learning can be so difficult. It’s not because we aren’t competent, but because we resist learning out of a fear of seeming incompetent. Smart people aren’t used to failing, so they struggle to learn from their mistakes and often respond by blaming someone else. As Argyris put it, “their ability to learn shuts down precisely at the moment they need it the most.”

In the same way that a muscle strengthens at the point of failure, we learn best after dramatic errors.

The problem is that single loop processes can be self-fulfilling. Consider managers who assume their employees are inept. They deal with this by micromanaging and making every decision themselves. Their employees have no opportunity to learn, so they become discouraged. They don’t even try to make their own decisions. This is a self-perpetuating cycle. For double loop learning to happen, the managers would have to let go a little. Allow someone else to make minor decisions. Offer guidance instead of intervention. Leave room for mistakes. In the long run, everyone would benefit. The same applies to teachers who think their students are going to fail an exam. The teachers become condescending and assign simple work. When the exam rolls around, guess what? Many of the students do badly. The teachers think they were right, so the same thing happens the next semester.

Many of the leaders Argyris studied blamed any problems on “unclear goals, insensitive and unfair leaders, and stupid clients” rather than making useful assessments. Complaining might be cathartic, but it doesn’t let us learn. Argyris explained that this defensive reasoning happens even when we want to improve. Single loop learning just happens to be a way of minimizing effort. We would go mad if we had to rethink our response every time someone asked how we are, for example. So everyone develops their own “theory of action—a set of rules that individuals use to design and implement their own behavior as well as to understand the behavior of others.” Most of the time, we don’t even consider our theory of action. It’s only when asked to explain it that the divide between how we act and how we think we act becomes apparent. Identifying the gap between our espoused theory of action and what we are actually doing is the hard part.

The Key to Double Loop Learning: Push to the Point of Failure

The first step Argyris identified is to stop getting defensive. Justification gets us nowhere. Instead, he advocates collecting and analyzing relevant data. What conclusions can we draw from experience? How can we test them? What evidence do we need to prove a new idea is correct?

The next step is to change our mental models. Break apart paradigms. Question where conventions came from. Pivot and make reassessments if necessary.

Problem-solving isn’t a linear process. We can’t make one decision and then sit back and await success.

Argyris found that many professionals are skilled at teaching others, yet find it difficult to recognize the problems they themselves cause (see Galilean Relativity). It’s easy to focus on other people; it’s much harder to look inward and face complex challenges. Doing so brings up guilt, embarrassment, and defensiveness. As John Gray put it, “If there is anything unique about the human animal, it is that it has the ability to grow knowledge at an accelerating rate while being chronically incapable of learning from experience.”

When we repeat a single loop process, it becomes a habit. Each repetition requires less and less effort. We stop questioning or reconsidering it, especially if it does the job (or appears to). While habits are essential in many areas of our lives, they don’t serve us well if we want to keep improving. For that, we need to push the single loop to the point of failure, to strengthen how we act in the double loop. It’s a bit like the Feynman technique — we have to dismantle what we know to see how solid it truly is.

“Fail early and get it all over with. If you learn to deal with failure… you can have a worthwhile career. You learn to breathe again when you embrace failure as a part of life, not as the determining moment of life.”

— Rev. William L. Swig

One example is the typical five-day, 9-to-5 work week. Most organizations stick to it year after year. They don’t reconsider the efficacy of a schedule designed for Industrial Revolution factory workers. This is single loop learning. It’s just the way things are done, but not necessarily the smartest way to do things.

The decisions made early on in an organization have the greatest long-term impact. Changing them in the months, years, or even decades that follow becomes a non-option. How to structure the work week is one such initial decision that becomes invisible. As G.K. Chesterton put it, “The things we see every day are the things we never see at all.” Sure, a 9-to-5 schedule might not be causing any obvious problems. The organization might be perfectly successful. But that doesn’t mean things cannot improve. It’s the equivalent of a child continuing to crawl because it gets them around. Why try walking if crawling does the job? Why look for another option if the current one is working?

A growing number of organizations are realizing that conventional work weeks might not be the most effective way to structure work time. They are using double loop learning to test other structures. Some organizations are trying shorter work days or four-day work weeks or allowing people to set their own schedules. Managers then keep track of how the tested structures affect productivity and profits. Over time, it becomes apparent whether the new schedule is better than the old one.

37Signals is one company using double loop learning to restructure their work week. CEO Jason Fried began experimenting a few years ago. He tried out a four-day, 32-hour work week. He gave employees the whole of June off to explore new ideas. He cut back on meetings and created quiet spaces for focused work. Rather than following conventions, 37Signals became a laboratory looking for ways of improving. Over time, what worked and what didn’t became obvious.

Double loop learning is about data-backed experimentation, not aimless tinkering. If a new idea doesn’t work, it’s time to try something else.

In an op-ed for The New York Times, Camille Sweeney and Josh Gosfield give the example of David Chang. Double loop learning turned his failing noodle bar into an award-winning empire.

After apprenticing as a cook in Japan, Mr. Chang started his own restaurant. Yet his early efforts were ineffective. He found himself overworked and struggling to make money. He knew his cooking was excellent, so how could he make it profitable? Many people would have quit or continued making irrelevant tweaks until the whole endeavor failed. Instead, Mr. Chang shifted from single to double loop learning. A process of making honest self-assessments began. One of his foundational beliefs was that the restaurant should serve only noodles, but he decided to change the menu to reflect his skills. In time, it paid off; “the crowds came, rave reviews piled up, awards followed and unimaginable opportunities presented themselves.” This is what double loop learning looks like in action: questioning everything and starting from scratch if necessary.

Josh Waitzkin’s approach (as explained in The Art of Learning) is similar. After reaching the heights of competitive chess, Waitzkin turned his focus to martial arts. He began with tai chi chuan. Martial arts and chess are, on the surface, completely different, but Waitzkin used double loop learning for both. He progressed quickly because he was willing to lose matches if doing so meant he could learn. He noticed that other martial arts students had a tendency to repeat their mistakes, letting fruitless habits become ingrained. Like the managers Argyris worked with, students grew defensive when challenged. They wanted to be right, even if it prevented their learning. In contrast, Waitzkin viewed practice as an experiment. Each session was an opportunity to test his beliefs. He mastered several martial arts, earning a black belt in jujitsu and winning a world championship in tai ji tui shou.

Argyris found that organizations learn best when people know how to communicate. (No surprise there.) Leaders need to listen actively and open up exploratory dialogues so that problematic assumptions and conventions can be revealed. Argyris identified some key questions to consider.

  • What is the current theory in use?
  • How does it differ from proposed strategies and goals?
  • What unspoken rules are being followed, and are they detrimental?
  • What could change, and how?
  • Forget the details; what’s the bigger picture?

Meaningful learning doesn’t happen without focused effort. Double loop learning is the key to turning experience into improvements, information into action, and conversations into progress.