A Primer on Algorithms and Bias

The growing influence of algorithms on our lives means we owe it to ourselves to better understand what they are and how they work. Understanding how the data we use to inform algorithms influences the results they give can help us avoid biases and make better decisions.

***

Algorithms are everywhere: driving our cars, designing our social media feeds, dictating which mixer we end up buying on Amazon, diagnosing diseases, and much more.

Two recent books explore algorithms and the data behind them. In Hello World: Being Human in the Age of Algorithms, mathematician Hannah Fry shows us the potential and the limitations of algorithms. And Invisible Women: Data Bias in a World Designed for Men by writer, broadcaster, and feminist activist Caroline Criado Perez demonstrates how we need to be much more conscientious of the quality of the data we feed into them.

Humans or algorithms?

First, what is an algorithm? Explanations of algorithms can be complex. Fry explains that at their core, they are defined as step-by-step procedures for solving a problem or achieving a particular end. We tend to use the term to refer to mathematical operations that crunch data to make decisions.
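To make the “step-by-step procedure” idea concrete, here is a minimal sketch in Python (our illustration, not an example from Fry’s book); the function name and the sample numbers are invented for the purpose. It finds the largest number in a list by following a fixed recipe:

    def find_largest(numbers):
        # Step 1: assume the first number is the largest seen so far.
        largest = numbers[0]
        # Step 2: examine each remaining number in turn.
        for n in numbers[1:]:
            # Step 3: if it beats the current best, remember it instead.
            if n > largest:
                largest = n
        # Step 4: report the result.
        return largest

    print(find_largest([12, 7, 31, 5, 19]))  # prints 31

The decision-making algorithms discussed below are far more elaborate, but they share this basic shape: a fixed procedure applied to whatever data they are given.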

When it comes to decision-making, we don’t necessarily have to choose between doing it ourselves and relying wholly on algorithms. The best outcome may be a thoughtful combination of the two.

We all know that in certain contexts, humans are not the best decision-makers. For example, when we are tired, or when we already have a desired outcome in mind, we may ignore relevant information. In Thinking, Fast and Slow, Daniel Kahneman gave multiple examples from his research with Amos Tversky that demonstrated we are heavily influenced by cognitive biases such as availability and anchoring when making certain types of decisions. It’s natural, then, that we would want to employ algorithms that aren’t vulnerable to the same tendencies. In fact, their main appeal for use in decision-making is that they can override our irrationalities.

Algorithms, however, aren’t without their flaws. One of the obvious ones is that because algorithms are written by humans, we often code our biases right into them. Criado Perez offers many examples of algorithmic bias.

For example, an online platform designed to help companies find computer programmers looked at activity such as sharing and developing code in online communities, as well as visits to Japanese manga (comics) sites. People who visited certain sites frequently received higher scores, making them more visible to recruiters.

However, Criado Perez presents the analysis of this recruiting algorithm by Cathy O’Neil, scientist and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, who points out that “women, who do 75% of the world’s unpaid care work, may not have the spare leisure time to spend hours chatting about manga online . . . and if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of women in the industry will probably avoid it.”

Criado Perez postulates that the authors of the recruiting algorithm didn’t intend to encode a bias that discriminates against women. But, she says, “if you aren’t aware of how those biases operate, if you aren’t collecting data and taking a little time to produce evidence-based processes, you will continue to blindly perpetuate old injustices.”

Fry also covers algorithmic bias and asserts that “wherever you look, in whatever sphere you examine, if you delve deep enough into any system at all, you’ll find some kind of bias.” We aren’t perfect—and we shouldn’t expect our algorithms to be perfect, either.

In order to have a conversation about the value of an algorithm versus a human in any decision-making context, we need to understand, as Fry explains, that “algorithms require a clear, unambiguous idea of exactly what we want them to achieve and a solid understanding of the human failings they are replacing.”

Garbage in, garbage out

No algorithm is going to be successful if the data it uses is junk. And there’s a lot of junk data in the world. This is far from a new problem: Criado Perez argues that “most of recorded human history is one big data gap.” And that has a serious negative impact on the value we are getting from our algorithms.

Criado Perez explains the situation this way: We live in “a world [that is] increasingly reliant on and in thrall to data. Big data. Which in turn is panned for Big Truths by Big Algorithms, using Big Computers. But when your data is corrupted by big silences, the truths you get are half-truths, at best.”

A common human bias is one regarding the universality of our own experience. We tend to assume that what is true for us is generally true across the population. We have a hard enough time considering how things may be different for our neighbors, let alone for other genders or races. It becomes a serious problem when we gather data about one subset of the population and mistakenly assume that it represents all of the population.

For example, Criado Perez examines the data gap in relation to incorrect information being used to inform decisions about safety and women’s bodies. From personal protective equipment like bulletproof vests that don’t fit properly, and thus increase the chances of the women wearing them getting killed, to levels of toxin exposure that are unsafe for women’s bodies, she makes the case that without representative data, we can’t get good outputs from our algorithms. She writes that “we continue to rely on data from studies done on men as if they apply to women. Specifically, Caucasian men aged twenty-five to thirty, who weigh 70 kg. This is ‘Reference Man’ and his superpower is being able to represent humanity as a whole. Of course, he does not.” Her book covers a wide variety of disciplines and situations in which the gender gap in data leads to worse outcomes for women.

The limits of what we can do

Although there is a lot we can do better when it comes to designing algorithms and collecting the data sets that feed them, it’s also important to consider their limits.

We need to accept that algorithms can’t solve all problems, and there are limits to their functionality. In Hello World, Fry devotes a chapter to the use of algorithms in justice, specifically algorithms designed to give judges information about the likelihood of a defendant committing further crimes. Our first impulse is to say, “Let’s not rely on bias here. Let’s not have someone’s skin color or gender be a key factor for the algorithm.” After all, we can employ that kind of bias just fine ourselves. But simply writing bias out of an algorithm is not as easy as wishing it so. Fry explains that “unless the fraction of people who commit crimes is the same in every group of defendants, it is mathematically impossible to create a test which is equally accurate at predicting across the board and makes false positive and false negative mistakes at the same rate for every group of defendants.”
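A small numeric sketch helps show why. The numbers below are hypothetical, not Fry’s, and the function is ours: two groups of defendants differ only in their underlying reoffending rate, while the risk tool flags them with identical true positive and false positive rates. Even so, the meaning of a “high risk” flag (the fraction of flagged people who actually go on to reoffend) comes out different for each group:

    def flagged_precision(group_size, base_rate, true_positive_rate, false_positive_rate):
        # Of everyone flagged as high risk, what fraction actually reoffends?
        reoffenders = group_size * base_rate
        non_reoffenders = group_size - reoffenders
        correctly_flagged = true_positive_rate * reoffenders
        wrongly_flagged = false_positive_rate * non_reoffenders
        return correctly_flagged / (correctly_flagged + wrongly_flagged)

    # Identical error rates, different base rates (hypothetical numbers).
    print(flagged_precision(1000, base_rate=0.5, true_positive_rate=0.8, false_positive_rate=0.2))  # 0.8
    print(flagged_precision(1000, base_rate=0.2, true_positive_rate=0.8, false_positive_rate=0.2))  # 0.5

Equalize the error rates and the flag means different things for different groups; equalize what the flag means and the error rates must diverge. That is the mathematical bind Fry describes.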

Fry comes back to such limits frequently throughout her book, exploring them in various disciplines. She demonstrates to the reader that “there are boundaries to the reach of algorithms. Limits to what can be quantified.” Perhaps a better understanding of those limits is needed to inform our discussions of where we want to use algorithms.

There are, however, other limits that we can do something about. Both authors make the case for more education about algorithms and their input data; lack of understanding shouldn’t hold us back. Algorithms that have a significant impact on our lives, in particular, need to be open to scrutiny and analysis. If an algorithm is going to put you in jail or affect your ability to get a mortgage, you ought to have access to it.

Most algorithm writers and the companies they work for wave the “proprietary” flag and refuse to open themselves up to public scrutiny. Many algorithms are a black box—we don’t actually know how they reach the conclusions they do. But Fry says that shouldn’t deter us. Pursuing laws (such as the data access and protection rights being instituted in the European Union) and structures (such as an algorithm-evaluating body playing a role similar to the one the U.S. Food and Drug Administration plays in evaluating whether pharmaceuticals can be made available to the U.S. market) will help us decide as a society what we want and need our algorithms to do.

Where do we go from here?

Algorithms aren’t going away, so it’s best to acquire the knowledge needed to figure out how they can help us create the world we want.

Fry suggests that one way to approach algorithms is to “imagine that we designed them to support humans in their decisions, rather than instruct them.” She envisions a world where “the algorithm and the human work together in partnership, exploiting each other’s strengths and embracing each other’s flaws.”

Part of getting to a world where algorithms provide great benefit is to remember how diverse our world really is and to make sure we get data that reflects the realities of that diversity. We can either actively change the algorithm or change the data set. And if we do the latter, we need to make sure we aren’t feeding our algorithms data that, for example, excludes half the population. As Criado Perez writes, “when we exclude half of humanity from the production of knowledge, we lose out on potentially transformative insights.”

Given how complex the world of algorithms is, we need all the amazing insights we can get. Algorithms themselves perhaps offer the best hope, because they have the inherent flexibility to improve as we do.

Fry gives this explanation: “There’s nothing inherent in [these] algorithms that means they have to repeat the biases of the past. It all comes down to the data you give them. We can choose to be ‘crass empiricists’ (as Richard Berk put it) and follow the numbers that are already there, or we can decide that the status quo is unfair and tweak the numbers accordingly.”

We can get excited about the possibilities that algorithms offer us and use them to create a world that is better for everyone.

Why We Focus on Trivial Things: The Bikeshed Effect

Bikeshedding is a metaphor to illustrate the strange tendency we have to spend excessive time on trivial matters, often glossing over important ones. Here’s why we do it, and how to stop.

***

How can we stop wasting time on unimportant details? From meetings at work that drag on forever without achieving anything to weeks-long email chains that don’t solve the problem at hand, we seem to spend an inordinate amount of time on the inconsequential. Then, when an important decision needs to be made, we hardly have any time to devote to it.

To answer this question, we first have to recognize why we get bogged down in the trivial. Then we can look at strategies for changing the dynamics of our discussions so that we generate useful input and leave enough time to consider it.

The Law of Triviality

You’ve likely heard of Parkinson’s Law, which states that tasks expand to fill the amount of time allocated to them. But you might not have heard of the lesser-known Parkinson’s Law of Triviality, also coined by British naval historian and author Cyril Northcote Parkinson in the 1950s.

The Law of Triviality states that the amount of time spent discussing an issue in an organization is inversely correlated with its actual importance in the scheme of things. Major, complex issues get the least discussion, while simple, minor ones get the most.

Parkinson’s Law of Triviality is also known as “bike-shedding,” after the story Parkinson uses to illustrate it. He asks readers to imagine a financial committee meeting to discuss a three-point agenda. The points are as follows:

  1. A proposal for a £10 million nuclear power plant
  2. A proposal for a £350 bike shed
  3. A proposal for a £21 annual coffee budget

What happens? The committee ends up running through the nuclear power plant proposal in little time. It’s too advanced for anyone to really dig into the details, and most of the members don’t know much about the topic in the first place. One member who does is unsure how to explain it to the others. Another member proposes an alternative design, but it seems like such a huge undertaking that the rest of the committee declines to consider it.

The discussion soon moves to the bike shed. Here, the committee members feel much more comfortable voicing their opinions. They all know what a bike shed is and what it looks like. Several members begin an animated debate over the best possible material for the roof, weighing out options that might enable modest savings. They discuss the bike shed for far longer than the power plant.

At last, the committee moves on to item three: the coffee budget. Suddenly, everyone’s an expert. They all know about coffee and have a strong sense of its cost and value. Before anyone realizes what is happening, they spend longer discussing the £21 coffee budget than the power plant and the bike shed combined! In the end, the committee runs out of time and decides to meet again to complete their analysis. Everyone walks away feeling satisfied, having contributed to the conversation.

Why this happens

Bike-shedding happens because the simpler a topic is, the more people will have an opinion on it and thus more to say about it. When something is outside of our circle of competence, like a nuclear power plant, we don’t even try to articulate an opinion.

But when something is just about comprehensible to us, even if we don’t have anything of genuine value to add, we feel compelled to say something, lest we look stupid. What idiot doesn’t have anything to say about a bike shed? Everyone wants to show that they know about the topic at hand and have something to contribute.

With any issue, we shouldn’t accord equal importance to every opinion that gets offered. We should emphasize the input from those who have done the work to have an opinion. And when we do decide to contribute, we should put our energy into the areas where we have something valuable to add that will improve the outcome of the decision.

Strategies for avoiding bike-shedding

The main thing you can do to avoid bike-shedding is to give your meeting a clear purpose. In The Art of Gathering: How We Meet and Why It Matters, Priya Parker, who has decades of experience designing high-stakes gatherings, says that any successful gathering (including a business meeting) needs to have a focused and particular purpose. “Specificity,” she says, “is a crucial ingredient.”

Why is having a clear purpose so critical? Because you use it as the lens to filter all other decisions about your meeting, including who to have in the room.

With that in mind, we can see that it’s probably not a great idea to discuss building a nuclear power plant and a bike shed in the same meeting. There’s not enough specificity there.

The key is to recognize that not all of the available input on an issue needs to be considered; the most informed opinions are the most relevant. This is one reason why big meetings with lots of people present, most of whom don’t need to be there, are such a waste of time in organizations. Everyone wants to participate, but not everyone has anything meaningful to contribute.

When it comes to choosing your list of invitees, Parker writes, “if the purpose of your meeting is to make a decision, you may want to consider having fewer cooks in the kitchen.” If you don’t want bike-shedding to occur, avoid inviting contributions from those who are unlikely to have relevant knowledge and experience. Getting the result you want—a thoughtful, educated discussion about that power plant—depends on having the right people in the room.

It also helps to have a designated individual in charge of making the final judgment. When we make decisions by committee with no one in charge, reaching a consensus can be almost impossible. The discussion drags on and on. The individual can decide in advance how much importance to accord to the issue (for instance, by estimating how much its success or failure could help or harm the company’s bottom line). They can set a time limit for the discussion to create urgency. And they can end the meeting by verifying that it has indeed achieved its purpose.

Any issue that invites a lot of discussion from different people might not be the most important one at hand. Avoid descending into unproductive triviality by setting clear goals for your meeting and getting the best people to the table for a productive, constructive discussion.

Preserving Optionality: Preparing for the Unknown

We’re often advised to excel at one thing. But as the future gets harder to predict, preserving optionality allows us to pivot when the road ahead crumbles.

***

How do we prepare for a world that often changes drastically and rapidly? We can preserve our optionality.

We don’t often get the advice to keep our options open. Instead, we’re told to specialize by investing huge hours in our passion so we can be successful in a niche.

The problem is, it’s bad advice. We live in a world that’s constantly changing, and if we can’t respond effectively to those changes, we become redundant, frustrated, and useless.

Instead of focusing on becoming great at one thing, there is another, counterintuitive strategy that will get us further: preserving optionality. The more options we have, the better suited we are to deal with unpredictability and uncertainty. We can stay calm when others panic because we have choices.

Optionality refers to the act of keeping as many options open as possible. Preserving optionality means avoiding limiting choices or dependencies. It means staying open to opportunities and always having a backup plan.

An option is usually defined as something we have the freedom to choose. That’s a fairly broad definition. In the context of a strategy, it must also have a limited downside and an open-ended upside. Betting in a casino, for example, is not an option in this sense: both the losses and the gains are capped in advance. What about betting on a new tech startup? That is an option: the upside is theoretically unlimited, while the losses are limited to the amount you invest.
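A quick sketch with invented numbers illustrates the asymmetry. A casino-style bet fixes both the best and worst case in advance; an option-like stake caps the loss at what you put in while leaving the gain open-ended:

    # Hypothetical figures, purely for illustration.
    stake = 10_000                                     # the most you can lose
    possible_outcomes = [0, 5_000, 50_000, 1_000_000]  # outcomes can keep growing
    for outcome in possible_outcomes:
        payoff = outcome - stake                       # downside capped at -stake
        print(f"outcome {outcome:>9,} -> payoff {payoff:>9,}")
    # Worst case: -10,000 (the stake). Best case: unbounded as outcomes grow.

That payoff shape, a bounded floor with an unbounded ceiling, is what distinguishes an option from an ordinary gamble.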

Options present themselves all the time, but life-altering ones often come up during times of great change. These options are the ones we have the hardest time capitalizing on. If we’ve specialized too much, change is a threat, not an opportunity. Thus, if we aren’t certain where the opportunities are going to be (and we never are), then we need to make choices to keep our options open.

Baron Rothschild is often quoted as having said that “the time to buy is when there’s blood in the streets.” That’s a misquote, however. What he actually said was “buy when there’s blood in the streets, even if the blood is your own.” Rothschild recognized that those are the times when new options emerge. That’s when many investors make their fortunes and when entrepreneurs innovate. Rothschild saw opportunity in chaos. He made a fortune buying during the panic after the Battle of Waterloo.

When we occupy a small niche, we sacrifice optionality. That means less freedom and greater dependency. No one can predict the future—not even experts—so isn’t it a good idea to have as many avenues open as possible?

The coach’s dilemma: strength vs. optionality

In Simple Rules: How to Thrive in a Complex World, Kathleen Eisenhardt and Donald Sull describe the experience of strength coach Shannon Turley. For the uninitiated, the role of a strength coach is to help athletes stay healthy and perform better, rather than teach specific skills.

Turley began his career working at Virginia Polytechnic Institute and State University. When he started, the football players there followed a strength program based on weightlifting alone. Athletes wore t-shirts listing their personal records and competed to outdo each other. The mantra was: get stronger by lifting more weight.

But Turley soon realized that this program was not effective because it left the athletes with limited optionality. He found no correlation between weightlifting prowess and competitive performance. Being able to bench press a lot of weight didn’t serve them well on the football field. As he put it, “In football if you’re on your back, you’ve already lost.” Keeping a record of what he saw, he began looking for different options for the athletes.

After gaining experience coaching in several sports, Turley realized that strength was not the most important factor for athletic success. What mattered for any type of athlete was staying free of injuries and maintaining good nutrition. Why? Because that gave athletes greater optionality.

An uninjured, healthy player could stay in each game for longer and miss fewer training sessions. It also meant less chance of requiring surgery, which many of his students faced, or of being forced to retire from competitive sports at a young age.

Turley began coaching football players at Stanford University. He implemented a program focusing on proper nutrition and flexibility exercises such as yoga—not weightlifting. He also focused on healing existing injuries that restricted athletes’ performance. One football player he worked with had ongoing back problems, so Turley designed a regimen to improve that issue. It worked: the athlete never missed a game and went on to play in the NFL. Turley’s approach served to preserve optionality for his players. Even the best athlete will lose many competitions. So the more often an athlete is healthy enough to participate in games, the greater the chances of those crucial successes. Turley’s experience illustrates the trade-offs between particular physical abilities and optionality.

Over-specializing in one area is highly limiting, especially if it requires extensive upkeep. Like a football player, we can retain optionality by avoiding overtly damaging risks and ensuring we stay in the game for as long as possible—whatever that game is. That might mean lifting less metaphorical weight at any one time, while also working to keep ourselves flexible.

The tyranny of small decisions

Few people would deliberately lock themselves into an undesirable situation. Yet we often make small, rational decisions that end up removing options over time. This is the tyranny of small decisions, a concept economist Alfred Kahn identified in a 1966 article that opens with a provocative thought experiment:

Suppose, 75 years ago, some being from outer space had made us this proposition: “I know how to make a vehicle that could in effect put 200 horses at the disposal of each of you. It would permit you to travel about, alone or in small groups, at 60 to 80 miles an hour. But the costs of this gadget are 40,000 lives per year, global warming, the decay of the inner city, endless commuting, and suburban sprawl.” What would we have chosen collectively?

Put that way, the answer, of course, is no—we wouldn’t choose the advancement of transportation technology if we could immediately see the grievous cost. But we have said yes to that exact offer over time through a million small decisions, and now it is difficult to back out. Most of the modern world is built to accommodate cars. Driving is now the “rational” choice, no matter the destructive effect. Sometimes it feels as though we have no other option.

Kahn’s point is that small decisions can lead to bad outcomes. At some point, alternatives disappear. We lose our optionality. It is easy to see the downsides of big decisions. The costs of smaller ones can be more elusive. In a market economy, Kahn explains, change is the result of tiny steps. Combined, they have a tremendous cumulative effect on our collective freedom. Day to day, it is hard to see the path that is forming. At some point, we may look up and not like where we are going. By then it is too late. Kahn writes:

Only if consumers are given the full range of economically feasible and socially desirable alternatives in a big discrete bundle will misallocation of resources due to the tyranny of small market-determined decisions be broken.

The tragedy of the commons is another such instance of the power of small decisions. Garrett Hardin’s parable illustrates why common resources are used more than is desirable from the standpoint of society as a whole. No one person makes a single decision to deplete the resources. Instead, each person makes a series of small choices that ultimately cause environmental ruin. In the original example, where villagers are free to graze their animals on common land, access to the pasture gives everyone a lot of options for raising animals or farming. Once the pasture is exhausted from everyone putting too many animals out to graze, however, everyone loses their optionality.

Optionality can be a matter of perspective

As Seneca put it, “In one and the same meadow, the cow looks for grass, the dog for a hare, and the stork for a lizard.” Where some people only see blood in the streets, other people see a chance to succeed.

Preserving optionality can be as much about changing our attitudes as our circumstances. It can be about learning to spot opportunities—and to make them. Optionality is not a new concept. A portion of the Old Testament dating back to between 450 and 180 BCE declares:

Invest in seven ventures, yes, in eight; you do not know what disaster may come upon the land. If clouds are full of water, they pour rain on the earth. Whether a tree falls to the south or to the north, in the place where it falls, there it will lie. Whoever watches the wind will not plant; whoever looks at the clouds will not reap . . . Sow your seed in the morning, and at evening let your hands not be idle, for you do not know which will succeed, whether this or that, or whether both will do equally well.

In today’s world, optionality can be integrated into a number of different areas of our lives by looking for ways to prepare for a variety of possible events, instead of optimizing for the recent past.

Keeping our options open means developing generalist skills like creativity, rather than specializing in one area, like a particular technology. The more diverse the knowledge and skills you can draw on, the better positioned you are to take advantage of new opportunities.

It means not relying on a single distributor for your company’s product or having the supply chain for an entire industry dependent on one country. You can’t base your decisions solely on how the world was yesterday. Preserving optionality may mean taking a short-term hit in sales while you fund that diversification, but the result is that you will be much better positioned to keep your business going when circumstances change.

It means not relying on a single energy source to power the vehicles that move us and the goods we need around. Building our society around oil—a finite resource—is limiting. Developing multiple forms of sustainable energy creates new options for when that finite resource is depleted.

Or consider the lean startup methodology. Building a minimum viable product means having the flexibility to pivot or change plans. No demand? No problem! Just try something else. Lean startups iterate until they find product/market fit. Many founders keep their teams as small as possible. They avoid fixed costs and commitments. They keep their options open.

The lean startup methodology recognizes that a new company cannot make a grand plan; it needs to adapt and evolve. As Steve Jobs understood, most customers don’t know they will want something until they have tried it. It’s hard to prepare for changing customer desires without optionality. If a company is flexible, it can adapt to the information it receives once a product hits the market.

“Wealth is not about having a lot of money; it’s about having a lot of options.”

— Chris Rock

Ultimately, preserving optionality means paying attention and looking at life from multiple perspectives. It means building a versatile base of foundational knowledge and allowing for serendipity and unexpected connections. We must seek to expand our comfort zone and circle of competence, and we should take minor risks that have potentially large upsides and limited downsides.

Paradoxically, preserving optionality can mean saying no to a lot of opportunities and avoiding anything that will prove to be restrictive. We need to look at choices through the lens of the optionality they will give us in the future and only say yes to those that create more options.

Preserving your optionality is important because it gives you the flexibility to capitalize on inevitable change. In order to keep your options open, you need diversity. Diversity of perspective, thought, knowledge, and skills. You don’t want to find yourself in a position of only being able to sell something that no one wants. Rapid, extraordinary change is the norm. In order to adapt in a way that is useful, keep your options open.

Chesterton’s Fence: A Lesson in Second Order Thinking

A core component of making great decisions is understanding the rationale behind previous decisions. If we don’t understand how we got “here,” we run the risk of making things much worse.

***

When we seek to intervene in any system created by someone, it’s not enough to view their decisions and choices simply as the consequences of first-order thinking because we can inadvertently create serious problems. Before changing anything, we should wonder whether they were using second-order thinking. Their reasons for making certain choices might be more complex than they seem at first. It’s best to assume they knew things we don’t or had experience we can’t fathom, so we don’t go for quick fixes and end up making things worse.

Second-order thinking is the practice of not just considering the consequences of our decisions but also the consequences of those consequences. Everyone can manage first-order thinking, which is just considering the immediate anticipated result of an action. It’s simple and quick, usually requiring little effort. By comparison, second-order thinking is more complex and time-consuming. The fact that it is difficult and unusual is what makes the ability to do it such a powerful advantage.

Second-order thinking will get you extraordinary results, and so will learning to recognize when other people are using second-order thinking. To understand exactly why this is the case, let’s consider Chesterton’s Fence, described by G. K. Chesterton himself as follows:

There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

***

Chesterton’s Fence is a heuristic inspired by a passage in the writer and polymath G. K. Chesterton’s 1929 book, The Thing. It’s best known as one of John F. Kennedy’s favored sayings, as well as a principle Wikipedia encourages its editors to follow. In the book, Chesterton describes the classic case of the reformer who notices something, such as a fence, and fails to see the reason for its existence. Before deciding to remove it, they must figure out why it exists in the first place. If they do not, they are likely to do more harm than good with its removal. In its most concise version, Chesterton’s Fence states the following:

Do not remove a fence until you know why it was put up in the first place.

Chesterton went on to explain why this principle holds true, writing that fences don’t grow out of the ground, nor do people build them in their sleep or during a fit of madness. He explained that fences are built by people who carefully planned them out and “had some reason for thinking [the fence] would be a good thing for somebody.” Until we establish that reason, we have no business taking an ax to it. The reason might not be a good or relevant one; we just need to be aware of what the reason is. Otherwise, we may end up with unintended consequences: second- and third-order effects we don’t want, spreading like ripples on a pond and causing damage for years.

Elsewhere, in his essay collection Heretics, Chesterton makes a similar point, detailed here:

Suppose that a great commotion arises in the street about something, let us say a lamp-post, which many influential persons desire to pull down. A grey-clad monk, who is the spirit of the Middle Ages, is approached upon the matter, and begins to say, in the arid manner of the Schoolmen, “Let us first of all consider, my brethren, the value of Light. If Light be in itself good—” At this point he is somewhat excusably knocked down. All the people make a rush for the lamp-post, the lamp-post is down in ten minutes, and they go about congratulating each other on their un-mediaeval practicality. But as things go on they do not work out so easily. Some people have pulled the lamp-post down because they wanted the electric light; some because they wanted old iron; some because they wanted darkness, because their deeds were evil. Some thought it not enough of a lamp-post, some too much; some acted because they wanted to smash municipal machinery; some because they wanted to smash something. And there is war in the night, no man knowing whom he strikes. So, gradually and inevitably, to-day, to-morrow, or the next day, there comes back the conviction that the monk was right after all, and that all depends on what is the philosophy of Light. Only what we might have discussed under the gas-lamp, we now must discuss in the dark.

As simple as Chesterton’s Fence is as a principle, it teaches us an important lesson. Many of the problems we face in life occur when we intervene with systems without an awareness of what the consequences could be. We can easily forget that this applies to subtraction as much as to addition. If a fence exists, there is likely a reason for it. It may be an illogical or inconsequential reason, but it is a reason nonetheless.


“Before I built a wall I’d ask to know
What I was walling in or walling out,
And to whom I was like to give offence.”

— Robert Frost, “Mending Wall”

Chesterton also alluded to the all-too-common belief that previous generations were bumbling fools, stumbling around, constructing fences wherever they fancied. Should we fail to respect their judgement and not try to understand it, we run the risk of creating new, unexpected problems. By and large, people do not do things for no reason. We’re all lazy at heart. We don’t like to waste time and resources on useless fences. Not understanding something does not mean it must be pointless.

Take the case of supposedly hierarchy-free companies. Someone came along and figured that having management and an overall hierarchy is an imperfect system. It places additional stress on those at the bottom and can even be damaging to their health. It leaves room for abuse of power and manipulative company politics. It makes it unlikely that good ideas from those at the bottom will get listened to.

However, despite the numerous problems inherent in hierarchical companies, doing away with this structure altogether belies a lack of awareness of the reasons why it is so ubiquitous. Someone needs to make decisions and be held responsible for their consequences. During times of stress or disorganization, people naturally tend to look to leaders for direction. Without a formal hierarchy, people often form an invisible one, which is far more complex to navigate and can lead to the most charismatic or domineering individual taking control, rather than the most qualified.

It is certainly admirable that hierarchy-free companies are taking the enormous risk inherent in breaking the mold and trying something new. However, their approach ignores Chesterton’s Fence and doesn’t address why hierarchies exist within companies in the first place. Removing them does not necessarily lead to a fairer, more productive system.

Yes, doing things the way they’ve always been done means getting what we’ve always got. There’s certainly nothing positive about being resistant to any change. Things become out of date and redundant with time. Sometimes an outside perspective is ideal for shaking things up and finding new ways. Even so, we can’t let ourselves be too overconfident about the redundancy of things we see as pointless.

Or, to paraphrase Rory Sutherland, the peacock’s tail is not about efficiency. In fact, its whole value lies in its inefficiency. It signals a bird is healthy enough to waste energy growing it and has the strength to carry it around. Peahens use the tails of peacocks as guidance for choosing which mates are likely to have the best genes to pass on to their offspring. If an outside observer were to somehow swoop in and give peacocks regular, functional tails, it would be more energy efficient and practical, but it would deprive them of the ability to advertise their genetic potential.

***

All of us, at one point or another, make some attempt to change a habit to improve our lives. If you’re engaging in a bad habit, it’s admirable to try to eliminate it—except part of why many attempts to do so fail is that bad habits do not appear out of nowhere. No one wakes up one day and decides they want to start smoking or drinking every night or watching television until the early hours of the morning. Bad habits generally evolve to serve an unfulfilled need: connection, comfort, distraction, take your pick.

Attempting to remove the habit and leave everything else untouched does not eliminate the need and can simply lead to a replacement habit that might be just as harmful or even worse. Because of this, more successful approaches often involve replacing a bad habit with a good, benign, or less harmful one—or dealing with the underlying need. In other words, that fence went up for a reason, and it can’t come down without something either taking its place or removing the need for it to be there in the first place.

To give a further example, in a classic post from 2009 on his website, serial entrepreneur Steve Blank gives an example of a decision he has repeatedly seen in startups. They grow to the point where it makes sense to hire a Chief Financial Officer. Eager to make an immediate difference, the new CFO starts looking for ways to cut costs so they can point to how they’re saving the company money. They take a look at the free snacks and sodas offered to employees and calculate how much they cost per year—perhaps a few thousand dollars. It seems like a waste of money, so they decide to do away with free sodas or start charging a few cents for them. After all, they’re paying people enough. They can buy their own sodas.

Blank writes that, in his experience, the outcome is always the same. The original employees who helped the company grow notice the change and realize things are not how they used to be. Of course they can afford to buy their own sodas. But suddenly having to is an unmistakable sign that the company’s culture is changing, which can be enough to prompt the most talented people to jump ship. Attempting to save a relatively small amount of money ends up costing far more in employee turnover. The new CFO didn’t consider why that fence was up in the first place.

***

Chesterton’s Fence is not an admonishment of anyone who tries to make improvements; it is a call to be aware of second-order thinking before intervening. It reminds us that we don’t always know better than those who made decisions before us, and we can’t see all the nuances to a situation until we’re intimate with it. Unless we know why someone made a decision, we can’t safely change it or conclude that they were wrong.

The first step before modifying an aspect of a system is to understand it. Observe it in full. Note how it interconnects with other aspects, including ones that might not be linked to you personally. Learn how it works, and then propose your change.

The Best of Farnam Street 2019

We read for the same reasons we have conversations — to enrich our lives.

Reading helps us to think, feel, and reflect — not only upon ourselves and others but upon our ideas, and our relationship with the world. Reading deepens our understanding and helps us live consciously.

Of the 31 articles we published on FS this year, here are the top ten as measured by a combination of page views, responses, and feeling.

How Not to Be Stupid — Stupidity is overlooking or dismissing conspicuously crucial information. Here are seven situational factors that compromise your cognitive ability and result in increased odds of stupidity.

The Danger of Comparing Yourself to Others — When you stop comparing yourself to others and turn your focus inward, you start being better at what really matters: being you.

Yes, It’s All Your Fault: Active vs. Passive Mindsets — The hard truth is that most things in your life – good and bad – are your fault. The sooner you realize that, the better things will be. Here’s how to cultivate an active mindset and take control of your life.

Getting Ahead By Being Inefficient — Inefficient does not mean ineffective, and it is certainly not the same as lazy. You get things done – just not in the most efficient way possible. You’re a bit sloppy, and use more energy. But don’t feel bad about it. There is real value in not being the best.

How to Do Great Things — If luck is the cause of a person’s success, why are so many so lucky time and time again? Learn how to create your own luck by being intelligently prepared.

The Anatomy of a Great Decision — Making better decisions is one of the best skills we can develop. Good decisions save time, money, and stress. Here, we break down what makes a good decision and what we can do to improve our decision-making processes.

The Importance of Working With “A” Players — Building a team is more complicated than collecting talent. I once tried to solve a problem by putting a bunch of PhDs in a room. While comments like that sounded good and got me a lot of projects above my level, they were rarely effective at delivering actual results.

Compounding Knowledge — The filing cabinet of knowledge stored in Warren Buffett’s brain has helped make him the most successful investor of our time. But it takes much more than simply reading a lot. In this article, learn how to create your own “snowball effect” to compound what you know into opportunity.

An Investment Approach That Works — There are as many investment strategies as there are investment opportunities. Some are good; many are terrible. Here’s the one that I lean on the most when I’m looking for low risk and above average returns.

Resonance: How to Open Doors For Other People — Opening doors for other people is a critical concept to understand in life. Read this article to learn more about how to show people that you care.

Thank you

As we touched on in the annual letter, it’s been a wonderful year at FS. We are looking forward to a wider variety of content on the blog in 2020 with a mix of deep dives and pieces exploring new subjects.

Thank you for an amazing 2019 and we look forward to learning new things with you in 2020.


Elastic: Flexible Thinking in a Constantly Changing World

The less rigid we are in our thinking, the more open-minded, creative, and innovative we become. Here’s how to develop the power of an elastic mind.

***

Society is changing fast. Do we need to change how we think in order to survive?

In his book Elastic: Flexible Thinking in a Constantly Changing World, Leonard Mlodinow argues that the speed of technological and cultural development requires us to embrace types of thinking beyond the rational, logical style of analysis that tends to be emphasized in our society. He also offers good news: we already have the diverse cognitive capabilities necessary to respond effectively to new and novel challenges. He calls this “elastic thinking.”

Mlodinow explains elastic thinking as:

“the capacity to let go of comfortable ideas and become accustomed to ambiguity and contradiction; the capability to rise above conventional mind-sets and to reframe the questions we ask; the ability to abandon our ingrained assumptions and open ourselves to new paradigms; the propensity to rely on imagination as much as on logic and to generate and integrate a wide variety of ideas; and the willingness to experiment and be tolerant of failure.”

In simpler terms, elastic thinking is about letting your brain make connections without direction.

Let’s explore why elastic thinking is useful and how we can get better at it.

***

First of all, let’s throw out the metaphor that our brain is exactly like a computer. Sure, it can perform similar analytic functions. But our brains are capable of insight that is neither analytical nor programmable. Before we can embrace the other types of thinking our brains have innate capacity for, we need to accept that analytic thinking—generally described as the application of systematic, logical analysis—has limitations.

As Mlodinow explains,

“Analytical thought is the form of reflection that has been most prized in modern society. Best suited to analyzing life’s more straightforward issues, it is the kind of thinking we focus on in our schools. We quantify our ability in it through IQ tests and college entrance examinations, and we seek it in our employees. But although analytical thinking is powerful, like scripted processing, it proceeds in a linear fashion…and often fails to meet the challenges of novelty and change.”

Although incredibly useful in a variety of daily situations, analytical thinking may not be best for solving problems whose answers require new ways of doing things.

For those types of problems, elastic thinking is most useful. This is the kind of thinking that enjoys wandering outside the box and generating ideas that fly in and out of left field. “Ours is a far more complex process than occurs in a computer, an insect brain, or even the brains of other mammals,” Mlodinow elaborates. “It allows us to face the world armed with a capability for an astonishing breadth of conceptual analysis.”

Think of it this way: when you come to a river and need to cross it, your analytic thinking comes in handy. It scans the environment to evaluate your options. Where might the water be lowest? Where is it moving the fastest, and thus where is the most dangerous crossing point? What kind of materials are on hand to assist in your crossing? How might others have solved this problem?

This particular river might be new for you, but the concept of crossing one likely isn’t, so you can easily rely on the logical steps of an analytical thinking process.

Elastic thinking is about generating new or novel ideas. When contemplating how best to cross a river, it was this kind of thinking that took us from log bridges to suspension bridges and from rowboats to steamboats. Elastic thinking involves us putting together many disparate ideas to form a new way of doing things.

We don’t need to abandon analytical thinking altogether. We just need to recognize that it has its limitations. If the way we are doing things doesn’t seem to be getting us the results we want, that might be a sign that more elastic thinking is called for.

Why Elasticity?

Mlodinow writes that “humans tend to be attracted to both novelty and change.”

Throughout our history we have willingly lined up and paid to be shocked and amazed. From magic shows and roller coasters to the circus and movies, our entertainment industries never seem to run out of audiences. Our propensity to engage with the new isn’t just confined to entertainment. Think back to the large technological expositions around the turn of the twentieth century, which displayed the cutting edge of invention and visions for the future and attracted millions of visitors. Or, going further back, think of the pilgrimages people made to see new architectural wonders, often embodied in churches and cathedrals, at a time when travel was difficult.

Mlodinow contends these types of actions display a quality “that makes us human…our ability and desire to adapt, to explore, and to generate new ideas.” Part of the reason that novelty attracts us is that we get a hit of feel-good dopamine when we are confronted with something new (and non-threatening). Thus, in terms of our evolutionary history, our tendency to explore and learn was rewarded with a boost of pleasure, which then led to more exploration.

He is careful to explain that exploring doesn’t necessarily mean signing up to go to Mars. We explore when we try something new. “When you socialize with strangers, you are exploring the possibility of new relationships. … When you go on a job interview even though you are employed, you are exploring a new career move.”

The relation of exploration to elasticity is that exploration requires elastic thinking. Exploration, by definition, is venturing into parts unknown where we might be confronted with any manner of new and novel experiences. It’s hard to logically analyze something for which you have no knowledge or experience. It is this attraction to novelty that contributed to our ability to think elastically.

The Value of Emotions in Decision-Making

You can’t make a decision without tapping into your emotions.

Mlodinow suggests that “we tend to praise analytical thought as being objective, untinged by the distortions of human feelings, and therefore tending towards accuracy. But though many praise analytical thought for its detachment from emotion, one could also criticize it as not being inspired by emotion, as elastic thinking is.”

He tells the story of EVR, a man who had brain surgery to remove a benign tumor. After the surgery, EVR couldn’t make decisions. He passed IQ tests and tests about current affairs and ethics. But his life slowly fell apart because he couldn’t make a decision.

“In hindsight, the problem in diagnosing EVR was that all the exams were focused on his capability for analytical thinking. They revealed nothing wrong because his knowledge and logical reasoning skills were intact. His deficit would have been more apparent had they given him a test of elastic thinking—or watched him eat a brownie, or kicked him in the shin, or probed his emotions in some other manner.”

EVR had his orbitofrontal cortex removed—a big part of the brain’s reward system. According to Mlodinow, “Without it, EVR could not experience conscious pleasure. That left him with no motivation to make choices or to formulate and attempt to achieve goals. And that explains why decisions such as where to eat caused him problems: We make such decisions based on our goals, such as enjoying the food or the atmosphere, and he had no goals.”

Our ability to feel emotions is therefore a large and valuable component of our biological decision-making process. As Mlodinow explains, “Evolution endowed us with emotions like pleasure and fear in order that we may evaluate the positive or negative implications of circumstances and events.” Without emotion, we have no motivation to make decisions. What is new would have the same effect as what is old. This state of affairs would not be terribly useful for responding to change. Although we are attracted to novelty, not everything new is good. It is our emotional capabilities that can help us navigate whether the change is positive and determine how we can best deal with it.

Mlodinow contends that “emotions are an integral ingredient in our ability to face the challenges of our environment.” Our inclination to novelty can be exploited, however, and today we have to face and address the multiple drains on our emotions and thus our cognitive abilities. Chronic distractions that manipulate our emotional responses require energy to address and leave us emotionally spent. With less emotional energy to process new experiences and information, we are left with an unclear picture of what might benefit us and what we should run away from.

Frozen Thoughts

Mlodinow explains that “frozen thinking” occurs when you have a fixed orientation that determines the way you frame or approach a problem.

Frozen thinking most likely occurs when you are an expert in your field. Mlodinow argues that “it is ironic that frozen thinking is a particular risk if you are an expert at something. When you are an expert, your deep knowledge is obviously of great value in facing the usual challenges of your profession, but your immersion in that body of conventional wisdom can impede you from creating or accepting new ideas, and hamper you when confronted with novelty and change.”

When you cling to the idea that the way things are is the way they are always going to be, you close off your brain from noticing new opportunities. In most jobs, this might translate into missed opportunities or an inability to find solutions under changing parameters. But there are some professions where the consequences can be significantly more dire. For instance, as Mlodinow discusses, if you’re a doctor, frozen thinking can lead to major errors in diagnosis.

Frozen thinking is incompatible with elastic thinking. So if you want to make sure you aren’t just regurgitating more of the same while the world evolves around you, augment your elastic thinking.

The ‘How’ of Elastic Thinking

Our brains are amazing. In order to tap into our innate elastic thinking abilities, we really just have to get out of our own way and stop trying to force a particular thinking process.

“The default network governs our interior mental life—the dialogue we have with ourselves, both consciously and subconsciously. Kicking into gear when we turn away from the barrage of sensory input produced by the outside world, it looks toward our inner selves. When that happens, the neural networks of our elastic thought can rummage around the huge database of knowledge and memories and feelings that is stored in the brain, combining concepts that we normally would not recognize. That’s why resting, daydreaming, and other quiet activities such as taking a walk can be powerful ways to generate ideas.”

Mlodinow emphasizes that elastic thinking will happen when we give ourselves quiet space to let the brain do its thing.

“The associative processes of elastic thinking do not thrive when the conscious mind is in a focused state. A relaxed mind explores novel ideas; an occupied mind searches for the most familiar ideas, which are usually the least interesting. Unfortunately, as our default networks are sidelined more and more, we have less unfocused time for our extended internal dialogue to proceed. As a result, we have diminished opportunity to string together those random associations that lead to new ideas and realizations.”

Here are some suggestions for how to develop elastic thinking:

  • Cultivate a “beginner’s mind” by questioning situations as if you have no experience in them.
  • Introduce discord by pursuing relationships and ideas that challenge your beliefs.
  • Recognize the value of diversity.
  • Generate lots of ideas and don’t be bothered that most of them will be bad.
  • Develop a positive mood.
  • Relax when you see yourself becoming overly analytical.

The main lesson is that fruitful elastic thinking doesn’t need to be directed. Like children and unstructured play, sometimes we have to give our brains the opportunity to just be. We also have to be willing to stop distracting ourselves all the time. Often it seems that we are afraid of our own thoughts, or we assume that to be quiet is to be bored, so we search for distractions that keep our brains occupied. To encourage elastic thinking in our society, we have to wean ourselves away from the constant stimuli provided by screens.

Mlodinow explains that you can prime your brain for insights by cultivating the kind of mindset that generates them. Don’t force your thinking or apply an analytical approach to the situation. “The challenge of insight is the analogous issue of freeing yourself from narrow, conventional thinking.”

When it comes to developing and exploring the possibilities of elastic thinking, it is perhaps best to remember that, as Mlodinow writes, “the thought processes we use to create what are hailed as great masterpieces of art and science are not fundamentally different from those we use to create our failures.”