Category: Decision Making

Strategy vs. Tactics: What’s the Difference and Why Does it Matter?

In order to do anything meaningful, you have to know where you are going.

Strategy and tactics are two terms that get thrown around a lot, often interchangeably. But what exactly do they mean, what is the difference between them, and why does it matter? In this article, we will look at the contrast between strategy and tactics and the most effective ways to use each.

While strategy and tactics originated as military terminology, their use has spread to planning in many areas of life. Strategy is an overarching plan or set of goals. Changing strategies is like trying to turn around an aircraft carrier—it can be done, but not quickly. Tactics are the specific actions or steps you undertake to accomplish your strategy. For example, in a war, a nation’s strategy might be to win the hearts and minds of the opponent’s civilian population. To achieve this, they could use tactics such as radio broadcasts or building hospitals. A personal strategy might be to get into a particular career, whereas your tactics might include choosing your educational path, seeking out a helpful mentor, or distinguishing yourself from the competition.

We might have strategies for anything from gaining political power or getting promoted, to building relationships and growing the audience of a blog. Whatever we are trying to do, we would do well to understand how strategy and tactics work, the distinction, and how we can fit the two together. Without a strategy we run the risk of ambling through life, uncertain and confused about whether we are making progress towards what we want. Without tactics, we are destined for a lifetime of wishful thinking or chronic dissatisfaction. As Lawrence Freedman writes in Strategy: A History, “Without a strategy, facing up to any problem or striving for any objective would be considered negligent. Certainly, no military campaign, company investment, or government initiative is likely to receive backing unless there is a strategy to evaluate…. There is a call for strategy every time the path to a given destination is not straightforward.” And without tactics, you become dependent on pure luck to implement your strategy.

To achieve anything we need a view of both the micro and the macro, the forest and the trees—and how both perspectives slot together. Strategy and tactics are complementary. Neither works well without the other. Sun Tzu recognized this two and a half millennia ago when he stated, “Strategy without tactics is the slowest route to victory. Tactics without strategy are the noise before defeat.” We need to take a long-term view and think ahead, while choosing short-term steps to take now for the sake of what we want later.

The Relationship Between Strategy and Tactics

Any time we decide on a goal and invest resources in achieving it, we are strategizing. Freedman writes:

One common contemporary definition describes it as being about maintaining a balance between ends, ways, and means; about identifying objectives; and about the resources and methods available for meeting such objectives. This balance requires not only finding out how to achieve desired ends but also adjusting ends so that realistic ways can be found to meet them by available means.

In The Grand Strategy of the Roman Empire, Edward N. Luttwak writes that strategy “is not about moving armies over geography, as in board games. It encompasses the entire struggle of adversarial forces, which need not have a spatial dimension at all….” When you think about winning a war, what does it mean to actually win? History is full of examples of wars that were “won” on paper, only to be restarted as soon as the adversary had time to regroup. Being precise about your goal, so that it encompasses the entirety of what you want to achieve, is necessary for articulating a good strategy. It’s not about success in the moment, but success in the long term. That’s the difference between the endings of World War I and World War II. World War I was about winning that war. World War II was about never fighting a war like that again. The strategies articulated in the Treaty of Versailles and the Marshall Plan were pursued with markedly different tactics.

In Good Strategy Bad Strategy, Richard Rumelt writes: “The most basic idea of strategy is the application of strength against weakness. Or if you prefer, strength applied to the most promising opportunity…A good strategy doesn’t just draw on existing strength; it creates strength.” Rumelt’s definition of strategy as creating strength is particularly important. You don’t deplete yourself as you execute your strategy. You choose tactics that reinforce and build strength as they are deployed. Back to winning hearts and minds: the tactics require up-front costs, but as they proceed and the strategy unfolds, the backing of the local population yields strength and further support. A good strategy makes you stronger.

“Grand strategy is the art of looking beyond the present battle and calculating ahead. Focus on your ultimate goal and plot to reach it.”

― Robert Greene, The 33 Strategies of War

The Components of Strategy

The strategic theorist Henry Mintzberg provides a useful approach to thinking about strategy in adversarial situations. According to Mintzberg, there are five key components or types:

  1. Plan: A consciously chosen series of actions to achieve a goal, made in advance.
  2. Ploy: A deliberate attempt to confuse, mislead or distract an opponent.
  3. Pattern: A consistent, repeated series of actions that achieve the desired result.
  4. Position: A considered relationship between an entity (organization, army, individual, etc.) and its context.
  5. Perspective: A particular way of viewing the world, a mindset regarding actions that lead to a distinct way of behaving.

Geoffrey P. Chamberlain offers a slightly different perspective on the components of strategy, useful when the strategy is more about a personal goal. He identifies seven parts:

  1. A strategy is used within a particular domain.
  2. A strategy has a single, well-defined focus.
  3. A strategy lays out a path to be followed.
  4. A strategy is made up of parts (tactics).
  5. Each of a strategy’s parts pushes towards the defined focus.
  6. A strategy recognizes its sphere of influence.
  7. A strategy is either intentionally formed or emerges naturally.

According to Rumelt, a strategy must include “premeditation, the anticipation of others’ behavior, and the purposeful design of coordinated actions.” As a general rule, strategy is more important in situations where other parties have the potential to thwart or disrupt our actions, or where our plans are at risk if we don’t take meaningful steps to achieve them. Good strategy requires us both to focus on a goal and to anticipate obstacles to reaching that goal. When we encounter obstacles, we may need to employ what Freedman calls “deceits, ruses, feints, manoeuvres and a quicker wit”—our tactics.

“The skillful tactician may be likened to the Shuai-Jan. Now the Shuai-Jan is a snake that is found in the Ch’ang mountains. Strike at its head, and you will be attacked by its tail; strike at its tail, and you will be attacked by its head; strike at its middle, and you will be attacked by head and tail both.”

— Sun Tzu, The Art of War

A Few Words on Tactics

Even the most elegant, well-planned strategy is useless if we do not take thoughtful steps to achieve it. While the overall goal remains stable, the steps we take to achieve it must be flexible enough to adjust to the short-term realities of our situation.

The word “tactic” comes from the Ancient Greek “taktikos,” which loosely translates to “the art of ordering or arranging.” We now use the term to denote actions toward a goal. Tactics often center around the efficient use of available resources, whether money, people, time, ammunition, or materials. Tactics also tend to be shorter-term and more specific than strategies.

Many tactics are timeless and have been used for centuries or even millennia. Military tactics such as ambushes, using prevailing weather, and divide-and-conquer have been around as long as people have fought each other. The same applies to tactics used by politicians and protesters. Successful tactics often include an ‘implementation intention’—a specific trigger that signals when they should be used. Simply deciding what to do is rarely enough. We need an “if this, then that” plan for where, when and why. The short-term nature and flexibility of tactics allow us to pivot as needed, choosing the right ones for the situation, to achieve our larger, strategic goals.

“If you don’t have a strategy, you are part of someone else’s strategy.”

— Alvin Toffler

Conclusion

Although often regarded as interchangeable, strategy and tactics are distinct, though complementary, concepts. According to Sun Tzu, strategy is about winning before the battle begins, while tactics are about striking at weakness. Both are ancient concepts that have become essential to numerous disciplines and continue to offer new ways of thinking.

Break the Chain: Stop Being a Slave

A vendor once tried to buy me a laptop. Not just any laptop but a very expensive laptop. The vendor claimed that there were no strings attached. And, as they pointed out, I was the only person in the meeting with them, so “no one would know” they had given it to me.

It wasn’t a hard decision. I said no.

It wasn’t because I didn’t need a laptop. In fact, I did need one. The laptop I was using was old and out of date. I had purchased it myself years ago in a fit of frustration at the ridiculous process the government wanted me to follow to obtain one from them.

“No price is too high to pay for the privilege of owning yourself.”

— Nietzsche

Governments have clear conflict-of-interest rules for people in situations like this one. The rules, however, are impractical. They’re also expensive. I remember one dinner with a vendor that ended up costing me hundreds of dollars personally. I made a mistake: I went to wash my hands around the time the vendor picked out some wine. I came back to see a glass of wine poured for me. When the bill came, the vendor insisted on paying it. At the cost of damaging our relationship and embarrassing him, I refused and said, “That’s very generous of you, but the government is clear; I’ve got to pay my share.”

My share? Over $200. I hadn’t picked the restaurant or the wine. When I returned to work a few days later and submitted a claim for the difference between my per diem and the meal, I was literally laughed at.

But the real reason I said no to the laptop was that I don’t want to be owned by other people. Even if my freedom personally costs me money. However well-meaning the laptop offer might have been, I would have felt a debt to the vendor who’d given it to me. A debt that would need to be paid at some point. That debt would have created a bond between us that I didn’t want.

We need to make our own way, and there is a slippery slope between accepting the generosity of people who help you along and becoming dependent on them. The entitlement born from expecting others to help you is a recipe for misery. So is excessive dependence on others.

The lesson is never to anticipate or rely on the kindness of strangers. This dependence means they own you. If you have a mortgage, you don’t own your house; the bank does.

Working for the government taught me a lot about ownership — specifically, about dependence on other people. People refused to say what they really thought, subconsciously abiding by the maxim “whose bread I eat, his song I sing.”

When people would approach me and tell me how miserable they were and how they hated their jobs, I would ask them why they didn’t leave. The answer was almost always the same: “I can’t.”

Once we’re bought, it’s hard to get out. While we all start out wanting more independence, we increasingly live lifestyles that make us dependent.

When I first started working in the government, I made just under $40k a year in salary. For me, just out of university, that was a killing. I felt like I could do whatever I wanted. After a while, I was making more money but still living off the same starting salary. The additional money went to savings and debt repayment. I said no to living above my means and watched as most of my friends couldn’t say no.

A lot of them spent more than they made no matter how many promotions they received. Appetites are rarely quenched. As people spent more, they got more into debt. As they got more into debt, they wanted more and more. As their wants exceeded even the debt-funded shopping sprees (cars, trucks, houses, swimming pools, campers, play structures for the kids, etc.), they got unhappier. They saw other people with things they wanted. Things they felt like they deserved. Their relationships suffered. They became miserable. They hated their jobs, but they were stuck. The bank owned them. Work owned them.

And they realized it too late.

Part of the reason for the laptop offer was likely that the vendor expected to have preferred access to me and to the government. Preferred access to information that could potentially benefit his company, to the tune of millions or tens of millions of dollars. Had I accepted the offer, it would have been hard to deny him. I saw the strings and didn’t want any part of them.

Amelia Boone, the Michael Jordan of adventure racing, said, “I believe the key to self-sufficiency is breaking free of the mindset that someone, somewhere, owes you something and will come to your rescue.”

The bank doesn’t owe you a mortgage, just as work doesn’t owe you a job.

“Self-sufficiency,” wrote Epicurus, “is the greatest of all wealth.” Epictetus added that “wealth consists not in having great possessions, but in having few wants.”

It can be hard to say no. It means refusing someone, and often it means denying yourself instant gratification. The rewards of doing this are uncertain and less tangible. I call decisions like this “first-order negative, second-order positive.” Most people don’t take the time to think through the second-order effects of their choices. If they did, they’d realize that freedom comes from the ability to say no.


Making Compassionate Decisions: The Role of Empathy in Decision Making

“The biggest deficit that we have in our society and in the world right now is an empathy deficit. We are in great need of people being able to stand in somebody else’s shoes and see the world through their eyes.”

— Barack Obama

You don’t have to look hard to find quotes expounding the need for more empathy in society. As with Barack Obama’s quote above, we are encouraged to actively build empathy with others — especially those who are different from us. The implicit message in these pleas is that empathy will make us treat each other with more respect and caring and will help reduce violence. But is this true? Does empathy make us appreciate others, help us behave in moral ways, or help us make better decisions?

These are questions Paul Bloom tackles in his book Against Empathy: The Case for Rational Compassion. As the title suggests, Bloom’s book makes a case against empathy as an inherent force for good and takes a closer look at what empathy is (and is not), how empathy works in our brains, how empathy can lead to immoral outcomes despite our best intentions, and how we can improve our ability to have a positive impact by strengthening our intelligence, compassion, self-control, and ability to reason.

To explore these questions, we first need to define what we’re talking about.

What Is Empathy?

Empathy is an often-used word that can mean different things. Bloom quotes one team of empathy researchers who joke that “there are probably nearly as many definitions of empathy as people working on this topic.” For his part, Bloom defines empathy as “the act of coming to experience the world as you think someone else does.” This type of empathy was explored by philosophers of the Scottish Enlightenment. Bloom writes:

As Adam Smith put it, we have the capacity to think about another person and “place ourselves in his situation and become in some measure the same person with him, and thence form some idea of his sensations, and even feel something which, though weaker in degree, is not altogether unlike them.”

This is the definition and view of empathy that Bloom devotes most of the book to exploring. This is the “standing in another man’s shoes” type of empathy from Barack Obama’s quote above, which Bloom calls emotional empathy.

“I feel your pain” is more than a metaphor. It’s literal.

With emotional empathy, you actually experience a weaker degree of what somebody else feels. Researchers in recent years have been able to show that empathic responses of pain occur in the same area of the brain where real pain is experienced.

So “I feel your pain” isn’t just a gooey metaphor; it can be made neurologically literal: Other people’s pain really does activate the same brain area as your own pain, and more generally, there is neural evidence for a correspondence between self and other.

To make the shoe metaphor literal, imagine that you see somebody drop something heavy on their foot — you flinch because you know what this feels like and the parts of your brain that experience pain (the anterior insula and the cingulate cortex) react. You don’t feel the same degree of pain, of course — you didn’t drop anything on your foot after all — but it is likely that you have an involuntary physical reaction like a flinch, a facial grimace, or an audible outburst. This is an emotionally empathic response.

But there is another form of empathy that Bloom wants us to be aware of and consider differently. It relates to our ability to understand what is going on in the minds of others. Bloom refers to this form as cognitive empathy:

… if I understand that you are in pain without feeling it myself, this is what psychologists describe as social cognition, social intelligence, mind reading, theory of mind, or mentalizing. It’s also sometimes described as a form of empathy—“cognitive empathy” as opposed to “emotional empathy.”

In this sense, cognitive empathy speaks to our capacity to understand what is going on in the minds of others. In the case of pain, which is where a lot of empathy research is done, we’re not talking about feeling any degree of pain, as we might with emotional empathy, but instead, we simply understand that the other person is feeling pain without feeling it ourselves. Cognitive empathy goes beyond pain — our ability to understand what is going on in somebody else’s mind is an important part of being human and is necessary for us to relate to each other.

The brain is, of course, very complicated, so it is plausible that these two types of empathy could take place in the same part of the brain. So far, though, the research seems to indicate that they are largely separate:

In a review article, Jamil Zaki and Kevin Ochsner note that hundreds of studies now support a certain perspective on the mind, which they call “a tale of two systems.” One system involves sharing the experience of others, what we’ve called empathy; the other involves inferences about the mental states of others—mentalizing or mind reading. While they can both be active at once, and often are, they occupy different parts of the brain. For instance, the medial prefrontal cortex, just behind the forehead, is involved in mentalizing, while the anterior cingulate cortex, sitting right behind that, is involved in empathy.

The difference between cognitive and emotional empathy is important for understanding Bloom’s arguments. From Bloom’s perspective, cognitive empathy is “…a useful and necessary tool for anyone who wishes to be a good person—but it is morally neutral.” On the other hand, Bloom believes that emotional empathy is “morally corrosive,” and the bulk of his attack is directed at highlighting the pitfalls of relying on emotional empathy while making the case for cultivating and practicing “rational compassion” instead.

I believe that the capacity for emotional empathy, described as “sympathy” by philosophers such as Adam Smith and David Hume, often simply known as “empathy” and defended by so many scholars, theologians, educators, and politicians, is actually morally corrosive. If you are struggling with a moral decision and find yourself trying to feel someone else’s pain or pleasure, you should stop. This empathic engagement might give you some satisfaction, but it’s not how to improve things and can lead to bad decisions and bad outcomes. Much better to use reason and cost-benefit analysis, drawing on a more distanced compassion and kindness.

Here again, the definition of the terms is important for understanding the argument. Empathy and compassion are synonyms in many dictionaries and used interchangeably by many, but they have different characteristics. Bloom outlines the difference:

… compassion and concern are more diffuse than empathy. It is weird to talk about having empathy for the millions of victims of malaria, say, but perfectly normal to say that you are concerned about them or feel compassion for them. Also, compassion and concern don’t require mirroring of others’ feelings. If someone works to help the victims of torture and does so with energy and good cheer, it doesn’t seem right to say that as they do this, they are empathizing with the individuals they are helping. Better to say that they feel compassion for them.

Bloom references a review paper written by Tania Singer and Olga Klimecki to help make the distinction clear. Singer and Klimecki write:

In contrast to empathy, compassion does not mean sharing the suffering of the other: rather, it is characterized by feelings of warmth, concern and care for the other, as well as a strong motivation to improve the other’s well-being. Compassion is feeling for and not feeling with the other.

To summarize, emotional empathy could be simply described as “feeling what others feel,” cognitive empathy as “understanding what others feel,” and compassion as “caring about how others feel.”

Empathy and Morality

Many people believe that our ability to empathize is the basis for morality because it causes us to consider our actions from another’s perspective. “Treat others as you would like to be treated” is the basic morality lesson repeated thousands of times to children all over the world.

In this way, empathy builds morality on top of our self-centered nature. If this is true, Bloom suggests, the argument in its simplest form would go like this:

Everyone is naturally interested in him- or herself; we care most about our own pleasure and pain. It requires nothing special to yank one’s hand away from a flame or to reach for a glass of water when thirsty. But empathy makes the experiences of others salient and important—your pain becomes my pain, your thirst becomes my thirst, and so I rescue you from the fire or give you something to drink. Empathy guides us to treat others as we treat ourselves and hence expands our selfish concerns to encompass others.

In this way, the willful exercise of empathy can motivate kindness that would never have otherwise occurred. Empathy can make us care about a slave, or a homeless person, or someone in solitary confinement. It can put us into the mind of a gay teenager bullied by his peers, or a victim of rape. We can empathize with a member of a despised minority or someone suffering from religious persecution in a faraway land. All these experiences are alien to me, but through the exercise of empathy, I can, in some limited way, experience them myself, and this makes me a better person.

When we consider the plight of others by imagining ourselves in their situation, we experience an empathic response that can cause us to evaluate the morality of our actions.

In an interview, Steven Pinker hypothesizes that it was an increase in empathy, made possible by the technology of the printing press and the resulting increase in literacy, that led to the Humanitarian Revolution during the Enlightenment. The increase in empathy brought about by our ability to read accounts of violent punishments like disembowelment and mutilation caused us to reconsider the morality of treating other human beings in such ways.

So in certain instances, empathy can play a role in motivating us to take moral action. But is an empathic response required to do so?

To use a classic example from philosophy—first thought up by the Chinese philosopher Mencius—imagine that you are walking by a lake and see a young child struggling in shallow water. If you can easily wade into the water and save her, you should do it. It would be wrong to keep walking.

What motivates this good act? It is possible, I suppose, that you might imagine what it feels like to be drowning, or anticipate what it would be like to be the child’s mother or father hearing that she drowned. Such empathic feelings could then motivate you to act. But that is hardly necessary. You don’t need empathy to realize that it’s wrong to let a child drown. Any normal person would just wade in and scoop up the child, without bothering with any of this empathic hoo-ha.

And so there has to be more to morality than empathy. Our decisions about what’s right and what’s wrong, and our motivations to act, have many sources. One’s morality can be rooted in a religious worldview or a philosophical one. It can be motivated by a more diffuse concern for the fates of others—something often described as concern or compassion…

I hope most people reading this would agree that failing to attempt to save a drowning child or supporting or perpetrating violent punishments like disembowelment would be at the very least morally reprehensible, if not outright evil.

But what motivates people to be “evil”? For researchers like Simon Baron-Cohen, evil is defined as “empathy erosion” — truly evil people lack the capacity to empathize, and it is this lack of empathy that causes them to act in evil ways. Bloom looks at the question of what causes people to be evil from a slightly different angle:

Indeed, some argue that the myth of pure evil gets things backward. That is, it’s not that certain cruel actions are committed because the perpetrators are self-consciously and deliberatively evil. Rather it is because they think they are doing good. They are fueled by a strong moral sense.

When the perpetrators of violence or cruelty believe that their actions are morally justified, what motivates them? Bloom suggests that it can be empathy. Empathy often causes us to choose sides, to choose whom to empathize with. We see this tendency play out in politics all the time.

Politicians representing one side believe they are saving the world, while representatives on the other side believe that their adversaries are out to destroy civilization as we know it. If I believe that I am protecting a person or group of people whom I choose to empathize with, then I may be motivated to act in a way I believe is morally justified, even though others may believe that I have harmed them.

Steven Pinker weighed in on this issue when he wrote the following in The Better Angels of Our Nature:

If you added up all the homicides committed in pursuit of self-help justice, the casualties of religious and revolutionary wars, the people executed for victimless crimes and misdemeanors, and the targets of ideological genocides, they would surely outnumber the fatalities from amoral predation and conquest.

Bloom quotes Pinker and goes on to write:

Henry Adams put this in stronger terms, with regard to Robert E. Lee: “It’s always the good men who do the most harm in the world.”

This might seem perverse. How can good lead to evil? One thing to keep in mind here is that we are interested in beliefs and motivations, not what’s good in some objective sense. So the idea isn’t that evil is good; rather, it’s that evil is done by those who think they are doing good.

So from a moral perspective, empathy can lead us astray. We may believe we are doing good or that our actions are justified, but this may not be true for everyone involved. This is especially troublesome when we consider how we are affected by a growing list of cognitive biases.

Empathy and Biases

While empathy may not be required to motivate us to save a drowning child, it can still help us appreciate the experiences or suffering of another person, motivating us to consider things from their perspective or to act to relieve that suffering:

I see the bullied teenager and might be tempted initially to join in with his tormenters, out of sadism or boredom or a desire to dominate or be popular, but then I empathize—I feel his pain, I feel what it’s like to be bullied—so I don’t add to his suffering. Maybe I even rise to his defense. Empathy is like a spotlight directing attention and aid to where it’s needed.

On the surface this seems like an excellent case for the positive power of empathy; it shines a “spotlight” on a person in need and motivates us to help them. But what happens when we dig a little deeper into this metaphor? Bloom writes:

… spotlights have a narrow focus, and this is one problem with empathy. It does poorly in a world where there are many people in need and where the effects of one’s actions are diffuse, often delayed, and difficult to compute, a world in which an act that helps one person in the here and now can lead to greater suffering in the future.

He adds:

Further, spotlights only illuminate what they are pointed at, so empathy reflects our biases. Although we might intellectually believe that the suffering of our neighbor is just as awful as the suffering of someone living in another country, it’s far easier to empathize with those who are close to us, those who are similar to us, and those we see as more attractive or vulnerable and less scary. Intellectually, a white American might believe that a black person matters just as much as a white person, but he or she will typically find it a lot easier to empathize with the plight of the latter than the former. In this regard, empathy distorts our moral judgments in pretty much the same way that prejudice does.

We are all predisposed to care more deeply for those we are close to. From a purely biological perspective, we will care for and protect our children and families before the children or families of strangers. Our decision making often falls victim to narrow framing, and our actions are affected by biases like Liking/Loving and Disliking/Hating and our tendency to discount the pain of people we don’t like:

We are constituted to favor our friends and family over strangers, to care more about members of our own group than people from different, perhaps opposing, groups. This fact about human nature is inevitable given our evolutionary history. Any creature that didn’t have special sentiments toward those that shared its genes and helped it in the past would get its ass kicked from a Darwinian perspective; it would falter relative to competitors with more parochial natures. This bias to favor those close to us is general—it influences who we readily empathize with, but it also influences who we like, who we tend to care for, who we will affiliate with, who we will punish, and so on.

There are many causes for human biases — empathy is only one — but taking a step back, we can see how the intuitive gut responses motivated by emotional empathy can negatively affect our ability to make rational decisions.

Empathy’s narrow focus, specificity, and innumeracy mean that it’s always going to be influenced by what captures our attention, by racial preferences, and so on. It’s only when we escape from empathy and rely instead on the application of rules and principles or a calculation of costs and benefits that we can, to at least some extent, become fair and impartial.

While many of us are motivated to be good and to make good decisions, it isn’t always cut and dried. Our preferences for whom to help or which organizations to support are affected by our biases. If we’re not careful, empathy can cloud our ability to see the potential impacts of our actions. Considering these impacts takes much more than empathy and a desire to do good; it takes awareness of our biases and mental effort to combat their effects:

… doing actual good, instead of doing what feels good, requires dealing with complex issues and being mindful of exploitation from competing, sometimes malicious and greedy, interests. To do so, you need to step back and not fall into empathy traps. The conclusion is not that one shouldn’t give, but rather that one should give intelligently, with an eye toward consequences.

In addition to biases like Liking/Loving and Disliking/Hating, empathy can lead to biases related to the Representativeness Heuristic. Actions motivated by empathy often fail to take the broader picture into account; the spotlight doesn’t encourage us to consider base rates or sample size when we make our decisions. Instead, we are motivated by positive emotions for a specific individual or small group:

Empathy is limited as well in that it focuses on specific individuals. Its spotlight nature renders it innumerate and myopic: It doesn’t resonate properly to the effects of our actions on groups of people, and it is insensitive to statistical data and estimated costs and benefits.

Part of the challenge that exists with empathy is this innumeracy that Bloom describes. It is impossible for us to form genuine empathic connections with abstractions. Conversely, if we see the suffering of one, empathy can motivate us to help make it stop. As Mother Teresa said, “If I look at the mass, I will never act. If I look at the one, I will.” This is what psychologists call “the identifiable victim effect.”

Perhaps an example will help illustrate. On October 17, 1987, 18-month-old Jessica McClure fell 22 feet down an eight-inch-diameter well in the backyard of her home in Midland, Texas. Over the next 2 ½ days, fire, police, and volunteer rescuers worked around the clock to save her. Media coverage of the emergency was broadcast all over the world, and Jessica McClure became internationally known as “Baby Jessica,” prompting then-President Ronald Reagan to proclaim that “…everybody in America became the godmothers and godfathers of Jessica while this was going on.” The intense coverage and global awareness led to an influx of donations, resulting in an $800,000 trust being established in Jessica’s name.

What prompted this massive outpouring of concern and support? There are millions of children in need every day all over the world. How many of the people who sent donations to Baby Jessica had ever tried to help these faceless children? In the case of Baby Jessica, they had an identifiable victim, and empathy motivated many of them to help Jessica and her family. They could imagine what it might feel like for those poor parents and they felt genuine concern for the child’s future; all the other needy children around the world were statistical abstractions. This ability to identify and put a face on the suffering child and their family enables us to experience an empathic response with them, but the random children and their families remain empathically out of reach.

None of this is to say that rescuers should not have worked to save Jessica McClure — she was a real-world example of Mencius’s proverbial drowning child — but there are situations every day where we choose to help individuals at the cost of the continued suffering of others. Our actions often have diffuse and unknowable impacts.

If our concern is driven by thoughts of the suffering of specific individuals, then it sets up a perverse situation in which the suffering of one can matter more than the suffering of a thousand.

Furthermore, not only are we more likely to empathize with the identifiable victim, but our empathy is also limited in scale. If we hear that an individual in a faraway land is suffering, we may have an empathic response, but will that response increase proportionally if we learn that thousands or millions of people are suffering? Adam Smith got to the heart of this question in The Theory of Moral Sentiments when he wrote:

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labors of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquility, as if no such accident had happened.

Empathy can inadvertently motivate us to act to save the one at the expense of the many. While the examples provided are by no means clear-cut issues, it is worth considering how the morality or goodness of our actions to help the few may have negative consequences for the many.

Charlie Munger has written and spoken about the Kantian Fairness Tendency, in which he suggests that for certain systems to be moral to the many, they must be unfair to the few.

Empathy and Reason

We are emotional creatures, then, but we are also rational beings, with the capacity for rational decision-making. We can override, deflect, and overrule our passions, and we often should do so. It’s not hard to see this for feelings like anger and hate—it’s clear that these can lead us astray, that we do better when they don’t rule us and when we are capable of circumventing them.

While we need kindness and compassion and we should strive to be good people making good decisions, we are not necessarily well served by empathy in this regard; emotional empathy’s negatives often outweigh its positives. Instead, we should rely on our capacity to reason and control our emotions. Empathy is not something that can be removed or ignored; it is a normal function of our brains after all, but we can and do combine reason with our natural instincts and intuitions:

The idea that human nature has two opposing facets—emotion versus reason, gut feelings versus careful, rational deliberation—is the oldest and most resilient psychological theory of all. It was there in Plato, and it is now the core of the textbook account of cognitive processes, which assumes a dichotomy between “hot” and “cold” mental processes, between an intuitive “System 1” and a deliberative “System 2.”

We know from Daniel Kahneman’s Thinking, Fast and Slow that these two systems are not inherently separate in practice. They are both functioning in our brains at the same time.

Some decisions are made faster due to heuristics and intuitions from experiences or our biology, while other decisions are made in a more deliberative and slow fashion using reason. Bloom writes:

We go through a mental process that is typically called “choice,” where we think about the consequences of our actions. There is nothing magical about this. The neural basis of mental life is fully compatible with the existence of conscious deliberation and rational thought—with neural systems that analyze different options, construct logical chains of argument, reason through examples and analogies, and respond to the anticipated consequences of actions.

We have an impulsive, emotional, and intuitive decision-making system in System 1 and a deliberative, reasoning, and (sometimes) rational decision-making system in System 2.

We will always have emotional reactions, but on average our decision making will be better served by improving our ability to reason rather than by leveraging our ability to empathize. One way to increase our ability to reason is to focus on improving our self-control:

Self-control can be seen as the purest embodiment of rationality in that it reflects the working of a brain system (embedded in the frontal lobe, the part of the brain that lies behind the forehead) that restrains our impulsive, irrational, or emotive desires.

While Bloom is unabashedly against empathy as an inherent force for good in the world, he is also a firm supporter of being and doing good. He believes that the “feeling with” nature of emotional empathy leads us to make biased and bad decisions despite our best intentions and that we should instead foster and encourage the “caring for” nature of compassion while combining it with our intelligence, self-control, and ability to reason:

… none of this is to deny the importance of traits such as compassion and kindness. We want to nurture these traits in our children and work to establish a culture that prizes and rewards them. But they are not enough. To make the world a better place, we would also want to bless people with more smarts and more self-control. These are central to leading a successful and happy life—and a good and moral one.

Do Algorithms Beat Us at Complex Decision Making?

Algorithms are all the rage these days. AI researchers are taking more and more ground from humans in areas like rules-based games, visual recognition, and medical diagnosis. However, the idea that algorithms make better predictive decisions than humans in many fields is a very old one.

In 1954, the psychologist Paul Meehl published a controversial book with a boring-sounding name: Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence.

The controversy? After reviewing the data, Meehl claimed that mechanical, data-driven algorithms could better predict human behavior than trained clinical psychologists — and with much simpler criteria. He was right.

The passing of time has not been friendly to humans in this game: studies continue to show that algorithms do a better job than experts in a range of fields. In Thinking, Fast and Slow, Daniel Kahneman details a selection of fields in which human judgment has proven inferior to algorithms:

The range of predicted outcomes has expanded to cover medical variables such as the longevity of cancer patients, the length of hospital stays, the diagnosis of cardiac disease, and the susceptibility of babies to sudden infant death syndrome; economic measures such as the prospects of success for new businesses, the evaluation of credit risks by banks, and the future career satisfaction of workers; questions of interest to government agencies, including assessments of the suitability of foster parents, the odds of recidivism among juvenile offenders, and the likelihood of other forms of violent behavior; and miscellaneous outcomes such as the evaluation of scientific presentations, the winners of football games, and the future prices of Bordeaux wine.

The connection between them? Says Kahneman: “Each of these domains entails a significant degree of uncertainty and unpredictability.” He called them “low-validity environments,” and in those environments simple algorithms matched or outplayed humans and their “complex” decision-making criteria essentially every time.

***

A typical case is described in Michael Lewis’ book on the relationship between Daniel Kahneman and Amos Tversky, The Undoing Project. He writes of work done at the Oregon Research Institute on radiologists and their x-ray diagnoses:

The Oregon researchers began by creating, as a starting point, a very simple algorithm, in which the likelihood that an ulcer was malignant depended on the seven factors doctors had mentioned, equally weighted. The researchers then asked the doctors to judge the probability of cancer in ninety-six different individual stomach ulcers, on a seven-point scale from “definitely malignant” to “definitely benign.” Without telling the doctors what they were up to, they showed them each ulcer twice, mixing up the duplicates randomly in the pile so the doctors wouldn’t notice they were being asked to diagnose the exact same ulcer they had already diagnosed. […] The researchers’ goal was to see if they could create an algorithm that would mimic the decision making of doctors.

This simple first attempt, [Lewis] Goldberg assumed, was just a starting point. The algorithm would need to become more complex; it would require more advanced mathematics. It would need to account for the subtleties of the doctors’ thinking about the cues. For instance, if an ulcer was particularly big, it might lead them to reconsider the meaning of the other six cues.

But then UCLA sent back the analyzed data, and the story became unsettling. (Goldberg described the results as “generally terrifying”.) In the first place, the simple model that the researchers had created as their starting point for understanding how doctors rendered their diagnoses proved to be extremely good at predicting the doctors’ diagnoses. The doctors might want to believe that their thought processes were subtle and complicated, but a simple model captured these perfectly well. That did not mean that their thinking was necessarily simple, only that it could be captured by a simple model.

More surprisingly, the doctors’ diagnoses were all over the map: The experts didn’t agree with each other. Even more surprisingly, when presented with duplicates of the same ulcer, every doctor had contradicted himself and rendered more than one diagnosis: These doctors apparently could not even agree with themselves.

[…]

If you wanted to know whether you had cancer or not, you were better off using the algorithm that the researchers had created than you were asking the radiologist to study the X-ray. The simple algorithm had outperformed not merely the group of doctors; it had outperformed even the single best doctor.

The fact that doctors (and psychiatrists, and wine experts, and so forth) cannot even agree with themselves is a problem called decision-making “noise”: given the same set of data twice, we make two different decisions. Noise. Internal contradiction.

Algorithms win, at least partly, because they don’t do this: The same inputs generate the same outputs every single time. They don’t get distracted, they don’t get bored, they don’t get mad, they don’t get annoyed. Basically, they don’t have off days. And they don’t fall prey to the litany of biases that humans do, like the representativeness heuristic.

The algorithm doesn’t even have to be a complex one. As demonstrated above with radiology, simple rules work just as well as complex ones. Kahneman himself addresses this in Thinking, Fast and Slow when discussing Robyn Dawes’s research on the superiority of simple algorithms using a few equally-weighted predictive variables:

The surprising success of equal-weighting schemes has an important practical implication: it is possible to develop useful algorithms without prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula: Frequency of lovemaking minus frequency of quarrels.

You don’t want your result to be a negative number.

The important conclusion from this research is that an algorithm that is constructed on the back of an envelope is often good enough to compete with an optimally weighted formula, and certainly good enough to outdo expert judgment. This logic can be applied in many domains, ranging from the selection of stocks by portfolio managers to the choices of medical treatments by doctors or patients.
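To make the idea concrete, here is a minimal sketch of a Dawes-style equal-weighting model in Python: standardize each cue, give every cue the same weight, and sum. The cue names and data below are hypothetical illustrations, not the actual variables from Dawes’s research or the Oregon study.

```python
from statistics import mean, stdev

def equal_weight_scores(cases, cues):
    """Dawes-style improper linear model: standardize each cue across cases,
    weight every cue equally, and sum. Assumes cues are oriented so that
    higher values point toward the outcome being predicted."""
    stats = {c: (mean(x[c] for x in cases), stdev(x[c] for x in cases)) for c in cues}
    return [sum((x[c] - stats[c][0]) / stats[c][1] for c in cues) for x in cases]

# Dawes's memorable marital-stability formula is the two-cue special case,
# with quarrels entering negatively:
def marital_stability(lovemaking_per_week, quarrels_per_week):
    return lovemaking_per_week - quarrels_per_week

# Hypothetical usage: rank three cases on seven equally weighted cues.
cases = [
    {"size": 3.1, "regularity": 2.0, "depth": 1.2, "age": 61, "pain": 4.0, "mass": 0.0, "bleeding": 1.0},
    {"size": 1.4, "regularity": 4.5, "depth": 0.3, "age": 45, "pain": 1.0, "mass": 0.0, "bleeding": 0.0},
    {"size": 4.8, "regularity": 1.0, "depth": 2.5, "age": 70, "pain": 5.0, "mass": 1.0, "bleeding": 1.0},
]
print(equal_weight_scores(cases, list(cases[0])))
```

Notice that nothing here required fitting weights to outcome data; that is precisely the back-of-the-envelope quality Kahneman describes.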

Stock selection, certainly a “low-validity environment,” is an excellent example of the phenomenon.

As John Bogle pointed out to the world in the 1970s, a point which has only strengthened with time, the vast majority of human stock-pickers cannot outperform a simple S&P 500 index fund, an investment fund that operates on strict algorithmic rules about which companies to buy and sell and in what quantities. The rules of the index aren’t complex, and many people have tried to improve on them with less success than might be imagined.

***

Another interesting area where this holds is interviewing and hiring, a notoriously difficult “low-validity” environment. Even elite firms often don’t do it that well, as has been well documented.

Fortunately, if we take heed of the advice of the psychologists, operating in a low-validity environment has rules that can work very well. In Thinking, Fast and Slow, Kahneman recommends fixing your hiring process by doing the following (or some close variant) in order to replicate the success of the algorithms:

Suppose you need to hire a sales representative for your firm. If you are serious about hiring the best possible person for the job, this is what you should do. First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Don’t overdo it — six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of questions for each trait and think about how you will score it, say on a 1-5 scale. You should have an idea of what you will call “very weak” or “very strong.”

These preparations should take you half an hour or so, a small investment that can make a significant difference in the quality of the people you hire. To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. To evaluate each candidate, add up the six scores. […] Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better—try to resist your wish to invent broken legs to change the ranking. A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as “I looked into his eyes and liked what I saw.”
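As a rough sketch of those mechanics in Python: the first three trait names come straight from Kahneman’s example, while the last three are hypothetical placeholders, and the 1–5 scale and six-trait count follow his description.

```python
# A sketch of Kahneman's structured hiring procedure: score six traits
# independently on a 1-5 scale, add them up, and commit to the highest total.
TRAITS = [
    "technical proficiency",  # named in Kahneman's example
    "engaging personality",   # named in Kahneman's example
    "reliability",            # named in Kahneman's example
    "work ethic",             # hypothetical placeholder
    "written communication",  # hypothetical placeholder
    "judgment",               # hypothetical placeholder
]

def total_score(scores):
    """Add up the six trait scores, insisting on complete 1-5 data."""
    assert set(scores) == set(TRAITS), "score every trait, one at a time"
    assert all(1 <= s <= 5 for s in scores.values()), "use the 1-5 scale"
    return sum(scores.values())

def pick_candidate(candidates):
    """Hire the highest total score: no overrides, no invented broken legs."""
    return max(candidates, key=lambda name: total_score(candidates[name]))
```

The discipline lives less in the code than in the commitment to follow its output.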

In the battle of man vs. algorithm, unfortunately, man often loses. The promise of Artificial Intelligence is just that. So if we’re going to be smart humans, we must learn to be humble in situations where our intuitive judgment simply is not as good as a set of simple rules.

Blog Posts, Book Reviews, and Abstracts: On Shallowness

We’re quite glad that you read Farnam Street, and we hope we’re always offering you a massive amount of value. (If not, email us and tell us what we can do more effectively.)

But there’s a message all of our readers should appreciate: Blog posts are not enough to generate the deep fluency you need to truly understand or get better at something. We offer a starting point, not an end point.

This goes just as well for book reviews, abstracts, CliffsNotes, and a good deal of short-form journalism.

This is a hard message for some who want a shortcut. They want the “gist” and the “high-level takeaways” without doing the work or eating any of the broccoli. They think that’s all it takes: check out a 5-minute read, and instantly their decision making and understanding of the world will improve right-quick. Most blogs, of course, encourage this kind of shallowness, because it makes you feel that the whole thing is pretty easy.

Here’s the problem: the world is more complex than that. It doesn’t actually work this way. The nuanced detail behind every “high-level takeaway” gives you the context needed to use it in the real world: the exceptions, the edge cases, and the contradictions.

Let me give you an example.

A high-level takeaway from reading Kahneman’s Thinking, Fast and Slow would be that we are subject to something he and Amos Tversky call the Representativeness Heuristic. We create models of things in our heads and then fit our real-world experiences to the model, often over-fitting drastically. A very useful idea.

However, that’s not enough. There are so many follow-up questions. Where do we make the most mistakes? Why does our mind create these models? Where is this generally useful? What are the nuanced examples of where this tendency fails us? And so on. Just knowing about the Heuristic, knowing that it exists, won’t perform any work for you.

Or take the rise of the human species as laid out by Yuval Harari. It’s great to post on his theory: how myths laid the foundation for our success, how “natural” is probably a useless concept the way it’s typically used, and how biology is the great enabler.

But Harari’s book itself contains the relevant detail that fleshes all of this out. And further, his bibliography is full of resources that demand your attention to get even more backup. How did he develop that idea? You have to look to find out.

Why do all this? Because without the massive, relevant detail, your mind is built on a house of cards.

What Farnam Street and a lot of other great resources give you is something like a brief map of the territory.

Welcome to Colonial Williamsburg! Check out the re-enactors, the museum, and the theatre. Over there is the Revolutionary City. Gettysburg is 4 hours north. Washington D.C. is closer to 2.5 hours.

Great – now you have the lay of the land. Time to dig in and actually learn about the American Revolution. (This book is awesome, if you actually want to do that.)

Going back to Kahneman, one of his and Tversky’s great findings was the concept of the Availability Heuristic. Basically, the mind operates on what it has close at hand.

As Kahneman puts it, “An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”

That means that in the moment of decision making, when you’re thinking hard on some complex problem you face, it’s unlikely that your mind is working all that successfully without the details. It doesn’t have anything to draw on. It’d be like a chess player who read a book about great chess players, but who hadn’t actually studied all of their moves. Not very effective.

The great difficulty, of course, is that we lack the time to dig deep into everything. Opportunity costs and trade-offs are quite real.

That’s why you must develop excellent filters. What’s worth learning this deeply? We think it’s the first-principle style mental models: the great ideas from physical systems, biological systems, and human systems. The new-new thing you’re studying is probably either A. Wrong or B. Built on one of those great ideas anyway. Farnam Street, in a way, is just a giant filtering mechanism to get you started down the hill.

But don’t stop there. Don’t stop at the starting line. Resolve to increase your depth and stop thinking you can have it all in 5 minutes or less. Use our stuff, and whoever else’s stuff you like, as an entrée to the real thing.

Breaking the Rules: Moneyball Edition

Most of Simple Rules, by Donald Sull and Kathleen Eisenhardt, is about identifying a problem area (or an area ripe for “simple rules”) and then walking you through creating your own set of rules. It’s a useful mental process.

An ideal situation for simple rules is something repetitive that gives you constant feedback, so you can course-correct as you go. But what if your rules stop working and you need to start over completely?

Simple Rules recounts the well-known Moneyball tale in its examination of this process:

The story begins with Sandy Alderson. Alderson, a former Marine with no baseball background, became the A’s general manager in 1983. Unlike baseball traditionalists, Alderson saw scoring runs as a process, not an outcome, and imagined baseball as a factory with a flow of players moving along the bases. This view led Alderson, and later his protégé and replacement, Billy Beane, to the insight that most teams overvalue batting average (hits only) and miss the relevance of on-base percentage (walks plus hits) to keeping the runners moving. Like many insightful rules, this boundary rule of picking players with a high on-base percentage has subtle second- and third-order effects. Hitters with a high on-base percentage are highly disciplined (i.e., patient, with a good eye for strikes). This means they get more walks, and their reputation for discipline encourages pitchers to throw strikes, which are easier to hit. They tire out pitchers by making them throw more pitches overall, and disciplined hitting does not erode much with age. These and other insights are at the heart of what author Michael Lewis famously described as moneyball.
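To make the batting-average-versus-OBP distinction concrete, here is a minimal Python sketch, not from the book, using the standard stat definitions and entirely hypothetical season lines. It shows how a patient hitter can look mediocre by batting average while being far more valuable by on-base percentage.

```python
# Minimal sketch: why batting average (AVG) and on-base percentage (OBP)
# can rank the same hitter very differently. All numbers are hypothetical.

def batting_average(hits, at_bats):
    # AVG counts only hits per official at-bat; walks don't appear at all.
    return hits / at_bats

def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    # Standard MLB definition: every way of reaching base counts.
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# A patient hitter: modest hit totals, lots of walks.
patient = dict(hits=130, walks=95, hbp=5, at_bats=500, sac_flies=5)
# A free swinger: more hits, almost no walks.
swinger = dict(hits=150, walks=20, hbp=2, at_bats=520, sac_flies=3)

for name, line in [("patient hitter", patient), ("free swinger", swinger)]:
    avg = batting_average(line["hits"], line["at_bats"])
    obp = on_base_percentage(**line)
    print(f"{name:14s} AVG={avg:.3f}  OBP={obp:.3f}")

# The free swinger wins on AVG (.288 vs .260), but the patient hitter
# reaches base far more often (OBP .380 vs .316), which is the quantity
# the A's cared about for keeping runners moving.
```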

The Oakland A’s did everything right: they examined the issues, figured out which areas would most benefit from a set of simple rules, and implemented them. The problem was that the rules were easy to copy.

They were operating in a Red Queen Effect world where everyone around them was co-evolving, where running fast was just enough to get ahead temporarily, but not permanently. The Red Sox were the first and most successful club to copy the A’s:

By 2004, a free-spending team, the Boston Red Sox, co-opted the A’s principles and won the World Series for the first time since 1918. In contrast, the A’s went into decline, and by 2007 they were losing more games than they were winning. Moneyball had struck out.

What can we do when the rules stop working? 

We must break them.

***

When the A’s brought in Sandy Alderson, he was an outsider with no baseball background who could look at the problem in a new light. So how could that be replicated?

In 2009, the team brought in Farhan Zaidi as director of baseball operations. Zaidi had spent most of his life with a healthy obsession with baseball, but he had a unique background: a PhD in behavioral economics.

He set about breaking the old rules and crafting new ones. Like Andy Grove once did at Intel, Zaidi helped the team turn and face a new reality. Sull and Eisenhardt consider this a key trait:

To respond effectively to major change, it is essential to investigate the new situation actively, and create a reimagined vision that utilizes radically different rules.

The right choice is often to move to the new rules as quickly as possible. Performance will typically decline in the short run, but the transition to the new reality will be faster and more complete in the long run. In contrast, changing slowly often results in an awkward combination of the past and the future, with neither fitting the other nor working well.

Beane and Zaidi first did some housecleaning: they fired the team’s manager. Then they began breaking the old Moneyball rules, such as the rule against drafting high-school players. They also decided to pay more attention to physical skills like speed and throwing.

In the short term the team performed quite poorly, and fan attendance steadily declined. Yet, once again, against all odds, the A’s finished first in their division in 2012. The change worked.

With a new set of Simple Rules, they became a dominant force in their division once again. 

Reflecting their formidable analytic skills, the A’s brass had a new mindset that portrayed baseball as a financial market rife with arbitrage possibilities and simple rules to match.

One was a how-to rule that dictated exploiting players with splits. Simply put, players with splits have substantially different performances in two seemingly similar situations. A common split is when a player hits very well against right-handed pitchers and poorly against left-handed pitchers, or vice versa. Players with splits are mediocre when they play every game, and are low paid. In contrast, most superstars play well regardless of the situation, and are paid handsomely for their versatility. The A’s insight was that when a team has a player who can perform one side of the split well and a different player who excels at the opposite split, the two positives can create a cheap composite player. So the A’s started using a boundary rule to pick players with splits and a how-to rule to exploit those splits with platooning: putting different players at the same position to take advantage of their splits against right- or left-handed pitching.

If you’re reading this as a baseball fan, you’re probably thinking that exploiting splits isn’t anything new. So why did it have such an effect on their season? Well, no one had pushed it this hard before, which had some nuanced effects that might not have been immediately apparent.

For example, exploiting these splits keeps players healthier during the long 162-game season because they don’t play every day. The rule keeps everyone motivated, because everyone has a role and plays often. And it provides versatility: when someone is injured, players can fill in for each other.
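The arithmetic behind that cheap composite player is worth making concrete. Here is a toy Python sketch with entirely hypothetical numbers (the book gives none), showing how two low-paid hitters with opposite splits can combine into roughly one superstar’s production at a fraction of the cost.

```python
# Toy sketch of the platoon arithmetic. OBP stands in for overall
# performance; all numbers are hypothetical.

# (OBP vs left-handed pitching, OBP vs right-handed pitching, salary in $M)
lefty_masher  = (0.380, 0.290, 2.0)   # great vs LHP, weak vs RHP
righty_masher = (0.285, 0.375, 2.5)   # the opposite split
superstar     = (0.370, 0.370, 18.0)  # no split; paid for versatility

share_vs_rhp = 0.70  # assumed share of plate appearances vs right-handers

# Platoon: each hitter bats only on his strong side of the split.
platoon_obp = (1 - share_vs_rhp) * lefty_masher[0] + share_vs_rhp * righty_masher[1]
platoon_cost = lefty_masher[2] + righty_masher[2]

# The superstar has no split, so his blended OBP is simply 0.370.
superstar_obp = (1 - share_vs_rhp) * superstar[0] + share_vs_rhp * superstar[1]

print(f"platoon   OBP={platoon_obp:.3f}  cost=${platoon_cost:.1f}M")
print(f"superstar OBP={superstar_obp:.3f}  cost=${superstar[2]:.1f}M")

# The composite "player" matches or beats the superstar's production
# at roughly a quarter of the price.
```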

They didn’t stop there. Zaidi and Beane looked at the data and kept rolling out new simple rules that broke with their highly successful Moneyball past.

In 2013 they added a new boundary rule to the player-selection activity: pick fly-ball hitters, meaning hitters who tend to hit the ball in the air and out of the infield (in contrast with ground-ball hitters). Sixty percent of the A’s at-bats were by fly-ball hitters in 2013, the highest percentage in major-league baseball in almost a decade, and the A’s had the highest ratio of fly balls to ground balls, by far. Why fly-ball hitters?

Since one of ten fly balls is a home run, fly-ball hitters hit more home runs: an important factor in winning games. Fly-ball hitters also avoid ground-ball double plays, a rally killer if ever there was one. They are particularly effective against ground-ball pitchers because they tend to swing underneath the ball, taking away the advantage of those pitchers. In fact, the A’s fly-ball hitters batted an all-star-caliber .302 against ground-ball pitchers in 2013 on their way to their second consecutive division title, despite having the fourth-lowest payroll in major-league baseball.
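A quick back-of-the-envelope sketch shows why the one-in-ten figure compounds over a season. The home-run rate comes from the passage above; the balls-in-play totals and fly-ball rates are assumptions for illustration only.

```python
# Back-of-the-envelope sketch of the fly-ball logic above.
hr_per_fly_ball = 0.10   # "one of ten fly balls is a home run" (from the quote)
balls_in_play = 450      # assumed balls in play per hitter per season

# Assumed fly-ball rates for two contact profiles.
hitters = {"fly-ball hitter": 0.55, "ground-ball hitter": 0.35}

for label, fly_rate in hitters.items():
    expected_hr = balls_in_play * fly_rate * hr_per_fly_ball
    print(f"{label:18s} expected HR ~ {expected_hr:.0f}")

# Roughly 25 vs 16 expected home runs from the same number of balls in
# play: a meaningful gap, before even counting avoided double plays.
```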

Unfortunately, the effectiveness of the new rules was short-lived: in 2014 the A’s fell to second place, and they have struggled for the last two seasons. Pulling off two Cinderella stories is a great achievement, but that edge is hard to maintain.

This wonderful demonstration of the Red Queen Effect in sports can be described as an “arms race.” As everyone tries to get ahead, the simultaneous, continual improvement creates a strange equilibrium, and those with more limited resources must work even harder just to keep up as the pack moves ahead.

Even though they have adapted and created some wonderful “Simple Rules” in the past, the A’s (and all of their competitors) must stay in the race in order to return to the top: no “rule” will let them rest on their laurels. Second-level thinking and a little real-world experience show this to be true: those who prosper consistently think deeply, reevaluate, adapt, and continually evolve. That is the nature of a competitive world.