Tag: microeconomics

Prisoner’s Dilemma: What Game Are You Playing?

In this classic game theory experiment, you must decide: rat out another for personal benefit, or cooperate? The answer may be more complicated than you think.

***

What does it take to make people cooperate with each other when the incentives to act primarily out of self-interest are often so strong?

The Prisoner’s Dilemma is a thought experiment originating from game theory. Designed to analyze the ways in which we cooperate, it strips away the variations between specific situations where people are called to overcome the urge to be selfish. Political scientist Robert Axelrod lays down its foundations in The Evolution of Cooperation:

Under what conditions will cooperation emerge in a world of egoists without a central authority? This question has intrigued people for a long time. And for good reason. We all know that people are not angels and that they tend to look after themselves and their own first. Yet we also know that cooperation does occur and that our civilization is based on it. But in situations where each individual has an incentive to be selfish, how can cooperation ever develop?

…To make headway in understanding the vast array of specific situations which have this property, a way is needed to represent what is common to these situations without becoming bogged down in the details unique to each…the famous Prisoner’s Dilemma game.

The thought experiment goes as such: two criminals are in separate cells, unable to communicate, accused of a crime they both participated in. The police do not have enough evidence to convict them, though they are certain enough of their guilt to want to ensure both spend time in prison. So they offer the prisoners a deal: each can accuse the other of the crime, with the following conditions:

  • If both prisoners say the other did it, each will serve two years in prison.
  • If one prisoner says the other did it and the other stays silent, the accused will serve three years and the accuser zero.
  • If both prisoners stay silent, each will serve one year in prison.

In game theory, the altruistic behavior (staying silent) is called “cooperating,” while accusing the other is called “defecting.”

What should they do?

If they were able to communicate and they trusted each other, the rational choice would be to stay silent; that way each serves less time in prison than they would otherwise. But how can each know the other won’t accuse them? After all, people tend to act out of self-interest. The cost of being the only one to stay silent is too high. The expected outcome when the game is played is that both accuse the other and serve two years. (In the real world, we doubt it would end there. After they served their time, it’s not hard to imagine each of them still being upset. Two years is a lot of time for a spring to coil in a negative way. Perhaps they spend the rest of their lives sabotaging each other.)
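
To see why, it helps to lay the sentences out and check each prisoner’s best response. Here is a minimal sketch in Python using the numbers from the list above (each prisoner wants to minimize the years they serve):

```python
# Prison sentences in years, keyed by (my_choice, their_choice).
# Each prisoner wants to MINIMIZE the years they serve.
SENTENCES = {
    ("silent", "silent"): 1,  # both cooperate
    ("silent", "accuse"): 3,  # I stay silent, they accuse me
    ("accuse", "silent"): 0,  # I accuse, they stay silent
    ("accuse", "accuse"): 2,  # both defect
}

def best_response(their_choice):
    """Return the choice that minimizes my sentence, given the other prisoner's choice."""
    return min(("silent", "accuse"), key=lambda mine: SENTENCES[(mine, their_choice)])

for their_choice in ("silent", "accuse"):
    print(f"If the other prisoner plays {their_choice!r}, my best response is {best_response(their_choice)!r}")

# Accusing is the better choice whatever the other prisoner does (a dominant
# strategy), so both accuse and serve two years -- even though mutual silence
# would have cost each of them only one.
```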

The Iterated Prisoner’s Dilemma

A more complex form of the thought experiment is the iterated Prisoner’s Dilemma, in which we imagine the same two prisoners being in the same situation multiple times. In this version of the experiment, they are able to adjust their strategy based on the previous outcome.

If we repeat the scenario, it may seem as if the prisoners will begin to cooperate. But this doesn’t make sense in game theory terms. When they know how many times the game will repeat, both have an incentive to accuse on the final round, seeing as there can be no retaliation. Knowing the other will surely accuse on the final round, both have an incentive to accuse on the penultimate round—and so on, back to the start.

In Business Economics, Gregory Mankiw summarizes how difficult it is to maintain cooperation:

To see how difficult it is to maintain cooperation, imagine that, before the police captured . . . the two criminals, [they] had made a pact not to confess. Clearly, this agreement would make them both better off if they both live up to it, because they would each spend only one year in jail. But would the two criminals in fact remain silent, simply because they had agreed to? Once they are being questioned separately, the logic of self-interest takes over and leads them to confess. Cooperation between the two prisoners is difficult to maintain because cooperation is individually irrational.

However, cooperative strategies can evolve if we model the game as having random or infinite iterations. If each prisoner knows they will likely interact with the other in the future, with no knowledge or expectation that their relationship will have a definite end, cooperation becomes significantly more likely. If we imagine that the prisoners will go to the same jail or run in the same circles once released, we can understand how the incentive to cooperate might increase. If you’re a defector, running into the person you defected on is awkward at best and leaves you sleeping with the fishes at worst.
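
To see how repetition with no known endpoint changes the picture, here is a minimal simulation sketch. The payoff numbers are a standard illustrative convention for the iterated game, not taken from the prison sentences above, and the strategies are the classic tit for tat (cooperate first, then copy the opponent’s last move) and always defect:

```python
import random

# Standard illustrative payoffs (points earned per round; higher is better).
PAYOFF = {
    ("C", "C"): (3, 3),  # both cooperate
    ("C", "D"): (0, 5),  # lone cooperator gets exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # both defect
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, continue_prob=0.9):
    """Play rounds until a random stopping point, so neither side knows which round is last."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    while True:
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        moves_a.append(a)
        moves_b.append(b)
        if random.random() > continue_prob:
            return score_a, score_b

random.seed(1)
print("tit for tat vs tit for tat:  ", play(tit_for_tat, tit_for_tat))
print("tit for tat vs always defect:", play(tit_for_tat, always_defect))
```

This is essentially the kind of tournament Axelrod ran for The Evolution of Cooperation, where the simple tit-for-tat strategy famously came out on top.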

Real-world Prisoner’s Dilemmas

We can use the Prisoner’s Dilemma as a means of understanding many real-world situations based on cooperation and trust. As individuals, being selfish tends to benefit us, at least in the short term. But when everyone is selfish, everyone suffers.

In The Prisoner’s Dilemma, Martin Peterson asks readers to imagine two car manufacturers, Row Cars and Col Motors. As the only two firms in their market, the price each charges is directly connected to the price the other charges. If one opts to sell at a higher price, it will sell fewer cars as customers switch to the competitor. If one sells at a lower price, it will sell more cars at a lower profit margin, gaining customers from the other. In Peterson’s example, if both set their prices high, both will make $100 million per year. Should one decide to set its prices lower, it will make $150 million while the other makes nothing. If both set low prices, both make $20 million. Peterson writes:

Imagine that you serve on the board of Row Cars. In a board meeting, you point out that irrespective of what Col Motors decides to do, it will be better for your company to opt for low prices. This is because if Col Motors sets its price low, then a profit of $20 million is better than $0, and if Col Motors sets its price high, then a profit of $150 million is better than $100 million.
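
A quick sketch makes it easy to check this reasoning against Peterson’s numbers (Row Cars’ annual profit in millions of dollars, depending on both firms’ prices):

```python
# Row Cars' profit in $ millions, keyed by (row_price, col_price),
# using the figures from Peterson's example above.
ROW_PROFIT = {
    ("high", "high"): 100,
    ("high", "low"): 0,
    ("low", "high"): 150,
    ("low", "low"): 20,
}

for col_price in ("high", "low"):
    best = max(("high", "low"), key=lambda row_price: ROW_PROFIT[(row_price, col_price)])
    print(f"If Col Motors prices {col_price}, Row Cars does best pricing {best}")

# Pricing low is better for Row Cars whatever Col Motors does (a dominant
# strategy). Col Motors reasons symmetrically, so both end up at $20 million
# instead of the $100 million each would earn by keeping prices high.
```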

Gregory Mankiw gives another real-world example in Microeconomics:

Consider an oligopoly with two members, called Iran and Saudi Arabia. Both countries sell crude oil. After prolonged negotiation, the countries agree to keep oil production low in order to keep the world price of oil high. After they agree on production levels, each country must decide whether to cooperate and live up to this agreement or to ignore it and produce at a higher level. The following image shows how the profits of the two countries depend on the strategies they choose.

Suppose you are the leader of Saudi Arabia. You might reason as follows:

I could keep production low as we agreed, or I could raise my production and sell more oil on world markets. If Iran lives up to the agreement and keeps its production low, then my country earns a profit of $60 billion with high production and $50 billion with low production. In this case, Saudi Arabia is better off with high production. If Iran fails to live up to the agreement and produces at a high level, then my country earns $40 billion with high production and $30 billion with low production. Once again, Saudi Arabia is better off with high production. So, regardless of what Iran chooses to do, my country is better off reneging on our agreement and producing at a high level.

Producing at a high level is a dominant strategy for Saudi Arabia. Of course, Iran reasons in exactly the same way, and so both countries produce at a high level. The result is the inferior outcome (from both Iran and Saudi Arabia’s standpoint) with low profits in each country. This example illustrates why oligopolies have trouble maintaining monopoly profits. The monopoly outcome is jointly rational for the oligopoly, but each oligopolist has an incentive to cheat. Just as self-interest drives the prisoners in the prisoners’ dilemma to confess, self-interest makes it difficult for the oligopoly to maintain the cooperative outcome with low production, high prices, and monopoly profits.
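
The payoff table Mankiw refers to is not reproduced here, but it can be reconstructed from the figures in the quote if we assume Iran’s payoffs mirror Saudi Arabia’s (which is what “Iran reasons in exactly the same way” implies). A minimal sketch:

```python
# Annual profits in $ billions, keyed by (saudi_production, iran_production).
# Saudi Arabia's figures come straight from the quote; Iran's are assumed to
# be the mirror image.
PROFIT = {
    ("low", "low"): (50, 50),    # both honor the agreement
    ("low", "high"): (30, 60),
    ("high", "low"): (60, 30),
    ("high", "high"): (40, 40),  # both renege
}

def saudi_best_response(iran_choice):
    """Saudi Arabia's profit-maximizing production level, given Iran's choice."""
    return max(("low", "high"), key=lambda saudi: PROFIT[(saudi, iran_choice)][0])

for iran_choice in ("low", "high"):
    print(f"If Iran produces {iran_choice}, Saudi Arabia does best producing "
          f"{saudi_best_response(iran_choice)}")

# High production dominates for both countries, so the outcome is ($40B, $40B):
# worse for each than the ($50B, $50B) they agreed to.
```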

Other examples of prisoners’ dilemmas include arms races, advertising, and common resources (see The Tragedy of the Commons). Understanding the Prisoner’s Dilemma is an important part of understanding the dynamics of cooperation, an extremely useful mental model.

Thinking of life as an iterative game changes how you play. Positioning yourself for the future carries more weight than “winning” in the moment.

Externalities: Why We Can Never Do “One Thing”

No action exists in a vacuum. Every action sends out ripples with consequences we can and can’t see. Here are the three types of externalities that can help us guide our actions so they don’t come back to bite us.

***

An externality affects someone without them agreeing to it. As with unintended consequences, externalities can be positive or negative. Understanding the types of externalities and the impact they have in our lives can help us improve our decision making and how we interact with the world.

Externalities provide useful mental models for understanding complex systems. They show us that systems don’t exist in isolation from other systems. Because externalities affect uninvolved third parties, they are a form of market failure: an inefficient allocation of resources.

We both create and are subject to externalities. Most are very minor but compound over time. They can inflict numerous second-order effects. Someone reclines their seat on an airplane. They get the benefit of comfort. The person behind bears the cost of discomfort by having less space. One family member leaves their dirty dishes in the sink. They get the benefit of using the plate. Someone else bears the cost of washing it later. We can’t expect to interact with any system without repercussions. Over time, even minor externalities can cause significant strain in our lives and relationships.

The First Law of Ecology

To understand externalities it is first useful to consider second-order consequences. In Filters Against Folly, Garrett Hardin describes what he considers to be the First Law of Ecology: We can never do one thing. Whenever we interact with a system, we need to ask, “And then what? What will the wider repercussions of our actions be?” There is bound to be at least one externality.

Hardin gives the example of the Prohibition Amendment in the U.S. In 1920, lawmakers banned the production and sale of alcoholic beverages throughout the entire country. This was in response to an extended campaign by those who believed alcohol was evil. It wasn’t enough to restrict its consumption—it needed to go.

The addition of 61 words to the American Constitution changed the social and legal landscape for over a decade. Policymakers presumably thought they could make the change and people would stop drinking. But Prohibition led to numerous externalities. Alcohol was an important part of many people’s lives, and few were willing to suddenly give it up without a fight. The demand was more than strong enough to ensure a black-market supply emerged.

Wealthy people stockpiled alcohol in their homes before the ban went into effect. Thousands of speakeasies and gin joints flourished. Walgreens grew from 20 stores to 500, in large part due to its sales of ‘medicinal’ whiskey. Former alcohol producers simply sold the ingredients for people to make their own. Gangsters like Al Capone made their fortunes smuggling alcohol and murdered their rivals in the process. Crime gangs undermined official institutions. Tax revenues plummeted. People lost their jobs. Prisons became overcrowded and bribery commonplace. Thousands died from crime and from drinking unsafe homemade alcohol.

Policymakers did not fully ask, “And then what?” before legislating. Drinking did decrease during this time, on average by about half. But this fell far short of what a total ban was supposed to achieve. The second-order consequences outweighed any benefits.

As economist Gregory Mankiw explains in Principles of Microeconomics,

In the presence of externalities, society’s interest in a market outcome extends beyond the well-being of buyers and sellers who participate in the market; it also includes the well-being of bystanders who are affected indirectly…. The market equilibrium is not efficient when there are externalities. That is, the equilibrium fails to maximize the total benefit to society as a whole.

Negative Externalities

Negative externalities can occur during the production or consumption of a service or good. Pollution is a useful example. If a factory pollutes nearby water supplies, it causes harm without incurring costs. The costs to society are high and are not reflected in the price of whatever the factory makes. Economists often view environmental damage as another factor in a production process. But even if pollution is taxed, the harmful effects don’t go away.

Transport and manufacturing release toxins into the environment, harming our health and altering our climate. The reality, though, is that these externalities are hard to see, and it is often difficult to trace them back to their root causes. There’s also the question of whether or not we are responsible for the externalities we create.

Imagine you’re driving down the road. As you go by an apartment, the noise disturbs someone who didn’t agree to it. Your car emits air pollution, which affects everyone living nearby. Each of these small externalities will affect people you don’t see and who didn’t choose them. They won’t receive any compensation from you. Are you really responsible for the externalities you cause? If you’re not being outright careless or malicious, isn’t it just part of life? How much responsibility do we have as individuals, anyway?

Calling something a negative externality can be a convenient way of abdicating responsibility.

Positive Externalities

A positive externality confers an unexpected benefit on a third party. The producer doesn’t set out to create it, nor do they receive compensation for it.

Scientific research often leads to positive externalities. Research findings can have applications beyond their initial scope. The resulting information becomes part of our collective knowledge base. However, the researcher who makes a discovery cannot receive the full benefits. Nor do they necessarily feel entitled to them.

Blaise Pascal and Pierre de Fermat developed probability theory to solve a gambling dispute. Their work went on to inform numerous disciplines (like the field of calculus) and transform our understanding of the world. Probabilities are now a core part of how we think. Pascal and Fermat created a positive externality.

Someone who comes up with an equation cannot expect compensation each time it gets used. As a result, the incentives to invest the time and effort to discover new equations are reduced. Patents and copyright laws change this by allowing creators to protect and profit from their ideas for years before other people can freely use them. We all benefit, and researchers have an incentive to continue their work.

Network effects are an example of a positive externality. Silicon Valley understands this well. Each person who joins a network, like a marketplace app, increases its value to all other users. Those who own the network have an incentive to improve it to encourage new users. Everyone benefits from being able to communicate with more people. While we might not join a new network intending to improve it for other people, that is what normally happens. (On the flip side, network effects can also produce negative externalities, as too many members can decrease the value of a network.)

Positive externalities often lead to the “free rider” problem. When we enjoy something that we aren’t paying for, we tend not to value it. Not paying can remove the incentive to look after a resource and leads to a Tragedy of the Commons situation. As Aristotle put it, “For that which is common to the greatest number has the least care bestowed upon it.” A good portion of online content succumbs to the free rider problem. We enjoy it and yet we don’t pay for it. We expect it to be free and yet, if users weren’t willing to support sites like Farnam Street, they would likely fold, start publishing lower quality articles, or sell readers to advertisers who collect their data. The end result, as we see too frequently, is low-quality content funded by page-view advertising. (This is why we have a membership program. Members of our learning community create a positive externality for non-members by helping support the free content.)

Positional Externalities

Positional externalities are a form of second-order effect. They occur when our decisions change the frame of reference by which other people’s choices and possessions are judged.

For example, consider what happens when a person decides to start staying at the office an hour late. Perhaps they want a promotion and think it will endear them to managers. Parkinson’s Law states that tasks expand to fit the time allocated to them. What this person would otherwise get done by 5pm now takes until 6pm. Staying late becomes their norm. Their co-workers notice and start to stay late too. Before long, staying at the office until 6pm becomes the standard for everyone. Anyone who leaves at 5pm is perceived as lazy. Now that 6pm is the norm, everyone suffers: they are forced to work more without deriving any real benefit. It’s a lose-lose situation.

Someone we know once made an investment with a nearly unlimited return by gaming the system. He worked for an investment firm that valued employees according to a perception of how hard they worked and not necessarily by their results. Each Monday he brought in a series of sport coats and left them in the office. He paid the cleaning staff $20 a week to change the coat hanging on his chair and to turn on his computer. No matter what happened, it appeared he was always the first one into the office even though he often didn’t show up from a “client meeting” until 10. When it came to bonus time, he’d get an enormous return on that $20 investment.

Purchasing luxury goods can create positional externalities. Veblen goods are items we value because of their scarcity and high cost. Diamonds, Lamborghinis, tailor-made suits — owning them is a status symbol, and they lose their value if they become cheaper or if too many people have them. As Luca Lambertini puts it in The Economics of Vertically Differentiated Markets,

The utility derived from consumption is a function of the quantity purchased relative to the average of the society or the reference group to whom the consumer compares.

In other words, a shiny new car seems more valuable if all your friends are driving battered old wrecks. If they have equally (or more) fancy cars, the value of yours drops. At some point, it seems worthless and it’s time to find a new one. In this way, the purchase of a Veblen good confers a positional externality on other people who own it too.

That utility can also be a matter of comparison. A person earning $40,000 a year while their friends earn $30,000 will be happier than one earning $60,000 when their friends earn $70,000. When someone’s salary increases, it raises the bar, giving others a new point of reference.

We can confer positional externalities on ourselves by changing our attitudes. Let’s say someone enjoys wine but is not a connoisseur. A $10 bottle and a $100 bottle make them equally happy. When they decide to go on a course and learn the subtleties and technicalities of fine wines, they develop an appreciation for the $100 wine and a distaste for the $10. They may no longer be able to enjoy a cheap drink because they raised their standards.

Conclusion

Externalities are everywhere. It’s easy to ignore the impact of our decisions—to recline an airplane seat, to stay late at the office, or to drop litter. Eventually, though, someone always ends up paying. Like the villagers in Hardin’s Tragedy of the Commons, who end up with no grass for their animals, we run the risk of ruining a good thing if we don’t take care of it. Keeping the three types of externalities in mind is a useful way to make decisions that won’t come back to bite you. Whenever we interact with a system, we should remember to ask Hardin’s question: and then what?