Tag: Tragedy of the Commons

What Competition in Nature Should Teach Us about Markets

“Though the free-market faithful have long preached that competition creates efficiency, as if it were a law of nature, nature itself teaches a different lesson.”

No tree can afford to opt out of the height competition. If the trees could somehow arrange a pact of friendship to limit their heights, however, each tree, and the forest as a whole, could save energy. This is obviously not possible for trees, but if it were, Dawkins concludes, the “Forest of Friendship [would be] more efficient as a forest.”

Systems of self-interested agents, responding only to local incentives, can easily evolve energy-wasting, unfruitful competitions. Dawkins doesn’t make the obvious connection between free-market theory and freely evolved systems, but you should. Once a way of competing is established, it’s very difficult for individuals not to play along. If we let our economies imitate trees, and most of nature, in practicing unguided free competition, the results will often be suboptimal, for each and for all. Worse, we will miss the main benefit of being human: using reason to coordinate better outcomes.

The way wasteful competition gets entrenched is a worrying example of an entire class of errors in which what passes for rational decisions can create undesirable outcomes. These include the tragedy of the commons, Prisoner’s Dilemma-type games, and Nash equilibria. Applying a narrowly self-maximizing logic yields suboptimal results for everybody.
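The Prisoner’s Dilemma logic can be made concrete with a small sketch. The payoff numbers below are the standard textbook ones, chosen for illustration rather than taken from any of the sources quoted here:

```python
# Illustrative Prisoner's Dilemma payoffs (standard textbook numbers):
# each entry maps (my move, opponent's move) -> (my payoff, opponent's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: payoffs[(my_move, opponent_move)][0])

# Defecting is the best response no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (1, 1) leaves both players worse off than
# mutual cooperation (3, 3). That gap is the "suboptimal result for everybody."
print(payoffs[("defect", "defect")], "<", payoffs[("cooperate", "cooperate")])
```

Mutual defection is the Nash equilibrium here: neither player can improve by changing strategy alone, even though both would prefer the cooperative outcome.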

Still curious? Try reading The Darwin Economy.

The Tragedy Of The Commons

What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others. — Aristotle

The rules pay you to do the wrong thing. — Garrett Hardin

The Tragedy of the Commons is a parable that illustrates why common resources get used more than is desirable from the standpoint of society as a whole.

Garrett Hardin introduces us to the Tragedy of the Commons:

Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy.

As a rational being, each herdsman seeks to maximize his gain. Explicitly or implicitly, more or less consciously, he asks, “What is the utility to me of adding one more animal to my herd?” This utility has one negative and one positive component.

1) The positive component is a function of the increment of one animal. Since the herdsman receives all the proceeds from the sale of the additional animal, the positive utility is nearly +1.

2) The negative component is a function of the additional overgrazing created by one more animal. Since, however, the effects of overgrazing are shared by all the herdsmen, the negative utility for any particular decision-making herdsman is only a fraction of 1.

Adding together the component partial utilities, the rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another. . . . But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit–in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.
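Hardin’s two partial utilities can be put into numbers. The figures below are entirely made up for illustration; the point is only the asymmetry between a private gain and a shared loss:

```python
# A rough sketch of Hardin's arithmetic, with made-up numbers.
# Suppose each extra animal beyond capacity destroys 2.0 "animal-units"
# of grazing value in total, and that cost is shared equally by all herdsmen.

num_herdsmen = 10
overgrazing_cost = 2.0   # total cost per extra animal (assumed)

# Utility to one herdsman of adding one more animal:
gain = 1.0                               # he keeps all proceeds of the sale
loss = overgrazing_cost / num_herdsmen   # he bears only 1/10 of the damage

net_to_individual = gain - loss          # +0.8: adding the animal looks rational
net_to_group = gain - overgrazing_cost   # -1.0: the commons as a whole loses

print(f"individual sees {net_to_individual:+.1f}, group nets {net_to_group:+.1f}")
```

Each herdsman faces the same +0.8, so each adds another animal, and the group-level −1.0 accumulates until the commons is ruined.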

Greg Mankiw, in his Microeconomics text, says:

Consider life in a small medieval town. Of the many economic activities that take place in the town, one of the most important is raising sheep. Many of the town’s families own flocks of sheep and support themselves by selling the sheep’s wool, which is used to make clothing.

As our story begins, the sheep spend much of their time grazing on the land surrounding the town, called the Town Commons. No family owns the land. Instead the town residents own the land collectively, and all the residents are allowed to graze their sheep on it. Collective ownership works well because land is plentiful. As long as everyone can get all the good grazing land they want, the Town Common is not a rival good and allowing residents’ sheep to graze for free causes no problems. Everyone in the town is happy.

As the years pass, the population of the town grows and so does the number of sheep grazing on the Town Commons. With a growing number of sheep and a fixed amount of land, the land starts to lose its ability to replenish itself. Eventually, the land is grazed so heavily that it becomes barren. With no grass left on the Town Common, raising sheep is impossible, and the town’s once prosperous wool industry disappears and, tragically, many families lose their source of livelihood.

What causes the tragedy? Why do the shepherds allow the sheep population to grow so large that it destroys the Town Common? The reason is that social and private incentives differ. Avoiding the destruction of the grazing land depends on the collective action of the shepherds. If the shepherds acted together, they could reduce the sheep population to a size that the Town Common could support. Yet no single family has an incentive to reduce the size of its own flock because each flock represents only a small part of the problem.

In essence, the Tragedy of the Commons arises because of an externality. When one family’s flock grazes on the common land, it reduces the quality of the land available for other families. Because people neglect this negative externality when deciding how many sheep to own, the result is an excessive number of sheep.

If the tragedy had been foreseen, the town could have solved the problem in various ways. It could have regulated the number of sheep in each family’s flock, internalized the externality by taxing sheep, or auctioned off a limited number of sheep grazing permits. That is, the medieval town could have dealt with the problem of overgrazing in the way that modern society deals with the problem of pollution.

In the case of land, however, there is a simpler solution. The town can divide up the land among town families. Each family can enclose its allotment of land with a fence and then protect it from excessive grazing. In this way, the land becomes a private good rather than a common resource. This outcome in fact occurred during the enclosure movement in England in the 17th century.

The Tragedy of the Commons is a story with a general lesson: when one person uses a common resource, he diminishes other people’s enjoyment of it. Because of this negative externality, common resources tend to be used excessively. The government can solve the problem by reducing use of the common resource through regulation or taxes. Alternatively, the government can sometimes turn the common resource into a private good.

This lesson has been known for thousands of years. The ancient Greek philosopher Aristotle pointed out the problem with common resources: ‘What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others.’

The Tragedy of the Commons is a Farnam Street Mental Model.

Max Bazerman — You Are Not As Ethical As You Think

Ethical infractions are rooted in the intricacies of human psychology rather than in a lack of integrity.

Max Bazerman’s book Blind Spots will certainly make you think about your own actions more objectively.

Briefly, here are some of my takeaways.

  • We engage in behavioral forecasting errors. We believe we will behave a certain way in a certain situation. Yet, when actually faced with that situation we behave differently.
  • We are experts at deflecting blame and rationalizing our behavior in a positive light. A used car salesman can view himself as ethical despite selling someone a car that leaks oil, by noting the buyer failed to ask the right questions (bias from self-interest).
  • People often judge the ethicality of actions based on the outcome (outcome bias). We tend to be far more concerned with and show more sympathy when the actions taken affect “identifiable victims”.
  • Motivated blindness (when one party has an interest in overlooking the unethical behavior of another party) explains the financial crisis (bias from self-interest).
  • Research finds that cognitively busy people are more likely to cheat on a task than those who are less overloaded. Why? Because it takes cognitive effort to be reflective enough to resist the impulse to cheat. Our brains are predisposed to make quick decisions and, in the process, they can fail to consider outside influences (such as ethical concerns). We also behave differently when facing a loss than a gain: we’re more willing to cheat when we’re trying to avoid a loss.
  • Snap decisions are especially prone to unconscious bias. The less time we have to think the more likely we default to in-group preference (racial stereotypes). When instructed to shoot “criminals” and not unarmed citizens one study found that participants incorrectly shot more black men than white men.
  • Research shows that most people view their own input into a group, their division’s input to the overall organization, and their firm’s contributions to a strategic alliance to be more important and substantial than reality can sustain. Over-claiming this credit is, at least in part, rooted in our bounded ethicality. That is, we exclude important and relevant information from our decisions by placing arbitrary and functional bounds around our definition of a problem (normally in a self-serving manner). This is part of the reason we fail to see eye to eye in disagreements — we pay attention to different data.
  • The difference in the way information is processed is often not intentional. Confirmation bias helps our minds absorb information that is in agreement with our beliefs and discount information that may contradict our thoughts. (We can’t remember our previous intentions either; How Our Brains Make Memories).
  • Egocentrism is dangerous when playing a Tragedy of the Commons game (social dilemma) such as the one we’re currently playing with debt and the environment, as it encourages us to over-claim resources.
  • In the end the kindergarten rule of fairness applies: one person cuts the cookie and the other has first pick on which half to eat.
  • In social dilemmas the easiest strategy is to defect.
  • A whole host of societal problems result from our tendency to use an extremely high discount rate regarding the future. One result is that we save far too little for retirement. Over-discounting the future can be immoral too as it robs future generations of opportunities and resources.
  • Compliance programs often include sanctioning systems that attempt to discourage unethical behavior, typically through punishment. Yet these programs often have the reverse effect, encouraging the behavior they are supposed to discourage. Why? In short, because the sanction removes the ethical consideration and turns the choice into a business decision. (The number of late pickups at daycares increases when there is a fine.)
  • When your informal culture doesn’t line up with your formal culture you have blind spots and employees will follow the informal culture.
  • Of course, we’re overconfident so informing us about our blind spots doesn’t seem to help us make better choices. We tend to believe that while others may fall prey to psychological biases, we don’t. Left to our own devices we dramatically understate the degree to which our own behavior is affected by incentives and situational factors.
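The over-discounting point above can be made concrete with a rough sketch; all the dollar amounts and rates below are made up for illustration:

```python
# Over-discounting the future, sketched with made-up numbers.
def present_value(amount, rate, years):
    """Value today of `amount` received `years` from now, at discount rate `rate`."""
    return amount / (1 + rate) ** years

# $10,000 of retirement income 30 years out:
print(present_value(10_000, 0.03, 30))  # ~4,120: saving for it still feels worthwhile
print(present_value(10_000, 0.20, 30))  # ~42: feels nearly worthless, so we don't save
```

At a modest 3% rate the future payoff retains real weight; at the extreme rates people implicitly apply, the future, including the one our descendants inherit, is valued at almost nothing.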

***

Still curious? Check out Blind Spots. This book will help you see how your biases lead to your own immoral actions. And if you’re still curious try: Bounded Ethicality: The Perils of Loss Framing.

Social Dilemmas: When to Defect and When to Cooperate

Social dilemmas arise when an individual receives a higher payoff for defecting than for cooperating when everyone else cooperates, yet when everyone defects, all are worse off. That is, each member has a clear and unambiguous incentive to make a choice that, if made by all members, produces a worse outcome.

A great example of a social dilemma is to imagine yourself out with a group of your friends for dinner. Before the meal, you all agree to share the cost equally. Looking at the menu you see a lot of items that appeal to you but are outside of your budget.

Pondering this, you realize that you’re only on the hook for 1/(number of friends at the dinner) of the bill. Now you can order the expensive items without bearing their full cost.

But what if everyone at the table realized the same thing? My guess is you’d all be stunned by the bill: a tragedy of the commons in miniature.
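The arithmetic of the shared meal is easy to check. The prices and party size below are made up:

```python
# The shared-meal dilemma, with made-up prices: 10 friends splitting evenly.
n_friends = 10
cheap, expensive = 15.0, 40.0   # modest vs. lavish entree (assumed prices)

# Everyone restrains themselves and splits evenly:
share_if_all_restrain = cheap                 # $15 each

# I alone upgrade: the $25 difference is spread across 10 people,
# so my own bill rises by only $2.50.
my_share_if_i_upgrade = (cheap * (n_friends - 1) + expensive) / n_friends
print(my_share_if_i_upgrade)                  # 17.5 -- a $40 meal for $17.50

# But the same logic tempts everyone, and then each pays full freight:
share_if_all_upgrade = expensive              # $40 each
```

The upgrade looks like a bargain to each diner individually, which is exactly why the table as a whole ends up overpaying.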

This is a very simple example, but you can map it to the business world by thinking about healthcare and insurance.

If that sounds a lot like game theory, you’re on the right track.

I came across an excellent paper[1] by Robyn Dawes and David Messick, which takes a closer look at social dilemmas.

A Psychological Analysis of Social Dilemmas

In the case of the public good, one strategy that has been employed is to create a moral sense of duty to support it—for instance, the public television station that one watches. The attempt is to reframe the decision as doing one’s duty rather than making a difference—again, in the wellbeing of the station watched. The injection of a moral element changes the calculation from “Will I make a difference” to “I must pay for the benefit I get.”

The final illustration, the shared meal and its more serious counterparts, requires yet another approach. Here there is no hierarchy, as in the organizational example, that can be relied upon to solve the problem. With the shared meal, all the diners need to be aware of the temptation that they have and there need to be mutually agreed-upon limits to constrain the diners. Alternatively, the rule needs to be changed so that everyone pays for what they ordered. The latter arrangement creates responsibility in that all know that they will pay for what they order. Such voluntary arrangements may be difficult to arrange in some cases. With the medical insurance, the insurance company may recognize the risk and insist on a principle of co-payments for medical services. This is a step in the direction of paying for one’s own meal, but it allows part of the “meal” to be shared and part of it to be paid for by the one who ordered it.

The fishing version is more difficult. To make those harvesting the fish pay for some of the costs of the catch would require some sort of taxation to deter the unbridled exploitation of the fishery. Taxation, however, leads to tax avoidance or evasion. Those who harvest the fish would have no incentive to report their catches accurately or at all, especially if they were particularly successful, which simultaneously means particularly successful, compared to others at least, in contributing to the problem of a subsequently reduced yield. Voluntary self-restraint would be punished, as those with less of that personal quality would thrive while those with more would suffer. Conscience, as Hardin (1968) noted, would be self-eliminating. …

Relatively minor changes in the social environment can induce major changes in decision making because these minor changes can change the perceived appropriateness of a situation. One variable that has been shown to make such a difference is whether the decision maker sees herself as an individual or as a part of a group.

Footnotes
  1. Dawes RM, Messick DM (2000) Social Dilemmas. Int J Psychol 35(2):111–116