Tag: Game Theory

Moral Hypocrisy

From Jonathan Haidt’s book The Happiness Hypothesis:

The gap between action and perception is bridged by the art of impression management. If life itself is what you deem it, then why not focus your efforts on persuading others to believe that you are a virtuous and trustworthy cooperator?

Natural selection, like politics, works by the principle of survival of the fittest, and several researchers have argued that human beings evolved to play the game of life in a Machiavellian way. The Machiavellian version of tit for tat… is to do all you can to cultivate the reputation of a trustworthy yet vigilant partner, whatever reality may be.

The simplest way to cultivate a reputation for being fair is to really be fair, but life and psychology experiments sometimes force us to choose between appearance and reality. The findings are not pretty. … The tendency to value the appearance of morality over reality has been dubbed “moral hypocrisy”.

… Proving that people are selfish, or that they’ll sometimes cheat when they know they won’t be caught, seems like a good way to get an article into the Journal of Incredibly Obvious Results. What’s not so obvious is that, in nearly all these studies, people don’t think they are doing anything wrong. It’s the same in real life. From the person who cuts you off on the highway all the way to the Nazis who ran the concentration camps, most people think they are good people and that their actions are motivated by good reasons. Machiavellian tit for tat requires devotion to appearances, including protestations of one’s virtue even when one chooses vice. And such protestations are most effective when the person making them really believes them.

As Robert Wright puts it in his masterful book The Moral Animal, “Human beings are a species splendid in their array of moral equipment, tragic in their propensity to misuse it, and pathetic in their constitutional ignorance of the misuse.”

Social Dilemmas: When to Defect and When to Cooperate

Social dilemmas arise when an individual receives a higher payoff for defecting than for cooperating while everyone else cooperates, yet when everyone defects, everyone is worse off than if all had cooperated. That is, each member has a clear and unambiguous incentive to defect, but if every member acts on that incentive, the outcome is worse for all.
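That payoff structure can be made concrete with a small sketch. The numbers below are hypothetical, chosen only so that the two defining conditions hold: defecting always pays more for the individual, yet mutual defection is worse than mutual cooperation.

```python
# Hypothetical two-player payoff matrix with the social-dilemma structure:
# payoff[(my_choice, their_choice)] -> my payoff
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,  # defecting against a cooperator pays best...
    ("defect",    "defect"):    1,  # ...but mutual defection is worst of all
}

# Whatever the other player does, defecting yields a higher individual payoff:
assert payoff[("defect", "cooperate")] > payoff[("cooperate", "cooperate")]
assert payoff[("defect", "defect")] > payoff[("cooperate", "defect")]

# Yet if everyone follows that incentive, both end up worse off:
assert payoff[("defect", "defect")] < payoff[("cooperate", "cooperate")]
print("dilemma conditions hold")
```

Any set of numbers satisfying those three inequalities produces the same trap; the exact values don’t matter.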

A great example of a social dilemma is to imagine yourself out with a group of your friends for dinner. Before the meal, you all agree to share the cost equally. Looking at the menu you see a lot of items that appeal to you but are outside of your budget.

Pondering this, you realize that you’re only on the hook for 1/(number of friends at the dinner) of the bill, so anything expensive you order costs you only a fraction of its menu price. Now you can enjoy yourself without having to pay the full cost.

But what if everyone at the table realized the same thing? My guess is you’d all be stunned by the bill: a small-scale tragedy of the commons.
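The arithmetic behind the stunned table is worth spelling out. With hypothetical prices (a $15 modest dish versus a $45 indulgent one, six friends splitting evenly), upgrading costs you only the price difference divided by the table size:

```python
# Hypothetical numbers: six friends splitting the bill evenly.
n_friends = 6
modest, fancy = 15.0, 45.0

# If only you upgrade, the bill rises by (45 - 15) = $30,
# but your share rises by just 30 / 6 = $5:
bill_one_upgrader = fancy + (n_friends - 1) * modest
my_share = bill_one_upgrader / n_friends
print(my_share)  # 20.0 per head, versus 15.0 if everyone stays modest

# The same logic tempts every diner. If all six upgrade:
share_all_upgrade = (n_friends * fancy) / n_friends
print(share_all_upgrade)  # 45.0 each: everyone pays full price after all
```

Each diner reasons correctly in isolation, a $30 upgrade for $5 out of pocket, yet when all six act on it, everyone pays the full $45.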

This is a very simple example, but you can map it to the business world by thinking about healthcare and insurance.

If that sounds a lot like game theory, you’re on the right track.

I came across an excellent paper[1] by Robyn Dawes and David Messick, which takes a closer look at social dilemmas.

A Psychological Analysis of Social Dilemmas

In the case of the public good, one strategy that has been employed is to create a moral sense of duty to support it—for instance, the public television station that one watches. The attempt is to reframe the decision as doing one’s duty rather than making a difference—again, in the wellbeing of the station watched. The injection of a moral element changes the calculation from “Will I make a difference?” to “I must pay for the benefit I get.”

The final illustration, the shared meal and its more serious counterparts, requires yet another approach. Here there is no hierarchy, as in the organizational example, that can be relied upon to solve the problem. With the shared meal, all the diners need to be aware of the temptation that they have and there need to be mutually agreed-upon limits to constrain the diners. Alternatively, the rule needs to be changed so that everyone pays for what they ordered. The latter arrangement creates responsibility in that all know that they will pay for what they order. Such voluntary arrangements may be difficult to arrange in some cases. With the medical insurance, the insurance company may recognize the risk and insist on a principle of co-payments for medical services. This is a step in the direction of paying for one’s own meal, but it allows part of the “meal” to be shared and part of it to be paid for by the one who ordered it.

The fishing version is more difficult. To make those harvesting the fish pay for some of the costs of the catch would require some sort of taxation to deter the unbridled exploitation of the fishery. Taxation, however, leads to tax avoidance or evasion. But those who harvest the fish would have no incentive to report their catches accurately or at all, especially if they were particularly successful, which simultaneously means particularly successful—compared to others at least—in contributing to the problem of a subsequently reduced yield. Voluntary self-restraint would be punished as those with less of that personal quality would thrive while those with more would suffer. Conscience, as Hardin (1968) noted, would be self-eliminating. …

Relatively minor changes in the social environment can induce major changes in decision making because these minor changes can change the perceived appropriateness of a situation. One variable that has been shown to make such a difference is whether the decision maker sees herself as an individual or as a part of a group.

  • 1

    Dawes RM, Messick DM (2000) Social Dilemmas. Int J Psychol 35(2):111–116