Coordination Problems: What It Takes to Change the World

The key to major changes on a societal level is getting enough people to alter their behavior at the same time. It’s not enough for isolated individuals to act. Here’s what we can learn from coordination games in game theory about what it takes to solve some of the biggest problems we face.

***

What Is a Coordination Failure?

Sometimes we see systems where everyone involved seems to be doing things in a completely ineffective and inefficient way. A single small tweak could make everything substantially better: save lives, boost productivity, conserve resources. To an outsider, what needs to be done might seem obvious, and it can be hard to think of an explanation for the ineffectiveness more nuanced than assuming everyone in that system is stupid.

Why is publicly funded research published in journals that charge heavily for access, limiting the flow of important scientific knowledge while contributing little in return? Why are countries spending billions of dollars, and risking disaster, to develop nuclear weapons intended only as deterrents? Why is doping widespread in some sports, even though it is banned and carries heavy health consequences? You can probably think of many similar problems.

Coordination games in game theory give us a lens for understanding both the seemingly inscrutable origins of such problems and why they persist.

The Theoretical Background to Coordination Failure

In game theory, a game is a set of circumstances in which two or more players pick among competing strategies in order to get a payoff. A coordination game is one where players get the best possible payoff by all doing the same thing. If the players fail to coordinate, at least one of them, and usually both, ends up with a diminished payoff.

When all players are carrying out a strategy from which they have no incentive to deviate, this is called a Nash equilibrium: given the strategies chosen by the other player(s), no player could improve their payoff by changing their own. However, a game can have multiple Nash equilibria with different payoffs. In real-world terms, this means there may be several different choices everyone could make, some better than others, but each working only if it is unanimous.
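The idea of checking every strategy pair for profitable deviations can be made concrete. Here is a minimal sketch in Python for a driving-convention game; the payoff numbers are illustrative assumptions, not from the text, with one convention arbitrarily made slightly better than the other:

```python
# Hypothetical payoffs for a two-player coordination game: each player picks
# a side of the road. Matching is good, mismatching is bad, and "right" is
# assumed (arbitrarily) to be the slightly better convention.
payoffs = {
    ("left", "left"): (2, 2),
    ("right", "right"): (3, 3),
    ("left", "right"): (0, 0),
    ("right", "left"): (0, 0),
}
strategies = ["left", "right"]

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither player can gain by deviating alone."""
    row_payoff, col_payoff = payoffs[(a, b)]
    row_cannot_improve = all(payoffs[(alt, b)][0] <= row_payoff for alt in strategies)
    col_cannot_improve = all(payoffs[(a, alt)][1] <= col_payoff for alt in strategies)
    return row_cannot_improve and col_cannot_improve

equilibria = [(a, b) for a in strategies for b in strategies if is_nash(a, b)]
print(equilibria)  # both conventions are equilibria, but one pays better
```

Both matched conventions come out as equilibria even though one gives everyone a lower payoff, which is exactly the "stuck at the worse unanimous choice" situation described above.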

The Prisoner’s Dilemma is the classic example of a coordination failure. In a one-round Prisoner’s Dilemma, the optimal strategy for each player is to defect. Even though defecting is the strategy that makes the most sense, it doesn’t yield the highest possible payoff; that would require both players to cooperate. But cooperating is risky, because neither player can know what the other will do. A player who cooperates while the other defects gets the worst possible payoff, and a player who defects against another defector still does better than they would have done by cooperating with that defector.
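That structure is easy to verify directly. A short sketch using standard assumed payoff values, T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker); any numbers with T > R > P > S would show the same thing:

```python
# Hypothetical Prisoner's Dilemma payoffs (higher is better):
# (my payoff, their payoff) for each pair of moves, C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),  # R: mutual cooperation
    ("C", "D"): (0, 5),  # S: I cooperate, they defect
    ("D", "C"): (5, 0),  # T: I defect, they cooperate
    ("D", "D"): (1, 1),  # P: mutual defection
}

# Whatever the opponent does, defecting pays more for me...
for opponent in ("C", "D"):
    assert payoffs[("D", opponent)][0] > payoffs[("C", opponent)][0]

# ...yet mutual defection leaves both players worse off than mutual cooperation.
assert payoffs[("D", "D")][0] < payoffs[("C", "C")][0]
print("Defection dominates, but mutual cooperation pays more.")
```

The two assertions together are the dilemma: individually rational play leads straight to the inferior outcome.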

So the Prisoner’s Dilemma is a coordination failure: the players would get a better payoff if they both cooperated, but they cannot trust each other. In a form of the Iterated Prisoner’s Dilemma, players compete for an unknown number of rounds. In this case, cooperation becomes possible if both players use the strategy of “tit for tat”: cooperate in the first round, then do whatever the other player did in the previous round. Even so, the temptation to defect never disappears. If the chance that any given round is the last grows too high, defection pays again; mutual cooperation is only stable when both players expect the game to continue.

Many of the major problems we see around us are coordination failures. They are only solvable if everyone can agree to do the same thing at the same time. Faced with multiple Nash equilibria, we do not necessarily choose the best one overall. We choose what makes sense given the existing incentives, which often discourage us from challenging the status quo. It often makes most sense to do what everyone else is doing, whether that’s driving on the left side of the road, wearing a suit to a job interview, or keeping your country’s nuclear arsenal stocked up.

Take the case of academic publishing, given as a classic coordination failure by Eliezer Yudkowsky in Inadequate Equilibria: Where and How Civilizations Get Stuck. Academic journals publish research within a given field and charge for access to it, often at exorbitant rates. In order to get the best jobs and earn prestige within a field, researchers need to publish in the most respected journals. If they don’t, no one will take their work seriously.

Academic publishing is broken in many ways. By charging high prices, journals limit the flow of knowledge and slow scientific progress. They do little to help researchers, instead profiting from the work of volunteers and taxpayer funding. Yet researchers continue to submit their work to them. Why? Because this is the Nash equilibrium. Although it would be better for science as a whole if everyone stopped publishing in journals that charge for access, it isn’t in the interests of any individual scientist to do so. If they did, their career would suffer and most likely end. The only solution would be a coordinated effort for everyone to move away from journals. But seeing as this is so difficult to organize, the farce of academic publishing continues, harming everyone except the journals.

How We Can Solve and Avoid Coordination Failures

Changing things on a large scale becomes possible when we can communicate on an equally large scale. When everyone knows that everyone knows, changing what we do is much easier.

We all act out of self-interest, so expecting individuals to risk the costs of going against convention is usually unreasonable. Yet it only takes a small proportion of people changing their minds to reach a tipping point where there is a strong incentive for everyone to change their behavior, and this effect is magnified if those people have a high degree of influence. The more power those who enact change have, the faster everyone else can follow.

To overcome coordination failures, we need to be able to communicate despite our differences. And we need to be able to trust that when we act, others will act too. The initial kick can come from enough people making their actions visible. Groups can have far greater impacts than individuals, so we need to think beyond the impact of our own actions and consider what will happen when we act as part of a group.

In an example given by the effective altruism-centered website 80,000 Hours, there are countless charitable causes one could donate money to at any given time. Most people who donate do so out of emotional responses or habit. However, some charitable causes are orders of magnitude more effective than others at saving lives and having a positive global impact. If many people can coordinate and donate to the most effective charities until they reach their funding goal, the impact of the group giving is far greater than if isolated individuals calculate the best use of their money. Making research and evidence of donations public helps solve the communication issue around determining the impact of charitable giving.

As Michael Suk-Young Chwe writes in Rational Ritual: Culture, Coordination, and Common Knowledge, “Successful communication sometimes is not simply a matter of whether a given message is received. It also depends on whether people are aware that other people also receive it.” According to Suk-Young Chwe, for people to coordinate on the basis of certain information it must be “common knowledge,” a phrase used here to mean “everyone knows it, everyone knows that everyone knows it, everyone knows that everyone knows that everyone knows it, and so on.” The more public and visible the change is, the better.

We can prevent coordination failures in the first place by providing visible guarantees that those who take a different course of action will not suffer negative consequences. A bank run is a coordination failure, one that was particularly problematic during the Great Depression. It’s better for everyone if all depositors leave their money in the bank so that it doesn’t run out of reserves and fail. But once other people start panicking and withdrawing their deposits, it makes sense for any given individual to do likewise in case the bank fails and they lose their money. The solution is deposit insurance, which ensures no one comes away empty-handed even if a bank does fail.
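The logic of a run, and of how a guarantee defuses it, can be sketched as a toy threshold cascade. Everything here is an illustrative assumption, not a model of real banking: each depositor withdraws once the fraction already withdrawing reaches their personal panic threshold, and insurance effectively raises every threshold beyond reach:

```python
# Toy bank-run cascade (all numbers are illustrative assumptions).
# A depositor withdraws once the withdrawing fraction reaches their threshold.

def final_withdrawals(thresholds, initial_fraction):
    """Iterate the panic cascade until the withdrawing fraction stops growing."""
    n = len(thresholds)
    fraction = initial_fraction
    while True:
        new_fraction = sum(1 for t in thresholds if t <= fraction) / n
        new_fraction = max(new_fraction, initial_fraction)
        if new_fraction == fraction:
            return fraction
        fraction = new_fraction

# Without insurance: a spread of panic thresholds lets a small scare snowball.
uninsured = [i / 100 for i in range(100)]  # thresholds 0.00, 0.01, ..., 0.99
print(final_withdrawals(uninsured, 0.05))  # cascades to 1.0: a full run

# With insurance: panic never justifies withdrawing, so the scare stays contained.
insured = [2.0] * 100  # thresholds above 1 can never be triggered
print(final_withdrawals(insured, 0.05))    # stays at 0.05
```

The design point mirrors the text: insurance doesn't stop the initial scare, it removes the incentive for everyone else to react to it, so the bad equilibrium is never reached.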

Game theory can help us to understand not only why it can be difficult for people to work together in the best possible way but also how we can reach more optimal outcomes through better communication. With a sufficient push towards a new equilibrium, we can drastically improve our collective circumstances in a short time.