Tag: Game Theory

Coordination Problems: What It Takes to Change the World

The key to major changes on a societal level is getting enough people to alter their behavior at the same time. It’s not enough for isolated individuals to act. Here’s what we can learn from coordination games in game theory about what it takes to solve some of the biggest problems we face.

***

What is a Coordination Failure?

Sometimes we see systems where everyone involved seems to be doing things in a completely ineffective and inefficient way. A single small tweak could make everything substantially better: save lives, boost productivity, conserve resources. To an outsider, it might seem obvious what needs to be done, and it might be hard to think of an explanation for the ineffectiveness that is more nuanced than assuming everyone in that system is stupid.

Why is publicly funded research published in journals that charge heavily for access, limiting the flow of important scientific knowledge while contributing little in return? Why are countries spending billions of dollars and risking disaster to develop nuclear weapons intended only as deterrents? Why is doping widespread in some sports, even though it carries heavy health consequences and is banned? You can probably think of many similar problems.

Coordination games in game theory give us a lens for understanding both the seemingly inscrutable origins of such problems and why they persist.

The Theoretical Background to Coordination Failure

In game theory, a game is a set of circumstances in which two or more players pick among competing strategies in order to get a payoff. A coordination game is one where players get the best possible payoff by all doing the same thing. If one player chooses a different strategy, coordination breaks down: at least one player ends up with a diminished payoff, and in some games the deviating player can even come out ahead at the other's expense.

When all players are carrying out a strategy from which none of them has an incentive to deviate, this is called a Nash equilibrium: given the strategy chosen by the other player(s), no player could improve their payoff by changing their strategy. However, a game can have multiple Nash equilibria with different payoffs. In real-world terms, this means there are multiple different choices everyone could make, some better than others, but each works only if everyone makes the same choice.
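To make "multiple equilibria" concrete, here is a minimal sketch in Python of the driving-side convention mentioned below. The payoff numbers are illustrative assumptions of mine, not figures from the text: coordinating pays 1 to each driver, miscoordinating pays 0.

```python
# Minimal sketch: find the pure-strategy Nash equilibria of a 2x2 coordination game.
# Payoff numbers are illustrative assumptions: both drive Left or both drive Right
# pays 1 to each driver; miscoordination (a crash) pays 0 to each.

strategies = ["Left", "Right"]

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("Left", "Left"): (1, 1),
    ("Left", "Right"): (0, 0),
    ("Right", "Left"): (0, 0),
    ("Right", "Right"): (1, 1),
}

def is_nash(row, col):
    """Neither player can gain by unilaterally switching strategies."""
    row_payoff, col_payoff = payoffs[(row, col)]
    best_row = max(payoffs[(r, col)][0] for r in strategies)
    best_col = max(payoffs[(row, c)][1] for c in strategies)
    return row_payoff >= best_row and col_payoff >= best_col

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(equilibria)  # [('Left', 'Left'), ('Right', 'Right')] -- two equilibria
```

Both "everyone drives left" and "everyone drives right" pass the check: once a convention is in place, no individual gains by switching, which is why conventions persist even when an alternative would work just as well.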

The Prisoner’s Dilemma is a coordination game. In a one-round Prisoner’s Dilemma, the optimal strategy for each player is to defect. Even though defecting makes the most sense for each individual, it isn’t the strategy with the highest possible payoff: that would require both players to cooperate. But since neither can know what the other will do, cooperating is risky. A player who cooperates while the other defects gets the worst possible payoff, whereas a player who defects when the other also defects still does better than they would have by cooperating.

So the Prisoner’s Dilemma is a coordination failure. The players would get a better payoff if they both cooperated, but they cannot trust each other. In the Iterated Prisoner’s Dilemma, players compete over an unknown number of rounds. In this case, cooperation becomes possible if both players use the strategy of “tit for tat”: cooperate in the first round, then do whatever the other player did in the previous round. However, there is still a temptation to defect, because any given round could be the last. Mutual cooperation is never a Nash equilibrium of the one-shot game, and it only becomes sustainable when both players expect the game to continue.
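Here is a rough simulation sketch of tit for tat in the iterated game. The payoff numbers (5, 3, 1, 0) are standard textbook values used as assumptions here, not figures from the text:

```python
# Sketch of an iterated Prisoner's Dilemma (payoff numbers are standard
# textbook values used as assumptions, not figures from the article):
# T=5 (defect against a cooperator), R=3 (mutual cooperation),
# P=1 (mutual defection), S=0 (cooperate against a defector).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Tit for tat gives up only a little to an unconditional defector, yet captures the full gains of cooperation when it meets another conditional cooperator, which is why conditional strategies can hold up over repeated play.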

Many of the major problems we see around us are coordination failures. They are only solvable if everyone can agree to do the same thing at the same time. Faced with multiple Nash equilibria, we do not necessarily choose the best one overall. We choose what makes sense given the existing incentives, which often discourage us from challenging the status quo. It often makes most sense to do what everyone else is doing, whether that’s driving on the left side of the road, wearing a suit to a job interview, or keeping your country’s nuclear arsenal stocked up.

Take the case of academic publishing, given as a classic coordination failure by Eliezer Yudkowsky in Inadequate Equilibria: Where and How Civilizations Get Stuck. Academic journals publish research within a given field and charge for access to it, often at exorbitant rates. In order to get the best jobs and earn prestige within a field, researchers need to publish in the most respected journals. If they don’t, no one will take their work seriously.

Academic publishing is broken in many ways. By charging high prices, journals limit the flow of knowledge and slow scientific progress. They do little to help researchers, instead profiting from the work of volunteers and taxpayer funding. Yet researchers continue to submit their work to them. Why? Because this is the Nash equilibrium. Although it would be better for science as a whole if everyone stopped publishing in journals that charge for access, it isn’t in the interests of any individual scientist to do so. If they did, their career would suffer and most likely end. The only solution would be a coordinated effort for everyone to move away from journals. But seeing as this is so difficult to organize, the farce of academic publishing continues, harming everyone except the journals.

How We Can Solve and Avoid Coordination Failures

It’s possible to change things on a large scale if we can also communicate on a large scale. When everyone knows that everyone knows, changing what we do is much easier.

We all act out of self-interest, so expecting individuals to risk the costs of going against convention is usually unreasonable. Yet it only takes a small proportion of people changing their opinions to reach a tipping point where there is a strong incentive for everyone to change their behavior, and the effect is magnified if those people have a high degree of influence. The more power those who enact change have, the faster everyone else can follow.

To overcome coordination failures, we need to be able to communicate despite our differences. And we need to be able to trust that when we act, others will act too. The initial kick can be enough people making their actions visible. Groups can have exponentially greater impacts than individuals. We thus need to think beyond the impact of our own actions and consider what will happen when we act as part of a group.

In an example given by the effective altruism-centered website 80,000 Hours, there are countless charitable causes one could donate money to at any given time. Most people who donate do so out of emotional responses or habit. However, some charitable causes are orders of magnitude more effective than others at saving lives and having a positive global impact. If many people coordinate and donate to the most effective charities until those charities reach their funding goals, the impact of the group’s giving is far greater than if isolated individuals each calculate the best use of their money. Making research and evidence of donations public helps solve the communication issue around determining the impact of charitable giving.

As Michael Suk-Young Chwe writes in Rational Ritual: Culture, Coordination, and Common Knowledge, “Successful communication sometimes is not simply a matter of whether a given message is received. It also depends on whether people are aware that other people also receive it.” According to Suk-Young Chwe, for people to coordinate on the basis of certain information it must be “common knowledge,” a phrase used here to mean “everyone knows it, everyone knows that everyone knows it, everyone knows that everyone knows that everyone knows it, and so on.” The more public and visible the change is, the better.

We can prevent coordination failures in the first place with visible guarantees that those who take a different course of action will not suffer negative consequences. Bank runs are a coordination failure that was particularly problematic during the Great Depression. It’s better for everyone if everyone leaves their deposits in the bank so it doesn’t run out of reserves and fail. But when other people start panicking and withdrawing their deposits, it makes sense for any given individual to do likewise in case the bank fails and they lose their money. The solution is deposit insurance, which ensures no one comes away empty-handed even if a bank does fail.
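A bank run can be sketched as a game with two equilibria. The numbers below are purely illustrative assumptions: without insurance, withdrawing is the best response whenever you expect others to run; with insurance, staying is safe regardless of what others do, so the bad equilibrium disappears.

```python
# Illustrative sketch of a single depositor's choice (all numbers are assumptions).
# Each depositor holds 100. If the bank survives, deposits earn 5 in interest.
# If it fails, uninsured depositors who didn't withdraw recover only 40.

def depositor_payoff(withdraw, others_run, insured):
    if withdraw:
        return 100                      # cash out early, no interest
    bank_fails = others_run             # simplification: the bank fails iff others run
    if bank_fails and not insured:
        return 40                       # partial recovery after failure
    return 105                          # deposit plus interest (or fully insured)

for insured in (False, True):
    for others_run in (False, True):
        keep = depositor_payoff(False, others_run, insured)
        run = depositor_payoff(True, others_run, insured)
        best = "withdraw" if run > keep else "stay"
        print(f"insured={insured}, others run={others_run}: best to {best}")
# Uninsured: staying is best only if others stay -> two equilibria (all stay / all run).
# Insured: staying is best either way -> the run equilibrium disappears.
```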

Game theory can help us to understand not only why it can be difficult for people to work together in the best possible way but also how we can reach more optimal outcomes through better communication. With a sufficient push towards a new equilibrium, we can drastically improve our collective circumstances in a short time.

Prisoner’s Dilemma: What Game Are You Playing?

In this classic game theory experiment, you must decide: rat out another for personal benefit, or cooperate? The answer may be more complicated than you think.

***

What does it take to make people cooperate with each other when the incentives to act primarily out of self-interest are often so strong?

The Prisoner’s Dilemma is a thought experiment originating from game theory. Designed to analyze the ways in which we cooperate, it strips away the variations between specific situations where people are called to overcome the urge to be selfish. Political scientist Robert Axelrod lays down its foundations in The Evolution of Cooperation:

Under what conditions will cooperation emerge in a world of egoists without a central authority? This question has intrigued people for a long time. And for good reason. We all know that people are not angels and that they tend to look after themselves and their own first. Yet we also know that cooperation does occur and that our civilization is based on it. But in situations where each individual has an incentive to be selfish, how can cooperation ever develop?

…To make headway in understanding the vast array of specific situations which have this property, a way is needed to represent what is common to these situations without becoming bogged down in the details unique to each…the famous Prisoner’s Dilemma game.

The thought experiment goes as such: two criminals are in separate cells, unable to communicate, accused of a crime they both participated in. The police do not have enough evidence to convict either of them without further testimony, though they are certain enough of their guilt to want both to spend time in prison. So they offer the prisoners a deal. Each can accuse the other of the crime, with the following conditions:

  • If both prisoners say the other did it, each will serve two years in prison.
  • If one prisoner says the other did it and the other stays silent, the accused will serve three years and the accuser zero.
  • If both prisoners stay silent, each will serve one year in prison.

In game theory, the altruistic behavior (staying silent) is called “cooperating,” while accusing the other is called “defecting.”

What should they do?

If they were able to communicate and they trusted each other, the rational choice would be to stay silent; that way each serves less time in prison than they would otherwise. But how can each know the other won’t accuse them? After all, people tend to act out of self-interest. The cost of being the one who stays silent is too high. The expected outcome when the game is played is that both accuse the other and serve two years. (In the real world, we doubt it would end there. After they served their time, it’s not hard to imagine each of them still being upset. Two years is a lot of time for a spring to coil in a negative way. Perhaps they spend the rest of their lives sabotaging each other.)
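As a quick check, the sentencing options above can be laid out as a payoff table (the years come straight from the deal described earlier) to see why accusing dominates. A minimal sketch:

```python
# Years in prison for (prisoner A, prisoner B); lower is better.
# "accuse" = defect, "silent" = cooperate, using the numbers from the text above.
YEARS = {("accuse", "accuse"): (2, 2), ("accuse", "silent"): (0, 3),
         ("silent", "accuse"): (3, 0), ("silent", "silent"): (1, 1)}

def best_response(opponent_choice):
    # Pick the choice that minimizes prisoner A's own sentence.
    return min(("accuse", "silent"),
               key=lambda mine: YEARS[(mine, opponent_choice)][0])

print(best_response("silent"))  # accuse: 0 years beats 1 year
print(best_response("accuse"))  # accuse: 2 years beats 3 years
# Accusing is better no matter what the other prisoner does (a dominant strategy),
# yet both staying silent (1 year each) beats both accusing (2 years each).
```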

The Iterated Prisoner’s Dilemma

A more complex form of the thought experiment is the iterated Prisoner’s Dilemma, in which we imagine the same two prisoners being in the same situation multiple times. In this version of the experiment, they are able to adjust their strategy based on the previous outcome.

If we repeat the scenario, it may seem as if the prisoners will begin to cooperate. But this doesn’t make sense in game theory terms. When they know how many times the game will repeat, both have an incentive to accuse on the final round, seeing as there can be no retaliation. Knowing the other will surely accuse on the final round, both have an incentive to accuse on the penultimate round—and so on, back to the start.

In Business Economics, Gregory Mankiw summarizes how difficult it is to maintain this kind of cooperation:

To see how difficult it is to maintain cooperation, imagine that, before the police captured . . . the two criminals, [they] had made a pact not to confess. Clearly, this agreement would make them both better off if they both live up to it, because they would each spend only one year in jail. But would the two criminals in fact remain silent, simply because they had agreed to? Once they are being questioned separately, the logic of self-interest takes over and leads them to confess. Cooperation between the two prisoners is difficult to maintain because cooperation is individually irrational.

However, cooperative strategies can evolve if we model the game as having random or infinite iterations. If each prisoner knows they will likely interact with the other in the future, with no knowledge or expectation that their relationship will have a definite end, cooperation becomes significantly more likely. If we imagine that the prisoners will go to the same jail or will run in the same circles once released, we can understand how the incentive for cooperation might increase. If you’re a defector, running into the person you defected on is awkward at best, and leaves you sleeping with the fishes at worst.
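A rough way to see why an indefinite horizon changes things: compare cooperating forever against defecting once and facing retaliation in every later round, given a probability d that the game continues another round. The payoff values below are standard illustrative numbers (my assumptions, not from the text):

```python
# Sketch: when does cooperation pay in an indefinitely repeated dilemma?
# Standard illustrative payoffs (assumptions): T=5 temptation, R=3 reward,
# P=1 punishment. The game continues after each round with probability d.
T, R, P = 5, 3, 1

def cooperate_forever(d):
    # Expected total payoff of mutual cooperation every round: R / (1 - d).
    return R / (1 - d)

def defect_once_then_punished(d):
    # Grab T now, then face mutual defection (P) in every later round.
    return T + d * P / (1 - d)

for d in (0.2, 0.5, 0.8):
    print(d, round(cooperate_forever(d), 2), round(defect_once_then_punished(d), 2))
# d=0.2: 3.75 vs 5.25 -> defecting pays.  d=0.8: 15.0 vs 9.0 -> cooperating pays.
# Cooperation becomes sustainable once d >= (T - R) / (T - P) = 0.5.
```

The only thing the higher continuation probability changes is the shadow of the future, yet that alone is enough to flip the calculation in favor of cooperating.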

Real-world Prisoner’s Dilemmas

We can use the Prisoner’s Dilemma as a means of understanding many real-world situations based on cooperation and trust. As individuals, being selfish tends to benefit us, at least in the short term. But when everyone is selfish, everyone suffers.

In The Prisoner’s Dilemma, Martin Peterson asks readers to imagine two car manufacturers, Row Cars and Col Motors. As the only two actors in their market, the price each sells cars at is directly connected to the price the other sells cars at. If one opts to sell at a higher price than the other, it will sell fewer cars as customers switch to the competitor. If one sells at a lower price, it will sell more cars at a lower profit margin, gaining customers from the other. In Peterson’s example, if both set their prices high, both will make $100 million per year. Should one decide to set its prices lower, it will make $150 million while the other makes nothing. If both set low prices, both make $20 million. Peterson writes:

Imagine that you serve on the board of Row Cars. In a board meeting, you point out that irrespective of what Col Motors decides to do, it will be better for your company to opt for low prices. This is because if Col Motors sets its price low, then a profit of $20 million is better than $0, and if Col Motors sets its price high, then a profit of $150 million is better than $100 million.
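Peterson's reasoning can be checked mechanically with the profit figures he gives (in millions of dollars per year). A short sketch:

```python
# Annual profits in millions (Row Cars, Col Motors), using Peterson's figures.
PROFIT = {("high", "high"): (100, 100), ("high", "low"): (0, 150),
          ("low", "high"): (150, 0),   ("low", "low"): (20, 20)}

for col_choice in ("high", "low"):
    best = max(("high", "low"),
               key=lambda row_choice: PROFIT[(row_choice, col_choice)][0])
    print(f"If Col Motors prices {col_choice}, Row Cars does best pricing {best}")
# Pricing low is better for Row Cars either way (a dominant strategy), even though
# both firms pricing high would earn far more than both firms pricing low.
```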

Gregory Mankiw gives another real-world example in Microeconomics, detailed here:

Consider an oligopoly with two members, called Iran and Saudi Arabia. Both countries sell crude oil. After prolonged negotiation, the countries agree to keep oil production low in order to keep the world price of oil high. After they agree on production levels, each country must decide whether to cooperate and live up to this agreement or to ignore it and produce at a higher level. The following image shows how the profits of the two countries depend on the strategies they choose.

Suppose you are the leader of Saudi Arabia. You might reason as follows:

I could keep production low as we agreed, or I could raise my production and sell more oil on world markets. If Iran lives up to the agreement and keeps its production low, then my country earns profit of $60 billion with high production and $50 billion with low production. In this case, Saudi Arabia is better off with high production. If Iran fails to live up to the agreement and produces at a high level, then my country earns $40 billion with high production and $30 billion with low production. Once again, Saudi Arabia is better off with high production. So, regardless of what Iran chooses to do, my country is better off reneging on our agreement and producing at a high level.

Producing at a high level is a dominant strategy for Saudi Arabia. Of course, Iran reasons in exactly the same way, and so both countries produce at a high level. The result is the inferior outcome (from both Iran and Saudi Arabia’s standpoint) with low profits in each country. This example illustrates why oligopolies have trouble maintaining monopoly profits. The monopoly outcome is jointly rational for the oligopoly, but each oligopolist has an incentive to cheat. Just as self-interest drives the prisoners in the prisoners’ dilemma to confess, self-interest makes it difficult for the oligopoly to maintain the cooperative outcome with low production, high prices, and monopoly profits.
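The table Mankiw refers to is not reproduced here, but it can be reconstructed from the numbers in the quote, assuming the game is symmetric (that symmetry is my assumption). Profits are in billions of dollars:

```python
# Reconstructed from the quoted figures, assuming a symmetric game (my assumption).
# Profits in billions: (Saudi Arabia, Iran) for each pair of production levels.
PROFIT = {("low", "low"): (50, 50), ("low", "high"): (30, 60),
          ("high", "low"): (60, 30), ("high", "high"): (40, 40)}

# Saudi Arabia's best response to each choice Iran could make:
for iran in ("low", "high"):
    best = max(("low", "high"), key=lambda saudi: PROFIT[(saudi, iran)][0])
    print(f"If Iran produces {iran}, Saudi Arabia earns most producing {best}")
# High production wins either way, so both countries end up at (40, 40)
# instead of the (50, 50) they agreed to.
```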

Other examples of prisoners’ dilemmas include arms races, advertising, and common resources (see The Tragedy of the Commons). Understanding the Prisoner’s Dilemma is an important component of the dynamics of cooperation, an extremely useful mental model.

Thinking of life as an iterative game changes how you play. Positioning yourself for the future carries more weight than “winning” in the moment.

Books Everyone Should Read on Psychology and Behavioral Economics

Earlier this year, a prominent friend of mine was tasked with coming up with a list of behavioral economics book recommendations for the military leaders of a G7 country, and I was on the short email list asked for input.

Yikes.

While I read a lot and I’ve offered up books to sports teams and Fortune 100 management teams, I’ve never contributed to something as broad as educating a nation’s military leaders. While I have a huge behavioral economics reading list, this wasn’t where I started.

Not only did I want to contribute, but I wanted to choose books that these military leaders wouldn’t normally have come across in everyday life. Books they were unlikely to have read. Books that offered perspective.

Given that I couldn’t talk to them outright, I was really trying to answer the question ‘what would I like to communicate to military leaders through non-fiction books?’ There were no easy answers.

I needed to offer something timeless. Not so outside the box that they wouldn’t approach it, and not so hard to find that those purchasing the books would give up and move on to the next one on the list. And it couldn’t be so big that they’d be intimidated by the commitment to read. On top of that, the books needed to start strong because, in my experience of dealing with C-level executives, they stop paying attention after about 20 pages if a book isn’t relevant or challenging them in the right way.

In short, there is no one-size-fits-all, but to make the biggest impact you have to consider all of these factors.

While the justifications for why people chose the books below are confidential, I can tell you what books were on the final email that I saw. I left one book off the list, which I thought was a little too controversial to post.

These books have nothing to do with the military per se; rather, they deal with enduring concepts like ecology, intuition, game theory, strategy, biology, second-order thinking, and behavioral psychology. In short, these books would benefit most people who want to improve their ability to think, which is why I’m sharing them with you.

If you’re so inclined you can try to guess which ones I recommended in the comments. Read wisely.

In no order and with no attribution:

  1. Risk Savvy: How to Make Good Decisions by Gerd Gigerenzer
  2. The Righteous Mind: Why Good People Are Divided by Politics and Religion by Jonathan Haidt
  3. The Checklist Manifesto: How to Get Things Right by Atul Gawande
  4. The Darwin Economy: Liberty, Competition, and the Common Good by Robert H. Frank
  5. David and Goliath: Underdogs, Misfits, and the Art of Battling Giants by Malcolm Gladwell
  6. Predictably Irrational, Revised and Expanded Edition: The Hidden Forces That Shape Our Decisions by Dan Ariely
  7. Thinking, Fast and Slow by Daniel Kahneman
  8. The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life by Robert Trivers
  9. The Hour Between Dog and Wolf: Risk Taking, Gut Feelings and the Biology of Boom and Bust by John Coates
  10. Adapt: Why Success Always Starts with Failure by Tim Harford
  11. The Lessons of History by Will & Ariel Durant
  12. Poor Charlie’s Almanack
  13. Passions Within Reason: The Strategic Role of the Emotions by Robert H. Frank
  14. The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t by Nate Silver
  15. Sex at Dawn: How We Mate, Why We Stray, and What It Means for Modern Relationships by Christopher Ryan & Cacilda Jetha
  16. The Red Queen: Sex and the Evolution of Human Nature by Matt Ridley
  17. Introducing Evolutionary Psychology by Dylan Evans & Oscar Zarate
  18. Filters Against Folly: How To Survive Despite Economists, Ecologists, and the Merely Eloquent by Garrett Hardin
  19. Games of Strategy (Fourth Edition) by Avinash Dixit, Susan Skeath & David H. Reiley, Jr.
  20. The Theory of Political Coalitions by William H. Riker
  21. The Evolution of War and its Cognitive Foundations (PDF) by John Tooby & Leda Cosmides.
  22. Fight the Power: Lanchester’s Laws of Combat in Human Evolution by Dominic D.P. Johnson & Niall J. MacKay.

Opinions and Organizational Theory

When I think about the world in which we live and the organizations in which we work, I can’t help but think that few people have the intellectual honesty, time, and discipline required to hold a view. Considered opinions are a lot of work; that’s why there are so few of them.

We have a bias for action and, equally important, a bias for the appearance of knowledge.

Think about it. When’s the last time you heard someone say “I don’t know”? If my experience is any indication, the higher up the corporate ladder you go, the less likely you are to say or hear those three words.

Too Busy to Think

No one wants to tell the boss they don’t know. The boss certainly doesn’t want to let on that they might not know either. We have too much of our self-worth wrapped up in our profession and others’ opinions of us.

Because we don’t know, we talk in abstractions and fog. The appearance of knowledge becomes our currency.

Who has time to do the work required to hold an opinion? There is always an email to respond to, an urgent request from your boss, paper to move from one side of your desk to the other, and so on. So we don’t do the work. But few others do the work either.

Perhaps an example will help.

At 4:45 p.m. you receive a four-page proposal in your inbox. The proposal is to be decided on the next day at a meeting with 12 people.

To reflect on the proposal seriously, you’d have to stay at work late. You’d need to turn off email and set aside all of your other tasks to read the document from start to finish. And, after all, who has time to read four pages these days? (So we skim.)

If we really wanted to do the work necessary to hold an opinion, we’d have to: read the document from start to finish; talk to anyone we can find about the proposal; listen to arguments from others for and against it; verify the facts; consider our assumptions; talk to someone who has been through something similar before; verify that the framing of the problem is neither too narrow nor too wide; make sure the solution solves the problem; and so on.

So we don’t do the work. Yet we need an opinion for the meeting, or, perhaps more accurately, a sound bite. So we skim the document again looking for something we can support; something that signals we’ve thought about it, despite the fact we haven’t.

We spend the time we should spend learning and understanding something on running around trying to make it look like we know everything. We’re doing work alright: busywork.

We turn up at the meeting the next day to discuss the proposal, but our only real goal is to find a brief pause in the conversation so we can insert our pre-scripted, fact-deficient, obfuscating generality into the conversation. We do, after all, have to maintain appearances.

The proposal ultimately reaches consensus, but this was never really in doubt. If you flip this around for a second, it must have been the easiest decision in the world. Think about it: here you have a room full of smart people all in agreement on the best way to proceed. How often does that happen? A true no-brainer is hardly worth a memo or even a meeting.

It’s easy to agree when no one is thinking.

And organizational incentives encourage this behavior.

If the group makes the right decision, you can take credit. And if, by chance, things go wrong, you don’t get the blame. Group decisions, especially ones reached by consensus, allow all of the participants to share the upside while few, if any, bear the downside.

When groups make decisions based on consensus, no one is really accountable if things go bad. Everyone can weasel out of responsibility (diffusion of responsibility).

When you’re talking to someone familiar with the situation you might say something like, ‘we were all wrong’ or ‘we all thought the same thing.’

When you’re talking to someone unfamiliar with the situation you’d offer something more clever like ‘I thought that decision was wrong but no one would listen to me,’ knowing full well they can’t prove you wrong.

And just like that, one-by-one, everyone in attendance at the meeting is absolved.

The alternative is uncomfortable.

Say, rather than jetting off to pick up the kids at 5, you stay and do the work required to have an opinion. You get home around 11, exhausted, but you now hold a well-thought-out opinion on the proposal. One of two things happens at this point. If you’ve done the work and reached the same conclusion as the proposal, you feel like you just wasted six hours. If, however, you do the work and reach a different conclusion, things get more interesting.

You show up at the meeting and mention that you thought about this and reached a different conclusion: in fact, you’ve determined this proposal doesn’t solve the problem. It’s nothing more than lipstick.

So you speak up. And in the process, you risk being labelled a dissenting troublemaker. Why? Because no one else has done the work.

You might even offer some logical flow for everyone to follow along with your thinking. So you say something along the lines of: “I think a little differently on this. Here is how I see the problem and here are what I think are the governing variables. Here is how I weighed them. And here is how I’d address the main arguments I see against this. … What did I miss?”

In short, you’d expose your thinking and open yourself up. You’d be vulnerable to people who haven’t really done the work.

If you expect them to say, “OK, that sounds good,” you’d be wrong. After all, if they’re so easily swayed by your rational thinking, it looks like they haven’t done the work.

Instead, they need to show they’ve already thought about your reasoning and arguments and formed a different opinion.

Rather than stick to facts, they might respond with hard-to-pin-down jargon or corporate speak; facts will rarely surface in a rebuttal.

You’ll hear something like “that doesn’t account for the synergies” or “that doesn’t line up with the strategic plan (you haven’t seen).” Or maybe they point the finger at their boss who is not in the room: “That’s what I thought too but Doug, oh no, he wants it done this way.”

If you push too far, you won’t be at the next meeting. Everyone knows you’ll do the work, which means they know that by inviting you they’ll be forced to think about things a little more, to anticipate arguments, and so on. In short, inviting you means more work for them. It’s nothing personal.

Gaming the System

Some college students used game theory to get an A by exploiting a loophole in the grading curve.

Catherine Rampell explains:

In several computer science courses at Johns Hopkins University, the grading curve was set by giving the highest score on the final an A, and then adjusting all lower scores accordingly. The students determined that if they collectively boycotted, then the highest score would be a zero, and so everyone would get an A.

Inside Higher Ed writes:

“The students refused to come into the room and take the exam, so we sat there for a while: me on the inside, they on the outside,” [Peter Fröhlich, the professor,] said. “After about 20-30 minutes I would give up…. Then we all left.” The students waited outside the rooms to make sure that others honored the boycott, and were poised to go in if someone had. No one did, though.

Andrew Kelly, a student in Fröhlich’s Introduction to Programming class who was one of the boycott’s key organizers, explained the logic of the students’ decision via e-mail: “Handing out 0’s to your classmates will not improve your performance in this course,” Kelly said.

“So if you can walk in with 100 percent confidence of answering every question correctly, then your payoff would be the same for either decision. Just consider the impact on your other exam performances if you studied for [the final] at the level required to guarantee yourself 100. Otherwise, it’s best to work with your colleagues to ensure a 100 for all and a very pleasant start to the holidays.”

Bayesian Nash equilibria

In this one-off final exam, there are at least two Bayesian Nash equilibria (stable outcomes in which no student has an incentive to change strategy after considering the other students’ strategies). Equilibrium #1 is that no one takes the test, and equilibrium #2 is that everyone takes the test. Both equilibria depend on what all the students believe their peers will do.

If all students believe that everyone will boycott with 100 percent certainty, then everyone should boycott (#1). But if anyone suspects that even one person will break the boycott, then at least someone will break the boycott, and everyone else will update their choices and decide to take the exam (#2).
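As a rough sketch of why the boycott is so fragile, here is a simplified model of the curve. The grading formula below is my assumption for illustration, not the actual Johns Hopkins scheme: your grade is 100 minus the gap between the top score and your own.

```python
# Simplified model of the curved exam (an assumption-laden sketch, not the
# actual grading formula). Grade = 100 - (top score - your score).
def grade(your_score, all_scores):
    return 100 - (max(all_scores) - your_score)

# Equilibrium #1: everyone boycotts, so every score is 0 and everyone gets 100.
scores = [0, 0, 0, 0]
print([grade(s, scores) for s in scores])   # [100, 100, 100, 100]

# If even one student breaks the boycott and scores 60, the boycotters suffer.
scores = [60, 0, 0, 0]
print([grade(s, scores) for s in scores])   # [100, 40, 40, 40]

# Equilibrium #2: everyone takes the exam; no one gains by sitting out,
# because skipping it would only widen the gap to the top score.
```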

Two incomplete thoughts

First, exploiting loopholes invites more rules, laws, and language (to close the previous loopholes), which leads to more complexity. More complexity, in turn, leads to more loopholes (among other things). … you see where this is going.

Second, ‘gaming the system’ is game theory in action. What’s best for you, the individual (or in this case, a small group), may not be best for society.

Today’s college kids are tomorrow’s bankers and CEOs. Just because you can do something doesn’t mean you should.

Update (via metafilter): In 2009, Peter Fröhlich, the instructor mentioned above, published Game Design: Tricking Students into Learning More.

Still curious? Learn more about game theory with the Prisoners’ Dilemma.

Mental Model: Game Theory

From Game Theory, by Morton Davis:

The theory of games is a theory of decision making. It considers how one should make decisions and to a lesser extent, how one does make them. You make a number of decisions every day. Some involve deep thought, while others are almost automatic. Your decisions are linked to your goals—if you know the consequences of each of your options, the solution is easy. Decide where you want to be and choose the path that takes you there. When you enter an elevator with a particular floor in mind (your goal), you push the button (one of your choices) that corresponds to your floor. Building a bridge involves more complex decisions but, to a competent engineer, is no different in principle. The engineer calculates the greatest load the bridge is expected to bear and designs a bridge to withstand it. When chance plays a role, however, decisions are harder to make. … Game theory was designed as a decision-making tool to be used in more complex situations, situations in which chance and your choice are not the only factors operating. … (Game theory problems) differ from the problems described earlier—building a bridge and installing telephones—in one essential respect: While decision makers are trying to manipulate their environment, their environment is trying to manipulate them. A store owner who lowers her price to gain a larger share of the market must know that her competitors will react in kind. … Because everyone’s strategy affects the outcome, a player must worry about what everyone else does and knows that everyone else is worrying about him or her.

What is a game? From Game Theory and Strategy:

Game theory is the logical analysis of situations of conflict and cooperation. More specifically, a game is defined to be any situation in which:

  1. There are at least two players. A player may be an individual, but it may also be a more general entity like a company, a nation, or even a biological species.
  2. Each player has a number of possible strategies, courses of action which he or she may choose to follow.
  3. The strategies chosen by each player determine the outcome of the game.
  4. Associated to each possible outcome of the game is a collection of numerical payoffs, one to each player. These payoffs represent the value of the outcome to the different players.

…Game theory is the study of how players should rationally play games. Each player would like the game to end in an outcome which gives him as large a payoff as possible.
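The four-part definition quoted above maps naturally onto a data structure. Here is a minimal sketch in Python; the structure and names are my own illustration, not from the book:

```python
# Minimal representation of a finite two-player game, mirroring the quoted
# definition: players, strategies, outcomes, and numerical payoffs.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Game:
    players: List[str]                                   # 1. at least two players
    strategies: Dict[str, List[str]]                     # 2. each player's options
    payoffs: Dict[Tuple[str, str], Tuple[float, float]]  # 3-4. outcome -> payoffs

# The Prisoner's Dilemma as an instance; payoffs are negative years in prison,
# so a larger number is a better outcome for that player.
prisoners_dilemma = Game(
    players=["A", "B"],
    strategies={"A": ["cooperate", "defect"], "B": ["cooperate", "defect"]},
    payoffs={("cooperate", "cooperate"): (-1, -1), ("cooperate", "defect"): (-3, 0),
             ("defect", "cooperate"): (0, -3), ("defect", "defect"): (-2, -2)},
)
```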

From Greg Mankiw’s Economics textbook:

Game theory is the study of how people behave in strategic situations. By ‘strategic’ we mean a situation in which each person, when deciding what actions to take, must consider how others might respond to that action. Because the number of firms in an oligopolistic market is small, each firm must act strategically. Each firm knows that its profit depends not only on how much it produces but also on how much the other firms produce. In making its production decision, each firm in an oligopoly should consider how its decision might affect the production decisions of all other firms.

Game theory is not necessary for understanding competitive or monopoly markets. In a competitive market, each firm is so small compared to the market that strategic interactions with other firms are not important. In a monopolized market, strategic interactions are absent because the market has only one firm. But, as we will see, game theory is quite useful for understanding the behavior of oligopolies.

A particularly important ‘game’ is called the prisoners’ dilemma.

Markets with only a few sellers

Because an oligopolistic market has only a small group of sellers, a key feature of oligopoly is the tension between cooperation and self-interest. The oligopolists are best off when they cooperate and act like a monopolist – producing a small quantity of output and charging a price above marginal cost. Yet because each oligopolist cares only about its own profit, there are powerful incentives at work that hinder a group of firms from maintaining the cooperative outcome.

Avinash Dixit and Barry Nalebuff, in their book "Thinking Strategically," offer:

Everyone’s best choice depends on what others are going to do, whether it’s going to war or maneuvering in a traffic jam.

These situations, in which people’s choices depend on the behavior or the choices of other people, are the ones that usually don’t permit any simple summation. Rather we have to look at the system of interaction.

Michael J. Mauboussin relates game theory to firm interaction:

How a firm interacts with other firms plays an important role in shaping sustainable value creation. Here we not only consider how companies interact with their competitors, but how companies can co-evolve.

Game Theory is one of the best tools to understand interaction. Game Theory forces managers to put themselves in the shoes of other players rather than viewing games solely from their own perspective.

The classic two-player example of game theory is the prisoners’ dilemma.

Game Theory is part of the Farnam Street latticework of Mental Models. See all posts on game theory.