Game Theory Part 3: Where Systems Settle and How to Move Them

Claude

2026/02/16

Tags: game-theory, mental-models

Why do bad situations persist even when everyone knows they’re bad? Why do markets fail? Why does limiting your own options sometimes give you more power? Game theory’s deeper concepts — equilibrium, dominant strategies, information asymmetry, and commitment — answer these questions. They reveal not just where systems settle, but how to change where they land.

Nash Equilibrium: Where Systems Get Stuck

A Nash equilibrium is a state where no player can improve their outcome by changing their strategy alone, assuming everyone else stays the same. Named after mathematician John Nash, it answers: given that everyone is acting strategically, where does the system end up?

The crucial insight: Nash equilibrium doesn’t mean the best outcome — it means the stable outcome. Systems can settle into equilibria that are terrible for everyone.
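The definition is mechanical enough to check by brute force. Here is a minimal sketch in Python, using the standard prisoner's dilemma numbers (assumed for illustration, not from the article) to show a stable-but-bad equilibrium:

```python
from itertools import product

def is_nash(payoffs, profile):
    """True if neither player gains by unilaterally deviating from `profile`.

    payoffs[player][r][c] is that player's payoff when player 0 plays
    row r and player 1 plays column c; profile is a (row, col) pair.
    """
    r, c = profile
    n_rows, n_cols = len(payoffs[0]), len(payoffs[0][0])
    if any(payoffs[0][r2][c] > payoffs[0][r][c] for r2 in range(n_rows)):
        return False   # player 0 would switch rows
    if any(payoffs[1][r][c2] > payoffs[1][r][c] for c2 in range(n_cols)):
        return False   # player 1 would switch columns
    return True

def pure_nash_equilibria(payoffs):
    n_rows, n_cols = len(payoffs[0]), len(payoffs[0][0])
    return [p for p in product(range(n_rows), range(n_cols)) if is_nash(payoffs, p)]

# Prisoner's dilemma, strategy 0 = stay silent, 1 = confess; payoffs are
# years avoided, so higher is better (standard textbook numbers).
pd = [
    [[3, 0], [5, 1]],   # player 0
    [[3, 5], [0, 1]],   # player 1
]
print(pure_nash_equilibria(pd))  # [(1, 1)]: mutual confession, stable but bad
```

The only equilibrium is mutual confession, even though mutual silence pays both players more: stability and quality are different things.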

Bad Equilibria You Can Recognize

Traffic congestion. Everyone chooses the fastest route given current traffic. The result is an equilibrium: no one can improve their commute by switching routes alone, yet the outcome is worse for everyone than coordinated routing would be. Braess’s paradox pushes this further: adding a new road can actually increase total congestion because of how the equilibrium shifts.
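The paradox is concrete enough to verify with arithmetic. In the standard textbook network (numbers assumed, not real traffic data), 4000 drivers travel from S to E over two symmetric routes, and a congestion-dependent link takes n/100 minutes when n drivers use it:

```python
# Braess's classic example: routes S->A->E and S->B->E.
#   S->A takes n/100 minutes (congestion-dependent), A->E takes a fixed 45.
#   S->B takes a fixed 45, B->E takes n/100.
N = 4000

# Without the new road, the symmetric equilibrium splits traffic evenly.
per_route = N // 2
time_before = per_route / 100 + 45   # 2000/100 + 45 = 65 minutes per driver

# Now add a zero-minute A->B shortcut. S->A costs at most 4000/100 = 40 < 45,
# and likewise B->E, so the route S->A->B->E is never worse than either
# original route. In equilibrium every driver takes it:
time_after = N / 100 + 0 + N / 100   # 40 + 0 + 40 = 80 minutes per driver

print(time_before, time_after)  # 65.0 80.0: the new road made everyone slower
```

No driver can do better by switching back (either old route now costs 85 minutes), so the slower outcome is the new equilibrium.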

Salary transparency norms. In most industries, not discussing salary is an equilibrium. If you’re the only one sharing, you’re vulnerable. But if everyone shared, workers would have better negotiating power. The secrecy equilibrium is bad for workers but stable — no individual benefits from breaking it alone.

Arms races. Both the US and USSR building nuclear arsenals was a Nash equilibrium. Neither could safely disarm unilaterally. The equilibrium was mutually destructive but stable.

Mutual fund fees. Before Vanguard, high management fees were an equilibrium. No single fund had incentive to cut fees since investors had nowhere cheaper to go. Vanguard disrupted the equilibrium by changing the game entirely.

Escaping Bad Equilibria

When you see a bad equilibrium, there are only a few ways out:

  1. Change the rules — regulation, contracts, new incentive structures
  2. Change the information — make hidden things visible
  3. Coordinate collective action — get everyone to move together
  4. Introduce a new player — someone who changes the game structure (like Vanguard did with index funds)

Dominant Strategy: When the Decision Is Simple

A dominant strategy gives you the best outcome no matter what the other players do. When one exists, the decision is easy — no need to predict or outthink anyone.

Strictly dominant: always better than every alternative, regardless of what others do.

Weakly dominant: always at least as good, and sometimes better.

Dominated strategy: always worse than another option; never play it.

Examples of Dominant Strategies

Defection in a one-shot prisoner’s dilemma. Whether the other player confesses or stays silent, you’re better off confessing. That’s what makes the dilemma so powerful — the dominant strategy leads to a bad collective outcome.
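The claim that defection dominates can be verified mechanically. A rough sketch (standard payoff numbers, assumed for illustration) that searches a two-player payoff matrix for a strictly dominant strategy:

```python
def strictly_dominant(payoffs, player):
    """Return the index of `player`'s strictly dominant strategy, or None.

    payoffs[player][r][c] is that player's payoff when player 0 plays
    row r and player 1 plays column c.
    """
    n_rows, n_cols = len(payoffs[0]), len(payoffs[0][0])
    own = range(n_rows) if player == 0 else range(n_cols)
    other = range(n_cols) if player == 0 else range(n_rows)

    def pay(mine, theirs):
        return payoffs[player][mine][theirs] if player == 0 else payoffs[player][theirs][mine]

    for s in own:
        # s is strictly dominant if it beats every alternative against
        # every possible move by the other player.
        if all(pay(s, t) > pay(alt, t) for alt in own if alt != s for t in other):
            return s
    return None

# Prisoner's dilemma, strategy 0 = stay silent, 1 = confess; payoffs are
# years avoided, so higher is better (standard textbook numbers).
pd = [
    [[3, 0], [5, 1]],   # player 0
    [[3, 5], [0, 1]],   # player 1
]
print(strictly_dominant(pd, 0), strictly_dominant(pd, 1))  # 1 1: both confess
```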

Diversification in investing. Spreading investments across uncorrelated assets is close to dominant. Whether markets go up, down, or sideways, diversification reduces risk without proportionally reducing expected returns. Dalio’s “All-Weather” portfolio is essentially a dominant strategy argument.

Honesty in iterated games. In repeated contexts, truth-telling is close to dominant. If the other person is honest, honesty builds trust. If they’re dishonest, your honesty exposes their dishonesty. Long-term returns favor truth-telling.
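The long-term claim is easy to sanity-check with a toy iterated prisoner's dilemma, treating cooperation as honesty (payoff numbers are the standard ones, assumed for illustration):

```python
# 'C' = cooperate (honest), 'D' = defect. Standard payoffs: mutual cooperation
# 3 each, mutual defection 1 each, lone defector 5, exploited cooperator 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move.
    return their_hist[-1] if their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual punishment
print(play(tit_for_tat, always_defect))    # (99, 104): defection edges out head-to-head
```

Tit-for-tat loses slightly in a head-to-head with a pure defector, but a pair of reciprocators far outscores a pair of defectors, which is the sense in which honesty wins over the long run.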

When No Dominant Strategy Exists

Most real-world situations don’t have a dominant strategy. When there isn’t one:

  1. Eliminate dominated strategies — things that are always worse
  2. Think about what others will likely do
  3. Find the Nash equilibrium
  4. Consider if you can change the game
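Step 1 can be pushed surprisingly far: repeatedly deleting strictly dominated strategies sometimes solves a game outright. A rough sketch (the 3x3 payoffs below are made up for illustration):

```python
def eliminate_dominated(payoffs):
    """Iteratively delete strictly dominated pure strategies.

    payoffs[player][r][c] is that player's payoff when player 0 plays
    row r and player 1 plays column c. Returns the surviving indices.
    """
    rows = list(range(len(payoffs[0])))
    cols = list(range(len(payoffs[0][0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            # Row r is dominated if another row beats it against every surviving column.
            if any(all(payoffs[0][r2][c] > payoffs[0][r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:
            if any(all(payoffs[1][r][c2] > payoffs[1][r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# A 3x3 game (made-up payoffs) that elimination solves completely.
g = [
    [[4, 3, 8], [5, 4, 2], [1, 2, 0]],   # player 0 (row chooser)
    [[5, 4, 1], [5, 6, 2], [2, 3, 0]],   # player 1 (column chooser)
]
print(eliminate_dominated(g))  # ([1], [1]): a single strategy pair survives
```

Each deletion can expose new dominated strategies, which is why the loop repeats until nothing changes.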

When Munger says he looks for “no-brainer” decisions, he’s often describing situations with a dominant strategy — the choice is clear regardless of uncertainty.


Information Asymmetry: When One Side Knows More

When one player knows more than another, it fundamentally changes the game. Two problems emerge:

Adverse selection (before the deal): one side knows more about quality than the other, so the uninformed side attracts bad counterparties and drives away good ones.

Moral hazard (after the deal): the informed party changes behavior because the other side can’t observe them.

The Lemon Problem

George Akerlof’s Nobel Prize-winning insight: when sellers know the quality of their product but buyers can’t tell good from bad, the market degrades.

  1. Sellers of good products want high prices
  2. Sellers of bad products accept low prices
  3. Buyers can’t distinguish, so they offer average prices
  4. Good-product sellers leave (price too low)
  5. Only bad-product sellers remain — “lemons”
  6. Market quality collapses

This is why used cars lose value the moment they leave the lot — the price drop reflects buyers’ uncertainty about quality. Solutions like warranties, CarFax reports, and certified pre-owned programs are all attempts to reduce information asymmetry.
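The unraveling is easy to simulate. In this toy version (all numbers assumed), quality q is uniform on [0, 1], a seller values a car at q, and a buyer values it at 1.5q but can only price off the average quality of cars actually offered:

```python
# Akerlof-style unraveling in a toy used-car market (all numbers assumed).
def lemon_market(rounds=20):
    price = 1.5 * 0.5   # buyers start by pricing the overall average quality
    for _ in range(rounds):
        # At this price, only sellers with q <= price are willing to sell,
        # so the average quality on offer drops to price / 2 ...
        avg_quality_offered = min(price, 1.0) / 2
        # ... and buyers rationally re-anchor to what is actually for sale.
        price = 1.5 * avg_quality_offered
    return price

print(lemon_market())  # the price spirals toward zero: the market unravels
```

Each round the price falls by a factor of 0.75, because the best cars keep exiting. Warranties and inspections work by breaking this feedback loop.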

Moral Hazard in Action

Health insurance. People who know they’re sick are most motivated to buy insurance (adverse selection). Once insured, some take less care of their health (moral hazard). Deductibles, co-pays, and health screenings exist to fight this.

Startup investing. Founders know far more about their company than investors. This is why VCs rely on track record, references, and due diligence — the investor can’t easily distinguish good from bad founders.

Five Solutions to Information Asymmetry

  1. Signaling — The informed party reveals credible information (education, warranties, certifications)
  2. Screening — The uninformed party designs tests (interviews, probation periods, deductibles)
  3. Reputation — Repeated interactions create track records
  4. Regulation — Mandatory disclosure, auditing, consumer protection
  5. Technology — Reviews, ratings, tools that make hidden information visible

Buffett and Munger’s “circle of competence” concept is fundamentally about knowing where your information advantage lies — and staying within those boundaries.


Commitment: Power Through Limiting Your Options

One of game theory’s most counterintuitive insights: you can gain power by limiting your own options. When you credibly commit to a course of action, you change the game by making your threats and promises believable.

An empty threat is worthless. A threat you can’t back out of is powerful.

What Makes Commitment Work

For a commitment to be effective:

  1. Visible — The other party must know about it
  2. Irreversible (or very costly to reverse) — You can’t easily back out
  3. Clear — No ambiguity about what you’ve committed to
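These three conditions show up clearly in a toy entry-deterrence game (payoffs invented for illustration), solved by backward induction:

```python
# Payoffs are (entrant, incumbent). The entrant moves first; the incumbent
# then picks its best response, and the entrant anticipates this.
STAY_OUT = (0, 4)                                  # no entry: incumbent keeps the market
ENTER = {'accommodate': (2, 2), 'fight': (-1, 1)}  # outcomes if entry happens

def play(incumbent_options):
    # Backward induction: the incumbent picks the reply that maximizes its
    # own payoff among the options it has left itself.
    reply = max(incumbent_options, key=lambda a: ENTER[a][1])
    if ENTER[reply][0] > STAY_OUT[0]:
        return 'enter', ENTER[reply]
    return 'stay out', STAY_OUT

# Without commitment, "fight" is an empty threat: accommodating pays more,
# the entrant knows it, and entry happens.
print(play(['accommodate', 'fight']))   # ('enter', (2, 2))

# Visibly and irreversibly discarding the accommodate option makes the
# threat credible, deters entry, and leaves the incumbent better off (4 > 2).
print(play(['fight']))                  # ('stay out', (0, 4))
```

Removing the better reply raises the incumbent's payoff from 2 to 4, which is the whole counterintuitive point: fewer options, more power.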

Commitment in History and Practice

Hernán Cortés burning his ships (1519). When Cortés landed in Mexico, he burned his ships so his soldiers couldn’t retreat. His men knew the only way out was forward. The Aztecs also knew the Spanish couldn’t retreat. It changed the game entirely.

Nuclear deterrence (MAD). Mutually Assured Destruction works as a commitment device. By building systems that automatically launch retaliatory strikes, a nation makes its threat credible. The whole point is that you can’t back down — which prevents the attack in the first place.

Odysseus and the Sirens. Odysseus wanted to hear the Sirens’ song but knew he’d steer toward them. Solution: he had his crew tie him to the mast. He committed to not acting on temptation by removing his ability to do so — the original commitment device.

BATNA in negotiation. If you’ve developed a strong alternative deal, you can credibly threaten to walk away. Dalio emphasizes never negotiating from desperation — having options is a commitment to your standards.

Automatic savings. Setting up automatic 401(k) contributions is a commitment device. You can undo it, but inertia works in your favor. You’ve committed your future self to saving.

When Commitment Backfires

Commitment cuts both ways. A commitment that can’t adapt locks you into a bad course when circumstances change: the ships stay burned even if retreat becomes the right move. A threat meant to deter can provoke escalation instead of compliance. And an invisible commitment deters no one, which is the dark joke of Dr. Strangelove: the Soviet doomsday machine fails precisely because it was kept secret.

Bringing It Together

These four concepts form a toolkit for understanding any strategic situation:

  1. Nash equilibrium tells you where the system will settle — and whether it’s a good or bad place
  2. Dominant strategy tells you when the choice is simple — and when it’s not
  3. Information asymmetry tells you who has the advantage — and how to close the gap
  4. Commitment tells you how to change the game — by credibly constraining yourself

The thread connecting all of game theory is this: the world doesn’t respond passively to your actions. Other players are strategic too. The frameworks aren’t complicated — but they turn naive thinking into strategic thinking, and that changes outcomes.