This Wednesday Dan Bryce will be hosting Mike Goodrich from BYU. Mike will give a talk during 5600 (11:30 to 12:20) in Main 117 on his work on UAVs for disaster relief.


Normal-form games

Rock-paper-scissors

          Rock     Paper    Scissors
Rock      0, 0     -1, 1     1, -1
Paper     1, -1     0, 0    -1, 1
Scissors -1, 1      1, -1    0, 0

The row player (aka player 1) chooses a row; the column player (aka player 2) simultaneously chooses a column. A row or column is called an action or (pure) strategy. The row player's utility is always listed first, the column player's second. Zero-sum game: the utilities in each entry sum to 0 (or a constant). A three-player game would be a 3D table with 3 utilities per entry, etc.

“Chicken”

        D        S
D      0, 0    -1, 1
S      1, -1   -5, -5

Two players drive cars towards each other. If one player goes straight (S) while the other dodges (D), the one who went straight wins. If both go straight, they both die. Not zero-sum.

Dominance

Player i's strategy s_i strictly dominates s_i' if:
–for any s_-i, u_i(s_i, s_-i) > u_i(s_i', s_-i)
s_i weakly dominates s_i' if:
–for any s_-i, u_i(s_i, s_-i) ≥ u_i(s_i', s_-i); and
–for some s_-i, u_i(s_i, s_-i) > u_i(s_i', s_-i)
(-i = “the player(s) other than i”)

Example (row player's payoffs listed first):

         L       R
Up       0, 0    1, -1
Middle  -1, 1    0, 0
Down    -1, 1    1, -1

Up strictly dominates Middle; Up weakly dominates Down.

Prisoner's Dilemma

                confess   don't confess
confess         -2, -2     0, -3
don't confess   -3, 0     -1, -1

A pair of criminals has been caught. The district attorney has evidence to convict them of a minor crime (1 year in jail); he knows that they committed a major crime together (3 years in jail) but cannot prove it. He offers them a deal:
–If both confess to the major crime, they each get a 1 year reduction
–If only one confesses, that one gets a 3 year reduction

“Should I buy an SUV?”

Each payoff is -(purchasing cost + accident cost). Purchasing cost: 5 for an SUV, 3 for another car. Accident cost: 5 if both drive the same kind of car; in a mixed crash, 2 for the SUV driver and 8 for the other driver.

          SUV        other
SUV     -10, -10    -7, -11
other   -11, -7     -8, -8

Buying an SUV dominates buying the other car.

Checking for dominance by mixed strategies

Linear program (LP) for checking whether strategy s_i* is strictly dominated by a mixed strategy: select the p_{s_i} to
maximize ε such that:
–for any s_-i: Σ_{s_i} p_{s_i} u_i(s_i, s_-i) ≥ u_i(s_i*, s_-i) + ε
–Σ_{s_i} p_{s_i} = 1

Linear program for checking whether strategy s_i* is weakly dominated by a mixed strategy:
maximize Σ_{s_-i} [(Σ_{s_i} p_{s_i} u_i(s_i, s_-i)) - u_i(s_i*, s_-i)] such that:
–for any s_-i: Σ_{s_i} p_{s_i} u_i(s_i, s_-i) ≥ u_i(s_i*, s_-i)
–Σ_{s_i} p_{s_i} = 1

Removing weakly or very weakly dominated strategies CAN eliminate some equilibria of the original game. There is still a computational benefit, though: no new equilibria are created, and at least one of the original equilibria always survives.

Dominated by a mixed strategy

If a pure strategy is dominated by a mixed strategy, it may not be immediately obvious. Below, we show only the row player's payoffs. What is dominated?

     a   b   c
A    5   1   6
B    2   2   2
C    1   6   6

(B is dominated by a mixture of A and C, even though neither A nor C dominates B on its own.)
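The strict-dominance LP above can be sketched with scipy.optimize.linprog (an illustration, not a tool from the slides). Applied to this table, it finds a mixture of A and C that strictly dominates B:

```python
import numpy as np
from scipy.optimize import linprog

# Row player's payoffs from the table above (rows A, B, C vs columns a, b, c).
U = np.array([[5.0, 1.0, 6.0],
              [2.0, 2.0, 2.0],
              [1.0, 6.0, 6.0]])

def strictly_dominated_by_mixture(U, target):
    """Maximize epsilon s.t. some mixture of the other rows beats row `target`
    by at least epsilon against every column; epsilon > 0 means dominated."""
    others = [r for r in range(U.shape[0]) if r != target]
    n = len(others)
    # Variables: mixture probabilities p (one per other row), then epsilon.
    c = np.zeros(n + 1)
    c[-1] = -1.0                      # linprog minimizes, so maximize epsilon
    # For each column j: U[target, j] + eps - sum_r p_r U[r, j] <= 0
    A_ub = np.hstack([-U[others].T, np.ones((U.shape[1], 1))])
    b_ub = -U[target]
    A_eq = np.zeros((1, n + 1)); A_eq[0, :n] = 1.0   # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(None, None)])
    return res.x[:n], res.x[-1]

mix, eps = strictly_dominated_by_mixture(U, target=1)   # row B
print(mix, eps)   # eps > 0, so B is strictly dominated (mixture ~5/9 A + 4/9 C)
```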

Iterated dominance

Iterated dominance: remove a (strictly/weakly) dominated strategy, and repeat.

A different version of rock-paper-scissors (Seinfeld's RPS): rock drops from a large height and punches a hole in paper, so nothing beats a rock. Row payoffs listed first:

          Rock     Paper    Scissors
Rock      0, 0     1, -1    1, -1
Paper    -1, 1     0, 0    -1, 1
Scissors -1, 1     1, -1    0, 0

Iterated strict dominance: Rock strictly dominates Paper; once Paper is removed (for both players), Rock strictly dominates Scissors, leaving only (Rock, Rock).
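The elimination procedure above can be sketched in a few lines (illustration only; it checks dominance by pure strategies, not mixtures):

```python
import numpy as np

def iterated_strict_dominance(U1, U2):
    """Repeatedly delete pure strategies strictly dominated by another pure
    strategy, for both players of a two-player normal-form game."""
    rows = list(range(U1.shape[0]))
    cols = list(range(U1.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:    # a row is removed if some other row beats it everywhere
            if any(all(U1[r2, c] > U1[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:    # same for columns, using the column player's payoffs
            if any(all(U2[r, c2] > U2[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# Seinfeld rock-paper-scissors: nothing beats a rock (row payoffs; zero-sum).
U1 = np.array([[0, 1, 1],
               [-1, 0, -1],
               [-1, 1, 0]])
print(iterated_strict_dominance(U1, -U1))   # only Rock (index 0) survives for both
```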

Terms

zero sum: u_1(o) = -u_2(o)
constant sum: u_1(o) + u_2(o) = c
common payoff (pure coordination or team games): for the same outcome, agents get the same payoff, u_1(o) = u_2(o)
Expected utility of a mixed strategy: the sum, over outcomes, of the probability of each outcome times the utility of that outcome.

Terms

positive affine transformation: for a positive constant a and offset b, x → ax + b. Ratios of distances along a line are preserved. If we transform the payoffs with a positive affine transformation, the solution doesn't change.
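A quick numerical check of this invariance (payoff matrix and mix are made up for illustration): the best response to a fixed opponent mix is unchanged when all payoffs are rescaled and shifted.

```python
import numpy as np

# Row player's payoffs; the best response to a fixed column mix is invariant
# under a positive affine transformation u -> a*u + b with a > 0.
U = np.array([[3.0, -1.0],
              [0.0,  2.0]])
col_mix = np.array([0.4, 0.6])

def best_response(U, col_mix):
    # argmax over pure strategies of expected utility against the mix
    return int(np.argmax(U @ col_mix))

assert best_response(U, col_mix) == best_response(2.5 * U + 7.0, col_mix)
```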

Can some outcomes be declared to be better? It might be tempting to try to maximize social welfare, but any positive affine transformation applied to the payoffs yields another valid utility function. So simply maximizing social welfare is problematic if agents' utilities are on different scales. Pareto dominance gives us a partial order over strategy profiles: clearly (4, 5) > (3, 4), BUT which is better, (4, 6) or (9, 2)?

Iterated dominance: path (in)dependence

         L      R
Up      0, 1   0, 0
Middle  1, 0   1, 0
Down    0, 0   0, 1

Iterated weak dominance is path-dependent: the particular sequence of eliminations may determine which solution we get (if any), whether or not dominance by mixed strategies is allowed. (In the game above, Middle dominates both Up and Down; depending on whether Up or Down is eliminated first, a different column becomes weakly dominated.)

Iterated strict dominance is path-independent: the elimination process will always terminate at the same point (whether or not dominance by mixed strategies is allowed).

Two computational questions for iterated dominance

1. Can a given strategy be eliminated using iterated dominance?
2. Is there some path of elimination by iterated dominance such that only one strategy per player remains?

For strict dominance (with or without dominance by mixed strategies), both can be solved in polynomial time due to path-independence: check if any strategy is dominated, remove it, repeat.
For weak dominance, both questions are NP-hard (even when all utilities are 0 or 1), with or without dominance by mixed strategies [Conitzer, Sandholm 05]; a weaker version was proved by [Gilboa, Kalai, Zemel 93].
Exercise: explain why the problems are NP-hard.

Lots of complexity issues

Issues of complexity are often stated. The Shoham and Leyton-Brown text (link on the website) discusses many of these, for those of you who are interested.

Can you explain?

If I am mixing three strategies, you will be mixing three strategies. Why?
To test a mixed strategy, you check what each of the pure strategies in the opponent's support does against it. They need to yield the same expected payoff. Why don't you have to check all the mixed strategies?

Two-player zero-sum games revisited

          Rock     Paper    Scissors
Rock      0, 0     -1, 1     1, -1
Paper     1, -1     0, 0    -1, 1
Scissors -1, 1      1, -1    0, 0

Recall: in a zero-sum game, the payoffs in each entry sum to zero, or to a constant (recall that we can subtract a constant from anyone's utility function without affecting their behavior). What the one player gains, the other player loses.

Two-player general-sum game

(The example payoff table lists three utilities per entry, e.g. (2, 1, -3), summing to zero: the third number is a dummy player's payoff. The full table is garbled in the transcript.)

Note: a general-sum k-player game can be modeled as a zero-sum (k+1)-player game by adding a dummy player (with no choices) absorbing the remaining utility. So zero-sum games with 3 or more players have to deal with the difficulties of general-sum games; this is why we focus on 2-player zero-sum games here.

Best-response strategies

Suppose you know your opponent's mixed strategy, e.g., your opponent plays rock 50% of the time and scissors 50%. What is the best strategy for you to play?
–Rock gives .5*0 + .5*1 = .5
–Paper gives .5*1 + .5*(-1) = 0
–Scissors gives .5*(-1) + .5*0 = -.5
So the best response to this opponent strategy is to (always) play rock.
There is always some pure strategy that is a best response (given that you know the opponent's strategy): if you have a mixed strategy that is a best response, then every one of the pure strategies that the mixed strategy places positive probability on must also be a best response.
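The calculation above is just a matrix-vector product (sketch, assuming numpy):

```python
import numpy as np

# Row payoffs for rock-paper-scissors (rows/cols ordered R, P, S).
U = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])
opponent = np.array([0.5, 0.0, 0.5])   # rock 50%, scissors 50%

expected = U @ opponent   # expected utility of each of our pure strategies
print(expected)           # [0.5, 0.0, -0.5] -> rock is the best response
```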

Support of σ_i

The support is the set of all actions a_k in the mixed strategy for which p_k > 0.
Mixed-strategy Nash equilibria are necessarily weak (SLB defn 3.3.6). Why? Recall that a strict NE is one where, if you deviate from the strategy, you do strictly worse.

How to play matching pennies

              Them
              L       R
Us     L    1, -1   -1, 1
       R   -1, 1     1, -1

Assume the opponent knows our mixed strategy. If we play L 60%, R 40%, the opponent will play R, and we get .6*(-1) + .4*(1) = -.2. What's optimal for us? What about rock-paper-scissors?
With a NE, it doesn't matter if they know our strategy in advance.

Matching pennies with a sensitive target

              Them
              L       R
Us     L    1, -1   -1, 1
       R   -2, 2     1, -1

If we play 50% L, 50% R, the opponent will attack L, and we get .5*(1) + .5*(-2) = -.5.
What if we play 55% L, 45% R? The opponent has a choice between:
–L: gives them .55*(-1) + .45*(2) = .35
–R: gives them .55*(1) + .45*(-1) = .1
So they attack L, and we get -.35 > -.5.

How do we compute p (the probability we select L)?

                  Them
                  L       R
Us     L (p)    1, -1   -1, 1
       R (1-p) -2, 2     1, -1

Pick p so that our opponent has no preference between their choices:
-1(p) + 2(1-p) = 1(p) - 1(1-p)
2 - 3p = 2p - 1
3 = 5p
p = .6
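For any 2x2 game, this indifference equation has a closed form; a sketch (using numpy only for the array, and assuming an interior solution exists):

```python
import numpy as np

# Opponent's payoffs in the sensitive-target matching-pennies game:
# V[i, j] = column player's payoff when we play row i and they play column j.
V = np.array([[-1.0, 1.0],
              [ 2.0, -1.0]])

# Choose p = P(we play L) so the opponent is indifferent between columns:
#   p*V[0,0] + (1-p)*V[1,0] = p*V[0,1] + (1-p)*V[1,1]
# Rearranging: p*(V[0,0] - V[1,0] - V[0,1] + V[1,1]) = V[1,1] - V[1,0]
p = (V[1, 1] - V[1, 0]) / (V[0, 0] - V[1, 0] - V[0, 1] + V[1, 1])
print(p)   # 0.6, matching the slide's calculation
```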

Matching pennies with a sensitive target

              Them
              L       R
Us     L    1, -1   -1, 1    (our min is -1)
       R   -2, 2     1, -1   (our min is -2)

What if we play 60% L, 40% R? The opponent has a choice between:
–L: gives them .6*(-1) + .4*(2) = .2
–R: gives them .6*(1) + .4*(-1) = .2
We get -.2 either way, and a mixture of L and R (by our opponent) gives us the same thing as well!
This is the maxmin strategy: we don't know what the opponent will do, but we assume they will try to minimize our utility, so we pick the strategy that maximizes our minimum.

Let's change roles

              Them
              L       R
Us     L    1, -1   -1, 1    (max for them is 1)
       R   -2, 2     1, -1   (max for them is 2)

Suppose we know their strategy. If they play 50% L, 50% R, we play L and get .5*(1) + .5*(-1) = 0. If they play 40% L, 60% R:
–If we play L, we get .4*(1) + .6*(-1) = -.2
–If we play R, we get .4*(-2) + .6*(1) = -.2
This is the minmax strategy: a player picks their strategy to minimize the maximum of what the opponent earns.
von Neumann's minimax theorem [1927]: maxmin value = minmax value (~LP duality) for zero-sum games.

Can you find?

Can you find a zero-sum game in which maxmin (I pick a strategy knowing they will try to minimize what I get) and minmax (I try to minimize the maximum of what they get, and then I get the opposite) lead to different outcome values?

Minimax theorem [von Neumann 1927]

Maximin utility: max_{σ_i} min_{s_-i} u_i(σ_i, s_-i)  (= - min_{σ_i} max_{s_-i} u_-i(σ_i, s_-i))
Minimax utility: min_{σ_-i} max_{s_i} u_i(s_i, σ_-i)  (= - max_{σ_-i} min_{s_i} u_-i(s_i, σ_-i))
Minimax theorem: in any finite two-player zero-sum game, any Nash equilibrium gives player i a payoff equal to
max_{σ_i} min_{s_-i} u_i(σ_i, s_-i) = min_{σ_-i} max_{s_i} u_i(s_i, σ_-i)

Example

       a       b
A    2, -2    0, 0
B    1, -1   -2, 2

Find the maxmin and minmax strategies.

Maxmin is i's best choice when i must first commit to a (possibly mixed) strategy and then the remaining agents observe the strategy (but not the actual choice) and choose to minimize i's payoff.

Example

       a       b
A    2, 1    0, 0
B    0, 0    1, 2

The mixed strategy yields an equilibrium: the row player plays (2/3, 1/3), the column player (1/3, 2/3). Then 2/3 is the row player's value, and deviating doesn't help. If I committed to pure A, the opponent could get 1; if I committed to pure B, the opponent could get 2.

Practice games

What is the mixed equilibrium?

        X         Y         Z
U    20, -20     0, 0    10, -10
D     0, 0     10, -10    8, -8

Practice games

If a strategy isn't ever a best response, don't play it in the mix (here, column Z is never a best response). Support sizes are then the same for each player:

            X (1/3)    Y (2/3)    Z (0)
U (1/3)    20, -20      0, 0     10, -10
D (2/3)     0, 0      10, -10     8, -8

Solving for minimax strategies using linear programming

maximize u_i
subject to:
–for any s_-i: Σ_{s_i} p_{s_i} u_i(s_i, s_-i) ≥ u_i
–Σ_{s_i} p_{s_i} = 1

One can also convert linear programs to two-player zero-sum games, so the two problems are equivalent. Notice that the LP naturally provides for mixed strategies.
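This LP can be sketched with scipy.optimize.linprog (an illustration, not part of the slides); on rock-paper-scissors it recovers the uniform maximin strategy with value 0:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(U):
    """Maximin mixed strategy for the row player of a zero-sum game with row
    payoff matrix U: maximize v s.t. p^T U >= v for every column, sum p = 1."""
    m, n = U.shape
    # Variables: p_1..p_m, v.  linprog minimizes, so the objective is -v.
    c = np.zeros(m + 1); c[-1] = -1.0
    # v - sum_i p_i U[i, j] <= 0 for every column j
    A_ub = np.hstack([-U.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

U = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
strategy, value = solve_zero_sum(U)
print(strategy, value)   # ~[1/3, 1/3, 1/3], value ~0
```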

General-sum games

You could still play a minimax strategy in general-sum games, i.e., pretend that the opponent is only trying to hurt you. But this is not rational:

        Left    Right
Up     0, 0    3, 1
Down   1, 0    2, 1

If Column were trying to hurt Row, Column would play Left, so Row should play Down. In reality, Column will play Right (it is strictly dominant for Column), so Row should play Up. Is there a better generalization of minimax strategies in zero-sum games to general-sum games?

Nash equilibrium [Nash 50]

A vector of strategies (one for each player) is called a strategy profile. A strategy profile (σ_1, σ_2, …, σ_n) is a Nash equilibrium if each σ_i is a best response to σ_-i; that is, for any i and any σ_i', u_i(σ_i, σ_-i) ≥ u_i(σ_i', σ_-i).
Note that this does not say anything about multiple agents changing their strategies at the same time.
In any (finite) game, at least one Nash equilibrium (possibly using mixed strategies) exists [Nash 50]. (Note: singular equilibrium, plural equilibria.)

Nash equilibria of “chicken”

        D        S
D      0, 0    -1, 1
S      1, -1   -5, -5

(D, S) and (S, D) are Nash equilibria:
–They are pure-strategy Nash equilibria: nobody randomizes
–They are also strict Nash equilibria: changing your strategy will make you strictly worse off
There are no other pure-strategy Nash equilibria.

Nash equilibria of “chicken”…

        D        S
D      0, 0    -1, 1
S      1, -1   -5, -5

Is there a Nash equilibrium that uses mixed strategies? Say, where player 1 uses a mixed strategy?
Recall: if a mixed strategy is a best response, then all of the pure strategies that it randomizes over must also be best responses. So we need to make player 1 indifferent between D and S. If player 2 plays D with probability q:
Player 1's utility for playing D = -(1-q)
Player 1's utility for playing S = q - 5(1-q) = 6q - 5
So we need -(1-q) = 6q - 5, so q = 4/5.
Then player 2 needs to be indifferent as well.
Mixed-strategy Nash equilibrium: ((4/5 D, 1/5 S), (4/5 D, 1/5 S)). People may die! Expected utility = -1/5 for each player.
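A quick numerical check of the indifference condition (sketch, assuming numpy): against the opponent mix (4/5, 1/5), both of player 1's pure strategies yield -1/5.

```python
import numpy as np

# "Chicken" payoffs (row player, column player), actions ordered D then S.
U1 = np.array([[0.0, -1.0],
               [1.0, -5.0]])
U2 = U1.T                 # the game is symmetric

q = 4/5
mix = np.array([q, 1 - q])   # opponent plays (4/5 D, 1/5 S)

payoffs = U1 @ mix        # row player's payoff for each pure strategy
print(payoffs)            # both -0.2: indifferent, so the mix is a best response
```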

The presentation game

                                          Presenter
                                 Put effort into    Do not put effort
                                 presentation (E)   into presentation (NE)
Audience  Pay attention (A)          4, 4              -16, -14
          Do not pay attention (NA)  0, -2               0, 0

Pure-strategy Nash equilibria: (A, E), (NA, NE)
Mixed-strategy Nash equilibrium: ((1/10 A, 9/10 NA), (4/5 E, 1/5 NE))
–Utility 0 for the audience, -14/10 for the presenter
We can see that some equilibria are strictly better for both players than other equilibria, i.e., some equilibria Pareto-dominate other equilibria.

Meaning of Nash Equilibrium

Does it mean it is the ideal outcome? Does it mean it is what will happen in real life?

Focal Point

In game theory, a focal point (also called a Schelling point) is a solution that people will tend to use in the absence of communication, because it seems natural, special, or relevant to them. Schelling describes "focal point[s] for each person's expectation of what the other expects him to expect to be expected to do." This type of focal point was later named after Schelling.
Tell two people: pick one of the squares. You win a prize if both of you pick the same square.

Focal Point

Example: two people unable to communicate with each other are each shown a panel of four squares and asked to select one; if and only if they both select the same one, they will each receive a prize. Three of the squares are blue and one is red. Assuming they each know nothing about the other player, but that they each do want to win the prize, then they will, reasonably, both choose the red square. Of course, the red square is not in a sense a better square; they could win by both choosing any square. And it is the "right" square to select only if a player can be sure that the other player has selected it; but by hypothesis neither can. (Wikipedia, Aug 2010)

The “equilibrium selection problem”

You are about to play a game that you have never played before with a person that you have never met. According to which equilibrium should you play? Possible answers:
–The equilibrium that maximizes the sum of utilities (social welfare)
–Or, at least, not a Pareto-dominated equilibrium
–So-called focal equilibria. “Meet in Paris” game: you and a friend were supposed to meet in Paris at noon on Sunday, but you forgot to discuss where and you cannot communicate. All you care about is meeting your friend. Where will you go?
–The equilibrium that is the convergence point of some learning process
–An equilibrium that is easy to compute
–…
Equilibrium selection is a difficult problem.

Some properties of Nash equilibria

If you can eliminate a strategy using strict dominance, or even iterated strict dominance, it will not occur (i.e., it will be played with probability 0) in any Nash equilibrium. Weakly dominated strategies, by contrast, may still be played in some Nash equilibrium.

How hard is it to compute one (any) Nash equilibrium?

The complexity was open for a long time: [Papadimitriou STOC01] called it, “together with factoring […] the most important concrete open question on the boundary of P today”. A recent sequence of papers shows that computing one (any) Nash equilibrium is PPAD-complete (even in 2-player games) [Daskalakis, Goldberg, Papadimitriou 2006; Chen, Deng 2006]. All known algorithms require exponential time (in the worst case).

What is tough about finding a mixed NE?

It is not easy to tell which strategies appear in the mix. If you knew the support, the computation would be easy; the difficulty is in figuring out which strategies are part of the mixed strategy. If there are n strategies, there are 2^n possible supports. And if one strategy is strictly dominated, no opponent strategy can make the player indifferent between it and the others, so it can never appear in the mix.

What if we want to compute a Nash equilibrium with a specific property? For example:
–An equilibrium that is not Pareto-dominated
–An equilibrium that maximizes the expected social welfare (= the sum of the agents' utilities)
–An equilibrium that maximizes the expected utility of a given player
–An equilibrium that maximizes the expected utility of the worst-off player
–An equilibrium in which a given pure strategy is played with positive probability
–An equilibrium in which a given pure strategy is played with zero probability
–…
All of these are NP-hard (and the optimization versions are inapproximable assuming P ≠ NP), even in 2-player games [Gilboa, Zemel 89; Conitzer & Sandholm IJCAI-03/GEB-08].

Search-based approaches (for 2 players)

Suppose we know the support X_i of each player i's mixed strategy in equilibrium (the support is the set of pure strategies that receive positive probability). Then we have a linear feasibility problem:
–for both i, for any s_i ∈ S_i - X_i: p_i(s_i) = 0
–for both i, for any s_i ∈ X_i: Σ_{s_-i} p_-i(s_-i) u_i(s_i, s_-i) = u_i
 (the opponent's probabilities are selected so my utility is the same for every pure strategy in my support)
–for both i, for any s_i ∈ S_i - X_i: Σ_{s_-i} p_-i(s_-i) u_i(s_i, s_-i) ≤ u_i
 (the opponent's probabilities are selected so deviating outside my support makes me no better off)

Search-based approaches (for 2 players)

Thus, we can search over possible supports; this is the basic idea underlying the methods in [Dickhaut & Kaplan 91; Porter, Nudelman, Shoham AAAI04/GEB08]. Dominated strategies can be eliminated first to shrink the search.
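A minimal sketch of support enumeration (illustration only; it tries equal-size supports, which suffices for nondegenerate games, and solves the indifference equations from the previous slide with a linear solve):

```python
import numpy as np
from itertools import combinations

def support_enumeration_2p(U1, U2):
    """Find a Nash equilibrium of a 2-player game by guessing supports and
    solving the resulting indifference equations."""
    m, n = U1.shape
    for k in range(1, min(m, n) + 1):
        for R in combinations(range(m), k):
            for C in combinations(range(n), k):
                # Column mix q over C makes every row in R yield the same u1.
                A = np.zeros((k + 1, k + 1)); b = np.zeros(k + 1)
                A[:k, :k] = U1[np.ix_(R, C)]; A[:k, k] = -1.0
                A[k, :k] = 1.0; b[k] = 1.0
                # Row mix p over R makes every column in C yield the same u2.
                B = np.zeros((k + 1, k + 1)); d = np.zeros(k + 1)
                B[:k, :k] = U2[np.ix_(R, C)].T; B[:k, k] = -1.0
                B[k, :k] = 1.0; d[k] = 1.0
                try:
                    q = np.linalg.solve(A, b); p = np.linalg.solve(B, d)
                except np.linalg.LinAlgError:
                    continue
                if (q[:k] < -1e-9).any() or (p[:k] < -1e-9).any():
                    continue   # not valid probabilities
                pf = np.zeros(m); pf[list(R)] = p[:k]
                qf = np.zeros(n); qf[list(C)] = q[:k]
                # No pure strategy outside the support may do better.
                if (U1 @ qf <= q[k] + 1e-9).all() and (U2.T @ pf <= p[k] + 1e-9).all():
                    return pf, qf
    return None

# Matching pennies: the unique equilibrium is 50/50 for both players.
U1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
pf, qf = support_enumeration_2p(U1, -U1)
print(pf, qf)   # [0.5 0.5] [0.5 0.5]
```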

Solving for a Nash equilibrium using MIP (2 players) [Sandholm, Gilpin, Conitzer AAAI05]

maximize whatever you like (e.g., social welfare)
subject to:
–for both i, for any s_i: Σ_{s_-i} p_{s_-i} u_i(s_i, s_-i) = u_{s_i}
 (the other's choices determine my expected utility for each of my pure strategies)
–for both i, for any s_i: u_i ≥ u_{s_i}
 (the utility of the mixed strategy is no worse than that of any pure strategy)
–for both i, for any s_i: p_{s_i} ≤ b_{s_i}
 (I don't put positive probability on s_i unless b says it is in the support)
–for both i, for any s_i: u_i - u_{s_i} ≤ M(1 - b_{s_i})
 (for supported strategies, the mixed strategy's utility equals the pure strategy's)
–for both i: Σ_{s_i} p_{s_i} = 1
 (legal support values)

b_{s_i} is a binary variable indicating whether s_i is in the support; M is a large number.

Terminology

When there is nothing to maximize, an LP becomes a feasibility problem rather than an optimization problem.

Correlated equilibrium [Aumann 74]

Suppose there is a trustworthy mediator who has offered to help out the players in the game. The mediator chooses a profile of pure strategies, perhaps randomly, then tells each player what her strategy is in the profile (but not what the other players' strategies are). A correlated equilibrium is a distribution over pure-strategy profiles such that every player wants to follow the recommendation of the mediator (assuming that the others do so as well): it is no better to deviate.
Every Nash equilibrium is also a correlated equilibrium; it corresponds to the mediator choosing the players' recommendations independently. The converse does not hold. (Note: there are more general definitions of correlated equilibrium, but it can be shown that they do not allow you to do anything more than this definition.)

A correlated equilibrium for “chicken”

          D            S
D     0, 0 (20%)   -1, 1 (40%)
S     1, -1 (40%)  -5, -5 (0%)

Each player knows only what THEY were told to do, NOT what the opponent was told to do. Why is this a correlated equilibrium? Neither player has a reason to deviate:
Suppose the mediator tells the row player to Dodge. From Row's perspective, the conditional probability that Column was told to Dodge is 20% / (20% + 40%) = 1/3. So Row's expected utility for Dodging is (1/3)*0 + (2/3)*(-1) = -2/3, while the expected utility of Straight is (1/3)*1 + (2/3)*(-5) = -3. So Row wants to follow the recommendation.
If Row is told to go Straight, he knows that Column was told to Dodge, so again Row wants to follow the recommendation.
Similar reasoning applies for Column (the game is symmetric).
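The incentive check above can be verified mechanically (sketch, assuming numpy; the joint distribution is the one on the slide):

```python
import numpy as np

# Joint distribution over (row action, column action), actions ordered D, S.
P = np.array([[0.2, 0.4],
              [0.4, 0.0]])
U1 = np.array([[0.0, -1.0],
               [1.0, -5.0]])   # row player's payoffs in "chicken"

# If Row is told action `told`, the conditional distribution over Column's
# action is P[told] / P[told].sum(); following must beat deviating.
for told in (0, 1):
    cond = P[told] / P[told].sum()
    follow = U1[told] @ cond
    deviate = U1[1 - told] @ cond
    print(told, follow, deviate)   # follow >= deviate in both cases
```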

Correlated

So, for rock-paper-scissors, how would you correlate?

A correlated nonzero-sum variant of rock-paper-scissors (Shapley's game [Shapley 64])

If both choose the same pure strategy, both get zero; otherwise the winner gets 1 and the loser 0. Put probability 1/6 on each of the six off-diagonal outcomes:

           Rock         Paper        Scissors
Rock      0, 0 (0)     0, 1 (1/6)   1, 0 (1/6)
Paper     1, 0 (1/6)   0, 0 (0)     0, 1 (1/6)
Scissors  0, 1 (1/6)   1, 0 (1/6)   0, 0 (0)

These probabilities give a correlated equilibrium. For example, suppose Row is told to play Rock. Row then knows Column is playing either Paper or Scissors (50-50). Playing Rock will give ½; playing Paper will give 0; playing Scissors will give ½. So Row can do no better than playing Rock (Rock is optimal, though not uniquely).

Solving for a correlated equilibrium using linear programming (n players!)

The variables are now p_s, where s is a profile of pure strategies: we are deciding which profiles should be included, and with what probability.
maximize whatever you like (e.g., social welfare)
subject to:
–for any i, s_i, s_i': Σ_{s_-i} p_{(s_i, s_-i)} u_i(s_i, s_-i) ≥ Σ_{s_-i} p_{(s_i, s_-i)} u_i(s_i', s_-i)
 (because we sum utilities along the row, this accounts for the marginal probability of what the opponents will do)
–Σ_s p_s = 1 (the probabilities add up to one)
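This LP can be sketched for two players with scipy.optimize.linprog (an illustration, not from the slides); maximizing social welfare in "chicken", it puts zero probability on the crash outcome:

```python
import numpy as np
from scipy.optimize import linprog

def max_welfare_correlated_eq(U1, U2):
    """LP over joint distributions p(s1, s2): maximize social welfare subject
    to the incentive constraints of a correlated equilibrium."""
    m, n = U1.shape
    N = m * n                        # one variable per pure-strategy profile
    idx = lambda i, j: i * n + j
    A_ub, b_ub = [], []
    # Row player: for recommended i and deviation i2, following is no worse.
    for i in range(m):
        for i2 in range(m):
            if i2 == i: continue
            row = np.zeros(N)
            for j in range(n):
                row[idx(i, j)] = U1[i2, j] - U1[i, j]
            A_ub.append(row); b_ub.append(0.0)
    # Column player, symmetrically.
    for j in range(n):
        for j2 in range(n):
            if j2 == j: continue
            row = np.zeros(N)
            for i in range(m):
                row[idx(i, j)] = U2[i, j2] - U2[i, j]
            A_ub.append(row); b_ub.append(0.0)
    welfare = (U1 + U2).flatten()
    res = linprog(-welfare, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, N)), b_eq=[1.0], bounds=[(0, 1)] * N)
    return res.x.reshape(m, n)

# "Chicken": the welfare-maximizing correlated equilibrium avoids (S, S).
U1 = np.array([[0.0, -1.0], [1.0, -5.0]])
P = max_welfare_correlated_eq(U1, U1.T)
print(P)   # zero probability on the crash outcome (S, S)
```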