Static Games of Complete Information: Equilibrium Concepts


Static Games of Complete Information: Equilibrium Concepts APEC 8205: Applied Game Theory

Objectives: understand the common solution concepts for static games of complete information: dominant strategy equilibrium, iterated dominance, maxi-min equilibrium, pure strategy Nash equilibrium, and mixed strategy Nash equilibrium.

Introductory Comments on Assumptions. Knowledge: I know the rules of the game; I know you know the rules of the game; I know you know I know the rules of the game; and so on. Rationality: I am individually rational; I believe you are individually rational; I believe that you believe I am individually rational.

Normative Versus Positive Theory. Normative: How should people play games? What should they be trying to accomplish? Positive: How do people play games? What do they accomplish? What are obstacles to the theory's predictive performance? Players do not always fully understand the rules of the game. Players may not be individually rational. Players may poorly anticipate the choices of others. Players are not always playing the games we think they are.

What makes a good solution concept? Existence, uniqueness, logical consistency, and predictive performance (both in equilibrium and out of equilibrium).

Normal Form Games: Notation. A set N = {1, 2, …, n} of players. A finite set of pure strategies Si for each i ∈ N, where S = S1 × S2 × … × Sn is the set of all possible pure strategy outcomes. si is a specific strategy for player i (si ∈ Si). s~i is a specific strategy for everyone but player i (s~i ∈ S~i = S1 × … × Si-1 × Si+1 × … × Sn). s is a specific strategy for each and every player (i.e., a strategy profile: s ∈ S). A payoff function gi: S → ℝ for each i ∈ N.
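
To make the notation concrete, here is a minimal sketch (not from the slides; the strategy names and payoff numbers are made up) that stores a two-player normal form game as a dictionary mapping each pure strategy profile s ∈ S to the payoff pair (g1(s), g2(s)).

```python
# A minimal sketch of the notation for n = 2 players (hypothetical example).
from itertools import product

S1 = ["U", "D"]          # player 1's pure strategy set S1
S2 = ["L", "R"]          # player 2's pure strategy set S2

# g[s] = (g1(s), g2(s)) for each pure strategy profile s in S = S1 x S2.
g = {
    ("U", "L"): (3, 1),
    ("U", "R"): (0, 0),
    ("D", "L"): (1, 0),
    ("D", "R"): (2, 2),
}

# Enumerate the set S of all pure strategy profiles and their payoffs.
for s in product(S1, S2):
    print(s, "->", g[s])
```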

Dominant Strategy Equilibrium. Definitions: Strategy si weakly dominates strategy ti if gi(si, s~i) ≥ gi(ti, s~i) for all s~i ∈ S~i. Strategy si dominates strategy ti if gi(si, s~i) ≥ gi(ti, s~i) for all s~i ∈ S~i and gi(si, s~i) > gi(ti, s~i) for some s~i ∈ S~i. Strategy si strictly dominates strategy ti if gi(si, s~i) > gi(ti, s~i) for all s~i ∈ S~i. Strategy profile s* ∈ S is a weakly/strictly dominant strategy equilibrium if for all i ∈ N and all ti ∈ Si with ti ≠ si*, si* weakly/strictly dominates ti. Note: In a dominant strategy equilibrium, your best strategy does not depend on your opponents' strategy choices! Note: An equilibrium is defined by the strategy profile that meets the definition of the equilibrium!

Example: Prisoners' Dilemma. Player 1: choose Defect if Player 2 Cooperates (3 > 2); choose Defect if Player 2 Defects (1 > 0). Defect is a dominant strategy! Player 2: choose Defect if Player 1 Cooperates (3 > 2); choose Defect if Player 1 Defects (1 > 0). (Defect, Defect) is a strictly dominant strategy equilibrium.
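
As a check, here is a small sketch that encodes the Prisoners' Dilemma payoffs implied by the comparisons above (3 > 2 when the opponent cooperates, 1 > 0 when the opponent defects) and verifies that Defect strictly dominates Cooperate for both players.

```python
# Prisoners' Dilemma payoffs consistent with the slide: g[(s1, s2)] = (g1, g2).
g = {
    ("Cooperate", "Cooperate"): (2, 2),
    ("Cooperate", "Defect"):    (0, 3),
    ("Defect",    "Cooperate"): (3, 0),
    ("Defect",    "Defect"):    (1, 1),
}
S1 = S2 = ["Cooperate", "Defect"]

def strictly_dominates(player, s, t):
    """Does strategy s strictly dominate t for the given player (1 or 2)?"""
    if player == 1:
        return all(g[(s, s2)][0] > g[(t, s2)][0] for s2 in S2)
    return all(g[(s1, s)][1] > g[(s1, t)][1] for s1 in S1)

print(strictly_dominates(1, "Defect", "Cooperate"))  # True: 3 > 2 and 1 > 0
print(strictly_dominates(2, "Defect", "Cooperate"))  # True by symmetry
```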

Example: Second Price Auction. Who are the players? Bidders i = 1, …, n who value and are competing for the same object. Who can do what when? Players submit bids simultaneously. Who knows what when? Players know their own value of the object before submitting their bid; they do not know the values of others. How are players rewarded based on what they do? vi: value to i of winning the auction; h~i: highest bid among all players other than i; gi = vi – h~i if bi > h~i and 0 otherwise. What is a player's strategy? A bid bi ≥ 0.

Claim: bi* = vi for all i is a weakly dominant strategy equilibrium! Suppose bi > vi: If h~i ≥ bi, gi = 0 (same as if bi = vi). If h~i < vi, gi = vi – h~i (same as if bi = vi). If bi > h~i ≥ vi, gi = vi – h~i ≤ 0 (0 if bi = vi). Overbidding does not improve the payoff under any circumstances and may reduce it! Suppose vi > bi: If h~i ≥ vi, gi = 0 (same as if bi = vi). If h~i < bi, gi = vi – h~i (same as if bi = vi). If vi > h~i ≥ bi, gi = 0 (vi – h~i ≥ 0 if bi = vi). Underbidding likewise never improves the payoff and may reduce it!
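
The case-by-case argument can also be checked numerically. The sketch below uses a made-up valuation and a coarse bid grid, and assumes (as in the payoff rule above) that a tied high bid loses; it confirms that bidding bi = vi never does worse than any alternative bid.

```python
# Sketch: verify that bidding b_i = v_i weakly dominates any other bid
# on a coarse grid, under the payoff rule g_i = v_i - h_i if b_i > h_i else 0.
def payoff(value, bid, highest_other):
    return value - highest_other if bid > highest_other else 0.0

v = 7.0                                  # hypothetical valuation
grid = [x / 2 for x in range(0, 21)]     # candidate bids / opposing bids: 0, 0.5, ..., 10

for deviation in grid:
    # Truthful bidding must do at least as well for EVERY highest opposing bid.
    assert all(payoff(v, v, h) >= payoff(v, deviation, h) for h in grid)
print("Truthful bidding is weakly dominant on this grid.")
```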

Good & Bad of Dominance Equilibrium. Good: tends to predict behavior pretty well! Bad: often does not exist!

Iterative Dominance. Definition: messy and not very instructive. Intuition: easy! No rational player will ever choose a dominated strategy, so repeatedly eliminate dominated strategies for each player until no dominated strategies remain.

Example: Iterative Dominance. R strictly dominates C, so C is gone. U strictly dominates M, so M is gone. U strictly dominates D, so D is gone. L strictly dominates R, so R is gone. (U, L) is the iterated dominance equilibrium.
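
A short sketch of the procedure: the routine below repeatedly deletes strictly dominated pure strategies. The payoff numbers are hypothetical (the slide's matrix is not reproduced in this transcript) but are chosen so the elimination order matches the one described above, leaving (U, L).

```python
# Iterated elimination of strictly dominated strategies (IESDS).
# Hypothetical payoffs reproducing the elimination order described on the slide.
g1 = {("U","L"): 3, ("U","C"): 0, ("U","R"): 2,
      ("M","L"): 1, ("M","C"): 4, ("M","R"): 1,
      ("D","L"): 0, ("D","C"): 5, ("D","R"): 0}
g2 = {("U","L"): 3, ("U","C"): 1, ("U","R"): 2,
      ("M","L"): 0, ("M","C"): 0, ("M","R"): 1,
      ("D","L"): 0, ("D","C"): 0, ("D","R"): 1}

def iesds(S1, S2):
    """Repeatedly delete strictly dominated pure strategies until none remain."""
    S1, S2 = list(S1), list(S2)
    changed = True
    while changed:
        changed = False
        for s in list(S1):
            if any(all(g1[(t, s2)] > g1[(s, s2)] for s2 in S2) for t in S1 if t != s):
                S1.remove(s); changed = True
        for s in list(S2):
            if any(all(g2[(s1, t)] > g2[(s1, s)] for s1 in S1) for t in S2 if t != s):
                S2.remove(s); changed = True
    return S1, S2

print(iesds(["U", "M", "D"], ["L", "C", "R"]))  # (['U'], ['L'])
```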

Good & Bad of Iterative Dominance. Good: may be usable when there is no dominant strategy equilibrium! Bad: does not predict as well, particularly if many iterations are involved.

Maxi-Min Equilibrium. Motivation: How should we play if we want to be particularly cautious? Definition: Strategy si* is a maxi-min strategy if it maximizes i's minimum possible payoff, i.e., si* solves max over si ∈ Si of [min over s~i ∈ S~i of gi(si, s~i)]. s* is a maxi-min equilibrium if si* is a maxi-min strategy for all i.

Example: Maxi-Min Equilibrium. Player 1: the minimum possible reward from choosing U is 0; the minimum possible reward from choosing D is 1; D maximizes Player 1's minimum possible reward. Player 2: the minimum possible reward from choosing L is 0; the minimum possible reward from choosing R is 1; R maximizes Player 2's minimum possible reward. (D, R) is the maxi-min equilibrium.
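
A sketch of the computation: with payoffs chosen to be consistent with the comparisons above (again, the slide's matrix itself is not in the transcript), each player's maxi-min strategy is the one whose worst-case payoff is largest.

```python
# Maxi-min strategies: each player maximizes their worst-case payoff.
# Hypothetical payoffs consistent with the comparisons described on the slide.
g = {("U","L"): (0, 0), ("U","R"): (3, 3),
     ("D","L"): (2, 2), ("D","R"): (1, 1)}
S1, S2 = ["U", "D"], ["L", "R"]

maximin_1 = max(S1, key=lambda s1: min(g[(s1, s2)][0] for s2 in S2))
maximin_2 = max(S2, key=lambda s2: min(g[(s1, s2)][1] for s1 in S1))
print(maximin_1, maximin_2)  # D R : each player guarantees a payoff of at least 1
```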

Comments on Maxi-Min Equilibrium. It is a popular solution concept for zero-sum games: your gain is your opponent's loss, so they are out to get you and it makes sense to be cautious. It is the game theorist's version of the precautionary principle.

Pure Strategy Nash Equilibrium. Definition: s* ∈ S is a pure strategy Nash equilibrium if for all players i ∈ N, gi(si*, s~i*) ≥ gi(si, s~i*) for all si ∈ Si (there are no profitable unilateral deviations). Alternative definition: Best response function: bri(s) = {si ∈ Si: gi(si, s~i) ≥ gi(si', s~i) for all si' ∈ Si}. Best response correspondence: br(s) = br1(s) × br2(s) × … × brn(s). s* ∈ S is a pure strategy Nash equilibrium if s* ∈ br(s*) (s* is a best response to itself).

Example: Prisoners' Dilemma. Player 1: Defect is the best response to Cooperate (3 > 2); Defect is the best response to Defect (1 > 0). Player 2: the same holds by symmetry. (Defect, Defect) is a pure strategy Nash equilibrium, the same as the dominant strategy equilibrium!
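
The no-profitable-deviation definition translates directly into a brute-force search. The sketch below enumerates the pure strategy Nash equilibria of any two-player game given as a payoff dictionary, and applies it to the Prisoners' Dilemma payoffs used earlier.

```python
# Sketch: enumerate pure strategy Nash equilibria of a two-player game by
# checking that neither player has a profitable unilateral deviation.
from itertools import product

def pure_nash(S1, S2, g):
    """g[(s1, s2)] = (g1, g2). Returns all profiles with no profitable deviation."""
    eq = []
    for s1, s2 in product(S1, S2):
        ok1 = all(g[(s1, s2)][0] >= g[(t1, s2)][0] for t1 in S1)
        ok2 = all(g[(s1, s2)][1] >= g[(s1, t2)][1] for t2 in S2)
        if ok1 and ok2:
            eq.append((s1, s2))
    return eq

# Prisoners' Dilemma (payoffs consistent with the earlier slide):
g = {("C","C"): (2, 2), ("C","D"): (0, 3), ("D","C"): (3, 0), ("D","D"): (1, 1)}
print(pure_nash(["C", "D"], ["C", "D"], g))  # [('D', 'D')]
```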

Welfare & Nash. First Fundamental Welfare Theorem: a competitive equilibrium is Pareto efficient. A Nash equilibrium need not be Pareto efficient! [Figure: the Prisoners' Dilemma payoff pairs plotted in (g1, g2) space; (0, 3), (2, 2), and (3, 0) are Pareto efficient, while the Nash outcome (1, 1) is not.]

Iterative Dominance Example Revisited. Player 1: U is a best response to L; D is a best response to C; U is a best response to R. Player 2: L is a best response to U; R is a best response to M; R is a best response to D. (U, L) is a pure strategy Nash equilibrium, the same as the iterated dominance equilibrium!

Maxi-Min Example Revisited. Player 1: D is a best response to L; U is a best response to R. Player 2: R is a best response to U; L is a best response to D. Pure strategy Nash equilibria: (U, R) and (D, L). Multiple Nash equilibria, and neither is the maxi-min equilibrium!

How can we choose between these two equilibria? This is the motivation for equilibrium refinements! What may make sense for this game? Pareto dominance!

Is Pareto dominance always a good criterion? Player 1: U is a best response to L; D is a best response to R. Player 2: L is a best response to U; R is a best response to D. Pure strategy Nash equilibria: (U, L), which is Pareto dominant, and (D, R). Is (U, L) really more compelling than (D, R)?

Example: Battle of the Sexes. Player 1: Football is the best response to Football; Ballet is the best response to Ballet. Player 2: the same holds by symmetry. Pure strategy Nash equilibria: (Football, Football) and (Ballet, Ballet). Neither equilibrium Pareto dominates the other!

Focal Points (Schelling). Suppose you and a friend go to the Mall of America to shop. As you leave the car in the parking garage, you agree to go separate ways and meet back up at 4 pm. The problem is you forget to specify where to meet. Question: Where do you go to meet back up with your friend? Historically, equilibrium refinement relied largely on introspection. With the emergence and increasing popularity of experimental methods, economists are relying more and more on people to show them how games will actually be played.

Matching Pennies Revisited. Mason: Heads is the best response to Heads; Tails is the best response to Tails. Spencer: Tails is the best response to Heads; Heads is the best response to Tails. There is no pure strategy Nash equilibrium!

Mixed Strategy Nash Equilibrium. Definitions: σi(si): the probability that player i plays pure strategy si. σi: a mixed strategy for player i (a probability distribution over all of i's pure strategies). Σi: the set of all possible mixed strategies for player i (σi ∈ Σi). σ = {σ1, σ2, …, σn}: a mixed strategy profile. Σ = Σ1 × Σ2 × … × Σn: the set of all possible mixed strategy profiles (σ ∈ Σ). Gi(σi, σ~i): player i's expected payoff, the sum over all pure strategy profiles s ∈ S of the probability σ1(s1)·σ2(s2)·…·σn(sn) times gi(s). σ* ∈ Σ is a mixed strategy Nash equilibrium if for all players i ∈ N, Gi(σi*, σ~i*) ≥ Gi(si, σ~i*) for all si ∈ Si. Note: Dominant strategy equilibrium and iterative dominant strategy equilibrium can also be defined in mixed strategies.
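
A sketch of the expected payoff Gi(σ): sum the payoff of every pure strategy profile weighted by its probability under the mixing distributions. The Matching Pennies payoffs below are the usual ±1 stakes, consistent with the expected payoff expressions two slides down.

```python
# Sketch: G_i(sigma) = sum over pure profiles s of prob(s) * g_i(s),
# where prob(s) is the product of each player's mixing weight on their own move.
from itertools import product

def expected_payoff(i, sigma, strategies, g):
    """i: player index; sigma: list of dicts mapping pure strategy -> probability."""
    total = 0.0
    for profile in product(*strategies):
        prob = 1.0
        for j, s_j in enumerate(profile):
            prob *= sigma[j][s_j]
        total += prob * g[profile][i]
    return total

# Matching Pennies, both players mixing 50/50 (player 0 = Mason, the matcher):
g = {("H","H"): (1, -1), ("H","T"): (-1, 1), ("T","H"): (-1, 1), ("T","T"): (1, -1)}
sigma = [{"H": 0.5, "T": 0.5}, {"H": 0.5, "T": 0.5}]
print(expected_payoff(0, sigma, [["H", "T"], ["H", "T"]], g))  # 0.0
```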

Mixed Strategy Nash Equilibrium: Another Definition. Best response function: bri(σ) = {σi ∈ Σi: Gi(σi, σ~i) ≥ Gi(si', σ~i) for all si' ∈ Si}. Best response correspondence: br(σ) = br1(σ) × br2(σ) × … × brn(σ). σ* ∈ Σ is a mixed strategy Nash equilibrium if σ* ∈ br(σ*).

What is Mason's best response for Matching Pennies? (σS, 1 – σS): mixed strategy for Spencer where 1 ≥ σS ≥ 0 is the probability of Heads. (σM, 1 – σM): mixed strategy for Mason where 1 ≥ σM ≥ 0 is the probability of Heads. πM(H) = σS – (1 – σS): Mason's expected payoff from choosing Heads. πM(T) = –σS + (1 – σS): Mason's expected payoff from choosing Tails. πM(H) >/=/< πM(T) as σS >/=/< ½.

What is Spencer's best response for Matching Pennies? πS(H) = –σM + (1 – σM): Spencer's expected payoff from choosing Heads. πS(T) = σM – (1 – σM): Spencer's expected payoff from choosing Tails. πS(H) >/=/< πS(T) as ½ >/=/< σM.

Do we have a mixed strategy equilibrium? [Figure: Mason's and Spencer's best response correspondences brM(σ) and brS(σ) plotted over (σM, σS) ∈ [0, 1]²; they intersect once, at (½, ½).] Nash equilibrium: {(½, ½), (½, ½)}.
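
The same conclusion follows from the indifference conditions: each player's mixing probability is chosen to make the opponent indifferent between Heads and Tails. The sketch below assumes the usual ±1 Matching Pennies payoffs and solves the two indifference equations in closed form.

```python
# Sketch: solve a 2x2 game with no pure equilibrium by making each opponent
# indifferent. Mason plays H with prob p; Spencer plays H with prob q.
from fractions import Fraction

# Payoff matrices a[s1][s2] for Mason and b[s1][s2] for Spencer (index 0 = Heads).
a = [[1, -1], [-1, 1]]   # Mason wins when the pennies match (assumed +/-1 stakes)
b = [[-1, 1], [1, -1]]   # Spencer wins when they mismatch

# q makes Mason indifferent: q*a[0][0] + (1-q)*a[0][1] = q*a[1][0] + (1-q)*a[1][1]
q = Fraction(a[1][1] - a[0][1], a[0][0] - a[0][1] - a[1][0] + a[1][1])
# p makes Spencer indifferent: p*b[0][0] + (1-p)*b[1][0] = p*b[0][1] + (1-p)*b[1][1]
p = Fraction(b[1][1] - b[1][0], b[0][0] - b[1][0] - b[0][1] + b[1][1])
print(p, q)  # 1/2 1/2, matching the equilibrium {(1/2, 1/2), (1/2, 1/2)}
```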

Battle of the Sexes Example Revisited. (σ1, 1 – σ1): mixed strategy for Player 1 where 1 ≥ σ1 ≥ 0 is the probability of Football. (σ2, 1 – σ2): mixed strategy for Player 2 where 1 ≥ σ2 ≥ 0 is the probability of Football. Player 1's optimization problem: choose σ1 to maximize G1(σ1, σ2) subject to 1 ≥ σ1 ≥ 0. Player 2's optimization problem: choose σ2 to maximize G2(σ1, σ2) subject to 1 ≥ σ2 ≥ 0.

Solving for Player 1: set up the Lagrangian, take first order conditions, and read off the implications. [Derivation equations not reproduced in the transcript.] The implication, as the best response correspondence below shows, is that Player 1 plays Football (σ1 = 1) when σ2 > 1/3, Ballet (σ1 = 0) when σ2 < 1/3, and is willing to mix when σ2 = 1/3.

Solving for Player 2: the same steps give the implication that Player 2 plays Football (σ2 = 1) when σ1 > 2/3, Ballet (σ2 = 0) when σ1 < 2/3, and is willing to mix when σ1 = 2/3. [Derivation equations not reproduced in the transcript.]

Do we have a mixed strategy equilibrium? Is that all? [Figure: the best response correspondences br1(σ) and br2(σ) plotted over (σ1, σ2) ∈ [0, 1]² intersect three times.] Nash equilibria: {(1, 0), (1, 0)}, {(2/3, 1/3), (1/3, 2/3)}, and {(0, 1), (0, 1)}.
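
The interior equilibrium can be recovered from the indifference conditions. The sketch below assumes Battle of the Sexes payoffs of (2, 1) at (Football, Football), (1, 2) at (Ballet, Ballet), and (0, 0) on mismatches, which is consistent with the mixed equilibrium {(2/3, 1/3), (1/3, 2/3)} reported above.

```python
# Sketch: Battle of the Sexes mixed equilibrium via indifference conditions.
from fractions import Fraction

a = [[2, 0], [0, 1]]   # Player 1's payoffs, row 0 = Football (assumed values)
b = [[1, 0], [0, 2]]   # Player 2's payoffs, column 0 = Football (assumed values)

# sigma2 (prob. Player 2 plays Football) that makes Player 1 indifferent:
sigma2 = Fraction(a[1][1] - a[0][1], a[0][0] - a[0][1] - a[1][0] + a[1][1])
# sigma1 (prob. Player 1 plays Football) that makes Player 2 indifferent:
sigma1 = Fraction(b[1][1] - b[1][0], b[0][0] - b[1][0] - b[0][1] + b[1][1])
print(sigma1, sigma2)  # 2/3 1/3
```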

Why do we care about mixed strategy equilibrium? It seems sensible in many games: Matching Pennies, Rock/Paper/Scissors, tennis, baseball, prelim exams. If we allow mixed strategies, we are guaranteed to find at least one Nash equilibrium in finite games (Nash, 1950)! Games with continuous strategies also have at least one Nash equilibrium under the usual conditions (Debreu, 1952; Glicksberg, 1952; Fan, 1952). Actually, finding a Nash equilibrium is usually not a problem; the problem is usually the multiplicity of Nash equilibria!

Application: Cournot Duopoly. Who are the players? Two firms denoted by i = 1, 2. Who can do what when? Firms choose output simultaneously. Who knows what when? Neither firm knows the other's output before choosing its own. How are firms rewarded based on what they do? gi(qi, qj) = (a – qi – qj)qi – cqi for i ≠ j. Question: What is a strategy for firm i? An output level qi ≥ 0.

Nash Equilibrium for Cournot Duopoly. Find each firm's best response function. FOC for an interior solution: a – 2qi – qj – c = 0. SOC: –2 < 0 is satisfied. Solve for qi: qi(qj) = (a – c – qj)/2. Find a mutual best response: [Figure: the best response lines q1(q2) and q2(q1) plotted in (q1, q2) space, with intercepts a – c and (a – c)/2, crossing at the equilibrium (q1*, q2*).] Solving the two best response functions jointly gives q1* = q2* = (a – c)/3.

But what if we have n firms instead of just 2? Find each firm's best response function. FOC for an interior solution: a – 2qi – q~i – c = 0. SOC: –2 < 0 is satisfied. Solve for qi: qi = (a – c – q~i)/2, or equivalently qi* = a – c – Q* where Q* = qi* + q~i*. Sum over i: Q* = n(a – c) – nQ*. Solve for Q*: Q* = n(a – c)/(n + 1). Solve for qi*: qi* = (a – c)/(n + 1).
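
A quick numerical sketch of the n-firm solution, using made-up values for a and c: it checks that the symmetric output is a fixed point of each firm's best response and shows industry output and price approaching the competitive benchmark as n grows (the implications on the next slide).

```python
# Sketch of the n-firm Cournot solution with hypothetical parameter values.
a, c = 10.0, 2.0   # made-up demand intercept and marginal cost

def symmetric_equilibrium(n):
    q_i = (a - c) / (n + 1)          # individual equilibrium output
    Q = n * q_i                      # industry output
    price = a - Q
    # Check: q_i is a best response, q_i = (a - c - q_{~i}) / 2, to the other n-1 firms.
    best_response = (a - c - (n - 1) * q_i) / 2
    assert abs(best_response - q_i) < 1e-12
    return q_i, Q, price

for n in (1, 2, 5, 50, 500):
    print(n, symmetric_equilibrium(n))
# As n grows, Q -> a - c = 8 and price -> c = 2: the competitive outcome.
```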

Implications as n Gets Large Individual firm equilibrium output decreases. Equilibrium industry output approaches a – c. Equilibrium price approaches marginal cost c. We approach an efficient competitive equilibrium!

Application: Common Property Resource. Who are the players? Ranchers denoted by i = 1, 2, …, n. Who can do what when? Each rancher simultaneously chooses how many steers to graze on the open range. Who knows what when? No rancher knows how many steers the other ranchers will graze before choosing how many he will graze. How are ranchers rewarded based on what they do? gi(qi, q~i) = p(aQ – Q²)qi/Q – cqi, where Q is the total number of steers grazing the range land. Question: What is a strategy for rancher i? A stocking level qi ≥ 0.

Nash Equilibrium for Common Property Resource. Find each rancher's best response function. FOC for an interior solution: p(a – 2qi – q~i) – c = 0. SOC: –2p < 0 is satisfied. Solve for qi: pqi* = pa – c – pQ*, i.e., qi* = a – c/p – Q* where Q* = qi* + q~i*. Sum over i: Q* = n(a – c/p) – nQ*. Solve for Q*: Q* = n(a – c/p)/(n + 1). Solve for qi*: qi* = (a – c/p)/(n + 1).

Implications as n Gets Large Individual rancher equilibrium stocking decreases. Equilibrium industry stocking approaches a – c/p. Individual rancher’s payoff approaches zero. Stocking rate becomes increasingly inefficient!
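
A numerical sketch with made-up parameter values, illustrating the implications above: per-rancher payoffs shrink toward zero and total stocking approaches a – c/p, twice the level (a – c/p)/2 that would maximize the ranchers' joint payoff.

```python
# Sketch of the common property equilibrium with hypothetical parameters.
p, a, c = 4.0, 10.0, 8.0     # made-up price, demand/forage intercept, cost per steer

def outcomes(n):
    q_i = (a - c / p) / (n + 1)              # Nash stocking per rancher
    Q = n * q_i                              # total Nash stocking
    payoff_i = (p * (a - Q) - c) * q_i       # per-rancher equilibrium payoff
    Q_efficient = (a - c / p) / 2            # stocking that maximizes total payoff
    return round(Q, 3), round(payoff_i, 3), Q_efficient

for n in (1, 2, 5, 50, 500):
    print(n, outcomes(n))
# Total stocking approaches a - c/p = 8, twice the efficient level of 4,
# and each rancher's payoff approaches zero.
```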

Application: Compliance Game. Who are the players? A Regulator and a Firm. Who does what when? The Regulator chooses whether to Audit the Firm, and the Firm chooses whether to Comply; choices are simultaneous. Who knows what when? The Regulator and the Firm do not know each other's choices when making their own. How are the Regulator and the Firm rewarded based on what they do? (The payoffs use BR for the Regulator's benefit when the Firm complies, CA for the cost of an audit, S for the sanction imposed on a noncompliant Firm that is audited, BF for the Firm's base benefit, and CF for the Firm's cost of compliance, as in the expressions on the next slide.)

Assuming S > CF and S > CA, what is the Nash equilibrium for this game? Regulator: πR(Audit) = σF(BR – CA) + (1 – σF)(S – CA); πR(Don't Audit) = σF·BR + (1 – σF)·0; πR(Audit) >/=/< πR(Don't Audit) as 1 – CA/S >/=/< σF. Firm: πF(Comply) = σR(BF – CF) + (1 – σR)(BF – CF); πF(Don't Comply) = σR(BF – S) + (1 – σR)BF; πF(Comply) >/=/< πF(Don't Comply) as σR >/=/< CF/S.

What is the equilibrium? We know we have at least one! [Figure: the best response correspondences brR(σ) and brF(σ) plotted over (σR, σF) ∈ [0, 1]² intersect once.] Mixed strategy Nash equilibrium: {(CF/S, 1 – CF/S), (1 – CA/S, CA/S)}, i.e., the Regulator audits with probability CF/S and the Firm complies with probability 1 – CA/S.
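
A small sketch of the equilibrium with made-up values for S, CA, and CF (satisfying S > CF and S > CA): it computes the two equilibrium probabilities and checks the indifference conditions that define them (note the Regulator's indifference does not depend on BR).

```python
# Sketch of the compliance-game equilibrium with hypothetical parameter values.
S, CA, CF = 10.0, 3.0, 4.0

audit_prob = CF / S            # Regulator audits with probability CF/S
comply_prob = 1 - CA / S       # Firm complies with probability 1 - CA/S

# Firm indifference (payoffs net of the common benefit BF): -CF vs -audit_prob * S.
assert abs(-CF - (-audit_prob * S)) < 1e-12
# Regulator indifference: Audit vs Don't Audit reduces to (1 - comply_prob)*S - CA = 0,
# which is independent of BR.
assert abs((1 - comply_prob) * S - CA) < 1e-12
print(audit_prob, comply_prob)   # 0.4 0.7
```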

What are the implications of this equilibrium? Equilibrium audit probability: increasing in the Firm's cost of compliance and decreasing in the Regulator's sanction! Equilibrium compliance probability: decreasing in the Regulator's cost of auditing and increasing in the Regulator's sanction! In the limit: shoot jaywalkers with zero probability!