Quantal Response Equilibrium APEC 8205: Applied Game Theory Fall 2007.


THE GAME
Players: $N = \{1, 2, \ldots, n\}$
Strategies: $S_i = \{s_i^1, s_i^2, \ldots, s_i^{J(i)}\}$
Strategy Profile: $s = \{s_1, s_2, \ldots, s_n\}$ for $s_i \in S_i$, $\forall i \in N$
Strategy Space: $S = \times_{i \in N} S_i$
Individual Payoffs: $u_i(s)$
Everyone's Payoffs: $u(s) = \{u_1(s), u_2(s), \ldots, u_n(s)\}$

SOME NOTATION
$\Delta_i$: the $J(i)$-dimensional simplex
$\Delta = \times_{i \in N} \Delta_i$
$p_i = (p_i^1, p_i^2, \ldots, p_i^{J(i)}) \in \Delta_i$
$p = \{p_1, p_2, \ldots, p_n\} \in \Delta$
$p_i(s_i)$: Probability player $i$ chooses strategy $s_i$
$p(s) = \prod_{i=1}^{n} p_i(s_i)$: Probability of strategy profile $s \in S$ given $p$
$Eu_i(p) = \sum_{s \in S} p(s) u_i(s)$: Player $i$'s expected payoff
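To make this notation concrete, here is a minimal Python sketch (the two-player payoff arrays are hypothetical) that enumerates the strategy space to compute $p(s)$ and $Eu_i(p)$:

```python
import itertools
import numpy as np

# Hypothetical two-player game: payoffs[i][j1, j2] is player i's payoff when
# player 1 plays strategy j1 and player 2 plays strategy j2.
payoffs = [np.array([[3.0, 0.0], [1.0, 2.0]]),   # u_1(s)
           np.array([[2.0, 1.0], [0.0, 3.0]])]   # u_2(s)

def expected_payoffs(p):
    """Eu_i(p) = sum over profiles s of p(s) * u_i(s), with p = [p_1, p_2]."""
    n = len(p)
    Eu = np.zeros(n)
    for profile in itertools.product(*(range(len(p_i)) for p_i in p)):
        prob_s = np.prod([p[i][profile[i]] for i in range(n)])   # p(s)
        for i in range(n):
            Eu[i] += prob_s * payoffs[i][profile]
    return Eu

# Expected payoffs when both players mix 50/50.
print(expected_payoffs([np.array([0.5, 0.5]), np.array([0.5, 0.5])]))
```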

DEFINITION
$p' = \{p_i', p_{-i}'\}$ is a Nash equilibrium if for all $i \in N$ and all $p_i \in \Delta_i$, $Eu_i(p_i', p_{-i}') \geq Eu_i(p_i, p_{-i}')$, where $p_{-i}'$ is $p'$ exclusive of $p_i'$.

MORE NOTATION
$s_i^j = \{p_i : p_i^j = 1\}$: Player $i$'s pure strategy $j$
$\epsilon_i^j$: Random error for player $i$ and strategy $j$
$Eu_i^j(p) = Eu_i(s_i^j, p_{-i}) + \epsilon_i^j$: $i$'s expected payoff for strategy $j$ plus an error
$\epsilon_i = (\epsilon_i^1, \epsilon_i^2, \ldots, \epsilon_i^{J(i)})$: Collection of errors for player $i$
$f_i(\epsilon_i)$: Joint density of errors, assuming $E(\epsilon_i) = 0$
$f_i^j(\epsilon_i^j)$: Marginal density of error $\epsilon_i^j$
$\epsilon = (\epsilon_1, \epsilon_2, \ldots, \epsilon_n)$: Errors for all players
$f = (f_1, f_2, \ldots, f_n)$: Joint densities of errors for all players

ASSUMPTION
Player $i$ chooses strategy $j$ when $Eu_i^j(p) \geq Eu_i^k(p)$ for all $k = 1, 2, \ldots, J(i)$.

IMPORTANT NOTES
Player $i$ knows $\epsilon_i$, but not $\epsilon_k$ for $k \neq i$. Player $i$ only knows the distribution $f_k(\epsilon_k)$, which means $k$'s strategy choice is random from the perspective of $i$. However, $k$'s strategy choice is not uniformly random, because it also depends on $k$'s payoffs and on $k$'s lack of knowledge of $i$'s specific strategy choice.

BACK TO NOTATION
$R_i^j(p) = \{\epsilon_i \mid Eu_i^j(p) \geq Eu_i^k(p) \ \forall k = 1, 2, \ldots, J(i)\}$
–Region of errors that make strategy $j$ optimal for player $i$
$\sigma_i^j(p) = \int_{R_i^j(p)} f_i(\epsilon_i)\, d\epsilon_i$
–Probability strategy $j$ is optimal for player $i$
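Since $\sigma_i^j(p)$ is just the probability that the error vector lands in $R_i^j(p)$, it can be approximated by Monte Carlo simulation. A minimal Python sketch, assuming for illustration iid normal errors and a hypothetical vector of expected payoffs:

```python
import numpy as np

def quantal_response(Eu_i, draws=200_000, sd=1.0, seed=0):
    """Approximate sigma_i^j(p): the probability that Eu_i^j(p) + eps_i^j is the
    largest disturbed payoff, here with iid N(0, sd^2) errors (an assumed distribution)."""
    rng = np.random.default_rng(seed)
    Eu_i = np.asarray(Eu_i, dtype=float)
    eps = rng.normal(0.0, sd, size=(draws, Eu_i.size))   # simulated error vectors eps_i
    best = np.argmax(Eu_i + eps, axis=1)                 # which strategy is optimal per draw
    return np.bincount(best, minlength=Eu_i.size) / draws

# Example: player i has three strategies with expected payoffs 1.0, 0.5, and 0.0.
print(quantal_response([1.0, 0.5, 0.0]))
```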

ANOTHER DEFINITION
$\pi^*$ is a Quantal Response Equilibrium if $\pi_i^{j*} = \sigma_i^j(\pi^*)$ for all $j = 1, 2, \ldots, J(i)$ and $i \in N$, where $\pi_i^j$ is the probability player $i$ chooses strategy $j$ and $\pi = \{\pi_1, \pi_2, \ldots, \pi_n\}$ with $\pi_i = \{\pi_i^1, \pi_i^2, \ldots, \pi_i^{J(i)}\}$ $\forall i \in N$.

COMMENT
Assuming the $\epsilon_i^j$'s are independently and identically distributed (iid) extreme value (Weibull), the QRE implies the logit function
$$\pi_i^j = \frac{e^{\lambda Eu_i(s_i^j, \pi_{-i})}}{\sum_{k=1}^{J(i)} e^{\lambda Eu_i(s_i^k, \pi_{-i})}},$$
where $\lambda$ is a parameter that is inversely related to the dispersion or variance of the error. For $\lambda = 0$, probabilities are uniform. As $\lambda$ approaches $\infty$, the dispersion of error approaches 0 and the QRE approaches a Nash equilibrium.
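A minimal sketch of this logit response in Python, written to guard against numerical overflow (the payoff values in the example calls are hypothetical):

```python
import numpy as np

def logit_response(Eu_i, lam):
    """pi_i^j = exp(lam * Eu_i^j) / sum_k exp(lam * Eu_i^k)."""
    z = lam * np.asarray(Eu_i, dtype=float)
    z -= z.max()                       # subtract the max to avoid overflow
    w = np.exp(z)
    return w / w.sum()

print(logit_response([1.0, 0.5], lam=0.0))    # lam = 0: uniform [0.5, 0.5]
print(logit_response([1.0, 0.5], lam=50.0))   # large lam: close to the best response
```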

GENERAL EXAMPLE
If $\pi_i^1 = \pi_i$ and $\pi_i^2 = 1 - \pi_i$ for $i = 1, 2$, the QRE will solve
$$\pi_1 = \frac{e^{\lambda Eu_1(s_1^1, \pi_2)}}{e^{\lambda Eu_1(s_1^1, \pi_2)} + e^{\lambda Eu_1(s_1^2, \pi_2)}}$$
and
$$\pi_2 = \frac{e^{\lambda Eu_2(s_2^1, \pi_1)}}{e^{\lambda Eu_2(s_2^1, \pi_1)} + e^{\lambda Eu_2(s_2^2, \pi_1)}}.$$
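For a 2x2 game this system can be solved by iterating the two logit responses to a fixed point. A sketch under hypothetical payoff matrices (the damping factor and starting point are arbitrary computational choices, not part of the model):

```python
import numpy as np

# Hypothetical 2x2 payoffs: U1[a, b] (U2[a, b]) is player 1's (2's) payoff
# when player 1 plays row a and player 2 plays column b.
U1 = np.array([[3.0, 0.0], [1.0, 2.0]])
U2 = np.array([[2.0, 1.0], [0.0, 3.0]])

def logit(Eu, lam):
    """Logit response, as in the previous sketch."""
    z = lam * np.asarray(Eu, dtype=float)
    z -= z.max()
    w = np.exp(z)
    return w / w.sum()

def qre_2x2(U1, U2, lam, iters=5000):
    """Damped fixed-point iteration on the two logit-response conditions."""
    p1 = np.array([0.5, 0.5])   # player 1's probabilities over rows
    p2 = np.array([0.5, 0.5])   # player 2's probabilities over columns
    for _ in range(iters):
        Eu1 = U1 @ p2            # player 1's expected payoff to each row
        Eu2 = U2.T @ p1          # player 2's expected payoff to each column
        p1 = 0.5 * p1 + 0.5 * logit(Eu1, lam)
        p2 = 0.5 * p2 + 0.5 * logit(Eu2, lam)
    return p1, p2

print(qre_2x2(U1, U2, lam=2.0))
```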

MORE SPECIFIC EXAMPLE

ANOTHER SPECIFIC EXAMPLE
NE = {(0.50, 0.50), (0.50, 0.50)}
Observed:
–Goeree & Holt = {(0.48, 0.52), (0.48, 0.52)}
–Our Class = {(0.75, 0.25), (0.44, 0.56)}

ANOTHER SPECIFIC EXAMPLE
NE = {(0.50, 0.50), (0.125, 0.875)}
Observed:
–Goeree & Holt = {(0.96, 0.04), (0.16, 0.84)}
–Our Class = {(0.57, 0.43), (0.20, 0.80)}

ANOTHER SPECIFIC EXAMPLE
NE = {(0.50, 0.50), (0.909, 0.091)}
Observed:
–Goeree & Holt = {(0.08, 0.92), (0.80, 0.20)}
–Our Class = {(0.29, 0.71), (0.70, 0.30)}

PLAYER 1’s QRE

PLAYER 2’s QRE

EMPIRICAL ANALYSIS
$N$ pairs of subjects.
$y_i \in \{(\text{Top, Left}), (\text{Top, Right}), (\text{Bottom, Left}), (\text{Bottom, Right})\}$
Let $y = \{y_1, \ldots, y_N\}$, so the probability of $y$ is $L = \prod_{i=1}^{N} \Pr(y_i)$, where $\Pr(y_i)$ is the product of the two players' QRE choice probabilities for the outcome $y_i$.
Solve the QRE system for $\pi_1(\lambda)$ and $\pi_2(\lambda)$.
Optimize $L$ over $\lambda \geq 0$.
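A minimal sketch of the likelihood step, assuming a hypothetical function qre_probs(lam) that returns player 1's probability of Top and player 2's probability of Left at a given λ (for instance, from a 2x2 QRE solver like the one sketched earlier), and hypothetical outcome counts:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(lam, qre_probs, counts):
    """-log L = -sum_i log Pr(y_i), with Pr(y_i) built from the QRE probabilities."""
    pT, pL = qre_probs(lam)   # probability of Top (player 1) and Left (player 2)
    pr = {"TL": pT * pL, "TR": pT * (1 - pL),
          "BL": (1 - pT) * pL, "BR": (1 - pT) * (1 - pL)}
    return -sum(n * np.log(pr[k] + 1e-12) for k, n in counts.items())

def fit_lambda(qre_probs, counts, lam_max=20.0):
    """Maximize L over lambda >= 0 (the upper bound is an arbitrary search limit)."""
    res = minimize_scalar(neg_log_likelihood, bounds=(0.0, lam_max),
                          args=(qre_probs, counts), method="bounded")
    return res.x

# Usage with hypothetical counts of the four outcomes for N pairs of subjects:
# counts = {"TL": 10, "TR": 5, "BL": 3, "BR": 2}
# lam_hat = fit_lambda(qre_probs, counts)
```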

WHAT HAVE OTHERS DONE EMPIRICALLY
McKelvey and Palfrey had subjects play a variety of games. Hypotheses:
–Random Play: Rejected
–Nash Play: Rejected
–Learning (e.g., the dispersion of error decreases with experience): Results Mixed

GOOD EXERCISE
Data from a classroom experiment last year:
–Treatment 1: {(T, L), (T, R), (B, L), (B, R)} = {3, 2, 1, 1}
–Treatment 2: {(T, L), (T, R), (B, L), (B, R)} = {0, 6, 1, 0}
–Treatment 3: {(T, L), (T, R), (B, L), (B, R)} = {2, 0, 3, 2}
Questions:
–Is play strictly random?
–Does $\lambda$ differ across treatments?
–What are the estimated probabilities of Top & Bottom and Left & Right for each treatment?
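As a starting point for the first question, one possible approach is a likelihood-ratio test of λ = 0 (strictly random, uniform play) against the fitted λ, reusing the neg_log_likelihood and fit_lambda helpers sketched after the empirical analysis slide; qre_probs is again a hypothetical stand-in for the treatment's QRE solution.

```python
from scipy.stats import chi2

# Treatment 1 counts from the slide, keyed by outcome (T,L), (T,R), (B,L), (B,R).
counts_t1 = {"TL": 3, "TR": 2, "BL": 1, "BR": 1}

def lr_test_random_play(qre_probs, counts):
    """LR test of H0: lambda = 0 (strictly random play) vs. H1: lambda = lambda_hat."""
    lam_hat = fit_lambda(qre_probs, counts)
    ll_restricted = -neg_log_likelihood(0.0, qre_probs, counts)
    ll_unrestricted = -neg_log_likelihood(lam_hat, qre_probs, counts)
    lr_stat = 2.0 * (ll_unrestricted - ll_restricted)
    p_value = 1.0 - chi2.cdf(lr_stat, df=1)   # one restriction (lambda = 0)
    return lam_hat, lr_stat, p_value
```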