Calibrated Learning and Correlated Equilibrium By: Dean Foster and Rakesh Vohra Presented by: Jason Sorensen.

Nash Equilibrium
A Nash Equilibrium (N.E.) is a set of strategies in which no player benefits by unilaterally altering their own strategy. The minimax theorem gives the value of a zero-sum game at an N.E., and by Nash's theorem any game with finite strategy sets has at least one (mixed-strategy) N.E. An N.E. will actually be reached in play if:
1. Each player is rational and believes the others to be rational
2. The utility matrix is accurate
3. The play is executed flawlessly
4. The players can accurately deduce the N.E. solution
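In symbols (a standard formulation, stated here for reference rather than taken from the slides): a mixed-strategy profile $\sigma^* = (\sigma_1^*, \dots, \sigma_n^*)$ is a Nash equilibrium if, for every player $i$ and every alternative strategy $\sigma_i$,

$$u_i(\sigma_i^*, \sigma_{-i}^*) \;\ge\; u_i(\sigma_i, \sigma_{-i}^*).$$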

But…
In many instances, repeated play may not converge to a Nash Equilibrium; in fact, in general there is no method guaranteed to converge to an N.E. Further difficulties:
- The payoff matrix may be unknown
- There may be multiple equilibria
- The opponent may be irrational
- N.E. is inconsistent with the Bayesian perspective

Fictitious Play
In fictitious play (F.P.), each player assumes the opponent's strategy is the empirical frequency of the opponent's moves so far, and best-responds to it. N.E. are absorbing points of F.P.: if F.P. converges, it converges to an N.E. F.P. converges if the game:
1. Has a 2x2 payoff matrix (with general payoffs)
2. Is zero sum
3. Is solvable by iterated elimination of dominated strategies
4. Is a potential game
Convergence is not guaranteed for general games. A sketch of the procedure is given below.
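A minimal Python sketch of fictitious play for a two-player bimatrix game (the function name, tie-breaking by lowest index, and the arbitrary initial actions are my assumptions, not part of the slides):

import numpy as np

def fictitious_play(A, B, T=1000):
    # A[i, j]: row player's payoff, B[i, j]: column player's payoff for actions (i, j)
    m, n = A.shape
    row_hist = np.zeros(m)          # counts of the row player's past actions
    col_hist = np.zeros(n)          # counts of the column player's past actions
    row_hist[0] = col_hist[0] = 1   # arbitrary initial actions (an assumption)
    joint = np.zeros((m, n))        # empirical distribution of joint play
    for _ in range(T):
        p_col = col_hist / col_hist.sum()   # opponent's empirical mixed strategy
        p_row = row_hist / row_hist.sum()
        i = int(np.argmax(A @ p_col))       # row player's best response
        j = int(np.argmax(p_row @ B))       # column player's best response
        row_hist[i] += 1
        col_hist[j] += 1
        joint[i, j] += 1
    return joint / T

Returning the empirical joint distribution makes it easy to compare the long-run play with the equilibrium notions on the following slides.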

Correlated Equilibrium
- More general than N.E. (every N.E. is a C.E.)
- A C.E. is a single distribution over joint moves that both players follow, rather than a pair of independent mixed strategies
- Neither player can benefit from deviating from the C.E.'s recommendation unless the other player deviates as well (formal condition below)
- Calibrated forecasts lead to C.E.
- Consistent with the Bayesian viewpoint
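Written out (a standard formulation, added here for reference): a distribution $p$ over joint action profiles is a correlated equilibrium if, for every player $i$ and every pair of actions $a_i, a_i'$,

$$\sum_{a_{-i}} p(a_i, a_{-i}) \bigl[ u_i(a_i, a_{-i}) - u_i(a_i', a_{-i}) \bigr] \;\ge\; 0 .$$

Equivalently, when a profile is drawn from $p$ and each player is privately told only their own component, following the recommendation is a best response.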

An example: C.E. vs N.E.
The game of Dare or Chicken (row player's payoff listed first):

       D      C
D     0,0    7,2
C     2,7    6,6

Three N.E. exist: (D,C), (C,D), and a mixed N.E. in which each player Dares with probability 1/3. A C.E. puts probability 1/3 on each of (C,C), (D,C) and (C,D). Payoff at the mixed N.E. is 14/3 ≈ 4.67 per player; payoff at this C.E. is 5 per player. A numerical check follows below.
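A quick numerical check of these payoffs (an illustrative sketch; the arrays simply restate the matrix above):

import numpy as np

A = np.array([[0, 7], [2, 6]])   # row player's payoffs for (D, C) x (D, C)
B = np.array([[0, 2], [7, 6]])   # column player's payoffs

p = np.array([1/3, 2/3])         # mixed N.E.: Dare with probability 1/3
print(p @ A @ p)                 # 4.666... = 14/3

ce = np.array([[0, 1/3],         # C.E.: probability 1/3 on (D,C), (C,D), (C,C)
               [1/3, 1/3]])
print((ce * A).sum(), (ce * B).sum())   # 5.0 and 5.0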

The Shapley game
F.P. oscillates between (1,1), (1,2), (2,2), (2,3), (3,3), (3,1) and does NOT converge to the N.E. The N.E. has both players play (1/3, 1/3, 1/3). There is a C.E. with support on the six states above, each with probability 1/6. N.E. payoff = 1/3 per player; C.E. payoff = 1/2 per player. This C.E. is not a mixture of Nash equilibria. The payoff matrix (row player's payoff first):

       1      2      3
1     1,0    0,1    0,0
2     0,0    1,0    0,1
3     0,1    0,0    1,0
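Running the fictitious_play sketch from above on this matrix (an illustrative check; with the assumed initial actions and tie-breaking, play follows the best-response cycle rather than settling on the N.E.):

A = np.array([[1, 0, 0],   # row player's payoffs
              [0, 1, 0],
              [0, 0, 1]])
B = np.array([[0, 1, 0],   # column player's payoffs
              [0, 0, 1],
              [1, 0, 0]])
print(fictitious_play(A, B, T=100_000).round(3))   # mass stays on the six cycle cells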

Calibrated Forecasts
A forecast is calibrated if, averaging over all instances in which x is predicted with probability p, x in fact occurred about p percent of the time. Key results:
- If each player forecasts the opponent's play with a calibrated forecast and best-responds to it, the empirical frequency of joint play converges to the set of correlated equilibria, i.e. convergence to C.E. is guaranteed
- There is a calibrated forecast that converges to any given C.E., except for a measure-zero set of games
- There is a randomized forecasting rule for a player that stays calibrated regardless of what learning rule the opponent uses
One way to state calibration formally is given below.
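A common formalization of calibration (my phrasing, consistent with the slide's informal statement): for forecasts $f_t$ of a binary event $X_t \in \{0,1\}$,

$$\frac{\sum_{t \le T} \mathbb{1}[f_t = p]\, X_t}{\sum_{t \le T} \mathbb{1}[f_t = p]} \;\longrightarrow\; p \quad \text{as } T \to \infty,$$

for every forecast value $p$ that is used a positive fraction of the time.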

This is all great, but…
- How long does convergence really take … to a calibrated forecast? … to a correlated equilibrium?
- How do we know which C.E. we are learning?
- Was N.E. really so bad in the first place? What is the expected payoff gain of a general C.E. over an N.E.?
- In a 2003 paper, Foster showed a powerful method for learning N.E.
- _insert your own problem with the paper here_