News and Notes 3/18 Two readings in game theory assigned Short lecture today due to 10 AM fire drill HW 2 handed back today, midterm handed back Tuesday No MK OHs today

Introduction to Game Theory Networked Life CSE 112 Spring 2004 Prof. Michael Kearns

Game Theory A mathematical theory designed to model: –how rational individuals should behave –when individual outcomes are determined by collective behavior –strategic behavior Rational usually means selfish --- but not always Rich history, flourished during the Cold War Traditionally viewed as a subject of economics Subsequently applied by many fields –evolutionary biology, social psychology Perhaps the branch of pure math most widely examined outside of the “hard” sciences

Prisoner’s Dilemma Cooperate = deny the crime; defect = confess guilt of both Claim that (defect, defect) is an equilibrium: –if I am definitely going to defect, you choose between -10 and -8 –so you will also defect –same logic applies to me Note unilateral nature of equilibrium: –I fix a behavior or strategy for you, then choose my best response Claim: no other pair of strategies is an equilibrium But we would have been so much better off cooperating… Looking ahead: what do people actually do?

Payoff matrix (row player’s payoff listed first):
            cooperate     defect
cooperate   -1, -1        -10, -0.25
defect      -0.25, -10    -8, -8
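
Below is a minimal sketch (Python; not part of the original lecture, names are illustrative) that checks each pure strategy pair of the matrix above for the unilateral best-response property described on this slide; only (defect, defect) passes.

```python
# Check which pure-strategy pairs of the Prisoner's Dilemma are Nash equilibria.

ACTIONS = ["cooperate", "defect"]

# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-10, -0.25),
    ("defect",    "cooperate"): (-0.25, -10),
    ("defect",    "defect"):    (-8, -8),
}

def is_pure_nash(row, col):
    """A pair is an equilibrium if neither player gains by deviating unilaterally."""
    row_payoff, col_payoff = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= row_payoff for r in ACTIONS)
    col_ok = all(payoffs[(row, c)][1] <= col_payoff for c in ACTIONS)
    return row_ok and col_ok

for r in ACTIONS:
    for c in ACTIONS:
        print((r, c), "Nash?", is_pure_nash(r, c))
# Only (defect, defect) prints True.
```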

Penny Matching What are the equilibrium strategies now? There are none! –if I play heads then you will of course play tails –but that makes me want to play tails too –which in turn makes you want to play heads –etc. etc. etc. But what if we can each (privately) flip coins? –the strategy pair (1/2, 1/2) is an equilibrium Such randomized strategies are called mixed strategies

Payoff matrix (row player’s payoff listed first):
        heads    tails
heads   1, 0     0, 1
tails   0, 1     1, 0
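
A minimal sketch (Python, illustrative names) verifying the claim on this slide: against an opponent who flips a fair coin, both of my pure actions earn the same expected payoff, so I cannot gain by deviating from my own 1/2-1/2 mix.

```python
# Row player's payoffs in penny matching; the column player's are 1 minus these.
row_payoff = {("heads", "heads"): 1, ("heads", "tails"): 0,
              ("tails", "heads"): 0, ("tails", "tails"): 1}

opponent_mix = {"heads": 0.5, "tails": 0.5}

for my_action in ("heads", "tails"):
    expected = sum(p * row_payoff[(my_action, their_action)]
                   for their_action, p in opponent_mix.items())
    print(my_action, "expected payoff:", expected)
# Both print 0.5, so no pure deviation beats mixing 1/2-1/2;
# by symmetry the same holds for the column player.
```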

The World According to Nash If > 2 actions, mixed strategy is a distribution on them –e.g. 1/3 rock, 1/3 paper, 1/3 scissors Might also have > 2 players A general mixed strategy is a vector P = (P[1], P[2],… P[n]): –P[i] is a distribution over the actions for player i –assume everyone knows all the distributions P[j] –but the “coin flips” used to select from P[i] known only to i P is an equilibrium if: –for every i, P[i] is a best response to all the other P[j] Nash 1950: every game has a mixed strategy equilibrium –no matter how many rows and columns there are –in fact, no matter how many players there are Thus known as a Nash equilibrium A major reason for Nash’s Nobel Prize in economics
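
A minimal sketch (Python with NumPy; the function name and tolerance are illustrative, not from the lecture) of the equilibrium condition on this slide in the two-player case: a mixed profile is an equilibrium exactly when each player's mix puts weight only on actions that are best responses to the other player's mix.

```python
import numpy as np

def is_mixed_nash(A, B, p, q, tol=1e-9):
    """A, B: row/column player payoff matrices; p, q: mixed strategies."""
    row_values = A @ q          # expected payoff of each row action against q
    col_values = B.T @ p        # expected payoff of each column action against p
    row_ok = np.all(row_values[p > tol] >= row_values.max() - tol)
    col_ok = np.all(col_values[q > tol] >= col_values.max() - tol)
    return bool(row_ok and col_ok)

# Penny matching again: the uniform mix for both players is an equilibrium.
A = np.array([[1, 0], [0, 1]]); B = 1 - A
print(is_mixed_nash(A, B, np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # True
```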

Facts about Nash Equilibria While there is always at least one, there might be many –zero-sum games: all equilibria give the same payoffs to each player –non-zero-sum: different equilibria may give different payoffs! Equilibrium is a static notion –does not suggest how players might learn to play equilibrium –does not suggest how we might choose among multiple equilibria Nash equilibrium is a strictly competitive notion –players cannot have “pre-play communication” –bargains, side payments, threats, collusions, etc. not allowed Computing Nash equilibria for large games is difficult

Hawks and Doves Two parties confront over a resource of value V May simply display aggression, or actually have a fight Cost of losing a fight: C > V Assume parties are equally likely to win or lose There are three Nash equilibria: –(hawk, dove), (dove, hawk) and (V/C hawk, V/C hawk) Alternative interpretation for C >> V: –the Kansas Cornfield Intersection game (a.k.a. Chicken) –hawk = speed through intersection, dove = yield

Payoff matrix (row player’s payoff listed first):
        hawk                dove
hawk    (V-C)/2, (V-C)/2    V, 0
dove    0, V                V/2, V/2
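
A minimal sketch (Python with SymPy; symbol names illustrative) deriving the mixed equilibrium on this slide: a player is only willing to randomize if hawk and dove give the same expected payoff against the opponent's mix, which happens exactly when the opponent plays hawk with probability V/C.

```python
import sympy as sp

p, V, C = sp.symbols("p V C", positive=True)

payoff_hawk = p * (V - C) / 2 + (1 - p) * V    # expected payoff of playing hawk
payoff_dove = (1 - p) * V / 2                  # expected payoff of playing dove

print(sp.solve(sp.Eq(payoff_hawk, payoff_dove), p))   # [V/C]
```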

Board Games and Game Theory What does game theory say about richer games? –tic-tac-toe, checkers, backgammon, go,… –these are all games of complete information with state –incomplete information: poker Imagine an absurdly large “game matrix” for chess: –each row/column represents a complete strategy for playing –strategy = a mapping from every possible board configuration to the next move for the player –number of rows or columns is huge --- but finite! Thus, a Nash equilibrium for chess exists! –it’s just completely infeasible to compute it –note: can often “push” randomization “inside” the strategy

Repeated Games Nash equilibrium analyzes “one-shot” games –we meet for the first time, play once, and separate forever Natural extension: repeated games –we play the same game (e.g. Prisoner’s Dilemma) many times in a row –like a board game, where the “state” is the history of play so far –strategy = a mapping from the history so far to your next move So repeated games also have a Nash equilibrium –may be different from the one-shot equilibrium! –depends on the game and details of the setting We are approaching learning in games –natural to adapt your behavior (strategy) based on play so far
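
A minimal sketch (Python, using the Prisoner’s Dilemma payoffs from above; function names are illustrative, not from the lecture) of the idea that a repeated-game strategy maps the history of play so far to the next move, plus a tiny simulator that plays two such strategies against each other.

```python
def always_defect(my_history, their_history):
    return "defect"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "cooperate" if not their_history else their_history[-1]

payoffs = {("cooperate", "cooperate"): (-1, -1), ("cooperate", "defect"): (-10, -0.25),
           ("defect", "cooperate"): (-0.25, -10), ("defect", "defect"): (-8, -8)}

def play_repeated(strategy1, strategy2, rounds):
    h1, h2, total1, total2 = [], [], 0.0, 0.0
    for _ in range(rounds):
        a1, a2 = strategy1(h1, h2), strategy2(h2, h1)
        p1, p2 = payoffs[(a1, a2)]
        h1.append(a1); h2.append(a2)
        total1 += p1; total2 += p2
    return total1, total2

print(play_repeated(tit_for_tat, tit_for_tat, 10))    # mutual cooperation: (-10, -10)
print(play_repeated(always_defect, tit_for_tat, 10))  # exploited once, then (-8, -8) every round
```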

Repeated Prisoner’s Dilemma If we play for R rounds, and both know R: –(always defect, always defect) still the only Nash equilibrium –argue by backwards induction If uncertainty about R is introduced (e.g. random stopping): –cooperation and tit-for-tat can become equilibria If computational restrictions are placed on our strategies: –as long as we’re too feeble to count, cooperative equilibria arise –formally: < log(R) states in a finite automaton –a form of bounded rationality
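
The "too feeble to count" restriction can be made concrete by writing a strategy as a small finite automaton. Here is a minimal sketch (illustrative, not from the lecture) of tit-for-tat as a two-state machine: its state depends only on the opponent's last move, never on the round number.

```python
class TitForTatAutomaton:
    def __init__(self):
        self.state = "cooperate"          # start by cooperating

    def play(self):
        return self.state

    def observe(self, opponent_action):
        self.state = opponent_action      # next action simply mirrors the opponent

machine = TitForTatAutomaton()
for opponent_move in ["cooperate", "defect", "defect", "cooperate"]:
    print(machine.play(), end=" ")
    machine.observe(opponent_move)
# prints: cooperate cooperate defect defect
```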

The Folk Theorem Take any one-shot, two-player game Suppose that (u,v) are the (expected) payoffs under some mixed strategy pair (P[1],P[2]) for the two players –(P[1], P[2]) not necessarily a Nash equilibrium –but (u,v) gives better payoffs than the security levels –security level: what a player can get no matter what the other does –example: sec. level is (-8, -8) in Prisoner’s Dilemma; (-1,-1) is better Then there is always a Nash equilibrium for the infinite repeated game giving payoffs (u,v) –makes use of the concept of threats Partial resolution of the difficulties of Nash equilibria…
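
A minimal sketch (Python, illustrative) of the security level mentioned on this slide, restricted to pure strategies, which is enough to reproduce the Prisoner’s Dilemma value of -8.

```python
# Security level: the best payoff the row player can guarantee
# no matter what the other player does.
row_payoff = {
    ("cooperate", "cooperate"): -1,    ("cooperate", "defect"): -10,
    ("defect",    "cooperate"): -0.25, ("defect",    "defect"): -8,
}
actions = ("cooperate", "defect")

security_level = max(min(row_payoff[(mine, theirs)] for theirs in actions)
                     for mine in actions)
print(security_level)   # -8, matching the slide's Prisoner's Dilemma value
```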

Correlated Equilibrium In a Nash equilibrium (P[1],P[2]): –player 2 “knows” the distribution P[1] –but doesn’t know the “random bits” player 1 uses to select from P[1] –equilibrium relies on private randomization Suppose now we also allow public (shared) randomization –so strategy might say things like “if private bits = … and shared bits = …, then play hawk” Then two strategies are in correlated equilibrium if: –knowing only your strategy and the shared bits, my strategy is a best response, and vice-versa Nash is the special case of no shared bits

Hawks and Doves Revisited There are three Nash equilibria: –(hawk, dove), (dove, hawk) and (V/C hawk, V/C hawk) Alternative interpretation for C >> V: –the Kansas Cornfield Intersection game (a.k.a. Chicken) –hawk = speed through intersection, dove = yield Correlated equilibrium: the traffic signal –if the shared bit is green to me, I am playing hawk –if the shared bit is red to me, I will play dove –you play the symmetric strategy –splits waiting time between us --- a different outcome than Nash

Payoff matrix (row player’s payoff listed first):
        hawk                dove
hawk    (V-C)/2, (V-C)/2    V, 0
dove    0, V                V/2, V/2
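
A minimal sketch (Python; V = 2 and C = 10 are assumed values satisfying C > V) checking the traffic-signal correlated equilibrium described on this slide: conditioned on my signal, and assuming you follow yours, my recommended action is a best response.

```python
V, C = 2.0, 10.0
payoff = {("hawk", "hawk"): (V - C) / 2, ("hawk", "dove"): V,
          ("dove", "hawk"): 0.0,         ("dove", "dove"): V / 2}

# My signal is green: recommendation is hawk, and the opponent (seeing red) plays dove.
assert payoff[("hawk", "dove")] >= payoff[("dove", "dove")]
# My signal is red: recommendation is dove, and the opponent (seeing green) plays hawk.
assert payoff[("dove", "hawk")] >= payoff[("hawk", "hawk")]

expected = 0.5 * payoff[("hawk", "dove")] + 0.5 * payoff[("dove", "hawk")]
print("Following the signal is a correlated equilibrium; expected payoff:", expected)
```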

Correlated Equilibrium Facts Always exists –all Nash equilibria are correlated equilibria –all probability distributions over Nash equilibria are C.E. –and some more things are C.E. as well –a broader concept than Nash Technical advantages of correlated equilibria: –often easier to compute than Nash Conceptual advantages: –correlated behavior is a fact of the real world –model a limited form of cooperation –more general cooperation becomes extremely complex and messy Breaking news (late 90s – now): –CE is the natural convergence notion for “rational” learning in games!

A More Complex Setting: Bargaining [Figure: the set of possible outcomes drawn as a region in the plane, with x = payoff to player 1 and y = payoff to player 2 on the axes] Convex set S of possible payoffs Players must bargain to settle on a solution (x,y) in S What “should” the solution be? Want a general answer A function F(S) mapping S to a solution (x,y) in S Nash’s axioms for F: –choose on red boundary (Pareto) –scale invariance –symmetry in the role of x and y –“independence of irrelevant alternatives”: if green solution was contained in smaller red set, must also be red solution

Nash’s Bargaining Solution [Figure: the same picture of the outcome set in the (x, y) plane] There’s only one choice of F that satisfies all these axioms And the winner is: –choose (x,y) on the boundary of S that maximizes xy Example: rich-poor bargaining [Figure: utility as a function of cash for player 1 (rich) and player 2 (poor); the solution maximizes the product U1 x U2]
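
A minimal sketch (Python; the utility functions and grid search are assumptions made only to illustrate the rich-poor flavor of the example, not the lecture’s model) that finds the split of a dollar maximizing the Nash product of the two players' utilities.

```python
import math

def nash_split(total_cash=1.0, steps=10_000):
    best = None
    for i in range(1, steps):                 # cash handed to player 1 (rich)
        c1 = total_cash * i / steps
        c2 = total_cash - c1
        u1 = c1                                # assumed: roughly linear utility for the rich player
        u2 = math.sqrt(c2)                     # assumed: concave utility for the poor player
        product = u1 * u2                      # the Nash product x*y
        if best is None or product > best[0]:
            best = (product, c1, c2)
    return best

print(nash_split())   # under these assumed utilities the maximizer is near c1 = 2/3, c2 = 1/3
```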

Social Choice Theory Suppose we must collectively choose between alternatives –e.g. Bush, Kerry, Nader, Sharpton,… Under current voting scheme, “gamesmanship” encouraged –e.g. prefer Nader to Gore, but Gore is more “electable” –not a “truth revealing” mechanism An idealized voting scheme –we each submit a complete ordering on the candidates –e.g. Sharpton > Bush > Nader > Kerry –then combine the orderings to choose a global ordering (outcome) –we would like the outcome to be “fair” and “reasonable” What do “fair” and “reasonable” mean? Again take an axiomatic approach
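
As one concrete illustration of "combining the orderings" (Borda count is an assumed example rule here, not a scheme named in the lecture), the sketch below turns each submitted ordering into points and sums them into a global ranking.

```python
from collections import defaultdict

ballots = [
    ["Sharpton", "Bush", "Nader", "Kerry"],   # most preferred candidate first
    ["Kerry", "Nader", "Bush", "Sharpton"],
    ["Nader", "Kerry", "Sharpton", "Bush"],
]

scores = defaultdict(int)
for ballot in ballots:
    n = len(ballot)
    for position, candidate in enumerate(ballot):
        scores[candidate] += n - 1 - position   # top choice gets n-1 points, last gets 0

global_ordering = sorted(scores, key=scores.get, reverse=True)
print(global_ordering, dict(scores))
```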

Social Choice Axioms Let’s call F the mapping from preferences to outcome Suppose for some preferences, x > y in the global outcome –then if we move x up in all preferences, F still has x > y Suppose we look at some subset S of alternatives –e.g. S = {Kerry, Sharpton} –suppose we modify preferences only outside of S –F’s ranking of S should remain unchanged (irrelevant alternatives) Always some way of making F output x > y, for any x,y –otherwise F is ignoring the preferences! Non-dictatorship: –no single individual determines output of F

Arrow’s Impossibility Theorem There is no mapping F satisfying all four axioms! Long history of alternate axioms, (im)possibility research A mathematical demonstration of the difficulty of selecting collective outcomes from individual preferences
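
A tiny sketch (Python, illustrative; not part of the slide) of the kind of difficulty the theorem formalizes: with three voters and three alternatives, pairwise majority voting can produce a cycle rather than a consistent ordering.

```python
from itertools import permutations

ballots = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(ballot.index(x) < ballot.index(y) for ballot in ballots)
    return wins > len(ballots) / 2

for x, y in permutations("abc", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
# prints a over b, b over c, and c over a: a cycle, so no global ordering exists.
```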

Next Up Have so far examined simple games between two players Strategic interaction on the smallest “network”: –two vertices with a single link between them –much richer interaction than just info transmission, messages, etc. Classical game theory generalizes to many players –e.g. Nash equilibria always exist in multi-player matrix games –but this fails to capture/exploit/examine structured interaction We need specific models for networked games: –games on networks: local interaction –shared information: economies, financial markets –voting systems –evolutionary games