LECTURE 3 DYNAMIC GAMES OF COMPLETE INFORMATION


1 LECTURE 3 DYNAMIC GAMES OF COMPLETE INFORMATION
May 19, 2003 LECTURE 3 DYNAMIC GAMES OF COMPLETE INFORMATION. Notes modified from Yongqin, Fudan University.

2 Outline of dynamic games of complete information
Extensive-form representation; dynamic games of complete and perfect information; game trees; subgame-perfect Nash equilibrium; backward induction; applications; dynamic games of complete and imperfect information; more applications; repeated games.

3 Entry game An incumbent monopolist faces the possibility of entry by a challenger. The challenger may choose to enter or stay out. If the challenger enters, the incumbent can choose either to accommodate or to fight. The payoffs are common knowledge. Game tree: the Challenger moves first (In or Out); after In, the Incumbent moves (A = accommodate, F = fight). The first number is the payoff of the challenger and the second is the payoff of the incumbent: Out → (1, 2); In then A → (2, 1); In then F → (0, 0).

4 Sequential-move matching pennies
Each of the two players has a penny. Player 1 first chooses whether to show the Head or the Tail. After observing player 1's choice, player 2 chooses to show Head or Tail. Both players know the following rules: if the two pennies match (both heads or both tails), then player 2 wins player 1's penny; otherwise, player 1 wins player 2's penny. Game tree: Player 1 chooses H or T; at either node, Player 2 then chooses H or T. Payoffs (player 1, player 2): (H, H) → (-1, 1); (H, T) → (1, -1); (T, H) → (1, -1); (T, T) → (-1, 1).

5 Dynamic (or sequential-move) games of complete information
A set of players Who moves when and what action choices are available? What do players know when they move? Players’ payoffs are determined by their choices. All these are common knowledge among the players.

6 Definition: extensive-form representation
The extensive-form representation of a game specifies: the players in the game when each player has the move what each player can do at each of his or her opportunities to move what each player knows at each of his or her opportunities to move the payoff received by each player for each combination of moves that could be chosen by the players

7 Dynamic games of complete and perfect information
All previous moves are observed before the next move is chosen. A player knows who has moved, and what they chose, before she makes a decision.

8 Game tree A game tree has a set of nodes and a set of edges such that
x0 A game tree has a set of nodes and a set of edges such that each edge connects two nodes (these two nodes are said to be adjacent) for any pair of nodes, there is a unique path that connects these two nodes a path from x0 to x4 a node x1 x2 x3 x4 x5 x6 x7 x8 an edge connecting nodes x1 and x5

9 Game tree a path from x0 to x4 x0
A path is a sequence of distinct nodes y1, y2, y3, ..., yn-1, yn such that yi and yi+1 are adjacent, for i=1, 2, ..., n-1. We say that this path is from y1 to yn. We can also use the sequence of edges induced by these nodes to denote the path. The length of a path is the number of edges contained in the path. Example 1: x0, x2, x3, x7 is a path of length 3. Example 2: x4, x1, x0, x2, x6 is a path of length 4 x1 x2 x3 x4 x5 x6 x7 x8

10 Game tree There is a special node x0 called the root of the tree which is the beginning of the game The nodes adjacent to x0 are successors of x0. The successors of x0 are x1, x2 For any two adjacent nodes, the node that is connected to the root by a longer path is a successor of the other node. Example 3: x7 is a successor of x3 because they are adjacent and the path from x7 to x0 is longer than the path from x3 to x0 x0 x1 x2 x3 x4 x5 x6 x7 x8

11 Game tree If a node x is a successor of another node y then y is called a predecessor of x. In a game tree, any node other than the root has a unique predecessor. Any node that has no successor is called a terminal node which is a possible end of the game Example 4: x4, x5, x6, x7, x8 are terminal nodes x0 x1 x2 x3 x4 x5 x6 x7 x8

12 Game tree Any node other than a terminal node represents some player.
For a node other than a terminal node, the edges that connect it with its successors represent the actions available to the player represented by the node H T Player 2 Player 2 H T H T -1, 1 1, -1 1, -1 -1, 1

13 Game tree Player 1 A path from the root to a terminal node represents a complete sequence of moves which determines the payoff at the terminal node H T Player 2 Player 2 H T H T -1, 1 1, -1 1, -1 -1, 1

14 Strategy A strategy for a player is a complete plan of actions.
It specifies a feasible action for the player in every contingency in which the player might be called on to act. What the players can possibly play, not what they do play. Cf: static games

15 Entry game Challenger’s strategies In Out Incumbent’s strategies
Accommodate (if challenger plays In); Fight (if challenger plays In). Payoffs: normal-form representation (rows are the challenger's strategies, columns the incumbent's):
             Accommodate   Fight
    In          2 , 1       0 , 0
    Out         1 , 2       1 , 2
(Out ends the game, so its payoff (1, 2) does not depend on the incumbent's strategy.)

16 Strategy and payoff In a game tree, a strategy for a player is represented by a set of edges. A combination of strategies (sets of edges), one for each player, induce one path from the root to a terminal node, which determines the payoffs of all players

17 Sequential-move matching pennies
Player 1's strategies: Head, Tail. Player 2's strategies: H if player 1 plays H, H if player 1 plays T; H if player 1 plays H, T if player 1 plays T; T if player 1 plays H, H if player 1 plays T; T if player 1 plays H, T if player 1 plays T. Player 2's strategies are denoted by HH, HT, TH and TT, respectively. (With two actions available after each of player 1's two possible moves, player 2 has 2 × 2 = 4 strategies.)
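This counting can be checked mechanically: a strategy for player 2 assigns one action to each contingency (each possible move of player 1). A tiny Python sketch; the variable names are my own illustration, not from the slides:

```python
from itertools import product

actions = ["H", "T"]            # player 2's actions at a decision node
contingencies = ["H", "T"]      # what player 1 may have shown first

# A strategy assigns one action to each contingency, e.g. "HT" means
# "show H if player 1 showed H, show T if player 1 showed T".
strategies = ["".join(s) for s in product(actions, repeat=len(contingencies))]
print(strategies)               # ['HH', 'HT', 'TH', 'TT']
```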

18 Sequential-move matching pennies
Their payoffs. Normal-form representation (rows: Player 1's strategies, columns: Player 2's strategies):
           HH        HT        TH        TT
    H    -1 , 1    -1 , 1     1 , -1    1 , -1
    T     1 , -1   -1 , 1     1 , -1   -1 , 1

19 Nash equilibrium The set of Nash equilibria in a dynamic game of complete information is the set of Nash equilibria of its normal-form.

20 Nash equilibrium in a dynamic game
We can also use normal-form to represent a dynamic game The set of Nash equilibria in a dynamic game of complete information is the set of Nash equilibria of its normal-form How to find the Nash equilibria in a dynamic game of complete information Construct the normal-form of the dynamic game of complete information Find the Nash equilibria in the normal-form

21 Nash equilibria in entry game
Two Nash equilibria: ( In, Accommodate ) and ( Out, Fight ). Does the second Nash equilibrium make sense? No: Fight is a non-credible threat. This is a limitation of the normal-form representation.
             Accommodate   Fight
    In          2 , 1       0 , 0
    Out         1 , 2       1 , 2

22 Remove unreasonable Nash equilibria
Subgame-perfect Nash equilibrium is a refinement of Nash equilibrium. It can rule out unreasonable Nash equilibria, i.e. those supported by non-credible threats. We first need to define a subgame.

23 Subgame Player 1 Player 2 H T 1, -1 -1, 1 A subgame of a game tree begins at a nonterminal node and includes all the nodes and edges following the nonterminal node A subgame beginning at a nonterminal node x can be obtained as follows: remove the edge connecting x and its predecessor the connected part containing x is the subgame a subgame -1, 1

24 Subgame: example Player 2 E F Player 1 G H 3, 1 1, 2 0, 0 Player 2 E F
C D 2, 0 Player 1 G H 1, 2 0, 0

25 Subgame-perfect Nash equilibrium
A Nash equilibrium of a dynamic game is subgame-perfect if the strategies of the Nash equilibrium constitute a Nash equilibrium in every subgame of the game. Subgame-perfect Nash equilibrium is a Nash equilibrium.

26 Entry game Two Nash equilibria ( In, Accommodate ) is subgame-perfect.
( Out, Fight ) is not subgame-perfect because it does not induce a Nash equilibrium in the subgame beginning at Incumbent. Challenger In Out Incumbent A F 1, 2 2, 1 0, 0 Incumbent A F 2, 1 0, 0 Accommodate is the Nash equilibrium in this subgame.

27 Find subgame perfect Nash equilibria: backward induction
Starting with those smallest subgames Then move backward until the root is reached Challenger In Out Incumbent 1, 2 A F The first number is the payoff of the challenger. The second number is the payoff of the incumbent. 2, 1 0, 0
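A minimal Python sketch of backward induction on this entry game; the tree encoding and the helper function are my own illustration, not from the slides:

```python
# Entry game: terminal payoffs are (challenger, incumbent).
tree = {
    "root":      {"player": 0, "moves": {"In": "incumbent", "Out": (1, 2)}},
    "incumbent": {"player": 1, "moves": {"Accommodate": (2, 1), "Fight": (0, 0)}},
}

def backward_induction(node):
    """Return (payoffs, plan) at `node`, solving the smallest subgames first."""
    if isinstance(node, tuple):            # terminal node: payoffs already known
        return node, {}
    info = tree[node]
    best_action, best_payoffs, plan = None, None, {}
    for action, successor in info["moves"].items():
        payoffs, subplan = backward_induction(successor)
        plan.update(subplan)
        if best_payoffs is None or payoffs[info["player"]] > best_payoffs[info["player"]]:
            best_action, best_payoffs = action, payoffs
    plan[node] = best_action
    return best_payoffs, plan

print(backward_induction("root"))
# ((2, 1), {'incumbent': 'Accommodate', 'root': 'In'})  -- the (In, Accommodate) equilibrium
```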

28 Find subgame perfect Nash equilibria: backward induction
Subgame perfect Nash equilibrium (DG, E) Player 1 plays D, and plays G if player 2 plays E Player 2 plays E if player 1 plays C Player 2 E F Player 1 G H 3, 1 1, 2 0, 0 C D 2, 0

29 Existence of subgame-perfect Nash equilibrium
Every finite dynamic game of complete and perfect information has a subgame-perfect Nash equilibrium that can be found by backward induction.

30 Sequential bargaining (2.1.D of Gibbons)
Players 1 and 2 are bargaining over one dollar. The timing is as follows: At the beginning of the first period, player 1 proposes to take a share s1 of the dollar, leaving 1-s1 to player 2. Player 2 either accepts the offer or rejects it (in which case play continues to the second period). At the beginning of the second period, player 2 proposes that player 1 take a share s2 of the dollar, leaving 1-s2 to player 2. Player 1 either accepts the offer or rejects it (in which case play continues to the third period). At the beginning of the third period, player 1 receives a share s of the dollar, leaving 1-s for player 2, where 0 < s < 1. The players are impatient: they discount payoffs received one period later by a factor δ, where 0 < δ < 1.

31 Sequential bargaining (2.1.D of Gibbons)
Period 1: Player 1 proposes an offer ( s1 , 1-s1 ). Player 2 accepts (payoffs s1 , 1-s1) or rejects. Period 2: Player 2 proposes an offer ( s2 , 1-s2 ). Player 1 accepts (payoffs s2 , 1-s2) or rejects. Period 3: payoffs s , 1-s.

32 Solve sequential bargaining by backward induction
Period 2: Player 1 accepts s2 if and only if s2 ≥ δs. (We assume that each player will accept an offer if indifferent between accepting and rejecting.) Player 2 faces the following two options: (1) offer s2 = δs to player 1, leaving 1-s2 = 1-δs for herself at this period, or (2) offer s2 < δs to player 1 (player 1 will reject it), and receive 1-s next period, whose discounted value is δ(1-s). Since δ(1-s) < 1-δs, player 2 should propose the offer ( s2* , 1-s2* ), where s2* = δs. Player 1 will accept it.

33 Sequential bargaining (2.1.D of Gibbons)
Period 1: Player 1 proposes an offer ( s1 , 1-s1 ). Player 2 accepts (payoffs s1 , 1-s1) or rejects; rejection leads to the period-2 subgame, whose backward-induction value is ( δs , 1-δs ). Period 2: Player 2 proposes an offer ( s2 , 1-s2 ). Player 1 accepts (payoffs s2 , 1-s2) or rejects. Period 3: payoffs s , 1-s.

34 Solve sequential bargaining by backward induction
Period 1: Player 2 accepts 1-s1 if and only if 1-s1 ≥ δ(1-s2*) = δ(1-δs), i.e. s1 ≤ 1-δ(1-s2*), where s2* = δs. Player 1 faces the following two options: (1) offer 1-s1 = δ(1-s2*) = δ(1-δs) to player 2, leaving s1 = 1-δ(1-s2*) = 1-δ+δ²s for herself at this period, or (2) offer 1-s1 < δ(1-s2*) to player 2 (player 2 will reject it), and receive s2* = δs next period, whose discounted value is δ²s. Since δ²s < 1-δ+δ²s, player 1 should propose the offer ( s1* , 1-s1* ), where s1* = 1-δ+δ²s.
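Collecting the two backward-induction steps into one display (same notation as above, with δ the common discount factor):

```latex
\[
s_2^{*} = \delta s, \qquad
s_1^{*} = 1 - \delta\,(1 - s_2^{*}) = 1 - \delta(1 - \delta s) = 1 - \delta + \delta^{2} s ,
\]
so play ends in period 1 with player 1 receiving $s_1^{*}$ and player 2 receiving $1 - s_1^{*} = \delta(1 - \delta s)$.
```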

35 Strategy and payoff a strategy for player 1: H Player 1 Player 2 H T -1, 1 1, -1 A strategy for a player is a complete plan of actions. It specifies a feasible action for the player in every contingency in which the player might be called on to act. It specifies what the player does at each of her nodes a strategy for player 2: H if player 1 plays H, T if player 1 plays T (written as HT) Player 1’s payoff is -1 and player 2’s payoff is 1 if player 1 plays H and player 2 plays HT

36 Subgame Player 1 Player 2 H T 1, -1 -1, 1 A subgame of a game tree begins at a nonterminal node and includes all the nodes and edges following the nonterminal node A subgame beginning at a nonterminal node x can be obtained as follows: the connected part containing x is the subgame remove the edge connecting x and its predecessor a subgame -1, 1

37 Existence of subgame-perfect Nash equilibrium
Every finite dynamic game of complete and perfect information has a subgame-perfect Nash equilibrium that can be found by backward induction.

38 Backward induction: illustration
Subgame-perfect Nash equilibrium (C, EH). player 1 plays C; player 2 plays E if player 1 plays C, plays H if player 1 plays D. Player 1 C D Player 2 E F 3, 0 2, 1 G H 1, 3 0, 2

39 Multiple subgame-perfect Nash equilibria: illustration
Subgame-perfect Nash equilibrium (D, FHK). player 1 plays D player 2 plays F if player 1 plays C, plays H if player 1 plays D, plays K if player 1 plays E. Player 1 C E D Player 2 F G 1, 0 0, 1 Player 2 H I 2, 1 1, 1 Player 2 J K 1, 3 2, 2

40 Multiple subgame-perfect Nash equilibria
Subgame-perfect Nash equilibrium (E, FHK). player 1 plays E; player 2 plays F if player 1 plays C, plays H if player 1 plays D, plays K if player 1 plays E. Player 1 C E D Player 2 F G 1, 0 0, 1 Player 2 H I 2, 1 1, 1 Player 2 J K 1, 3 2, 2

41 Multiple subgame-perfect Nash equilibria
Subgame-perfect Nash equilibrium (D, FIK). player 1 plays D; player 2 plays F if player 1 plays C, plays I if player 1 plays D, plays K if player 1 plays E. Player 1 C E D Player 2 F G 1, 0 0, 1 Player 2 H I 2, 2 1, 1 Player 2 J K 1, 3 2, 2

42 Stackelberg model of duopoly
A homogeneous product is produced by only two firms: firm 1 and firm 2. The quantities are denoted by q1 and q2, respectively. The timing of this game is as follows: Firm 1 chooses a quantity q1 ≥ 0. Firm 2 observes q1 and then chooses a quantity q2 ≥ 0. The market price is P(Q) = a – Q, where a is a constant and Q = q1 + q2. The cost to firm i of producing quantity qi is Ci(qi) = cqi. Payoff functions: u1(q1, q2) = q1(a – (q1+q2) – c), u2(q1, q2) = q2(a – (q1+q2) – c).

43 Stackelberg model of duopoly
Find the subgame-perfect Nash equilibrium by backward induction. We first solve firm 2's problem for any q1 ≥ 0 to get firm 2's best response to q1; that is, we first solve all the subgames beginning at firm 2. Then we solve firm 1's problem, that is, the subgame beginning at firm 1.

44 Stackelberg model of duopoly
Solve firm 2's problem for any q1 ≥ 0 to get firm 2's best response to q1: Max u2(q1, q2) = q2(a – (q1+q2) – c) subject to 0 ≤ q2 < +∞. FOC: a – 2q2 – q1 – c = 0. Firm 2's best response: R2(q1) = (a – q1 – c)/2 if q1 ≤ a – c, and R2(q1) = 0 if q1 > a – c. Note: Osborne writes b2(q1) instead of R2(q1).

45 Stackelberg model of duopoly
Solve firm 1's problem. Note that firm 1 can also solve firm 2's problem; that is, firm 1 knows firm 2's best response to any q1. Hence, firm 1's problem is Max u1(q1, R2(q1)) = q1(a – (q1 + R2(q1)) – c) subject to 0 ≤ q1 < +∞. That is, Max u1(q1, R2(q1)) = q1(a – q1 – c)/2 subject to 0 ≤ q1 < +∞. FOC: (a – 2q1 – c)/2 = 0, so q1 = (a – c)/2.

46 Stackelberg model of duopoly
Subgame-perfect Nash equilibrium: ( (a – c)/2, R2(q1) ), where R2(q1) = (a – q1 – c)/2 if q1 ≤ a – c, and R2(q1) = 0 if q1 > a – c. That is, firm 1 chooses the quantity (a – c)/2, and firm 2 chooses the quantity R2(q1) whenever firm 1 chooses quantity q1. The backward-induction outcome is ( (a – c)/2, (a – c)/4 ): firm 1 chooses the quantity (a – c)/2 and firm 2 chooses the quantity (a – c)/4.

47 Stackelberg model of duopoly
Firm 1 produces q1 = (a – c)/2 and its profit is q1(a – (q1+q2) – c) = (a–c)²/8. Firm 2 produces q2 = (a – c)/4 and its profit is q2(a – (q1+q2) – c) = (a–c)²/16. The aggregate quantity is 3(a – c)/4.
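A small sympy sketch that reproduces these quantities and profits symbolically; this is verification code of my own, not part of the original slides:

```python
import sympy as sp

a, c, q1, q2 = sp.symbols("a c q1 q2", positive=True)

# Firm 2's best response: maximize q2*(a - q1 - q2 - c) over q2.
R2 = sp.solve(sp.diff(q2 * (a - q1 - q2 - c), q2), q2)[0]       # (a - q1 - c)/2

# Firm 1 anticipates R2 and maximizes q1*(a - q1 - R2 - c) over q1.
q1_star = sp.solve(sp.diff(q1 * (a - q1 - R2 - c), q1), q1)[0]  # (a - c)/2
q2_star = R2.subs(q1, q1_star)                                  # (a - c)/4

profit1 = sp.simplify(q1_star * (a - q1_star - q2_star - c))    # (a - c)**2/8
profit2 = sp.simplify(q2_star * (a - q1_star - q2_star - c))    # (a - c)**2/16
print(q1_star, q2_star, profit1, profit2)
```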

48 Cournot model of duopoly
Firm 1 produces q1 = (a – c)/3 and its profit is q1(a – (q1+q2) – c) = (a–c)²/9. Firm 2 produces q2 = (a – c)/3 and its profit is q2(a – (q1+q2) – c) = (a–c)²/9. The aggregate quantity is 2(a – c)/3.

49 Monopoly Suppose that only one firm, a monopoly, produces the product. The monopoly solves the following problem to determine its quantity qm: Max qm(a – qm – c) subject to 0 ≤ qm < +∞. FOC: a – 2qm – c = 0, so qm = (a – c)/2. The monopoly produces qm = (a – c)/2 and its profit is qm(a – qm – c) = (a–c)²/4.

50 Discussion The first-mover advantage; the curse of knowledge
The first-mover advantage: strategic substitutes and commitment (threat): the Stackelberg model; strategic complements and commitment: the Bertrand model (your task). The curse of knowledge: knowing more is a good thing in a rational one-agent decision problem, but not for a person with self-control problems (cf. "The Last Leaf" by O. Henry).

51 Sequential-move Bertrand model of duopoly (differentiated products)
Two firms: firm 1 and firm 2, producing partial substitutes. Each firm chooses the price for its product; the prices are denoted by p1 and p2, respectively. The timing of this game is as follows. Firm 1 chooses a price p1 ≥ 0. Firm 2 observes p1 and then chooses a price p2 ≥ 0. The quantity that consumers demand from firm 1 is q1(p1, p2) = a – p1 + bp2, and the quantity that consumers demand from firm 2 is q2(p1, p2) = a – p2 + bp1. The cost to firm i of producing quantity qi is Ci(qi) = cqi.

52 Sequential-move Bertrand model of duopoly (differentiated products)
Solve firm 2's problem for any p1 ≥ 0 to get firm 2's best response to p1: Max u2(p1, p2) = (a – p2 + bp1)(p2 – c) subject to 0 ≤ p2 < +∞. FOC: a + c – 2p2 + bp1 = 0, so p2 = (a + c + bp1)/2. Firm 2's best response: R2(p1) = (a + c + bp1)/2.

53 Sequential-move Bertrand model of duopoly (differentiated products)
Solve firm 1's problem. Note that firm 1 can also solve firm 2's problem, so firm 1 knows firm 2's best response to p1. Hence, firm 1's problem is Max u1(p1, R2(p1)) = (a – p1 + bR2(p1))(p1 – c) subject to 0 ≤ p1 < +∞. That is, Max u1(p1, R2(p1)) = (a – p1 + b(a + c + bp1)/2)(p1 – c) subject to 0 ≤ p1 < +∞. FOC: a – p1 + b(a + c + bp1)/2 + (–1 + b²/2)(p1 – c) = 0, so p1 = (a + c + (ab + bc – b²c)/2)/(2 – b²).

54 Sequential-move Bertrand model of duopoly (differentiated products)
Subgame-perfect Nash equilibrium: ( (a + c + (ab + bc – b²c)/2)/(2 – b²), R2(p1) ), where R2(p1) = (a + c + bp1)/2. Firm 1 chooses the price (a + c + (ab + bc – b²c)/2)/(2 – b²), and firm 2 chooses the price R2(p1) whenever firm 1 chooses a price p1.
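A matching sympy check of firm 1's equilibrium price; this is my own verification sketch, and it assumes the interior solution characterized by the first-order condition (which requires b² < 2):

```python
import sympy as sp

a, b, c, p1, p2 = sp.symbols("a b c p1 p2", positive=True)

# Firm 2's best response to p1.
R2 = sp.solve(sp.diff((a - p2 + b * p1) * (p2 - c), p2), p2)[0]   # (a + c + b*p1)/2

# Firm 1 maximizes its profit anticipating R2.
p1_star = sp.solve(sp.diff((a - p1 + b * R2) * (p1 - c), p1), p1)[0]
print(sp.simplify(p1_star))
# algebraically equal to (a + c + (a*b + b*c - b**2*c)/2) / (2 - b**2)
```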

55 Imperfect information: illustration
Each of the two players has a penny. Player 1 first chooses whether to show the Head or the Tail. Then player 2 chooses to show Head or Tail without knowing player 1's choice. Both players know the following rules: if the two pennies match (both heads or both tails), then player 2 wins player 1's penny; otherwise, player 1 wins player 2's penny. The game tree is as before, except that player 2's two decision nodes now lie in a single information set.

56 Information set Gibbons’ definition: An information set for a player is a collection of nodes satisfying: the player has the move at every node in the information set, and when the play of the game reaches a node in the information set, the player with the move does not know which node in the information set has (or has not) been reached. All the nodes in an information set belong to the same player The player must have the same set of feasible actions at each node in the information set.

57 Information set: illustration
Player 1 L R Player 2 L’ R’ 2, 2, 3 3 L” R” 1, 2, 0 3, 1, 2 2, 2, 1 0, 1, 1 1, 1, 2 1, 1, 1 two information sets for player 2 each containing a single node an information set for player 3 containing three nodes an information set for player 3 containing a single node

58 Information set: illustration
All the nodes in an information set belong to the same player Player 1 This is not a correct information set C D Player 2 Player 3 E F G H 2, 1, 3 3, 0, 2 0, 2, 2 1, 3, 1

59 Information set: illustration
The player must have the same set of feasible actions at each node in the information set. An information set cannot contain these two nodes Player 1 C D Player 2 Player 2 E F G H K 2, 1 3, 0 0, 2 1, 1 1, 3

60 Represent a static game as a game tree: illustration
Prisoners' dilemma (another representation of the game from Gibbons; the first number is the payoff for player 1, and the second number is the payoff for player 2). A static game can be represented as a game of imperfect information. Prisoner 1 Prisoner 2 Mum Fink 4, 4 5, 0 0, 5 1, 1

61 Example: mutually assured destruction
Two superpowers, 1 and 2, have engaged in a provocative incident. The timing is as follows. The game starts with superpower 1's choice: either ignore the incident ( I ), resulting in the payoffs (0, 0), or escalate the situation ( E ). Following escalation by superpower 1, superpower 2 can back down ( B ), causing it to lose face and resulting in the payoffs (1, -1), or it can choose to proceed to an atomic confrontation ( A ). Upon this choice, the two superpowers play the following simultaneous-move game: each can either retreat ( R ) or choose doomsday ( D ), in which the world is destroyed. If both choose to retreat then they suffer a small loss and payoffs are (-0.5, -0.5). If either chooses doomsday then the world is destroyed and payoffs are (-K, -K), where K is a very large number.

62 Example: mutually assured destruction (think of the Cuban missile crisis)
1 I E 0, 0 2 B A 1, -1 R D -0.5, -0.5 -K, -K

63 Perfect information and imperfect information
A dynamic game in which every information set contains exactly one node is called a game of perfect information. A dynamic game in which some information sets contain more than one node is called a game of imperfect information.

64 Strategy and payoff a strategy for player 1: H Player 1 A strategy for a player is a complete plan of actions. It specifies a feasible action for the player in every contingency in which the player might be called on to act. It specifies what the player does at each of her information sets H T Player 2 Player 2 H T H T -1, 1 1, -1 1, -1 -1, 1 a strategy for player 2: T Player 1’s payoff is 1 and player 2’s payoff is -1 if player 1 plays H and player 2 plays T

65 Strategy and payoff: illustration
1 I E 0, 0 2 B A 1, -1 R D -0.5, -0.5 -K, -K a strategy for player 1: E, and R if player 2 plays A, written as ER; a strategy for player 2: A if player 1 plays E, and R in the simultaneous-move stage, written as AR

66 Subgame A subgame of a dynamic game tree
begins at a singleton information set (an information set contains a single node), and includes all the nodes and edges following the singleton information set, and does not cut any information set; that is, if a node of an information set belongs to this subgame then all the nodes of the information set also belong to the subgame.

67 Subgame: illustration
1 I E 0, 0 2 B A 1, -1 R D -0.5, -0.5 -K, -K a subgame a subgame Not a subgame

68 Subgame-perfect Nash equilibrium
A Nash equilibrium of a dynamic game is subgame-perfect if the strategies of the Nash equilibrium constitute or induce a Nash equilibrium in every subgame of the game. Subgame-perfect Nash equilibrium is a Nash equilibrium.

69 Find subgame perfect Nash equilibria: backward induction
1 I E 0, 0 2 B A 1, -1 R D -0.5, -0.5 -K, -K a subgame Starting with those smallest subgames Then move backward until the root is reached One subgame-perfect Nash equilibrium ( IR, AR )

70 Find subgame perfect Nash equilibria: backward induction
1 I E 0, 0 2 B A 1, -1 R D -0.5, -0.5 -K, -K a subgame Starting with those smallest subgames Then move backward until the root is reached Another subgame-perfect Nash equilibrium ( ED, BD )

71 Dynamic games of complete information
Perfect information A player knows Who has made What choices when she has an opportunity to make a choice Imperfect information A player may not know exactly Who has made What choices when she has an opportunity to make a choice.

72 Information set: illustration
Player 1 L R Player 2 L’ R’ 2, 2, 3 3 L” R” 1, 2, 0 3, 1, 2 2, 2, 1 0, 1, 1 1, 1, 2 1, 1, 1 two information sets for player 2 each containing a single node an information set for player 3 containing three nodes an information set for player 3 containing a single node

73 Subgame-perfect Nash equilibrium
A Nash equilibrium of a dynamic game is subgame-perfect if the strategies of the Nash equilibrium constitute or induce a Nash equilibrium in every subgame of the game. A subgame of a game tree begins at a singleton information set (an information set containing a single node), and includes all the nodes and edges following the singleton information set, and does not cut any information set; that is, if a node of an information set belongs to this subgame then all the nodes of the information set also belong to the subgame.

74 Find subgame perfect Nash equilibria: backward induction
What is the subgame perfect Nash equilibrium? Player 1 L R Player 2 L’ R’ 2, 2, 0 3 L” R” 1, 2, 3 3, 1, 2 2, 2, 1 0, 0, 1 1, 1, 2 1, 1, 1

75 Bank runs (2.2.B of Gibbons)
Two investors, 1 and 2, have each deposited D with a bank. The bank has invested these deposits in a long-term project. If the bank liquidates its investment before the project matures, a total of 2r can be recovered, where D > r > D/2. If the bank's investment matures, the project will pay out a total of 2R, where R > D. There are two dates at which the investors can make withdrawals from the bank.

76 Bank runs: timing of the game
The timing of this game is as follows Date 1 (before the bank’s investment matures) Two investors play a simultaneous move game If both make withdrawals then each receives r and the game ends If only one makes a withdrawal then she receives D, the other receives 2r-D, and the game ends If neither makes a withdrawal then the project matures and the game continues to Date 2. Date 2 (after the bank’s investment matures) If both make withdrawals then each receives R and the game ends If only one makes a withdrawal then she receives 2R-D, the other receives D, and the game ends If neither makes a withdrawal then the bank returns R to each investor and the game ends.

77 Bank runs: game tree 1 W: withdraw NW: not withdraw W NW 2 2 Date 1 W NW W NW r, r D, 2r–D 2r–D, D 1 Date 2 W NW a subgame 2 2 One subgame-perfect Nash equilibrium ( NW W, NW W ) W NW W NW R, R 2R–D, D D, 2R–D R, R

78 Bank runs: game tree One subgame-perfect Nash equilibrium ( W W, W W )
1 W: withdraw NW: not withdraw W NW 2 2 Date 1 W NW W NW r, r D, 2r–D 2r–D, D 1 Date 2 W NW a subgame 2 2 One subgame-perfect Nash equilibrium ( W W, W W ) W NW W NW R, R 2R–D, D D, 2R–D R, R
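A tiny numeric check of these two equilibria, using hypothetical values D = 100, r = 80, R = 150 that satisfy D > r > D/2 and R > D; the numbers and the code are mine, not from the slides:

```python
D, r, R = 100, 80, 150   # hypothetical values satisfying D > r > D/2 and R > D

# Date-2 subgame: withdrawing is strictly best there whatever the other does,
# so replace the (NW, NW) branch of date 1 with its continuation value (R, R).
payoffs = {
    ("W", "W"):   (r, r),
    ("W", "NW"):  (D, 2 * r - D),
    ("NW", "W"):  (2 * r - D, D),
    ("NW", "NW"): (R, R),
}

# Investor 1's best reply to each date-1 action of investor 2 (the game is symmetric).
for other in ("W", "NW"):
    best = max(("W", "NW"), key=lambda mine: payoffs[(mine, other)][0])
    print(f"best reply to {other}: {best}")
# best reply to W: W, best reply to NW: NW  ->  (W, W) and (NW, NW) are both equilibria,
# matching the two subgame-perfect Nash equilibria shown in the game tree.
```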

79 Tariffs and imperfect international competition (2.2.C of Gibbons)
Two identical countries, 1 and 2, simultaneously choose their tariff rates, denoted t1 and t2, respectively. Firm 1 from country 1 and firm 2 from country 2 produce a homogeneous product for both home consumption and export. After observing the tariff rates chosen by the two countries, firms 1 and 2 simultaneously choose quantities for home consumption and for export, denoted by (h1, e1) and (h2, e2), respectively. The market price in country i is Pi(Qi) = a – Qi, for i = 1, 2, where Q1 = h1 + e2 and Q2 = h2 + e1. Both firms have a constant marginal cost c. Each firm pays the other country's tariff on its exports to that country.
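Since the derivation on the following slides did not come through in this transcript, here is a sympy sketch of the backward-induction computation. It is my own code and assumes, as in Gibbons 2.2.C, that each country's payoff is its consumer surplus Qi²/2 plus its firm's profit plus its tariff revenue:

```python
import sympy as sp

a, c, t1, t2 = sp.symbols("a c t1 t2", positive=True)
h1, e1, h2, e2 = sp.symbols("h1 e1 h2 e2", nonnegative=True)

# Stage 2: each firm's profit is home profit plus export profit net of the other country's tariff.
pi1 = (a - h1 - e2 - c) * h1 + (a - e1 - h2 - c - t2) * e1
pi2 = (a - h2 - e1 - c) * h2 + (a - e2 - h1 - c - t1) * e2

# Solve the firms' first-order conditions simultaneously.
foc = [sp.diff(pi1, h1), sp.diff(pi1, e1), sp.diff(pi2, h2), sp.diff(pi2, e2)]
stage2 = sp.solve(foc, [h1, e1, h2, e2], dict=True)[0]
print(stage2[h1], stage2[e1])   # equal to (a - c + t1)/3 and (a - c - 2*t2)/3

# Stage 1: country 1 maximizes consumer surplus + its firm's profit + tariff revenue, taking t2 as given.
Q1 = stage2[h1] + stage2[e2]
W1 = sp.Rational(1, 2) * Q1**2 + pi1.subs(stage2) + t1 * stage2[e2]
t1_star = sp.solve(sp.diff(W1, t1), t1)[0]
print(sp.simplify(t1_star))     # (a - c)/3
```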

80 Tariffs and imperfect international competition

81 Tariffs and imperfect international competition

82 Backward induction: subgame between the two firms

83 Backward induction: subgame between the two firms

84 Backward induction: whole game

85 Tariffs and imperfect international competition

86 Repeated game A repeated game is a dynamic game of complete information in which a (simultaneous-move) stage game is played at least twice, and the previous plays are observed before the next play. We will examine how players behave in repeated games.

87 Two-stage repeated game
Two-stage prisoners’ dilemma Two players play the following simultaneous move game twice The outcome of the first play is observed before the second play begins The payoff for the entire game is simply the sum of the payoffs from the two stages. That is, the discount factor is 1. Player 2 L2 R2 Player 1 L1 1 , 1 5 , 0 R1 0 , 5 4 , 4

88 Game tree of the two-stage prisoners’ dilemma
1 L1 R1 2 2 L2 R2 L2 R2 1 1 1 1 L1 R1 L1 R1 L1 R1 L1 R1 2 2 2 2 2 2 2 2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2

89 Informal game tree of the two-stage prisoners’ dilemma
1 L1 R1 2 2 L2 R2 L2 R2 1 1 1 1 (1, 1) (5, 0) (0, 5) (4, 4) L1 R1 L1 R1 L1 R1 L1 R1 2 2 2 2 2 2 2 2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4

90 Informal game tree of the two-stage prisoners’ dilemma
1 L1 R1 2 2 L2 R2 L2 R2 1 1 1 1 (2, 2) (6, 1) (1, 6) (5, 5) L1 R1 L1 R1 L1 R1 L1 R1 2 2 2 2 2 2 2 2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4

91 two-stage prisoners’ dilemma
The subgame-perfect Nash equilibrium (L1 L1L1L1L1, L2 L2L2L2L2) Player 1 plays L1 at stage 1, and plays L1 at stage 2 for any outcome of stage 1. Player 2 plays L2 at stage 1, and plays L2 at stage 2 for any outcome of stage 1. Player 2 L2 R2 Player 1 L1 1 , 1 5 , 0 R1 0 , 5 4 , 4

92 Finitely repeated game
A finitely repeated game is a dynamic game of complete information in which a (simultaneous-move) game is played a finite number of times, and the previous plays are observed before the next play. The finitely repeated game has a unique subgame perfect Nash equilibrium if the stage game (the simultaneous-move game) has a unique Nash equilibrium. The Nash equilibrium of the stage game is played in every stage.

93 What happens if the stage game has more than one Nash equilibrium?
Two players play the following simultaneous-move game twice. The outcome of the first play is observed before the second play begins. The payoff for the entire game is simply the sum of the payoffs from the two stages; that is, the discount factor is 1. Question: can we find a subgame-perfect Nash equilibrium in which (M1, M2) is played? Or, can the two players cooperate in a subgame-perfect Nash equilibrium? Stage game (rows: Player 1, columns: Player 2):
           L2        M2        R2
    L1   1 , 1     5 , 0     0 , 0
    M1   0 , 5     4 , 4     0 , 0
    R1   0 , 0     0 , 0     3 , 3

94 Informal game tree
At stage 1, player 1 chooses L1, M1, or R1, and player 2 simultaneously chooses L2, M2, or R2 (player 2's three nodes form one information set). The nine first-stage payoff pairs are (1, 1), (5, 0), (0, 0); (0, 5), (4, 4), (0, 0); (0, 0), (0, 0), (3, 3). After each first-stage outcome, the same stage game is played again at stage 2.

95 Informal game tree and backward induction
Backward induction: in every second-stage subgame a Nash equilibrium of the stage game must be played. Anticipating this, add the second-stage payoffs to each first-stage outcome: (3, 3) is added after the outcome (M1, M2), and (1, 1) is added after every other first-stage outcome.

96 Two-stage repeated game
Subgame-perfect Nash equilibrium: player 1 plays M1 at stage 1, and at stage 2 plays R1 if the first-stage outcome is (M1, M2) and plays L1 otherwise; player 2 plays M2 at stage 1, and at stage 2 plays R2 if the first-stage outcome is (M1, M2) and plays L2 otherwise.
           L2        M2        R2
    L1   1 , 1     5 , 0     0 , 0
    M1   0 , 5     4 , 4     0 , 0
    R1   0 , 0     0 , 0     3 , 3

97 Two-stage repeated game
Subgame-perfect Nash equilibrium: at stage 1, player 1 plays M1 and player 2 plays M2. At stage 2, player 1 plays R1 if the first-stage outcome is (M1, M2) and plays L1 otherwise; player 2 plays R2 if the first-stage outcome is (M1, M2) and plays L2 otherwise. The anticipated second-stage payoffs have been added to the first-stage game:
           L2        M2        R2
    L1   2 , 2     6 , 1     1 , 1
    M1   1 , 6     7 , 7     1 , 1
    R1   1 , 1     1 , 1     4 , 4
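A quick arithmetic check, my own, that (M1, M2) is a Nash equilibrium of this modified first-stage game, using the payoffs above:

```latex
\[
u_1(M_1, M_2) = 4 + 3 = 7 \;\ge\; u_1(L_1, M_2) = 5 + 1 = 6 \;\ge\; u_1(R_1, M_2) = 0 + 1 = 1 ,
\]
and symmetrically for player 2, so neither player gains by deviating at stage 1.
```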

98 An abstract game: generalization of the tariff game (general form)
Four players: 1, 2, 3, 4. The timing of the game is as follows. Stage 1: players 1 and 2 simultaneously choose actions a1 and a2 from feasible action sets A1 and A2, respectively. Stage 2: after observing the outcome (a1, a2) of the first stage, players 3 and 4 simultaneously choose actions a3 and a4 from feasible action sets A3 and A4, respectively. Then the game ends. The payoffs are ui(a1, a2, a3, a4), for i = 1, 2, 3, 4.

99 An abstract game: informal game tree
Stage 1: player 1 chooses a1 from player 1's action set A1, and player 2 chooses a2 from player 2's action set A2. Stage 2: player 3 chooses a3 from player 3's action set A3, and player 4 chooses a4 from player 4's action set A4. The play following any (a1, a2) is a smallest subgame.

100 Backward induction: solve the smallest subgame

101 Backward induction: back to the root
player 1 Player 1’ action set A1 a1 Stage 1 player 2 Player 2’ action set A2 a2 Stage 2

102 Backward induction: back to the root

103 Tariffs and imperfect international competition (2.2.C of Gibbons)
Two identical countries, 1 and 2, simultaneously choose their tariff rates, denoted t1 and t2, respectively. Firm 1 from country 1 and firm 2 from country 2 produce a homogeneous product for both home consumption and export. After observing the tariff rates chosen by the two countries, firms 1 and 2 simultaneously choose quantities for home consumption and for export, denoted by (h1, e1) and (h2, e2), respectively. The market price in country i is Pi(Qi) = a – Qi, for i = 1, 2, where Q1 = h1 + e2 and Q2 = h2 + e1. Both firms have a constant marginal cost c. Each firm pays the other country's tariff on its exports to that country.

104 Tariffs and imperfect international competition
This model fits the abstract game: countries 1 and 2 are players 1 and 2, and firms 1 and 2 are players 3 and 4, respectively. Stage 1: country 1 chooses t1 and country 2 chooses t2. Stage 2: firm 1 and firm 2 choose their quantities, with Q1 = h1 + e2 and P1(Q1) = a – Q1, and Q2 = h2 + e1 and P2(Q2) = a – Q2.

105 Backward induction: subgame between the two firms

106 Backward induction: subgame between the two firms

107 Backward induction: back to the root

108 Tariffs and imperfect international competition

109 Infinitely repeated game
An infinitely repeated game is a dynamic game of complete information in which a (simultaneous-move) game called the stage game is played infinitely often, and the outcomes of all previous plays are observed before the next play. Precisely, the simultaneous-move game is played at stages 1, 2, 3, ..., t-1, t, t+1, ... The outcomes of all previous t-1 stages are observed before the play at the tth stage. Each player discounts her payoffs by a factor δ, where 0 < δ < 1. (Detour: there are two ways to model people's patience.) A player's payoff in the repeated game is the present value of the player's payoffs from the stage games.

110 Present value
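The formula on this slide did not come through in the transcript; the standard definition it refers to is: for a stream of stage payoffs π1, π2, π3, ... and discount factor δ,

```latex
\[
\text{PV} \;=\; \pi_1 + \delta \pi_2 + \delta^{2} \pi_3 + \cdots \;=\; \sum_{t=1}^{\infty} \delta^{\,t-1} \pi_t ,
\qquad\text{in particular}\qquad
\pi + \delta\pi + \delta^{2}\pi + \cdots \;=\; \frac{\pi}{1-\delta} .
\]
```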

111 Infinitely repeated game: example
The following simultaneous-move game is repeated infinitely. The outcomes of all previous plays are observed before the next play begins. Each player's payoff for the infinitely repeated game is the present value of the payoffs received at all stages. Question: what is the subgame-perfect Nash equilibrium? Player 2 L2 R2 Player 1 L1 1 , 1 5 , 0 R1 0 , 5 4 , 4

112 Example: subgame Every subgame of an infinitely repeated game is identical to the game as a whole. 1 L1 R1 2 2 L2 R2 L2 R2 (1, 1) (5, 0) (0, 5) (4, 4)

113 Example: strategy A strategy for a player is a complete plan. It can depend on the history of the play. A strategy for player i: play Li at every stage (or at each of her information sets) Another strategy called a trigger strategy for player i: play Ri at stage 1, and at the tth stage, if the outcome of each of all t-1 previous stages is (R1, R2) then play Ri; otherwise, play Li.

114 Example: subgame perfect Nash equilibrium
Check whether there is a subgame perfect Nash equilibrium in which player i plays Li at every stage (or at each of her information sets). This can be done by the following two steps. Step 1: check whether the combination of strategies is a Nash equilibrium of the infinitely repeated game. If player 1 plays L1 at every stage, the best response for player 2 is to play L2 at every stage. If player 2 plays L2 at every stage, the best response for player 1 is to play L1 at every stage. Hence, it is a Nash equilibrium of the infinitely repeated game.

115 Example: subgame perfect Nash equilibrium cont’d
Step 2: check whether the Nash equilibrium of the infinitely repeated game induces a Nash equilibrium in every subgame of the infinitely repeated game. Recall that every subgame of the infinitely repeated game is identical to the infinitely repeated game as a whole Obviously, it induces a Nash equilibrium in every subgame Hence, it is a subgame perfect Nash equilibrium.

116 Example: subgame 1 L1 R1 2 2 L2 R2 L2 R2 1 1 1 1 (1, 1) (5, 0) (0, 5)
(4, 4) L1 R1 L1 R1 L1 R1 L1 R1 2 2 2 2 2 2 2 2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 TO INFINITY

117 Trigger strategy: step 1
Stage 1: (R1, R2) Stage 2: (R1, R2) Stage t-1: (R1, R2) Stage t: (R1, L2) Stage t+1: (L1, L2) Stage t+2: (L1, L2)

118 Trigger strategy: step 1 cont’d
Stage 1: (R1, R2) Stage 2: (R1, R2) Stage t-1: (R1, R2) Stage t: (R1, L2) Stage t+1: (L1, L2) Stage t+2: (L1, L2)
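The payoff comparison that completes step 1 (the calculation on the original slide is missing from the transcript; with the stage payoffs above it is the standard one): following the trigger strategy from stage t on gives player 2 a payoff of 4 in every stage, while deviating at stage t gives 5 once and 1 in every later stage. Measured from stage t, the trigger strategy is a best response when

```latex
\[
\frac{4}{1-\delta} \;\ge\; 5 + \frac{\delta}{1-\delta}
\quad\Longleftrightarrow\quad
4 \;\ge\; 5(1-\delta) + \delta
\quad\Longleftrightarrow\quad
\delta \;\ge\; \tfrac{1}{4} .
\]
```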

119 Trigger strategy: step 2
Step 2: check whether the Nash equilibrium induces a Nash equilibrium in every subgame of the infinitely repeated game. Recall that every subgame of the infinitely repeated game is identical to the infinitely repeated game as a whole Stage 1: (R1, R2) Stage 2: (R1, R2) Stage t-1: (R1, R2) Stage t: (R1, R2) Stage t+1: (R1, R2) Stage t+2: (R1, R2)

120 Step 2 cont’d: subgame 1 L1 R1 2 2 L2 R2 L2 R2 1 1 1 1 (1, 1) (5, 0)
(0, 5) (4, 4) L1 R1 L1 R1 L1 R1 L1 R1 2 2 2 2 2 2 2 2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 TO INFINITY

121 Trigger strategy: step 2 cont’d
We have two classes of subgames: (i) subgames following a history in which the stage outcomes are all (R1, R2), and (ii) subgames following a history in which at least one stage outcome is not (R1, R2). In the first class of subgames, the Nash equilibrium of the infinitely repeated game induces a Nash equilibrium in which each player still plays the trigger strategy. In the second class of subgames, it induces a Nash equilibrium in which (L1, L2) is played forever.

122 Every subgame of an infinitely repeated game is identical to the game as a whole
1 L1 R1 2 2 L2 R2 L2 R2 1 1 1 1 (1, 1) (5, 0) (0, 5) (4, 4) L1 R1 L1 R1 L1 R1 L1 R1 2 2 2 2 2 2 2 2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 L2 R2 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 1 1 5 0 0 5 4 4 TO INFINITY

123 Trigger strategy: step 1
Stage:                1        2      ...   t-1      t        t+1      t+2    ...
P2 follows trigger:  (R1,R2)  (R1,R2) ...  (R1,R2)  (R1,R2)  (R1,R2)  (R1,R2) ...
P2 deviates at t:    (R1,R2)  (R1,R2) ...  (R1,R2)  (R1,L2)  (L1,L2)  (L1,L2) ...

124 Trigger strategy: step 1 cont’d
Stage:                1        2      ...   t-1      t        t+1      t+2    ...
P2 follows trigger:  (R1,R2)  (R1,R2) ...  (R1,R2)  (R1,R2)  (R1,R2)  (R1,R2) ...
P2 deviates at t:    (R1,R2)  (R1,R2) ...  (R1,R2)  (R1,L2)  (L1,L2)  (L1,L2) ...

125 Trigger strategy: step 1 cont’d
Stage:                1        2      ...   t-1      t        t+1      t+2    ...
P2 follows trigger:  (R1,R2)  (R1,R2) ...  (R1,R2)  (R1,R2)  (R1,R2)  (R1,R2) ...
P2 deviates at t:    (R1,R2)  (R1,R2) ...  (R1,R2)  (R1,L2)  (L1,L2)  (L1,L2) ...

126 Trigger strategy: step 2
Step 2: check whether the Nash equilibrium induces a Nash equilibrium in every subgame of the infinitely repeated game. Recall that every subgame of the infinitely repeated game is identical to the infinitely repeated game as a whole

127 Discussion The existence of tough strategies
The existence of bad guys (Ray); the folk theorem; multiple equilibria; social norms and institutions; coordination on a certain equilibrium; justice as an institution ("focal point") (Myerson).

