Sequential Rationality


1 Sequential Rationality
Chapter 5 Sequential Rationality

2 Example 1: should we offer the position?
A candidate for a position in the computer science department has written 11 research papers, and the department wants to decide whether to make her a job offer based on the quality of those papers. There are 11 committee members, each given one paper to read in order to make a recommendation. A priori, each paper is "good" or "bad" with equal probability, and the department will make an offer if a majority of the papers are "good". Each committee member values a correct committee recommendation at $1000, but values the time needed to read the paper at $400.

3 Example 1: should we offer the position?
A simple mechanism asks all the committee members, simultaneously, for their recommendations. The strategy tuple in which all agents read their papers and report truthfully is not an equilibrium. Consider the perspective of agent 1: assuming all other agents have replied (truthfully or not), agent 1 can alter the outcome only if the other 10 replies split evenly between "good" and "bad", which happens with probability of approximately 0.25. Therefore, by guessing, and assuming all other agents compute, he expects to gain 0.25 × 0.5 × 1000 + 0.75 × 1000 = $875: 25% of the time he is pivotal and has a 50% chance of guessing right, and 75% of the time the others already determine the correct decision. By computing, an agent gains 1000 − 400 = $600, so player 1 has no incentive to compute (and the same holds for all 11 agents).
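A minimal sketch of this expected-value comparison (the 0.25 pivot probability and the dollar amounts are taken from the slide):

```python
# Committee example: guess vs. read, simultaneous mechanism (values from the slide).
P_PIVOT = 0.25        # probability the other 10 recommendations split 5-5
VALUE_CORRECT = 1000  # value of a correct committee decision
COST_READ = 400       # cost of reading the paper

# Guessing: when pivotal, the guess is right half the time;
# when not pivotal, the others already determine the (correct) outcome.
payoff_guess = P_PIVOT * 0.5 * VALUE_CORRECT + (1 - P_PIVOT) * VALUE_CORRECT  # = 875

# Reading: the decision is always correct, but the reading cost is paid.
payoff_read = VALUE_CORRECT - COST_READ  # = 600

print(payoff_guess, payoff_read)  # 875.0 600 -> guessing beats reading
```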

4 Example 1: should we offer the position?
This elicitation mechanism also fails if only agent 1 has the above cost and all other agents have zero costs (the same analysis holds for agent 1). If, however, agents 2, 3, …, 11 are asked first for their recommendations, and agent 1 is approached only if there is a tie among their ten recommendations, then every agent has enough incentive to invest the effort: whenever agent 1 is asked he is pivotal, so computing yields $600 compared to an expected $500 (0.5 × 1000) for guessing! This illustrates the power of sequential mechanisms and motivates the careful study of sequential elicitation mechanisms, i.e., the construction of mechanisms that approach agents in a well-designed sequence. The goal is to change the way the "game" is played so there is no temptation to shirk. This is the positive outcome of the study: we aren't trying to teach you to be a shirker, but to devise mechanisms that disincentivize shirking.

5 Subgame perfection ensures that the players continue to play rationally as the game progresses.
We have a problem with imperfect information, however, as we might not have subgames at all. Sequential equilibrium: a pair (σ, µ) where σ is a behavior strategy profile and µ is a system of beliefs consistent with σ, such that no player can gain by deviating from σ at any of his information sets.

6 5.1 Market for Lemons A two-player game in which one player has more information than the other. Like my example of forgetting which cards had been played, but in this case one player simply knows more. Selling cars: ph is the seller's reservation price (the least he is willing to accept) for a high-quality car, and pl is his reservation price for a low-quality car.

7 Similarly, the buyer has reservation prices:
H is the highest price he is willing to pay for a high-quality car; L is the highest price he is willing to pay for a low-quality car. Look at the sequential game when a seller puts a car on the market. Nature reveals the quality of the car to the seller, but the buyer doesn't know it. The seller must decide whether to ask ph or pl. The buyer observes the price but not the quality.

8 [Game tree (figure): Nature chooses Good (G) or Bad (B) and reveals it to the seller (player 1), who asks ph or pl; the buyer (player 2) then chooses buy or not. Payoffs (seller, buyer) are (p−ph, H−p) or (p−pl, L−p) when the buyer buys, and (0,0) otherwise; the buyer's decision nodes are labeled X, Y, M, N.]

9 Note, there are no subgames.
What should player 2 do? If the price is close to ph, he could assume the car is of high quality; if it is close to pl, he could assume it is of low quality. The seller knows this. Both would agree to split the profit in half, so the gains would be equal. BUT… why not just charge a high price for every car! So now the buyer doesn't know what to do. He could assume the car is equally likely to be of high or low quality (a uniform distribution).

10 His expected value is ½(H−p) + ½(L−p). Since he needs a positive expected value, ½(H−p) + ½(L−p) > 0, i.e., p ≤ ½(H+L). If offered a high price, he believes he is at node X with probability ½ and at node Y with probability ½. But if he gets a very low offer, he knows he isn't at X or Y, but at N.

11 Consider a case by case analysis
Case 1: ½(H+L) ≥ ph. Both types of cars could really be for sale, so the buyer will buy at this price. Case 2: ½(H+L) < ph. No high-quality car can be sold at that price, so you are at node N. The buyer insists on a price somewhere between pl and L, since he knows he is looking at a low-quality car. Equilibrium is driven by consistent beliefs.

12

13 Let's take another look. Utilities = (seller, buyer)
Lots of prices could be offered – not just ph or pl; a line in the tree represents the infinite number of nodes (one per price p). As before, assume the car is equally likely to be Good or Bad. [Game tree (figure): Nature chooses Good or Bad; the seller names a price p; the buyer at node X (Good) or Y (Bad) chooses buy, giving (p−ph, .5(H+L)−p) on the Good branch and (p−pl, .5(H+L)−p) on the Bad branch, or not, giving (0,0).]

14 Case 1: ½(H+L) > ph
u1(p,s) = p − ph if s(X) = buy; p − pl if s(Y) = buy; 0 if not buy.
u2(p,s) = ½(H+L) − p if s = buy; 0 if not buy.
Optimal strategy: p* = ½(H+L); s* = buy if ½(H+L) ≥ p, not buy if ½(H+L) < p.

15 Case 2: ph > ½(H+L)
The seller knows that the buyer will never buy a car at a price greater than ½(H+L), so only low-quality cars are on the market.
u1(p,s) = p − pl if buy; 0 if not buy.
u2(p,s) = L − p if buy.
p* = L; s* = buy if L > p, not buy if L < p.
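A small sketch of the two cases; the helper function and the numeric prices are illustrative assumptions, not from the slides:

```python
# Lemons market with seller reservation prices p_h, p_l and buyer values H, L;
# the buyer believes good and bad cars are equally likely. Illustrative numbers.
def market_outcome(p_h, p_l, H, L):
    avg = 0.5 * (H + L)                 # buyer's expected value of a random car
    if avg >= p_h:
        # Case 1: both qualities can trade at the pooled price p* = avg.
        return ("both qualities sold", avg)
    # Case 2: no high-quality seller accepts a price <= avg,
    # so only lemons trade, at a price up to L.
    return ("only low quality sold", L)

print(market_outcome(p_h=7000, p_l=3000, H=10000, L=6000))  # ('both qualities sold', 8000.0)
print(market_outcome(p_h=9000, p_l=3000, H=10000, L=6000))  # ('only low quality sold', 6000)
```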

16 Notice – there are no subgames; the role of beliefs is critical.

17 Let’s look again using different values… Example of adverse selection: the market for ‘lemons’
You want to buy a used car. There are two types of cars – good cars and lemons.
Good: WTP (willing to purchase) $10,000; WTS (willing to sell) $8,000; 40% of cars available.
Lemon: WTP $6,000; WTS $3,000; 60% of cars available.

18 Example of adverse selection: the market for ‘lemons’
Suppose the seller always knows what type of car they are selling. What happens in the market depends on whether buyers can also tell the type of car.
Good: WTP $10,000; WTS $8,000; 40% of cars available.
Lemon: WTP $6,000; WTS $3,000; 60% of cars available.

19 The market for ‘lemons’ – case 1: symmetric information (Both can tell a lemon)
Good: WTP $10,000 (H); WTS $8,000 (ph); 40% of cars available; predicted market price between $8,000 and $10,000.
Lemon: WTP $6,000 (L); WTS $3,000 (pl); 60% of cars available; predicted market price between $3,000 and $6,000.

20 The market for ‘lemons’: case 2 – asymmetric information
Suppose the buyer cannot tell a good car from a lemon before they buy. Will you buy if the price is at least $8,000? NO! At this price, every seller will want to sell. But this means that if you buy a car it has a 60% chance of being a lemon (worth $6,000 to you) and a 40% chance of being good (worth $10,000 to you). So the expected value of a car to you is 0.6 × $6,000 + 0.4 × $10,000 = $7,600, and you will not pay $8,000 or more for a car with an expected value of $7,600!

21 The market for ‘lemons’: case 2 – asymmetric information
Suppose the buyer cannot tell a good car from a lemon before they buy. Will you buy if the price is between $6,000 and $8,000? NO! At this price, only the sellers of 'lemons' will want to sell. Every car being offered is a lemon, and you will not pay more than $6,000 for it. So we expect the market to settle at a price between $3,000 and $6,000, with only lemons sold.
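A quick check of both price ranges, using the willingness-to-pay/sell numbers from the table (the helper function itself is an illustrative sketch):

```python
# Used-car market check (numbers from the slides): will the buyer pay price p?
GOOD_WTP, GOOD_WTS, P_GOOD = 10_000, 8_000, 0.4
LEMON_WTP, LEMON_WTS, P_LEMON = 6_000, 3_000, 0.6

def buyer_accepts(p):
    # Which sellers are willing to sell at price p?
    good_on_market = p >= GOOD_WTS
    lemon_on_market = p >= LEMON_WTS
    if good_on_market and lemon_on_market:
        expected_value = P_GOOD * GOOD_WTP + P_LEMON * LEMON_WTP  # = 7,600
    elif lemon_on_market:
        expected_value = LEMON_WTP                                # only lemons offered
    else:
        return False
    return expected_value >= p

print(buyer_accepts(8_000))  # False: expected value 7,600 < 8,000
print(buyer_accepts(7_000))  # False: only lemons offered, worth 6,000
print(buyer_accepts(5_000))  # True: only lemons offered, worth 6,000 >= 5,000
```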

22 The market for ‘lemons’ – case 2: asymmetric information
Good: WTP $10,000; WTS $8,000; 40% of cars available; predicted outcome: no sales – complete market failure.
Lemon: WTP $6,000; WTS $3,000; 60% of cars available; predicted market price between $3,000 and $6,000 (buyers assume all cars are lemons).

23 Adverse selection So: the problem of adverse selection can lead to the complete collapse of the market for good cars A similar problem faces insurance companies and the market for ‘loanable funds’. Adverse selection can also lead to ‘statistical discrimination’.

24 Responses to adverse selection
Note that the problem of adverse selection harms the uninformed parties and some of the informed parties. In the lemons example, buyers could not buy good cars, and sellers of good cars could not get a reasonable price for them. Uninformed buyers may try to overcome the information asymmetry by searching for more information. Informed sellers may try to overcome the problem through warranties and signaling.

25 Example: job-market model of bilateral uncertainty – uncertainty on both sides.
Workers are uncertain about what job descriptions advertised by firms really mean Firms are uncertain about the qualifications of workers before they are interviewed. Both types of uncertainty can be resolved, but both processes are costly. Intermediaries (recruiters) can perform the job matching but only at the cost of transforming the firm’s objectives between the parties.

26 Each branch has an associated probability
[Figure: game tree in which each Nature branch – (good person, fits), (bad person, fits), (good person, doesn't fit), (bad person, doesn't fit) – has an associated probability. The firm's information sets: the firm knows whether the candidate fits, but can't tell whether the person is good. The employee's information sets: the employee knows whether he is good or bad, but can't tell whether he is a fit.]

27

28 Information and market failure Can you answer these questions?
Why does a new car lose about one quarter of its value when you drive it away from the dealer? (You can't convince others it is a good car.) Why do manufacturers of products that almost never break down still offer warranties? (They need to convince others the product is good.) Why is car insurance more expensive for all younger drivers? (They pay more of the actual cost of coverage.) Why do insurance companies make you pay the first part of any claim (the deductible)? (So there is no temptation to let flood water sit so you get all new furniture, and no temptation to be careless.)

29 Information and market failure
There are two basic types of information asymmetry that can lead to market failure Adverse selection: one party to a deal has private information that affects the value of the deal Moral Hazard: one party to a deal has to take an action that cannot be perfectly monitored by the other party. The action affects the value of the deal. Called a moral hazard as there may be an incentive to do something immoral (dishonest) – like shirk at your job.

30 Moral hazard can be present any time two parties come into agreement with one another. Each party in a contract may have the opportunity to gain from acting contrary to the principles laid out by the agreement. For example, when a salesperson is paid a flat salary with no commissions for his/her sales, there is a danger that the salesperson may not try very hard to sell because the wage stays the same regardless of how much or how little the owner benefits from the salesperson's work.

Sometimes people do better than break even when misfortune strikes, and this possibility has greatly interested economists. If the misfortune costs a person $1000, but insurance will pay $2000, the insured person has no incentive to avoid the misfortune and may act to bring it on. For example, if you have full replacement cost on your house insurance, you may be happy when grape juice ruins your 10-year-old carpet. Obviously, deliberately throwing grape juice to get insurance reimbursement is illegal. This tendency of insurance to change behavior falls under the label moral hazard.

32 Asymmetric information is a feature of many markets
Adverse Selection Asymmetric information is a feature of many markets – some market participants have information that others do not have. 1) The hiring process – a worker might know more about his ability than the firm does; the idea is that there are several types of workers, some more productive than others. 2) Insurance – insurance companies do not observe individual characteristics such as driving skills. 3) Project financing – entrepreneurs might have more information about projects than potential lenders. 4) Used cars – sellers know more about the car's quality than buyers. Adverse selection is often a feature of these settings: it arises when an informed individual's decisions depend on his privately held information in a way that adversely affects uninformed market participants.

33 Warranties Let's return to our car market example.
Remember that buyers are willing to pay up to $10,000 for a good car but only up to $6,000 for a lemon. The problem is – which is which? Suppose that good cars never break down, while a lemon breaks down often (that is why it is a lemon) – say 80% of the time. Fixing a broken-down car is expensive – about $5,000. Suppose now that a car seller offers you the following deal: "Buy the car for $9,000. If it breaks down, the seller will not only fix the car for you but also pay you $3,000 in cash as compensation." Should you buy the car? But have you ever been offered that good a warranty?

34 5.2 Beliefs Beliefs are important in finding a solution to a game without subgames, and they must be consistent with the way the game is played. For example, in the Star Trek/Game Theory book gift example, you needed to know the likelihood a gift would be offered given the type of book. The game structure is key to deciding which beliefs you need to formulate. System of beliefs µ: assigns a probability distribution to the nodes in each information set ("which node do I think I am at?"). Behavior strategy σ for a player: the probability with which he takes each edge (a mixed strategy – "which edge will I take?"). Completely mixed strategy: at every node, every choice is taken with positive probability.

35 Player 1 plays L and L' with .1
Example 5.2 (figure): Player 1 plays a with probability .1 (and b with .9); Player 2 plays T with .1 (and B with .9); Player 1 plays L and L' with .1 (and R, R' with .9). The probabilities at the leaves (.001, .009, .081, .729, …) are the products of the move probabilities along each path; the leaf payoffs are (4,2), (0,3), (1,7), (3,0), (2,6), (2,4), (3,5), (4,3).

36 Note that, in general, the probability you are at E is .01.
The conditional probability is p(E|X) = .1, since once you know you are at X, the probability of reaching E is greater. In the book example, the probability of giving a gift could be different depending on what the book is: a person might be MUCH more likely to give Star Wars as a gift than Game Theory.
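A quick numeric check of these probabilities, using the move probabilities read off the Example 5.2 figure (a played with .1, T played with .1):

```python
# Example 5.2 probabilities: player 1 plays a with .1, player 2 plays T with .1 at X.
p_a, p_T = 0.1, 0.1

p_X = p_a            # node X is reached whenever player 1 plays a
p_E = p_a * p_T      # node E requires a followed by T
p_E_given_X = p_E / p_X

print(round(p_E, 3), p_E_given_X)  # 0.01 0.1
```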

37 For example, if I1={E,F} and I2={G,H}
5.3 Bayes Consistent A system of beliefs µ is said to be Bayes consistent with respect to a behavior strategy profile σ if µ can be generated by σ. In other words, your beliefs about probabilities make the behavior profile reasonable. For example, if I1 = {E, F} and I2 = {G, H}: if µ(E|I1) = µ(G|I2) = 0 and µ(F|I1) = µ(H|I2) = 1, then µ(X) = 0, µ(Y) = 1.

38 O-> Y -> H -> (4,3) is a good plan.
Example 5.2 This means the plan O → Y → H → (4,3) is a good one. [Figure: the same game tree as Example 5.2, shown without the leaf probabilities.]

39 5.4 Expected Payoff We compute expected payoff in the regular way – multiply the payoff by the probability you get it.

40 Example 5.6 (figure): Player 2's information set I2 = {N, F}, with beliefs .2 on N and .8 on F; player 1 mixes .6/.4 at the nodes below. Expected payoff: E(I2) = .2·E(N) + .8·E(F). The value player 1 uses for E(X) uses HIS beliefs and his strategy. Leaf payoffs shown in the figure include (4,2), (0,3), (1,7), (2,6), (3,0), (2,4), (3,5), (4,3).
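A sketch of this calculation; the continuation payoffs E(N) and E(F) below are read off the (partially garbled) figure, with player 1 mixing .6/.4, so treat the exact numbers as assumptions:

```python
# Expected payoff at information set I2 = {N, F} with beliefs (.2, .8).
# Player 2's continuation payoffs at N and F come from player 1 mixing .6/.4
# over the leaves below each node (values as read off the figure).
E_N = 0.6 * 2 + 0.4 * 3   # player 2's payoff below node N -> 2.4
E_F = 0.6 * 7 + 0.4 * 6   # player 2's payoff below node F -> 6.6

mu_N, mu_F = 0.2, 0.8     # beliefs over the information set
E_I2 = mu_N * E_N + mu_F * E_F
print(E_I2)               # 5.76
```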

41 5.5 Sequential Equilibrium
Sequential equilibrium: a pair (σ, µ) where σ is a strategy profile and µ is a system of beliefs consistent with σ, such that no player can gain by deviating from σ. Note that what I believe may not be exactly the case, as I am not privy to the other person's strategy. My strategy must be consistent with what I believe to be true.

42 Kreps–Wilson: every finite sequential game with imperfect information (and perfect recall) has a sequential equilibrium. I interpret that to mean that, given my understanding, this is the "stable" thing to do. It might not be the right thing, but given what I know, I can do no better.

43 Example 5.10 (figure): the same tree with a candidate assessment written on it – player 1 plays L with probability 1 (R with 0), a with 0 and b with 1, L' with 1/6 and R' with 5/6; player 2 plays T and B with probability ½ each. We can show this is an equilibrium by checking whether any player can gain with another strategy.

44 Warranties A GOOD SELLER CAN MAKE SUCH AN OFFER
[Figure: the good seller chooses Offer deal or Don't offer deal (0,0); the buyer chooses Buy or Don't buy (0,0). If the buyer buys: no breakdown (100% for a good car) gives the seller $1,000; a breakdown under warranty (zero chance) would give the seller −$7,000. The buyer's payoffs are left as $?.]

45 Warranties But the buyer can infer this – the seller of a lemon would not offer the deal. [Figure: the lemon seller chooses Offer deal or Don't offer deal (0,0); the buyer chooses Buy or Don't buy (0,0). If the buyer buys: no breakdown (20%) gives the seller $6,000; a breakdown (80%) gives the seller −$2,000. The buyer's payoffs are left as $?.]

46 Warranties Don’t buy (0,0) Buyer Offer deal Breakdown (80%)
Lemon seller Expected payoff for lemon seller if buyer accepts offer is -$400. So if the lemon seller will not offer deal! Buy Don’toffer deal No breakdown (20%) (0,0)
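The lemon seller's expected payoff from the warranty deal, computed from the numbers on the earlier warranty slide (sell at $9,000, reservation price $3,000, $5,000 repair plus $3,000 compensation on breakdown, 80% breakdown chance):

```python
# Lemon seller's expected payoff if the buyer accepts the warranty deal.
P_BREAK = 0.8
profit_no_break = 9_000 - 3_000                 # = 6,000
profit_break = 9_000 - 3_000 - 5_000 - 3_000    # = -2,000

expected = P_BREAK * profit_break + (1 - P_BREAK) * profit_no_break
print(expected)  # -400.0 -> the lemon seller prefers not to offer the deal
```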

47 Warranties So the warranty ‘works’
Only the good sellers will offer the warranty. Buyers can buy the car with the warranty, sure that they are buying a good car (and will never need to use the warranty). But it is not worthwhile for the sellers of 'lemons' to offer the warranty – their cars break down, and the warranty costs more than the increased price they receive for their cars.

48 Signaling The warranty is an example of a ‘signal’ that the ‘good’ seller can send to the buyer. In our example here the signal had no cost to the ‘good’ seller, but was expensive for the bad seller. This is not generally the case. “This is a Good Car” sign is ineffective: every type of seller will use it, and it will provide no new info For a signal to ‘work’ it requires three features It must be less costly to the ‘good’ type than to the ‘bad’ type. Even given the cost, it must be better for the ‘good’ type to distinguish themselves than be mistaken for a ‘bad’ type. The cost to the ‘bad’ type must be high enough so that they do not want to pretend to be a ‘good’ type

49 Other examples: the early career rat race
Suppose there are ordinary and talented workers Your boss can observe the quality of your work but not how difficult you found the task If everyone spends the same time, the talented workers will be recognised and gain promotion So the ordinary workers work harder to try and ‘appear’ to be talented So to distinguish themselves, the talented workers also have to work hard

50 Other examples: the early career rat race
What will be the outcome? Could get a separating equilibrium. This is where the signal ‘works’. The talented workers work too hard but are recognised. The ordinary workers just give up. Or could get a pooling equilibrium. In this situation, the ordinary workers work hard and talented workers work normally. The boss interprets ‘ordinary’ performance as a sure sign of lack of talent. But the boss cannot infer anything from exceptional work – because everyone is doing it!

51 Signaling Signaling can overcome information problems
But it can be costly to the 'good' type who is trying to distinguish themselves. Choosing the wrong signal just invites copying by the 'bad' type: despite the cost of the signal there is no gain in information. So it is important to carefully choose your signaling strategy: it needs to be low cost to you and high cost to others, so that it will be 'believed' and cannot be 'jammed' by the bad types.

52 Moral hazard While Adverse selection is about private information, moral hazard is where one party can take a hidden action that affects the value of a transaction For example: Precaution with car and household insurance Employees Outsourcing of services Regulation

53 Example of moral hazard
You can work hard or slack. Working hard adds $1,000 per week to firm value but has a personal cost of $100. Slacking has no personal cost but only adds $300 to firm value. If you do not work for this firm you can get a job elsewhere for $400 per week. [Game tree: the firm hires you at $w per week or doesn't hire you ($0, $400); if hired, you work hard ($1000 − w, $w − 100) or slack ($300 − w, $w). Payoffs are (firm, you).]

54 Example of moral hazard
If the firm just offers you a fixed wage, then by rollback it is better for you to slack. [Game tree as before: hire at $w – work hard ($1000 − w, $w − 100) or slack ($300 − w, $w); don't hire ($0, $400).]

55 Example of moral hazard
Knowing this, the firm will not want to pay you a wage of more than $300. But given such a low wage you are better off working elsewhere. Notice that this is inefficient: if you could commit to work hard, there is $500 of extra value that could be shared ($1,000 − $400 (outside wage) − $100 (cost of hard work)). [Game tree as before.]

56 Example of moral hazard
This problem can be overcome if the firm can either observe your effort OR observe your individual added value perfectly. In this case, the firm can offer you a contingent wage – wh if you work hard and ws if you slack. [Game tree: hire at $wh/$ws – work hard ($1000 − wh, $wh − 100) or slack ($300 − ws, $ws); don't hire ($0, $400).]

57 Example of moral hazard
How does the firm calculate the best wh and ws to offer so that you will work hard? The firm faces an 'individual rationality' constraint: you must receive at least as much by working hard for the firm as you would by not working for the firm. If you do not work for the firm you receive $400 per week; if you work hard for the firm you receive wh − $100. So you will only accept a wage contract that leads you to work hard if overall you receive at least $400 (your outside offer). This means that wh > $500.

58 Example of moral hazard
The firm also faces an 'incentive compatibility' constraint: you must prefer working hard for the firm to slacking. If you work hard for the firm you receive wh − $100; if you slack you receive ws. This means the firm must set wh > ws + $100.

59 Example of moral hazard
So the firm faces two constraints when setting your (contingent) wage contract: wh > $500 and wh > ws + $100. What is the profit-maximising wage contract for the firm? wh = $501, ws < $401.
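A short sketch of the contract calculation under the two constraints (the "+1"/"−1" tie-breaking is just one way of satisfying the strict inequalities):

```python
# Contingent-wage contract (numbers from the slides).
OUTSIDE_OPTION = 400   # weekly wage elsewhere
COST_HARD = 100        # personal cost of working hard
VALUE_HARD = 1_000     # value added when working hard

# Individual rationality: w_h - COST_HARD >= OUTSIDE_OPTION  ->  w_h >= 500
# Incentive compatibility: w_h - COST_HARD >= w_s            ->  w_h >= w_s + 100
w_h = OUTSIDE_OPTION + COST_HARD + 1   # pay just above the binding IR constraint -> 501
w_s = w_h - COST_HARD - 1              # any slack wage below w_h - 100 works -> 400

firm_profit = VALUE_HARD - w_h
print(w_h, w_s, firm_profit)  # 501 400 499
```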

60 Example of moral hazard
Notice that now it is better for you to work hard than to slack. Hire you at $501 if work hard and $100 if slack Work hard ($499, $401) You Firm Slack Don’t hire you ($200, $100) ($0, $400)

61 Example of moral hazard
Notice that now it is better for you to work hard than to slack. So the firm now knows that if you accept the contract then you will work hard. Hire you at $501 if work hard and $100 if slack Work hard ($499, $401) You Firm Slack Don’t hire you ($200, $100) ($0, $400)

62 Example of moral hazard
And the firm makes as much profit as possible given that it must beat your outside option. Hire you at $501 if work hard and $100 if slack ($499, $401) Firm Don’t hire you ($0, $400)

63 Example of moral hazard
Notice that the firm could also achieve an outcome where you work hard by paying you an ‘output based’ contract (e.g. the firm pays you 50% of your value added so you receive $500 if work hard and $150 if slack). More generally, moral hazard analysis forms the basis of incentive contracting Issues of observability Issues of risk sharing

64 Signaling game set-up Imagine there are two types of people in the world, but that type is private information known only to the individual: people who are good at business but not at art (business people), and people who are very talented at art but not business (artists). Employers want to hire business people, not artists. Businesses pay very well, so artists would also like to have business jobs. How should businesses find business people?

65 How do businesses find business people?
One solution is to ask people, “are you a business person or an artist?” What will business people say? What will artists say? Is there a signal business people can send? What if wearing a suit signals that one is a business person? What will artists do? Is there a credible signal business people can send that businesses will believe?

66 Credible signals A signal is credible if it is costly enough such that artists will not want to invest in signaling One potential credible signal is going to business school Interestingly, the signal works even if business school does not affect business people’s productivity

67 Signaling game set-up ½ the people in the world are business people and ½ are artists. Business people are worth $5 to businesses, while artists are worth $4 (but they won't get paid this much, as then there would be no profit for the business). There are only enough business jobs for ½ the people in the world. Firms pay $3 to anyone they hire, regardless of type (since type is unobservable). Business school is free; however, it costs $1 of effort for business people and $4 of effort for artists (artists dislike business school). Interesting assumption: school does not change the productive capacity of the workers.

68 Signaling game: the starting point is at the center (Nature). Utility: (individual, company).
[Figure: Nature picks the type (50% business person, 50% artist). Business people – B-School & hired: (2,2); No B-School & hired: (3,2); B-School & not hired: (−1,0); No B-School & not hired: (0,0). Artists – B-School & hired: (−1,1); No B-School & hired: (3,1); B-School & not hired: (−4,0); No B-School & not hired: (0,0). The company pays $3 but gets $5 of value from a business person, so its profit is $2; (−1,0) means the individual is paid nothing and loses the $1 invested in education.]

69 Equilibrium Business people go to business school, artists do not, and firms only hire business-school graduates. This is the only equilibrium in this game: no one can do better by changing their strategy given what the other players do. Note that if everyone goes to school, the expected value for artists is −2½ (a ½ chance of being hired: ½(−1) + ½(−4)). If firms hire an artist, he gets the same utility with or without an education; if they don't hire him, he loses with an education, but the same is true of a business person. The difference is that companies remove the hiring option for those without a business degree. Note the role of business school in this game: business people don't learn anything useful in business school in this set-up. However, business school is still a socially useful institution, since it allows business people to send credible signals to potential employers.
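A sketch that checks the equilibrium payoffs; the hiring rule (only business-school graduates are hired) and the dollar values come from the slides, while the helper function itself is illustrative:

```python
# Business-school signaling game; payoffs returned as (individual, company).
def payoff(worker_type, goes_to_school, firm_hires_grads_only=True):
    hired = goes_to_school if firm_hires_grads_only else True
    wage = 3 if hired else 0
    effort = (1 if worker_type == "business" else 4) if goes_to_school else 0
    worth = 5 if worker_type == "business" else 4
    firm = (worth - 3) if hired else 0
    return wage - effort, firm

print(payoff("business", True))    # (2, 2): business person goes to school
print(payoff("business", False))   # (0, 0): deviating to no school is worse
print(payoff("artist", True))      # (-1, 1): school hurts the artist
print(payoff("artist", False))     # (0, 0): so the artist stays out
```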

70 Salary and bonus contracts can compensate for information asymmetry
Incentive Schemes Salary and bonus contracts can compensate for information asymmetry. Often, this is unreasonable: employees are unwilling to assume risks, contracts must be perfectly balanced, and it may be better to settle for low effort. Today: the flip side – are bonuses going to good employees or just lucky ones? Signaling & screening.

71 Leakages IBM Variable Pay Bonus of 10% of annual earnings if
"annual objectives are met in key areas." Internal memo: "We observe, across divisions, performance in line with expectations through about March. Performance declines consistently in later months."

72 Leakages If the bonus is tied to a metric, the metric gets gamed:
Increases over last year → reduce this year's growth.
Output / quantity → reduce quality.
Average customer satisfaction → reduce the number of service calls.
How would you game teacher evaluations? If the score is tied to the percentage of happy students, stop the unhappy ones from replying. If the score depends only on how happy they are, give everyone A's and require no work. Homework helps students learn, but gives poorer evaluations and takes a ton of instructor time.

73 Example: Incentives: Market Conditions
Patent races over high-profit pharmaceuticals worth up to $2 billion. Resource devotion ranges from twenty to sixty hours per employee per week (low effort to high effort), with a staff of fifty per project. Project time frame: 6 months.

74 Independent labs contracted Average cost of labor: $16/hour
Market Conditions Independent labs are contracted. Average cost of labor: $16/hour. Chance of success: minimally 1%, maximally 2.5%.

75 Cost Calculations Extra cost to the lab of high effort: 40 hours/week/employee × 25 weeks × $16/hour = $16,000/employee.

76 To entice high effort Costs: Benefits: Incentive compatibility:
Costs: $16,000 per employee. Benefits: 1½% extra chance of success (2½% − 1%). Incentive compatibility: 0.015 × bonus > $16,000, so bonus > $1.1M.

77 To entice high effort Bonus per employee must be greater than $1.1 million. With fifty employees, the total bonus must be greater than $55 million. Final conclusion: a $75 million bonus "to be safe".

78 Extra Profit if it Works
Value of the extra chance of success: 0.015 × $2B = $30M. Expected cost of the bonus: 0.025 × $75M ≈ $2M. Benefit of the plan: $30M − $2M = $28M.

79 Problem Ignoring individual incentives Quick & Dirty Check:
The analysis assumed that the entire group works hard or does not. Quick & dirty check: if fifty people working hard increases the chance of success by 1.5%, each person, on average, increases the chance by only 1.5%/50 = 0.03%. Each person earns a bonus of $75M/50 = $1.5M.

80 Conclusion A person's value of the extra effort: $1.5M × 0.03% = $450. A person's cost of the extra effort: $16,000. NOT EVEN CLOSE!
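The group-versus-individual comparison in code (numbers from the preceding slides):

```python
# Patent-race bonus: group-level vs. individual-level incentives.
STAFF = 50
EXTRA_SUCCESS = 0.015        # 2.5% - 1% extra chance if everyone works hard
COST_PER_EMPLOYEE = 16_000   # extra cost of high effort per employee
TOTAL_BONUS = 75_000_000     # bonus paid (split) on success

# Group view: bonus per employee needed so that effort pays for the group.
bonus_needed = COST_PER_EMPLOYEE / EXTRA_SUCCESS           # ~ $1.07M per employee
print(round(bonus_needed))                                  # 1066667

# Individual view: one person's extra effort moves success by only 1.5%/50.
individual_effect = EXTRA_SUCCESS / STAFF                   # 0.0003
individual_bonus = TOTAL_BONUS / STAFF                      # $1.5M
individual_value = individual_effect * individual_bonus     # $450
print(individual_value, COST_PER_EMPLOYEE)                  # 450.0 vs 16000
```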

81 Signaling example: Auto Insurance
Half of the population are high-risk drivers and half are low-risk drivers. High-risk drivers: 90% chance of an accident. Low-risk drivers: 10% chance of an accident. Accidents cost $10,000.

82 An insurance company can offer a single insurance contract
Pooling An insurance company can offer a single insurance contract. Expected cost of accidents: (½ × 0.9 + ½ × 0.1) × $10,000 = $5,000. Offer a $5,000-premium contract. The company is trying to "pool" high- and low-risk drivers. Will it succeed?

83 Self-Selection
High-risk drivers: don't buy insurance: (0.9)(−$10,000) = −$9K; buy insurance: −$5K (the premium). High-risk drivers buy insurance. Low-risk drivers: don't buy insurance: (0.1)(−$10,000) = −$1K, which beats paying the $5K premium. Low-risk drivers do not buy insurance. Only high-risk drivers "self-select" into the contract to buy insurance.

84 Expected cost of accidents in population
Adverse Selection Expected cost of accidents in the population: (½ × 0.9 + ½ × 0.1) × $10,000 = $5,000. Expected cost among the insured: 0.9 × $10,000 = $9,000. Insurance company loss: $4,000 per policy. The company cannot ignore this "adverse selection": if it is only going to attract high-risk drivers, it might as well charge more ($9,000).
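A sketch of the pooling premium and the resulting self-selection, using the slide's numbers (risk-neutral drivers assumed):

```python
# Auto insurance pooling vs. self-selection.
ACCIDENT_COST = 10_000
P_HIGH, P_LOW = 0.9, 0.1
PREMIUM = 0.5 * (P_HIGH + P_LOW) * ACCIDENT_COST   # pooled premium = $5,000

def buys(p_accident, premium):
    # A risk-neutral driver buys if paying the premium beats the expected loss.
    return -premium >= -p_accident * ACCIDENT_COST

print(PREMIUM)                           # 5000.0
print(buys(P_HIGH, PREMIUM))             # True: high-risk drivers buy
print(buys(P_LOW, PREMIUM))              # False: low-risk drivers stay out

# Expected cost per insured once only high-risk drivers buy:
print(P_HIGH * ACCIDENT_COST - PREMIUM)  # 4000.0 loss per policy for the insurer
```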

85 Screening Offer two contracts, so that the customers self-select One contract offers full insurance with a premium of $9,000 Another contract offers a deductible, and a lower premium

86 Want to know an unobservable trait
How to Screen Want to know an unobservable trait. Identify an action that is more costly for "bad" types than for "good" types. Ask the person (are you "good"?), but attach a cost to the answer: high enough that "bad" types don't lie, low enough that "good" types are still willing to bear it.

87 Screening Education as a signaling and screening device Is there value to education? Good types: less hardship cost

88 How long should an MBA program be? Two types of workers:
Example: MBAs How long should an MBA program be? Two types of workers: high and low quality. NPV of salary: high-quality worker $1.6M; low-quality worker $1.0M. Disutility per MBA class: high-quality worker $5,000; low-quality worker $20,000.

89 “High” Quality Workers
If I get an MBA: I signal I am a high-quality worker and receive $1,600,000 − $5,000·N. If I don't get an MBA: I signal I am a low-quality worker and receive $1,000,000. So get the MBA if 1,600,000 − 5,000N > 1,000,000, i.e., 600,000 > 5,000N, i.e., N < 120.

90 “Low” Quality Workers 1,600,000 – 20,000 N < 1,000,000
If I get an MBA: I signal I am a high-quality worker and receive $1,600,000 − $20,000·N. If I don't get an MBA: I signal I am a low-quality worker and receive $1,000,000. The low-quality worker skips the MBA if 1,600,000 − 20,000N < 1,000,000, i.e., 600,000 < 20,000N, i.e., N > 30.
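Putting the two conditions together, a program length N strictly between 30 and 120 classes separates the two types; a small sketch (the candidate lengths checked are arbitrary):

```python
# MBA length N (classes) that separates high- from low-quality workers (slide numbers).
SALARY_HIGH, SALARY_LOW = 1_600_000, 1_000_000
COST_HIGH, COST_LOW = 5_000, 20_000   # disutility per class by worker type

def separates(n):
    high_prefers_mba = SALARY_HIGH - COST_HIGH * n > SALARY_LOW    # i.e. n < 120
    low_prefers_no_mba = SALARY_HIGH - COST_LOW * n < SALARY_LOW   # i.e. n > 30
    return high_prefers_mba and low_prefers_no_mba

print({n: separates(n) for n in (20, 30, 60, 119, 120)})
# {20: False, 30: False, 60: True, 119: True, 120: False}
```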

91 Hiding from Signals Suppose students can take a course pass/fail or for a letter grade. An A student should signal her abilities by taking the course for a letter grade – separating herself from the population of B's and C's. This leaves B's and C's taking the course pass/fail. Now, B students have an incentive to take the course for a letter grade to separate themselves from the C's. Ultimately, only C students take the course pass/fail. If employers are rational, they will know how to read pass/fail grades: C students cannot hide!

92

93 Bayesian games A game has incomplete information when players know different things about payoffs (or other relevant information) Remember – information is imperfect if players know different things about (prior) moves. Applications include: competition between firms with private information about costs and technology, auctions where each potential buyer may attach a different valuation to the item, negotiations with uncertainty about the other party’s preferences or objectives, etc. – in short, any real economic situation! The basic ‘trick’ that lets us handle such games is to reduce incomplete to imperfect information by adding a chance player whose move chooses the payoffs.

94 With a simple but brilliant "trick", Harsanyi allows us to turn any game of incomplete information into a much more manageable game of imperfect information – i.e., a game (especially an extensive-form game) in which at least one player, at the time of making a decision, does not know what moves or choices were made previously by other players.

95 Suppose that two players, Callum and Rowena, are playing the following game:

96 But whereas Rowena has complete information, i.e. knows every entry in each cell of the above matrix, Callum knows only his own payoffs, i.e.,

97 At this point Callum follows Harsanyi's suggestion: first, consider all of Rowena's possible payoffs that he considers likely (i.e., that have positive probability). For simplicity, let's suppose that, according to Callum, Rowena can be of only two types. If she's of type 1, then her payoffs are as in the following matrix:

98 If instead she’s of type 2, then the payoffs matrix looks like this:

99 Second, Callum attaches to each type a subjective probability
Second, Callum attaches to each type a subjective probability. For example, Callum may assume that Rowena is Type 1 with a probability of 2/3. Third, Callum formulates an extensive‐form game with imperfect information whereby at the initial node Nature chooses Rowena’s type with Callum’s probabilities and informs Rowena (but not Callum) of her true type, i.e., Callum draws the following tree:

100

101 Fourth, Callum computes the normal form of the above game and locates any Nash equilibria. These equilibria are called Bayesian Nash equilibria: a Bayesian Nash equilibrium is a NE of the game where Nature moves first, chooses players' types from a distribution with probability p(t), and reveals ti to player i. Rowena still doesn't know Callum's type, but she can compute a strategy based on what she knows.
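Since the payoff matrices on these slides are figures, the numbers below are purely hypothetical; the sketch only illustrates how Callum averages over Rowena's types with his prior (2/3 type 1, 1/3 type 2) when evaluating his own actions:

```python
# Harsanyi trick with hypothetical payoffs: Callum does not know Rowena's type;
# he averages his payoff over types using his prior.
PRIOR = {"type1": 2 / 3, "type2": 1 / 3}

# Hypothetical payoffs to Callum, indexed by [type][Rowena's row][Callum's column].
CALLUM_PAYOFF = {
    "type1": {"U": {"L": 2, "R": 0}, "D": {"L": 0, "R": 1}},
    "type2": {"U": {"L": 0, "R": 3}, "D": {"L": 1, "R": 0}},
}

def expected_payoff(callum_action, rowena_strategy):
    """Callum's expected payoff when each Rowena type plays her own row."""
    return sum(PRIOR[t] * CALLUM_PAYOFF[t][rowena_strategy[t]][callum_action]
               for t in PRIOR)

# Example: type 1 plays U, type 2 plays D; Callum picks the better column.
strategy = {"type1": "U", "type2": "D"}
values = {a: expected_payoff(a, strategy) for a in ("L", "R")}
print(values, "best:", max(values, key=values.get))
```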

102 Another example Friend or foe: player 2 does not know whether player 1 is friendly or not: Which dilemma: player 2 does not know which of the following Prisoners’ Dilemmas is being played:

103 Bayesian Game Example Two players, two types. If they are the same type, they play Prisoners’ Dilemma. If they are of different types, they play Battle of the Sexes. The joint distribution of types is given by p, and the resulting game matrix is shown below.

104 Best replies
Compute best replies for each type of each player: e.g., if type 1 of player 1 plays Top, compare his expected payoff with the payoff from playing Bottom instead; simplifying and comparing gives a cut-off condition (the second strategy is dominant above the cut-off, the first strategy is used below it). [Figure: cut-off formulas.] To use the cut-offs, apply the appropriate prior probability. Example: independent, equally likely types, all p's = ¼; the 'cut-off' values are 1 for each strategy of player 1 and 2/3 for each strategy of player 2, so each type of player 1 plays B unless the opposite type of player 2 plays L with probability 1. The only pure-strategy equilibria are (0,0,0,0), (1,0,1,0), (0,1,0,1), (1,1,1,1).

105 Sequential rationality
Due to the presence of information sets, we need to redefine rationality by moving from a strategy b to an assessment (b, m) consisting of a strategy b and a system of beliefs m. Informal definitions: (b, m) is sequentially rational if, for every information set Ii, bi(Ii) is a best reply given the beliefs m. (b, m) is strategically consistent if m is derived from b via Bayes' rule wherever applicable. (b, m) is structurally consistent if m at every information set is derived from some strategy b* via Bayes' rule. (b, m) satisfies common beliefs if all players share the same belief about the cause of every unexpected event. The outcome of an assessment conditional on an information set I is a distribution O(b, m|I) over the set Z of terminal histories; it assumes independence (multiply probabilities), which is supported by perfect recall.

