
1 Algorithmic Game Theory Uri Feige Robi Krauthgamer Moni Naor Lecture 10: Mechanism Design
Lecturer: Moni Naor

2 Announcements In January the course will meet 13:00-15:00
The meetings are on Jan 7th, 14th, and 21st, 2009

3 Recap: social choice Social choice: collectively choosing among outcomes, or aggregating preferences. Arrow's Impossibility Theorem. Gibbard-Satterthwaite Theorem: for more than 2 alternatives, there is no social choice function f that is simultaneously: Onto (for every candidate, there are some votes that make that candidate win), Nondictatorial, and Incentive compatible.

4 Proof of Arrow’s Theorem: Find the Dictator
Claim: For any a, b ∈ A, consider the sequence of profiles in which the voters switch one at a time from b ≻ a to a ≻ b: in profile 0 all voters rank b ≻ a, in profile i voters 1, …, i rank a ≻ b and voters i+1, …, n rank b ≻ a, and in profile n all voters rank a ≻ b. Hybrid argument: the social outcome must flip from b to a at some profile i*, i.e., exactly where voter i* changed his opinion. Claim: this i* is the dictator!

5 Single-peaked preferences [Black 48]
Suppose the alternatives are ordered on a line, and every voter prefers alternatives that are closer to her most preferred alternative (her peak). Choose the median voter's peak as the winner. Strategy-proof! (Figure: voters v1, …, v5 with peaks at alternatives a1, …, a5 on a line.)
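The median rule is simple enough to state in a few lines of code. Below is a minimal sketch of my own (not from the lecture), assuming the alternatives are indexed 0, 1, 2, … along the line and each voter reports only her peak; the example values are made up.

```python
# Minimal sketch of the median-peak rule for single-peaked preferences.
# Alternatives are assumed to lie on a line and are identified by their index.

def median_peak_winner(peaks):
    """Return the winning alternative: the median of the reported peaks."""
    ordered = sorted(peaks)
    return ordered[len(ordered) // 2]  # lower median when the number of voters is even

# Hypothetical example: five voters with peaks at alternatives 0..4.
print(median_peak_winner([0, 4, 2, 1, 3]))  # -> 2
# Misreporting cannot help: moving your reported peak across the median
# only pushes the chosen alternative further from your true peak.
```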

6 What about Probabilistic Voting Schemes?
Electing the Doge in the Republic of Venice: a sequence of electoral colleges, where at each stage a sub-college is selected at random (lottery), and the sub-college elects the next electoral college by approval voting. The final college elects the Doge. (Figure: alternating lottery and approval stages.)

7 Probabilistic Voting Schemes
We can do something "non-trivial" to get truthful voting: elect a random leader/dictator, or choose a pair of alternatives at random and see which one is preferred by the majority. But this is all we can do: any truthful scheme has to be a combination of such rules.

8 Range Voting Each voter rates each candidate with a score in a certain range (say 0-99). The scores for each candidate are summed up, and the candidate with the highest total score wins. Can be seen as a generalization of approval voting, which uses the range 0-1. A voter has no incentive to rate a candidate lower than a candidate they like less.
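A minimal tallying sketch (my own illustration, with made-up ballots and candidate names): each ballot maps candidates to scores in the allowed range, and the winner is the candidate with the highest total.

```python
# Range-voting tally: illustrative sketch, not from the lecture.
from collections import defaultdict

def range_voting_winner(ballots):
    """ballots: list of dicts mapping candidate -> score (e.g. 0..99)."""
    totals = defaultdict(int)
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] += score
    return max(totals, key=totals.get)

# Hypothetical ballots over candidates "a", "b", "c".
ballots = [{"a": 90, "b": 40, "c": 0},
           {"a": 10, "b": 99, "c": 50},
           {"a": 70, "b": 60, "c": 5}]
print(range_voting_winner(ballots))  # -> "b" (totals: a=170, b=199, c=55)
```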

9 Mechanism Design: Mechanisms
Recall: we want to implement a social choice function, which requires knowing the agents' preferences, and they may not reveal them to us truthfully. Example: one item to allocate, and we want to give it to the participant who values it the most. If we just ask participants to tell us their preferences, they may lie. Remedy: use payments; the result is then also a payment vector p = (p1, p2, …, pn).

10 Quasi linear preferences
The setting: a set of alternatives A (who wins the auction, which path is chosen, who is matched to whom). Each participant i has a valuation function vi: A → R. We can pay the participants: the utility of choice a with payment pi is vi(a) + pi. These are quasi-linear preferences.

11 Example: Vickrey’s Second Price Auction
Despite private information and selfish behavior, we can "reliably" compute the max function! A single item is for sale. Each player i has a scalar value wi, his willingness to pay. If he wins the item and has to pay p, his utility is wi − p; if someone else wins the item, his utility is 0. Second price auction: the winner is the bidder with the highest declared value wi, and pays the second highest bid p* = max_{j≠i} wj. Theorem (Vickrey): truthful bidding is dominant. For every w1, w2, …, wn and every wi′: let ui be i's utility if he bids wi and ui′ his utility if he bids wi′; then ui ≥ ui′.
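As a quick illustration (my own sketch, with hypothetical bidder names and bids), here is the second-price rule: the highest bidder wins and pays the second-highest bid.

```python
# Second-price (Vickrey) auction for a single item: illustrative sketch.

def second_price_auction(bids):
    """bids: dict mapping bidder -> declared value. Returns (winner, price)."""
    winner = max(bids, key=bids.get)
    price = max(b for i, b in bids.items() if i != winner)  # second-highest bid
    return winner, price

bids = {"alice": 30, "bob": 25, "carol": 12}   # hypothetical declared values
print(second_price_auction(bids))               # -> ('alice', 25)
# The winner's payment does not depend on her own bid, which is why
# overstating or understating her value cannot increase her utility.
```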

12 Direct Revelation Mechanism
A direct revelation mechanism is a social choice function f: V1 × V2 × … × Vn → A together with payment functions pi: V1 × V2 × … × Vn → R, where participant i pays pi(v1, v2, …, vn). A mechanism (f, p1, p2, …, pn) is incentive compatible if for every v = (v1, v2, …, vn), every i, and every vi′ ∈ Vi: if a = f(vi, v−i) and a′ = f(vi′, v−i), then vi(a) − pi(vi, v−i) ≥ vi(a′) − pi(vi′, v−i). Notation: v = (v1, v2, …, vn), v−i = (v1, …, vi−1, vi+1, …, vn). In words: each participant prefers telling the truth about vi.
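For finite valuation spaces, incentive compatibility can be checked exhaustively. Here is a small sketch of my own (not from the lecture): it checks the inequality above for every profile, player, and possible misreport; f and p are assumed to be plain Python functions on valuation profiles, and valuations are dicts from alternatives to values.

```python
# Brute-force incentive-compatibility check over finite valuation spaces
# (illustrative sketch). A valuation is a dict: alternative -> value.
from itertools import product

def is_incentive_compatible(valuation_spaces, f, p):
    """valuation_spaces: one list of possible valuations per player.
    f(profile) -> chosen alternative; p(i, profile) -> payment of player i."""
    n = len(valuation_spaces)
    for profile in product(*valuation_spaces):
        for i in range(n):
            v_i = profile[i]
            truthful = v_i[f(profile)] - p(i, profile)
            for lie in valuation_spaces[i]:
                deviated = profile[:i] + (lie,) + profile[i + 1:]
                if v_i[f(deviated)] - p(i, deviated) > truthful + 1e-9:
                    return False  # player i gains by reporting `lie`
    return True
```

Encoding, say, the second-price auction above over a small discrete set of values and running this check confirms that it passes.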

13 Vickrey-Clarke-Groves Mechanism
A mechanism (f, p1, p2, …, pn) is called Vickrey-Clarke-Groves (VCG) if: (1) f(v1, v2, …, vn) maximizes Σi vi(a) over a ∈ A, i.e., it maximizes social welfare; and (2) there are functions h1, h2, …, hn, where hi does not depend on vi (it is a function of v−i only), such that pi(v1, v2, …, vn) = hi(v−i) − Σ_{j≠i} vj(f(v1, v2, …, vn)). The second term depends only on the chosen alternative. Notation: v = (v1, v2, …, vn), v−i = (v1, …, vi−1, vi+1, …, vn).

14 Example: Second Price Auction
Recall: the alternatives are A = {i wins | i ∈ I}, f assigns the item to one participant, and vi(j wins) = 0 if j ≠ i while vi(i wins) = wi. Then f(v1, v2, …, vn) = the i such that wi = maxj wj, and hi(v−i) = maxj(w1, …, wi−1, wi+1, …, wn). The payment is pi(v) = hi(v−i) − Σ_{j≠i} vj(f(v1, v2, …, vn)). If i is the winner, pi(v) = hi(v−i) = max_{j≠i} wj; for every loser j ≠ i, pj(v) = wi − wi = 0.

15 VCG is Incentive Compatible
Theorem: every VCG mechanism (f, p1, p2, …, pn) is incentive compatible. Proof: fix i, v−i, vi, and vi′. Let a = f(vi, v−i) and a′ = f(vi′, v−i). We have to show vi(a) − pi(vi, v−i) ≥ vi(a′) − pi(vi′, v−i). The utility of i when declaring vi is vi(a) + Σ_{j≠i} vj(a) − hi(v−i), and when declaring vi′ it is vi(a′) + Σ_{j≠i} vj(a′) − hi(v−i). Since a maximizes the social welfare Σj vj(·) over A, we have vi(a) + Σ_{j≠i} vj(a) ≥ vi(a′) + Σ_{j≠i} vj(a′); subtracting hi(v−i) from both sides completes the proof.

16 Clarke Pivot Rule What is the "right" h?
Desirable properties: Individual rationality: participants always get non-negative utility, vi(f(v1, v2, …, vn)) − pi(v1, v2, …, vn) ≥ 0. No positive transfers: no participant is ever paid money, pi(v1, v2, …, vn) ≥ 0. Clarke pivot rule: choose hi(v−i) = max_{b∈A} Σ_{j≠i} vj(b). The payment of i when a = f(v1, v2, …, vn) is then pi(v1, v2, …, vn) = max_{b∈A} Σ_{j≠i} vj(b) − Σ_{j≠i} vj(a): i pays an amount corresponding to the total "damage" he causes the other players, the difference between the others' social welfare when he does not participate and their social welfare when he does.
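The definitions above can be turned into a tiny generic implementation over an explicit, finite set of alternatives. This is a sketch of my own (valuations as dicts, example numbers invented), not code from the course:

```python
# Generic VCG mechanism with Clarke pivot payments over a finite set of
# alternatives (illustrative sketch). Each valuation is a dict: alternative -> value.

def vcg_clarke(alternatives, valuations):
    """valuations: list of dicts, one per player. Returns (chosen, payments)."""
    def welfare(alt, players):
        return sum(valuations[j][alt] for j in players)

    everyone = range(len(valuations))
    chosen = max(alternatives, key=lambda alt: welfare(alt, everyone))
    payments = []
    for i in everyone:
        others = [j for j in everyone if j != i]
        best_without_i = max(welfare(alt, others) for alt in alternatives)  # h_i(v_-i)
        payments.append(best_without_i - welfare(chosen, others))
    return chosen, payments

# Single-item auction as a special case: alternative k means "player k wins".
vals = [{0: 30, 1: 0, 2: 0}, {0: 0, 1: 25, 2: 0}, {0: 0, 1: 0, 2: 12}]
print(vcg_clarke([0, 1, 2], vals))  # -> (0, [25, 0, 0]): the second-price outcome
```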

17 Rationality of Clarke Pivot Rule
Theorem: every VCG mechanism with Clarke pivot payments makes no positive transfers; if vi(a) ≥ 0 for all a, it is also individually rational. Proof: let a = f(v1, v2, …, vn), which maximizes the social welfare Σj vj(·) over A, and let b ∈ A maximize Σ_{j≠i} vj(b). Utility of i: vi(a) + Σ_{j≠i} vj(a) − Σ_{j≠i} vj(b) ≥ Σj vj(a) − Σj vj(b) ≥ 0 (the first inequality uses vi(b) ≥ 0, the second the choice of a). Payment of i: Σ_{j≠i} vj(b) − Σ_{j≠i} vj(a) ≥ 0 from the choice of b.

18 Examples: Second Price Auction
Single item: hi(v−i) = maxj(w1, w2, …, wi−1, wi+1, …, wn) = max_{b∈A} Σ_{j≠i} vj(b), so the Clarke pivot payment is exactly the second price. Multiunit auction: k identical items are to be sold, at most one per bidder, with A = {S wins | S ⊆ I, |S| = k}, vi(S) = 0 if i ∉ S and vi(S) = wi if i ∈ S. Allocate the units to the top k bidders; each winner pays the (k+1)-st highest bid. Claim: this payment equals the Clarke pivot payment max_{S′ ⊆ I∖{i}, |S′| = k} Σ_{j≠i} vj(S′) − Σ_{j≠i} vj(S).

19 Generalized Second Price Auctions
Same multiunit setting: k identical items, A = {S wins | S ⊆ I, |S| = k}, vi(S) = 0 if i ∉ S and vi(S) = wi if i ∈ S. Allocate the units to the top k bidders, but now the j-th highest bidder pays the (j+1)-st highest bid. This rule is common in web advertising. Claim: it is not incentive compatible; see the numeric check below.
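A small numeric sketch (values invented for illustration) showing a profitable misreport under this generalized second-price rule:

```python
# Generalized second-price (GSP) with k identical items: the j-th highest
# bidder pays the (j+1)-st highest bid. Illustrative counterexample to
# incentive compatibility (assumes more than k bidders).

def gsp_utilities(true_values, bids, k):
    """Return each bidder's utility under the GSP payment rule."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    sorted_bids = sorted(bids, reverse=True)
    utils = [0.0] * len(bids)
    for rank in range(k):                      # winners are the top-k bidders
        i = order[rank]
        utils[i] = true_values[i] - sorted_bids[rank + 1]
    return utils

values = [10, 8, 6]                 # true values of three bidders, k = 2 items
print(gsp_utilities(values, [10, 8, 6], 2))   # truthful: bidder 0 gets 10 - 8 = 2
print(gsp_utilities(values, [7, 8, 6], 2))    # bidder 0 shades to 7: gets 10 - 6 = 4
# Bidder 0 strictly gains by under-reporting, so GSP is not incentive compatible.
```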

20 Examples: Public Project
Want to build a bridge: the cost is C (if built), and the value to individual i is vi. The alternatives are A = {build, do not build}, and we want to build iff Σi vi ≥ C. Player i, with vi ≥ 0, pays only if he is pivotal: Σ_{j≠i} vj < C but Σj vj ≥ C, in which case he pays pi = C − Σ_{j≠i} vj. In general Σi pi < C (with equality only when Σj vj = C): the payments do not cover the project's cost, so a subsidy is necessary! A worked example appears below.
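A worked numeric sketch (numbers invented) of the pivot payments for the bridge example:

```python
# Clarke pivot payments for the public-project example (illustrative numbers).

def bridge_payments(values, cost):
    """Return (build?, payments). A player pays only if he is pivotal."""
    total = sum(values)
    build = total >= cost
    payments = []
    for v in values:
        others = total - v
        pivotal = build and others < cost      # without this player, no bridge
        payments.append(cost - others if pivotal else 0)
    return build, payments

print(bridge_payments([60, 30, 20], cost=100))
# -> (True, [50, 20, 10]); payments sum to 80 < 100, so a subsidy is needed.
```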

21 Buying a (Short) Path in a Graph
A directed graph G = (V, E), where each edge e is "owned" by a different player and has cost ce. We want to construct a path from source s to destination t. How do we solicit the real costs ce? The set A of alternatives is all s-t paths. Player e's valuation is 0 if e is not on the chosen path and −ce if it is. Maximizing social welfare means finding the shortest s-t path: min over paths of Σ_{e∈path} ce. A VCG mechanism that pays 0 to edges not on the chosen path p pays each e0 ∈ p the amount Σ_{e∈p′} ce − Σ_{e∈p∖{e0}} ce, where p′ is the shortest s-t path that does not use e0.
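A sketch of my own (assuming the networkx package is available and that s and t remain connected after removing any single edge of the chosen path) computing these edge payments on a made-up instance:

```python
# VCG payments for buying a shortest s-t path (illustrative sketch).
# Assumes networkx is installed and s-t stay connected after removing
# any single edge of the chosen path.
import networkx as nx

def path_payments(G, s, t):
    path = nx.shortest_path(G, s, t, weight="cost")
    edges = list(zip(path, path[1:]))
    total = sum(G[u][v]["cost"] for u, v in edges)
    payments = {}
    for u, v in edges:
        c = G[u][v]["cost"]
        H = G.copy()
        H.remove_edge(u, v)
        detour = nx.shortest_path_length(H, s, t, weight="cost")  # best path avoiding (u, v)
        payments[(u, v)] = detour - (total - c)   # >= c: each edge is paid at least its cost
    return edges, payments

G = nx.DiGraph()
for u, v, c in [("s", "a", 1), ("a", "t", 1), ("s", "t", 3)]:   # hypothetical costs
    G.add_edge(u, v, cost=c)
print(path_payments(G, "s", "t"))
# Chosen path s-a-t (total cost 2); each of its edges is paid 3 - 1 = 2.
```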

22 Clarke mechanism is not perfect
It requires payments and quasilinear utility functions. In general, money needs to flow away from the system: strong budget balance (payments summing to 0) is impossible in general [Green & Laffont 77]. It is vulnerable to collusion. It maximizes the sum of the players' utilities (social welfare, not counting payments), but sometimes the center is not interested in maximizing social welfare: e.g., the center may want to maximize revenue.

23 Games with Incomplete Information
A game is defined by having, for every player i ∈ I: a set of actions Xi; a set of types Ti, where the value ti ∈ Ti is the private information i knows; and a utility function ui: Ti × X1 × X2 × … × Xn → R, where ui(ti, x1, x2, …, xn) is the utility obtained by i if his private information is ti and the profile of actions taken by all players is (x1, x2, …, xn). Player i chooses his action knowing ti but not the other players' types.

24 …Games with Incomplete Information
A strategy for player i ∈ I is si: Ti → Xi. A strategy si is (weakly) dominant if for all ti ∈ Ti, si(ti) is a dominant strategy in the full-information game defined by the types: for all x = (x1, …, xi−1, x′i, xi+1, …, xn), i.e., any alternative play, we have ui(ti, si(ti), x−i) ≥ ui(ti, x).

25 Games and Mechanisms
A mechanism is given by: types T1, T2, …, Tn; actions X1, X2, …, Xn; an alternative set A with an outcome function a: X1 × X2 × … × Xn → A; players' valuation functions vi: Ti × A → R; and payment functions pi: X1 × X2 × … × Xn → R. The utility of player i is quasi-linear: ui(ti, x1, x2, …, xn) = vi(ti, a(x1, x2, …, xn)) − pi(x1, x2, …, xn). A mechanism implements a social choice function f: T1 × T2 × … × Tn → A in dominant strategies if for some dominant strategies s1, s2, …, sn of the induced game, for all t1, t2, …, tn: f(t1, t2, …, tn) = a(s1(t1), s2(t2), …, sn(tn)).

26 The Revelation Principle
Theorem: if there exists an arbitrary mechanism implementing a social choice function f in dominant strategies, then there exists an incentive compatible (direct revelation) mechanism that implements f. The payments of the players in the incentive compatible mechanism are identical to those obtained at equilibrium in the original mechanism. Proof: by simulation.

27 Revelation Principle: Intuition
(Diagram: the constructed "direct revelation" mechanism wraps the original "complex", "indirect" mechanism. Each player i reports his type ti; the wrapper plays the strategy si(ti) in the original mechanism on his behalf, producing the outcome a and payments p1, …, pn.)

28 Revelation Principle: Proof
Since si is dominant for player i, for all ti and all x: vi(ti, a(si(ti), x−i)) − pi(si(ti), x−i) ≥ vi(ti, a(x)) − pi(x). In particular this holds for x−i = s−i(t−i) and xi = si(t′i), which is exactly incentive compatibility of the constructed direct mechanism. To understand a mechanism, one can therefore think of the equivalent direct revelation mechanism.

29 Direct Characterization
A mechanism is incentive compatible iff the following hold for all i and all v−i: (1) the payment pi does not depend on vi, but only on the chosen alternative f(vi, v−i); write pa for the payment when alternative a is chosen. (2) The mechanism optimizes for each player: f(vi, v−i) ∈ argmax_a (vi(a) − pa).

30 Bayesian Nash Implementation
There is a distribution Di on the types Ti of player i, and it is known to everyone. The value ti ∈ Ti, drawn from Di, is the private information i knows. A profile of strategies (s1, …, sn) is a Bayesian Nash equilibrium if for every i, every ti, and every alternative action x′i: E_{D−i}[ui(ti, si(ti), s−i(t−i))] ≥ E_{D−i}[ui(ti, x′i, s−i(t−i))].

31 Bayesian Nash: First Price Auction
First price auction for a single item with two players. Each has a private value, t1 and t2, in T1 = T2 = [0, 1], drawn from distributions D1 and D2. It does not make sense to bid your true value: the resulting utility would be 0. We are looking for s1(t1) and s2(t2) that are best replies to each other. Suppose both D1 and D2 are uniform. Claim: the strategies si(ti) = ti/2 form a Bayesian Nash equilibrium. (Figure: illustration of the winning region for a bid of t1/2.)

32 Expected Revenues
Expected revenue: for the first price auction it is E[max(T1/2, T2/2)], where T1 and T2 are uniform in [0, 1]; for the second price auction it is E[min(T1, T2)]. Which is better? Both equal 1/3. Coincidence? Theorem [Revenue Equivalence]: under very general conditions, for every two Bayesian Nash implementations of the same social choice function: if for some player and some type they have the same expected payment, then all types have the same expected payment to that player; and if all players have the same expected payment, the expected revenues are the same. A quick simulation of the 1/3 claim appears below.
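A quick Monte Carlo sketch (my own, not from the lecture) estimating both expected revenues for uniform values:

```python
# Monte Carlo check of the revenue comparison for two uniform [0, 1] bidders
# (illustrative sketch; the exact value of both expectations is 1/3).
import random

def estimate_revenues(trials=200_000, seed=0):
    rng = random.Random(seed)
    first = second = 0.0
    for _ in range(trials):
        t1, t2 = rng.random(), rng.random()
        first += max(t1, t2) / 2     # first-price: winner bids half his value
        second += min(t1, t2)        # second-price: winner pays the lower value
    return first / trials, second / trials

print(estimate_revenues())  # both estimates come out close to 1/3
```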

