
1 The Cost and Windfall of Manipulability
Abraham Othman and Tuomas Sandholm
Carnegie Mellon University Computer Science Department

2 The revelation principle
Foundational result of mechanism design
– Equivalence of manipulable & truthful mechanisms
Only applies if all agents in the manipulable mechanism behave optimally

3 Questions
Agents might act irrationally due to:
– Computational limitations
– Stupidity/trembling
– Behavioral/cognitive biases
Then, can the mechanism designer
– get a better result?
– be protected from bad results?

4 Key idea
Designing manipulable mechanisms that do better than the best truthful mechanism if any agent(s) play irrationally in any way (and equally well if everyone plays rationally)
– We don’t need a model of irrationality

5 Mechanism utility
We define mechanism utility for outcome o as:
M(o) = Σ_i γ_i u_{θ_i}(o) + m(o)
where the sum is taken over all agents i, the γ_i are affine multipliers, and m(·) represents a type-independent, outcome-specific payoff
This is a very flexible formalism!
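
As a concrete illustration, here is a minimal Python sketch of this affine mechanism-utility formalism. It is not code from the paper; all identifiers (mechanism_utility, gammas, agent_utilities, outcome_bonus) are placeholders chosen here.

```python
# Minimal sketch (not from the paper): the affine mechanism-utility formalism
# M(o) = sum_i gamma_i * u_{theta_i}(o) + m(o). All identifiers are illustrative.
from typing import Callable, Hashable, Sequence

def mechanism_utility(
    outcome: Hashable,
    gammas: Sequence[float],                                 # affine multiplier gamma_i per agent
    agent_utilities: Sequence[Callable[[Hashable], float]],  # u_{theta_i}(.) per agent
    outcome_bonus: Callable[[Hashable], float],              # type-independent term m(.)
) -> float:
    """Return M(o) = sum_i gamma_i * u_{theta_i}(o) + m(o)."""
    weighted = sum(g * u(outcome) for g, u in zip(gammas, agent_utilities))
    return weighted + outcome_bonus(outcome)

# Social welfare is the special case gamma_i = 1 for all i and m(.) = 0.
u1 = {"x": 2.0, "y": 0.0}
u2 = {"x": 1.0, "y": 3.0}
print(mechanism_utility("x", [1.0, 1.0], [u1.get, u2.get], lambda o: 0.0))  # 3.0
```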

6 Manipulation optimal mechanisms (MOMs)
Def. A manipulable mechanism is manipulation optimal if
– Any agent with a manipulable type failing in any way to play his optimal manipulation yields strictly greater mechanism utility
– If all agents play rationally, the mechanism’s utility equals that of an optimal truthful mechanism
Here “optimal” means not Pareto-dominated by any other truthful mechanism
We don’t need a model of irrationality
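
One way to write these two conditions symbolically, using notation introduced here rather than in the slides (g is the manipulable mechanism's outcome function, s*_i(θ_i) is agent i's optimal play given type θ_i, M_θ is mechanism utility evaluated at the true type profile θ, and f* is an optimal truthful mechanism):

```latex
% Hedged formalization; the symbols g, s^*, f^*, M_\theta are introduced here.
\begin{enumerate}
  \item For every agent $i$ whose type $\theta_i$ is manipulable and every
        play $s_i \neq s^*_i(\theta_i)$,
        \[
          M_\theta\bigl(g(s_i,\, s^*_{-i}(\theta_{-i}))\bigr)
            \;>\;
          M_\theta\bigl(g(s^*(\theta))\bigr).
        \]
  \item Under fully rational play the mechanism matches an optimal
        (non-Pareto-dominated) truthful mechanism $f^*$:
        \[
          M_\theta\bigl(g(s^*(\theta))\bigr) \;=\; M_\theta\bigl(f^*(\theta)\bigr)
          \qquad \mbox{for every type profile } \theta.
        \]
\end{enumerate}
```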

7 Example of this property [Conitzer & Sandholm 04]
A manager and HR director are trying to select a team of k people for a task, from n employees
Some employees are friends. Friend relationships are mutual and are common knowledge

8 Example, continued
HR director prefers team to have friends on it
– She gets utility 2 if team has friends, 0 otherwise
Manager has a type – either he has a preference for a specific team, or no team preference
– Manager gets a base payoff 1 if the selected team has no friends, 0 otherwise
– If manager has a type corresponding to a specific team and that team is selected, he gets a bonus 3

9 Team selection mechanisms
If manager reports a team preference, select that team
Optimal truthful mechanism: If manager reports no team preference, select a team without friends (mechanism utility = 1)
Manipulation-optimal: If manager reports no team preference, select a team with friends (mechanism utility = 2)
– Manager’s no-team-preference type is manipulable because he could state a preference for an arbitrary team w/o friends
– But finding a team w/o friends is NP-hard
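
A minimal Python sketch of these two selection rules, assuming a simple encoding of the friendship graph; every identifier here (team_has_friends, friend_free_team, manipulation_optimal_rule) is hypothetical rather than from the paper. The brute-force friend_free_team search is the NP-hard problem (independent set) that the manipulating manager would have to solve, while the manipulation-optimal rule only ever needs a team that does contain friends, which is easy to find.

```python
# Illustrative sketch only: identifiers are hypothetical, not from the paper.
from itertools import combinations

def team_has_friends(team, friends):
    """True if some pair of team members are friends.
    `friends` is a set of frozenset({x, y}) pairs (friendship is mutual)."""
    return any(frozenset(pair) in friends for pair in combinations(team, 2))

def friend_free_team(employees, friends, k):
    """Brute-force search for a k-person team with no friends on it.
    This is the NP-hard independent-set problem the manager would need to solve
    in order to manipulate (i.e., to name a friend-free team instead of
    reporting 'no preference')."""
    for team in combinations(employees, k):
        if not team_has_friends(team, friends):
            return set(team)
    return None

def manipulation_optimal_rule(reported_team, employees, friends, k):
    """If the manager names a team, select it; otherwise select a team WITH
    friends (the HR director's preferred kind), which is easy to find."""
    if reported_team is not None:
        return set(reported_team)
    for team in combinations(employees, k):
        if team_has_friends(team, friends):
            return set(team)
    return set(next(combinations(employees, k)))  # no team with friends exists at all

# Tiny example: 4 employees, one friendship, teams of size 2.
friends = {frozenset({"alice", "bob"})}
print(manipulation_optimal_rule(None, ["alice", "bob", "carol", "dave"], friends, 2))
```

Under this rule a manager who reports truthfully (or who fails to find a friend-free team) leaves the HR director with a team containing friends, which is the windfall described above; a manager who does manipulate successfully reproduces the outcome of the optimal truthful mechanism.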

10 Settings for MOMs
That was an existence result for a single-agent affine-welfare maximization objective
– HR director had only one type => no need to report
What about other settings?

11 General impossibility
Theorem. No mechanism can satisfy the first characteristic of MOMs (doing strictly better through sub-optimal play) if any agent has more than one distinct manipulable type
Proof sketch. Let a and b be manipulable types with different optimal strategic plays
– What happens if an agent of type a plays the optimal revelation for type b, and vice versa?
– The mechanism must do better than if the agents had played correctly, but there’s a reason those plays weren’t optimal for the agent
Holds for Nash equilibrium => impossibility for stronger solution concepts
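
One way to make this proof sketch concrete in the simplest case, with notation and an extra assumption introduced here: a single agent, mechanism utility M_θ(o) = γ u_θ(o) + m(o), and γ ≥ 0. Writing g(s) for the outcome when the agent plays s, and letting s_a ≠ s_b be the optimal plays of the two manipulable types, the first MOM property applied to each type mis-playing as the other requires:

```latex
% Sketch of the cross-play tension for one agent, assuming \gamma \ge 0.
\begin{align*}
\gamma\, u_a(g(s_b)) + m(g(s_b)) &> \gamma\, u_a(g(s_a)) + m(g(s_a)), \\
\gamma\, u_b(g(s_a)) + m(g(s_a)) &> \gamma\, u_b(g(s_b)) + m(g(s_b)).
\end{align*}
```

But since s_a is optimal for type a and s_b for type b, u_a(g(s_a)) ≥ u_a(g(s_b)) and u_b(g(s_b)) ≥ u_b(g(s_a)); with γ ≥ 0 the first inequality then forces m(g(s_b)) > m(g(s_a)) while the second forces the reverse, a contradiction.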

12 Working within the impossibility
We prove that
– Single-player MOMs do not exist if the objective is social welfare
– Multi-player MOMs exist if the objective is social welfare
   – But not in dominant strategies with symmetry and anonymity
– Multi-player MOMs exist if the objective is affine welfare
   – even in dominant strategies with symmetry and anonymity

13 Proof (1/4)
Proof by construction. Each agent has type a or a’
Our mechanism:
Report   a’          a
a’       Outcome 1   Outcome 2
a        Outcome 3   Outcome 4
The challenge is fixing the payoffs appropriately…

14 Proof (2/4)
Payoffs for type a:
Report   a’       a
a’       {1,1}    {4,0}
a        {0,3}    {3,0}
Payoffs for type a’:
Report   a’       a
a’       {3,4}    {5,0}
a        {0,6}    {0,0}
Reporting a’ regardless of true type is a DSE

15 Proof (3/4)
Two parts in proving that this mechanism is manipulation-optimal:
– First, if one or both of the agents with true type a play a instead of manipulating to a’, social welfare strictly increases
– E.g. if both are truly a:
Report   a’                     a
a’       {1,1} (DSE), sum = 2   {4,0}, sum = 4
a        {0,3}, sum = 3         {3,0}, sum = 3
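
The construction can be re-checked mechanically. The sketch below is my own encoding, not the authors' code, and it assumes one particular reading of the tables: rows index agent 1's report, columns index agent 2's report, and the pair in each cell is (agent 1's payoff, agent 2's payoff), with each agent's payoff taken from the table matching that agent's true type. Under that reading it checks the DSE claim from slide 14 and the strict welfare increase shown in the table above.

```python
# Sketch only: my encoding of the tables above, used to re-check the claims.
from itertools import product

REPORTS = ("a'", "a")

# PAYOFF[true_type][(report1, report2)] = (payoff if agent 1 has true_type,
#                                          payoff if agent 2 has true_type)
PAYOFF = {
    "a":  {("a'", "a'"): (1, 1), ("a'", "a"): (4, 0),
           ("a", "a'"):  (0, 3), ("a", "a"):  (3, 0)},
    "a'": {("a'", "a'"): (3, 4), ("a'", "a"): (5, 0),
           ("a", "a'"):  (0, 6), ("a", "a"):  (0, 0)},
}

def payoff(agent, true_type, r1, r2):
    """Payoff to `agent` (0 or 1) with the given true type at reports (r1, r2)."""
    return PAYOFF[true_type][(r1, r2)][agent]

def profile(agent, own_report, other_report):
    """Full report profile when `agent` makes `own_report`."""
    return (own_report, other_report) if agent == 0 else (other_report, own_report)

# Claim 1 (slide 14): reporting a' is a dominant strategy for every true type.
for agent, true_type, other in product((0, 1), ("a", "a'"), REPORTS):
    assert payoff(agent, true_type, *profile(agent, "a'", other)) >= \
           payoff(agent, true_type, *profile(agent, "a", other))

# Claim 2 (slide 15): if both agents truly have type a, any deviation from the
# DSE profile (a', a') strictly increases the welfare sum (2 -> 4, 3, or 3).
dse_sum = sum(PAYOFF["a"][("a'", "a'")])
for reports in product(REPORTS, repeat=2):
    if reports != ("a'", "a'"):
        assert sum(PAYOFF["a"][reports]) > dse_sum

print("dominance and welfare-windfall checks pass")
```

Both assertions hold under this reading, which is consistent with the DSE and welfare claims in the slides.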

16 Proof (4/4)
Truthful analogue to our mechanism maps any report to outcome 1
For our mechanism to be manipulation optimal, this truthful analogue must not be Pareto-dominated by any truthful mechanism
Outcome 1 maximizes social welfare when both agents have type a’, and any truthful mechanism which selects outcome 1 for that input must select outcome 1 for every input

17 Symmetric and anonymous MOMs
In that example the payoffs were asymmetric. We prove that there do not exist MOMs in dominant strategies when the mechanism is anonymous and payoffs are symmetric
– Social welfare maximization + DSE + symmetry + anonymity = impossibility in most common case
However, these goals can be satisfied in an affine welfare maximization context

18 Conclusions
Can we use manipulability to guarantee better results if agents fail to be rational in any way?
– No
   – If any agent has more than one manipulable type
   – In single-agent settings if objective is social welfare
   – In DSE for social-welfare maximization with symmetric, anonymous agents
– Yes
   – For some social welfare maximization settings, even in DSE
   – For some affine welfare maximization settings, even in DSE with symmetry & anonymity
In settings where answer is “No”, using a manipulable mechanism exposes designer to outcomes worse than best truthful mechanism

19 Getting around the impossibilities
Our impossibility results place strong constraints on MOMs
– When mistakes can be arbitrary, only in very limited settings can MOMs exist
Circumventing impossibility
– Imposing natural restrictions on strategies
– Combining behavioral models and priors with automated mechanism design

