The Cost and Windfall of Manipulability Abraham Othman and Tuomas Sandholm Carnegie Mellon University Computer Science Department

The revelation principle
Foundational result of mechanism design
–Equivalence of manipulable & truthful mechanisms
Only applies if all agents in the manipulable mechanism behave optimally

Questions
Agents might act irrationally due to:
–Computational limitations
–Stupidity/trembling
–Behavioral/cognitive biases
Then, can the mechanism designer
–get a better result?
–be protected from bad results?

Key idea
Designing manipulable mechanisms that do better than the best truthful mechanism if any agent(s) play irrationally in any way (and equally well if everyone plays rationally)
–We don’t need a model of irrationality

Mechanism utility
We define mechanism utility for outcome o as:
M(o) = Σ_i γ_i · u_{θ_i}(o) + m(o)
where the sum is taken over all agents i, the γ_i are affine multipliers, and m(·) represents a type-independent, outcome-specific payoff
This is a very flexible formalism!
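
A minimal sketch in Python of evaluating this affine objective for a candidate outcome (the function name, toy types, and payoff numbers are illustrative, not from the paper); pure social welfare is the special case where every γ_i = 1 and m ≡ 0.

# Illustrative sketch of the affine mechanism-utility objective
#     M(o) = Σ_i γ_i · u_{θ_i}(o) + m(o)
# Types, utilities, weights, and the bonus term below are hypothetical placeholders.

def mechanism_utility(outcome, types, u, gammas, m):
    """u(theta, outcome): payoff of an agent of type theta at `outcome`;
    gammas[i]: affine multiplier on agent i; m(outcome): type-independent term."""
    return sum(g * u(theta, outcome) for g, theta in zip(gammas, types)) + m(outcome)

# Toy instance with two agents (types "a" and "b") and two outcomes (1 and 2).
u = lambda theta, o: {("a", 1): 3, ("a", 2): 0, ("b", 1): 1, ("b", 2): 2}[(theta, o)]
print(mechanism_utility(1, ["a", "b"], u, gammas=[1, 1], m=lambda o: 0))  # social welfare = 4
print(mechanism_utility(1, ["a", "b"], u, gammas=[2, 1], m=lambda o: 5))  # affine variant = 12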

Manipulation-optimal mechanisms (MOMs)
Def. A manipulable mechanism is manipulation-optimal if
–Any agent with a manipulable type failing in any way to play his optimal manipulation yields strictly greater mechanism utility
–If all agents play rationally, the mechanism’s utility equals that of an optimal truthful mechanism
Here “optimal” means not Pareto-dominated by any other truthful mechanism
We don’t need a model of irrationality

Example of this property [Conitzer & Sandholm 04]
A manager and an HR director are trying to select a team of k people for a task from n employees
Some employees are friends. Friend relationships are mutual and are common knowledge

Example, continued
HR director prefers the team to have friends on it
–She gets utility 2 if the team has friends, 0 otherwise
Manager has a type: either he has a preference for a specific team, or no team preference
–Manager gets a base payoff 1 if the selected team has no friends, 0 otherwise
–If manager has a type corresponding to a specific team and that team is selected, he gets a bonus 3

Team selection mechanisms
If manager reports a team preference, select that team
Optimal truthful mechanism: If manager reports no team preference, select a team without friends (mechanism utility = 1)
Manipulation-optimal: If manager reports no team preference, select a team with friends (mechanism utility = 2)
–Manager’s no-team-preference type is manipulable because he could state a preference for an arbitrary team w/o friends
–But finding a team w/o friends is NP-hard
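
For intuition on the NP-hardness claim: the manipulation amounts to finding a size-k team with no pair of friends, i.e., an independent set of size k in the friendship graph. The brute-force sketch below (the function name and tiny example graph are hypothetical) illustrates the kind of exponential-time search a manipulating manager would face.

# Illustrative sketch: a friend-free team of size k is an independent set of size k
# in the friendship graph. Brute force over all size-k subsets is exponential in k;
# no polynomial-time algorithm is known in general (the problem is NP-hard).
from itertools import combinations

def friend_free_team(employees, friendships, k):
    """Return some size-k team with no pair of friends, or None if none exists.
    `friendships` is a set of frozensets {x, y}; friendship is mutual."""
    for team in combinations(employees, k):
        if not any(frozenset(pair) in friendships for pair in combinations(team, 2)):
            return team
    return None

# Tiny hypothetical instance: employees 1 and 2 are friends, as are 2 and 3.
employees = [1, 2, 3, 4]
friendships = {frozenset({1, 2}), frozenset({2, 3})}
print(friend_free_team(employees, friendships, 2))  # e.g. (1, 3)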

Settings for MOMs
That was an existence result for a single-agent affine-welfare maximization objective
–HR director had only one type => no need to report
What about other settings?

General impossibility
Theorem. No mechanism can satisfy the first characteristic of MOMs (doing strictly better through sub-optimal play) if any agent has more than one distinct manipulable type
Proof sketch. Let a and b be manipulable types with different optimal strategic plays
–What happens if an agent of type a plays the optimal revelation for type b, and vice versa?
–The mechanism must do better than if the agents had played correctly, but there’s a reason those plays weren’t optimal for the agent
Holds for Nash equilibrium => impossibility for stronger solution concepts

Working within the impossibility
We prove that:
–No single-player MOMs exist if the objective is social welfare
–Multi-player MOMs exist if the objective is social welfare
 But not in dominant strategies with symmetry and anonymity
–Multi-player MOMs exist if the objective is affine welfare
 –even in dominant strategies with symmetry and anonymity

Proof (1/4)
Proof by construction. Each agent has type a or a’
Our mechanism (rows: agent 1’s report; columns: agent 2’s report):

Report   a’          a
a’       Outcome 1   Outcome 2
a        Outcome 3   Outcome 4

The challenge is fixing the payoffs appropriately…

Proof (2/4)
Payoffs for type a:

Report   a’       a
a’       {1,1}    {4,0}
a        {0,3}    {3,0}

Payoffs for type a’:

Report   a’       a
a’       {3,4}    {5,0}
a        {0,6}    {0,0}

Reporting a’ regardless of true type is a DSE

Proof (3/4)
Two parts in proving that this mechanism is manipulation-optimal:
–First, if one or both of the agents with true type a play a instead of manipulating to a’, social welfare strictly increases
–E.g. if both are truly a:

Report   a’                      a
a’       {1,1} (DSE), sum = 2    {4,0}, sum = 4
a        {0,3}, sum = 3          {3,0}, sum = 3
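
As a sanity check, the sketch below encodes the two payoff tables, under the assumption that in the table for true type t each cell gives (agent 1’s payoff, agent 2’s payoff) for that report profile, and verifies both claims: reporting a’ is a dominant strategy for every type and position, and when both agents truly have type a, any deviation from (a’, a’) strictly raises social welfare.

# Sketch verifying the construction's claims.  Assumed reading of the tables: in the
# table for true type t, cell (r1, r2) = (payoff of agent 1 of type t, payoff of agent 2
# of type t) when agent 1 reports r1 and agent 2 reports r2.
payoff = {
    "a":  {("a'", "a'"): (1, 1), ("a'", "a"): (4, 0), ("a", "a'"): (0, 3), ("a", "a"): (3, 0)},
    "a'": {("a'", "a'"): (3, 4), ("a'", "a"): (5, 0), ("a", "a'"): (0, 6), ("a", "a"): (0, 0)},
}
reports = ["a'", "a"]

def u(agent, own_type, r1, r2):
    """Payoff of `agent` (0 or 1) with true type `own_type` under report profile (r1, r2)."""
    return payoff[own_type][(r1, r2)][agent]

# Claim 1: reporting a' strictly beats reporting a for each agent, type, and opposing report,
# so everyone reporting a' is a dominant-strategy equilibrium.
for t in reports:
    for other in reports:
        assert u(0, t, "a'", other) > u(0, t, "a", other)  # agent 1
        assert u(1, t, other, "a'") > u(1, t, other, "a")  # agent 2

# Claim 2: if both agents truly have type a, any deviation from the DSE profile (a', a')
# strictly increases social welfare (the sum of the two payoffs).
welfare_dse = sum(payoff["a"][("a'", "a'")])
for r1 in reports:
    for r2 in reports:
        if (r1, r2) != ("a'", "a'"):
            assert u(0, "a", r1, r2) + u(1, "a", r1, r2) > welfare_dse

print("DSE and welfare-increase claims verified for this payoff reading.")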

Proof (4/4)
Truthful analogue to our mechanism maps any report to outcome 1
For our mechanism to be manipulation-optimal, this truthful analogue must not be Pareto-dominated by any truthful mechanism
Outcome 1 maximizes social welfare when both agents have type a’, and any truthful mechanism which selects outcome 1 for that input must select outcome 1 for every input ∎

Symmetric and anonymous MOMs
In that example the payoffs were asymmetric. We prove that there do not exist MOMs in dominant strategies when the mechanism is anonymous and payoffs are symmetric.
–Social welfare maximization + DSE + symmetry + anonymity = impossibility in the most common case.
However, these goals can be satisfied in an affine welfare maximization context.

Conclusions
Can we use manipulability to guarantee better results if agents fail to be rational in any way?
–No:
 If any agent has more than one manipulable type
 In single-agent settings if the objective is social welfare
 In DSE for social-welfare maximization with symmetric, anonymous agents
–Yes:
 For some social welfare maximization settings, even in DSE
 For some affine welfare maximization settings, even in DSE with symmetry & anonymity
In settings where the answer is “No”, using a manipulable mechanism exposes the designer to outcomes worse than the best truthful mechanism

Getting around the impossibilities
Our impossibility results place strong constraints on MOMs
–When mistakes can be arbitrary, only in very limited settings can MOMs exist
Circumventing impossibility:
–Imposing natural restrictions on strategies
–Combining behavioral models and priors with automated mechanism design