1 Computational Criticisms of the Revelation Principle
Vincent Conitzer, Tuomas Sandholm
AMEC V

2 The revelation principle
The revelation principle states that for any mechanism, there is another truthful, direct revelation mechanism that:
– Always gives the same outcome as the original mechanism (assuming agents are strategic and unboundedly rational)
– Asks agents to report their preferences (or type) directly (and nothing else)
– Never gives agents incentive to lie about their type
Holds both for dominant-strategies and Bayes-Nash equilibrium implementation
Cornerstone tool in mechanism design
Proof sketch: place an interface layer around the old mechanism that acts on each agent's best behalf
– Compare a "proxy bidder"
[Figure: the new mechanism wraps the old one in an interface layer; each reported type (e.g., "Type 1") is translated into that type's optimal input to the old mechanism (e.g., "Input 2"), so the new mechanism yields the same outcome]
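A minimal sketch of the proof-sketch construction (all names here are hypothetical, not from the paper): the new direct mechanism collects type reports and lets an interface layer play each type's equilibrium strategy in the old mechanism on the agent's behalf.

# Toy sketch of the revelation-principle construction (hypothetical names).
# old_mechanism maps a list of agent inputs (messages) to an outcome;
# optimal_strategy maps a type to that type's equilibrium input.

def direct_mechanism(reported_types, old_mechanism, optimal_strategy):
    # Interface layer: act on each agent's best behalf, like a proxy bidder.
    inputs = [optimal_strategy(t) for t in reported_types]
    return old_mechanism(inputs)

# Toy usage: a first-price auction where the (assumed) equilibrium for two
# uniform-value bidders is to bid half one's value; the wrapped mechanism is
# direct and truthful by construction.
first_price = lambda bids: max(range(len(bids)), key=lambda i: bids[i])
half_value = lambda v: v / 2
print(direct_mechanism([10, 6], first_price, half_value))  # winner: agent 0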

3 What is the revelation principle used for?
As a theoretical bounding tool
– If you know that no truthful direct mechanism achieves a given level of the objective with perfectly strategic agents, then neither does any indirect/untruthful mechanism
– Nothing wrong with this (if applied carefully)
As a justification for using truthful direct revelation mechanisms in practice
– For instance, the Vickrey, Clarke, and Groves mechanisms
– Ignores the communication/computation complexity of reporting and manipulation
  – Certainly acceptable if type spaces are small…
  – … but in many cases they are not (e.g., combinatorial auctions)
– What is lost by ignoring this (if anything)?
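As a concrete example of a truthful direct revelation mechanism, here is a minimal sketch of a single-item Vickrey (second-price sealed-bid) auction, in which bidding one's true value is a dominant strategy.

def vickrey_auction(bids):
    # Single-item second-price sealed-bid auction: the highest bidder wins
    # and pays the second-highest bid, which makes truthful bidding dominant.
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    price = bids[order[1]] if len(bids) > 1 else 0.0
    return winner, price

print(vickrey_auction([10.0, 7.5, 3.0]))  # -> (0, 7.5)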

4 Communication costs of single-step and multi-step revelation in mechanisms
The revelation principle (considered naively) suggests revealing the agent's whole type (preferences over all outcomes) in one step
This may be costly/impractical/infeasible because:
– Determining one's preferences can be costly
  – E.g., when determining a valuation requires solving a hard planning problem
– The string required to communicate the full type may be huge
  – A waste of bandwidth
  – Or even completely impossible
– It leaves no privacy whatsoever
It is well known that careful preference elicitation (or implicit elicitation, e.g., price/quantity tâtonnement) may help
– Parts of the type may be irrelevant given others' types
But how much? Is an exponential reduction in revelation possible? (Question posed by Papadimitriou)

5 YES! Exponential reduction is possible
Theorem. There are settings where:
– Executing the optimal single-step mechanism requires an exponential amount of communication and computation
– There exists an entirely equivalent two-step mechanism that requires only a linear amount of communication and computation
Holds both for dominant-strategies and Bayes-Nash implementation
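A minimal illustration of how multi-step revelation can save communication (a toy example, not the paper's construction): agent 1's type is an n-bit index that determines which single entry of agent 2's exponentially large valuation table is relevant. A single-step mechanism must elicit all 2**n entries up front; a two-step mechanism elicits n bits, then one number.

# Toy example (not the paper's construction): agent 1 holds an n-bit index i;
# only entry i of agent 2's 2**n-entry valuation table matters.

def single_step(index_from_agent1, full_table_from_agent2):
    # Agent 2 reveals its whole type up front: 2**n numbers communicated.
    return full_table_from_agent2[index_from_agent1]

def two_step(query_agent1, query_agent2):
    i = query_agent1()        # step 1: n bits from agent 1
    return query_agent2(i)    # step 2: a single number from agent 2

n = 10
table = {i: float(i % 7) for i in range(2**n)}   # agent 2's (made-up) type
print(single_step(5, table))                      # 2**n values moved
print(two_step(lambda: 5, lambda i: table[i]))    # n bits + 1 value moved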

6 A different computational criticism: questioning truthfulness
The previous result strengthened a known criticism…
… the next type of criticism is new in nature
If the participating agents have computational limits, does restricting oneself to truthful mechanisms lead to a loss in the objective (e.g., social welfare)? YES!
– This holds even if the center is computationally unbounded!
– … although the loss is even greater if the center is bounded as well

7 Criticizing truthful mechanisms: computational complexity
Theorem. There are settings where:
– Executing the optimal truthful mechanism (in terms of social welfare) is NP-complete for the center
– There exists an insincere mechanism where:
  – The center only carries out polynomial computation
  – Finding a beneficial insincere revelation is NP-complete for the agents
  – If the agents manage to find the beneficial insincere revelation, the insincere mechanism is just as good as the optimal truthful one
  – Otherwise, the insincere mechanism is strictly better (in terms of social welfare)
Holds both for dominant-strategies and Bayes-Nash implementation

8 A different model: black-box oracles
Suppose the only way to ascertain an agent's utility for an outcome is through an oracle
– The oracle takes as input a type and an outcome, and returns the utility
– Depending on the type, the query may be costless or carry a constant cost
Example: allocating delivery tasks
– Given an agent's type (the company's resources, e.g., vehicles) and an outcome (an allocation of tasks)…
– … getting the agent's valuation requires solving a vehicle routing problem
  – The routing software package is the oracle
– For some types the routing problem may be easy (costless)
  – For example, if the company has a helicopter with costless flight time (but possibly costly landing time)
Another example: selling one or more art pieces
– Bidders may be art traders or art collectors (part of the type)
– For each outcome (allocation of art), art traders need to hire an expert to determine the authenticity of the pieces allocated to them => costly
– But art collectors simply have an intrinsic valuation => costless
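A minimal sketch of the oracle model (a hypothetical interface, not from the paper): the mechanism can only learn utilities by querying, and a per-type predicate decides whether each query carries the constant cost.

class ValuationOracle:
    # Black-box access to u(type, outcome); the only way to learn utilities.
    def __init__(self, utility_fn, query_is_costly):
        self._utility = utility_fn            # (type, outcome) -> utility
        self._is_costly = query_is_costly     # type -> bool
        self.costly_queries = 0               # running count of costly queries

    def query(self, agent_type, outcome):
        if self._is_costly(agent_type):
            self.costly_queries += 1          # constant cost charged per query
        return self._utility(agent_type, outcome)

# Toy art-auction usage: traders pay an expert per query, collectors do not.
u = lambda t, o: 1.0 if o in t["wants"] else 0.0
oracle = ValuationOracle(u, lambda t: t["kind"] == "trader")
trader = {"kind": "trader", "wants": {"monet"}}
print(oracle.query(trader, "monet"), oracle.costly_queries)   # 1.0 1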

9 Criticizing truthful mechanisms: query complexity
Theorem. There are settings where:
– Executing the optimal truthful mechanism (in terms of social welfare) requires the center to make an exponential number of costly queries for some type reports
– There exists an insincere mechanism where:
  – The center makes no costly queries
  – Finding a beneficial insincere revelation requires an agent to make an exponential number of costly queries
  – If the agents manage to find the beneficial insincere revelation, the insincere mechanism is just as good as the optimal truthful one
  – Otherwise, the insincere mechanism is strictly better (in terms of social welfare)
Holds both for dominant-strategies and Bayes-Nash implementation

10 Is there a systematic approach?
The criticisms of truthfulness above are for very specific settings
How do we take such computational issues into account in mechanism design in general?
What is the correct tradeoff?
– Cautious: make sure that computationally unbounded agents could not make the mechanism worse than the best truthful mechanism (like the previous result)
– Aggressive: take a risk and assume agents are probably somewhat bounded
What kinds of mechanism design approaches can help?
– Classical: attempt to theoretically characterize mechanisms that take maximal advantage of computational issues
– Automated [Conitzer & Sandholm 02, 03; Jameson, Hackl & Kleinbauer 03; Hsu 03]: compute the mechanism on the fly for the setting at hand
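To make the automated approach concrete, here is a minimal sketch of automated mechanism design in its simplest case (one agent, randomized mechanism, no payments; the instance numbers are made up): the welfare-maximizing truthful mechanism is found by a linear program over the outcome probabilities pi(outcome | reported type), subject to incentive-compatibility constraints.

import numpy as np
from scipy.optimize import linprog

# Toy instance (numbers made up): one agent, 2 types, 2 outcomes, no payments.
prior = np.array([0.5, 0.5])          # p(type)
u = np.array([[1.0, 0.0],             # u[type, outcome]
              [0.2, 1.0]])
T, O = u.shape
idx = lambda t, o: t * O + o          # flatten pi[t, o] into LP variables

# Maximize expected welfare sum_t p(t) sum_o pi[t,o] u[t,o] (linprog minimizes).
c = np.array([-prior[t] * u[t, o] for t in range(T) for o in range(O)])

# Incentive compatibility: truth-telling beats every misreport t -> t2.
A_ub, b_ub = [], []
for t in range(T):
    for t2 in range(T):
        if t2 != t:
            row = np.zeros(T * O)
            for o in range(O):
                row[idx(t2, o)] += u[t, o]    # utility from misreporting as t2
                row[idx(t, o)] -= u[t, o]     # minus utility from reporting t
            A_ub.append(row)
            b_ub.append(0.0)

# Each type's outcome distribution sums to one.
A_eq = np.zeros((T, T * O))
for t in range(T):
    A_eq[t, t * O:(t + 1) * O] = 1.0
b_eq = np.ones(T)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (T * O))
print(res.x.reshape(T, O))            # pi[type, outcome] of the optimal mechanism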

