Tuomas Sandholm Computer Science Department Carnegie Mellon University

1 Costly valuation computation/information acquisition in auctions: Strategy, counterspeculation, and deliberation equilibrium
Tuomas Sandholm, Computer Science Department, Carnegie Mellon University
Mainly based on the following papers:
- Larson, K. and Sandholm, T. Costly Valuation Computation in Auctions. In Proceedings of the Theoretical Aspects of Reasoning about Knowledge (TARK).
- Larson, K. and Sandholm, T. Computationally Limited Agents in Auctions. In Proceedings of the International Conference on Autonomous Agents, Workshop on Agent-based Approaches to B2B.

2 [Sandholm NOAS-91, AAAI-93]
[Diagram: TRACONET agents negotiating a task-transfer contract via auction, at prices $2,000 and $1,700.]
In the interest of time, I won’t discuss the slew of techniques I developed in the TRACONET system. Instead, let me present two of the inherent problems in peer-to-peer negotiation that this work uncovered:
1. When bidding for an item, an agent cannot know the value of the item, because it depends on what other items the agent receives and gets rid of in later stages of the negotiation. For example, if an agent later gets a backhaul delivery from Acapulco to Pittsburgh, it may be able to handle the PIT->Acapulco task at half price, because the cost of driving back empty does not have to be factored in. Whether the agent bids under the assumption that he will get the later item or under the assumption that he will not, in some materializations the agent’s decision will turn out suboptimal in hindsight.
2. Hill-climbing gets stuck in a locally optimal task allocation, that is, a locally optimal assignment of the inter-agent variables.
The common problem: usually the issues to be negotiated are interdependent (to at least one agent). E.g., the cost of taking on a task depends on what other tasks the agent will receive (and get rid of) later in the negotiation. In such settings, acting rationally would require intractable lookahead in a huge game tree. Even after lookahead, the agent might make decisions that turn out bad in hindsight, because he has uncertainty about others, and thus only probabilistic projections about the future. [Sandholm NOAS-91, AAAI-93]

3 Bidders may need to compute their valuations for (bundles of) goods
In many applications (even private-values, quasilinear ones), e.g.:
- Vehicle routing problem in transportation exchanges
- Manufacturing scheduling problem in procurement
Value of a bundle of items (tasks, resources, etc.) = value of solution with those items - value of solution without them.
Our models apply to information gathering as well.
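As a concrete toy illustration of this valuation definition, the sketch below uses a brute-force knapsack-style scheduler as a stand-in for a real NP-hard domain solver such as vehicle routing; all job numbers are hypothetical.

```python
from itertools import combinations

# Toy domain problem: pick jobs for one machine under a capacity limit,
# maximizing total reward (a stand-in for a real scheduling or vehicle
# routing solver; all numbers are hypothetical).
def best_reward(jobs, capacity):
    best = 0
    for r in range(len(jobs) + 1):
        for subset in combinations(jobs, r):
            if sum(size for size, _ in subset) <= capacity:
                best = max(best, sum(reward for _, reward in subset))
    return best

def bundle_valuation(own_jobs, bundle, capacity):
    # Value of the solution with the bundle's jobs minus the value of
    # the solution without them.
    return best_reward(own_jobs + bundle, capacity) - best_reward(own_jobs, capacity)

own = [(3, 5), (4, 6)]   # (size, reward) pairs the agent already holds
bundle = [(2, 4)]        # bundle being auctioned
print(bundle_valuation(own, bundle, capacity=9))  # prints 4
```

Note that computing this valuation exactly requires solving the NP-hard domain problem twice, which is why the approximation machinery on the following slides matters.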

4 Software agents for auctions
Software agents exist that bid on behalf of users. We want to enable agents not only to bid in auctions, but also to determine the valuations of the items. Agents use computational resources to compute valuations. Valuation determination can involve computing on NP-complete problems (scheduling, vehicle routing, etc.). Optimal solutions may not be possible to determine due to limitations in agents’ computational abilities (i.e., agents have bounded rationality).

5 Bounded rationality Work in economics has largely focused on descriptive models Some models based on limited memory in repeated games [Papadimitriou, Rubinstein, …] Some AI work has focused on models that prescribe how computationally limited agents should behave [Horvitz; Russell & Wefald; Zilberstein & Russell; Sandholm & Lesser; Hansen & Zilberstein, …] Simplifying assumptions Myopic deliberation control Asymptotic notions of bounded optimality Conditioning on performance but not path of an algorithm Simplifications can work well in single agent settings, but any deviation from full normativity can be catastrophic in multiagent settings Incorporate deliberation (computing) actions into agents’ strategies => deliberation equilibrium

6 Simple model: an agent can pay c to find out its own valuation => the Vickrey auction no longer has a dominant strategy [Sandholm ICMAS-96, International J. of Electronic Commerce 2000]
Thrm. In a private-value Vickrey auction with uncertainty about an agent’s own valuation, a risk-neutral agent’s best strategy can depend on others’.
E.g., two bidders (1 and 2) bid for a good. v1 is uniform between 0 and 1; v2 is deterministic, 0 ≤ v2 ≤ 0.5.
Without computing, agent 1 bids 0.5 and gets the item at price v2. Now say agent 1 has the choice of paying c to find out v1. Then agent 1 will bid v1 and get the item iff v1 ≥ v2 (no loss possibility, but c invested).
The same model was studied more recently in the literature on “information acquisition in auctions” [Compte and Jehiel 01, Rezende 02, Rasmussen 06].
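The example can be worked out in closed form (the algebra below is my own sketch of the slide’s setup): v1 ~ Uniform[0, 1] is agent 1’s unknown valuation, v2 in [0, 0.5] is agent 2’s known deterministic valuation, and c is the cost of computing v1.

```python
def eu_no_compute(v2):
    # Without computing, agent 1 bids E[v1] = 0.5 >= v2, always wins,
    # and pays the second price v2.
    return 0.5 - v2

def eu_compute(v2, c):
    # After paying c, agent 1 bids v1 and wins iff v1 >= v2:
    # integral of (v1 - v2) over [v2, 1] equals (1 - v2)^2 / 2.
    return (1 - v2) ** 2 / 2 - c

def should_compute(v2, c):
    # The difference simplifies to v2**2 / 2 - c, so whether paying c is
    # worthwhile depends on the OTHER bidder's valuation v2.
    return eu_compute(v2, c) > eu_no_compute(v2)
```

For instance, with c = 0.05 computing pays off when v2 = 0.4 but not when v2 = 0.2: agent 1’s best strategy depends on agent 2, exactly as the theorem states.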

7 Quest for a general fully normative model
[Diagram: the Auctioneer receives bid(result) from each Agent; inside each Agent, a Deliberation controller (which uses a performance profile) issues Compute! commands to, and receives results from, a Domain problem solver (an anytime algorithm).]
The simple pay-c model is not fully normative:
- Deliberation removes uncertainty completely; the agent cannot partially deliberate
- Does not address how to allocate deliberation when there are multiple items/bundles
- Cannot deliberate about others’ valuations

8 Normative control of deliberation
In our setting, agents have limited computing or costly computing. Agents must decide how to use their limited resources in an efficient manner. Agents have anytime algorithms and use performance profiles to control their deliberation.

9 Anytime algorithms can be used to approximate valuations
The solution improves over time. Anytime algorithms can usually “solve” much larger problem instances than complete algorithms can, and they allow trading off computing time against quality. The decision is not just which bundles to evaluate, but how carefully. Examples:
- Iterative refinement algorithms: local search, simulated annealing
- Search algorithms: depth-first search, branch and bound
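A minimal anytime-algorithm sketch (the objective and tuning below are invented for illustration): a local search whose current best answer can be read off after any number of steps, so solution quality trades off against computing time.

```python
import random

def anytime_local_search(cost, start, neighbor, steps, seed=0):
    """Keep a random tweak whenever it improves the current solution;
    record the solution cost after every step so the run can be stopped
    (and its quality inspected) at any time."""
    rng = random.Random(seed)
    x, trace = start, []
    for _ in range(steps):
        y = neighbor(x, rng)
        if cost(y) < cost(x):
            x = y
        trace.append(cost(x))
    return x, trace

# Toy minimization problem: find x near 3.7 starting from 0.
cost = lambda x: (x - 3.7) ** 2
x, trace = anytime_local_search(cost, 0.0,
                                lambda x, rng: x + rng.uniform(-1, 1), 200)
# trace is non-increasing: more computing time never hurts the answer,
# which is what a statistical performance profile summarizes across runs.
```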

10 Performance profiles of anytime algorithms
Statistical performance profiles characterize the quality of an algorithm’s output as a function of computing time. There are different ways of representing performance profiles. Earlier methods were not normative: they did not capture all the possible ways an agent can control its deliberation. This can be satisfactory in single-agent settings, but catastrophic in multiagent systems.

11 Performance profiles
[Figure: two plots of solution quality vs. computing time. Left: a deterministic performance profile [Horvitz 87, 89, Dean & Boddy 89]. Right: the variance introduced by different problem instances, with the optimum marked.]

12 Table-based representation of uncertainty in performance profiles
[Zilberstein & Russell IJCAI-91, AIJ-96]
[Table: a probability distribution over solution quality as a function of computing time; each column gives the probabilities of the quality levels reachable after that much computing.]
Conditioning on solution quality so far [Hansen & Zilberstein AAAI-96].
This representation ignores conditioning on the path of solution quality.

13 Performance profile tree [Larson & Sandholm AAAI-00, AIJ-01, TARK-01]
[Figure: a performance profile tree; nodes are labeled with solution qualities and edges with conditional probabilities such as P(B|A) and P(C|A).]
- Normative
- Allows conditioning on the path of solution quality
- Allows conditioning on the path of other solution features
- Allows conditioning on problem instance features (different trees to be used for different classes)
- Constructed from statistics on earlier runs
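One way to make the tree concrete (the node qualities and edge probabilities below are invented, not the slide’s): expected final quality is computed by recursing over the subtree at the current node, so re-evaluating after each observation is exactly conditioning on the path so far.

```python
# Hypothetical performance profile tree: each node stores the solution
# quality observed at that point in the run; each edge carries the
# probability of the next observation given the path so far.
class PPNode:
    def __init__(self, quality, children=None):
        self.quality = quality
        self.children = children or []  # list of (probability, PPNode)

def expected_final_quality(node):
    """Expected quality at the end of the run, conditioned on having
    reached this node (i.e., on the whole path of qualities so far)."""
    if not node.children:
        return node.quality
    return sum(p * expected_final_quality(c) for p, c in node.children)

tree = PPNode(3, [
    (0.6, PPNode(4, [(0.5, PPNode(5)), (0.5, PPNode(10))])),
    (0.4, PPNode(6, [(1.0, PPNode(15))])),
])
```

At the root the expectation is 10.5; after observing the branch with quality 4 it drops to 7.5, so the deliberation controller’s forecast is updated by each observation along the path.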

14 Performance profile tree…
The following can be augmented into the model:
- Randomized algorithms ([Figure: a performance profile tree with random nodes in addition to value nodes])
- An agent not knowing which algorithms others are using
- An agent having uncertainty about others’ problem instances: the agent can emulate different scenarios of others
Our results hold in this augmented setting.

15 Roles of computing
Computing by an agent:
- Improves the solution to the agent’s own problem(s)
- Reduces uncertainty as to what future computing steps will yield
- Improves the agent’s knowledge about others’ valuations
- Improves the agent’s knowledge about what problems others may have computed on and what solutions others may have obtained
Our results apply to different settings:
- Computing increases the valuation (reduces cost)
- Computing refines the valuation estimate

16 “Strategic computing”
Good estimates of the other bidders’ valuations can allow an agent to tailor its bids to achieve higher utility.
Definition. Strong strategic computing: the agent uses some of its deliberation resources to compute on others’ problems.
Definition. Weak strategic computing: the agent uses information from others’ performance profiles.
How an agent should allocate its computation (based on results it has obtained so far) can depend on how others allocate their computation => “deliberation equilibrium” [AIJ-01].

17 Theorems on strategic computing
Auction mechanism                 Counterspeculation by    Strategic computing?
                                  rational agents?         Limited      Costly
Single item for sale:
  First-price sealed-bid          yes                      yes          yes
  Dutch (1st-price descending)    yes                      yes          yes
  Vickrey (2nd-price sealed-bid)  no                       no           yes
  English (1st-price ascending)   no                       —            yes (see later slides)
Multiple items for sale:
  Generalized Vickrey             no                       yes          yes
  (Open question: on which <bidder, bundle> pair to allocate the next computation step?)
If performance profiles are deterministic, only weak strategic computing can occur.
=> The new normative deliberation control method uncovered a new phenomenon.

18 Costly computing in English auctions
For rational bidders, straightforward bidding is an ex post equilibrium.
Thrm. If at most one performance profile is stochastic, no strong strategic computing occurs in equilibrium.
Thrm. If at least two performance profiles are stochastic, strong strategic computing can occur in equilibrium, despite the fact that agents learn about others’ valuations by waiting and observing others’ bids. (Passing & resuming computation during the auction is allowed.)
Proof. Consider an auction with two bidders:
- Agent 1 can compute for free
- Agent 2 incurs cost 1 for each computing step

19 Performance profiles of the proof
[Figure: each agent’s performance profile is a two-outcome lottery. Computing on agent 1’s problem yields valuation high1 with probability p(high1) and low1 with probability 1-p(high1); computing on agent 2’s problem yields high2 with probability p(high2) and low2 with probability 1-p(high2).]
The valuations are ordered low2 < low1 < high2 < high1.
Since computing one step on agent 2’s problem does not yield any information, we can treat computing for two steps on 2’s problem atomically.

20 Proof continued… Agent 1 has straightforward (ex post eq.) strategy:
Compute only on own problem & increment bid whenever agent 1 does not have the highest bid and the highest bid is lower than agent 1’s valuation.
Agent 2’s strategy:
CASE 1: bid1 > low1. Agent 2 knows that agent 1 has valuation high1. Agent 2 cannot win, and thus has no incentive to compute or bid.
CASE 2: bid1 < low2. Agent 2 continues to increment its own bid. No need to compute, since it knows that its valuation is at least low2.
CASE 3: low2 ≤ bid1 ≤ low1. If agent 2 bids, he should bid bid1 + ε. His strategy depends on the performance profiles…
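The payoff comparison behind CASE 3 can be sketched numerically. The bookkeeping below is my own hedged reconstruction (assumptions: agent 2 pays 1 per computing step, its own problem takes two steps, agent 1’s takes one, and blind bidding without any computing is omitted); the default valuations are the ones used on a later slide.

```python
# Hedged reconstruction of agent 2's CASE 3 decision; payoff formulas
# are my own, not taken verbatim from the slides.
def best_option(p1, p2, low2=3, low1=12, high2=22, high1=30):
    # high1 matters only in that agent 2 can never outbid a high1 opponent.
    # Option A: compute on own problem (2 steps, cost 2), then bid iff
    # high2; wins at price low1 only when agent 1 turns out to have low1.
    eu_own = p2 * (1 - p1) * (high2 - low1) - 2

    # Option B: strong strategic computing -- compute on agent 1's
    # problem first (1 step, cost 1). If agent 1 is high1, withdraw;
    # if low1, pick the best continuation (costs included in each).
    bid_without_own = p2 * (high2 - low1) + (1 - p2) * (low2 - low1) - 1
    compute_own_then_bid = p2 * (high2 - low1) - 3
    eu_strategic = p1 * (-1) + (1 - p1) * max(bid_without_own,
                                              compute_own_then_bid, -1)

    options = {"withdraw": 0.0,
               "compute on own": eu_own,
               "compute on agent 1's": eu_strategic}
    return max(options, key=options.get)
```

Under these assumptions the best action shifts with the probabilities: around (p1, p2) = (0.1, 0.9) computing on agent 1’s problem wins, at (0.1, 0.5) computing on its own problem wins, and at (0.9, 0.9) withdrawing is best, mirroring the region plot on a later slide.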

21 Decision problem of agent 2 in CASE 3
Agent 2’s utility, given low2 < low1 < high2 < high1:
[Figure: agent 2’s decision tree in CASE 3. Decision nodes are agent 2’s choices (bid, withdraw, compute on its own problem, compute on agent 1’s problem); chance nodes resolve agent 1’s and agent 2’s performance profiles. Leaf payoffs net out the computing costs incurred, e.g. high2-low1-2 for computing on its own problem (two steps) and then winning at price low1, or -3 for computing on agent 1’s problem and then on its own and losing.]

22 Under what conditions does strong strategic computing occur?
[Plot: for low2 = 3, low1 = 12, high2 = 22, high1 = 30, the region of the unit square in which strong strategic computing occurs; x-axis: probability that agent 1 will have its high valuation, y-axis: probability that agent 2 will have its high valuation.]

23 Other variants we solved
- Agents cannot pass on computing during the auction & resume computing later during the auction: this can make a difference in English auctions with costly computing, but strong strategic computing is still possible in equilibrium
- Agents can/cannot compute after the auction
- 2-agent bargaining (again with performance profile trees):
Larson, K. and Sandholm, T. Bargaining with Limited Computation: Deliberation Equilibrium. Artificial Intelligence, 132(2).
Larson, K. and Sandholm, T. An Alternating Offers Bargaining Model for Computationally Limited Agents. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Bologna, Italy, July.

24 Designing mechanisms for agents whose valuation deliberation is limited or costly
[Larson & Sandholm AAMAS-05]

25 Mechanism desiderata
- Preference formation-independent: the mechanism should not be involved in agents’ preference formation process (otherwise the revelation principle applies trivially); i.e., agents communicate to the auctioneer in terms of valuations (or expected valuations).
- Deliberation-proof: in equilibrium, no agent should have incentive to strategically deliberate.
- Non-misleading: in equilibrium, no agent should follow a strategy that causes others to believe that its true preferences are impossible. E.g., an agent should not want to report a valuation, and willingness to pay, higher than its true valuation. This is implied by truthfulness (and equivalent to it in the case of direct mechanisms).
Thm. There exists no direct or indirect mechanism (where any agent can affect the allocation regardless of others’ revelations) that satisfies all three of these properties.

26 Recent work on overcoming the impossibility
Restricted settings: without too much asymmetry, strong strategic computing tends to be avoided.
Relaxing properties (but not Non-misleading):
- Relax Deliberation-proof: encourage strategic deliberation.
  - Incentives for the right (cheap) agents to compute & share the right information? Some agents serve as “experts” [Ito et al. AAMAS-03].
  - Cavallo & Parkes [AAAI-08] get efficiency and no deficit in (within-period) ex post equilibrium: agents report deliberation states and the center says which agent deliberates next. Assumptions: only one agent can compute at a time; valuations increase with computation; time is discounted. Without the possibility of strategic deliberation, this is achievable using dynamic VCG [Bergemann & Valimaki 07]. With strategic deliberation, use payments such that equilibrium utilities are exactly as they would be if an agent’s deliberation processes about other agents’ values were in fact about its own value.
- Relax Preference formation-independent: the mechanism guides deliberation.
  - Revealing only some info about agents’ deliberative capabilities?
  - Related to “search” & sequential preference elicitation: generalizing [Cremer et al. 03] to multi-step info gathering & to gathering info about other agents as well.
[Larson AAMAS-06] studies mechanism design for the case where agents can only deliberate on their own valuations.

