1
Effort Games and the Price of Myopia
Michael Zuckerman
Joint work with Yoram Bachrach and Jeff Rosenschein

2
Agenda
– What are effort games
– The complexity of incentives in unanimous effort games
– The price of myopia (PoM) in unanimous effort games
– The PoM in SPGs – simulation results
– Rewards and the Banzhaf power index

3
Effort Games – informally
– Multi-agent environment
– A common project depends on various tasks: there are winning and losing subsets of the tasks
– The probability of carrying out a task is higher when the agent in charge of it exerts effort
– There is a certain cost for exerting effort
– The principal tries to incentivize agents to exert effort
– The principal can only reward agents based on the success of the entire project

4
Effort Games – Example
– A communication network with a source node s and a target node t
– Each agent is in charge of maintenance tasks for some link between the nodes
– If no maintenance is expended on a link, it has probability α of functioning

5
Effort Games – Example (2)
– If maintenance effort is made for a link, it has probability β ≥ α of functioning
– A mechanism is in charge of sending some information from the source to the target
– The mechanism only knows whether it succeeded in sending the information; it does not know which link failed, or whether the failure was due to lack of maintenance

6
Example 2
– Voting domain: only the result is published, not the votes of the individual participants
– Some outsider has a certain desired outcome x of the vote
– The outsider uses lobbying agents, each agent able to convince a certain voter to vote for the desired outcome

7
Example 2 (2)
– When an agent exerts effort, his voter votes for x with high probability β; otherwise the voter votes for x with probability α ≤ β
– The outsider only knows whether x was chosen; he does not know which agents exerted effort, or even who voted for x

8
Related Work
– E. Winter. Incentives and discrimination: β = 1, the only winning coalition is the grand coalition, α is the same for all agents, implementation by iterated elimination of dominated strategies, focuses on the economic question of discrimination
– M. Babaioff, M. Feldman and N. Nisan. Combinatorial agency, 2006: focuses on Nash equilibrium; very different results

9
Preliminaries
An n-player normal form game is given by:
– A set of agents (players) I = {1,…,n}
– For each agent i, a set of pure strategies S_i
– A utility (payoff) function F_i(s_1,…,s_n)
We denote the set of strategy profiles by Σ = S_1 × … × S_n, and denote items in Σ as σ = (s_1,…,s_n). We also denote the incomplete strategy profile σ_{-i} = (s_1,…,s_{i-1},s_{i+1},…,s_n), and denote (s_i′, σ_{-i}) = (s_1,…,s_{i-1},s_i′,s_{i+1},…,s_n).

10
Dominant Strategy
Given a normal form game G, we say agent i's strategy s_x strictly dominates strategy s_y if for any incomplete strategy profile σ_{-i}, F_i(s_x, σ_{-i}) > F_i(s_y, σ_{-i}). We say agent i's strategy s_x is i's dominant strategy if it strictly dominates all of i's other strategies.

11
Dominant Strategy Equilibrium
Given a normal form game G, we say a strategy profile σ = (s_1,…,s_n) is a dominant strategy equilibrium if for any agent i, s_i is a dominant strategy for i.

12
Iterated Elimination of Dominated Strategies
In iterated dominance, strictly dominated strategies are removed from the game and no longer affect future dominance relations. Well-known fact: iterated strict dominance is path-independent. [The slide shows a 2×2 example matrix with players p1, p2 and strategies s_1, s_2; the payoff entries are not recoverable here.]
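The elimination procedure can be sketched for a two-player game. The matrix encoding, function name, and example payoffs below are illustrative, not taken from the slides:

```python
def iterated_elimination(payoffs1, payoffs2):
    """Iterated elimination of strictly dominated pure strategies in a
    two-player normal form game.  payoffs1[r][c] is the row player's payoff,
    payoffs2[r][c] the column player's.  Returns the surviving strategies."""
    rows = list(range(len(payoffs1)))
    cols = list(range(len(payoffs1[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:  # a row is removed if some other row beats it everywhere
            if any(all(payoffs1[r2][c] > payoffs1[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:  # same test for columns, using the column player's payoffs
            if any(all(payoffs2[r][c2] > payoffs2[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols
```

On a prisoner's-dilemma-style matrix, only the dominant strategy pair survives; path-independence of strict dominance means the removal order does not matter.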

13
Simple Coalitional Game
A simple coalitional game is a domain that consists of a set of tasks T and a characteristic function v: 2^T → {0, 1}. A coalition C ⊆ T wins if v(C) = 1, and it loses if v(C) = 0.

14
Weighted Voting Game
A weighted voting game is a simple coalitional game with tasks T = {t_1,…,t_n}, a vector of weights w = (w_1,…,w_n), and a threshold q. We say t_i has the weight w_i. The weight of a coalition C ⊆ T is w(C) = Σ_{t_i ∈ C} w_i. The coalition C wins the game if w(C) ≥ q, and it loses if w(C) < q. Example (Israeli parties): Avoda 25, Kadima 32, Likud 40, with q = 61.

15
The Banzhaf power index
The Banzhaf power index depends on the number of coalitions in which an agent is critical, out of all possible coalitions. It is given by β(v) = (β_1(v),…,β_n(v)), where β_i(v) = (1 / 2^{n−1}) Σ_{C ⊆ T \ {t_i}} [v(C ∪ {t_i}) − v(C)]. In the example above (Avoda 25, Kadima 32, Likud 40, q = 61): β_1(v) = 1/4, β_2(v) = 1/4, β_3(v) = 3/4.
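For intuition, the index of a weighted voting game can be computed by brute force over all coalitions of the other agents; this little check (function name is ours) reproduces the 1/4, 1/4, 3/4 values of the three-party example:

```python
def banzhaf(weights, q):
    """Banzhaf power index of every player in a weighted voting game,
    by enumerating all 2^(n-1) coalitions of the remaining players."""
    n = len(weights)
    indices = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        critical = 0
        for mask in range(1 << (n - 1)):
            w = sum(weights[others[k]] for k in range(n - 1) if mask >> k & 1)
            if w < q <= w + weights[i]:  # i turns a losing coalition into a winning one
                critical += 1
        indices.append(critical / 2 ** (n - 1))
    return indices
```

Enumeration is exponential in n, which foreshadows the hardness result at the end of the talk.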

16
Effort Game domain
– A set I = {1,…,n} of n agents
– A set T = {t_1,…,t_n} of n tasks
– A simple coalitional game G with task set T and with the value function v: 2^T → {0, 1}
– A set of success probability pairs (α_1, β_1),…,(α_n, β_n), such that 0 ≤ α_i ≤ β_i ≤ 1
– A set of effort exertion costs c_1,…,c_n, such that c_i > 0

17
Effort Game domain interpretation
A joint project depends on the completion of certain tasks. Achieving some subsets of the tasks completes the project successfully, while others fail, as determined by the game G. Agent i is responsible for task t_i. i can exert effort on achieving the task, which gives it probability β_i of being completed; or i can shirk (not exert effort), and the task will be completed with probability α_i. The exertion of effort costs the agent c_i.

18
Observations about the Model
Given a coalition C of agents that contribute effort, the probability that task t_i is completed is p_i(C) = β_i if i ∈ C, and p_i(C) = α_i otherwise. Given a subset of tasks T′ and a coalition C of agents that contribute effort, we can calculate the probability that exactly the tasks in T′ are achieved: P(T′ | C) = Π_{t_i ∈ T′} p_i(C) · Π_{t_i ∉ T′} (1 − p_i(C)).

19
Observations about the Model (2)
We can calculate the probability that any winning subset of tasks is achieved: P(C) = Σ_{T′ ⊆ T : v(T′) = 1} P(T′ | C). Given the reward vector r = (r_1,…,r_n), and given that the coalition of agents that exert effort is C, i's expected reward is r_i · P(C).
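These two observations translate directly into a brute-force computation of the success probability (names and the 0/1 value-function encoding are ours):

```python
def success_prob(v, exerting, alpha, beta):
    """Probability that some winning subset of tasks is achieved, given
    the set `exerting` of agents that exert effort.  v maps a frozenset
    of task indices to 0 or 1."""
    n = len(alpha)
    total = 0.0
    for mask in range(1 << n):
        done = frozenset(i for i in range(n) if mask >> i & 1)
        if v(done):  # sum P(exactly `done` achieved) over winning subsets
            p = 1.0
            for i in range(n):
                p_i = beta[i] if i in exerting else alpha[i]
                p *= p_i if i in done else 1 - p_i
            total += p
    return total
```

Agent i's expected reward under reward vector r is then r[i] * success_prob(v, exerting, alpha, beta).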

20
Effort Game – definition
An effort game G_e(r) is the normal form game defined on the above domain, a simple coalitional game G and a reward vector r = (r_1,…,r_n), as follows. In G_e(r), agent i has two strategies: S_i = {exert, shirk}. Given a strategy profile σ, we denote the coalition of agents that exert effort in σ by C(σ). The payoff function of each agent i is F_i(σ) = r_i · P(C(σ)) − c_i if s_i = exert, and F_i(σ) = r_i · P(C(σ)) if s_i = shirk.

21
Incentive-Inducing Schemes
Given an effort game domain G_e and a coalition of agents C′ that the principal wants to exert effort, we define:
– A Dominant Strategy Incentive-Inducing Scheme for C′ is a reward vector r = (r_1,…,r_n) such that for any i ∈ C′, exerting effort is a dominant strategy for i.
– An Iterated Elimination of Dominated Strategies Incentive-Inducing Scheme for C′ is a reward vector r = (r_1,…,r_n) such that in the effort game, after any sequence of eliminating dominated strategies, for any i ∈ C′, the only remaining strategy for i is to exert effort.

22
The Complexity of Incentives
The following problems concern a reward vector r = (r_1,…,r_n), the effort game G_e(r), and a target agent i:
– DSE (DOMINANT STRATEGY EXERT): Given G_e(r) and i, is exert a dominant strategy for i?
– IEE (ITERATED ELIMINATION EXERT): Given G_e(r) and i, is exert the only remaining strategy for i after iterated elimination of dominated strategies?

23
The Complexity of Incentives (2)
The following problems concern the effort game domain D and a coalition C:
– MD-INI (MINIMUM DOMINANT INDUCING INCENTIVES): Given D and C, compute the dominant strategy incentive-inducing scheme r = (r_1,…,r_n) for C that minimizes the sum of payments Σ_i r_i.
– MIE-INI (MINIMUM ITERATED ELIMINATION INDUCING INCENTIVES): Given D and C, compute the iterated elimination of dominated strategies incentive-inducing scheme for C that minimizes the sum of payments Σ_i r_i.

24
Unanimous Effort Games
– The underlying coalitional game G has task set T = {t_1,…,t_n}; v(T) = 1, and v(C) = 0 for all C ≠ T
– Success probability pairs are identical for all the tasks: (α_1, β_1) = … = (α_n, β_n) = (α, β), with α < β
– Agent i is in charge of task t_i, and has an effort exertion cost of c_i

25
Results – MD-INI
Lemma: r_i ≥ c_i / ((β − α) α^{n−1}) if and only if exerting effort is a dominant strategy for i (the worst case for i is when all the other agents shirk, so i's effort raises the success probability by only (β − α) α^{n−1}). Corollary: MD-INI is in P for the unanimous effort games. The reward vector with r_i = c_i / ((β − α) α^{n−1}) is the dominant strategy incentive-inducing scheme that minimizes the sum of payments.
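The closed form on this slide is garbled in the transcript; the sketch below uses our reconstruction (worst case for agent i: everyone else shirks, so i's effort changes the success probability by (β − α)·α^(n−1), which the reward must cover):

```python
def md_ini_rewards(costs, alpha, beta):
    """Cheapest dominant-strategy incentive-inducing scheme for the
    unanimous effort game (reconstructed lemma, not a verbatim quote):
    r_i = c_i / ((beta - alpha) * alpha^(n-1))."""
    n = len(costs)
    return [c / ((beta - alpha) * alpha ** (n - 1)) for c in costs]
```

With these rewards, exerting is (weakly) profitable for each agent even against the least favorable strategies of the others.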

26
Results – IE-INI
Lemma: Let i and j be two agents, and let r_i and r_j be their rewards, such that r_i (β − α) α^{n−1} ≥ c_i and r_j (β − α) β α^{n−2} ≥ c_j. Then under iterated elimination of dominated strategies, the only remaining strategy for both i and j is to exert effort. Theorem: IE-INI is in P for the effort game in the above-mentioned weighted voting domain. For any reordering π of the agents, a reward vector r_π in which r_{π(i)} = c_{π(i)} / ((β − α) β^{i−1} α^{n−i}) holds for any agent π(i) is an iterated elimination incentive-inducing scheme.

27
Results – MIE-INI
Theorem: MIE-INI is in P for the effort games in the above unanimous weighted voting domain. The reward vector r_π that minimizes the sum of payments is achieved by sorting the agents by their weights w_i = c_i, from the smallest to the largest.
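A sketch of the resulting scheme, under our reconstruction of the per-step thresholds (the agent eliminated at step k may assume the k earlier agents already exert and the remaining ones shirk); the function name and formula are assumptions, not a quote from the talk:

```python
def mie_ini_rewards(costs, alpha, beta):
    """Reconstructed minimum iterated-elimination scheme for the unanimous
    effort game: process agents in increasing cost order, paying the agent
    at rank k (0-based) c / ((beta - alpha) * beta^k * alpha^(n-1-k))."""
    n = len(costs)
    order = sorted(range(n), key=lambda i: costs[i])  # cheapest agents first
    r = [0.0] * n
    for k, i in enumerate(order):
        r[i] = costs[i] / ((beta - alpha) * beta ** k * alpha ** (n - 1 - k))
    return r
```

Since β > α, later factors β^k α^(n−1−k) are larger, so putting the cheap agents where the denominator is small minimizes the total payment; the total is always below the dominant-strategy total, which uses α^(n−1) for everyone.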

28
The Price of Myopia (PoM)
Denote by R_DS the minimum sum of rewards Σ_i r_i such that r = (r_1,…,r_n) is a dominant strategy incentive-inducing scheme. Denote by R_IE the minimum sum of rewards Σ_i r_i such that r = (r_1,…,r_n) is an iterated elimination of dominated strategies incentive-inducing scheme. Clearly R_DS ≥ R_IE, since the dominant strategy requirement must hold against all strategy profiles of the other agents, while iterated elimination only takes a subset of them into account. How large can the ratio R_DS / R_IE be? We call this ratio "the price of myopia".

29
PoM in Unanimous Effort Games
In the underlying weighted voting game G, a coalition wins if it contains all the tasks, and loses otherwise. All the agents have the same cost c for exerting effort. Success probability pairs are identical for all tasks: (α, β). Theorem: in the above setting, PoM = n (β − α) β^{n−1} / (β^n − α^n), where n ≥ 2 is the number of players.
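The ratio can be checked numerically. Both formulas below are our reconstruction: the dominant-strategy scheme pays each agent c / ((β − α) α^(n−1)), the iterated-elimination scheme pays the i-th agent c / ((β − α) β^(i−1) α^(n−i)), and equal costs cancel in the ratio:

```python
def pom_unanimous(n, alpha, beta):
    """PoM = R_DS / R_IE for the unanimous effort game with n agents and
    equal costs, computed from the (reconstructed) per-agent thresholds."""
    r_ds = n / ((beta - alpha) * alpha ** (n - 1))
    r_ie = sum(1 / ((beta - alpha) * beta ** (i - 1) * alpha ** (n - i))
               for i in range(1, n + 1))
    return r_ds / r_ie

def pom_closed_form(n, alpha, beta):
    """Summing the geometric series sum_i (alpha/beta)^(i-1) gives
    n (beta - alpha) beta^(n-1) / (beta^n - alpha^n)."""
    return n * (beta - alpha) * beta ** (n - 1) / (beta ** n - alpha ** n)
```

For n = 2, α = 0.5, β = 0.9 both give 9/7 ≈ 1.286: myopic (dominant-strategy) incentives cost about 29% more than incentives that trust agents to perform iterated elimination.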

30
Series-Parallel Graphs (SPGs)
SPGs have two distinguished vertices, s and t, called source and sink, respectively. Start with a set of copies of the single-edge graph K_2; larger SPGs are built by series composition (identifying the sink of one SPG with the source of another) and parallel composition (merging the two sources and merging the two sinks).

31
Effort Games over SPGs
An SPG G = (V, E) represents a communication network. For each edge e_i there is an agent i, responsible for maintenance tasks for e_i. i can exert effort at a certain cost c_i to maintain the link e_i, and then the link will have probability β of functioning.

32
Effort Games over SPGs (2)
Otherwise (if the agent does not exert effort on maintaining the link), it will have probability α of functioning. The winning coalitions are those containing a path from the source to the sink. This setting generalizes the unanimous weighted voting games.
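For SPGs the probability that a winning coalition of links functions decomposes along the series/parallel structure, so no subset enumeration is needed. The tuple encoding of the composition tree is ours (the slides draw it as a picture):

```python
def spg_works(tree, p):
    """Probability that the s-t connection functions, given independent
    per-edge functioning probabilities p.  tree is either an edge index,
    or ('s', left, right) / ('p', left, right) for series / parallel
    composition of two sub-SPGs."""
    if isinstance(tree, int):
        return p[tree]
    op, left, right = tree
    a, b = spg_works(left, p), spg_works(right, p)
    # series needs both sides; parallel needs at least one side
    return a * b if op == 's' else 1 - (1 - a) * (1 - b)
```

Setting p[i] = β for exerting agents and p[i] = α for shirking ones gives P(C) for this family of games in linear time.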

33
Simulation Setting
All our simulations were carried out on SPGs with 7 edges, whose composition is described by a tree (shown on the slide): the leaves of the tree represent the edges of the SPG, and each inner node represents the SPG that is the series or parallel composition of its children.

34
Simulation Setting (2)
We sampled SPGs uniformly at random, giving probability ½ to series composition and probability ½ to parallel composition at each inner node of the above tree. We computed the PoM for α = 0.1, 0.2,…,0.8 and β = α + 0.1, α + 0.2,…,0.9. The costs c_i were sampled uniformly at random in [0.0, 100). For each pair (α, β), 500 experiments were made in order to find the average PoM and the standard deviation of the PoM.

35
Simulation Results
[Table: average PoM, rows indexed by α and columns by β; the values are not recoverable from this transcript.]

36
Simulation Results (2)
[Table: standard deviation of the PoM, rows indexed by α and columns by β; the values are not recoverable from this transcript.]

37
Simulation Results Interpretation
As one can see from the first table, the larger the distance between α and β, the higher the PoM. When there are large differences in the probabilities, the structure of the graph is more important. This fits our expectations.

38
SPG with Parallel Composition
Theorem: Let G be an SPG obtained by parallel composition of a set of copies of the single-edge graph K_2. As before, each agent is responsible for a single edge, and a winning coalition is one that contains a path (an edge) connecting the source and target vertices. Then we have R_DS = R_IE.

39
Rewards and the Banzhaf power index
Theorem: Let D be an effort game domain where for agent i we have α_i = 0 and β_i = 1, and for all j ≠ i we have α_j = β_j = ½, and let r = (r_1,…,r_n) be a reward vector. Exerting effort is a dominant strategy for i in G_e(r) if and only if r_i · β_i(v) ≥ c_i (where β_i(v) is the Banzhaf power index of t_i in the underlying coalitional game G, with the value function v).
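In this special domain, every agent j ≠ i completes its task with probability ½ regardless of effort, so i's gain from exerting is exactly the Banzhaf index, and the reward threshold follows. The sketch below assumes the "iff r_i · β_i(v) ≥ c_i" reading of the (garbled) condition; names are ours:

```python
def banzhaf_index(v, n, i):
    """Banzhaf power index of task t_i; v maps a frozenset of task
    indices to 0 or 1."""
    rest = [j for j in range(n) if j != i]
    total = 0
    for mask in range(1 << len(rest)):
        done = frozenset(rest[k] for k in range(len(rest)) if mask >> k & 1)
        total += v(done | {i}) - v(done)  # is i critical for this coalition?
    return total / 2 ** (n - 1)

def exert_threshold(v, n, i, cost):
    """Smallest reward making exert dominant for i when alpha_i = 0,
    beta_i = 1, and alpha_j = beta_j = 1/2 for all j != i."""
    return cost / banzhaf_index(v, n, i)
```

Because each coalition of the other tasks occurs with probability (½)^(n−1), the expected marginal success probability is the Banzhaf sum, tying the incentive problem to power-index computation.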

40
DSE (DOMINANT STRATEGY EXERT) hardness result
Theorem: DSE is computationally as hard as calculating the Banzhaf power index in the underlying coalitional game G. Corollary: It is NP-hard to test whether exerting effort is a dominant strategy in an effort game where the underlying coalitional game is a weighted voting game.

41
Conclusions
– Defined effort games
– Showed how to compute optimal reward schemes in unanimous effort games
– Defined the PoM
– Provided results about the PoM in unanimous weighted voting games
– Gave simulation results about the PoM in SPGs
– Connected the complexity of computing incentives to the complexity of computing the Banzhaf power index

42
Future Work
– Complexity of incentives for other classes of underlying games
– Approximation of incentive-inducing schemes
– The PoM in various classes of effort games
