
1 Inefficiency of equilibria, and potential games Computational game theory Spring 2008 Michal Feldman

2 Inefficiency of equilibria
The outcome of rational behavior might be inefficient. How do we measure inefficiency?
– E.g., the prisoner's dilemma (payoff matrix below)
Define an objective function:
– Social welfare (= sum of players' payoffs): utilitarian
– Maximize min_i u_i: egalitarian
– …
Prisoner's dilemma (row payoff, column payoff):
0,5  3,3
1,1  5,0

3 Inefficiency of equilibria
To measure inefficiency we need to specify:
– an objective function
– a definition of approximately optimal
– a definition of an equilibrium
– if multiple equilibria exist, which one we consider

4 Common measures
Price of anarchy (poa) = cost of worst NE / cost of OPT
Price of stability (pos) = cost of best NE / cost of OPT
– Note: poa, pos ≥ 1 (by definition)
Approximation ratio: measures the price of limited computational resources
Competitive ratio: measures the price of not knowing the future
Price of anarchy: measures the price of lack of coordination

5 Price of anarchy
Example: in the prisoner's dilemma (matrix below), poa = pos = 3
– But these ratios can be made as large as desired
We wish to find games in which pos or poa are bounded
– The NE then "approximates" OPT
– This might explain the Internet's efficiency
Suppose we define poa and pos w.r.t. NE in pure strategies; then we first need to prove that a pure NE exists.
Prisoner's dilemma (row payoff, column payoff):
0,5  3,3
1,1  5,0
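
A minimal sketch of these measures on the prisoner's dilemma, assuming the utilitarian (social welfare) objective; since the matrix holds payoffs to be maximized rather than costs, the ratios are computed here as OPT welfare over equilibrium welfare. The encoding and helper names are illustrative, not part of the slides.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """payoffs[(i, j)] = (row payoff, column payoff) for row action i, column action j."""
    rows = sorted({i for i, _ in payoffs})
    cols = sorted({j for _, j in payoffs})
    return [(i, j) for i, j in product(rows, cols)
            if all(payoffs[(i, j)][0] >= payoffs[(i2, j)][0] for i2 in rows)
            and all(payoffs[(i, j)][1] >= payoffs[(i, j2)][1] for j2 in cols)]

# Prisoner's dilemma from the slide (C = cooperate, D = defect).
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

welfare = {s: sum(pd[s]) for s in pd}
opt = max(welfare.values())
ne_welfare = [welfare[s] for s in pure_nash_equilibria(pd)]

poa = opt / min(ne_welfare)   # worst equilibrium
pos = opt / max(ne_welfare)   # best equilibrium
print(poa, pos)               # 3.0 3.0: the unique pure NE (D, D) has welfare 2, OPT has 6
```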

6 Max-cut game
Given an undirected graph G = (V,E), the players are the nodes v ∈ V
An edge (u,v) means u "hates" v (and vice versa)
Strategy of node i: s_i ∈ {Black, White}
Utility of node i: the number of its neighbors of a different color
Lemma: for every graph G, the corresponding game has a pure NE

7 Proof 1
Claim: an optimal max-cut defines a NE
Proof:
– Define the players' strategies by the cut (i.e., one side is Black, the other side is White)
– Suppose a player i wishes to switch strategies: i's gain from switching equals the improvement in the value of the cut, so the cut value would strictly increase
– This contradicts the optimality of the cut

8 Proof 2
Algorithm greedy-find-cut (GFC):
– Start with an arbitrary partition of the nodes into two sets
– If there is a node with more neighbors on its own side than on the other side, move it to the other side (repeat until no such node exists)
Claim 1: GFC is a 2-approximation to max-cut and runs in polynomial time
Proof:
– Poly time: GFC terminates within at most |E| steps (every step improves the cut value by at least 1, and |E| is a trivial upper bound on the cut value)
– 2-approx.: each node ends up with at least as many neighbors on the other side as on its own side, so at least |E|/2 edges are in the cut (#edges in the cut ≥ #edges not in the cut)
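
A runnable sketch of GFC on a small example, following the description above; the graph encoding and function name are illustrative.

```python
def greedy_find_cut(nodes, edges):
    """Greedy local search for max-cut: while some node has more neighbors
    on its own side than on the other side, move it across the cut."""
    side = {v: 0 for v in nodes}              # arbitrary initial partition
    neighbors = {v: [] for v in nodes}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)

    improved = True
    while improved:
        improved = False
        for v in nodes:
            same = sum(1 for w in neighbors[v] if side[w] == side[v])
            if same > len(neighbors[v]) - same:   # switching strictly increases the cut
                side[v] = 1 - side[v]
                improved = True

    cut = [(u, w) for u, w in edges if side[u] != side[w]]
    return side, cut

# Tiny example: on a 4-cycle, every edge ends up in the cut.
side, cut = greedy_find_cut([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
print(side, len(cut))
```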

9 Proof 2 (cont'd)
Claim 2: the cut obtained by GFC defines a NE
Proof: immediate, since each player stops only when its strategy is a best response to the other players' strategies
Conclusion: the max-cut game admits a NE in pure strategies

10 Potential games
Definition: a game is a potential game if there exists a function Φ : S_1 × … × S_n → R s.t. for all i, s_i, s_{-i}, s_i':
c_i(s_i, s_{-i}) > c_i(s_i', s_{-i}) IFF Φ(s_i, s_{-i}) > Φ(s_i', s_{-i})
Note: G is an exact potential game if c_i(s_i, s_{-i}) − c_i(s_i', s_{-i}) = Φ(s_i, s_{-i}) − Φ(s_i', s_{-i})
Example: max-cut is an exact potential game, where Φ is the cut size
– Unfortunately, Φ is not always so natural

11 Potential games
Lemma: a game is a potential game IFF local improvements always terminate
Proof sketch:
– Define a directed graph with a node for each pure strategy profile
– A directed edge (u,v) means that v (which differs from u only in the strategy of a single player i) is a (strictly) better action for i, given the strategies of the other players
– A potential function exists IFF this graph contains no cycle
  – If a cycle exists, there is no potential function; e.g., a cycle (a,b,c,a) would force f(a) < f(b) < f(c) < f(a)
  – If no cycle exists, a potential function can easily be defined. WHY? (One construction is sketched below.)
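
A small sketch of that construction, assuming a finite game given by per-player cost functions. It builds the improvement graph, returns None if an improvement cycle exists, and otherwise returns one valid (generalized ordinal) potential: the length of the longest improvement path out of each profile, which strictly drops along every improvement edge. The representation and names are illustrative.

```python
from itertools import product

def improvement_graph(actions, cost):
    """actions: one action list per player; cost(i, profile) = player i's cost.
    Edge s -> t if t differs from s only in player i's action and strictly lowers i's cost."""
    profiles = list(product(*actions))
    edges = {s: [] for s in profiles}
    for s in profiles:
        for i, acts in enumerate(actions):
            for a in acts:
                t = s[:i] + (a,) + s[i + 1:]
                if a != s[i] and cost(i, t) < cost(i, s):
                    edges[s].append(t)
    return edges

def potential_or_cycle(edges):
    """Return a potential (longest improvement path from each profile) if the
    improvement graph is acyclic, or None if it contains a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour, phi = {s: WHITE for s in edges}, {}

    def dfs(s):
        colour[s] = GREY
        best = 0
        for t in edges[s]:
            if colour[t] == GREY:                 # back edge: improvement cycle
                return None
            if colour[t] == WHITE and dfs(t) is None:
                return None
            best = max(best, 1 + phi[t])
        colour[s], phi[s] = BLACK, best
        return best

    for s in edges:
        if colour[s] == WHITE and dfs(s) is None:
            return None
    return phi

# Matching pennies written as costs: it has an improvement cycle, so no potential exists.
mp_cost = {("H", "H"): (0, 1), ("H", "T"): (1, 0),
           ("T", "H"): (1, 0), ("T", "T"): (0, 1)}
g = improvement_graph([["H", "T"], ["H", "T"]], lambda i, s: mp_cost[s][i])
print(potential_or_cycle(g))   # None
```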

12 Examples
(The arrows in the original slide mark the direction of local improvement.)
Payoff matrices (row payoff, column payoff):
Matching pennies:
 1,-1  -1,1
-1,1   1,-1
Prisoner's dilemma:
0,5  3,3
1,1  5,0
Coordination game:
0,0  2,2
3,3  0,0
Battle of the sexes:
0,0  2,1
1,2  0,0
Which are potential games? Exact potential games? Are the potential functions unique?

13 Properties of potential games
– Admit a Nash equilibrium
– Best-response dynamics converge to a NE
– The price of stability is bounded

14 Existence of a pure NE
Theorem: every potential game admits a pure NE
Proof: we show that the profile minimizing Φ is a NE
– Let s be a profile minimizing Φ
– Suppose by contradiction that it is not a NE, so some player i can improve by deviating, reaching a new profile s'
– Φ(s') − Φ(s) = c_i(s') − c_i(s) < 0
– Thus Φ(s') < Φ(s), contradicting the fact that s minimizes Φ
More generally, the set of pure-strategy Nash equilibria is exactly the set of local minima of the potential function
– Local minimum: no player can decrease the potential function by herself

15 Best-response dynamics converge to a NE
Best-response dynamics:
– Start with any strategy profile
– If some player is not best-responding, switch that player's strategy to a better response (this must decrease the potential)
– Terminate when no player can improve (the resulting profile is a NE)
– Alas, there is no general guarantee on the convergence rate
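
A minimal sketch of this loop, assuming a finite game given by per-player cost functions; the profile encoding and the sample coordination game (written as costs) are illustrative. On a potential game the loop is guaranteed to stop; the max_steps cap only guards against non-potential inputs.

```python
def best_response_dynamics(actions, cost, start, max_steps=10_000):
    """Repeatedly move one player who is not best-responding to her best response.
    In a finite potential game this terminates at a pure Nash equilibrium."""
    s = tuple(start)
    for _ in range(max_steps):
        moved = False
        for i, acts in enumerate(actions):
            best = min(acts, key=lambda a: cost(i, s[:i] + (a,) + s[i + 1:]))
            if cost(i, s[:i] + (best,) + s[i + 1:]) < cost(i, s):
                s = s[:i] + (best,) + s[i + 1:]   # one deviation per step
                moved = True
                break
        if not moved:
            return s                              # no profitable deviation: pure NE
    raise RuntimeError("did not converge (perhaps not a potential game)")

# Coordination game written as costs; which NE is reached depends on the start.
coord = {("A", "A"): (0, 0), ("A", "B"): (3, 3),
         ("B", "A"): (3, 3), ("B", "B"): (1, 1)}
print(best_response_dynamics([["A", "B"], ["A", "B"]],
                             lambda i, s: coord[s][i], ("A", "B")))   # ('B', 'B')
```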

16 Multicast Routing
Multicast routing: given a directed graph G = (V, E) with edge costs c_e ≥ 0, a source node s, and k agents located at terminal nodes t_1, …, t_k. Agent j must construct a path P_j from node s to its terminal t_j.
Fair share: if x agents use edge e, they each pay c_e / x.
(Slides on cost sharing are based on slides by Kevin Wayne. Copyright © 2005 Pearson-Addison Wesley. All rights reserved.)

17 Multicast Routing
(Figure: the example network with source s, intermediate node v, and terminals t_1, t_2. Edge costs: s→t_1 = 4 and s→t_2 = 8 (the agents' outer paths), and s→v = 5, v→t_1 = 1, v→t_2 = 1 (the middle paths). The accompanying table tracks each agent's payment: agent 1 pays 4 on the outer path, 5 + 1 on the middle path alone, and 5/2 + 1 when the middle edge is shared; agent 2 pays 8, 5 + 1, and 5/2 + 1 respectively.)

18 Nash Equilibrium
Example:
– Two agents start with the outer paths.
– Agent 1 has no incentive to switch paths (since 4 < 5 + 1).
– Agent 2, however, prefers the middle path (since 5 + 1 < 8) and switches.
– Once this happens, agent 1 prefers the middle path too (since 4 > 5/2 + 1).
– Both agents using the middle path is a Nash equilibrium.
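
A small sketch that reproduces these fair-share calculations on the example network; the edge labels, dictionaries, and helper names are illustrative, not part of the slides.

```python
from collections import Counter

# Edge costs of the example: s->t1 = 4, s->t2 = 8, s->v = 5, v->t1 = 1, v->t2 = 1.
cost = {"s-t1": 4, "s-t2": 8, "s-v": 5, "v-t1": 1, "v-t2": 1}
paths = {1: {"outer": ["s-t1"], "middle": ["s-v", "v-t1"]},
         2: {"outer": ["s-t2"], "middle": ["s-v", "v-t2"]}}

def agent_cost(j, profile):
    """Fair share: each edge's cost is split equally among the agents using it."""
    load = Counter(e for i in profile for e in paths[i][profile[i]])
    return sum(cost[e] / load[e] for e in paths[j][profile[j]])

def is_nash(profile):
    return all(agent_cost(j, profile) <= agent_cost(j, {**profile, j: alt})
               for j in profile for alt in paths[j])

print(agent_cost(1, {1: "middle", 2: "middle"}))   # 3.5 = 5/2 + 1
print(is_nash({1: "outer", 2: "outer"}))           # False: agent 2 prefers 5 + 1 < 8
print(is_nash({1: "middle", 2: "middle"}))         # True
```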

19 Recall price of anarchy and stability
Price of anarchy (poa) = cost of worst NE / cost of OPT
Price of stability (pos) = cost of best NE / cost of OPT

20 Social Optimum
Social optimum: minimizes the total cost of all agents.
Observation: in general, there can be many Nash equilibria. Even when the equilibrium is unique, it does not necessarily equal the social optimum.
Example 1 (unique NE ≠ OPT): source s, node v, terminals t_1, t_2, with edge costs s→t_1 = 3, s→t_2 = 5, s→v = 5, v→t_1 = 1, v→t_2 = 1. The social optimum routes both agents through v, at cost 7; the unique Nash equilibrium has both agents on their direct edges, at cost 8. Here pos = poa = 8/7.
Example 2 (many NE): k agents and two parallel edges from s to t, of costs 1 + ε and k. The social optimum is 1 + ε. Nash equilibrium A (everyone on the cheap edge) costs 1 + ε; Nash equilibrium B (everyone on the expensive edge) costs k. Here pos = 1 and poa = k / (1 + ε) ≈ k.

21 Price of anarchy
Claim: poa ≤ k
Proof:
– Let s be the worst NE
– Suppose by contradiction that c(s) > k · OPT
– Then there exists a player i s.t. c_i(s) > OPT
– But i can deviate to her path in the optimal solution, paying at most its full cost, which is at most OPT; this contradicts s being a NE
Note: the bound is tight (see the lower bound on the previous slide)

22 Price of Stability
What is the price of stability of multicast routing?
Lower bound of (roughly) log k:
(Figure: terminals t_1, …, t_k; each t_j has a direct edge to s of cost 1/j, and an edge of cost 0 to a common node that is connected to s by an edge of cost 1 + ε.)
Social optimum: everyone takes the bottom (shared) paths, at total cost 1 + ε.
Unique Nash equilibrium: everyone takes the top (direct) paths, at total cost 1 + 1/2 + … + 1/k = H(k).
Price of stability: H(k) / (1 + ε).
The matching upper bound will follow.

23 Finding a potential function
Attempt 1: let Φ(s) = Σ_{j=1..k} (cost of agent j), i.e., the total cost, be the potential function.
A problem: the potential might increase when some agent improves.
Example: two parallel s–t edges of costs 4 and 1, with 3 agents. When all 3 agents use the cost-4 edge, each pays 4/3 and the potential (total cost) is 4. After one agent moves to the cost-1 edge (paying 1 < 4/3, an improvement), the potential increases to 5.

24 Finding a potential function
Attempt 2: consider a set of paths P_1, …, P_k.
– Let x_e denote the number of paths that use edge e.
– Let Φ(P_1, …, P_k) = Σ_{e∈E} c_e · H(x_e) be the potential function, where H(0) = 0.
– Consider agent j switching from path P_j to path P_j'.
– Change in agent j's cost: Σ_{e ∈ P_j' \ P_j} c_e / (x_e + 1) − Σ_{e ∈ P_j \ P_j'} c_e / x_e
  (she joins the sharing on the new edges and leaves it on the old ones).

25 Potential function
– Φ increases by Σ_{e ∈ P_j' \ P_j} c_e · [H(x_e + 1) − H(x_e)] = Σ_{e ∈ P_j' \ P_j} c_e / (x_e + 1)
– Φ decreases by Σ_{e ∈ P_j \ P_j'} c_e · [H(x_e) − H(x_e − 1)] = Σ_{e ∈ P_j \ P_j'} c_e / x_e
– Thus, the net change in Φ is identical to the net change in player j's cost
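
A tiny numerical check of this exact-potential property on the two-parallel-edge example from slide 23; the encoding is illustrative.

```python
from collections import Counter

def harmonic(n):
    return sum(1.0 / i for i in range(1, n + 1))      # H(0) = 0

def phi(profile, cost):
    """Potential for fair sharing: sum over used edges of c_e * H(x_e)."""
    load = Counter(e for path in profile for e in path)
    return sum(cost[e] * harmonic(load[e]) for e in load)

def agent_cost(j, profile, cost):
    load = Counter(e for path in profile for e in path)
    return sum(cost[e] / load[e] for e in profile[j])

# Two parallel s-t edges of costs 4 and 1, three agents (the example from slide 23).
cost = {"expensive": 4, "cheap": 1}
before = [("expensive",), ("expensive",), ("expensive",)]
after = [("cheap",), ("expensive",), ("expensive",)]    # agent 0 deviates

d_cost = agent_cost(0, after, cost) - agent_cost(0, before, cost)
d_phi = phi(after, cost) - phi(before, cost)
print(d_cost, d_phi)   # both equal 1 - 4/3 = -1/3: the two changes coincide
```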

26 Bounding the Price of Stability
Claim: let C(P_1, …, P_k) denote the total cost of selecting paths P_1, …, P_k. For any set of paths P_1, …, P_k we have
C(P_1, …, P_k) ≤ Φ(P_1, …, P_k) ≤ H(k) · C(P_1, …, P_k)
Proof:
– Let x_e denote the number of paths containing edge e, and let E+ denote the set of edges that belong to at least one of the paths.
– C(P_1, …, P_k) = Σ_{e∈E+} c_e (the x_e agents on edge e together pay exactly c_e), while Φ(P_1, …, P_k) = Σ_{e∈E+} c_e · H(x_e).
– Since 1 ≤ H(x_e) ≤ H(k) for every e ∈ E+, the claim follows.

27 Bounding the Price of Stability
Theorem: there is a Nash equilibrium whose total cost exceeds that of the social optimum by at most a factor of H(k) (i.e., the price of stability is ≤ H(k)).
Proof:
– Let (P_1*, …, P_k*) denote the set of socially optimal paths.
– Run the best-response dynamics algorithm starting from P*; it terminates at some NE (P_1, …, P_k).
– Since Φ only decreases along the dynamics, Φ(P_1, …, P_k) ≤ Φ(P_1*, …, P_k*).
– Therefore C(P_1, …, P_k) ≤ Φ(P_1, …, P_k) ≤ Φ(P_1*, …, P_k*) ≤ H(k) · C(P_1*, …, P_k*), where the first inequality is the previous claim applied to P and the last is the previous claim applied to P*.
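
A compact sketch of this argument on the 8/7 instance from slide 20: run better-response dynamics from the social optimum and check the resulting equilibrium against the H(k) bound. Edge labels and helper names are illustrative.

```python
from collections import Counter

# The 8/7 instance from slide 20: s->t1 = 3, s->t2 = 5, s->v = 5, v->t1 = 1, v->t2 = 1.
cost = {"s-t1": 3, "s-t2": 5, "s-v": 5, "v-t1": 1, "v-t2": 1}
paths = {1: {"direct": ["s-t1"], "shared": ["s-v", "v-t1"]},
         2: {"direct": ["s-t2"], "shared": ["s-v", "v-t2"]}}

def agent_cost(j, prof):
    load = Counter(e for i in prof for e in paths[i][prof[i]])
    return sum(cost[e] / load[e] for e in paths[j][prof[j]])

def total_cost(prof):
    return sum(agent_cost(j, prof) for j in prof)

prof = {1: "shared", 2: "shared"}          # the social optimum, of cost 7
opt = total_cost(prof)

improved = True                            # better-response dynamics started from OPT
while improved:
    improved = False
    for j in (1, 2):
        for alt in paths[j]:
            if agent_cost(j, {**prof, j: alt}) < agent_cost(j, prof):
                prof, improved = {**prof, j: alt}, True

H2 = 1 + 1 / 2                             # H(k) for k = 2 agents
print(prof, total_cost(prof), opt)         # reaches the NE (direct, direct) of cost 8
print(total_cost(prof) <= H2 * opt)        # True: C(NE) <= H(k) * OPT
```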

28 Local search and PLS (polynomial local search)
Local optimization problem: find a local optimum (i.e., a solution with no improvement in its neighborhood).
A local optimization problem is in PLS if there exists a polynomial-time oracle that, for every instance and solution s, decides whether s is a local optimum and, if not, returns a better solution s' in the neighborhood of s.
Finding a NE in potential games is in PLS:
– Define the neighborhood of a profile s to be the profiles obtained by a deviation of a single player
– s is a local optimum of the objective c(s) = Φ(s) iff s is a NE

29 Congestion games [Rosenthal 1973]
There is a set of resources R.
Agent i's set of actions (pure strategies) A_i is a subset of 2^R, representing which subsets of resources would meet her needs.
– Note: different agents may need different resources
There exist cost functions c_r : {1, 2, 3, …} → R such that agent i's cost for a = (a_i, a_{-i}) is Σ_{r ∈ a_i} c_r(n_r(a))
– n_r(a) is the number of agents that chose r as one of their resources in the profile a

30 Example: multicast routing
Resources = edges; each resource r has a cost c_r.
(Figure: the network from slide 17 with edges labeled A = s→t_1 of cost 4, B = s→t_2 of cost 8, C = s→v of cost 5, D = v→t_1 of cost 1, E = v→t_2 of cost 1.)
Player 1's action set: {{A}, {C,D}}
Player 2's action set: {{B}, {C,E}}
For all resources r, c_r(n_r(a)) = c_r / n_r(a)

31 Every congestion game is an exact potential game
Use the potential Φ(a) = Σ_r Σ_{i=1..n_r(a)} c_r(i)
– One interpretation: the sum of the costs that the agents would have incurred if each agent were unaffected by all later agents
Why is this a correct potential function? Suppose an agent changes her action: she stops using some resources (R−) and starts using others (R+). The increase in the agent's cost equals
Σ_{r ∈ R+} c_r(n_r(a) + 1) − Σ_{r ∈ R−} c_r(n_r(a))
This is exactly the change in the potential function above.
– Conclusion: congestion games are potential games
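
A short sketch that spot-checks this exact-potential property on a small made-up congestion game; the resources, action sets, and cost functions are illustrative, not from the slides.

```python
from collections import Counter
import random

def rosenthal_potential(profile, cost_fns):
    """Phi(a) = sum over resources r of sum_{i=1..n_r(a)} c_r(i)."""
    load = Counter(r for action in profile for r in action)
    return sum(sum(cost_fns[r](i) for i in range(1, load[r] + 1)) for r in load)

def player_cost(j, profile, cost_fns):
    load = Counter(r for action in profile for r in action)
    return sum(cost_fns[r](load[r]) for r in profile[j])

# Made-up game: 3 players, resources a..d, per-unit cost of r grows linearly with its load.
resources = "abcd"
cost_fns = {r: (lambda k, base=base: base * k) for base, r in enumerate(resources, start=1)}
action_sets = [[("a",), ("b", "c")], [("b",), ("c", "d")], [("a", "d"), ("c",)]]

random.seed(0)
for _ in range(100):                      # random single-player deviations
    prof = [random.choice(acts) for acts in action_sets]
    j = random.randrange(3)
    new = prof[:j] + [random.choice(action_sets[j])] + prof[j + 1:]
    d_cost = player_cost(j, new, cost_fns) - player_cost(j, prof, cost_fns)
    d_phi = rosenthal_potential(new, cost_fns) - rosenthal_potential(prof, cost_fns)
    assert abs(d_cost - d_phi) < 1e-9     # change in deviator's cost == change in potential
print("exact potential property holds on all sampled deviations")
```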

