
1 How Bad is Selfish Routing? By Tim Roughgarden and Eva Tardos. Presented by Alex Kogan

2 Abstract In many large-scale communication networks, it is impossible for a central authority to regulate traffic so as to minimize the total latency. –We assume that each network user selfishly routes its traffic on the minimum-latency path available. –In this case, the TOTAL latency is not minimized! –The paper quantifies the performance degradation caused by the lack of central regulation (under some assumptions about the latency of each edge).

3 The Performance Problem Given the rate of traffic between each pair of nodes, find an assignment of traffic to paths so that the total latency is minimized. –The total latency is the sum of all travel times –The latency of a single link is load-dependent (the network is congested)! In practice, it’s nearly impossible to impose optimal routing strategies, so users act in a selfish manner.

4 The Central Question How much does the network performance suffer from the lack of regulation? –We assume that the users are selfish, but NOT malicious. –We assume that each user has complete information about the current traffic load over the network.

5 Formalizing the Question We can view network users as independent agents in a non-cooperative game. –Each agent is given the link congestion caused by the rest of the network users. –Each agent controls a negligible fraction of the overall traffic. We expect the routes chosen by the agents to form a Nash equilibrium.

6 Formalizing the Question (cont) We’re interested in comparing the total latency of a Nash equilibrium with that of the optimal assignment of traffic to paths. –Nash equilibrium doesn’t in general optimize the social welfare (see the Prisoner’s Dilemma).

7 Formalizing the Question (cont) A feasible assignment of traffic to paths in the network can be modeled as a network flow. –The amount of flow between a pair of nodes is equal to the rate of traffic between them. –At Nash equilibrium, all flow paths between a given source & destination have the same latency. –If the latency of each link is a continuous and non-decreasing function of the flow on the link, then a flow corresponding to a Nash equilibrium always exists (and all such flows have the same total latency).

8 Formalizing the Question (cont) We can study the cost of selfish routing via the following question: –Among all networks with continuous, nondecreasing link latency functions, what is the worst-case ratio between the total latency of a Nash flow and that of an optimal flow with the minimal total latency?

9 The Answer If the latency of each edge is a linear function of the edge congestion, then a Nash flow has total latency at most 4/3 that of the optimal flow. For general continuous, nondecreasing link latency functions, the ratio is unbounded! BUT, the total latency of a Nash flow is at most the total latency of an optimal flow routing twice as much traffic (a bicriteria result).

10 Linear Latency Example (1): Braess's paradox Adding a zero-latency edge may negatively impact all the agents! The network: source S, sink T, intermediate nodes v and w; edges S→v and w→T have latency l(x) = x, edges S→w and v→T have constant latency 1, and one unit of traffic is routed from S to T. –Before adding the edge: traffic splits evenly between the paths S→v→T and S→w→T; the optimal latency and the Nash latency are both 1½. –After adding a zero-latency edge v→w: every agent switches to the path S→v→w→T, so the Nash latency becomes 2 while the optimal latency stays 1½. –The ratio: 2 / 1½ = 4/3
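A quick numeric sanity check of this example (a sketch with illustrative names, assuming the standard Braess setup described above and a unit traffic rate):

```python
# x = fraction of the unit traffic on path S->v->T; the rest uses S->w->T.
def total_latency_without_shortcut(x):
    # path S->v->T has latency x + 1, path S->w->T has latency 1 + (1 - x)
    return x * (x + 1) + (1 - x) * (1 + (1 - x))

# Before the shortcut: an even split is both optimal and a Nash flow,
# and every used path has latency 1.5.
print(total_latency_without_shortcut(0.5))   # 1.5

# After adding the zero-latency edge v->w, every agent prefers S->v->w->T,
# whose latency is x + 0 + x = 2 once all traffic uses it.
nash_latency_with_shortcut = 1 + 0 + 1
print(nash_latency_with_shortcut / 1.5)      # 4/3, the ratio quoted on the slide
```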

11 The Formal Model Directed network G = (V, E); k source-destination pairs (s_1, t_1), …, (s_k, t_k); P_i - the set of all simple s_i-t_i paths, P = ∪_i P_i; for a fixed flow f, we define f_e = Σ_{P: e ∈ P} f_P; r_i - the amount of flow to be routed from s_i to t_i; f is feasible if for all i, Σ_{P ∈ P_i} f_P = r_i

12 The Formal Model (cont.) l_e - the load-dependent latency of edge e ∈ E; we assume that l_e is continuous & non-decreasing. l_P(f) = Σ_{e ∈ P} l_e(f_e) - the latency of path P with respect to flow f. C(f) = Σ_{P ∈ P} l_P(f) f_P - the cost of a flow f in G (the total latency incurred by f). We can also write C(f) = Σ_{e ∈ E} l_e(f_e) f_e
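The definitions above translate directly into a few lines of code. This is a minimal sketch (the data structures and names are illustrative, not from the paper) for a one-pair network with two parallel links, latencies l(x) = x and l(x) = 1:

```python
# l_e as functions of the edge flow f_e
latency = {
    "e1": lambda x: x,      # l_e1(x) = x
    "e2": lambda x: 1.0,    # l_e2(x) = 1 (constant)
}
paths = {"P1": ["e1"], "P2": ["e2"]}   # each path is a list of edges
flow = {"P1": 1.0, "P2": 0.0}          # f_P: all traffic on P1

# f_e = sum of f_P over the paths P containing e
edge_flow = {e: sum(fP for P, fP in flow.items() if e in paths[P]) for e in latency}

def path_latency(P):
    # l_P(f) = sum over e in P of l_e(f_e)
    return sum(latency[e](edge_flow[e]) for e in paths[P])

# C(f) = sum over P of l_P(f) * f_P  (equivalently, sum over e of l_e(f_e) * f_e)
C = sum(path_latency(P) * fP for P, fP in flow.items())
print(C)   # 1.0 for this flow
```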

13 Flows at Nash Equilibrium Definition: A flow f in G is at Nash equilibrium if for all i ∈ {1,…, k}, P1, P2 ∈ P_i, and δ ∈ [0, f_P1], we have l_P1(f) ≤ l_P2(f~), where f~ is the flow obtained from f by moving δ units of flow from P1 to P2: f~_P = f_P − δ if P = P1, f~_P = f_P + δ if P = P2, and f~_P = f_P otherwise. Letting δ → 0, from continuity and monotonicity we get the following lemma:

14 Flows at Nash Equilibrium Lemma 2.2 (Wardrop's Principle): A flow f is at Nash equilibrium iff for every i ∈ {1,…, k} and P1, P2 ∈ P_i with f_P1 > 0, l_P1(f) ≤ l_P2(f). It follows from the lemma that if f is at Nash equilibrium, then all s_i-t_i paths assigned a positive amount of flow have equal latency, denoted L_i(f).

15 Flows at Nash Equilibrium Lemma 2.3: If f is a feasible flow at Nash equilibrium, then C(f) = Σ_i L_i(f) r_i Remark: In our model, each agent chooses a single path (a pure strategy), whereas in classical game theory the Nash equilibrium is defined via mixed strategies (where each agent selects a distribution over pure strategies).

16 Optimal Flows The problem of finding the optimal (minimum-latency) flow can be expressed by the following nonlinear program (NLP), where c_e(f_e) = l_e(f_e) f_e is the cost of edge e: Min Σ_{e ∈ E} c_e(f_e) subject to: Σ_{P ∈ P_i} f_P = r_i ∀ i ∈ {1, …, k}; f_e = Σ_{P: e ∈ P} f_P ∀ e ∈ E; f_P ≥ 0 ∀ P ∈ P
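As an illustration, the (NLP) for the Braess network of slide 10 can be solved numerically. This is only a sketch under my own modeling choices (path-flow variables, SciPy's general-purpose solver); it is not the paper's method:

```python
from scipy.optimize import minimize

# Path flows: f1 on S->v->T, f2 on S->w->T, f3 on S->v->w->T (the shortcut path).
def total_cost(f):
    f1, f2, f3 = f
    # Edge S->v carries f1+f3 and w->T carries f2+f3 (latency l(x) = x);
    # v->T carries f1 and S->w carries f2 (constant latency 1); v->w has latency 0.
    return (f1 + f3) ** 2 + f1 * 1.0 + f2 * 1.0 + (f2 + f3) ** 2

cons = [{"type": "eq", "fun": lambda f: f[0] + f[1] + f[2] - 1.0}]  # sum_P f_P = r = 1
res = minimize(total_cost, x0=[1/3, 1/3, 1/3],
               bounds=[(0, None)] * 3, constraints=cons)
print(res.x, res.fun)   # roughly (0.5, 0.5, 0): the optimum ignores the shortcut, C(f*) = 1.5
```

Together with the Nash latency of 2 computed earlier, this reproduces the 4/3 ratio of the Braess example.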

17 Optimal Flows (cont.) The local optima of (NLP): –Moving flow from one path to another can only increase the cost. –In other words, the marginal cost (gradient) along any flow-carrying s_i-t_i path is at most the marginal cost along any other s_i-t_i path. –The local and global minima of a convex function coincide ⇒ if each edge cost function c_e(f_e) = l_e(f_e) f_e is convex, then the condition above characterizes the global optima.

18 Optimal Flows (cont.) Letting c'_P(f) = Σ_{e ∈ P} c'_e(f_e) and applying the Karush-Kuhn-Tucker Theorem to the convex program (NLP), we derive the following lemma: Lemma 2.4: A flow f is optimal for a convex program of the form (NLP) iff for every i ∈ {1, …, k} and P1, P2 ∈ P_i with f_P1 > 0, c'_P1(f) ≤ c'_P2(f). The characterizations in Lemma 2.4 and Lemma 2.2 are very similar!

19 Optimal Flows (cont.) Let f* be a minimum-latency flow for a convex program of the form (NLP). –f* satisfies the conditions of Lemma 2.4. –By Lemma 2.2, f* can be regarded as a Nash flow with respect to the latency functions c'_e. Since in our problem c_e(f_e) = l_e(f_e) f_e, we denote l*_e(f_e) = (l_e(f_e) f_e)' = l_e(f_e) + l'_e(f_e) f_e –l*_e(f_e) is the marginal cost of increasing flow on edge e

20 Optimal Flows (cont.) Any flow f* at Nash equilibrium with respect to latency functions l* is optimal with respect to the original latency functions l. This fact can be exploited to prove the lemma: Lemma 2.5: A network G with continuous, non-decreasing latency functions admits a feasible flow at Nash equilibrium. If f and f' are such flows, then C(f) = C(f').

21 The Cost Ratio We denote ρ = ρ(G, r, l) = C(f) / C(f*) –ρ is the ratio between the cost of a flow at Nash equilibrium and the cost of the minimum-latency flow. –r = the rate vector –l = the latency functions

22 Linear Latency Example (2) Two parallel links from S to T, one with latency l(x) = x and one with constant latency 1, and rate r = 1. The Nash flow routes all the traffic on the l(x) = x link (cost 1); the optimal flow splits the traffic evenly (cost 3/4). Hence ρ = 4/3.
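A short sanity check of this ratio (a sketch; the split fraction x and the brute-force grid search are my own illustration):

```python
# With a fraction x of the unit rate on the l(x) = x link, C = x*x + (1 - x)*1.
nash_cost = 1.0 * 1.0                            # all traffic on the x-link, latency 1
xs = [i / 1000 for i in range(1001)]
opt_cost = min(x * x + (1 - x) for x in xs)      # minimized at x = 1/2, giving 3/4
print(nash_cost / opt_cost)                      # ~4/3
```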

23 Non-Linear Latency Example The same two-link network, but the variable-latency (lower) link now has latency l(x) = x^k. The cost of the Nash flow C(f) is still 1 (all traffic on the x^k link). In the optimal flow, the lower link is assigned (k+1)^{-1/k} units of flow, and the cost is C(f*) = 1 - k(k+1)^{-(k+1)/k}. If k → ∞ then C(f*) → 0, so ρ is unbounded!
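Evaluating the slide's formula for a few values of k shows how quickly the ratio blows up (the choice of sample values is mine):

```python
# rho = C(f)/C(f*) = 1 / (1 - k*(k+1)**(-(k+1)/k)) since C(f) = 1
for k in [1, 2, 10, 100, 1000]:
    opt_cost = 1 - k * (k + 1) ** (-(k + 1) / k)
    print(k, round(opt_cost, 4), round(1 / opt_cost, 2))
# k = 1 gives rho = 4/3; as k grows, opt_cost -> 0 and rho grows without bound
```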

24 General Latency ρ can be unbounded. BUT, there are interesting bicriteria results. We compare the cost of a Nash flow to an optimal flow feasible for increased rates. The result for twice the original rate is given by the following theorem.

25 General Latency (cont.) Theorem 3.1: If f is a flow at Nash equilibrium for (G, r, l) and f* is feasible for (G, 2r, l), then C(f) ≤ C(f*). The proof: –By Lemma 2.3, C(f) = Σ_i L_i(f) r_i –We define a new latency function l~_e for each edge: l~_e(x) = l_e(f_e) if x ≤ f_e, and l~_e(x) = l_e(x) if x > f_e

26 General Latency (cont.) The proof (cont): –We compare the cost of f* under l~ to C(f*). –For any e: l~_e(x) - l_e(x) is 0 if x ≥ f_e, and otherwise bounded above by l_e(f_e), so x(l~_e(x) - l_e(x)) ≤ l_e(f_e) f_e for all x ≥ 0. –Thus the difference between the new and old costs can be bounded as follows: Σ_e l~_e(f*_e) f*_e - C(f*) = Σ_e f*_e (l~_e(f*_e) - l_e(f*_e)) ≤ Σ_e l_e(f_e) f_e = C(f).

27 General Latency (cont.) The proof (cont): –Denote by f^0 the zero flow in G. –By construction, l~_P(f^0) ≥ L_i(f) for any P ∈ P_i. –Since each l~_e is non-decreasing, l~_P(f*) ≥ L_i(f) for any P ∈ P_i. –The cost of f* with respect to l~ is therefore bounded below by: Σ_i Σ_{P ∈ P_i} L_i(f) f*_P = Σ_i 2L_i(f) r_i = 2C(f).

28 General Latency (cont.) The proof (cont): –Combining the two bounds: C(f*) ≥ Σ_P l~_P(f*) f*_P - C(f) ≥ 2C(f) - C(f) = C(f). ∎
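A quick numeric check of Theorem 3.1 on the two-link example of slide 22 (l(x) = x and l(x) = 1; the grid search is my own illustration): the Nash flow for rate r = 1 costs 1, and any flow feasible for rate 2r = 2 costs at least that.

```python
nash_cost = 1.0                                     # C(f) for rate 1: all traffic on the x-link
xs = [i / 1000 * 2 for i in range(1001)]            # flow on the x-link, from 0 to 2
best_double_rate_cost = min(x * x + (2 - x) * 1.0 for x in xs)
print(best_double_rate_cost, best_double_rate_cost >= nash_cost)   # 1.75 True
```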

29 Linear Latency For each edge e, l_e(f_e) = a_e f_e + b_e (a_e, b_e ≥ 0) –We're going to show that in this case ρ ≤ 4/3. The total latency: C(f) = Σ_e (a_e f_e² + b_e f_e). (NLP) is a convex (quadratic) program, thus Lemma 2.4 characterizes its optimal solutions. l*_e(f_e) = c'_e(f_e) = 2a_e f_e + b_e –l*_e(f_e) is the marginal cost of increasing flow on e

30 Linear Latency (cont.) Lemma 4.1: Suppose G is a directed network with k source-sink pairs and with edge latency functions l_e(f_e) = a_e f_e + b_e (for each e). –(a) A flow f is at Nash equilibrium in G iff for each i and P, P' ∈ P_i with f_P > 0, Σ_{e ∈ P} (a_e f_e + b_e) ≤ Σ_{e ∈ P'} (a_e f_e + b_e) –(b) A flow f* is (globally) optimal in G iff for each i and P, P' ∈ P_i with f*_P > 0, Σ_{e ∈ P} (2a_e f*_e + b_e) ≤ Σ_{e ∈ P'} (2a_e f*_e + b_e)

31 Linear Latency (cont.) Corollary 4.2: Let G be a network with all edge latency functions of the form l_e(f_e) = a_e f_e. Then for any rate vector r, a flow feasible for (G, r, l) is optimal iff it is at Nash equilibrium. –With all b_e = 0, conditions (a) and (b) differ only by the constant factor 2, so (a) ⇔ (b).

32 Linear Latency (cont.) Lemma 4.3: Suppose (G, r, l) has linear latency functions. Then if f is at Nash equilibrium for (G, r, l), the flow f/2 is optimal for (G, r/2, l). –If f satisfies (a), then f/2 satisfies (b).
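A tiny check of Lemma 4.3 on the two-link example of slide 22 (again an illustrative grid search, not the paper's argument): the Nash flow for rate 1 puts everything on the l(x) = x link, and half of it, (0.5, 0), should be optimal for rate 1/2.

```python
half_rate = 0.5
xs = [i / 1000 * half_rate for i in range(1001)]               # flow on the x-link, 0..0.5
best_x = min(xs, key=lambda x: x * x + (half_rate - x) * 1.0)  # minimize C for rate r/2
print(best_x)   # ~0.5: the halved Nash flow is indeed optimal for (G, r/2, l)
```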

33 Linear Latency (cont.) The main theorem proof outline (f is a flow at Nash equilibrium): –create an optimal flow for the instance (G, r, l) via a two-step process: in the first step, create an optimal flow for the instance (G, r/2, l); its cost is at least 1/4 C(f). in the second step, augment that flow to one optimal for (G, r, l) (the augmentation may increase or decrease the flow on any given edge); the augmentation costs at least 1/2 C(f).

34 Linear Latency (cont.) The proof of the second bound requires the following lemma: Lemma 4.4: Suppose (G, r, l) has linear latency functions and f* is its optimal flow. Let L*_i(f*) be such that every s_i-t_i flow path P of f* satisfies l*_P(f*) = L*_i(f*). Then for any δ > 0, a feasible flow for the problem instance (G, (1 + δ)r, l) has cost at least C(f*) + δ Σ_i L*_i(f*) r_i

35 Linear Latency (cont.) Theorem 4.5: If (G, r, l) has linear latency functions, then ρ(G, r, l) ≤ 4/3. The proof: –f - a flow in G at Nash equilibrium –L_i(f) - the common latency of the s_i-t_i flow paths; C(f) = Σ_i L_i(f) r_i –by Lemma 4.3, f/2 is an optimal flow for (G, r/2, l).

36 Linear Latency (cont.) The proof (cont.): –in the notation of Lemma 4.4, l*_e(f_e/2) = l_e(f_e) for each e, hence L*_i(f/2) = L_i(f) for each i. (Marginal costs of edges and paths w.r.t. f/2 coincide with the latencies w.r.t. f.) –Taking δ = 1 in Lemma 4.4 (applied to the instance (G, r/2, l), whose optimal flow is f/2), we get C(f*) ≥ C(f/2) + Σ_i L*_i(f/2) r_i/2 = C(f/2) + 1/2 Σ_i L_i(f) r_i = C(f/2) + 1/2 C(f) –i.e., the augmentation from f/2 to f* costs at least 1/2 C(f)

37 Linear Latency (cont.) The proof (cont.): –now we bound C(f/2) from below: C(f/2) = Σ_e (1/4 a_e f_e² + 1/2 b_e f_e) ≥ 1/4 Σ_e (a_e f_e² + b_e f_e) = 1/4 C(f). –From the two bounds, we finally get C(f*) ≥ 3/4 C(f), i.e. ρ = C(f)/C(f*) ≤ 4/3. ∎
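For reference, the full chain of inequalities behind Theorem 4.5, restated compactly in the slides' notation:

```latex
C(f^*) \;\ge\; C(f/2) + \tfrac{1}{2}\,C(f)
       \;\ge\; \tfrac{1}{4}\,C(f) + \tfrac{1}{2}\,C(f)
       \;=\; \tfrac{3}{4}\,C(f)
\quad\Longrightarrow\quad
\rho(G, r, l) \;=\; \frac{C(f)}{C(f^*)} \;\le\; \frac{4}{3}.
```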

38 Extensions During the previous discussion, we made some unrealistic assumptions. In practice: –agents can evaluate path latency only approximately –there’s a finite number of agents, each controlling a strictly positive amount of flow. Sometimes, each agent can route its flow fractionally over any number of paths, and sometimes it can’t.

39 Approximate Nash Equilibrium We suppose that an agent can only distinguish between paths that differ “significantly” in their latency - by more than a (1 + ε) factor, for ε > 0. We can reformulate and prove the result for general latency functions and flows at ε-approximate Nash equilibrium (the bound changes by a factor of 1 + ε).

40 Finitely Many Agents: Splittable Flow Finitely many agents, each controlling a non-negligible amount of flow. The result is similar to theorem 3.1 (actually, theorem 3.1 can be regarded as the limiting case, as the number of agents tends to infinity and the amount of flow controlled by each agent tends to 0).

41 Finitely Many Agents: Unsplittable Flow Finitely many agents, each controlling a non-negligible amount of flow. Bicriteria statements like Theorem 3.1 are false! The statements can be reformulated only when the largest possible change in edge latency is bounded above.

