
1 EE 685 presentation: Optimization Flow Control, I: Basic Algorithm and Convergence, by Steven Low and David Lapsley. Asynchronous Distributed Algorithm Proof

2 Objective of the paper
 - Propose an optimization approach to flow control in a network whose resources are shared by a set S of sources
 - The aim is to maximize aggregate source utility over the transmission rates
 - Sources select transmission rates that maximize their own benefit (utility minus bandwidth cost)
 - Synchronous and asynchronous distributed algorithms that converge to the optimal behavior in a static environment are presented

3 Problem Framework
The problem is formulated for:
 - A network consisting of a set L of unidirectional links with capacities c_l, l ∈ L.
 - The network is shared by a set S of sources, where source s is characterized by a utility function U_s(x_s) that is concave and increasing in its transmission rate x_s.
 - The goal is to compute source rates that maximize the sum of the utilities ∑_{s∈S} U_s(x_s) over the x_s, subject to the link capacity constraints.

4 Problem Framework (figure)
Example network with source nodes s_1, s_2, s_3, ... and destination nodes connected by links l_1, ..., l_6. L(s_1) = {l_1, l_2, l_3, l_4}; for link l_4, S(l_4) = {s_1, s_3}.

5 Centralized optimization: why not
 - Centralized optimization of source rates is possible in theory but not practical in real networks, because:
   - Knowledge of all utility functions would be required
   - Resource usage is coupled through the shared links, so all sources would have to be coordinated simultaneously
 - Therefore a distributed and decentralized approach is needed.

6 The value of the optimization framework presented
 - It is not always critical, or even feasible, to attain exact optimality in a flow control problem
 - However, the optimization framework acts as a guideline for steering the network dynamics toward a desirable operating point in which both source utilities and resource costs are taken into account
 - Optimization frameworks may also be used to refine and improve practical flow control schemes

7 The optimization problem: Primal problem
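A sketch of the primal problem, assuming the standard formulation built from the framework of slide 3, with the per-source rate bounds m_s ≤ x_s ≤ M_s that appear later in the slides:

  P:  max_x  ∑_{s∈S} U_s(x_s)
      subject to  ∑_{s∈S(l)} x_s ≤ c_l  for all l ∈ L,
                  m_s ≤ x_s ≤ M_s       for all s ∈ S.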

8 The optimization problem: Lagrangian for the primal problem
 - The p_l are the Lagrange multipliers used in the standard convex optimization method
 - With this approach the coupled link capacity constraints are incorporated into the objective function
 - Notice the separability in terms of x_s: maximizing the Lagrangian as an aggregate of the individual x_s-related terms gives the same result as summing the maxima of each individual x_s-related term. Therefore we have the per-source decomposition sketched below.
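A sketch of the Lagrangian and its per-source decomposition, assuming multipliers p_l ≥ 0 for the link capacity constraints:

  L(x, p) = ∑_{s∈S} U_s(x_s) - ∑_{l∈L} p_l ( ∑_{s∈S(l)} x_s - c_l )
          = ∑_{s∈S} ( U_s(x_s) - x_s p^s ) + ∑_{l∈L} p_l c_l,   where p^s = ∑_{l∈L(s)} p_l,

so the maximization over x splits into one independent maximization per source s.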

9 The optimization problem: Dual problem
 - Here p_l is the price per unit bandwidth at link l.
 - p^s is the total price per unit bandwidth over all links in the path of source s.
 - The dual problem is defined as the minimization of D(p) (an upper bound on the Lagrangian) over non-negative bandwidth prices.
 - Each source can independently solve the maximization problem in (3) for a given p.
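A sketch of the dual function and dual problem, assuming (3) and (5) refer, as the surrounding slides suggest, to the per-source subproblem and to the dual:

  D(p) = max_{m ≤ x ≤ M} L(x, p) = ∑_{s∈S} B_s(p^s) + ∑_{l∈L} p_l c_l,
  where  B_s(p^s) = max_{m_s ≤ x_s ≤ M_s} [ U_s(x_s) - x_s p^s ]    (3)

  Dual problem:  min_{p ≥ 0} D(p)    (5)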

10 The optimization problem: Concavity and duality gap
 - For each p a unique maximizer, denoted x_s(p), exists since U_s (the source utility function) is strictly concave
 - The concavity of the U_s and the linear constraints of the primal problem guarantee that there is no duality gap and that dual optimal prices exist in the form of Lagrange multipliers
 - Once p* is obtained by solving the dual problem, the primal optimal source rates x* = x(p*) can be computed by the individual sources s by solving (3)
 - Therefore, given p*, individual sources can solve (3) without any coordination (the key idea behind the distributed algorithm)
 - So p* acts as a coordination signal that aligns individual optimality with joint optimality in the flow control problem

11 Notations and assumptions
 - R is the routing matrix, with l rows (number of links) and s columns (number of flows/sources), where R_ls = 1 if l ∈ L(s), i.e. if link l is on the path of source s (equivalently, s ∈ S(l)), and R_ls = 0 otherwise:

   R = [ R_11 R_12 ... R_1s
         R_21 R_22 ... R_2s
         ...
         R_l1 R_l2 ... R_ls ]

 - For each source s, the s-th entry of p^T R is the path bandwidth price that source s faces, which equals p^s
 - Let x_s(p) be the unique maximizer of (3); then x_s(p) can be written as follows (see the sketch below):
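A sketch of that expression, assuming the standard projection form referred to as (6) on the next slides:

  x_s(p) = [ U_s'^{-1}(p^s) ]_{m_s}^{M_s} = min{ M_s, max{ m_s, U_s'^{-1}(p^s) } },    (6)

i.e. the inverse marginal utility evaluated at the path price p^s, clipped to the interval [m_s, M_s].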

12 Routing matrix (example figure)
Same example network as slide 4: source nodes s_1, s_2, s_3, ..., destination nodes, links l_1, ..., l_6, with L(s_1) = {l_1, l_2, l_3, l_4} and shared link l_4. For the two sources shown, the routing matrix is

  R =        s_1  s_2
        l_1   1    0
        l_2   1    0
        l_3   1    0
        l_4   1    1
        l_5   0    1
        l_6   0    1

13 Concave utility function: its derivative and inverse (figure)
Panels: a typical concave utility function; its derivative U'(x) as a decreasing function of the rate x; and the inverse of the derivative, giving the rate x as a decreasing function of U'(x).

14 Source rate as demand function
 - The figure above depicts x_s(p) as a possible solution of (6)
 - As with the inverse-of-U' figure on the previous slide, the rate is obtained as a decreasing function U_s'^{-1} of the path price
 - This means that x_s(p) acts like a demand function from microeconomics: the higher the price on its path, the lower the rate the source chooses.

15 Synchronous Distributed Algorithm based on gradient projection applied to the dual problem
 - The dual problem is solved via the gradient projection method, in which the link prices are adjusted in the direction opposite to the gradient of D(p).

16 Synchronous Distributed Algorithm based on gradient projection applied to the dual problem
 - Equation (9) shows that the price of a link l is updated according to how much demand exceeds supply at that link (see the sketch below).
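A sketch of the price iteration, assuming (7) denotes the gradient projection step in vector form and (9) its per-link realization:

  p(t+1) = [ p(t) - γ ∇D(p(t)) ]^+    (7)

  p_l(t+1) = [ p_l(t) + γ ( x^l(p(t)) - c_l ) ]^+    (9)

where x^l(p(t)) = ∑_{s∈S(l)} x_s(p(t)) is the aggregate source rate through link l, γ > 0 is the step size, and [z]^+ = max{z, 0}.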

17 Synchronous Distributed Algorithm: generic outline of the algorithm
 - Given the aggregate source rate that passes through link l, the price adjustment (9) is completely distributed
 - Therefore the network links l and sources s can be treated as processors in a distributed computation system that solves the dual problem (5):
 1. In each iteration, every source s solves (3) independently and communicates its result x_s(p) to the links on its path L(s).
 2. Every link l then updates its price p_l according to (9) and communicates the new price to the sources using it.
 3. The cycle repeats: return to step 1 with the updated prices p.
 - Under conditions C1 and C2 this algorithm can be proven to converge to the optimal source rates x* and optimal bandwidth prices p* under static network conditions (THEOREM 1). A simulation sketch of the loop follows below.
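A minimal simulation sketch of this loop in Python, assuming logarithmic utilities U_s(x_s) = log x_s and the example routing matrix of slide 12; all variable names and parameter values are illustrative, not taken from the paper:

  import numpy as np

  # Illustrative sketch (not the paper's code): synchronous rate/price iteration
  # on the example two-source network, with log utilities U_s(x) = log(x).
  R = np.array([[1, 0],      # R[l, s] = 1 if link l lies on source s's path
                [1, 0],
                [1, 0],
                [1, 1],
                [0, 1],
                [0, 1]], dtype=float)
  c = np.ones(6)             # link capacities c_l
  m, M = 0.01, 1.0           # rate bounds m_s, M_s (common to both sources here)
  gamma = 0.05               # step size; must be small enough for convergence (Theorem 1)

  p = np.zeros(R.shape[0])   # link prices p_l, initially zero
  for t in range(2000):
      # Step 1: each source solves (3). For U_s(x) = log x, the unconstrained
      # maximizer of log x - x * p^s is x = 1 / p^s; clip it to [m_s, M_s].
      path_price = R.T @ p                                   # p^s for each source
      x = np.clip(1.0 / np.maximum(path_price, 1e-9), m, M)
      # Step 2: each link updates its price by (9): demand minus supply, projected onto >= 0.
      link_load = R @ x                                      # aggregate rate x^l through each link
      p = np.maximum(p + gamma * (link_load - c), 0.0)

  print("approximate optimal rates x*:", x)
  print("approximate optimal prices p*:", p)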

18 Synchronous Distributed Algorithm

19 THEOREM 1
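A hedged restatement of the theorem, following the original paper; the constants ᾱ, L̄, S̄ below (the curvature bound from C2, the length of the longest path, and the largest number of sources sharing a link) are assumptions about the slide's exact notation:

  Theorem 1 (sketch). Suppose C1 and C2 hold and the step size satisfies 0 < γ < 2/(ᾱ L̄ S̄).
  Then, starting from any initial rates m ≤ x(0) ≤ M and prices p(0) ≥ 0, every limit point
  (x*, p*) of the sequence (x(t), p(t)) generated by the synchronous algorithm is primal-dual optimal.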

20 THEOREM 1 (proof)
 - LEMMA 1: Under C1, D(p) is convex, lower bounded, and continuously differentiable.

21 THEOREM 1 (proof)
 - LEMMA 2: Under C1, the Hessian of D is given by ∇²D(p) = R B(p) R^T.

22 THEOREM 1 (proof)

23
 - LEMMA 3: Under C1-C2, ∇D is Lipschitz continuous with ‖∇D(q) - ∇D(p)‖_2 ≤ αLS ‖q - p‖_2 for all p, q ≥ 0.

24 THEOREM 1 (proof)
 - LEMMA 3: Under C1-C2, ∇D is Lipschitz continuous with ‖∇D(q) - ∇D(p)‖_2 ≤ αLS ‖q - p‖_2 for all p, q ≥ 0.

25 THEOREM 1 (proof)

26 Fundamental assumptions: C1, C2 and C3 assumptions on the utility functions
 - Here m_s and M_s are the minimum and maximum transmission rates of source s.
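A hedged restatement of the first two conditions, following the original paper (the constant ᾱ_s is an assumption about the slide's notation); C3 is the additional condition used together with C1 for the asynchronous algorithm's convergence (slide 28, Theorem 2):

  C1: On the interval I_s = [m_s, M_s], the utility function U_s is increasing, strictly
      concave, and twice continuously differentiable.
  C2: The curvature of U_s is bounded away from zero on I_s: -U_s''(x_s) ≥ 1/ᾱ_s > 0.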

27 Asynchronous Distributed Algorithm: why an asynchronous model is needed
 - The synchronous model of the last section assumes that updates at the sources and the links are synchronized
 - In realistic large network scenarios synchronous updates may not be possible, because:
   - Sources may be located at different distances from the network links
   - Network states (prices in our case) may be probed by different sources at different rates, e.g., via Resource Management feedback
   - Feedback may reach different sources after different, and variable, delays
 - These complications make our distributed computation system of links and sources asynchronous
 - The communication delays may be substantial and time-varying.

28 Asynchronous Distributed Algorithm: generic outline of the algorithm
 - The asynchronous version of the algorithm follows the same main approach of iteratively updating source rates and bandwidth prices in an interdependent way
 - For the bandwidth price updates, each link l uses an estimate of the gradient based on the source rates it has received in the past
 - Two types of estimation policy are applied (see the sketch below):
   - Latest data only: only the most recently received rate is used
   - Latest average: the average of the latest k received rates is used
 - The convergence of both the synchronous and the asynchronous algorithm depends on a sufficiently small step size in (7) and (9)
 - Convergence of the asynchronous version can be proven as long as assumptions C1 and C3 hold.
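A minimal sketch of the two estimation policies and the resulting asynchronous link price update, in Python; the class and function names are illustrative, not from the paper:

  from collections import deque

  class RateEstimator:
      """Illustrative sketch: recent reported rates of one source, as seen (with delay) by a link."""
      def __init__(self, policy="latest", k=4):
          self.policy = policy              # "latest" = latest data only, "average" = latest average
          self.history = deque(maxlen=k)    # at most the k most recent reports

      def report(self, rate):
          # Called whenever a (possibly stale, delayed) rate x_s reaches the link.
          self.history.append(rate)

      def estimate(self):
          if not self.history:
              return 0.0
          if self.policy == "latest":
              return self.history[-1]                      # latest data only
          return sum(self.history) / len(self.history)     # average of the latest k rates

  def async_price_update(p_l, c_l, estimators, gamma=0.05):
      """Asynchronous analogue of (9): use estimated, possibly outdated, source rates."""
      estimated_load = sum(e.estimate() for e in estimators)
      return max(p_l + gamma * (estimated_load - c_l), 0.0)

  # Example: a link of capacity 1.0 shared by two sources whose reports arrive at different times.
  ests = [RateEstimator("latest"), RateEstimator("average", k=3)]
  ests[0].report(0.4)
  ests[1].report(0.7)
  ests[1].report(0.5)
  p_next = async_price_update(p_l=0.2, c_l=1.0, estimators=ests)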

29 Asynchronous Distributed Algorithm

30 Asynchronous Distributed Algorithm Proof

31 Asynchronous Distributed Algorithm Proof (lemma 4)

32
 - Lemma 4(a) comes from applying the projection theorem to the scalar update p_l(t+1) = [p_l(t) - γ λ_l(t)]^+
 - Lemma 4(b) follows from

33 Asynchronous Distributed Algorithm Proof (lemma 4)

34

35
 - The next lemma (LEMMA 5) bounds the error in the gradient estimate in terms of the successive price change π(t) = p(t+1) - p(t).

36 Asynchronous Distributed Algorithm Proof (lemma 5)

37

38

39 Asynchronous Distributed Algorithm Proof (lemma 6)

40

41

42 Asynchronous Distributed Algorithm Theorem 2
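A hedged restatement of the asynchronous convergence result, consistent with the convergence claim on slide 28 (the exact conditions follow the original paper and are assumptions here):

  Theorem 2 (sketch). Suppose C1 and C3 hold and the step size γ is sufficiently small. Then,
  starting from any initial rates m ≤ x(0) ≤ M and prices p(0) ≥ 0, every limit point of the
  sequence generated by the asynchronous algorithm is primal-dual optimal.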

43

44

