1
Stochastic optimization for power-aware distributed scheduling
Michael J. Neely, University of Southern California
http://www-bcf.usc.edu/~mjneely

2
Outline
- Lyapunov optimization method
- Power-aware wireless transmission
  – Basic problem
  – Cache-aware peering
  – Quality-aware video streaming
- Distributed sensor reporting and correlated scheduling

3
A single wireless device
Timeslots t in {0, 1, 2, …}
ω(t) = random channel state on slot t (observed)
P(t) = power used on slot t (chosen)
R(t) = transmission rate on slot t: R(t) = r(P(t), ω(t))

4
Example: R(t) = log(1 + P(t)ω(t)), with ω(t) observed and P(t) chosen.
[Plots: ω(t) versus t, and the resulting R(t) versus t.]


7
Optimization problem
Maximize: R̄
Subject to: P̄ ≤ c
Given: Pr[ω(t) = ω] = π(ω), ω in {ω_1, ω_2, …, ω_1000}
P(t) in 𝒫 = {p_1, p_2, …, p_5}
c = desired average power constraint

8
Consider randomized decisions
Pr[p_k | ω_i] = Pr[P(t) = p_k | ω(t) = ω_i]
ω(t) in {ω_1, ω_2, …, ω_1000}, P(t) in 𝒫 = {p_1, p_2, …, p_5}
∑_{k=1}^{5} Pr[p_k | ω_i] = 1   (for all ω_i in {ω_1, ω_2, …, ω_1000})

9
Linear programming approach
Given parameters:
π(ω_i) (1000 probabilities)
r(p_k, ω_i) (5×1000 coefficients)
Optimization variables:
Pr[p_k | ω_i] (5×1000 variables)
Max: ∑_{i=1}^{1000} ∑_{k=1}^{5} π(ω_i) Pr[p_k | ω_i] r(p_k, ω_i)
S.t.: ∑_{i=1}^{1000} ∑_{k=1}^{5} π(ω_i) Pr[p_k | ω_i] p_k ≤ c
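This LP can be sketched numerically. The sketch below uses `scipy.optimize.linprog` with small illustrative sizes (10 channel states and 5 power levels instead of the slide's 1000×5), a randomly generated π, and the earlier example's rate function r(p_k, ω_i) = log(1 + p_k ω_i); the specific sizes, power levels, and channel-gain values are assumptions for the demo, not from the slides.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_states, n_powers = 10, 5                 # small stand-ins for 1000 states, 5 powers
pi = rng.dirichlet(np.ones(n_states))      # pi(omega_i): state probabilities
p = np.linspace(0.0, 2.0, n_powers)        # power levels p_k (assumed values)
omega = rng.uniform(0.2, 2.0, n_states)    # channel gains omega_i (assumed values)
r = np.log1p(np.outer(p, omega))           # r(p_k, omega_i) = log(1 + p_k * omega_i)
c_power = 1.0                              # average power constraint c

# Variables x[i, k] = Pr[p_k | omega_i], flattened with index i*n_powers + k.
# Objective: maximize sum_{i,k} pi_i x_ik r(p_k, omega_i) -> minimize the negative.
obj = -(pi[:, None] * r.T).ravel()
# Average power constraint: sum_{i,k} pi_i x_ik p_k <= c.
A_ub = (pi[:, None] * p[None, :]).ravel()[None, :]
b_ub = [c_power]
# Normalization: sum_k x_ik = 1 for each state i.
A_eq = np.zeros((n_states, n_states * n_powers))
for i in range(n_states):
    A_eq[i, i * n_powers:(i + 1) * n_powers] = 1.0
b_eq = np.ones(n_states)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
print("optimal average rate:", -res.fun)
```

Note how the variable count is (states × powers): this is exactly what explodes in the multi-user version of the problem on the later slides.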

10
Multi-dimensional problem
[Figure: an access point serving users 1, 2, …, N with rates R_1(t), R_2(t), …, R_N(t).]
Observe (ω_1(t), …, ω_N(t))
Decisions:
– Choose which user to serve
– Choose which power to use

11
Goal and LP approach
Maximize: R̄_1 + R̄_2 + … + R̄_N
Subject to: P̄_n ≤ c for all n in {1, …, N}
LP has given parameters:
π(ω_1, …, ω_N) (1000^N probabilities)
r_n(p_k, ω_i) (N·5N·1000^N coefficients)
LP has optimization variables:
Pr[p_k | ω_i] (5N·1000^N variables)

12
Advantages of LP approach
- Solves the problem of interest
- LPs have been around for a long time
- Many people are comfortable with LPs

13
Disadvantages of LP approach

14
- Need to estimate an exponential number of probabilities.
- The LP has an exponential number of variables.
- What if the probabilities change?
- Fairness? Delay? Channel errors?


16
Lyapunov optimization approach
Maximize: R̄_1 + R̄_2 + … + R̄_N
Subject to: P̄_n ≤ c for all n in {1, …, N}
Virtual queue for each constraint:
Q_n(t+1) = max[Q_n(t) + P_n(t) − c, 0]
Stabilizing the virtual queue ⇒ constraint satisfied!
[Figure: virtual queue Q_n(t) with input P_n(t) and service rate c.]
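The "stabilizing the virtual queue ⇒ constraint satisfied" claim rests on a sample-path fact: since Q_n(t+1) ≥ Q_n(t) + P_n(t) − c, summing over t gives (1/T)∑P_n(t) ≤ c + Q_n(T)/T, so if Q_n(T)/T → 0 the average-power constraint holds. A minimal sketch (the slot count, constraint value, and arbitrary power choices are illustrative assumptions):

```python
import random

random.seed(0)
T, c = 100_000, 0.5        # slots and power constraint (illustrative values)
Q, total_power = 0.0, 0.0

for t in range(T):
    P = random.choice([0.0, 0.25, 0.5, 0.75, 1.0])  # any sequence of power decisions
    total_power += P
    Q = max(Q + P - c, 0.0)  # virtual queue update from the slide

# Sample-path bound: Q(t+1) >= Q(t) + P(t) - c, so summing over t gives
#   (1/T) * sum_t P(t) <= c + Q(T)/T.
print(total_power / T, "<=", c + Q / T)
```

The bound holds for any power sequence; a control algorithm only has to keep Q(T)/T small.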

17
Lyapunov drift
L(t) = ½ ∑_n Q_n(t)²
Δ(t) = L(t+1) − L(t)
[Figure: quadratic Lyapunov function over the queue vector (Q_1, Q_2).]

18
Drift-plus-penalty algorithm
Every slot t:
- Observe (Q_1(t), …, Q_N(t)), (ω_1(t), …, ω_N(t))
- Choose (P_1(t), …, P_N(t)) to greedily minimize:
  Δ(t) − (1/ε)(R_1(t) + … + R_N(t))   [drift + penalty]
- Update queues.
Low complexity. No knowledge of the π(ω) probabilities is required.

19
Specific DPP implementation
- Each user n observes ω_n(t), Q_n(t).
- Each user n chooses P_n(t) in 𝒫 to minimize: −(1/ε)r_n(P_n(t), ω_n(t)) + Q_n(t)P_n(t)
- Choose the user n* with the smallest such value.
- User n* transmits with power level P_n*(t).
Low complexity. No knowledge of the π(ω) probabilities is required.
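This rule is simple enough to simulate directly. The sketch below assumes the earlier example's rate function r_n(p, ω) = log(1 + pω); the power set, i.i.d. uniform channel model, constraint c, and ε value are all illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, c, eps = 3, 20_000, 0.5, 0.05            # users, slots, power constraint, epsilon
powers = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # assumed power set P
Q = np.zeros(N)                                 # virtual queues, one per constraint
total_rate, total_power = 0.0, np.zeros(N)

for t in range(T):
    omega = rng.uniform(0.1, 2.0, N)            # i.i.d. channel states (assumed model)
    # Each user n evaluates min over p in P of: -(1/eps)*r_n(p, omega_n) + Q_n * p
    vals = -(1 / eps) * np.log1p(np.outer(omega, powers)) + Q[:, None] * powers[None, :]
    best_k = vals.argmin(axis=1)
    best_val = vals[np.arange(N), best_k]
    n_star = best_val.argmin()                  # serve the user with the smallest value
    P = np.zeros(N)
    P[n_star] = powers[best_k[n_star]]
    total_rate += np.log1p(P[n_star] * omega[n_star])
    total_power += P
    Q = np.maximum(Q + P - c, 0.0)              # virtual queue update

print("avg throughput:", total_rate / T)
print("avg power per user:", total_power / T)   # each should settle near or below c
```

Note the algorithm never uses the channel distribution; it reacts only to the observed ω(t) and the queue backlogs.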

20
Performance Theorem
Assume it is possible to satisfy the constraints. Then under DPP with any ε > 0:
- All power constraints are satisfied.
- Average throughput satisfies: R̄_1 + … + R̄_N ≥ throughput_opt − O(ε)
- Average queue size satisfies: ∑ Q̄_n ≤ O(1/ε)

21
General SNO problem
ω(t) = observed random event on slot t; π(ω) = Pr[ω(t) = ω] (possibly unknown)
α(t) = control action on slot t; A_ω(t) = abstract set of action options
Minimize: ȳ_0(α(t), ω(t))
Subject to: ȳ_n(α(t), ω(t)) ≤ 0 for all n in {1, …, N}
α(t) in A_ω(t) for all t in {0, 1, 2, …}
Such problems are solved by the DPP algorithm. Performance theorem: O(ε), O(1/ε) tradeoff.

22
What we have done so far
- Lyapunov optimization method
- Power-aware wireless transmission
  – Basic problem
  – Cache-aware peering
  – Quality-aware video streaming
- Distributed sensor reporting and correlated scheduling


26
Mobile P2P video downloads

28–37

Mobile P2P video downloads
[Animation frames: mobile users moving among access points during P2P video downloads.]

38
Cache-aware scheduling
Access points (including "femto" nodes):
- Typically stationary
- Typically have many files cached
Users:
- Typically mobile
- Typically have fewer files cached
Assume each user wants one "long" file. A user can opportunistically grab packets from any nearby user or access point that has the file.

39
Quality-aware video delivery
[Table: video chunks over time versus quality layers 1, 2, …, L; each cell lists Bits(layer, chunk) and distortion D(layer, chunk), e.g. Bits: 40968, D: 0.]
D = distortion. Results hold for any matrices Bits(layer, chunk), D(layer, chunk). Bits are queued for wireless transmission.


41
Fair video quality delivery
Minimize: f(D̄_1) + f(D̄_2) + … + f(D̄_N)
Subject to: P̄_n ≤ c for all n in {1, …, N}
Video playback rate constraints
Recall the general form:
Min: ȳ_0;  S.t.: ȳ_n ≤ 0 for all n;  α(t) in A_ω(t) for all t

42
Fair video quality delivery
Minimize: f(D̄_1) + f(D̄_2) + … + f(D̄_N)
Subject to: P̄_n ≤ c for all n in {1, …, N}
Video playback rate constraints
Recall the general form: Min: ȳ_0; S.t.: ȳ_n ≤ 0 for all n; α(t) in A_ω(t) for all t
Define Y_n(t) = P_n(t) − c

43
Fair video quality delivery
Minimize: f(D̄_1) + f(D̄_2) + … + f(D̄_N)
Subject to: P̄_n ≤ c for all n in {1, …, N}
Video playback rate constraints
Recall the general form: Min: ȳ_0; S.t.: ȳ_n ≤ 0 for all n; α(t) in A_ω(t) for all t
Define auxiliary variable γ(t) in [0, D_max]

44
Equivalence via Jensen's inequality
Original problem:
Minimize: f(D̄_1) + f(D̄_2) + … + f(D̄_N)
Subject to: P̄_n ≤ c for all n in {1, …, N}; video playback rate constraints
Transformed problem:
Minimize: time average of f(γ_1(t)) + f(γ_2(t)) + … + f(γ_N(t))
Subject to: P̄_n ≤ c for all n in {1, …, N}
γ̄_n = D̄_n for all n in {1, …, N}
Video playback rate constraints
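The Jensen step behind this transformation can be written out in one line (assuming f is convex, which is what the slide's use of Jensen requires):

```latex
% For convex f, the time average of f(\gamma_n(t)) dominates f of the
% time average, so the transformed objective upper-bounds the original:
\overline{f(\gamma_n)} \;\triangleq\;
  \lim_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1} f\bigl(\gamma_n(t)\bigr)
  \;\ge\; f\bigl(\overline{\gamma}_n\bigr)
  \;=\; f\bigl(\overline{D}_n\bigr),
\qquad \text{using the constraint } \overline{\gamma}_n = \overline{D}_n .
```

Since a policy may also hold γ_n(t) constant at the target average, the bound is achievable, which is why the two problems are equivalent.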

45
Example simulation
[Figure: base-station region divided into 20 × 20 subcells (only a portion shown).]
1250 mobile devices, 1 base station → 3.125 mobiles/subcell

46
Phases 1, 2, 3: file availability prob = 5%, 10%, 7%
Base-station average traffic: 2.0 packets/slot
Peer-to-peer average traffic: 153.7 packets/slot
A factor of ≈77 gain compared to the BS alone!

47
What we have done so far
- Lyapunov optimization method
- Power-aware wireless transmission
  – Basic problem
  – Cache-aware peering
  – Quality-aware video streaming
- Distributed sensor reporting and correlated scheduling

48
Distributed sensor reports
[Figure: sensors 1 and 2 observe ω_1(t), ω_2(t) and report to a fusion center.]
ω_i(t) = 0/1 if sensor i observes the event on slot t
P_i(t) = 0/1 if sensor i reports on slot t
Utility: U(t) = min[P_1(t)ω_1(t) + (1/2)P_2(t)ω_2(t), 1]
Maximize: Ū
Subject to: P̄_1 ≤ c, P̄_2 ≤ c


50
What is optimal? Agree on a plan in advance.
[Timeline: slots t = 0, 1, 2, 3, 4.]
Example plan:
User 1: t even → do not report; t odd → report if ω_1(t) = 1.
User 2: t even → report if ω_2(t) = 1; t odd → report with prob ½ if ω_2(t) = 1.

51
Common source of randomness
Example: 1 slot = 1 day. Each user looks at the Boston Globe every day:
If the first letter is a "T" → Plan 1
If the first letter is an "S" → Plan 2
Etc.

52
Specific example
Assume: Pr[ω_1(t)=1] = 3/4, Pr[ω_2(t)=1] = 1/2; ω_1(t), ω_2(t) independent; power constraint c = 1/3.
Approach 1: Independent reporting
If ω_1(t)=1, user 1 reports with probability θ_1
If ω_2(t)=1, user 2 reports with probability θ_2
Optimizing θ_1, θ_2 gives Ū = 4/9 ≈ 0.44444

53
Approach 2: Correlated reporting
Pure strategy 1: user 1 reports if and only if ω_1(t)=1; user 2 does not report.
Pure strategy 2: user 1 does not report; user 2 reports if and only if ω_2(t)=1.
Pure strategy 3: user 1 reports if and only if ω_1(t)=1; user 2 reports if and only if ω_2(t)=1.

54
Approach 2: Correlated reporting (continued)
X(t) = iid random variable (commonly known):
Pr[X(t)=1] = θ_1, Pr[X(t)=2] = θ_2, Pr[X(t)=3] = θ_3
On slot t: users observe X(t); if X(t)=k, users use pure strategy k.
Optimizing θ_1, θ_2, θ_3 gives Ū = 23/48 ≈ 0.47917
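Both numbers can be verified with exact arithmetic. The per-strategy expected utilities and powers below are computed from the slide's model (Pr[ω_1=1]=3/4, Pr[ω_2=1]=1/2, c=1/3); the grid step of 1/9 is an assumption chosen so the search hits the optimal mixture exactly.

```python
from fractions import Fraction as F
from itertools import product

p1, p2, c = F(3, 4), F(1, 2), F(1, 3)   # Pr[w1=1], Pr[w2=1], power constraint

# Approach 1: independent reporting; the power constraints bind, so
# theta1 = c/p1 = 4/9 and theta2 = c/p2 = 2/3.
th1, th2 = c / p1, c / p2
u_indep = p1 * th1 + F(1, 2) * (1 - p1 * th1) * (p2 * th2)

# Approach 2: randomize over the three pure strategies.
u  = [p1, F(1, 2) * p2, p1 + F(1, 2) * (1 - p1) * p2]  # E[U  | strategy m]
q1 = [p1, F(0), p1]                                    # E[P1 | strategy m]
q2 = [F(0), p2, p2]                                    # E[P2 | strategy m]

best = F(0)
for a, b, d in product(range(10), repeat=3):  # theta_m in multiples of 1/9
    th = [F(a, 9), F(b, 9), F(d, 9)]
    if sum(th) > 1:
        continue
    if sum(t * q for t, q in zip(th, q1)) > c:
        continue
    if sum(t * q for t, q in zip(th, q2)) > c:
        continue
    best = max(best, sum(t * uu for t, uu in zip(th, u)))

print(u_indep, best)  # 4/9 and 23/48
```

The search finds the optimal mixture θ = (1/3, 5/9, 1/9), matching the slide's 23/48.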


56
Summary of approaches

Strategy                Ū
Independent reporting   0.44444
Correlated reporting    0.47917
Centralized reporting   0.5

It can be shown that correlated reporting is optimal over all distributed strategies!

57
General distributed optimization
ω(t) = (ω_1(t), …, ω_N(t)); π(ω) = Pr[ω(t) = (ω_1, …, ω_N)]
α(t) = (α_1(t), …, α_N(t))
U(t) = u(α(t), ω(t)); P_k(t) = p_k(α(t), ω(t))
Maximize: Ū
Subject to: P̄_k ≤ 0 for all k in {1, …, K}

58
Pure strategies
A pure strategy is a deterministic vector-valued function:
g(ω) = (g_1(ω_1), g_2(ω_2), …, g_N(ω_N))
Let M = number of pure strategies:
M = |A_1|^|Ω_1| × |A_2|^|Ω_2| × … × |A_N|^|Ω_N|

59
Optimality Theorem
There exist K+1 pure strategies g^(m)(ω) and probabilities θ_1, θ_2, …, θ_{K+1} such that the following distributed algorithm is optimal:
X(t) = iid, Pr[X(t) = m] = θ_m
Each user observes X(t). If X(t) = m, use strategy g^(m)(ω).

60
LP and complexity reduction
- The probabilities can be found by an LP.
- Unfortunately, the LP has M variables.
- If (ω_1(t), …, ω_N(t)) are mutually independent and the utility function satisfies a preferred-action property, complexity can be reduced.
Example: N = 2 users, |A_1| = |A_2| = 2:
- Old complexity = 2^(|Ω_1|+|Ω_2|)
- New complexity = (|Ω_1|+1)(|Ω_2|+1)
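Plugging concrete sizes into these two formulas shows the scale of the reduction; the observation-set sizes |Ω_1| = |Ω_2| = 10 are assumed here (matching the 10-value ω_n(t) used in the later simulation slide):

```python
# Pure-strategy count for N = 2 users with |A_1| = |A_2| = 2 and
# assumed observation-set sizes |Omega_1| = |Omega_2| = 10.
W1, W2 = 10, 10
old_complexity = 2 ** (W1 + W2)        # 2^(|Omega_1| + |Omega_2|) pure strategies
new_complexity = (W1 + 1) * (W2 + 1)   # after the preferred-action reduction
print(old_complexity, new_complexity)  # 1048576 vs 121
```

Even at these tiny sizes the LP shrinks by four orders of magnitude, and the gap widens exponentially with |Ω_n|.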

61
Lyapunov optimization approach
- Define K virtual queues Q_1(t), …, Q_K(t).
- Every slot t, observe the queues and choose strategy m in {1, …, M} to maximize a weighted sum of queues.
- Update queues with delayed feedback:
  Q_k(t+1) = max[Q_k(t) + P_k(t−D), 0]

62
Separable problems
If the utility and penalty functions are a separable sum of functions of the individual variables (α_n(t), ω_n(t)), then:
- There is no optimality gap between centralized and distributed algorithms.
- Problem complexity reduces from exponential to linear.

63
Simulation (non-separable problem)
3-user problem: α_n(t) in {0, 1} for n in {1, 2, 3}; ω_n(t) in {0, 1, …, 9}
V = 1/ε: get an O(ε) guarantee of optimality; convergence time depends on 1/ε.

64
Utility versus V parameter (V = 1/ε)
[Plot: achieved utility versus V.]

65
Average power versus time
[Plot: average power up to time t versus t, for V = 10, 50, 100, converging to the power constraint 1/3.]

66
Adaptation to non-ergodic changes

67
Conclusions
Drift-plus-penalty is a strong technique for general stochastic network optimization:
- Power-aware scheduling
- Cache-aware scheduling
- Quality-aware video streaming
- Correlated scheduling for distributed stochastic optimization

