
1 Dynamic Computations in Ever-Changing Networks
Idit Keidar, Technion, Israel

2 TADDS: Theory of Dynamic Distributed Systems (This Workshop)

3 What I Mean By “Dynamic”*
A dynamic computation
– Continuously adapts its output to reflect input and environment changes
Other names
– Live, on-going, continuous, stabilizing
*In this talk

4 In This Talk: Three Examples
Continuous (dynamic) weighted matching
Live monitoring
– (Dynamic) average aggregation
Peer sampling
– Aka gossip-based membership

5 Ever-Changing Networks*
Where dynamic computations are interesting
Network (nodes, links) constantly changes
Computation inputs constantly change
– E.g., sensor reads
Examples:
– Ad-hoc, vehicular nets – mobility
– Sensor nets – battery, weather
– Social nets – people change friends, interests
– Clouds spanning multiple data-centers – churn
*My name for “dynamic” networks

6 Continuous Weighted Matching in Dynamic Networks
With Liat Atsmon Guz, Gil Zussman

7 Weighted Matching
Motivation: schedule transmissions in a wireless network
Links have weights, w: E → ℝ
– Can represent message queue lengths, throughput, etc.
Goal: maximize matching weight
M_opt – a matching with maximum weight
(Figure: example graph with link weights 8, 5, 2, 9, 4, 10, 3, 1; w(M_opt) = 17)
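The objects on this slide can be pinned down with a minimal sketch (Python here and throughout these notes), assuming a toy centralized model: links are a dict from node pairs to weights, a matching is a set of links, and the edge names are hypothetical.

```python
# A minimal sketch of weighted matchings, assuming a centralized toy
# model: w maps an undirected link (u, v) to its weight, and a
# matching M is a set of links no two of which share a node.

def matching_weight(w, M):
    """w(M): total weight of the matching links."""
    return sum(w[e] for e in M)

def is_matching(M):
    """No node may appear in two matching links."""
    nodes = [v for e in M for v in e]
    return len(nodes) == len(set(nodes))

# Hypothetical link names; a matching of weight 17 as on the slide.
w = {("a", "b"): 9, ("c", "d"): 8, ("b", "c"): 10}
M_opt = {("a", "b"), ("c", "d")}
assert is_matching(M_opt) and matching_weight(w, M_opt) == 17
```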

8 Model
Network is ever-changing, or dynamic
– Also called time-varying graph, dynamic communication network, evolving graph
– E_t, V_t are time-varying sets, w_t is a time-varying function
Asynchronous communication
No message loss unless links/nodes crash
– Perfect failure detection

9 Continuous Matching Problem
1. At any time t, every node v ∈ V_t outputs either ⊥ or a neighbor u ∈ V_t as its match
2. If the network eventually stops changing, then eventually every node v outputs u iff u outputs v
Defining the matching at time t:
– A link e = (u,v) ∈ M_t if both u and v output each other as their match at time t
– Note: the matching is defined pre-convergence

10 Classical Approach to Matching
One-shot (static) algorithms
Run periodically
– Each time over static input
Bound convergence time
– Best known in asynchronous networks is O(|V|)
Bound approximation ratio at the end
– Typically 2
Don’t use the matching while the algorithm is running
– “Control phase”

11 Self-Stabilizing Approach
[Manne et al. 2008]
Run all the time
– Adapt to changes
But even a small change can destabilize the entire matching for a long time
Still the same metrics:
– Convergence time from an arbitrary state
– Approximation after convergence

12 Our Approach: Maximize Matching “All the Time”
Run constantly
– Like self-stabilizing
Do not wait for convergence
– It might never happen in a dynamic network!
Strive for stability
– Keep current matching edges in the matching as much as possible
Bound the approximation throughout the run
– Local steps can take us back to the approximation quickly after a local change

13 Continuous Matching Strawman
Asynchronous matching using Hoepman’s (one-shot) algorithm
– Always pick the “locally” heaviest link for the matching
– Convergence in O(|V|) time from scratch
Use the same rule dynamically: if a new locally heaviest link becomes available, grab it and drop conflicting links
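A centralized simulation of the locally-heaviest rule may clarify the strawman; this is not Hoepman's distributed protocol, only a sketch of its sequential outcome under the toy model above: scanning links in decreasing weight order grabs exactly those links that are heavier than all adjacent candidates, giving the classical 2-approximation.

```python
# A sketch (not the distributed protocol): greedily grab links in
# decreasing weight order, skipping any link that conflicts with an
# already-matched node. Equivalent to repeatedly taking a link that
# is heavier than every adjacent remaining link.

def locally_heaviest_matching(w):
    """Greedy 2-approximation of the maximum-weight matching."""
    M, matched = set(), set()
    for e in sorted(w, key=w.get, reverse=True):
        u, v = e
        if u not in matched and v not in matched:
            M.add(e)
            matched.update((u, v))
    return M
```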

14 Strawman Example 1
(Figure: a line network; the strawman grabs one locally heaviest link at a time, raising W(M) from 20 to 21, 22, and eventually 29, where W(M_opt) = 45 and the 2-approximation is finally reached)
Can take Ω(|V|) time to converge to the approximation!

15 Strawman Example 2
(Figure: a new weight-10 link becomes locally heaviest; grabbing it drops W(M) from 24 to 16, recovering only to 17)
Can decrease the matching weight!

16 DynaMatch Algorithm Idea
Grab maximal augmenting links
– A link e is augmenting if adding e to M increases w(M)
– Augmentation weight: w(e) − w(M ∩ adj(e)) > 0
– A maximal augmenting link has maximum augmentation weight among adjacent links
(Figure: links of weights 4, 9, 7, 3, 1; one is augmenting but NOT maximal, another is maximal augmenting)
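Under the same toy model, a sketch of one DynaMatch step as the slide describes it: compute each link's augmentation weight and grab links whose augmentation weight is positive and maximal among adjacent candidate links, dropping the matched links they conflict with. The helper names are mine, not the paper's.

```python
# A sketch of one DynaMatch step under the toy model above.

def adjacent(e, f):
    """Two distinct links conflict if they share a node."""
    return e != f and bool(set(e) & set(f))

def augmentation_weight(w, M, e):
    """w(e) minus the weight of matching links adjacent to e."""
    return w[e] - sum(w[f] for f in M if adjacent(e, f))

def grab_maximal_augmenting(w, M):
    """Add every link whose augmentation weight is positive and not
    exceeded by any adjacent candidate link, dropping conflicts.
    Each grab strictly increases w(M)."""
    for e in w:
        if e in M:
            continue
        a = augmentation_weight(w, M, e)
        if a > 0 and all(a >= augmentation_weight(w, M, f)
                         for f in w if adjacent(e, f) and f not in M):
            M -= {f for f in M if adjacent(e, f)}
            M.add(e)
    return M
```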

17 Example 2 Revisited
More stable after changes
Monotonically increasing matching weight
(Figure: the example from slide 15; DynaMatch keeps the matching stable)

18 Example 1 Revisited
Faster convergence to the approximation
(Figure: the line network from slide 14; DynaMatch reaches the approximation quickly)

19 General Result
After a local change
– Link/node added, removed, or weight change
Convergence to the approximation within a constant number of steps
– Even before the algorithm is quiescent (stable)
– Assuming it had stabilized before the change

20 LiMoSense – Live Monitoring in Dynamic Sensor Networks
With Ittay Eyal, Raphi Rom; ALGOSENSORS'11

21 The Problem
In a sensor network
– Each sensor has a read value
Average aggregation
– Compute the average of the read values
Live monitoring
– Inputs constantly change
– Dynamically compute the “current” average
Motivation
– Environmental monitoring
– Cloud facility load monitoring
(Figure: sensors with read values 7, 12, 8, 23, 5, 5, 10, 11, 22)

22 Requirements
Robustness
– Message loss
– Link failure/recovery – battery decay, weather
– Node crash
Limited bandwidth (battery) and memory in nodes (motes)
No centralized server
– Challenge: cannot collect the values
– Employ in-network aggregation

23 Previous Work: One-Shot Average Aggregation
Assumes static input (sensor reads)
Output at all nodes converges to the average
Gossip-based solution [Kempe et al.]
– Each node holds a weighted estimate
– Sends part of its weight to a neighbor
Invariant: read sum = weighted sum over all nodes and links
(Figure: a node holding (10, 1) keeps (10, 0.5) and sends (10, 0.5); the receiver, holding (8, 1.5), merges it into (8.5, 2))
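A sketch of this one-shot gossip scheme in the slide's (estimate, weight) form, with a simple random-pairing loop standing in for details the slide leaves open: a node sends half its weight along with its current estimate, and the receiver merges by weighted average, which preserves the invariant at every step.

```python
import random

# Sketch of one-shot gossip aggregation [Kempe et al.], in
# (estimate, weight) form. Invariant: the sum of estimate*weight over
# all nodes and in-flight messages equals the sum of the read values.

def send_half(state, u):
    """Node u keeps half its weight and emits (estimate, weight/2)."""
    est, wt = state[u]
    state[u] = (est, wt / 2)
    return (est, wt / 2)

def merge(state, v, msg):
    """Node v folds an incoming (estimate, weight) pair into its own
    by weighted average, preserving estimate*weight totals."""
    est, wt = state[v]
    m_est, m_wt = msg
    total = wt + m_wt
    state[v] = ((est * wt + m_est * m_wt) / total, total)

# Usage: all estimates converge to the average of the reads (3.5).
reads = {u: float(u) for u in range(8)}          # hypothetical inputs
state = {u: (r, 1.0) for u, r in reads.items()}  # (estimate, weight)
for _ in range(2000):
    u, v = random.sample(list(state), 2)
    merge(state, v, send_half(state, u))
print([round(e, 3) for e, _ in state.values()])
```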

24 LiMoSense: Live Aggregation
Adjust to read value changes
Challenge: the old read value may have spread to an unknown set of nodes
Idea: update the weighted estimate
– To fix the invariant
Adjust the estimate: add the change in the read value, divided by the node's current weight

25 Adjusting the Estimate
Example: read value changes 0 → 1
– (3, 1) before, (4, 1) after
– (3, 2) before, (3.5, 2) after
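The adjustment rule can be read off this example; a sketch under the same (estimate, weight) node state as above, with the caveat that the slide's exact case analysis is not captured in the transcript:

```python
# Sketch of the LiMoSense adjustment: when a node's read value
# changes, fold the change into its estimate, scaled by the node's
# current weight, so estimate*weight absorbs exactly the new read sum
# and the global invariant is restored.

def on_read_change(state, u, old_read, new_read):
    est, wt = state[u]
    state[u] = (est + (new_read - old_read) / wt, wt)

# The slide's example, read value 0 -> 1:
state = {"a": (3.0, 1.0), "b": (3.0, 2.0)}
on_read_change(state, "a", 0.0, 1.0)   # (3,1) -> (4,1)
on_read_change(state, "b", 0.0, 1.0)   # (3,2) -> (3.5,2)
assert state == {"a": (4.0, 1.0), "b": (3.5, 2.0)}
```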

26 Robust Aggregation Challenges
Message loss
– Breaks the invariant
– Solution idea: send a summary of all previous values transmitted on the link
Weight → infinity
– Solution idea: hybrid push-pull solution, pull with negative weights
Link/node failures
– Solution idea: undo sent messages
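For the message-loss idea, a sketch of what "send a summary of all previous values transmitted on the link" could look like; the class and field names are hypothetical, not LiMoSense's actual message format:

```python
# Hypothetical sketch of loss-tolerant link summaries: the sender
# transmits the running total of everything it has ever pushed on the
# link, and the receiver applies only the difference from the last
# total it saw, so a lost message is recovered by the next one.

class LossyLink:
    def __init__(self):
        self.sent_total = 0.0   # cumulative value pushed by sender
        self.seen_total = 0.0   # cumulative value applied by receiver

    def send(self, amount):
        self.sent_total += amount
        return self.sent_total          # summary of all prior sends

    def receive(self, summary):
        delta = summary - self.seen_total
        self.seen_total = summary
        return delta                    # apply only what is new

link = LossyLink()
link.send(2.0)                               # this message is lost
assert link.receive(link.send(1.0)) == 3.0   # the next one recovers it
```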

27 Correctness Results
Theorem 1: The invariant always holds
Theorem 2: After GST (global stabilization time), all estimates converge to the average
Convergence rate: exponential decay of the mean square error

28 Simulation Example
100 nodes
Input: standard normal distribution
10 nodes change their values by +10

29 Simulation Example 2
100 nodes
Input: standard normal distribution
Every 10 steps, 10 nodes change their values by +0.01

30 Summary
LiMoSense – Live Average Monitoring
– Aggregates dynamic data reads
Fault tolerant
– Message loss, link failure, node crash
Correctness in dynamic asynchronous settings
Exponential convergence after GST
Quick reaction to dynamic behavior

31 Correctness of Gossip-Based Membership under Message Loss
With Maxim Gurevich; PODC'09, SICOMP 2010

32 The Setting
Many nodes – n
– 10,000s, 100,000s, 1,000,000s, …
Come and go
– Churn (= ever-changing input)
Fully connected network topology
– Like the Internet
Every joining node knows some others
– (Initial) connectivity

33 Membership, or Peer Sampling
Each node needs to know some live nodes
Has a view
– Set of node ids
– Supplied to the application
– Constantly refreshed (= dynamic output)
Typical size – log n

34 Applications
– Gossip-based algorithms
– Unstructured overlay networks
– Gathering statistics
These work best with random node samples:
– Gossip algorithms converge fast
– Overlay networks are robust, good expanders
– Statistics are accurate

35 Modeling Membership Views
Views are modeled as a directed graph: an edge u → v means v appears in u's view
(Figure: node u's view {v, y, w, …} drawn as edges u → v, u → y, u → w)

36 Modeling Protocols: Graph Transformations
The view is used for maintenance
Example: the push protocol
(Figure: views before and after a push step, shown as a graph transformation)
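For concreteness, a sketch of one common push variant, labeled as an assumption since the slide does not spell out the rule: u sends its own id to a node drawn from its view, and the receiver overwrites a random view entry with it.

```python
import random

# Hypothetical push step (assumed variant, not necessarily the exact
# rule on the slide): u pushes its id to a node v from u's view, and
# v overwrites a random entry of its own view with u.

def push_step(views, u):
    v = random.choice(views[u])              # gossip target from u's view
    i = random.randrange(len(views[v]))      # entry of v's view to evict
    views[v][i] = u                          # v now knows u

views = {"u": ["v", "w"], "v": ["w", "z"], "w": ["u"], "z": ["v"]}
push_step(views, "u")
```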

37 Desirable Properties?
Randomness
– Views should include random samples
Holy grail for samples: IID
– Each sample uniformly distributed
– Each sample independent of other samples
Avoid spatial dependencies among view entries
Avoid correlations between nodes
– Good load balance among nodes

38 What About Churn?
Views should constantly evolve
– Remove failed nodes, add joining ones
Views should evolve to IID from any state
Minimize temporal dependencies
– Dependence on the past should decay quickly
– Useful for applications requiring fresh samples

39 Global Markov Chain
A global state – all n views in the system
A protocol action – a transition between global states
Together these define a global Markov chain G

40 Defining Properties Formally
Small views
– Bounded d_out(u)
Load balance
– Low variance of d_in(u)
From any starting state, eventually (in the stationary distribution of the MC on G):
– Uniformity: Pr(v ∈ u.view) = Pr(w ∈ u.view)
– Spatial independence: Pr(v ∈ u.view | y ∈ w.view) = Pr(v ∈ u.view)
– Perfect uniformity + spatial independence ⇒ load balance

41 Temporal Independence
Time to obtain views independent of the past
From an expected state
– Refresh rate in the steady state
Would have been much longer had we considered starting from an arbitrary state
– O(n^14) [Cooper09]

42 Existing Work: Practical Protocols
Tolerate asynchrony and message loss
Studied only empirically
– Good load balance [Lpbcast, Jelasity et al 07]
– Fast decay of temporal dependencies [Jelasity et al 07]
– But induce spatial dependence
(Figure: the push protocol among nodes u, v, w)

43 Existing Work: Analysis
Analyzed theoretically [Allavena et al 05, Mahlmann et al 06]
– Uniformity, load balance, spatial independence
– Weak (worst-case) bounds on temporal independence
Unrealistic assumptions – hard to implement
– Atomic actions with bi-directional communication
– No message loss
(Figure: the shuffle protocol exchanging view entries between nodes)

44 Our Contribution: Bridge This Gap
A practical protocol
– Tolerates message loss, churn, failures
– No complex bookkeeping for atomic actions
Formally prove the desirable properties
– Including under message loss

45 Send & Forget Membership
The best of push and shuffle
Perfect randomness without loss
Some view entries may be empty
(Figure: u sends ids from its view to one of its view entries, then forgets them)
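A sketch of the send & forget rule as reconstructed from this slide and the next two, so treat the details as assumptions: u picks two view entries v and w, sends the pair (u, w) to v and forgets both, leaving empty slots behind; v stores the received ids in empty slots of its own view.

```python
import random

# Sketch of send & forget, reconstructed from the slides (assumed
# details): views are fixed-size lists with EMPTY slots.

EMPTY = None

def sf_send(views, u):
    ids = [i for i, x in enumerate(views[u]) if x is not EMPTY]
    i, j = random.sample(ids, 2)
    v, w = views[u][i], views[u][j]
    views[u][i] = views[u][j] = EMPTY   # forget both entries
    return v, (u, w)                    # destination and payload

def sf_receive(views, v, payload):
    for x in payload:
        if EMPTY in views[v]:
            views[v][views[v].index(EMPTY)] = x
        # else: x is dropped (see "no empty entries" on the next slide)

views = {"u": ["v", "w"], "v": [EMPTY, "z"],
         "w": ["u", EMPTY], "z": ["v", EMPTY]}
dest, payload = sf_send(views, "u")
sf_receive(views, dest, payload)
```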

46 S&F: Message Loss
Sent ids can disappear
– On message loss, or when there are no empty entries in v’s view

47 S&F: Compensating for Loss
Edges (view entries) disappear due to loss
Need to prevent views from emptying out
Keep the sent ids when there are too few ids in the view
– Push-like when views are too small
– But rare enough to limit dependencies

48 S&F: Advantages
No bi-directional communication
– No complex bookkeeping
– Tolerates message loss
Simple
– Without unrealistic assumptions
– Amenable to formal analysis
Easy to implement

49 Key Contribution: Analysis
Degree distribution (load balance)
Stationary distribution of the MC on the global graph G
– Uniformity
– Spatial independence
– Temporal independence
These hold even under (reasonable) message loss!

50 Conclusions
Ever-changing networks are here to stay
In these, we need to solve dynamic versions of network problems
We discussed three examples
– Matching
– Monitoring
– Peer sampling
Many more have yet to be studied

51 Thanks!
Liat Atsmon Guz, Gil Zussman
Ittay Eyal, Raphi Rom
Maxim Gurevich

