Link Reversal Algorithms
Jennifer L. Welch [Welch and Walter, 2012]

What is Link Reversal?
- A distributed algorithm design technique
- Used in solutions for a variety of problems: routing, leader election, mutual exclusion, scheduling, resource allocation, …
- Model the problem as a directed graph and reverse the direction of links appropriately
- Use local knowledge to decide which links to reverse

Outline
- Routing in a Graph: Correctness
- Routing in a Graph: Complexity
- Routing and Leader Election in a Distributed System
- Mutual Exclusion in a Distributed System
- Scheduling in a Graph
- Resource Allocation in a Distributed System

Routing [Gafni & Bertsekas 1981]
- An undirected connected graph represents the communication topology of a system, with a unique destination node
- Assign virtual directions to the graph edges (links) such that if nodes forward messages over the links, they reach the destination
- The directed version of the graph (orientation) must be acyclic and have the destination as its only sink
- Thus every node has a path to the destination
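The two requirements (acyclic, destination the only sink) can be checked mechanically. Below is a minimal Python sketch (the function and variable names are our own, not from the lecture); it tests acyclicity by repeated source removal:

```python
def valid_routing(edges, dest):
    """Check that a directed graph is acyclic and has dest as its only sink,
    which guarantees every vertex has a directed path to dest (sketch).

    edges: iterable of directed links (u, v), meaning u -> v.
    """
    edges = set(edges)
    nodes = {x for e in edges for x in e}
    # dest must be the unique vertex with no outgoing link
    sinks = {v for v in nodes if not any(u == v for (u, _) in edges)}
    if sinks != {dest}:
        return False
    # acyclicity via repeated source removal (Kahn-style)
    remaining, links = set(nodes), set(edges)
    while remaining:
        sources = {v for v in remaining if not any(w == v for (_, w) in links)}
        if not sources:
            return False  # every remaining vertex has an incoming link: cycle
        remaining -= sources
        links = {(u, w) for (u, w) in links if u not in sources}
    return True
```

For example, an orientation b -> a -> D passes the check, while one containing the cycle a -> b -> a fails.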

Routing Example (figure: a graph oriented so that every vertex has a directed path to destination D)

Mending Routes
- What happens if some edges go away? We might need to change the virtual directions on some remaining edges (reverse some links)
- More generally, starting with an arbitrary directed graph, each node should decide independently which of its incident links to reverse

Mending Routes Example (figure: a graph with destination D, before and after link reversals)

Sinks
- A vertex with no outgoing links is a sink
- The property of being a sink can be detected locally
- A sink can then reverse some incident links
- This is the basis of several algorithms…

Full Reversal Routing Algorithm
Input: directed graph G with destination vertex D
Let S(G) be the set of sinks in G other than D
while S(G) is nonempty do
    reverse every link incident on a vertex in S(G)
    (G now refers to the resulting directed graph)
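The loop above can be sketched directly in Python (an illustrative sketch; the edge-set encoding and names are our own):

```python
def full_reversal(edges, dest):
    """Greedy Full Reversal: repeatedly reverse all links incident on every
    sink other than dest, until dest is the only sink.

    edges: iterable of directed links (u, v), meaning u -> v.
    Assumes the input orientation is acyclic, so the loop terminates.
    """
    edges = set(edges)
    while True:
        nodes = {x for e in edges for x in e}
        # a sink has no outgoing link; this needs only local information
        sinks = {v for v in nodes
                 if v != dest and not any(u == v for (u, _) in edges)}
        if not sinks:
            return edges
        # all of a sink's incident links are incoming, so (u, v) -> (v, u)
        edges = {(v, u) if v in sinks else (u, v) for (u, v) in edges}
```

For example, `full_reversal({('D', 'a'), ('b', 'a')}, 'D')` returns `{('a', 'D'), ('b', 'a')}`, in which every vertex has a directed path to D.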

Full Reversal (FR) Routing Example (figure: a sequence of graph snapshots with destination D)

Why Does FR Terminate?
- Suppose it does not. Let W be the set of vertices that take infinitely many steps, and X the set of vertices that take finitely many steps; X includes D.
- Since the graph is connected, there are neighboring vertices w in W and x in X. Consider the first step by w after the last step by x: the link is directed w → x and stays that way forever.
- Then w is never again a sink, so w cannot take any more steps, a contradiction.

Why is FR Correct?
- Assume the input graph is acyclic.
- Acyclicity is preserved at each iteration: any new cycle would have to include a vertex that just took a step, but such a vertex is now a source (has no incoming links).
- When FR terminates, no vertex except possibly D is a sink.
- A DAG must have at least one sink: if there were none, a cycle could be constructed.
- Thus the output graph is acyclic and D is the unique sink.

Pair Algorithm
- FR can be implemented by having each vertex v keep an ordered pair (c, v), the height (or vertex label) of v:
  - c is an integer counter that can be incremented
  - v is the id of vertex v
- View the link between v and u as directed from the vertex with larger height to the vertex with smaller height (compare pairs lexicographically)
- If v is a sink, then v sets c to be 1 larger than the maximum counter of all of v's neighbors
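A small Python sketch of the pair encoding (the dictionaries and function names are ours). Heights are compared lexicographically, and a sink's step raises its counter above all of its neighbors' counters, which reverses every incident link at once; the destination simply never takes a step.

```python
def pair_is_sink(heights, nbrs, v):
    """v is a sink iff its height is lexicographically below all neighbours'.

    heights: node -> (counter, id); nbrs: node -> set of neighbours.
    """
    return all(heights[u] > heights[v] for u in nbrs[v])

def pair_step(heights, nbrs, v):
    """Sink v raises its counter above every neighbour's counter,
    reversing all of its incident links at once (a Full Reversal step)."""
    c = 1 + max(heights[u][0] for u in nbrs[v])
    heights[v] = (c, heights[v][1])
```

With D = 0 at height (1, 0) and vertices 1, 2 at heights (0, 1), (0, 2) on a path D–1–2, vertex 1 is the sink; after its step it has height (2, 1), and vertex 2 becomes the sink.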

Pair Algorithm Example (figure: a graph whose vertices have heights (0,2), (0,1), (1,0), (2,3), (2,1))

Pair Algorithm Example, continued (figure: a sink has taken a step and now has height (3,2))


Partial Reversal Routing Algorithm
- Tries to avoid repeated reversals of the same link
- Vertices keep track of which incident links have been reversed recently
- Link (u,v) is reversed by v iff the link has not been reversed by u since the last iteration in which v took a step

Partial Reversal (PR) Routing Example (figure: a sequence of graph snapshots with destination D)

Why is PR Correct?
- Termination can be proved similarly to FR; the difference is that it might take two steps by w after the last step by x until the link is directed w → x.
- Preservation of acyclicity is more involved and is deferred to later.

Triple Algorithm
- PR can be implemented by having each vertex v keep an ordered triple (a, b, v), the height (or vertex label) of v:
  - a and b are integer counters
  - v is the id of vertex v
- View the link between v and u as directed from the vertex with larger height to the vertex with smaller height (compare triples lexicographically)
- If v is a sink, then v:
  - sets a to be 1 greater than the smallest a of all its neighbors
  - sets b to be 1 less than the smallest b of all its neighbors with the new value of a (if there are none, leaves b alone)
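The sink's update rule can be sketched in Python (the data layout and names are our own):

```python
def triple_step(heights, nbrs, v):
    """One Partial Reversal step by sink v in the triple-height encoding.

    heights: node -> (a, b, id); nbrs: node -> set of neighbours.
    """
    a = 1 + min(heights[u][0] for u in nbrs[v])
    # b must drop below every neighbour that already has the new a value;
    # if no neighbour has it, b is left unchanged
    bs = [heights[u][1] for u in nbrs[v] if heights[u][0] == a]
    b = min(bs) - 1 if bs else heights[v][1]
    heights[v] = (a, b, heights[v][2])
```

For instance, a sink 2 at height (0, 0, 2) with neighbors at (1, 0, 1) and (0, 2, 3) moves to (1, -1, 2): it then exceeds the neighbor at (0, 2, 3) but stays below the one at (1, 0, 1), so only some incident links reverse, a partial reversal.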

Triple Algorithm Example (figure: a graph whose vertices have heights (0,0,2), (0,0,1), (0,1,0), (0,2,3), (1,0,1))

Triple Algorithm Example, continued (figure: a sink has taken a step and now has height (1,-1,2))


General Vertex Label Algorithm
- A generalization of the Pair and Triple algorithms
- Assign a label to each vertex such that:
  - labels are drawn from a totally ordered, countably infinite set
  - the new label for a sink depends only on the old labels of the sink and its neighbors
  - the sequence of labels taken on by a vertex increases without bound
- Termination and acyclicity preservation, and thus correctness, can be proved.

Binary Link Labels Routing [Charron-Bost et al. SPAA 2009]
- An alternate way to implement and generalize FR and PR
- Instead of unbounded vertex labels, apply binary link labels to the input DAG; link directions are independent of the labels (in contrast to algorithms using vertex labels)
- Algorithm for a sink:
  - if at least one incident link is labeled 0, reverse all incident links labeled 0 and flip the labels on all incident links
  - if no incident link is labeled 0, reverse all incident links but change no labels
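A Python sketch of the sink's rule (the edge encoding and names are our own):

```python
def bll_step(edges, v):
    """One Binary Link Labels step by sink v.

    edges: set of (u, w, label) meaning u -> w, with label in {0, 1}.
    Since v is a sink, every incident link has the form (u, v, label).
    """
    incident = {e for e in edges if v in (e[0], e[1])}
    assert all(w == v for (_, w, _) in incident), "v must be a sink"
    rest = edges - incident
    if any(lab == 0 for (_, _, lab) in incident):
        # reverse only the 0-labelled links; flip every incident label
        new = {((v, u, 1) if lab == 0 else (u, v, 0))
               for (u, _, lab) in incident}
    else:
        # no 0-labelled incident link: reverse everything, labels unchanged
        new = {(v, u, lab) for (u, _, lab) in incident}
    return rest | new
```

With all incident labels 1 the step reverses everything (FR-like behavior); with mixed labels only the 0-labelled links reverse (PR-like behavior).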

Binary Link Labels Example (figure: a sequence of labeled graph snapshots)



Why is BLL Correct?
- Termination can be proved very similarly to termination for PR.
- What about acyclicity preservation? It depends on the initial labeling:

Conditions on Initial Labeling
- All labels are the same:
  - all 1's ⇒ Full Reversal
  - all 0's ⇒ Partial Reversal
- Every vertex has all incoming links labeled the same (a "uniform" labeling)
- Both of the above are special cases of a more general condition that is necessary and sufficient for preserving acyclicity

What About Complexity?
- Busch et al. (2003, 2005) initiated the study of the performance of link reversal routing
- Work complexity of a vertex: the number of steps taken by the vertex
- Global work complexity: the sum of the work complexities of all vertices
- Time complexity: the number of iterations, assuming all sinks take a step in each iteration (the "greedy" execution)

Worst-Case Work Complexity Bounds [Busch et al.]
- A bad vertex has no (directed) path to the destination
- Pair algorithm (Full Reversal):
  - for every input, the global work complexity is O(n²), where n is the number of initial bad vertices
  - for every n, there exists an input with n bad vertices and global work complexity Ω(n²)
- Triple algorithm (Partial Reversal): same as the Pair algorithm

Exact Work Complexity Bounds
- A more fine-grained question: given any input graph and any vertex in that graph, exactly how many steps does that vertex take?
- Busch et al. answered this question for FR.
- Charron-Bost et al. answered it for BLL (as long as the labeling satisfies the Acyclicity Condition), which includes FR and PR.

Definitions [Charron-Bost et al. SPAA 2009]
Let X = ⟨v_0, v_1, …, v_k⟩ be a chain in the labeled input DAG (a sequence of vertices such that either (v_i, v_{i+1}) or (v_{i+1}, v_i) is a link).
- r: the number of links that are labeled 1 and rightway ((v_i, v_{i+1}) is a link)
- s: the number of occurrences of vertices such that the two adjacent links are incoming and labeled 0
- Res: 1 if the last link in X is labeled 0 and rightway, else 0
- ω: equal to 2(r + s) + Res

Example of Definitions (figure: two chains in a graph with destination D)
- For one chain: r = 1, s = 0, Res = 1, so ω = 3
- For the other: r = 2, s = 1, Res = 0, so ω = 6
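The quantities r, s, Res, and ω can be computed directly from a chain. Below is a Python sketch (the encoding and names are ours); the example chains in the usage note are our own constructions chosen to reproduce the values in the example above, since the slide's actual chains are not shown.

```python
def omega(chain, edges):
    """Compute omega = 2(r + s) + Res for a chain in a labeled DAG.

    chain: list of vertices [v0, ..., vk].
    edges: dict mapping a directed link (u, w) to its label in {0, 1}.
    """
    r = s = 0
    for i in range(len(chain) - 1):
        u, w = chain[i], chain[i + 1]
        if edges.get((u, w)) == 1:
            r += 1                     # rightway link labeled 1
    for i in range(1, len(chain) - 1):
        left = (chain[i - 1], chain[i])    # incoming iff it points at chain[i]
        right = (chain[i + 1], chain[i])
        if edges.get(left) == 0 and edges.get(right) == 0:
            s += 1                     # both adjacent links incoming, labeled 0
    u, w = chain[-2], chain[-1]
    res = 1 if edges.get((u, w)) == 0 else 0
    return 2 * (r + s) + res
```

For instance, `omega(['D', 'a', 'b'], {('D', 'a'): 1, ('a', 'b'): 0})` evaluates to 3 (r = 1, s = 0, Res = 1).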

Outline of BLL Work Complexity
- Claim 1: A step taken by v decreases the ω value of all chains from D to v by the same amount; steps taken by other vertices have no effect on the ω value of chains from D to v.
- Claim 2: When the algorithm terminates, at least one chain from D to v is the reverse of a path from v to D; the value of ω for this chain is 0, since it has no rightway links.
- Thus the number of steps by v is the number required for the reverse of a D-to-v chain to become a path for the first time.
- We need to quantify how ω_min decreases when v takes a step (ω_min is the minimum, over all chains X from D to v, of ω for X).

Grouping the Nodes
- S: the set of all sinks whose links are all labeled 0
- N: the set of all nodes whose incoming links are all labeled 1
- O: all other nodes

Finishing BLL Work Complexity
- Claim 3: Let X be a D-to-v chain. When v takes a step:
  - if v is in S, then ω for X decreases by 1 and v moves to N
  - if v is in N, then ω for X decreases by 2 and v stays in N
  - if v is in O, then ω for X decreases by 1 and v stays in O
- Theorem: The number of steps taken by v is:
  - (ω_min + 1)/2 if v is in S initially
  - ω_min/2 if v is in N initially
  - ω_min if v is in O initially

Work Complexity for FR
- Corollary: For FR (the all-1's labeling), the work complexity of vertex v is the minimum, over all chains from D to v, of r, the number of links in the chain that are rightway (directed away from D).
- Worst-case graph for global work complexity: a path on vertices D, 1, 2, …, n with every link directed away from D; vertex i has work complexity i, so the global work complexity is Θ(n²).
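The quadratic behavior can be checked by counting steps per vertex in the greedy execution. This sketch (our own code, assuming the worst-case path has every link directed away from D, as the corollary suggests) confirms that vertex i takes exactly i steps:

```python
def fr_step_counts(edges, dest, limit=10_000):
    """Run greedy Full Reversal and count steps taken by each vertex.

    edges: iterable of directed links (u, w); dest: the destination.
    """
    edges = set(edges)
    counts = {}
    for _ in range(limit):
        nodes = {x for e in edges for x in e}
        sinks = {v for v in nodes
                 if v != dest and not any(u == v for (u, _) in edges)}
        if not sinks:
            return counts
        for v in sinks:
            counts[v] = counts.get(v, 0) + 1
        edges = {(w, u) if w in sinks else (u, w) for (u, w) in edges}
    raise RuntimeError("did not terminate within the step limit")
```

On the path 0 -> 1 -> 2 -> ... -> 5 with destination 0, the step counts come out to {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}, i.e., a total of Θ(n²) steps.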

Work Complexity for PR
- Corollary: For PR (the all-0's labeling), the work complexity of vertex v is:
  - the minimum, over all D-to-v chains, of s + Res, if v is a sink or a source
  - the minimum, over all D-to-v chains, of 2s + Res, if v is neither a sink nor a source
- Worst-case graph for global work complexity: a path D, 1, 2, 3, …, n; vertex i has work complexity Θ(i), so the global work complexity is Θ(n²).

Comparing FR and PR
- Worst-case global work complexity shows no difference: both are quadratic in the number of bad nodes.
- Game theory can be used to show some meaningful differences (Charron-Bost et al. ALGOSENSORS 2009):
  - the global work complexity of FR can be larger than optimal (w.r.t. all uniform labelings) by a factor of Θ(n)
  - the global work complexity of PR is never more than twice the optimal
- Another advantage of PR over FR:
  - in PR, if k links are removed, each bad vertex takes at most 2k steps
  - in FR, if 1 link is removed, a vertex might have to take n-1 steps

Time Complexity
- Time complexity is the number of iterations in the greedy execution.
- Busch et al. observed that time complexity cannot exceed global work complexity; thus O(n²) iterations for both FR and PR.
- Busch et al. also showed graphs on which FR and PR require Ω(n²) iterations.
- Charron-Bost et al. (2011) derived, for BLL, an exact formula for the last iteration in which any vertex takes a step in any graph…

FR Time Complexity Overview
- Let W_v(t) be the number of steps v has taken by iteration t.
- Identify a recurrence relation for W_v(t) based on understanding how nodes and their neighbors take turns being sinks; this recurrence is linear in the min-plus algebra.
- Thus the set of recurrences for all the vertices can be represented as a matrix, which can be interpreted as the adjacency matrix of a graph H.
- Restate the value of W_v(t) in terms of properties of paths in H.
- Derive a formula for the time complexity of vertex v based on properties of paths in H.
- Translate the previous formula into properties of the original input graph.

FR Time Complexity
- Theorem: For every bad vertex v, the termination time of v is 1 + max{ len(X) : X is a chain ending at v with r = σ_v - 1 }, where σ_v is the work complexity of v.
- Worst-case graph for global time complexity (figure: a cycle on vertices 1, 2, …, n attached to destination D): vertex n/2 has work complexity n/2; consider the chain that goes around the loop counter-clockwise n/2 - 1 times, starting and ending at n/2: it has r = n/2 - 1 and length Θ(n²).

BLL Time Complexity
- What about other link labelings? Transform to FR!
- In more detail: for every labeled input graph G (satisfying the Acyclicity Condition), construct another graph T(G) such that:
  - for every execution of BLL on G, there is a "corresponding" execution of FR on T(G)
  - the time complexities of relevant vertices in the corresponding executions are the same

Idea of Transformation
- If a vertex v is initially in category O (a sink with some links labeled 0 and some labeled 1, or a non-sink with an incoming link labeled 0), then its incident links are partitioned into two sets:
  - all links in one set reverse at odd-numbered steps by v
  - all links in the other set reverse at even-numbered steps by v
- The transformation replaces each vertex in O with two vertices, one corresponding to odd steps by v and the other to even steps, and inserts appropriate links.

PR Time Complexity
- Theorem: For every bad vertex v, the termination time of v is:
  - 1 + max{ len(X) : X is a chain ending at v with s + Res = σ_v - 1 } if v is initially a sink or a source
  - 1 + max{ len(X) : X is a chain ending at v with 2s + Res = σ_v - 1 } otherwise
- Worst-case graph for global time complexity: a path D, 1, 2, 3, …, n/2, n/2+1, …, n. Vertex n/2 has work complexity n/2. Consider the chain that starts at n/2, ends at n/2, and goes back and forth between n/2 and n, making (n-2)/4 round trips: 2s + Res for this chain is n/2 - 1, and its length is Θ(n²).

FR vs. PR Again
- On the chain of the previous slide, PR has quadratic time complexity, but FR has linear time complexity; in fact, FR has linear time complexity on any tree.
- On that chain, PR has slightly better global work complexity than FR.

From Graph to Distributed System
- To adapt the previous ideas to a distributed system: a processor is a vertex, a communication channel is an edge (link).
- Issues to be overcome:
  - neighboring processors need to communicate to agree on which way the link between them should be directed: delays and losses
  - the topology can change due to movement and failures; the system might not always be connected

Routing in a Dynamic System
In any execution that experiences a finite number of topology changes, after the last topology change:
- every node in the same connected component as the destination D should have a path to D
- every node not in the same component as D stops trying to find a route to D or forward a message to D

What's Wrong with FR? (figure: a sequence of snapshots of a component with vertices 1, 2, 3 and destination D; vertices disconnected from D keep reversing links without terminating)

TORA [Park & Corson 1997]
- Modifies the generalized Gafni & Bertsekas algorithm that uses increasing vertex labels.
- Vertex labels, or heights, are 5-tuples; one entry is the current time: Temporally Ordered Routing Algorithm.
- Every processor in the same connected component as D eventually has a path to D in the directed version of the communication graph induced by the heights.
- Clever use of the additional entries in the heights allows nodes to tell when they are partitioned from D and should stop participating.

Heights in TORA: the 5-tuple (τ, oid, r, δ, i)
- Reference level (τ, oid, r):
  - τ: the time this reference level was started
  - oid: the id of the node originating this reference level
  - r: the reflection bit
- δ (delta): orders nodes with the same reference level
- i: the id of the node, breaks ties

TORA Overview
- Route Creation: use standard spanning tree construction ideas to set reference levels to (0,0,0) and deltas to distances from D.
- Route Maintenance and Partition Detection: see next slide.
- Route Erasure: when a partition is detected, flood "clear" messages throughout the component.

TORA Route Maintenance
If node i loses its last outgoing link:
- due to a link failure (Case Generate): set the reference level to (current time, i, 0), a full reversal
- due to a height change, and the neighbors don't have the same reference level (Case Propagate): adopt the maximum reference level and set delta to effect a partial reversal
- the neighbors have the same reference level with r = 0 (Case Reflect): adopt a new reference level with r set to 1 and delta set to 0 (a full reversal)
- the neighbors have the same reference level with r = 1 and oid = i (Case Detect): partition! Start the process of erasing routes.

Apr 13, 2006, Applications of Link Reversal Algorithms in MANETs
TORA Example, Partition (figure: a sequence of snapshots with destination D): Case Generate, then Propagate, Reflect, Propagate, and finally Detect

TORA Discussion
- Works best with perfectly synchronized clocks.
- How to prove correctness? The Gafni & Bertsekas result does not directly apply because of the asynchronous delay in updating neighbors about new heights.
- Other issues remain: partition detection, route creation, route erasure.
- TORA can be adapted to solve leader election: when a partition is detected, elect a new leader! (Cf. Ingram et al. 2009)

Link Reversal for Mutual Exclusion [Snepscheut 1987]
Goal: no-lockout mutual exclusion in a message-passing system with a tree communication topology.
Solution: pass around a unique "token" message:
- impose logical directions on the communication channels such that the token holder is the unique sink
- when a processor has the token, it can enter the critical section
- when a processor needs the token, it sends a "request" message on its unique outgoing link, toward the token holder
- when a processor receives a request, it remembers it in a FIFO queue and forwards it toward the token holder (if not already waiting)
- when the token holder responds to a request, it forwards the token to the neighbor at the head of the queue, and reverses the direction of that link

From a Tree to a DAG
- For a general communication topology, the previous algorithm can be run on a spanning tree overlay of the graph.
- However, this does not take advantage of the redundancy offered by the additional links.
- Instead, direct all links in the graph:
  - a request message can be forwarded on any outgoing link
  - when a node receives the token, all its outgoing links are reversed, to make it a sink

Scheduling in a Graph [Barbosa & Gafni 1989]
- What happens when the Full Reversal routing algorithm is executed without a destination? That is, every vertex in the graph does a reversal when it is a sink.
- Call this algorithm FRND (FR with No Destination).
- When a vertex is a sink, it is said to be scheduled: it can take some action with the guarantee that none of its neighbors is scheduled at the same time.

Behavior of FRND
- Claim 1: FRND maintains acyclicity. Proof: same as for FR.
- Claim 2: Every vertex is a sink infinitely often. Proof: By Claim 1, at each iteration there is at least one sink, so FRND never terminates. If some vertices took finitely many steps and some took infinitely many, the same argument used to show that FR terminates would give a contradiction. Thus every vertex is scheduled infinitely often.
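Both claims, and the periodicity discussed below, can be observed by recording the set of sinks at each greedy iteration. A minimal sketch (our own code and example):

```python
def frnd_greedy(edges, rounds):
    """Greedy FRND: at each iteration, every current sink reverses all of
    its links. Returns the sequence of sink sets, one per iteration.

    edges: iterable of directed links (u, w), meaning u -> w.
    """
    edges = set(edges)
    history = []
    for _ in range(rounds):
        nodes = {x for e in edges for x in e}
        sinks = {v for v in nodes if not any(u == v for (u, _) in edges)}
        history.append(sinks)
        edges = {(w, u) if w in sinks else (u, w) for (u, w) in edges}
    return history
```

On the 3-vertex path oriented a -> b <- c, the sink sets alternate {b}, {a, c}, {b}, {a, c}, …: no two neighbors are ever sinks together, every vertex is a sink infinitely often, and the pattern is periodic with period 2.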

FRND Example (figure: a sequence of graph snapshots)

Behavior of FRND
- Claim 3: In the greedy execution of FRND from an initial DAG, the pattern of sinks eventually becomes periodic.
- Proof: Let S_1, S_2, … be the sequence of sets of sinks in the execution. Since there are only finitely many orientations of the (finite) graph and the greedy execution is deterministic, S_i = S_j for some i < j; thus S_{i+1} = S_{j+1}, and so on.
- Note: every vertex appears at least once in every period.

Q&A about FRND
- How long until the execution becomes periodic? At most a polynomial number of iterations [Malka & Rajsbaum 1991].
- How long is the period? At least 2 iterations, since neighbors cannot be sinks simultaneously; it can be as bad as exponential [Malka et al. 1993].
- How "fair" is the period? Every vertex takes the same number of steps…

The Period is Fair
- Claim 1: The difference in the number of steps taken by u and v at any iteration is at most the distance between them. The proof is by induction on the distance.
- Claim 2: Every vertex takes the same number of steps in the period. Proof: Suppose u appears a times and v appears b times in the period, with b > a. After the k-th execution of the period, u has taken ka steps and v has taken kb steps. Eventually kb - ka exceeds the distance between u and v, contradicting Claim 1.

Multiplicity and Concurrency
- The multiplicity of the period is the number of times that each vertex takes a step in the period.
- The concurrency is the ratio of the multiplicity m to the period length p, i.e., m/p.
- Concurrency is the fraction of iterations during which any given vertex takes steps: it is at most 1/2 and at least 1/n.


Concurrency Example (figure): the period length is 5, the multiplicity is 2, so the concurrency is 2/5.

Exact Expression for Concurrency
- Claim 1: For any initial orientation of a tree, the greedy execution of FRND reaches a periodic orientation with length 2 and multiplicity 1, so the concurrency is 1/2.
- Claim 2: For any periodic orientation G of a non-tree graph, the concurrency equals the minimum, over all (simple) circuits k in G, of the fraction of links in k that are right-way.

Choosing a Good Initial Orientation
- For trees, the initial orientation is unimportant: all orientations lead to a period with concurrency 1/2.
- For non-trees, the initial orientation can make a big difference to the concurrency. Consider a ring of n vertices, where n is even:
  - if initially there is just 1 sink, there will never be more than 1 sink in any orientation: the concurrency is 1/n
  - if initially every other vertex is a sink, the vertices keep alternating: the concurrency is 1/2
- Unfortunately, determining the best orientation is NP-complete!
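The contrast is easy to simulate. The two ring orientations below are our own reconstruction of the example for n = 4: one directs the ring almost consistently (a single sink that rotates around the ring), the other makes every other vertex a sink.

```python
def sink_history(edges, rounds):
    """Greedy FRND, recording the set of sinks at each iteration (sketch).

    edges: iterable of directed links (u, w), meaning u -> w.
    """
    edges = set(edges)
    out = []
    for _ in range(rounds):
        nodes = {x for e in edges for x in e}
        sinks = {v for v in nodes if not any(u == v for (u, _) in edges)}
        out.append(sinks)
        edges = {(w, u) if w in sinks else (u, w) for (u, w) in edges}
    return out

# ring 0-1-2-3-0, directed 0 -> 1 -> 2 -> 3 and 0 -> 3: single sink 3
one_sink = {(0, 1), (1, 2), (2, 3), (0, 3)}
# same ring with every other vertex (1 and 3) a sink
alternating = {(0, 1), (2, 1), (2, 3), (0, 3)}
```

In the first orientation exactly one vertex is a sink at every iteration (concurrency 1/4); in the second, two non-adjacent vertices are sinks at every iteration (concurrency 1/2).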

Resource Allocation in a Distributed System [Chandy & Misra 1984]
- The dining philosophers (or resource allocation) problem is a generalization of the mutual exclusion problem:
  - conflict graph: vertices correspond to the processors; an edge between i and j means i and j compete for exclusive access to a resource
- Ensure exclusion: no two neighbors in the conflict graph are in their critical sections simultaneously.
- Ensure fairness: every processor requesting access to its critical section is eventually granted access.

First Solution
Use FRND on the conflict graph: when a processor is a sink, it can enter its critical section.
- every processor is a sink infinitely often
- no two neighbors are sinks simultaneously
Issues:
- How to adapt FRND to asynchronous message passing?
- Why bother a processor that is not interested in entering its critical section?

Chandy & Misra's Solution
- The key data structure is the precedence graph, a directed version of the conflict graph.
- The precedence graph is represented in a distributed fashion by having each processor keep a variable for each neighbor indicating who yields to whom; the variables are initialized so that the precedence graph is acyclic.
- Each pair of neighbors i and j share a token to ensure exclusion:
  - if i doesn't have the token when it wants to enter the critical section, it sends a request to j
  - j sends back the token immediately if j is in its remainder section, or if j is in its trying section and i has precedence over j; otherwise j defers the request from i

Chandy & Misra's Solution, continued
- Thus the precedence graph is used to arbitrate between contending neighbors, but otherwise is ignored.
- Once i has all its tokens, it enters the critical section.
- When i leaves the critical section, it satisfies all deferred requests and does a full reversal in the precedence graph.

Correctness Ideas
- Management of the tokens ensures exclusion.
- By starting with an acyclic precedence graph and only modifying it with full reversals, the precedence graph remains acyclic: no deadlock can be caused by a cycle of processors waiting on each other.

Conclusion
Other applications of link reversal include:
- distributed queueing
- k-mutual exclusion
- publish/subscribe
- simulated annealing
- graph coloring
- neural networks
The appeal of the approach is using local knowledge to solve global problems.

References
- Barbosa and Gafni, "Concurrency in Heavily Loaded Neighborhood-Constrained Systems," ACM TOPLAS, 1989.
- Busch, Surapaneni and Tirthapura, "Analysis of Link Reversal Routing Algorithms for Mobile Ad Hoc Networks," SPAA, 2003.
- Busch and Tirthapura, "Analysis of Link Reversal Routing Algorithms," SIAM Journal on Computing, 2005.
- Chandy and Misra, "The Drinking Philosophers Problem," ACM TOPLAS, 1984.
- Charron-Bost, Fuegger, Welch and Widder, "Full Reversal Routing as a Linear Dynamical System," SIROCCO, 2011.
- Charron-Bost, Fuegger, Welch and Widder, "Partial is Full," SIROCCO.
- Charron-Bost, Gaillard, Welch and Widder, "Routing Without Ordering," SPAA, 2009.
- Charron-Bost, Welch and Widder, "Link Reversal: How to Play Better to Work Less," ALGOSENSORS, 2009.
- Gafni and Bertsekas, "Distributed Algorithms for Generating Loop-Free Routes in Networks with Frequently Changing Topology," IEEE Transactions on Communications, 1981.

References
- Ingram, Shields, Walter and Welch, "An Asynchronous Leader Election Algorithm for Dynamic Networks," IPDPS, 2009.
- Malka, Moran and Zaks, "A Lower Bound on the Period Length of a Distributed Scheduler," Algorithmica, 1993.
- Malka and Rajsbaum, "Analysis of Distributed Algorithms Based on Recurrence Relations," WDAG, 1991.
- Park and Corson, "A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks," INFOCOM, 1997.
- van de Snepscheut, "Fair Mutual Exclusion on a Graph of Processes," Distributed Computing, 1987.
- Welch and Walter, "Link Reversal Algorithms," Synthesis Lectures on Distributed Computing Theory #8, Morgan & Claypool Publishers, 2012.
