Chapter 4 - Self-Stabilizing Algorithms for Model Conversions

Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems

Motivation
- Question: can the tasks achievable in Id-based systems also be achieved in a uniform dynamic system?
- Answer: No! For example, leader election in a uniform system is possible only by using randomization.
- We therefore present a randomized self-stabilizing leader-election and naming algorithm for dynamic systems.
- The algorithm stabilizes within O(d) cycles, where d is the diameter of the communication graph.

Naming
- Given any communication graph with up to N identical processors, we want to find a unique Id for each processor.
- At first, each processor has an arbitrary Id.
- By coloring its tree, a processor learns whether its Id is unique. If so, we are done; otherwise, the processor has to find a new Id.
- The idea of the propagation is similar to the update algorithm.
- Each processor chooses an Id and notifies all the others by building a spanning tree.
- In our algorithm, an Id that already exists will be replaced with a new one.

Formalization
- Each processor P_i has a table with at most N rows, called Processors_i.
- Each row represents the state of P_i in a tree rooted at some other processor (there are at most N such trees).
- Each processor also has a queue, Queue_i, with at most N^2 rows, holding tuples that should have been in the Processors_i table but for which there is no room there.
- Each row contains the fields {tid, dis, f, color, ackclr, ot, ack, list, del}.

Formalization (cont.)
Where:
- tid is the identifier of the root of a tree (range: 0 to N^10).
- dis is the distance of P_i from the root of this tree.
- f is a pointer to the parent of P_i in the tree.
- color is used to identify other trees with the same tid (range: 0 to N^10).
- ackclr is a boolean variable used by P_i to acknowledge to its parent that its color is known by all processors in P_i's subtree.

Formalization (cont.)
- ot is a boolean variable used by P_i to notify its parent of the existence of another tree with the same tid.
- ack is also a boolean variable, used to notify the parent of P_i of the termination of the propagation of its new identifier.
- list and del may each contain up to N identifiers.
- list is used by P_i to report to its parent the identifiers of the processors in its subtree.
- del is used by P_i to notify its children in the tree of conflicting identifiers.
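
To make the data layout concrete, here is a minimal sketch in Python of one table row and of the per-processor state, using the field names from these slides. The names Row, ProcessorState and my_id are ours (not the book's), list and del are renamed lst and dele because they are Python keywords, and N = 4 is only an example value.

from dataclasses import dataclass, field
from typing import List, Optional

N = 4                      # small example value for the bound on the number of processors
ID_RANGE = N ** 10         # tid and color are drawn from the range [0, N^10)

@dataclass
class Row:
    tid: int               # identifier of the root of a tree
    dis: int               # distance of P_i from that root
    f: Optional[int]       # parent of P_i in the tree (a neighbor), None at the root
    color: int             # current color of the tree with this tid
    ackclr: bool           # the color is known by all processors in P_i's subtree
    ot: bool               # "other tree": another tree with the same tid was observed
    ack: bool              # propagation of the root's identifier terminated below P_i
    lst: List[int] = field(default_factory=list)   # the 'list' field: ids reported from the subtree
    dele: List[int] = field(default_factory=list)  # the 'del' field: conflicting ids sent to children

@dataclass
class ProcessorState:
    my_id: int
    processors: List[Row] = field(default_factory=list)  # at most N rows
    queue: List[Row] = field(default_factory=list)        # at most N^2 overflow rows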

The Coloring - finding double Ids
[Figure: example of one tree T, rooted at a specific processor, next to a second tree T' with the same tid.]
We want to ensure that no two processors have the same chosen identifier. Each root repeatedly colors its tree with a randomly chosen color.
Consider a tree T with tid = x. Whenever the root P_i notices that all its children have its color and their ackclr field is true, P_i randomly chooses a new color.
Each non-root processor P_j repeatedly copies the color of its parent. If the color is new, P_j assigns false to its ackclr field. Whenever P_j reads that all its children have the same color as its own and all their ackclr fields are true, P_j assigns true to its ackclr field.
The colors are chosen from a large range. Hence, with very high probability, when a processor P_i chooses a new color, the color is new in the tree.
Whenever a processor P_j changes its color from y to z, P_j checks whether it has a neighbor that belongs to a tree T' rooted at a processor with tid = x but with a color different from both y and z. If so, P_j concludes that more than a single root of a tree with tid = x exists, and sets the value of its ot field to true.
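
The following sketch restates the coloring rules as Python functions over a local snapshot of neighbor values. The function names (root_step, non_root_step, other_tree_seen) and the flat representation of neighbors as (color, ackclr) pairs are our illustration, not the book's code.

import random

N = 4
COLOR_RANGE = N ** 10      # colors are drawn from a large range

def root_step(my_color, children):
    """Root rule: once every child shows my color with ackclr = True,
    pick a fresh random color.  children: list of (color, ackclr) pairs."""
    if all(c == my_color and a for c, a in children):
        return random.randrange(COLOR_RANGE)
    return my_color

def non_root_step(my_color, my_ackclr, parent_color, children):
    """Non-root rule: copy the parent's color; a new color resets ackclr,
    which becomes True again once all children echo the color with ackclr = True."""
    if parent_color != my_color:
        return parent_color, False
    if all(c == my_color and a for c, a in children):
        return my_color, True
    return my_color, my_ackclr

def other_tree_seen(old_color, new_color, same_tid_neighbor_colors):
    """A neighbor claims the same tid but shows a color that is neither the old
    nor the new color: two roots share this tid, so the ot flag is raised."""
    return any(c not in (old_color, new_color) for c in same_tid_neighbor_colors)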

The update part
- Processor P_i chooses a new Id.
- P_i notifies its children about its new Id, they notify their children, and so on.
- Thus we actually build a BFS spanning tree rooted at P_i.
- When the message reaches a leaf, the leaf acknowledges to its parent that it got the message.
- Each processor P_j that receives acknowledgements from all its children sends an acknowledgement to its parent.
- When the root P_i receives acknowledgements from all its children, it knows that every processor knows its Id.
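
As an illustration only, here is a tiny centralized simulation of this propagation with feedback over a fixed tree: the identifier travels down, and acknowledgements travel back up once every leaf has stored it. The tree dictionary and the function name propagate_and_ack are assumptions of the sketch.

def propagate_and_ack(tree, node, new_id, ids):
    """tree: node -> list of children; ids: node -> currently known root id.
    Writes new_id into every node of the subtree and returns True once the
    whole subtree of `node` has acknowledged (a leaf acknowledges at once)."""
    ids[node] = new_id
    acks = [propagate_and_ack(tree, child, new_id, ids) for child in tree.get(node, [])]
    return all(acks)                      # all([]) is True, so a leaf acks immediately

tree = {0: [1, 2], 1: [3]}               # node 0 is the root P_i
ids = {}
assert propagate_and_ack(tree, 0, new_id=42, ids=ids)
assert all(v == 42 for v in ids.values())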

Cont.
- All notifications between processors are tuples of the form {tid, dis, f, color, ackclr, ot, ack, list, del}, where each field contains the data the processor wants its neighbors to see.

The Algorithm
- Each processor P_i reads the tables of its neighbors, over and over.
- ReadSet_i is the union of the tuples P_i read from its neighbors.
- In each row of ReadSet_i, P_i sets the value of the f field to the appropriate neighbor, and the value of the dis field is incremented by 1.
- P_i adds a tuple with its own Id and dis = 0.
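
A small sketch of this read step, with neighbor tables reduced to (tid, dis) pairs; the function name build_read_set and the dictionary representation are ours.

def build_read_set(my_id, neighbor_tables):
    """neighbor_tables maps a neighbor's name to its rows, here reduced to
    (tid, dis) pairs.  The f field of every copied row is re-pointed at the
    neighbor it was read from, and its dis is incremented by one."""
    read_set = []
    for neighbor, rows in neighbor_tables.items():
        for tid, dis in rows:
            read_set.append({"tid": tid, "dis": dis + 1, "f": neighbor})
    read_set.append({"tid": my_id, "dis": 0, "f": None})   # P_i's own root tuple
    return read_set

read_set = build_read_set(my_id=7, neighbor_tables={3: [(7, 0), (5, 2)], 4: [(5, 1)]})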

Cont.
- For every tid = x, P_i removes every tuple with tid = x that does not have the minimal value in the dis field.
- P_i sorts the remaining tuples in ReadSet_i by the value of their dis fields. If more than N tuples are left in ReadSet_i, P_i assigns the first N tuples to Processors_i and the rest are enqueued in Queue_i.
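
A sketch of this pruning step under the same simplified row representation (names are ours; ties in dis are broken arbitrarily here):

N = 4   # example bound on the number of processors

def prune(read_set):
    """Keep one row of minimal dis per tid, order by dis, and split the result
    into the Processors part (first N rows) and the overflow for the Queue."""
    best = {}
    for row in read_set:
        tid = row["tid"]
        if tid not in best or row["dis"] < best[tid]["dis"]:
            best[tid] = row
    ordered = sorted(best.values(), key=lambda r: r["dis"])
    return ordered[:N], ordered[N:]

processors, overflow = prune([
    {"tid": 7, "dis": 0}, {"tid": 7, "dis": 2},
    {"tid": 5, "dis": 1}, {"tid": 9, "dis": 3}, {"tid": 2, "dis": 4}, {"tid": 6, "dis": 2},
])
assert [r["tid"] for r in processors] == [7, 5, 6, 9] and [r["tid"] for r in overflow] == [2]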

What if there are two processors with the same Id? - First solution
[Figure: example of one tree, rooted at a specific processor, with P_i, P_j and P_k marked.]
But this does not mean that there are no two trees with identical Ids!
Each processor P_j participates in the propagation of information with feedback from every other processor, including P_i.
When a processor P_j includes for the first time a tuple t_k with tid = x from a neighbor P_k, P_j copies the values of the fields of t_k to the corresponding fields of its own tuple t_j. P_j then modifies the values of the fields f, dis, ack and list:
- f - the new parent, P_k.
- dis - the dis of t_k plus 1.
- ack - true if P_j's neighbors hold a tuple with the same tid, and P_j's children in the tree (those whose dis equals P_j's dis + 1) also have ack = true.
- The list field of P_j contains the names of all the processors in the subtree rooted at P_j.
Thus, when the root gets acknowledgements from all its children, it also gets the list of all the processors in its tree. Now it can see whether there are two identical Ids, and propagate the list of those Ids to its children in the del field.
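
The duplicate detection at the root can be illustrated with a small centralized sketch: each node reports the identifiers of its subtree (the list field), and the root extracts the identifiers that occur more than once (the candidates for its del field). The tree layout and function names are assumed for the example.

from collections import Counter

def collect_ids(tree, ids, node):
    """tree: node -> children; ids: node -> chosen identifier.
    Returns the identifiers of all processors in the subtree rooted at node,
    i.e. what the node would report to its parent in the list field."""
    reported = [ids[node]]
    for child in tree.get(node, []):
        reported += collect_ids(tree, ids, child)
    return reported

tree = {0: [1, 2], 1: [3]}
ids = {0: 11, 1: 5, 2: 5, 3: 9}                  # two processors chose id 5
duplicates = [i for i, c in Counter(collect_ids(tree, ids, 0)).items() if c > 1]
assert duplicates == [5]                          # the root would put 5 in its del field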

What if there are two processors with the same Id? - Second solution
- Each root colors its tree as we saw in the coloring part.
- If a root finds out that its Id is not unique, it chooses a new one, and we start from the beginning.
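
A minimal sketch of this renaming rule (names are ours): a processor that learns its identifier is not unique draws a fresh one from the range 0 to N^10.

import random

N = 4
ID_RANGE = N ** 10

def maybe_rename(my_id, conflicting_ids):
    """If my_id was reported as a duplicate (through coloring or the del field),
    draw a fresh identifier from [0, N^10); otherwise keep the current one."""
    if my_id in conflicting_ids:
        return random.randrange(ID_RANGE)   # new with high probability
    return my_id

new_id = maybe_rename(my_id=5, conflicting_ids={5})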

Why does it work?
- The range of the Ids is very big (up to N^10).
- If a processor finds out that another processor has the same Id, it chooses a new one.
- In the scheduler-luck game, the choice is made by luck.
- The probability of choosing an existing identifier is negligible. Still, it could be that the result of the random function is not identical to luck's choice in the game (i.e., the new Id already appears in the system).
- In such a case, we restart the algorithm from the beginning, with the arbitrary configuration we have reached.
- Thus, in the game, every processor selects a new identifier no more than once.

The reason for the Queue table
- When a processor P_i chooses a new identifier, a tuple with tid = ID_i and dis = 0 is repeatedly written into Processors_i.
- This action starts the construction of the spanning tree rooted at P_i that is identified by tid = ID_i.
- Still, tuples with the previous identifier of P_i may be present in Processors variables.
- Thus, since the number of tuples in each Processors variable is bounded, it is unclear whether every processor can include the new tuple in its Processors variable.

The Queue table - cont.
- A processor that chooses a new identifier leaves floating tuples with small distances in the system.
- These tuples compete with entries of the Processors variables.
- Thus, it is possible that part of a tree will be disconnected and not participate in the coloring process, which can cause a false other-trees indication.
- To overcome such a scenario, each processor maintains a queue.

The Queue
- When each processor chooses an identifier no more than once, the number of distinct identifiers present in the Processors variables during the entire execution is no greater than N + N^2.
- Therefore, it is possible to use a queue of N^2 entries to ensure that, once a tuple with a specific identifier is included in Processors_i, a tuple with this identifier exists in either Processors_i or Queue_i.
- Whenever P_i computes a new value for Processors_i, it deletes from Queue_i every tuple whose tid is already represented, keeping only one tuple (the one with minimal dis) per tid. Thus, there are no two rows with an identical tid value in Queue_i.
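
The following sketch shows one plausible reading of this maintenance rule: queued rows whose tid is already represented are dropped, so Queue_i never holds two rows with the same tid. The function name refresh_queue and the dict representation are ours.

def refresh_queue(processors, queue, overflow):
    """processors, queue, overflow: lists of row dicts with a 'tid' key.
    Drop queued rows whose tid is already represented, so every identifier
    keeps at most one row in Queue_i and no tid repeats."""
    seen = {row["tid"] for row in processors}
    new_queue = []
    for row in queue + overflow:
        if row["tid"] not in seen:
            seen.add(row["tid"])
            new_queue.append(row)
    return new_queue

queue = refresh_queue(processors=[{"tid": 7}],
                      queue=[{"tid": 7}, {"tid": 5}],
                      overflow=[{"tid": 5}, {"tid": 9}])
assert [r["tid"] for r in queue] == [5, 9]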

Correctness
- Lemma: In any given configuration, the number of distinct identifiers is less than N^8.
- Proof: Each tuple may contain no more than 2N + 1 distinct identifiers (in tid, list and del). The number of tuples in Processors_i and in Queue_i is at most N + N^2. Furthermore, each processor P_i maintains two internal variables for each neighbor. Thus, over the N processors, the total number of distinct identifiers is no greater than (2N + 1)(N + N^2)N plus the lower-order contribution of the internal variables, which is less than N^8.
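
As a sanity check, here is the dominant term of the count worked out (a rough estimate that ignores the internal variables, which only add a lower-order term to the product):

\[
  \underbrace{(2N+1)}_{\text{ids per tuple}}\cdot
  \underbrace{(N+N^2)}_{\text{tuples per processor}}\cdot
  \underbrace{N}_{\text{processors}}
  \;\le\; 3N\cdot 2N^2\cdot N \;=\; 6N^4 \;<\; N^8
  \qquad\text{for } N \ge 2.
\]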

Correctness - cont.
- Lemma: The probability that every identifier-choose operation in a sequence of N identifier-choose operations results in a nonexistent identifier is greater than 1/2.
- Proof: The probability that a single identifier-choose operation results in a nonexistent identifier is at least 1 - N^8/N^10 = 1 - 1/N^2, and (1 - 1/N^2)^N > 1/2 for a sequence of N choose operations (N > 1).
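
The estimate can be spelled out as follows; the second step uses Bernoulli's inequality, and the remaining case N = 2 follows from the direct check (3/4)^2 = 9/16 > 1/2.

\[
  \Pr[\text{a single choice is fresh}] \;\ge\; 1-\frac{N^{8}}{N^{10}} \;=\; 1-\frac{1}{N^{2}},
  \qquad
  \Bigl(1-\frac{1}{N^{2}}\Bigr)^{\!N} \;\ge\; 1-\frac{N}{N^{2}} \;=\; 1-\frac{1}{N} \;>\; \frac{1}{2}
  \quad\text{for } N > 2.
\]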

Correctness - cont.
- Theorem: A safe configuration is reached within an expected O(d) number of cycles.
- Outline of proof: The combined probability of the scheduler-luck game is greater than 1/2, and the expected number of cycles in an execution of the game is O(d) (as in the update algorithm).