
1 Self-stabilization

2 What is Self-stabilization? A technique for spontaneous healing after a transient failure or perturbation. Non-masking tolerance (forward error recovery). Guarantees eventual safety following failures. Feasibility was demonstrated by Dijkstra in his 1974 Communications of the ACM article.

3 Why Self-stabilizing systems? They recover from any initial configuration to a legitimate configuration in a bounded number of steps, as long as the code is not corrupted. The ability to spontaneously recover from any initial state implies that no initialization is ever required. Such systems can be deployed ad hoc, and are guaranteed to function properly within bounded time.

4 Two properties Convergence: starting from an arbitrary initial configuration, the system reaches a legal configuration in a finite number of steps. Closure: once in a legal configuration, the system remains in legal configurations unless a failure occurs.

5 Examples of Self-stabilizing systems We have already seen at least one such system, when we discussed clock phase synchronization on an array of synchronously ticking clocks. We will now discuss a couple of others.

6 Example 1: Stabilizing mutual exclusion (Dijkstra 1974) Consider a unidirectional ring of processes 0, 1, …, N-1. In the legal configuration, exactly one token will circulate in the network (token = enabled guard). Safety: the number of processes with an enabled guard is exactly one. Liveness: during an infinite behavior, the guard of each process is enabled infinitely often.

7 Stabilizing mutual exclusion on a ring Hand-execute this first, before reading further. Start the system from an arbitrary initial configuration. n = number of processes, arranged in a unidirectional ring 0, 1, 2, …, n-1. Each process has k states 0..(k-1), with k > n.
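The guarded commands themselves were shown in the slide's figure; as an aid for hand-execution, here is a minimal Python simulation assuming Dijkstra's standard K-state rules (process 0 is enabled when its value equals that of process n-1 and then increments modulo k; every other process is enabled when its value differs from its left neighbor's and then copies it). The function names and the random central daemon are mine.

```python
import random

# A minimal simulation sketch of the K-state ring, assuming the standard rules:
#   process 0:   enabled when x[0] == x[n-1];  move: x[0] := (x[0] + 1) mod k
#   process i>0: enabled when x[i] != x[i-1];  move: x[i] := x[i-1]
# An enabled guard is the "token".

def tokens(x):
    n = len(x)
    return [i for i in range(n)
            if (x[0] == x[n - 1] if i == 0 else x[i] != x[i - 1])]

def simulate(n=5, k=7, steps=40, seed=2):
    rng = random.Random(seed)
    x = [rng.randrange(k) for _ in range(n)]   # arbitrary initial configuration
    for t in range(steps):
        enabled = tokens(x)
        print(t, x, "tokens at", enabled)
        i = rng.choice(enabled)                # a central daemon picks one enabled process
        if i == 0:
            x[0] = (x[0] + 1) % k
        else:
            x[i] = x[i - 1]

simulate()
```

Watching the printed token lists shrink to a single entry, and stay there, is exactly the behavior that the closure and convergence proofs on the next two slides establish.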

8 Correctness Proof Absence of deadlock: if no process j > 0 has an enabled guard, then x[0] = x[1] = x[2] = … = x[n-1]. But then the guard of process 0 is enabled. Proof of Closure: when a process executes an action, its own guard is disabled, and at most one more process becomes enabled. So the number of tokens never increases. This means that if the system is in a good configuration, it remains so (unless, of course, a failure occurs).

9 Correctness Proof (continued) Proof of Convergence: since k > n, by the pigeonhole principle at least one of the k states is not the state of any process. Let x be one of the missing states in the system. Processes 1..n-1 only acquire their states from their left neighbors, so none of them can produce x. Eventually process 0 attains the state x. Eventually all processes attain the state x before process 0 becomes enabled again. This is a legal configuration (only process 0 has a token). Thus the system is guaranteed to recover from a bad configuration to a good configuration.

10 To disprove To prove that a given algorithm is not self-stabilizing to L, it is sufficient to show that either (1) there exists a deadlock configuration, or (2) there exists a cycle of illegal configurations (≠ L) in the history of the computation, or (3) the system stabilizes to a configuration L’ ≠ L.

11 Example 2: Stabilizing spanning tree Problem description Given a connected graph G = (V,E) and a root r, design an algorithm for maintaining a spanning tree in the presence of transient failures that may corrupt the local states of processes (and hence the spanning tree). Let n = |V|.

12 Different scenarios Each process i has two variables: L(i) = distance from the root via tree edges, and P(i) = parent of process i. [Figure: a spanning tree on processes 0–5 rooted at 0, before and after P(2) is corrupted.]

13 Different scenarios [Figure: the same spanning tree, before and after the distance variable L(3) is corrupted.]

14 Definitions (2)

15 The algorithm [Figure: how the algorithm recovers after P(2) is corrupted; the blue labels denote the values of L.]
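The guarded commands of the algorithm were given in the slide's figure; the Python sketch below is an assumed standard formulation of the distance-based rules (names and representation are mine), consistent with the well-formedness definition on the next slide. Here L[i] == n plays the role of "my distance is bad".

```python
# A sketch of the stabilizing spanning-tree rules for a non-root node i
# (assumed standard formulation, not a transcription of the slide's figure).
# L[i] in 0..n; L[i] == n means node i considers its distance to be bad.

def step(i, L, P, neighbors, n, root=0):
    """Apply one enabled rule at node i; return True if a move was made."""
    if i == root:
        return False                      # the root keeps L[root] = 0 and has no parent
    p = P[i]
    if L[i] != n and L[p] != n and L[i] != L[p] + 1:
        L[i] = L[p] + 1                   # repair the distance using the parent's value
        return True
    if L[i] != n and L[p] == n:
        L[i] = n                          # the parent's distance is bad, so mark mine bad too
        return True
    if L[i] == n:
        for k in neighbors[i]:
            if L[k] < n - 1:
                P[i], L[i] = k, L[k] + 1  # re-attach to a neighbor with a believable distance
                return True
    return False
```

Repeatedly applying step at arbitrary nodes (a central daemon) stands in for the distributed execution; the proof sketched on the following slides argues that any such execution converges to a single well-formed tree rooted at the root.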

16 Proof of stabilization Define an edge from i to P(i) to be well-formed when L(i) ≠ n, L(P(i)) ≠ n and L(i) = L(P(i)) + 1. In any configuration, the well-formed edges form a spanning forest. Delete all edges that are not well-formed. Each tree T(k) in the forest is identified by k, the lowest value of L in that tree.

17 Example In the sample graph shown earlier, the original spanning tree is decomposed into two well-formed trees: T(0) = {0, 1} and T(2) = {2, 3, 4, 5}. Let F(k) denote the number of T(k)’s in the forest. Define a tuple F = (F(0), F(1), F(2), …, F(n)). For the sample graph, F = (1, 0, 1, 0, 0, 0) after node 2 has a transient failure.
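A small helper (names and representation are mine) that computes F from a configuration given as arrays L and P: keep only the well-formed edges, find the top of each resulting tree, and count the trees by their lowest L value.

```python
def compute_F(L, P, n, root=0):
    # A well-formed edge goes from i to P(i) with L(i) != n, L(P(i)) != n
    # and L(i) = L(P(i)) + 1, as defined on the previous slide.
    wf_parent = {i: P[i] for i in range(n)
                 if i != root and L[i] != n and L[P[i]] != n and L[i] == L[P[i]] + 1}

    def top(i):
        # L strictly decreases along well-formed edges, so this walk terminates.
        while i in wf_parent:
            i = wf_parent[i]
        return i

    F = [0] * (n + 1)
    for t in {top(i) for i in range(n)}:   # one top node per tree in the forest
        F[min(L[t], n)] += 1               # the tree is identified by its lowest L value
    return tuple(F)
```

Replaying the algorithm's moves through this helper is one way to "verify the claim" on the next slide that F decreases lexicographically.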

18 Skeleton of the proof Minimum F = (1,0,0,0,0,0) {legal configuration} Maximum F = (1, n-1, 0, 0, 0, 0) (considering lexicographic order) With each action of the algorithm, F decreases lexicographically. Verify the claim! This proves that eventually F becomes (1,0,0,0,0,0) and the spanning tree stabilizes. What is an upper bound of the time complexity of this algorithm?

19 Self-stabilizing Algorithm for Graph Coloring

20 The problem Graph Coloring Problem: given a graph, color all the vertices so that no two adjacent vertices get the same color. The real challenge: to use the minimum number of colors. [Figure: a 3-colorable graph.]

21 Map Coloring Conjecture (1852): every map is 4-colorable. A “proof” by Kempe appeared in 1879; an error was found 11 years later. Theorem (Appel and Haken 1977): every map is 4-colorable. Can we draw a map that needs 5 colors? No. The proof is computer assisted, and some mathematicians are not happy about that.

22 A graph is planar if it can be drawn on a plane so that no two edges cross one another

23 Self-stabilizing algorithm for coloring planar graphs Goal: to color the nodes of a planar graph using at most six colors. Initially, an adversary may assign arbitrary colors to the different nodes. Eventually a proper coloring has to be restored. There is no central coordinator to coordinate the recovery.

24 The main components First component: transforms an arbitrarily colored planar graph G into a DAG where each node has an outdegree ≤ 5. Second component: transforms that DAG into a properly colored G with at most six colors.

25 Euler’s Polyhedron Formula Theorem. If G is a simple planar graph with at least 3 vertices, then e ≤ 3v − 6. Corollary. In a planar graph, there must exist at least one node with degree ≤ 5. Proof of the corollary, by contradiction: assume the claim is false. Then the degree of every vertex is ≥ 6. This means that the total number of edges e ≥ 3v. Contradiction!
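The counting step behind the corollary, written out (d(u) denotes the degree of vertex u):

```latex
\sum_{u \in V} d(u) = 2e
\qquad\text{and}\qquad
d(u) \ge 6 \ \text{for all } u
\;\;\Longrightarrow\;\;
2e \ge 6v
\;\;\Longrightarrow\;\;
e \ge 3v > 3v - 6,
```

which contradicts the theorem, so some vertex must have degree at most 5.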

26 Every Planar Graph has a Six-Coloring Theorem. Every planar graph is 6-colorable. Let v be a vertex of degree at most 5. Remove v from the planar graph G. Note that if G − v is 6-colorable then G is 6-colorable (why?). Now, G − v is still a planar graph. So you can use recursion here: since you remove a node of degree at most 5 in each step, ultimately there will be a graph with at most 6 nodes, which is trivially 6-colorable.
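The recursion in this argument translates directly into code. The sketch below is a centralized illustration of the feasibility argument, not the distributed algorithm that follows; it assumes the graph is given as a dictionary from each vertex to its set of neighbours, and planarity guarantees that the vertex of degree ≤ 5 it looks for always exists.

```python
def six_color(graph):
    """Recursively 6-color a planar graph given as {vertex: set of neighbours}."""
    if not graph:
        return {}
    # Planarity guarantees some vertex has degree <= 5 (corollary above).
    v = next(u for u in graph if len(graph[u]) <= 5)
    neighbours = graph[v]
    # Remove v and recurse on the smaller graph G - v, which is still planar.
    rest = {u: graph[u] - {v} for u in graph if u != v}
    coloring = six_color(rest)
    # v has at most 5 neighbours, so at least one of the 6 colors is free for it.
    used = {coloring[u] for u in neighbours}
    coloring[v] = next(c for c in range(6) if c not in used)
    return coloring

# Example: the octahedron graph (planar, every vertex has degree 4).
g = {0: {1, 2, 3, 4}, 1: {0, 2, 4, 5}, 2: {0, 1, 3, 5},
     3: {0, 2, 4, 5}, 4: {0, 1, 3, 5}, 5: {1, 2, 3, 4}}
col = six_color(g)
assert all(col[u] != col[w] for u in g for w in g[u])
```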

27 The First Component The first component transforms the given planar graph into a directed acyclic graph (dag) in which every node has outdegree ≤ 5. However, the technique shown in the previous slide only shows feasibility, and does not give a distributed algorithm! So we have to revisit this, and search for a self-stabilizing distributed algorithm for this transformation. Let us jump ahead, and examine the second component first.

28 The Second Component The second component runs on a dag where the outdegree of each node is ≤ 5, and produces the actual 6-coloring. Let sc(i) be the set of colors of the successors of node i. program colorme for node i; Note. The second component is self-stabilizing. Why?
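The body of colorme was given in the slide's figure; the Python sketch below is the rule it is usually understood to implement (an assumption, not a transcription): if node i's color clashes with sc(i), pick any color from {0, …, 5} that is not in sc(i). Since the outdegree is at most 5, such a color always exists.

```python
def colorme(i, c, succ):
    """One move of the (assumed) coloring rule for node i.

    c: dict node -> color in 0..5; succ: dict node -> set of dag successors of the node."""
    sc = {c[j] for j in succ[i]}     # sc(i): the set of colors of i's successors
    if c[i] in sc:
        # outdegree(i) <= 5, so at least one of the six colors is not in sc(i)
        c[i] = next(col for col in range(6) if col not in sc)
        return True                  # i made a move
    return False                     # i's guard is disabled
```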

29 Stabilizing algorithm for dag generation This is the self-stabilizing distributed algorithm for the first component. It transforms the given planar graph into a directed acyclic graph (dag) in which every node has outdegree ≤ 5. program dag for process i; do outdegree(i) > 5 → reverse the direction of the edges od. The dag-generation algorithm stabilizes to a configuration in which outdegree(i) ≤ 5 holds at every node. Why? We will discuss it in the class.
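The guarded command above only says "reverse the direction of the edges"; the sketch below is one common way to realize it (my convention, not a transcription of the slide): each node holds a counter x(i), an edge points from the endpoint with the smaller (x, id) pair to the larger, and a node whose outdegree exceeds 5 raises x(i) above all of its neighbours, which turns every incident edge inward. Deriving directions from a total order keeps the orientation acyclic by construction.

```python
def outdegree(i, x, neighbors):
    # Edge i -> j iff (x[i], i) < (x[j], j); the endpoint with the larger pair wins the edge.
    return sum(1 for j in neighbors[i] if (x[i], i) < (x[j], j))

def generate_dag(neighbors, x, max_steps=100_000):
    """neighbors: dict node -> set of neighbours; x: dict of (possibly corrupted) counters."""
    for _ in range(max_steps):                       # cap only as a safety guard
        over = [i for i in neighbors if outdegree(i, x, neighbors) > 5]
        if not over:
            return x                                 # every node now has outdegree <= 5
        i = over[0]                                  # a daemon picks one enabled node
        x[i] = max(x[j] for j in neighbors[i]) + 1   # all of i's edges now point toward i
    raise RuntimeError("did not stabilize; is the input graph planar?")
```

Why this always terminates on a planar graph is exactly the question the slide defers to class.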

30 The last challenge Question: who will detect the termination of the first component, so that the second component can be launched? Answer: it is not necessary. Run them concurrently. But before that, add the extra predicate outdegree(i) ≤ 5 to strengthen the guard of the coloring algorithm.

31 From Self-stabilizing to Adaptive Systems Adaptive systems can be viewed as extensions of self-stabilizing systems. A system S is expected to adapt to its environment E by switching its legal configuration. Example: if E = 0 then the legal state is L = L0, else if E = 1 then the legal state is L = L1. This implies that the system is self-stabilizing to L = (¬E ⋀ L0) ⋁ (E ⋀ L1).

