CSCE 668 DISTRIBUTED ALGORITHMS AND SYSTEMS


Set 4: Asynchronous Lower Bound for LE in Rings
Spring 2014
Prof. Jennifer Welch

Asynchronous Lower Bound on Messages
Ω(n log n) lower bound for any leader election algorithm A that:
(1) works in an asynchronous ring (necessary for the result to hold),
(2) is uniform, i.e., does not use the ring size (necessary for this proof to work),
(3) elects the maximum id (no loss of generality), and
(4) guarantees that everyone learns the id of the winner.

Statement of Key Result
Theorem (3.5): For every n that is a power of 2 and every set of n ids, there is a ring using those ids on which any uniform asynchronous leader election algorithm has a schedule in which at least M(n) messages are sent, where M(2) = 1 and M(n) = 2M(n/2) + (n/2 - 1)/2 for n > 2.
Why does this give the Ω(n log n) result? Because M(n) = Θ(n log n) (cf. how to solve recurrences).
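As a worked check (not on the original slides), unrolling the recurrence for n = 2^m confirms the Θ(n log n) growth and yields an exact closed form:

```latex
% Divide M(n) = 2M(n/2) + (n/2 - 1)/2 through by n, with n = 2^m:
\[
  \frac{M(n)}{n} \;=\; \frac{M(n/2)}{n/2} + \frac{1}{4} - \frac{1}{2n}.
\]
% Telescoping the m - 1 steps from ring size n down to ring size 2:
\[
  \frac{M(n)}{n}
  \;=\; \frac{M(2)}{2} + \frac{m-1}{4} - \sum_{j=2}^{m} \frac{1}{2^{\,j+1}}
  \;=\; \frac{1}{2} + \frac{m-1}{4} - \Bigl(\frac{1}{4} - \frac{1}{2^{\,m+1}}\Bigr)
  \;=\; \frac{m}{4} + \frac{1}{2^{\,m+1}}.
\]
% Hence
\[
  M(n) \;=\; \frac{n \log_2 n}{4} + \frac{1}{2} \;=\; \Theta(n \log n).
\]
```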

Discussion of Statement
Power of 2: the proof can be adapted for other values of n.
"Schedule": the sequence of events (and events only) extracted from an execution, i.e., the configurations are discarded. A configuration gives away the number of processors, but we will want to use the same sequence of events in rings of different sizes. This relies on the assumption that the algorithm is uniform (ring size unknown).
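To make the execution/schedule distinction concrete, here is a minimal sketch (hypothetical types, not from the slides) in which an execution alternates configurations and events while a schedule keeps only the events:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """One step of the algorithm: a send or a delivery on a ring edge."""
    proc: int               # index of the processor taking the step
    kind: str               # "send" or "deliver"
    edge: tuple[int, int]   # the ring edge the message travels on

# An execution alternates configurations and events: C0, e1, C1, e2, C2, ...
# Each configuration lists the state of every processor, so it reveals the
# ring size n.  A schedule keeps only the events:
def schedule_of(execution: list) -> list[Event]:
    """Project an execution onto its events, discarding configurations."""
    return [step for step in execution if isinstance(step, Event)]

# Because a schedule never mentions n, the same schedule can be replayed on
# rings of different sizes -- exactly what the uniformity assumption lets
# the lower-bound proof exploit.
```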

Idea of Asynchronous Lower Bound
The lower bound on the number of messages, M(n), is described by the recurrence M(n) = 2M(n/2) + (n/2 - 1)/2.
Prove the bound by induction; doubling the ring size at each step means the induction is on the exponent of 2.
Show how to construct an expensive execution on a larger ring by starting with two expensive executions on smaller rings (contributing the 2·M(n/2) term) and then causing about n/4 extra messages to be sent.
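As a quick sanity check (not part of the proof), a few lines of Python evaluate the recurrence and confirm that it matches the closed form derived above exactly:

```python
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def M(n: int) -> float:
    """Message bound M(n) for n a power of 2, per the recurrence above."""
    if n == 2:
        return 1.0
    return 2 * M(n // 2) + (n // 2 - 1) / 2

for m in range(1, 11):
    n = 2 ** m
    closed_form = n * log2(n) / 4 + 0.5   # (n log2 n)/4 + 1/2
    assert M(n) == closed_form, (n, M(n), closed_form)
    print(f"n = {n:5d}   M(n) = {M(n):8.1f}")
```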

Open Schedules
To make the induction go through, the expensive executions must have schedules that are "open".
Definition: a schedule is open if there is an edge over which no message is delivered; such an edge is called an open edge.
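Continuing the sketch above (hypothetical helper, not from the slides), openness is a simple check over the delivery events:

```python
def open_edges(schedule: list[Event], ring_edges: set[tuple[int, int]]) -> set:
    """Edges of the ring over which no message is delivered in the schedule."""
    delivered = {ev.edge for ev in schedule if ev.kind == "deliver"}
    return ring_edges - delivered

# A schedule is open iff open_edges(schedule, ring_edges) is nonempty.
```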

Proof of Base Case
Suppose n = 2, with p0 holding id x, p1 holding id y, and x > y. Then p0 wins, and p1 must learn that the leader id is x, so p0 must send at least one message to p1. Truncate the execution immediately after the first message is sent (before it is delivered) to get the desired open schedule with M(2) = 1 message.
[Figure: two-processor ring; p0 has id x, p1 has id y.]

Proof of Inductive Step
Suppose n ≥ 4. Split the set S of ids into two halves, S1 and S2. By the inductive hypothesis, there are two rings, R1 and R2:

Apply Inductive Hypothesis
R1 has an open schedule α1 in which at least M(n/2) messages are sent and e1 = (p1, q1) is an open edge.
R2 has an open schedule α2 in which at least M(n/2) messages are sent and e2 = (p2, q2) is an open edge.

Paste Together Two Rings
Paste R1 and R2 together over their open edges to make the big ring R: delete the open edges e1 and e2 and connect the two halves with new edges ep and eq. Next, build an execution of R in which M(n) messages are sent…

Paste Together Two Executions
Execute α1: processors on the left cannot tell the difference between being in R1 and being in R, so they behave the same and send M(n/2) messages in R. (This depends on the uniformity assumption.)
Next, execute α2: processors on the right cannot tell the difference between being in R2 and being in R, so they behave the same and send M(n/2) messages in R.
There is no interference between the two, because α1 and α2 are open.

Generating Additional Messages
Now we have 2·M(n/2) messages. How do we get the extra (n/2 - 1)/2 messages?
Case 1: Without unblocking (delivering a message on) ep or eq, there is an extension of α1α2 on R in which (n/2 - 1)/2 more messages are sent. Then this is the desired open schedule.

Generating Additional Messages
Case 2: Without unblocking ep or eq, every extension of α1α2 on R leads to quiescence: no processor will send another message unless it receives one, and no messages are in transit except on ep and eq.
Let α3 be any extension of α1α2 that leads to quiescence without unblocking ep or eq.

Getting n/2 More Messages
Let α4'' be an extension of α1α2α3 that leads to termination.
Claim: at least n/2 messages are sent in α4''. Why? Each of the n/2 processors in the half of R not containing the leader must receive a message to learn the leader's id, and until α4'' there has been no communication between the two halves of R (remember the open edges). This depends on the assumption that all processors learn the leader's id.

Getting an Open Schedule
Remember that we want to use this ring R and this expensive execution as building blocks for the next larger power of 2, in which we will paste together two open executions. So we have to find an expensive open execution (one with at least one edge over which no message is delivered). But α1α2α3α4'' might not be open, so a little more work is needed…

Getting an Open Schedule
As the messages on ep and eq are delivered in α4'', processors "wake up" from the quiescent state and send more messages. The sets of awakened processors, P and Q, expand outward around ep and eq, respectively.

Getting an Open Schedule
Let α4' be the prefix of α4'' up to the point when n/2 - 1 messages have been sent. P and Q cannot meet in α4', since fewer than n/2 messages are sent in α4'. W.l.o.g., suppose the majority of these messages are sent by processors in P (at least (n/2 - 1)/2 messages). Let α4 be the subsequence of α4' consisting of just the events involving processors in P.

Getting an Open Schedule
When executing α1α2α3α4, processors in P behave the same as when executing α1α2α3α4'. Why? The only difference between α4 and α4' is that α4 is missing the events by processors in Q. But since there is no communication between processors in P and processors in Q (P and Q never meet), processors in P cannot tell the difference. (This depends on the asynchrony assumption.)

Wrap Up
Consider the schedule α1α2α3α4.
During α1, M(n/2) messages are sent, none delivered over ep or eq.
During α2, M(n/2) messages are sent, none delivered over ep or eq.
During α3, all messages are delivered except those over ep and eq; possibly some more messages are sent.
During α4, (n/2 - 1)/2 messages are sent, none delivered over eq. (Why? Because α4 contains only events of processors in P, which surround ep, so no delivery on eq occurs.)
Thus eq is an open edge, and this is our desired open schedule for the induction!
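Tallying the four segments (a worked restatement of the count above) gives exactly the recurrence of Theorem 3.5, with eq left open:

```latex
\[
  \underbrace{M(n/2)}_{\alpha_1\ \text{(left half)}}
  \;+\;
  \underbrace{M(n/2)}_{\alpha_2\ \text{(right half)}}
  \;+\;
  \underbrace{\tfrac{1}{2}\bigl(\tfrac{n}{2}-1\bigr)}_{\alpha_4\ \text{(majority side)}}
  \;\le\; \text{messages sent in } \alpha_1\alpha_2\alpha_3\alpha_4,
\]
% so at least M(n) = 2M(n/2) + (n/2 - 1)/2 messages are sent.
```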