CPSC 668 Distributed Algorithms and Systems, Fall 2009
Prof. Jennifer Welch
Set 4: Asynchronous Lower Bound for Leader Election in Rings

Asynchronous Lower Bound on Messages
Ω(n log n) lower bound for any leader election algorithm A that
(1) works in an asynchronous ring (necessary for the result to hold),
(2) is uniform, i.e., doesn't use the ring size (necessary for this proof to work),
(3) elects the maximum id (no loss of generality), and
(4) guarantees everyone learns the id of the winner (no loss of generality).

Statement of Key Result
Theorem (3.5): For every n that is a power of 2 and every set of n ids, there is a ring using those ids on which any asynchronous leader election algorithm has a schedule in which at least M(n) messages are sent, where
– M(2) = 1 and
– M(n) = 2M(n/2) + (n/2 - 1)/2 for n > 2.
Why does this give the Ω(n log n) result? Because M(n) = Θ(n log n) (cf. how to solve recurrences).
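As a sanity check on the claim that M(n) = Θ(n log n), the recurrence as stated on this slide can be unrolled exactly; this side derivation is not part of the original slides:

```latex
% Unrolling M(n) = 2 M(n/2) + (1/2)(n/2 - 1) with M(2) = 1 and n a power of 2:
\begin{align*}
M(n) &= 2\,M(n/2) + \tfrac{1}{2}\!\left(\tfrac{n}{2} - 1\right) \\
     &= \frac{n}{2}\,M(2)
        + \sum_{i=0}^{\log_2 n - 2} 2^i \cdot \tfrac{1}{2}\!\left(\frac{n}{2^{i+1}} - 1\right) \\
     &= \frac{n}{2} + (\log_2 n - 1)\,\frac{n}{4} - \left(\frac{n}{4} - \frac{1}{2}\right) \\
     &= \frac{n}{4}\log_2 n + \frac{1}{2} \;=\; \Theta(n \log n).
\end{align*}
```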

Discussion of Statement
– power of 2: the proof can be adapted for the other case
– "schedule": the sequence of events (and events only) extracted from an execution, i.e., discard the configurations
– a configuration gives away the number of processors, but we will want to use the same sequence of events in rings of different sizes

Idea of Asynchronous Lower Bound
– The number of messages, M(n), is described by a recurrence: M(n) = 2M(n/2) + (n/2 - 1)/2
– Prove the bound by induction
– Double the ring size at each step, so the induction is on the exponent of 2
– Show how to construct an expensive execution on a larger ring by starting with two expensive executions on smaller rings (giving 2M(n/2) messages) and then causing about n/4 extra messages to be sent (see the sketch after this list)
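A quick numeric sanity check, not part of the original slides, that the recurrence and the closed form (n/4) log2 n + 1/2 derived above agree; the function names are purely illustrative:

```python
import math

def m_recurrence(n):
    """Messages forced by the recurrence M(2) = 1, M(n) = 2*M(n/2) + (n/2 - 1)/2."""
    if n == 2:
        return 1.0
    return 2 * m_recurrence(n // 2) + (n / 2 - 1) / 2

def m_closed_form(n):
    """Closed form obtained by unrolling the recurrence: (n/4) * log2(n) + 1/2."""
    return (n / 4) * math.log2(n) + 0.5

if __name__ == "__main__":
    for k in range(1, 11):          # n = 2, 4, ..., 1024
        n = 2 ** k
        assert abs(m_recurrence(n) - m_closed_form(n)) < 1e-9
        print(f"n = {n:5d}  M(n) = {m_recurrence(n):10.1f}")
```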

Open Schedules
To make the induction go through, the expensive executions must have schedules that are "open".
Definition of open schedule: there is an edge over which no message is delivered. An edge over which no message is delivered is called an open edge.
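To make the definition concrete, here is a tiny toy model (my own illustration, not from the slides) in which a schedule is a list of events and we check which edges are open, i.e., carry no delivery event:

```python
# Toy model: an event is a tuple (kind, edge), where kind is "send" or "deliver"
# and edge identifies the channel the message travels on.

def open_edges(schedule, edges):
    """Return the edges over which no message is delivered in the schedule."""
    delivered_on = {edge for kind, edge in schedule if kind == "deliver"}
    return [e for e in edges if e not in delivered_on]

# A 4-processor ring with edges e0..e3; messages are sent on every edge,
# but nothing is ever delivered on e2, so e2 is an open edge.
ring_edges = ["e0", "e1", "e2", "e3"]
schedule = [
    ("send", "e0"), ("deliver", "e0"),
    ("send", "e1"), ("deliver", "e1"),
    ("send", "e2"),                      # sent but never delivered
    ("send", "e3"), ("deliver", "e3"),
]
print(open_edges(schedule, ring_edges))  # ['e2'] -> the schedule is open
```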

Proof of Base Case
[Figure: two-processor ring, p_0 with id x and p_1 with id y]
Suppose n = 2 and x > y. Then p_0 wins and p_1 must learn that the leader id is x. So p_0 must send a message to p_1. Truncate immediately after the message is sent (before it is delivered) to get the desired open schedule.

Proof of Inductive Step
Suppose n ≥ 4. Split S (the set of ids) into two halves, S_1 and S_2. By the inductive hypothesis, there are two rings, R_1 and R_2:

Apply Inductive Hypothesis
R_1 has an open schedule α_1 in which at least M(n/2) messages are sent and e_1 = (p_1, q_1) is an open edge.
R_2 has an open schedule α_2 in which at least M(n/2) messages are sent and e_2 = (p_2, q_2) is an open edge.

Paste Together Two Rings
Paste R_1 and R_2 together over their open edges e_1 and e_2 to make the big ring R; call the two new edges connecting the halves e_p and e_q.
Next, build an execution of R with M(n) messages…
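A small illustrative sketch (my own, with a hypothetical representation) of the pasting step: each ring is given as a list of ids ordered so that its open edge sits between the last and first elements, and splicing the lists joins the halves with two new edges playing the roles of e_p and e_q:

```python
def paste_rings(ring1, ring2):
    """Paste two rings together over their open edges.

    Each ring is a list of ids ordered so that its open edge is the edge
    between the last element and the first element.  Breaking both open
    edges and concatenating the lists yields the larger ring; the two new
    connecting edges (the analogues of e_p and e_q) join
    ring1[-1]-ring2[0] and ring2[-1]-ring1[0].
    """
    big_ring = ring1 + ring2
    e_p = (ring1[-1], ring2[0])
    e_q = (ring2[-1], ring1[0])
    return big_ring, e_p, e_q

# Two 4-id rings whose open edges are (8, 3) and (7, 1) respectively.
R1 = [3, 6, 2, 8]   # open edge between 8 and 3
R2 = [1, 5, 4, 7]   # open edge between 7 and 1
R, e_p, e_q = paste_rings(R1, R2)
print(R, e_p, e_q)  # [3, 6, 2, 8, 1, 5, 4, 7] (8, 1) (7, 3)
```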

Paste Together Two Executions
Execute α_1: procs. on the left cannot tell the difference between being in R_1 and being in R, since the algorithm is uniform (it doesn't know the ring size) and nothing is delivered over e_p or e_q. So they behave the same and send M(n/2) msgs in R.
Execute α_2: procs. on the right cannot tell the difference between being in R_2 and being in R. So they behave the same and send M(n/2) msgs in R.

Generating Additional Messages
Now we have 2M(n/2) msgs. How to get the extra (n/2 - 1)/2 msgs?
Case 1: Without unblocking (delivering a msg on) e_p or e_q, there is an extension of α_1 α_2 on R in which (n/2 - 1)/2 more msgs are sent. Then this is the desired open schedule.

Generating Additional Messages
Case 2: Without unblocking (delivering a msg on) e_p or e_q, every extension of α_1 α_2 on R leads to quiescence:
– no proc. will send another msg unless it receives one
– no msgs are in transit except on e_p and e_q
Let α_3 be any extension of α_1 α_2 that leads to quiescence.

Getting n/2 More Messages
Let α_4'' be an extension of α_1 α_2 α_3 that leads to termination.
Claim: at least n/2 messages are sent in α_4''. Why?
– Each of the n/2 procs. in the half of R not containing the leader must receive a msg to learn the leader's id.
– Until α_4'' there has been no communication between the two halves of R (remember the open edges).

Getting an Open Schedule
Remember we want to use this ring R and this expensive execution as building blocks for the next larger power of 2, in which we will paste together two open executions.
So we have to find an expensive open execution (with at least one edge over which no msg is delivered).
α_1 α_2 α_3 α_4'' might not be open; a little more work is needed…

Getting an Open Schedule
As msgs on e_p and e_q are delivered in α_4'', procs. "wake up" from the quiescent state and send more msgs. The sets of awakened procs. (P and Q) expand outward around e_p and e_q.

Getting an Open Schedule
– Let α_4' be the prefix of α_4'' when n/2 - 1 msgs have been sent.
– P and Q cannot meet in α_4', since fewer than n/2 msgs are sent in α_4'.
– W.l.o.g., suppose the majority of these msgs are sent by procs. in P (at least (n/2 - 1)/2 msgs).
– Let α_4 be the subsequence of α_4' consisting of just the events involving procs. in P.

Getting an Open Schedule
When executing α_1 α_2 α_3 α_4, processors in P behave the same as when executing α_1 α_2 α_3 α_4'. Why? The only difference between α_4 and α_4' is that α_4 is missing the events by procs. in Q. But since there is no communication in α_4' between procs. in P and procs. in Q, procs. in P cannot tell the difference.
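To illustrate the projection step, a toy sketch (mine, not from the slides) that filters a schedule down to the events of the procs. in P; if no event involves processors from both halves, the P-processors see exactly the same sequence of events either way:

```python
def project(schedule, group):
    """Keep only the events whose participating processors all lie in `group`.

    An event is a tuple (kind, procs, edge) where `procs` is the set of
    processors involved in that event.
    """
    return [ev for ev in schedule if set(ev[1]) <= group]

P = {"p0", "p1", "p2"}          # awakened procs. around e_p
Q = {"q0", "q1", "q2"}          # awakened procs. around e_q

alpha4_prime = [
    ("deliver", {"p0"}, "e_p"),
    ("send",    {"p1"}, "left"),
    ("deliver", {"q0"}, "e_q"),
    ("send",    {"q1"}, "right"),
    ("send",    {"p2"}, "left"),
]

alpha4 = project(alpha4_prime, P)
print(alpha4)   # only the three events involving procs. in P remain
```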

Wrap Up
Consider the schedule α_1 α_2 α_3 α_4.
– During α_1, M(n/2) msgs are sent, none delivered over e_p or e_q.
– During α_2, M(n/2) msgs are sent, none delivered over e_p or e_q.
– During α_3, all msgs are delivered except those over e_p or e_q; possibly some more msgs are sent.
– During α_4, (n/2 - 1)/2 msgs are sent, none delivered over e_q (why??).
This is our desired schedule for the induction!