CSCE 668 DISTRIBUTED ALGORITHMS AND SYSTEMS Fall 2011 Prof. Jennifer Welch Set 11: Asynchronous Consensus


Impossibility of Asynchronous Consensus

• Show consensus is impossible in read/write shared memory with n processors and n − 1 faults
  • prove directly: not hard, since so many faults are allowed
  • implies there is no 2-processor algorithm for 1 fault
• Show consensus is impossible in read/write shared memory with n processors and 1 fault. Two approaches:
  • Reduction: use a hypothetical n-processor algorithm for 1 fault as a subroutine to design a 2-processor algorithm for 1 fault
  • Direct proof: use ideas similar to the n − 1 failures case

Impossibility of Asynchronous Consensus

• Show consensus is impossible in message passing with n processors and 1 fault. Two approaches:
  • Reduction: use a hypothetical message passing algorithm for n procs and 1 fault as a subroutine to design a shared memory algorithm for n procs and 1 fault. This would contradict the previous result.
  • Direct proof: use ideas similar to the shared memory case, augmented to handle messages. (Historically, this was the first version to be proved.)

Modeling Asynchronous Systems with Crash Failures

• Let f be the maximum number of faulty processors.
• For both shared memory (SM) and message passing (MP): all but f of the processors must take an infinite number of steps in an admissible execution.
• For MP: also require that every message sent to a nonfaulty processor is eventually delivered, except for those sent by a faulty processor in its last step, which might or might not be delivered.

Wait-Free Algorithms

• An algorithm for n processors is wait-free if it can tolerate n − 1 failures.
• Intuition: a nonfaulty processor does not wait for other processors to do something; it cannot, because it might be the only processor left alive.
• First result: there is no wait-free consensus algorithm in the asynchronous read/write shared memory model.

Impossibility of Wait-Free Consensus

• Suppose in contradiction there is an n-processor algorithm for n − 1 faults in the asynchronous read/write shared memory model.
• The proof is similar to the one showing that f + 1 rounds are necessary in the synchronous message passing model: construct an infinite execution

  bivalent initial config → bivalent config → bivalent config → bivalent config → …

Modified Notion of Bivalence

• In the synchronous round lower bound proof, valency referred to which decisions are reachable in failure-sparse admissible executions.
• For this proof, we are concerned with which decisions are reachable in any execution, as long as it is admissible (for the asynchronous shared memory model with up to n − 1 failures).

Univalent Similarity

Lemma (5.15): If C_1 and C_2 are both univalent and they are similar w.r.t. p_i (same shared memory state, same local state of p_i), then they have the same valency.
Proof: Suppose C_1 is v-valent and C_2 is w-valent. Run p_i alone from C_1 until it decides; it must decide v. Apply the same p_i-only schedule from C_2: since C_2 is similar to C_1 w.r.t. p_i, p_i takes the same steps and again decides v. Hence w = v.

Bivalent Initial Configuration

Lemma (5.16): There exists a bivalent initial configuration.
The proof is similar to the one used for the synchronous f + 1 round lower bound.

Critical Processors

Def: If C is bivalent and i(C) (the result of p_i taking one step) is univalent, then p_i is critical in C.
Lemma (5.17): If C is bivalent, then at least one processor is not critical in C, i.e., there is a bivalent extension.
Proof: Suppose in contradiction that all processors are critical. Then there are two processors p_i and p_j such that i(C) is, say, 0-valent and j(C) is 1-valent. The rest of the proof is a case analysis of what p_i and p_j do in their two steps.

Critical Processors

Case 1: p_i and p_j access different registers. Their steps then commute, so i(j(C)) = j(i(C)). But this configuration is reachable from both the 0-valent i(C) and the 1-valent j(C), a contradiction.
Case 2: p_i and p_j read the same register. Reads do not change the shared memory, so the steps again commute and the same proof applies.

Critical Processors

Case 3: p_i writes to a register R and p_j reads from R. Consider i(C), which is 0-valent, and i(j(C)), which is reachable from the 1-valent j(C) and is thus 1-valent. The read by p_j changes neither the shared memory nor p_i's local state, so i(C) and i(j(C)) are similar w.r.t. p_i. By the univalent similarity lemma (5.15), they must have the same valency, a contradiction.

Critical Processors

Case 4: What if p_i and p_j both write to the same shared variable?
• Can "assume away" the problem by assuming we only have single-writer shared variables.
• Or, can do a similar proof for this case.

Finishing the Impossibility Proof

• Create an admissible execution C_0, i_1, C_1, i_2, C_2, … in which all configurations are bivalent.
  • this contradicts the termination requirement
• Start with a bivalent initial configuration.
• Suppose we have bivalent C_k. To get bivalent C_{k+1}:
  • let p_{i_{k+1}} be a processor that is not critical in C_k
  • let C_{k+1} be i_{k+1}(C_k)

Impossibility of 1-Resilient Consensus: Reduction Idea

Even if the ratio of nonfaulty processors becomes overwhelming, consensus still cannot be solved in asynchronous shared memory (with read/write registers).
1. Assume there exists an algorithm A for n processors and 1 failure.
2. Use A as a subroutine to design an algorithm A' for 2 processors and 1 failure.
3. We just showed such an A' cannot exist.
4. Thus A cannot exist.

Impossibility of 1-Resilient Consensus: Direct Proof Idea

• Suppose in contradiction there is such an algorithm.
• Strategy: construct an admissible execution (at most 1 fault) that never terminates:
  • show there is a bivalent initial configuration
  • show how to go from one bivalent configuration to another, forever (so the algorithm can never terminate)
• Technically more involved, because in constructing this execution we cannot kill more than one processor.

Impossibility of Consensus in Message Passing: Reduction

Strategy:
1. Assume there exists an n-processor 1-resilient consensus algorithm A for the asynchronous message passing model.
2. Use A as a subroutine to design an n-processor 1-resilient consensus algorithm A' for asynchronous shared memory (with read/write variables).
3. The previous result shows A' cannot exist.
4. Thus A cannot exist.

Impossibility of Consensus in MP

Idea of A':
• Simulate the message channels with read/write registers.
• Then run algorithm A on top of these simulated channels.
To simulate the channel from p_i to p_j:
• use one register to hold the sequence of messages sent over the channel
• p_i "sends" a message m by writing the old value of the register with m appended
• p_j "receives" a message by reading the register and checking for new values at the end
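The channel simulation inside A' can be sketched as follows (a minimal sketch, not from the slides; the class and method names are illustrative). One Python object stands in for the single register of one channel: the sender appends to the register's value, and the receiver keeps a local cursor into it.

```python
class RegisterChannel:
    """Sketch of one simulated channel from p_i to p_j, backed by a
    single read/write register that holds the whole message history."""

    def __init__(self):
        self._register = []   # the register's current value: all messages ever sent
        self._received = 0    # p_j's local count of messages already delivered

    def send(self, m):
        # p_i "sends" m by writing the old register value with m appended
        self._register = self._register + [m]

    def receive(self):
        # p_j "receives" by reading the register and taking any new suffix
        value = self._register
        new = value[self._received:]
        self._received = len(value)
        return new
```

Polling receive() repeatedly models p_j checking the end of the register for new values; nothing is ever removed from the register, which is why a single register per channel suffices.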

Randomized Consensus

• To get around the negative results for asynchronous consensus, we can:
  • weaken the termination condition: nonfaulty processors must decide with some nonzero probability
  • keep the same agreement and validity conditions
• This version of consensus is solvable, in both shared memory and message passing!

Motivation for Adversary

• Even without randomization, an algorithm has many executions in an asynchronous system, even when the inputs are fixed, depending on when processors take steps, when they fail, and when messages are delivered.
• To be able to calculate probabilities for a randomized algorithm, we need to separate out the variation due to causes other than the random choices:
  • group the executions of interest so that each group differs only in the random choices
  • perform the probabilistic calculations separately for each group and then combine them

Adversary

• The concept used to account for all variability other than the random choices is that of an "adversary".
• An adversary is a function that takes an execution prefix and returns the next event to occur.
• The adversary must obey the admissibility conditions of the relevant model.
• Other conditions might be put on the adversary (e.g., what information it can observe, how much computational power it has).

Probabilistic Definitions

• An execution of a specific algorithm, exec(A, C_0, R), is uniquely determined by
  • an adversary A
  • an initial configuration C_0
  • a collection of random numbers R
• Given a predicate P on executions and a fixed adversary A and initial config C_0, Pr[P] is the probability of {R : exec(A, C_0, R) satisfies P}.
• Let T be a random variable (e.g., running time). For a fixed A and C_0, the expected value of T is
  E[T] = ∑_x x · Pr[T = x], where the sum ranges over all values x of T.

Probabilistic Definitions

• We define the expected value of a complexity measure to be the maximum, over all admissible adversaries A and initial configurations C_0, of the expected value for that particular A and C_0.
• So this is a "worst-case" average: worst possible adversary (pattern of asynchrony and failures) and initial configuration, averaging over the random choices.

A Randomized Consensus Algorithm

• Works in the message passing model
• Tolerates f crash failures
  • a more complicated version handles Byzantine failures
• Works in the asynchronous case
  • circumvents the asynchronous impossibility result
• Requires n > 2f
  • this is optimal

Consensus Algorithm

Code for processor p_i:
Initially r = 1 and prefer = p_i's input

1. while true do
2.   votes := get-core(prefer)        // get-core ensures a high level of consistency between what different procs get
3.   let v be the majority of phase r votes
4.   if all phase r votes are v then decide v
5.   outcomes := get-core(v)
6.   if all phase r outcome values are w
7.     then prefer := w
8.     else prefer := common-coin()   // uses randomization to imitate tossing a coin
9.   r := r + 1
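The phase structure above can be exercised with a failure-free, lock-step simulation (a sketch, not the real algorithm: every get-core call is stubbed to return all n values, which is one admissible behavior of get-core, and common-coin is stubbed with independent fair coin flips; all names are illustrative).

```python
import random

def run_consensus(inputs, max_phases=1000, seed=0):
    """Failure-free, lock-step simulation of the phase structure.
    Returns (decided value, phase in which everyone decides)."""
    rng = random.Random(seed)
    n = len(inputs)
    prefer = list(inputs)
    for r in range(1, max_phases + 1):
        votes = list(prefer)                  # get-core: everyone sees all phase-r votes
        v = max(set(votes), key=votes.count)  # majority value of the votes
        if votes.count(v) == n:               # all phase-r votes are v
            return v, r                       # every processor decides v
        # second get-core: all saw the same votes, so all report outcome v
        outcomes = [v] * n
        if len(set(outcomes)) == 1:
            prefer = [outcomes[0]] * n        # unanimous outcome becomes the preference
        else:
            prefer = [rng.randint(0, 1) for _ in range(n)]  # common-coin stand-in
    raise RuntimeError("no decision within max_phases")
```

With no failures and identical views, the coin branch never fires: unanimous inputs decide in phase 1, and mixed inputs converge on the majority vote and decide in phase 2. The coin only matters when asynchrony and crashes make different processors see different vote sets.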

Properties of Get-Core

• Executed by n processors, at most f of which can crash.
• Input parameter is a value supplied by the calling processor.
• Return parameter is an n-array, one entry per processor.
• Every nonfaulty processor's call to get-core returns.
• There exists a set C of more than n/2 processors such that every array returned by a call to get-core contains the input parameter supplied by every processor in C.

Properties of Common-Coin

• Subroutine implements an f-resilient common coin with bias ε.
• Executed by n processors, at most f of which can crash.
• No input parameter.
• Return parameter is 0 or 1.
• Every nonfaulty processor's call to common-coin returns.
• Probability that all the return values are 0 is at least ε.
• Probability that all the return values are 1 is at least ε.

Correctness of Consensus Algorithm

• For now, don't worry about how to implement get-core and common-coin.
• Assuming we have subroutines with the desired properties, we'll show
  • validity
  • agreement
  • probabilistic termination (and expected running time)

Unanimity Lemma

Lemma (14.6): If all procs that reach phase r prefer v, then all nonfaulty procs decide v by phase r.
Proof:
• Since all prefer v, all call get-core with v.
• Thus get-core returns a majority of votes for v.
• Thus all nonfaulty procs decide v.

Validity

• If all processors have input v, then all prefer v in phase 1.
• By the Unanimity Lemma, all nonfaulty processors decide v by phase 1.

Agreement

Claim: If p_i decides v in phase r, then all nonfaulty procs decide v by phase r + 1.
Proof: Suppose r is the earliest phase in which any proc decides.
• p_i decides v in phase r, so all of its phase r votes are v.
• p_i's call to get-core returns more than n/2 non-nil entries, all of them votes for v.
  • in particular, all the entries for procs in C are votes for v

Agreement

• Thus every p_j receives more than n/2 votes for v (those of the procs in C), so p_j's majority value is v.
  • hence p_j does not decide a value other than v in phase r
• Also, if p_j calls get-core a second time in phase r, it uses input v.
• Thus every p_k gets only v outcome values as a result of its second call to get-core in phase r.
  • so p_k sets its preference to v at the end of phase r
• In phase r + 1, all prefer v, and the Unanimity Lemma implies they all decide v in that phase.

Termination

Lemma (14.10): The probability that all nonfaulty procs decide by any particular phase is at least ε.
Proof: Case 1: All nonfaulty procs set their preference in that phase using common-coin.
• Probability that they all get the same value is at least 2ε (at least ε for all-0 and at least ε for all-1), by the property of common-coin.
• Then apply the Unanimity Lemma (14.6).

Termination

Case 2: Some processor does not set its preference using common-coin.
• All procs that don't use common-coin to set their preference for that phase have the same preference v (convince yourself).
• The probability that the common-coin subroutine returns v for all procs that use it is at least ε.
• Then apply the Unanimity Lemma (14.6).

Expected Number of Phases

• What is the expected number of phases until all nonfaulty processors have decided?
• The probability that all decide in any given phase is at least ε.
• Treating each phase as an independent trial that succeeds with probability ε, the probability that the first success occurs in phase i is (1 − ε)^(i−1) · ε.
• This is a geometric random variable whose expected value is 1/ε, so the expected number of phases is at most 1/ε.
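A quick numeric check of the last step (assuming, as above, success probability exactly ε per phase; the infinite sum is truncated at a large index, and the function name is illustrative):

```python
def expected_phases(eps, terms=10_000):
    """Expected value of a geometric random variable: the number of phases
    when each phase independently succeeds with probability eps.
    E = sum over i >= 1 of i * (1 - eps)**(i - 1) * eps, truncated."""
    return sum(i * (1 - eps) ** (i - 1) * eps for i in range(1, terms + 1))
```

For eps = 1/4 this gives 4, matching the expected number of phases claimed in the summary slide for the constant-bias coin.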

Implementing Get-Core

• The difficulty in achieving consistency is due to the combination of asynchrony and the possibility of crashes:
  • a processor can only wait to receive n − f messages
  • the first n − f messages that p_i gets might not be from the same set of processors as p_j's first n − f messages
• Overcome this by exchanging messages three times.

Get-Core

First exchange ("round"):
• send argument value to all
• wait for n − f first round msgs
Second exchange ("round"):
• send values received in first round to all
• wait for n − f second round msgs
• merge data from second round msgs
Third exchange ("round"):
• send values received in second round to all
• wait for n − f third round msgs
• merge data from third round msgs
• return result
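The three exchanges above can be simulated in lock-step to watch the consistency property emerge (a sketch under simplifying assumptions: no processor actually crashes, and the "first n − f messages" each processor waits for are modeled as a random (n − f)-subset of senders; the function name is illustrative).

```python
import random

def get_core_sim(values, f, rng):
    """Lock-step sketch of get-core's three exchanges: in each round every
    processor sends its current set of (proc, value) pairs and merges the
    sets from n - f senders chosen by a randomized adversary."""
    n = len(values)
    known = [{(i, values[i])} for i in range(n)]   # each proc starts knowing its own value
    for _ in range(3):                             # three exchanges ("rounds")
        sent = [set(k) for k in known]             # snapshot of what everyone sends
        for i in range(n):
            for j in rng.sample(range(n), n - f):  # the n - f messages p_i waits for
                known[i] |= sent[j]                # merge received data
    return known
```

Each returned set plays the role of the n-array: the values of a common core of more than n/2 processors should appear in every result (Lemma 14.5), since this lock-step schedule is one admissible behavior.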

Analysis of Get-Core

• Lemmas 14.4 and 14.5 show that it satisfies the desired properties (termination and consistency).
• Time is O(1) (using the standard way of measuring time in an asynchronous system).

Implementing Common-Coin

A simple algorithm:
• Each processor independently outputs 0 with probability 1/2 and 1 with probability 1/2.
• Bias ε = 1/2^n
• Advantage: simple, no communication
• Disadvantage: expected number of phases until termination is 2^n

A Common Coin with Constant Bias CSCE 668Set 11: Asynchronous Consensus 40 0 with probability 1/n 1 with probability 1 – 1/n coins := get-core( ) if there exists j s.t. coins[j] = 0 then return 0 else return 1 c :=

Correctness of Common Coin

Lemma (14.12): Common-coin implements a (⌈n/2⌉ − 1)-resilient coin with bias 1/4.
Proof: Fix any admissible adversary that is weak (cannot see the contents of messages) and any initial configuration. All probabilities are calculated with respect to them.

Probability of Flipping 1

• The probability that all nonfaulty processors get 1 for the common coin is at least the probability that they all set c to 1.
• This probability is at least (1 − 1/n)^n.
• When n = 2, this expression equals 1/4.
• The expression increases with n, up to its limit of 1/e.
• Thus the probability that all nonfaulty processors get 1 is at least 1/4.

Probability of Flipping 0

• Let C be the set of core processors (whose existence is guaranteed by the properties of get-core).
• If any processor in C sets c to 0, then all the nonfaulty processors will observe this 0 after executing get-core, and thus return 0.
• The probability that at least one processor in C sets c to 0 is 1 − (1 − 1/n)^|C|.
• This expression is at least 1/4 (by arithmetic, since |C| > n/2).
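The arithmetic behind these two slides can be checked directly (a verification sketch, not part of the algorithm; the function name is illustrative, and the smallest possible core size |C| = ⌊n/2⌋ + 1 is used as the worst case).

```python
def coin_bias_bounds(n):
    """Lower bounds on the two events for the constant-bias coin:
    all n processors draw c = 1, probability (1 - 1/n)**n; and some
    processor in a smallest core C (|C| = n//2 + 1 > n/2) draws c = 0,
    probability 1 - (1 - 1/n)**|C|."""
    p_all_one = (1 - 1 / n) ** n
    p_some_core_zero = 1 - (1 - 1 / n) ** (n // 2 + 1)
    return p_all_one, p_some_core_zero
```

Both quantities stay at or above 1/4 for every n ≥ 2: the first equals exactly 1/4 at n = 2 and increases toward 1/e, while the second tends to 1 − e^(−1/2) ≈ 0.39.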

Summary of Randomized Consensus Algorithm

• Using the given implementations of get-core and common-coin, we get an asynchronous randomized consensus algorithm for f crash failures with
  • n > 2f
  • O(1) expected time complexity
    • expected number of phases is 4
    • time per phase is O(1)