
1 More on Fault Tolerance Chapter 7

2 Topics
Group Communication
Virtual Synchrony
Atomic Commit
Checkpointing, Logging, Recovery

3 Reliable Group Communication
We would like a message sent to a group of processes to be delivered to every member of that group.
Problems:
Processes join and leave the group.
Processes crash (a crash is an unannounced leave).
The sender crashes (after sending to some members, or partway through the send operation).
What about efficiency? Message delivery order? Timeliness?

4 Reliable Group Communication
Revised definition: a message sent to a group of processes should be delivered to all non-faulty members of that group.
For efficiency, many algorithms form a tree structure to handle message multiplication.
How to implement reliability: message sequencing and ACKs.
[Figure: multicast tree with a sender and receivers exchanging messages x and y]

5 RGC: Handling ACKs/NACKs
Problem: ACK implosion; collecting an ACK from every receiver does not scale.
Solution attempt: don't ACK; instead, NACK missing messages.
However, a receiver that is receiving nothing at all does not know it has missed a message, so it never sends a NACK. The sender therefore has to buffer outgoing messages forever.
Also, a message dropped high in the multicast tree creates a NACK implosion.
[Figure: multicast tree with a sender and receivers exchanging messages x and y]

6 RGC: Handling NACKs
If processes see all messages from others, they can use Scalable Reliable Multicast (SRM) [Floyd 1997].
No ACKs in SRM; only missing messages are NACKed.
When a receiver detects a missed message, it waits for a random delay, then multicasts its NACK to everyone in the group. This feedback allows other group members who missed the same message to suppress their own NACKs. This is called feedback suppression.
Assumption: the retransmission of the NACKed message will itself be a multicast.
Problem: still lots of NACK traffic.
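
A minimal sketch (an assumption about the mechanism, not SRM's actual code) of the random-delay suppression idea: every receiver that detects the gap schedules its NACK after a random backoff, and the first timer to fire multicasts the NACK, which cancels everyone else's.

    import random

    def simulate_suppression(receivers_missing, max_delay=1.0):
        # Each receiver that detected the gap draws a random backoff.
        delays = {r: random.uniform(0, max_delay) for r in receivers_missing}
        first = min(delays, key=delays.get)    # this receiver's timer fires first
        # Its multicast NACK reaches the whole group; the others cancel theirs.
        suppressed = [r for r in delays if r != first]
        return first, suppressed

    nacker, quiet = simulate_suppression(["P2", "P5", "P9"])
    print(nacker, "multicasts the NACK;", quiet, "suppress theirs")

In the common case only one NACK (and one retransmission) crosses the network, instead of one per receiver that missed the message.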

7 Nonhierarchical Feedback Control Several receivers have scheduled a request for retransmission, but the first retransmission request leads to the suppression of others.

8 Hierarchical Feedback Control
Hierarchies or trees are frequently formed for multicast anyway, so why not use them for feedback control? Better scalability.
Works if there is a single sender (or a local group of senders) and the group membership is fairly stable.
A rooted tree is formed with the sender at the root. Every other node is a group of receivers.
Each group of receivers has a coordinator that buffers the message, collects NACKs or ACKs from its group, and sends a single one up the tree toward the sender, as in the sketch below.
Hard to handle group membership changes.
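
A sketch of the aggregation step, under the assumption that coordinators collect plain ACKs (the slide allows NACKs or ACKs); the class and method names are illustrative:

    class LocalCoordinator:
        def __init__(self, parent, group_size):
            self.parent = parent            # coordinator one level up (None at the root)
            self.group_size = group_size    # receivers in this local group
            self.acks = 0

        def on_child_ack(self):
            self.acks += 1
            if self.acks == self.group_size and self.parent is not None:
                self.parent.on_child_ack()  # one ACK stands for the whole subtree

    # Usage: a root whose single child is a local group of three receivers.
    root = LocalCoordinator(parent=None, group_size=1)
    local = LocalCoordinator(parent=root, group_size=3)
    for _ in range(3):
        local.on_child_ack()                # the third ACK sends one ACK to the root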

9 Hierarchical Feedback Control
The essence of hierarchical reliable multicasting.
a) Each local coordinator forwards the message to its children.
b) A local coordinator handles retransmission requests.

10 Atomic Multicast
A special type of group communication.
Atomic = the message is delivered to all members or to none.
A view (also group view) is the group membership at a given time, that is, the set of processes belonging to the group.
The concept of a view is needed to handle membership changes.

11 Multicast Terminology
A message is received by the OS and communication layer, but it is not delivered to the application until it has been verifiably received by all other processes in the group.
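
A toy hold-back queue illustrating the distinction (the stability callback is an assumption standing in for "verifiably received by all"):

    class HoldBackQueue:
        def __init__(self):
            self.held = {}                      # msg_id -> payload: received, not delivered

        def receive(self, msg_id, payload):
            self.held[msg_id] = payload         # the comm layer has it; the app has not

        def on_stable(self, msg_id, deliver):
            # Called once the protocol learns all group members have the message.
            if msg_id in self.held:
                deliver(self.held.pop(msg_id))  # only now is it *delivered*

    q = HoldBackQueue()
    q.receive(1, "m1")
    q.on_stable(1, print)                       # prints "m1" only once m1 is stable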

12 Virtual Synchrony
How do we define atomic multicast in the presence of failures? How can we guarantee delivery to all group members?
Example: a group has 50 members; I multicast a message m1; P10 fails before receiving it, but everyone else receives it and I assume P10 did too.
Control membership changes with a view change.
Virtual synchrony constrains the order of message delivery with respect to the view-change message: every multicast must be ordered relative to the view change, so it is delivered either entirely before or entirely after the new view is installed.

13 Virtual Synchrony The principle of virtual synchronous multicast.

14 Properties of Virtual Synchrony
All processes in a view agree on the group membership; every member has the same view.
When a process joins or leaves (including by crashing), this is announced to all (non-crashed) processes in the (old) group with a view change message VC.
If one process P1 in view v delivers message m, then all processes belonging to view v deliver m in view v. (Recall the difference between receive and deliver.)

15 Message Ordering
Unordered - P1 is delivered the messages in an arbitrary order, which may differ from the order in which P2 gets them.
FIFO - all messages from a single source are delivered in the order in which they were sent.
Causally ordered - recall Lamport's definition of causality: potential causality must be preserved, so causally related messages from multiple sources are delivered in causal order.
Total order - all processes deliver the messages in the same order (frequently combined with causal order): "All messages multicast to a group are delivered to all members of the group in the same order."

16 Unordered Messages
Three communicating processes in the same group. The ordering of events per process is shown along the vertical axis.

Process P1     Process P2     Process P3
sends m1       receives m1    receives m2
sends m2       receives m2    receives m1

17 FIFO Ordering
Four processes in the same group with two different senders, and a possible delivery order of messages under FIFO-ordered multicasting.

Process P1     Process P2     Process P3     Process P4
sends m1       receives m1    receives m3    sends m3
sends m2       receives m3    receives m1    sends m4
               receives m2    receives m4
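
One common way to implement FIFO ordering (a sketch under the assumption of per-sender sequence numbers; names are illustrative) is to hold back any message that arrives ahead of its predecessors from the same sender:

    from collections import defaultdict

    class FifoReceiver:
        def __init__(self, deliver):
            self.deliver = deliver
            self.expected = defaultdict(int)    # sender -> next seq to deliver
            self.early = defaultdict(dict)      # sender -> {seq: msg} arrived early

        def on_receive(self, sender, seq, msg):
            self.early[sender][seq] = msg
            # Deliver every in-order message now available from this sender.
            while self.expected[sender] in self.early[sender]:
                self.deliver(sender, self.early[sender].pop(self.expected[sender]))
                self.expected[sender] += 1

    r = FifoReceiver(lambda s, m: print(s, "delivers", m))
    r.on_receive("P1", 1, "m2")                 # held back: seq 0 is still missing
    r.on_receive("P1", 0, "m1")                 # delivers m1, then m2, in FIFO order

Note that this says nothing about the relative order of messages from different senders, which is exactly the gap between FIFO order and total order.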

18 Implementing Virtual Synchrony
Six different versions of virtually synchronous reliable multicasting.

Multicast                  Basic Message Ordering     Total-ordered Delivery?
Reliable multicast         None                       No
FIFO multicast             FIFO-ordered delivery      No
Causal multicast           Causal-ordered delivery    No
Atomic multicast           None                       Yes
FIFO atomic multicast      FIFO-ordered delivery      Yes
Causal atomic multicast    Causal-ordered delivery    Yes

19 Implementing Virtual Synchrony
(a) Process 4 notices that process 7 has crashed and sends a view change message.
(b) Before process 6 can install the new view, it must make sure all other processes have received its unstable messages. Process 6 sends out all its unstable messages, followed by a flush.
(c) Process 6 installs the new view when it has received a flush message from everyone else.

20 Atomic Multicast: Isis, Amoeba, etc.
One node is the coordinator (the sequencer). Either:
Everyone sends its messages to the coordinator, and the coordinator chooses the order and sends each message to everyone, or
Everyone sends its messages to the coordinator and to all nodes, and the coordinator chooses the order and sends just the message number to everyone
–(msg 5 from p4: global order 33)
A sketch of the first variant follows below. [Figure: four processes and the coordinator, one diagram per variant]
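
The sketch below shows the sequencer variant with illustrative names; real systems such as Isis add reliability and view handling on top of this core idea:

    import itertools

    class Sequencer:
        """The coordinator: assigns the global order and multicasts."""
        def __init__(self, members):
            self.members = members              # delivery callbacks, one per process
            self.counter = itertools.count(1)   # global order numbers 1, 2, 3, ...

        def submit(self, sender, msg):
            order = next(self.counter)          # e.g. "msg 5 from p4: global order 33"
            for deliver in self.members:        # the ordered multicast
                deliver(order, sender, msg)

    seq = Sequencer([lambda o, s, m: print("P1 delivers", o, s, m),
                     lambda o, s, m: print("P2 delivers", o, s, m)])
    seq.submit("p4", "msg 5")                   # every member delivers in the same order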

21 Atomic Multicast: Totem
Developed at UCSB.
Processes are organized into a logical ring, around which a token is passed. The token carries the message number of the next message to be multicast.
Only the token holder can multicast a message. This easily establishes total order.
Retransmissions of missed messages are the responsibility of the original sender.
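
A toy simulation of one token circuit (a sketch of the total-ordering idea only; the real Totem protocol also handles retransmission, membership, and token loss):

    from collections import deque

    class Process:
        def __init__(self, name):
            self.name, self.outgoing, self.delivered = name, deque(), []
        def deliver(self, seq, sender, msg):
            self.delivered.append((seq, sender, msg))

    ring = [Process("P0"), Process("P1"), Process("P2")]
    ring[0].outgoing.append("hello")
    ring[2].outgoing.append("world")

    next_seq = 1                        # the message number carried by the token
    for holder in ring:                 # one circuit of the token around the ring
        while holder.outgoing:          # only the token holder may multicast
            msg = holder.outgoing.popleft()
            for p in ring:
                p.deliver(next_seq, holder.name, msg)
            next_seq += 1               # total order comes from the token's counter

    print(ring[1].delivered)            # identical delivery order at every process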

22 Distributed Atomic Commit
The distributed commit problem: how to get all members of a group to perform an operation (transaction commit) or none at all.
It arises when a distributed transaction commits: we must find out whether each site succeeded in performing its part of the transaction before allowing any site to make that transaction's changes permanent.

23 Two Phase Commit (Gray 1987)
Assumptions: reliable communication and a designated coordinator.
Simple case: no failures.
First phase is the voting phase:
–Coordinator sends all participants a VOTE_REQUEST
–All participants respond COMMIT or ABORT
Second phase is the decision phase. The coordinator decides commit or abort: if any participant voted ABORT, the decision must be abort; otherwise, commit.
–Coordinator sends all participants the decision
–Participants (who have been waiting for the decision) commit or abort as instructed and ack.

24 2PC: Participant Failures
If a participant P fails before voting, the coordinator C can time out and decide abort. (C cannot decide commit, because P might have voted abort.)
If P fails after voting commit, it must be prepared to commit the transaction when it recovers; it must ask the coordinator or the other participants what the decision was before doing so. If it voted abort, it can simply abort the transaction when it recovers.
So BEFORE a participant votes, it must log its vote on stable storage.

25 2PC: Coordinator Failures
If the coordinator fails before requesting votes, participants time out and abort.
If the coordinator fails after requesting votes (or after requesting some of them):
Ps who did not get the vote request time out and abort.
Ps who voted abort time out and abort.
Ps who voted commit cannot unilaterally abort: all Ps may have voted commit, C may have decided commit, and some P may already have received the decision. Such a P must either wait until the coordinator recovers or contact ALL the other Ps; if even one is unreachable, it cannot decide.

26 Two-Phase Commit (1)
a) The finite state machine for the coordinator in 2PC, using the notation (msg rcvd / msg sent).
b) The finite state machine for a participant.

27 Two-Phase Commit: Recovery
Actions taken by a participant P residing in state READY that has contacted another participant Q.

State of Q   Action by P
COMMIT       Make transition to COMMIT
ABORT        Make transition to ABORT
INIT         Make transition to ABORT
READY        Contact another participant

28 Two-Phase Commit
Outline of the steps taken by the coordinator in a two-phase commit protocol.

actions by coordinator:

    write START_2PC to local log;
    multicast VOTE_REQUEST to all participants;
    while not all votes have been collected {
        wait for any incoming vote;
        if timeout {
            write GLOBAL_ABORT to local log;
            multicast GLOBAL_ABORT to all participants;
            exit;
        }
        record vote;
    }
    if all participants sent VOTE_COMMIT and coordinator votes COMMIT {
        write GLOBAL_COMMIT to local log;
        multicast GLOBAL_COMMIT to all participants;
    } else {
        write GLOBAL_ABORT to local log;
        multicast GLOBAL_ABORT to all participants;
    }

29 Two-Phase Commit
Steps taken by a participant process in 2PC.

actions by participant:

    write INIT to local log;
    wait for VOTE_REQUEST from coordinator;
    if timeout {
        write VOTE_ABORT to local log;
        exit;
    }
    if participant votes COMMIT {
        write VOTE_COMMIT to local log;
        send VOTE_COMMIT to coordinator;
        wait for DECISION from coordinator;
        if timeout {
            multicast DECISION_REQUEST to other participants;
            wait until DECISION is received;   /* remain blocked */
            write DECISION to local log;
        }
        if DECISION == GLOBAL_COMMIT
            write GLOBAL_COMMIT to local log;
        else if DECISION == GLOBAL_ABORT
            write GLOBAL_ABORT to local log;
    } else {
        write VOTE_ABORT to local log;
        send VOTE_ABORT to coordinator;
    }

30 Two-Phase Commit
Steps taken for handling incoming decision requests.

actions for handling decision requests:   /* executed by separate thread */

    while true {
        wait until any incoming DECISION_REQUEST is received;   /* remain blocked */
        read most recently recorded STATE from the local log;
        if STATE == GLOBAL_COMMIT
            send GLOBAL_COMMIT to requesting participant;
        else if STATE == INIT or STATE == GLOBAL_ABORT
            send GLOBAL_ABORT to requesting participant;
        else
            skip;   /* participant remains blocked */
    }

31 2PC: Blocking Problem
If the coordinator fails after deciding but before sending all decision messages, the Ps have a problem:
If a P got the decision message, it carries out the decision (and tells others if asked).
If a P voted abort, it just aborts.
If a P voted commit, it must wait until the coordinator recovers or it gets a response from ALL participants (so P must know who the other Ps are).
The crash or unavailability of the coordinator plus even one P results in a BLOCK.

32 How to Solve Blocking?
All participants must be in the same state, or in states adjacent to that one state, but no further away, since their moves are "coordinated": they move out of the current state only when directed to by the coordinator.
Problem: when the coordinator plus one or more Ps fail, the remaining Ps must be able to determine the outcome based on the states of the living participants.

33 Participant State Diagrams
Ps can move out of their current state only when directed to by C, and C gives the command only when it has received confirmation that all Ps have made the transition.
Assume the figure shows the state diagram of a participant (states 1-7).
If one P is in state 1, what are the possible states of the others?
–The coordinator has given the command to leave state 5, but some Ps may not have received it yet.
–C would give the command to leave state 1 only if it knew all Ps were in state 1, after which some may have moved on to state 2 or 3.
Possible state sets: {5,1} or {1,2,3}.

34 Participant State Diagrams
If one P is in state 3, what are the possible states of the others?
Case 1: C has given the command to leave state 1 but has not yet received confirmation that all Ps are in state 3. (C may have died after sending the command to only some Ps.) Possible states: {1,2,3}.
Case 2: C gave the command to leave state 3, but this P has not received it yet. Possible states: {3,4,6}.
State sets that are not possible: {3,5}, {1,4}, and {2,4}.

35 How to Solve Blocking
So the conditions for a non-blocking atomic commit protocol are:
There is no state from which it is possible to make a transition directly to both commit and abort.
–That is, no possible state set {commit, abort}.
Any state from which a transition to commit is possible has the property that participants can decide without waiting for the recovery of the coordinator or of dead participants.
–P can decide based on the sets of possible states.

36 Three Phase Commit
Vote phase, decision phase, and (if committing) commit phase. A sketch follows below.
Vote phase
–Coordinator sends a vote request
–All respond commit or abort (and change state)
Decision phase
–After getting all votes, the coordinator decides commit if all voted commit, and abort otherwise.
–Coordinator sends Abort or Prepare-to-commit
–All respond ack
Commit phase
–After getting all acks, the coordinator sends the commit command.
–All ack.
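
A failure-free sketch of the three coordinator phases (message names follow the slides; the Participant stub and its always-commit vote are assumptions for illustration):

    class Participant:
        def vote(self):
            return "COMMIT"             # stub: a real participant may vote ABORT
        def send(self, msg):
            self.last = msg             # stub: record the last protocol message

    def coordinator_3pc(participants):
        # Vote phase: any ABORT vote (or a timeout) aborts the transaction.
        if not all(p.vote() == "COMMIT" for p in participants):
            for p in participants:
                p.send("GLOBAL_ABORT")
            return "ABORT"
        # Decision phase: all participants ack PREPARE_COMMIT before anyone commits.
        for p in participants:
            p.send("PREPARE_COMMIT")
        # Commit phase: only after all acks does the commit command go out.
        for p in participants:
            p.send("GLOBAL_COMMIT")
        return "COMMIT"

    print(coordinator_3pc([Participant(), Participant()]))   # -> COMMIT

The extra prepare-to-commit round is what removes the blocking state of 2PC: no participant can commit while another is still in a state from which abort is reachable.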

37 Three Phase Commit
a) Finite state machine for the coordinator in 3PC.
b) Finite state machine for a participant.

38 Three Phase Commit
Failure cases, when P recovers:
If P dies before voting, P and C can assume abort.
If P dies after voting abort, it unilaterally aborts.
If C dies and P has voted abort, then abort.
Two interesting cases to look at: the coordinator dies after requesting votes, and some participants may be dead also; and the coordinator times out because some participant died.

39 Three PC: Case 1
C dies after requesting votes, and some Ps are dead. The living Ps must decide what to do based on their states (init, ready, precommit, abort, commit):
If some are in the init state and some are ready: abort, as the dead ones may be in the abort state.
If any are in the abort state: abort, since some may be in the ready state but none can have committed.

40 3PC: Case 1 continued
If some are in the ready state and some in precommit: commit, since all must have voted commit.
If all are in the ready state: abort, since some dead participants may have voted abort, or C may not have received all votes before crashing. Even if all voted commit and then the coordinator crashed, it won't hurt to abort as long as all the living Ps agree.

41 3PC: Case 2
The coordinator times out because some P died.
C in the wait state: some P died before or while voting: decide abort.
C in the pre-commit state: some participant has already voted commit but failed to ack the prepare command: commit the transaction; when that participant recovers and inquires, it will commit the transaction.

42 Checkpointing, Logging, Recovery
Recovery is what happens after a crash; logs and checkpoints make it possible.
A checkpoint is a durable record of a consistent state that existed on this node at some time in the past.
A log is a durable record, or history, of the significant events (such as writes, but not reads) that have occurred at this site, either since the start or since the last checkpoint.

43 Recovery Strategies
Backward recovery: bring the system back to a previous correct and consistent state. A checkpoint is made periodically during normal operation by recording the current state on stable storage (with some complications for ongoing transactions). After a failure, the state can be restored from the checkpoint. Issue: taking a checkpoint is a lot of overhead.
Forward recovery: bring the system to a new correct state after a crash. May involve asking another site what the current state is.

44 Recovery Strategies
Combination: use both a checkpoint and a recovery log, as in the sketch below.
–Take a checkpoint, delete the old log, and start a new log.
–Log all significant messages, transactions, etc., up to the next checkpoint.
–When recovering from a failure, restore the checkpointed state, then replay the log to re-do those operations.
–Often used in databases.
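
A minimal sketch of the combination strategy (the file name and record format are assumptions; a real database would write the log durably as well):

    import json

    def checkpoint(state, log, path="checkpoint.json"):
        with open(path, "w") as f:
            json.dump(state, f)          # durable snapshot of the current state
        log.clear()                      # old entries are folded into the checkpoint

    def recover(log, path="checkpoint.json"):
        with open(path) as f:
            state = json.load(f)         # restore the checkpointed state...
        for key, value in log:           # ...then replay logged writes, in order
            state[key] = value
        return state

    log = []
    checkpoint({"x": 0}, log)
    log.append(("x", 42))                # a significant event after the checkpoint
    print(recover(log))                  # -> {'x': 42}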

45 Checkpoints in a Distributed System
Taking a checkpoint is a single-site operation, whereas in a DS there is a global state that must remain consistent.
If one process, P2, fails and rolls back to a previous checkpoint, that point may be inconsistent with the rest of the sites in the DS. This means the other sites may have to roll back as well.
This is a problem if the taking of checkpoints is not coordinated: we need a global snapshot.

46 Checkpointing
P2 fails and rolls back to its previous checkpoint. It is now inconsistent with P1, so P1 must roll back, which causes P2 to roll back one more checkpoint, to the final recovery line.

47 Independent Checkpointing The problem with independent checkpoints is the domino effect or cascading rollbacks to find a consistent state: need distributed snapshot!

48 Logging
All events which affect the system state are logged, starting with the last checkpoint.
–Messages received
–Updates from clients
After a failure, the system state is restored from the checkpoint, then the log is replayed to bring the system to a consistent state.

49 Message Logging Q did not correctly log m2 even though it may have caused Q to send m3. On replay, Q may never send m3 but R has received it.

50 End of Chapter 7

