More on Fault Tolerance Chapter 7

Topics: Group Communication, Virtual Synchrony, Atomic Commit, Checkpointing, Logging, Recovery

Reliable Group Communication We would like a message sent to a group of processes to be delivered to every member of that group. Problems: processes join and leave the group; processes crash (that's a leave); the sender crashes (after sending to some members, or partway through the send operation). What about efficiency? Message delivery order? Timeliness?

Reliable Group Communication Revised definition: A message sent to a group of processes should be delivered to all non-faulty members of that group. For efficiency, many algorithms form a tree structure to handle message multiplication. How to implement reliability: message sequencing and ACKs. (Figure: a sender multicasting to a group of receivers.)
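To make the sequencing-and-ACK idea concrete, here is a minimal in-memory sketch (the names ReliableSender and Receiver, and the whole simulation, are invented for illustration and not taken from the slides): the sender numbers each message, buffers it until every group member has acknowledged it, and retransmits to members that are still missing it.

# Minimal sketch of sequenced, ACK-based reliable multicast (names are invented).
# Delivery is simulated in memory; a real system would use sockets and timers.

class Receiver:
    def __init__(self, name):
        self.name = name
        self.expected = 0          # next sequence number we will deliver
        self.delivered = []

    def on_message(self, seq, payload):
        """Deliver in order; the ACK is the highest contiguous seq delivered so far."""
        if seq == self.expected:
            self.delivered.append(payload)
            self.expected += 1
        return self.expected - 1

class ReliableSender:
    def __init__(self, group):
        self.group = group
        self.next_seq = 0
        self.unacked = {}          # seq -> (payload, set of members still missing it)

    def multicast(self, payload, drop=None):
        seq = self.next_seq
        self.next_seq += 1
        missing = set(self.group)
        for r in self.group:
            if drop and r.name in drop:          # simulate a lost copy
                continue
            if r.on_message(seq, payload) >= seq:
                missing.discard(r)
        if missing:
            self.unacked[seq] = (payload, missing)   # buffer until everyone ACKs

    def retransmit(self):
        """Resend every buffered message to the members that never ACKed it."""
        for seq, (payload, missing) in list(self.unacked.items()):
            for r in list(missing):
                if r.on_message(seq, payload) >= seq:
                    missing.discard(r)
            if not missing:
                del self.unacked[seq]            # safe to free the buffer now

group = [Receiver("x"), Receiver("y"), Receiver("z")]
s = ReliableSender(group)
s.multicast("m1", drop={"y"})   # y misses m1
s.multicast("m2")               # y drops m2 (out of order); it stays buffered at the sender
s.retransmit()                  # y now gets m1 and then m2 from the sender's buffer
print([r.delivered for r in group])   # all three lists are ['m1', 'm2']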

RGC: Handling ACKs/NACKs Problem: ACK implosion; collecting an ACK from every receiver does not scale well. Solution attempt: don't ACK; instead, NACK missing messages. However, a receiver may not NACK because it doesn't know it missed a message (it isn't receiving anything at all), so the sender has to buffer outgoing messages forever. Also, a message dropped high in the multicast tree creates a NACK implosion.

RGC: Handling NACKs If processes see all messages from others, we can use Scalable Reliable Multicast (SRM) [Floyd 1997]. There are no ACKs in SRM; only missing messages are NACKed. When a receiver detects a missed message, it waits for a random delay and then multicasts its NACK to everyone in the group. This feedback allows other group members who missed the same message to suppress their own NACKs. Assumption: the retransmission of the NACKed message will itself be a multicast. This is called feedback suppression. Problem: there is still a lot of NACK traffic.
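Roughly, the feedback-suppression rule could be sketched like this (a toy simulation, not the real SRM implementation; all names are invented): every receiver that detects a gap schedules its NACK after a random delay, and cancels it if it first hears someone else's NACK for the same message.

# Toy simulation of SRM-style NACK suppression (invented names, no real networking).
import random

class Receiver:
    def __init__(self, name, missing_seq):
        self.name = name
        self.missing_seq = missing_seq          # the message this receiver never got
        self.delay = random.uniform(0.0, 1.0)   # random back-off before NACKing
        self.suppressed = False

    def hear_nack(self, seq):
        """Another member multicast a NACK for seq; suppress our own."""
        if seq == self.missing_seq:
            self.suppressed = True

def simulate(receivers):
    """Receivers fire in order of their random delay; the first NACK suppresses the rest."""
    sent_nacks = []
    for r in sorted(receivers, key=lambda x: x.delay):
        if r.suppressed:
            continue
        sent_nacks.append((r.name, r.missing_seq))      # r multicasts its NACK
        for other in receivers:                         # everyone in the group hears it
            if other is not r:
                other.hear_nack(r.missing_seq)
    return sent_nacks

random.seed(1)
group = [Receiver(f"p{i}", missing_seq=42) for i in range(5)]
print(simulate(group))   # only one (name, 42) NACK is actually multicast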

Nonhierarchical Feedback Control Several receivers have scheduled a request for retransmission, but the first retransmission request leads to the suppression of others.

Hierarchical Feedback Control Hierarchies or trees are frequently formed for multicast, so why not use them for feedback control as well? Better scalability. Works if there is a single sender (or a local group of senders) and the group membership is fairly stable. A rooted tree is formed with the sender at the root; every other node is a group of receivers. Each group of receivers has a coordinator that buffers the message, collects NACKs or ACKs from its group, and sends a single one up the tree toward the sender. Group membership changes are hard to handle.
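A very small sketch of the local-coordinator idea (invented names, in-memory only): each coordinator buffers the message for its subgroup, answers retransmission requests locally, and goes up the tree only when it is missing the message itself.

# Sketch of hierarchical feedback control (invented names; no real multicast).

class LocalCoordinator:
    def __init__(self, name, members, parent=None):
        self.name = name
        self.members = members       # receiver names in this local group
        self.parent = parent         # coordinator one level up (None at the root/sender)
        self.buffer = {}             # seq -> payload, kept until the subgroup is done

    def deliver(self, seq, payload):
        """Store the message; a real system would also forward it to children/members."""
        self.buffer[seq] = payload

    def handle_nack(self, member, seq):
        """Serve retransmissions locally; go up the tree only if we miss it too."""
        if seq in self.buffer:
            return self.buffer[seq]              # local retransmission, the sender never sees it
        if self.parent is not None:
            payload = self.parent.handle_nack(self.name, seq)
            self.buffer[seq] = payload           # cache for the rest of the subgroup
            return payload
        raise KeyError(f"message {seq} lost everywhere")

root = LocalCoordinator("sender", members=[])
c1 = LocalCoordinator("c1", members=["a", "b"], parent=root)
root.deliver(7, "m7")          # only the root got message 7
print(c1.handle_nack("a", 7))  # c1 fetches it once from its parent...
print(c1.handle_nack("b", 7))  # ...and serves b from its own buffer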

Hierarchical Feedback Control The essence of hierarchical reliable multicasting. (a) Each local coordinator forwards the message to its children. (b) A local coordinator handles retransmission requests.

Atomic Multicast A special type of group communication. Atomic = message is delivered to all or none. View (also group view) is group membership at any given time. That is, the set of processes belonging to the group. The concept of a view is needed to handle membership changes.

Multicast Terminology A message is received by the OS and communication layer, but it is not delivered to the application until it has been verifiably received by all other processes in the group.

Virtual Synchrony How do we define atomic multicast in the presence of failures? How can we guarantee delivery to all group members? Example: there are 50 members in the group; I multicast a message m1, then P10 fails before getting the message, but the others got it and I assume P10 got it as well. Membership changes are controlled with view changes. Virtual synchrony constrains the order of message delivery with respect to a view-change message: every multicast must be ordered with respect to the view change, delivered either entirely before it or entirely after it.

Virtual Synchrony The principle of virtual synchronous multicast.

Properties of Virtual Synchrony Each process in the view has the same view; that is, they all agree on the group membership. When a process joins or leaves (including a crash), this is announced to all (non-crashed) processes in the (old) group with a view-change message VC. If one process P1 in view v delivers message m, then all (non-faulty) processes belonging to view v deliver message m in view v. (Recall the difference between receive and deliver.)

Message Ordering Unordered: P1 is delivered the messages in an arbitrary order, which might differ from the order in which P2 gets them. FIFO: all messages from a single source are delivered in the order in which they were sent. Causally ordered: recall Lamport's definition of causality; potential causality must be preserved, so causally related messages from multiple sources are delivered in causal order. Total order: all processes deliver the messages in the same order (frequently causal as well). "All messages multicast to a group are delivered to all members of the group in the same order."

Unordered Messages Three communicating processes in the same group. The ordering of events per process is shown along the vertical axis.

Process P1    Process P2     Process P3
sends m1      receives m1    receives m2
sends m2      receives m2    receives m1

FIFO Ordering Four processes in the same group with two different senders, and a possible delivery order of messages under FIFO-ordered multicasting.

Process P1    Process P2     Process P3     Process P4
sends m1      receives m1    receives m3    sends m3
sends m2      receives m3    receives m1    sends m4
              receives m2    receives m2
              receives m4    receives m4
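As a sketch of how FIFO ordering can be implemented (invented class name, simplified to a single receiver): keep one expected sequence number per sender and hold back any message that arrives ahead of it.

# Sketch of FIFO-ordered delivery: per-sender sequence numbers and a hold-back queue.

class FifoReceiver:
    def __init__(self):
        self.expected = {}       # sender -> next sequence number to deliver
        self.holdback = {}       # sender -> {seq: message} that arrived too early
        self.delivered = []

    def receive(self, sender, seq, msg):
        exp = self.expected.get(sender, 0)
        if seq == exp:
            self.delivered.append((sender, msg))
            self.expected[sender] = exp + 1
            self._flush(sender)                               # later messages may now be in order
        elif seq > exp:
            self.holdback.setdefault(sender, {})[seq] = msg   # too early: hold it back
        # seq < exp: duplicate, drop silently

    def _flush(self, sender):
        q = self.holdback.get(sender, {})
        while self.expected[sender] in q:
            seq = self.expected[sender]
            self.delivered.append((sender, q.pop(seq)))
            self.expected[sender] = seq + 1

r = FifoReceiver()
r.receive("P1", 1, "m2")   # arrives before m1: held back
r.receive("P4", 0, "m3")   # different sender: delivered immediately
r.receive("P1", 0, "m1")   # now m1 and the held-back m2 are delivered, in order
print(r.delivered)         # [('P4', 'm3'), ('P1', 'm1'), ('P1', 'm2')]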

Implementing Virtual Synchrony Six different versions of virtually synchronous reliable multicasting.

Multicast                   Basic Message Ordering     Total-ordered Delivery?
Reliable multicast          None                       No
FIFO multicast              FIFO-ordered delivery      No
Causal multicast            Causal-ordered delivery    No
Atomic multicast            None                       Yes
FIFO atomic multicast       FIFO-ordered delivery      Yes
Causal atomic multicast     Causal-ordered delivery    Yes

Implementing Virtual Synchrony (a) Process 4 notices that process 7 has crashed and sends a view-change message. (b) Before process 6 can install the new view, it must make sure all other processes have received its unstable messages. Process 6 sends out all its unstable messages, followed by a flush message. (c) Process 6 installs the new view when it has received a flush message from everyone else.

Atomic Multicast: Isis, Amoeba, etc. One node is the coordinator. Either everyone sends its messages to the coordinator, and the coordinator chooses the order and sends each message to everyone; or everyone sends its messages to the coordinator and to all nodes, and the coordinator chooses the order and sends only the message number to everyone –(msg 5 from p4: global order 33)
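A rough sketch of the first (sequencer) variant above, with invented names: senders hand messages to the coordinator, which stamps them with consecutive global order numbers and forwards them, so every member ends up delivering the same sequence.

# Sketch of sequencer-based totally ordered (atomic) multicast; names are invented.

class Member:
    def __init__(self, name):
        self.name = name
        self.delivered = []

    def deliver(self, global_no, sender, msg):
        self.delivered.append((global_no, sender, msg))

class Sequencer:
    """Coordinator that assigns the global delivery order."""
    def __init__(self, members):
        self.members = members
        self.next_global = 0

    def submit(self, sender, msg):
        global_no = self.next_global        # e.g. "msg 5 from p4: global order 33"
        self.next_global += 1
        for m in self.members:              # forward in the chosen order
            m.deliver(global_no, sender, msg)
        return global_no

group = [Member("p1"), Member("p2"), Member("p3")]
seq = Sequencer(group)
seq.submit("p4", "msg 5")
seq.submit("p1", "msg 0")
# Every member's delivered list is identical, i.e. the order is total.
assert group[0].delivered == group[1].delivered == group[2].delivered
print(group[0].delivered)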

Atomic Multicast: Totem Developed at UCSB. Processes are organized into a logical ring and a token is passed around the ring. The token carries the message number of the next message to be multicast, and only the token holder can multicast a message. This easily establishes a total order. Retransmissions of missed messages are the responsibility of the original sender.
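A toy, single-process sketch of the token-ring idea (invented names, no real network): the token carries the next global sequence number, and only the current holder stamps and multicasts its pending messages before the token moves on.

# Toy sketch of Totem-style token-ring ordering (invented names, no real network).

class RingProcess:
    def __init__(self, name):
        self.name = name
        self.pending = []          # messages this process wants to multicast
        self.delivered = []        # what this process has delivered, in total order

def rotate_token(ring, rounds=2):
    """Pass the token around the ring; only the holder may multicast."""
    token_seq = 0                          # next global sequence number, carried by the token
    for _ in range(rounds):
        for holder in ring:                # the token visits processes in ring order
            while holder.pending:
                msg = holder.pending.pop(0)
                for p in ring:             # multicast with the token's sequence number
                    p.delivered.append((token_seq, holder.name, msg))
                token_seq += 1
    return token_seq

ring = [RingProcess("A"), RingProcess("B"), RingProcess("C")]
ring[1].pending.append("b1")
ring[0].pending.append("a1")
rotate_token(ring)
# All processes deliver (0, 'A', 'a1') and then (1, 'B', 'b1'), in the same order.
assert ring[0].delivered == ring[1].delivered == ring[2].delivered
print(ring[0].delivered)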

Distributed Atomic Commit The distributed commit problem is how to get all members of a group to perform an operation (transaction commit) or none at all. It is needed when a distributed transaction commits: it is necessary to find out whether each site was successful in performing its part of the transaction before allowing any site to make that transaction's changes permanent.

Two Phase Commit (Gray 1987) Assumptions: reliable communication and a designated coordinator. Simple case: no failures. The first phase is the voting phase: –Coordinator sends all participants a VOTE REQUEST –All participants respond COMMIT or ABORT The second phase is the decision phase. The coordinator decides commit or abort: if any participant voted ABORT, the decision must be abort; otherwise, commit. –Coordinator sends all participants the decision –Participants (who have been waiting for the decision) commit or abort as instructed and ack.

2PC: Participant Failures If a participant P fails before voting, the coordinator C can time out and decide abort (it cannot decide commit because P might have voted abort). If P fails after voting: if it voted commit, it must be prepared to commit the transaction when it recovers, and it must ask the coordinator or the other participants what the decision was before doing so; if it voted abort, it can simply abort the transaction when it recovers. So BEFORE a participant votes, it must log its position on stable storage.
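The "log before you vote" rule might look roughly like this (a sketch; the file name participant.log stands in for stable storage and is invented): the participant forces its vote to disk first, so after a crash it knows whether it may abort unilaterally or must ask for the decision.

# Sketch: a 2PC participant writes its vote to stable storage BEFORE sending it.
import os

LOG = "participant.log"          # invented name; stands in for stable storage

def write_log(record):
    with open(LOG, "a") as f:
        f.write(record + "\n")
        f.flush()
        os.fsync(f.fileno())     # force the record to disk before continuing

def vote(local_outcome):
    if local_outcome == "COMMIT":
        write_log("VOTE_COMMIT")          # must survive a crash
        return "VOTE_COMMIT"              # only now is it safe to send the vote
    write_log("VOTE_ABORT")
    return "VOTE_ABORT"

def recover():
    """After a crash, the last log record tells us what we are allowed to do."""
    if not os.path.exists(LOG):
        return "abort unilaterally (never voted)"
    last = open(LOG).read().splitlines()[-1]
    if last == "VOTE_COMMIT":
        return "ask coordinator/participants for the decision"
    return "abort unilaterally"

print(vote("COMMIT"))
print(recover())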

2PC: Coordinator Failures If the coordinator fails before requesting votes, the participants time out and abort. If the coordinator fails after requesting votes (or after requesting some of them): participants who did not get the vote request time out and abort; participants who voted abort time out and abort; participants who voted commit cannot unilaterally abort, since all participants may have voted commit, C may have decided commit, and some participant may have received that decision. Such a participant must either wait until the coordinator recovers or contact ALL the other participants; if even one is unreachable, it cannot decide.

Two-Phase Commit (1) (a) The finite state machine for the coordinator in 2PC, using the notation (msg rcvd / msg sent). (b) The finite state machine for a participant.

Two-Phase Commit: Recovery Actions taken by a participant P when residing in state READY and having contacted another participant Q.

State of Q    Action by P
COMMIT        Make transition to COMMIT
ABORT         Make transition to ABORT
INIT          Make transition to ABORT
READY         Contact another participant
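The table above translates directly into a small decision function (a sketch; the state names follow the table, the helper names are invented): a participant stuck in READY polls the others and acts on the first informative answer.

# Sketch of the READY-participant recovery rule from the table above.

def decide_from_peer(peer_state):
    """Map the state reported by another participant Q to P's action."""
    if peer_state == "COMMIT":
        return "COMMIT"
    if peer_state in ("ABORT", "INIT"):
        return "ABORT"                # Q aborted, or never voted, so commit is impossible
    return None                       # Q is READY too: no information, keep asking

def terminate(peer_states):
    """P is in READY; ask the other participants one by one."""
    for q, state in peer_states.items():
        action = decide_from_peer(state)
        if action is not None:
            return f"learned from {q}: {action}"
    return "all reachable participants are READY: remain blocked"

print(terminate({"Q1": "READY", "Q2": "INIT"}))     # -> ABORT
print(terminate({"Q1": "READY", "Q2": "READY"}))    # -> remain blocked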

Two-Phase Commit Outline of the steps taken by the coordinator in a two-phase commit protocol.

actions by coordinator:

write START_2PC to local log;
multicast VOTE_REQUEST to all participants;
while not all votes have been collected {
    wait for any incoming vote;
    if timeout {
        write GLOBAL_ABORT to local log;
        multicast GLOBAL_ABORT to all participants;
        exit;
    }
    record vote;
}
if all participants sent VOTE_COMMIT and coordinator votes COMMIT {
    write GLOBAL_COMMIT to local log;
    multicast GLOBAL_COMMIT to all participants;
} else {
    write GLOBAL_ABORT to local log;
    multicast GLOBAL_ABORT to all participants;
}

Two-Phase Commit Steps taken by a participant process in 2PC.

actions by participant:

write INIT to local log;
wait for VOTE_REQUEST from coordinator;
if timeout {
    write VOTE_ABORT to local log;
    exit;
}
if participant votes COMMIT {
    write VOTE_COMMIT to local log;
    send VOTE_COMMIT to coordinator;
    wait for DECISION from coordinator;
    if timeout {
        multicast DECISION_REQUEST to other participants;
        wait until DECISION is received;   /* remain blocked */
        write DECISION to local log;
    }
    if DECISION == GLOBAL_COMMIT
        write GLOBAL_COMMIT to local log;
    else if DECISION == GLOBAL_ABORT
        write GLOBAL_ABORT to local log;
} else {
    write VOTE_ABORT to local log;
    send VOTE_ABORT to coordinator;
}

Two-Phase Commit Steps taken for handling incoming decision requests.

actions for handling decision requests:   /* executed by separate thread */

while true {
    wait until any incoming DECISION_REQUEST is received;   /* remain blocked */
    read most recently recorded STATE from the local log;
    if STATE == GLOBAL_COMMIT
        send GLOBAL_COMMIT to requesting participant;
    else if STATE == INIT or STATE == GLOBAL_ABORT
        send GLOBAL_ABORT to requesting participant;
    else
        skip;   /* participant remains blocked */
}

2PC: Blocking Problem If the coordinator fails after deciding but before sending all decision messages, there is a problem for the participants. If a participant got the decision message, it carries out the decision (and tells others if asked). If a participant voted abort, it just aborts. If a participant voted commit, it must wait until the coordinator recovers or it gets a response from ALL participants (so each participant must know who the other participants are). The crash or unavailability of the coordinator plus one participant results in a BLOCK.

How to Solve Blocking? All participants could be in the same state or in states adjacent to that state, but no further away, since their moves are "coordinated": they move out of the current state only when directed to by the coordinator. Problem: when the coordinator plus one or more participants fail, the other participants must be able to determine the outcome based on the states of the living participants.

Participant State Diagrams Participants can move out of the current state only when directed to by C, and C gives that command only when it has received confirmation that all participants have made the transition. Assume the figure is the state diagram of a participant. If one participant is in state 1, what are the possible states of the others? Either C has given the command to leave state 5 (but not yet the command to leave state 1), or C has given the command to leave state 1, which it would only do if it knew all participants were in state 1. The possible state sets are {5,1} or {1,2,3}.

Participant State Diagrams If one participant is in state 3, what are the possible states of the others? Case 1: C has given the command to leave state 1 but has not yet received confirmation that all participants are in state 3 (C may have died after sending the command to only some of them). Possible states are {1,2,3}. Case 2: C gave the command to leave state 3, but this participant has not received it yet. Possible states are {3,4,6}. State sets that are not possible: {3,5}, {1,4}, and {2,4}.

How to Solve Blocking So the conditions for a non-blocking atomic commit protocol are: There is no state from which it is possible to make a transition directly to both commit and abort. –That is, no {commit, abort} set. Any state from which a transition to commit is possible has the property that participants can decide without waiting for the recovery of the coordinator or of dead participants. –A participant can decide based on the sets of possible states.

Three Phase Commit Vote phase, decision phase, and (if commit) commit phase. Vote phase: –Coordinator sends a vote request –All respond commit or abort (and change state) Decision phase: –After getting all votes, the coordinator decides commit if all voted commit, and abort otherwise. –Coordinator sends Abort or Prepare-to-commit –All respond with an ack Commit phase: –After getting all acks, the coordinator sends the commit command. –All ack.
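A compressed, failure-free sketch of the coordinator's three phases (invented names; timeouts, logging, and recovery are deliberately left out):

# Failure-free sketch of the 3PC coordinator (invented names; no timeouts or logging).

def three_phase_commit(participants):
    # Phase 1: vote
    votes = [p.on_vote_request() for p in participants]      # each returns "COMMIT"/"ABORT"
    if any(v == "ABORT" for v in votes):
        for p in participants:
            p.on_decision("ABORT")
        return "ABORT"

    # Phase 2: decision, sent as Prepare-to-commit; wait for all acks
    acks = [p.on_prepare_to_commit() for p in participants]  # each returns "ACK"
    assert all(a == "ACK" for a in acks)

    # Phase 3: commit
    for p in participants:
        p.on_decision("COMMIT")
    return "COMMIT"

class Participant:
    def __init__(self, vote):
        self.vote, self.state = vote, "INIT"
    def on_vote_request(self):
        self.state = "READY" if self.vote == "COMMIT" else "ABORT"
        return self.vote
    def on_prepare_to_commit(self):
        self.state = "PRECOMMIT"
        return "ACK"
    def on_decision(self, decision):
        self.state = decision

print(three_phase_commit([Participant("COMMIT"), Participant("COMMIT")]))   # COMMIT
print(three_phase_commit([Participant("COMMIT"), Participant("ABORT")]))    # ABORT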

Three Phase Commit (a) Finite state machine for the coordinator in 3PC. (b) Finite state machine for a participant.

Three Phase Commit Failure cases, when P recovers: If P dies before voting, P and C can assume abort. If P dies after voting abort, it unilaterally aborts. If C dies and P has voted abort, then abort. Two interesting cases to look at: the coordinator dies after requesting votes (and some participants may be dead as well), and the coordinator times out because some participant died.

3PC: Case 1 C dies after requesting votes, and some participants are dead. The living participants must decide what to do based on their states. If some are in the init state and some are ready: abort, as the dead ones may be in the abort state. If any are in the abort state: abort, since some may be in the ready state, but none can have committed. (Participant states: init, ready, precommit, abort, commit.)

3PC: Case 1 continued If some are in the ready state and some in precommit: commit, since all must have voted commit. If all are in the ready state: abort, since some dead participants may have voted abort, or C may not have received all votes before crashing; and if in fact all voted commit and then the coordinator crashed, it does not hurt to abort as long as all survivors agree.
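The case analysis on this and the previous slide can be written as one small rule that the surviving participants apply to the set of states they observe (a sketch; state names as in the figure):

# Sketch of the 3PC termination rule used by the surviving participants
# when the coordinator (and possibly some participants) have died.

def decide(surviving_states):
    """surviving_states: set of states of the living participants."""
    s = set(surviving_states)
    if "commit" in s:
        return "COMMIT"                 # someone already committed, so the decision was commit
    if "abort" in s or "init" in s:
        return "ABORT"                  # commit can no longer have been decided
    if "precommit" in s:
        return "COMMIT"                 # every participant must have voted commit
    return "ABORT"                      # all survivors are ready: abort is safe (dead ones
                                        # may have voted abort, and nobody can have committed)

print(decide({"init", "ready"}))        # ABORT
print(decide({"ready", "precommit"}))   # COMMIT
print(decide({"ready"}))                # ABORT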

3PC: Case 2 The coordinator times out because some participant died. C in the wait state: some participant died before or while voting: decide abort. C in the precommit state: some participant has already voted commit but failed to ack the prepare-to-commit command: commit the transaction; when that participant recovers and inquires, it will commit the transaction.

Checkpointing, Logging, Recovery Recovery is what happens after the crash. Logs and checkpoints make it possible. A checkpoint is a durable record of a consistent state that existed on this node at some time in the past. A log is a durable record or history of the significant events (such as writes but not reads) that have occurred at this site (either since the start or since the last checkpoint).

Recovery Strategies Backward recovery: bring the system back to a previous correct and consistent state. A checkpoint is made periodically during normal operation by recording the current state on stable storage (there are problems with ongoing transactions). After a failure, the state can be restored from the checkpoint information. Issue: taking a checkpoint is a lot of overhead. Forward recovery: bring the system to a new correct state after a crash; this may involve asking another site what the current state is.

Recovery Strategies Combination: use both a checkpoint and a recovery log. –Take a checkpoint, delete the old log, and start a new log. –Log all significant messages, transactions, etc., up to the next checkpoint. –When recovering from a failure, restore the checkpoint state, then replay the log to redo these operations. –Often used in databases.
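A minimal sketch of the checkpoint-plus-log combination (invented file names; the "state" is just a dictionary): take a checkpoint and truncate the log, append every update to the log before applying it, and on recovery reload the checkpoint and replay the log.

# Minimal sketch of checkpoint + redo-log recovery (invented file names, toy state).
import json, os

CHECKPOINT, LOG = "state.ckpt", "state.log"

def take_checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)              # durable snapshot of the current state
    open(LOG, "w").close()               # old log entries are no longer needed

def apply_update(state, key, value):
    with open(LOG, "a") as f:            # log the operation BEFORE applying it
        f.write(json.dumps({"key": key, "value": value}) + "\n")
        f.flush(); os.fsync(f.fileno())
    state[key] = value

def recover():
    """Restore the checkpoint, then replay the log to redo later updates."""
    state = json.load(open(CHECKPOINT)) if os.path.exists(CHECKPOINT) else {}
    if os.path.exists(LOG):
        for line in open(LOG):
            rec = json.loads(line)
            state[rec["key"]] = rec["value"]
    return state

state = {"x": 1}
take_checkpoint(state)
apply_update(state, "y", 2)
apply_update(state, "x", 3)
print(recover())    # {'x': 3, 'y': 2}: checkpoint plus replayed log rebuilds the state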

Checkpoints in a Distributed System Taking a checkpoint is a single-site operation, whereas in a distributed system there is a global state that must remain consistent. If one process, P2, fails and rolls back to a previous checkpoint, that point may be inconsistent with the rest of the sites in the system, which means the other sites may have to roll back as well. This is a problem if the taking of checkpoints is not coordinated: we need a global snapshot.

Checkpointing P2 fails and rolls back to its previous checkpoint. It is now inconsistent with P1, so P1 must roll back, which causes P2 to roll back one more checkpoint, to the final recovery line.

Independent Checkpointing The problem with independent checkpoints is the domino effect: cascading rollbacks to find a consistent state. We need a distributed snapshot!

Logging All events which affect the system state are logged starting with the last checkpoint. –Messages received –Updates from clients After failure, the system state is restored from the checkpoint, then the log is replayed to bring the system to a consistent state.

Message Logging Q did not correctly log m2, even though m2 may have caused Q to send m3. On replay, Q may never send m3, but R has already received it.

End of Chapter 7