
FIT5174 Distributed & Parallel Systems


1 FIT5174 Distributed & Parallel Systems
Lecture 6: Election Algorithms, Distributed Concurrent Transactions, Fault Tolerance

2 Acknowledgement
These slides are based on slides and material by Carlo Kopp.

3 Election Algorithms
Distributed algorithms often elect one process to act as a coordinator or server.
Election assumptions:
Each process is assigned a unique process number.
Every process knows all process numbers.
No process knows which processes are currently alive.
An election selects the maximum process number among the active processes and appoints that process as the coordinator. All processes must agree on this decision.

4 The Bully Algorithm (Garcia-Molina)
When a process P notices that the coordinator is no longer responding to requests, it initiates an election:
P sends an Election message to all processes with higher numbers.
If a higher-numbered process answers, it takes over the election and P's job is done.
If no one responds, P wins the election and becomes the coordinator.
Drawbacks: timeout overhead and delayed responses.

5 The Bully Algorithm
At any moment, a process can get an Election message from one of its lower-numbered colleagues. The receiver sends an OK message back to the sender and then holds an election of its own, unless it is already holding one. Eventually all processes but one give up, and that one is the new coordinator. It announces its victory by sending Coordinator messages to all processes.
If a process that was previously down comes back up, it holds an election. If it happens to be the highest-numbered process currently running, it will win the election and take over the coordinator's job. The highest-numbered process always wins, hence the name "bully algorithm".
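As a concrete illustration (not part of the original slides), here is a minimal Python sketch of a bully election. It simulates processes as plain objects rather than exchanging real messages; the `Process` class and `election` function are invented names, and a crashed process is modelled as one that simply never answers.

```python
# Illustrative simulation of the bully algorithm (not networked code).
# Processes are plain objects; a "down" process simply never answers.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.alive = True
        self.coordinator = None

def election(initiator, processes):
    """Run a bully election started by `initiator` over `processes`."""
    higher = [p for p in processes if p.pid > initiator.pid and p.alive]
    if higher:
        # A higher-numbered process answered: it takes over the election
        # and the initiator's job is done (collapsed into a recursion here).
        return election(max(higher, key=lambda p: p.pid), processes)
    # No higher-numbered process answered: the initiator wins and
    # announces itself with Coordinator messages.
    for p in processes:
        if p.alive:
            p.coordinator = initiator.pid
    return initiator.pid

procs = [Process(i) for i in range(1, 6)]
procs[-1].alive = False              # the highest-numbered process has crashed
print(election(procs[1], procs))     # -> 4, the highest live process number
```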

6 A Ring Algorithm
Processes form a logical ring, and each process knows which process comes after it. If a process finds the coordinator dead, it sends an Election message containing its own process number to its successor in the ring. If the successor is down, the message is sent to the next process along the ring.
Each process that receives an Election message appends its own process number and forwards the message to its successor. When the message returns to the process that originally sent it, that process changes the message type from Election to Coordinator and circulates it again. The Coordinator message informs all processes that the highest-numbered process is the new coordinator and that the processes listed in the message are the members of the ring.
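For comparison, a similarly simplified sketch of one pass of the ring election: the Election message is modelled as a growing list of live process numbers, and `ring_election` is an invented helper name, not part of the lecture material.

```python
# Illustrative pass of the ring election described above. The Election
# message accumulates live process numbers as it travels the ring; back at
# the originator it becomes a Coordinator message naming the maximum.

def ring_election(ring, start, alive):
    """ring: process numbers in ring order; start: index of the process
    that noticed the dead coordinator; alive: set of live numbers."""
    members = [ring[start]]                    # originator adds its own number
    i = (start + 1) % len(ring)
    while i != start:
        if ring[i] in alive:                   # skip crashed successors
            members.append(ring[i])
        i = (i + 1) % len(ring)
    coordinator = max(members)                 # Coordinator message content
    return coordinator, members

alive = {1, 2, 4}                              # process 3 (old coordinator) is down
print(ring_election([1, 2, 3, 4], start=0, alive=alive))   # -> (4, [1, 2, 4])
```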

7 Distributed Transactions
[Figure: a simple distributed transaction, in which a client transaction T uses servers X, Y and Z, and a nested distributed transaction, in which sub-transactions T1, T2, T11, T12, T21 and T22 run on servers including M, N and P.]

8 Distributed Atomic Transactions
An atomic transaction is one in which either all of a set of distributed operations are completed or none of them are.
First a process declares to all other processes involved in the transaction that it is starting the transaction. The processes exchange information and perform their specific operations. The initiator then announces that it wants all the other processes to commit to all the operations done up to that point. If all processes agree to commit, the results are made permanent. If at least one process refuses, all the state (usually open files/databases on all machines) is reverted to the point before the transaction started.

9 Transaction Processing
System model for a transaction:
The system consists of independent processes, each of which may fail at random.
Communication is assumed to be reliable (no communication errors occur).
A "stable storage" device is available. Stable storage guarantees that no data is lost unless the storage is physically destroyed by a disaster such as a flood or an earthquake.
Note: data in dynamic RAM disappears when the power is turned off, and data on a hard disk is lost if the disk head crashes or other hardware fails, so neither is a stable storage device on its own.

10 Stable Storage – Mirrored Disks
Stable storage can be implemented with a pair of disks, where each block on drive 2 is a copy of the corresponding block on drive 1. When a block is updated, the block on drive 1 is updated and verified first; then the same block on drive 2 is updated and verified. If the system crashes after the update of drive 1 but before the update of drive 2, the blocks on the two drives are compared after reboot and the differing blocks are copied from drive 1 to drive 2. If a checksum error is encountered because of a bad block, the block is copied from the disk that does not have the error.
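A rough sketch of this update and recovery order, with two Python dicts standing in for the disk drives and a CRC32 standing in for the drive's own block verification; the function names are illustrative only.

```python
# Mirrored-disk stable storage, heavily simplified: write drive 1 first,
# then drive 2, and after a crash copy any differing blocks from 1 to 2.

import zlib

drive1, drive2 = {}, {}

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

def stable_write(block_no: int, data: bytes):
    drive1[block_no] = (data, checksum(data))   # update drive 1 (verification only sketched)
    drive2[block_no] = (data, checksum(data))   # then update drive 2

def recover_after_crash():
    """After a reboot, copy any block that differs from drive 1 to drive 2."""
    for block_no, entry in drive1.items():
        if drive2.get(block_no) != entry:
            drive2[block_no] = entry

stable_write(7, b"account balance = 100")
recover_after_crash()
print(drive2[7][0])    # b'account balance = 100'
```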

11 Transaction Primitives
Programming a transaction requires special primitives that must be supplied by the OS or by the programming language. Examples of these primitives are:
Begin Transaction: marks the beginning of a transaction.
End Transaction: marks the end of a transaction and tries to commit its results.
Abort Transaction: aborts the transaction and restores the old values.
Read: reads data from a file.
Write: writes data to a file.
Begin Transaction and End Transaction delimit the scope of a transaction; the operations between them form the transaction. These primitives are implemented as system calls, library functions, or constructs of a programming language.
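These primitives might be wrapped in a library along the following lines; the `Transaction` class and its methods are assumptions made for illustration, not an actual OS or language interface, and commit/abort are reduced to dictionary operations.

```python
# Minimal sketch of the transaction primitives as library calls.

class Transaction:
    def __init__(self, store):
        self.store = store            # the "real" data
        self.workspace = {}           # uncommitted updates

    def begin(self):                  # Begin Transaction
        self.workspace = {}

    def read(self, key):              # Read
        return self.workspace.get(key, self.store.get(key))

    def write(self, key, value):      # Write: change stays private until End
        self.workspace[key] = value

    def end(self):                    # End Transaction: try to commit
        self.store.update(self.workspace)

    def abort(self):                  # Abort Transaction: restore old values
        self.workspace = {}

bank = {"A": 100, "B": 50}
t = Transaction(bank)
t.begin()
t.write("A", t.read("A") - 10)
t.write("B", t.read("B") + 10)
t.end()
print(bank)   # {'A': 90, 'B': 60}
```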

12 Transaction Properties
Four essential properties (ACID): Atomic, Consistent, Isolated, Durable.

13 Atomicity Property
A transaction happens indivisibly as far as the outside world is concerned: either all of its operations are completed or none is. Other processes cannot observe the intermediate states of the transaction.

14 Consistency Property
A transaction does not violate system invariants. For example, an invariant in a banking system is the conservation law for the total amount of money: the sum of money in the source and destination accounts after a transfer is the same as the sum before the transfer (although the invariant may be violated while the transfer is in progress).

15 Isolation Property
Even if more than one transaction is running simultaneously, the final result is the same as the result of running the transactions sequentially in some order. Transactions with this property are also called "serializable".

16 Durability Property
Once a transaction commits, its changes are permanent: no failure after the commit can undo the results or cause them to be lost.

17 Nested Transactions
Transactions may contain sub-transactions; such transactions are called nested transactions.
Problems with nested transactions: assume a transaction starts several sub-transactions in parallel and one of them commits, making its results visible to the parent transaction. If the parent is then aborted for some reason, the entire system needs to be restored to its original state, so the results of the committed sub-transaction must be undone. Undoing a committed transaction violates durability.

18 Nested Transactions Characteristics
Characteristics:
The permanence of a sub-transaction is valid only in the world of its direct parent; it is invalid in the world of further ancestors.
Permanence of the results is valid only for the outermost transaction.
Addressing the violation problem: one way to implement nested transactions is to give each sub-transaction a private copy of all objects in the system. If a sub-transaction commits, its private copy replaces its parent's copy.

19 Implementing Atomic Transactions
Implementing atomic transactions means:
hiding the operations within a transaction from outside processes;
making the results visible only after commit;
restoring the old state if the transaction is aborted.
There are two ways to restore the old state: a private workspace and a write-ahead log.

20 Private Workspaces
When a process starts a transaction, it is given a private workspace containing all the objects it may access, including data and files. Read and write operations are performed within the private workspace, and the data in the private workspace is written back when the transaction commits.
The problem with this technique is that the cost of copying every object to a private workspace is high. The following optimisation is possible: if a process only reads an object, no private copy is needed; only when it updates an object is a copy made in the private workspace.

21 Private Workspaces
If we use indices to objects, we can reduce the number of copy operations. When an object is to be updated, a copy of the index and of the updated objects is made in the private workspace and the private index is updated. On commit, the private index is copied over the parent's index.

22 Private Workspaces
If the transaction aborts, the private copies of the objects and the private indices are simply discarded.
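A minimal sketch of the index-based private workspace just described, assuming objects are reached through a single name-to-object index; the `PrivateWorkspace` class and its methods are invented for the example.

```python
# Index-based private workspace: copy the index, copy only updated objects,
# install the private index on commit, or simply discard it on abort.

parent_index = {"x": [1, 2, 3], "y": [4, 5]}    # index: name -> object

class PrivateWorkspace:
    def __init__(self, parent):
        self.parent = parent
        self.index = dict(parent)               # copy the index, not the objects

    def read(self, name):
        return self.index[name]                 # reads need no private object copy

    def update(self, name, new_object):
        self.index[name] = new_object           # private copy of the updated object

    def commit(self):
        self.parent.clear()
        self.parent.update(self.index)          # private index replaces parent's

    def abort(self):
        self.index = dict(self.parent)          # discard private copies

ws = PrivateWorkspace(parent_index)
ws.update("x", [1, 2, 3, 99])
ws.commit()
print(parent_index["x"])    # [1, 2, 3, 99]
```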

23 Write-ahead Log
Before any change to an object is made, a record is written to a "write-ahead log" on stable storage. The log record contains: which transaction is making the change, which object is being changed, and what the old and new values are. Only after the log record has been written successfully is the change made to the object.

24 Write-ahead Log
If the transaction commits, a commit record is written to the log (the data has already been written). If the transaction aborts, the log can be used to roll back to the original state. The log can also be used to recover from crashes.
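A toy version of the write-ahead log idea, with an in-memory list standing in for stable storage; `wal_write` and `rollback` are illustrative names, not a real logging API.

```python
# Write-ahead logging: record old and new values before changing the object,
# so an abort (or crash recovery) can restore the logged old values.

log = []                         # stand-in for the stable-storage log
data = {"x": 0, "y": 0}

def wal_write(txn_id, key, new_value):
    log.append((txn_id, key, data[key], new_value))   # log old and new value first
    data[key] = new_value                             # only then change the object

def rollback(txn_id):
    for tid, key, old, _new in reversed(log):
        if tid == txn_id:
            data[key] = old                           # restore the logged old value

wal_write("T1", "x", 1)
wal_write("T1", "y", 2)
rollback("T1")
print(data)    # {'x': 0, 'y': 0}
```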

25 Distributed Commit Protocol
Two-Phase Commit Protocol: the action of committing a transaction must appear instantaneous and indivisible. In a distributed system, the commit requires the cooperation of multiple processes, possibly on different machines. The two-phase commit protocol is the most widely used protocol for implementing atomic commit.

26 Two Phase Commit Protocol
One process acts as the coordinator; the other participating processes are subordinates (cohorts).

27 Two Phase Commit Protocol
The coordinator writes an entry labelled "prepare" in its log on a stable storage device and sends the other processes involved (the subordinates) a message telling them to prepare. When a subordinate receives the message, it checks whether it is ready to commit, makes a log entry, and sends back its decision.
After collecting all the responses, the coordinator commits the transaction if all processes are ready to commit; otherwise the transaction is aborted. The coordinator writes a log entry and then sends a message to each subordinate informing it of the decision. Writing the commit record to the coordinator's log is the act that commits the transaction: once it is written, no rollback occurs no matter what happens afterwards.
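The coordinator's side of this exchange can be sketched as follows; subordinates are modelled as callables that return their vote, and the stable-storage writes are reduced to appends on a Python list, so this is an illustration of the message order only.

```python
# Two-phase commit from the coordinator's point of view, heavily simplified.

def two_phase_commit(coordinator_log, subordinates):
    coordinator_log.append("prepare")                 # phase 1: log, then ask everyone
    votes = [sub() for sub in subordinates]           # each subordinate logs and votes
    if all(vote == "ready" for vote in votes):
        coordinator_log.append("commit")              # the commit record is the decision
        decision = "commit"
    else:
        coordinator_log.append("abort")
        decision = "abort"
    return decision                                   # phase 2: tell every subordinate

log = []
print(two_phase_commit(log, [lambda: "ready", lambda: "ready"]))   # commit
print(two_phase_commit(log, [lambda: "ready", lambda: "abort"]))   # abort
```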

28 Two Phase Commit Protocol
This protocol is highly resilient in the face of crashes because it uses stable storage. If the coordinator crashes after writing a "prepare" or "commit" log entry, on restart it can continue by resending the "prepare" or "commit" message. If a subordinate crashes after writing a "ready" or "commit" log entry, on restart it can continue by resending its "ready" or "finished" message.

29 Three Phase Commit Protocol
Assumptions: each site uses the write-ahead-log protocol, and at most one site can fail during the execution of the transaction.
Basic concept: before the commit protocol begins, all sites are in state q. If the coordinator fails while in state q1, all the cohorts perform the timeout transition, thus aborting the transaction. Upon recovery, the coordinator performs the failure transition.

30 Distributed System Concurrency Control
When several transactions run simultaneously in different processes, we must guarantee that the final result is the same as the result of running the transactions sequentially in some order; that is, we need a mechanism to guarantee isolation, or serializability. This mechanism is called concurrency control.
Three typical concurrency control algorithms: using locks, using timestamps, and optimistic concurrency control.

31 Distributed System Concurrency Control Using Locks
Locking is the oldest and most widely used concurrency control algorithm. When a process needs to access a shared resource as part of a transaction, it first locks the resource. If the resource is locked by another process, the requesting process waits until the lock is released.
The basic scheme is overly restrictive. It can be improved by distinguishing read locks from write locks: data that is only read (referenced) is locked in shared mode, while data that needs to be updated (modified) is locked in exclusive mode.

32 Lock Based Concurrency Control
Lock modes have the following compatibility: if a resource is not locked or is locked in shared mode, a transaction can lock the resource in shared mode; a transaction can lock a resource in exclusive mode only if the resource is not locked at all.
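The compatibility rule can be written as a small predicate; `can_grant` is an invented helper name used only for this sketch.

```python
# Shared/exclusive lock compatibility: shared is compatible with shared,
# exclusive is compatible with nothing.

def can_grant(requested_mode, current_modes):
    """current_modes: modes already held on the resource by other transactions."""
    if not current_modes:
        return True                                   # unlocked: grant anything
    if requested_mode == "shared":
        return all(m == "shared" for m in current_modes)
    return False                                      # exclusive needs an unlocked resource

print(can_grant("shared", ["shared"]))       # True
print(can_grant("exclusive", ["shared"]))    # False
print(can_grant("exclusive", []))            # True
```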

33 Lock Based Concurrency Control
If two transactions try to lock the same resource in incompatible modes, the two transactions are in conflict. A conflict can be a shared-exclusive conflict (read-write conflict) or an exclusive-exclusive conflict (write-write conflict).

34 Two Phase Locking
Using locks does not by itself guarantee serializability. Only transactions executed concurrently under the following rules are serializable:
A transaction locks a shared resource before accessing it and releases the lock after using it.
Locks are granted according to the compatibility constraint.
A transaction does not acquire any lock after it has released a lock.
A locking discipline that satisfies these conditions is the two-phase locking (2PL) protocol.

35 Two Phase Locking
A transaction acquires all the locks it needs in the first phase, the growing phase, and releases them in the second phase, the shrinking phase. Two-phase locking is a sufficient condition to guarantee serializability.
Suppose a transaction aborts after releasing some of its locks. If other transactions have used resources protected by those released locks, they must also be aborted; this is a cascading abort.

36 Strict Two Phase Locking
To avoid cascading aborts, many systems use "strict two-phase locking", in which a transaction does not release any lock until its commit has completed. Strict two-phase locking proceeds as follows:
Begin transaction
Acquire locks before reading or writing resources
Commit
Release all locks
End transaction
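A sketch of a strictly two-phase-locked transfer, using Python `threading.Lock` objects as a stand-in for a lock manager; the fixed A-then-B acquisition order is an extra choice made here to sidestep deadlock, not something the slide prescribes.

```python
# Strict 2PL: acquire every lock before the access it protects, release
# nothing until after the commit point.

import threading

lock_table = {}                          # resource name -> threading.Lock
data = {"A": 100, "B": 50}

def get_lock(resource):
    return lock_table.setdefault(resource, threading.Lock())

def transfer(amount):
    held = []
    for resource in ("A", "B"):          # growing phase: acquire before access
        lock = get_lock(resource)
        lock.acquire()
        held.append(lock)
    data["A"] -= amount
    data["B"] += amount                  # commit point would be here
    for lock in held:                    # release only after commit
        lock.release()

transfer(10)
print(data)    # {'A': 90, 'B': 60}
```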

37 Strict Two Phase Locking
Strict two-phase locking has the following advantages:
A transaction always reads a value written by a committed transaction.
A transaction never has to be aborted because of another transaction's abort; cascading aborts never happen.
All lock acquisitions and releases can be handled by the system without the transaction being aware of them: locks are acquired whenever data is accessed and released when the transaction has finished.
Two-phase locking can, however, cause deadlocks, for example when two processes try to acquire the same pair of locks in opposite orders.

38 Lock Granularity
The size of the resource that can be locked by a single lock is called the "lock granularity". The finer the granularity, the more concurrent processing is possible, but the more locks are required. For example, compare locking by file with locking by record within a file. Suppose two transactions access different records in the same file: if the record is the unit of locking, the two transactions can access the data simultaneously, whereas if the file is the unit of locking, they cannot.

39 Concurrency Control Using Timestamps (Reed)
Assign each transaction a timestamp at Begin Transaction. Timestamps must be unique (uniqueness can be ensured using Lamport's algorithm). Each resource has a read timestamp and a write timestamp.

40 Timestamp Based Concurrency Control
The read timestamp of a resource is the timestamp of the transaction that read the resource most recently; let TRD(x) be the read timestamp of resource x. The write timestamp is the timestamp of the transaction that updated the resource most recently; let TWR(x) be the write timestamp of resource x. Note that read and write timestamps are NOT the actual times at which the data items are read or written.

41 Timestamp Based Concurrency Control
When a transaction with timestamp T tries to read resource x:
If T < TWR(x), the transaction needs to read a value of x that has already been overwritten, so the request is rejected and the transaction is aborted.
If T ≥ TWR(x), the transaction is allowed to read the resource.

42 Timestamp Based Concurrency Control
When a transaction with timestamp T tries to update resource x:
If T < TWR(x), the transaction is attempting to write an obsolete value of x, so the request is rejected and the transaction is aborted.
If T < TRD(x), the value of x that the transaction is producing was needed previously and the system assumed it would never be produced, so the request is rejected and the transaction is aborted.
Otherwise, the transaction is allowed to update the resource.
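The read and write rules from the last two slides can be condensed into two functions. `Resource`, `ts_read` and `ts_write` are invented names, and the code follows basic timestamp ordering as stated above (no Thomas write rule or other refinements).

```python
# Basic timestamp-ordering rules: compare the transaction timestamp against
# the resource's read timestamp TRD and write timestamp TWR.

class Aborted(Exception):
    pass

class Resource:
    def __init__(self):
        self.value = None
        self.trd = 0     # timestamp of the most recent reader
        self.twr = 0     # timestamp of the most recent writer

def ts_read(txn_ts, res):
    if txn_ts < res.twr:                       # needed value already overwritten
        raise Aborted
    res.trd = max(res.trd, txn_ts)
    return res.value

def ts_write(txn_ts, res, value):
    if txn_ts < res.twr or txn_ts < res.trd:   # obsolete write, or a later
        raise Aborted                          # transaction already read x
    res.twr = txn_ts
    res.value = value

x = Resource()
ts_write(5, x, "v1")
print(ts_read(7, x))        # allowed: 7 >= TWR(x)
try:
    ts_write(6, x, "v2")    # rejected: 6 < TRD(x) = 7
except Aborted:
    print("transaction 6 aborted")
```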

43 Timestamp Based Concurrency Control
The timestamp-ordering protocol guarantees serializability, since every arc in the precedence graph goes from the transaction with the smaller timestamp to the transaction with the larger timestamp; thus there can be no cycles in the precedence graph. The timestamp protocol also ensures freedom from deadlock, since no transaction ever waits. However, the schedule may not be recoverable.

44 Optimistic Concurrency Control
Run the transactions and be optimistic about conflicts. At the end of a transaction, i.e. when it is committing, the existence of conflicts is examined: if no conflict is detected, the local copies are written to the real resources; otherwise the transaction is aborted.
A transaction progresses in phases: a read phase, a validation phase, and a write phase.
The conflict check examines whether any resource used by this transaction was updated by another transaction while this transaction was executing. To make this check possible, the system must keep track of the resources used by each transaction.

45 Optimistic Concurrency Control
Timestamping: each transaction is assigned a timestamp TS(Ti) at the beginning of its validation phase.
Validation conditions: for every pair of transactions Ti, Tj such that TS(Ti) < TS(Tj), one of the following conditions must hold:
1. Ti completes all three phases before Tj begins.
2. Ti completes before Tj starts its write phase, and Ti does not write any object read by Tj.
3. Ti completes its read phase before Tj completes its read phase, and Ti does not write any object that is either read or written by Tj.
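A simplified check in the spirit of conditions 2 and 3: at validation time, the committing transaction compares its read set against the write sets of transactions that overlapped it. The function name and set representation are assumptions for illustration, not the full validation algorithm.

```python
# Backward-style validation sketch: abort if any transaction that committed
# while this one was running wrote an object this one read.

def validate(read_set, committed_write_sets):
    """committed_write_sets: write sets of transactions that finished their
    write phase after this transaction started its read phase."""
    for write_set in committed_write_sets:
        if read_set & write_set:        # overlap -> conflict -> abort and restart
            return False
    return True

print(validate({"x", "y"}, [{"z"}]))         # True: no conflict, safe to commit
print(validate({"x", "y"}, [{"y", "w"}]))    # False: y was overwritten, abort
```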

46 Optimistic Concurrency Control
Optimistic concurrency control is designed on the assumption that conflicts are rare, so transactions rarely need to be aborted. Updates are performed on local copies, so the method fits well with the private workspace approach. The advantage is that it allows maximum parallelism, since transactions never wait and deadlocks never occur. The disadvantage is that a transaction may be aborted at the very end, because the conflict check is only done at the commit point, and all operations of the aborted transaction must then be redone from the start.

47 Summarizing Concurrency and Transactions
Why do we need election algorithms? To elect a coordinator for distributed transactions.
Two election algorithms: the bully algorithm and the ring algorithm.
Four key properties of a distributed transaction: atomic, consistent, isolated, durable.
Main programming primitives of a distributed transaction: Begin Transaction, Abort, Read, Write, End Transaction.
Two main approaches to implementing distributed transactions: private workspace and write-ahead log.
Two distributed commit protocols: two-phase commit and three-phase commit.
Why is concurrency control necessary for distributed transactions? For implementing serializability, preventing deadlock, and improving parallelism.
Three schemes for implementing concurrency control in distributed systems: lock-based, timestamp-based, and optimistic concurrency control.

48 Fault Tolerance in Distributed Systems
Faults and Fault-Tolerant Systems. Why is fault tolerance important? Parallel and distributed systems usually rely upon a large number of "nodes" connected over a network, very often a large network; networks are prone to errors, dropouts and localised failures; hardware will fail; operating systems will fail or crash. Ensuring that a parallel/distributed system performs robustly is therefore important, since otherwise a small portion of the overall system can cripple the whole system.

49 Dependability
Availability: the system is ready to be used immediately.
Reliability: the system can run continuously without failure.
Safety: when a system (temporarily) fails to operate correctly, nothing catastrophic happens.
Maintainability: how easily a failed system can be repaired.
Building a dependable system comes down to controlling failures and faults.

50 Total versus Partial Failure
Total failure: all components in a system fail; typical of a non-distributed system.
Partial failure: one or more (but not all) components in a distributed system fail; some components are affected while others are completely unaffected. A partial failure is still considered a fault of the whole system.

51 Failure
Failure: a system fails when it does not meet its promises or cannot provide its services in the specified manner.
Error: the part of the system state that leads to failure (i.e., it differs from its intended value).
Fault: the cause of an error (resulting from design errors, manufacturing faults, deterioration, or external disturbance).
These notions are recursive; a failure may be initiated by a mechanical fault: a manufacturing fault leads to a disk failure, the disk failure is a fault that leads to a database failure, and the database failure is a fault that leads to a service failure.

52 Classification of Failure Types
Our view of a distributed system is a process-level view, so we begin with the types of failures that are visible at the process level. The major classes of failures are:
Crash failure
Omission failure
Transient failure
Byzantine failure
Software failure
Temporal failure
Security failure

53 Classification of Failure Types
Crash (Soft) Failure: a process undergoes a crash failure when it permanently ceases to execute its actions; this is an irreversible change. In an asynchronous model, crash failures cannot be detected with total certainty, since there is no lower bound on the speed at which a process executes its actions. In a synchronous system, where processor speeds and channel delays are bounded, crash failures can be detected using timeouts.

54 Classification of Failure Types
Omission Failure: if the receiver does not receive one or more of the messages sent by the transmitter, an omission failure occurs. In wireless networks this can happen when a collision occurs at the MAC layer or the receiving node moves out of range.
Transient Failure: a transient failure can disturb the state of processes in an arbitrary way. The agent inducing the problem may be only momentarily active, but it can have a lasting effect on the global state, e.g. a power surge, a mechanical shock, or a lightning strike.

55 Classification of Failure Types
Byzantine Failure: Byzantine failures represent the weakest of all failure models, allowing every conceivable form of erroneous behaviour. The term alludes to uncertainty and was first proposed by Pease et al. Assume that process i forwards the value x of a local variable to each of its neighbours. The following inconsistencies may occur:
two distinct neighbours j and k receive values x and y, where x ≠ y;
one or more neighbours do not receive any data from i;
every neighbour receives a value z where z ≠ x.

56 Classification of Failure Types
Some possible causes of Byzantine failures are:
total or partial breakdown of a link joining i with one of its neighbours;
software problems in process i;
hardware synchronization problems: assume every neighbour is connected to the same bus and reads the same copy sent out by i, but since the clocks are not perfectly synchronized they may not read the value of x at the same time; if the value of x varies with time, different neighbours of i may read different values of x from process i.

57 Classification of Failure Types
Temporal Failure: real-time systems require actions to be completed within a specific time frame; when this time limit is not met, a temporal failure occurs.
Security Failure: viruses and other malicious software may lead to unexpected behaviour that manifests itself as a system fault.

58 History of Fault Tolerant Systems
The first known fault-tolerant computer was SAPO, built in 1951 in Czechoslovakia by Antonin Svoboda.
Most of the development in so-called LLNM (Long Life, No Maintenance) computing was done by NASA during the 1960s, in preparation for Project Apollo and other research. NASA's first machine went into a space observatory, and their second attempt, the JSTAR computer, was used in Voyager. This computer had a backup of memory arrays to use memory recovery methods and was thus called the JPL Self-Testing-And-Repairing computer; it could detect its own errors and fix them or bring up redundant modules as needed.
Smart Sensor Network in the Ageless Space Vehicle Project [Topic05-ref02], Don Price et al.
Tandem Computer ( ), later absorbed by HP, manufactured fault-tolerant minicomputers for critical applications.

59 Fault Tolerant Systems
We designate a system that does not tolerate failures as a fault-intolerant system; in such systems, the occurrence of a fault violates liveness and safety properties.
There are four major types of fault tolerance: masking tolerance, non-masking tolerance, fail-safe tolerance, and graceful degradation.
Note: safety properties specify that "something bad never happens"; doing nothing trivially satisfies a safety property, as it never leads to a "bad" situation. Safety properties are complemented by liveness properties, which assert that "something good" will eventually happen [Lamport].

60 Masking Tolerance (Redundancy)
Let P be the set of legal configurations of the fault-tolerant system. Given a set of fault actions F, the fault span Q is the largest set of configurations that the system can reach in the presence of the faults in F. In masking tolerance, when a fault from F occurs it is masked and its occurrence has no impact on the application, that is, P = Q.
Masking tolerance is important in many safety-critical applications where failure can endanger human life or cause massive loss of property: an aircraft must be able to fly even if one of its engines malfunctions. Masking tolerance preserves both the safety and the liveness properties of the original system.

61 Non-Masking Tolerance (Standby Redundancy)
In non-masking fault tolerance, faults may temporarily affect the system and violate the safety property, that is, P ⊂ Q. However, liveness is not compromised, and eventually normal behaviour is restored. For example, while you are watching a movie the server crashes, but the system automatically restores the service by switching to a standby proxy server.
Stabilization and checkpointing represent two opposing approaches to non-masking tolerance. Checkpointing relies on history: recovery is achieved by retrieving the lost computation. Stabilization is history-insensitive and does not care about lost computation, as long as eventual recovery is guaranteed.

62 Fail-Safe Tolerance
Certain faulty configurations do not affect the application in an adverse way and are therefore considered harmless. A fail-safe system relaxes the tolerance requirement by avoiding only those faulty configurations that would have catastrophic consequences (notwithstanding the failure). For example, at a four-way traffic crossing, if the lights are green in both directions then a collision is possible; however, if the lights are red in both directions, traffic will at worst stall, but there is no catastrophic side effect.

63 Graceful Degradation
Some systems neither mask nor fully recover from the effect of failures, but exhibit a degraded behaviour that falls short of normal behaviour yet is still considered acceptable. The notion of acceptability is highly subjective and depends entirely on the user running the application. Some examples:
While routing a message between two points in a network, a program normally computes the shortest path; in the presence of a failure, if the program returns another path that is not the shortest, this may still be acceptable.
An operating system may switch to a safe mode in which users cannot create or modify files but can still read the files that already exist.

64 Summary
We looked at failure because:
Distributed systems are inherently more prone to failure.
Parallelism by definition means many things happening at once, and thus more things that can go wrong.
We may indeed use a distributed system to provide redundancy to help deal with failure.
Dealing with failure in distributed and parallel systems can require different techniques compared with simple systems.

