
1 Studying Different Problems from Distributed Computing Several of these problems are motivated by trying to apply solutions used in 'centralized computing' to distributed computing

2 Mutual Exclusion
Problem statement: Given a set of n processes and a shared resource, it is required that:
– Mutual exclusion: at any time, at most one process is accessing the resource
– Liveness: if a process requests the resource, it can eventually access the resource

3 Solution to mutual exclusion
How could we do this if all processes shared a common clock?
– Each process timestamps its request
– The process with the lowest timestamp is allowed to access the critical section
What are the properties of clocks that enable us to solve this problem?

4 Problem Logical Clocks could assign the same value to different events –Need to order these events

5 Logical Timestamps
The time associated with an event is a pair: the clock value and the process where the event occurred. For event a at process j, the timestamp ts.a is
– ts.a = (cl.a, j), where cl.a is the clock value assigned by the logical clock
Lexicographical comparison: (x1, x2) < (y1, y2) iff x1 < y1 ∨ ( (x1 = y1) ∧ (x2 < y2) )
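A minimal sketch of such a timestamp in Python, under the assumption that a timestamp is the pair (clock value, process id) as described above; the class and field names are illustrative, not from the slides.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Timestamp:
    clock: int   # logical clock value cl.a
    pid: int     # id of the process where the event occurred

    def __lt__(self, other: "Timestamp") -> bool:
        # Lexicographic comparison: (x1, x2) < (y1, y2) iff
        # x1 < y1 or (x1 = y1 and x2 < y2)
        return (self.clock, self.pid) < (other.clock, other.pid)

# Two events with equal clock values are ordered by process id,
# so any two distinct events are totally ordered.
assert Timestamp(3, 1) < Timestamp(3, 2) < Timestamp(4, 0)
```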

6 Observation about Logical Clocks
For any two distinct events a and b, either ts.a < ts.b or ts.b < ts.a. The event timestamps form a total order.

7 Assumption Communication is FIFO

8 Solution to mutual exclusion, based on logical clocks
Messages are timestamped with logical clocks; each process maintains a queue of pending requests
– When process j wants to access the resource, it adds its timestamp to its queue and sends a request message containing its timestamp to all other processes
– When process k receives a request message from j, it adds j's request to its queue and sends a reply message to j

9 Solution to mutual exclusion, based on logical clocks (continued)
Process j accesses the resource (enters the critical section) iff
– it has received a reply from every other process, and
– its queue does not contain a timestamp that is smaller than its own request
After a process is done accessing its critical section, it sends a release message to all processes and removes its own request from the pending queue
When a process k receives the release message from j, it removes the entry of j from its pending queue

10 Solution to mutual exclusion, based on logical clocks (continued)
This is called Lamport's mutual exclusion algorithm
What is the number of messages sent for every access to the critical section?
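A minimal single-process sketch of the algorithm on slides 8-9, written in Python. The transport primitive send(k, msg), the peer list, and the method names are assumptions for illustration; the loop that dispatches incoming messages to the on_* handlers is omitted. Counting messages in the sketch (one request, one reply, and one release exchanged with each of the other n-1 processes) suggests 3(n-1) messages per critical-section entry.

```python
# Hypothetical sketch of one process's state in Lamport's algorithm.
# `send(k, msg)` is an assumed transport primitive; delivery is FIFO.

class LamportMutex:
    def __init__(self, pid, peers, send):
        self.pid, self.peers, self.send = pid, peers, send
        self.clock = 0
        self.queue = {}          # pid -> timestamp of that pid's pending request
        self.replies = set()     # peers that replied to our current request

    def tick(self, other=0):
        self.clock = max(self.clock, other) + 1
        return self.clock

    def request(self):
        ts = (self.tick(), self.pid)
        self.queue[self.pid] = ts
        self.replies.clear()
        for k in self.peers:
            self.send(k, ("request", ts, self.pid))

    def on_request(self, ts, j):
        self.tick(ts[0])
        self.queue[j] = ts                      # record j's pending request
        self.send(j, ("reply", self.tick(), self.pid))

    def on_reply(self, clk, j):
        self.tick(clk)
        self.replies.add(j)

    def can_enter(self):
        my = self.queue.get(self.pid)
        return (my is not None
                and self.replies >= set(self.peers)       # reply from everyone
                and all(my <= ts for ts in self.queue.values()))  # smallest timestamp

    def release(self):
        del self.queue[self.pid]
        for k in self.peers:
            self.send(k, ("release", self.tick(), self.pid))

    def on_release(self, clk, j):
        self.tick(clk)
        self.queue.pop(j, None)                 # drop j's satisfied request
```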

11 Correctness Argument
Consider each of these 3 situations:
– req(j) → req(k)
– req(k) → req(j)
– req(j) || req(k)
Show that in each of these cases, the process with the smaller timestamp enters the CS first.

12 Suppose j and k request the CS simultaneously
– Assume that j's request is satisfied first
– After j releases the CS, it requests again immediately
Show that j's second request cannot be satisfied before k's first request

13 Optimizations
Should a process wait for a reply message from every other process?
Should a process send a reply message immediately?
Answer these questions to obtain a protocol where only 2(n-1) messages are used for each access to the critical section

14 Optimizations
Should a process wait for a reply message from every other process?
– Not necessarily: consider the case where the timestamp of j's request is larger than k's timestamp
How can k learn that j's request timestamp is larger?

15 Optimizations
Should a process send a reply message immediately?
– k receives a request from j:
  – k is requesting
    – timestamp of j is larger
      » no need to send the reply right away, since j has to wait until k accesses its critical section first
      » it is fine to delay the reply until k finishes its critical section
    – timestamp of k is larger (send the reply immediately)
  – k is not requesting (send the reply immediately)

16 Optimization
Release message
– Should we send it to all? No; send it only to those processes whose requests are pending in your queue

17 Optimizations
Make sure that either a reply message is sent or a release message is sent, but not both
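Putting the answers on slides 13-17 together gives, essentially, the Ricart-Agrawala algorithm. Below is a minimal Python sketch of one process under the same assumptions as the earlier Lamport sketch (an assumed send primitive, dispatch loop omitted): a reply is deferred while the receiver has an older pending request, and the deferred reply sent on exit plays the role of the release message, giving 2(n-1) messages per entry.

```python
# Sketch of the optimized protocol: deferred replies double as releases.

class OptimizedMutex:
    def __init__(self, pid, peers, send):
        self.pid, self.peers, self.send = pid, peers, send
        self.clock = 0
        self.my_ts = None        # timestamp of our outstanding request, if any
        self.replies = set()
        self.deferred = []       # peers whose reply we postponed

    def request(self):
        self.clock += 1
        self.my_ts = (self.clock, self.pid)
        self.replies.clear()
        for k in self.peers:
            self.send(k, ("request", self.my_ts, self.pid))

    def on_request(self, ts, j):
        self.clock = max(self.clock, ts[0]) + 1
        if self.my_ts is not None and self.my_ts < ts:
            # Our pending request is older: j must wait for us anyway,
            # so delay the reply until we leave the CS.
            self.deferred.append(j)
        else:
            self.send(j, ("reply", self.pid))

    def on_reply(self, j):
        self.replies.add(j)

    def can_enter(self):
        return self.my_ts is not None and self.replies >= set(self.peers)

    def release(self):
        self.my_ts = None
        for j in self.deferred:      # deferred replies act as the release
            self.send(j, ("reply", self.pid))
        self.deferred.clear()
```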

18 Related Problem Atomic Broadcast –Assume all messages are broadcast in nature –If m1 is delivered before m2 at process j then m1 is delivered before m2 at process k

19 Relation between Atomic Broadcast and Mutual Exclusion
Atomic broadcast -> Mutual exclusion
– Every process sends its request to all
– You can access the resource when you receive your own message and you know that the previous requests have been met
Mutual exclusion -> Atomic broadcast
– When you want to broadcast: request the critical section (mutual exclusion)
– Upon access to the CS: send the message to be broadcast and wait for acks
– Release the critical section
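A sketch of the first reduction (mutual exclusion from atomic broadcast), assuming a hypothetical abcast(msg) primitive that delivers every message to every process, including the sender, in the same total order. Because all processes see the requests in the same order, entering the CS when your own request is at the head of the delivered queue is safe.

```python
from collections import deque

# Hypothetical reduction of mutual exclusion to atomic broadcast.
# `abcast(msg)` is an assumed totally-ordered broadcast primitive.

class MutexOverAtomicBroadcast:
    def __init__(self, pid, abcast):
        self.pid, self.abcast = pid, abcast
        self.pending = deque()      # requests in delivery order (same at all processes)

    def request(self):
        self.abcast(("request", self.pid))

    def release(self):
        self.abcast(("release", self.pid))

    def on_deliver(self, msg):
        kind, j = msg
        if kind == "request":
            self.pending.append(j)
        else:
            # Only the process at the head of the queue releases,
            # so the head entry is the one being released.
            self.pending.popleft()

    def can_enter(self):
        # Safe because every process sees the same queue prefix.
        return bool(self.pending) and self.pending[0] == self.pid
```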

20 What other Clocks Can We Use? Local Counters? Vector Clocks?

21 Classification of Mutual Exclusion Algorithms
Quorum based
– Each node j is associated with a quorum Q_j
– When j wants to enter the critical section, it asks for permission from all nodes in this quorum
– What property should be met by the quorums of different processes?
Token based
– A token is circulated among the nodes; the node that has the token can access the critical section
– We will look at these later

22 Classification of Mutual Exclusion Algorithms
Which category would Lamport's protocol fit in? What is the quorum of a process in this algorithm? What are the possible choices of quorums?

23 Taking Quorum Based Algorithms to the Extreme
Centralized mutual exclusion
– A single process, the 'coordinator', is responsible for ensuring mutual exclusion
– Each process sends a request to the coordinator whenever it wishes to access the resource
– The coordinator permits only one process to access the resource at a time
– After a process accesses the resource, it sends a reply to the coordinator
The quorum of every process is {c}, where c is the coordinator
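A minimal Python sketch of the coordinator in this centralized scheme; send and the message names are assumptions for illustration.

```python
from collections import deque

# Sketch of the centralized coordinator c: the single-member quorum {c}
# of every process.  `send(j, msg)` is an assumed transport primitive.

class Coordinator:
    def __init__(self, send):
        self.send = send
        self.waiting = deque()      # queued requesters
        self.busy = False           # is the resource currently granted?

    def on_request(self, j):
        if self.busy:
            self.waiting.append(j)  # queue until the resource is free
        else:
            self.busy = True
            self.send(j, "grant")

    def on_done(self, j):           # j's "reply" after using the resource
        if self.waiting:
            self.send(self.waiting.popleft(), "grant")
        else:
            self.busy = False
```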

24 Centralized mutual exclusion Problem : What if the coordinator fails? Solution : Elect a new one –Related problem: leader election

25 Other Criteria for Mutual Exclusion
Let T be the transmission delay of a message
Let E be the time for critical-section execution
What is the minimum delay between one process exiting the critical section and another process entering it (the synchronization delay)?
What is the maximum throughput, i.e., the number of processes that can enter the CS in a given time?

26 Criteria for Mutual Exclusion
Min delay for any protocol? Max throughput for any protocol?
Lamport
– Delay? T
– Throughput? 1/(E+T)
Centralized
– Delay? 2T
– Throughput? 1/(E+2T)
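The throughput figures follow from a simple accounting, sketched below: one critical-section "cycle" consists of the execution time E plus the synchronization delay D before the next process can enter, so at most one entry completes per E + D time units.

```latex
\[
  \text{throughput} \;=\; \frac{1}{E + D},
  \qquad\text{where } D \text{ is the synchronization delay.}
\]
\[
  \text{Lamport: } D = T \;\Rightarrow\; \frac{1}{E+T},
  \qquad
  \text{Centralized: } D = 2T \;\Rightarrow\; \frac{1}{E+2T}.
\]
```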

27 Quorum Based Algorithms
Each process j requests permission from its quorum Q_j
– Requirement: ∀ j, k :: Q_j ∩ Q_k ≠ ∅
R_j = set of processes that request permission from j
– R_j need not be the same as Q_j
– It is desirable that the size of R_j is the same/similar for all processes

28 For Centralized Mutual Exclusion
|Q_j| = 1 for every j
|R_j| = 0 for j ≠ c, and |R_c| = n
– Shows the unbalanced nature of centralized mutual exclusion
Goal: reduce |Q_j| while keeping |R_j| balanced for all nodes

29 Quorum Based Algorithms
Solution for |Q_j| = O(√N)
– Grid based

30 Maekawa's algorithm
Maekawa showed that the minimum quorum size is √N
Example quorums:
– for 3 processes: Q_0 = {P_0, P_1}, Q_1 = {P_1, P_2}, Q_2 = {P_0, P_2}
– for 7 processes: Q_0 = {P_0, P_1, P_2}, Q_1 = {P_1, P_3, P_5}, Q_2 = {P_2, P_4, P_5}, Q_3 = {P_0, P_3, P_4}, Q_4 = {P_1, P_4, P_6}, Q_5 = {P_0, P_5, P_6}, Q_6 = {P_2, P_3, P_6}
For n² − n + 1 processes, quorums of size n can be constructed
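A sketch of the grid-based construction from slide 29, assuming for simplicity that N is a perfect square; the function name and layout are illustrative. Each quorum has size 2√N − 1, and any two quorums intersect because one process's row crosses the other's column.

```python
import math

# Hypothetical grid-based quorum construction of size O(sqrt(N)),
# assuming N is a perfect square: place processes 0..N-1 row by row in a
# sqrt(N) x sqrt(N) grid; the quorum of a process is its row plus its column.

def grid_quorum(pid: int, n: int) -> set[int]:
    side = math.isqrt(n)
    assert side * side == n, "sketch assumes N is a perfect square"
    row, col = divmod(pid, side)
    row_members = {row * side + c for c in range(side)}
    col_members = {r * side + col for r in range(side)}
    return row_members | col_members        # size 2*sqrt(N) - 1

# Example with N = 9: every pair of quorums shares at least one process.
quorums = [grid_quorum(p, 9) for p in range(9)]
assert all(q1 & q2 for q1 in quorums for q2 in quorums)
```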

31 Basic operation
Requesting the CS
– a process requests the CS by sending a request message to the processes in its quorum
– a process has just one permission to give: if a process receives a request, it sends back a reply unless it has granted permission to another process, in which case the request is queued
Entering the CS
– a process may enter the CS when it receives replies from all processes in its quorum
Releasing the CS
– after exiting the CS, the process sends a release to every process in its quorum
– when a process gets a release, it sends a reply to another request in its queue
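A minimal Python sketch of the voter role described above (one permission to give, a queue of waiting requests); the send primitive and message names are assumptions. The requester side simply counts replies from its quorum Q_j and enters the CS once all have arrived, so it is omitted.

```python
from collections import deque

# Sketch of the voter role in the basic (deadlock-prone) scheme.

class Voter:
    def __init__(self, pid, send):
        self.pid, self.send = pid, send
        self.granted_to = None      # who currently holds our permission
        self.waiting = deque()      # queued requests

    def on_request(self, j):
        if self.granted_to is None:
            self.granted_to = j
            self.send(j, "reply")
        else:
            self.waiting.append(j)  # permission already out: queue the request

    def on_release(self, j):
        assert j == self.granted_to
        self.granted_to = None
        if self.waiting:
            self.granted_to = self.waiting.popleft()
            self.send(self.granted_to, "reply")
```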

32 Possible Deadlock
Since processes do not communicate with all other processes in the system, CS requests may be granted out of timestamp order
Example:
– suppose there are processes P_i, P_j, and P_k such that P_j ∈ Q_i and P_j ∈ Q_k, but P_k ∉ Q_i and P_i ∉ Q_k
– P_i and P_k request the CS such that ts_k < ts_i
– if the request from P_i reaches P_j first, then P_j sends a reply to P_i, and P_k has to wait for P_i out of timestamp order
– a wait-for cycle (hence a deadlock) may be formed

33 Maekawa's algorithm, deadlock avoidance
To avoid deadlock, a process recalls permission if it was granted out of timestamp order
– if P_j receives a request from P_i with a higher timestamp than the request it has granted permission to, P_j sends failed to P_i
– if P_j receives a request from P_i with a lower timestamp than the request it has granted permission to (deadlock possibility), P_j sends inquire to the process to which it had given permission before
– when that process receives inquire, it replies with yield if it has not succeeded in getting permissions from the other processes in its quorum, i.e., it got failed from some of them
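A sketch of one possible arbitration rule on the voter's side once requests carry Lamport timestamps (ts, pid); the message names follow the slide (failed, inquire, yield), but the exact rule and the requester-side bookkeeping are assumptions for illustration, not a definitive rendering of Maekawa's protocol.

```python
import heapq

# Sketch of a voter that arbitrates timestamped requests.

class ArbitratingVoter:
    def __init__(self, send):
        self.send = send
        self.granted = None          # (ts, pid) currently holding our permission
        self.waiting = []            # min-heap of queued (ts, pid) requests

    def on_request(self, ts, j):
        req = (ts, j)
        if self.granted is None:
            self.granted = req
            self.send(j, "reply")
        elif req > self.granted:
            heapq.heappush(self.waiting, req)
            self.send(j, "failed")   # newer request: tell j it must wait
        else:
            # Older request arrived late (deadlock possibility): ask the
            # current holder of our permission to give it back.
            heapq.heappush(self.waiting, req)
            self.send(self.granted[1], "inquire")

    def on_yield(self, ts, j):       # the holder gave our permission back
        heapq.heappush(self.waiting, (ts, j))
        self.granted = heapq.heappop(self.waiting)
        self.send(self.granted[1], "reply")

    def on_release(self, j):
        self.granted = None
        if self.waiting:
            self.granted = heapq.heappop(self.waiting)
            self.send(self.granted[1], "reply")
```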

34 Maekawa Algorithm
Number of messages?
Min delay: 2T
Max throughput: 1/(E+2T)

35 Faults in Maekawa's algorithm
What will happen if faults occur in Maekawa's algorithm? What can a process do if some process in its quorum has failed? When will mutual exclusion be impossible in Maekawa's algorithm?

36 Tree Based Mutual Exclusion
Suppose processes are organized in a tree
– What are possible quorums? A path from the root to a leaf
– The root is part of all such quorums
– Can we construct more quorums?

37 Tree Based Quorum Based Mutual Exclusion
Number of messages? Min delay? Max throughput?

38 Token-based algorithms LeLann’s token ring Suzuki-Kasami’s broadcast Raymond’s tree

39 Token-ring algorithm (Le Lann)
Processes are arranged in a logical ring; at the start, process 0 is given a token
– The token circulates around the ring in a fixed direction via point-to-point messages
– When a process acquires the token, it has the right to enter the critical section; after exiting the CS, it passes the token on
Evaluation:
– N−1 messages required to enter the CS
– Not difficult to add new processes to the ring
– With a unidirectional ring, mutual exclusion is fair and no process starves
– Difficult to detect when the token is lost
– Doesn't guarantee "happened-before" order of entry into the critical section
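A minimal Python sketch of one ring member; send_to_successor is an assumed primitive that forwards a message to the next process on the ring, and the delivery loop is omitted.

```python
# Sketch of one process in the token ring.

class RingMember:
    def __init__(self, pid, send_to_successor):
        self.pid = pid
        self.forward = send_to_successor
        self.has_token = (pid == 0)   # process 0 starts with the token;
                                      # it calls on_token() once to start circulation
        self.wants_cs = False

    def on_token(self):
        self.has_token = True
        if self.wants_cs:
            self.enter_cs()           # use the token ...
            self.wants_cs = False
        self.has_token = False        # ... then pass it on
        self.forward("token")

    def enter_cs(self):
        pass                          # critical-section body goes here
```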

40 Token-ring algorithm
Number of messages? Min delay? Max throughput?

41 Suzuki-Kasami's broadcast algorithm
Overview:
– If a process wants to enter the critical section and it does not have the token, it broadcasts a request message to all other processes in the system
– The process that has the token will then send it to the requesting process; however, if it is in the CS, it gets to finish before sending the token
– A process holding the token can continuously enter the critical section until the token is requested

42 Suzuki-Kasami's broadcast algorithm
– Request vector at process i: RN_i[k] contains the largest sequence number received from process k in a request message
– The token consists of a vector and a queue:
  – LN[k] contains the sequence number of the latest executed request from process k
  – Q is the queue of requesting processes

43 Suzuki-Kasami's broadcast algorithm
Requesting the critical section (CS):
– When a process i wants to enter the CS and it does not have the token, it:
  – increments its sequence number RN_i[i]
  – sends a request message containing the new sequence number to all processes in the system
– When a process k receives the request(i, sn) message, it:
  – sets RN_k[i] to max(RN_k[i], sn); if sn < RN_k[i], the message is outdated
  – if process k has the token and is not in the CS (i.e., is not using the token), and if RN_k[i] == LN[i] + 1 (indicating an outstanding request), it sends the token to process i

44 Suzuki-Kasami's broadcast algorithm
Releasing the CS:
– When a process i leaves the CS, it:
  – sets LN[i] of the token equal to RN_i[i], indicating that its request RN_i[i] has been executed
  – for every process k whose ID is not in the token queue Q, appends k's ID to Q if RN_i[k] == LN[k] + 1, indicating that process k has an outstanding request
  – if the token queue Q is nonempty after this update, deletes the process ID at the head of Q and sends the token to that process (giving priority to others' requests); otherwise, it keeps the token
Evaluation:
– 0 or N messages required to enter the CS: no messages if the process holds the token; otherwise (N−1) requests and 1 reply
– synchronization delay: T

45 Executing the CS: –A process enters the CS when it acquires the token
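A compact Python sketch of one Suzuki-Kasami process, following the data structures and rules on slides 42-45; send and broadcast are assumed transport primitives, and the dispatch loop is omitted.

```python
from collections import deque

# Sketch of Suzuki-Kasami for processes 0..n-1.

class SuzukiKasami:
    def __init__(self, pid, n, send, broadcast):
        self.pid, self.n = pid, n
        self.send, self.broadcast = send, broadcast
        self.RN = [0] * n                       # highest request seq seen per process
        self.token = None                       # (LN list, queue Q) if held
        if pid == 0:
            self.token = ([0] * n, deque())     # process 0 starts with the token
        self.in_cs = False

    def request(self):
        if self.token is not None:
            self.in_cs = True                   # already hold the token: enter
        else:
            self.RN[self.pid] += 1
            self.broadcast(("request", self.pid, self.RN[self.pid]))

    def on_request(self, i, sn):
        self.RN[i] = max(self.RN[i], sn)        # outdated messages change nothing
        if self.token is not None and not self.in_cs:
            LN, Q = self.token
            if self.RN[i] == LN[i] + 1:         # i has an outstanding request
                tok, self.token = self.token, None
                self.send(i, ("token", tok))

    def on_token(self, tok):
        self.token = tok
        self.in_cs = True                       # enter the CS on acquiring the token

    def release(self):
        self.in_cs = False
        LN, Q = self.token
        LN[self.pid] = self.RN[self.pid]        # our latest request is now executed
        for k in range(self.n):
            if k not in Q and self.RN[k] == LN[k] + 1:
                Q.append(k)                     # k has an outstanding request
        if Q:
            nxt = Q.popleft()
            tok, self.token = self.token, None
            self.send(nxt, ("token", tok))
```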

46 Raymond's tree algorithm
Overview:
– Processors are arranged as a logical tree; edges are directed toward the processor that holds the token (called the "holder", initially the root of the tree)
– Each processor has:
  – a variable holder that points to its neighbor on the directed path toward the holder of the token
  – a FIFO queue called request_q that holds its own requests for the token, as well as any requests from neighbors that have requested but haven't received the token
– If request_q is non-empty, the node has already sent the request at the head of its queue toward the holder
(Figure: an example tree with nodes T1–T7.)

47 Raymond's tree algorithm
Requesting the critical section (CS):
– When a process wants to enter the CS but does not have the token, it:
  – adds its request to its request_q
  – if its request_q was empty before the addition, sends a request message along the directed path toward the holder; if the request_q was not empty, it has already made a request and has to wait
– When a process on the path between the requesting process and the holder receives the request message, it adds the request to its request_q and, if its request_q was empty before, forwards a request toward the holder
– When the holder receives a request message, it:
  – sends the token (in a message) toward the requesting process
  – sets its holder variable to point toward that process (toward the new holder)

48 Raymond's tree algorithm
Requesting the CS (cont.):
– When a process on the path between the holder and the requesting process receives the token, it:
  – deletes the top entry (the most current requesting process) from its request_q
  – sends the token toward the process referenced by the deleted entry, and sets its holder variable to point toward that process
  – if its request_q is not empty after this deletion, it sends a request message along the directed path toward the new holder (pointed to by the updated holder variable)
Executing the CS:
– A process can enter the CS when it receives the token and its own entry is at the top of its request_q; it deletes the top entry from the request_q and enters the CS

49 Raymond's tree algorithm
Releasing the CS:
– When a process leaves the CS, if its request_q is not empty (meaning a process has requested the token from it), it:
  – deletes the top entry from its request_q
  – sends the token toward the process referenced by the deleted entry, and sets its holder variable to point toward that process
  – if its request_q is not empty after this deletion (meaning more than one process has requested the token from it), it sends a request message along the directed path toward the new holder (pointed to by the updated holder variable)
Greedy variant: a process may execute the CS if it has the token, even if its own entry is not at the top of the queue. How does this variant affect Raymond's algorithm?
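A compact Python sketch of Raymond's algorithm from one process's point of view, following slides 46-49; the send primitive, the initial holder value, and the dispatch loop are assumptions of the sketch.

```python
from collections import deque

# Sketch of one process in Raymond's tree algorithm.
# `holder` initially points along the tree toward the process with the token.

class Raymond:
    def __init__(self, pid, holder, send):
        self.pid, self.holder, self.send = pid, holder, send
        self.request_q = deque()
        self.in_cs = False

    def _forward_request(self):
        if self.holder != self.pid:
            self.send(self.holder, ("request", self.pid))

    def request(self):
        was_empty = not self.request_q
        self.request_q.append(self.pid)
        if self.holder == self.pid and not self.in_cs:
            self._grant_next()
        elif was_empty:
            self._forward_request()     # only the head request is forwarded

    def on_request(self, j):
        was_empty = not self.request_q
        self.request_q.append(j)
        if self.holder == self.pid and not self.in_cs:
            self._grant_next()
        elif was_empty:
            self._forward_request()

    def on_token(self):
        self.holder = self.pid
        self._grant_next()

    def _grant_next(self):
        nxt = self.request_q.popleft()
        if nxt == self.pid:
            self.in_cs = True            # our own entry was at the head: enter CS
        else:
            self.holder = nxt            # point toward the new holder
            self.send(nxt, ("token",))
            if self.request_q:           # still have queued requests: ask again
                self._forward_request()

    def release(self):
        self.in_cs = False
        if self.request_q:
            self._grant_next()
```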

50 Fault-tolerant Mutual Exclusion Based on Raymond’s algorithm

51 (Abstract) Actions of Raymond Mutual Exclusion
– Request.(h.j) := Request.(h.j) ∪ {j}
– h.j = k ∧ h.k = k ∧ j ∈ Request.k → h.k := j, h.j := j, Request.k := Request.k − {j}

52 Actions
h.j = j → access the critical section

53 Slight modification
h.j = k ∧ h.k = k ∧ j ∈ Request.k ∧ (P.j = k ∨ P.k = j) → h.k := j, h.j := j, Request.k := Request.k − {j}

54 Fault-Tolerant Mutual Exclusion What happens if the tree is broken due to faults? –A tree correction algorithm could be used to fix the tree –Example: we considered one such algorithm before

55 However, Even if the tree is fixed, the holder relation may not be accurate

56 Invariant for holder relation
What are the conditions that are always true about the holder relation?

57 Invariant
h.j ∈ {j, P.j} ∪ ch.j
P.j ≠ j ⇒ (h.j = P.j ∨ h.(P.j) = j)
P.j ≠ j ⇒ ¬(h.j = P.j ∧ h.(P.j) = j)
Plus all the predicates in the invariant of the tree program

58 Recovery from faults
h.j ∉ {j, P.j} ∪ ch.j → h.j := P.j

59 Recovery from faults
P.j ≠ j ∧ ¬(h.j = P.j ∨ h.(P.j) = j) → h.j := P.j

60 Recovery from Faults
P.j ≠ j ∧ (h.j = P.j ∧ h.(P.j) = j) → h.(P.j) := P.(P.j)
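The three recovery actions on slides 58-60 can be read as local checks on the holder, parent, and children relations. The sketch below transcribes them directly, assuming dictionaries h, P, and ch that map each node to its holder pointer, its parent, and its set of children (these representations are assumptions, not from the slides).

```python
# Sketch of the recovery actions of slides 58-60 as local checks on node j.
# h[j]: holder pointer of j, P[j]: parent of j, ch[j]: set of children of j.

def correct_holder(j, h, P, ch):
    if h[j] not in {j, P[j]} | ch[j]:
        h[j] = P[j]                                   # action of slide 58
    elif P[j] != j and not (h[j] == P[j] or h[P[j]] == j):
        h[j] = P[j]                                   # action of slide 59
    elif P[j] != j and h[j] == P[j] and h[P[j]] == j:
        h[P[j]] = P[P[j]]                             # action of slide 60
```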

61 Notion of Superposition

62 Properties of This Mutual Exclusion Algorithm
Always unique token? Eventually unique token?
Level of tolerance?
– Nonmasking: ensure that eventually the program recovers to states from where there is exactly one token that is circulated
– Some changes are necessary for masking fault-tolerance, where multiple tokens do not exist during recovery
– We will look at such a solution a little later

