
1 Distributed Mutex EE324 Lecture 11

2 Vector Clocks
Vector clocks overcome a shortcoming of Lamport logical clocks: L(e) < L(e') does not imply e happened before e'.
Goal: an ordering that matches causality, i.e. V(e) < V(e') if and only if e → e'.
Method: label each event e with a vector V(e) = [c1, c2, ..., cn], where ci = the number of events in process i that causally precede e.

3 Vector Clock Algorithm
Initially, all vectors are [0, 0, ..., 0].
For an event on process i, increment own entry ci.
Label each message sent with the local vector.
When process j receives a message with vector [d1, d2, ..., dn]:
Set each local entry ck to max(ck, dk).
Increment own entry cj.
(A code sketch of these rules follows below.)
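A minimal sketch of these update rules in Python (the class and method names are illustrative, not from the lecture):

```python
class VectorClock:
    """Vector clock for one of n processes (a minimal, illustrative sketch)."""

    def __init__(self, n, pid):
        self.pid = pid            # this process's index i
        self.v = [0] * n          # initially all entries are 0

    def local_event(self):
        self.v[self.pid] += 1     # increment own entry ci on each event

    def send(self):
        self.local_event()        # sending counts as an event
        return list(self.v)       # piggyback a copy of the local vector

    def receive(self, d):
        # set each local entry ck to max(ck, dk) ...
        self.v = [max(ck, dk) for ck, dk in zip(self.v, d)]
        self.v[self.pid] += 1     # ... then increment own entry cj
```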

4 Vector Clocks
Vector clocks overcome a shortcoming of Lamport logical clocks: L(e) < L(e') does not imply e happened before e'.
Vector timestamps are used to timestamp local events.
They are applied in schemes for replication of data.

5 Vector Clocks
At p1: a occurs at (1,0,0); b occurs at (2,0,0); piggyback (2,0,0) on m1.
At p2, on receipt of m1: take max((0,0,0), (2,0,0)) = (2,0,0) and add 1 to own element, giving (2,1,0).
The meaning of =, <=, max, etc. for vector timestamps: compare elements pairwise.
Note that e → e' implies V(e) < V(e'); the converse is also true.
But c || e (parallel), because neither V(c) <= V(e) nor V(e) <= V(c).
Vector timestamps are applied in schemes for replication of data, e.g. Gossip (p. 572), Coda (p. 585), and causal multicast.

6 Vector Clocks
Note that e → e' implies V(e) < V(e'). The converse is also true.
Can you see a pair of parallel events?
c || e (parallel), because neither V(c) <= V(e) nor V(e) <= V(c). (See the comparison sketch below.)
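The pairwise comparison can be written directly; a small sketch, using the timestamps c = (2,1,0) and e = (0,0,1) as they appear in the lecture's figure:

```python
def leq(v, w):
    # v <= w iff every entry of v is <= the matching entry of w
    return all(a <= b for a, b in zip(v, w))

def concurrent(v, w):
    # parallel (||): neither vector is <= the other
    return not leq(v, w) and not leq(w, v)

print(concurrent((2, 1, 0), (0, 0, 1)))   # True: c || e
```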

7 Figure 14.6 Lamport timestamps for the events shown in Figure 14.5

8 Figure 14.7 Vector timestamps for the events shown in Figure 14.5

9 Logical clocks, including vector clocks, don't capture everything
Out-of-band communication: if processes coordinate through a channel the clock mechanism never sees (e.g. a phone call), the resulting causality is invisible to the clocks.

10 Where are vector clocks used?
Data replication, e.g. Bayou and Amazon Dynamo (pages 210 and 211).

11 Distributed Mutex (Reading CDK5 15.2)
We learned about mutexes, semaphores, and condition variables within a single system. What do they have in common? They require shared state, which we kept in memory.
Distributed mutex: there is no shared memory. How do we implement it? → Message passing.
Challenges: messages can be dropped and processes can fail.

12 Distributed Mutex (Reading CDK5 15.2)
Entering/leaving a critical section:
Enter() --- block if necessary
ResourceAccesses() --- access the shared resource (inside the CS)
Leave()
Goals:
Safety: at most one process may execute in the CS at one time.
Liveness: requests to enter/exit the CS eventually succeed (no deadlock or starvation).
→ ordering: if one entry request "happened before" another, then entry to the CS must happen in that order.
(An interface sketch follows below.)
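As a sketch, the interface might look like this in Python (the names follow the slide; nothing here is from a real library):

```python
from abc import ABC, abstractmethod

class DistributedMutex(ABC):
    @abstractmethod
    def enter(self):
        """Block until this process may enter the critical section."""

    @abstractmethod
    def leave(self):
        """Announce that this process has left the critical section."""

# Usage pattern from the slide:
#   mutex.enter()           # block if necessary
#   resource_accesses()     # use the shared resource (inside the CS)
#   mutex.leave()
```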

13 Distributed Mutex (Reading CDK5 15.2)
→ ordering (the slide's figure shows Process A, a lock server, and Process B to illustrate this).
Other performance objectives:
Reduce the number of messages.
Minimize synchronization delay.

14 Mutual Exclusion: A Centralized Algorithm
Process 1 asks the coordinator for permission to access a shared resource → permission is granted.
Process 2 then asks permission to access the same resource → the coordinator does not reply.
When process 1 releases the resource, it tells the coordinator, which then replies to process 2.
(A sketch of the coordinator follows below.)
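A sketch of the coordinator's logic under these rules (Python; grant() is a stand-in for the actual network reply, not a real API):

```python
from collections import deque

def grant(pid):
    # placeholder for the actual network send of a GRANT message
    print(f"GRANT -> process {pid}")

class Coordinator:
    def __init__(self):
        self.holder = None         # process currently holding the resource
        self.waiting = deque()     # pending requests, granted in FIFO order

    def on_request(self, pid):
        if self.holder is None:
            self.holder = pid
            grant(pid)             # reply immediately: permission granted
        else:
            self.waiting.append(pid)   # no reply: requester stays blocked

    def on_release(self, pid):
        self.holder = None
        if self.waiting:               # hand the resource to the next waiter
            self.holder = self.waiting.popleft()
            grant(self.holder)
```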

15 Mutual Exclusion: A Centralized Algorithm
Advantages:
Simple; small delay (one RTT) to acquire the mutex.
Only 3 messages required to enter and leave the critical section.
Safety and liveness are met (assuming the central server does not fail).
Disadvantages:
Single point of failure. How do we make the system robust to failure? Elect a new master when one fails; the master must be elected in a consistent fashion (how??).
Central performance bottleneck.
Does not ensure → ordering (example?).

16 Mutual Exclusion: A Centralized Algorithm
Does not ensure → ordering.

17 A Token Ring Algorithm
An unordered group of processes on a network → a logical ring constructed in software.
Use the ring to pass the right to access the resource (the token).

18 A Token Ring Algorithm
Benefits: simple.
Problems:
Failure recovery can be difficult: a single process failure can break the ring. However, a failure can be recovered from by dropping the failed process from the logical ring.
Does not ensure → ordering.
Long synchronization delay: need to wait for up to N-1 messages, for N processes.
(A sketch of one node's loop follows below.)
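A sketch of one node's loop in the ring (receive_token, pass_token, wants_cs, and critical_section are assumed helpers, not part of any real API):

```python
def ring_process(pid, n, receive_token, pass_token, wants_cs, critical_section):
    """One node in a logical ring of n processes; successor is (pid + 1) % n."""
    while True:
        receive_token()                 # block until the token arrives
        if wants_cs():
            critical_section()          # holding the token grants the CS
        pass_token((pid + 1) % n)       # forward the token to the successor
```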

19 Lamport's Shared Priority Queue
Maintain a global priority queue of requests for the critical section, but each process keeps its own copy of the queue.
The ordering inside the queues is enforced by Lamport's clock: the request with the earlier timestamp appears first. Thus, we enforce → ordering.

20 Lamport's Shared Priority Queue
Each process i locally maintains Qi (its own version of the priority queue).
To execute the critical section, you must have replies from all other processes AND your request must be at the front of Qi.
When you have all replies:
All other processes are aware of your request (because the request happens before the response).
You are aware of any earlier requests (assuming messages from the same process are not reordered).

21 Lamport's Shared Priority Queue
To enter the critical section at process i:
Stamp your request with the current time T.
Add the request to Qi.
Broadcast REQUEST(T) to all processes.
Wait for all replies and for T to reach the front of Qi.
To leave:
Pop the head of Qi.
Broadcast RELEASE to all processes.
On receipt of REQUEST(T') from process k:
Add T' to Qi.
If waiting for a REPLY from k for an earlier request T, wait until k replies to you; otherwise REPLY.
On receipt of RELEASE:
Pop the head of Qi.
(A code sketch follows below.)
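A compact sketch of these rules in Python (broadcast and send are placeholder primitives, and the wait-for-earlier-reply rule from the slide is simplified to an immediate REPLY):

```python
import heapq

def broadcast(msg):                    # placeholder: deliver msg to all others
    pass

def send(pid, msg):                    # placeholder: deliver msg to one process
    pass

class LamportMutex:
    def __init__(self, pid, n):
        self.pid, self.n = pid, n
        self.clock = 0                 # Lamport clock
        self.q = []                    # local priority queue Qi of (T, pid)
        self.replies = set()
        self.stamp = None              # our outstanding request, if any

    def request(self):
        self.clock += 1
        self.stamp = (self.clock, self.pid)   # ties broken by process id
        heapq.heappush(self.q, self.stamp)
        self.replies.clear()
        broadcast(("REQUEST", self.stamp))

    def can_enter(self):
        # need replies from all others AND our request at the front of Qi
        return len(self.replies) == self.n - 1 and self.q[0] == self.stamp

    def release(self):
        heapq.heappop(self.q)          # pop our own request off the head
        self.stamp = None
        broadcast(("RELEASE", self.pid))

    def on_request(self, stamp):
        self.clock = max(self.clock, stamp[0]) + 1
        heapq.heappush(self.q, stamp)
        send(stamp[1], ("REPLY", self.pid))   # deferral rule omitted here

    def on_reply(self, pid):
        self.replies.add(pid)

    def on_release(self, pid):
        heapq.heappop(self.q)          # the releaser's request was at the head
```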

22 Shared priority queue
Node1 (time: action):
  40: (start)
  41: Recv <15,3>
  42: Reply to <15,3>
  Q: <15,3>
Node2 (time: action):
  11: (start)
  16: Recv <15,3>
  17: Reply to <15,3>
  Q: <15,3>
Node3 (time: action):
  14: (start)
  15: Request <15,3>
  Q: <15,3>

23 Shared priority queue
Node1 (time: action):
  40: (start)
  41: Recv <15,3>
  42: Reply to <15,3>
  Q: <15,3>
Node2 (time: action):
  11: (start)
  16: Recv <15,3>
  17: Reply to <15,3>
  Q: <15,3>
Node3 (time: action):
  14: (start)
  15: Request <15,3>
  43: Recv reply 1
  44: Recv reply 2
  45: Run critical section
  Q: <15,3>

24 Shared priority queue
Node1 (time: action):
  40: (start)
  41: Recv <15,3>
  42: Reply to <15,3>
  43: Request <43,1>
  Q: <15,3>, <43,1>
Node2 (time: action):
  11: (start)
  16: Recv <15,3>
  17: Reply to <15,3>
  18: Request <18,2>
  Q: <15,3>, <18,2>
Node3 (time: action):
  14: (start)
  15: Request <15,3>
  43: Recv reply 1
  44: Recv reply 2
  45: Run critical section
  46: Recv <43,1>
  47: Reply to 1
  48: Recv <18,2>
  Q: <15,3>, <18,2>, <43,1>   (note the ordering: <43,1> arrived first, yet it is queued last!)

25 Shared priority queue
Node1 (time: action):
  40: (start)
  41: Recv <15,3>
  42: Reply to <15,3>
  43: Request <43,1>
  Q: <15,3>, <43,1>
Node2 (time: action):
  11: (start)
  16: Recv <15,3>
  17: Reply to <15,3>
  18: Request <18,2>
  50: Recv reply from 3
  51: Recv <43,1>
  Q: <15,3>, <18,2>, <43,1>
Node3 (time: action):
  14: (start)
  15: Request <15,3>
  43: Recv reply 1
  44: Recv reply 2
  45: Run critical section
  46: Recv <43,1>
  47: Reply to 1
  48: Recv <18,2>
  49: Reply to 2
  Q: <15,3>, <18,2>, <43,1>

26 Shared priority queue
Node1 (time: action):
  40: (start)
  41: Recv <15,3>
  42: Reply to <15,3>
  43: Request <43,1>
  Recv <18,2>
  Reply to <18,2>
  Q: <15,3>, <18,2>, <43,1>
Node2 (time: action):
  11: (start)
  16: Recv <15,3>
  17: Reply to <15,3>
  18: Request <18,2>
  50: Recv reply from 3
  51: Recv <43,1>
  Recv reply from 1
  Q: <15,3>, <18,2>, <43,1>
Node3 (time: action):
  14: (start)
  15: Request <15,3>
  43: Recv reply 1
  44: Recv reply 2
  45: Run critical section
  46: Recv <43,1>
  47: Reply to 1
  48: Recv <18,2>
  49: Reply to 2
  Q: <15,3>, <18,2>, <43,1>

27 Shared Queue approach
Everyone eventually sees the same ordering, ordered by Lamport's clock.
Advantages: fair; short synchronization delay.
Disadvantages: very unreliable (any process failure halts progress); 3(N-1) messages per entry/exit.

28 Lamport's Shared Priority Queue
Advantages:
Fair.
Short synchronization delay.
Disadvantages:
Very unreliable (any process failure halts progress).
3(N-1) messages per entry/exit.

