1 Distributed Mutex EE324 Lecture 11

2 Vector Clocks
Vector clocks overcome the shortcoming of Lamport logical clocks: L(e) < L(e’) does not imply e happened before e’.
Goal: an ordering that matches causality, i.e. V(e) < V(e’) if and only if e → e’.
Method: label each event e with a vector V(e) = [c1, c2, …, cn], where ci = the number of events in process i that causally precede e.

3 Vector Clock Algorithm
Initially, all vectors are [0, 0, …, 0].
For an event on process i, increment own ci.
Label each message sent with the local vector.
When process j receives a message with vector [d1, d2, …, dn]:
Set each local entry k to max(ck, dk).
Increment the value of cj.
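These update rules can be sketched directly. Below is a minimal Python sketch, assuming n processes indexed 0..n-1; the class and method names are illustrative, not from the lecture.

# Minimal vector clock sketch (hypothetical names, single-process view).
class VectorClock:
    def __init__(self, n, i):
        self.i = i                # index of this process
        self.v = [0] * n          # initially all entries are 0

    def local_event(self):
        self.v[self.i] += 1       # increment own entry for each local event

    def send(self):
        self.local_event()        # sending a message is itself an event
        return list(self.v)       # piggyback a copy of the local vector

    def receive(self, d):
        # each local entry k becomes max(ck, dk), then the receive counts as an event
        self.v = [max(c, dk) for c, dk in zip(self.v, d)]
        self.v[self.i] += 1

For example, replaying the figure's scenario: p1 performs two events and sends m1 carrying (2,0,0); when p2 receives it, its clock becomes (2,1,0).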

4 Vector Clocks
Vector clocks overcome the shortcoming of Lamport logical clocks: L(e) < L(e’) does not imply e happened before e’.
Vector timestamps are used to timestamp local events.
They are applied in schemes for replication of data.

5 Vector Clocks
At p1: a occurs at (1,0,0); b occurs at (2,0,0); piggyback (2,0,0) on m1.
At p2, on receipt of m1: take max((0,0,0), (2,0,0)) = (2,0,0) and add 1 to its own element, giving (2,1,0).
The meaning of =, <=, max, etc. for vector timestamps: compare elements pairwise.
The rules VC1-VC4 are for updating vector clocks.
Vector timestamps are applied in schemes for replication of data, e.g. Gossip (p. 572), Coda (p. 585), and causal multicast.

6 Vector Clocks
Note that e → e’ implies V(e) < V(e’). The converse is also true.
Can you see a pair of parallel events?
c || e (parallel) because neither V(c) <= V(e) nor V(e) <= V(c).
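The pairwise comparison of vector timestamps can be sketched as below; the helper names are illustrative, and the example timestamps for c and e follow the trace above, assuming e is the first event at p3.

def vc_leq(a, b):
    # V(a) <= V(b) iff every entry of a is <= the matching entry of b
    return all(x <= y for x, y in zip(a, b))

def vc_lt(a, b):
    # V(a) < V(b): <= in every entry and strictly smaller in at least one
    return vc_leq(a, b) and a != b

def vc_concurrent(a, b):
    # a || b: neither a <= b nor b <= a
    return not vc_leq(a, b) and not vc_leq(b, a)

# Assumed timestamps from the trace above: c = (2,1,0) at p2, e = (0,0,1) at p3.
assert vc_concurrent([2, 1, 0], [0, 0, 1])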

7 Figure 14.6 Lamport timestamps for the events shown in Figure 14.5
Instructor’s Guide for Coulouris, Dollimore, Kindberg and Blair, Distributed Systems: Concepts and Design, Edn. 5, © Pearson Education 2012

8 Figure 14.7 Vector timestamps for the events shown in Figure 14.5
Instructor’s Guide for Coulouris, Dollimore, Kindberg and Blair, Distributed Systems: Concepts and Design, Edn. 5, © Pearson Education 2012

9 Logical clocks, including vector clocks, don’t capture everything
Example: out-of-band communication (causality established outside the message-passing system is not visible to the clocks).

10 Distributed Mutex (Reading CDK5 15.2)
We learned about mutexes, semaphores, and condition variables within a single system. What do they have in common? They require shared state, which we kept in memory.
Distributed mutex: no shared memory. How do we implement it? → Message passing.
Challenges: messages can be dropped and processes can fail.

11 Distributed Mutex (Reading CDK5 15.2)
Entering/leaving a critical section:
Enter() --- block if necessary
ResourceAccesses() --- access the shared resource (inside the CS)
Leave()
Goal:
Safety: at most one process may execute in the CS at one time.
Liveness: requests to enter/exit the CS eventually succeed (no deadlock or starvation).
→ ordering: if one entry request “happened before” another, then entry to the CS must happen in that order.
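A minimal sketch of this entry/exit interface in Python (the class, function, and parameter names are illustrative, not from the lecture):

from abc import ABC, abstractmethod

class DistributedMutex(ABC):
    @abstractmethod
    def enter(self):
        """Block if necessary until it is safe to run the critical section."""

    @abstractmethod
    def leave(self):
        """Release the critical section so other processes may enter."""

def resource_accesses(mutex, work):
    mutex.enter()      # Enter(): block if necessary
    try:
        work()         # access the shared resource inside the CS
    finally:
        mutex.leave()  # Leave()

The algorithms on the following slides (centralized, token ring, Lamport's queue) are different message-passing implementations of enter() and leave().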

12 Distributed Mutex (Reading CDK5 15.2)
→ ordering: example explained in class.
Other performance objectives:
Reduce the number of messages.
Minimize synchronization delay.

13 Mutual Exclusion A Centralized Algorithm
Process 1 asks the coordinator for permission to access a shared resource → permission is granted.
Process 2 then asks permission to access the same resource → the coordinator does not reply.
When process 1 releases the resource, it tells the coordinator, which then replies to process 2.
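A minimal sketch of the coordinator's side, assuming reliable delivery and a send(pid, msg) helper (names and message strings are illustrative):

from collections import deque

class Coordinator:
    def __init__(self, send):
        self.send = send            # send(pid, msg): deliver a message to a process
        self.holder = None          # process currently holding the resource, if any
        self.waiting = deque()      # requests that have not been granted yet

    def on_request(self, pid):
        if self.holder is None:
            self.holder = pid
            self.send(pid, "GRANT")      # permission granted immediately
        else:
            self.waiting.append(pid)     # no reply yet; the requester stays blocked

    def on_release(self, pid):
        assert pid == self.holder
        if self.waiting:
            self.holder = self.waiting.popleft()
            self.send(self.holder, "GRANT")   # now reply to the next waiter
        else:
            self.holder = None

Note that the queue grants requests in arrival order at the coordinator, which is one way to see why this scheme does not ensure → ordering (next slide).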

14 Mutual Exclusion A Centralized Algorithm
Advantages:
Simple; small delay (one RTT) to acquire the mutex.
Only 3 messages required to enter and leave the critical section.
Disadvantages:
Single point of failure.
Central performance bottleneck.
Does not ensure → ordering (example?).
Must elect a master in a consistent fashion.

15 A Token Ring Algorithm
An unordered group of processes on a network → a logical ring constructed in software.
Use the ring to pass the right (a token) to access the resource.
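A minimal sketch of one process's role in the ring, assuming a send(pid, msg) helper and a wants_cs flag set by the application (all names are illustrative):

class RingProcess:
    def __init__(self, pid, next_pid, send, has_token=False):
        self.pid = pid
        self.next_pid = next_pid    # successor in the logical ring
        self.send = send            # send(pid, msg)
        self.has_token = has_token  # exactly one process starts with the token
        self.wants_cs = False       # set when this process wants the critical section

    def on_token(self):
        self.has_token = True
        if self.wants_cs:
            self.critical_section()
            self.wants_cs = False
        self.pass_token()

    def pass_token(self):
        self.has_token = False
        self.send(self.next_pid, "TOKEN")   # the right to enter keeps circulating

    def critical_section(self):
        pass    # access the shared resource here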

16 A Token Ring Algorithm
Benefits: Simple.
Problems:
Failure recovery can be difficult. A single process failure can break the ring, but such a failure can be recovered from by dropping the failed process from the logical ring.
Does not ensure → ordering.
Long synchronization delay: need to wait for up to N-1 messages, for N processes.

17 Lamport’s Shared Priority Queue
Maintain a global priority queue of requests for the critical section, but each process has its own copy of the queue.
The ordering inside the queues is enforced by Lamport’s clock; thus we enforce → ordering.

18 Lamport’s Shared Priority Queue
Each process i locally maintains Qi (its own version of the priority queue).
To execute the critical section, you must have replies from all other processes AND your request must be at the front of Qi.
When you have all replies:
All other processes are aware of your request (because the request happens before the reply).
You are aware of any earlier requests (assuming messages from the same process are not reordered).

19 Lamport’s Shared Priority Queue
To enter the critical section at process i:
Stamp your request with the current time T.
Add the request to Qi.
Broadcast REQUEST(T) to all processes.
Wait for all replies and for T to reach the front of Qi.
To leave:
Pop the head of Qi; broadcast RELEASE to all processes.
On receipt of REQUEST(T’) from process j:
Add T’ to Qi.
If waiting for a REPLY from j for an earlier request T, wait until j replies to you; otherwise REPLY.
On receipt of RELEASE:
Pop the head of Qi.
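A minimal sketch of one process running these steps, assuming reliable FIFO channels and a send(pid, msg) helper; for simplicity it replies to every REQUEST immediately rather than delaying the reply as on the slide (all names are illustrative):

import heapq

class LamportMutex:
    def __init__(self, pid, peers, send):
        self.pid = pid
        self.peers = peers              # ids of all other processes
        self.send = send                # send(pid, msg)
        self.clock = 0                  # Lamport logical clock
        self.queue = []                 # Qi: heap of (timestamp, pid) requests
        self.replies = set()            # who has replied to our current request
        self.my_request = None

    def tick(self, t=0):
        self.clock = max(self.clock, t) + 1

    def request(self):
        self.tick()
        self.my_request = (self.clock, self.pid)     # stamp with current time T
        heapq.heappush(self.queue, self.my_request)  # add to Qi
        self.replies = set()
        for p in self.peers:                         # broadcast REQUEST(T)
            self.send(p, ("REQUEST", self.my_request))

    def can_enter(self):
        # all replies received AND our request is at the front of Qi
        return self.replies == set(self.peers) and self.queue[0] == self.my_request

    def on_request(self, req, sender):
        self.tick(req[0])
        heapq.heappush(self.queue, req)              # add T' to Qi
        self.send(sender, ("REPLY", self.clock))

    def on_reply(self, t, sender):
        self.tick(t)
        self.replies.add(sender)

    def release(self):
        heapq.heappop(self.queue)                    # pop our request from the head of Qi
        self.my_request = None
        for p in self.peers:                         # broadcast RELEASE
            self.send(p, ("RELEASE", self.pid))

    def on_release(self, sender):
        heapq.heappop(self.queue)                    # pop the head of Qi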

20 Lamport’s Shared Priority Queue
Advantages:
Fair; short synchronization delay.
Disadvantages:
Very unreliable (any process failure halts progress).
3(N-1) messages per entry/exit.

21 Announcements
Midterm: in the process of being made.
Written exam: take-home exam. Honor code: only the textbook and lecture notes; no discussion; no Internet. Released at noon 10/24, due Friday evening 10/25.
Programming part: can use the Internet, but no discussion.

