
1 CSS490 Synchronization, Winter 2004 (Textbook Ch. 6)
Instructor: Munehiro Fukuda
These slides were compiled from the textbook, the reference books, and the instructor's original materials.

2 Why Clock Synchronization
Computer clock: a counter decremented at each crystal oscillation.
Single computer: all processes use the same clock, so there is no problem.
Multiple computers: it is impossible to guarantee that the crystals in different computers all run at exactly the same frequency.
Synchronization:
- Absolute (with real time): necessary for real-time applications such as on-line reservation systems.
- Relative (with each other): required for applications that need a consistent view of time across all nodes.

3 Clock Synchronization: Passive Centralized Algorithm (Cristian's Algorithm)
Assumption: the server's processing time has been measured or estimated.
message_delay = (T1 - T0 - processing) / 2
New client time = T + message_delay
Improvements: average multiple measurements; discard outlying measurements.
(Diagram: the client sends "Time?" at T0; the time server replies "Time = T" after its processing; the reply arrives at the client at T1.)
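The delay computation above can be sketched in Python (an illustration, not part of the original slides; trimming the extremes in `cristian_average` is one possible reading of the "discard outlying measurements" improvement):

```python
def cristian_adjust(t0, t1, server_time, processing=0.0):
    """Estimate the message delay and the adjusted client time.

    t0: client clock when the request was sent
    t1: client clock when the reply arrived
    server_time: the time T carried in the server's reply
    processing: the server's measured or estimated processing time
    """
    message_delay = (t1 - t0 - processing) / 2
    return server_time + message_delay


def cristian_average(samples):
    """Average several round trips, discarding the extreme estimates.

    samples: list of (t0, t1, server_time, processing) tuples
    """
    estimates = sorted(cristian_adjust(*s) for s in samples)
    if len(estimates) > 2:
        estimates = estimates[1:-1]  # drop the lowest and highest outliers
    return sum(estimates) / len(estimates)
```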

4 Clock Synchronization: Active Centralized Algorithm (Berkeley Algorithm)
Assumption: processing time has been measured or estimated.
Server: diff(i) = server_time - (ci_time + message_delay)
Client: ci_time = ci_time + diff(i)
(Diagram: the time server asks Client 1 and Client 2 for their times C1_time and C2_time, then sends back corrections Diff(1) and Diff(2).)
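A minimal sketch of one Berkeley round, assuming (as the slide does) a single measured message delay shared by all clients; the function name is mine:

```python
def berkeley_round(server_time, client_reports, message_delay):
    """Compute diff(i) = server_time - (ci_time + message_delay) per client.

    client_reports: {client_id: ci_time as polled by the server}
    Each client then applies ci_time = ci_time + diff(i).
    """
    return {cid: server_time - (ci_time + message_delay)
            for cid, ci_time in client_reports.items()}
```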

5 Clock Synchronization: Distributed Algorithm (Averaging Algorithm)
Assumption: the interval R is large enough to wait for all broadcast messages.
All nodes broadcast their times periodically (at T0, T0+R, T0+2R, ...).
Each node computes the average of the times it receives.
Improvements: discard outlying time messages; exchange times only with local neighbors.
(Diagram: Node 1, Node 2, and Node 3 broadcast N1_time=31, N2_time=32, and N3_time=30.)
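The averaging step can be sketched as follows (a hypothetical helper, with the outlier-discarding improvement folded in as an optional parameter):

```python
def averaged_time(broadcast_times, discard=1):
    """Average the times received in one interval R.

    discard: how many of the lowest and highest values to drop
    (the slide's "discard outlying time messages" improvement).
    """
    ts = sorted(broadcast_times)
    if discard and len(ts) > 2 * discard:
        ts = ts[discard:-discard]
    return sum(ts) / len(ts)
```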

6 Event Ordering: Happened-Before Relation
1. Event e_i^k: the k-th event of process i.
2. Sequence h_i^k: the history of process i through the event e_i^k.
3. Cause and effect e → e': e precedes e'.
4. Parallel events e ∥ e': e and e' happen in parallel.
5. Happened-before relation:
- If e_i^k, e_i^l ∈ h_i and k < l, then e_i^k → e_i^l.
- If e_i = send(m) and e_j = receive(m), then e_i → e_j.
- If e → e' and e' → e'', then e → e''.
Most applications need not maintain a real-time synchronized clock.

7 Event Ordering: Logical Clock
LC(e_i) := (e_i != receive(m)) ? LC + 1 : max(LC, TS(m)) + 1, where TS(m) is the timestamp of message m.
1. e → e' ⇒ LC(e) < LC(e') for all events.
2. However, we cannot infer LC(e) < LC(e') ⇒ e → e'.
Example: LC(e_1^2) > LC(e_3^1) but e_1^2 ∥ e_3^1.
(Diagram: P1, P2, and P3 with events e_1^1 through e_3^2, logical clock values LC=1 through LC=5, and a message m1 from P2 to P3.)
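The update rule can be sketched as a small class (a Python illustration, not from the slides):

```python
class LamportClock:
    """LC := LC + 1 on a local or send event; LC := max(LC, TS(m)) + 1 on receive."""

    def __init__(self):
        self.lc = 0

    def tick(self):
        """A local event (or a send)."""
        self.lc += 1
        return self.lc

    def send(self):
        """Returns the timestamp TS(m) carried by the outgoing message."""
        return self.tick()

    def receive(self, ts):
        """ts is TS(m), the timestamp of the incoming message."""
        self.lc = max(self.lc, ts) + 1
        return self.lc
```

In the slide's example, P3 is at LC=1 and receives m1 carrying TS=4 from P2, so its next event gets LC = max(1, 4) + 1 = 5.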

8 Event Ordering: Vector Clock
- Before each event at Pi: Vi[i] = Vi[i] + 1.
- Pi includes the value t = Vi in every message it sends.
- On receipt: Vi[j] = max(Vi[j], t[j]) for j = 1, 2, ..., N.
1. e → e' ⇒ V(e) < V(e').
2. V(e) < V(e') ⇒ e → e'.
Example: neither V(e_1^2) ≤ V(e_3^1) nor V(e_3^1) ≤ V(e_1^2), and thus e_1^2 ∥ e_3^1.
(Diagram: P1, P2, and P3 with vector timestamps (1,0,0), (2,0,0), (2,1,0), (2,2,0), (0,0,1), and (2,2,2), and a message m1.)
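The three rules can be sketched as follows (indices here are 0-based, unlike the slide's 1..N):

```python
class VectorClock:
    """Vector clock for process Pi among n processes (0-based index i)."""

    def __init__(self, i, n):
        self.i, self.v = i, [0] * n

    def event(self):
        self.v[self.i] += 1                    # Vi[i] = Vi[i] + 1

    def send(self):
        self.event()
        return list(self.v)                    # t = Vi, piggybacked on the message

    def receive(self, t):
        self.v = [max(a, b) for a, b in zip(self.v, t)]  # Vi[j] = max(Vi[j], t[j])
        self.event()


def happened_before(v, w):
    """V(e) < V(e'): componentwise <= with at least one strict <."""
    return all(a <= b for a, b in zip(v, w)) and v != w
```

With this, the slide's example checks out: neither timestamp (2,0,0) nor (0,0,1) dominates the other, so the two events are parallel.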

9 Global State: Consistent Cut
Finding a cut C such that (e ∈ C) ∧ (e' → e) ⇒ e' ∈ C.
(Diagram: processes p1 through p4 with events e_1^1 through e_4^3; cut C is consistent, while cut C' is not because it includes a receive event whose corresponding send lies outside the cut.)
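The consistency condition can be checked directly (a sketch over an explicit happened-before relation):

```python
def is_consistent(cut, happened_before):
    """A cut C is consistent iff (e in C) and (e' -> e) imply e' in C.

    cut: set of events in C
    happened_before: set of pairs (e_prime, e) with e' -> e
    (for example, every (send(m), receive(m)) pair)
    """
    return all(e_prime in cut
               for e_prime, e in happened_before
               if e in cut)
```

Note the asymmetry: a cut containing a send but not its receive is still consistent; only an orphaned receive violates the condition.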

10 Global State: Distributed Snapshot (Chandy/Lamport [1985])
- A process that wants to take a snapshot sends a snapshot request to the others.
- Each process records its state upon receiving the first snapshot request.
- Each process keeps recording incoming messages until it has received a snapshot request from each of the other processes, except the one that originally initiated the snapshot.
(Diagram: P0, P1, and P2 exchange snapshot requests (s) and ordinary messages (m); the intervals during which messages are recorded are marked.)

11 Mutual Exclusion: Centralized Approach
Pros: simple.
Cons: bottleneck and fault intolerance.
(Diagram: coordinator Pc serves P1, P2, and P3 with numbered Request, Reply, and Release messages; Pc's queue evolves from empty, to P2 after step 3, to P2 and P3 after step 4, to P3 after step 5, and back to empty after step 7.)

12 Mutual Exclusion: Distributed Approach
- Fault intolerant.
- Expensive: 2(n-1) messages per critical-section entry.
(Diagram, stages A through D: P1 (TS=6) and P2 (TS=4) send request messages while P4 is already in its critical section; pending requests are queued; as P4 and then P2 exit the critical section, OK replies let the request with the older timestamp enter first.)
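The slide does not name the algorithm, but the pictured request/OK exchange matches a Ricart-Agrawala-style scheme; its reply decision can be sketched as follows (function and state names are mine):

```python
def should_defer(my_state, my_ts, my_id, req_ts, req_id):
    """Decide whether to defer the OK reply to an incoming request.

    Defer while in the critical section, or while requesting with an
    older (smaller) timestamp; process ids break timestamp ties.
    """
    if my_state == "in_cs":
        return True
    if my_state == "requesting":
        return (my_ts, my_id) < (req_ts, req_id)
    return False                                # idle: reply OK immediately
```

In the slide's scenario, P2 (TS=4) defers its OK to P1 (TS=6), so P2 enters the critical section before P1 once P4 releases it.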

13 Distributed Election: The Bully Algorithm
Election: O(n^2) messages. Recovery: O(n^2) messages.
(Diagram, stages 1 through 3 with P1 through P4: after a request to the coordinator P4 times out, P1 sends Election messages; the higher-numbered live processes answer and hold their own elections; the highest live process, P3, announces itself as Coordinator.)
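The outcome of the stages above can be sketched as a simplified simulation (message timing is ignored; `alive` is assumed to contain the initiator):

```python
def bully_election(initiator, alive):
    """Run a bully election from `initiator` among the live process ids.

    Each Election is answered by the higher-numbered live processes,
    which then hold their own elections; the highest live process
    finally announces itself as coordinator.
    """
    current = initiator
    while True:
        higher = sorted(p for p in alive if p > current)
        if not higher:
            return current        # nobody answered: current is coordinator
        current = higher[0]       # an answering process takes over the election
```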

14 Distributed Election: A Ring Algorithm
Election: O(2(n-1)) = O(n) messages. Recovery: O(n-1) = O(n) messages.
(Diagram, stages 1 through 3 with P1 through P4 in a ring: an Election message circulates, accumulating P1, then P1 and P2, then P1, P2, and P3; after the request to the crashed coordinator times out, "Coordinator P3" circulates around the ring; on recovery, an Inquiry message is answered with Coordinator = P3.)
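The election circulation can be sketched as follows (processes numbered 1..n around the ring; crashed processes are simply skipped):

```python
def ring_election(start, alive, n):
    """Circulate an Election message once around the ring from `start`.

    Live process ids are accumulated; after one full lap, the highest
    accumulated id is announced as the coordinator.
    """
    ids, p = [], start
    for _ in range(n):
        if p in alive:
            ids.append(p)         # a live process appends its id
        p = p % n + 1             # forward to the next process on the ring
    return max(ids), ids
```

With P4 crashed, ring_election(1, {1, 2, 3}, 4) accumulates P1, P2, and P3 and announces P3, as in the diagram.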

15 Summary
- Global state: debugging distributed applications; safely garbage-collecting old computations in optimistic synchronization (parallel and distributed simulation).
- Mutual exclusion: allowing multiple distributed readers and writers to enter a shared DB exclusively.
- Distributed election: focusing on decentralized fault tolerance.

16 SPEEDES [Steinman 1992]: Breathing Time Buckets
- An optimistic distributed simulator, but not as aggressive as Time Warp.
- Each process broadcasts the oldest local event among those it will execute; this is called its Local Event Horizon (LEH).
- A process must suspend its event processing if it has received an LEH older than the event it is currently processing.
- The oldest LEH among all processes becomes the next Global Event Horizon (GEH).
- Each process may send out all messages and process all events before this new GEH.
- Processes that have already processed beyond the GEH must roll back their computation to the GEH; no anti-messages are sent out.
(Diagram: p1 and p2 with P1's LEH, P2's LEH, and the next GEH (GVT).)

17 Time Warp [Jefferson 1985]: Optimistic Distributed Simulation
Each process has an input message queue, an output message queue, and an event history queue.
When a process receives a message whose timestamp is older than its local time:
1. Roll back its local event execution to that old timestamp.
2. Roll back its receipt of input messages whose timestamps are newer than that old timestamp.
3. Send anti-messages to cancel all emitted messages whose timestamps are newer than that old timestamp.
GVT (Global Virtual Time) is periodically computed to garbage-collect all executed events whose timestamps are older than GVT.
(Diagram: p1, p2, and p3 with local virtual times (LVTs) and event timestamps from 120 to 163; a message with timestamp 135 arrives late at p2, triggering a rollback and an anti-message.)
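Steps 1 through 3 of the rollback can be sketched as follows (a simplified model in which each history entry records an event's timestamp and the messages it sent):

```python
def rollback(history, straggler_ts):
    """Roll back on a straggler message with timestamp `straggler_ts`.

    history: list of (event_ts, sent_messages), processed in timestamp order
    Returns the surviving history and the anti-messages to send, one for
    each message emitted by an undone event.
    """
    keep = [(ts, out) for ts, out in history if ts <= straggler_ts]
    undone = [(ts, out) for ts, out in history if ts > straggler_ts]
    anti_messages = [m for _, out in undone for m in out]
    return keep, anti_messages
```

As in the diagram, a straggler with timestamp 135 undoes the events at 141 and beyond and cancels their output messages with anti-messages.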

18 Samadi's Algorithm [1985]
1. Each process returns an ack whenever it receives a message.
2. Once it receives a snapshot message, each process returns a tag instead of an ack until a new GVT is computed.
3. When receiving a snapshot message, each process reports to P0 the minimum of:
- the minimum timestamp among events that have not yet been processed;
- the minimum timestamp among messages that have not yet been acknowledged;
- the minimum timestamp among tags it has received.
(Diagram: P0 takes a snapshot; p1, p2, and p3 report 15, 12, and 20; acks and tags are shown, with local event timestamps 16, 12, 15, and 20.)
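The per-process report in step 3 is just a minimum over three sets of timestamps; a sketch:

```python
def samadi_report(unprocessed_events, unacked_messages, received_tags):
    """Return the minimum timestamp a process reports to P0.

    The three arguments hold the timestamps of events not yet processed,
    of messages not yet acknowledged, and of tags received.
    """
    return min(list(unprocessed_events)
               + list(unacked_messages)
               + list(received_tags))
```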

19 Mattern's Algorithm [1993]
1. Process Pi maintains a vector counter Vi[1..n].
2. Pi writes in Vi[j] the number of messages it has sent to Pj.
3. Pi subtracts one from Vi[j] when receiving a message from Pj.
4. During the 1st circulation of a "take snapshot" message, Pi performs C[1..n] += Vi[1..n]; Vi[1..n] = 0. Upon completing the 1st circulation, C[i] represents the number of messages still in transit to Pi.
5. During the 2nd circulation, Pi waits until C[i] = 0.
(Diagram: p1 through p4 with vector counters such as (0,1,0,0), (0,2,-1,0), (0,0,1,1), and (0,0,0,1); the 1st and 2nd circulations of the snapshot message are shown.)

20 Paper Review by Students
- SPEEDES
- Time Warp
- Distributed Snapshot
- Samadi's Algorithm
- Mattern's Algorithm

