
Distributed Algorithms – 2g1513, Lecture 1b – by Ali Ghodsi: Models of distributed systems (continued) and logical time in distributed systems


1 Distributed Algorithms – 2g1513, Lecture 1b – by Ali Ghodsi: Models of distributed systems (continued) and logical time in distributed systems

2 State transition system – example

Example algorithm: X := 0; while (X < 3) do X := X + 1 endwhile; X := 1

Formally:
- States: {X0, X1, X2}
- Possible transitions: {X0→X1, X1→X2, X2→X1}
- Start states: {X0}

Using graphs: start → X0 → X1 → X2, with an arrow back from X2 to X1.

3 State transition system – formally

An STS is formally described as the triple (C, →, I) such that:
1. C is a set of states
2. → is a subset of C × C, describing the possible transitions (→ ⊆ C × C)
3. I is a subset of C, describing the initial states (I ⊆ C)

Note that the system may have several transitions from one state, e.g. → = {X2→X1, X2→X0}.
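The triple (C, →, I) can be sketched directly in Python, using the X example from the previous slide. The names (`states`, `transitions`, `initial`, `successors`) are ours, chosen only for illustration:

```python
# A state transition system (C, ->, I) for the slides' X example.
states = {"X0", "X1", "X2"}                      # C: the set of states
transitions = {("X0", "X1"), ("X1", "X2"),       # ->: a subset of C x C
               ("X2", "X1")}
initial = {"X0"}                                 # I: a subset of C

def successors(state):
    """All states reachable from `state` in one transition."""
    return {d for (c, d) in transitions if c == state}

# Sanity checks that the triple is well-formed: I ⊆ C, -> ⊆ C x C.
assert initial <= states
assert all(c in states and d in states for (c, d) in transitions)
print(sorted(successors("X0")))   # ['X1']
```

A state may have several successors, as the slide notes: `successors` simply returns a set.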

4 Local algorithms

The local algorithm is modeled as an STS in which three kinds of events occur:
- A processor changes from one state to another (internal event)
- A processor changes from one state to another and sends a message into the network, destined to another processor (send event)
- A processor receives a message destined to it and changes from one state to another (receive event)

5 Model of the distributed system

Based on the STSs of the local algorithms, we can now define, for the whole distributed system:
- Its configurations: the states of all processes, together with the state of the network
- Its initial configurations: all configurations in which every local algorithm is in a start state and the network is empty
- Its transitions: each local event (internal, send, receive) of a local algorithm becomes a configuration transition of the distributed system

6 Execution example 1/9

Let's build a simple distributed application in which a client sends a ping and receives a pong from a server. This is repeated indefinitely.

p1 (client) —send→ p2 (server)

7 Execution example 2/9

M = {ping_p2, pong_p1}

p1 (client): (Z_p1, I_p1, ├I_p1, ├S_p1, ├R_p1)
- Z_p1 = {c_init, c_sent}
- I_p1 = {c_init}
- ├I_p1 = ∅
- ├S_p1 = {(c_init, ping_p2, c_sent)}
- ├R_p1 = {(c_sent, pong_p1, c_init)}

p2 (server): (Z_p2, I_p2, ├I_p2, ├S_p2, ├R_p2)
- Z_p2 = {s_init, s_rec}
- I_p2 = {s_init}
- ├I_p2 = ∅
- ├S_p2 = {(s_rec, pong_p1, s_init)}
- ├R_p2 = {(s_init, ping_p2, s_rec)}

8 Execution example 3/9

C = {(c_init, s_init, ∅), (c_init, s_rec, ∅), (c_sent, s_init, ∅), (c_sent, s_rec, ∅),
(c_init, s_init, {ping_p2}), (c_init, s_rec, {ping_p2}), (c_sent, s_init, {ping_p2}), (c_sent, s_rec, {ping_p2}),
(c_init, s_init, {pong_p1}), (c_init, s_rec, {pong_p1}), (c_sent, s_init, {pong_p1}), (c_sent, s_rec, {pong_p1}), …}

I = {(c_init, s_init, ∅)}

→ = {(c_init, s_init, ∅) → (c_sent, s_init, {ping_p2}),
(c_sent, s_init, {ping_p2}) → (c_sent, s_rec, ∅),
(c_sent, s_rec, ∅) → (c_sent, s_init, {pong_p1}),
(c_sent, s_init, {pong_p1}) → (c_init, s_init, ∅)}

9 Execution example 4/9

E = ((c_init, s_init, ∅), (c_sent, s_init, {ping_p2}), (c_sent, s_rec, ∅), (c_sent, s_init, {pong_p1}), (c_init, s_init, ∅), (c_sent, s_init, {ping_p2}), …)

Initially: p1 in c_init, p2 in s_init.

10 Execution example 5/9 — initial configuration (c_init, s_init, ∅): p1 in c_init, p2 in s_init.

11 Execution example 6/9 — after p1's send event: p1 in c_sent, p2 in s_init, ping in transit.

12 Execution example 7/9 — after p2's receive event: p1 in c_sent, p2 in s_rec.

13 Execution example 8/9 — after p2's send event: p1 in c_sent, p2 in s_init, pong in transit.

14 Execution example 9/9 — after p1's receive event: p1 in c_init, p2 in s_init, and the cycle repeats.
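The whole ping/pong execution can be replayed mechanically from the send and receive relations of the two local algorithms. A minimal sketch, using the state and message names from the slides (the driver loop and helper names are ours):

```python
# Send/receive relations of the two local algorithms, as on slide 2/9:
# each triple is (state before, message, state after).
SEND = {"p1": {("c_init", "ping", "c_sent")},
        "p2": {("s_rec", "pong", "s_init")}}
RECV = {"p1": {("c_sent", "pong", "c_init")},
        "p2": {("s_init", "ping", "s_rec")}}

def enabled_events(cfg):
    """All send/receive events applicable in configuration (c_p1, c_p2, M)."""
    c1, c2, net = cfg
    state = {"p1": c1, "p2": c2}
    evs = []
    for p in ("p1", "p2"):
        evs += [(p, "send", m, d) for (c, m, d) in SEND[p] if state[p] == c]
        evs += [(p, "recv", m, d) for (c, m, d) in RECV[p]
                if state[p] == c and m in net]
    return evs

def apply_event(cfg, ev):
    """Apply one event: update the process state and the network contents."""
    p, kind, m, d = ev
    c1, c2, net = cfg
    net = net | {m} if kind == "send" else net - {m}
    return (d, c2, net) if p == "p1" else (c1, d, net)

cfg = ("c_init", "s_init", frozenset())      # the initial configuration
trace = [cfg]
for _ in range(4):                           # one full ping/pong round
    cfg = apply_event(cfg, enabled_events(cfg)[0])
    trace.append(cfg)
print(trace[-1])   # back to ('c_init', 's_init', frozenset())
```

At every step of this particular system exactly one event is enabled, so the trace reproduces the execution E from slide 4/9 and returns to the initial configuration after four events.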

15 Applicable events

An internal event e = (c, d) ∈ ├I_pi is applicable in a configuration C = (c_p1, …, c_pi, …, c_pn, M) if c_pi = c.
- If event e is applied, we get e(C) = (c_p1, …, d, …, c_pn, M)

A send event e = (c, m, d) ∈ ├S_pi is applicable in a configuration C = (c_p1, …, c_pi, …, c_pn, M) if c_pi = c.
- If event e is applied, we get e(C) = (c_p1, …, d, …, c_pn, M ∪ {m})

A receive event e = (c, m, d) ∈ ├R_pi is applicable in a configuration C = (c_p1, …, c_pi, …, c_pn, M) if c_pi = c and m ∈ M.
- If event e is applied, we get e(C) = (c_p1, …, d, …, c_pn, M − {m})

16 Order of events

The following theorem shows an important result: the order in which two applicable events are executed is not important!

Theorem: Let e_p and e_q be two events on two different processors p and q, both applicable in configuration γ. Then e_p can be applied in e_q(γ), and e_q can be applied in e_p(γ). Moreover, e_p(e_q(γ)) = e_q(e_p(γ)).

17 Order of events

To avoid a proof by cases (3 × 3 = 9 cases), we represent all three event types in one abstraction. We let the quadruple (c, X, Y, d) represent any event:
- c is the state of the processor before the event
- X is the set of messages received by the event
- Y is the set of messages sent by the event
- d is the state of the processor after the event

Examples:
- (c_init, ∅, {ping}, c_sent) represents a send event
- (s_init, {pong}, ∅, s_rec) represents a receive event
- (c1, ∅, ∅, c2) represents an internal event

Any such event e_p = (c, X, Y, d) at p is applicable in a configuration γ = (…, c_p, …, M) if and only if c_p = c and X ⊆ M.
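The quadruple abstraction is compact enough to code directly. A sketch, assuming a configuration is a pair (states, M) where `states` is a tuple of per-process states and `M` a frozenset of in-transit messages (helper names are ours):

```python
# The unified event quadruple (c, X, Y, d): X = messages received,
# Y = messages sent. Covers internal (X = Y = ∅), send, and receive events.

def applicable(cfg, p, ev):
    """e_p = (c, X, Y, d) is applicable iff c_p = c and X ⊆ M."""
    states, M = cfg
    c, X, Y, d = ev
    return states[p] == c and X <= M

def apply_event(cfg, p, ev):
    """Apply the event: move p to state d, remove X from M, add Y."""
    states, M = cfg
    c, X, Y, d = ev
    assert applicable(cfg, p, ev)
    return (states[:p] + (d,) + states[p + 1:], (M - X) | Y)

# The send event from the ping/pong example: (c_init, ∅, {ping}, c_sent).
send = ("c_init", frozenset(), frozenset({"ping"}), "c_sent")
cfg = (("c_init", "s_init"), frozenset())
print(apply_event(cfg, 0, send))   # (('c_sent', 's_init'), frozenset({'ping'}))
```

All three event kinds go through the same `(M − X) ∪ Y` update, which is exactly what makes the single-abstraction proof on the next slides possible.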

18 Order of events

Proof:
- Let e_p = (c, X, Y, d), e_q = (e, Z, W, f), and γ = (…, c_p, …, c_q, …, M).
- As both e_p and e_q are applicable in γ, we know that c_p = c, c_q = e, X ⊆ M, and Z ⊆ M.
- e_q(γ) = (…, c_p, …, f, …, (M − Z) ∪ W): c_p is untouched and c_p = c, and X ⊆ (M − Z) ∪ W since X ∩ Z = ∅; hence e_p is applicable in e_q(γ).
- A similar argument shows that e_q is applicable in e_p(γ).

19 Order of events

Proof (continued): let us prove e_p(e_q(γ)) = e_q(e_p(γ)):
- e_q(γ) = (…, c_p, …, f, …, (M − Z) ∪ W)
- e_p(e_q(γ)) = (…, d, …, f, …, (((M − Z) ∪ W) − X) ∪ Y)
- e_p(γ) = (…, d, …, c_q, …, (M − X) ∪ Y)
- e_q(e_p(γ)) = (…, d, …, f, …, (((M − X) ∪ Y) − Z) ∪ W)

(((M − Z) ∪ W) − X) ∪ Y = (((M − X) ∪ Y) − Z) ∪ W, because X ∩ Z = ∅, W ∩ X = ∅, and Y ∩ Z = ∅: both sides reduce to ((M − Z) − X) ∪ W ∪ Y. ∎
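The commutation claim can be checked numerically on a small instance. A self-contained sketch with made-up states and messages satisfying the disjointness conditions X ∩ Z = W ∩ X = Y ∩ Z = ∅ (all names are ours):

```python
# Check that applying two events on different processes in either order
# yields the same configuration: e_p(e_q(γ)) == e_q(e_p(γ)).

def apply_event(cfg, p, ev):
    """Apply quadruple event (c, X, Y, d) at process p, as on slide 17."""
    states, M = cfg
    c, X, Y, d = ev
    assert states[p] == c and X <= M          # applicability
    return (states[:p] + (d,) + states[p + 1:], (M - X) | Y)

M = frozenset({"m1", "m2"})
e_p = ("a", frozenset({"m1"}), frozenset({"m3"}), "a2")  # p receives m1, sends m3
e_q = ("b", frozenset({"m2"}), frozenset({"m4"}), "b2")  # q receives m2, sends m4
gamma = (("a", "b"), M)

one = apply_event(apply_event(gamma, 0, e_p), 1, e_q)    # e_q(e_p(γ))
two = apply_event(apply_event(gamma, 1, e_q), 0, e_p)    # e_p(e_q(γ))
assert one == two
print(one)
```

Both orders end with both processes in their new states and the network holding {m3, m4}, matching the set identity proved above.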

20 When the exact order does matter

In two cases the theorem does not apply:
- If p = q, i.e. when the two events occur on the same process: they would not both be applicable if executed out of order.
- If one is a send event and the other is the corresponding receive event: they cannot both be applicable.

In such cases, we say that the two events are causally related!

21 Causal order

The relation ≤_H on the events of an execution, called the causal order, is defined as the smallest relation such that:
- If e occurs before f on the same process, then e ≤_H f
- If s is a send event and r its corresponding receive event, then s ≤_H r
- ≤_H is transitive: if a ≤_H b and b ≤_H c, then a ≤_H c
- ≤_H is reflexive: a ≤_H a for any event a in the execution

Two events a and b are concurrent iff neither a ≤_H b nor b ≤_H a holds.
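This definition translates directly into a small relation-building procedure: seed the relation from program order and send/receive pairs, then close it under transitivity. A sketch; the event encoding (process, index, kind, message) is ours, not the slides':

```python
from itertools import product

def causal_order(events):
    """events: list of (process, index, kind, msg) in execution order.
    Returns the set of pairs (i, j) with events[i] ≤_H events[j]."""
    n = len(events)
    rel = {(i, i) for i in range(n)}                      # reflexivity
    for i, j in product(range(n), range(n)):
        if i >= j:
            continue
        a, b = events[i], events[j]
        if a[0] == b[0]:                                  # same process
            rel.add((i, j))
        if a[2] == "send" and b[2] == "recv" and a[3] == b[3]:
            rel.add((i, j))                               # send before receive
    changed = True                                        # transitive closure
    while changed:
        changed = False
        for (i, j), (k, l) in product(list(rel), list(rel)):
            if j == k and (i, l) not in rel:
                rel.add((i, l))
                changed = True
    return rel

# The four events of one ping/pong round:
ev = [("p1", 0, "send", "ping"), ("p2", 0, "recv", "ping"),
      ("p2", 1, "send", "pong"), ("p1", 1, "recv", "pong")]
rel = causal_order(ev)
print((0, 2) in rel)   # True: send(ping) ≤_H send(pong), via the receive
```

Two events i and j are concurrent exactly when neither (i, j) nor (j, i) is in the resulting relation; in this round-trip example no pair is concurrent.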

22 Example of causally related events

[Time-space diagram over p1, p2, p3, marking causally related events and concurrent events.]

23 Equivalence of executions: computations

Computation Theorem: Let E be an execution E = (γ1, γ2, γ3, …) and V = (e1, e2, e3, …) the sequence of events associated with it, i.e. e_k(γ_k) = γ_{k+1} for all k ≥ 1. A permutation P of V that preserves the causal order and starts in γ1 defines a unique execution with the same number of events; if finite, P's and V's final configurations are the same.

P = (f1, f2, f3, …) preserves the causal order of V when, for every pair of events, f_i ≤_H f_j implies i ≤ j.

24 Equivalence of executions

If two executions F and E consist of the same collection of events and their causal order is preserved, F and E are said to be equivalent executions, written F ~ E. F and E may be different permutations of the events, as long as causality is preserved!

25 Computations

Equivalent executions form equivalence classes, in which every execution is equivalent to every other execution in its class. That is, the following always hold for executions:
- ~ is reflexive: a ~ a for any execution a
- ~ is symmetric: if a ~ b, then b ~ a, for any executions a and b
- ~ is transitive: if a ~ b and b ~ c, then a ~ c, for any executions a, b, c

These equivalence classes are called computations.

26 Example of equivalent executions

[Three time-space diagrams over p1, p2, p3; events of the same color are causally related.]

All three executions belong to the same computation, as causality is preserved.

27 Two important results (1)

The computation theorem gives two important results.

Result 1: There is no distributed algorithm that can observe the order of the sequence of events (that can "see" the time-space diagram).

Proof:
- Assume such an algorithm exists, and assume process p knows the order in the final configuration.
- Run two different causality-preserving executions of the algorithm.
- By the computation theorem, their final configurations are the same; but then the algorithm cannot have observed the actual order of events, as the orders differ. ∎

28 Two important results (2)

Result 2: The computation theorem does not hold if the model is extended so that each process can read a local hardware clock.

Proof:
- Assume a distributed algorithm in which each process reads its local clock each time a local event occurs.
- The final configurations of different causality-preserving executions will then contain different clock values, which contradicts the computation theorem. ∎

29 Lamport logical clocks (informal)

Each process p has a local logical clock, kept in a local variable Θ_p, initially 0. The logical clock is updated at each local event on process p:
- If an internal or send event occurs: Θ_p = Θ_p + 1
- If a message from process q is received: Θ_p = max(Θ_p, Θ_q) + 1, where Θ_q is the clock value carried by the message

Lamport logical clocks guarantee that if a ≤_H b, then Θ_a ≤ Θ_b, where Θ_a and Θ_b are the clock values at events a and b.
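The two update rules fit in a few lines. A minimal sketch following the slides' rules; the class and method names are ours:

```python
class LamportClock:
    """One process's logical clock Θ_p, initially 0."""

    def __init__(self):
        self.time = 0

    def local_event(self):
        """Internal or send event: Θ_p = Θ_p + 1.
        Returns the timestamp to attach to an outgoing message."""
        self.time += 1
        return self.time

    def receive(self, msg_time):
        """Receive event: Θ_p = max(Θ_p, Θ_q) + 1, where Θ_q
        is the timestamp carried by the message."""
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t = p.local_event()     # p sends a message stamped t = 1
q.receive(t)            # q's clock jumps to max(0, 1) + 1 = 2
print(p.time, q.time)   # 1 2
```

Note that the guarantee is one-directional: Θ_a ≤ Θ_b does not imply a ≤_H b, which is exactly the gap vector timestamps close on the next slide.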

30 Example of Lamport clocks

[Time-space diagram over p1, p2, p3: all clocks start at 0, and each event is labeled with its Lamport clock value (1–6).]

31 Vector timestamps: useful implication

Each process p keeps a vector v_p of n positions, for a system with n processes; initially v_p[p] = 1 and v_p[i] = 0 for all other i. The clock is updated at each local event on process p:
- For any event: v_p[p] = v_p[p] + 1
- If a message from process q is received, additionally: v_p[x] = max(v_p[x], v_q[x]) for 1 ≤ x ≤ n

We say v_p ≤ v_q iff v_p[x] ≤ v_q[x] for 1 ≤ x ≤ n. If neither v_p ≤ v_q nor v_q ≤ v_p, then v_p and v_q are concurrent.

Vector timestamps guarantee that v_a ≤ v_b if and only if a ≤_H b, where v_a and v_b are the timestamps of events a and b.
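A vector-clock sketch for n = 3 processes, following the slides' initialization and comparison rules (function names are ours; we apply the merge before counting the receive event itself, one common reading of the two rules):

```python
N = 3  # number of processes

def new_clock(p):
    """Initial vector for process p: v_p[p] = 1, all other entries 0."""
    v = [0] * N
    v[p] = 1
    return v

def tick(v, p):
    """Any local event on p: v_p[p] = v_p[p] + 1."""
    v[p] += 1

def receive(v, p, w):
    """Receive on p of a message carrying vector w: merge componentwise
    maxima, then count the receive event itself."""
    for x in range(N):
        v[x] = max(v[x], w[x])
    v[p] += 1

def leq(v, w):
    """v ≤ w iff v[x] ≤ w[x] for all x."""
    return all(a <= b for a, b in zip(v, w))

def concurrent(v, w):
    return not leq(v, w) and not leq(w, v)

v1, v2 = new_clock(0), new_clock(1)   # p1's and p2's clocks
tick(v1, 0)                           # p1 send event, message stamped [2,0,0]
receive(v2, 1, v1)                    # p2 receives: merge, then count
print(v2, leq(v1, v2))                # [2, 2, 0] True
```

Here `leq(v1, v2)` holds, reflecting that p1's send causally precedes p2's receive; two timestamps that are incomparable under `leq` belong to concurrent events.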

32 Example of vector timestamps

[Time-space diagram over p1, p2, p3, each event labeled with its vector timestamp: [1,0,0], [2,0,0], [3,0,0], [4,0,0], [5,0,0] on p1; [0,1,0], [4,2,0], [4,3,0] on p2; [0,0,1], [0,0,2], [4,3,3] on p3.]

This is great! But it cannot be done with vectors smaller than size n, for n processes.

33 Useful scenario: what is most recent?

[Time-space diagram: p2 receives two messages, one from p1 stamped [4,0,0] and one from p3 stamped [5,0,3].]

p2 examines the two messages it receives and deduces that the information from p1 is the older of the two ([4,0,0] ≤ [5,0,3]).

34 Summary

The total order of events in an execution is not always important: two different executions can yield the same "result". Causal order is what matters:
- The order of two events on the same process
- The order of two events where one is a send and the other the corresponding receive
- The order of two events that are transitively related by the above

Executions that are permutations of each other's events, such that causality is preserved, are called equivalent executions. Equivalent executions form equivalence classes called computations: every execution in a computation is equivalent to every other execution in that computation. Vector timestamps can be used to determine causality; this cannot be done with vectors smaller than size n, for n processes.

