Synchronization Chapter 2.


1 Synchronization Chapter 2

2 Why Synchronize?
To control access to a single, shared resource. To agree on the ordering of events. Synchronization in distributed systems is much more difficult than in uniprocessor systems.

3 Clock Synchronization-1
Synchronization in distributed systems is hard for the following reasons: 1. No shared memory 2. No common clock 3. Each computer has its own clock 4. Global time is not known

4 Clock Synchronization-2
Synchronization based on "actual time". Question: is it even possible to synchronize all the clocks in a distributed system? With multiple computers, "clock skew" ensures that no two machines have the same value for the "current time". But how do we measure time?

5 Clock Synchronization-3
Clock synchronization is a problem from computer science and engineering which deals with the idea that the internal clocks of several computers may differ. Even when initially set accurately, real clocks will differ after some amount of time due to clock drift, caused by clocks counting time at slightly different rates. Several problems occur as a consequence of clock-rate differences, and several solutions exist, some more appropriate than others in certain contexts.

6 Clock Synchronization-4
In serial communication, some people use the term "clock synchronization" merely for getting one metronome-like clock signal to pulse at the same frequency as another, i.e. frequency synchronization (plesiochronous or isochronous operation), as opposed to full phase synchronization (synchronous operation). Such "clock synchronization" is used in telecommunications and in automatic baud-rate detection.

7 Logical Clocks-1 Logical clocks show the ordering or organization of events. Logical clocks help solve the synchronization problem: a logical clock is a mechanism for capturing chronological relationships in a DS. A DS may have no physically synchronous global clock, so a logical clock allows a global ordering of events from different processes in each system.

8 Logical Clocks-2 In a logical clock system each process keeps two data structures: 1. Logical local time 2. Logical global time Logical local time is used by the process to mark its own events. Logical global time is the process's local information about global time.

9 Logical Clocks-3 Logical clocks are useful in: 1. Computational analysis 2. Distributed algorithm design 3. Individual event tracking 4. Exploring computational progress

10 Clock Synchronization
The internal clocks of computers may differ, even if set accurately in advance, because of clock drift and clock skew. Clock drift is the difference between a clock and the actual time, caused by the clock counting time at a slightly wrong rate. Clock skew is the difference between the times shown on two different clocks at the same instant.
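The relationship between drift and skew can be sketched in a few lines of illustrative Python (function names and the 50 ppm rates are made up for the example, not from the slides):

```python
# A minimal sketch of clock drift producing skew: two clocks start
# perfectly synchronized but tick at slightly different rates.

def clock_reading(real_elapsed_s, drift_rate):
    """Value shown by a drifting clock after real_elapsed_s real seconds."""
    return real_elapsed_s * (1.0 + drift_rate)

def skew(real_elapsed_s, rate_a, rate_b):
    """Skew: the difference between two clocks at the same real instant."""
    return clock_reading(real_elapsed_s, rate_a) - clock_reading(real_elapsed_s, rate_b)

# After one day, a +50 ppm clock and a -50 ppm clock are about 8.64 s apart.
one_day = 24 * 3600
print(skew(one_day, 50e-6, -50e-6))  # ~ 8.64
```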

11 Clock Synchronization Solution-1
One solution to clock synchronization: a centralized system dictates the SYSTEM TIME. Cristian's algorithm and the Berkeley algorithm are solutions to the clock synchronization problem.

12 Clock Synchronization Solutions
Cristian's algorithm, Berkeley algorithm, Network Time Protocol, Clock Sampling Mutual Network Synchronization, Precision Time Protocol, Reference Broadcast Synchronization, Reference Broadcast Infrastructure Synchronization, Global Positioning System

13 Radio Clock

14 Radio clock A radio clock or radio-controlled clock (RCC) is a clock that is automatically synchronized by a time code transmitted by a radio transmitter connected to a time standard such as an atomic clock. Such a clock may be synchronized to the time sent by a single transmitter, such as many national or regional time transmitters, or may use multiple transmitters, like the Global Positioning System. Such systems may be used to automatically set clocks or for any purpose where accurate time is needed. Put another way, an RCC is similar to an ordinary electronic clock or watch, but it has two extra components: an antenna that picks up radio signals and a circuit that decodes them. The circuit uses the radio signals to figure out the correct time and adjusts the time displayed accordingly. Unlike an ordinary clock or watch, an RCC always knows what time it is; you never have to tell it.

15 Cristian's algorithm Cristian's algorithm relies on the existence of a time server. The time server maintains its clock using a radio clock (described on the previous slide), and all other computers in the system stay synchronized with it.
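A minimal sketch of the client-side step of Cristian's algorithm, assuming (as the standard analysis does) that the one-way network delay is roughly half the measured round trip. Function and variable names here are illustrative, not from the slides:

```python
# Cristian's algorithm, client side: ask the time server for the time,
# then compensate for network delay using the measured round trip.

def cristian_adjusted_time(server_time, t_request, t_reply):
    """Estimate the current time when the server's reply arrives.

    server_time: time reported by the server
    t_request:   client's local clock when the request was sent
    t_reply:     client's local clock when the reply arrived
    """
    round_trip = t_reply - t_request
    # Assume the reply took half the round trip to travel back.
    return server_time + round_trip / 2.0

# Request sent at local time 100.0, reply at 100.4, carrying server time 205.1:
print(cristian_adjusted_time(205.1, 100.0, 100.4))  # ~ 205.3
```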

16 Berkeley algorithm The Berkeley algorithm's objective is to keep all clocks in a system synchronized with each other (internal synchronization). This algorithm is more suitable for systems where a radio clock is not present; such a system has no way of determining the actual time other than by maintaining a global average time as the global time. A time server periodically fetches the time from all the time clients, averages the results, and then reports back to the clients the adjustment that needs to be made to their local clocks to achieve the average. This algorithm highlights the fact that internal clocks may vary not only in the time they contain but also in clock rate.
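The averaging step just described can be sketched as follows (an illustrative toy, not the real Berkeley implementation; names are made up):

```python
# Berkeley algorithm, master side: average all readings (including the
# master's own) and compute the per-participant adjustment to reach it.

def berkeley_adjustments(master_time, client_times):
    """Return the offset each participant must apply to reach the average."""
    readings = [master_time] + client_times
    avg = sum(readings) / len(readings)
    adjustments = {"master": avg - master_time}
    for i, t in enumerate(client_times):
        adjustments[f"client{i}"] = avg - t
    return adjustments

# Master at 180 s, clients at 170 s and 205 s: the average is 185 s.
print(berkeley_adjustments(180, [170, 205]))
# {'master': 5.0, 'client0': 15.0, 'client1': -20.0}
```

Note that the master sends each client only its adjustment (a delta), not the absolute average time, so clients need not trust a remote absolute clock value.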

17 Network Time Protocol The Network Time Protocol's objective is to keep all clocks in a system synchronized to UTC (Coordinated Universal Time). UTC services are offered by radio stations and satellites. In NTP service topologies based on peering, all clocks participate equally in synchronizing the network by exchanging their timestamps (a timestamp is the current time of an event as recorded by a computer). In addition, NTP provides a higher level of security. NTP is highly robust, widely deployed throughout the Internet, well tested over the years, and generally regarded as the state of the art in distributed time synchronization protocols for unreliable networks. It can reduce synchronization offsets to the order of a few milliseconds over the public Internet, and to sub-millisecond levels over local area networks.
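NTP's offset and delay estimates come from the four timestamps of one request/reply exchange: t1 (client sends), t2 (server receives), t3 (server replies), t4 (client receives). A sketch of the standard formulas (this is an illustration, not the NTP implementation):

```python
# NTP-style offset/delay estimate from one timestamp exchange.

def ntp_offset_delay(t1, t2, t3, t4):
    """t1/t4 are client clock readings, t2/t3 are server clock readings."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # estimated client clock offset
    delay = (t4 - t1) - (t3 - t2)           # round-trip network delay
    return offset, delay

# Client clock 10 s behind the server, 0.4 s of symmetric network delay:
print(ntp_offset_delay(100.0, 110.2, 110.3, 100.5))  # offset ~ 10.0, delay ~ 0.4
```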

18 Clock Sampling Mutual Network Synchronization
CS-MNS is suitable for distributed and mobile applications. It has been shown to be scalable over mesh networks that include indirectly linked non-adjacent nodes, and to be compatible with IEEE and similar standards. It can be accurate to the order of a few microseconds, but requires direct physical wireless connectivity with negligible link delay (less than 1 microsecond) on links between adjacent nodes, limiting the distance between neighboring nodes to a few hundred meters.[5]

19 Precision Time Protocol
A master/slave protocol for delivery of highly accurate time over local area networks

20 Reference broadcast synchronization
The Reference Broadcast Synchronization (RBS) algorithm is often used in wireless networks and sensor networks. In this scheme, an initiator broadcasts a reference message to urge the receivers to adjust their clocks.

21 Reference Broadcast Infrastructure Synchronization
The Reference Broadcast Infrastructure Synchronization (RBIS)[6] protocol is a master/slave synchronization protocol based on the receiver/receiver synchronization paradigm, like RBS. It is specifically tailored for IEEE Wi-Fi networks configured in infrastructure mode (i.e., coordinated by an access point). The protocol does not require any modification to the access point.

22 Global Positioning System
The Global Positioning System can also be used for clock synchronization. The accuracy of GPS time signals is ±10 ns and is second only to the atomic clocks upon which they are based.

23 How Do We Measure Time? Even with a "global" atomic clock, measuring time accurately is not as easy as one might think it should be. Algorithms based on the current time have been devised for use within a DS.

24 Lamport’s Logical Clocks
First point: if two processes do not interact, then their clocks do not need to be synchronized; they can operate concurrently without fear of interfering with each other. Second (critical) point: it does not matter whether two processes share a common notion of what the "real" current time is. What does matter is that the processes have some agreement on the order in which certain events occur. Lamport used these two observations to define the "happens-before" relation (also often referred to within the context of Lamport's timestamps).

25 The “Happens-Before” Relation (1)
If A and B are events in the same process, and A occurs before B, then we can state that: A “happens-before” B is true. Equally, if A is the event of a message being sent by one process, and B is the event of the same message being received by another process, then A “happens-before” B is also true. (Note that a message cannot be received before it is sent, since it takes a finite, nonzero amount of time to arrive … and, of course, time is not allowed to run backwards).

26 The “Happens-Before” Relation (2)
Obviously, if A "happens-before" B and B "happens-before" C, then it follows that A "happens-before" C. If the "happens-before" relation holds, deductions about the current clock "value" on each DS component can then be made. It therefore follows that if C(A) is the time of A, then C(A) < C(B), and so on. If two events at separate sites have the same timestamp, unique process IDs are used to break the tie.

27 Lamport’s Logical Clocks (1)
The "happens-before" relation → can be observed directly in two situations: If a and b are events in the same process, and a occurs before b, then a → b is true. If a is the event of a message being sent by one process, and b is the event of the message being received by another process, then a → b.

28 Lamport’s Logical Clocks (2)
(a) Three processes, each with its own clock. The clocks run at different rates.

29 Lamport’s Logical Clocks (3)
(b) Lamport’s algorithm corrects the clocks

30 Vector Clocks A vector clock is an algorithm for generating a partial ordering of events in a distributed system and detecting causality violations.
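A sketch of a vector clock for n processes: each process keeps one counter per process, increments its own slot on every event, and takes an element-wise maximum on receive. This is an illustrative toy implementation, not taken from the slides:

```python
# A minimal vector clock; comparing two vectors element-wise recovers the
# partial (causal) order between events.

class VectorClock:
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n

    def tick(self):
        self.clock[self.pid] += 1
        return list(self.clock)      # snapshot of the vector after the event

    def send(self):
        return self.tick()           # the message carries a copy of the vector

    def receive(self, msg_vector):
        # Element-wise max with the message's vector, then count the event.
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_vector)]
        return self.tick()

def happened_before(v, w):
    """True if the event stamped v causally precedes the event stamped w."""
    return all(a <= b for a, b in zip(v, w)) and v != w

p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
m = p0.send()                 # [1, 0]
e = p1.receive(m)             # [1, 1]
print(happened_before(m, e))  # True: the send precedes the receive
```

When neither `happened_before(v, w)` nor `happened_before(w, v)` holds, the two events are concurrent, which is exactly what a scalar Lamport clock cannot detect.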

31 Causality violation Causality (also referred to as 'causation', or 'cause and effect') is the relation that connects one process (the cause) with another (the effect), where the first is understood to be partly responsible for the second. In general, a process has many causes, which are said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of many other effects, which all lie in its future.

32 Vector Clocks (1)
Concurrent message transmission using logical clocks.

33 Mutual Exclusion within Distributed Systems
Mutual exclusion (ME) ensures that no two concurrent processes enter their CRITICAL SECTION at the same time. It is a basic requirement of concurrency control, needed to prevent RACE CONDITIONS. A CRITICAL SECTION is a period in which a process accesses a shared resource, such as shared memory. A RACE CONDITION is a situation that occurs when a device or system attempts to perform two or more operations at the same time.

34 Mutual Exclusion within Distributed Systems
It is often necessary to protect a shared resource within a distributed system using "mutual exclusion"; for example, it might be necessary to ensure that no other process changes a shared resource while another process is working with it.

35 DS Mutual Exclusion Techniques
Centralized: a single coordinator (process) controls whether a process can enter a critical region. Distributed: the group as a whole determines whether or not it is safe for a process to enter a critical section.

36 Mutual Exclusion A Centralized Algorithm (1)
(a) Process 1 asks the coordinator for permission to access a shared resource. Permission is granted.

37 Mutual Exclusion A Centralized Algorithm (2)
(b) Process 2 then asks permission to access the same resource. The coordinator does not reply.

38 Mutual Exclusion A Centralized Algorithm (3)
(c) When process 1 releases the resource, it tells the coordinator, which then replies to process 2.

39 Comments: The Centralized Algorithm
Advantages: It works. It is fair. There’s no process starvation. Easy to implement. Disadvantages: There’s a single point of failure! The coordinator is a bottleneck on busy systems. Critical Question: When there is no reply, does this mean that the coordinator is “dead” or just busy?

40 Distributed Mutual Exclusion
Based on work by Ricart and Agrawala (1981). Requirement of their solution: total ordering of all events in the distributed system (which is achievable with Lamport’s timestamps). Note that messages in their system contain three pieces of information: The critical region ID. The requesting process ID. The current time.

41 Mutual Exclusion: Distributed Algorithm
When a requesting process decides to enter a critical region, a message is sent to all processes in the distributed system (including itself). What happens at each process depends on the "state" of the critical region:
If not in the critical region (and not waiting to enter it), a process sends back an OK to the requesting process.
If in the critical region, a process will queue the request and will not send a reply to the requesting process.
If waiting to enter the critical region, a process will compare the timestamp of the new message with that in its queue (the lowest timestamp wins). If the received timestamp wins, an OK is sent back; otherwise the request is queued (and no reply is sent).
When all the processes have sent OK, the requesting process can safely enter the critical region. When the requesting process leaves the critical region, it sends an OK to all the processes in its queue, then empties its queue.

42 Distributed Algorithm (1)
Three different cases: Case 1: If the receiver is not accessing the resource and does not want to access it, it sends back an OK message to the sender. Case 2: If the receiver already has access to the resource, it simply does not reply. Instead, it queues the request. Case 3: If the receiver wants to access the resource as well but has not yet done so, it compares the timestamp of the incoming message with the one contained in the message that it has sent everyone. The lowest one wins.
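The three cases can be captured in a single decision function, as in the Ricart-Agrawala scheme. This is an illustrative sketch (state names and the function are made up); requests are (timestamp, pid) pairs so that Python's tuple comparison breaks timestamp ties by process ID, giving the required total order:

```python
# Per-process decision rule for an incoming REQUEST message.

RELEASED, WANTED, HELD = "released", "wanted", "held"

def on_request(state, my_request, incoming_request):
    """Return 'ok' to reply immediately, or 'queue' to defer the reply.

    state:            this process's state w.r.t. the critical region
    my_request,
    incoming_request: (timestamp, pid) tuples; lower compares as earlier
    """
    if state == RELEASED:
        return "ok"          # case 1: not using the resource, not wanting it
    if state == HELD:
        return "queue"       # case 2: currently holding the resource
    # case 3: we also want it; the lower (timestamp, pid) pair wins
    return "ok" if incoming_request < my_request else "queue"

print(on_request(RELEASED, None, (5, 2)))    # ok
print(on_request(WANTED, (3, 0), (5, 2)))    # queue: our request is earlier
print(on_request(WANTED, (5, 1), (5, 0)))    # ok: same timestamp, pid 0 wins
```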

43 Distributed Algorithm (2)
(a) Two processes want to access a shared resource at the same moment

44 Distributed Algorithm (3)
(b) Process 0 has the lowest timestamp, so it wins

45 Distributed Algorithm (4)
(c) When process 0 is done, it sends an OK also, so 2 can now go ahead

46 Comments: The Distributed Algorithm
The algorithm works because in the case of a conflict, the lowest timestamp wins as everyone agrees on the total ordering of the events in the distributed system. Advantages: It works. There is no single point of failure. Disadvantages: We now have multiple points of failure!!! A “crash” is interpreted as a denial of entry to a critical region. (A patch to the algorithm requires all messages to be ACKed). Worse is that all processes must maintain a list of the current processes in the group (and this can be tricky) Worse still is that one overworked process in the system can become a bottleneck to the entire system – so, everyone slows down.

47 A Token Ring Algorithm
(a) An unordered group of processes on a network. (b) A logical ring constructed in software.

48 Token-Ring Algorithm
Advantages: It works (as there's only one token, mutual exclusion is guaranteed). It's fair: everyone gets a shot at grabbing the token at some stage. Disadvantages: Lost token! How is the loss detected (is the token in use or is it lost)? How is the token regenerated? Process failure can cause problems (a broken ring!). Every process is required to maintain the current logical ring in memory, which is not easy.

49 Comparison: Mutual Exclusion Algorithms
Messages per entry/exit Delay before entry (in message times) Problems Centralized 3 2 Coordinator crash Distributed 2 ( n – 1 ) Crash of any process Token-Ring 1 to  0 to n – 1 Lost token, process crash None are perfect – they all have their problems! The “Centralized” algorithm is simple and efficient, but suffers from a single point-of-failure. The “Distributed” algorithm has nothing going for it – it is slow, complicated, inefficient of network bandwidth, and not very robust. It “sucks”! The “Token-Ring” algorithm suffers from the fact that it can sometimes take a long time to reenter a critical region having just exited it.

50 Election Algorithms Many Distributed Systems require a process to act as coordinator (for various reasons). The selection of this process can be performed automatically by an “election algorithm”. For simplicity, we assume the following: Processes each have a unique, positive identifier. All processes know all other process identifiers. The process with the highest valued identifier is duly elected coordinator. When an election “concludes”, a coordinator has been chosen and is known to all processes.

51 Goal of Election Algorithms
The goal of all election algorithms is to have all the processes in a group agree on a coordinator. There are two types of election algorithm: Bully: “the biggest guy in town wins”. Ring: a logical, cyclic grouping.

52 Election Algorithms: The Bully Algorithm
P sends an ELECTION message to all processes with higher numbers. If no one responds, P wins the election and becomes coordinator. If one of the higher-ups answers, it takes over; P's job is done.
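The rule above can be sketched as a tiny recursive simulation (illustrative; real implementations exchange messages and handle timeouts, which are elided here):

```python
# Bully election among live processes: the initiator contacts all
# higher-numbered processes; any live one takes over and holds its own
# election, so the highest-numbered live process always ends up winning.

def bully_election(initiator, alive):
    """Return the elected coordinator given the set of live process IDs."""
    higher = [p for p in alive if p > initiator]
    if not higher:
        return initiator             # nobody answered: the initiator wins
    # A live higher-numbered process answers and takes over the election.
    return bully_election(min(higher), alive)

# Process 4 starts an election; processes 1..6 are alive (the old
# coordinator, 7, has crashed).
print(bully_election(4, {1, 2, 3, 4, 5, 6}))  # 6
```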

53 The Bully Algorithm (1) The bully election algorithm: (a) Process 4 holds an election. (b) 5 and 6 respond, telling 4 to stop. (c) Now 5 and 6 each hold an election.

54 The Bully Algorithm (2) (d) Process 6 tells 5 to stop.
(e) Process 6 wins and tells everyone that it has won the election and has become coordinator.

55 The “Ring” Election Algorithm
The processes are ordered in a "logical ring", with each process knowing the identifier of its successor (and the identifiers of all the other processes in the ring). When a process "notices" that the coordinator has crashed, it creates an ELECTION message (which contains its own number) and starts circulating the message around the ring. Each process puts itself forward as a candidate for election by adding its number to this message. Eventually, the original process receives its message back (having circled the ring), determines who the new coordinator is, then circulates a COORDINATOR message with the result to every process in the ring. With the election over, all processes can get back to work.
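A toy simulation of one circulation of the ELECTION message (illustrative names; crashed processes are simply skipped, standing in for the "forward to the next live successor" rule):

```python
# Ring election: an ELECTION message travels once around the ring, each
# live process appends its ID, and the initiator picks the highest ID.

def ring_election(ring, initiator_idx, crashed=frozenset()):
    """Return the new coordinator after one trip around the ring."""
    n = len(ring)
    candidates = []
    idx = initiator_idx
    for _ in range(n):
        pid = ring[idx]
        if pid not in crashed:
            candidates.append(pid)   # each live process adds its own ID
        idx = (idx + 1) % n          # forward the message to the successor
    return max(candidates)           # highest candidate becomes coordinator

# Process 3 notices that coordinator 7 has crashed and starts the election.
print(ring_election([5, 3, 0, 7, 6], initiator_idx=1, crashed={7}))  # 6
```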

56 A Ring Algorithm
Election algorithm using a ring.

