Complexity of Network Synchronization
Raeda Naamnieh

Introduction
The problem of simulating a synchronous network by an asynchronous network is investigated. We develop a general simulation technique, referred to as a synchronizer.

The problem
The model
– Asynchronous network
– Synchronous network
The goal

The asynchronous model
An asynchronous network is a graph (V, E), where the set of nodes V represents the processors of the network and the set of links E represents bidirectional, noninterfering communication channels operating between them. No common memory is shared by the nodes' processors, and each node has a distinct identity. Each node processes messages received from its neighbors, performs local computations, and sends messages to its neighbors.

The asynchronous model (cont.)
All these actions are assumed to be performed in negligible time. All messages have a fixed length and may carry only a bounded amount of information. Each message sent by a node to its neighbor arrives within some finite but unpredictable time.

The synchronous model
Messages are allowed to be sent only at integer times, or pulses, of a global clock. Each node has access to this clock. At most one message can be sent over a given link at a given pulse. The delay of each link is at most one time unit of the global clock.
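To make the asynchronous model concrete, here is a minimal sketch of an event-driven message-passing simulation in Python. The class names Network and Node are illustrative assumptions, not taken from the paper; the only property the sketch enforces is the asynchronous one: every message is eventually delivered, after an arbitrary finite delay.

```python
import heapq
import random

class Network:
    """Event-driven simulation of an asynchronous message-passing network.

    Messages are delivered after an arbitrary but finite delay, which models
    the asynchronous assumption: delivery is guaranteed, timing is not.
    """
    def __init__(self):
        self.nodes = {}      # node id -> Node
        self.events = []     # min-heap of (delivery_time, seq, dst, src, msg)
        self.time = 0.0
        self.seq = 0

    def add_node(self, node):
        self.nodes[node.ident] = node

    def send(self, src, dst, msg):
        delay = random.uniform(0.1, 5.0)   # unpredictable but finite delay
        self.seq += 1
        heapq.heappush(self.events, (self.time + delay, self.seq, dst, src, msg))

    def run(self):
        while self.events:
            self.time, _, dst, src, msg = heapq.heappop(self.events)
            self.nodes[dst].on_receive(src, msg)

class Node:
    """A processor: reacts to received messages with local computation and sends."""
    def __init__(self, ident, network, neighbors):
        self.ident = ident
        self.net = network
        self.neighbors = list(neighbors)

    def on_receive(self, src, msg):
        pass  # algorithm-specific behavior goes here
```

The later sketches in this transcript build on these two classes.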

Complexity
The communication complexity C is the total number of messages sent during the algorithm. The time complexity T:
– of a synchronous algorithm is the number of pulses passed from its starting time until its termination;
– of an asynchronous algorithm is the worst-case number of time units from the start to the completion of the algorithm.

The goal
Design an efficient synchronizer that enables any synchronous algorithm to run in any asynchronous network. For that purpose, the synchronizer generates a sequence of "clock pulses" at each node of the network, satisfying the following property: a new pulse is generated at a node only after it has received all the messages of the synchronous algorithm sent to that node by its neighbors at the previous pulses. This property ensures that the network behaves as a synchronous one from the point of view of the particular execution of the particular synchronous algorithm.

The goal (2)
The property may be achieved if additional messages are sent for the purpose of synchronization. The total complexity of the resulting algorithm depends on the overhead introduced by the synchronizer.

Total complexity
Let ν be a synchronizer.
– C(ν), T(ν): the communication and time requirements added by the synchronizer ν per each pulse of the synchronous algorithm, respectively.
– C_init(ν), T_init(ν): the communication and time complexities of the initialization phase of the synchronizer ν (if it exists), respectively.
– C(A), T(A) and C(S), T(S): the communication and time complexities of the original synchronous algorithm A and of the resulting asynchronous algorithm S, respectively.
The total complexity of the resulting algorithm is
C(S) = C(A) + T(A)·C(ν) + C_init(ν)
T(S) = T(A)·T(ν) + T_init(ν)
A synchronizer ν is "efficient" if all these parameters are "small enough."
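As a numeric illustration of these formulas (the concrete values below are made up for the example, not taken from the paper):

```python
def total_complexity(C_A, T_A, C_pulse, T_pulse, C_init=0, T_init=0):
    """Combine a synchronous algorithm's complexity with a synchronizer's overhead."""
    C_S = C_A + T_A * C_pulse + C_init   # messages of A + per-pulse overhead + init
    T_S = T_A * T_pulse + T_init         # each pulse is stretched by T_pulse time units
    return C_S, T_S

# Example: a synchronous algorithm with C(A)=1000 messages and T(A)=20 pulses,
# run under a synchronizer adding 500 messages and 3 time units per pulse.
print(total_complexity(1000, 20, 500, 3))   # -> (11000, 60)
```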

The solution
The Gamma (γ) synchronizer

The solution
OUTLINE OF A NUMBER OF SYNCHRONIZERS
– The concept of safety.
– Synchronizer α.
– Synchronizer β.
– Main result.
FORMAL DESCRIPTION OF SYNCHRONIZER γ.

Safety
A node is said to be safe with respect to a certain pulse if each message of the synchronous algorithm sent by that node at that pulse has already arrived at its destination (its neighbors). After execution of a certain pulse, each node eventually becomes safe (with respect to that pulse): an acknowledgment is sent back whenever a message of the algorithm is received from a neighbor, and each node may detect that it is safe whenever all its messages have been acknowledged. Observe that the acknowledgments do not increase the asymptotic communication complexity, since each acknowledgment corresponds to exactly one message of the algorithm.

[Safety illustration: a node sends its pulse messages ("msg") to its neighbors, collects an acknowledgment for each of them, and is then marked "safe".]

Safety
A new pulse may be generated at a node whenever it is guaranteed that no message sent at the previous pulses of the synchronous algorithm may arrive at that node in the future. Certainly, this is the case whenever all the neighbors of that node are known to be safe with respect to the previous pulse. It only remains to find a way to deliver this information to each node with small communication and time costs.
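The following sketch shows how a node can detect its own safety by counting outstanding acknowledgments. It extends the hypothetical Node class from the earlier model sketch; the message tags "MSG" and "ACK" and the hook names are illustrative assumptions.

```python
class SyncNode(Node):
    """A node that detects its own safety for the current pulse via acknowledgments."""
    def __init__(self, ident, network, neighbors):
        super().__init__(ident, network, neighbors)
        self.pending_acks = 0        # algorithm messages sent but not yet acknowledged
        self.finished_pulse = False  # has this node sent all of this pulse's messages?

    def send_algorithm_message(self, dst, payload):
        # Every message of the synchronous algorithm will be acknowledged by the receiver.
        self.pending_acks += 1
        self.net.send(self.ident, dst, ("MSG", payload))

    def finish_pulse(self):
        # Called by the simulated algorithm once all of this pulse's messages were sent.
        self.finished_pulse = True
        self.check_safe()

    def on_receive(self, src, msg):
        kind, body = msg
        if kind == "MSG":
            self.handle_algorithm_message(src, body)
            self.net.send(self.ident, src, ("ACK", None))  # acknowledge immediately
        elif kind == "ACK":
            self.pending_acks -= 1
            self.check_safe()

    def check_safe(self):
        # Safe w.r.t. the current pulse: every message sent at this pulse has arrived.
        if self.finished_pulse and self.pending_acks == 0:
            self.on_safe()

    def handle_algorithm_message(self, src, body):
        pass  # behavior of the simulated synchronous algorithm

    def on_safe(self):
        pass  # hook: report safety according to the chosen synchronizer
```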

Synchronizer α
Using the acknowledgment mechanism described above, each node eventually detects that it is safe and then reports this fact directly to all its neighbors. Whenever a node learns that all its neighbors are safe, a new pulse is generated. The complexities of Synchronizer α per pulse are C(α) = O(|E|) and T(α) = O(1), since one (additional) message is sent over each link in each direction and all the communication is performed between neighbors.
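A minimal sketch of this rule, built on the hypothetical SyncNode above; the "SAFE" message tag and the pulse hook are assumptions for illustration, and the sketch deliberately simplifies by letting a node also wait for its own safety, which can only delay pulses.

```python
from collections import defaultdict

class AlphaNode(SyncNode):
    """Synchronizer alpha: report own safety to all neighbors;
    generate the next pulse once all neighbors are known to be safe."""
    def __init__(self, ident, network, neighbors):
        super().__init__(ident, network, neighbors)
        self.safe_from = defaultdict(set)  # pulse -> neighbors reported safe for it
        self.self_safe = False
        self.pulse = 0

    def on_safe(self):
        self.self_safe = True
        # Report safety w.r.t. the current pulse directly to every neighbor.
        for nb in self.neighbors:
            self.net.send(self.ident, nb, ("SAFE", self.pulse))
        self.maybe_advance()

    def on_receive(self, src, msg):
        kind, body = msg
        if kind == "SAFE":
            self.safe_from[body].add(src)
            self.maybe_advance()
        else:
            super().on_receive(src, msg)

    def maybe_advance(self):
        # Simplification: also wait until this node is itself safe, which only
        # delays pulses and keeps the per-pulse bookkeeping simple.
        if self.self_safe and len(self.safe_from[self.pulse]) == len(self.neighbors):
            del self.safe_from[self.pulse]
            self.pulse += 1
            self.self_safe = False
            self.finished_pulse = False
            self.execute_pulse()  # algorithm-specific: send this pulse's messages

    def execute_pulse(self):
        pass
```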

Synchronizer β
This synchronizer needs an initialization phase, in which a leader s is chosen in the network and a spanning tree of the network, rooted at s, is constructed.

Synchronizer β
Synchronizer β itself operates as follows (see the sketch below):
– After execution of a certain pulse, the leader will eventually learn that all the nodes in the network are safe.
– At that time it broadcasts a certain message along the tree, notifying all the nodes that they may generate a new pulse.
– The above time is detected by means of a certain communication pattern, referred to in the following as convergecast, that is started at the leaves of the tree and terminates at the root.
– Namely, whenever a node learns that it is safe and all its descendants in the tree are safe, it reports this fact to its father.
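A rough sketch of Synchronizer β's convergecast/broadcast logic, again on top of the hypothetical SyncNode. The tree structure (parent, children) is assumed to come from the initialization phase, and the message tags SAFE and PULSE are illustrative.

```python
class BetaNode(SyncNode):
    """Synchronizer beta: SAFE reports are convergecast up a rooted spanning tree;
    the root then broadcasts PULSE down the tree to start the next pulse."""
    def __init__(self, ident, network, neighbors, parent, children):
        super().__init__(ident, network, neighbors)
        self.parent = parent          # None at the leader/root
        self.children = list(children)
        self.safe_children = set()
        self.self_safe = False

    def on_safe(self):
        self.self_safe = True
        self.maybe_report()

    def on_receive(self, src, msg):
        kind, _ = msg
        if kind == "SAFE":            # convergecast: a whole subtree is safe
            self.safe_children.add(src)
            self.maybe_report()
        elif kind == "PULSE":         # broadcast: permission to start the next pulse
            self.start_next_pulse()
        else:
            super().on_receive(src, msg)

    def maybe_report(self):
        # This node and all its descendants are safe w.r.t. the current pulse.
        if self.self_safe and self.safe_children == set(self.children):
            if self.parent is None:
                self.start_next_pulse()            # root: the whole network is safe
            else:
                self.net.send(self.ident, self.parent, ("SAFE", None))

    def start_next_pulse(self):
        for child in self.children:
            self.net.send(self.ident, child, ("PULSE", None))
        self.self_safe = False
        self.safe_children.clear()
        self.finished_pulse = False
        self.execute_pulse()          # algorithm-specific: send this pulse's messages

    def execute_pulse(self):
        pass
```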

Synchronizer β
The complexities of Synchronizer β per pulse are C(β) = O(|V|) and T(β) = O(|V|), because all of the process is performed along the spanning tree. Actually, the time is proportional only to the height of this tree, which may reach |V| in the worst case.

Synchronizer γ
This synchronizer needs an initialization phase, in which the network is partitioned into clusters. The partition is defined by any spanning forest of the communication graph (V, E) of the network. Each tree of the forest defines a cluster of nodes and will be referred to as an intra-cluster tree. Between each two neighboring clusters, one preferred link is chosen, which will serve for communication between these clusters. Inside each cluster, a leader is chosen, which will coordinate the operations of the cluster via the intra-cluster tree.

Synchronizer γ
We say that a cluster is safe if all its nodes are known to be safe.

Synchronizer γ
Synchronizer γ is performed in two phases. In the first phase, Synchronizer β is applied separately in each cluster along the intra-cluster trees. Whenever the leader of a cluster learns that its cluster is safe, it reports this fact to all the nodes in the cluster as well as to all the leaders of the neighboring clusters. Now the nodes of the cluster enter the second phase, in which they wait until all the neighboring clusters are known to be safe and then generate the next pulse (as if Synchronizer α were applied among clusters).

More detailed description
In order to start a new pulse, a cluster leader broadcasts a PULSE message along the tree, which triggers the nodes that receive it to enter the new pulse. After terminating its part in the algorithm, a node enters the first phase of the synchronizer, in which SAFE messages are convergecast along each intra-cluster tree, as in Synchronizer β. This process is started by the leaves, which send SAFE messages to their fathers whenever they detect that they are safe.

More detailed description
Whenever a nonleaf node detects that it is safe and has received the message SAFE from each of its sons, then, if it is not itself the leader, it sends SAFE to its father. Otherwise, if it is the leader, it learns that its cluster is safe and reports this fact to all neighboring clusters by starting the broadcast of a CLUSTER_SAFE message. Each node forwards this message to all its sons and along all incident preferred links.

More detailed description
Now the nodes of the cluster enter the second phase. In order to determine the time at which all the neighboring clusters are known to be safe, a standard convergecast process is performed: a node sends a READY message to its father whenever all the clusters neighboring it or any of its descendants are known to be safe. This situation is detected by a node whenever it receives READY messages from all its sons and CLUSTER_SAFE messages from all the incident preferred links.

More detailed description
This process is started at the leaves and is finished whenever the above conditions are satisfied at the leader of the cluster. At that time, the leader of the cluster knows that all the neighboring clusters as well as its own cluster are safe. Now it informs the nodes of its cluster that they can generate the next pulse by starting the broadcast of the PULSE message.
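Putting the two phases together, here is a rough sketch of the per-node logic of Synchronizer γ, again on top of the hypothetical SyncNode. The cluster structure (tree father and sons, whether the node is the cluster leader, and the endpoints of its incident preferred links) is assumed to be given by the partition computed in the initialization phase, and the message tags mirror those used above.

```python
class GammaNode(SyncNode):
    """Synchronizer gamma: beta inside each cluster, alpha among clusters.

    Phase 1: SAFE is convergecast along the intra-cluster tree; the cluster
             leader then broadcasts CLUSTER_SAFE down the tree and over the
             incident preferred links.
    Phase 2: READY is convergecast once the own cluster and all clusters
             neighboring a node or its descendants are known to be safe;
             the leader then broadcasts PULSE to start the next pulse.
    """
    def __init__(self, ident, network, neighbors, parent, children, preferred, is_leader):
        super().__init__(ident, network, neighbors)
        self.parent = parent              # tree father (None for the cluster leader)
        self.children = list(children)    # tree sons
        self.preferred = list(preferred)  # endpoints of incident preferred links
        self.is_leader = is_leader
        self.reset_round()

    def reset_round(self):
        self.self_safe = False            # this node is safe w.r.t. the current pulse
        self.own_cluster_safe = False     # CLUSTER_SAFE of the own cluster seen
        self.safe_sons = set()            # sons whose whole subtree is safe
        self.ready_sons = set()           # sons whose subtree saw all neighbor clusters safe
        self.safe_preferred = set()       # preferred links that delivered CLUSTER_SAFE

    def on_safe(self):
        self.self_safe = True
        self.try_report_safe()

    def on_receive(self, src, msg):
        kind, _ = msg
        if kind == "SAFE":
            self.safe_sons.add(src)
            self.try_report_safe()
        elif kind == "CLUSTER_SAFE":
            if src in self.preferred:     # a neighboring cluster is safe
                self.safe_preferred.add(src)
                self.try_report_ready()
            else:                         # own cluster is safe: forward the broadcast
                self.forward_cluster_safe()
        elif kind == "READY":
            self.ready_sons.add(src)
            self.try_report_ready()
        elif kind == "PULSE":
            self.start_next_pulse()
        else:
            super().on_receive(src, msg)

    # ---- phase 1: intra-cluster convergecast of SAFE (as in synchronizer beta) ----
    def try_report_safe(self):
        if self.self_safe and self.safe_sons == set(self.children):
            if self.is_leader:
                self.forward_cluster_safe()   # the whole cluster is safe
            else:
                self.net.send(self.ident, self.parent, ("SAFE", None))

    def forward_cluster_safe(self):
        self.own_cluster_safe = True
        for dst in self.children + self.preferred:
            self.net.send(self.ident, dst, ("CLUSTER_SAFE", None))
        self.try_report_ready()

    # ---- phase 2: convergecast of READY, then broadcast of PULSE ----
    def try_report_ready(self):
        if (self.own_cluster_safe
                and self.ready_sons == set(self.children)
                and self.safe_preferred == set(self.preferred)):
            if self.is_leader:
                self.start_next_pulse()       # own and all neighboring clusters safe
            else:
                self.net.send(self.ident, self.parent, ("READY", None))

    def start_next_pulse(self):
        for child in self.children:
            self.net.send(self.ident, child, ("PULSE", None))
        self.reset_round()
        self.finished_pulse = False
        self.execute_pulse()                  # algorithm-specific behavior

    def execute_pulse(self):
        pass
```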

Complexity
Denote by E_P the set of all the tree links and all the preferred links in a partition P. Denote by H_P the maximum height of a tree in the forest of P.

Complexity
It is easy to see that at most four messages of the synchronizer are sent over each link of E_P; thus C(γ) = O(|E_P|).

Complexity
It requires O(H_P) time for each cluster to verify that it is safe and O(H_P) additional time to verify that all the neighboring clusters are safe; thus T(γ) = O(H_P).

Complexity
Observe that the above parameters depend only on the structure of the forest. It does not really matter how the preferred links are chosen in the partition, since their total number equals the total number of pairs of neighboring trees. Using the partition algorithm of the next section, we achieve:
– |E_P| = O(k·|V|) and H_P = O(log_k |V|), i.e., C(γ) = O(k·|V|) and T(γ) = O(log_k |V|).
– Here, k is a parameter of the partition algorithm and may be chosen arbitrarily in the range 2 ≤ k < |V|.

Complexity
Using the partition algorithm of the next section, with the parameter k chosen in the range 2 ≤ k < |V|: by increasing k from 2 to |V|, C(γ) increases from O(|V|) to O(|V|²), while T(γ) decreases from O(log |V|) to O(1).

Complexity
The particular choice of k is up to the user and depends on the relative importance of saving communication and time in a particular network. This choice may also depend on the topology of the network, since, in fact, no matter which partition is used, C(γ) = O(|E|) and also T(γ) = O(D) (D is the diameter of the network), provided that each intra-cluster tree is a BFS tree with respect to some node. For example, in a sparse network, where |E| = O(|V|), we choose k = |V|, while in a full network, where D = O(1), we choose k = 2. This is because in a sparse (full) network, communication (time) is small anyway.
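The trade-off can be seen numerically. The snippet below is only illustrative: it plugs sample values into the asymptotic expressions k·|V| and log_k |V| from the previous slides, ignoring constant factors.

```python
import math

def gamma_overheads(n, k):
    """Illustrative per-pulse overheads of synchronizer gamma (constants ignored):
    communication ~ k*n messages, time ~ log_k(n) time units."""
    return k * n, math.log(n) / math.log(k)

n = 1024
for k in (2, 4, 32, n):
    comm, time = gamma_overheads(n, k)
    print(f"k={k:5d}: ~{comm:8d} messages/pulse, ~{time:5.1f} time units/pulse")
```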

FORMAL DESCRIPTION OF SYNCHRONIZER γ