Synchronizers


Outline
- motivation
- principles and definitions
- implementation rules and phases
- complexity measures
- synchronizer α – pulse time is optimal, message complexity is high
- synchronizer β – message complexity is optimal, pulse time is high

Motivation
- algorithms for synchronous networks are easier to design, analyze, and debug than algorithms for asynchronous networks
- idea: design the algorithm for a synchronous network, then transform it to run in an asynchronous one
- synchronizer – a general approach to this transformation; ASYNC denotes the asynchronous model used in this section
- synchronizers may be less efficient than coding ad hoc directly in ASYNC, but they turn out to be "pretty good": they yield solutions in ASYNC better than previously known ones

Principles
- given a synchronizer ν and a synchronous algorithm S, it is possible to combine them to yield an asynchronous program A = ν(S) that is equivalent to S
- A – the combined algorithm consists of two components, each with its own local variables and message types:
  - the original component, corresponding to S
  - the synchronization component, corresponding to ν
- pulse variable Pv – maintained by each process v; stores the serial number of the pulse being simulated (different processes may be simulating different pulses)
- pulse generation – the process of updating the pulse variable
- pulse compatibility – each original message is sent and received during the same pulse
- Lemma 6.1.3: a synchronizer that guarantees pulse compatibility is correct (correctness is defined in terms of equivalence of executions)

Implementation rules
- readiness property: a process is ready for the next pulse once it has received all original messages sent to it by its neighbors in the previous pulse
- readiness rule: a process generates the next pulse only when it has finished the original actions of the previous pulse and satisfies the readiness property
- the readiness rule prohibits receiving messages "from the past" but not "from the future"
- delaying rule: a process delays delivery of messages belonging to subsequent pulses
- Lemma 6.1.4: a synchronizer imposing the readiness and delaying rules guarantees pulse compatibility (from Lemma 6.1.2)
- Corollary 6.1.5: such a synchronizer is correct
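The two rules can be sketched as local bookkeeping at each process. The following is a minimal illustration (class and method names are mine, not from the source); original messages are assumed to carry the sender's pulse number, and `expected_per_pulse` abstracts away knowing how many messages the neighbors send each pulse.

```python
from collections import defaultdict

class PulsedProcess:
    """Sketch of the readiness and delaying rules for one process."""

    def __init__(self, expected_per_pulse):
        self.pulse = 0
        self.expected = expected_per_pulse
        self.buffer = defaultdict(list)   # future pulse -> delayed messages
        self.delivered = defaultdict(int) # pulse -> messages delivered

    def receive(self, sender, msg_pulse, payload):
        # Delaying rule: a message stamped with a future pulse is buffered,
        # not delivered, until this process generates that pulse.
        if msg_pulse > self.pulse:
            self.buffer[msg_pulse].append((sender, payload))
        else:
            self.deliver(sender, payload)
        # Messages "from the past" cannot arrive: the readiness rule keeps
        # the sender from advancing before its previous-pulse messages land.

    def deliver(self, sender, payload):
        self.delivered[self.pulse] += 1

    def ready_for_next_pulse(self):
        # Readiness property: all original messages of the current pulse
        # have been received.
        return self.delivered[self.pulse] >= self.expected

    def next_pulse(self):
        # Readiness rule: generate the next pulse only when ready.
        assert self.ready_for_next_pulse()
        self.pulse += 1
        # Release the messages that the delaying rule buffered for this pulse.
        for sender, payload in self.buffer.pop(self.pulse, []):
            self.deliver(sender, payload)
```

A message from a neighbor that has already moved one pulse ahead is simply held back and released when the receiver catches up.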

Implementation phases
- readiness is easy to guarantee if S has complete communication, i.e. each process sends a message to every neighbor at every pulse
- what to do with partial communication?
- Phase A: each process w sends its original messages, and the receivers acknowledge them; w is safe with respect to a pulse if it has received acks for all original messages it sent during this pulse
- Phase B: inform the neighbors that the process is safe
- Lemma 6.1.6: if every neighbor of w is safe with respect to a pulse, then w is ready for the next pulse
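The Phase A/B bookkeeping for a single pulse amounts to two counters per process. A minimal sketch, with illustrative names of my own choosing:

```python
class SafetyTracker:
    """Phase A/B state of one process w for one pulse (sketch)."""

    def __init__(self, neighbors):
        self.neighbors = set(neighbors)
        self.unacked = 0            # Phase A: original messages awaiting acks
        self.safe_neighbors = set() # Phase B: neighbors that reported SAFE

    def send_original(self, count):
        self.unacked += count

    def on_ack(self):
        self.unacked -= 1

    def is_safe(self):
        # Safe w.r.t. the pulse: every original message sent was acked.
        return self.unacked == 0

    def on_neighbor_safe(self, v):
        # Phase B message from neighbor v.
        self.safe_neighbors.add(v)

    def ready_for_next_pulse(self):
        # Lemma 6.1.6: once every neighbor is safe, all their original
        # messages of this pulse have arrived, so w is ready.
        return self.safe_neighbors == self.neighbors
```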

Alternative implementation phases
- notice that when a process w is ready for the next pulse, its neighbor v may be only safe, not ready: v is not ready because its own neighbor u is not yet safe
- hence when w sends original messages of the next pulse to v, v has to delay them due to the delaying rule
- alternatively, the pulse increase is divided into two stages:
  - a process may receive messages of the next pulse but not send them
  - a process may send messages of the next pulse
- a process is enabled if every one of its neighbors is ready
- enabling rule: a process is allowed to send original messages of a pulse only when it is enabled for this pulse
- Lemma 6.1.7: a synchronizer imposing the readiness and enabling rules is correct
- Phase C: inform each process when its neighbors are ready
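Using each process's pulse variable as a proxy for readiness, the enabling rule can be sketched in one predicate (an illustration of mine, not the source's notation):

```python
def may_send(pulse_of, neighbors, w):
    """Enabling rule (sketch): w may send original messages of its current
    pulse only when every neighbor is ready for that pulse, i.e. has
    advanced its own pulse variable at least as far as w's. A neighbor
    that lags may still *receive* next-pulse traffic without delaying it,
    but w must not send to it yet."""
    return all(pulse_of[v] >= pulse_of[w] for v in neighbors[w])
```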

Complexity measures
- consider a synchronizer ν
- ν needs to spend some resources before it starts running:
  - Timeinit(ν) – time requirements of the initialization
  - Messageinit(ν) – message requirements of the initialization
- ν imposes a certain overhead with respect to S at each pulse:
  - Timepulse(ν) – the earliest time by which all processes change to pulse p from pulse p-1
  - Messagepulse(ν) – message overhead incurred by ν in one pulse

Synchronizer  when process is safe – it sends messages to every neighbor Lemma 6.3.1  is correct complexity initialization can be done by the flooding algorithm Timeinit() = O(|E|) Messageinit() = O(Diam(G)) step Timepulse() = O(1) – from pulse to pulse it takes constant number of steps for each process Messagepulse() = O(|E|)

Synchronizer  there is a rooted spanning tree T convergecast safety information up the tree and broadcast it back Lemma 6.3.3  is correct complexity initialization can be done by setting up a BFS tree by distributed Bellman-Ford Timeinit() = O(n|E|) Messageinit() = O(Diam(G)) step – costs involved in a single convergecast and broadcast Timepulse() = O(Diam(G)) Messagepulse() = O(n)