Two absolute bounds for distributed bit complexity Yefim Dinitz, Noam Solomon, 2007 Presented by: Or Peri & Maya Shuster.


OUTLINE: Introduction (Model, Problems, Definitions and notation); 2-processor solutions & bounds; Chain solutions

Model. A chain of n+1 processors and n links. Each processor is assigned a unique ID from the set Z_M = {0, 1, …, M−1}, where |Z_M| = M ≥ 2. Each processor knows its incident links (up to 2). All processors run the same (deterministic) algorithm A(n, M).

Model. Initially, all processors are asleep and in the same initial state, except for their ID and the number of incident links.

Model. When a processor receives a message on an incident link, or when it wakes up spontaneously (before receiving the first message), it is activated.

Model. Upon activation, a processor processes its local information according to the algorithm, sends messages on its incident links (it may send zero or more messages on each link), may also output something, and then enters the waiting state.

Model. All of this is done immediately. We assume that the scheduler eventually wakes each terminal that was not awakened by a message earlier, and that no internal processor is awakened by the scheduler. For convenience of analysis, we consider a formal clock: it starts from moment 0, and each processor activation step is done at the next moment 1, 2, .... NOTE: this clock isn't shared or synchronized by the processors; it is merely for the analysis.

Model Each message consists of a single bit. Messages are assumed to arrive in a finite, yet unbounded time from the moment they are sent. There may be different executions beginning from the same initial state, due to scheduling.
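To make the activation semantics concrete, here is a minimal, hedged sketch of the model in Python (my own illustration, not code from the paper; the toy "forward a hello bit" behaviour exists only to have something runnable): a processor is activated spontaneously (terminals only) or by a single-bit message, may send bits on its incident links, and then waits again.

```python
from collections import deque

class ForwardingProcessor:
    """Toy behaviour: terminals send a 1-bit 'hello'; internal nodes forward it."""
    def __init__(self, pid, neighbours):
        self.pid = pid
        self.neighbours = neighbours      # one incident link for terminals, two otherwise

    def activate(self, src=None, bit=None):
        """Called on spontaneous wake-up (src is None) or on receiving a bit.
        Returns the list of (destination, bit) messages sent in this step."""
        if src is None:                   # spontaneous wake-up: only terminals get this
            return [(nb, 1) for nb in self.neighbours]
        print(f"processor {self.pid} received bit {bit} from {src}")
        return [(nb, bit) for nb in self.neighbours if nb != src]  # forward onwards

def run_chain(n):
    """Chain of n+1 processors 0..n; the scheduler wakes both terminals, then keeps
    delivering pending single-bit messages in some finite (here: FIFO) order."""
    procs = {i: ForwardingProcessor(i, [j for j in (i - 1, i + 1) if 0 <= j <= n])
             for i in range(n + 1)}
    pending = deque()
    for terminal in (0, n):
        for dst, b in procs[terminal].activate():
            pending.append((dst, terminal, b))
    while pending:
        dst, src, bit = pending.popleft()
        for d, b in procs[dst].activate(src=src, bit=bit):
            pending.append((d, dst, b))

run_chain(3)
```

The FIFO queue is just one admissible scheduler; the model allows any delivery order with finite delays, which is exactly why different executions can arise from the same initial state.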

Example (animation): a chain of three processors with IDs 2, 5, and 1. The processors wake up, exchange single-bit messages along the links, and each eventually outputs leader or non-leader, with exactly one leader elected.

Problem 1. The Leader problem requires that each processor output (decide on) a binary value, [1] "leader" / [0] "non-leader", so that there is exactly one leader. Besides, it is required that any non-leader learn which of its incident links is in the direction of the leader.

Problem 2. The MaxF version of Leader requires that the terminal with the maximal ID be the elected leader.

Definition. For a task T, Bit-C(T) is the number of bits that need to be sent to solve T, in the worst case. We express bounds for it in terms of the chain length n and the ID-domain size M. Clearly, Bit-C(MaxF) ≥ Bit-C(Leader).

Lower & Upper bound An upper bound must be valid for all inputs and all schedulers. To obtain a lower bound L, however, it is sufficient to show that, for any algorithm, there exists some input and some scheduler, such that the execution under that scheduler with that input requires at least L bits.
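In symbols (a hedged restatement of the slide; cost(A, I, S), the number of bits algorithm A sends on input I under scheduler S, is my notation, not the paper's):

```latex
\text{Upper bound } U:\quad \exists A\ \ \forall I,\ \forall S:\ \ \mathrm{cost}(A, I, S) \le U
\qquad\qquad
\text{Lower bound } L:\quad \forall A\ \ \exists I,\ \exists S:\ \ \mathrm{cost}(A, I, S) \ge L
```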

Definition - Termination Property. A distributed algorithm solving some task is said to have the termination property (or to be terminating) if each processor is eventually ensured that no more messages concerning this task will be sent by any processor and that there are no messages in transit, except, maybe, messages sent by itself. For a distributed task T, we denote by Bit-C_t(T) its bit complexity when the termination property is required. Clearly, Bit-C_t(T) ≥ Bit-C(T).

OUTLINE

2-processor solutions. Goal: solve MaxF for a chain of length 1, thus solving Leader as well.

Naïve solution: each processor sends its entire ID in binary, i.e. ⌈log₂ M⌉ bits (ID = b_{⌈log₂ M⌉−1} … b_1 b_0). Why is this enough? Can we do better?
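A hedged sketch of the naïve exchange (my own code, with illustrative names; it simply counts the bits that would be sent): each side sends all ⌈log₂ M⌉ bits of its ID, after which both sides know both IDs and can decide MaxF locally.

```python
import math

def id_bits(x, M):
    """The ID x written with ceil(log2 M) bits, most significant bit first."""
    k = math.ceil(math.log2(M))
    return [(x >> i) & 1 for i in reversed(range(k))]

def naive_two_processor(id_a, id_b, M):
    # Each side sends its whole ID; afterwards each side knows both IDs
    # and can decide MaxF (and hence Leader) locally, with no further bits.
    sent_by_a, sent_by_b = id_bits(id_a, M), id_bits(id_b, M)
    bits_sent = len(sent_by_a) + len(sent_by_b)
    return max(id_a, id_b), bits_sent          # MaxF: the larger ID is the leader

print(naive_two_processor(8, 10, M=11))        # -> (10, 8): 2 * ceil(log2 11) = 8 bits
```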

A Terminating Algorithm. For M=2, no bits are sent: the processor with ID=1 is the leader, the other is a non-leader. For M>2, a recursive algorithm: in each round we divide the ID range (initially [0..(M−1)]) into Lesser, Median, and Greater parts.

Round Description. Upon wake-up (spontaneous or by message): a Lesser processor sends 0, a Greater processor sends 1, and the Median waits.

Round Description. If a Lesser processor receives 1, it decides "I'm Non-Leader"; if a Greater processor receives 0, it decides "I'm the Leader".

Round Description. If the Median receives 1, it decides "I'm Non-Leader" (and replies 0); if it receives 0, it decides "I'm the Leader" (and replies 1).

Round Description. If a Lesser processor receives 0 or a Greater processor receives 1, both processors are in the same part: go to the next round on that sub-range.
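A hedged, centralized simulation of the two processors' joint behaviour over these rounds (my own reconstruction from the round description above; in particular, representing the ID range as a half-open interval [lo, hi) with median ⌊(lo+hi)/2⌋ is an assumption, chosen to match the worked example that follows):

```python
def terminating_maxf(a, b, M):
    """Simulate the terminating 2-processor algorithm for two distinct IDs a, b in [0, M);
    returns (leader_id, total_bits_sent)."""
    assert a != b
    if M == 2:                                 # special case from the slides: no bits sent
        return max(a, b), 0
    lo, hi, bits = 0, M, 0                     # current ID range, kept half-open as [lo, hi)
    while True:
        med = (lo + hi) // 2                   # assumed median of the current range
        role = lambda x: "median" if x == med else ("lesser" if x < med else "greater")
        ra, rb = role(a), role(b)
        bits += 2                              # every round exchanges exactly 2 bits
        if ra != rb:                           # median vs. other, or lesser vs. greater:
            return max(a, b), bits             # both sides can decide -> done
        # both lesser or both greater: recurse on the corresponding sub-range
        lo, hi = (lo, med) if ra == "lesser" else (med + 1, hi)

print(terminating_maxf(8, 10, 11))             # -> (10, 4), as in the worked example
```

Simulating both processors jointly keeps the sketch short; in the real protocol each side runs this logic independently and only sees the bits it receives.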

Example. Say M = 11, so Z_M = [0..10]; the two processors have ID = 8 and ID = 10.

Round I. ID = 8 computes ⌊M/2⌋ = ⌊11/2⌋ = 5 < 8, so "I'm greater" and sends 1; ID = 10 computes ⌊11/2⌋ = 5 < 10, so "I'm greater" and sends 1.

Round I. Each receives 1: "He's the same as me!" Go to the next round on the sub-range.

Round II ID = 8 ID = 10  Z M =[6..10] ⌊ /2 ⌋ = ⌊ (11+6)/2 ⌋ =8 = 8  I’m Median ⌊ /2 ⌋ = ⌊ (11+6)/2 ⌋ =8 < 10  I’m greater 1

Round II. ID = 8 receives 1: "He's greater than me!" It decides "I'm Non-Leader" and replies 0.

Round II. ID = 10 receives 0: "He's smaller than me!" It decides "I'm the Leader".

Example - Outcome ID = 8 ID = 10 Non-Leader He’s smaller than me!  I’m the Leader Elected Leader is the one with maximal ID. Non-leader knows it isn’t leader. The algorithm terminates under any scheduler Total Bits sent: 2*2=4

Example - Outcome ID = 8 ID = 10 Non-Leader He’s smaller than me!  I’m the Leader What could change under different scheduling?

Example - Outcome ID = 8 ID = 10 Non-Leader He’s smaller than me!  I’m the Leader Why must it terminate?

Worst-case Time Analysis. In each round 2 bits are sent, and the ID range, split into Lesser, Median, and Greater parts, shrinks to roughly half its size.
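As a hedged sketch of the recurrence behind this analysis (my reading of the slides, not their exact formula; the worst case is when both IDs keep falling into the larger of the Lesser/Greater parts):

```latex
B_t(m) \;\le\; 2 + B_t\!\left(\Big\lceil \tfrac{m-1}{2} \Big\rceil\right) \quad (m \ge 3),
\qquad B_t(2) \le 2,
\qquad\text{hence } B_t(M) = 2\log_2 M + O(1).
```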

This is roughly a 1-bit difference

OUTLINE

Another solution. In our distributed network model, messages are guaranteed to be delivered in some finite time. This means a sent message will arrive, but a processor cannot stop waiting if there is a chance that some message is still on its way.

The Algorithm. For M < 6, use the former algorithm (up to 4 bits used). For any M ≥ 6, divide the ID range [0..(M−1)] into Min, Lesser, Greater, and Max parts. In each round, 2 bits are sent; the last round is an exception: 4 bits are sent.

Round Description. Upon wake-up: a Lesser processor sends 0, a Greater processor sends 1; Min waits and decides "non-leader", Max waits and decides "leader".

Round Description. If a Lesser processor receives 0 or a Greater processor receives 1, both processors are in the same part: go to the next round on that sub-range.

Round Description. When Min or Max receives any bit (0/1), it responds with the opposite bit (1/0), as if to say: "we are different, let's finish".

Round Description. If a Lesser processor receives 1, it resends its last bit (0) to confirm intentions; if a Greater processor receives 0, it resends its last bit (1).

Round Description. When Min or Max receives a second bit, it responds with its "decision" bit (Min sends 0, Max sends 1), as if to say: "no matter what you are, my decision is the final one".

Round Description. When a Lesser or Greater processor then receives that bit: got '0' → decide Leader; got '1' → decide Non-Leader.

Correctness. If a Lesser processor got '0', the other processor also sent 0: go to the next round.

Correctness. If a Lesser processor got '1', this is the last round (it resends '0'). Either the second processor is Greater or Max, in which case the second bit will be '1' and the Lesser will decide Non-Leader; or the second processor is Min, in which case the second bit will be '0' and it will decide Leader.

Correctness. The Min element decides "Non-Leader" immediately. Either the other processor is Lesser: Min got '0', sent '1' ("we are different, let's finish"), will get '0' again, and then sends '0' ("I am Non-Leader"); or the other processor is Greater: Min got '1', sent '0', will get '1' again, and then sends '0' ("I am Non-Leader").

Round Description. But what if one is Min and one is Max? Both wait.

Round Description. The algorithm is still correct, since both decide immediately (Max: leader, Min: non-leader), but it will not terminate, as they can't tell when to stop waiting for a message.
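A hedged, centralized simulation of this algorithm in the same style (my own reconstruction; the exact 4-way split of the half-open range [lo, hi) into Min = lo, Max = hi−1 and two halves around ⌊(lo+hi)/2⌋ is an assumption, and for M < 6 the former terminating algorithm from the earlier sketch would be used instead):

```python
def nonterminating_maxf(a, b, M):
    """Simulate the 2-bit-per-round algorithm for two distinct IDs a, b in [0, M), M >= 6;
    returns (leader_id, total_bits_sent, terminates)."""
    assert a != b and M >= 6                   # for M < 6 the former algorithm is used
    lo, hi, bits = 0, M, 0                     # current ID range, kept half-open as [lo, hi)
    while True:
        min_v, max_v, mid = lo, hi - 1, (lo + hi) // 2
        role = lambda x: ("min" if x == min_v else
                          "max" if x == max_v else
                          "lesser" if x < mid else "greater")
        ra, rb = role(a), role(b)
        if {ra, rb} == {"min", "max"}:
            return max(a, b), bits, False      # both decide at once, but neither ever stops waiting
        if ra == rb:                           # both lesser or both greater: 2 bits, recurse
            bits += 2
            lo, hi = (lo + 1, mid) if ra == "lesser" else (mid, hi - 1)
            continue
        # Otherwise this is the last round: the initial bit, the opposite/confirmation bit,
        # and the final 'decision' exchange -- 4 bits in total.
        return max(a, b), bits + 4, True

print(nonterminating_maxf(8, 10, 11))          # -> (10, 4, True)
print(nonterminating_maxf(0, 10, 11))          # -> (10, 0, False): the Min-vs-Max case
```

The second print shows the Min-vs-Max case: the answer is already correct after 0 bits, but neither side ever learns that it may stop waiting.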

Worst-case Time Analysis. Similar to the previous algorithm: for M ≥ 6, the ID range is split each round into Min, Lesser, Greater, and Max parts, which gives an analogous recurrence relation. The only difference is the cost of the last round, and it isn't a problem since 2r ≥ 4.
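Again as a hedged sketch only (my reading of the split; the paper's exact recurrence and constants may differ): with the Min and Max endpoints peeled off before halving, intermediate rounds cost 2 bits each and the final round at most 4, giving roughly:

```latex
B(m) \;\le\; 2 + B\!\left(\Big\lceil \tfrac{m-2}{2} \Big\rceil\right) \ \text{for intermediate rounds},
\qquad \text{final round} \le 4 \text{ bits}.
```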