
1 ON THE COMPLEXITY OF ASYNCHRONOUS GOSSIP
C. Georgiou, S. Gilbert, R. Guerraoui, D. R. Kowalski
Presented by: Tamar Aizikowitz, Spring 2009

2 Introduction
In previous lectures, we considered the problem of gossip in synchronous systems. It is common to argue that distributed applications are generally synchronous. However, sometimes:
- delay bounds are not known;
- known bounds may be overly conservative.
Today, we consider gossip in asynchronous systems: no a priori bounds on message delay or on relative processor speeds.

3 Outline
- Model Definition and Assumptions
- Lower Bound and the Cost of Asynchrony
- Asynchronous Gossip Algorithms: EARS, SEARS, TEARS
- Application to Randomized Consensus

4 Model Definition and Assumptions

5 Model Definitions
- Asynchronous message-passing.
- Fixed set of n processes with known unique identifiers: [n] = {1,…,n}.
- Direct communication between all processes; the physical communication network is ignored.
- Up to f < n crash failures.
- No lost or corrupted messages.

6 Timing Assumptions
Time proceeds in discrete steps. In each step, some subset of the processes is scheduled to take a local step. In a local step, a process receives pending messages, performs local computation, and sends messages. As long as a process has not crashed, it will eventually be scheduled for a local step.

7 Timing Bounds
For a given execution, we define bounds on delays:
- d = maximum message delivery time: if p sent m to q at time t, then q receives m no later than time t + d (assuming q has not crashed). Models communication delay.
- δ = maximum step size: every δ time steps, every non-crashed process is scheduled at least once. Models relative processor speeds.
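As a concrete illustration of these two bounds (ours, not from the paper), here is a minimal Python sketch of a discrete-time scheduler: the adversary picks who takes a step and when messages arrive, but every live process runs at least once every delta steps, and every message is delivered within d time units. All names are our own.

import random

def simulate(n, d, delta, steps, on_step):
    """Toy asynchronous scheduler enforcing the d / delta bounds.
    on_step(p, inbox) performs one local step of process p and
    returns a list of (destination, message) pairs to send."""
    inbox = {p: [] for p in range(n)}   # messages already delivered
    in_flight = []                      # (deadline, destination, message)
    last_step = {p: 0 for p in range(n)}
    for t in range(1, steps + 1):
        # adversary schedules an arbitrary subset of processes...
        scheduled = {p for p in range(n) if random.random() < 0.5}
        # ...but the delta bound forces overdue processes to run
        scheduled |= {p for p in range(n) if t - last_step[p] >= delta}
        # the d bound forces delivery of overdue messages
        for _, dest, msg in [m for m in in_flight if m[0] <= t]:
            inbox[dest].append(msg)
        in_flight = [m for m in in_flight if m[0] > t]
        for p in scheduled:
            last_step[p] = t
            for dest, msg in on_step(p, inbox[p]):
                in_flight.append((t + random.randint(1, d), dest, msg))
            inbox[p] = []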

8 Gossip
Every process has a rumor it wants to spread. A gossip protocol must satisfy:
- Rumor gathering: eventually, every correct process has collected the rumors of all other correct processes.
- Validity: any rumor added to a process's collection must be the initial rumor of some process.
- Quiescence: eventually, every process stops sending messages forever.

9 Gossip Continued…
Gossip completes when each correct process has received the rumors of all other correct processes and has stopped sending messages, and all other processes have crashed.
Note: in an asynchronous system, a process can never terminate, since it cannot be sure that it has received all messages. It can, however, stop sending messages.

10 Complexity Measures
Let A be an asynchronous gossip algorithm. A has time complexity T_as(d,δ) and message complexity M_as(d,δ) if, for every infinite execution in which the bounds d and δ hold:
- every correct process completes by expected time T_as, and
- the total number of messages sent is M_as.
If d = δ = 1 are known a priori to the algorithm, then A is synchronous, and T_s, M_s are defined analogously.

11 Adversary Models
We consider two adversary models:
- Adaptive adversary: schedules processes, message deliveries, and crashes dynamically during the computation; determines the bounds d and δ; knows the distribution of the algorithm's random choices.
- Oblivious adversary: determines the entire schedule beforehand.

12 Lower Bound: The Cost of Asynchrony

13 Background
Best results for synchronous gossip: O(polylog n) time and O(n polylog n) messages [B.S. Chlebus, D.R. Kowalski, Time and Communication Efficient Consensus for Crash Failures; to be presented next week].
Trivial algorithm for asynchronous gossip (every process sends its rumor to everyone): O(d+δ) time, Θ(n²) messages.

14 Lower Bound
Theorem 1: For every gossip algorithm A, there exist d, δ ≥ 1 and an adaptive adversary causing up to f < n failures such that, in expectation, either:
- M_as(d,δ) = Ω(n + f²), or
- T_as(d,δ) = Ω(f(d+δ)).
In other words: no randomized asynchronous gossip protocol can be both time and message efficient against an adaptive adversary (efficient = w.r.t. the best known synchronous protocol).

15 Adversary Strategy
Main idea: there are two types of gossiping techniques:
- Send to many: message inefficient.
- Send to few: time inefficient.

16 Proof of Lower Bound
The Ω(n) lower bound on the number of messages is straightforward: every process needs to send its rumor to at least one other process. Therefore, it remains to show Ω(f²) for the number of messages or Ω(f(d+δ)) for the time.

17 Divide and Conquer
Set f = min{f, n/4}. Partition [n] into two sets, S1 and S2, with |S1| = n − f/2 and |S2| = f/2. Execute the set S1 with d = δ = 1 until all processes in S1 complete and cease to send messages.

18 Choose Adversary Strategy
Let t be the time at which S1 completes.
If t > f: fail all processes in S2. Gossip is complete at time t; since d = δ = 1 and t > f, we get t = Ω(f(d+δ)).
If t ≤ f: check whether most processes in S2 send many messages or few messages, and apply the appropriate adversarial strategy:
- Many messages ⇒ M_as(d,δ) = Ω(f²).
- Few messages ⇒ T_as(d,δ) = Ω(f(d+δ)).

19 Examine S2
For each p in S2, simulate the following: p receives all messages sent to it from S1, and then p executes f/2 isolated steps, i.e., steps in which it doesn't receive any messages. Call p promiscuous if, in expectation, p sends at least f/32 messages. Let P ⊆ S2 denote the set of promiscuous processes.

20 S2 Mostly Promiscuous
Case 1: |P| ≥ f/4 (most processes are promiscuous).
- At time t, deliver all messages from S1 to S2.
- Schedule all processes from S2 in each of the next f/2 time steps ⇒ δ = 1.
- Do not deliver any messages ⇒ d > f/2.
- All processes in S2 have now taken f/2 isolated steps, and in expectation each process in P sends at least f/32 messages ⇒ M_as(d,δ) = Ω(f/4 · f/32) = Ω(f²).

21 S2 Mostly Non-promiscuous
Case 2: |P| < f/4 (most processes are not promiscuous). Let NonP = S2 − P, i.e., the non-promiscuous processes.
Main idea: find two processes in NonP with a constant probability of not communicating directly, and make sure they don't communicate for a long time.

22 Finding Two Disconnected Processes
We need to find two processes with a constant probability of not communicating directly. Let N(p) = the set of all processes q such that p sends a message to q with probability < 1/4 during its f/2 isolated steps.
For p ∈ NonP, the number of processes not in N(p) is less than f/8. Otherwise, p sends a message with probability ≥ 1/4 to at least f/8 processes, so p sends at least f/32 messages in expectation, contradicting p ∈ NonP.

23 Finding Two Disconnected Processes
Claim: For p ∈ NonP, many of the processes in N(p) are themselves in NonP: |N(p) ∩ NonP| ≥ f/8. This holds since |NonP| ≥ f/4 while fewer than f/8 processes lie outside N(p).

24 Finding Two Disconnected Processes
Consider the following directed graph:
- Nodes: the processes of NonP, so at least f/4 nodes.
- Edges: p → q if q ∈ N(p); each p has at least f/8 outgoing edges.
Total: at least f/8 · f/4 = f²/32 edges, while there are only (f/4 choose 2) = (f/4)(f/4 − 1)/2 = f²/32 − f/8 unordered pairs of nodes. Hence some pair carries an edge in both directions: there exist p, q such that p ∈ N(q) and q ∈ N(p), and so p and q have a constant probability of not communicating.
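A quick numeric sanity check of this pigeonhole count (our illustration, for a concrete f divisible by 8):

f = 64
nodes = f // 4                    # |NonP| >= f/4 = 16 nodes
edges = nodes * (f // 8)          # >= f^2/32 = 128 outgoing edges in total
pairs = nodes * (nodes - 1) // 2  # f^2/32 - f/8 = 120 unordered pairs
assert edges > pairs              # so some pair gets an edge in both directions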

25 Isolating Two Processes
At time t, fail all processes in S2 except p and q. Execute p and q for f/2 local steps with d = 1. Fail every process in S1 that p or q sends a message to.

26 Isolating Two Processes Continued…
Pr[p and q do not communicate] ≥ (1 − 1/4)(1 − 1/4) = 9/16. Since all processes in S1 that receive messages are failed, p and q are isolated with probability at least 9/16.
By Markov's inequality, each of p and q sends fewer than f/8 messages with probability at least 3/4: Pr[X ≥ f/8] ≤ (f/32)/(f/8) = 1/4. Hence, with probability at least 9/16, p and q together send at most f/4 messages.
The number of failed processes is at most f/4 + f/2 − 2 = 3f/4 − 2 < f.

27 Proof of Lower Bound: Completion
By a union bound, the probability that p and q do not communicate and that they send no more than f/4 messages is at least 1 − (7/16 + 7/16) = 1/8. In this case, gossip is not complete after the f/2 local steps, as p and q do not know each other's rumor. Since d = 1 and consecutive local steps can be δ time apart, p and q run for time at least (d + δ)f/2 with probability at least 1/8. In expectation, T_as(d,δ) = Ω(f(d+δ)).

28 Cost of Asynchrony
Consider the worst-case ratio between asynchronous algorithms and synchronous ones:
- Cost_T = T_as / min T_s
- Cost_M = M_as / min M_s
Based on Theorem 1, we have Cost_T = Ω(f) and Cost_M = Ω(1 + f²/n).
Note: for f = Θ(n), we get either a Θ(n) slowdown or a Θ(n) increase in the number of messages.

29 Gossip Algorithms: EARS, SEARS, TEARS

30 Epidemic Asynchronous Rumor Spreading (EARS)
Each process p maintains the following data:
- r_p = the rumor of process p
- V_p = the set of all rumors known to p
- I_p = a set of pairs (r,q) such that p knows r was sent to q
- L_p = { q | ∃r ∈ V_p with (r,q) ∉ I_p }, i.e., the processes that may still be missing some rumor
Main idea: send V_p and I_p to a random process; update V_p and I_p according to messages received; use L_p to know when to go to sleep.

31 EARS(r_p)
Init: V_p ← {r_p}; I_p ← Ø; L_p ← [n]; sleep_cnt ← 0
repeat:
  for every message m = ⟨V, I⟩ received do
    V_p ← V_p ∪ m.V; I_p ← I_p ∪ m.I
    update L_p based on V_p and I_p
  if L_p = Ø then sleep_cnt++ else sleep_cnt ← 0
  if sleep_cnt < Θ((n/(n−f)) log n) then
    choose q uniformly at random from [n]
    send m = ⟨V_p, I_p⟩ to q
    for every r in V_p do I_p ← I_p ∪ {(r,q)}
    update L_p based on V_p and I_p
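A rough, runnable Python rendering of one EARS process (our sketch, not the authors' code; threshold stands in for the Θ((n/(n−f)) log n) shut-down bound above):

import random
from collections import namedtuple

Msg = namedtuple("Msg", ["V", "I"])   # a gossip message: rumors + delivery pairs

class EarsProcess:
    def __init__(self, p, rumor, n, threshold):
        self.p, self.n, self.threshold = p, n, threshold
        self.V = {rumor}              # V_p: rumors known to p
        self.I = set()                # I_p: (rumor, process) pairs p knows were sent
        self.sleep_cnt = 0

    def L(self):                      # L_p: processes possibly missing some rumor
        return {q for q in range(self.n)
                if any((r, q) not in self.I for r in self.V)}

    def step(self, received):
        """One local step: merge received messages, then either gossip to a
        random process or stay quiet once L_p has been empty long enough."""
        for m in received:
            self.V |= m.V
            self.I |= m.I
        self.sleep_cnt = self.sleep_cnt + 1 if not self.L() else 0
        if self.sleep_cnt >= self.threshold:
            return None               # quiescent: stop sending
        q = random.randrange(self.n)
        self.I |= {(r, q) for r in self.V}
        return q, Msg(frozenset(self.V), frozenset(self.I))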

32 EARS Analysis
Rumor gathering: every correct process eventually takes a local step and sends its rumor to another process. Every process that receives this rumor continues spreading it until it knows that all processes have received it. Eventually, w.h.p., every process has received the rumor.
Validity: only original r_p values are gossiped.
Quiescence: after all processes have gathered all the rumors, all the L_p sets will be empty, and eventually, w.h.p., all processes go to sleep.

33 EARS Analysis Continued…
Theorem 6: Algorithm EARS completes gossip w.h.p. under an oblivious adversary with:
- Time complexity: O((n/(n−f)) log²n (d+δ))
- Message complexity: O(n log³n (d+δ))
Note: for small f and d = δ = 1, this is comparable to the best synchronous algorithm: O(log²n) time and O(n log³n) messages.

34 Spamming EARS (SEARS)
Same as EARS, except that each message is sent to Θ(n^ε log n) processes, and there is only one shut-down step.
Theorem 7: For every constant ε < 1, algorithm SEARS has, w.h.p.:
- Time complexity: O((n/(ε(n−f))) (d+δ))
- Message complexity: O((n^(2+ε)/(ε(n−f))) log n (d+δ))
Note: for f < n/2, the running time is constant w.r.t. n.
Intuition: send more messages each round to save time, but pay with a higher message complexity.
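The only change to EARS's sending step can be sketched as follows (our illustration; the constant in front of n^ε log n is our choice, not the paper's):

import math, random

def sears_targets(n, eps):
    """SEARS fan-out: a uniformly random set of Theta(n^eps * log n)
    targets instead of EARS's single random target."""
    k = min(n, math.ceil(n ** eps * math.log(n)))
    return random.sample(range(n), k)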

35 Two-hop EARS (TEARS)
Majority gossip: each correct process receives at least a majority of the rumors. Assumption: f < n/2. Majority gossip is useful for applications such as Consensus.
Main idea: a two-phase algorithm:
- Phase 1: send your rumor to a set of processes.
- Phase 2: once a certain number of phase 1 messages have been received, send all known rumors to a set of processes.

36 TEARS(r_p) Sketch
Init: a ← 4·n^(1/2)·log n; V_p ← {r_p}; first_cnt ← 0
For every q: put q in set1 with probability a/n; independently, put q in set2 with probability a/n
for every q in set1 do send m = ⟨V_p, first⟩ to q
for every m received do
  V_p ← V_p ∪ m.V
  if m.flag = first then first_cnt++
  if pred(first_cnt) then  // check the number of first-level messages
    for every q in set2 do send m = ⟨V_p, second⟩ to q
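A minimal Python sketch of one TEARS process following the slide above (our rendering; the predicate pred() is not spelled out here, so the simple threshold first_cnt = ⌈a⌉ below is an assumption, not the paper's definition):

import math, random

class TearsProcess:
    def __init__(self, rumor, n):
        self.n = n
        self.a = 4 * math.sqrt(n) * math.log(n)
        self.V = {rumor}
        self.first_cnt = 0
        # each process joins set1 / set2 independently with probability a/n
        self.set1 = [q for q in range(n) if random.random() < self.a / n]
        self.set2 = [q for q in range(n) if random.random() < self.a / n]

    def start(self):
        """Phase 1: send the rumor to set1, flagged as a first-level message."""
        return [(q, (frozenset(self.V), "first")) for q in self.set1]

    def receive(self, V, flag):
        """Merge a message; once enough first-level messages arrive (pred),
        send everything known to set2 as second-level messages."""
        self.V |= V
        out = []
        if flag == "first":
            self.first_cnt += 1
            if self.first_cnt == math.ceil(self.a):   # pred(first_cnt), assumed
                out = [(q, (frozenset(self.V), "second")) for q in self.set2]
        return out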

37 TEARS Correctness
Best-case analysis: the sets set1 and set2 have size a in expectation. Therefore:
- Each process sends its rumor, in expectation, to a processes in the first phase.
- Every process eventually receives a first-level messages carrying processes' rumors.
- If every process receives all a first-level messages before sending its second-level messages, then it sends a second-level messages, each carrying a rumors.
- Every process then receives a² = 16n·log²n > n/2 rumors.

38 TEARS Correctness Continued…
Worst-case analysis: using a Chernoff bound, it can be shown that, w.h.p.:
- A sufficient number of rumors reach a sufficient number of processes in first-level messages before those processes finish their second phase. These are called well-distributed rumors.
- These rumors are then sent by enough processes in second-phase messages.
Therefore, w.h.p., each process receives enough additional rumors to complement the well-distributed ones, reaching at least a majority.

39 TEARS Analysis
Theorem 12: Algorithm TEARS completes majority gossip w.h.p. under an oblivious adversary with:
- Time complexity: O(d+δ)
- Message complexity: O(n^(7/4) log²n)
Proof of time complexity: by time δ, all first-level messages have been sent; by time δ+d, they have all arrived; by time 2δ+d, all second-level messages have been sent; by time 2δ+2d, they have all arrived. Hence gossip completes in O(d+δ).

40 Randomized Consensus

41 The Consensus Problem
There are n processes, each with an initial value v_p. Each process must choose an output value d_p satisfying:
- Agreement: all output values are the same.
- Validity: every output value is v_p for some p.
- Termination: every process eventually decides and outputs a value, w.h.p. (preferably with probability 1).
Recall: non-randomized consensus with even one crash failure is impossible.

42 The Rabin-Canetti Framework
Initially: r ← 1 and prefer ← v_p
while true do
  votes ← get-core(⟨vote, prefer, r⟩)  // get the votes of a majority
  let v be the majority value among the phase r votes
  if all phase r votes are v then d_p ← v  // decide v
  outcomes ← get-core(⟨outcome, v, r⟩)
  if all phase r outcome values equal some w then prefer ← w
  else prefer ← common-coin()
  r++
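In Python, the control flow of one process in this framework might look as follows (our sketch: get_core is injected, see the sketch after the next slide, and the default local coin only illustrates the structure, since it is not a true shared coin):

import random

def consensus(v_p, get_core, common_coin=lambda: random.choice([0, 1])):
    """One process's loop in the Rabin-Canetti style framework (sketch).
    get_core(msg) returns the values gathered from n - f processes."""
    r, prefer, decision = 1, v_p, None
    while decision is None:
        votes = get_core(("vote", prefer, r))
        v = max(set(votes), key=votes.count)   # majority value of phase r
        if all(x == v for x in votes):
            decision = v                       # decide v (d_p <- v)
        outcomes = get_core(("outcome", v, r))
        if len(set(outcomes)) == 1:
            prefer = outcomes[0]               # adopt the unanimous outcome w
        else:
            prefer = common_coin()
        r += 1
    # the original loops forever to help stragglers; we return for brevity
    return decision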

43 Routine get-core
initially: set1 = set2 = set3 = Ø; values[j] = ⊥ for all j
when get-core(val) is invoked do
  values[i] ← val
  broadcast(1, val)
when (1, v) is received from p_j do
  values[j] ← v; add j to set1
  if |set1| = n−f then broadcast(2, values)
when (2, V) is received from p_j do
  merge V into values; add j to set2
  if |set2| = n−f then broadcast(3, values)
when (3, V) is received from p_j do
  merge V into values; add j to set3
  if |set3| = n−f then return(values)
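A compact, event-driven Python sketch of get-core for process i (our rendering; broadcast is an injected callback, and we assume a broadcast is also delivered back to the sender itself):

class GetCore:
    """Three-round quorum gathering: collect n - f level-1 values,
    then n - f level-2 arrays, then n - f level-3 arrays."""
    def __init__(self, i, n, f, broadcast):
        self.i, self.quorum, self.broadcast = i, n - f, broadcast
        self.values = [None] * n       # None plays the role of "undefined"
        self.heard = {1: set(), 2: set(), 3: set()}
        self.result = None

    def invoke(self, val):
        self.values[self.i] = val
        self.broadcast((1, val), self.i)

    def on_message(self, level, payload, j):
        if level == 1:                 # (1, v): a single value from p_j
            self.values[j] = payload
        else:                          # (2, V) / (3, V): merge array into values
            for k, v in enumerate(payload):
                if self.values[k] is None:
                    self.values[k] = v
        self.heard[level].add(j)
        if len(self.heard[level]) == self.quorum:
            if level < 3:
                self.broadcast((level + 1, list(self.values)), self.i)
            else:
                self.result = list(self.values)   # return(values)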

44 Implementing get-core Using Gossip
Replace the broadcast sends with asynchronous gossip; majority gossip is sufficient.
Note: the gossip instances start asynchronously (not all processes finish phase 1 and start phase 2 at the same time). Assuming a process begins gossiping as soon as it receives a rumor, the asymptotic complexity remains the same. To this end, if a process receives a rumor from a gossip protocol it has not yet initiated, it adopts the state of the sender and proceeds to gossip accordingly.

45 Analysis of Algorithms
Theorem 13: For an oblivious adversary and f < n/2, the consensus algorithms based on EARS, SEARS, and TEARS using the Rabin-Canetti framework have the same complexity as the underlying gossip protocols. In particular, the algorithm based on TEARS has:
- O(d+δ) time complexity
- O(n^(7/4) log²n) message complexity
This is the first randomized asynchronous consensus algorithm that terminates in constant time w.r.t. n with strictly subquadratic message complexity.

46 Thank you!

