
O(log n / log log n) RMRs Randomized Mutual Exclusion
Danny Hendler (Ben-Gurion University), Philipp Woelfel (University of Calgary)
PODC 2009


1 O(log n / log log n) RMRs Randomized Mutual Exclusion Danny Hendler Philipp Woelfel PODC 2009 Ben-Gurion University University of Calgary

2 Talk outline
  • Prior art and our results
  • Basic algorithm (CC)
  • Enhanced algorithm (CC)
  • Pseudo-code
  • Open questions

3 Most Relevant Prior Art
  • Best upper bound for mutual exclusion: O(log n) RMRs (Yang and Anderson, Distributed Computing '96).
  • A tight Ω(log n) RMR lower bound for deterministic mutex (Attiya, Hendler and Woelfel, STOC '08).
  • Compare-and-swap (CAS) is equivalent to read/write with respect to RMR complexity (Golab, Hadzilacos, Hendler and Woelfel, PODC '07).

4 Our Results
Randomized mutual exclusion algorithms (for both the CC and DSM models) with:
  • O(log n / log log n) expected RMR complexity against a strong adversary, and
  • O(log n) deterministic worst-case RMR complexity.
This gives a separation, in terms of RMR complexity, between deterministic and randomized mutual exclusion algorithms.

5 Shared-memory scheduling adversary types
  • Oblivious adversary: makes all scheduling decisions in advance.
  • Weak adversary: sees a process's coin flip only after the process takes its following step; can change future scheduling based on history.
  • Strong adversary: can change future scheduling after each coin flip / step, based on history.

6 Talk outline
  • Prior art and our results
  • Basic algorithm (CC model)
  • Enhanced algorithm (CC model)
  • Pseudo-code
  • Open questions

7 Basic Algorithm – Data Structures
(Figure: a Δ-ary arbitration tree with n leaves; levels labeled 0, 1, …, Δ-1, Δ; children 1…Δ per node.) Δ = Θ(log n / log log n).
Key idea: processes apply randomized promotion.

8 Basic Algorithm – Data Structures (cont'd)
Per-node structure: lock ∈ {p, ⊥} (the holding process's id, or free); apply[1…Δ] (one announcement slot per child).
Global: a promotion queue (holding processes p_i1, p_i2, …, p_ik) and a notified[1…n] array.
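As a concrete, entirely illustrative sketch of these structures (all names are mine, not the paper's), the fragment below models the per-node record and sizes the fan-out Δ so that a tree of depth Δ has at least n leaves, which is what forces Δ = Θ(log n / log log n):

```python
class Node:
    """Illustrative per-node record: a lock holding a process id or None
    (the slides' ⊥), plus one apply slot per child."""
    def __init__(self, delta):
        self.lock = None
        self.apply = [None] * delta

def delta_for(n):
    """Smallest delta with delta ** delta >= n. Since delta ** delta grows
    like 2**(delta * log delta), this delta is Theta(log n / log log n)."""
    d = 2
    while d ** d < n:
        d += 1
    return d

# A tree of fan-out delta and depth delta has delta ** delta leaves,
# so each of the n processes can be assigned its own leaf.
delta = delta_for(1024)   # 4**4 = 256 < 1024 <= 5**5 = 3125, so delta == 5
root = Node(delta)
```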

9 Basic Algorithm – Key Idea
(Figure: process i applies at a tree node whose lock is free, lock = ⊥.) Key idea: randomized promotion.

10 Basic Algorithm – Entry Section
(Figure: process i writes itself to its apply slot, then attempts CAS(⊥, i) on the node lock; on success it has captured the node.)

11 Basic Algorithm – Entry Section: scenario #2
(Figure: the lock is already held, lock = q, so process i's CAS(⊥, i) fails.)

12 Basic Algorithm – Entry Section: scenario #2 (cont'd)
Process i spins: await (n.lock = ⊥ || apply[ch] = ⊥).

13 Basic Algorithm – Entry Section: scenario #2 (cont'd)
(Repeated animation frame of the same spin: await (n.lock = ⊥ || apply[ch] = ⊥).)

14 Basic Algorithm – Entry Section: scenario #2 (cont'd)
Once promoted, process i spins on await (notified[i] = true) and then enters the CS.
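The entry-section steps of slides 10-14 can be mocked up on a single node as below. This is a sequential sketch under my own naming, not the paper's code: a Python lock merely stands in for a hardware CAS instruction.

```python
import threading

class Node:
    """One arbitration-tree node; the inner Lock only emulates hardware CAS."""
    def __init__(self, delta):
        self._guard = threading.Lock()
        self.lock = None              # holding process id, or None (⊥)
        self.apply = [None] * delta   # one announcement slot per child

    def cas(self, expected, new):
        """Atomically: if lock == expected, set lock = new; report success."""
        with self._guard:
            if self.lock == expected:
                self.lock = new
                return True
            return False

def entry(node, i, slot, notified):
    """Announce, try to capture the node, otherwise wait for promotion."""
    node.apply[slot] = i
    while True:
        if node.cas(None, i):   # slide 10: CAS(⊥, i) succeeds, node captured
            return "captured"
        if notified[i]:         # slide 14: an exiting process promoted us
            return "promoted"
        # slides 11-13: otherwise keep spinning on locally cached variables

notified = [False] * 4
node = Node(2)
r0 = entry(node, 0, 0, notified)   # free node: captured by CAS
notified[1] = True                 # pretend process 1 was promoted meanwhile
r1 = entry(node, 1, 1, notified)   # CAS fails (lock == 0), promotion seen
```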

15 Basic Algorithm – Exit Section
Climb up from the leaf until the last node captured in the entry section (figure: lock = p; a lottery is performed at the node).

16 Basic Algorithm – Exit Section (cont'd)
Perform a lottery on the root (figure: lock = p; the lottery winner s is appended to the promotion queue, which already holds t and q).

17 Basic Algorithm – Exit Section (cont'd)
The exiting process dequeues the head of the promotion queue and sets its notified flag; that process's await (notified[i] = true) terminates and it enters the CS.

18 Basic Algorithm – Exit Section (scenario #2)
If the promotion queue is empty, the exiting process frees the root lock.
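A single-node sketch of this exit path, again with my own names (a deque stands in for the promotion queue; the real algorithm distributes this work across the tree):

```python
import random
from collections import deque

class Node:
    def __init__(self, delta):
        self.lock = None
        self.apply = [None] * delta

def exit_node(node, queue, notified, rng):
    """Lottery (slides 15-16), promotion (17), or release (18)."""
    slot = rng.randrange(len(node.apply))   # the lottery: one random child
    winner = node.apply[slot]
    if winner is not None:
        node.apply[slot] = None
        queue.append(winner)                # winner joins the promotion queue
    if queue:
        nxt = queue.popleft()
        node.lock = nxt                     # hand over WITHOUT freeing the lock
        notified[nxt] = True                # its await(notified[nxt]) terminates
    else:
        node.lock = None                    # scenario #2: queue empty, free lock

notified = [False] * 8
queue = deque()
node = Node(1)                 # fan-out 1 makes the lottery deterministic here
node.lock, node.apply[0] = 3, 7
exit_node(node, queue, notified, random.Random(0))
# process 7 wins the lottery, is promoted, and is notified
```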

19 Basic Algorithm – Properties
Lemma: mutual exclusion is satisfied.
Proof intuition: when a process exits, it either
  • signals a single process without releasing the root's lock, or
  • if the promoted-processes queue is empty, releases the lock.
When the lock is free, it is captured atomically by CAS.

20 Basic Algorithm – Properties (cont'd)
Lemma: expected RMR complexity is Θ(log n / log log n).
A process waiting at await (n.lock = ⊥ || apply[ch] = ⊥) participates in a lottery for every constant number of RMRs it incurs there. The probability of winning a lottery is 1/Δ, so the expected number of RMRs incurred before promotion is Θ(Δ) = Θ(log n / log log n).
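The lemma's intuition reduces to the mean of a geometric random variable. The numerical check below (my arithmetic, not the paper's) confirms that with success probability 1/Δ per lottery, the expected number of lotteries before the first win, and hence the expected number of RMRs up to a constant factor, is Δ:

```python
def truncated_geometric_mean(p, terms=200000):
    """Partial sum of k * p * (1-p)**(k-1): the expected number of
    Bernoulli(p) trials until the first success, converging to 1/p."""
    return sum(k * p * (1 - p) ** (k - 1) for k in range(1, terms + 1))

# With winning probability 1/Delta per lottery, a waiting process expects
# Delta lotteries before promotion; each lottery costs O(1) RMRs, so the
# expected RMR count is Theta(Delta) = Theta(log n / log log n).
delta = 8
mean_lotteries = truncated_geometric_mean(1.0 / delta)   # numerically ~8
```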

21 Basic Algorithm – Properties (cont'd)
  • Mutual exclusion is satisfied.
  • Expected RMR complexity: Θ(log n / log log n).
  • But the worst-case complexity is non-optimal and, even worse, starvation is possible.

22 Talk outline
  • Prior art and our results
  • Basic algorithm (CC)
  • Enhanced algorithm (CC)
  • Pseudo-code
  • Open questions

23 The Enhanced Algorithm – Key Idea
Quit the randomized algorithm after incurring "too many" RMRs, and then execute a deterministic algorithm.
Problems:
  • How do we count the number of RMRs incurred?
  • How do we "quit" the randomized algorithm?

24 Enhanced algorithm: the RMR-counting problem
While spinning at await (n.lock = ⊥ || apply[ch] = ⊥), a process may incur an unbounded number of RMRs without being aware of it.

25 Counting RMRs: solution
Key idea: perform both randomized and deterministic promotion.
  • Increment a promotion token whenever releasing a node.
  • In addition to randomized promotion, deterministically promote the process whose apply slot the promotion token indicates.
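Under my own naming (not the paper's), the token-based deterministic promotion can be sketched as a round-robin sweep over the apply slots; after at most Δ releases every announced waiter has been examined, which is what makes a waiting process's RMRs bounded and countable:

```python
from collections import deque

class Node:
    def __init__(self, delta):
        self.delta = delta
        self.apply = [None] * delta
        self.token = 0          # incremented on every release of the node

def deterministic_promote(node, queue):
    """Promote the waiter at the slot the token points to, then advance it."""
    slot = node.token % node.delta
    node.token += 1
    waiter = node.apply[slot]
    if waiter is not None:
        node.apply[slot] = None
        queue.append(waiter)    # joins the promotion queue deterministically

node = Node(3)
node.apply[1] = 42
queue = deque()
deterministic_promote(node, queue)   # slot 0 is empty: nothing promoted
deterministic_promote(node, queue)   # slot 1: waiter 42 joins the queue
```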

26 The Enhanced Algorithm: the quitting problem
Upon exceeding the allowed number of RMRs, why can't a process simply release its captured locks and revert to a deterministic algorithm?
Because waiting processes may incur RMRs without participating in lotteries!

27 Quitting problem: solution
Add a deterministic Δ-process mutex object to each node.
Per-node structure: lock ∈ {p, ⊥}; apply[1…Δ]; MX: a Δ-process mutex; token: the promotion token.

28 Quitting problem: solution (cont'd)
After incurring O(log Δ) RMRs on a node, a process competes for the node's MX lock and then spins trying to capture the node lock. In addition to randomized and deterministic promotion, an exiting process also promotes the process holding the MX lock, if any.

29 Quitting problem: solution (cont'd)
After incurring O(log Δ) RMRs on a node, compete for the MX lock, then spin trying to capture the node lock. Worst-case number of RMRs: O(Δ log Δ) = O(log n).
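The closing bound is plain arithmetic, checked numerically below (my check, not the paper's): with Δ = log n / log log n we have log Δ = log log n − log log log n ≤ log log n, so Δ · log Δ ≤ log n.

```python
import math

def delta_of(n):
    """Delta = log n / log log n, as on the slides."""
    return math.log(n) / math.log(math.log(n))

def worst_case_rmrs(n):
    """Theta(Delta) tree levels, O(log Delta) RMRs per level before the MX
    fallback kicks in: O(Delta * log Delta) RMRs in total."""
    d = delta_of(n)
    return d * math.log(d)

# Delta * log Delta never exceeds log n (for n large enough that
# log log log n > 0), so the worst case is O(log n).
ratio = worst_case_rmrs(2 ** 64) / math.log(2 ** 64)
```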

30 Talk outline
  • Prior art and our results
  • Basic algorithm (CC)
  • Enhanced algorithm (CC)
  • Pseudo-code
  • Open questions

31 Data structures
(Pseudo-code figure; process p_i is assigned the i'th leaf.)

32 The entry section
(Pseudo-code figure.)

33 The exit section
(Pseudo-code figure.)

34 Open Problems
  • Is this the best possible? For a strong adversary? For a weak adversary? For an oblivious adversary?
  • Is there an abortable randomized algorithm?
  • Is there an adaptive one?

