THIRD PART Algorithms for Concurrent Distributed Systems: The Mutual Exclusion problem.

Concurrent Distributed Systems Changes to the model from the MPS: –the MPS (message-passing system) basically models a distributed system in which processors need to coordinate to perform a system-wide goal (e.g., electing their leader) –now, we turn our attention to concurrent systems, where the processors run in parallel but without necessarily cooperating (for instance, they might just be a set of laptops in a LAN)

Shared Memory Model Changes to the model from the MPS: –No cooperation ⇒ no communication channels between processors and no inbuf and outbuf state components –Processors communicate via a set of shared variables, instead of passing messages ⇒ no communication graph! –Each shared variable has a type, defining a set of operations that can be performed atomically (i.e., instantaneously, without interference)

Shared Memory (figure: a set of processors, each directly accessing the shared variables)

Types of Shared Variables 1. Read/Write 2. Read-Modify-Write 3. Test & Set 4. Fetch-and-add 5. Compare-and-swap. We will focus on the Read/Write type (the simplest one to realize)

Read/Write Variables
  Read(v): return(v)        Write(v,a): v := a
In one atomic step a processor can: – read the variable, or – write the variable … but not both!
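The point of this slide can be made concrete with a small sketch (Python is assumed here; the class name RWRegister is made up for illustration, and a lock merely stands in for the atomicity the hardware would guarantee):

```python
import threading

class RWRegister:
    """An atomic read/write register: each single operation is atomic,
    but a read followed by a write is two separate steps."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()  # stands in for hardware atomicity

    def read(self):
        with self._lock:       # one atomic step: read only
            return self._value

    def write(self, a):
        with self._lock:       # one atomic step: write only
            self._value = a

# "Increment" is NOT one atomic step with a R/W register: it needs a
# read and then a write, and another processor may run in between.
r = RWRegister(0)
v = r.read()      # atomic read
r.write(v + 1)    # atomic write
print(r.read())   # -> 1 (single thread, so no interference here)
```

This is exactly why the stronger types (read-modify-write, test&set, …) are more powerful: they combine the read and the write into one atomic step.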

A write stores a value: after write 10, the register holds 10, and a subsequent read returns 10.

Simultaneous writes are scheduled (serialized): if write 10 and write 20 happen simultaneously, the register ends up holding either 10 (possibility 1) or 20 (possibility 2). In general, simultaneous writes of a 1 ,…,a k leave some value x ∈ {a 1 ,…,a k }.

Simultaneous reads: no problem! All readers return the same value a.

Computation Step in the Shared Memory Model When processor p i takes a step: –p i 's state in old configuration specifies which shared variable is to be accessed and with which operation –operation is done: shared variable's value in the new configuration changes according to the operation –p i 's state in new configuration changes according to its old state and the result of the operation

The mutual exclusion (mutex) problem –The main challenge in managing concurrent systems is coordinating access to resources that are shared among processes –Assumptions on the system: Non-anonymous Asynchronous Fault-free

Mutex code sections Each processor's code is divided into four sections, cycled through in order (entry → critical → exit → remainder): –entry: synchronize with others to ensure mutually exclusive access to the… –critical: use some resource; when done, enter the… –exit: clean up; when done, enter the… –remainder: not interested in using the resource

Mutex Algorithms A mutual exclusion algorithm specifies code for entry and exit sections to ensure: –mutual exclusion: at most one processor is in its critical section at any time, and –some kind of liveness condition. There are three commonly considered ones…

Mutex Liveness Conditions no deadlock: if a processor is in its entry section at some time, then later some processor is in its critical section no lockout: if a processor is in its entry section at some time, then later the same processor is in its critical section bounded waiting: no lockout + while a processor is in its entry section, any other processor can enter the critical section no more than a bounded number of times. These conditions are increasingly strong.

Mutex Algorithms: assumptions The code for the entry and exit sections is allowed to assume that –no processor stays in its critical section forever –shared variables used in the entry and exit sections are not accessed during the critical and remainder sections

Complexity Measure for Mutex Main complexity measure of interest for shared memory mutex algorithms is the amount of shared space needed. Space complexity is affected by: –how powerful the type of the shared variables is –how strong the liveness property to be satisfied is (no deadlock vs. no lockout vs. bounded waiting)

Mutex Results Using R/W

Liveness condition   Upper bound                      Lower bound
no deadlock          –                                n shared variables
no lockout           3n booleans (tournament alg.)    –
bounded waiting      2n unbounded (bakery alg.)       –

Bakery Algorithm Guaranteeing: –Mutual exclusion –Bounded waiting Using 2n shared read/write variables –booleans Choosing[i] : initially false, written by p i and read by others –integers Number[i] : initially 0, written by p i and read by others

Bakery Algorithm
Code for entry section:
  Choosing[i] := true
  Number[i] := max{Number[0],..., Number[n-1]} + 1
  Choosing[i] := false
  for j := 0 to n-1 (except i) do
    wait until Choosing[j] = false
    wait until Number[j] = 0 or (Number[j],j) > (Number[i],i)
  endfor
Code for exit section:
  Number[i] := 0
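The pseudocode above can be transcribed into a runnable sketch (Python is assumed; CPython's GIL serializes bytecode, so this illustrates the logic rather than exercising true parallelism, and the in_cs/violations bookkeeping is added instrumentation, not part of the algorithm):

```python
import threading, time

N = 3                      # number of processors
choosing = [False] * N     # Choosing[i]: written by p_i, read by all
number = [0] * N           # Number[i]:   written by p_i, read by all

in_cs = 0                  # checking only: how many threads are in the CS
violations = 0             # checking only: mutual-exclusion breaches seen

def entry(i):
    choosing[i] = True
    number[i] = max(number) + 1            # take a ticket larger than all seen
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:                 # wait until Choosing[j] = false
            time.sleep(0)                  # yield to other threads
        # wait until Number[j] = 0 or (Number[j],j) > (Number[i],i)
        while number[j] != 0 and (number[j], j) < (number[i], i):
            time.sleep(0)

def exit_section(i):
    number[i] = 0

def processor(i):
    global in_cs, violations
    for _ in range(50):
        entry(i)
        in_cs += 1
        if in_cs > 1:
            violations += 1                # two threads in the CS at once
        in_cs -= 1
        exit_section(i)

threads = [threading.Thread(target=processor, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print("violations:", violations)
```

Note that computing the max is not atomic; the bakery algorithm tolerates this, which is exactly what the Choosing flags are for.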

BA Provides Mutual Exclusion Lemma 1: If p i is in the critical section, then Number[i] > 0. Proof: Trivial. Lemma 2: If p i is in the critical section and Number[k] ≠ 0 (k ≠ i), then (Number[k],k) > (Number[i],i). Proof: Observe that a chosen number changes only after exiting from the CS. Since p i is in the CS, it passed the second wait statement for j=k. There are two cases, according to what p i 's most recent read of Number[k] in that wait statement returned: Case 1: it returned 0. Case 2: it returned (Number[k],k) > (Number[i],i).

Case 1 p i is in the CS, Number[k] ≠ 0, and p i 's most recent read of Number[k] returned 0. So at that moment p k was in the remainder or about to choose a number. Moreover, p i 's most recent read of Choosing[k] returned false, so p k was not in the middle of choosing a number. Hence p k chooses its number entirely after p i 's most recent write to Number[i]: it sees p i 's number and chooses a larger one. Therefore p k will never enter the CS before p i , which means its number cannot change before p i exits from the CS, and so the claim is true.

Case 2 p i is in the CS, Number[k] ≠ 0, and p i 's most recent read of Number[k] returned (Number[k],k) > (Number[i],i). So p k had already taken its number, and it chose a number not less than that of p i in this interval. It may be that p k finished its choice before p i did, but since (Number[k],k) > (Number[i],i) it must be that Number[i] ≤ Number[k]. Thus p k will never enter the CS before p i : either p k is stopped by p i at the first wait statement (in case Number[i] = Number[k] but p i is still choosing its number), or at the second one (in case Number[i] < Number[k]), and so the claim is true. END of PROOF

Mutual Exclusion for BA Mutual Exclusion: Suppose p i and p k are simultaneously in the CS. –By Lemma 1, both have number > 0. –By Lemma 2, (Number[k],k) > (Number[i],i) and (Number[i],i) > (Number[k],k), a contradiction.

No Lockout for BA Assume in contradiction there is a starved processor. Starved processors are stuck at the wait statements, not while choosing a number. In fact, starved processors can be stuck only at the second wait statement, since the first wait lasts only as long as the (finite) choosing phase of another processor. Let p i be a starved processor with smallest (Number[i],i). Any processor entering the entry section after p i has chosen its number chooses a larger number, and therefore cannot overtake p i. Every processor with a smaller ticket eventually enters the CS (it is not starved) and exits, setting its number to 0. Thus p i cannot be stuck at the second wait statement forever by another processor.

What about bounded waiting? YES: It’s easy to see that any processor in the entry section can be overtaken at most once by any other processor (and so in total it can be overtaken at most n-1 times).

Space Complexity of BA Number of shared variables is 2n: –Choosing variables are booleans –Number variables are unbounded: as long as the CS is occupied and some processor enters the entry section, the ticket number increases Is it possible for an algorithm to use less shared space?

Bounded 2-Processor ME Algorithm with ND Start with a bounded algorithm for 2 processors with ND (no deadlock), then extend it to NL (no lockout), then extend it to n processors. Uses 2 binary shared read/write variables: W[0] : initially 0, written by p 0 and read by p 1 W[1] : initially 0, written by p 1 and read by p 0 Asymmetric code: p 0 always has priority over p 1

Bounded 2-Processor ME Algorithm with ND
Code for p 0 's entry section:
  W[0] := 1
  wait until W[1] = 0
Code for p 0 's exit section:
  W[0] := 0

Bounded 2-Processor ME Algorithm with ND
Code for p 1 's entry section:
1  W[1] := 0
2  wait until W[0] = 0
3  W[1] := 1
4  if (W[0] = 1) then
5      goto Line 1
Code for p 1 's exit section:
8  W[1] := 0
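The two asymmetric code fragments can be run together as a sketch (Python is assumed; p 0 's first write is taken to be 1, matching p 1 's test of W[0], and the in_cs/violations counters are added instrumentation only):

```python
import threading, time

W = [0, 0]        # W[i] = 1 means p_i is contending; p0 always has priority
in_cs = 0         # checking only
violations = 0    # checking only

def entry0():
    W[0] = 1
    while W[1] != 0:        # wait until W[1] = 0
        time.sleep(0)

def entry1():
    while True:
        W[1] = 0            # Line 1
        while W[0] != 0:    # Line 2: wait until W[0] = 0
            time.sleep(0)
        W[1] = 1            # Line 3
        if W[0] == 0:       # Line 4: p0 did not show up, enter the CS
            return
        # W[0] = 1: defer to p0 and retry (goto Line 1)

def critical(i):
    global in_cs, violations
    in_cs += 1
    if in_cs > 1:
        violations += 1
    in_cs -= 1

def p0():
    for _ in range(100):
        entry0(); critical(0); W[0] = 0   # exit: W[0] := 0

def p1():
    for _ in range(100):
        entry1(); critical(1); W[1] = 0   # exit: W[1] := 0

a, b = threading.Thread(target=p0), threading.Thread(target=p1)
a.start(); b.start(); a.join(); b.join()
print("violations:", violations)
```

The run terminates because p 0 eventually stops contending; while p 0 is active, p 1 may be sent back repeatedly, which is exactly the lockout discussed next.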

Discussion of 2-Processor ND Algorithm Satisfies mutual exclusion: processors use W variables to make sure of this (a processor enters only when the other W variable is set to 0) Satisfies no deadlock But unfair w.r.t. p 1 (lockout) Fix by having the processors alternate in having the priority

Bounded 2-Processor ME Algorithm with NL Uses 3 binary shared read/write variables: W[0] : initially 0, written by p 0 and read by p 1 W[1] : initially 0, written by p 1 and read by p 0 Priority : initially 0, written and read by both

Bounded 2-Processor ME Algorithm with NL
Code for p i 's entry section:
1  W[i] := 0
2  wait until W[1-i] = 0 or Priority = i
3  W[i] := 1
4  if (Priority = 1-i) then
5      if (W[1-i] = 1) then goto Line 1
6  else wait until (W[1-i] = 0)
Code for p i 's exit section:
7  Priority := 1-i
8  W[i] := 0
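A runnable transcription of this code (Python is assumed; the line-number comments mirror the pseudocode, and the in_cs/violations counters are added only to check mutual exclusion):

```python
import threading, time

W = [0, 0]
priority = 0      # Priority: written and read by both processors
in_cs = 0         # checking only
violations = 0    # checking only

def entry(i):
    while True:
        W[i] = 0                                   # Line 1
        while W[1 - i] != 0 and priority != i:     # Line 2
            time.sleep(0)
        W[i] = 1                                   # Line 3
        if priority == 1 - i:                      # Line 4
            if W[1 - i] == 1:                      # Line 5
                continue                           # goto Line 1
        else:
            while W[1 - i] != 0:                   # Line 6
                time.sleep(0)
        return                                     # enter the CS

def exit_section(i):
    global priority
    priority = 1 - i                               # Line 7
    W[i] = 0                                       # Line 8

def processor(i):
    global in_cs, violations
    for _ in range(100):
        entry(i)
        in_cs += 1
        if in_cs > 1:
            violations += 1
        in_cs -= 1
        exit_section(i)

ts = [threading.Thread(target=processor, args=(i,)) for i in (0, 1)]
for t in ts: t.start()
for t in ts: t.join()
print("violations:", violations)
```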

Analysis: ME Mutual Exclusion: Suppose in contradiction p 0 and p 1 are simultaneously in the CS; then W[0] = W[1] = 1. W.l.o.g., assume p 1 's most recent write of 1 to W[1] (Line 3) happens no later than p 0 's most recent write of 1 to W[0] (Line 3). But then p 0 's most recent read of W[1] before entering the CS (Line 5 or 6) must return 1, not 0, so p 0 could not have entered the CS. Contradiction.

Analysis: No-Deadlock Useful for showing no-lockout. If one processor ever enters remainder forever, other one cannot be starved. –Ex: If p 1 enters remainder forever, then p 0 will keep seeing W[1] = 0. So any deadlock would starve both processors in the entry section

Analysis: No-Deadlock Suppose in contradiction there is deadlock. W.l.o.g., suppose Priority gets stuck at 0 after both processors are stuck in their entry sections. Since Priority = 0, p 0 is not stuck at Line 2 and skips Line 5, so it is stuck at Line 6 with W[0] = 1, waiting for W[1] to become 0. Then p 1 is sent back at Line 5 and gets stuck at Line 2 with W[1] = 0, waiting for W[0] to become 0. But then p 0 sees W[1] = 0 and enters the CS, a contradiction.

Analysis: No-Lockout Suppose in contradiction p 0 is starved. Since there is no deadlock, p 1 enters the CS infinitely often. The first time p 1 executes Line 7 in the exit section after p 0 is stuck in the entry section, Priority is set to 0 and stays there forever after. Then p 0 is stuck at Line 6 with W[0] = 1, waiting for W[1] to become 0. When p 1 next enters its entry section, it gets stuck at Line 2, waiting for W[0] to become 0: it cannot reenter the CS. DEADLOCK, contradicting the no-deadlock property!

Bounded Waiting? NO: A processor, even if it has priority, might be overtaken repeatedly (in principle, an unbounded number of times) while it is between Lines 2 and 3.

Bounded n-Processor ME Alg. Can we get a bounded space NL mutex algorithm for n>2 processors? Yes! Based on the notion of a tournament tree: complete binary tree with n-1 nodes –tree is conceptual only! does not represent message passing channels A copy of the 2-proc. algorithm is associated with each tree node –includes separate copies of the 3 shared variables

Tournament Tree (figure for n = 8: a complete binary tree whose leaves host the pairs p 0 ,p 1 ; p 2 ,p 3 ; p 4 ,p 5 ; p 6 ,p 7 )

Tournament Tree ME Algorithm Each proc. begins entry section at a specific leaf (two procs per leaf) A proc. proceeds to next level in tree by winning the 2-proc. competition for current tree node: –on left side, play role of p 0 –on right side, play role of p 1 When a proc. wins the 2-proc. algorithm associated with the tree root, it enters CS.

The code

More on Tournament Tree Alg. Code is recursive. p i begins at tree node 2^k + ⌊i/2⌋, playing the role of p i mod 2 , where k = ⌈log n⌉ - 1. After winning node v, the "critical section" for node v is –entry code for all nodes on the path from ⌊v/2⌋ to the root –the real critical section –exit code for all nodes on the path from the root to ⌊v/2⌋
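The indexing rule can be checked with a tiny sketch (Python; the function names start_node and role are illustrative, not from the original slides):

```python
import math

def start_node(i, n):
    # p_i starts at tree node 2^k + floor(i/2), with k = ceil(log2 n) - 1;
    # nodes are numbered heap-style, with the root at node 1
    k = math.ceil(math.log2(n)) - 1
    return 2 ** k + i // 2

def role(i):
    # p_i plays p_(i mod 2) in the 2-processor algorithm at its leaf
    return i % 2

# With n = 8 the tree has 7 nodes (root = 1, leaves = 4..7):
print([start_node(i, 8) for i in range(8)])  # -> [4, 4, 5, 5, 6, 6, 7, 7]
print([role(i) for i in range(4)])           # -> [0, 1, 0, 1]
```

For n = 2 this gives start_node(0, 2) = 1, i.e., both processors compete directly at the root, recovering the plain 2-processor algorithm.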

Tournament Tree Analysis Correctness: based on the correctness of the 2-processor algorithm and the tournament structure: –ME for the tournament alg. follows from ME for the 2-proc. alg. at the tree root. –NL for the tournament alg. follows from NL for the 2-proc. algs. at all nodes of the tree. Space Complexity: 3n boolean read/write shared variables. Bounded Waiting? No, as for the 2-processor algorithm.