Multithreaded Programming ECEN5043 Software Engineering of Multiprogram Systems University of Colorado Lectures 5 & 6.


revised 1/29/2007, ECEN5043 SW Eng of Multiprogram Systems, University of Colorado

The Essence of Multiple Threads
- Two or more processes that work together to perform a task
- Each process is a sequential program
  - One thread of control per process
- Communicate using shared variables
- Need to synchronize with each other, in one of two ways:
  - Mutual exclusion
  - Condition synchronization

Opportunities & Challenges
- What kinds of processes to use
- How many
- How they should interact
- Key to developing a correct program is to ensure the process interaction is properly synchronized

Focus
- Programs in most common languages
- Explicit concurrency, communication, & synchronization
- Specify the actions of each process and how they communicate & synchronize
- Asynchronous process execution
- Shared memory
- Single CPU and operating system

Multiprocessing monkey wrench
- The solutions we will address this semester presume a single CPU, so the concurrent processes share coherent memory
- A multiprocessor environment with shared memory introduces cache- and memory-consistency problems and the overhead to manage them
- A distributed-memory multiprocessor/multicomputer/network environment has additional issues of latency, bandwidth, etc.
- We focus on the first bullet this semester.

Recall
- A process is a sequential program that has its own thread of control when executed
- A concurrent program contains multiple processes, so it has multiple threads of control
- Multithreaded usually means a program contains more processes than there are processors to execute them
- A multithreaded software system manages multiple independent activities

Why write as multithreaded?
- To be cool (wrong reason)
- Sometimes it is easier to organize the code and data as a collection of processes than as a single huge sequential program
- Each process can be scheduled and executed independently
- Other applications can continue to execute "in the background"

Many applications, 5 basic paradigms
- Iterative parallelism
- Recursive parallelism
- Producers and consumers (pipelines)
- Clients and servers
- Interacting peers

Iterative parallelism
- Example?
- Several, often identical processes
- Each contains one or more loops, so each process is iterative
- They work together to solve a single problem
- Communicate and synchronize using shared variables
- Independent computations -- disjoint write sets

Recursive parallelism
- One or more independent recursive procedures
- Recursion is the dual of iteration
- Procedure calls are independent -- each works on different parts of the shared data
- Often used in imperative languages for
  - Divide-and-conquer algorithms
  - Backtracking algorithms (e.g. tree traversal)
- Used to solve combinatorial problems such as sorting, scheduling, and game playing
- If there are too many recursive procedures, we prune.

Producers and consumers
- One-way communication between processes
- Often organized into a pipeline through which info flows
- Each process is a filter that consumes the output of its predecessor and produces output for its successor
- That is, a producer process computes and outputs a stream of results
- Sometimes implemented with a shared bounded buffer as the pipe, e.g. Unix stdin and stdout
- Synchronization primitives: flags, semaphores, monitors
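The pipeline pattern above can be sketched in Python, using a bounded `queue.Queue` as the pipe between one producer and one consumer. The function names and the `None` end-of-stream sentinel are illustrative choices, not part of the slides.

```python
import queue
import threading

def producer(lines, buf):
    """Producer: feeds each line into the bounded buffer (the pipe)."""
    for line in lines:
        buf.put(line)       # blocks if the bounded buffer is full
    buf.put(None)           # sentinel: no more input

def consumer(buf, pattern, out):
    """Consumer: filters lines from the pipe, collecting matches."""
    while True:
        line = buf.get()    # blocks if the buffer is empty
        if line is None:
            break
        if pattern in line:
            out.append(line)

def grep_pipeline(lines, pattern):
    buf = queue.Queue(maxsize=2)   # shared bounded buffer, capacity 2
    out = []
    p = threading.Thread(target=producer, args=(lines, buf))
    c = threading.Thread(target=consumer, args=(buf, pattern, out))
    p.start(); c.start()
    p.join(); c.join()
    return out
```

`queue.Queue` already provides the mutual exclusion and condition synchronization internally, which is why no explicit flags or semaphores appear here.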

Clients and servers
- Dominant interactive pattern in distributed systems (see next semester)
- Client process requests a service & waits for the reply
- Server waits for requests, then acts upon them
- Server can be implemented
  - by a single process that handles one client process at a time
  - multithreaded, to service requests concurrently
- Concurrent-programming generalizations of procedures and procedure calls

Interacting peers
- Occurs in distributed programs
- Several processes that execute the same code and exchange messages to accomplish a task
- Used to implement
  - Distributed parallel programs, including distributed versions of iterative parallelism
  - Decentralized decision making

Summary
- Concurrent programming paradigms on a single processor
  - Iterative parallelism
  - Recursive parallelism
  - Producers and consumers
    - No analog in sequential programs, because producers and consumers are, by definition, independent processes with their own threads and their own rates of progress

Shared-Variable Programming
- Frowned on in sequential programs, although convenient
- Absolutely necessary in concurrent programs
  - Must communicate to work together

Need to communicate
- Communication fosters the need for synchronization
  - Mutual exclusion -- processes must not access shared data at the same time
  - Condition synchronization -- one process needs to wait for another

Some terms
- State -- the values of the program variables at a point in time, both explicit and implicit. Each process in a program executes independently and, as it executes, examines and alters the program state.
- Atomic actions -- a process executes sequential statements. Each statement is implemented at the machine level by one or more atomic actions that indivisibly examine or change program state.
- Concurrent program execution interleaves sequences of atomic actions. A history is a trace of a particular interleaving.

Terms -- continued
- The next atomic action in any ONE of the processes could be the next one in a history. So there are many ways actions can be interleaved, and conditional statements allow even the actions themselves to vary.
- The role of synchronization is to constrain the possible histories to those that are desirable.
- Mutual exclusion combines atomic actions into sequences of actions called critical sections, where the entire section appears to be atomic.
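A minimal Python sketch of a critical section (function and variable names are mine): each `counter[0] += 1` is really a read action followed by a write action, and a `threading.Lock` fuses the pair into one critical section, so no history can interleave another update between them.

```python
import threading

def locked_increments(nthreads=4, nincr=10_000):
    """Increment a shared counter from several threads, under a lock."""
    counter = [0]
    lock = threading.Lock()

    def add():
        for _ in range(nincr):
            with lock:           # entry protocol: acquire the lock
                counter[0] += 1  # critical section (read, add, write)
                                 # exit protocol: release on block exit

    threads = [threading.Thread(target=add) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0]
```

With the lock, every possible history yields the same final state; without it, histories that interleave the read and write actions can lose updates.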

Terms -- continued further
- A property of a program is an attribute that is true of every possible history.
  - Safety -- the program never enters a bad state
  - Liveness -- the program eventually enters a good state

How can we verify?
- How do we demonstrate that a program satisfies a property?
- A dynamic test considers just one possible history
  - A limited number of tests is unlikely to demonstrate the absence of bad histories
- Operational reasoning -- exhaustive case analysis
- Assertional reasoning -- abstract analysis
  - Atomic actions are predicate transformers

Assertional Reasoning
- Use assertions to characterize sets of states
- Allows a compact representation of states and their transformations
- More on this later in the course

Warning
- We must be wary of dynamic testing alone
  - It can reveal only the presence of errors, not their absence
- Concurrent programs are difficult to test & debug
  - Difficult (impossible) to stop all processes at once in order to examine their state
  - Each execution in general will produce a different history

Example 1a -- Pattern in a File
Find all instances of a pattern in a file.

    string line;
    read a line of input from stdin into line;
    while (!EOF) {
      look for pattern in line;
      if (pattern is in line)
        write line;
      read next line of input;
    }

Example 1b -- concurrent & correct?

    string line;
    read a line of input from stdin into line;
    while (!EOF) {
      co look for pattern in line;
         if (pattern is in line)
           write line;
      // read next line of input into line;
      oc;
    }

Example 1c -- different variables

    string line1, line2;
    read a line of input from stdin into line1;
    while (!EOF) {
      co look for pattern in line1;
         if (pattern is in line1)
           write line1;
      // read next line of input into line2;
      oc;
    }

Example 1d -- copy the line

    string line1, line2;
    read a line of input from stdin into line1;
    while (!EOF) {
      co look for pattern in line1;
         if (pattern is in line1)
           write line1;
      // read next line of input into line2;
      oc;
      line1 = line2;
    }

Co inside while vs. while inside co?
- Is it possible to get the loop inside the co brackets, so that the multi-process creation occurs only once?
- Yes. Put a while loop inside each of the two processes.

Both processes inside co brackets

    string buffer;
    bool done = false;

    co # process 1: find patterns
       string line1;
       while (true) {
         wait for buffer to be full or done to be true;
         if (done) break;
         line1 = buffer;
         signal that buffer is empty;
         look for pattern in line1;
         if (pattern is in line1)
           write line1;
       }
    // # process 2: read new lines
       string line2;
       while (true) {
         read next line of input into line2;
         if (EOF) { done = true; break; }
         wait for buffer to be empty;
         buffer = line2;
         signal that buffer is full;
       }
    oc;
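The slide's two-process version can be sketched in Python with a one-slot buffer plus full/done flags, using a `threading.Condition` to stand in for the informal "wait for ... / signal ..." operations. The names and structure here are illustrative.

```python
import threading

def find_pattern(lines, pattern):
    """Two threads sharing a one-slot buffer: a reader and a finder."""
    buffer = [None]
    full = [False]
    done = [False]
    cond = threading.Condition()
    out = []

    def finder():                       # process 1: find patterns
        while True:
            with cond:
                cond.wait_for(lambda: full[0] or done[0])
                if done[0] and not full[0]:
                    break               # EOF and nothing left to consume
                line1 = buffer[0]
                full[0] = False         # signal that buffer is empty
                cond.notify_all()
            if pattern in line1:
                out.append(line1)

    def reader():                       # process 2: read new lines
        for line2 in lines:
            with cond:
                cond.wait_for(lambda: not full[0])  # wait for empty buffer
                buffer[0] = line2
                full[0] = True          # signal that buffer is full
                cond.notify_all()
        with cond:                      # EOF
            done[0] = True
            cond.notify_all()

    t1 = threading.Thread(target=finder)
    t2 = threading.Thread(target=reader)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return out
```

The condition variable gives both the mutual exclusion on the buffer and the condition synchronization (wait-until-full, wait-until-empty) the pseudocode calls for.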

Synchronization
- Required for correct answers whenever processes both read and write shared variables
- Sometimes groups of instructions must be treated as if atomic -- critical sections
- The technique of double-checking before updating a shared variable is useful (even though it sounds strange)
- Example of double-checking -- next

Example 2 -- sequential
Find the maximum value in an array:

    int m = 0;
    for [i = 0 to n-1] {
      if (a[i] > m)
        m = a[i];
    }

If we try to examine every array element in parallel, all processes will try to update m, but the final value will be the value assigned by the last process that updates m.

Example 2b -- concurrent w/ double-check
- OK to do the comparisons in parallel because they are read-only actions
- But -- necessary to ensure that when the program terminates, m is the maximum :-)

    int m = 0;
    co [i = 0 to n-1]
      if (a[i] > m)           # first check
        < if (a[i] > m)       # recheck, only if first check was true
            m = a[i]; >
    oc;

Angle brackets indicate an atomic operation.
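The same double-check in Python (a sketch; the function name and the chunking scheme are mine, and it assumes the values are non-negative so m can start at 0): compare outside the lock cheaply, then re-check inside the lock before updating, because m may have grown in the meantime.

```python
import threading

def parallel_max(a, nthreads=4):
    """Find the maximum of a non-negative array with double-checked updates."""
    m = [0]                       # shared maximum (assumes values >= 0)
    lock = threading.Lock()

    def scan(chunk):
        for x in chunk:
            if x > m[0]:          # first check: read-only, no lock
                with lock:        # the lock plays the angle brackets' role
                    if x > m[0]:  # recheck: m may have changed meanwhile
                        m[0] = x

    chunks = [a[i::nthreads] for i in range(nthreads)]
    threads = [threading.Thread(target=scan, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return m[0]
```

The first, lock-free check keeps contention low: most elements lose the comparison and never touch the lock.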

Why synchronize?
- If processes do not interact, all interleavings are acceptable.
- If processes do interact, only some interleavings are acceptable.
- Role of synchronization: prevent unacceptable interleavings
  - Combine fine-grain atomic actions into coarse-grain composite actions (we call this... what?)
  - Delay process execution until the program state satisfies some predicate

Notation for synchronization

    General:                            < await (B) S; >
    Mutual exclusion only:              < S; >
    Conditional synchronization only:   < await (B); >

The last is equivalent to:

    while (not B);

(note the ending empty statement, i.e. the semicolon)

Unconditional atomic action
- Does not contain a delay condition
- Can execute immediately, as long as it executes atomically (not interleaved)
- Examples:
  - individual machine instructions
  - expressions we place in angle brackets
  - await statements where the guard condition is the constant true or is omitted

Conditional atomic action
- An await statement with a guard condition
- If the condition is false in a given process, it can only become true by the action of other processes.
- How long will the process wait if it has a conditional atomic action?

Locks and Barriers

How to implement synchronization
- To implement mutual exclusion
  - Implement atomic actions in software, using locks to protect critical sections
  - Needed in most concurrent programs
- To implement conditional synchronization
  - Implement a synchronization point that all processes must reach before any process is allowed to proceed -- a barrier
  - Needed in many parallel programs -- why?
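A barrier sketch in Python (illustrative names): each worker completes a phase, then waits at the barrier; no worker starts the next phase until all have arrived. This is the usual shape of phased parallel computations.

```python
import threading

def run_phases(nworkers=3, nphases=2):
    """Workers log their current phase; the barrier separates the phases."""
    barrier = threading.Barrier(nworkers)
    log = []
    log_lock = threading.Lock()

    def worker():
        for phase in range(nphases):
            with log_lock:
                log.append(phase)   # record which phase this step belongs to
            barrier.wait()          # nobody proceeds until all arrive

    threads = [threading.Thread(target=worker) for _ in range(nworkers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log
```

Because every phase-p entry is logged before any thread passes the barrier, the log always comes out fully sorted by phase, whatever the scheduling.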

Bad states, Good states
- Mutual exclusion -- at most one process at a time is executing its critical section
  - Its bad state is one in which two processes are in their critical sections
- Absence of deadlock ("livelock") -- if 2 or more processes are trying to enter their critical sections, at least one will succeed
  - Its bad state is one in which all the processes are waiting to enter but none is able to do so
- Two more on the next slide

Bad states -- continued
- Absence of unnecessary delay -- if a process is trying to enter its c.s. and the other processes are executing their noncritical sections or have terminated, the first process is not prevented from entering its c.s.
  - Bad state is one in which the one process that wants to enter cannot do so, even though no other process is in its c.s.
- Eventual entry -- a process that is attempting to enter its c.s. will eventually succeed
  - A liveness property; depends on the scheduling policy

Logical property of mutual exclusion
- When process1 is in its c.s., set property1 true.
- Similarly for process2, where property2 is true.
- The bad state is one where property1 and property2 are both true at the same time.
- Therefore we want every state to satisfy the negation of the bad state:
  - mutex: NOT(property1 AND property2)
- This needs to be a global invariant: true in the initial state and after each event that affects property1 or property2.

Coarse-grain solution

    bool property1 = false, property2 = false;
    # mutex: NOT(property1 AND property2) -- global invariant

    process process1 {
      while (true) {
        < await (NOT property2) property1 = true; >   # entry protocol
        critical section;
        property1 = false;                            # exit protocol
        noncritical section;
      }
    }

    process process2 {
      while (true) {
        < await (NOT property1) property2 = true; >   # entry protocol
        critical section;
        property2 = false;                            # exit protocol
        noncritical section;
      }
    }

Does it avoid the problems?
- Deadlock: if each process were blocked in its entry protocol, then both property1 and property2 would have to be true; both are false at that point in the code.
- Unnecessary delay: one process blocks only if the other one is in its c.s.
- Liveness -- see next slide

Liveness guaranteed?
- Liveness property -- a process trying to enter its critical section eventually is able to do so
- If process1 is trying to enter but cannot, then property2 is true;
  - therefore process2 is in its c.s., which it eventually exits, making property2 false; this allows process1's guard to become true
- If process1 is still not allowed entry, it is because the scheduler is unfair or because process2 again gains entry (happens infinitely often?)
- A strongly fair scheduler would be required -- not likely.

Three "spin lock" solutions
- A "spin lock" solution uses busy-waiting
- They ensure mutual exclusion, are deadlock free, and avoid unnecessary delay
- They require a fairly strong scheduler to ensure eventual entry
- They do not control the order in which delayed processes enter their c.s.'s when 2 or more are trying
- Three fair solutions to the critical section problem:
  - Tie-breaker algorithm
  - Ticket algorithm
  - Bakery algorithm

Tie-Breaker
- When processes are attempting to enter their critical sections, there is typically no control over which will succeed.
- To make it fair, processes should take turns.
- Peterson's algorithm uses an additional variable to indicate which process was last to start entering its c.s.
- Consider the "coarse-grained" program, but...
  - implement the conditional atomic actions in the entry protocol using only simple variables and sequential statements.

Tie-breaker implementation
- We could implement the await statement by first looping while the guard is false and then executing the assignment. (Sound familiar?)
- But this pair of events is not executed atomically -- it does not guarantee mutual exclusion.
- If the order is reversed, deadlock can result. (Remember?)
- Let last be an integer variable indicating which process was last to start executing its entry protocol.
- If both are trying (property1 and property2 are true), the last to start its entry protocol delays.
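A minimal Python sketch of the two-process tie-breaker (Peterson's algorithm); names are mine, and it relies on CPython's global interpreter lock making individual loads and stores of shared variables atomic, which matches the algorithm's assumption of atomic reads and writes.

```python
import threading
import time

def peterson_counter(nincr=100):
    """Two threads increment a shared counter under Peterson's algorithm."""
    flag = [False, False]   # flag[i]: process i wants to enter (propertyi)
    last = [0]              # which process was last to start its entry protocol
    total = [0]

    def proc(i):
        other = 1 - i
        for _ in range(nincr):
            flag[i] = True          # I want to enter
            last[0] = i             # I was last to start the entry protocol
            while flag[other] and last[0] == i:
                time.sleep(0)       # busy-wait, yielding the interpreter
            total[0] += 1           # critical section
            flag[i] = False         # exit protocol

    threads = [threading.Thread(target=proc, args=(i,)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]
```

On real multiprocessors this spin loop would also need memory fences; the slide's single-CPU, coherent-memory assumption sidesteps that.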

Tie-breaker implementation for n processes
- If there are n processes, the entry protocol in each process consists of a loop that iterates through n-1 stages.
- If we can ensure that at most one process at a time is allowed to get through all n-1 stages, then at most one at a time can be in its critical section.

n-process tie-breaker algorithm
See handout (also in the Notes half of this slide).
This is quite complex and hard to understand. But it...
- is livelock free
- avoids unnecessary delay
- ensures eventual entry (a process delays only if some other process is ahead of it in the entry protocol, and every process eventually exits its critical section)

Ticket Algorithm
- Based on the idea of drawing tickets (numbers) and then waiting turns
- Needs a number dispenser and a display indicating which numbered customer is being served
- If there is one processor, customers are served one at a time in order of arrival
- (If the ticket algorithm runs for a very long time, incrementing a counter will cause arithmetic overflow.)

    int number = 1, next = 1, turn[1:n] = ([n] 0);

    process CS[i = 1 to n] {
      while (true) {
        turn[i] = FetchAndAdd(number, 1);   /* entry protocol */
        while (turn[i] != next) skip;
        critical section;
        next = next + 1;                    /* exit protocol */
        noncritical section;
      }
    }

What is the global invariant?
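A ticket-lock sketch in Python (illustrative, not a practical lock): `next()` on an `itertools.count` is atomic in CPython, so it plays the role of FetchAndAdd, and `_serving` is the display showing the number being served. Class and function names are mine.

```python
import itertools
import threading
import time

class TicketLock:
    def __init__(self):
        self._dispenser = itertools.count(1)   # number dispenser, starts at 1
        self._serving = 1                      # number now being served

    def acquire(self):
        my_turn = next(self._dispenser)        # turn[i] = FetchAndAdd(number, 1)
        while self._serving != my_turn:        # while (turn[i] != next) skip;
            time.sleep(0)                      # busy-wait, yielding the CPU

    def release(self):
        self._serving += 1                     # next = next + 1 (holder only)

def ticket_demo(nthreads=3, nincr=50):
    """Threads increment a shared total under the ticket lock, FIFO."""
    lock = TicketLock()
    total = [0]

    def worker():
        for _ in range(nincr):
            lock.acquire()
            total[0] += 1                      # critical section
            lock.release()

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]
```

Only the lock holder writes `_serving`, so no extra synchronization is needed on the exit protocol; fairness follows because tickets are served in the order drawn.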

Bakery Algorithm
- Downside of the ticket algorithm:
  - Without FetchAndAdd, it requires an additional critical section, and THAT critical section's solution might not be fair.
- Bakery algorithm:
  - Fair, and does not require any special machine instructions
- Ticket: a customer draws a unique number and waits for its number to become next
- This: customers check with each other, rather than with a central next-counter, to decide on the order of service.

- To enter its c.s., process CS[i] sets turn[i] to one more than the maximum of all the current values in turn.
- Then CS[i] waits until turn[i] is the smallest nonzero value in turn.
- What is the global invariant, in words (not predicate logic notation)?

Bakery algorithm -- coarse-grain version

    int turn[1:n] = ([n] 0);

    process CS[i = 1 to n] {
      while (true) {
        < turn[i] = max(turn[1:n]) + 1; >
        for [j = 1 to n such-that j != i]
          < await (turn[j] == 0 or turn[i] < turn[j]); >
        critical section;
        turn[i] = 0;
        noncritical section;
      }
    }

Bakery algorithm -- practicality?
- It cannot be implemented as-is on contemporary machines:
  - The assignment to turn[i] requires computing the maximum of n values
  - The await statement references a shared variable (turn[j]) twice
- These could be made atomic by using another c.s. protocol, such as the tie-breaker algorithm (inefficient)
- What to do?

Initial (wrong) attempts
- When n processes need to synchronize, it is often useful first to develop a two-process solution and then to generalize it.
- Consider the entry protocol for CS1:

    turn1 = turn2 + 1;
    while (turn2 != 0 and turn1 > turn2) skip;

  Flip the 1's and 2's to get the corresponding protocol for CS2.
- Is this a solution? What's the problem?
  - The assignments and the while-loop guards are not implemented atomically. So?

Does the gallant approach work?
- If both turn1 and turn2 are 1, let one of the processes proceed and have the other delay. (For example, strengthen the delay loop in CS2 to use turn2 >= turn1.)
- It is still possible for both to enter their c.s., because of a race condition.
- Avoid the race condition: have each process set its value of turn to 1 (or any nonzero value) at the start of the entry protocol. Then it examines the other's value of turn and resets its own:

    turn1 = 1;
    turn1 = turn2 + 1;
    while (turn2 != 0 and turn1 > turn2) skip;

Working but not symmetric
- One process cannot now exit its while loop until the other has finished setting its value of turn, if it is in the middle of doing so.
- Who has precedence?
- These entry protocols are not quite symmetric.
- Rewrite them, but first: let (a, b) and (c, d) be pairs of integers, and define the greater-than relation between such pairs as follows:

    (a, b) > (c, d) == true   if a > c, or if a == c and b > d
                    == false  otherwise

Symmetric --> easy to generalize
- Rewrite turn1 > turn2 in CS1 as (turn1, 1) > (turn2, 2)
- Rewrite turn2 >= turn1 in CS2 as (turn2, 2) > (turn1, 1)
- n-process bakery algorithm -- next slide

n-process bakery algorithm

    int turn[1:n] = ([n] 0);

    process CS[i = 1 to n] {
      while (true) {
        turn[i] = 1;
        turn[i] = max(turn[1:n]) + 1;
        for [j = 1 to n such-that j != i]
          while (turn[j] != 0 and (turn[i], i) > (turn[j], j)) skip;
        critical section;
        turn[i] = 0;
        noncritical section;
      }
    }
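The fine-grained bakery algorithm above translates almost line for line into Python (an illustrative sketch; names are mine). List-element reads and writes are atomic in CPython, which matches the algorithm's assumption of atomic loads and stores; the pure-Python spin loop is for teaching only. Python's tuple comparison implements exactly the lexicographic `(a, b) > (c, d)` relation defined two slides back.

```python
import threading
import time

def bakery_counter(n=3, nincr=30):
    """n threads increment a shared total under the bakery algorithm."""
    turn = [0] * (n + 1)          # turn[1..n]; index 0 unused, stays 0... wait
    turn[0] = 0                   # keep slot 0 at 0 so it never blocks anyone
    total = [0]

    def cs(i):
        for _ in range(n and nincr):
            turn[i] = 1                        # entry: "choosing" a number
            turn[i] = max(turn) + 1            # take one more than the max
            for j in range(1, n + 1):
                if j == i:
                    continue
                # wait while j holds a smaller (number, id) pair
                while turn[j] != 0 and (turn[i], i) > (turn[j], j):
                    time.sleep(0)              # busy-wait, yielding the CPU
            total[0] += 1                      # critical section
            turn[i] = 0                        # exit protocol
            # noncritical section omitted

    threads = [threading.Thread(target=cs, args=(i,)) for i in range(1, n + 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]
```

Setting `turn[i] = 1` before computing the maximum is what keeps the algorithm safe even though `max(turn) + 1` is not atomic: two threads may draw the same number, and the `(number, id)` tie-break resolves it.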

Some interesting points re bakery
- Devised by Leslie Lamport in 1974 and improved in 1979
- More intuitive than earlier critical-section solutions
- Allows processes to enter in essentially FIFO order