Slide 1: Multithreaded Programming
ECEN5043 Software Engineering of Multiprogram Systems, University of Colorado
Lectures 5 & 6

Slide 2: The Essence of Multiple Threads
- Two or more processes that work together to perform a task
- Each process is a sequential program with one thread of control
- Processes communicate using shared variables
- Processes need to synchronize with each other in one of two ways:
  - Mutual exclusion
  - Condition synchronization

Slide 3: Opportunities & Challenges
- What kinds of processes to use
- How many processes to use
- How the processes should interact
- The key to developing a correct program is ensuring that the process interaction is properly synchronized

Slide 4: Focus
- Programs in the most common languages
- Explicit concurrency, communication, and synchronization
- Specify the actions of each process and how they communicate and synchronize
- Asynchronous process execution
- Shared memory
- Single CPU and operating system

Slide 5: The Multiprocessing Monkey Wrench
- The solutions we address this semester presume a single CPU, so the concurrent processes share coherent memory.
- A multiprocessor environment with shared memory introduces cache and memory consistency problems and the overhead to manage them.
- A distributed-memory multiprocessor/multicomputer/network environment has additional issues of latency, bandwidth, etc.
- This semester we focus on the first case.

Slide 6: Recall
- A process is a sequential program that has its own thread of control when executed.
- A concurrent program contains multiple processes, so every concurrent program has multiple threads of control.
- "Multithreaded" usually means a program contains more processes than there are processors to execute them.
- A multithreaded software system manages multiple independent activities.

Slide 7: Why Write a Program as Multithreaded?
- To be cool (the wrong reason)
- Sometimes it is easier to organize the code and data as a collection of processes than as a single huge sequential program
- Each process can be scheduled and executed independently
- Other applications can continue to execute "in the background"

Slide 8: Many Applications, Five Basic Paradigms
- Iterative parallelism
- Recursive parallelism
- Producers and consumers (pipelines)
- Clients and servers
- Interacting peers

Slide 9: Iterative Parallelism
- Example?
- Several, often identical, processes
- Each contains one or more loops, so each process is iterative
- The processes work together to solve a single problem
- They communicate and synchronize using shared variables
- The computations are independent -- disjoint write sets

Slide 10: Recursive Parallelism
- One or more independent recursive procedures
- Recursion is the dual of iteration
- The procedure calls are independent -- each works on different parts of the shared data
- Often used in imperative languages for:
  - Divide-and-conquer algorithms
  - Backtracking algorithms (e.g., tree traversal)
- Used to solve combinatorial problems such as sorting, scheduling, and game playing
- If too many recursive calls are spawned, we prune the recursion.

Slide 11: Producers and Consumers
- One-way communication between processes
- Often organized into a pipeline through which information flows
- Each process is a filter that consumes the output of its predecessor and produces output for its successor
- That is, a producer process computes and outputs a stream of results
- Sometimes implemented with a shared bounded buffer as the pipe, e.g., Unix stdin and stdout
- Synchronization primitives: flags, semaphores, monitors
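
A pipe of this kind is typically built from a mutex and two condition variables guarding a bounded buffer. Below is a minimal C/pthreads sketch of that idea; the names bounded_buf, buf_put, and buf_get are illustrative, not from the slides.

    #include <pthread.h>

    #define CAP 8                      /* capacity of the pipe */

    typedef struct {
        int items[CAP];
        int count, head, tail;
        pthread_mutex_t lock;
        pthread_cond_t not_full, not_empty;
    } bounded_buf;

    void buf_init(bounded_buf *b) {
        b->count = b->head = b->tail = 0;
        pthread_mutex_init(&b->lock, NULL);
        pthread_cond_init(&b->not_full, NULL);
        pthread_cond_init(&b->not_empty, NULL);
    }

    /* Producer side: wait until there is room, then append one item. */
    void buf_put(bounded_buf *b, int v) {
        pthread_mutex_lock(&b->lock);
        while (b->count == CAP)                       /* condition synchronization */
            pthread_cond_wait(&b->not_full, &b->lock);
        b->items[b->tail] = v;
        b->tail = (b->tail + 1) % CAP;
        b->count++;
        pthread_cond_signal(&b->not_empty);
        pthread_mutex_unlock(&b->lock);
    }

    /* Consumer side: wait until there is data, then remove one item. */
    int buf_get(bounded_buf *b) {
        pthread_mutex_lock(&b->lock);
        while (b->count == 0)
            pthread_cond_wait(&b->not_empty, &b->lock);
        int v = b->items[b->head];
        b->head = (b->head + 1) % CAP;
        b->count--;
        pthread_cond_signal(&b->not_full);
        pthread_mutex_unlock(&b->lock);
        return v;
    }

The producer process calls buf_put for each result it computes and the consumer calls buf_get; the while loops around pthread_cond_wait provide the condition synchronization, and the mutex provides the mutual exclusion.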

Slide 12: Clients and Servers
- The dominant interaction pattern in distributed systems (see next semester)
- A client process requests a service and waits for the reply
- A server waits for requests and then acts upon them
- A server can be implemented:
  - as a single process that handles one client at a time, or
  - as a multithreaded server that services requests concurrently
- Clients and servers are the concurrent-programming generalization of procedures and procedure calls

Slide 13: Interacting Peers
- Occurs in distributed programs
- Several processes execute the same code and exchange messages to accomplish a task
- Used to implement:
  - distributed parallel programs, including distributed versions of iterative parallelism
  - decentralized decision making

Slide 14: Summary
- Concurrent programming paradigms on a single processor:
  - Iterative parallelism
  - Recursive parallelism
  - Producers and consumers
- Producers and consumers have no analog in sequential programs, because producers and consumers are, by definition, independent processes with their own threads and their own rates of progress.

Slide 15: Shared-Variable Programming
- Shared variables are frowned upon in sequential programs, although convenient
- They are absolutely necessary in concurrent programs
- Processes must communicate in order to work together

Slide 16: The Need to Communicate
- Communication creates the need for synchronization:
  - Mutual exclusion -- processes must not access shared data at the same time
  - Condition synchronization -- one process needs to wait for another

Slide 17: Some Terms
- State -- the values of the program variables at a point in time, both explicit and implicit. Each process in a program executes independently and, as it executes, examines and alters the program state.
- Atomic actions -- a process executes sequential statements; each statement is implemented at the machine level by one or more atomic actions that indivisibly examine or change the program state.
- Concurrent program execution interleaves the sequences of atomic actions. A history is a trace of one particular interleaving.

Slide 18: Terms -- Continued
- At any point, the next atomic action in any ONE of the processes could be the next action in the history. So there are many ways the actions can be interleaved, and conditional statements allow even the actions themselves to vary from one history to another.
- The role of synchronization is to constrain the possible histories to those that are desirable.
- Mutual exclusion combines atomic actions into sequences of actions, called critical sections, where the entire section appears to be atomic.

Slide 19: Terms -- Continued Further
- A property of a program is an attribute that is true of every possible history.
- Safety -- the program never enters a bad state.
- Liveness -- the program eventually enters a good state.

Slide 20: How Can We Verify?
- How do we demonstrate that a program satisfies a property?
- A dynamic test considers just one possible history; a limited number of tests is unlikely to demonstrate the absence of bad histories.
- Operational reasoning -- exhaustive case analysis
- Assertional reasoning -- abstract analysis in which atomic actions are treated as predicate transformers

Slide 21: Assertional Reasoning
- Uses assertions to characterize sets of states
- Allows a compact representation of states and their transformations
- More on this later in the course

Slide 22: Warning
- We must be wary of dynamic testing alone: it can reveal only the presence of errors, not their absence.
- Concurrent programs are difficult to test and debug:
  - It is difficult (often impossible) to stop all processes at once in order to examine their state.
  - Each execution will, in general, produce a different history.

Slide 23: Example 1a -- Pattern in a File
Find all instances of a pattern in a file. Consider the sequential version:

    string line;
    read a line of input from stdin into line;
    while (!EOF) {
        look for pattern in line;
        if (pattern is in line)
            write line;
        read next line of input into line;
    }

Slide 24: Example 1b -- Concurrent, but Correct?

    string line;
    read a line of input from stdin into line;
    while (!EOF) {
        co look for pattern in line;
           if (pattern is in line)
               write line;
        // read next line of input into line;
        oc;
    }

Slide 25: Example 1c -- Different Variables

    string line1, line2;
    read a line of input from stdin into line1;
    while (!EOF) {
        co look for pattern in line1;
           if (pattern is in line1)
               write line1;
        // read next line of input into line2;
        oc;
    }

Slide 26: Example 1d -- Copy the Line

    string line1, line2;
    read a line of input from stdin into line1;
    while (!EOF) {
        co look for pattern in line1;
           if (pattern is in line1)
               write line1;
        // read next line of input into line2;
        oc;
        line1 = line2;
    }

Slide 27: co Inside while vs. while Inside co?
- Is it possible to get the loop inside the co brackets so that process creation occurs only once?
- Yes. Put a while loop inside each of the two processes.

Slide 28: Both Processes Inside the co Brackets

    string buffer;           # shared, holds one line
    bool done = false;

    co  # process 1: find patterns
        string line1;
        while (true) {
            wait for buffer to be full or done to be true;
            if (done) break;
            line1 = buffer;
            signal that buffer is empty;
            look for pattern in line1;
            if (pattern is in line1)
                write line1;
        }
    //  # process 2: read new lines
        string line2;
        while (true) {
            read next line of input into line2;
            if (EOF) { done = true; break; }
            wait for buffer to be empty;
            buffer = line2;
            signal that buffer is full;
        }
    oc;

Slide 29: Synchronization
- Synchronization is required for correct answers whenever processes both read and write shared variables.
- Sometimes groups of instructions must be treated as if they were atomic -- critical sections.
- The technique of double-checking before updating a shared variable is useful (even though it sounds strange).
- Example of double-checking -- next.

Slide 30: Example 2 -- Sequential
Find the maximum value in an array:

    int m = 0;
    for [i = 0 to n-1] {
        if (a[i] > m)
            m = a[i];
    }

If we try to examine every array element in parallel, all processes will try to update m, and the final value will simply be whatever was assigned by the last process to update m.

Slide 31: Example 2b -- Concurrent with Double-Check
- It is OK to do the comparisons in parallel because they are read-only actions.
- But it is necessary to ensure that when the program terminates, m is the maximum. :-)

    int m = 0;
    co [i = 0 to n-1]
        if (a[i] > m)              # check
            < if (a[i] > m)        # recheck, only if the check above was true
                  m = a[i]; >
    oc;

Angle brackets < > indicate an atomic action.
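
To make the double-check concrete, here is one way it might look with POSIX threads, using a mutex to play the role of the angle brackets. This is only a sketch: find_max and m_lock are made-up names, and the unlocked first read of m is, strictly speaking, a data race in C (a production version would use an atomic load); it simply mirrors the slide's cheap, read-only first check.

    #include <pthread.h>

    #define N 1000

    int a[N];                                   /* the data                */
    int m = 0;                                  /* running maximum, shared */
    pthread_mutex_t m_lock = PTHREAD_MUTEX_INITIALIZER;

    /* One worker per stripe of the array; arg points to {lo, hi}. */
    void *find_max(void *arg) {
        int *range = arg;
        for (int i = range[0]; i < range[1]; i++) {
            if (a[i] > m) {                     /* check: cheap, unlocked, may be stale */
                pthread_mutex_lock(&m_lock);
                if (a[i] > m)                   /* recheck while holding the lock */
                    m = a[i];
                pthread_mutex_unlock(&m_lock);
            }
        }
        return NULL;
    }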

Slide 32: Why Synchronize?
- If processes do not interact, all interleavings are acceptable.
- If processes do interact, only some interleavings are acceptable.
- The role of synchronization is to prevent the unacceptable interleavings:
  - Combine fine-grained atomic actions into coarse-grained composite actions (we call this... what?)
  - Delay process execution until the program state satisfies some predicate

Slide 33: Notation for Synchronization
- General form:                      < await (B) S; >
- Mutual exclusion only:             < S; >
- Conditional synchronization only:  < await (B); >
  This is equivalent to:             while (not B);
  (note the ending empty statement, i.e., the semicolon)
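
When the condition is a single shared flag that some other process sets atomically, the busy-wait form above is all that is needed. A tiny C11 sketch (ready, wait_until_ready, and announce_ready are illustrative names):

    #include <stdatomic.h>
    #include <stdbool.h>

    atomic_bool ready = false;

    /* < await (ready); > for a flag that only ever goes false -> true. */
    void wait_until_ready(void) {
        while (!atomic_load(&ready))
            ;                        /* spin: the empty statement from the slide */
    }

    void announce_ready(void) {
        atomic_store(&ready, true);  /* performed by some other process */
    }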

Slide 34: Unconditional Atomic Actions
- An unconditional atomic action does not contain a delay condition.
- It can execute immediately, as long as it executes atomically (not interleaved).
- Examples:
  - individual machine instructions
  - expressions we place in angle brackets
  - await statements in which the guard condition is the constant true or is omitted

Slide 35: Conditional Atomic Actions
- A conditional atomic action is an await statement with a guard condition.
- If the condition is false in a given process, it can only become true through the action of other processes.
- How long will the process wait if it has a conditional atomic action?

Slide 36: Locks and Barriers

Slide 37: How to Implement Synchronization
- To implement mutual exclusion:
  - Implement atomic actions in software, using locks to protect critical sections.
  - Needed in most concurrent programs.
- To implement condition synchronization:
  - Implement a synchronization point that all processes must reach before any process is allowed to proceed -- a barrier.
  - Needed in many parallel programs -- why?
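
Barriers are needed in many parallel programs because iterative algorithms usually proceed in phases, and no process should read the results of a phase its peers have not yet finished. A minimal counting barrier can be sketched from a mutex and a condition variable (POSIX also provides pthread_barrier_t directly); the type and function names below are illustrative.

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t all_here;
        int needed;     /* number of threads that must arrive   */
        int arrived;    /* how many have arrived in this round  */
        int phase;      /* distinguishes successive barrier uses */
    } cnt_barrier;

    void barrier_init(cnt_barrier *b, int n) {
        pthread_mutex_init(&b->lock, NULL);
        pthread_cond_init(&b->all_here, NULL);
        b->needed = n;
        b->arrived = 0;
        b->phase = 0;
    }

    /* No caller returns until all `needed` threads have called. */
    void barrier_wait(cnt_barrier *b) {
        pthread_mutex_lock(&b->lock);
        int my_phase = b->phase;
        if (++b->arrived == b->needed) {        /* last one in: release everybody */
            b->arrived = 0;
            b->phase++;
            pthread_cond_broadcast(&b->all_here);
        } else {
            while (b->phase == my_phase)        /* wait for this round to finish */
                pthread_cond_wait(&b->all_here, &b->lock);
        }
        pthread_mutex_unlock(&b->lock);
    }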

Slide 38: Bad States, Good States
- Mutual exclusion -- at most one process at a time is executing its critical section.
  - Its bad state is one in which two processes are in their critical sections at the same time.
- Absence of deadlock ("livelock") -- if two or more processes are trying to enter their critical sections, at least one will succeed.
  - Its bad state is one in which all the processes are waiting to enter but none is able to do so.
- Two more on the next slide.

Slide 39: Bad States -- Continued
- Absence of unnecessary delay -- if a process is trying to enter its critical section and the other processes are executing their noncritical sections or have terminated, the first process is not prevented from entering its critical section.
  - The bad state is one in which the one process that wants to enter cannot do so, even though no other process is in its critical section.
- Eventual entry -- a process that is attempting to enter its critical section will eventually succeed.
  - This is a liveness property; it depends on the scheduling policy.

Slide 40: A Logical Property for Mutual Exclusion
- When process1 is in its critical section, set property1 true.
- Similarly for process2, where property2 is set true.
- The bad state is one in which property1 and property2 are both true at the same time.
- Therefore we want every state to satisfy the negation of the bad state:
      MUTEX: NOT(property1 AND property2)
- This needs to be a global invariant: true in the initial state and after each event that affects property1 or property2.

Slide 41: Coarse-Grain Solution

    bool property1 = false, property2 = false;
    # MUTEX: NOT(property1 AND property2) -- global invariant

    process process1 {
        while (true) {
            < await (NOT property2) property1 = true; >   # entry protocol
            critical section;
            property1 = false;                            # exit protocol
            noncritical section;
        }
    }

    process process2 {
        while (true) {
            < await (NOT property1) property2 = true; >   # entry protocol
            critical section;
            property2 = false;                            # exit protocol
            noncritical section;
        }
    }

Slide 42: Does It Avoid the Problems?
- Deadlock: if each process were blocked in its entry protocol, then both property1 and property2 would have to be true. But both are false at that point in the code.
- Unnecessary delay: one process blocks only if the other one is in its critical section.
- Liveness -- see the next slide.

Slide 43: Is Liveness Guaranteed?
- The liveness property: a process trying to enter its critical section is eventually able to do so.
- If process1 is trying to enter but cannot, then property2 is true; therefore process2 is in its critical section. Process2 eventually exits, making property2 false, which allows process1's guard to become true.
- If process1 is still not allowed entry, it is because the scheduler is unfair or because process2 again gains entry (can this happen infinitely often?).
- A strongly fair scheduler is required, which is not likely in practice.

Slide 44: Three "Spin Lock" Solutions
- A "spin lock" solution uses busy-waiting.
- Spin locks ensure mutual exclusion, are deadlock-free, and avoid unnecessary delay.
- They require a fairly strong scheduler to ensure eventual entry.
- They do not control the order in which delayed processes enter their critical sections when two or more are trying.
- Three fair solutions to the critical section problem:
  - Tie-breaker algorithm
  - Ticket algorithm
  - Bakery algorithm
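
For contrast with the three fair algorithms, the basic busy-waiting lock can be sketched from an atomic exchange (test-and-set). This C11 version spins on a plain read before attempting the exchange (test-and-test-and-set); spin_lock and spin_unlock are illustrative names.

    #include <stdatomic.h>
    #include <stdbool.h>

    atomic_bool locked = false;

    /* Entry protocol: loop until we are the one that flips locked from 0 to 1. */
    void spin_lock(void) {
        for (;;) {
            while (atomic_load(&locked))            /* test: spin on a read      */
                ;
            if (!atomic_exchange(&locked, true))    /* test-and-set: try to grab */
                return;
        }
    }

    /* Exit protocol. */
    void spin_unlock(void) {
        atomic_store(&locked, false);
    }

It ensures mutual exclusion and is deadlock-free, but which spinning thread wins next is left entirely to the hardware and scheduler -- exactly the fairness gap that the tie-breaker, ticket, and bakery algorithms close.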

Slide 45: Tie-Breaker
- In a typical entry protocol, when both processes are attempting to enter their critical sections, there is no control over which will succeed.
- To make entry fair, the processes should take turns.
- Peterson's algorithm (the tie-breaker) uses an additional variable to indicate which process was the last to start its entry protocol.
- Consider the "coarse-grained" program, but...
- ...implement the conditional atomic actions in the entry protocol using only simple variables and sequential statements.

Slide 46: Tie-Breaker Implementation
- We could implement the await statement by first looping until the guard is true and then executing the assignment. (Sound familiar?)
- But this pair of actions is not executed atomically, so it does not ensure mutual exclusion.
- If the two are reversed (assign first, then loop), deadlock can result. (Remember?)
- Let last be an integer variable that indicates which process was the last to start executing its entry protocol.
- If both are trying to enter (property1 and property2 are both true), the last one to start its entry protocol delays.
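
A two-process tie-breaker along these lines can be sketched in C11. Here wants plays the role of property1/property2 and last records who started its entry protocol most recently; all names are illustrative. Sequentially consistent atomics are used deliberately, since with ordinary variables the compiler or CPU could reorder the stores and reads and break the protocol.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Shared state for two threads with ids 0 and 1. */
    atomic_bool wants[2] = { false, false };   /* "property1"/"property2"             */
    atomic_int  last = 0;                      /* who started its entry protocol last */

    void enter_cs(int self) {
        int other = 1 - self;
        atomic_store(&wants[self], true);      /* announce interest           */
        atomic_store(&last, self);             /* I am the most recent to try */
        /* Delay while the other wants in AND I was the last to arrive. */
        while (atomic_load(&wants[other]) && atomic_load(&last) == self)
            ;
    }

    void exit_cs(int self) {
        atomic_store(&wants[self], false);
    }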

Slide 47: Tie-Breaker Implementation for n Processes
- If there are n processes, the entry protocol in each process consists of a loop that iterates through n-1 stages.
- If we can ensure that at most one process at a time is allowed to get through all n-1 stages, then at most one at a time can be in its critical section.

Slide 48: n-Process Tie-Breaker Algorithm
See handout (also in the Notes half of this slide). The algorithm is quite complex and hard to understand, but it:
- is livelock-free
- avoids unnecessary delay
- ensures eventual entry (a process delays only if some other process is ahead of it in the entry protocol, and every process eventually exits its critical section)

Slide 49: Ticket Algorithm
- Based on the idea of drawing tickets (numbers) and then waiting turns.
- Needs a number dispenser and a display indicating which numbered customer is being served.
- With one processor, customers are served one at a time in order of arrival.
- (If the ticket algorithm runs for a very long time, incrementing the counter will eventually cause arithmetic overflow.)

50 revised 1/29/2007 ECEN5043 SW Eng of Multiprogram Systems, University of Colorado50 int number = 1, next = 1, turn[1:n] = ([n] 0); process CS[i = 1 to n] { while (true) { turn[i] = FetchAndAdd(number, 1); /* entry */ while (turn[i] != next) skip; critical section; next = next + 1; /* exit protocol */ noncritical section; } What is the global invariant?

Slide 51: Bakery Algorithm
- Downside of the ticket algorithm: without FetchAndAdd, drawing a ticket requires an additional critical section, and ITS solution might not be fair.
- The bakery algorithm is fair and does not require any special machine instructions.
- Ticket algorithm: a customer draws a unique number and waits for that number to become the next one served.
- Bakery algorithm: customers check with each other, rather than with a central next counter, to decide on the order of service.

Slide 52: Bakery Algorithm -- Idea
- To enter its critical section, process CS[i] sets turn[i] to one more than the maximum of all the current values in turn.
- Then CS[i] waits until turn[i] is the smallest nonzero value in turn.
- What is the global invariant, in words (not predicate-logic notation)?

Slide 53: Bakery Algorithm -- Coarse-Grain Version

    int turn[1:n] = ([n] 0);

    process CS[i = 1 to n] {
        while (true) {
            < turn[i] = max(turn[1:n]) + 1; >                  # draw a number
            for [j = 1 to n such-that j != i]
                < await (turn[j] == 0 or turn[i] < turn[j]); > # wait to be smallest
            critical section;
            turn[i] = 0;
            noncritical section;
        }
    }

Slide 54: Bakery Algorithm -- Practicality?
- The coarse-grain version cannot be implemented directly on contemporary machines:
  - The assignment to turn[i] requires computing the maximum of n values.
  - The await statement references a shared variable (turn[j]) twice.
- These could be made atomic by using another critical-section protocol, such as the tie-breaker algorithm (inefficient).
- What to do?

Slide 55: Initial (Wrong) Attempts
- When n processes need to synchronize, it is often useful to first develop a two-process solution and then generalize it.
- Consider this entry protocol for CS1:

      turn1 = turn2 + 1;
      while (turn2 != 0 and turn1 > turn2) skip;

  Flip the 1's and 2's to get the corresponding protocol for CS2.
- Is this a solution? What's the problem?
- The assignments and the while-loop guards are not implemented atomically. So?

Slide 56: Does the Gallant Approach Work?
- If both turn1 and turn2 are 1, let one of the processes proceed and have the other delay. (For example, strengthen the delay loop in CS2 to use turn2 >= turn1.)
- It is still possible for both to enter their critical sections, because of a race condition.
- To avoid the race condition, have each process set its value of turn to 1 (or any nonzero value) at the start of the entry protocol. Only then does it examine the other's value of turn and reset its own:

      turn1 = 1;
      turn1 = turn2 + 1;
      while (turn2 != 0 and turn1 > turn2) skip;

Slide 57: Working, but Not Symmetric
- One process now cannot exit its while loop until the other has finished setting its value of turn, if the other is in the middle of doing so.
- Who has precedence?
- These entry protocols are not quite symmetric. We will rewrite them, but first: let (a, b) and (c, d) be pairs of integers, and define the greater-than relation between such pairs as follows:

      (a, b) > (c, d) == true   if a > c, or if a == c and b > d
                      == false  otherwise

Slide 58: Symmetric --> Easy to Generalize
- Rewrite turn1 > turn2 in CS1 as (turn1, 1) > (turn2, 2).
- Rewrite turn2 >= turn1 in CS2 as (turn2, 2) > (turn1, 1).
- The n-process bakery algorithm is on the next slide.

Slide 59: n-Process Bakery Algorithm

    int turn[1:n] = ([n] 0);

    process CS[i = 1 to n] {
        while (true) {
            turn[i] = 1;                          # announce: choosing a number
            turn[i] = max(turn[1:n]) + 1;         # draw the number
            for [j = 1 to n such-that j != i]
                while (turn[j] != 0 and
                       (turn[i], i) > (turn[j], j)) skip;
            critical section;
            turn[i] = 0;
            noncritical section;
        }
    }
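
A direct transcription of this algorithm into C11 atomics might look as follows; bakery_lock, bakery_unlock, and greater are illustrative names, and N is a compile-time process count. As with the slide's version, the correctness argument assumes the shared accesses are sequentially consistent.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define N 4                     /* number of processes, ids 0..N-1 */

    atomic_int turn[N];             /* 0 = not trying; otherwise the drawn number */

    /* The lexicographic tie-break from slide 57: (a,i) > (b,j). */
    static bool greater(int a, int i, int b, int j) {
        return a > b || (a == b && i > j);
    }

    void bakery_lock(int self) {
        atomic_store(&turn[self], 1);             /* "I am choosing a number" */
        int max = 0;
        for (int j = 0; j < N; j++) {
            int t = atomic_load(&turn[j]);
            if (t > max) max = t;
        }
        int my = max + 1;
        atomic_store(&turn[self], my);            /* one more than the max seen */

        for (int j = 0; j < N; j++) {
            if (j == self) continue;
            /* Wait while j holds a number and is ahead of us. */
            while (atomic_load(&turn[j]) != 0 &&
                   greater(my, self, atomic_load(&turn[j]), j))
                ;
        }
    }

    void bakery_unlock(int self) {
        atomic_store(&turn[self], 0);
    }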

Slide 60: Some Interesting Points About the Bakery Algorithm
- Devised by Leslie Lamport in 1974 and improved in 1979.
- More intuitive than earlier critical-section solutions.
- Allows processes to enter in essentially FIFO order.

