1 Concurrency in Shared Memory Systems Synchronization and Mutual Exclusion

2 Processes, Threads, Concurrency Traditional processes are sequential: one instruction at a time is executed. Multithreaded processes may have several sequential threads that can execute concurrently. Processes (threads) are concurrent if their executions overlap – start time of one occurs before finish time of another.

3 Concurrent Execution On a uniprocessor, concurrency occurs when the CPU is switched from one process to another, so the instructions of several threads are interleaved (alternate). On a multiprocessor, execution of instructions in concurrent threads may be overlapped (occur at the same time) if the threads are running on separate processors.

4 Concurrent Execution An interrupt, followed by a context switch, can take place between any two instructions. Hence the pattern of instruction overlapping and interleaving is unpredictable. Processes and threads execute asynchronously – we cannot predict if event a in process i will occur before event b in process j.

5 Sharing and Concurrency System resources (files, devices, even memory) are shared by processes, threads, and the OS. Uncontrolled access to shared entities can cause data integrity problems. Example: suppose two threads (1 and 2) have access to a shared (global) variable "balance", which represents a bank account. Each thread has its own private (local) variable "withdrawal_i", where i is the thread number.

6 Example Let balance = 100, withdrawal_1 = 50, and withdrawal_2 = 75. Thread i will execute the following algorithm:
   if (balance >= withdrawal_i)
       balance = balance - withdrawal_i
   else
       // print "Can't overdraw account!"
If thread 1 executes first, balance will be 50 and thread 2 can't withdraw funds. If thread 2 executes first, balance will be 25 and thread 1 can't withdraw funds.
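
A minimal C/pthreads sketch of the same check-then-subtract logic (the variable and function names are illustrative, not from the slides). With no synchronization, both threads can pass the balance check before either one subtracts, so the final balance may be 50, 25, or even -25:

```c
#include <pthread.h>
#include <stdio.h>

/* Shared account balance; no synchronization yet, so access is racy. */
static int balance = 100;

static void *withdraw(void *arg) {
    int amount = *(int *)arg;           /* each thread's private withdrawal   */
    if (balance >= amount)              /* check ...                          */
        balance = balance - amount;     /* ... then subtract (not atomic)     */
    else
        printf("Can't overdraw account!\n");
    return NULL;
}

int main(void) {
    int w1 = 50, w2 = 75;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, withdraw, &w1);
    pthread_create(&t2, NULL, withdraw, &w2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance = %d\n", balance);  /* may be 50, 25, or even -25 */
    return 0;
}
```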

7 But --- what if the two threads execute concurrently instead of sequentially? Break the statements down into machine-level operations:
   if (balance >= withdrawal_i)
       balance = balance - withdrawal_i
becomes
   move balance to register
   compare register to withdrawal_i
   branch if less-than
   register = register - withdrawal_i
   store register contents in balance

8 Example – Multiprocessor (a possible instruction sequence showing interleaved execution; the numbers give the global order)
   Thread 1:
   (1) move balance to register_1                  (register_1 = 100)
   (3) compare register_1 to withdrawal_1
   (5) register_1 = register_1 - withdrawal_1      (100 - 50)
   (6) store register_1 in balance                 (balance = 50)
   Thread 2:
   (2) move balance to register_2                  (register_2 = 100)
   (4) compare register_2 to withdrawal_2
   (7) register_2 = register_2 - withdrawal_2      (100 - 75)
   (8) store register_2 in balance                 (balance = 25)
Both compares see balance = 100, so both withdrawals go through, yet the final balance (25) reflects only thread 2's update; thread 1's withdrawal is lost.

9 Example – Uniprocessor (a possible instruction sequence showing interleaved execution)
   Thread 1:
       move balance to register                (register = 100)
       T1's time slice expires; its state is saved
   Thread 2:
       move balance to register
       balance >= withdrawal_2
       balance = balance - withdrawal_2        (100 - 75 = 25)
   Thread 1:
       T1 is re-scheduled; its state is restored (register = 100)
       balance = balance - withdrawal_1        (100 - 50)
   Result: balance = 50; thread 2's withdrawal is lost.

10 Race Conditions The previous examples illustrate a race condition (data race): an undesirable condition that exists when several processes access shared data, and
   – at least one access is a write, and
   – the accesses are not mutually exclusive.
Race conditions can lead to inconsistent results.

11 Mutual Exclusion Mutual exclusion forces serial resource access as opposed to concurrent access. When one thread locks a critical resource, no other thread can access it until the lock owner releases the resource. Critical section (CS): code that accesses shared resources. Mutual exclusion guarantees that only one process/thread at a time can execute its critical section, with respect to a given variable.

12 Mutual Exclusion Requirements It must ensure that only one process/thread at a time can access a shared resource. In addition, a good solution will ensure that
   – if no thread is in the CS, a thread that wants to execute its CS must be allowed to do so
   – when two or more threads want to enter their CSs, the decision about which one enters cannot be postponed indefinitely
   – every thread gets a chance to execute its critical section (no starvation)

13 Solution Model
   Begin_mutual_exclusion   /* some mutex primitive */
   execute critical section
   End_mutual_exclusion     /* some mutex primitive */
The problem: how do we implement the mutex primitives?
   – Busy-wait solutions (e.g., the test-and-set operation, spinlocks of various sorts, Peterson's algorithm)
   – Semaphores (usually an OS feature; block the waiting process)
   – Monitors (a language feature, e.g. Java)
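
As one concrete illustration of the busy-wait approach, here is a minimal spinlock sketch in C11, where atomic_flag_test_and_set plays the role of the hardware test-and-set operation; the primitive names are hypothetical, chosen to match the model above:

```c
#include <stdatomic.h>

/* A busy-wait (spinlock) realization of the two mutex primitives. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

static void Begin_mutual_exclusion(void) {
    /* test-and-set returns the previous value; spin while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;  /* busy wait */
}

static void End_mutual_exclusion(void) {
    atomic_flag_clear(&lock);  /* release the lock */
}
```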

14 Semaphores Definition: an integer variable on which processes can perform two indivisible operations, P( ) and V( ), + initialization. Each semaphore has a wait queue associated with it. Semaphores are protected by the operating system.

15 Semaphores Binary semaphore: only values are 1 and 0 Traditional semaphore: may be initialized to any non-negative value. Counting semaphores: P & V operations may reduce semaphore values below 0, in which case the negative value records the number of blocked processes. (See CS 490 textbook)

16 Semaphores Are used to synchronize and coordinate processes and/or threads Calling the P (wait) operation may cause a process to block Calling the V (signal) operation never causes a process to block, but may wake a process that has been blocked by a previous P operation.

17 High-level Algorithms Assume S is a semaphore (it must be initialized according to problem requirements).
   P(S): if S >= 1
            then S = S - 1
            else block the process on S's queue
   V(S): if some process is blocked on the queue for S
            then unblock a process
            else S = S + 1
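
To make the high-level algorithms concrete, here is one possible user-level realization of P and V in C, using a pthreads mutex and condition variable in place of an OS-managed wait queue. This is only a sketch of the idea, not how any particular OS implements semaphores; the type and function names are invented for the example:

```c
#include <pthread.h>

/* The "queue" is the set of threads blocked on the condition variable. */
typedef struct {
    int             value;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} semaphore;

void semaphore_init(semaphore *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void P(semaphore *s) {                  /* wait */
    pthread_mutex_lock(&s->lock);
    while (s->value < 1)                /* block while no permits remain      */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value = s->value - 1;
    pthread_mutex_unlock(&s->lock);
}

void V(semaphore *s) {                  /* signal */
    pthread_mutex_lock(&s->lock);
    s->value = s->value + 1;
    pthread_cond_signal(&s->nonzero);   /* unblock one waiter, if any         */
    pthread_mutex_unlock(&s->lock);
}
```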

18 Usage – Mutual Exclusion Using a semaphore to enforce mutual exclusion:
   P(mutex)     // mutex initially = 1
   execute CS
   V(mutex)
Each process that uses the shared resource must first call P to wait until no other process is in the critical section, and must then call V to release the critical section.

19 Bank Problem Revisited Semaphore S = 1.
   Thread 1:
       P(S)
       move balance to register_1
       compare register_1 to withdrawal_1
       register_1 = register_1 - withdrawal_1
       store register_1 in balance
       V(S)
   Thread 2:
       P(S)
       move balance to register_2
       compare register_2 to withdrawal_2
       register_2 = register_2 - withdrawal_2
       store register_2 in balance
       V(S)
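
The same scenario expressed with POSIX semaphores in C is sketched below; sem_wait and sem_post correspond to P and V, and the thread setup mirrors the earlier race example (names are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int   balance = 100;
static sem_t S;                       /* binary semaphore guarding balance   */

static void *withdraw(void *arg) {
    int amount = *(int *)arg;
    sem_wait(&S);                     /* P(S): enter the critical section    */
    if (balance >= amount)
        balance = balance - amount;
    else
        printf("Can't overdraw account!\n");
    sem_post(&S);                     /* V(S): leave the critical section    */
    return NULL;
}

int main(void) {
    int w1 = 50, w2 = 75;
    pthread_t t1, t2;
    sem_init(&S, 0, 1);               /* S = 1 */
    pthread_create(&t1, NULL, withdraw, &w1);
    pthread_create(&t2, NULL, withdraw, &w2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance = %d\n", balance);  /* always 50 or 25, never -25 */
    return 0;
}
```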

20 Example – Uniprocessor
   Thread 1:
       P(S)                                 S is decremented: S = 0; T1 continues to execute
       move balance to register             (register = 100)
       T1's time slice expires; its state is saved
   Thread 2:
       P(S)                                 since S = 0, T2 is blocked
   Thread 1:
       T1 is re-scheduled; its state is restored (register = 100)
       balance = balance - withdrawal_1     (100 - 50)
       V(S)                                 Thread 2 returns to the run state; S remains 0
   Thread 2 (resumes some time after T1 executes V(S)):
       move balance to register             (50)
       balance >= withdrawal_2?             since !(50 >= 75), T2 does not make the withdrawal
       V(S)                                 since no thread is waiting, S is set back to 1

21 P and V Must Be Indivisible Semaphore operations must be indivisible, or atomic. Once the OS begins either a P or a V operation, it cannot be interrupted until the operation is completed.

22 P and V Must Be Indivisible The P operation must be indivisible; otherwise there is no guarantee that two processes won't test S at the "same" time and both find it equal to 1.
   P(S): if S >= 1 then S = S - 1 else block the process on S's queue
Two V operations executed at the same time could unblock two processes, leading to two processes in their critical sections concurrently.
   V(S): if some process is blocked on the queue for S then unblock a process else S = S + 1

23 A process's complete passage through its critical section:
   P(S): if S >= 1
             then S = S - 1
             else block the process on S's queue
   execute critical section
   V(S): if processes are blocked on the queue for S
             then unblock a process
             else S = S + 1

24 Semaphore Usage – Event Wait (synchronization that isn't mutual exclusion) Suppose a process P2 wants to wait on an event of some sort (call it A) which is to be performed by another process P1. Initialize a shared semaphore to 0. By executing a wait (P) on the semaphore, P2 will wait until P1 executes event A and signals, using the V operation.

25 Event Wait – Example
   semaphore signal = 0;
   Process 1:
       ...
       execute event A
       V(signal)
   Process 2:
       ...
       P(signal)
       ...
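
A small C sketch of the event-wait pattern using a POSIX semaphore initialized to 0 (the thread names and messages are illustrative); P2 blocks in sem_wait until P1 performs event A and posts:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t event_a;                  /* initialized to 0: "A has not happened yet" */

static void *process1(void *arg) {
    (void)arg;
    printf("P1: executing event A\n"); /* the event P2 is waiting for        */
    sem_post(&event_a);                /* V(signal): announce that A happened */
    return NULL;
}

static void *process2(void *arg) {
    (void)arg;
    sem_wait(&event_a);                /* P(signal): blocks until P1 posts   */
    printf("P2: continuing after event A\n");
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&event_a, 0, 0);          /* semaphore signal = 0 */
    pthread_create(&t2, NULL, process2, NULL);
    pthread_create(&t1, NULL, process1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```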

26 Semaphores Are Not Perfect Programmer must know something about other processes using the semaphore Must use semaphores carefully (be sure to use them when needed; don’t leave out a V, etc.) Hard to prove program correctness when using semaphores.

27 Other Synchronization Problems (in addition to mutual exclusion)
   – Dining Philosophers: resource deadlock
   – Producer-consumer: buffering (of messages, input data, etc.)
   – Readers-writers: database or file sharing, with reader's-priority and writer's-priority variants

28 Producer-Consumer Producer processes and consumer processes share a (usually finite) pool of buffers. Producers add data to the pool (a queue of records/objects). Consumers remove data in FIFO order.

29 Producer-Consumer Requirements The processes are asynchronous. A solution must ensure producers don’t deposit data if pool is full and consumers don’t take data if pool is empty. Access to buffer pool must be mutually exclusive since multiple consumers (or producers) may try to access the pool simultaneously.

30 Bounded Buffer P/C Algorithm
   Initialization: s = 1; n = 0; e = sizeofbuffer;
   Producer:
       while (true)
           produce v;
           P(e);        // wait for an empty buffer slot
           P(s);        // wait for buffer pool access
           append(v);
           V(s);        // release the buffer pool
           V(n);        // signal a full buffer
   Consumer:
       while (true)
           P(n);        // wait for a full buffer
           P(s);        // wait for buffer pool access
           w := take();
           V(s);        // release the buffer pool
           V(e);        // signal an empty buffer slot
           consume(w);
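
A possible C translation of the bounded-buffer algorithm using POSIX semaphores, assuming a circular buffer of integers of size BUF_SIZE; the buffer representation and function names are assumptions made for this sketch:

```c
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8                     /* "sizeofbuffer" from the slide      */

static int   buffer[BUF_SIZE];
static int   in = 0, out = 0;          /* FIFO insert/remove positions       */
static sem_t s;                        /* mutex on the buffer, init 1        */
static sem_t n;                        /* full slots,  init 0                */
static sem_t e;                        /* empty slots, init BUF_SIZE         */

void buffer_init(void) {
    sem_init(&s, 0, 1);
    sem_init(&n, 0, 0);
    sem_init(&e, 0, BUF_SIZE);
}

void producer_deposit(int v) {
    sem_wait(&e);                      /* P(e): wait for an empty slot       */
    sem_wait(&s);                      /* P(s): lock the buffer              */
    buffer[in] = v;                    /* append(v)                          */
    in = (in + 1) % BUF_SIZE;
    sem_post(&s);                      /* V(s): unlock the buffer            */
    sem_post(&n);                      /* V(n): signal a full slot           */
}

int consumer_take(void) {
    int w;
    sem_wait(&n);                      /* P(n): wait for a full slot         */
    sem_wait(&s);                      /* P(s): lock the buffer              */
    w = buffer[out];                   /* w := take()                        */
    out = (out + 1) % BUF_SIZE;
    sem_post(&s);                      /* V(s): unlock the buffer            */
    sem_post(&e);                      /* V(e): signal an empty slot         */
    return w;
}
```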

31 Readers and Writers Problem Characteristics: –concurrent processes access shared data area (files, block of memory, set of registers) –some processes only read information, others write (modify and add) information Restrictions: –Multiple readers may read concurrently, but when a writer is writing, there should be no other writers or readers.

32 Compare to Prod/Cons Differences between Readers/Writers (R/W) and Producer/Consumer (P/C): –Data in P/C is ordered - placed into buffer and retrieved according to FIFO discipline. –In R/W, same data may be read many times by many readers, or data may be written by writer and changed before any reader reads.

33
   // Initialization code (done only once)
   integer readcount = 0;
   semaphore x = 1, wsem = 1;

   procedure reader;
   begin
       repeat
           P(x);
           readcount := readcount + 1;
           if readcount = 1 then P(wsem);
           V(x);
           read data;
           P(x);
           readcount := readcount - 1;
           if readcount = 0 then V(wsem);
           V(x);
       forever
   end;

   procedure writer;
   begin
       repeat
           P(wsem);
           write data;
           V(wsem);
       forever
   end;
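
For reference, a C rendering of the same algorithm with POSIX semaphores is sketched below; the function names are illustrative, and the read/write steps are left as comments:

```c
#include <pthread.h>
#include <semaphore.h>

static int   readcount = 0;            /* number of readers currently reading          */
static sem_t x;                        /* protects readcount, init 1                   */
static sem_t wsem;                     /* excludes writers (held by the readers), init 1 */

void rw_init(void) {
    sem_init(&x, 0, 1);
    sem_init(&wsem, 0, 1);
}

void reader(void) {
    sem_wait(&x);
    readcount = readcount + 1;
    if (readcount == 1)                /* first reader locks out writers    */
        sem_wait(&wsem);
    sem_post(&x);

    /* read data */

    sem_wait(&x);
    readcount = readcount - 1;
    if (readcount == 0)                /* last reader lets writers back in  */
        sem_post(&wsem);
    sem_post(&x);
}

void writer(void) {
    sem_wait(&wsem);
    /* write data */
    sem_post(&wsem);
}
```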

34 Readers-Writers Variations Writer priority solutions –If both readers and writers are waiting, give priority to writers Reader priority solutions –If both readers and writers are waiting, give priority to readers What kinds of priorities exist in the previous algorithm?

35 Any Questions? Can you think of any real examples of producer- consumer or reader-writer situations?

