Concurrency in Shared Memory Systems Synchronization and Mutual Exclusion

Processes, Threads, Concurrency Traditional processes are sequential: one instruction at a time is executed. Multithreaded processes may have several sequential threads that can execute concurrently. Processes (threads) are concurrent if their executions overlap – start time of one occurs before finish time of another.

Concurrent Execution On a uniprocessor, concurrency occurs when the CPU is switched from one process to another, so the instructions of several threads are interleaved (they alternate). On a multiprocessor, execution of instructions in concurrent threads may be overlapped (occur at the same time) if the threads are running on separate processors.

Concurrent Execution An interrupt, followed by a context switch, can take place between any two instructions. Hence the pattern of instruction overlapping and interleaving is unpredictable. Processes and threads execute asynchronously – we cannot predict if event a in process i will occur before event b in process j.

Sharing and Concurrency System resources (files, devices, even memory) are shared by processes, threads, and the OS. Uncontrolled access to shared entities can cause data integrity problems. Example: suppose two threads (1 and 2) have access to a shared (global) variable balance, which represents a bank account. Each thread has its own private (local) variable withdrawal_i, where i is the thread number.

Example Let balance = 100, withdrawal_1 = 50, and withdrawal_2 = 75. Thread i will execute the following algorithm:

    if (balance >= withdrawal_i)
        balance = balance - withdrawal_i
    else
        print "Can't overdraw account!"

If thread 1 executes first, balance will be 50 and thread 2 can't withdraw funds. If thread 2 executes first, balance will be 25 and thread 1 can't withdraw funds.

But what if the two threads execute concurrently instead of sequentially? Break the algorithm down into machine-level operations:

    if (balance >= withdrawal_i) becomes:
        move balance to register
        compare register to withdrawal_i
        branch if less-than

    balance = balance - withdrawal_i becomes:
        register = register - withdrawal_i
        store register contents in balance

Example – Multiprocessor (a possible instruction sequence showing interleaved execution; the numbers give the global order)

Thread 1:
(1) move balance to register 1 (register 1 = 100)
(3) compare register 1 to withdrawal_1
(5) register 1 = register 1 - withdrawal_1 (100 - 50)
(6) store register 1 in balance (balance = 50)

Thread 2:
(2) move balance to register 2 (register 2 = 100)
(4) compare register 2 to withdrawal_2
(7) register 2 = register 2 - withdrawal_2 (100 - 75)
(8) store register 2 in balance (balance = 25)

Example – Uniprocessor (a possible instruction sequence showing interleaved execution)

Thread 1:
- move balance to register (register = 100)
- T1's time slice expires – its state is saved

Thread 2 (runs while T1 is suspended):
- move balance to register
- compare: balance >= withdrawal_2
- balance = balance - withdrawal_2 (100 - 75 = 25)

Thread 1 (re-scheduled; its state is restored, register = 100):
- balance = balance - withdrawal_1 (100 - 50)
- Result: balance = 50, and thread 2's withdrawal is lost

Race Conditions The previous examples illustrate a race condition (data race): an undesirable condition that exists when several processes access shared data, and
- at least one access is a write, and
- the accesses are not mutually exclusive.
Race conditions can lead to inconsistent results.
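
To make the race concrete, here is a minimal C/pthreads sketch (not from the slides; function and variable names are illustrative). The unprotected read-test-write sequence in withdraw() is exactly the machine-level pattern broken down earlier; compile with -pthread and run repeatedly to see different outcomes:

    #include <pthread.h>
    #include <stdio.h>

    int balance = 100;                    /* shared account balance */

    /* Withdraw the requested amount with NO locking: the test and the
     * update below are separate, interruptible steps. */
    void *withdraw(void *arg) {
        int amount = *(int *)arg;
        if (balance >= amount)            /* read may be stale by the next line */
            balance = balance - amount;   /* read-modify-write: the data race */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int w1 = 50, w2 = 75;
        pthread_create(&t1, NULL, withdraw, &w1);
        pthread_create(&t2, NULL, withdraw, &w2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Sequential execution can only yield 50 or 25; with an unlucky
         * interleaving a withdrawal is lost, or the balance goes to -25. */
        printf("final balance = %d\n", balance);
        return 0;
    }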

Mutual Exclusion Mutual exclusion forces serial resource access as opposed to concurrent access. When one thread locks a critical resource, no other thread can access it until the lock owner releases the resource. Critical section (CS): code that accesses shared resources. Mutual exclusion guarantees that only one process/thread at a time can execute its critical section with respect to a given shared variable.

Mutual Exclusion Requirements A solution must ensure that only one process/thread at a time can access a shared resource. In addition, a good solution will ensure that:
- if no thread is in its CS, a thread that wants to execute its CS must be allowed to do so;
- when two or more threads want to enter their CSs, the decision cannot be postponed indefinitely;
- every thread eventually gets a chance to execute its critical section (no starvation).

Solution Model

    Begin_mutual_exclusion    /* some mutex primitive */
    execute critical section
    End_mutual_exclusion      /* some mutex primitive */

The problem: how to implement the mutex primitives?
- Busy-wait solutions (e.g., the test-and-set operation, spinlocks of various sorts, Peterson's algorithm)
- Semaphores (usually an OS feature; a waiting process blocks)
- Monitors (a language feature – e.g., Java)
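
As a concrete busy-wait primitive, here is a minimal spinlock sketch built on C11's atomic_flag (an assumption about the toolchain; the slide only names the technique). The hardware test-and-set makes acquiring the lock indivisible:

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

    void begin_mutual_exclusion(void) {
        /* atomic_flag_test_and_set atomically sets the flag and returns
         * its previous value; spin until we observed it clear. */
        while (atomic_flag_test_and_set(&lock))
            ;                              /* busy wait */
    }

    void end_mutual_exclusion(void) {
        atomic_flag_clear(&lock);          /* release the lock */
    }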

Semaphores Definition: an integer variable on which processes can perform two indivisible operations, P() and V(), plus initialization. Each semaphore has a wait queue associated with it. Semaphores are protected by the operating system.

Semaphores Binary semaphore: the only values are 0 and 1. Traditional semaphore: may be initialized to any non-negative value, and never goes below 0. Counting semaphore: P and V operations may reduce the semaphore value below 0, in which case the negative value records the number of blocked processes. (See the CS 490 textbook.)
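
A sketch of the counting flavor, in which the value may go negative, built on a pthread mutex and condition variable (illustrative, not from the slides; the wakeups field ensures a V's signal is not lost and each unblocked process consumes exactly one wakeup):

    #include <pthread.h>

    typedef struct {
        int value;      /* if negative, -value = number of blocked processes */
        int wakeups;    /* wakeups handed out by V but not yet consumed */
        pthread_mutex_t m;
        pthread_cond_t  cv;
    } csema_t;

    void csema_P(csema_t *s) {
        pthread_mutex_lock(&s->m);
        s->value--;
        if (s->value < 0) {                      /* must block */
            while (s->wakeups == 0)
                pthread_cond_wait(&s->cv, &s->m);
            s->wakeups--;                        /* consume one wakeup */
        }
        pthread_mutex_unlock(&s->m);
    }

    void csema_V(csema_t *s) {
        pthread_mutex_lock(&s->m);
        s->value++;
        if (s->value <= 0) {                     /* someone was blocked */
            s->wakeups++;
            pthread_cond_signal(&s->cv);
        }
        pthread_mutex_unlock(&s->m);
    }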

Semaphores Are used to synchronize and coordinate processes and/or threads. Calling the P (wait) operation may cause a process to block. Calling the V (signal) operation never causes a process to block, but may wake a process that was blocked by a previous P operation.

High-level Algorithms Assume S is a semaphore (it must be initialized according to the problem requirements).

    P(S): if S >= 1
              then S = S - 1
              else block the process on the queue for S

    V(S): if some process is blocked on the queue for S
              then unblock one process
              else S = S + 1
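
These algorithms keep S non-negative by blocking instead of decrementing past 0. A user-space sketch with the same behavior, again using a pthread mutex and condition variable (illustrative names; real kernels implement semaphores differently):

    #include <pthread.h>

    typedef struct {
        int count;                 /* the semaphore value S (never negative) */
        pthread_mutex_t m;
        pthread_cond_t  cv;        /* stands in for the queue for S */
    } sema_t;

    void sema_init(sema_t *s, int initial) {
        s->count = initial;
        pthread_mutex_init(&s->m, NULL);
        pthread_cond_init(&s->cv, NULL);
    }

    /* P(S): if S >= 1 decrement it, else block on the queue for S. */
    void sema_P(sema_t *s) {
        pthread_mutex_lock(&s->m);
        while (s->count == 0)
            pthread_cond_wait(&s->cv, &s->m);
        s->count--;
        pthread_mutex_unlock(&s->m);
    }

    /* V(S): unblock a waiting process if there is one, else S = S + 1.
     * (Here we always increment and signal; a woken waiter re-decrements,
     * which has the same net effect.) */
    void sema_V(sema_t *s) {
        pthread_mutex_lock(&s->m);
        s->count++;
        pthread_cond_signal(&s->cv);
        pthread_mutex_unlock(&s->m);
    }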

Usage – Mutual Exclusion Using a semaphore to enforce mutual exclusion:

    P(mutex)      // mutex initially = 1
    execute CS;
    V(mutex)

Each process that uses a shared resource must first check (using P) that no other process is in the critical section, and must then use V to release the critical section when done.

Bank Problem Revisited Semaphore S = 1 (shared).

Thread 1:
    P(S)
    move balance to register 1
    compare register 1 to withdrawal_1
    register 1 = register 1 - withdrawal_1
    store register 1 in balance
    V(S)

Thread 2:
    P(S)
    move balance to register 2
    compare register 2 to withdrawal_2
    register 2 = register 2 - withdrawal_2
    store register 2 in balance
    V(S)
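
The same fix in runnable form with POSIX semaphores, whose sem_wait and sem_post calls play the roles of P and V (a sketch; names follow the earlier race example):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    int balance = 100;
    sem_t S;                          /* the slides' semaphore S */

    void *withdraw(void *arg) {
        int amount = *(int *)arg;
        sem_wait(&S);                 /* P(S): enter the critical section */
        if (balance >= amount)
            balance = balance - amount;
        sem_post(&S);                 /* V(S): leave the critical section */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int w1 = 50, w2 = 75;
        sem_init(&S, 0, 1);           /* S = 1: one thread in its CS at a time */
        pthread_create(&t1, NULL, withdraw, &w1);
        pthread_create(&t2, NULL, withdraw, &w2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final balance = %d\n", balance);   /* now always 50 or 25 */
        return 0;
    }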

Example – Uniprocessor

Thread 1:
- P(S): S is decremented (S = 0); T1 continues to execute
- move balance to register (register = 100)
- T1's time slice expires – its state is saved

Thread 2 (scheduled while T1 is suspended):
- P(S): since S = 0, T2 is blocked

Thread 1 (re-scheduled; its state is restored, register = 100):
- balance = balance - withdrawal_1 (100 - 50 = 50)
- V(S): T2 returns to the ready state; S remains 0

Thread 2 (resumes some time after T1 executes V(S)):
- move balance to register (50)
- compare: since !(50 >= 75), T2 does not make the withdrawal
- V(S): since no thread is waiting, S is set back to 1

P and V Must Be Indivisible Semaphore operations must be indivisible, or atomic: once the OS begins either a P or a V operation, the operation cannot be interrupted until it is completed.

P and V Must Be Indivisible The P operation must be indivisible; otherwise there is no guarantee that two processes won't test the semaphore at the "same" time and both find it equal to 1:

    P(S): if S >= 1 then S = S - 1 else block the process on the queue for S

Similarly, two V operations executed at the same time could unblock two processes, leading to two processes in their critical sections concurrently:

    V(S): if some process is blocked on the queue for S then unblock one process else S = S + 1

The complete pattern, with P(S) and V(S) expanded inline around the critical section:

    if S >= 1 then S = S - 1 else block the process on the queue for S      // P(S)
    execute critical section
    if processes are blocked on the queue for S then unblock a process else S = S + 1   // V(S)

Semaphore Usage – Event Wait (synchronization that isn't mutual exclusion) Suppose a process P2 wants to wait on an event of some sort (call it A) which is to be performed by another process P1. Initialize a shared semaphore to 0. By executing a wait (P) on the semaphore, P2 will wait until P1 executes event A and signals, using the V operation.

Event Wait – Example

    semaphore signal = 0;

Process 1:
    ...
    execute event A
    V(signal)

Process 2:
    ...
    P(signal)
    ...
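
In POSIX terms the same pattern might look like the sketch below (event A stands in for whatever work P1 must complete first; the semaphore starts at 0 so P blocks until the V arrives):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t event_done;                     /* 0 means "A has not happened yet" */

    void *p1(void *arg) {
        puts("P1: executing event A");    /* the event P2 is waiting for */
        sem_post(&event_done);            /* V(signal): announce completion */
        return NULL;
    }

    void *p2(void *arg) {
        sem_wait(&event_done);            /* P(signal): block until A occurs */
        puts("P2: proceeding after event A");
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&event_done, 0, 0);      /* count 0 so the first P blocks */
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }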

Semaphores Are Not Perfect The programmer must know something about the other processes using the semaphore. Semaphores must be used carefully (be sure to use them when needed; don't leave out a V, etc.). It is hard to prove program correctness when semaphores are used.

Other Synchronization Problems (in addition to mutual exclusion)
- Dining Philosophers: resource deadlock
- Producer-Consumer: buffering (as of messages, input data, etc.)
- Readers-Writers: database or file sharing
  - readers' priority
  - writers' priority

Producer-Consumer Producer processes and consumer processes share a (usually finite) pool of buffers. Producers add data to the pool (a queue of records/objects); consumers remove data in FIFO order.

Producer-Consumer Requirements The processes are asynchronous. A solution must ensure that producers don't deposit data when the pool is full and consumers don't take data when the pool is empty. Access to the buffer pool must be mutually exclusive, since multiple consumers (or producers) may try to access the pool simultaneously.

Bounded Buffer P/C Algorithm Initialization: s = 1; n = 0; e = sizeofbuffer;

Producer:
    while (true)
        produce v;
        P(e);        // wait for an empty buffer slot
        P(s);        // wait for buffer pool access
        append(v);
        V(s);        // release buffer pool
        V(n);        // signal a full buffer

Consumer:
    while (true)
        P(n);        // wait for a full buffer
        P(s);        // wait for buffer pool access
        w := take();
        V(s);        // release buffer pool
        V(e);        // signal an empty buffer
        consume(w);
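
A direct C rendering of this algorithm with POSIX semaphores (a sketch: the buffer size, item type, and produce/consume steps are illustrative):

    #include <pthread.h>
    #include <semaphore.h>

    #define N 8                        /* buffer capacity (illustrative) */

    int buffer[N];
    int in = 0, out = 0;               /* FIFO insert/remove positions */
    sem_t s;                           /* buffer mutual exclusion, init 1 */
    sem_t n;                           /* count of full slots, init 0 */
    sem_t e;                           /* count of empty slots, init N */

    void init(void) {                  /* s = 1; n = 0; e = sizeofbuffer */
        sem_init(&s, 0, 1);
        sem_init(&n, 0, 0);
        sem_init(&e, 0, N);
    }

    void *producer(void *arg) {
        for (int v = 0; ; v++) {                       /* produce v */
            sem_wait(&e);                              /* P(e) */
            sem_wait(&s);                              /* P(s) */
            buffer[in] = v; in = (in + 1) % N;         /* append(v) */
            sem_post(&s);                              /* V(s) */
            sem_post(&n);                              /* V(n) */
        }
    }

    void *consumer(void *arg) {
        for (;;) {
            sem_wait(&n);                              /* P(n) */
            sem_wait(&s);                              /* P(s) */
            int w = buffer[out]; out = (out + 1) % N;  /* w := take() */
            sem_post(&s);                              /* V(s) */
            sem_post(&e);                              /* V(e) */
            (void)w;                                   /* consume(w) */
        }
    }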

Readers and Writers Problem Characteristics:
- concurrent processes access a shared data area (files, a block of memory, a set of registers)
- some processes only read information; others write (modify and add) information
Restriction:
- multiple readers may read concurrently, but when a writer is writing, there should be no other writers or readers.

Compare to Prod/Cons Differences between Readers/Writers (R/W) and Producer/Consumer (P/C):
- Data in P/C is ordered: it is placed into the buffer and retrieved according to a FIFO discipline.
- In R/W, the same data may be read many times by many readers, or data may be written by a writer and changed again before any reader reads it.

// Initialization code (done only once)
integer readcount = 0;
semaphore x = 1, wsem = 1;

procedure reader;
begin
  repeat
    P(x);
    readcount := readcount + 1;
    if readcount = 1 then P(wsem);
    V(x);
    read data;
    P(x);
    readcount := readcount - 1;
    if readcount = 0 then V(wsem);
    V(x);
  forever
end;

procedure writer;
begin
  repeat
    P(wsem);
    write data;
    V(wsem);
  forever
end;
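
For comparison, the same solution translated to C with POSIX semaphores (a sketch: read_data and write_data stand in for the actual shared-data access):

    #include <pthread.h>
    #include <semaphore.h>

    int readcount = 0;                 /* number of readers currently reading */
    sem_t x;                           /* protects readcount, init 1 */
    sem_t wsem;                        /* exclusive access for writers, init 1 */

    void init(void) {
        sem_init(&x, 0, 1);
        sem_init(&wsem, 0, 1);
    }

    void *reader(void *arg) {
        for (;;) {
            sem_wait(&x);
            readcount = readcount + 1;
            if (readcount == 1)        /* first reader locks out writers */
                sem_wait(&wsem);
            sem_post(&x);

            /* read_data(); readers may overlap here */

            sem_wait(&x);
            readcount = readcount - 1;
            if (readcount == 0)        /* last reader lets writers back in */
                sem_post(&wsem);
            sem_post(&x);
        }
    }

    void *writer(void *arg) {
        for (;;) {
            sem_wait(&wsem);           /* exclusive: no readers or writers */
            /* write_data(); */
            sem_post(&wsem);
        }
    }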

Readers-Writers Variations
- Writer-priority solutions: if both readers and writers are waiting, give priority to writers.
- Reader-priority solutions: if both readers and writers are waiting, give priority to readers.
What kinds of priorities exist in the previous algorithm?

Any Questions? Can you think of any real examples of producer-consumer or reader-writer situations?