Module 2.1: Process Synchronization


Module 2.1: Process Synchronization
Outline: Examples of the Shared Variable Problem; Mutual Exclusion; Solutions to Mutual Exclusion; Semaphores; Critical Regions; Monitors.
Process synchronization means coordination among processes.
K. Salah

Concurrent Processes
Concurrent processes (or threads) often need to share data (maintained either in shared memory or in files) and resources.
If there is no controlled access to shared data, some processes will obtain an inconsistent view of this data.
The actions performed by concurrent processes will then depend on the order in which their execution is interleaved.
This order cannot be predicted; it depends on:
  the activities of other processes
  the handling of I/O and interrupts
  the scheduling policies of the OS

Shared Variable Problem: A First Example
Processes P1 and P2 both run the same procedure and have access to the same variable "a".
Processes can be interrupted anywhere.
If P1 is interrupted just after the user input and P2 then executes entirely, the character echoed by P1 will be the one read by P2!

static char a;

void echo() {
    cin >> a;
    cout << a;
}

A Second Example (Simple Shared Variable)
Two processes each read characters typed at their respective terminals.
We want to keep a running count of the total number of characters typed on both terminals.
A shared variable V is introduced; each time a character is typed, a process executes
    V := V + 1;
to update the count.
During testing it is observed that the count recorded in V is less than the actual number of characters typed. What happened?
The programmer failed to realize that the assignment is not executed as a single indivisible action, but rather as the following sequence of instructions:
    MOVE V, r0
    INCR r0
    MOVE r0, V
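The lost-update effect is easy to reproduce with two threads sharing an unprotected counter. The following is a minimal sketch in C with POSIX threads; the thread function and iteration count are illustrative choices, not from the slides. Run it a few times: the final total typically falls short of 2,000,000.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

long shared_count = 0;   /* the shared variable V */

/* Each thread performs V := V + 1 repeatedly with no synchronization. */
void *typist(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        shared_count++;   /* compiles to load, increment, store -- not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, typist, NULL);
    pthread_create(&t2, NULL, typist, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2 * ITERATIONS, but lost updates usually make it smaller. */
    printf("count = %ld (expected %d)\n", shared_count, 2 * ITERATIONS);
    return 0;
}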

A Third Example: The Producer/Consumer Problem
[Diagram: a producer process P places items into a shared buffer; a consumer process C removes them.]
From time to time, the producer places an item in the buffer and the consumer removes an item from the buffer.
Careful synchronization/coordination is required:
  the consumer must wait if the buffer is empty
  the producer must wait if the buffer is full
A typical solution involves a shared variable count (recall the previous example).
Also known as the Bounded Buffer problem.

The Mutual Exclusion Problem
The previous two examples are typical of the kind of "race condition" problem that arises in operating system programming.
A race condition occurs when more than one process has simultaneous access to shared data whose values are supposed to obey some integrity constraint.
Other examples: airline reservation systems, bank transaction systems.
The problem is generally solved by making access to shared variables mutually exclusive: at most one process can access the shared variables at a time.
The period of time when one process has exclusive access to the data is called a critical section.

The Critical-Section Problem
Definition: a critical section is a sequence of activities (or statements) in a process during which a mutually exclusive resource (either hardware or software) must be accessed.
The critical-section problem is to ensure that two concurrent activities do not access shared data at the same time.
A solution to the mutual exclusion problem must satisfy the following three requirements:
  Mutual Exclusion
  Progress: if no process is executing in its critical section and some process wants to enter its critical section, it should be allowed to do so.
  Bounded waiting (no starvation): there is a bound on how many times other processes can enter their critical sections before the waiting process enters its own.

Requirements for a Solution to the Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed.
No assumption is made concerning the relative speeds of the n processes.

Methods for Mutual Exclusion
1. Disabling interrupts (hardware solution)
2. Strict alternation and Peterson's solution (software solutions)
3. Switch variables (assuming atomic read and write)
4. Locks (hardware solution with TSL or TAS)
5. Semaphores (software solution)
6. Critical Regions and Monitors (high-level-language solutions)

Disabling Interrupts

Process A                Process B
  ...                      ...
  disable interrupts       disable interrupts
  CS                       CS
  enable interrupts        enable interrupts

Prevents scheduling during the CS, since the timer interrupt is disabled.
May hinder real-time response and introduce delays.
All processes are excluded, even those that do not access the same variables.
This is sometimes necessary (to prevent further interrupts during interrupt handling); it is used by the kernel when updating its own variables and lists, e.g. the ready and blocked lists.

Lock Variables
Not used in any system, because it does not work properly. The idea is to have a lock variable guarding the CS:
  If the lock is 0, the process sets the lock to 1 and enters the CS.
  If the lock is 1, the process waits until the lock becomes 0.
This has the same problem as shared variables: both processes may read the lock as 0 at the same time, and then both enter the CS.
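A minimal sketch of why the naive lock variable fails, again with POSIX threads; the variable names and counts are illustrative, and the code is intentionally broken. The gap between testing the lock and setting it is exactly the race window described above, so both threads can occasionally be inside the critical section at once. (The variables are declared volatile only so the busy wait is not optimized away; the data race itself is the point of the demo.)

#include <pthread.h>
#include <stdio.h>

volatile int lock = 0;    /* 0 = free, 1 = taken */
volatile int in_cs = 0;   /* how many threads are in the CS (should never exceed 1) */
int violations = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        while (lock != 0)   /* test ...                                    */
            ;               /* busy wait                                   */
        lock = 1;           /* ... then set: the other thread may have     */
                            /* slipped in between the test and the set     */
        in_cs++;
        if (in_cs > 1) violations++;   /* mutual exclusion was broken */
        in_cs--;
        lock = 0;
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("mutual exclusion violations observed: %d\n", violations);
    return 0;
}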

Strict Alternation
turn is a shared variable, initially set to A.

Process A:
  while (TRUE) {
      while (turn != A) ;   /* wait */
      CS;
      turn = B;
      ...
  }

Process B:
  while (TRUE) {
      while (turn != B) ;   /* wait */
      CS;
      turn = A;
      ...
  }

Different CS's can be implemented using different switch variables.
Busy waiting is a waste of CPU cycles and can cause the priority inversion problem. Priority inversion can occur with two processes H and L, where H is to run whenever it is ready: if it is L's turn but H busy-waits at high priority, L never gets the CPU to finish and pass the turn, so neither makes progress.
There is also a danger of long blockage, since A and B strictly alternate, i.e. neither process can run its CS twice in a row.
We need a solution that does not require strict alternation.

Peterson's Solution

#define N 2                        /* 2 processes: 0 and 1 */

int interested[N] = {FALSE, FALSE};
int turn;

void enter_region(int process)
{
    int other = 1 - process;
    interested[process] = TRUE;    /* resolves the strict alternation            */
    turn = process;                /* resolves simultaneous enter_region calls:  */
                                   /* only the last write to turn counts. In     */
                                   /* other words, turn is used to break ties.   */
    while (turn == process && interested[other] == TRUE)
        ;                          /* busy wait */
}

void leave_region(int process)
{
    interested[process] = FALSE;
}
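As a hedged, self-contained sketch, the code above can be exercised with two POSIX threads. Note that on modern hardware the plain int version is not guaranteed to work, because compilers and CPUs reorder memory operations; the sketch below therefore uses C11 sequentially consistent atomics, which is an addition to the slide's code rather than part of it.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define N 2

atomic_int interested[N];          /* zero-initialised (false) */
atomic_int turn;
long shared_count = 0;             /* protected by Peterson's algorithm */

void enter_region(int process) {
    int other = 1 - process;
    atomic_store(&interested[process], 1);
    atomic_store(&turn, process);
    while (atomic_load(&turn) == process && atomic_load(&interested[other]))
        ;                          /* busy wait */
}

void leave_region(int process) {
    atomic_store(&interested[process], 0);
}

void *worker(void *arg) {
    int me = *(int *)arg;
    for (int i = 0; i < 1000000; i++) {
        enter_region(me);
        shared_count++;            /* critical section */
        leave_region(me);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N] = {0, 1};
    for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    printf("count = %ld (expected 2000000)\n", shared_count);
    return 0;
}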

Peterson's Solution (cont.)
Properties:
  Complex and unclear
  Busy waiting
  Mutual exclusion is preserved
  Strict alternation is resolved
  Can be extended to n processes

The TSL or TAS Instruction
TSL (Test and Set Lock), also known as TAS (Test-And-Set), is implemented in hardware, e.g. on the Motorola 68000 microprocessor. The test/read and set/write bus cycles are performed atomically (they cannot be interrupted).

enter_region:
    tsl  r0, flag      ; copy flag to r0 and set flag to 1
    cmp  r0, #0        ; was flag 0?
    jnz  enter_region  ; if not, the lock was already taken: retry
    ret                ; lock acquired: enter the critical section

leave_region:
    mov  flag, #0      ; release the lock
    ret

Swap or exchange instructions can be found in Stallings, page 180.
If not supported by hardware, TAS can be implemented by disabling and enabling interrupts.
TAS can also be implemented using an atomic swap(x, y).
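In portable C, the same test-and-set pattern is exposed through C11's atomic_flag. A minimal spinlock sketch (the function names mirror the slide but are otherwise illustrative):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = free */

/* Spin until the atomic test-and-set reports that the flag was previously clear. */
void enter_region(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy wait: someone else holds the lock */
}

/* Clear the flag to release the lock. */
void leave_region(void) {
    atomic_flag_clear(&lock);
}

These two functions could replace the enter_region/leave_region pair in the Peterson sketch above; the hardware, not a software protocol, now guarantees the atomicity of the test and the set.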

Properties of TSL-based Locks
1. Busy waiting problem. It is better to have the process block on an IPC primitive (semaphore, event counter, message) and be awakened later.
2. Starvation is possible. With processes P1, P2, and P3 and improper scheduling, P1 and P2 may always execute and P3 never does; for example, if P1 and P2 have higher priority than P3, P3 will starve. Do the other schemes have this problem?
3. Different locks may be used for different shared resources.
Examples of hardware support: (1) VAX-11, (2) B6500.
  MIPS: Load-Linked/Store-Conditional (LL/SC)
  Pentium: Compare and Exchange, Exchange, Fetch and Add
  SPARC: Load Store Unsigned Byte (LDSTUB) in V9
  PowerPC: Load Word and Reserve (lwarx)

Semaphores
The operations P and V are due to Dijkstra ('65); the names wait and signal are due to Per Brinch Hansen.
A semaphore has a value that is invisible to its users and a queue of processes waiting to acquire the semaphore.

Code for counting semaphores:

type semaphore = record
    value : integer;
    L     : list of process;
end

P(S): [ S.value := S.value - 1;
        if S.value < 0 then
            add this process to S.L;
            block;
        end if ]

V(S): [ S.value := S.value + 1;
        if S.value <= 0 then
            remove a process P from S.L;
            wakeup(P);   // place it on the ready queue
        end if ]
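For comparison, POSIX exposes counting semaphores directly: sem_wait corresponds to P and sem_post to V. A small sketch protecting a shared counter with an unnamed POSIX semaphore initialised to 1 (as on Linux); the names and counts are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;                 /* initialised to 1: acts as a mutex */
long shared_count = 0;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        sem_wait(&S);          /* P(S) */
        shared_count++;        /* critical section */
        sem_post(&S);          /* V(S) */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&S, 0, 1);        /* shared between threads, initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %ld (expected 2000000)\n", shared_count);
    sem_destroy(&S);
    return 0;
}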

Properties of Semaphores
S.value = 1
parbegin
  P1: ... P(S); CS1; V(S); ...
  P2: ... P(S); CS2; V(S); ...
  ...
  Pn: ... P(S); CSn; V(S); ...
parend

Properties:
1. No busy waiting.
2. May starve unless the queue is FCFS (the queueing discipline is left to the implementer of semaphores).
3. Can handle multiple users by proper initialization. Example: 3 tape drives.
4. If S is only ever 1 or 0, it is called a binary semaphore or mutex. How can a counting semaphore be implemented using a mutex? The answer is in Galvin, page 172, and is shown two slides below.

Code for Binary Semaphores

waitB(S):
    if (S.value = 1) {
        S.value := 0;
    } else {
        place this process in S.L;
        block;
    }

signalB(S):
    if (S.L is empty) {
        S.value := 1;
    } else {
        remove a process P from S.L;
        wakeup(P);
    }

How to Implement a Counting Semaphore Using Mutexes

S:  counting semaphore to be implemented
S1: mutex = 1
S2: mutex = 0
C:  integer, initialised to the initial value of S

P(S): P(S1);
      C := C - 1;
      if (C < 0) {
          V(S1);
          P(S2);     /* block until a V(S) hands over S2 */
      }
      V(S1);         /* releases S1 on behalf of the signaller when woken */

V(S): P(S1);
      C := C + 1;
      if (C <= 0)
          V(S2);     /* wake one waiter; S1 is released by that waiter */
      else
          V(S1);

More Properties and Examples
5. Semaphores can implement scheduling of activities according to a precedence graph. Here semaphores are used to synchronize different activities, not to enforce mutual exclusion. An activity is work done by a specific process; initially the system creates all the processes that perform these activities. For example, the process that performs activity x does not start activity x until it is signaled (or told to) by the process that performs activity y.
Example of such process synchronization: router fault detection, fault logging, alarm reporting, and fault fixing.
  1. Draw the process precedence graph.
  2. Write pseudo-code for the process synchronization using semaphores (a sketch follows below).
6. Proper use cannot be enforced by the compiler. For example, intending the code on the left, the programmer writes the code on the right:

  P(S)             V(S)
  CS               CS
  V(S)             P(S)

  P1: P(S1)        P2: P(S2)
      P(S2)            P(S1)
      CS                CS
      V(S2)            V(S1)
      V(S1)            V(S2)

The second pair acquires S1 and S2 in opposite orders: this is a deadlock situation.
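A hedged sketch of ordering two activities with a semaphore initialised to 0. The activity names follow the fault-handling example above, but the exact precedence graph is an assumption: the logging thread does not start its activity until the detection thread signals it.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t detected;   /* initialised to 0: the "detection finished" event */

void *fault_detection(void *arg) {
    printf("detecting fault...\n");      /* activity y */
    sem_post(&detected);                 /* signal the next activity */
    return NULL;
}

void *fault_logging(void *arg) {
    sem_wait(&detected);                 /* wait until told to start */
    printf("logging fault...\n");        /* activity x */
    return NULL;
}

int main(void) {
    pthread_t d, l;
    sem_init(&detected, 0, 0);           /* starts at 0, so logging must wait */
    pthread_create(&l, NULL, fault_logging, NULL);
    pthread_create(&d, NULL, fault_detection, NULL);
    pthread_join(d, NULL);
    pthread_join(l, NULL);
    sem_destroy(&detected);
    return 0;
}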

Classical Problems
  The bounded-buffer problem
  The readers and writers problem
  The dining philosophers problem

The Producer-Consumer Problem
A bounded buffer (of size n); one set of processes (producers) writes to it, another set (consumers) reads from it.

Semaphores:
  full  = 0   /* counting semaphore: items in the buffer */
  empty = n   /* counting semaphore: free slots          */
  mutex = 1   /* binary semaphore protecting the buffer  */

process Producer                  process Consumer
do forever                        do forever
    /* produce item */                P(full)
    P(empty)                          P(mutex)
    P(mutex)                          /* take item from buffer */
    /* add item to buffer */          V(mutex)
    V(mutex)                          V(empty)
    V(full)                           /* consume item */
end                               end
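A runnable sketch of the same scheme with POSIX semaphores and one producer and one consumer thread; the buffer size, item count, and names are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N      8              /* buffer size                      */
#define ITEMS  32             /* how many items to transfer       */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;            /* counts free slots, starts at N   */
sem_t full_slots;             /* counts filled slots, starts at 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty_slots);           /* P(empty) */
        pthread_mutex_lock(&mutex);       /* P(mutex) */
        buffer[in] = i;                   /* add to buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);     /* V(mutex) */
        sem_post(&full_slots);            /* V(full)  */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);            /* P(full)  */
        pthread_mutex_lock(&mutex);       /* P(mutex) */
        int item = buffer[out];           /* take from buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);     /* V(mutex) */
        sem_post(&empty_slots);           /* V(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}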

The Readers and Writers Problem
Shared data is accessed in two modes: reading and writing. Any number of processes may read at one time; a write must exclude all other operations.

Conflict matrix (Y = the two operations may proceed concurrently):
           Read   Write
  Read      Y      N
  Write     N      N

Intuitively:
  Reader:
      when (#writers == 0) do
          #readers := #readers + 1
      <read>
      #readers := #readers - 1
  Writer:
      when (#readers == 0 and #writers == 0) do
          #writers := 1
      <write>
      #writers := 0

Semaphore Solution to Readers and Writers
Semaphores:
  mutex = 1   /* mutual exclusion for updating readcount */
  wrt   = 1   /* mutual exclusion for writers            */
Integer variable: readcount = 0

Reader:
    P(mutex)
    readcount := readcount + 1
    if readcount == 1 then P(wrt)
    V(mutex)
    <read>
    P(mutex)
    readcount := readcount - 1
    if readcount == 0 then V(wrt)
    V(mutex)

Writer:
    P(wrt)
    <write>
    V(wrt)

Notes: wrt is also used by the first/last reader that enters/exits the critical section. This solution gives priority to readers, in that writers can be starved by a steady stream of readers.
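A hedged C translation of the reader and writer sides using POSIX primitives; the empty read/write bodies and the single reader and writer thread are placeholders.

#include <pthread.h>
#include <semaphore.h>

sem_t wrt;                    /* writer (and first/last reader) lock, init 1 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;   /* protects readcount */
int readcount = 0;

void *reader(void *arg) {
    pthread_mutex_lock(&mutex);
    if (++readcount == 1) sem_wait(&wrt);   /* first reader locks out writers */
    pthread_mutex_unlock(&mutex);

    /* <read> */

    pthread_mutex_lock(&mutex);
    if (--readcount == 0) sem_post(&wrt);   /* last reader lets writers in */
    pthread_mutex_unlock(&mutex);
    return NULL;
}

void *writer(void *arg) {
    sem_wait(&wrt);
    /* <write> */
    sem_post(&wrt);
    return NULL;
}

int main(void) {
    pthread_t r, w;
    sem_init(&wrt, 0, 1);
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(r, NULL);
    pthread_join(w, NULL);
    return 0;
}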

The Dining Philosophers Problem
Five philosophers spend their lives thinking and eating. One simple solution is to represent each chopstick by a semaphore: P before picking it up and V after using it.

var chopstick: array[0..4] of semaphore = 1

philosopher i:
repeat
    P(chopstick[i]);
    P(chopstick[(i+1) mod 5]);
    ... eat ...
    V(chopstick[i]);
    V(chopstick[(i+1) mod 5]);
    ... think ...
forever

Is deadlock possible? Yes: if all five philosophers pick up their left chopstick at the same time, each waits forever for the right one. Placing a single binary semaphore around the whole pick-up/eat/put-down sequence avoids this, but allows only one philosopher to eat at a time. A more practical solution allows more than one (at most two of the five can eat simultaneously). See page 20. A deadlock-avoiding sketch follows below.
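One standard way to avoid the deadlock, sketched here in C with POSIX semaphores, is to allow at most four of the five philosophers to try to pick up chopsticks at the same time. The counting semaphore seats is an addition for this sketch, not part of the slide's pseudocode.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5

sem_t chopstick[N];           /* one binary semaphore per chopstick        */
sem_t seats;                  /* at most N-1 philosophers compete at once  */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    for (int round = 0; round < 3; round++) {
        sem_wait(&seats);                    /* take a seat (at most 4)   */
        sem_wait(&chopstick[i]);             /* left chopstick            */
        sem_wait(&chopstick[(i + 1) % N]);   /* right chopstick           */
        printf("philosopher %d eats\n", i);
        sem_post(&chopstick[i]);
        sem_post(&chopstick[(i + 1) % N]);
        sem_post(&seats);                    /* leave the table and think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    sem_init(&seats, 0, N - 1);
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}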

Concurrent Programming
An OS consists of a large number of programs that execute asynchronously and cooperate. Traditionally, these programs were written in assembly language for the following reasons:
  High-level languages (HLLs) did not provide mechanisms for writing machine-dependent code (such as device drivers).
  HLLs did not provide the appropriate tools for writing concurrent programs.
  HLLs for concurrent programs were not efficient.
An HLL for OS work must provide facilities for synchronization and modularization. Two constructs used by HLLs:
  Critical Regions and Conditional Critical Regions
  Monitors

Motivating Examples
P and V operations are better than shared variables, but still susceptible to programming errors. For example, intending the code on the left, the programmer accidentally writes the code on the right:

  P(S)             P(S)
  ...      ==>     ...
  V(S)             P(S)        (the final V mistyped as P: the process blocks forever)

  P(S1)            P(S1)
  P(S2)            P(S2)
  ...      ==>     ...
  V(S2)            V(S1)
  V(S1)            V(S2)       (the releases are swapped)

Such errors are hard for a compiler to detect, which motivates higher-level constructs.

Critical Regions
A higher-level programming-language construct proposed in 1972 by Brinch Hansen and Hoare:
  if a variable is to be shared, it must be declared as such;
  access to shared variables happens only in mutual exclusion.

var a: shared int
var b: shared int

region a do
    -- access variable a --

The compiler generates equivalent code using P and V:

P(Sa)
-- access variable a --
V(Sa)

Critical Regions Aren't Perfect

Process 1:
  region a do
      region b do
          stmt1;

Process 2:
  region b do
      region a do
          stmt2;

If Process 1 enters region a while Process 2 enters region b, each then waits forever for the other's region: nested regions can still deadlock.

Conditional Critical Regions
Critical regions are basically a mutex; they are not easily adapted to general synchronization problems, i.e. those requiring a counting semaphore.
Hoare, again in 1972, proposed conditional critical regions:

region X when B do S

X is accessed in mutual exclusion in S; the process is delayed until B becomes true.

The Producer-Consumer Problem with Conditional Critical Regions

var buffer: shared record
    pool: array[0..n-1] of item;
    count, in, out: integer := 0;
end

Producer:
  region buffer when count < n do
  begin
      pool[in] := item_produced;
      in := (in + 1) mod n;
      count := count + 1;
  end

Consumer:
  region buffer when count > 0 do
  begin
      item_consumed := pool[out];
      out := (out + 1) mod n;
      count := count - 1;
  end

Monitors
A monitor is a shared data object together with the set of operations which manipulate it.
To enforce mutual exclusion, at most one process may execute the operations defined for the data object at any given time.
All uses of shared variables are governed by monitors.
Monitors support data abstraction (they hide implementation details).
Only one process may execute a monitor's procedures at a time.
The data type "condition" is provided for synchronization; it can be waited on or signaled within a monitor procedure.
Two operations on condition variables:
  wait: forces the caller to be delayed; the exclusion is released; there is a hidden queue of waiters.
  signal: one waiting process is resumed, if there are waiters.

Semaphore Using Monitors

type semaphore = monitor
    var busy: boolean;
    nonbusy: condition;

    procedure entry P;
    begin
        if busy then nonbusy.wait;
        busy := true;
    end {P}

    procedure entry V;
    begin
        busy := false;
        nonbusy.signal;
    end {V}

begin
    busy := false;
end {monitor}

What could be other ways to implement semaphores? Disabling interrupts + TSL. The above is a mutex (binary semaphore) implementation; see Stallings, page 190, for an implementation of counting semaphores. Solving the Dining Philosophers problem using monitors is in the textbook.

Notes on monitors:
1. Only one process can be active in the monitor at a time, which guards against the busy variable being changed simultaneously.
2. One disadvantage of monitors is that not all languages support them.
3. The producer-consumer example using monitors is shown in the text; it is an easier example.
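C has no monitor construct, but the usual way to approximate one is a struct whose operations all acquire a pthread mutex on entry (the implicit monitor lock) and use a condition variable for the monitor's condition type. A hedged sketch mirroring the binary-semaphore monitor above; the type and function names are illustrative.

#include <pthread.h>
#include <stdbool.h>

/* A "monitor": the mutex plays the role of the implicit monitor lock,
   and the condition variable plays the role of the nonbusy condition. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonbusy;
    bool            busy;
} semaphore_monitor;

void sem_monitor_init(semaphore_monitor *s) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonbusy, NULL);
    s->busy = false;
}

/* procedure entry P */
void sem_monitor_P(semaphore_monitor *s) {
    pthread_mutex_lock(&s->lock);                    /* enter the monitor          */
    while (s->busy)                                  /* wait releases the lock ... */
        pthread_cond_wait(&s->nonbusy, &s->lock);    /* ... and reacquires it      */
    s->busy = true;
    pthread_mutex_unlock(&s->lock);                  /* leave the monitor          */
}

/* procedure entry V */
void sem_monitor_V(semaphore_monitor *s) {
    pthread_mutex_lock(&s->lock);
    s->busy = false;
    pthread_cond_signal(&s->nonbusy);                /* resume one waiter, if any  */
    pthread_mutex_unlock(&s->lock);
}

Unlike Hoare-style monitors, pthread signalling is signal-and-continue and condition waits may wake spuriously, which is why the sketch rechecks busy in a while loop rather than the single if of the pseudocode.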