Concurrency: Mutual Exclusion and Synchronization. Module 2.2.


1 Concurrency: Mutual Exclusion and Synchronization Module 2.2

2 Problems with concurrent execution
- Concurrent processes (or threads) often need to share data (maintained either in shared memory or in files) and resources
- If there is no controlled access to shared data, some processes will obtain an inconsistent view of that data
- The actions performed by concurrent processes will then depend on the order in which their execution is interleaved
- This order cannot be predicted; it depends on:
  - the activities of other processes
  - the handling of I/O and interrupts
  - the scheduling policies of the OS

3 An example
- Processes P1 and P2 both run this procedure and have access to the same global variable "a"
- A process can be interrupted at any point
- If P1 is interrupted just after the user input and P2 then executes entirely,
- the character echoed by P1 will be the one read by P2!

  static char a;

  void echo() {
      cin >> a;
      cout << a;
  }

4 The critical section problem
- When a process executes code that manipulates shared data (or a shared resource), we say that the process is in its critical section (CS) for that shared data
- The execution of critical sections must be mutually exclusive: at any time, only one process is allowed to execute in its critical section (even with multiple CPUs)
- Each process must therefore request permission to enter its critical section (CS)

5 The critical section problem
- The section of code implementing this request is called the entry section
- The critical section (CS) may be followed by an exit section
- The remaining code is the remainder section
- The critical section problem is to design a protocol that the processes can use so that their actions will not depend on the order in which their execution is interleaved (possibly on many processors)

6 Framework for analysis of solutions
- Each process executes at nonzero speed, but no assumption is made about the relative speed of the n processes
- Many CPUs may be present, but the memory hardware prevents simultaneous access to the same memory location
- No assumption is made about the order of interleaved execution
- For solutions: we need to specify the entry and exit sections
- General structure of a process:

  repeat
      entry section
      critical section
      exit section
      remainder section
  forever

7 Requirements for a valid solution to the critical section problem
- Mutual Exclusion
  - Only one process can be in the CS at a time
- Bounded Waiting
  - After a process has made a request to enter its CS, there is a bound on the number of times that the other processes are allowed to enter their CS
  - otherwise the process would suffer from starvation
- Progress
  - When no process is in a CS, any process that requests entry to its CS must be permitted to enter without delay

8 Types of solutions
- Software solutions
  - algorithms whose correctness does not rely on any other assumptions (see the framework above)
- Hardware solutions
  - rely on special machine instructions
- Operating system solutions
  - provide functions and data structures to the programmer

9 Software solutions
- Dekker's algorithm
- Peterson's algorithm
- The bakery algorithm

10 Peterson’s Algorithm

11 Extending Peterson’s to n processes

12 What about process failures?
- If all 3 criteria (mutual exclusion, progress, bounded waiting) are satisfied, then a valid solution provides robustness against the failure of a process in its remainder section (RS)
  - since a failure in the RS is just like having an infinitely long RS
- However, no valid solution can provide robustness against a process failing in its critical section (CS)

13 Drawbacks of software solutions
- Processes that are requesting to enter their critical section are busy waiting (consuming processor time needlessly)
- If CSs are long, it would be more efficient to block the processes that are waiting

14 Hardware solutions: interrupt disabling
- On a uniprocessor: mutual exclusion is preserved, but execution efficiency is degraded: while in the CS, we cannot interleave execution with other processes that are in their RS
- On a multiprocessor: mutual exclusion is not preserved
- Generally not an acceptable solution

  Process Pi:
  repeat
      disable interrupts
      critical section
      enable interrupts
      remainder section
  forever


19 Hardware solutions: special machine instructions
- Normally, an access to a memory location excludes other accesses to that same location
- Extension: designers have proposed machine instructions that perform 2 actions atomically (indivisibly) on the same memory location (e.g., reading and writing)
- The execution of such an instruction is mutually exclusive (even with multiple CPUs)
- They can be used simply to provide mutual exclusion, but more complex algorithms are needed to satisfy the 3 requirements of the CS problem

20 The test-and-set instruction
- A C++ description of test-and-set (the whole function executes as one atomic step):

  bool testset(int& i) {
      if (i == 1) {
          i = 0;
          return true;
      } else {
          return false;
      }
  }

- An algorithm that uses testset for mutual exclusion:
- The shared variable b is initialized to 1
- Only the first Pi that sets b (to 0) enters its CS

  Process Pi:
  repeat
      repeat {} until testset(b);
      CS
      b := 1;
      RS
  forever

21 The real tsl

  enter_region:
      tsl r0, flag      ; copy flag to r0 and set flag to 1
      cmp r0, #0        ; was flag zero?
      jnz enter_region  ; if it was non-zero, the lock was set, so loop
      ret               ; return to caller; critical region entered

  leave_region:
      mov flag, #0      ; store a 0 in flag
      ret               ; return to caller

22 The test-and-set instruction (cont.)
- Mutual exclusion is preserved: if Pi enters its CS, the other Pj are busy waiting
- Problem: we are still using busy waiting
- When Pi exits its CS, the selection of the Pj that will enter the CS is arbitrary: there is no bounded waiting, hence starvation is possible
- Processors (e.g., the Pentium) often provide an atomic xchg(a,b) instruction that swaps the contents of a and b
- But xchg(a,b) suffers from the same drawbacks as test-and-set

23 Semaphores
- A synchronization tool (provided by the OS) that does not require busy waiting
- A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations:
  - wait(S)
  - signal(S)
- To avoid busy waiting: when a process has to wait, it is put in a blocked queue of processes waiting for the same event

24 Semaphores
- Hence, in fact, a semaphore is a record (structure):

  type semaphore = record
      count: integer;
      queue: list of process
  end;
  var S: semaphore;

- When a process must wait for a semaphore S, it is blocked and put on the semaphore's queue
- The signal operation removes one process from the blocked queue and puts it in the list of ready processes

25 Semaphore operations (atomic)

  wait(S):
      S.count--;
      if (S.count < 0) {
          block this process
          place this process in S.queue
      }

  signal(S):
      S.count++;
      if (S.count <= 0) {
          remove a process P from S.queue
          place this process P on the ready list
      }

  S.count must be initialized to a nonnegative value (depending on the application)

26 Semaphores: observations
- When S.count >= 0: the number of processes that can execute wait(S) without being blocked equals S.count
- When S.count < 0: the number of processes waiting on S equals |S.count|
- Atomicity and mutual exclusion:
  - no 2 processes can be in wait(S) and signal(S) (on the same S) at the same time (even with multiple CPUs)
  - hence the blocks of code defining wait(S) and signal(S) are, in fact, critical sections

27 Semaphores: observations
- The critical sections defined by wait(S) and signal(S) are very short: typically about 10 instructions
- Solutions:
  - uniprocessor: disable interrupts during these operations (i.e., for a very short period). This does not work on a multiprocessor machine.
  - multiprocessor: use the previous software or hardware schemes. The amount of busy waiting should be small.

28 Using semaphores for solving critical section problems
- For n processes
- Initialize S.count to 1
- Then only 1 process is allowed into the CS (mutual exclusion)
- To allow k processes into the CS, initialize S.count to k

  Process Pi:
  repeat
      wait(S);
      CS
      signal(S);
      RS
  forever

29 Using semaphores to synchronize processes
- We have 2 processes: P1 and P2
- Statement S1 in P1 needs to be performed before statement S2 in P2
- Then define a semaphore "synch"
- Initialize synch to 0
- Proper synchronization is achieved by having in P1:

      S1;
      signal(synch);

- and having in P2:

      wait(synch);
      S2;

30 Semaphores & synchronization
- Here we use semaphores for synchronizing different activities, not for resolving mutual exclusion. An activity is work done by a specific process. Initially, the system creates all the processes that perform these specific activities. For example, process x, which performs activity x, does not start performing it until it is signaled (or told to) by process y.
- Example of process synchronization: router fault detection, fault logging, alarm reporting, and fault fixing.
  1. Draw the process precedence graph
  2. Write pseudocode for process synchronization using semaphores

31 Binary semaphores (mutexes)
- The semaphores we have studied so far are called counting (or integer) semaphores
- There are also binary semaphores
  - similar to counting semaphores, except that "count" is Boolean valued
  - counting semaphores can be implemented with binary semaphores...
  - generally more difficult to use than counting semaphores (e.g., they cannot be initialized to an integer k > 1)

32 Binary semaphores

  waitB(S):
      if (S.value == 1) {
          S.value := 0;
      } else {
          block this process
          place this process in S.queue
      }

  signalB(S):
      if (S.queue is empty) {
          S.value := 1;
      } else {
          remove a process P from S.queue
          place this process P on the ready list
      }

33 How to implement a counting semaphore using mutexes?

  S:  counting semaphore
  S1: mutex = 1;
  S2: mutex = 0;
  C:  integer (initialized to the counting semaphore's initial count)

  P(S):
      P(S1);
      C := C - 1;
      if (C < 0) {
          V(S1);
          P(S2);    // the waiter resumes holding S1, handed over by V(S)
      }
      V(S1);

  V(S):
      P(S1);
      C := C + 1;
      if (C <= 0)
          V(S2);    // wake one waiter; S1 is passed to it, not released
      else
          V(S1);

34 mutex vs. futex
- The futex is part of recent versions of Linux 2.6
- It stands for "fast userspace mutex" and gives better performance: fewer system calls are made
  - System calls are made only when blocking and waking up a process
  - The _down and _up operations are atomic instructions (no system calls needed)

35 Spinlocks
- Spinlocks are counting semaphores that use busy waiting (instead of blocking)
- Useful on multiprocessors when critical sections last for a short time
  - "short" meaning less than the time of 2 context switches
- For S-- and S++, use the mutual exclusion techniques of software or hardware
  - Alternatively, use just a flag with one value; this is the common way of implementing them
- We then waste a bit of CPU time, but we save a process switch
  - Remember that a context switch is expensive
  - Typically used by SMP kernel code

  wait(S):
      S--;
      while S < 0 do {};

  signal(S):
      S++;

36 Adaptive mutexes in Solaris
- Spin if the lock is held by a thread that is currently running on another processor
  - if the holder is running on another processor, it will finish soon, so there is no need for a context switch
  - more advanced techniques stop spinning (and block) if the thread holding the lock gets blocked
- Block otherwise
  - i.e., if the lock is held by a thread that is not running

37 Problems with semaphores
- Semaphores provide a powerful tool for enforcing mutual exclusion and coordinating processes
- But the wait(S) and signal(S) operations are scattered among several processes, so it is difficult to understand their effects
  - they can easily cause deadlock
- Usage must be correct in all the processes
- One bad (or malicious) process can make the entire collection of processes fail
  - semaphores may also cause priority inversion

38 Shared memory in Unix
- A block of virtual memory shared by multiple processes
- The shmget system call creates a new region of shared memory or returns an existing one
- A process attaches a shared memory region to its virtual address space with the shmat system call
  - the region becomes part of the process's virtual address space
  - it is not swapped out when the process is suspended, if it is in use by other processes
- shmctl changes the shared memory segment's properties (e.g., permissions)
- shmdt detaches a shared memory segment from a process's address space
- Mutual exclusion must be provided by the processes using the shared memory
- It is the fastest form of IPC provided by Unix

39 Unix signals
- Similar to hardware interrupts, but without priorities
- Each signal is represented by a numeric value, e.g.:
  - 02, SIGINT: to interrupt a process
  - 09, SIGKILL: to terminate a process
- Each signal is maintained as a single bit in the process table entry of the receiving process: the bit is set when the corresponding signal arrives (there are no waiting queues)
- A signal is processed as soon as the process next runs in user mode
- A default action (e.g., termination) is performed unless a signal handler function has been provided for that signal (by using the signal system call)