Concurrency: Mutual Exclusion and Synchronization (Module 2.2)


1 Concurrency: Mutual Exclusion and Synchronization (Module 2.2)

2 Problems with concurrent execution
- Concurrent processes (or threads) often need to share data (maintained either in shared memory or in files) and resources
- If there is no controlled access to shared data, some processes may obtain an inconsistent view of this data
- The actions performed by concurrent processes will then depend on the order in which their execution is interleaved
- This order cannot be predicted, since it depends on:
  - the activities of other processes
  - the handling of I/O and interrupts
  - the scheduling policies of the OS

3 An example
- Processes P1 and P2 both run the procedure below and have access to the same variable "a"
- A process can be interrupted anywhere
- If P1 is interrupted just after its user input and P2 then executes entirely, the character echoed by P1 will be the one read by P2!

    static char a;

    void echo() {
        cin >> a;
        cout << a;
    }

4 The critical section problem
- When a process executes code that manipulates shared data (or a shared resource), we say that the process is in its critical section (CS) for that shared data
- The execution of critical sections must be mutually exclusive: at any time, only one process is allowed to execute in its critical section (even with multiple CPUs)
- Each process must therefore request permission to enter its critical section (CS)

5 The critical section problem
- The section of code implementing this request is called the entry section
- The critical section (CS) may be followed by an exit section
- The remaining code is the remainder section
- The critical section problem is to design a protocol that the processes can use so that their actions will not depend on the order in which their execution is interleaved (possibly on many processors)

6 Framework for analysis of solutions
- Each process executes at nonzero speed, but no assumption is made about the relative speed of the n processes
- General structure of a process:

    repeat
        entry section
        critical section
        exit section
        remainder section
    forever

- Many CPUs may be present, but the memory hardware prevents simultaneous access to the same memory location
- No assumption is made about the order of interleaved execution
- A solution must specify the entry and exit sections

7 Requirements for a valid solution to the critical section problem
- Mutual Exclusion
  - Only one process can be in the CS at a time
- Bounded Waiting
  - After a process has made a request to enter its CS, there is a bound on the number of times that the other processes are allowed to enter their CS
  - Otherwise the process would suffer from starvation
- Progress
  - When no process is in a CS, any process that requests entry to its CS must be permitted to enter without delay

8 Types of solutions
- Software solutions
  - algorithms whose correctness does not rely on any other assumptions (see the prior framework)
- Hardware solutions
  - rely on some special machine instructions
- Operating system solutions
  - provide some functions and data structures to the programmer

9 Software solutions
- Dekker's algorithm
- Peterson's algorithm
- Bakery algorithm

10 Peterson's Algorithm

11 Extending Peterson's to n processes
http://www.electricsand.com/peterson.htm

12 What about process failures?
- If all 3 criteria (mutual exclusion, progress, bounded waiting) are satisfied, then a valid solution provides robustness against the failure of a process in its remainder section (RS)
  - failure in the RS is just like having an infinitely long RS
- However, no valid solution can provide robustness against a process failing in its critical section (CS)

13 Drawbacks of software solutions
- Processes that are requesting to enter their critical section are busy waiting (consuming processor time needlessly)
- If CSs are long, it would be more efficient to block the waiting processes...

14 Hardware solutions: interrupt disabling
- On a uniprocessor: mutual exclusion is preserved, but execution efficiency is degraded: while in its CS, a process cannot interleave execution with other processes that are in their RS
- On a multiprocessor: mutual exclusion is not preserved
- Generally not an acceptable solution

    Process Pi:
    repeat
        disable interrupts
        critical section
        enable interrupts
        remainder section
    forever


19 Hardware solutions: special machine instructions
- Normally, an access to a memory location excludes other accesses to that same location
- Extension: designers have proposed machine instructions that perform 2 actions atomically (indivisibly) on the same memory location (e.g., reading and writing)
- The execution of such an instruction is mutually exclusive (even with multiple CPUs)
- They can be used simply to provide mutual exclusion, but more complex algorithms are needed to satisfy the 3 requirements of the CS problem

20 The test-and-set instruction
- A C++ description of test-and-set: the test of a memory location and its subsequent update execute together as one atomic instruction
- An algorithm that uses test-and-set for mutual exclusion:
- The shared variable b is initialized to 1
- Only the first Pi that sets b enters its CS

    Process Pi:
    repeat
        while (b == 0) {};   // test...
        b := 0;              // ...and set: performed together as one atomic test-and-set
        CS
        b := 1;
        RS
    forever

21 The real tsl

    enter_region:
        tsl r0, flag      ; copy flag to r0 and atomically set flag to 1
        cmp r0, #0        ; was flag zero?
        jnz enter_region  ; if it was nonzero, the lock was held, so loop
        ret               ; return to caller; critical region entered

    leave_region:
        mov flag, #0      ; store a 0 in flag
        ret               ; return to caller

22 The test-and-set instruction (cont.)
- Mutual exclusion is preserved: if Pi enters its CS, the other Pj are busy waiting
- Problem: still uses busy waiting
- When Pi exits its CS, the selection of the Pj that will enter the CS is arbitrary: no bounded waiting, hence starvation is possible
- Processors (e.g., Pentium) often provide an atomic xchg(a,b) instruction that swaps the contents of a and b
- But xchg(a,b) suffers from the same drawbacks as test-and-set

23 Semaphores
- A synchronization tool (provided by the OS) that does not require busy waiting
- A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations:
  - wait(S)
  - signal(S)
- To avoid busy waiting: when a process has to wait, it is put in a blocked queue of processes waiting for the same event

24 Semaphores
- Hence, in fact, a semaphore is a record (structure):

    type semaphore = record
        count: integer;
        queue: list of process
    end;
    var S: semaphore;

- When a process must wait for a semaphore S, it is blocked and put on the semaphore's queue
- The signal operation removes one process from the blocked queue and puts it in the list of ready processes

25 Semaphore's operations (atomic)

    wait(S):
        S.count--;
        if (S.count < 0) {
            block this process
            place this process in S.queue
        }

    signal(S):
        S.count++;
        if (S.count <= 0) {
            remove a process P from S.queue
            place this process P on the ready list
        }

S.count must be initialized to a nonnegative value (depending on the application)

26 Semaphores: observations
- When S.count >= 0: the number of processes that can execute wait(S) without being blocked is S.count
- When S.count < 0: the number of processes waiting on S is |S.count|
- Atomicity and mutual exclusion
  - no two processes can be in wait(S) and signal(S) (on the same S) at the same time (even with multiple CPUs)
  - hence the blocks of code defining wait(S) and signal(S) are, in fact, critical sections

27 Semaphores: observations
- The critical sections defined by wait(S) and signal(S) are very short: typically 10 instructions
- Solutions:
  - uniprocessor: disable interrupts during these operations (i.e., for a very short period); this does not work on a multiprocessor machine
  - multiprocessor: use the previous software or hardware schemes; the amount of busy waiting should be small

28 Using semaphores for solving critical section problems
- For n processes
- Initialize S.count to 1
- Then only 1 process is allowed into the CS (mutual exclusion)
- To allow k processes into the CS, initialize S.count to k

    Process Pi:
    repeat
        wait(S);
        CS
        signal(S);
        RS
    forever

29 Using semaphores to synchronize processes
- We have 2 processes: P1 and P2
- Statement S1 in P1 needs to be performed before statement S2 in P2
- Define a semaphore "synch", initialized to 0
- Proper synchronization is achieved by having in P1:
    S1;
    signal(synch);
- And having in P2:
    wait(synch);
    S2;

30 Semaphores & Synchronization
- Here we use semaphores for synchronizing different activities, not for resolving mutual exclusion
- An activity is work done by a specific process; initially the system creates all the processes that perform these activities
- For example, the process that performs activity x does not start performing it until it is signaled (or told to) by process y
- Example of process synchronization: router fault detection, fault logging, alarm reporting, and fault fixing
  1. Draw the process precedence graph
  2. Write pseudocode for the process synchronization using semaphores

31 Binary semaphores (mutexes)
- The semaphores we have studied are called counting (or integer) semaphores
- There are also binary semaphores
  - similar to counting semaphores, except that "count" is Boolean valued
  - counting semaphores can be implemented with binary semaphores...
  - generally more difficult to use than counting semaphores (e.g., they cannot be initialized to an integer k > 1)

32 Binary semaphores

    waitB(S):
        if (S.value = 1) {
            S.value := 0;
        } else {
            block this process
            place this process in S.queue
        }

    signalB(S):
        if (S.queue is empty) {
            S.value := 1;
        } else {
            remove a process P from S.queue
            place this process P on the ready list
        }

33 How to implement a counting semaphore using mutexes?

    S:  counting semaphore
    S1: mutex = 1;
    S2: mutex = 0;
    C:  integer

    P(S):
        P(S1);
        C := C - 1;
        if (C < 0) {
            V(S1);
            P(S2);
        }
        V(S1);

    V(S):
        P(S1);
        C := C + 1;
        if (C <= 0)
            V(S2);
        else
            V(S1);

34 mutex vs. futex
- The futex facility is part of Linux since version 2.6
- Stands for "fast userspace mutex"; it gives better performance because fewer system calls are made
  - system calls are made only when blocking and waking up a process
  - the _down and _up operations are atomic instructions (no system call needed)

35 Spinlocks
- Spinlocks are counting semaphores that use busy waiting (instead of blocking)
- Useful on multiprocessors when critical sections last only a short time
  - short time < the time of 2 context switches
- For S-- and S++, use the mutual exclusion techniques of software or hardware
  - alternatively, use just a flag with one value; this is a common way of implementing them
- We then waste a bit of CPU time, but we save a process switch
  - remember that a context switch is expensive
  - typically used by SMP kernel code

    wait(S):
        S--;
        while (S < 0) do {};

    signal(S):
        S++;

36 Adaptive mutexes in Solaris
- Spin if the lock is held by a thread that is currently running on another processor
  - if it is running on another processor, that thread will finish soon, and there is no need for a context switch
  - advanced techniques release the lock if the thread holding it gets blocked
- Block otherwise
  - i.e., if the lock is held by a thread that is not running

37 Problems with semaphores
- Semaphores provide a powerful tool for enforcing mutual exclusion and coordinating processes
- But wait(S) and signal(S) are scattered among several processes, so it is difficult to understand their effects
  - this can easily cause deadlock
- Usage must be correct in all the processes
- One bad (or malicious) process can fail the entire collection of processes
  - semaphores may also cause priority inversion

38 Shared memory in Unix
- A block of virtual memory shared by multiple processes
- The shmget system call creates a new region of shared memory or returns an existing one
- A process attaches a shared memory region to its virtual address space with the shmat system call
  - the region becomes part of the process's virtual address space
  - it is not swapped out when the process is suspended, if it is used by other processes
- shmctl changes the shared memory segment's properties (e.g., permissions)
- shmdt detaches (i.e., removes) a shared memory segment from a process
- Mutual exclusion must be provided by the processes using the shared memory
- The fastest form of IPC provided by Unix

39 Unix signals
- Similar to hardware interrupts, but without priorities
- Each signal is represented by a numeric value, e.g.:
  - 02, SIGINT: to interrupt a process
  - 09, SIGKILL: to terminate a process
- Each signal is maintained as a single bit in the process table entry of the receiving process: the bit is set when the corresponding signal arrives (there are no waiting queues)
- A signal is processed as soon as the process runs in user mode
- A default action (e.g., termination) is performed unless a signal handler function is provided for that signal (by using the signal system call)

