Chapter 5: Process Synchronization


1 Chapter 5: Process Synchronization
Processes can execute concurrently: the CPU scheduler switches rapidly between processes to provide concurrent execution, so a process may be interrupted at any time, partially completing execution. Concurrent (parallel) access to shared data may result in data inconsistency. Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes, so process synchronization and coordination are necessary. The outcome of the execution depends on the particular order in which the processes access the shared data (and can be different each time the processes are run).

2 Example

3 Bounded-Buffer producer-consumer problem
The simple shared-memory solution to the bounded-buffer problem allows at most N – 1 items in the buffer at the same time. A solution where all N buffers are used can be implemented as follows. Suppose that we want a solution to the producer-consumer problem that fills all the buffers. We can do so by having an integer counter that keeps track of the number of full buffers. Initially, counter is set to 0. It is incremented by the producer after it produces a new item and decremented by the consumer after it consumes an item.

4 Bounded-Buffer producer-consumer problem
Shared variables:
    const int N = ?;                    // buffer size
    int buffer[N];                      // items kept in the buffer
    int counter = 0, in = 0, out = 0;   // in & out are indexes into the buffer

Producer code:
    int nextp;                          // local variable: item produced
    do {
        ... produce an item in nextp ...
        while (counter == N);           // buffer full, busy-wait
        buffer[in] = nextp;
        in = (in + 1) % N;
        counter = counter + 1;
    } while (true);

Consumer code:
    int nextc;                          // local variable: item consumed
    do {
        while (counter == 0);           // buffer empty, busy-wait
        nextc = buffer[out];
        out = (out + 1) % N;
        counter = counter - 1;
        ... consume the item in nextc ...
    } while (true);

5 Bounded Buffer producer-consumer problem
The producer and consumer routines are correct separately, but they may not function correctly when executed concurrently. The statements counter = counter + 1; and counter = counter - 1; must be performed atomically. An atomic operation is one that completes in its entirety without interruption.

6 Bounded Buffer producer-consumer problem
The statement “counter = counter + 1” may be implemented in machine language as:
    register1 = counter
    register1 = register1 + 1
    counter = register1
The statement “counter = counter - 1” may be implemented as:
    register2 = counter
    register2 = register2 - 1
    counter = register2
register1 and register2 are local CPU registers.

7 Bounded Buffer producer-consumer problem
If both the producer and the consumer attempt to update the buffer concurrently, the machine-language statements may get interleaved (mixed). The interleaving depends upon how the producer and consumer processes are scheduled.

8 Bounded Buffer producer-consumer problem
Assume counter is initially 5. One interleaving of statements is:
    producer: register1 = counter           (register1 = 5)
    producer: register1 = register1 + 1     (register1 = 6)
    consumer: register2 = counter           (register2 = 5)
    consumer: register2 = register2 - 1     (register2 = 4)
    producer: counter = register1           (counter = 6)
    consumer: counter = register2           (counter = 4)
Note: the value of counter may end up either 4 or 6, where the correct result is 5. Depending on the order of execution, unexpected results may occur. The correct result is guaranteed only if the producer and consumer execute separately (not concurrently).
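The lost-update interleaving above can be reproduced on a real machine. The sketch below (assuming POSIX threads; the iteration count and function names are illustrative choices, not from the slides) spells out the slide's three-step read-modify-write in two deliberately unsynchronized threads, so increments are usually lost:

```c
#include <pthread.h>

#define ITERS 100000

static volatile long counter = 0;   /* shared, deliberately unprotected */

/* Each thread performs the three-step sequence from the slide:
   load counter, add one, store it back.  With no locking, the two
   threads' steps may interleave and some updates are lost. */
static void *unsafe_increment(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        long reg = counter;   /* register1 = counter       */
        reg = reg + 1;        /* register1 = register1 + 1 */
        counter = reg;        /* counter  = register1      */
    }
    return NULL;
}

/* Runs two unsynchronized incrementing threads and returns the final
   counter value.  The correct answer would be 2 * ITERS; because of
   the race it is usually smaller. */
long run_race_demo(void)
{
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, unsafe_increment, NULL);
    pthread_create(&b, NULL, unsafe_increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Because the outcome depends on scheduling, the final value varies from run to run; it can never exceed 2 * ITERS, and on most runs it is noticeably smaller.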

9 Race Condition We arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently. Race condition: a situation where several processes access and manipulate shared data concurrently, and the final value of the shared data depends upon which process finishes last. To prevent race conditions, concurrent processes must be synchronized: it is necessary to ensure that only one process at a time can manipulate the variable counter.

10 The Critical-Section Problem
The critical-section problem occurs in systems where multiple processes (P1, P2, …, Pn) all compete for the use of shared data. Each process has a code segment, called its critical section (region), in which the shared data is accessed. The critical section of a program is the fragment that accesses a shared resource (such as a common variable, a file being written, a table being updated, etc.). The critical-section problem is to design a protocol ensuring that when one process is executing in its critical section, no other process is allowed to execute in its critical section (no two processes are executing in their critical sections at the same time). Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section.

11 The Critical-Section Problem
[Timeline figure: Process A enters and leaves its critical region; Process B attempts to enter while A is inside, is blocked, and only enters (and later leaves) its critical region after A has left.]

12 General Structure
The general structure of a typical process Pi:
    do {
        entry section           // each process must request permission
                                // to enter its critical section
        critical section (CS)
        exit section            // marks the end of the critical section
        remainder section (RS)  // remaining code
    } while (true);

13 Example – Producer Consumer Problem
Producer code:
    int nextp;                          // local variable: item produced
    do {
        ... produce an item in nextp ...
        while (counter == N);           // buffer full, busy-wait
        buffer[in] = nextp;
        in = (in + 1) % N;
        counter = counter + 1;          // CS part
    } while (true);

Consumer code:
    int nextc;                          // local variable: item consumed
    do {
        while (counter == 0);           // buffer empty, busy-wait
        nextc = buffer[out];
        out = (out + 1) % N;
        counter = counter - 1;          // CS part
        ... consume the item in nextc ...
    } while (true);

14 Solution to Critical-Section Problem
A solution to the CS problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section (only one process can be in its CS; the others must wait).
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely (a process executing outside its CS must not block another process from entering its CS).
3. Bounded waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed; no assumption is made about the relative speed of the n processes.

15 Techniques for the solution of CS problem
    Software solutions
    Hardware solutions
    Operating-system solutions (semaphores)

16 SOFTWARE SOLUTIONS FOR 2-PROCESSES Algorithm 1 – Strict Alternation Approach
Shared variables:
    int turn;       // initially turn = 0
    // turn == i  means  Pi can enter its critical section

Process Pi:
    do {
        while (turn == j);      // busy-wait
        critical section
        turn = j;
        remainder section
    } while (true);

Process Pj:
    do {
        while (turn == i);      // busy-wait
        critical section
        turn = i;
        remainder section
    } while (true);

17 Algorithm 1 – Strict Alternation Approach
Mutual exclusion: the variable turn shows which process may enter its CS; it is either 0 or 1, so only one of the processes (P0 or P1) can be in its CS at a time. If turn == 0, P1 waits and P0 enters its CS; if turn == 1, P0 waits and P1 enters its CS.
Progress: this solution does not satisfy the progress requirement. If Pi wants to enter its critical section while turn == j, it cannot, even if Pj is executing in its remainder section. For example, if P1 wants to enter its CS while turn == 0 and P0 is running outside its CS (in its RS), progress is not satisfied for P1.
Bounded waiting: the other process is allowed to enter its critical section only once after a process has requested entry and before that request is granted: turn = 0, P0 enters its CS; after exit, turn = 1 and P1 enters its CS; after exit, turn = 0 again.
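Strict alternation can be tried out directly. In this sketch (POSIX threads, with a C11 atomic standing in for the shared turn variable so the compiler cannot reorder the accesses; names and iteration counts are illustrative), two threads alternate entering a critical section that increments a plain counter. If mutual exclusion holds, no increments are lost:

```c
#include <pthread.h>
#include <stdatomic.h>

#define ITERS 50000

static _Atomic int turn = 0;    /* whose turn it is to enter the CS */
static long shared_count = 0;   /* protected by strict alternation  */

/* arg is the thread id (0 or 1); the other id plays the role of j. */
static void *alternating_worker(void *arg)
{
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < ITERS; k++) {
        while (atomic_load(&turn) == j)
            ;                       /* busy-wait until turn == i */
        shared_count++;             /* critical section          */
        atomic_store(&turn, j);     /* hand the turn to Pj       */
    }
    return NULL;
}

/* Both threads make exactly ITERS entries, strictly alternating,
   so the final count is exactly 2 * ITERS. */
long run_alternation_demo(void)
{
    pthread_t t0, t1;
    shared_count = 0;
    atomic_store(&turn, 0);
    pthread_create(&t0, NULL, alternating_worker, (void *)0L);
    pthread_create(&t1, NULL, alternating_worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_count;
}
```

Note the limitation the slide points out: each thread can proceed only when the other hands over the turn, so if one thread stopped looping, the other would spin forever.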

18 Algorithm 2 An alternative to strict alternation is to keep more state and record which processes want to enter their critical sections.
Shared variables:
    bool flag[2];   // initially flag[0] = flag[1] = false
    // flag[i] == true  means  Pi is ready to enter its critical section
Process Pi:
    do {
        flag[i] = true;
        while (flag[j]);        // wait
        critical section
        flag[i] = false;
        remainder section
    } while (true);
This satisfies mutual exclusion; what about the progress requirement?

19 Progress requirement? The progress requirement is not satisfied. Suppose processes Pi and Pj both execute the entry assignment in the following sequence:
    T0: Pi sets flag[i] = true;
    T1: Pj sets flag[j] = true;
    T2: ?
Both processes are now stuck in their respective while statements, looping forever; neither can continue, yet no one is in its CS.

20 Algorithm 3 – Peterson’s Algorithm
A good algorithmic solution to the problem for two processes. Assume that the machine-language instructions are atomic, i.e., cannot be interrupted. The two processes share two variables:
    int turn;
    bool flag[2];
The variable turn indicates whose turn it is to enter the critical section; initially turn = 0. The flag array indicates whether a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready. Initially flag[0] = flag[1] = false.

21 Algorithm 3 – Peterson’s Algorithm
Peterson's algorithm combines the shared variables of algorithms 1 and 2.
Shared variables:
    bool flag[2] = {false, false};
    int turn = 0;
    // turn == i     means  Pi can enter its critical section
    // flag[i] == true  means  Pi is ready to enter its critical section
Process Pi:
    do {
        flag[i] = true;
        turn = j;               // give priority to the other process
        while (flag[j] && turn == j);   // wait
        critical section
        flag[i] = false;        // allow the other process to enter its CS
        remainder section
    } while (true);
What about the three requirements?

22 Algorithm 3 – Peterson’s Algorithm
Mutual exclusion (ME) requirement
Process Pi:
    do {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j);
        critical section
        flag[i] = false;
        remainder section
    } while (true);
If both processes were in their CSs, then flag[0] = flag[1] = true; but since turn is either 0 or 1, only one of them can have passed its while test. Hence ME is satisfied.

23 Algorithm 3 – Peterson’s Algorithm
Progress requirement
Process Pi:
    do {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j);
        critical section
        flag[i] = false;
        remainder section
    } while (true);
When P0 is in its RS, flag[0] = false, so P1 can enter its CS any number of times. Hence progress is satisfied.

24 Algorithm 3 – Peterson’s Algorithm
Bounded waiting requirement
Process Pi:
    do {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j);
        critical section
        flag[i] = false;
        remainder section
    } while (true);
If Pi wants to enter its CS a second time while Pj is waiting, Pi cannot, since flag[j] = true and Pi has just set turn = j. So after at most one entry by Pi, the other process enters its CS. Hence bounded waiting is satisfied.
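Peterson's algorithm can likewise be run as real code, with one caveat the slides gloss over: on modern hardware the plain loads and stores must be made sequentially consistent (here via C11 atomics), or the compiler and CPU may reorder them and break mutual exclusion. A sketch with illustrative iteration counts:

```c
#include <pthread.h>
#include <stdatomic.h>

#define ITERS 50000

static _Atomic int flag[2];     /* flag[i]: Pi wants to enter */
static _Atomic int turn;        /* tie-breaker                */
static long shared_count = 0;

/* Peterson's entry/exit protocol around a shared increment.
   Sequentially consistent atomics supply the ordering that the
   slide's plain variables silently assume. */
static void *peterson_worker(void *arg)
{
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < ITERS; k++) {
        atomic_store(&flag[i], 1);      /* flag[i] = true      */
        atomic_store(&turn, j);         /* give priority to Pj */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           /* busy-wait           */
        shared_count++;                 /* critical section    */
        atomic_store(&flag[i], 0);      /* exit section        */
    }
    return NULL;
}

/* If mutual exclusion holds, no increment is lost and the result
   is exactly 2 * ITERS. */
long run_peterson_demo(void)
{
    pthread_t t0, t1;
    shared_count = 0;
    pthread_create(&t0, NULL, peterson_worker, (void *)0L);
    pthread_create(&t1, NULL, peterson_worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_count;
}
```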

25 Multiple-process Solutions: Bakery Algorithm
A critical-section solution for n processes (process names are unique). Before entering its critical section, each process receives a ticket number (its priority). The holder of the smallest ticket number enters the critical section. If processes Pi and Pj receive the same ticket number (the algorithm does not guarantee that all processes receive different numbers) and i < j, then Pi is served first (the process with the lowest name); otherwise Pj is served first. The numbering scheme always generates numbers in increasing order of enumeration, e.g., 1, 2, 3, 3, 3, 3, 4, 5, …

26 Bakery Algorithm
Notation: “<” is lexicographical order on pairs (ticket #, process id #):
    (a,b) < (c,d) if a < c, or if a = c and b < d
    max(a0, …, an-1) is a number k such that k ≥ ai for i = 0, …, n-1
No two processes ever have equal (ticket #, process id #) pairs, since process ids are unique.
Shared data:
    bool choosing[n] = {false};
    int number[n] = {0};

27 Bakery Algorithm
Process Pi:
    do {
        choosing[i] = true;
        number[i] = max(number[0], number[1], …, number[n-1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j]);
            while (number[j] != 0 && (number[j], j) < (number[i], i));
        }
        critical section
        number[i] = 0;
        remainder section
    } while (true);

28 Bakery Algorithm
Mutual exclusion (ME) requirement
Process Pi:
    do {
        choosing[i] = true;
        number[i] = max(number[0], number[1], …, number[n-1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j]);
            while (number[j] != 0 && (number[j], j) < (number[i], i));
        }
        critical section
        number[i] = 0;
        remainder section
    } while (true);
If there is a process Pj with a smaller (number[j], j), it has higher priority to enter, and Pi waits. Hence ME is satisfied.

29 Bakery Algorithm
Progress requirement
Process Pi:
    do {
        choosing[i] = true;
        number[i] = max(number[0], number[1], …, number[n-1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j]);
            while (number[j] != 0 && (number[j], j) < (number[i], i));
        }
        critical section
        number[i] = 0;
        remainder section
    } while (true);
When Pi is in its RS, number[i] = 0, so Pj is never postponed because of Pi. Hence progress is satisfied.

30 Bakery Algorithm
Bounded waiting requirement
Process Pi:
    do {
        choosing[i] = true;
        number[i] = max(number[0], number[1], …, number[n-1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j]);
            while (number[j] != 0 && (number[j], j) < (number[i], i));
        }
        critical section
        number[i] = 0;
        remainder section
    } while (true);
Since ticket numbers are generated in increasing order, Pi enters its CS after each of the processes already waiting enters once: a process Pj that wants to enter a second time takes a new, larger ticket. Hence bounded waiting is satisfied.

31 Synchronization Hardware
Some simple hardware instructions can be used to solve the critical-section problem. All the solutions below are based on the idea of locking: protecting critical regions via locks. Hardware designers have proposed machine instructions that perform two actions atomically (indivisibly, non-interruptibly) on the same memory location, e.g. test-and-modify, or swapping:
    either test a memory word and set its value,
    or swap the contents of two memory words.
Solution to the critical-section problem using locks:
    do {
        acquire lock
        critical section
        release lock
        remainder section
    } while (true);

32 TestAndSet Instruction
The TestAndSet instruction operates as the following function, executed atomically:
    bool TestAndSet(bool &target) {
        bool rv = target;
        target = true;
        return rv;
    }
It returns the original value of the passed parameter and sets the parameter's new value to true. If two TestAndSet instructions are executed simultaneously (each on a different CPU), they are executed sequentially (i.e., atomically) in some arbitrary order.

33 Mutual Exclusion with TestAndSet
Shared data:
    bool lock = false;
Process Pi:
    do {
        while (TestAndSet(lock));   /* do nothing */
        critical section
        lock = false;
        remainder section
    } while (true);
The first process to execute TestAndSet finds lock = false, changes it to true, and enters its CS. Others must wait, since lock is now true. Hence ME is satisfied.
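C11 exposes a real test-and-set through atomic_flag: atomic_flag_test_and_set atomically sets the flag and returns its previous value, exactly like the TestAndSet above. A sketch of the spinlock in runnable form (POSIX threads; iteration counts are illustrative):

```c
#include <pthread.h>
#include <stdatomic.h>

#define ITERS 100000

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */
static long shared_count = 0;

/* Spin until test-and-set returns the old value "clear" (i.e., we
   were the one to set it), do the critical section, then clear. */
static void *tas_worker(void *arg)
{
    (void)arg;
    for (int k = 0; k < ITERS; k++) {
        while (atomic_flag_test_and_set(&lock))
            ;                           /* busy-wait: lock was set   */
        shared_count++;                 /* critical section          */
        atomic_flag_clear(&lock);       /* lock = false              */
    }
    return NULL;
}

long run_tas_demo(void)
{
    pthread_t a, b;
    shared_count = 0;
    pthread_create(&a, NULL, tas_worker, NULL);
    pthread_create(&b, NULL, tas_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return shared_count;    /* exactly 2 * ITERS if ME held */
}
```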

34 Remark for TestAndSet
TestAndSet is also equivalent to the following function taking a pointer; the address operator must then be used at the call site:
    bool TestAndSet(bool *target) {
        bool rv = *target;
        *target = true;
        return rv;
    }
Process Pi:
    do {
        while (TestAndSet(&lock));
        critical section
        lock = false;
        remainder section
    } while (true);

35 Synchronization Hardware – Swap function
Swap atomically exchanges two variables:
    void Swap(bool &a, bool &b) {
        bool temp = a;
        a = b;
        b = temp;
    }
Alternatively, with pointers:
    void Swap(bool *a, bool *b) {
        bool temp = *a;
        *a = *b;
        *b = temp;
    }

36 Mutual Exclusion with Swap
Shared data:
    bool lock = false;
Process Pi:
    bool key;               // local variable
    do {
        key = true;
        do {
            Swap(lock, key);
        } while (key == true);
        critical section
        lock = false;
        remainder section
    } while (true);
The first process to enter its CS sets key to false and lock to true, so it exits the inner loop (key == true is false) and enters its CS; the others must wait since lock = true. Hence ME is satisfied.

37 Bounded-Waiting with Swap
Shared data:
    bool lock = false;
Process Pi:
    bool key;               // local variable
    do {
        key = true;
        do {
            Swap(lock, key);
        } while (key == true);
        critical section
        lock = false;
        remainder section
    } while (true);
On exit, Pi sets lock = false, so any waiting process may enter its CS next, in arbitrary order. Bounded waiting is not satisfied.

38 Another Algorithm using TestAndSet (satisfies all CS requirements)
Shared data:
    bool waiting[n] = {false};
    bool lock = false;
Process Pi:
    bool key;  int j;       // local variables
    do {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
            key = TestAndSet(lock);
        waiting[i] = false;
        critical section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = false;
        else
            waiting[j] = false;
        remainder section
    } while (true);

39 Mutual Exclusion using TestAndSet
Shared data:
    bool waiting[n] = {false};
    bool lock = false;
Process Pi:
    do {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
            key = TestAndSet(lock);
        waiting[i] = false;
        critical section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = false;
        else
            waiting[j] = false;
        remainder section
    } while (true);
Pi can enter its CS only when waiting[i] = false or key = false. key becomes false only for the first process to execute TestAndSet while lock = false; all others find lock = true and must wait. Hence ME is satisfied.

40 Progress using TestAndSet
Shared data:
    bool waiting[n] = {false};
    bool lock = false;
Process Pi:
    do {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
            key = TestAndSet(lock);
        waiting[i] = false;
        critical section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = false;
        else
            waiting[j] = false;
        remainder section
    } while (true);
A process exiting its CS sets either lock or waiting[j] to false; both allow a waiting process to proceed. Hence progress is satisfied.

41 Bounded-Waiting using TestAndSet
Shared data:
    bool waiting[n] = {false};
    bool lock = false;
Process Pi:
    do {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
            key = TestAndSet(lock);
        waiting[i] = false;
        critical section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = false;
        else
            waiting[j] = false;
        remainder section
    } while (true);
When a process leaves its CS, it scans the processes in cyclic order (i+1, i+2, …, n-1, 0, …, i-1) and lets the first one that is in its entry section (waiting[j] = true) proceed. A waiting process therefore enters its CS within at most n-1 turns. Hence bounded waiting is satisfied.

42 Semaphores The previous solutions are complicated and generally inaccessible to application programmers, so OS designers provide software tools to solve the critical-section problem. A semaphore is a synchronization tool provided by the OS; with a proper implementation it does not require busy waiting (see slide 43). Semaphore S – an integer variable that can be accessed only via two indivisible (atomic) operations, wait() and signal().
Definition of the wait() operation:
    wait(S) {           // to test
        while (S <= 0); // busy wait
        S = S - 1;
    }
Definition of the signal() operation:
    signal(S) {         // to increment
        S = S + 1;
    }
When one process modifies the semaphore value, no other process can simultaneously modify it. In wait(S), the test of the integer value (S <= 0) and its possible modification (S = S - 1) must also be executed without interruption.

43 Critical Section of n Processes
Shared data:
    semaphore mutex = 1;
Process Pi:
    do {
        wait(mutex);
        critical section
        signal(mutex);
        remainder section
    } while (true);
Operating systems provide counting and binary semaphores. A counting semaphore's value can range over an unrestricted domain; it is used to control access to a resource consisting of a finite number of instances and is initialized to the number of instances available. A binary semaphore's value can range only between 0 and 1.
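POSIX provides counting semaphores directly: sem_init, sem_wait, and sem_post correspond to the slide's initialization, wait(), and signal(). A sketch using one as the binary semaphore mutex above (iteration counts are illustrative; this is the POSIX unnamed-semaphore form, which is not available on every platform):

```c
#include <pthread.h>
#include <semaphore.h>

#define ITERS 100000

static sem_t mutex;             /* binary semaphore, initialized to 1 */
static long shared_count = 0;

static void *sem_worker(void *arg)
{
    (void)arg;
    for (int k = 0; k < ITERS; k++) {
        sem_wait(&mutex);       /* wait(mutex)      */
        shared_count++;         /* critical section */
        sem_post(&mutex);       /* signal(mutex)    */
    }
    return NULL;
}

long run_sem_demo(void)
{
    pthread_t a, b;
    shared_count = 0;
    sem_init(&mutex, 0, 1);     /* 0 = shared between threads, value 1 */
    pthread_create(&a, NULL, sem_worker, NULL);
    pthread_create(&b, NULL, sem_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&mutex);
    return shared_count;        /* exactly 2 * ITERS */
}
```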

44 Semaphore Implementation
We must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time. The semaphore definition above, and all other mutual-exclusion solutions given so far, require busy waiting (looping continuously in the entry code). Busy waiting wastes CPU cycles, so it must be dealt with. To avoid busy waiting, a process that has to wait is put in a blocked queue of processes waiting for the same event. Define a semaphore as a structure:
    struct Semaphore {
        int value;
        int List[n];    // queue of waiting processes
    };
    Semaphore S;
Assume two simple operations:
    block() suspends the process that invokes it.
    wakeup(P) resumes the execution of a blocked process P.

45 Implementation
Semaphore operations on variable S are now defined as:
    wait(S) {
        S.value--;
        if (S.value < 0) {
            add this process to S.List;
            block();        // suspends: waiting state
        }
    }
    signal(S) {
        S.value++;
        if (S.value <= 0) {
            remove a process P from S.List;
            wakeup(P);      // resumes: ready state
        }
    }
Here the semaphore value may be negative; if it is, its magnitude is the number of processes waiting on that semaphore: abs(S.value) is the number of blocked processes. S.value <= 0 in signal() means there is a process to wake up.

46 Semaphore as a General Synchronization Tool
To execute instruction B in Pj only after instruction A has executed in Pi, use a semaphore flag initialized to 0:
    process Pi          process Pj
    …                   …
    instruction A       wait(flag);
    signal(flag);       instruction B
Note: semaphores can solve various synchronization problems. The system must guarantee that no two processes can execute wait() and signal() operations on the same semaphore at the same time; the two operations are executed atomically.

47 Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Let S and Q be two semaphores initialized to 1:
    P0              P1
    wait(S);        wait(Q);
    wait(Q);        wait(S);
    …               …
    signal(S);      signal(Q);
    signal(Q);      signal(S);
Suppose P0 executes wait(S) and P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q); similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations can never be executed, P0 and P1 are deadlocked. A set of processes is in a deadlocked state when every process in the set is waiting for an event that can be caused only by another process in the set.
Starvation – indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended.

48 Classical Problems of Synchronization
Classical problems used to test newly-proposed synchronization schemes Bounded-Buffer Producer-Consumer Problem Readers and Writers Problem Dining-Philosophers Problem

49 Bounded-Buffer Producer-Consumer Problem
n buffers, each can hold one item.
Shared data:
    int n;                  /* # of slots in the buffer  */
    semaphore full = 0,     /* # of full buffers         */
              empty = n,    /* # of empty buffers        */
              mutex = 1;    /* access to the buffer pool */
Producer process:
    do {
        produce an item in nextp
        wait(empty);
        wait(mutex);
        add nextp to buffer
        signal(mutex);
        signal(full);
    } while (true);
Consumer process:
    do {
        wait(full);
        wait(mutex);
        remove an item from buffer to nextc
        signal(mutex);
        signal(empty);
        consume the item in nextc
    } while (true);
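The pseudocode above maps almost one-to-one onto POSIX semaphores. In this sketch (buffer size, item count, and function names are illustrative choices) the producer sends the integers 1..NITEMS and the consumer sums them, so the result checks that nothing was lost or duplicated:

```c
#include <pthread.h>
#include <semaphore.h>

#define N 8             /* buffer slots      */
#define NITEMS 1000     /* items to transfer */

static int buffer[N];
static int in = 0, out = 0;
static sem_t full, empty, mutex;
static long consumed_sum = 0;

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 1; item <= NITEMS; item++) {
        sem_wait(&empty);       /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;      /* add nextp to buffer  */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);        /* one more full slot   */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int k = 0; k < NITEMS; k++) {
        sem_wait(&full);        /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out]; /* remove item to nextc */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);       /* one more empty slot  */
        consumed_sum += item;   /* "consume" the item   */
    }
    return NULL;
}

long run_bounded_buffer(void)
{
    pthread_t p, c;
    sem_init(&full, 0, 0);
    sem_init(&empty, 0, N);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;        /* 1 + 2 + ... + NITEMS */
}
```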

50 Readers-Writers Problem
A data set is shared among a number of concurrent processes Readers – only read the data set; they do not perform any updates Writers – can both read and write Problem – allow multiple readers to read at the same time Only one single writer can access the shared data at the same time Several variations of how readers and writers are considered – all involve some form of priorities Shared Data Data set Semaphore rw_mutex initialized to 1 Semaphore mutex initialized to 1 Integer read_count initialized to 0

51 Readers-Writers Problem
Shared data:
    semaphore mutex = 1,        /* controls access to read_count */
              rw_mutex = 1;     /* entry for a writer, or the first/last reader */
    int read_count = 0;         /* number of readers currently reading the object */
Reader process:
    do {
        wait(mutex);
        read_count++;
        if (read_count == 1)    // first reader locks out writers
            wait(rw_mutex);
        signal(mutex);
        reading is performed
        wait(mutex);
        read_count--;
        if (read_count == 0)    // last reader admits writers
            signal(rw_mutex);
        signal(mutex);
    } while (true);
Writer process:
    do {
        wait(rw_mutex);
        writing is performed
        signal(rw_mutex);
    } while (true);
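A runnable sketch of the reader-preference protocol above, using POSIX semaphores (thread counts and iteration counts are illustrative): one writer increments the shared data while two readers repeatedly read it, and the final value confirms that no write was lost:

```c
#include <pthread.h>
#include <semaphore.h>

#define WRITES 1000

static sem_t mutex, rw_mutex;
static int read_count = 0;
static long data = 0;           /* the shared data set        */
static volatile long sink;      /* somewhere to "read" into   */

static void *writer(void *arg)
{
    (void)arg;
    for (int k = 0; k < WRITES; k++) {
        sem_wait(&rw_mutex);
        data++;                 /* writing is performed */
        sem_post(&rw_mutex);
    }
    return NULL;
}

static void *reader(void *arg)
{
    (void)arg;
    for (int k = 0; k < WRITES; k++) {
        sem_wait(&mutex);
        if (++read_count == 1)  /* first reader locks out writers */
            sem_wait(&rw_mutex);
        sem_post(&mutex);
        sink = data;            /* reading is performed */
        sem_wait(&mutex);
        if (--read_count == 0)  /* last reader admits writers */
            sem_post(&rw_mutex);
        sem_post(&mutex);
    }
    return NULL;
}

long run_readers_writers(void)
{
    pthread_t w, r1, r2;
    data = 0;
    read_count = 0;
    sem_init(&mutex, 0, 1);
    sem_init(&rw_mutex, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    return data;                /* exactly WRITES */
}
```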

52 Dining-Philosophers Problem
Five philosophers sit around a table. Each philosopher has a plate in front of him/her, and between each pair of plates there is a single chopstick. A philosopher is either “thinking” or “eating”. When a philosopher gets hungry, he/she tries to pick up the two chopsticks on his/her left and right; having obtained both chopsticks, the philosopher can eat. Two neighbors cannot hold the same chopstick at the same time. When finished eating, the philosopher puts both chopsticks down and resumes thinking.

53 Dining-Philosophers Problem
Shared data:
    semaphore chopstick[5] = {1, 1, 1, 1, 1};
Philosopher i:
    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        // eat
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        // think
    } while (true);

54 Dining-Philosophers Problem
Deadlock occurs if each philosopher starts by picking up his/her left chopstick (or each the right one)! Several methods ensure freedom from deadlock: allow at most four philosophers to sit at the table simultaneously while keeping five chopsticks; or allow a philosopher to pick up chopsticks only if both are available.
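The first remedy — at most four philosophers at the table — is easy to express with one extra counting semaphore initialized to 4, on top of the chopstick semaphores from the previous slide. A sketch (meal counts and names are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>

#define NPHIL 5
#define MEALS 100

static sem_t chopstick[NPHIL];
static sem_t room;              /* admits at most NPHIL-1 philosophers */
static int meals_eaten[NPHIL];

static void *philosopher(void *arg)
{
    int i = (int)(long)arg;
    for (int k = 0; k < MEALS; k++) {
        sem_wait(&room);        /* at most four may reach for chopsticks */
        sem_wait(&chopstick[i]);
        sem_wait(&chopstick[(i + 1) % NPHIL]);
        meals_eaten[i]++;       /* eat */
        sem_post(&chopstick[i]);
        sem_post(&chopstick[(i + 1) % NPHIL]);
        sem_post(&room);
        /* think */
    }
    return NULL;
}

/* With the room semaphore at NPHIL-1, a cycle of five philosophers
   each holding one chopstick is impossible, so every philosopher
   eventually eats all MEALS meals. */
int run_philosophers(void)
{
    pthread_t t[NPHIL];
    int total = 0;
    sem_init(&room, 0, NPHIL - 1);
    for (int i = 0; i < NPHIL; i++) {
        meals_eaten[i] = 0;
        sem_init(&chopstick[i], 0, 1);
    }
    for (long i = 0; i < NPHIL; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < NPHIL; i++)
        pthread_join(t[i], NULL);
    for (int i = 0; i < NPHIL; i++)
        total += meals_eaten[i];
    return total;               /* NPHIL * MEALS */
}
```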

55 Problems with Semaphores
Incorrect use of semaphore operations (with semaphore mutex = 1):
    signal(mutex); … CS … wait(mutex);   // several processes may be in their CSs at the same time
    wait(mutex); … CS … wait(mutex);     // deadlock will occur
    Omitting wait(mutex) or signal(mutex), or both   // either ME is violated or deadlock occurs
Deadlock and starvation are possible.

