1 Process Synchronization Concurrent access to shared data may result in data inconsistency. Maintaining data consistency requires mechanisms that ensure the orderly execution of cooperating processes, so process synchronization and coordination are necessary. The outcome of the execution depends on the particular order in which the accesses to the shared data take place, and this order can differ each time the processes are run.

2 Example

3 Bounded-Buffer Producer-Consumer Problem The shared-memory solution to the bounded-buffer problem allows at most N - 1 items in the buffer at the same time. A solution in which all N buffer slots can be used is implemented as follows: modify the producer-consumer code by adding a variable counter, initialized to 0, that is incremented each time a new item is added to the buffer and decremented each time an item is removed.

4 Bounded-Buffer Producer-Consumer Problem

Shared variables:
    const int N = ?;                     // buffer size
    int buffer[N];                       // the items kept in the buffer
    int counter = 0, in = 0, out = 0;    // in and out are indexes into buffer

Producer code:
    int nextp;                           // local: the item produced
    do {
        ... produce an item in nextp ...
        while (counter == N) ;           // buffer full: busy wait
        buffer[in] = nextp;
        in = (in + 1) % N;
        counter = counter + 1;
    } while (true);

Consumer code:
    int nextc;                           // local: the item consumed
    do {
        while (counter == 0) ;           // buffer empty: busy wait
        nextc = buffer[out];
        out = (out + 1) % N;
        counter = counter - 1;
        ... consume the item in nextc ...
    } while (true);

5 Bounded-Buffer Producer-Consumer Problem The statements counter = counter + 1; and counter = counter - 1; must be performed atomically. An atomic operation is one that completes in its entirety without interruption.

6 Bounded-Buffer Producer-Consumer Problem The statement counter = counter + 1 may be implemented in machine language as:
    register1 = counter
    register1 = register1 + 1
    counter = register1
The statement counter = counter - 1 may be implemented as:
    register2 = counter
    register2 = register2 - 1
    counter = register2

7 Bounded-Buffer Producer-Consumer Problem If both the producer and the consumer attempt to update the buffer concurrently, the machine-language statements may get interleaved. The interleaving depends on how the producer and consumer processes are scheduled.

8 Bounded-Buffer Producer-Consumer Problem Assume counter is initially 5. One possible interleaving of the statements is:
    producer: register1 = counter           (register1 = 5)
    producer: register1 = register1 + 1     (register1 = 6)
    consumer: register2 = counter           (register2 = 5)
    consumer: register2 = register2 - 1     (register2 = 4)
    producer: counter = register1           (counter = 6)
    consumer: counter = register2           (counter = 4)
The value of counter ends up as either 4 or 6, whereas the correct result is 5. Depending on the order of execution, unexpected results may occur.

9 Race Condition We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently. Race condition: The situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last. To prevent race conditions, concurrent processes must be synchronized.
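
The lost update above is easy to reproduce with two threads. The following is a minimal sketch (not part of the original slides) using C++ threads; the function names are illustrative, and the deliberately unsynchronized access to counter is formally a data race (undefined behavior in C++), used here only to make the problem visible.

    #include <iostream>
    #include <thread>

    int counter = 0;                      // shared and deliberately unprotected

    void producer_like() {                // models "counter = counter + 1"
        for (int k = 0; k < 1000000; ++k)
            counter = counter + 1;        // non-atomic read-modify-write
    }

    void consumer_like() {                // models "counter = counter - 1"
        for (int k = 0; k < 1000000; ++k)
            counter = counter - 1;
    }

    int main() {
        std::thread p(producer_like), c(consumer_like);
        p.join();
        c.join();
        std::cout << "counter = " << counter << '\n';   // expected 0, but updates get lost
    }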

10 The Critical-Section Problem The critical-section problem occurs in systems where multiple processes (P1, P2, ..., Pn) compete for the use of shared data. The critical section of a program is the fragment that accesses a shared resource (such as a common variable): each process has a code segment, called its critical section, in which the shared data is accessed. The problem is to ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

11 General Structure The general structure of a typical process Pi:
    do {
        entry section       // each process must request permission to enter its critical section
        critical section
        exit section        // marks the end of the critical section
        remainder section
    } while (true);

12 Example – Producer-Consumer Problem The producer and consumer code from slide 4, repeated here as an example: the statements that access the shared variable counter form each process's critical section.

13 Solution to the Critical-Section Problem
1. Mutual Exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then only the processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: there is a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assumptions: each process executes at a nonzero speed; no assumption is made about the relative speeds of the n processes.

14 Solutions for 2 Processes: Algorithm 1 – Strict Alternation Approach
Shared variables: int turn; initially turn = 0. turn == i means Pi can enter its critical section.
    Process Pi:
    do {
        while (turn != i) ;     // busy wait until it is Pi's turn
        critical section
        turn = j;               // hand the turn to the other process
        remainder section
    } while (true);
Process Pj is symmetric: it waits while turn != j and sets turn = i on exit.

15 Algorithm 1 – Strict Alternation Approach
Mutual exclusion: the variable turn, which is either 0 or 1, determines which process may enter its CS, so at most one process can be in its CS at a time.
Progress: not satisfied. If Pi wants to enter its critical section while turn == j, it cannot, even if Pj is executing in its remainder section.
Bounded waiting: the other process can enter its critical section at most once after a process has requested entry and before that request is granted, so bounded waiting is satisfied.

16 Algorithm 2 An alternative to strict alternation is to keep more state and record which processes want to enter their critical sections.
Shared variables: bool flag[2]; initially flag[0] = flag[1] = false. flag[i] == true means Pi is ready to enter its critical section.
    Process Pi:
    do {
        flag[i] = true;          // announce the intent to enter
        while (flag[j]) ;        // wait while the other process wants in
        critical section
        flag[i] = false;
        remainder section
    } while (true);
This satisfies mutual exclusion; what about the progress requirement?

17 Progress Requirement? The progress requirement is not satisfied. If processes Pi and Pj execute their entry assignments in the sequence
    T0: Pi sets flag[i] = true;
    T1: Pj sets flag[j] = true;
    T2: ?
then both processes loop forever in their respective while statements.

18 Algorithm 3 – Peterson's Algorithm Combines the shared variables of Algorithms 1 and 2 (turn and flag).
    Process Pi:
    do {
        flag[i] = true;                   // Pi wants to enter
        turn = j;                         // give priority to the other process
        while (flag[j] && turn == j) ;    // busy wait
        critical section
        flag[i] = false;
        remainder section
    } while (true);
What about the three requirements?

19 Algorithm 3 – Peterson's Algorithm (code as on slide 18) Mutual exclusion: if both processes were in their CSs at the same time, then flag[0] = flag[1] = true; but since turn is either 0 or 1, only one of them can have passed its while loop. Hence ME is satisfied.

20 Algorithm 3 – Peterson's Algorithm (code as on slide 18) Progress: when Pj is in its remainder section, flag[j] = false, so Pi can enter its CS as many times as it wants without waiting. Hence PROGRESS is satisfied.

21 Algorithm 3 – Peterson's Algorithm (code as on slide 18) Bounded waiting: if Pi tries to enter its CS a second time while Pj is still waiting, Pi first sets turn = j, so Pi blocks at its while loop (flag[j] = true and turn == j) and Pj enters first. Hence BOUNDED WAITING is satisfied.
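
Below is a minimal runnable sketch of Peterson's algorithm (not part of the slides), using C++ threads. It uses std::atomic with the default sequentially consistent ordering because, with modern compilers and hardware, plain variables as written on the slide may be reordered and the algorithm would then fail; the shared counter and loop bound are illustrative.

    #include <atomic>
    #include <iostream>
    #include <thread>

    std::atomic<bool> flag[2] = {{false}, {false}};  // flag[i]: Pi wants to enter
    std::atomic<int>  turn{0};                       // which process yields
    int shared_counter = 0;                          // protected by the algorithm

    void process(int i) {
        int j = 1 - i;
        for (int k = 0; k < 100000; ++k) {
            flag[i] = true;                          // entry section
            turn = j;
            while (flag[j] && turn == j) ;           // busy wait
            ++shared_counter;                        // critical section
            flag[i] = false;                         // exit section
            // remainder section
        }
    }

    int main() {
        std::thread p0(process, 0), p1(process, 1);
        p0.join();
        p1.join();
        std::cout << shared_counter << '\n';         // 200000 if mutual exclusion held
    }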

22 Multiple-Process Solutions: Bakery Algorithm A critical-section solution for n processes. Before entering its critical section, each process receives a ticket number, and the holder of the smallest ticket number enters the critical section. If processes Pi and Pj receive the same number and i < j, then Pi is served first; otherwise Pj is served first. The numbering scheme generates ticket numbers in nondecreasing order, e.g., 1, 2, 3, 3, 3, 3, 4, 5, ...

23 Bakery Algorithm Notation: < denotes lexicographic order on pairs (ticket #, process id #): (a, b) < (c, d) if a < c, or if a = c and b < d. max(a0, ..., a(n-1)) is a number k such that k >= ai for i = 0, ..., n - 1. No two processes ever hold the same (ticket #, process id #) pair.
Shared data:
    bool choosing[n] = {false};
    int number[n] = {0};

24 Bakery Algorithm
    Process Pi:
    do {
        choosing[i] = true;
        number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j]) ;                                          // wait until Pj has picked its number
            while (number[j] != 0 && (number[j], j) < (number[i], i)) ;    // wait for higher-priority processes
        }
        critical section
        number[i] = 0;
        remainder section
    } while (true);

25 Bakery Algorithm (code as on slide 24) Mutual exclusion: if some process Pj holds a smaller pair (number[j], j), it has higher priority to enter, and Pi waits in the for loop. Hence ME is satisfied.

26 Bakery Algorithm (code as on slide 24) Progress: when Pi is in its remainder section, number[i] = 0, so no process Pj is postponed because of Pi. Hence PROGRESS is satisfied.

27 Bakery Algorithm (code as on slide 24) Bounded waiting: ticket numbers are generated in nondecreasing order, so a process Pj that wants to enter a second time draws a ticket larger than the ones already held. A waiting process Pi therefore enters its CS after each other process has entered at most once. Hence BOUNDED WAITING is satisfied.

28 Synchronization Hardware The critical-section problem can also be solved with simple hardware instructions. Hardware designers have provided machine instructions that perform two actions atomically (indivisibly) on the same memory location (e.g., test-and-modify, swapping). The TestAndSet instruction behaves like the following function:
    bool TestAndSet(bool &target) {
        bool rv = target;     // read the old value
        target = true;        // set the location to true
        return rv;            // report what was there before
    }
If two TestAndSet instructions are executed simultaneously (each on a different CPU), they are executed sequentially (i.e., atomically) in some arbitrary order.

29 Mutual Exclusion with TestAndSet
Shared data: bool lock = false;
    Process Pi:
    do {
        while (TestAndSet(lock)) ;    // spin until the old value of lock was false
        critical section
        lock = false;
        remainder section
    } while (true);
The first process to execute TestAndSet finds lock == false, changes it to true, and enters its CS; the others must wait because lock is now true. Hence ME is satisfied.

30 Remark for TestAndSet TestAndSet is equivalently written with a pointer parameter, in which case the address-of operator must be used at the call site:
    bool TestAndSet(bool *target) {
        bool rv = *target;
        *target = true;
        return rv;
    }
    Process Pi:
    do {
        while (TestAndSet(&lock)) ;
        critical section
        lock = false;
        remainder section
    } while (true);
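
For reference, a minimal C++ sketch of the same lock (not part of the slides): std::atomic_flag::test_and_set is the standard library's atomic test-and-set, and the acquire/release orderings shown are a common choice.

    #include <atomic>

    std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;   // shared lock, initially clear (false)

    void enter_cs() {
        while (lock_flag.test_and_set(std::memory_order_acquire))
            ;                                        // busy wait: old value was true
    }

    void leave_cs() {
        lock_flag.clear(std::memory_order_release);  // lock = false
    }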

31 Synchronization Hardware – Swap Function Swap atomically exchanges the contents of two variables:
    void Swap(bool &a, bool &b) {
        bool temp = a;
        a = b;
        b = temp;
    }
Alternatively, with pointers:
    void Swap(bool *a, bool *b) {
        bool temp = *a;
        *a = *b;
        *b = temp;
    }

32 Mutual Exclusion with Swap
Shared data: bool lock = false;
    Process Pi:
    bool key;
    do {
        key = true;
        do {
            Swap(lock, key);      // key receives the old value of lock; lock becomes true
        } while (key == true);
        critical section
        lock = false;
        remainder section
    } while (true);
The first process to execute Swap receives key = false (the old value of lock) and sets lock = true, so it enters its CS; the others keep getting key = true and must wait. Hence ME is satisfied.

33 Mutual Exclusion with Swap (code as on slide 32) On exit, Pi sets lock = false, and whichever waiting process happens to execute Swap next enters its CS; there is no ordering among the waiters, so a process may be bypassed arbitrarily often. Hence BOUNDED WAITING is not satisfied.
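
A C++ counterpart of the Swap-based lock (a sketch, not from the slides): std::atomic<bool>::exchange stores a new value and returns the old one atomically, which is exactly the Swap(lock, key) step.

    #include <atomic>

    std::atomic<bool> lock_var{false};       // shared: false means free

    void enter_cs() {
        bool key = true;
        while (key)                          // do { Swap(lock, key); } while (key == true);
            key = lock_var.exchange(true);   // key = old lock value; lock = true
    }

    void leave_cs() {
        lock_var.store(false);               // lock = false
    }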

34 Bounded-Waiting using TestAndSet
Shared data:
    bool waiting[n] = {false};
    bool lock = false;
    Process Pi (key and j are local):
    do {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
            key = TestAndSet(lock);          // spin until the lock is grabbed or the CS is handed over
        waiting[i] = false;
        critical section
        j = (i + 1) % n;                     // scan the other processes in cyclic order
        while (j != i && !waiting[j])
            j = (j + 1) % n;
        if (j == i) lock = false;            // nobody is waiting: release the lock
        else        waiting[j] = false;      // hand the critical section to Pj
        remainder section
    } while (true);

35 Bounded-Waiting using TestAndSet (code as on slide 34) Mutual exclusion: Pi leaves its entry loop only when waiting[i] = false or key = false. The first process to execute TestAndSet gets key = false; the others must wait until the CS is handed to them. Hence ME is satisfied.

36 Bounded-Waiting using TestAndSet (code as on slide 34) Progress: a process leaving its CS sets either lock or some waiting[j] to false, and either action lets a process waiting to enter its CS proceed. Hence PROGRESS is satisfied.

37 Bounded-Waiting using TestAndSet (code as on slide 34) Bounded waiting: when a process leaves its CS, it scans the other processes in cyclic order (i+1, i+2, ..., i-1) and lets the first one that is in its entry section (waiting[j] = true) proceed. A waiting process therefore enters its CS within at most n - 1 turns. Hence BOUNDED WAITING is satisfied.
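
A runnable C++ sketch of this bounded-waiting lock (not from the slides; the value of n and the use of sequentially consistent atomics are assumptions): exchange(true) again plays the role of TestAndSet.

    #include <atomic>

    constexpr int n = 4;                         // number of processes (illustrative)
    std::atomic<bool> waiting[n] = {};           // zero-initialized: all false
    std::atomic<bool> lock_ts{false};

    void enter_cs(int i) {
        waiting[i] = true;
        bool key = true;
        while (waiting[i] && key)
            key = lock_ts.exchange(true);        // TestAndSet(lock)
        waiting[i] = false;
    }

    void leave_cs(int i) {
        int j = (i + 1) % n;
        while (j != i && !waiting[j])            // look for the next waiting process
            j = (j + 1) % n;
        if (j == i) lock_ts = false;             // nobody waiting: free the lock
        else        waiting[j] = false;          // pass the critical section to Pj
    }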

38 Semaphores A semaphore is a synchronization tool that can be implemented without busy waiting (see slides 40-41). A semaphore S is an integer variable that can only be accessed via two indivisible (atomic) operations:
    wait(S):   while (S <= 0) ;    // busy wait in this simple definition
               S = S - 1;
    signal(S): S = S + 1;

39 Critical Section of n Processes
Shared data: semaphore mutex = 1;
    Process Pi:
    do {
        wait(mutex);
        critical section
        signal(mutex);
        remainder section
    } while (true);

40 Semaphore Implementation The semaphore definition above, like all of the mutual-exclusion solutions given so far, requires busy waiting (looping continuously in the entry code). Busy waiting wastes CPU cycles, so it should be avoided: when a process has to wait, it is instead put in a queue of processes blocked on the same event. Define a semaphore as a structure:
    struct Semaphore {
        int value;
        int List[n];    // queue of processes blocked on this semaphore
    };
    Semaphore S;
Assume two simple operations: block suspends the process that invokes it; wakeup(P) resumes the execution of a blocked process P.

41 Implementation The semaphore operations on S are now defined as:
    wait(S):
        S.value--;
        if (S.value < 0) {
            add this process to S.List;
            block;
        }
    signal(S):
        S.value++;
        if (S.value <= 0) {           // S.value was negative, so some process is blocked
            remove a process P from S.List;
            wakeup(P);
        }
With this implementation S.value may become negative; |S.value| is then the number of processes blocked on S.
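
As a concrete sketch (not from the slides), a blocking counting semaphore can be written in C++ with a mutex and a condition variable. Note that this common variant keeps value non-negative, unlike the slide's version where a negative value counts the blocked processes.

    #include <condition_variable>
    #include <mutex>

    class Semaphore {
        int value;
        std::mutex m;
        std::condition_variable cv;      // plays the role of the blocked-process list
    public:
        explicit Semaphore(int initial) : value(initial) {}

        void wait() {                    // wait(S)
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return value > 0; });   // block instead of busy-waiting
            --value;
        }

        void signal() {                  // signal(S)
            std::lock_guard<std::mutex> lk(m);
            ++value;
            cv.notify_one();             // wakeup(P)
        }
    };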

42 Semaphore as a General Synchronization Tool To execute instruction B in Pj only after instruction A has executed in Pi, use a semaphore flag initialized to 0:
    Process Pi:            Process Pj:
        ...                    ...
        instruction A          wait(flag);
        signal(flag);          instruction B
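
A small C++20 sketch of this ordering idiom (not from the slides), using std::binary_semaphore; acquire and release correspond to wait and signal.

    #include <iostream>
    #include <semaphore>
    #include <thread>

    std::binary_semaphore flag(0);       // initialized to 0, as on the slide

    void Pi() {
        std::cout << "instruction A\n";
        flag.release();                  // signal(flag)
    }

    void Pj() {
        flag.acquire();                  // wait(flag): blocks until Pi has executed A
        std::cout << "instruction B\n";
    }

    int main() {
        std::thread tj(Pj), ti(Pi);
        ti.join();
        tj.join();
    }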

43 Deadlock and Starvation Deadlock: two or more processes wait indefinitely for an event that can be caused only by one of the waiting processes. Let S and Q be two semaphores initialized to 1:
    P0:              P1:
    wait(S);         wait(Q);
    wait(Q);         wait(S);
    ...              ...
    signal(S);       signal(Q);
    signal(Q);       signal(S);
Starvation (indefinite blocking): a process may never be removed from the semaphore queue in which it is suspended.

44 Classical Problems of Synchronization: the Bounded-Buffer Problem, the Readers-Writers Problem, and the Dining-Philosophers Problem.

45 Bounded-Buffer Producer-Consumer Problem
Shared data: semaphore full = 0, empty = n, mutex = 1;
    Producer process:
    do {
        ... produce an item in nextp ...
        wait(empty);
        wait(mutex);
        ... add nextp to buffer ...
        signal(mutex);
        signal(full);
    } while (true);
    Consumer process:
    do {
        wait(full);
        wait(mutex);
        ... remove an item from buffer to nextc ...
        signal(mutex);
        signal(empty);
        ... consume the item in nextc ...
    } while (true);
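
A runnable C++20 sketch of this solution (not from the slides): std::counting_semaphore takes the roles of empty and full, a std::mutex takes the role of the binary semaphore mutex, and the queue, item counts, and N = 8 are illustrative.

    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <semaphore>
    #include <thread>

    constexpr int N = 8;                            // buffer size (illustrative)
    std::counting_semaphore<N> empty_slots(N);      // "empty" on the slide
    std::counting_semaphore<N> full_slots(0);       // "full"
    std::mutex buf_mutex;                           // "mutex"
    std::queue<int> buffer;

    void producer() {
        for (int item = 0; item < 100; ++item) {    // item plays the role of nextp
            empty_slots.acquire();                  // wait(empty)
            {
                std::lock_guard<std::mutex> lk(buf_mutex);   // wait(mutex) ... signal(mutex)
                buffer.push(item);                  // add nextp to buffer
            }
            full_slots.release();                   // signal(full)
        }
    }

    void consumer() {
        for (int k = 0; k < 100; ++k) {
            full_slots.acquire();                   // wait(full)
            int nextc;
            {
                std::lock_guard<std::mutex> lk(buf_mutex);
                nextc = buffer.front();             // remove an item from buffer to nextc
                buffer.pop();
            }
            empty_slots.release();                  // signal(empty)
            std::cout << nextc << ' ';              // consume the item in nextc
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
        std::cout << '\n';
    }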

46 Readers-Writers Problem
Shared data: semaphore mutex = 1, wrt = 1; int readcount = 0;
    Writer process:
    do {
        wait(wrt);
        ... writing is performed ...
        signal(wrt);
    } while (true);
    Reader process:
    do {
        wait(mutex);
        readcount++;
        if (readcount == 1) wait(wrt);     // first reader locks out writers
        signal(mutex);
        ... reading is performed ...
        wait(mutex);
        readcount--;
        if (readcount == 0) signal(wrt);   // last reader lets writers in
        signal(mutex);
    } while (true);
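
A C++20 sketch of the same scheme (not from the slides): std::binary_semaphore stands in for wrt, and a std::mutex stands in for the binary semaphore mutex that protects readcount.

    #include <mutex>
    #include <semaphore>

    std::binary_semaphore wrt(1);        // held by a writer or by the group of readers
    std::mutex rc_mutex;                 // protects readcount
    int readcount = 0;

    void write_once() {
        wrt.acquire();                   // wait(wrt)
        // ... writing is performed ...
        wrt.release();                   // signal(wrt)
    }

    void read_once() {
        {
            std::lock_guard<std::mutex> lk(rc_mutex);   // wait(mutex) ... signal(mutex)
            if (++readcount == 1)
                wrt.acquire();           // first reader locks out writers
        }
        // ... reading is performed ...
        {
            std::lock_guard<std::mutex> lk(rc_mutex);
            if (--readcount == 0)
                wrt.release();           // last reader lets writers in
        }
    }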

47 Dining-Philosophers Problem Five philosophers sit around a table. Each philosopher has a plate in front of him/her, and between each pair of adjacent plates lies a single chopstick. A philosopher is either thinking or eating. When a philosopher gets hungry, he/she tries to pick up the two chopsticks to his/her left and right; having obtained both chopsticks, the philosopher can eat. When finished eating, the philosopher puts both chopsticks down and resumes thinking.

48 Dining-Philosophers Problem
Shared data: semaphore chopstick[5] = {1, 1, 1, 1, 1};
    Philosopher i:
    do {
        wait(chopstick[i]);             // pick up the left chopstick
        wait(chopstick[(i + 1) % 5]);   // pick up the right chopstick
        ... eat ...
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        ... think ...
    } while (true);

49 Dining-Philosophers Problem Deadlock occurs if every philosopher simultaneously picks up his/her left chopstick! Several methods ensure freedom from deadlock:
- Allow at most four philosophers to sit at the table simultaneously while keeping five chopsticks.
- Allow a philosopher to pick up his/her chopsticks only if both chopsticks are available.
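
As a sketch (not from the slides), the second remedy can be approximated in C++ with std::scoped_lock, which acquires both chopstick mutexes using a deadlock-avoidance algorithm, so no philosopher is left indefinitely holding just one chopstick; the meal count is illustrative.

    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex chopstick[5];                     // one mutex per chopstick

    void philosopher(int i) {
        int left = i, right = (i + 1) % 5;
        for (int meals = 0; meals < 3; ++meals) {
            {
                // Acquires both chopsticks together, avoiding deadlock.
                std::scoped_lock both(chopstick[left], chopstick[right]);
                // ... eat ...
            }   // chopsticks are put down here
            // ... think ...
        }
    }

    int main() {
        std::vector<std::thread> seats;
        for (int i = 0; i < 5; ++i)
            seats.emplace_back(philosopher, i);
        for (auto& t : seats)
            t.join();
    }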

