Introduction to Cooperating Processes


1 Introduction to Cooperating Processes
An independent process cannot affect or be affected by the execution of another process. A cooperating process can affect or be affected by the execution of another process. Advantages of process cooperation: information sharing, computation speed-up, modularity, and convenience.

2 Cooperation Among Processes by Sharing
Processes use and update shared data such as shared variables, files, and databases. Writes must be mutually exclusive to prevent a race condition leading to inconsistent views of the data. Critical sections are used to provide this data integrity. A process requiring its critical section must not be delayed indefinitely: no deadlock or starvation.

3 Cooperation Among Processes by Communication
Communication provides a way to synchronize, or coordinate, the various activities. Deadlock is possible: each process waits for a message from the other. Starvation is possible: two processes keep exchanging messages with each other while a third process waits indefinitely for a message.

4 Example -- Producer/Consumer Problem
Paradigm for cooperating processes - Producer process produces information that is consumed by a Consumer process. Example 1: a print program produces characters that are consumed by a printer. Example 2: an assembler produces object modules that are consumed by a loader.

5 Producer/Consumer Dynamics
A producer process produces information that is consumed by a consumer process. At any time, a producer activity may create some data. At any time, a consumer activity may want to accept some data. The data should be saved in a buffer until they are needed. If the buffer is finite, we want a producer to block if its new data would overflow the buffer. We also want a consumer to block if there are no data available when it wants them.

6 Producer/Consumer Problem (Contd.)
We need a buffer to hold items that are produced and later consumed: unbounded-buffer places no practical limit on the size of the buffer. bounded-buffer assumes that there is a fixed buffer size.

7 Idea for Producer/Consumer Solution
The bounded buffer is implemented as a circular array with 2 logical pointers: in and out. The variable in points to the next free position in the buffer. The variable out points to the first full position in the buffer.
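As a concrete sketch of this in/out arithmetic (the enqueue/dequeue helpers and the single-process setting are illustrative, not from the slides):

```c
#include <assert.h>

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
int in = 0;   /* next free position  */
int out = 0;  /* first full position */

/* The buffer is empty when in == out, and treated as full when
   advancing in would make it equal to out; one slot is therefore
   always left unused to distinguish the two states. */
int enqueue(int item) {
    if ((in + 1) % BUFFER_SIZE == out)
        return 0;                       /* full */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return 1;
}

int dequeue(int *item) {
    if (in == out)
        return 0;                       /* empty */
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 1;
}
```

Note that with this convention a buffer of size 10 holds at most 9 items.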

8 Bounded-Buffer – Shared-memory Solution
Shared data:

    #define BUFFER_SIZE 10

    typedef struct {
        . . .
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;
    int out = 0;

9 Bounded-Buffer – Producer and Consumer Processes
Producer process:

    item nextProduced;
    while (1) {
        while (((in + 1) % BUFFER_SIZE) == out)
            ;  /* do nothing: buffer is full */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
    }

Consumer process:

    item nextConsumed;
    while (1) {
        while (in == out)
            ;  /* do nothing: buffer is empty */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
    }

10 Problems with concurrent execution
Concurrent processes often need to share data (maintained either in shared memory or files) and resources. If there is no controlled access to shared data, some processes will obtain an inconsistent view of this data. The action performed by concurrent processes will then depend on the order in which their execution is interleaved.

11 Atomic Statement
An atomic (indivisible) operation is one that completes in its entirety without interruption. The statement "counter++" may be implemented in machine language as:

    register1 = counter
    register1 = register1 + 1
    counter = register1

The statement "counter--" may be implemented as:

    register2 = counter
    register2 = register2 - 1
    counter = register2

12 Problem with no synchronization
Assume counter is initially 5. One interleaving of statements is:

    producer: register1 = counter        (register1 = 5)
    producer: register1 = register1 + 1  (register1 = 6)
    consumer: register2 = counter        (register2 = 5)
    consumer: register2 = register2 - 1  (register2 = 4)
    producer: counter = register1        (counter = 6)
    consumer: counter = register2        (counter = 4)

The value of counter may end up either 4 or 6, whereas the correct result is 5.
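The bad outcome above can be replayed deterministically in a single thread; this sketch models the two "registers" as local variables and executes the six steps in exactly the listed order (the function name is illustrative):

```c
#include <assert.h>

int counter = 5;  /* shared variable, initially 5 as on the slide */

/* Replays the slide's interleaving: the producer's increment is
   overwritten by the consumer's stale decrement. */
int replay_interleaving(void) {
    int register1, register2;
    register1 = counter;        /* producer: register1 = 5 */
    register1 = register1 + 1;  /* producer: register1 = 6 */
    register2 = counter;        /* consumer: register2 = 5 */
    register2 = register2 - 1;  /* consumer: register2 = 4 */
    counter = register1;        /* producer: counter = 6   */
    counter = register2;        /* consumer: counter = 4   */
    return counter;
}
```

The lost update is visible: one increment and one decrement applied to 5 should leave 5, yet the result is 4.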

13 This is the Race Condition
Race condition: The situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last. To prevent race conditions, concurrent processes must coordinate or be synchronized.

14 The Critical-Section Problem
n processes all competing to use some shared data. Each process has a code segment, called critical section (CS), in which the shared data is accessed. Problem – ensure that when one process is executing in its CS, no other process is allowed to execute in its CS.

15 Different Sections of a process
Entry Section: before this point the process does not contend for the shared resource; this is where the decision to enter is made.
Critical Section: here the process uses the shared resource.
Exit Section: here the process releases the shared resource and announces that it has left its critical section.
Remainder Section: after this point the process does not contend for the shared resource.

16 CS Problem Dynamics (1) When a process executes code that manipulates shared data (or a shared resource), we say that the process is in its Critical Section (for that shared data). The execution of critical sections must be mutually exclusive: at any time, only one process is allowed to execute in its critical section (even with multiple processors). So each process must first request permission to enter its critical section.

17 CS Problem Dynamics (2) The section of code implementing this request is called the Entry Section (ES). The critical section (CS) might be followed by a Leave/Exit Section (LS). The remaining code is the Remainder Section (RS). The critical-section problem is to design a protocol that the processes can use so that their actions will not depend on the order in which their executions are interleaved (possibly on many processors).

18 Solution to Critical-Section Problem
There are 3 requirements that must hold for a correct solution: Mutual Exclusion, Progress, and Bounded Waiting. We check all three requirements for each proposed solution; violating any one of them is enough to make a solution incorrect.

19 Mutual Exclusion Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section. Implications: critical sections should be short and focused; a process should not enter an infinite loop inside one; and if a process somehow halts or waits in its critical section, it must not interfere with other processes.

20 Progress Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely. If only one process wants to enter, it should be able to. If two or more want to enter, one of them should succeed. A process in its remainder section cannot affect the decision.

21 Bounded Waiting Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. Assume that each process executes at a nonzero speed. No assumption concerning relative speed of the n processes.

22 Initial Attempts to Solve 2-Processes
General structure of process Pi (the other process is Pj):

    do {
        entry section
            critical section
        leave section
            remainder section
    } while (1);

Processes may share some common variables to synchronize their actions.

23 Software Solutions - Algorithm 1
Shared variables:

    int turn;  /* initially turn = 0 */

turn == i means Pi can enter its critical section.

Process Pi:

    do {
        while (turn != i)
            ;  /* do no operation */
        /* critical section */
        turn = j;
        /* remainder section */
    } while (1);

Satisfies mutual exclusion and bounded waiting, but not progress.

24 Software Solutions-Algorithm 2(a)
Shared variables:

    boolean flag[2];  /* initially flag[0] = flag[1] = false */

flag[i] == true means Pi is ready to enter its critical section.

Process Pi:

    do {
        while (flag[j])
            ;  /* do no operation */
        flag[i] = true;
        /* critical section */
        flag[i] = false;
        /* remainder section */
    } while (1);

Satisfies progress, but not the mutual exclusion and bounded waiting requirements.
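Why this ordering breaks mutual exclusion can be replayed deterministically: both processes can pass the while test before either has set its flag. The sketch below hard-codes that interleaving in one thread (names are illustrative; in reality the violation arises from thread scheduling):

```c
#include <assert.h>

int flag[2]  = {0, 0};  /* neither process has declared intent yet */
int in_cs[2] = {0, 0};  /* tracks who believes it is in its CS     */

/* Replays the fatal interleaving of Algorithm 2(a): each process
   checks the other's flag BEFORE setting its own. */
int both_in_cs(void) {
    int p0_passes = !flag[1];  /* P0: while (flag[1]) falls through */
    int p1_passes = !flag[0];  /* P1 checks before P0 sets flag[0]  */
    if (p0_passes) { flag[0] = 1; in_cs[0] = 1; }
    if (p1_passes) { flag[1] = 1; in_cs[1] = 1; }
    return in_cs[0] && in_cs[1];  /* 1 means both entered: ME violated */
}
```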

25 Software Solutions - Algorithm 2(b)
Shared variables:

    boolean flag[2];  /* initially flag[0] = flag[1] = false */

flag[i] == true means Pi wants to enter its critical section.

Process Pi:

    do {
        flag[i] = true;
        while (flag[j])
            ;  /* do no operation */
        /* critical section */
        flag[i] = false;
        /* remainder section */
    } while (1);

Satisfies mutual exclusion and bounded waiting, but not the progress requirement.

26 Software Solutions - Algorithm 3
Combines the shared variables of algorithms 1 and 2.

Process Pi:

    do {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j)
            ;  /* do no operation */
        /* critical section */
        flag[i] = false;
        /* remainder section */
    } while (1);

Meets all three requirements; solves the critical-section problem for two processes.
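A runnable sketch of Algorithm 3 with two POSIX threads. One caveat is an assumption of this adaptation, not part of the slides: the plain shared variables are replaced by C11 seq_cst atomics, since on modern hardware Peterson's algorithm is not correct without such ordering guarantees.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

enum { ITERS = 100000 };

atomic_bool flag[2];      /* intent to enter, initially false */
atomic_int  turn;         /* whose turn it is to defer        */
long counter_shared;      /* protected by the Peterson lock   */

static void *worker(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < ITERS; k++) {
        atomic_store(&flag[i], true);          /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                  /* busy wait     */
        counter_shared++;                      /* critical section */
        atomic_store(&flag[i], false);         /* exit section  */
    }
    return NULL;
}

long run_peterson(void) {
    pthread_t t0, t1;
    counter_shared = 0;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter_shared;
}
```

If mutual exclusion held only sometimes, increments would be lost exactly as in the counter race shown earlier; with the lock, every increment survives.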

27 Discussion
We consider first the case of 2 processes: Algorithms 1, 2(a), and 2(b) are incorrect; Algorithm 3 is correct (Peterson's algorithm). Then we generalize to n processes: the Bakery algorithm.

28 Algorithm for more than two processes
Critical section for n processes: before entering its critical section, a process receives a number; the holder of the smallest number enters the critical section. If processes Pi and Pj receive the same number: if i < j, then Pi is served first; otherwise Pj is served first. The numbering scheme always generates numbers in non-decreasing order of enumeration; i.e., 1,2,3,3,3,3,4,5...

29 Data structures
Shared data:

    boolean choosing[n];
    int number[n];

initialized to false and 0, respectively.

Choosing a number: max(a0, …, an-1) is a number k such that k >= ai for i = 0, …, n - 1.

Notation for lexicographical order: (a,b) < (c,d) if a < c, or if a = c and b < d.
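The max and lexicographic-order notation can be sketched as two small helpers (the names bakery_max and lex_less are illustrative, not from the slides):

```c
#include <assert.h>

/* max(a0, ..., a(n-1)): a number k such that k >= ai for all i */
int bakery_max(const int num[], int n) {
    int k = num[0];
    for (int i = 1; i < n; i++)
        if (num[i] > k)
            k = num[i];
    return k;
}

/* (a,b) < (c,d)  iff  a < c, or a == c and b < d.
   In the bakery algorithm, a and c are ticket numbers and
   b and d are process ids used to break ties. */
int lex_less(int a, int b, int c, int d) {
    return a < c || (a == c && b < d);
}
```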

30 Bakery Algorithm

    do {
        choosing[i] = true;
        number[i] = max(number[0], …, number[n - 1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j])
                ;
            while ((number[j] != 0) &&
                   ((number[j], j) < (number[i], i)))  /* lexicographic order */
                ;
        }
        /* critical section */
        number[i] = 0;
        /* remainder section */
    } while (1);

31 If a process fails? If all 3 criteria (ME, progress, bounded waiting) are satisfied, then a valid solution is robust against failure of a process in its remainder section (RS), since failing in the RS is just like having an infinitely long RS. However, no valid solution can be robust against a process failing in its critical section (CS). A process Pi that fails in its CS does not signal that fact to the other processes: for them, Pi is still in its CS.

32 Drawbacks of software solutions
Software solutions are very delicate. Processes that are requesting to enter their critical section are busy waiting (consuming processor time needlessly). If critical sections are long, it would be more efficient to block processes that are waiting.

33 Hardware solutions: interrupt disabling
On a uniprocessor: mutual exclusion is preserved, but execution efficiency is degraded: while in its CS, a process cannot be interleaved with other processes that are in their RS. On a multiprocessor: mutual exclusion is not preserved: the CS is now atomic but not mutually exclusive. Generally not an acceptable solution.

Process Pi:

    repeat
        disable interrupts
            critical section
        enable interrupts
            remainder section
    forever

34 Hardware solutions: special machine instructions
Normally, access to a memory location excludes other accesses to that same location. As an extension, designers have proposed machine instructions that perform 2 actions atomically (indivisibly) on the same memory location (e.g., reading and writing). The execution of such an instruction is also mutually exclusive (even on multiprocessors). These instructions can be used to provide mutual exclusion, but need to be complemented by other mechanisms to satisfy the other 2 requirements of the CS problem.
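One such instruction is exposed portably by C11 as atomic_flag_test_and_set, which reads and sets a flag in a single atomic step. A minimal spin lock built on it might look like this (a sketch under that assumption, not code from the slides):

```c
#include <pthread.h>
#include <stdatomic.h>

enum { N_ITERS = 100000 };

atomic_flag lock_flag = ATOMIC_FLAG_INIT;  /* the test-and-set word */
long shared_count;                         /* protected by the lock */

static void *incrementer(void *arg) {
    (void)arg;
    for (int k = 0; k < N_ITERS; k++) {
        /* entry section: spin until test-and-set finds the flag clear */
        while (atomic_flag_test_and_set(&lock_flag))
            ;
        shared_count++;                    /* critical section */
        atomic_flag_clear(&lock_flag);     /* exit section     */
    }
    return NULL;
}

long run_tas(void) {
    pthread_t t0, t1;
    shared_count = 0;
    pthread_create(&t0, NULL, incrementer, NULL);
    pthread_create(&t1, NULL, incrementer, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_count;
}
```

As the slide says, this provides mutual exclusion only: nothing here bounds how long one thread may spin while the other repeatedly reacquires the lock.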

35 Example with bakery algorithm
Schedule by the RR algorithm with a time slice of 10 time units. The counts below give the number of statements in each region.

Process P1, arrival time 0:

    5 statements
    do {
        4 statements
        choosing[i] = true;
        num[i] = max(num[0], …, num[n - 1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j]) ;
            while ((num[j] != 0) && (num[j] < num[i])) ;
        }
        3 statements: critical section
        number[i] = 0;
        2 statements: remainder section
    } while (1);

Process P2, arrival time 5:

    2 statements
    do {
        /* bakery entry section, same code as P1 */
        4 statements: critical section
        number[i] = 0;
        3 statements: remainder section
    } while (1);

Process P3, arrival time 7:

    6 statements
    do {
        3 statements
        /* bakery entry section, same code as P1 */
        5 statements: critical section
        number[i] = 0;
        6 statements: remainder section
    } while (1);

36 Problem
Draw the Gantt chart for the following processes up to 90 milliseconds, scheduled by the RR scheduling algorithm with time slice 10. Assume every statement needs 1 millisecond to complete. P0 and P1 arrive at times 0 and 5, respectively.

Process P0:

    7 statements
    do {
        2 statements
        flag[0] = true;
        while (flag[1] == true) ;
        9 statements
        flag[0] = false;
    } while (1);

Process P1:

    6 statements
    do {
        4 statements
        flag[1] = true;
        while (flag[0] == true) ;
        5 statements
        flag[1] = false;
        7 statements
    } while (1);

