
1 School of Computing Science Simon Fraser University CMPT 300: Operating Systems I Ch 6: Process Synchronization Dr. Mohamed Hefeeda

2 Objectives  Understand the Critical-Section Problem and its hardware and software solutions

3 Consumer-Producer Problem  Classic example of process coordination  Two processes sharing a buffer  One places items into the buffer (producer)  Must wait if the buffer is full  The other takes items from the buffer (consumer)  Must wait if the buffer is empty  Solution: keep a counter of the number of items in the buffer

4 Producer Process

while (true) {
    /* produce an item in nextProduced */
    while (count == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

5 Consumer Process

while (true) {
    while (count == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume item in nextConsumed */
}

What can go wrong with this solution?

6 Race Condition  count++ could be implemented as  register1 = count  register1 = register1 + 1  count = register1  count-- could be implemented as  register2 = count  register2 = register2 - 1  count = register2  Consider this execution interleaving with “count = 5” initially:

S0: producer executes register1 = count          {register1 = 5}
S1: producer executes register1 = register1 + 1  {register1 = 6}
S2: consumer executes register2 = count          {register2 = 5}
S3: consumer executes register2 = register2 - 1  {register2 = 4}
S4: producer executes count = register1          {count = 6}
S5: consumer executes count = register2          {count = 4}
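This lost update is easy to reproduce on a real machine. Below is a minimal sketch (not from the slides) that races two pthreads on an unprotected counter; the file name, iteration count, and names are arbitrary. Compiled with gcc race.c -pthread, the printed total is almost always less than the expected value.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

long count = 0;                 /* shared data, no protection */

void *increment(void *arg) {
    for (long i = 0; i < ITERATIONS; i++)
        count++;                /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected %d, got %ld\n", 2 * ITERATIONS, count);
    return 0;
}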

7 Race Condition  Occurs when multiple processes manipulate shared data concurrently and the result depends on the particular order of manipulation  Data inconsistency may arise  Solution idea  Mark the code segment that manipulates shared data as a critical section  If a process is executing its critical section, no other process can execute its critical section  More formally, any method that solves the Critical-Section Problem must satisfy three requirements …

8 Critical-Section (CS) Problem

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted

Assumptions  Each process executes at a nonzero speed  No restriction on the relative speed of the N processes

9 Solutions for the CS Problem  Disable interrupts while running the CS  Currently running code would execute without preemption  Possible only on uniprocessor systems. Why?  Every processor has its own interrupts; disabling interrupts on all processors is inefficient  Any problems with this solution even on uniprocessor systems?  Users could make the CS arbitrarily long ==> unresponsive system  Solutions using software only  Solutions using hardware support

10 Peterson’s Solution  Software solution; no hardware support  Two-process solution  Assumes LOAD and STORE instructions are atomic (i.e., cannot be interrupted), which may not always be true on modern computers  The two processes share two variables:  int turn;  boolean flag[2];  turn indicates whose turn it is to enter the critical section  The flag array indicates whether a process is ready to enter its critical section  flag[i] = true ==> process Pi is ready

11 Algorithm for Process Pi

while (true) {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;
    // CRITICAL SECTION
    flag[i] = FALSE;
    // REMAINDER SECTION
}

Does this algorithm satisfy the three requirements? Yes. Show this as an exercise.
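For reference, here is a minimal runnable sketch of Peterson's algorithm for two threads. It is not from the slides: it uses C11 atomics with the default sequentially consistent ordering because, as noted above, plain loads and stores are not guaranteed to stay atomic and in order on modern hardware. The lock/unlock/worker names and the iteration count are illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];            /* flag[i]: Pi is ready to enter its CS */
atomic_int  turn;               /* whose turn it is */
long shared_count = 0;          /* the data protected by the lock */

void lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                       /* busy wait */
}

void unlock(int i) {
    atomic_store(&flag[i], false);
}

void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        shared_count++;         /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("count = %ld (expected 200000)\n", shared_count);
    return 0;
}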

12 Synchronization Hardware  Many systems provide hardware support for critical-section code  More efficient and easier for programmers  Modern machines provide special atomic (non-interruptible) hardware instructions  Either test a memory word and set its value  Or swap the contents of two memory words

13 TestAndSet Instruction

Definition:

boolean TestAndSet (boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

14 Solution using TestAndSet

Shared boolean variable lock, initialized to FALSE

while (true) {
    while ( TestAndSet(&lock) )
        ;   // do nothing
    // critical section
    lock = FALSE;
    // remainder section
}

Does this algorithm satisfy the three requirements? No. A process can wait indefinitely behind another, faster process that keeps re-entering its CS, so bounded waiting is violated. Check Fig 6.8 for a modified version.
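On real hardware this primitive is exposed through intrinsics; for example, C11's atomic_flag_test_and_set atomically sets a flag and returns its previous value, which matches the definition on slide 13. A minimal sketch of the lock above built on it follows; the acquire/release names are mine. Like the slide's version, it busy-waits and does not provide bounded waiting.

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* shared lock, initially clear */

void acquire(void) {
    /* atomically set the flag and get its old value; spin while it was set */
    while (atomic_flag_test_and_set(&lock_flag))
        ;
}

void release(void) {
    atomic_flag_clear(&lock_flag);
}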

15 Swap Instruction

Definition:

void Swap (boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

16 Solution using Swap

Shared boolean variable lock, initialized to FALSE
Each process has a local boolean variable key

while (true) {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
}

17 Semaphores  Much easier to use than hardware-based solutions  Semaphore S – an integer variable  Two standard operations to modify S:  wait()  signal()  These two operations are indivisible (atomic)

18 Semaphore Operations

wait (S) {
    while (S <= 0)
        ;   // no-op; called busy waiting (spinlock)
    S--;
}

signal (S) {
    S++;
}

Later, we will see how to implement these operations with no busy waiting
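In real code, wait() must decrement S atomically, not with the plain test-then-decrement shown in the pseudocode. A minimal sketch of a busy-waiting semaphore using a C11 compare-and-swap loop follows; the spin_sem_t type and function names are illustrative, not a standard API.

#include <stdatomic.h>

typedef struct { atomic_int value; } spin_sem_t;

void spin_sem_init(spin_sem_t *s, int initial) {
    atomic_init(&s->value, initial);
}

void spin_sem_wait(spin_sem_t *s) {           /* wait(S) */
    for (;;) {
        int v = atomic_load(&s->value);
        /* retry while S <= 0, or if another thread changed S under us */
        if (v > 0 && atomic_compare_exchange_weak(&s->value, &v, v - 1))
            return;
    }
}

void spin_sem_signal(spin_sem_t *s) {         /* signal(S) */
    atomic_fetch_add(&s->value, 1);
}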

19 Semaphore Types and Usage  Two types  Counting semaphore: value can be any integer  Binary semaphore: value can be 0 or 1 (known as a mutex lock)  Usage examples:  Mutual exclusion

Semaphore mutex;   // initialized to 1

wait (mutex);
    // critical section
signal (mutex);

20 Semaphore Types and Usage (cont’d)  Process synchronization: S2 in P2 should be executed after S1 in P1

P1:
    S1;
    signal (sem);

P2:
    wait (sem);
    S2;

Control access to a resource with a finite number of instances  e.g., producer-consumer problem with a finite buffer
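A minimal sketch of this ordering with a POSIX semaphore initialized to 0 (the usual choice for this pattern; the slide does not state the initial value). Here P1 and P2 are threads rather than processes, and the printf calls stand in for S1 and S2.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t sem;                          /* initialized to 0 in main() */

void *p1(void *arg) {
    printf("S1\n");                 /* statement S1 */
    sem_post(&sem);                 /* signal (sem) */
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&sem);                 /* wait (sem): blocks until S1 is done */
    printf("S2\n");                 /* statement S2 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 0);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&sem);
    return 0;
}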

21 Semaphore Implementation with no Busy Waiting  Each semaphore has:  a value (of type integer)  a waiting queue  Two internal operations:  block – place the process invoking the operation in the waiting queue  wakeup – remove one of the processes from the waiting queue and place it in the ready queue

22 Semaphore Implementation with no Busy Waiting

wait (S) {
    value--;
    if (value < 0) {
        // add this process to the waiting queue
        block();
    }
}

signal (S) {
    value++;
    if (value <= 0) {   /* some processes are waiting */
        // remove a process P from the waiting queue
        wakeup(P);
    }
}

23 Semaphore Implementation  Must guarantee that no two processes can execute wait and signal on the same semaphore at the same time  Thus, the implementation itself becomes a critical-section problem: the wait and signal code are placed in a critical section protected by  Disabling interrupts (uniprocessor systems only)  Busy waiting or spinlocks (multiprocessor systems)  So why don't we do the same in applications?  Applications may spend a long (and unknown) amount of time in critical sections, unlike the kernel, which spends a short, known amount of time in its critical section (~ ten instructions)

24 Deadlock and Starvation  Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes  Let S and Q be two semaphores initialized to 1

P0:                  P1:
    wait (S);            wait (Q);
    wait (Q);            wait (S);
    ...                  ...
    signal (S);          signal (Q);
    signal (Q);          signal (S);

Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended
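A minimal sketch of this scenario with POSIX semaphores and two threads; whether it actually deadlocks on a given run depends on scheduling. Acquiring S and Q in the same order in both threads removes the circular wait.

#include <pthread.h>
#include <semaphore.h>

sem_t S, Q;                       /* both initialized to 1 */

void *p0(void *arg) {
    sem_wait(&S);
    sem_wait(&Q);                 /* may block forever if P1 already holds Q */
    /* ... */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

void *p1(void *arg) {
    sem_wait(&Q);
    sem_wait(&S);                 /* may block forever if P0 already holds S */
    /* ... */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);       /* never returns if the two threads deadlock */
    pthread_join(t1, NULL);
    return 0;
}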

25 Be Careful When Using Semaphores  Some common programming problems …  signal (mutex) … wait (mutex)  Multiple processes can access the CS at the same time  wait (mutex) … wait (mutex)  Processes may block forever  Forgetting wait (mutex) or signal (mutex)

26 Classical Problems of Synchronization  Bounded-Buffer (Producer-Consumer) Problem  Dining-Philosophers Problem  Readers-Writers Problem  These problems are  abstractions that can be used to model many other resource-sharing problems  used to test newly proposed synchronization schemes

27 Bounded-Buffer Problem  N buffers, each holding one item  Semaphore mutex initialized to the value 1  Semaphore full initialized to the value 0  Semaphore empty initialized to the value N

28 Bounded-Buffer Problem (cont’d)

Structure of the producer process:

while (true) {
    // produce an item
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
}

29 Bounded-Buffer Problem (cont’d)

Structure of the consumer process:

while (true) {
    wait (full);
    wait (mutex);
    // remove an item from the buffer
    signal (mutex);
    signal (empty);
    // consume the removed item
}
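Putting the two structures together, here is a minimal runnable sketch with POSIX semaphores and pthreads. The buffer size, item type, and item counts are assumptions for illustration; the roles of mutex, empty, and full follow the slides.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 10

int buffer[N];
int in = 0, out = 0;
sem_t mutex, empty, full;         /* mutex = 1, empty = N, full = 0 */

void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);         /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);          /* one more item available */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);          /* wait for an available item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);         /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}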

30 Dining-Philosophers Problem  Philosophers alternate between eating and thinking  To eat, a philosopher needs two chopsticks (at her left and right)  Models multiple processes sharing multiple resources  Write a program for each philosopher such that no starvation/deadlock occurs  Solution approach:  Bowl of rice (data set)  Array of semaphores: chopstick[5], each initialized to 1

31 Dining-Philosophers Problem: Philosopher i

while (true) {
    wait ( chopstick[i] );
    wait ( chopstick[(i + 1) % 5] );
    // eat
    signal ( chopstick[i] );
    signal ( chopstick[(i + 1) % 5] );
    // think
}

What can go wrong with this solution?  All philosophers pick up their left chopsticks at the same time (deadlock)  Solutions?  Pick up chopsticks only if both are available  Asymmetric: an odd philosopher picks up the left chopstick first, an even one the right chopstick first (see the sketch below)
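A minimal sketch of the asymmetric variant with POSIX semaphores; the thread setup, round count, and names are illustrative assumptions, and eating/thinking are left as comments.

#include <pthread.h>
#include <semaphore.h>

#define NPHIL 5

sem_t chopstick[NPHIL];                 /* each initialized to 1 */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left   = i;
    int right  = (i + 1) % NPHIL;
    /* odd philosophers pick up the left chopstick first, even ones the
       right, so a circular wait cannot form */
    int first  = (i % 2 == 1) ? left  : right;
    int second = (i % 2 == 1) ? right : left;

    for (int round = 0; round < 100; round++) {
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        /* eat */
        sem_post(&chopstick[first]);
        sem_post(&chopstick[second]);
        /* think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[NPHIL];
    int id[NPHIL];
    for (int i = 0; i < NPHIL; i++)
        sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < NPHIL; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < NPHIL; i++)
        pthread_join(t[i], NULL);
    return 0;
}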

32 Readers-Writers Problem  A data set is shared among a number of concurrent processes  Readers – only read the data set; they do not perform any updates  Writers – can read and write  Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data at any time  Shared data  Data set  Semaphore mutex initialized to 1  Semaphore wrt initialized to 1  Integer readcount initialized to 0

33 Readers-Writers Problem (cont’d)

Structure of a writer process:

while (true) {
    wait (wrt);
    // writing is performed
    signal (wrt);
}

34 Readers-Writers Problem (cont’d)

Idea of the reader processes:  The first reader needs to check that there is no writer in the CS  Other readers access the CS right away, but they need to update the current number of readers in the CS (readcount)

while (true) {
    wait (mutex);           // mutex protects readcount
    readcount++;
    if (readcount == 1)
        wait (wrt);
    signal (mutex);
    // reading is performed
    wait (mutex);
    readcount--;
    if (readcount == 0)
        signal (wrt);
    signal (mutex);
}

35 Readers-Writers Problem (cont’d)  Some systems implement reader-writer locks  E.g., Solaris, Linux, Pthreads API  A process can acquire a reader-writer lock either in read mode or in write mode  When would you use reader-writer locks?  Applications where it is easy to identify read-only and write-only processes  Applications with more readers than writers  Tradeoff: cost vs. concurrency  Reader-writer locks require more overhead to establish than semaphores, but they provide higher concurrency by allowing multiple readers in the CS
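For concreteness, a minimal sketch of the Pthreads reader-writer lock API mentioned above; the shared counter and the reader/writer functions are illustrative.

#include <pthread.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
int shared_data = 0;

int reader(void) {
    pthread_rwlock_rdlock(&rwlock);   /* many readers may hold this at once */
    int value = shared_data;
    pthread_rwlock_unlock(&rwlock);
    return value;
}

void writer(int value) {
    pthread_rwlock_wrlock(&rwlock);   /* exclusive: no readers, no other writer */
    shared_data = value;
    pthread_rwlock_unlock(&rwlock);
}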

36 Condition Variables  Condition x, y;  Two operations on a condition variable:  x.wait() – the process that invokes the operation is suspended  x.signal() – resumes one of the processes (if any) that invoked x.wait()  What is the difference between condition variables and semaphores?  Condition variable: if no process is suspended, signal has no effect  Semaphore: signal always increments the semaphore's value  Condition variables are usually used to suspend/awaken processes
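Pthreads (see slide 46) provides condition variables with the same "a signal with no waiter is lost" behavior, which is why they are always paired with a mutex and a shared flag that records the event. A minimal sketch, with illustrative names:

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
bool ready = false;               /* the actual condition being waited on */

void wait_for_ready(void) {
    pthread_mutex_lock(&m);
    while (!ready)                /* re-check: wakeups may be spurious */
        pthread_cond_wait(&cv, &m);
    pthread_mutex_unlock(&m);
}

void make_ready(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&cv);     /* no effect if no thread is waiting,
                                     but the flag preserves the event */
    pthread_mutex_unlock(&m);
}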

37 Monitors  Monitor: a high-level abstraction that provides a convenient and effective mechanism for process synchronization  The compiler (not the programmer) takes care of mutual exclusion  Only one process may be active within the monitor at a time

monitor monitor_name {
    // shared variable declarations

    procedure P1 (…) { … }
    …
    procedure Pn (…) { … }

    initialization code (…) { … }
}

38 Schematic View of Monitors

39 Solution to Dining Philosophers

monitor DP {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);                // check both chopsticks;
                                // set state[i] = EATING if both are available
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);      // right
        test((i + 1) % 5);      // left
    }

40 Solution to Dining Philosophers (cont’d)

    void test (int i) {
        if ( (state[i] == HUNGRY) &&
             (state[(i + 4) % 5] != EATING) &&
             (state[(i + 1) % 5] != EATING) ) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}   // end monitor

41 Solution to Dining Philosophers (cont’d)  Each philosopher invokes the operations pickup() and putdown() in the following sequence:

dp.pickup(i);
// eat
dp.putdown(i);

42 Monitors Implementation  It is up to the compiler to ensure mutual exclusion in monitors  Semaphores are usually used  Languages like Java, C# (not C), and Concurrent Pascal provide monitor-like mechanisms  Java:

public class SimpleClass {
    ....
    public synchronized void insert(…) { … }
    public synchronized void remove(…) { … }
    ....
}

Java guarantees that once a thread starts executing a synchronized method, no other thread can execute any other synchronized method on the same object  Java 1.5+ has semaphores, condition variables, mutex locks, …  in the java.util.concurrent package  Exercise: write Java programs for the producer-consumer and dining-philosophers problems

43 Synchronization Examples  Windows XP  Linux  Pthreads

44 Windows XP Synchronization  Masks interrupts to protect access to global resources on uniprocessor systems (inside the kernel)  Uses spinlocks on multiprocessor systems  Provides dispatcher objects for thread synchronization outside the kernel, which can act as mutexes, semaphores, or events (condition variables)

45 Linux Synchronization  Linux:  disables interrupts to implement short critical sections (on single-processor systems)  Linux provides:  semaphores  spinlocks (on SMP)  reader-writer locks

46 Pthreads Synchronization  The Pthreads API is OS-independent  It provides:  mutex locks  condition variables  Extensions include:  semaphores  read-write locks  spin locks  (these may not be portable)

#include <pthread.h>

pthread_mutex_t mutex;
pthread_mutex_init(&mutex, NULL);
pthread_mutex_lock(&mutex);
pthread_mutex_unlock(&mutex);

#include <semaphore.h>

sem_t sem;
sem_init(&sem, 0, 5);
sem_wait(&sem);
sem_post(&sem);

47 Summary  Process Synchronization  Techniques to coordinate access to shared data  Race condition  Multiple processes manipulating shared data, with the result depending on the execution order  Critical-section problem  Three requirements: mutual exclusion, progress, bounded waiting  Software solution: Peterson's algorithm  Hardware support: TestAndSet(), Swap()  Busy waiting (or spinlocks)  Semaphores: wait(), signal() must be atomic ==> moves the CS problem to the kernel  Monitors: high-level constructs (compiler-enforced mutual exclusion)  Some classical synchronization problems

