
Ch. 7 Process Synchronization (1/2)

7.1 Background
- Producer-Consumer processes: compiler, assembler, loader, ...
- Bounded buffer
  - Assume there is a fixed buffer size.
  - The consumer must wait if the buffer is empty; the producer must wait if the buffer is full.
  - With in pointing to the next free position and out to the first full position:
    empty when in == out; full when (in + 1) % BUFFER_SIZE == out.
- Unbounded buffer: no practical limit on the buffer size (only the consumer ever has to wait).
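
A small C sketch of the bounded (circular) buffer conditions above; BUFFER_SIZE, buf_empty and buf_full are illustrative names added here, not part of the slides.

  #include <stdbool.h>

  #define BUFFER_SIZE 8                 /* assumed fixed buffer size */

  static int buffer[BUFFER_SIZE];
  static int in  = 0;                   /* next free position  */
  static int out = 0;                   /* first full position */

  /* Empty when the two indices coincide. */
  static bool buf_empty(void) { return in == out; }

  /* Full when advancing 'in' would catch up with 'out'; one slot is
     deliberately left unused so the two states can be told apart.   */
  static bool buf_full(void)  { return (in + 1) % BUFFER_SIZE == out; }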

7.1 Background
- Producer

  while (1) {
      /* produce an item in nextProduced */
      while (counter == BUFFER_SIZE)
          ;                             /* do nothing: buffer is full */
      buffer[in] = nextProduced;
      in = (in + 1) % BUFFER_SIZE;
      counter++;
  }

- Consumer

  while (1) {
      while (counter == 0)
          ;                             /* do nothing: buffer is empty */
      nextConsumed = buffer[out];
      out = (out + 1) % BUFFER_SIZE;
      counter--;
      /* consume the item in nextConsumed */
  }

7.1 Background
- Concurrent processing: the basis of a multiprogrammed O.S.
  - Concurrent access to shared data can lead to inconsistency, so (process) synchronization is needed.
- Example: counter++ and counter-- each compile into a load, an arithmetic step, and a store, which may interleave as follows:

  T0: producer executes register1 = counter
  T1: producer executes register1 = register1 + 1
  T2: consumer executes register2 = counter
  T3: consumer executes register2 = register2 - 1
  T4: producer executes counter = register1
  T5: consumer executes counter = register2

  Whichever store (T4 or T5) happens last wins, so counter ends up off by one: the value is inconsistent.
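
A minimal, runnable demonstration of this lost-update race using POSIX threads; the thread bodies and the iteration count are illustrative assumptions added here. Without synchronization the printed value is generally not 0.

  #include <pthread.h>
  #include <stdio.h>

  #define ITERATIONS 1000000

  static long counter = 0;               /* shared data, unprotected */

  static void *producer(void *arg) {
      for (long i = 0; i < ITERATIONS; i++)
          counter++;                     /* load, increment, store: not atomic */
      return NULL;
  }

  static void *consumer(void *arg) {
      for (long i = 0; i < ITERATIONS; i++)
          counter--;                     /* load, decrement, store: not atomic */
      return NULL;
  }

  int main(void) {
      pthread_t p, c;
      pthread_create(&p, NULL, producer, NULL);
      pthread_create(&c, NULL, consumer, NULL);
      pthread_join(p, NULL);
      pthread_join(c, NULL);
      printf("counter = %ld\n", counter); /* expected 0, but updates get lost */
      return 0;
  }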

7.2 Critical Section Problem
- Critical-section problem: design a protocol that the cooperating processes can use so that only one at a time executes in its critical section.
  - Each process is assumed to execute at a nonzero speed; no assumption is made about the relative speeds of the processes.
- A solution must satisfy the following three requirements:
  - Mutual exclusion
    - If process Pi is executing in its critical section, no other process can be executing in its critical section.
  - Progress
    - If no process is executing in its critical section and some processes wish to enter theirs, then only processes that are not executing in their remainder sections can participate in the decision on which will enter next, and this selection cannot be postponed indefinitely.
  - Bounded waiting
    - There must exist a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

7.2 Critical Section Problem
- General structure of the concurrent processes:

  do {
      common variable declarations;
      parbegin
          P0; P1;
      parend
  } while (1);

- General structure of a typical process Pi:

  do {
      entry section
          critical section
      exit section
          remainder section
  } while (1);

7.2 Critical Section Problem
- Two-process solutions: restricted to two processes at a time (Pi and Pj).
- Algorithm 1

  do {
      while (turn != i)
          ;
      critical section
      turn = j;
      remainder section
  } while (1);

7.2 Critical Section Problem
- Algorithm 1
  - The processes share a common integer variable turn, initialized to 0 or 1.
  - If turn == i, process Pi is allowed to execute in its critical section.
  - Ensures that only one process at a time can execute in its critical section.
  - Does not satisfy the progress requirement.
  - Problem: if turn == 0 and P1 is ready to enter its critical section, P1 cannot enter even though P0 may be in its remainder section.

7.2 Critical Section Problem
- The structure of process Pi in Algorithm 1:

  do {
      while (turn != i)
          ;
      critical section
      turn = j;
      remainder section
  } while (1);

7.2 Critical Section Problem
- Algorithm 2
  - Replace the variable turn of Algorithm 1 with the array
      boolean flag[2];
    whose elements are initialized to false.
  - flag[i] == true means Pi is ready to enter its critical section.
  - Pi first sets flag[i] to true, then checks that Pj is not also ready to enter; if Pj is ready, Pi waits until Pj has exited its critical section.
  - The mutual-exclusion requirement is satisfied.
  - The progress requirement is not satisfied.

7.2 Critical Section Problem
- The structure of process Pi in Algorithm 2:

  do {
      flag[i] = true;
      while (flag[j])
          ;
      critical section
      flag[i] = false;
      remainder section
  } while (1);

7.2 Critical Section Problem
- Problem with Algorithm 2: consider the following execution sequence:

  T0: P0 sets flag[0] = true
  T1: P1 sets flag[1] = true

  - P0 and P1 both want to enter the critical section at the same time.
  - P0 and P1 now loop forever in their respective while statements.

7.2 Critical Section Problem
- Algorithm 3 (Peterson's solution)
  - Combines the key ideas of Algorithms 1 and 2; all three requirements are satisfied.
  - The two processes share two variables:
      boolean flag[2];
      int turn;
  - Initially flag[0] = flag[1] = false.
  - If both processes try to enter at the same time, turn is set to both i and j at roughly the same time, but only one of the two assignments lasts.
  - The eventual value of turn decides which of the two processes is allowed to enter its critical section first.

7.2 Critical Section Problem
- The structure of process Pi in Algorithm 3:

  do {
      flag[i] = true;
      turn = j;
      while (flag[j] && turn == j)
          ;
      critical section
      flag[i] = false;
      remainder section
  } while (1);
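
A runnable C sketch of Algorithm 3 for two threads. The use of C11 atomics is an added assumption (plain variables give the compiler and CPU freedom to reorder the accesses), and names such as peterson_lock are illustrative.

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Shared state of Algorithm 3 for threads 0 and 1. */
  static atomic_bool flag[2];
  static atomic_int  turn;

  /* Entry section for thread i; the other thread is j = 1 - i. */
  static void peterson_lock(int i) {
      int j = 1 - i;
      atomic_store(&flag[i], true);    /* I am ready to enter            */
      atomic_store(&turn, j);          /* ... but let the other go first */
      while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
          ;                            /* busy wait */
  }

  /* Exit section for thread i. */
  static void peterson_unlock(int i) {
      atomic_store(&flag[i], false);
  }

Each thread calls peterson_lock(i), runs its critical section, then peterson_unlock(i); the default sequentially consistent atomics keep the stores to flag[i] and turn visible before the loads in the loop condition.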

7.2 Critical Section Problem
- Multiple-process solution (the bakery algorithm)
  - Solves the critical-section problem for n processes.
  - Before entering its critical section, each process (customer) receives a number; the process holding the lowest number is served next.
  - The scheme cannot guarantee that two processes do not receive the same number; if Pi and Pj receive the same number and i < j, then Pi is served first.
  - Common data structures, initialized to false and 0 respectively:
      boolean choosing[n];
      int number[n];
  - Notation:
      (a, b) < (c, d)  if a < c, or if a == c and b < d
      max(a0, ..., an-1) is a number k such that k >= ai for i = 0, ..., n-1

7.2 Critical Section Problem
- The structure of process Pi in the bakery algorithm:

  do {
      choosing[i] = true;
      number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
      choosing[i] = false;
      for (j = 0; j < n; j++) {
          while (choosing[j])
              ;
          while ((number[j] != 0) && ((number[j], j) < (number[i], i)))
              ;
      }
      critical section
      number[i] = 0;
      remainder section
  } while (1);
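
A compilable C sketch of the same algorithm for a fixed number of threads, with C11 atomics standing in for the shared variables; N, bakery_lock, bakery_unlock and lex_less are illustrative names added here, and the explicit max scan replaces the slides' max(...) expression.

  #include <stdatomic.h>
  #include <stdbool.h>

  #define N 4                                   /* assumed number of processes */

  static atomic_bool choosing[N];
  static atomic_int  number[N];

  /* The lexicographic order (a, i) < (b, j) used by the bakery algorithm. */
  static bool lex_less(int a, int i, int b, int j) {
      return a < b || (a == b && i < j);
  }

  static void bakery_lock(int i) {
      choosing[i] = true;
      int max = 0;
      for (int k = 0; k < N; k++)               /* take a ticket one larger */
          if (number[k] > max)                  /* than any ticket seen     */
              max = number[k];
      number[i] = max + 1;
      choosing[i] = false;

      for (int j = 0; j < N; j++) {
          while (choosing[j])                   /* wait while Pj is choosing */
              ;
          while (number[j] != 0 &&              /* wait while Pj holds an    */
                 lex_less(number[j], j, number[i], i))   /* earlier ticket   */
              ;
      }
  }

  static void bakery_unlock(int i) {
      number[i] = 0;
  }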

7.3 Synchronization Hardware
- The critical-section problem could be solved by disabling interrupts while shared data is being modified.
  - Not attractive on a multiprocessor: the "disable interrupts" message must be passed to all processors, and that message passing delays entry into each critical section, so system efficiency decreases.
- Test-and-Set
  - The instruction is executed atomically, i.e. as one uninterruptible unit.
  - If two Test-and-Set instructions are executed simultaneously (on different CPUs), they are executed sequentially in some arbitrary order.
  - Mutual exclusion can be implemented by declaring a Boolean variable lock, initialized to false.

7.3 Synchronization Hardware
- Definition of the TestAndSet instruction:

  boolean TestAndSet(boolean &target) {
      boolean rv = target;
      target = true;
      return rv;
  }

- Mutual-exclusion implementation with TestAndSet:

  do {
      while (TestAndSet(lock))
          ;
      critical section
      lock = false;
      remainder section
  } while (1);
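
A minimal sketch of the same lock in standard C, assuming C11 atomics: atomic_exchange plays the role of the hardware Test-and-Set, atomically storing true and returning the previous value. The names spin_lock and spin_unlock are illustrative.

  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_bool lock_var = false;    /* false: free, true: held */

  static void spin_lock(void) {
      /* atomic_exchange == Test-and-Set: set to true, return the old value. */
      while (atomic_exchange(&lock_var, true))
          ;                               /* busy wait while already held */
  }

  static void spin_unlock(void) {
      atomic_store(&lock_var, false);
  }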

7.3 Synchronization Hardware
- Swap
  - Atomically swaps the contents of two words.
  - Mutual exclusion can be implemented by declaring a global Boolean variable lock (initialized to false); each process also uses a local Boolean variable key.
  - Like the simple TestAndSet loop, this does not satisfy the bounded-waiting requirement.
- Definition of the Swap instruction:

  void Swap(boolean &a, boolean &b) {
      boolean temp = a;
      a = b;
      b = temp;
  }

7.3 Synchronization Hardware
- Mutual-exclusion implementation with Swap:

  do {
      key = true;
      while (key == true)
          Swap(lock, key);
      critical section
      lock = false;
      remainder section
  } while (1);

7.3 Synchronization Hardware
- An algorithm that uses TestAndSet and satisfies all the critical-section requirements (shown on the next slide).
  - Common data structures, both initialized to false:
      boolean waiting[n];
      boolean lock;

7.3 Synchronization Hardware
- Bounded-waiting mutual exclusion with TestAndSet:

  do {
      waiting[i] = true;
      key = true;
      while (waiting[i] && key)
          key = TestAndSet(lock);
      waiting[i] = false;
      critical section
      j = (i + 1) % n;
      while ((j != i) && !waiting[j])
          j = (j + 1) % n;
      if (j == i)
          lock = false;
      else
          waiting[j] = false;
      remainder section
  } while (1);
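
A compilable sketch of the exit-section "hand-off" above, with C11 atomic_exchange in place of the hardware TestAndSet; N, bw_lock and bw_unlock are illustrative names added here.

  #include <stdatomic.h>
  #include <stdbool.h>

  #define N 4                                    /* assumed number of processes */

  static atomic_bool waiting[N];
  static atomic_bool lock_flag;                  /* the shared lock variable */

  static void bw_lock(int i) {
      waiting[i] = true;
      bool key = true;
      while (waiting[i] && key)
          key = atomic_exchange(&lock_flag, true);   /* Test-and-Set */
      waiting[i] = false;
  }

  static void bw_unlock(int i) {
      /* Scan the other processes in cyclic order; hand the lock to the first
         one found waiting, otherwise release it entirely.                    */
      int j = (i + 1) % N;
      while (j != i && !waiting[j])
          j = (j + 1) % N;
      if (j == i)
          lock_flag = false;
      else
          waiting[j] = false;
  }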

7.4 Semaphores
- The solutions of Section 7.3 are not easy to generalize to more complex problems.
- Synchronization tool: the semaphore
  - A semaphore S is an integer variable.
  - It is accessed only through two standard atomic operations: wait (P, to test) and signal (V, to increment).
  - Modifications to S in these operations must be executed indivisibly: the testing of S (S <= 0) and its modification (S = S - 1) must be executed without interruption.

  wait(S):
      while (S <= 0)
          ;            /* no-op */
      S = S - 1;

  signal(S):
      S = S + 1;

7.4 Semaphores
- Usage
  - Semaphores handle the n-process critical-section problem: the n processes share a semaphore mutex, initialized to 1.
  - Semaphores can also enforce ordering. Example: two concurrently running processes, P1 with a statement S1 and P2 with a statement S2, where S2 may be executed only after S1 has completed.
    - P1 and P2 share a semaphore synch, initialized to 0:

      P1:  S1;
           signal(synch);

      P2:  wait(synch);
           S2;
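
A minimal, runnable sketch of this ordering idiom with POSIX semaphores; the thread bodies and the printed lines are illustrative stand-ins for S1 and S2.

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  static sem_t synch;                    /* initialized to 0 in main() */

  static void *p1(void *arg) {
      printf("S1\n");                    /* statement S1 */
      sem_post(&synch);                  /* signal(synch) */
      return NULL;
  }

  static void *p2(void *arg) {
      sem_wait(&synch);                  /* wait(synch): blocks until S1 is done */
      printf("S2\n");                    /* statement S2 */
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      sem_init(&synch, 0, 0);            /* thread-shared, initial value 0 */
      pthread_create(&t2, NULL, p2, NULL);
      pthread_create(&t1, NULL, p1, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      sem_destroy(&synch);
      return 0;
  }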

7.4 Semaphores
- Implementation
  - The main disadvantage of this semaphore definition is busy waiting: while one process is in its critical section, any other process trying to enter must loop continuously in the entry code.
  - Busy waiting wastes CPU cycles; this type of semaphore is also called a spinlock.
  - The advantage of a spinlock: no context switch is required when a process must wait on a lock.
- Mutual-exclusion implementation with a semaphore mutex (initialized to 1):

  do {
      wait(mutex);
      critical section
      signal(mutex);
      remainder section
  } while (1);

7.4 Semaphores
- Implementation without busy waiting: modify the wait and signal operations.
  - wait operation
    - Rather than busy waiting, the process can block itself: it is placed into a waiting queue associated with the semaphore and its state is switched to waiting.
    - Control is transferred to the CPU scheduler, which selects another process to execute.
  - signal operation
    - A blocked process is restarted by a wakeup operation when some other process executes a signal operation.
    - wakeup changes the process from the waiting state to the ready state and places it in the ready queue.

7.4 Semaphores
- Semaphore data structure and operations (pseudocode):

  typedef struct {
      int value;
      struct process *L;        /* list of waiting processes */
  } semaphore;

  void wait(semaphore S) {
      S.value--;
      if (S.value < 0) {
          add this process to S.L;
          block();
      }
  }

  void signal(semaphore S) {
      S.value++;
      if (S.value <= 0) {
          remove a process P from S.L;
          wakeup(P);
      }
  }
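
A self-contained user-space sketch of this blocking behaviour, using a pthread mutex and condition variable in place of the kernel's waiting queue, block() and wakeup(); the type ksem_t and its functions are illustrative names added here. Unlike the pseudocode above, value never goes negative: the condition variable tracks the waiters instead.

  #include <pthread.h>

  typedef struct {
      int             value;
      pthread_mutex_t m;
      pthread_cond_t  cv;        /* stands in for the waiting queue S.L */
  } ksem_t;

  static void ksem_init(ksem_t *s, int value) {
      s->value = value;
      pthread_mutex_init(&s->m, NULL);
      pthread_cond_init(&s->cv, NULL);
  }

  static void ksem_wait(ksem_t *s) {
      pthread_mutex_lock(&s->m);
      while (s->value <= 0)                /* block instead of spinning */
          pthread_cond_wait(&s->cv, &s->m);
      s->value--;
      pthread_mutex_unlock(&s->m);
  }

  static void ksem_signal(ksem_t *s) {
      pthread_mutex_lock(&s->m);
      s->value++;
      pthread_cond_signal(&s->cv);         /* wakeup one waiter */
      pthread_mutex_unlock(&s->m);
  }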

7.4 Semaphores
- Deadlocks and Starvation
  - Deadlock: two or more processes wait indefinitely for an event that can be caused only by one of the waiting processes.
  - Example: two processes P0 and P1, two semaphores S and Q, each with value 1.

      P0:  wait(S);          P1:  wait(Q);
           wait(Q);               wait(S);
             ...                    ...
           signal(S);             signal(Q);
           signal(Q);             signal(S);

    - When P0 executes wait(Q) it must wait until P1 executes signal(Q); when P1 executes wait(S) it must wait until P0 executes signal(S). Neither signal can ever be executed, so P0 and P1 are deadlocked.
  - Starvation (indefinite blocking): a process may wait indefinitely within the semaphore, e.g. if processes are removed from the waiting list in LIFO order.
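
A small POSIX-threads sketch of this deadlock-prone acquisition order; the sleep() calls are only there to make the bad interleaving likely, and all names are illustrative. Once both threads hold their first semaphore, neither join ever returns.

  #include <pthread.h>
  #include <semaphore.h>
  #include <unistd.h>

  static sem_t S, Q;                     /* both initialized to 1 in main() */

  static void *p0(void *arg) {
      sem_wait(&S);
      sleep(1);                          /* widen the window for the race */
      sem_wait(&Q);                      /* blocks forever if P1 holds Q  */
      sem_post(&S);
      sem_post(&Q);
      return NULL;
  }

  static void *p1(void *arg) {
      sem_wait(&Q);
      sleep(1);
      sem_wait(&S);                      /* blocks forever if P0 holds S  */
      sem_post(&Q);
      sem_post(&S);
      return NULL;
  }

  int main(void) {
      pthread_t t0, t1;
      sem_init(&S, 0, 1);
      sem_init(&Q, 0, 1);
      pthread_create(&t0, NULL, p0, NULL);
      pthread_create(&t1, NULL, p1, NULL);
      pthread_join(t0, NULL);            /* with the sleeps, this normally hangs */
      pthread_join(t1, NULL);
      return 0;
  }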

7.4 Semaphores
- Binary Semaphores
  - Counting semaphore: the semaphore described in the previous slides; its value can range over an unrestricted domain.
  - Binary semaphore: a semaphore whose integer value can range only between 0 and 1.
  - A counting semaphore S can be implemented using binary semaphores, with the data structures:

      binary-semaphore S1, S2;
      int C;

7.4 Semaphores
- Binary Semaphores
  - Initially S1 = 1 and S2 = 0; C is set to the initial value of the counting semaphore S.
  - The wait and signal operations on the counting semaphore S can then be implemented as follows:

  wait operation:
      wait(S1);
      C--;
      if (C < 0) {
          signal(S1);
          wait(S2);
      }
      signal(S1);

  signal operation:
      wait(S1);
      C++;
      if (C <= 0)
          signal(S2);
      else
          signal(S1);
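
A compilable sketch of this construction, using two POSIX semaphores restricted to the values 0 and 1 as the binary semaphores S1 and S2; the type csem_t and its functions are illustrative names added here.

  #include <semaphore.h>

  typedef struct {
      sem_t S1;          /* binary: protects C, initialized to 1          */
      sem_t S2;          /* binary: waiters block here, initialized to 0  */
      int   C;           /* current value of the counting semaphore       */
  } csem_t;

  static void csem_init(csem_t *s, int initial) {
      sem_init(&s->S1, 0, 1);
      sem_init(&s->S2, 0, 0);
      s->C = initial;
  }

  static void csem_wait(csem_t *s) {
      sem_wait(&s->S1);
      s->C--;
      if (s->C < 0) {
          sem_post(&s->S1);          /* release S1 before blocking          */
          sem_wait(&s->S2);          /* wait to be handed the semaphore     */
      }
      sem_post(&s->S1);              /* when woken via S2, this releases S1
                                        on behalf of the signaler           */
  }

  static void csem_signal(csem_t *s) {
      sem_wait(&s->S1);
      s->C++;
      if (s->C <= 0)
          sem_post(&s->S2);          /* wake one waiter; it will release S1 */
      else
          sem_post(&s->S1);
  }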