Process Synchronization Ch. 4.4 – Cooperating Processes Ch. 7 – Concurrency

Cooperating Processes
- An independent process cannot affect or be affected by the execution of another process.
- A cooperating process can affect or be affected by the execution of another process.

Advantages of process cooperation
- Information sharing: allow concurrent access to data sources
- Computation speed-up: sub-tasks can be executed in parallel
- Modularity: system functions can be divided into separate processes or threads
- Convenience

Context Switches can Happen at Any Time
- A process switch (full context switch) can happen at any time there is a mode switch into the kernel.
- This could be because of a:
  - system call (semi-predictable)
  - timer interrupt (round robin, etc.)
  - I/O interrupt (unblocking some other process)
  - other interrupt, etc.
- The programmer generally cannot predict at what point in a program this might happen.

Preemption is Unpredictable
- This means that a program's work can be interrupted at any time (i.e., just after the completion of any instruction):
  - Some other program gets to run for a while.
  - The interrupted program eventually gets restarted exactly where it left off, but only after the other program (process) has executed instructions that we have no control over.
- This can lead to trouble if the processes are not independent.

Problems with Concurrent Execution
- Concurrent processes (or threads) often need to share data (maintained either in shared memory or in files) and resources.
- If there is no controlled access to the shared data, the processes' operations on that data can interleave arbitrarily.
- The results then depend on the order in which the data were modified, i.e. the results are non-deterministic.

Shared Memory
[Figure: process A and process B both map a common shared-memory region, managed by the kernel]
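As a concrete illustration of the picture above (an assumption for illustration, not part of the slides), one way two related processes can share memory on a POSIX system is an anonymous shared mapping created before fork(); both processes then see the same bytes:

/* Minimal sketch (assumption): a parent and child process share one
 * integer through an anonymous shared mapping. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Create a shared, anonymous mapping visible to both processes after fork(). */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); exit(1); }

    *shared = 0;
    pid_t pid = fork();
    if (pid == 0) {                 /* child: "process B" */
        *shared = 42;               /* write to the shared region */
        _exit(0);
    }
    waitpid(pid, NULL, 0);          /* parent: "process A" */
    printf("value written by child: %d\n", *shared);   /* prints 42 */
    munmap(shared, sizeof(int));
    return 0;
}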

An Example: Bank Account
- A joint account. Each account holder accesses the money at the same time: one deposits, the other withdraws.
- The bank's computer executes the routine below simultaneously as two processes running the same transaction-processing program:

void update(acct, amount) {
    temp = getbalance(acct);
    temp += amount;
    putbalance(acct, temp);
}

Banking Example
void update(acct, amount) {
    temp = getbalance(acct);
    temp += amount;
    putbalance(acct, temp);
}

Initial balance = $60, A's deposit = $100, B's withdrawal = $50, correct net balance = $110.

A's process:
    temp = getbalance(acct)    /* temp = 60  */
    temp += 100                /* temp = 160 */
Process Switch!
B's process:
    temp = getbalance(acct)    /* temp = 60  */
    temp += (-50)              /* temp = 10  */
    putbalance(acct, 10)
Process switch back to A:
    putbalance(acct, 160)

What is the final bank balance? (Here it is $160: B's withdrawal is lost, instead of the correct $110.)
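To make the race concrete, here is a runnable sketch (not from the slides) that uses two threads in place of the two processes; getbalance/putbalance are simulated with a shared global, the short usleep() only widens the race window, and all names are illustrative.

/* Illustrative sketch: two threads race on one account balance. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int balance = 60;                 /* shared account balance */

static int  getbalance(void)      { return balance; }
static void putbalance(int value) { balance = value; }

static void *update(void *arg) {
    int amount = *(int *)arg;
    int temp = getbalance();             /* read   */
    usleep(1000);                        /* widen the race window */
    temp += amount;                      /* modify */
    putbalance(temp);                    /* write  */
    return NULL;
}

int main(void) {
    int deposit = 100, withdrawal = -50;
    pthread_t a, b;
    pthread_create(&a, NULL, update, &deposit);
    pthread_create(&b, NULL, update, &withdrawal);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* The correct result is 110; with the race we typically see 160 or 10. */
    printf("final balance = %d\n", balance);
    return 0;
}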

Race Conditions
- A situation such as this, where processes "race" against each other, causing possible errors, is called a race condition.
- Two or more processes are reading/writing shared data, and the final result depends on the order in which the processes ran.
- Race conditions can happen at the application level and at the OS level.

Printer queue example (OS level)
- The printer queue is often implemented as a circular queue:
  - out = position of the next item to be printed
  - in  = position of the next empty slot
- lp or lpr adds a file to the print queue.
- What happens if two processes request queuing of a print job at the same time? Each must access the variable "in".

Dueling queueing
Timeline (interleaved):
1. Process A: read in = 7
2. Process B: read in = 7
3. Process B: insert job at position 7
4. Process B: in++ (in = 8)
5. Process B: exit lp
6. Process A: insert job at position 7
7. Process A: in++ (in = 8)
8. Process A: exit lp
What happened to B's print job? (It was overwritten by A's job.)
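A sketch of what each process's enqueue step might look like (queue size, names, and the job type are illustrative assumptions). The read of in, the store into the slot, and the update of in together form the code that must not be interleaved with another enqueue:

/* Sketch only: shared circular print queue and the enqueue step. */
#define QUEUE_SIZE 100

struct job { char filename[256]; };

struct job queue[QUEUE_SIZE];   /* shared circular print queue */
int in  = 0;                    /* next empty slot   (shared)  */
int out = 0;                    /* next job to print (shared)  */

/* Both lp and lpr would run something like this to enqueue a job. */
void enqueue(struct job j) {
    int slot = in;                    /* 1. read the shared index */
    queue[slot] = j;                  /* 2. store the job         */
    in = (slot + 1) % QUEUE_SIZE;     /* 3. publish the new index */
}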

Producer-Consumer Problem
- A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
  - unbounded buffer: places no practical limit on the size of the buffer
  - bounded buffer: assumes that there is a fixed buffer size
- The print queue is an example: processes put jobs in the queue, and the printer daemon takes jobs out. (Daemon = a process that runs continually and handles service requests.)
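The shared state assumed by the next two slides can be sketched as a fixed-size circular buffer; the names (buffer, in, out, counter) and the item type are illustrative and match the bounded-buffer description above:

/* Sketch of the shared bounded buffer used by the following slides. */
#define N 10                 /* buffer size ("Buff size = n")   */

int buffer[N];               /* the bounded buffer itself        */
int in      = 0;             /* next slot the producer fills     */
int out     = 0;             /* next slot the consumer empties   */
int counter = 0;             /* number of items currently held   */

void insert_item(int item) { buffer[in] = item; in = (in + 1) % N; }
int  remove_item(void)     { int item = buffer[out]; out = (out + 1) % N; return item; }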

Basic Producer-Consumer
Shared data (bounded buffer): buffer size = n, counter = 0

Producer:
repeat
    produce item
    /* if buffer full, do nothing */
    while (counter == n);
    insert item
    counter++;
forever

Consumer:
repeat
    /* if buffer empty, do nothing */
    while (counter == 0);
    remove item
    counter--;
    consume item
forever

Problems with the Basic Algorithm
- More than one process can access the shared "counter" variable.
- The race condition can result in an incorrect value for "counter".
- Inefficient: busy-waiting while checking the value of counter.
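The reason the shared counter breaks is that counter++ and counter-- are not atomic: each typically compiles to a separate load, arithmetic operation, and store, and those machine-level steps can interleave. A sketch, with register1/register2 standing in for CPU registers (names are illustrative):

/* What "counter++" and "counter--" become at the machine level. */
int counter = 5;            /* shared; 5 just for the example below */
int register1, register2;   /* stand-ins for per-CPU registers      */

void producer_increment(void) {
    register1 = counter;        /* load  */
    register1 = register1 + 1;  /* add   */
    counter   = register1;      /* store */
}

void consumer_decrement(void) {
    register2 = counter;        /* load  */
    register2 = register2 - 1;  /* sub   */
    counter   = register2;      /* store */
}

/* One bad interleaving, starting from counter == 5:
 *   producer: register1 = counter         (register1 == 5)
 *   producer: register1 = register1 + 1   (register1 == 6)
 *   consumer: register2 = counter         (register2 == 5)
 *   consumer: register2 = register2 - 1   (register2 == 4)
 *   producer: counter = register1         (counter == 6)
 *   consumer: counter = register2         (counter == 4, should be 5)
 */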

Producer-Consumer with Sleep
Shared data (bounded buffer): buffer size = n, counter = 0

Producer:
repeat
    produce item
    /* if buffer full, go to sleep */
    if (counter == n) sleep();
    insert item
    counter++;
    if (counter == 1) wakeup(consumer);
forever

Consumer:
repeat
    /* if buffer empty, go to sleep */
    if (counter == 0) sleep();
    remove item
    counter--;
    if (counter == (n-1)) wakeup(producer);
    consume item
forever

Problems
- If counter has a value in 1..n-1, both processes are running, so both can access the shared "counter" variable.
- The race condition can result in an incorrect value for "counter".
- This could lead to deadlock, with both processes asleep.

Could also lead to deadlock…
Timeline (interleaved):
1. Consumer: if (counter == 0) → true, but the consumer is preempted before it calls sleep()
2. Producer: produce item
3. Producer: if (counter == n) → false
4. Producer: insert item
5. Producer: counter++
6. Producer: if (counter == 1) → true
7. Producer: wakeup(consumer)
8. The wakeup call is lost, as the consumer is not yet sleeping
9. Consumer: resumes and calls sleep()
Eventually the producer fills the buffer and sleeps too, so both will be asleep - deadlock.

Critical section
- The part of the program where shared resources are accessed.
- When a process executes code that manipulates shared data (or a shared resource), we say that the process is in a critical section (CS) for that resource.
- Entry and exit sections (small pieces of code) guard the critical section.

The Critical Section Problem
- CSs can be thought of as sequences of instructions that are "tightly bound": no other process should interfere with them via interleaving or parallel execution.
- The execution of CSs must be mutually exclusive: at any time, only one process may be executing in a CS (even with multiple CPUs).
- Therefore we need a system where each process must request permission to enter its CS, and we need a means to "administer" this.

The Critical Section Problem
- The section of code implementing this request is called the entry section.
- The critical section (CS) is followed by an exit section, which opens the possibility of other processes entering their CSs.
- The remaining code is the remainder section (RS).
- The critical section problem is to design the processes so that their results do not depend on the order in which their execution is interleaved.
- We must also prevent deadlock and starvation.

Framework for analysis of solutions
- Each process executes at nonzero speed, but no assumption is made about the relative speeds of the n processes.
- Several CPUs may be present, but the memory hardware prevents simultaneous access to the same memory location.
- No assumptions about the order of interleaved execution.
- The central problem is to design the entry and exit sections.

General structure of a process:
repeat
    entry section
    critical section
    exit section
    remainder section
forever
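As one concrete illustration of this structure (an assumption for illustration only; the slides have not yet introduced any particular mechanism), a POSIX mutex can play the role of the entry and exit sections:

/* Sketch only: a pthread mutex standing in for the entry/exit sections,
 * just to show where they sit in the general process structure. */
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter = 0;          /* shared data */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {    /* "repeat ... forever" (bounded here) */
        pthread_mutex_lock(&lock);      /* entry section    */
        shared_counter++;               /* critical section */
        pthread_mutex_unlock(&lock);    /* exit section     */
        /* remainder section: work that touches no shared data */
    }
    return NULL;
}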