CMPT 300 Introduction to Operating Systems
Introduction to Concurrency
© Janice Regan, CMPT 300, May 2007

What is a thread?
- Each thread has its own program image
  - Program counter, registers
  - Stack, data
  - State
- Each thread shares
  - Program text
  - Global variables (and can share other specified variables)
  - Files and other communication connections
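
A minimal sketch of this split, assuming POSIX threads (the variable and function names are illustrative, not from the slides): the global counter is shared by both threads, while each thread's local variable lives on its own stack.

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;                 /* global: shared by all threads        */

    static void *worker(void *arg)
    {
        int local = *(int *)arg;            /* copy on this thread's own stack      */
        shared_counter += local;            /* every thread updates the same global */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int a = 1, b = 2;                   /* a different argument for each thread */

        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("shared_counter = %d\n", shared_counter);
        return 0;
    }

Both threads also share the program text (the worker function) and any open files; the unsynchronized update of shared_counter is exactly the kind of shared-variable access the following slides examine.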

Concurrency
- Cooperating processes (or threads) can share information through common variables or files
  - Threads can share variables directly
  - Processes can share variables using interprocess messaging or shared memory (more complicated)
  - Both threads and processes may share files and connections
- If processes (threads) share variables or files, problems may arise when two or more processes (threads) try to access the same variable/file/connection simultaneously
- Cooperating processes (threads) can also share or compete for other resources

Important Ideas: Concurrency
- Critical section: a section of a process that requires access to shared resources
- Mutual exclusion: when one process is executing a critical section, no other process may execute a critical section using the same resources
- Race condition: two processes read/write shared data and the final result depends on the order in which the reads/writes are done
- Starvation: a process is not given any resources and does not run even though it is ready to go
- Deadlock: two (or more) processes cannot proceed because each is waiting for the other to do something (see the sketch after this list)
- Livelock: two (or more) processes continuously change state without doing useful work
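
A minimal sketch of the deadlock definition above, assuming POSIX threads and two illustrative locks m1 and m2 (mutexes themselves are introduced in a later lecture): each thread acquires one lock and then waits forever for the lock the other thread holds.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

    static void *worker_a(void *arg)      /* locks m1, then m2                  */
    {
        pthread_mutex_lock(&m1);
        sleep(1);                         /* give worker_b time to grab m2      */
        pthread_mutex_lock(&m2);          /* blocks forever: worker_b holds m2  */
        pthread_mutex_unlock(&m2);
        pthread_mutex_unlock(&m1);
        return NULL;
    }

    static void *worker_b(void *arg)      /* locks m2, then m1: opposite order  */
    {
        pthread_mutex_lock(&m2);
        sleep(1);                         /* give worker_a time to grab m1      */
        pthread_mutex_lock(&m1);          /* blocks forever: worker_a holds m1  */
        pthread_mutex_unlock(&m1);
        pthread_mutex_unlock(&m2);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker_a, NULL);
        pthread_create(&b, NULL, worker_b, NULL);
        pthread_join(a, NULL);            /* never returns: the program hangs   */
        pthread_join(b, NULL);
        printf("not reached\n");
        return 0;
    }

Compile with cc -pthread; the hang is the deadlock. Livelock would look similar except that both threads keep backing off and retrying without ever making progress.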

Race condition
- Consider a system where two or more processes, sharing a common processor, attempt to use a common resource such as a shared variable or a hardware device. A race condition occurs when the output or result of a process depends on the specific order in which all or part of each of the processes is executed.
- A race condition causes anomalous behaviour because of a critical dependence on the relative timing of events within a group of processes.

One example: Race condition
- A race condition occurs when two processes (threads) try to read from or write to a shared memory location in such a way that the order in which parts of the processes are executed determines the final result.
- Simple example
  - Threads 1 and 2 share variables b and c. Initially b=1, c=2
  - At some point in its execution, thread 1 calculates b = b + c
  - At some point in its execution, thread 2 calculates c = b + c
  - If thread 1 does its calculation first: b = b + c = 3, then c = b + c = 3 + 2 = 5
  - If thread 2 does its calculation first: c = b + c = 3, then b = b + c = 1 + 3 = 4
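
A minimal sketch of this example using POSIX threads (the variable names b and c follow the slide; everything else is illustrative). Which result you get depends on which thread's update runs first; in real code the two updates could even interleave at the instruction level.

    #include <pthread.h>
    #include <stdio.h>

    int b = 1, c = 2;                    /* shared between the two threads */

    static void *thread1(void *arg) { b = b + c; return NULL; }
    static void *thread2(void *arg) { c = b + c; return NULL; }

    int main(void)
    {
        pthread_t t1, t2;

        /* No synchronization: the scheduler decides which update happens first. */
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Prints b=3 c=5 or b=4 c=3 depending on the interleaving. */
        printf("b=%d c=%d\n", b, c);
        return 0;
    }

Compile with cc -pthread and run it a few times; the output can differ from run to run.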

Race condition: example
- Another race condition:

    char chin, chout;        /* shared (global) variables */

    void echo()
    {
        chin = getchar();
        chout = chin;
        putchar(chout);
    }

- This code can be used to read a character from the keyboard and display it on the screen

Race condition: example: 2
- Consider a single-user multiprogramming system using a single copy of the echo procedure to service multiple windows (applications)
- Process W1 (window 1) calls echo to process a keystroke (‘x’)
  - getchar() reads ‘x’ into chin
- W1 is interrupted by process W2 (window 2)
  - W2 uses getchar(), reading ‘q’ into chin, copying it into chout, and printing q to window 2
- Process W1 resumes
  - chout = chin puts the present value of chin (‘q’) into chout and displays q to window 1
- The character ‘x’ originally input to process W1 is lost

Race condition: example: 3
- Try mutual exclusion: assume that only one process may use the echo routine at any one time
- Process W1 (window 1) calls echo to process a keystroke (‘x’)
  - getchar() reads ‘x’ into chin
- W1 is interrupted by process W2 (window 2)
  - W2 wants to use echo but finds it busy and has to wait; while W2 is waiting:
- Process W1 resumes
  - chout = chin puts the present value of chin (‘x’) into chout and displays x to window 1. Process W1 continues until it completes the call to echo, then finishes or is interrupted.
- W2 resumes, uses getchar() reading ‘q’ into chin, copying it to chout and printing q to window 2

Race condition: example: 3 (multiprocessor)
- Problem with mutual exclusion on a multiprocessor
- W1 runs on processor 1: getchar() reads ‘x’ into chin
- W2 runs on processor 2: getchar() reads ‘q’ into chin
- Process W2 continues, puts the present value of chin (‘q’) into chout and displays q to window 2
- Process W1 continues, copying chin (‘q’) into chout and printing q to window 1
- Only one copy of echo() is running on each processor, but we still have the same problem: the shared memory is not protected from the other processor, only from other processes running on the same processor
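
One standard way to get mutual exclusion that also works across processors is to protect the whole body of echo() with a single lock shared by everyone who calls it. A minimal sketch using a POSIX thread mutex (assuming the windows are serviced by threads of one process; the lock name is illustrative, and cross-process locking would need a process-shared mutex or a semaphore):

    #include <pthread.h>
    #include <stdio.h>

    static char chin, chout;                          /* shared, as in the slides */
    static pthread_mutex_t echo_lock = PTHREAD_MUTEX_INITIALIZER;

    void echo(void)
    {
        pthread_mutex_lock(&echo_lock);   /* only one thread at a time from here ... */
        chin = getchar();
        chout = chin;
        putchar(chout);
        pthread_mutex_unlock(&echo_lock); /* ... to here, even on a multiprocessor   */
    }

Because the lock itself lives in shared memory and is built on atomic hardware operations, it excludes threads running on other processors as well, which is exactly what a per-processor "busy" check fails to do.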

An example (from your text): 1
- Consider a printer daemon. This OS process runs in the background (is a daemon) and periodically checks to see if there are any files to print
- On your disk is a printer daemon directory containing the files to be printed and a control file
- The spooler control block in memory (a copy of the control file) has a series of “slots”; each slot contains the name of a file to be printed
- Each slot is numbered (indexed). The file whose name is in slot 1 will be printed first, slot 2 next, and so on (a large circular buffer)
- Each time a file is printed, a new process is created to print the file
- There are two shared variables, in and out (shared between all processes printing files)

Printer daemon (figure): the on-disk printer daemon directory holds the files to be printed (File a1, File a2, File a3, ...); the in-memory spooler control block holds their names in numbered slots (a1, a2, a3, ...), with out = 4 (next file to print) and in = 7 (next free slot)

An example (from your text): 2
- The printer daemon determines which file should be printed next by using the shared variables
- Shared variable out holds the index of the slot in the spooler control block that contains the reference to the next file to be printed
- Shared variable in holds the index of the slot in the spooler control block that will hold the reference to the file for the next print request made

An example (from your text): 3
- Two processes A and B both decide that they want to print a file at the “same” time
- Process A reads variable in and gets the value 7
- Process A is interrupted by process B, which also reads variable in and gets the value 7
- Process B stores the reference to the file it wants to print in slot 7 of the spooler control block and updates in to 8. Process B continues doing other things
- Later, process A runs again, stores the reference to the file it wants to print in slot 7 of the spooler control block (overwriting B's entry), and updates in to 8 (no change)
- When the spooler prints the file for slot 7, it prints the output for process A. The file for process B is never printed

An example (from your text): 4
- Two processes A and B both decide that they want to print a file at the “same” time
- Process A reads variable in and gets the value 7
- Process A stores the reference to the file it wants to print in slot 7 of the spooler control block
- Process A is interrupted by process B, which also reads variable in and gets the value 7
- Process B stores the reference to the file it wants to print in slot 7 of the spooler control block (overwriting A's entry) and updates in to 8. Process B continues doing other things
- Later, process A runs again, updates in to 8 (no change), and continues running
- When the spooler prints the file for slot 7, it prints the output for process B. The file for process A is never printed
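
Both interleavings come from the same few lines of unsynchronized code that every printing process executes. A sketch of that critical section (the function, array, and size names are illustrative, not from the text; the spooler state is assumed to live in memory shared by all the processes):

    #define SLOTS 100
    char *spooler[SLOTS];   /* slot i holds the name of a file to print; shared */
    int in = 0;             /* index of the next free slot; shared              */

    /* Executed by every process that wants to print a file. */
    void request_print(char *filename)
    {
        int slot = in;              /* read the shared variable in              */
                                    /* a context switch here lets another       */
                                    /* process read the same slot number        */
        spooler[slot] = filename;   /* store the file reference in that slot    */
        in = (slot + 1) % SLOTS;    /* advance in (circular buffer)             */
    }

The three statements together are the critical section: if two processes interleave anywhere inside it, one print request is lost, exactly as in the two scenarios above.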

Preventing race conditions
- Define a ‘critical section’ containing the accesses to the resource that cause the race condition (like our echo() function)
- Make sure that mutual exclusion is enforced: only one process can access the resource (in this case the echo function) at one time
- How do we enforce/implement mutual exclusion?
- How do the processes interact?
- What other problems are introduced by enforcing mutual exclusion?

Prevention of race conditions: necessary conditions
- What conditions must be satisfied to prevent race conditions? Must ensure:
  - Arbitrary ordering: processes can run in any order on any processor (on one CPU or on a group of CPUs) and at any speed and still produce the same results
  - Mutual exclusion enforced: no more than one process may be in a critical region for a particular resource at a given time. Each critical section must run for a finite (short) time.
  - Non-blocking: processes not within their critical regions cannot prevent other processes from entering their own critical regions
  - No starvation: limit the number of other processes allowed to enter their critical sections before this process's critical section runs. A process cannot be made to wait forever to enter a critical section

Process interaction: 1
- Processes may be unaware of each other
  - Independent processes (interactive or batch); the results of one process are independent of the others
  - Processes may compete for resources; competition is mediated by the OS
  - May require control mechanisms within the processes themselves (startCriticalRegion, endCriticalRegion routines, …)
  - Competition for resources may affect turnaround time or response time (waits make these times longer)
  - Must enforce mutual exclusion within critical sections
  - Enforcing mutual exclusion within critical sections can cause starvation or deadlock

Process interaction: 2
- Processes may be indirectly aware of each other
  - Processes may indirectly share information (common shared memory locations, files, or databases)
  - Processes do not communicate directly, but the results of one process may affect the results of another process
  - Processes cooperatively share information and are aware of the need to maintain data integrity and consistency
  - Information is held in shared resources (disks, memory, …), so accessing the information needs the same mutual exclusion hardware and software support (startCriticalRegion, …) as for competing processes
  - Mutual exclusion can still cause deadlocks and starvation
  - There is also a concern with data object integrity: maintaining consistent information

Process interaction: 3
- Processes may be directly aware of each other
  - Processes communicate by sending messages to each other
  - The messages are used to coordinate the sharing of resources
  - No need for mutual exclusion: nothing is shared while messages are sent/received; the order of access is determined by the order in which messages are sent and received
  - Starvation or deadlock can still occur
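
A minimal sketch of direct cooperation by message passing, here between a parent and child process over a pipe (the "go ahead" message protocol is illustrative, not from the slides): the child may use the resource only after it receives the message, so the order of access is fixed by the messages rather than by a lock.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char buf[32];

        pipe(fd);                          /* fd[0]: read end, fd[1]: write end */

        if (fork() == 0) {                 /* child: must wait for the message  */
            close(fd[1]);
            read(fd[0], buf, sizeof buf);
            printf("child: received \"%s\", now using the resource\n", buf);
            _exit(0);
        }

        close(fd[0]);                      /* parent: use the resource first,   */
        printf("parent: using the resource\n");       /* then pass the message  */
        write(fd[1], "go ahead", strlen("go ahead") + 1);
        wait(NULL);
        return 0;
    }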

Implementation
- Now we have the basic ideas we need
- How do we actually implement mutual exclusion?
- There are several approaches (a sketch of Peterson's solution follows this list)
  - Interrupt disabling
  - Lock variables
  - Strict alternation
  - Special instructions
  - Peterson’s solution
  - Message passing
  - Semaphores and mutexes
  - Monitors
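
As a preview of one software-only approach, a minimal sketch of Peterson's solution for two threads (thread ids 0 and 1; the function and variable names are illustrative). It achieves mutual exclusion without special hardware instructions, although real implementations on modern CPUs also need memory barriers.

    #include <stdbool.h>

    /* Shared between the two competing threads (ids 0 and 1). */
    static volatile bool interested[2] = { false, false };
    static volatile int  turn = 0;

    void enter_region(int id)            /* call before entering the critical section */
    {
        int other = 1 - id;
        interested[id] = true;           /* announce that this thread wants in        */
        turn = id;                       /* the last thread to write turn must wait   */
        while (interested[other] && turn == id)
            ;                            /* busy wait until it is safe to proceed     */
    }

    void leave_region(int id)            /* call after leaving the critical section   */
    {
        interested[id] = false;          /* let the other thread enter                */
    }

Each thread brackets its critical section with enter_region(id) and leave_region(id). The other approaches on the list trade this busy waiting for hardware support (special instructions, interrupt disabling) or for blocking primitives (semaphores, mutexes, monitors), which later lectures cover.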