1 © Janice Regan, CMPT 300, May 2007 0 CMPT 300 Introduction to Operating Systems Introduction to Concurrency

2 © Janice Regan, CMPT 300, May 2007 1 What is a thread?  Each thread has its own execution context  Program counter and registers  Stack and local data  Execution state  Each thread shares  Program text  Global variables (and can share other specified variables)  Files and other communication connections
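
To make the shared/private split concrete, here is a minimal POSIX threads sketch (not from the slides; the function and variable names are illustrative). Both threads execute the same program text and see the same global, while each has its own stack variable:

    #include <pthread.h>
    #include <stdio.h>

    int shared_total = 0;                /* global data: shared by every thread          */

    void *worker(void *arg)
    {
        int my_id = *(int *)arg;         /* local variable: lives on this thread's own   */
                                         /* stack and is invisible to the other thread   */
        shared_total += my_id;           /* both threads update the same global          */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, worker, &id1);   /* both threads run the same          */
        pthread_create(&t2, NULL, worker, &id2);   /* program text (worker)              */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_total = %d\n", shared_total);  /* 3, barring the races covered later */
        return 0;
    }

(Compile with -lpthread. The unsynchronized update of shared_total is exactly the kind of sharing that leads to the race conditions discussed next.)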

3 © Janice Regan, CMPT 300, May 2007 2 Concurrency  Cooperating processes (threads) can share some information as common variables or files  Threads can share variables directly  Processes can share variables using inter-process messaging or shared memory (more complicated)  Both threads and processes may share files and connections  If processes (threads) share variables or files, problems may arise when two or more processes (threads) try to access the same variable/file/connection simultaneously  Cooperating processes (threads) can also share or compete for other resources.
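
As a rough illustration of the "more complicated" process case, here is a hedged sketch using an anonymous shared memory mapping plus fork(); the mapping approach and variable names are my own, not from the slides:

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        /* an anonymous shared mapping: visible to both parent and child after fork() */
        int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *shared = 0;

        if (fork() == 0) {         /* child process                                      */
            *shared = 42;          /* a write through the shared mapping ...             */
            _exit(0);
        }
        wait(NULL);                /* ... is visible to the parent once the child exits  */
        printf("parent sees %d\n", *shared);

        munmap(shared, sizeof(int));
        return 0;
    }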

4 © Janice Regan, CMPT 300, May 2007 3 Important Ideas: Concurrency  Critical section: A section of a process that requires access to shared resources.  Mutual exclusion: While one process is executing a critical section, no other process may execute a critical section using the same resources  Race condition: Two processes read/write shared data and the final result depends on the order in which the reads/writes are done  Starvation: A process is repeatedly denied the resources it needs and never runs even though it is ready to go  Deadlock: Two (or more) processes cannot proceed because each is waiting for the other to do something  Livelock: Two (or more) processes continuously change state without doing useful work
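
Deadlock in particular is easy to produce once locks are involved. A minimal sketch, assuming POSIX mutexes (the lock names are illustrative): if thread 1 holds lock_a and waits for lock_b while thread 2 holds lock_b and waits for lock_a, neither can ever proceed.

    #include <pthread.h>

    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    void *thread1(void *arg)
    {
        pthread_mutex_lock(&lock_a);     /* holds A ...             */
        pthread_mutex_lock(&lock_b);     /* ... and waits for B     */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    void *thread2(void *arg)
    {
        pthread_mutex_lock(&lock_b);     /* holds B ...             */
        pthread_mutex_lock(&lock_a);     /* ... and waits for A     */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);          /* may never return if each thread       */
        pthread_join(t2, NULL);          /* acquires its first lock and blocks    */
        return 0;
    }

Acquiring the locks in the same order in both threads removes the circular wait.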

5 Race condition  Consider a system where two or more processes, sharing a common processor, attempt to use a common resource such as a shared variable or a hardware device. A race condition occurs when the output or result of a process depends on the specific order in which parts of the competing processes happen to be executed.  A race condition causes anomalous behaviour because of a critical dependence on the relative timing of events within a group of processes. © Janice Regan, CMPT 300, May 2007 4

6 5 One example: Race condition  A race condition occurs when two processes (threads) read from or write to a shared memory location in such a way that the order in which parts of the processes are executed determines the final result.  Simple example  Threads 1 and 2 share variables b and c. Initially b=1, c=2  At some point in its execution thread 1 calculates b = b + c  At some point in its execution thread 2 calculates c = b + c  If thread 1 does its calculation first: b = b + c = 3, then c = b + c = 3 + 2 = 5  If thread 2 does its calculation first: c = b + c = 3, then b = b + c = 1 + 3 = 4
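
A hedged pthreads sketch of this example (the thread bodies and main are mine; only b, c and the two assignments come from the slide). Whichever thread the scheduler happens to run first determines whether the program ends with (b, c) = (3, 5) or (4, 3):

    #include <pthread.h>
    #include <stdio.h>

    int b = 1, c = 2;                                      /* shared by both threads  */

    void *thread1(void *arg) { b = b + c; return NULL; }   /* thread 1's calculation  */
    void *thread2(void *arg) { c = b + c; return NULL; }   /* thread 2's calculation  */

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("b=%d c=%d\n", b, c);   /* (3,5) if thread 1 ran first, (4,3) if thread 2 did */
        return 0;
    }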

7 © Janice Regan, CMPT 300, May 2007 6 Race condition: example  Another race condition: the echo routine below reads a character from the keyboard and displays it on the screen. The variables chin and chout are globals, shared by every caller of echo():

    #include <stdio.h>            /* getchar(), putchar()                     */

    static char chin, chout;      /* globals shared by all callers of echo()  */

    void echo() {
        chin = getchar();         /* read a character from the keyboard       */
        chout = chin;
        putchar(chout);           /* display it on the screen                 */
    }

8 © Janice Regan, CMPT 300, May 2007 7 Race condition: example: 2  Consider a single-user multiprogramming system (one processor) using a single copy of the echo code to service multiple windows (applications)  Process W1 (window 1) calls echo to process a keystroke (‘x’)  getchar() reads ‘x’ into chin  W1 is interrupted by process W2 (window 2)  W2 uses getchar(), reading ‘q’ into chin, copying it into chout and printing q to window 2  Process W1 resumes: chout = chin puts the present value of chin (‘q’) into chout and displays q to window 1  The character ‘x’ originally input to process W1 is lost

9 © Janice Regan, CMPT 300, May 2007 8 Race condition: example: 3  Try mutual exclusion: assume that only one process may use the echo routine at any one time  Process W1 (window 1) calls echo to process a keystroke (‘x’)  getchar() reads ‘x’ into chin  W1 is interrupted by process W2 (window 2)  W2 wants to use echo but finds it busy and has to wait; while W2 is waiting  Process W1 resumes: chout = chin puts the present value of chin (‘x’) into chout and displays x to window 1. Process W1 continues until it completes the call to echo, then completes or is interrupted.  W2 resumes, uses getchar() reading ‘q’ into chin, copying it to chout and printing q to window 2

10 © Janice Regan, CMPT 300, May 2007 9 Race condition: example: 3  Problem with mutual exclusion on a multiprocessor  W1 runs on processor 1: getchar() reads ‘x’ into chin  W2 runs on processor 2: getchar() reads ‘q’ into chin  Process W2 continues, puts the present value of chin (‘q’) into chout and displays q to window 2  Process W1 continues, copying chin = ‘q’ into chout and printing q to window 1  The character ‘x’ is again lost: only one copy of echo() is running on each processor, but we still have the same problem because the shared memory is not protected from the other processor, only from other processes running on the same processor
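
One standard way to get mutual exclusion that also works across processors is to associate a lock with the shared data itself rather than with a single CPU. The following is only a sketch, assuming the windows are served by threads and using a POSIX mutex; the mechanisms the course actually develops are listed on the final slide:

    #include <pthread.h>
    #include <stdio.h>

    static char chin, chout;                               /* shared by every caller    */
    static pthread_mutex_t echo_lock = PTHREAD_MUTEX_INITIALIZER;

    void echo(void)
    {
        pthread_mutex_lock(&echo_lock);    /* at most one thread, on any processor,     */
        chin = getchar();                  /* can be between lock and unlock, so the     */
        chout = chin;                      /* read of chin and the write of chout can    */
        putchar(chout);                    /* no longer be interleaved with another call */
        pthread_mutex_unlock(&echo_lock);
    }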

11 © Janice Regan, CMPT 300, May 2007 10 An example (from your text): 1  Consider a printer daemon. This O/S process runs in the background (is a daemon) and periodically checks to see if there are any files to print  On your disk is a printer daemon directory containing the files to be printed and a control file  The spooler control block in memory (a copy of the control file) has a series of “slots”; each slot contains the name of a file to be printed  Each slot is numbered (indexed). The file whose name is in slot 1 will be printed first, slot 2 next, and so on (a large circular buffer)  Each time a file is to be printed a new process is created to print it  There are two shared variables, in and out (shared between all processes submitting files for printing)

12 © Janice Regan, CMPT 300, May 2007 11 Printer daemon [Diagram: the printer daemon directory on disk holds files a1, a2, a3; the spooler control block in memory is a circular buffer of numbered slots 0–9, with out = 4 (next slot to print) and in = 7 (next free slot).]

13 © Janice Regan, CMPT 300, May 2007 12 An example (from your text): 2  The printer daemon determines which file should be printed next by using the shared variables  Shared variable out holds the index of the slot in the spooler control block that contains the reference to the next file to be printed  Shared variable in holds the index of the slot in the spooler control block that will hold the reference to the file for the next print request made
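
A hedged fragment showing how a submitting process might use in (the function and array names are illustrative, not from the text). The danger is that steps 1–3 below are not atomic; the two interleavings on the next slides slot into the gaps between them:

    #define SLOTS 10

    char *spool[SLOTS];          /* spooler control block: one file name per slot  */
    int in  = 0;                 /* index of the next free slot                    */
    int out = 0;                 /* index of the next slot to print                */

    void submit_print_request(char *filename)
    {
        int slot = in;                 /* 1. read the shared index                 */
        spool[slot] = filename;        /* 2. claim the slot                        */
        in = (slot + 1) % SLOTS;       /* 3. publish the new index (circular)      */
    }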

14 © Janice Regan, CMPT 300, May 2007 13 An example (from your text): 3  Two processes A and B both decide that they want to print a file at the “same” time  Process A reads variable in and gets the value 7  Process A is interrupted by process B, which also reads variable in and gets the value 7  Process B stores the reference to the file it wants to print in slot 7 of the spooler control block and updates in to 8. Process B continues doing other things  Later, Process A runs again, stores the reference to the file it wants to print in slot 7 of the spooler control block (overwriting B’s entry) and updates in to 8 (no change)  When the spooler prints the file for slot 7 it prints the output for Process A. The file for Process B is never printed

15 © Janice Regan, CMPT 300, May 2007 14 An example (from your text): 4  Two processes A and B both decide that they want to print a file at the “same” time  Process A reads variable in and gets the value 7  Process A stores the reference to the file it wants to print in slot 7 of the spooler control block  Process A is interrupted by process B, which also reads variable in and gets the value 7  Process B stores the reference to the file it wants to print in slot 7 of the spooler control block (overwriting A’s entry) and updates in to 8. Process B continues doing other things  Later, Process A runs again, updates in to 8 (no change) and continues running  When the spooler prints the file for slot 7 it prints the output for Process B. The file for Process A is never printed

16 © Janice Regan, CMPT 300, May 2007 15 Preventing Race conditions  Define a ‘critical section’ where the access to the resource causing the race condition occurs (like our echo() function)  Make sure that mutual exclusion is enforced: only one process can access the resource (in this case the echo function) at a time  How do we enforce/implement mutual exclusion?  How do the processes interact?  What other problems are introduced by enforcing mutual exclusion?

17 © Janice Regan, CMPT 300, May 2007 16 Prevention of race conditions: necessary conditions  What conditions must be satisfied to prevent race conditions?  Must assure  Arbitrary ordering: Processes can run in any order on any processor (on one CPU or on a group of CPUs), and at any speed, and still produce the same results  Mutual exclusion enforced: No more than one process may be in a critical region for a particular resource at a given time. Each critical section must run for a finite (short) time.  Non-blocking: Processes not within their critical regions cannot prevent other processes from entering their own critical regions  No starvation: Limit the number of processes allowed to enter their critical sections before this process’s critical section runs; no process can wait forever to enter its critical section

18 © Janice Regan, CMPT 300, May 2007 17 Process interaction: 1  Processes may be unaware of each other  Independent processes (interactive or batch); the results of one process are independent of the others  Processes may compete for resources; competition is mediated by the OS  May require control mechanisms within the processes themselves (startCriticalRegion, endCriticalRegion routines …)  Competition for resources may affect turnaround time or response time (waits make these times longer)  Must enforce mutual exclusion within critical sections  Enforcing mutual exclusion within critical sections can cause starvation or deadlock

19 © Janice Regan, CMPT 300, May 2007 18 Process interaction: 2  Processes may be indirectly aware of each other  Processes may indirectly share information (common shared memory locations, files or databases)  Processes do not communicate directly, but the results of one process may affect the results of another  Processes cooperatively share information and are aware of the need to maintain data integrity and consistency  Information is held in shared resources (disks, memory, …), so accessing it needs the same hardware and software support for mutual exclusion (startCriticalRegion …) as for competing processes  Mutual exclusion can still cause deadlocks and starvation  Data object integrity is also a concern: the shared information must be kept consistent

20 © Janice Regan, CMPT 300, May 2007 19 Process interaction: 3  Processes may be directly aware of each other  Processes communicate by sending messages to each other  The messages are used to coordinate the sharing of resources  No need for mutual exclusion: nothing is shared while messages are sent/received; the order of access is determined by the order in which messages are sent/received  Starvation or deadlock can still occur

21 © Janice Regan, CMPT 300, May 2007 20 Implementation  Now we have the basic ideas we need  How do we actually implement mutual exclusion?  There are several approaches  Interrupt disabling  Lock variables  Strict alternation  Special instructions  Peterson’s solution  Message passing  Semaphores and Mutexes  Monitors
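
As a preview of one of these approaches, here is a sketch of Peterson's solution for two processes in its usual textbook form (the variable names follow the common presentation; on modern hardware it would additionally need memory barriers or volatile/atomic accesses to be reliable):

    int turn;                         /* whose turn it is when both want to enter       */
    int interested[2] = {0, 0};       /* interested[i] == 1: process i wants to enter   */

    void enter_region(int process)    /* process is 0 or 1                              */
    {
        int other = 1 - process;      /* the other process                              */
        interested[process] = 1;      /* announce interest                              */
        turn = process;               /* record that this process was the last to arrive */
        while (turn == process && interested[other])
            ;                         /* busy-wait: the last arrival waits while the    */
                                      /* other process is still interested              */
    }

    void leave_region(int process)
    {
        interested[process] = 0;      /* done with the critical region                  */
    }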

