1 OMSE 510: Computing Foundations 7: More IPC & Multithreading Chris Gilmore Portland State University/OMSE Material Borrowed from Jon Walpole’s lectures.


2 Today Classical IPC Problems Monitors Message Passing Scheduling

3 Classical IPC problems Producer Consumer (bounded buffer) Dining philosophers Sleeping barber Readers and writers

4 Producer consumer problem
[Figure: a ring of 8 buffers with InP and OutP pointers, shared between the Producer and Consumer]
Producer and consumer are separate threads
Also known as the bounded buffer problem

5 Is this a valid solution?

Global variables:
  char buf[n]
  int InP = 0    // place to add
  int OutP = 0   // place to get
  int count

thread producer {
  while(1) {
    // Produce char c
    while (count == n) { no_op }
    buf[InP] = c
    InP = (InP + 1) mod n
    count++
  }
}

thread consumer {
  while(1) {
    while (count == 0) { no_op }
    c = buf[OutP]
    OutP = (OutP + 1) mod n
    count--
    // Consume char
  }
}

6 How about this?

Global variables:
  char buf[n]
  int InP = 0    // place to add
  int OutP = 0   // place to get
  int count

thread producer {
  while(1) {
    // Produce char c
    if (count == n) { sleep(full) }
    buf[InP] = c
    InP = (InP + 1) mod n
    count++
    if (count == 1) wakeup(empty)
  }
}

thread consumer {
  while(1) {
    while (count == 0) { sleep(empty) }
    c = buf[OutP]
    OutP = (OutP + 1) mod n
    count--
    if (count == n-1) wakeup(full)
    // Consume char
  }
}

7 Does this solution work?

Global variables:
  semaphore full_buffs = 0;
  semaphore empty_buffs = n;
  char buf[n];
  int InP, OutP;

thread producer {
  while(1) {
    // Produce char c...
    down(empty_buffs)
    buf[InP] = c
    InP = (InP + 1) mod n
    up(full_buffs)
  }
}

thread consumer {
  while(1) {
    down(full_buffs)
    c = buf[OutP]
    OutP = (OutP + 1) mod n
    up(empty_buffs)
    // Consume char...
  }
}
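The semaphore version above maps directly onto Python's threading primitives. A minimal sketch (buffer size, data, and variable names are illustrative); with a single producer and a single consumer, the two semaphores alone are enough:

```python
import threading

n = 4                                   # number of buffer slots (illustrative)
buf = [None] * n
in_p = out_p = 0
empty_buffs = threading.Semaphore(n)    # counts free slots
full_buffs = threading.Semaphore(0)     # counts filled slots
consumed = []
data = list("semaphores")

def producer():
    global in_p
    for c in data:
        empty_buffs.acquire()           # down(empty_buffs)
        buf[in_p] = c
        in_p = (in_p + 1) % n
        full_buffs.release()            # up(full_buffs)

def consumer():
    global out_p
    for _ in data:
        full_buffs.acquire()            # down(full_buffs)
        consumed.append(buf[out_p])
        out_p = (out_p + 1) % n
        empty_buffs.release()           # up(empty_buffs)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

With more than one producer or consumer, the index updates would also need a mutex, which is exactly the question the next slide raises.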

8 Producer consumer problem
[Figure: the same ring of 8 buffers with InP and OutP pointers]
Producer and consumer are separate threads
What is the shared state in the last solution?
Does it apply mutual exclusion? If so, how?

9 Definition of Deadlock
A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause
Usually the event is the release of a currently held resource
None of the processes can be awakened, run, or release resources

10 Deadlock conditions
A deadlock situation can occur if and only if the following conditions hold simultaneously
Mutual exclusion condition – each resource is assigned to at most one process
Hold and wait condition – a process can hold one resource while requesting more
No preemption condition – resources already granted cannot be forcibly taken away
Circular wait condition – a chain of two or more processes, each waiting for a resource held by the next one in the chain

11 Resource acquisition scenarios acquire (resource_1) use resource_1 release (resource_1) Thread A: Example: var r1_mutex: Mutex... r1_mutex.Lock() Use resource_1 r1_mutex.Unlock()

12 Resource acquisition scenarios Thread A: acquire (resource_1) use resource_1 release (resource_1) Another Example: var r1_sem: Semaphore... r1_sem.Down() Use resource_1 r1_sem.Up()

13 Resource acquisition scenarios acquire (resource_2) use resource_2 release (resource_2) Thread A:Thread B: acquire (resource_1) use resource_1 release (resource_1)

14 Resource acquisition scenarios acquire (resource_2) use resource_2 release (resource_2) Thread A:Thread B: No deadlock can occur here! acquire (resource_1) use resource_1 release (resource_1)

15 Resource acquisition scenarios: 2 resources acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) Thread A:Thread B:

16 Resource acquisition scenarios: 2 resources acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) Thread A:Thread B: No deadlock can occur here!

17 Resource acquisition scenarios: 2 resources acquire (resource_1) use resources 1 release (resource_1) acquire (resource_2) use resources 2 release (resource_2) acquire (resource_2) use resources 2 release (resource_2) acquire (resource_1) use resources 1 release (resource_1) Thread A:Thread B:

18 Resource acquisition scenarios: 2 resources acquire (resource_1) use resources 1 release (resource_1) acquire (resource_2) use resources 2 release (resource_2) acquire (resource_2) use resources 2 release (resource_2) acquire (resource_1) use resources 1 release (resource_1) Thread A:Thread B: No deadlock can occur here!

19 Resource acquisition scenarios: 2 resources acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) acquire (resource_2) acquire (resource_1) use resources 1 & 2 release (resource_1) release (resource_2) Thread A:Thread B:

20 Resource acquisition scenarios: 2 resources acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) acquire (resource_2) acquire (resource_1) use resources 1 & 2 release (resource_1) release (resource_2) Thread A:Thread B: Deadlock is possible!
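The deadlock-prone interleaving above disappears if every thread agrees on a single global acquisition order, as in the earlier safe scenarios. A small sketch (lock and thread names are illustrative) where both threads take resource_1 before resource_2, so a circular wait can never form:

```python
import threading

resource_1 = threading.Lock()
resource_2 = threading.Lock()
finished = []

def worker(name):
    # Both threads acquire in the same global order: resource_1 first,
    # then resource_2, so the circular-wait condition can never arise.
    with resource_1:
        with resource_2:
            finished.append(name)   # "use resources 1 & 2"

a = threading.Thread(target=worker, args=("A",))
b = threading.Thread(target=worker, args=("B",))
a.start(); b.start()
a.join(); b.join()
```

Reversing the acquisition order in one of the two threads reintroduces exactly the hazard shown on the slide.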

21 Examples of deadlock
Deadlock occurs in a single program
The programmer creates a situation that deadlocks
Kill the program and move on
Not a big deal
Deadlock occurs in the Operating System
Spin locks and locking mechanisms are mismanaged within the OS
Threads become frozen
The system hangs or crashes
Must restart the system, killing all applications

22 Dining philosophers problem
Five philosophers sit at a table
One fork between each philosopher
Each philosopher is modeled with a thread

while(TRUE) {
  Think();
  Grab first fork;
  Grab second fork;
  Eat();
  Put down first fork;
  Put down second fork;
}

Why do they need to synchronize? How should they do it?

23 Is this a valid solution?

#define N 5

Philosopher() {
  while(TRUE) {
    Think();
    take_fork(i);
    take_fork((i+1) % N);
    Eat();
    put_fork(i);
    put_fork((i+1) % N);
  }
}

24 Working towards a solution …
Group the two acquisitions into take_forks(i), and the two releases into put_forks(i):

#define N 5

Philosopher() {
  while(TRUE) {
    Think();
    take_fork(i);          // becomes take_forks(i)
    take_fork((i+1) % N);
    Eat();
    put_fork(i);           // becomes put_forks(i)
    put_fork((i+1) % N);
  }
}

25 Working towards a solution …

#define N 5

Philosopher() {
  while(TRUE) {
    Think();
    take_forks(i);
    Eat();
    put_forks(i);
  }
}

26 Picking up forks

int state[N]
semaphore mutex = 1
semaphore sem[i]

// only called with mutex set!
test(int i) {
  if (state[i] == HUNGRY &&
      state[LEFT] != EATING &&
      state[RIGHT] != EATING) {
    state[i] = EATING;
    up(sem[i]);
  }
}

take_forks(int i) {
  down(mutex);
  state[i] = HUNGRY;
  test(i);
  up(mutex);
  down(sem[i]);
}

27 Putting down forks

int state[N]
semaphore mutex = 1
semaphore sem[i]

// only called with mutex set!
test(int i) {
  if (state[i] == HUNGRY &&
      state[LEFT] != EATING &&
      state[RIGHT] != EATING) {
    state[i] = EATING;
    up(sem[i]);
  }
}

put_forks(int i) {
  down(mutex);
  state[i] = THINKING;
  test(LEFT);
  test(RIGHT);
  up(mutex);
}

28 Dining philosophers Is the previous solution correct? What does it mean for it to be correct? Is there an easier way?

29 The sleeping barber problem

30 The sleeping barber problem
Barber:
While there are people waiting for a hair cut, put one in the barber chair, and cut their hair
When done, move to the next customer
Else go to sleep, until someone comes in
Customer:
If the barber is asleep, wake him up for a haircut
If someone is getting a haircut, wait for the barber to become free by sitting in a chair
If all chairs are full, leave the barbershop

31 Designing a solution How will we model the barber and customers? What state variables do we need?.. and which ones are shared? …. and how will we protect them? How will the barber sleep? How will the barber wake up? How will customers wait? What problems do we need to look out for?

32 Is this a good solution?

const CHAIRS = 5
var customers: Semaphore
    barbers: Semaphore
    lock: Mutex
    numWaiting: int = 0

Barber Thread:
  while true
    Down(customers)
    Lock(lock)
    numWaiting = numWaiting - 1
    Up(barbers)
    Unlock(lock)
    CutHair()
  endWhile

Customer Thread:
  Lock(lock)
  if numWaiting < CHAIRS
    numWaiting = numWaiting + 1
    Up(customers)
    Unlock(lock)
    Down(barbers)
    GetHaircut()
  else
    Unlock(lock)   -- give up & go home
  endIf
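Whether this solution is "good" is the question the slide poses, but its mechanics can be exercised directly. A Python sketch (chair count, customer count, and names are illustrative); with fewer customers than chairs, every customer finds a seat and the barber serves them all:

```python
import threading

CHAIRS = 5
customers = threading.Semaphore(0)   # counts waiting customers
barbers = threading.Semaphore(0)     # counts barbers ready to cut
lock = threading.Lock()
num_waiting = 0
haircuts = 0
NUM_CUSTOMERS = 3

def barber():
    global num_waiting, haircuts
    for _ in range(NUM_CUSTOMERS):   # serve a fixed number, then go home
        customers.acquire()          # sleep until a customer arrives
        with lock:
            num_waiting -= 1
        barbers.release()            # announce the barber is free
        haircuts += 1                # CutHair()

def customer():
    global num_waiting
    lock.acquire()
    if num_waiting < CHAIRS:
        num_waiting += 1
        customers.release()          # wake the barber if asleep
        lock.release()
        barbers.acquire()            # wait for the barber
        # GetHaircut()
    else:
        lock.release()               # shop full: give up and go home

b = threading.Thread(target=barber)
cs = [threading.Thread(target=customer) for _ in range(NUM_CUSTOMERS)]
b.start()
for t in cs: t.start()
for t in cs: t.join()
b.join()
```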

33 The readers and writers problem Multiple readers and writers want to access a database (each one is a thread) Multiple readers can proceed concurrently Writers must synchronize with readers and other writers only one writer at a time ! when someone is writing, there must be no readers ! Goals: Maximize concurrency. Prevent starvation.

34 Designing a solution How will we model the readers and writers? What state variables do we need?.. and which ones are shared? …. and how will we protect them? How will the writers wait? How will the writers wake up? How will readers wait? How will the readers wake up? What problems do we need to look out for?

35 Is this a valid solution to readers & writers?

var mut: Mutex = unlocked
    db: Semaphore = 1
    rc: int = 0

Reader Thread:
  while true
    Lock(mut)
    rc = rc + 1
    if rc == 1
      Down(db)
    endIf
    Unlock(mut)
    ... Read shared data ...
    Lock(mut)
    rc = rc - 1
    if rc == 0
      Up(db)
    endIf
    Unlock(mut)
    ... Remainder Section ...
  endWhile

Writer Thread:
  while true
    ... Remainder Section ...
    Down(db)
    ... Write shared data ...
    Up(db)
  endWhile
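The reader-count scheme above translates directly into Python. A sketch (thread counts, the "database" list, and names are illustrative): the first reader locks out writers, the last reader lets them back in:

```python
import threading

mut = threading.Lock()               # protects rc
db = threading.Semaphore(1)          # held by writers, or by the reader group
rc = 0                               # number of active readers
shared = []                          # the "database"
reads = []

def reader():
    global rc
    with mut:
        rc += 1
        if rc == 1:
            db.acquire()             # first reader locks out writers
    reads.append(len(shared))        # read shared data
    with mut:
        rc -= 1
        if rc == 0:
            db.release()             # last reader lets writers in

def writer(x):
    db.acquire()
    shared.append(x)                 # write shared data
    db.release()

threads = [threading.Thread(target=writer, args=(i,)) for i in range(3)]
threads += [threading.Thread(target=reader) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

Note that a steady stream of readers can keep db held indefinitely, which is the starvation question raised on the next slide.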

36 Readers and writers solution Does the previous solution have any problems? is it “fair”? can any threads be starved? If so, how could this be fixed?

37 Monitors It is difficult to produce correct programs using semaphores correct ordering of up and down is tricky! avoiding race conditions and deadlock is tricky! boundary conditions are tricky! Can we get the compiler to generate the correct semaphore code for us? what are suitable higher level abstractions for synchronization?

38 Monitors Related shared objects are collected together Compiler enforces encapsulation/mutual exclusion Encapsulation: Local data variables are accessible only via the monitor’s entry procedures (like methods) Mutual exclusion A monitor has an associated mutex lock Threads must acquire the monitor’s mutex lock before invoking one of its procedures

39 Monitors and condition variables But we need two flavors of synchronization Mutual exclusion Only one at a time in the critical section Handled by the monitor’s mutex Condition synchronization Wait until a certain condition holds Signal waiting threads when the condition holds

40 Monitors and condition variables Condition variables (cv) for use within monitors wait(cv) thread blocked (queued) until condition holds monitor mutex released!! signal(cv) signals the condition and unblocks (dequeues) a thread

41 Monitor structures
[Figure: internal structure of a monitor]
monitor entry queue – list of threads waiting to enter the monitor
“entry” methods – can be called from outside the monitor; only one active at any moment
shared data (x, y) – local to the monitor
condition variables – each has an associated list of waiting threads
initialization code
local methods

42 Monitor example for mutual exclusion

monitor BoundedBuffer
  var buffer: ...;
      nextIn, nextOut: ...;
  entry deposit(c: char) begin ... end
  entry remove(var c: char) begin ... end
end BoundedBuffer

process Producer
begin
  loop
    BoundedBuffer.deposit(c)
  end loop
end Producer

process Consumer
begin
  loop
    BoundedBuffer.remove(c)
  end loop
end Consumer

43 Observations That’s much simpler than the semaphore-based solution to producer/consumer (bounded buffer)! … but where is the mutex? … and what do the bodies of the monitor procedures look like?

44 Monitor example with condition variables

monitor BoundedBuffer
  var buffer: array[0..n-1] of char
      nextIn, nextOut: 0..n-1 := 0
      fullCount: 0..n := 0
      notEmpty, notFull: condition

  entry deposit(c: char)
  begin
    if (fullCount = n) then
      wait(notFull)
    end if
    buffer[nextIn] := c
    nextIn := (nextIn + 1) mod n
    fullCount := fullCount + 1
    signal(notEmpty)
  end deposit

  entry remove(var c: char)
  begin
    if (fullCount = 0) then
      wait(notEmpty)
    end if
    c := buffer[nextOut]
    nextOut := (nextOut + 1) mod n
    fullCount := fullCount - 1
    signal(notFull)
  end remove
end BoundedBuffer

45 Condition variables “Condition variables allow processes to synchronize based on some state of the monitor variables.”

46 Condition variables in producer/consumer “NotFull” condition “NotEmpty” condition Operations Wait() and Signal() allow synchronization within the monitor When a producer thread adds an element... A consumer may be sleeping Need to wake the consumer... Signal

47 Condition synchronization semantics “Only one thread can be executing in the monitor at any one time.” Scenario: Thread A is executing in the monitor Thread A does a signal waking up thread B What happens now? Signaling and signaled threads can not both run! … so which one runs, which one blocks, and on what queue?

48 Monitor design choices Condition variables introduce a problem for mutual exclusion only one process active in the monitor at a time, so what to do when a process is unblocked on signal? must not block holding the mutex, so what to do when a process blocks on wait? Should signals be stored/remembered? signals are not stored if signal occurs before wait, signal is lost! Should condition variables count?

49 Monitor design choices Choices when A signals a condition that unblocks B A waits for B to exit the monitor or blocks again B waits for A to exit the monitor or block Signal causes A to immediately exit the monitor or block (… but awaiting what condition?) Choices when A signals a condition that unblocks B & C B is unblocked, but C remains blocked C is unblocked, but B remains blocked Both B & C are unblocked … and compete for the mutex? Choices when A calls wait and blocks a new external process is allowed to enter but which one?

50 Option 1: Hoare semantics What happens when a Signal is performed? signaling thread (A) is suspended signaled thread (B) wakes up and runs immediately Result: B can assume the condition is now true/satisfied Hoare semantics give strong guarantees Easier to prove correctness When B leaves monitor, A can run. A might resume execution immediately... or maybe another thread (C) will slip in!

51 Option 2: MESA Semantics (Xerox PARC) What happens when a Signal is performed? the signaling thread (A) continues. the signaled thread (B) waits. when A leaves monitor, then B runs. Issue: What happens while B is waiting? can another thread (C) run after A signals, but before B runs? In MESA semantics a signal is more like a hint Requires B to recheck the state of the monitor variables (the invariant) to see if it can proceed or must wait some more

52 Code for the “deposit” entry routine – Hoare Semantics

monitor BoundedBuffer
  var buffer: array[N] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    if cntFull == N
      notFull.Wait()
    endIf
    buffer[nextIn] = c
    nextIn = (nextIn + 1) mod N
    cntFull = cntFull + 1
    notEmpty.Signal()
  endEntry

  entry remove()
    ...
endMonitor

53 Code for the “deposit” entry routine – MESA Semantics

monitor BoundedBuffer
  var buffer: array[N] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    while cntFull == N
      notFull.Wait()
    endWhile
    buffer[nextIn] = c
    nextIn = (nextIn + 1) mod N
    cntFull = cntFull + 1
    notEmpty.Signal()
  endEntry

  entry remove()
    ...
endMonitor

54 Code for the “remove” entry routine – Hoare Semantics

monitor BoundedBuffer
  var buffer: array[N] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    ...

  entry remove()
    if cntFull == 0
      notEmpty.Wait()
    endIf
    c = buffer[nextOut]
    nextOut = (nextOut + 1) mod N
    cntFull = cntFull - 1
    notFull.Signal()
  endEntry
endMonitor

55 Code for the “remove” entry routine – MESA Semantics

monitor BoundedBuffer
  var buffer: array[N] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    ...

  entry remove()
    while cntFull == 0
      notEmpty.Wait()
    endWhile
    c = buffer[nextOut]
    nextOut = (nextOut + 1) mod N
    cntFull = cntFull - 1
    notFull.Signal()
  endEntry
endMonitor
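Python's threading.Condition has MESA-style (hint) semantics, so the while-loop recheck from the MESA slides carries over verbatim. A sketch of the BoundedBuffer monitor (buffer size and data are illustrative), with one lock serving as the monitor mutex and two condition variables attached to it:

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one mutex plus two condition variables."""
    def __init__(self, n):
        self.buf = [None] * n
        self.n = n
        self.next_in = self.next_out = self.cnt_full = 0
        self.lock = threading.Lock()              # the monitor mutex
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def deposit(self, c):
        with self.lock:                           # enter the monitor
            while self.cnt_full == self.n:        # MESA: recheck after waking
                self.not_full.wait()
            self.buf[self.next_in] = c
            self.next_in = (self.next_in + 1) % self.n
            self.cnt_full += 1
            self.not_empty.notify()               # signal is only a hint

    def remove(self):
        with self.lock:                           # enter the monitor
            while self.cnt_full == 0:             # MESA: recheck after waking
                self.not_empty.wait()
            c = self.buf[self.next_out]
            self.next_out = (self.next_out + 1) % self.n
            self.cnt_full -= 1
            self.not_full.notify()
            return c

bb = BoundedBuffer(4)
data = list("monitors")
out = []
p = threading.Thread(target=lambda: [bb.deposit(ch) for ch in data])
q = threading.Thread(target=lambda: out.extend(bb.remove() for _ in data))
p.start(); q.start()
p.join(); q.join()
```

Replacing the while loops with if tests would only be safe under Hoare semantics, which Python does not provide.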

56 “Hoare Semantics” What happens when a Signal is performed? The signaling thread (A) is suspended. The signaled thread (B) wakes up and runs immediately. B can assume the condition is now true/satisfied From the original Hoare Paper: “No other thread can intervene [and enter the monitor] between the signal and the continuation of exactly one waiting thread.” “If more than one thread is waiting on a condition, we postulate that the signal operation will reactivate the longest waiting thread. This gives a simple neutral queuing discipline which ensures that every waiting thread will eventually get its turn.”

57 Implementing Hoare Semantics
Thread A holds the monitor lock
Thread A signals a condition that thread B was waiting on
Is thread B simply moved back to the ready queue? No – B should run immediately
Thread A must be suspended... the monitor lock must be passed from A to B
When B finishes it releases the monitor lock
Thread A must re-acquire the lock
Perhaps A is blocked, waiting to re-acquire the lock

58 Implementing Hoare Semantics Problem: Possession of the monitor lock must be passed directly from A to B and then back to A Simply ending monitor entry methods with monLock.Unlock() … will not work A’s request for the monitor lock must be expedited somehow

59 Implementing Hoare Semantics Implementation Ideas: Consider a thread like A that hands off the mutex lock to a signaled thread, to be “urgent”. Thread C is not “urgent” Consider two wait lists associated with each MonitorLock UrgentlyWaitingThreads NonurgentlyWaitingThreads Want to wake up urgent threads first, if any

60 Brinch-Hansen Semantics Hoare Semantics On signal, allow signaled process to run Upon its exit from the monitor, signaler process continues. Brinch-Hansen Semantics Signaler must immediately exit following any invocation of signal Restricts the kind of solutions that can be written … but monitor implementation is easier

61 Reentrant code A function/method is said to be reentrant if... A function that has been invoked may be invoked again before the first invocation has returned, and will still work correctly Recursive routines are reentrant In the context of concurrent programming... A reentrant function can be executed simultaneously by more than one thread, with no ill effects

62 Reentrant Code
Consider this function...

integer seed;
integer rand() {
  seed = seed * ...;   // constant lost in transcription
  return seed;
}

What if it is executed by different threads concurrently?

63 Reentrant Code
Consider this function...

integer seed;
integer rand() {
  seed = seed * ...;   // constant lost in transcription
  return seed;
}

What if it is executed by different threads concurrently?
The results may be “random”
This routine is not reentrant!

64 When is code reentrant? Some variables are “local” -- to the function/method/routine “global” -- sometimes called “static” Access to local variables? A new stack frame is created for each invocation Each thread has its own stack What about access to global variables? Must use synchronization!

65 Making this function threadsafe

integer seed;
semaphore m = 1;

integer rand() {
  down(m);
  seed = seed * ...;   // constant lost in transcription
  tmp = seed;
  up(m);
  return tmp;
}

66 Making this function reentrant

integer seed;

integer rand( *seed ) {
  *seed = *seed * ...;   // constant lost in transcription
  return *seed;
}
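The same fix reads naturally in Python: keep the generator state with the caller instead of in a global. The LCG constants below are illustrative stand-ins (the slide's constants were lost in transcription):

```python
def rand(seed):
    """Reentrant linear congruential generator: all state belongs to the
    caller, so concurrent invocations with separate seeds cannot interfere.
    The constants are illustrative, not the ones from the original slide."""
    return (seed * 1103515245 + 12345) % (2 ** 31)

# Each thread threads its own seed value through successive calls:
s1 = rand(1)
s2 = rand(s1)
```

Because the function touches no shared state, it is both reentrant and thread-safe without any locking.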

67 Message Passing
Interprocess communication without shared memory – works across machine boundaries
Message passing can be used for synchronization or general communication
Processes use send and receive primitives
receive can block (like waiting on a semaphore)
send unblocks a process blocked on receive (just as a signal unblocks a waiting process)

68 Producer-consumer with message passing The basic idea: After producing, the producer sends the data to consumer in a message The system buffers messages The producer can out-run the consumer The messages will be kept in order But how does the producer avoid overflowing the buffer? After consuming the data, the consumer sends back an “empty” message A fixed number of messages (N=100) The messages circulate back and forth.

69 Producer-consumer with message passing

thread producer
  var c, em: char
  while true
    // Produce char c...
    Receive(consumer, &em)   -- Wait for an empty msg
    Send(consumer, &c)       -- Send c to consumer
  endWhile
end

70 Producer-consumer with message passing

const N = 100                -- Size of message buffer

var em: char
for i = 1 to N               -- Get things started by
  Send(producer, &em)        -- sending N empty messages
endFor

thread consumer
  var c, em: char
  while true
    Receive(producer, &c)    -- Wait for a char
    Send(producer, &em)      -- Send empty message back
    // Consume char...
  endWhile
end
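The two-mailbox protocol can be sketched with queue.Queue standing in for the per-process mailboxes (the token count, payload, and names are illustrative). Priming the producer's mailbox with N empty messages plays the role of the startup loop above:

```python
import queue
import threading

N = 100                       # number of empty messages in circulation
to_producer = queue.Queue()   # producer's mailbox: empty messages
to_consumer = queue.Queue()   # consumer's mailbox: data messages
data = list("message passing")
out = []

# Get things started by sending N empty messages to the producer
for _ in range(N):
    to_producer.put("empty")

def producer():
    for c in data:
        to_producer.get()         # Receive: wait for an empty msg
        to_consumer.put(c)        # Send: c to consumer

def consumer():
    for _ in data:
        c = to_consumer.get()     # Receive: wait for a char
        out.append(c)             # Consume char
        to_producer.put("empty")  # Send empty message back

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

The fixed pool of empty messages bounds how far the producer can run ahead, exactly as the slide describes.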

71 Design choices for message passing Option 1: Mailboxes System maintains a buffer of sent, but not yet received, messages Must specify the size of the mailbox ahead of time Sender will be blocked if the buffer is full Receiver will be blocked if the buffer is empty

72 Design choices for message passing Option 2: No buffering If Send happens first, the sending thread blocks If Receiver happens first, the receiving thread blocks Sender and receiver must Rendezvous (ie. meet) Both threads are ready for the transfer The data is copied / transmitted Both threads are then allowed to proceed

73 Barriers (a) Processes approaching a barrier (b) All processes but one blocked at barrier (c) Last process arrives; all are let through
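Python provides this primitive directly as threading.Barrier. A sketch with three processes (thread count and log structure are illustrative): no thread's post-barrier work can begin until the last thread arrives:

```python
import threading

barrier = threading.Barrier(3)
log = []
log_lock = threading.Lock()

def worker(i):
    with log_lock:
        log.append(("before", i))    # work done while approaching the barrier
    barrier.wait()                   # block until all 3 threads arrive
    with log_lock:
        log.append(("after", i))     # only runs once everyone has arrived

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```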

74 Quiz What is the difference between a monitor and a semaphore? Why might you prefer one over the other? How do the wait/signal methods of a condition variable differ from the up/down methods of a semaphore? What is the difference between Hoare and Mesa semantics for condition variables? What implications does this difference have for code surrounding a wait() call?

75 Scheduling
[Figure: process state model – New, Ready, Running, Blocked, Termination]
Scheduling is a responsibility of the Operating System

76 CPU scheduling criteria CPU Utilization – how busy is the CPU? Throughput – how many jobs finished/unit time? Turnaround Time – how long from job submission to job termination? Response Time – how long (on average) does it take to get a “response” from a “stimulus”? Missed deadlines – were any deadlines missed?

77 Scheduler options
Priorities
May use priorities to determine who runs next (based on amount of memory, order of arrival, etc.)
Dynamic vs. Static algorithms
Dynamic algorithms alter the priority of tasks while they are in the system (possibly with feedback)
Static algorithms typically assign a fixed priority when the job is initially started
Preemptive vs. Nonpreemptive
Preemptive systems allow a task to be interrupted at any time so that the O.S. can take over again

78 Scheduling policies
First-Come, First-Served (FIFO)
Shortest Job First (non-preemptive)
Shortest Job First (with preemption)
Round-Robin Scheduling
Priority Scheduling
Real-Time Scheduling

79 First-Come, First-Served (FIFO) Start jobs in the order they arrive (FIFO queue) Run each job until completion

80 First-Come, First-Served (FIFO)
Start jobs in the order they arrive (FIFO queue)
Run each job until completion
[Worked example, built up over slides 80–96: a table of per-process Arrival Time, Processing Time, Delay, and Turnaround Time (total time taken, from submission to completion). The table values did not survive transcription.]

97 Shortest Job First Select the job with the shortest (expected) running time. Non-preemptive. Same job mix as the FIFO example.
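A minimal non-preemptive SJF sketch, again with a hypothetical job set. Compared with FIFO, the short job C now finishes before the earlier-arriving long job B.

```python
def sjf(jobs):
    """Non-preemptive SJF: among jobs that have arrived, start the one
    with the shortest burst and run it to completion.
    jobs: list of (name, arrival, burst)."""
    pending = sorted(jobs, key=lambda j: j[1])   # by arrival time
    clock, results = 0, {}
    while pending:
        ready = [j for j in pending if j[1] <= clock]
        if not ready:                 # idle until the next arrival
            clock = pending[0][1]
            continue
        job = min(ready, key=lambda j: j[2])     # shortest expected burst
        pending.remove(job)
        name, arrival, burst = job
        clock += burst                # runs to completion once started
        results[name] = clock - arrival
    return results

print(sjf([("A", 0, 3), ("B", 1, 5), ("C", 2, 2)]))  # {'A': 3, 'C': 3, 'B': 9}
```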


113 Shortest Remaining Time Preemptive version of SJF, run on the same job mix. (The slide tabulates Arrival Time, Processing Time, Turnaround Time, and Delay per process.)
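Shortest Remaining Time can be sketched by re-deciding at every time unit. The jobs here are hypothetical, chosen so that a late-arriving short job preempts a long one, which non-preemptive SJF would not do.

```python
def srt(jobs):
    """Preemptive SJF: each time unit, run the arrived job with the
    least remaining processing time.  jobs: list of (name, arrival, burst)."""
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: a for name, a, _ in jobs}
    clock, results = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                # nothing has arrived yet
            clock += 1
            continue
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1            # run one time unit, then reconsider
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            results[n] = clock - arrival[n]
    return results

# B arrives at t=1 with only 2 units of work and preempts A immediately.
print(srt([("A", 0, 8), ("B", 1, 2)]))  # {'B': 2, 'A': 10}
```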


129 Round-Robin Scheduling Goal: enable interactivity. Limit the amount of CPU that a process can have at one time. Time quantum: the amount of time the OS gives a process before intervening (the "time slice"). Typically 1 to 100 ms.


158 Round-Robin Scheduling The effectiveness of round-robin depends on the number of jobs and the size of the time quantum. A large number of jobs means the time between successive runs of any one job increases: slow responses. A larger time quantum also increases the time between runs of a job: slow responses. A smaller time quantum means more frequent scheduling and faster responses, but also more overhead!
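A round-robin sketch over a hypothetical job set makes the mechanics concrete: a preempted job goes to the back of the ready list, behind anything that arrived during its slice.

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, arrival, burst). Returns turnaround per job."""
    jobs = sorted(jobs, key=lambda j: j[1])
    arrival = {n: a for n, a, _ in jobs}
    remaining = {n: b for n, _, b in jobs}
    ready, results, clock, i = deque(), {}, 0, 0
    while len(results) < len(jobs):
        while i < len(jobs) and jobs[i][1] <= clock:   # admit arrivals
            ready.append(jobs[i][0]); i += 1
        if not ready:                       # idle until the next arrival
            clock = jobs[i][1]
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name]) # one time slice, at most
        clock += run
        remaining[name] -= run
        # jobs that arrived during this slice queue ahead of the preempted job
        while i < len(jobs) and jobs[i][1] <= clock:
            ready.append(jobs[i][0]); i += 1
        if remaining[name] > 0:
            ready.append(name)              # preempted: back of the ready list
        else:
            results[name] = clock - arrival[name]
    return results

print(round_robin([("A", 0, 3), ("B", 1, 5), ("C", 2, 2)], quantum=2))
# {'C': 4, 'A': 7, 'B': 9}
```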

159 Scheduling in general-purpose systems

160 Priority scheduling Assign a priority (number) to each process; schedule processes based on their priority. Higher-priority processes get more CPU time. Managing priorities: can use "nice" to reduce your priority; can periodically adjust a process's priority (prevents starvation of a lower-priority process); can improve performance of I/O-bound processes by basing priority on the fraction of the last quantum used.

161 Multi-Level Queue Scheduling (Diagram: queues from high to low priority feeding the CPU.) Multiple queues, each with its own priority; equivalently, each priority level has its own ready queue. Within each queue: round-robin scheduling. Simplest approach: a process's priority is fixed and unchanging.


164 Multi-Level Feedback Queue Scheduling Problem: fixed priorities are too restrictive; processes exhibit varying ratios of CPU to I/O time. Dynamic priorities: priorities are altered over time, as process behavior changes! Issue: when do you change the priority of a process, and how often? Solution: let the amount of CPU used indicate how a process is to be handled. Expired time quantum → more processing needed. Unexpired time quantum → less processing needed. (Adjusting quantum and frequency vs. adjusting priority?)

165 Multi-Level Feedback Queue Scheduling (Diagram: queues from high to low priority feeding the CPU.) n priority levels, round-robin scheduling within a level. Quanta increase as priority decreases. Jobs are demoted to lower priorities if they do not complete within the current quantum.
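One way to sketch the demotion rule above: always serve the highest non-empty queue, with quanta that grow as priority drops, demoting any job that uses its full quantum. Arrivals and any priority boosting are omitted; the job sizes and quanta are hypothetical.

```python
from collections import deque

def mlfq(bursts, quanta=(2, 4, 8)):
    """bursts: {job: CPU units needed}.  Quanta grow as priority decreases;
    a job that exhausts its quantum is demoted to the next (slower) queue.
    Returns the order of (job, queue level) runs."""
    queues = [deque() for _ in quanta]
    remaining = dict(bursts)
    for job in remaining:
        queues[0].append(job)            # every job starts at top priority
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        job = queues[level].popleft()
        run = min(quanta[level], remaining[job])
        remaining[job] -= run
        trace.append((job, level))
        if remaining[job] > 0:           # used its full quantum: demote
            queues[min(level + 1, len(quanta) - 1)].append(job)
    return trace

# The long job X is demoted after its first slice; short job Y never is.
print(mlfq({"X": 5, "Y": 2}))  # [('X', 0), ('Y', 0), ('X', 1)]
```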

166 Multi-Level Feedback Queue Scheduling Details, details, details... What should the starting priority be: high or low? When should a job move between priorities? How long should each time quantum be?


169 Lottery Scheduling Scheduler gives each thread some lottery tickets. To select the next process to run, the scheduler randomly selects a lottery number; the winning process gets to run. Example: Thread A gets 50 tickets (50% of CPU), Thread B gets 15 tickets (15% of CPU), Thread C gets 35 tickets (35% of CPU); there are 100 tickets outstanding. Flexible, fair, responsive.
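The ticket draw can be sketched directly; over many draws, each thread's win share converges to its ticket share, which is what ties tickets to CPU share. The `rng` parameter is only there so a caller can substitute a seeded generator.

```python
import random

def lottery_pick(tickets, rng=random):
    """tickets: {thread: ticket count}.  Draw one winning ticket uniformly;
    a thread wins with probability tickets[t] / total."""
    total = sum(tickets.values())
    draw = rng.randrange(total)          # the winning ticket number
    for thread, count in tickets.items():
        if draw < count:
            return thread
        draw -= count                    # skip past this thread's tickets

tickets = {"A": 50, "B": 15, "C": 35}    # the slide's example
wins = {t: 0 for t in tickets}
for _ in range(100_000):
    wins[lottery_pick(tickets)] += 1
# wins / 100_000 approaches 0.50 / 0.15 / 0.35 -- each thread's CPU share
```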


172 A Brief Look at Real-Time Systems Assume processes are relatively periodic, with a fixed amount of work per period (e.g., sensor systems or multimedia data). Two "main" types of schedulers... Rate-Monotonic schedulers: assign a fixed, unchanging priority to each process; no dynamic adjustment of priorities; less aggressive allocation of the processor. Earliest-Deadline-First schedulers: assign dynamic priorities based upon deadlines.

173 A Brief Look at Real-Time Systems Typically, real-time systems involve several steps that aren't in traditional systems. Admission control: all processes must ask for resources ahead of time; if sufficient resources exist, the job is "admitted" into the system. Resource allocation: upon admission, the appropriate resources are reserved for the task. Resource enforcement: carry out the resource allocations properly.

174 Rate Monotonic Schedulers For preemptable, periodic processes (tasks), assign a fixed priority to each task. T = the period of the task; C = the amount of processing per task period. Example: process P1 has T = 1 second and C = 1/2 second per period. In RMS scheduling, the question to answer is: what priority should be assigned to a given task?
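A standard companion result, not shown on the slide, is the Liu & Layland utilization bound: a set of n periodic tasks is guaranteed RMS-schedulable if total utilization sum(C_i/T_i) is at most n(2^(1/n) − 1). (The test is sufficient, not necessary.) A quick check using the slide's P1 plus a hypothetical second task:

```python
def rms_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling.
    tasks: list of (C, T) pairs -- processing per period, and period."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)       # 1.0 for n=1, ~0.828 for n=2, -> ln 2
    return utilization <= bound

# The slide's P1: C = 0.5 s of work every T = 1 s period (U = 0.5).
print(rms_schedulable([(0.5, 1.0)]))                # True: bound for n=1 is 1.0
print(rms_schedulable([(0.5, 1.0), (0.4, 1.0)]))    # False: 0.9 > ~0.828
```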

175 Rate Monotonic Schedulers (Timeline figure: execution of tasks P1 and P2.)

176 Rate Monotonic Schedulers (Figure compares the two possible assignments: P1's priority above P2's vs. P2's priority above P1's.)