1 Lecture 7: Scheduling
4/24/2017

- preemptive/non-preemptive scheduler
- CPU bursts
- scheduling policy goals and metrics
- basic scheduling algorithms:
  - First Come First Served (FCFS)
  - Round Robin (RR)
  - Shortest Job First (SJF)
  - Shortest Remaining Time (SRT)
  - priority
  - multilevel queue
  - multilevel feedback queue
- Unix scheduling

2 Types of CPU Schedulers

- the CPU scheduler (dispatcher or short-term scheduler) selects a process from the ready queue and lets it run on the CPU
- types:
  - non-preemptive: executes only when
    - a process terminates, or
    - a process switches from running to blocked
    simple to implement, but unsuitable for time-sharing systems
  - preemptive: executes at the times above and also when
    - a process is created,
    - a blocked process becomes ready, or
    - a timer interrupt occurs
    more overhead, but keeps long processes from monopolizing the CPU
    must not preempt the OS kernel while it is servicing a system call (e.g., reading a file), or the OS may end up in an inconsistent state

3 CPU Bursts

- a process alternates between
  - computing (a CPU burst), and
  - waiting for I/O
- processes can be loosely classified into
  - CPU-bound: does mostly computation (long CPU bursts) and very little I/O
  - I/O-bound: does mostly I/O and very little computation (short CPU bursts)

4 Predicting the Length of CPU Burst

- some schedulers need to know the size of the next CPU burst
- it cannot be known deterministically, but it can be estimated on the basis of previous bursts
- exponential average: τn+1 = α·tn + (1 − α)·τn, where
  - τn+1 is the predicted value, tn is the actual nth burst, and 0 ≤ α ≤ 1
- extreme cases:
  - α = 0: τn+1 = τn — recent history does not count
  - α = 1: τn+1 = tn — only the last CPU burst counts
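The exponential average can be sketched in a few lines of Python (the function name and the initial guess `tau0` are illustrative, not from the slides):

```python
def exponential_average(bursts, alpha, tau0):
    """Predict the next CPU burst length from measured bursts using
    tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = tau0  # initial guess used for the very first prediction
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau
```

With `alpha = 0` the prediction never moves off the initial guess; with `alpha = 1` it always equals the most recent burst, matching the two extreme cases above.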

5 Scheduling Policy

- system-oriented goals:
  - maximize CPU utilization: keep the CPU as busy as possible; in particular, the CPU should not be idle if there are processes ready to run
  - maximize throughput: the number of processes completed per unit time
  - ensure fairness of CPU allocation; avoid starvation (a process is never scheduled)
  - minimize overhead incurred due to scheduling
    - in time or CPU utilization (e.g., due to context switches or policy computation)
    - in space (e.g., data structures)
- user-oriented goals:
  - minimize turnaround time: the interval from when a process becomes ready until it is done
  - minimize average and worst-case waiting time: the sum of the periods spent waiting in the ready queue
  - minimize average and worst-case response time: the time from a process entering the ready queue until it is first scheduled

6 First Come First Served (FCFS)

- add to the tail of the ready queue, dispatch from the head

Example 1 (all arrive at time 0, in order P1, P2, P3):

  Process  Burst Time
  P1       24
  P2       3
  P3       3

  Schedule: P1 [0–24], P2 [24–27], P3 [27–30]
  average waiting time = (0 + 24 + 27) / 3 = 17

Example 2 (same processes, arrival order P3, P2, P1):

  Schedule: P3 [0–3], P2 [3–6], P1 [6–30]
  average waiting time = (0 + 3 + 6) / 3 = 3
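A minimal simulation of the two examples (assuming, as on the slide, that all processes arrive at time 0 and are served in list order; the function name is my own):

```python
def fcfs_average_wait(bursts):
    """Average waiting time under FCFS when all processes arrive at
    time 0 and are served in the order given."""
    time = 0        # clock: when the next process starts running
    total_wait = 0
    for burst in bursts:
        total_wait += time  # this process waited from 0 until `time`
        time += burst
    return total_wait / len(bursts)
```

`fcfs_average_wait([24, 3, 3])` reproduces Example 1's answer of 17, and `fcfs_average_wait([3, 3, 24])` gives Example 2's 3 — the same processes, but a very different average depending on arrival order.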

7 FCFS Evaluation

- non-preemptive
- response time: may be long and has high variance
- convoy effect: when one long-burst process is followed by many short-burst processes, the short processes have to wait a long time
- throughput: not emphasized
- fairness: penalizes short-burst processes
- starvation: not possible
- overhead: minimal

8 Round-Robin (RR)

- preemptive version of the FCFS policy
- define a fixed time slice (also called a time quantum), typically 10–100 ms
- choose the process from the head of the ready queue
- run that process for at most one time slice; if it has not completed or blocked, add it to the tail of the ready queue
- choose another process from the head of the ready queue, run it for at most one time slice, and so on
- implemented using a hardware timer that interrupts at periodic intervals, and a FIFO ready queue (add to tail, take from head)

9 RR Examples

Example 1 (time slice = 4; all arrive at time 0, in order P1, P2, P3):

  Process  Burst Time
  P1       24
  P2       3
  P3       3

  Schedule: P1 [0–4], P2 [4–7], P3 [7–10], P1 [10–14], [14–18], [18–22], [22–26], [26–30]
  average waiting time = ((10 − 4) + 4 + 7) / 3 = 5.66

Example 2 (time slice = 4; arrival order P3, P2, P1):

  Schedule: P3 [0–3], P2 [3–6], P1 [6–30]
  average waiting time = (0 + 3 + 6) / 3 = 3
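The RR examples can be checked with a short queue-based simulation (again assuming all processes arrive at time 0; the function name is my own):

```python
from collections import deque

def rr_average_wait(bursts, quantum):
    """Average waiting time under Round-Robin when all processes
    arrive at time 0 in list order; wait = completion time - burst."""
    queue = deque(enumerate(bursts))   # (index, remaining time) pairs
    completion = [0] * len(bursts)
    time = 0
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for at most one time slice
        time += run
        if remaining > run:
            queue.append((i, remaining - run))  # back to the tail
        else:
            completion[i] = time
    waits = (completion[i] - b for i, b in enumerate(bursts))
    return sum(waits) / len(bursts)
```

`rr_average_wait([24, 3, 3], 4)` gives 17/3 ≈ 5.66 as in Example 1, and `rr_average_wait([3, 3, 24], 4)` gives 3 as in Example 2.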

10 RR Evaluation

- preemptive (at the end of the time slice)
- response time: good for short processes
  - a long process may have to wait n·q time units for another time slice (n = number of other processes, q = length of the time slice)
- throughput: depends on the time slice
  - too small: too many context switches
  - too large: approximates FCFS
- fairness: penalizes I/O-bound processes (they may not use their full time slice)
- starvation: not possible
- overhead: somewhat low

11 Shortest Job First (SJF)

- choose the process that has the smallest next CPU burst, and run it non-preemptively

Example (all arrive at time 0):

  Process  Burst Time
  P1       6
  P2       8
  P3       7
  P4       3

  SJF schedule: P4 [0–3], P1 [3–9], P3 [9–16], P2 [16–24]
  average waiting time = (3 + 16 + 9 + 0) / 4 = 7

  FCFS on the same example: P1 [0–6], P2 [6–14], P3 [14–21], P4 [21–24]
  average waiting time = (0 + 6 + 14 + 21) / 4 = 10.25
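When every process arrives at time 0, SJF is simply FCFS over the bursts sorted in increasing order, which makes the simulation one line longer than FCFS (function name is my own):

```python
def sjf_average_wait(bursts):
    """Average waiting time under non-preemptive SJF when all
    processes arrive at time 0: serve in increasing burst order."""
    time = 0
    total_wait = 0
    for burst in sorted(bursts):  # shortest job first
        total_wait += time
        time += burst
    return total_wait / len(bursts)
```

`sjf_average_wait([6, 8, 7, 3])` reproduces the slide's answer of 7, versus 10.25 for the same bursts served in FCFS order.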

12 SJF Evaluation

- non-preemptive
- response time: okay (worse than RR)
  - long processes may have to wait until a large number of short processes finish
- provably optimal average waiting time: SJF minimizes the average waiting time for a given set of processes (if preemption is not considered)
  - intuition: moving a short process ahead of a long one in the ready queue decreases the short process's waiting time by more than it increases the long one's
- throughput: better than RR
- fairness: penalizes long processes
- starvation: possible for long processes
- overhead: can be high (requires recording and estimating CPU burst lengths)

13 Shortest Remaining Time (SRT)

- preemptive version of the SJF policy
- choose the process that has the smallest next CPU burst, and run it until it terminates or blocks, or until another process enters the ready queue
- at that point, choose another process to run if one has a smaller expected CPU burst than what is left of the current process's CPU burst

14 SRT Examples

  Process  Burst Time  Arrival Time
  P1       8           0
  P2       4           1
  P3       9           2
  P4       5           3

SJF (non-preemptive):
  Schedule: P1 [0–8], P2 [8–12], P4 [12–17], P3 [17–26]
  average waiting time = (0 + (8 − 1) + (17 − 2) + (12 − 3)) / 4 = 7.75

SRT (preemptive):
  Schedule: P1 [0–1], P2 [1–5], P4 [5–10], P1 [10–17], P3 [17–26]
  average waiting time = ((0 + (10 − 1)) + (1 − 1) + (17 − 2) + (5 − 3)) / 4 = 6.5
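The SRT example can be verified with a unit-by-unit simulation that always runs the ready process with the least remaining time (a sketch assuming integer arrival and burst times; names are my own):

```python
import heapq

def srt_average_wait(processes):
    """Average waiting time under SRT (preemptive SJF).
    `processes` is a list of (arrival, burst) pairs; the clock
    advances one unit at a time."""
    n = len(processes)
    remaining = [burst for _, burst in processes]
    completion = [0] * n
    ready = []  # min-heap of (remaining time, process index)
    order = sorted(range(n), key=lambda i: processes[i][0])
    time, done, next_arrival = 0, 0, 0
    while done < n:
        # admit every process that has arrived by the current time
        while next_arrival < n and processes[order[next_arrival]][0] <= time:
            i = order[next_arrival]
            heapq.heappush(ready, (remaining[i], i))
            next_arrival += 1
        if not ready:
            time += 1  # CPU idle until the next arrival
            continue
        _, i = heapq.heappop(ready)  # smallest remaining time wins
        time += 1
        remaining[i] -= 1
        if remaining[i] == 0:
            completion[i] = time
            done += 1
        else:
            heapq.heappush(ready, (remaining[i], i))
    waits = (completion[i] - processes[i][0] - processes[i][1]
             for i in range(n))
    return sum(waits) / n
```

Feeding in the table above, `srt_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)])` reproduces the slide's answer of 6.5.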

15 SRT Evaluation

- preemptive (at arrival of a process into the ready queue)
- response time: good (still worse than RR)
- provably optimal average waiting time
- throughput: high
- fairness: penalizes long processes
  - note that long processes may eventually become short processes, since only the remaining time matters
- starvation: possible for long processes
- overhead: can be high (requires recording and estimating CPU burst lengths)

16 Priority Scheduling

- policy: associate a priority with each process, and run the process with the highest priority
  - externally defined, e.g., based on importance: an employee's processes are given preference over a visitor's
  - internally defined, based on memory requirements, file requirements, CPU vs. I/O requirements, etc.
- SJF is priority scheduling where the priority is inversely proportional to the length of the next CPU burst
- evaluation:
  - starvation: possible for low-priority processes
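Non-preemptive priority dispatch is naturally expressed with a min-heap; a sketch assuming all processes are ready at once and, as in the Unix convention, a lower number means higher priority (names are illustrative):

```python
import heapq

def priority_order(processes):
    """Dispatch order under non-preemptive priority scheduling;
    `processes` is a list of (name, priority) pairs, lower value =
    higher priority."""
    heap = [(priority, name) for name, priority in processes]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)  # lowest value pops first
        order.append(name)
    return order
```

The heap makes each dispatch O(log n) instead of keeping the whole ready queue sorted.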

17 Multilevel Queue

- use several ready queues, and associate a different priority with each queue
- to schedule across queues, either:
  - always schedule from the highest-priority occupied queue (problem: processes in low-priority queues may starve), or
  - give each queue a certain share of CPU time
- assign new processes permanently to a particular queue
  - e.g., foreground vs. background; or system, interactive, editing, computing
- each queue has its own scheduling discipline
- example: two queues
  - foreground: 80% of CPU, using RR
  - background: 20% of CPU, using FCFS

18 Multilevel Feedback Queue

- feedback: allow the scheduler to move processes between queues to ensure fairness
  - aging: moving older processes to a higher-priority queue
  - decay: moving long-running processes to a lower-priority queue
- example: three queues, feedback with process decay
  - Q0: RR with a time slice of 8 milliseconds
  - Q1: RR with a time slice of 16 milliseconds
  - Q2: FCFS scheduling
- a new job enters queue Q0; when it gains the CPU, it receives 8 milliseconds
  - if it does not finish in 8 milliseconds, it is moved to queue Q1
  - in Q1 the job is again served RR and receives 16 additional milliseconds
  - if it still does not complete, it is preempted and moved to queue Q2
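The three-queue example above can be traced for a single job's CPU burst (a sketch that ignores competing jobs; the function name is my own):

```python
def mlfq_trace(burst):
    """Which queue serves each portion of one job's CPU burst in the
    three-queue example: Q0 (8 ms RR slice), Q1 (16 ms RR slice),
    Q2 (FCFS, runs to completion)."""
    slices = [8, 16]            # quanta for Q0 and Q1; Q2 is unbounded
    trace, level, remaining = [], 0, burst
    while remaining > 0:
        if level < len(slices):
            run = min(slices[level], remaining)
        else:
            run = remaining     # Q2 is FCFS: run to completion
        trace.append((f"Q{level}", run))
        remaining -= run
        level += 1              # decay: any leftover is served lower down
    return trace
```

A 30 ms burst gets 8 ms in Q0, 16 ms in Q1, and the last 6 ms in Q2, while a 5 ms burst finishes entirely in Q0.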

19 Traditional Unix CPU Scheduling

- avoid sorting the ready queue by priority (expensive); instead use multiple queues (32), each with a priority value (low value = high priority)
  - kernel processes (or user processes in kernel mode) have the lower values (0–49); kernel processes are not preemptible!
  - user processes have the higher values (50–127)
- choose a process from the occupied queue with the highest priority, and run it preemptively, using a timer (time slice typically around 100 ms); RR within each queue
- move processes between queues:
  - keep track of clock ticks (60 per second)
  - once per second, add the accumulated clock ticks to the priority value (lowering the process's priority)
  - also adjust the priority based on whether the process has used more than its "fair share" of CPU time compared to others
  - users can decrease their processes' priority
- note that Unix avoids the sorting of the priority queue that pure priority scheduling requires (and the overhead it incurs), yet achieves a similar effect with the multitude of queues and priorities
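The once-per-second recomputation can be sketched as follows; this is loosely modeled on 4.3BSD's formula (p_usrpri = PUSER + p_cpu/4 + 2*p_nice), and the constants here are illustrative assumptions, not taken from the slides:

```python
PUSER = 50  # illustrative base value for the user band (50-127)

def recompute_priority(cpu_ticks, nice=0):
    """Once per second, recent CPU usage raises the priority value
    (i.e., lowers the actual priority); `nice` lets users lower
    their own priority further."""
    return PUSER + cpu_ticks // 4 + 2 * nice

def decay_cpu(cpu_ticks):
    """Simplified decay: halve the accumulated ticks each second, so a
    process that stops hogging the CPU drifts back toward high
    priority (a stand-in for BSD's load-dependent decay factor)."""
    return cpu_ticks // 2
```

A process that burned 40 ticks last second ends up with a higher (worse) priority value than one that burned 8, and its penalty fades over the following seconds as the tick count decays.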

