1 CPU Scheduling Introduction to Operating Systems: Module 13

2 Overview
- Basic concepts
- Short, medium, long term scheduling
- Scheduling criteria
- Scheduling algorithms
- Advanced scheduling
  - Multiple-processor scheduling
  - Real-time scheduling
- Algorithm evaluation methods

3 Basic Concepts
- During its life, a process alternates between executing instructions and waiting for resources to become available
  - Interactive processes spend most of their time waiting for user input; this doesn't require use of the CPU
- CPU-I/O burst cycle: process execution consists of a cycle of CPU execution and I/O waits
- Most CPU bursts are very short, but occasionally processing requires an extended CPU burst
  - Burst durations can be described as exponentially distributed

4 Histogram of CPU-burst Times
[Figure: histogram of CPU-burst duration vs. frequency, showing many short CPU bursts and few long CPU bursts]

5 Process States
[Figure: process state diagram with states inactive, ready, running, waiting, and terminated, connected by the transitions create, activate, suspend, dispatch, preempt, block, wakeup, and terminate]

6 Short-term CPU Scheduler
- Selects from among the processes (threads) in memory that are ready to execute, and allocates the CPU to one of them
- CPU scheduling decisions may take place when some process (not necessarily the running process):
  1. Switches from running to waiting
  2. Switches from running to ready
  3. Switches from waiting to ready
  4. Terminates
- Scheduling that uses only transitions 1 and 4 is non-preemptive
- All other scheduling is preemptive

7 Dispatcher
- The dispatcher module gives control of the CPU to the process (thread) selected by the short-term scheduler; this involves:
  - switching PCB contents to/from registers (not needed when switching between threads in the same task)
  - switching to user mode
  - jumping to the proper location in the user program to restart that program
- Dispatch latency: the time it takes for the dispatcher to stop one process and start another running

8 Medium-term Process Scheduler
- Manages the degree of multiprogramming
  - We need to prevent thrashing
- If system resources allow it, choose a suspended process to be re-activated, loading its context from disk into memory and placing it in a non-suspended state
- The re-activated process is then eligible to execute on the CPU

9 Long-term Process Scheduling
- Controls the creation of new processes
- Admission to the system may be delayed until sufficient resources are available
- The long-term scheduler controls the degree of multiprogramming so that active processes need not be suspended to avoid thrashing

10 Short-term Scheduling Metrics
- CPU utilization: user process time / total time
- Throughput rate: number of processes that finish per unit time
- Turnaround time: total time to execute a process, from submission to completion
- Waiting time: time a process has spent in the ready queue
- Response time: time from when a request was submitted until the first response is produced
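As a concrete illustration of these definitions, here is a minimal Python sketch (not from the slides; it reuses the numbers of the FCFS example on slide 12 below) that derives turnaround and waiting time from each process's arrival, burst, and completion times:

    # Sketch: computing turnaround and waiting time for a completed,
    # non-preemptive schedule. Process names and values are illustrative.
    def metrics(processes):
        """processes: list of (name, arrival, burst, completion)."""
        results = {}
        for name, arrival, burst, completion in processes:
            turnaround = completion - arrival      # total time in the system
            waiting = turnaround - burst           # time spent in the ready queue
            results[name] = (turnaround, waiting)
        return results

    # FCFS run of the slide-12 workload: P1=24, P2=3, P3=3, all arriving at 0
    print(metrics([("P1", 0, 24, 24), ("P2", 0, 3, 27), ("P3", 0, 3, 30)]))
    # average waiting time = (0 + 24 + 27) / 3 = 17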

11 Optimization Criteria
- Maximize CPU utilization
- Maximize throughput rate
- Minimize average turnaround time
- Minimize average waiting time
- Minimize response time

12 First-Come, First-Served (FCFS) Scheduling
- Example:
    Process   Burst Time
    P1        24
    P2        3
    P3        3
- Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:
    |   P1   | P2 | P3 |
    0        24   27   30
- Waiting time for P1 = 0; P2 = 24; P3 = 27
- Average waiting time: (0 + 24 + 27)/3 = 17

13 FCFS Scheduling
- Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:
    | P2 | P3 |   P1   |
    0    3    6        30
- Waiting time for P1 = 6; P2 = 0; P3 = 3
- Average waiting time: (6 + 0 + 3)/3 = 3
- Much better than the previous case
- Convoy effect: short processes get stuck waiting behind a long process
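The two orderings above can be checked with a short sketch (illustrative, not from the slides); it assumes all processes arrive at time 0 and are served in the listed order:

    # Sketch: FCFS waiting times for processes that all arrive at time 0,
    # served in the order given. Burst values are from slides 12 and 13.
    def fcfs_waiting_times(bursts):
        waits, clock = [], 0
        for burst in bursts:
            waits.append(clock)   # each process waits until all earlier ones finish
            clock += burst
        return waits

    print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27] -> average 17
    print(fcfs_waiting_times([3, 3, 24]))   # [0, 3, 6]   -> average 3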

14 Shortest Process Next (SPN) Scheduling
- Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
- Two schemes:
  - Non-preemptive (SPN): once the CPU is given to a process, it cannot be preempted until it completes its CPU burst
  - Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
- SPN is optimal among non-preemptive schedulers: it gives the minimum average waiting time for a given set of processes
- SRTF is waiting-time optimal over all schedulers

15 Idea for Proof
- Look at the total waiting cost for shortest process next (it suffices to look at the sum rather than the average)
- Assume t1 < t2 < ... < tn are the service times, with the jobs run back to back starting at time 0
- Wait time W = (t1) + (t1 + t2) + (t1 + t2 + t3) + ... + (t1 + t2 + ... + tn)
- Rearranging gives W = n*t1 + (n-1)*t2 + ... + 2*t(n-1) + 1*tn (written out below)
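Written out (a sketch of the standard argument, using the slide's assumption that t1 < t2 < ... < tn):

    W = \sum_{i=1}^{n} \sum_{j=1}^{i} t_j
      = \sum_{i=1}^{n} (n - i + 1)\, t_i
      = n\,t_1 + (n-1)\,t_2 + \cdots + 2\,t_{n-1} + t_n

Since the coefficients n, n-1, ..., 1 are fixed by the number of jobs, W is minimized by pairing the largest coefficients with the smallest service times, i.e. by running the jobs in increasing order of service time; swapping any out-of-order pair can only decrease W.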

16 Example of (Non-Preemptive) SPN
    Process   Arrival Time   Burst Time
    P1        0.0            7
    P2        2.0            4
    P3        4.0            1
    P4        5.0            4
- Gantt chart:
    |   P1   | P3 |  P2  |  P4  |
    0        7    8      12     16
- SPN average waiting time = (0 + 6 + 3 + 7)/4 = 4
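A minimal non-preemptive SPN sketch (illustrative; the tuple layout and helper name are assumptions) reproduces this schedule:

    # Sketch: non-preemptive SPN on the slide-16 workload.
    # Each process is (name, arrival time, burst time).
    def spn(processes):
        remaining = list(processes)
        clock, order, waits = 0, [], {}
        while remaining:
            # among processes that have arrived, pick the shortest burst;
            # if none has arrived yet, jump ahead to the earliest arrival
            ready = [p for p in remaining if p[1] <= clock] or [min(remaining, key=lambda p: p[1])]
            name, arrival, burst = min(ready, key=lambda p: p[2])
            clock = max(clock, arrival)
            waits[name] = clock - arrival
            order.append((name, clock, clock + burst))
            clock += burst
            remaining.remove((name, arrival, burst))
        return order, waits

    order, waits = spn([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
    print(order)   # [('P1', 0, 7), ('P3', 7, 8), ('P2', 8, 12), ('P4', 12, 16)]
    print(waits)   # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7} -> average 4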

17 Example of (Preemptive) SRTF
    Process   Arrival Time   Burst Time
    P1        0.0            7
    P2        2.0            4
    P3        4.0            1
    P4        5.0            4
- Gantt chart:
    | P1 | P2 | P3 | P2 |  P4  |   P1   |
    0    2    4    5    7      11       16
- SRTF average waiting time = (9 + 1 + 0 + 2)/4 = 3

18 Determining the Length of the Next CPU Burst
- Can only estimate the length
- Use the lengths of previous CPU bursts, combined by exponential averaging:
    τn+1 = α tn + (1 − α) τn
  where tn is the measured length of the nth CPU burst, τn is the previous estimate, and 0 ≤ α ≤ 1
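A small sketch of this averaging rule (the initial estimate of 10 and the burst sequence are illustrative, chosen to show the estimate tracking the measured bursts):

    # Sketch: exponential averaging of CPU-burst lengths.
    # tau is the current estimate; alpha weights the most recent measured burst.
    def next_estimate(tau, measured_burst, alpha=0.5):
        return alpha * measured_burst + (1 - alpha) * tau

    tau = 10.0                            # initial guess (illustrative)
    for burst in [6, 4, 6, 4, 13, 13, 13]:
        print(round(tau, 2))
        tau = next_estimate(tau, burst)
    # with alpha = 0.5 the successive estimates are 10, 8, 6, 6, 5, 9, 11, ...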

19 Examples of Exponential Averaging
- α = 0  ⇒  τn+1 = τn
  - Recent history carries no weight
- α = 1  ⇒  τn+1 = tn
  - Only the actual last CPU burst carries weight

20 Examples of Exponential Averaging
- If we expand the formula, we get:
    τn+1 = α tn + (1 − α) α tn−1 + ... + (1 − α)^j α tn−j + ... + (1 − α)^(n−1) α t1 + (1 − α)^n τ1
- Since both α and (1 − α) are less than or equal to 1, each successive term carries less weight than its predecessor

21 Priority Scheduling
- A priority number (integer) is associated with each process
- The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
  - Both preemptive and non-preemptive variants are possible
- SPN can be viewed as priority scheduling where the priority is the duration of the next CPU burst
- Starvation: a low-priority process may never execute
- Solution: aging. As time progresses, increase the priority of processes that are ready (waiting for the CPU)

22 Round Robin (RR)
- Each process gets a small unit of CPU time (a time quantum). After this time has elapsed, the process is preempted and added to the end of the ready queue
- If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
- Performance
  - q large: RR behaves like FIFO
  - q small: context-switch overhead dominates; q must be large with respect to the context-switch time, otherwise overhead is too high
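A minimal round-robin sketch (illustrative; it assumes all processes are in the ready queue at time 0 and ignores context-switch overhead), which reproduces the quantum-20 trace on the next slide:

    # Sketch: round-robin with quantum q, all processes present at time 0.
    from collections import deque

    def round_robin(bursts, q):
        queue = deque(bursts.items())      # (name, remaining burst time)
        clock, trace = 0, []
        while queue:
            name, remaining = queue.popleft()
            run = min(q, remaining)
            trace.append((name, clock, clock + run))
            clock += run
            if remaining > run:
                queue.append((name, remaining - run))   # preempted, back of the queue
        return trace

    print(round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, q=20))
    # P1 0-20, P2 20-37, P3 37-57, P4 57-77, P1 77-97, P3 97-117,
    # P4 117-121, P1 121-134, P3 134-154, P3 154-162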

23 RR with Time Quantum = 20
    Process   Burst Time
    P1        53
    P2        17
    P3        68
    P4        24
- The Gantt chart is:
    | P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
    0    20   37   57   77   97   117  121  134  154  162
- Typically, RR gives higher average turnaround than SPN, but better response time

24 Highest Response Ratio Next (HRRN)
- A form of dynamic priority scheduling where a process's priority is calculated as R = 1 + w/s
  - R is the response ratio
  - w is the time spent waiting for the CPU
  - s is the expected duration of the next CPU burst (the service time)
- Doesn't really work as a preemptive algorithm; why?
  - For the currently running process, the time remaining on its current burst would stand in for the duration of its next burst
- Requires estimation of the service time
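Selecting the next process under HRRN can be sketched as follows (the tuple layout is an assumption; the numbers are taken from the HRRN example on the next slide, evaluated at time 7):

    # Sketch: picking the next process under HRRN.
    # Each entry is (name, time waited so far, estimated service time).
    def pick_hrrn(ready):
        def ratio(entry):
            _, waited, service = entry
            return 1 + waited / service     # R = 1 + w / s
        return max(ready, key=ratio)

    # At time 7: P2 has waited 5 (s=4), P3 has waited 3 (s=1), P4 has waited 2 (s=4);
    # P3 has the highest ratio (4.0) and runs next.
    print(pick_hrrn([("P2", 5, 4), ("P3", 3, 1), ("P4", 2, 4)]))   # ('P3', 3, 1)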

25 Example of HRRN
    Process   Arrival Time   Burst Time
    P1        0.0            7
    P2        2.0            4
    P3        4.0            1
    P4        5.0            4
- Gantt chart:
    |   P1   | P3 |  P2  |  P4  |
    0        7    8      12     16
- HRRN average waiting time = (0 + 6 + 3 + 7)/4 = 4

26 Multilevel Feedback Queue
- The ready queue is partitioned into separate queues:
  - Processes just starting their current CPU burst are in higher-priority queues
  - Once a process exceeds the quantum for its queue, it is demoted
  - Processes that have been on the same burst for a long time end up in lower-priority queues
- Scheduling must also be done between the queues
  - Fixed priority scheduling: serve all processes in higher-priority queues before those in lower-priority queues
    - Possibility of starvation
  - Time slicing: each queue gets a certain share of CPU time which it can schedule amongst its processes, e.g. 40% to queue 1, 20% to queue 2, etc.
- To prevent starvation as new processes enter the system, the quantum could double at each level (a sketch of this demotion rule follows below)
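A sketch of the demotion rule described above, with doubling quanta (the three-level structure and the quantum values are illustrative; for simplicity the lowest queue keeps the process rather than being truly FCFS):

    # Sketch: multilevel feedback queue demotion. A process that uses its full
    # quantum at one level is demoted to the next (lower-priority) level;
    # quanta double at each level, and the lowest level keeps the process.
    from collections import deque

    QUANTA = [8, 16, 32]                       # illustrative per-level quanta
    queues = [deque() for _ in QUANTA]

    def run_one_step():
        for level, queue in enumerate(queues):         # fixed priority between queues
            if queue:
                name, remaining = queue.popleft()
                used = min(QUANTA[level], remaining)
                remaining -= used
                if remaining > 0:                      # used the whole quantum: demote
                    next_level = min(level + 1, len(queues) - 1)
                    queues[next_level].append((name, remaining))
                return name, used
        return None

    queues[0].append(("P1", 30))
    while (step := run_one_step()):
        print(step)    # ('P1', 8), ('P1', 16), ('P1', 6)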

27 Example: Multilevel Feedback Queue
- Three queues:
  - Q0: time quantum 8 milliseconds
  - Q1: time quantum 16 milliseconds
  - Q2: FCFS (i.e. the time quantum is infinite)
- Scheduling
  - A new job enters queue Q0, which is served FCFS
  - When it gains the CPU, the job receives 8 milliseconds
  - If it does not finish in 8 milliseconds, the job is moved to queue Q1
  - At Q1 the job is again served FCFS and receives 16 additional milliseconds
  - If it still does not complete, it is preempted and moved to queue Q2

28 Multiple-Processor Scheduling
- CPU scheduling is more complex when multiple CPUs are available
- Homogeneous processors within a multiprocessor
  - Load sharing
- Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing

29 Real-Time Scheduling
- Hard real-time systems: required to complete a critical task within a guaranteed amount of time
- Soft real-time computing: requires that critical processes receive priority over less critical ones

30 Algorithm Evaluation
- Deterministic modeling: takes a particular predetermined workload and determines the performance of each algorithm for that workload
- Simulation: drives the algorithms with a stochastic workload or a trace
- Queueing models: mathematical results
- Implementation: build the algorithm into a real system and measure it

31 Evaluation of CPU Schedulers by Simulation
[Figure: a trace tape recording actual process execution (CPU 10, I/O 213, CPU 12, I/O 112, CPU 2, I/O 147, CPU 17, ...) drives simulations of FCFS, SJF, and RR (q = 14), each producing its own performance statistics]

