CPU Scheduling (presentation transcript)
1 CPU Scheduling
- CPU scheduler
- Performance metrics for CPU scheduling
- Methods for analyzing schedulers
- CPU scheduling algorithms
- Case study: CPU scheduling in Solaris, Windows XP, and Linux
2 CPU Scheduler
- A CPU scheduler, running in the dispatcher, is responsible for selecting the next process to run, based on a particular strategy.
- When does CPU scheduling happen? Four cases:
  - A process switches from the running state to the waiting state (e.g. an I/O request)
  - A process switches from the running state to the ready state
  - A process switches from the waiting state to the ready state (completion of an I/O operation)
  - A process terminates
3 Scheduling queues
- CPU schedulers use various queues in the scheduling process:
- Job queue: consists of all processes
  - All jobs (processes), once submitted, are in the job queue.
  - Some processes cannot be executed yet (e.g. not in memory).
- Ready queue
  - All processes that are ready and waiting for execution are in the ready queue.
- Usually, a long-term scheduler (job scheduler) moves processes from the job queue to the ready queue.
- The CPU scheduler (short-term scheduler) selects a process from the ready queue for execution.
- Simple systems may not have a long-term scheduler.
4 Scheduling queues
- Device queue
  - When a process blocks on an I/O operation, it is usually put in a device queue (waiting for the device).
  - When the I/O operation completes, the process is moved from the device queue to the ready queue.
5 An example scheduling queue structure Figure 3.6
6 Queueing diagram representation of process scheduling Figure 3.7
7 Performance metrics for CPU scheduling
- CPU utilization: percentage of the time that the CPU is busy
- Throughput: the number of processes completed per unit time
- Turnaround time: the interval from the submission of a process to its completion
- Wait time: the sum of the periods spent waiting in the ready queue
- Response time: the interval from submission to the time the first response is produced
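A minimal sketch of these definitions in code. The timestamps are assumed inputs here; in a real system they would come from the scheduler's accounting, and all values share one time unit.

```python
# Sketch of the per-process scheduling metrics defined on this slide.
# The timestamps (submission, first run, completion) are assumed inputs.

def turnaround_time(submission, completion):
    # interval from submission of the process to its completion
    return completion - submission

def response_time(submission, first_run):
    # interval from submission until the first response is produced
    return first_run - submission

def wait_time(turnaround, cpu_time, io_time=0):
    # time spent in the ready queue: turnaround minus time on the CPU,
    # minus time blocked in device queues (I/O)
    return turnaround - cpu_time - io_time

print(turnaround_time(0, 30))   # 30
print(response_time(0, 27))     # 27
print(wait_time(30, 3))         # 27
```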
8 Goal of CPU scheduling
- Another performance metric: fairness. It is important, but harder to define quantitatively.
- Goal:
  - Maximize CPU utilization, throughput, and fairness.
  - Minimize turnaround time, waiting time, and response time.
9 Which metric is used more?
- CPU utilization: trivial in a single-CPU system
- Fairness: tricky; even the definition is non-trivial
- Throughput, turnaround time, wait time, and response time:
  - Can usually be computed for a given scenario.
  - They may not be more important than fairness, but they are more tractable than fairness.
10 Methods for evaluating CPU scheduling algorithms
- Simulation:
  - Get the workload from a system
  - Simulate the scheduling algorithm
  - Compute the performance metrics based on the simulation results
  - This is practically the best evaluation method.
- Queueing models:
  - Analytically model the queue behavior (under some assumptions).
  - A lot of math, but may not be very accurate because of unrealistic assumptions.
11 Methods for evaluating CPU scheduling algorithms
- Deterministic modeling:
  - Take a predetermined workload
  - Run the scheduling algorithm manually
  - Find the value of the performance metric that you care about
  - Not the best for practical uses, but can give a lot of insight into a scheduling algorithm
  - Helps understand a scheduling algorithm, as well as its strengths and weaknesses
12 Deterministic modeling example
- Suppose we have processes A, B, and C, submitted at time 0
- We want to know the response time, waiting time, and turnaround time of process A
- Gantt chart: visualizes how processes execute
[Gantt chart: A B C A B C A C A C over time, annotated with A's turnaround time, wait time, and response time = 0]
13 Deterministic modeling example
- Suppose we have processes A, B, and C, submitted at time 0
- We want to know the response time, waiting time, and turnaround time of process B
[Gantt chart: A B C A B C A C A C over time, annotated with B's turnaround time, wait time, and response time]
14 Deterministic modeling example
- Suppose we have processes A, B, and C, submitted at time 0
- We want to know the response time, waiting time, and turnaround time of process C
[Gantt chart: A B C A B C A C A C over time, annotated with C's turnaround time, wait time, and response time]
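The deterministic-modeling slides above can be reproduced mechanically. A sketch, assuming the Gantt chart A B C A B C A C A C comes from round robin with a 1-unit time slice and burst lengths of 4, 2, and 4 units for A, B, and C (the burst lengths and quantum are my assumption; the slides show only the chart):

```python
from collections import deque

# Round-robin simulation reproducing the Gantt chart A B C A B C A C A C.
# Bursts (A=4, B=2, C=4 units) and the 1-unit quantum are assumed values.

def round_robin(bursts, quantum=1):
    ready = deque(bursts)            # all processes arrive at time 0
    remaining = dict(bursts)
    time = 0
    first_run, completion, chart = {}, {}, []
    while ready:
        p = ready.popleft()
        first_run.setdefault(p, time)    # response time (arrival = 0)
        run = min(quantum, remaining[p])
        chart.append(p)
        time += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)              # back to the tail of the ready queue
        else:
            completion[p] = time         # turnaround time (arrival = 0)
    return chart, first_run, completion

bursts = {"A": 4, "B": 2, "C": 4}
chart, first, done = round_robin(bursts)
print("".join(chart))                    # ABCABCACAC
for p in "ABC":
    print(p, "response", first[p],
          "turnaround", done[p],
          "wait", done[p] - bursts[p])   # wait = turnaround - CPU time
```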
15 Preemptive versus nonpreemptive scheduling
- Many CPU scheduling algorithms have both preemptive and nonpreemptive versions:
  - Preemptive: may schedule a new process even when the current process does not intend to give up the CPU
  - Nonpreemptive: only schedules a new process when the current one no longer wants the CPU
- When do we perform nonpreemptive scheduling? Recall the four scheduling points:
  1. A process switches from the running state to the waiting state (e.g. an I/O request)
  2. A process switches from the running state to the ready state
  3. A process switches from the waiting state to the ready state (completion of an I/O operation)
  4. A process terminates
- Under nonpreemptive scheduling, a new process is selected only in cases 1 and 4; preemptive scheduling may also reschedule in cases 2 and 3.
16 Scheduling Policies
- FIFO (first in, first out)
- Round robin
- SJF (shortest job first)
- Priority scheduling
- Multilevel feedback queues
- Lottery scheduling
- (This is obviously an incomplete list)
17 FIFO
- FIFO: assigns the CPU based on the order of requests
- Nonpreemptive: a process keeps running on a CPU until it blocks or terminates
- Also known as FCFS (first come, first served)
+ Simple
- Short jobs can get stuck behind long jobs
- Turnaround time is not ideal (see the example from class)
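A small sketch of the "short jobs stuck behind long jobs" drawback. The burst lengths (one 24-unit job and two 3-unit jobs, all arriving at time 0) are assumed for illustration:

```python
# Nonpreemptive FIFO: jobs run to completion in arrival order.
# Burst lengths (24, 3, 3) are assumed example values.

def fifo(bursts):
    """Return each job's completion time, in arrival order (arrivals at 0)."""
    time, completion = 0, []
    for burst in bursts:
        time += burst
        completion.append(time)
    return completion

long_first = fifo([24, 3, 3])    # long job arrives first
short_first = fifo([3, 3, 24])   # same work, short jobs first
print(long_first)                # [24, 27, 30]
print(sum(long_first) / 3)       # average turnaround 27.0
print(sum(short_first) / 3)      # average turnaround 13.0
```

Merely reordering the same three jobs cuts the average turnaround time by more than half, which is the intuition behind shortest-job-first later in the deck.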
18 Round Robin
- Round Robin (RR) periodically takes the CPU away from long-running jobs
- Based on timer interrupts, so short jobs can get a fair share of CPU time
- Preemptive: a process can be forced out of the running state and replaced by another process
- Time slice: the interval between timer interrupts
19 More on Round Robin
- If the time slice is too long:
  - Scheduling degrades to FIFO
- If the time slice is too short:
  - Throughput suffers
  - Context-switching cost dominates
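The "too short" case can be quantified with a back-of-the-envelope model: if every slice is followed by a context switch of fixed cost, the fraction of CPU time doing useful work shrinks as the slice shrinks. The 1-unit switch cost and the slice lengths below are assumed numbers for illustration:

```python
# Toy model of the time-slice trade-off: each time slice of length q is
# followed by a context switch of fixed cost (1 time unit, assumed).

def cpu_efficiency(time_slice, switch_cost=1.0):
    # fraction of each (slice + switch) period spent on useful work
    return time_slice / (time_slice + switch_cost)

for q in (100.0, 10.0, 1.0):
    print(q, round(cpu_efficiency(q), 2))
# a 100-unit slice wastes ~1% on switching; a 1-unit slice wastes 50%
```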
20 FIFO vs. Round RobinWith zero-cost context switch, is RR always better than FIFO?
21 FIFO vs. Round Robin
- Suppose we have three jobs of equal length
[Two Gantt charts over time, annotated with the turnaround times of A, B, and C: under Round Robin the jobs interleave and all finish near the end; under FIFO, A, then B, then C run to completion]
22 FIFO vs. Round Robin
- Round Robin:
  + Shorter response time
  + Fair sharing of the CPU
  - Not all jobs are preemptable
  - Not good for jobs of the same length; more precisely, not good in terms of turnaround time
23 Shortest Job First (SJF)
- SJF runs whatever job puts the least demand on the CPU; also known as STCF (shortest time to completion first)
+ Provably optimal in terms of average turnaround time (can anyone give an informal proof?)
+ Great for short jobs
+ Small degradation for long jobs
- Real-life example: supermarket express checkouts
24 SJF Illustrated
[Gantt chart: A, then B, then C run to completion under Shortest Job First, annotated with each job's wait, response, and turnaround time; wait time of A = 0 and response time of A = 0]
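The slide's point can be sketched numerically: with all jobs arriving at time 0, nonpreemptive SJF is just "run in order of increasing burst length", and each job's wait time is the sum of the bursts before it. The burst lengths below are assumed example values:

```python
# Nonpreemptive SJF with all arrivals at time 0: sort by burst length.
# Burst lengths (6, 8, 7, 3) are assumed for illustration.

def sjf_wait_times(bursts):
    order = sorted(bursts)        # shortest job first
    time, waits = 0, []
    for burst in order:
        waits.append(time)        # this job waited while earlier jobs ran
        time += burst
    return order, waits

order, waits = sjf_wait_times([6, 8, 7, 3])
print(order)                      # [3, 6, 7, 8]
print(waits)                      # [0, 3, 9, 16]
print(sum(waits) / len(waits))    # average wait 7.0
```

Any other ordering of these four bursts gives a higher average wait, which is the intuition behind SJF's optimality claim on slide 23.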
25 Shortest Remaining Time First (SRTF)
- SRTF: a preemptive version of SJF
- If a job arrives with a shorter time to completion, SRTF preempts the CPU for the new job
- Also known as SRTCF (shortest remaining time to completion first)
- Generally used as the base case for comparisons
26 SJF and SRTF vs. FIFO and Round Robin
- If all jobs are the same length, SJF behaves like FIFO
  - FIFO is the best you can do
- If jobs have varying lengths:
  - Short jobs do not get stuck behind long jobs under SRTF
27 Drawbacks of Shortest Job First
- Starvation: constant arrivals of short jobs can keep long ones from running
- There is no way to know the completion time of jobs (most of the time)
- Some solutions:
  - Ask the user, who may not know any better
  - If a user cheats, the job is killed
28 Priority Scheduling (Multilevel Queues)
- Priority scheduling: the process with the highest priority runs first
- Assume that low numbers represent high priority
[Figure: jobs queued at Priority 0, Priority 1, and Priority 2, with a Gantt chart of the resulting priority schedule]
29 Priority Scheduling
+ Generalization of SJF (with SJF, priority = 1/requested_CPU_time)
- Starvation
30 Multilevel Feedback Queues
- Multilevel feedback queues use multiple queues with different priorities:
  - Round robin at each priority level
  - Run the highest-priority jobs first; once those finish, run the next-highest priority, etc.
  - Jobs start in the highest-priority queue
  - If its time slice expires, a job drops by one level
  - If its time slice does not expire (the job blocks first), it moves up by one level
41 Multilevel Feedback Queues
- Approximates SRTF:
  - A CPU-bound job drops like a rock
  - I/O-bound jobs stay near the top
- Still unfair for long-running jobs
- Counter-measure: aging
  - Increase the priority of long-running jobs if they are not serviced for a period of time
  - Tricky to tune aging
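A minimal sketch of the mechanism described on these two slides, with three priority levels and demotion on time-slice expiry. The per-level slice lengths, job names, and burst lengths are assumed; promotion and aging are left out to keep the sketch short:

```python
from collections import deque

# Toy multilevel feedback queue: three levels with growing time slices
# (1, 2, 4 units, assumed). A job that uses its whole slice is demoted;
# the bottom level is plain round robin. No aging in this sketch.

def mlfq(bursts, slices=(1, 2, 4)):
    queues = [deque() for _ in slices]
    remaining = dict(bursts)
    for name in bursts:
        queues[0].append(name)            # new jobs start at top priority
    schedule = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        p = queues[level].popleft()
        run = min(slices[level], remaining[p])
        schedule.append((p, run))
        remaining[p] -= run
        if remaining[p] > 0:
            # used its whole slice: drop one level (clamp at the bottom)
            queues[min(level + 1, len(queues) - 1)].append(p)
    return schedule

print(mlfq({"cpu_bound": 7, "short": 1}))
# the short job finishes after one top-level slice;
# the CPU-bound job "drops like a rock" through the levels
```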
42 Lottery Scheduling
- Lottery scheduling is an adaptive scheduling approach to address the fairness problem
- Each process owns some tickets
- On each time slice, a ticket is randomly picked
- On average, the allocated CPU time is proportional to the number of tickets given to each job
43 Lottery Scheduling
- To approximate SJF, short jobs get more tickets
- To avoid starvation, each job gets at least one ticket
44 Lottery Scheduling Example
- Short jobs: 10 tickets each; long jobs: 1 ticket each

  # short jobs / # long jobs | % of CPU per short job | % of CPU per long job
  1/1                        | 91%                    | 9%
  0/2                        | 0%                     | 50%
  2/0                        | 50%                    | 0%
  10/1                       | 10%                    | 1%
  1/10                       | 50%                    | 5%
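The lottery mechanism itself fits in a few lines: draw one ticket uniformly at random per time slice, and each job's expected CPU share equals its fraction of the tickets. The ticket counts follow the slide (10 per short job, 1 per long job); the seeded RNG and draw count are assumed for reproducibility:

```python
import random

# One short job (10 tickets) vs. one long job (1 ticket): the short job
# should win about 10/11 of the lotteries, matching the 91%/9% table row.

def lottery_pick(tickets, rng):
    """tickets: {job: ticket_count}; return the job holding the drawn ticket."""
    draw = rng.randrange(sum(tickets.values()))
    for job, count in tickets.items():
        if draw < count:
            return job
        draw -= count

rng = random.Random(0)                    # seeded for reproducibility
tickets = {"short": 10, "long": 1}
wins = {"short": 0, "long": 0}
for _ in range(10_000):                   # simulate 10,000 time slices
    wins[lottery_pick(tickets, rng)] += 1
print(wins["short"] / 10_000)             # close to 10/11, about 0.91
```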
45 Case study: CPU scheduling in Solaris
- Priority-based scheduling
- Four classes: real time, system, time sharing, interactive (in order of priority)
- Different priorities and algorithms in different classes
- Default class: time sharing
- Policy in the time-sharing class:
  - Multilevel feedback queue with variable time slices
  - See the dispatch table
46 Solaris dispatch table for interactive and time-sharing threads Good response time for interactive processes and good throughput for CPU-bound processes
47 Windows XP scheduling
- Priority-based, preemptive scheduling: the highest-priority thread will always run
- Also has multiple classes, with priorities within classes
- Similar idea for user processes (a multilevel feedback queue):
  - Lower the priority when the quantum runs out
  - Increase the priority after a wait event
- Some twists to improve "user perceived" performance:
  - Boost the priority and quantum of the foreground process (the window that is currently selected)
  - Boost the priority more for a wait on keyboard I/O (as compared to disk I/O)
48 Linux scheduling
- Priority-based, preemptive scheduling with global round robin
- Each process has a priority
- Processes with higher priority also get larger time slices
- Before its time slice is used up, a process is scheduled based on priority.
- After its time slice is used up, a process must wait until all ready processes have used up their time slices (or blocked), a round-robin approach.
  - No starvation problem.
- For a user process, the priority may be adjusted by up to plus or minus 5, depending on whether the process is I/O-bound or CPU-bound.
  - I/O-bound processes get higher priority.
49 Summary for the case study
- The basic idea for scheduling user processes is the same in all three systems:
  - Lower the priority of CPU-bound processes
  - Increase the priority of I/O-bound processes
- The scheduling in Solaris and Linux is more concerned with fairness.
  - These are more popular as server OSes.
- The scheduling in Windows XP is more concerned with user-perceived performance.
  - It is more popular as a personal-computer OS.