CPU Scheduling


CPU Scheduling: CPU scheduler; performance metrics for CPU scheduling; methods for analyzing schedulers; CPU scheduling algorithms; case study: CPU scheduling in Solaris, Windows XP, and Linux.

CPU Scheduler: A CPU scheduler, running in the dispatcher, is responsible for selecting the next process to run, based on a particular strategy. When does CPU scheduling happen? In four cases: (1) a process switches from the running state to the waiting state (e.g., on an I/O request); (2) a process switches from the running state to the ready state; (3) a process switches from the waiting state to the ready state (e.g., on completion of an I/O operation); (4) a process terminates.

Scheduling queues: CPU schedulers use various queues in the scheduling process. Job queue: consists of all processes; all jobs (processes), once submitted, are in the job queue, although some of them cannot be executed yet (e.g., they are not in memory). Ready queue: all processes that are ready and waiting for execution. Usually, a long-term scheduler (job scheduler) moves processes from the job queue to the ready queue, and the CPU scheduler (short-term scheduler) selects a process from the ready queue for execution. Simple systems may not have a long-term scheduler.

Scheduling queues: Device queue: when a process blocks on an I/O operation, it is usually put in a device queue (waiting for the device). When the I/O operation completes, the process is moved from the device queue back to the ready queue.

An example scheduling queue structure (Figure 3.6).

Queueing diagram representation of process scheduling (Figure 3.7).

Performance metrics for CPU scheduling: CPU utilization: the percentage of time that the CPU is busy. Throughput: the number of processes completed per unit time. Turnaround time: the interval from submission of a process to its completion. Wait time: the sum of the periods the process spends waiting in the ready queue. Response time: the interval from submission to the time the first response is produced.

Goal of CPU scheduling: One more performance metric is fairness: it is important, but harder to define quantitatively. The overall goal: maximize CPU utilization, throughput, and fairness; minimize turnaround time, wait time, and response time.

Which metrics are used more? CPU utilization: trivial in a single-CPU system. Fairness: tricky; even the definition is non-trivial. Throughput, turnaround time, wait time, and response time: these can usually be computed for a given scenario. They may not be more important than fairness, but they are more tractable.

Methods for evaluating CPU scheduling algorithms: Simulation: get a workload from a real system, simulate the scheduling algorithm on it, and compute the performance metrics from the simulation results. This is, in practice, the best evaluation method. Queueing models: analytically model the queue behavior (under some assumptions); a lot of math, but the results may not be very accurate because of unrealistic assumptions.

Methods for evaluating CPU scheduling algorithms: Deterministic modeling: take a predetermined workload, run the scheduling algorithm on it by hand, and find the value of the performance metric that you care about. Not the best for practical use, but it can give a lot of insight into a scheduling algorithm and help you understand its strengths and weaknesses.

Deterministic modeling example: Suppose we have processes A, B, and C, all submitted at time 0, executing in the order shown by the Gantt chart A B C A B C A C A C (one process per time slot). A Gantt chart visualizes how processes execute over time; from it we can read off, for each of A, B, and C, the response time (the time until the process first runs), the turnaround time (the time until it completes), and the wait time (its turnaround time minus its total CPU time).
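The per-process metrics can be computed mechanically from the Gantt chart. A minimal sketch in Python (the function name and unit-width-slot representation are illustrative; the chart is the A B C A B C A C A C sequence from the slides):

```python
def metrics(gantt, submit=0):
    """Compute response, turnaround, and wait time per process
    from a Gantt chart given as a list of unit-width time slots."""
    out = {}
    for p in set(gantt):
        first = gantt.index(p)                        # first slot where p runs
        last = len(gantt) - 1 - gantt[::-1].index(p)  # last slot where p runs
        burst = gantt.count(p)                        # total CPU time used by p
        out[p] = {
            "response": first - submit,               # delay until first run
            "turnaround": (last + 1) - submit,        # submission to completion
            "wait": (last + 1) - submit - burst,      # time spent in ready queue
        }
    return out

# The Gantt chart from the slides: A B C A B C A C A C
m = metrics(list("ABCABCACAC"))
print(m["A"])  # {'response': 0, 'turnaround': 9, 'wait': 5}
```

Running it for B and C gives turnaround times of 5 and 10 respectively, matching what a by-hand reading of the chart produces.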

Preemptive versus nonpreemptive scheduling: Many CPU scheduling algorithms have both preemptive and nonpreemptive versions. Preemptive: may schedule a new process even when the current process has no intention of giving up the CPU. Non-preemptive: schedules a new process only when the current one no longer wants the CPU. Of the four cases in which scheduling can happen, non-preemptive scheduling acts only when a process switches from the running state to the waiting state (e.g., an I/O request) or when a process terminates; scheduling on the other two transitions (running to ready, waiting to ready) requires preemption.

Scheduling policies: FIFO (first in, first out), round robin, SJF (shortest job first), priority scheduling, multilevel feedback queues, lottery scheduling. This is obviously an incomplete list.

FIFO: FIFO assigns the CPU in the order of requests. Nonpreemptive: a process keeps running on the CPU until it blocks or terminates. Also known as FCFS (first come, first served). + Simple. − Short jobs can get stuck behind long jobs, so turnaround time is not ideal (example in class).

Round Robin: Round robin (RR) periodically takes the CPU away from long-running jobs, based on timer interrupts, so short jobs can get a fair share of CPU time. Preemptive: a process can be forced out of the running state and replaced by another process. Time slice: the interval between timer interrupts.

More on round robin: If the time slice is too long, scheduling degrades to FIFO. If the time slice is too short, throughput suffers because context-switching cost dominates.

FIFO vs. Round Robin: With zero-cost context switches, is RR always better than FIFO?

FIFO vs. Round Robin: Suppose we have three jobs of equal length. Under round robin the three jobs interleave and all finish near the end, so every job's turnaround time is close to the worst case; under FIFO, A, B, and C finish one after another, so the average turnaround time is lower. (The slide shows the two Gantt charts with the turnaround times of A, B, and C marked.)
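The comparison is easy to verify with a small simulation. A sketch, assuming three jobs of length 3, unit time slices, and zero-cost context switches (function and job names are illustrative):

```python
def fifo(jobs):
    """Run jobs to completion in arrival order; return completion times."""
    t, done = 0, {}
    for name, length in jobs:
        t += length
        done[name] = t
    return done

def round_robin(jobs, slice_=1):
    """Rotate through the jobs, giving each one time slice per turn."""
    remaining = dict(jobs)
    t, done = 0, {}
    while remaining:
        for name in list(remaining):
            run = min(slice_, remaining[name])
            t += run
            remaining[name] -= run
            if remaining[name] == 0:   # job finished during this slice
                del remaining[name]
                done[name] = t
    return done

jobs = [("A", 3), ("B", 3), ("C", 3)]
print(fifo(jobs))         # {'A': 3, 'B': 6, 'C': 9} -> average turnaround 6
print(round_robin(jobs))  # {'A': 7, 'B': 8, 'C': 9} -> average turnaround 8
```

With equal-length jobs submitted at time 0, completion time equals turnaround time, so RR's average turnaround (8) is strictly worse than FIFO's (6), as the next slide argues.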

FIFO vs. Round Robin: Round robin: + shorter response time; + fair sharing of the CPU. − Not all jobs are preemptible. − Not good for jobs of the same length; more precisely, not good in terms of turnaround time.

Shortest Job First (SJF): SJF runs whatever job puts the least demand on the CPU; its preemptive variant is known as STCF (shortest time to completion first). + Provably optimal in terms of turnaround time (can anyone give an informal proof?). + Great for short jobs. + Only a small degradation for long jobs. Real-life example: supermarket express checkouts.

SJF illustrated: The shortest job, A, runs to completion first, then B, then C; the wait time and response time of A are both 0, and the slide marks the wait, response, and turnaround times of B and C on the Gantt chart.
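The optimality claim can be checked numerically: for any fixed set of bursts run back-to-back, the shortest-first order minimizes total (hence average) waiting time. A sketch with made-up burst lengths (names are illustrative):

```python
from itertools import permutations

def total_wait(bursts):
    """Total waiting time when bursts run back-to-back in the given order."""
    t = wait = 0
    for b in bursts:
        wait += t   # each job waits for everything scheduled before it
        t += b
    return wait

bursts = [6, 8, 7, 3]
best = min(permutations(bursts), key=total_wait)
print(best, total_wait(best))  # (3, 6, 7, 8) 28 -- shortest-first wins
assert list(best) == sorted(bursts)
```

The informal proof mirrors the code: swapping any adjacent pair where a longer burst precedes a shorter one reduces total wait, so the sorted order is optimal.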

Shortest Remaining Time First (SRTF): SRTF is a preemptive version of SJF: if a job arrives whose time to completion is shorter than the running job's remaining time, SRTF preempts the CPU for the new job. Also known as SRTCF (shortest remaining time to completion first). Generally used as a baseline for comparisons.

SJF and SRTF vs. FIFO and Round Robin: If all jobs are the same length, SJF behaves like FIFO, and FIFO is the best you can do. If jobs have varying lengths, short jobs do not get stuck behind long jobs under SRTF.

Drawbacks of Shortest Job First: − Starvation: a constant arrival of short jobs can keep long ones from running. − There is usually no way to know a job's completion time in advance. Some solutions: ask the user, who may not know any better; if a user cheats, the job is killed.

Priority Scheduling (Multilevel Queues): Priority scheduling: the process with the highest priority runs first (assume low numbers represent high priority). The slide shows processes placed in three priority queues (0, 1, and 2) and the resulting Gantt chart.

Priority Scheduling: + A generalization of SJF: with priority = 1/requested_CPU_time, priority scheduling behaves like SJF. − Starvation: low-priority processes may never run.

Multilevel Feedback Queues: Multilevel feedback queues use multiple queues with different priorities, with round robin within each priority level. Run the highest-priority jobs first; once those finish, run the next-highest priority, and so on. Jobs start in the highest-priority queue. If a job's time slice expires, drop it one level; if its time slice does not expire (it blocks first), push it up one level.
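The rules above can be sketched directly. A minimal model, assuming the three queues and time slices (1, 2, 4) used in the trace that follows, and ignoring I/O (so only the demotion rule applies; all names are illustrative):

```python
from collections import deque

def mlfq(jobs, slices=(1, 2, 4)):
    """Multilevel feedback queue sketch: new jobs enter the top queue;
    a job whose slice expires with work left drops one level.
    Returns the execution order as (name, time_run) pairs."""
    queues = [deque() for _ in slices]
    for name, burst in jobs:
        queues[0].append((name, burst))   # jobs start at the highest priority
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name, left = queues[level].popleft()
        run = min(slices[level], left)
        order.append((name, run))
        if left - run > 0:  # slice expired with work remaining: demote
            queues[min(level + 1, len(queues) - 1)].append((name, left - run))
    return order

print(mlfq([("A", 4), ("B", 3), ("C", 2)]))
```

With bursts 4, 3, and 2, each job gets one short slice at the top level before anything gets a long slice, which is exactly how short jobs avoid getting stuck behind long ones.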

Multilevel Feedback Queues: a worked trace. Three queues with time slices 1, 2, and 4; processes A, B, and C all start in the priority-0 queue at time 0. A runs first, and when its slice expires at time 1 it drops to priority 1; B and C follow at times 2 and 3. Suppose A then blocks on I/O at time 3: B runs from the priority-1 queue while A waits. When A's I/O completes at time 5, A re-enters the top queue (its slice did not expire, so it moves up) and is scheduled ahead of B and C. The original slides step through the queue contents and the Gantt chart at times 0, 1, 2, 3, 5, 6, 8, and 9.

Multilevel Feedback Queues: MLFQ approximates SRTF: a CPU-bound job drops like a rock, while I/O-bound jobs stay near the top. Still unfair to long-running jobs. Counter-measure: aging: increase the priority of long-running jobs if they have not been serviced for some period of time. Aging is tricky to tune.
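Aging can be sketched as a periodic priority bump for jobs that have waited too long. A minimal sketch; the threshold, field names, and bump size are all illustrative assumptions, and tuning them is exactly the tricky part the slide mentions:

```python
def age(ready, now, threshold=10, top=0):
    """Raise the priority (lower number = higher priority) of any ready
    job that has been waiting longer than `threshold` ticks."""
    for job in ready:
        if now - job["last_run"] > threshold and job["priority"] > top:
            job["priority"] -= 1   # bump one level toward the top queue
    return ready

ready = [{"name": "long_job", "priority": 3, "last_run": 0},
         {"name": "new_job", "priority": 0, "last_run": 14}]
age(ready, now=15)
print(ready[0]["priority"])  # the starved job moved up one level -> 2
```

Called on every scheduling tick, this guarantees a starved job eventually reaches the top queue and runs.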

Lottery Scheduling: Lottery scheduling is an adaptive scheduling approach that addresses the fairness problem. Each process owns some tickets; on each time slice, a ticket is picked at random, and on average the CPU time allocated to each job is proportional to the number of tickets it holds.

Lottery Scheduling To approximate SJF, short jobs get more tickets To avoid starvation, each job gets at least one ticket

Lottery Scheduling Example: short jobs get 10 tickets each; long jobs get 1 ticket each.

# short jobs / # long jobs   % of CPU per short job   % of CPU per long job
1/1                          91%                      9%
0/2                          —                        50%
2/0                          50%                      —
10/1                         10%                      1%
1/10                         50%                      5%
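A draw takes only a few lines. A sketch using the ticket counts from the table (10 per short job, 1 per long job; function names are illustrative): over many time slices the observed CPU share approaches the ticket share:

```python
import random

def draw(tickets):
    """Pick a winning job with probability proportional to its tickets."""
    total = sum(tickets.values())
    n = random.randrange(total)          # pick a ticket number uniformly
    for job, count in tickets.items():
        if n < count:
            return job
        n -= count

tickets = {"short": 10, "long": 1}       # the 1 short / 1 long row of the table
wins = {"short": 0, "long": 0}
for _ in range(100_000):
    wins[draw(tickets)] += 1
print(wins["short"] / 100_000)           # close to 10/11, i.e. about 0.91
```

Because each draw is independent, no job with at least one ticket can starve, which is the point of the "each job gets at least one ticket" rule.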

Case study: CPU scheduling in Solaris: Priority-based scheduling with four classes: real time, system, time sharing, and interactive (in order of priority). Different classes have different priorities and different scheduling algorithms. The default class is time sharing, whose policy is a multilevel feedback queue with variable time slices (see the dispatch table).

Solaris dispatch table for interactive and time-sharing threads: it gives good response time for interactive processes and good throughput for CPU-bound processes.

Windows XP scheduling: Priority-based, preemptive scheduling: the highest-priority thread always runs. It also has multiple classes, with priorities within each class, and applies a similar multilevel-feedback-queue idea to user processes: lower the priority when a quantum runs out, raise it after a wait event. Some twists improve user-perceived performance: the priority and quantum of the foreground process (the currently selected window) are boosted, and priority is boosted more for a wait on keyboard I/O than for a wait on disk I/O.

Linux scheduling: Priority-based, preemptive scheduling with global round robin. Each process has a priority, and processes with higher priority also get larger time slices. Until its time slice is used up, a process is scheduled based on priority; once its time slice is used up, the process must wait until all other ready processes have used up their time slices (or blocked), a round-robin approach with no starvation problem. A user process's priority may be adjusted by up to ±5 depending on whether the process is I/O-bound or CPU-bound, giving I/O-bound processes higher priority.

Summary of the case study: The basic idea for scheduling user processes is the same in all three systems: lower the priority of CPU-bound processes and raise the priority of I/O-bound processes. The scheduling in Solaris and Linux is more concerned with fairness; these systems are more popular as server OSes. The scheduling in Windows XP is more concerned with user-perceived performance; it is more popular as an OS for personal computers.