Chapter 05, Fall 2008: CPU Scheduling


Slide 1: CPU Scheduling
The CPU scheduler (sometimes called the dispatcher or short-term scheduler):
- Selects a process from the ready queue and lets it run on the CPU
- Assumes all processes are in memory, and one of them is executing on the CPU
- Crucial in a multiprogramming environment
- Goal is to maximize CPU utilization
Non-preemptive scheduling — the scheduler executes only when:
- A process terminates
- A process switches from running to blocked

Slide 2: Process Execution Behavior
Assumptions:
- One process per user
- One thread per process
- Processes are independent and compete for resources (including the CPU)
Processes run in a CPU–I/O burst cycle:
- Compute for a while (on the CPU)
- Do some I/O
- Repeat these two steps
Two types of processes:
- CPU-bound — does mostly computation (long CPU bursts) and very little I/O
- I/O-bound — does mostly I/O and very little computation (short CPU bursts)

Slide 3: First-Come-First-Served (FCFS)
Other names: First-In-First-Out (FIFO), Run-Until-Done
Policy:
- Choose the process from the ready queue in the order of its arrival, and run it non-preemptively
- Early FCFS schedulers were overly non-preemptive: a process did not relinquish the CPU until it finished, even while doing I/O
- Now, non-preemptive means the scheduler chooses another process only when the current one terminates or blocks
Implementation: a FIFO queue (add to tail, take from head)

Slide 4: FCFS Example
[Examples 1 and 2 were figures on the original slide.]
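The slide's worked examples were figures in the original deck. As a stand-in, here is a minimal sketch of FCFS waiting times (the burst values 24, 3, 3 are illustrative, chosen to show the convoy effect described on the next evaluation slide):

```python
def fcfs(bursts):
    """FCFS waiting times for processes that all arrive at t=0,
    listed in arrival order."""
    waits = []
    clock = 0
    for burst in bursts:
        waits.append(clock)   # each process waits for all earlier ones
        clock += burst
    return waits

# Long job first: the short jobs queue up behind it (convoy effect)
print(fcfs([24, 3, 3]))  # → [0, 24, 27], average wait 17
# Same jobs with the short ones first: average wait drops to 3
print(fcfs([3, 3, 24]))  # → [0, 3, 6]
```

The simulation mirrors the FIFO-queue implementation named on the slide: each process simply inherits the finish time of everything ahead of it.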

Slide 5: CPU Scheduling Goals
The CPU scheduler must decide:
- How long a process executes
- In which order processes will execute
User-oriented scheduling policy goals:
- Minimize average response time (time from request received until response starts) while maximizing the number of interactive users receiving adequate response
- Minimize turnaround time (time from process start until completion): execution time plus waiting time
- Minimize the variance of response time — predictability is important: a process should run in (roughly) the same amount of time regardless of the load on the system
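The quantities defined above can be made concrete with a tiny helper (the tuple layout is illustrative, not from the slides):

```python
def metrics(procs):
    """Per-process metrics from the slide's definitions.
    procs: list of (arrival, first_run_start, completion, burst) tuples."""
    out = []
    for arrival, start, completion, burst in procs:
        response = start - arrival          # until the response starts
        turnaround = completion - arrival   # execution time plus waiting time
        waiting = turnaround - burst        # time spent in the ready queue
        out.append((response, turnaround, waiting))
    return out

# One process: arrives at t=0, first runs at t=2, finishes at t=10, burst 6
print(metrics([(0, 2, 10, 6)]))  # → [(2, 10, 4)]
```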

Slide 6: CPU Scheduling Goals (cont.)
System-oriented scheduling policy goals:
- Maximize throughput (number of processes that complete per unit time)
- Maximize processor utilization (percentage of time the CPU is busy)
Other (non-performance-related) system-oriented scheduling policy goals:
- Fairness — in the absence of guidance from the user or the OS, processes should be treated the same, and no process should suffer starvation (being indefinitely denied service). May have to be less fair in order to minimize average response time!
- Balance resources — keep all system resources (CPU, memory, disk, I/O) busy; favor processes that will underuse stressed resources

Slide 7: FCFS Evaluation
- Non-preemptive
- Response time — slow if there is a large variance in process execution times: if one long process is followed by many short processes, the short processes have to wait a long time; if one CPU-bound process is followed by many I/O-bound processes, there is a "convoy effect" with low CPU and I/O device utilization
- Throughput — not emphasized
- Fairness — penalizes short processes and I/O-bound processes
- Starvation — not possible
- Overhead — minimal

Slide 8: Preemptive vs. Non-Preemptive Scheduling
Non-preemptive scheduling — the scheduler executes only when:
- A process terminates
- A process switches from running to blocked
Preemptive scheduling — the scheduler can execute at (almost) any time: at the times above, and also when:
- A process is created
- A blocked process becomes ready
- A timer interrupt occurs
More overhead, but keeps long processes from monopolizing the CPU.
The scheduler must not preempt the OS kernel while it is servicing a system call; even so, preemption can still leave data shared between user processes in an inconsistent state.

Slide 9: Round-Robin
Policy:
- Define a fixed time slice (also called a time quantum)
- Choose the process from the head of the ready queue
- Run that process for at most one time slice; if it hasn't completed by then, add it to the tail of the ready queue
- If the process terminates or blocks before its time slice is up, choose another process from the head of the ready queue and run it for at most one time slice, and so on
Implementation:
- A hardware timer that interrupts at periodic intervals
- A FIFO ready queue (add to tail, take from head)

Slide 10: Round-Robin Example
[Examples 1 and 2 were figures on the original slide.]
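In place of the slide's figures, here is a minimal sketch of the round-robin policy just described (the workload of bursts 24, 3, 3 with a quantum of 4 is illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin for processes that all arrive at t=0.
    bursts: list of CPU burst lengths; returns each process's completion time."""
    ready = deque(enumerate(bursts))              # (pid, remaining burst)
    completion = [0] * len(bursts)
    clock = 0
    while ready:
        pid, remaining = ready.popleft()          # take from head
        run = min(quantum, remaining)             # at most one time slice
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))  # unfinished: back to the tail
        else:
            completion[pid] = clock               # done before the slice expired
    return completion

# Bursts of 24, 3, 3 with quantum 4: the short jobs finish early,
# the long job cycles through the queue until t=30
print(round_robin([24, 3, 3], 4))  # → [30, 7, 10]
```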

Slide 11: Round-Robin Evaluation
- Preemptive (at the end of each time slice)
- Response time — good for short processes; a long process may have to wait n*q time units for another time slice (n = number of other processes, q = length of the time slice)
- Throughput — depends on the time slice: too small means too many context switches; too large approximates FCFS
- Fairness — penalizes I/O-bound processes (they may not use their full time slice)
- Starvation — not possible
- Overhead — low

Slide 12: Shortest-Job-First (SJF)
Other name: Shortest-Process-Next (SPN)
Policy:
- Choose the process that has the smallest next CPU burst, and run it non-preemptively (until termination or blocking)
- In case of a tie, FCFS is used to break the tie
Difficulty: determining the length of the next CPU burst
- Approximation — predict the length, based on the process's past performance and on past predictions
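The slide does not give the prediction formula, but the standard approximation in the textbook tradition is an exponential average of measured bursts, tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n. A sketch (the initial guess of 10 and alpha = 0.5 are assumed tuning values):

```python
def predict_next_burst(measured_bursts, tau0=10.0, alpha=0.5):
    """Predict the next CPU burst by exponential averaging:
    each measurement is blended with the running prediction."""
    tau = tau0                                 # guess used before any history
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau    # weight recent bursts most
    return tau

# The prediction converges toward the recent bursts and away
# from the (wrong) initial guess of 10
print(predict_next_burst([6, 4, 6, 4]))  # → 5.0
```

With alpha = 0.5 the influence of old bursts halves at every step, which is why the prediction tracks recent behavior.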

Slide 13: SJF Example
[Figures on the original slide: an SJF schedule, and the FCFS schedule for the same workload.]
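In place of the slide's figures, a minimal sketch of the SJF-versus-FCFS comparison (the bursts 6, 8, 7, 3 are illustrative; all processes are assumed to arrive at t=0):

```python
def sjf_waits(bursts):
    """Non-preemptive SJF waiting times, all processes arriving at t=0.
    sorted() is stable, so equal bursts keep arrival order (FCFS tie-break)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits = [0] * len(bursts)
    clock = 0
    for i in order:          # run shortest remaining job next
        waits[i] = clock
        clock += bursts[i]
    return waits

bursts = [6, 8, 7, 3]
print(sjf_waits(bursts))              # → [3, 16, 9, 0]
print(sum(sjf_waits(bursts)) / 4)     # → 7.0 average wait
# FCFS on the same workload would give waits [0, 6, 14, 21], average 10.25
```

Running the shortest jobs first is what makes SJF provably optimal for average waiting time, as the next slide states.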

Slide 14: SJF Evaluation
- Non-preemptive
- Response time — good for short processes; long processes may have to wait until a large number of short processes finish
- Provably optimal — minimizes average waiting time for a given set of processes
- Throughput — high
- Fairness — penalizes long processes
- Starvation — possible for long processes
- Overhead — can be high (recording and estimating CPU burst times)

Slide 15: Shortest-Remaining-Time (SRT)
SRT is a preemptive version of SJF (OSC just calls this preemptive SJF).
Policy:
- Choose the process that has the smallest next CPU burst, and run it preemptively: until it terminates or blocks, or until a process enters the ready queue (either a new process or a previously blocked one)
- At that point, choose another process to run if its expected CPU burst is smaller than what is left of the current process's CPU burst

Slide 16: SJF & SRT Example
[Figures on the original slide: an SJF schedule, and the SRT schedule for the same workload.]
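In place of the slide's figures, a tick-by-tick sketch of SRT with staggered arrivals (the four (arrival, burst) pairs are illustrative; ties go to the lowest process index):

```python
def srt_completions(procs):
    """Shortest-Remaining-Time (preemptive SJF), one clock tick at a time.
    procs: list of (arrival_time, burst); returns completion time per process."""
    remaining = [b for _, b in procs]
    completion = [None] * len(procs)
    clock = 0
    while any(r > 0 for r in remaining):
        # ready = arrived and unfinished; re-evaluated every tick, which is
        # what makes the policy preemptive on arrivals
        ready = [i for i, (a, _) in enumerate(procs)
                 if a <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        p = min(ready, key=lambda i: remaining[i])  # smallest remaining burst
        remaining[p] -= 1
        clock += 1
        if remaining[p] == 0:
            completion[p] = clock
    return completion

# P2 (burst 4) arrives at t=1 and preempts P1 (8 left at t=0, 7 left at t=1)
print(srt_completions([(0, 8), (1, 4), (2, 9), (3, 5)]))  # → [17, 5, 26, 10]
```

A per-tick loop is far too slow for a real kernel, but it states the policy unambiguously: at every scheduling point, the job with the least work remaining wins.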

Slide 17: SRT Evaluation
- Preemptive (at the arrival of a process into the ready queue)
- Response time — good
- Provably optimal — minimizes average waiting time for a given set of processes
- Throughput — high
- Fairness — penalizes long processes (note, though, that a long process eventually becomes a short one as it nears completion)
- Starvation — possible for long processes
- Overhead — can be high (recording and estimating CPU burst times)

Slide 18: Priority Scheduling
Policy:
- Associate a priority with each process:
  - Externally defined, based on importance, money, politics, etc.
  - Internally defined, based on memory requirements, file requirements, CPU requirements vs. I/O requirements, etc.
  - SJF is priority scheduling in which priority is inversely proportional to the length of the next CPU burst
- Choose the process that has the highest priority, and run it either preemptively or non-preemptively
Evaluation:
- Starvation — possible for low-priority processes
- Can be avoided by aging processes: increase their priority as they spend time in the system
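The aging idea can be sketched with a worst case: a fresh high-priority job arrives every time slice, which would starve a low-priority process forever. With aging, the waiting process's effective priority climbs until it wins (the specific priority values and per-slice aging increment are assumed for illustration):

```python
def waiting_slices(low_base, high_base, aging=1):
    """How many time slices a low-priority process waits when a fresh
    competitor with priority high_base arrives every slice (higher
    number = higher priority here). Without aging it would starve;
    aging raises its effective priority by `aging` per slice waited."""
    effective = low_base
    slices = 0
    while effective <= high_base:   # a fresh high-priority arrival still wins
        slices += 1                 # our process waits out this slice...
        effective += aging          # ...and ages upward
    return slices

# Base priority 1 vs. a stream of priority-10 arrivals: bounded wait, not starvation
print(waiting_slices(low_base=1, high_base=10, aging=1))  # → 10
```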

Slide 19: Multilevel Queue Scheduling
Policy:
- Use several ready queues, and associate a different priority with each queue
- Choose the process from the occupied queue that has the highest priority, and run it either preemptively or non-preemptively
- Assign new processes permanently to a particular queue:
  - Foreground vs. background
  - System, interactive, editing, computing
- Each queue can have a different scheduling policy
Example (preemptive, using a timer):
- 80% of CPU time to the foreground queue, using RR
- 20% of CPU time to the background queue, using FCFS

Slide 20: Multilevel Feedback Queue Scheduling
Policy:
- Use several ready queues, and associate a different priority with each queue
- Choose the process from the occupied queue with the highest priority, and run it either preemptively or non-preemptively
- Each queue can have a different scheduling policy
- Allow the scheduler to move processes between queues:
  - Start each process in a high-priority queue; as it finishes each CPU burst, move it to a lower-priority queue
  - Aging — move older processes to higher-priority queues
Feedback means using the past to predict the future: favor jobs that haven't used the CPU much in the past — close to SRT!
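The demotion rule above can be sketched as follows (the per-level quanta of 1, 2, 4 are assumed, not from the slide; aging is omitted to keep the sketch short):

```python
from collections import deque

def mlfq_finish_levels(bursts, quanta=(1, 2, 4)):
    """Multilevel feedback queue sketch: a process that uses its whole
    time slice is demoted one level; the bottom queue recycles in place.
    Returns the queue level at which each process completes."""
    queues = [deque() for _ in quanta]
    for pid, burst in enumerate(bursts):
        queues[0].append((pid, burst))        # everyone starts at top priority
    finished_level = {}
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)  # highest occupied
        pid, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        remaining -= run
        if remaining == 0:
            finished_level[pid] = level
        else:                                 # used the full slice: demote
            queues[min(level + 1, len(quanta) - 1)].append((pid, remaining))
    return finished_level

# A short interactive burst (1) completes at the top level, while a long
# CPU burst (10) sinks to the bottom queue — the behavior the slide describes
print(mlfq_finish_levels([1, 10]))  # → {0: 0, 1: 2}
```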

Slide 21: CPU Scheduling in UNIX using Multilevel Feedback Queue Scheduling
Policy:
- Multiple queues, each with a priority value (low value = high priority):
  - Kernel processes have negative values (includes processes performing system calls that just finished their I/O and haven't yet returned to user mode)
  - User processes (doing computation) have positive values
- Choose the process from the occupied queue with the highest priority, and run it preemptively, using a timer (time slice typically around 100 ms)
- Round-robin scheduling within each queue
- Move processes between queues:
  - Keep track of clock ticks (60 per second)
  - Once per second, add each process's accumulated clock ticks to its priority value
  - Also adjust priority based on whether or not the process has used more than its "fair share" of CPU time (compared to others)
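The once-per-second recomputation might look like the following sketch. The slide only says that recent clock ticks are added to the priority value; the halving of recorded usage is borrowed from classic BSD descriptions of usage decay and is an assumption here, not something this slide states:

```python
def recompute_priorities(base_values, ticks_used):
    """UNIX-style once-per-second priority recomputation (sketch).
    Low value = high priority, so adding recent CPU usage pushes
    heavy CPU users toward LOWER priority."""
    new_values = []
    for base, ticks in zip(base_values, ticks_used):
        decayed = ticks // 2               # assumed decay: forget half the usage
        new_values.append(base + decayed)  # CPU hogs get larger (worse) values
    return new_values

# Two user processes with base value 0: the CPU hog's value climbs,
# so the mostly-idle interactive process is chosen first next second
print(recompute_priorities([0, 0], [60, 4]))  # → [30, 2]
```

This is the feedback loop of the previous slide in numeric form: past CPU usage feeds directly into future priority.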