Process Scheduling

Scheduling Strategies
Scheduling strategies broadly fall into two categories:
- Co-operative scheduling: the currently running process voluntarily gives up the CPU to allow another process to run. The obvious disadvantage is that a process may never give up execution, typically because a bug has put it into an infinite loop, and consequently nothing else can ever run.
- Preemptive scheduling: the running process is interrupted to stop it and allow another process to run. Each process gets a timeslice to run in; at each context switch a timer is reset and delivers an interrupt when the timeslice is over.
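To make the contrast concrete, here is a minimal sketch of co-operative scheduling using Python generators; the task names, step counts, and the scheduler itself are invented for illustration. Each `yield` stands in for a process voluntarily giving up the CPU; a buggy task that never yields would hog the processor forever, which is exactly the drawback noted above.

```python
def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                 # voluntarily give up the CPU

def cooperative_scheduler(tasks):
    ready = list(tasks)
    while ready:
        t = ready.pop(0)      # take the task at the head of the ready queue
        try:
            next(t)           # let it run until it voluntarily yields
            ready.append(t)   # it gave up the CPU; back to the tail of the queue
        except StopIteration:
            pass              # the task terminated

cooperative_scheduler([task("A", 3), task("B", 2)])
```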

First-Come, First-Served (FCFS)
- runs the processes in the order they arrive at the short-term scheduler
- removes a process from the processor only if it blocks (i.e., goes into the Wait state) or terminates
- wonderful for long processes
- terrible for short processes if they are behind a long process
- FCFS is not preemptive
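A minimal FCFS sketch (the burst times are made-up example data, and arrival order is taken to be the list order) shows why short jobs stuck behind a long one wait so long:

```python
def fcfs(bursts):
    """Return per-process waiting times for bursts listed in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # time spent waiting behind earlier processes
        clock += burst        # the process runs to completion, no preemption
    return waits

print(fcfs([24, 3, 3]))   # [0, 24, 27]: the two short jobs suffer behind the long one
```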

Round Robin (RR)
- processes are given equal time slices called quanta (singular: quantum)
- processes take turns of one quantum each
- if a process finishes early, before its quantum expires, the next process starts immediately and gets a full quantum (in some implementations, the next process may get only the rest of the quantum)
- RR gives processes with short bursts (interactive work) much better service
- RR gives processes with long bursts somewhat worse service, but the long processes do not have to wait much longer
- RR is preemptive: the processor is sometimes taken away from a process that could still use it
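A minimal Round-Robin sketch (the burst times and the quantum are example values, and all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return finish time per process id for the given CPU bursts."""
    remaining = {pid: b for pid, b in enumerate(bursts)}
    ready = deque(remaining)            # process ids in arrival order
    clock, finish = 0, {}
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = clock         # finished within (or exactly at) its quantum
        else:
            ready.append(pid)           # preempted: back to the tail of the ready queue
    return finish

print(round_robin([24, 3, 3], quantum=4))   # {1: 7, 2: 10, 0: 30}: short bursts finish quickly
```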

Shortest-Job-First (SJF), i.e., shortest-next-CPU-burst first
- assumes we know the length of the next CPU burst of all ready processes
- the length of a CPU burst is the length of time a process would continue executing if given the processor and not preempted
- SJF starts with a default expected burst length for a new process
- SJF estimates the length of the next burst from the lengths of recent CPU bursts
- very short processes get very good service
- SJF is optimal at finishing the maximum number of CPU bursts in the shortest time, if the estimates are accurate
- SJF cannot handle infinite loops
- SJF is a non-preemptive algorithm
- poor performance for processes with short burst times that arrive after a process with a long burst time has started
- processes with long burst times may starve
- a process may mislead the scheduler if it previously had short bursts but has now become CPU-intensive (the algorithm fails very badly in such a case)
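A common way to estimate the next burst length, as in standard textbook treatments of SJF, is exponential averaging: tau_new = alpha * t_last + (1 - alpha) * tau_old. A minimal sketch, where alpha, the initial guess, and the observed bursts are chosen purely as example values:

```python
def predict_next_burst(observed_bursts, alpha=0.5, initial_guess=10.0):
    """Estimate the next CPU burst from the history of observed bursts."""
    tau = initial_guess                        # default expected burst for a new process
    for t in observed_bursts:
        tau = alpha * t + (1 - alpha) * tau    # recent bursts weigh more than old ones
    return tau

print(predict_next_burst([6, 4, 6, 4]))   # estimate drifts toward ~5
```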

Shortest Remaining Time (SRT)
- when a process arrives at the ready queue with an expected CPU burst time that is less than the expected remaining time of the running process, the new one preempts the running process
- very short processes get very good service
- this algorithm provably gives the highest throughput (number of processes completed) of all scheduling algorithms if the estimates are exactly correct
- a process may mislead the scheduler if it previously ran quickly but is now CPU-intensive (the algorithm fails very badly in such a case)
- long processes can starve
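A minimal SRT sketch (the arrival times and burst lengths below are example data; the "estimates" are taken to be exact, and ties are broken by process index):

```python
import heapq

def srt(jobs):
    """jobs: list of (arrival_time, burst). Returns finish time per job index."""
    events = sorted((a, i, b) for i, (a, b) in enumerate(jobs))
    ready, clock, finish, k = [], 0, {}, 0
    while ready or k < len(events):
        if not ready:
            clock = max(clock, events[k][0])      # idle until the next arrival
        while k < len(events) and events[k][0] <= clock:
            a, i, b = events[k]
            heapq.heappush(ready, (b, i))         # ordered by remaining work
            k += 1
        rem, i = heapq.heappop(ready)
        # run only until the next arrival, so a shorter newcomer can preempt
        next_arrival = events[k][0] if k < len(events) else float("inf")
        run = min(rem, next_arrival - clock)
        clock += run
        if run < rem:
            heapq.heappush(ready, (rem - run, i))  # preempted with work left
        else:
            finish[i] = clock
    return finish

print(srt([(0, 8), (1, 4), (2, 9), (3, 5)]))   # {1: 5, 3: 10, 0: 17, 2: 26}
```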

Nonpreemptive Priority Algorithm
- the processor is allocated to the ready process with the highest priority
- the shortest-job-first (SJF) algorithm is a priority algorithm with the priority defined as the expected next CPU burst time
- i.e., choose the priority equal to the expected CPU burst time (p = t)

Preemptive Priority Algorithm
- when a process arrives at the ready queue with a higher priority than the running process, the new one preempts the running process
- low-priority processes can starve
- aging can be used to prevent starvation
- aging: if a process has not received service for a long time, its priority is increased
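One way aging might be implemented is to periodically boost the priority of processes that have waited too long in the ready queue. A minimal sketch, where the aging interval, the boost amount, and the convention that a lower number means a higher priority are all illustrative assumptions:

```python
def age_priorities(priorities, waited, aging_interval=10, boost=1):
    """Raise the priority of ready processes that have waited too long."""
    for pid in priorities:
        if waited[pid] >= aging_interval:
            priorities[pid] = max(0, priorities[pid] - boost)  # smaller number = higher priority
            waited[pid] = 0                                    # restart its waiting clock

priorities = {"A": 5, "B": 9}      # pid -> current priority
waited = {"A": 3, "B": 12}         # pid -> time spent waiting in the ready queue
age_priorities(priorities, waited)
print(priorities)                  # {'A': 5, 'B': 8}: B waited long enough to be promoted
```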

Multilevel Feedback-Queue Scheduling
- unlike the multilevel queue scheduling algorithm, where processes are permanently assigned to a queue, multilevel feedback-queue scheduling allows a process to move between queues; this movement is driven by the CPU-burst behaviour of the process
- a new process is (usually) inserted at the tail of the top-level FIFO queue
- priorities are implicit in the position of the queue in which the ready process is currently waiting
- when a process is running on the processor, the OS must remember which queue it came from
- if the process voluntarily relinquishes the CPU, it leaves its queue, and when it becomes ready again it is inserted at the tail of the same queue it left earlier
- if a process uses up a full time quantum, it is moved down one queue and placed at the tail of that queue; each lower-level queue has a longer time quantum than the queue above it
- this scheme leaves I/O-bound and interactive processes in the higher-priority queues
- in addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue; this form of aging also helps prevent starvation of lower-priority processes
- the scheduling algorithm within each queue is usually RR
- the scheduler always picks processes from the head of the highest-level non-empty queue; only when the highest-level queue is empty does it take a process from the next lower-level queue (there is no starvation because of aging)
- this is close to what the traditional UNIX scheduler uses
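A simplified MLFQ sketch follows; it models only CPU-bound demotion and ignores I/O, aging, and arrivals during execution, and the number of levels and the quanta are example values:

```python
from collections import deque

def mlfq(bursts, quanta=(2, 4, 8)):
    """Run CPU-bound bursts through feedback queues; lower levels get longer quanta."""
    queues = [deque() for _ in quanta]
    remaining = {pid: b for pid, b in enumerate(bursts)}
    for pid in remaining:
        queues[0].append(pid)               # new processes enter the top-level queue
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        pid = queues[level].popleft()
        run = min(quanta[level], remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = clock
        else:
            # used the full quantum: demote one level (or stay in the lowest queue)
            queues[min(level + 1, len(quanta) - 1)].append(pid)
    return finish

print(mlfq([20, 3, 1]))   # {2: 5, 1: 10, 0: 24}: the short, interactive-like jobs finish first
```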

Priority Based Dynamic Round Robin
- an Intelligent Time Slice (ITS) is calculated that allocates a different time quantum to each process, based on its priority, its (shortest) CPU burst time, and a context-switch-avoidance time

Fair-share scheduling
- a scheduling algorithm for computer operating systems in which CPU usage is distributed equally among system users or groups, as opposed to being distributed equally among processes
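One possible way to realise fair-share selection is to always serve the user with the least accumulated CPU time; the function and data below are an illustrative sketch, not a description of any particular system:

```python
def pick_fair_share(usage_by_user, ready_by_user):
    """Pick the next process from the most under-served user that has ready work."""
    candidates = [u for u, procs in ready_by_user.items() if procs]
    user = min(candidates, key=lambda u: usage_by_user[u])   # least CPU time used so far
    return user, ready_by_user[user][0]

usage = {"alice": 40, "bob": 10}                          # accumulated CPU time per user
ready = {"alice": ["a1", "a2", "a3"], "bob": ["b1"]}      # ready processes per user
print(pick_fair_share(usage, ready))   # ('bob', 'b1') even though alice has 3 processes
```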

Lottery scheduling
- a probabilistic scheduling algorithm for processes in an operating system
- each process is assigned some number of lottery tickets, and the scheduler draws a random ticket to select the next process
- the distribution of tickets need not be uniform; granting a process more tickets gives it a relatively higher chance of being selected
- this technique can be used to approximate other scheduling algorithms, such as shortest job next
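A minimal lottery-draw sketch (the ticket counts are example values):

```python
import random

def lottery_pick(tickets):
    """tickets: dict of pid -> ticket count. Returns the winning pid."""
    total = sum(tickets.values())
    winner = random.randrange(total)       # draw a ticket number in [0, total)
    for pid, count in tickets.items():
        if winner < count:
            return pid                     # the drawn ticket belongs to this process
        winner -= count

tickets = {"A": 75, "B": 20, "C": 5}       # A wins about 75% of the draws
print(lottery_pick(tickets))
```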

The O(n) scheduler
- the scheduler used in the Linux kernel between versions 2.4 and 2.6
- it divides processor time into epochs; within each epoch, every task can execute up to its time slice, and if a task does not use all of its time slice, half of the remaining slice is added to its slice for the next epoch, allowing it to execute longer
- it was an improvement over the previously used, very simple scheduler based on a circular queue
- if the number of processes is large, the scheduler may itself use a noticeable amount of processor time: picking the next task to run requires iterating through all currently runnable tasks, so the scheduler runs in O(n) time, where n is the number of runnable processes
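A much-simplified sketch of the epoch idea described above; the base slice, the carry-over of half the unused slice, and the selection rule are illustrative only (the real kernel used a more involved "goodness" calculation):

```python
def start_epoch(tasks, base_slice=10):
    """Begin a new epoch: refill slices, carrying over half of what was left unused."""
    for t in tasks:
        t["slice"] = base_slice + t.get("slice", 0) // 2

def pick_next(tasks):
    """O(n): scan every runnable task and pick the one with the largest remaining slice."""
    runnable = [t for t in tasks if t["slice"] > 0]
    if not runnable:
        return None                  # epoch over: every slice has been used up
    return max(runnable, key=lambda t: t["slice"])

tasks = [{"name": "A", "slice": 4}, {"name": "B", "slice": 0}]
start_epoch(tasks)
print(pick_next(tasks)["name"])      # A: it kept half of its unused slice from last epoch
```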

The O(1) Scheduler vs. the Completely Fair Scheduler (O(log n))
- the O(1) scheduler was used in the 2.6 kernel prior to 2.6.23
- the Completely Fair Scheduler has been used since 2.6.23