CPU Scheduling Scheduling processes (or kernel-level threads) onto the CPU is one of the most important OS functions. The CPU is an expensive resource, so we want to utilize it well, and a system may have more than one. With multiprogramming there are many processes (threads) to pick from, and the scheduler runs an algorithm to decide which one gets the CPU next. The goal is to make efficient use of the CPU(s).

CPU Scheduling Remember that process switching is expensive: state must be saved and restored. I/O-bound and CPU-bound processes: a CPU-bound process uses the CPU for long stretches between I/O requests, while an I/O-bound process uses the CPU only in short bursts and spends most of its time doing I/O. When a process is doing I/O, take it off the CPU and let a process in the ready state run.

CPU Scheduling Non-preemptive algorithms let a process run until it gives up the CPU or blocks for I/O. Preemptive algorithms interrupt a process after it has used up its time quantum (its limit on CPU time) and let another one run.

CPU Scheduling Goals
- Fairness – give all processes a chance to have the CPU.
- Balance – keep all parts of the system that can be used busy.
- Throughput – maximize the number of jobs completed per unit time.
- Turnaround time – minimize the average time for a job to finish.
- CPU utilization – keep the CPU busy.
- Response time – be responsive to interactive users.
- Policy enforcement – enforce policy and meet user expectations.
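To make the timing goals concrete, here is a minimal sketch (Python, with made-up job data; the field names and values are assumptions, not from the slides) that computes throughput, average turnaround time, and average waiting time for a schedule that has already finished:

```python
# Hypothetical completed schedule: arrival time, finish time, and total CPU time, all in ms.
jobs = [
    {"arrival": 0, "finish": 30, "cpu": 20},
    {"arrival": 5, "finish": 45, "cpu": 10},
    {"arrival": 10, "finish": 60, "cpu": 25},
]

span = max(j["finish"] for j in jobs) - min(j["arrival"] for j in jobs)
throughput = len(jobs) / span                               # jobs completed per ms
turnaround = [j["finish"] - j["arrival"] for j in jobs]     # submission to completion
waiting = [t - j["cpu"] for t, j in zip(turnaround, jobs)]  # turnaround minus actual CPU time

print(f"throughput     = {throughput:.3f} jobs/ms")
print(f"avg turnaround = {sum(turnaround) / len(jobs):.1f} ms")
print(f"avg waiting    = {sum(waiting) / len(jobs):.1f} ms")
```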

CPU Scheduling Goals Some of these goals contradict each other. For example, pushing utilization higher tends to worsen response time, and improving response time tends to decrease utilization. Real-time systems add goals of their own: meeting deadlines and predictability. We'll look at some algorithms next.

First-Come First-Served (FCFS) A simple, non-preemptive algorithm: let the job that arrived first run first. There is one queue of ready processes. A process is taken off the CPU when it blocks, and the next in line runs; when the process is done blocking, it goes to the end of the line. It works like a line of people being served, and it is very simple to understand and to code.
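A minimal FCFS sketch, assuming each process is described only by (name, arrival time, CPU burst) and ignoring blocking; the function name and sample jobs are mine, not from the slides:

```python
def fcfs(jobs):
    """jobs: list of (name, arrival, burst). Returns (name, start, finish) per job."""
    schedule, clock = [], 0
    for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):  # serve in arrival order
        start = max(clock, arrival)        # CPU may sit idle until the next arrival
        finish = start + burst             # run to completion (non-preemptive)
        schedule.append((name, start, finish))
        clock = finish
    return schedule

# Example: one long CPU-bound job arriving just ahead of two short ones.
for name, start, finish in fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]):
    print(name, "start", start, "finish", finish)
```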

Problems with FCFS FCFS favors CPU-bound over I/O-bound processes and penalizes short and I/O-bound processes, since turnaround time = execution time + waiting time. Think of waiting in line at the store or bank: one slow customer ahead of you delays everyone behind. The small comparison below makes this concrete.
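A quick back-of-the-envelope comparison (the 24/3/3 ms bursts are illustrative values, not from the slides) of how much one long job ahead of two short ones hurts the averages under FCFS:

```python
def fcfs_averages(bursts):
    # All jobs arrive at time 0; with FCFS each job waits for everything ahead of it.
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)
        clock += b
    turnarounds = [w + b for w, b in zip(waits, bursts)]
    return sum(waits) / len(bursts), sum(turnarounds) / len(bursts)

print(fcfs_averages([24, 3, 3]))   # long job first -> avg wait 17.0, avg turnaround 27.0
print(fcfs_averages([3, 3, 24]))   # long job last  -> avg wait  3.0, avg turnaround 13.0
```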

Shortest Job First (SJF) Assumes run times are known in advance. Non-preemptive: pick the shortest job in the queue and run it to completion. Average turnaround time is optimal (when all jobs are available at once), but long jobs are penalized. Gives good response time for short processes.
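A non-preemptive SJF sketch under the same simplifying assumptions (burst lengths known in advance, no I/O); the names and sample jobs are made up:

```python
def sjf(jobs):
    """jobs: list of (name, arrival, burst). Non-preemptive shortest-job-first."""
    pending = sorted(jobs, key=lambda j: j[1])   # by arrival time
    schedule, clock = [], 0
    while pending:
        ready = [j for j in pending if j[1] <= clock]
        if not ready:                            # nothing has arrived yet: jump ahead
            clock = pending[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda j: j[2])  # shortest known burst
        pending.remove((name, arrival, burst))
        schedule.append((name, clock, clock + burst))
        clock += burst
    return schedule

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```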

Shortest Remaining Time Next The preemptive version of SJF: always pick the process with the shortest remaining time. If a newly arrived job needs less time than the one currently running, it gets the CPU. Gives good response time for short jobs, but the required run times must be known, and long processes may starve.
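A preemptive shortest-remaining-time sketch that steps the clock one time unit at a time for clarity (inefficient, but easy to follow); again just a sketch under the known-run-time assumption:

```python
def srtf(jobs):
    """jobs: list of (name, arrival, burst). Returns finish time per job."""
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                                # CPU idle until something arrives
            clock += 1
            continue
        n = min(ready, key=lambda x: remaining[x])   # least remaining time wins
        remaining[n] -= 1                            # run it for one time unit
        clock += 1
        if remaining[n] == 0:
            finish[n] = clock
            del remaining[n]
    return finish

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
```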

Round-Robin Scheduling Widely used, fair, simple, old, and preemptive: time-slicing. Define a time quantum – the amount of time a process may stay on the CPU. Once the quantum is used up, the process is taken off (it is also taken off when it blocks) and the next job on the list runs. Each context switch is overhead (wasted CPU), so make the quantum long enough.
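A minimal round-robin sketch assuming every job is already in the ready queue and never blocks; the quantum is a parameter, and the sample bursts are made up:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, burst). Returns finish time per job."""
    queue = deque(jobs)                      # FIFO ready queue
    finish, clock = {}, 0
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)             # run for one quantum or until done
        clock += run
        if left - run > 0:
            queue.append((name, left - run)) # not finished: back of the line
        else:
            finish[name] = clock
    return finish

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
```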

Round-Robin Scheduling (cont) Too long a quantum gives poor interactive response; performance is better when preemption is rare and most switches happen because of I/O. Round-robin treats all jobs as equally important.
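One rough way to see the quantum trade-off (the 1 ms switch cost is an assumed figure, not from the slides): if every quantum ends in a context switch, the fraction of CPU lost is switch_cost / (quantum + switch_cost).

```python
def switch_overhead(quantum_ms, switch_ms):
    # Fraction of CPU spent on context switches if every quantum is fully used.
    return switch_ms / (quantum_ms + switch_ms)

for q in (1, 4, 20, 100):
    print(f"quantum {q:>3} ms -> {switch_overhead(q, 1):.1%} overhead")
```

A tiny quantum burns a large fraction of the CPU on switching, while a huge quantum degenerates toward FCFS and hurts interactive response.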

Priority Scheduling Assign priorities and give the CPU to the highest-priority process. Priorities may be static or dynamic (changing as time goes by), and a quantum may also be used. Processes can be grouped into priority classes with round-robin within each class. Starvation of low-priority jobs is possible. Unix nice values are an example of user-adjustable priorities.
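A sketch of non-preemptive priority scheduling with simple aging to limit starvation; the aging rule (boost every waiting job by one step per completed job) and the sample processes are assumptions for illustration only:

```python
def priority_schedule(jobs, aging=1):
    """jobs: list of (name, priority, burst); a lower number means a higher priority.
    Non-preemptive, with waiting jobs aged so they are not starved forever."""
    pending = {name: [prio, burst] for name, prio, burst in jobs}
    order, clock = [], 0
    while pending:
        name = min(pending, key=lambda n: pending[n][0])   # best (lowest) priority value
        prio, burst = pending.pop(name)
        order.append((name, clock, clock + burst))
        clock += burst
        for entry in pending.values():                     # everyone who waited gets a boost
            entry[0] = max(0, entry[0] - aging)
    return order

print(priority_schedule([("editor", 2, 3), ("batch", 9, 10), ("daemon", 5, 4)]))
```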

Multiple Queues Idea: give CPU-bound processes a large quantum but schedule them infrequently. Use priority classes: the highest class runs for one quantum, the next class for two quanta, the next for four, and so on. If a process uses up its whole quantum (it is CPU-bound), it is moved to a lower class; interactive behavior can raise its priority again.
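A small multilevel-feedback sketch in the spirit of this slide: a few queues, quanta doubling per level, and a job that exhausts its quantum is demoted; the specific quanta, queue count, and sample jobs are assumptions:

```python
from collections import deque

def mlfq(jobs, quanta=(1, 2, 4)):
    """jobs: list of (name, burst). Queue 0 is highest priority; quantum doubles per level."""
    queues = [deque() for _ in quanta]
    for job in jobs:
        queues[0].append(job)                                # everything starts at the top
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, left = queues[level].popleft()
        run = min(quanta[level], left)
        clock += run
        if left - run == 0:
            finish[name] = clock
        else:
            demoted = min(level + 1, len(queues) - 1)        # used its full quantum: drop a level
            queues[demoted].append((name, left - run))
    return finish

print(mlfq([("cpu_hog", 10), ("interactive", 1)]))
```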

Fair Share Scheduling (FSS) Take into account who owns each process and be fair to users (or groups) rather than to individual processes. Added in Solaris 9, and very useful with zones (covered soon). Example: give group A a share of 50, group B a share of 30, and group C a share of 20.
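This is not Solaris's actual FSS implementation, just a toy sketch of the idea: pick the group whose CPU consumption is furthest below its entitled share, then run one of that group's processes. All names and numbers are illustrative.

```python
def fair_share_pick(groups):
    """groups: {group: {"share": int, "used": cpu_time_so_far, "procs": [...]}}.
    Returns the group that is currently furthest below its entitled fraction."""
    total_share = sum(g["share"] for g in groups.values())
    total_used = sum(g["used"] for g in groups.values()) or 1   # avoid divide-by-zero
    def deficit(name):
        entitled = groups[name]["share"] / total_share          # e.g. 50/100 for group A
        consumed = groups[name]["used"] / total_used
        return consumed - entitled                               # most negative = most starved
    return min(groups, key=deficit)

groups = {
    "A": {"share": 50, "used": 300, "procs": ["a1", "a2"]},
    "B": {"share": 30, "used": 100, "procs": ["b1"]},
    "C": {"share": 20, "used": 250, "procs": ["c1"]},
}
print(fair_share_pick(groups))   # B has had the least CPU relative to its share
```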

Some other algorithms
- Shortest Process Next – run the process with the shortest estimated time, based on its past behavior.
- Guaranteed Scheduling – for n users (or processes), let each get 1/n of the CPU.
- Lottery Scheduling – hand out lottery tickets whose prize is time on the CPU, and give higher-priority processes more tickets (a small sketch follows below).
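A minimal lottery-scheduling sketch: each process holds some tickets, and every scheduling decision draws a winning ticket at random; the process names and ticket counts are made up.

```python
import random

def lottery_pick(tickets):
    """tickets: {process: number_of_tickets}. Returns the holder of the winning ticket."""
    draw = random.randrange(sum(tickets.values()))   # winning ticket number
    for proc, count in tickets.items():
        if draw < count:                             # ticket falls in this process's range
            return proc
        draw -= count

# More tickets -> proportionally more CPU over time.
tickets = {"high_prio": 75, "normal": 20, "background": 5}
wins = {p: 0 for p in tickets}
for _ in range(10_000):
    wins[lottery_pick(tickets)] += 1
print(wins)   # roughly 7500 / 2000 / 500
```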

Real-Time Scheduling Work must be completed within a fixed period of time. Hard real time – absolute deadlines must be met. Soft real time – missing a deadline is undesirable, but an occasional miss is acceptable.

Thread Scheduling With only user-level threads, the kernel schedules the process and the process's own thread scheduler schedules its threads. With kernel-level threads, the kernel picks a thread to run.

Solaris Process Scheduling Kernel threads are what get scheduled. Solaris scheduling is derived from SVR4 and inherited three scheduling classes from it:
- Time Share (TS)
- Real Time (RT)
- System (SYS)
Later the Interactive (IA) class was added.

Solaris Process Classes
- Time Share (TS) – as threads run, their priority gets worse; as they wait, it gets better. Designed to allocate the CPU fairly. A thread runs until it gives up the CPU (blocks) or uses up its time quantum.
- Real Time (RT) – for processes with real-time requirements; they stay on the CPU as long as they need it.
- System (SYS) – used by the OS itself.
- Interactive (IA) – gives better response to GUI desktop processes.

Solaris Process Classes The Fixed Priority class (FX) was added more recently; it gives processes a fixed priority. TS, IA, RT, and FX are dynamically loadable kernel modules, while SYS is integral to the kernel. The priocntl -l command lists the scheduling classes configured on the system. Solaris uses global priorities, where a higher number means a better priority.

Solaris Process Classes The global priority ranges are ordered, from high to low: interrupts, Real Time, System, then Time Share and Interactive. If the RT class is not loaded, interrupt threads sit just above the System class. ps -eLc shows the class and priority of each LWP.

More Solaris Process Classes The IA class is not much different from TS: they share the same dispatch table and global priority range. Lower-priority threads get a larger time quantum; the quantum shrinks as priority increases, because higher-priority threads get scheduled more often. Time quanta are typically tens of milliseconds. A thread's priority changes over time, and every thread's priority is re-evaluated within every second.
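The table below is purely illustrative (the real values live in the ts_dptbl dispatch table and differ); it only mirrors the trend described above, that a lower TS priority gets a larger quantum:

```python
# Hypothetical TS-style dispatch table: {priority level: quantum in ms}.
# These numbers are made up for illustration; they are not the real Solaris values.
ts_table = {0: 200, 10: 160, 20: 120, 30: 80, 40: 40, 50: 20, 59: 20}

def quantum_for(priority):
    # Use the nearest table entry at or below the given priority.
    level = max(p for p in ts_table if p <= priority)
    return ts_table[level]

print(quantum_for(5), quantum_for(45))   # lower priority gets more time per dispatch
```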

More Solaris Process Classes CPU hogs end up in the low priority range because they keep using up their quantum; a thread that does not use its whole quantum has its priority increased. An IA-class process gets a priority boost when it owns the window with the current focus. The SYS class has no dispatch table and no time quantum: there is no time-slicing and no re-prioritization, and a SYS thread runs until it voluntarily gives up the processor.

Priority Inversion Happens when a lower-priority thread prevents a higher-priority thread from running because it holds a critical resource the higher-priority thread needs. It is solved by priority inheritance: the lower-priority thread temporarily inherits the higher priority and continues to run until it releases the resource. This prevents the higher-priority thread from being stalled indefinitely (a deadlock-like situation).
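A toy sketch of priority inheritance (not actual kernel code; the class and thread names are invented for illustration): when a higher-priority thread blocks on a lock, the lock's owner temporarily runs at the blocked thread's priority.

```python
class Thread:
    def __init__(self, name, priority):
        self.name, self.base, self.priority = name, priority, priority

class InheritingLock:
    """Toy mutex: the owner inherits the highest priority of any waiter."""
    def __init__(self):
        self.owner, self.waiters = None, []

    def acquire(self, thread):
        if self.owner is None:
            self.owner = thread
            return True
        self.waiters.append(thread)
        # Boost the owner so it can finish and release the resource.
        self.owner.priority = max(self.owner.priority, thread.priority)
        return False

    def release(self):
        self.owner.priority = self.owner.base     # drop back to its own priority
        self.owner = self.waiters.pop(0) if self.waiters else None

low, high = Thread("low", priority=10), Thread("high", priority=90)
lock = InheritingLock()
lock.acquire(low)          # low-priority thread grabs the resource first
lock.acquire(high)         # high-priority thread blocks...
print(low.priority)        # ...and "low" now runs at 90 until it releases the lock
```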

Linux Scheduling Linux schedules threads and has three classes (a small sketch of selecting these policies follows the list):
1. Real-Time FIFO – highest priority; a thread is not preempted except by a newly ready real-time thread with higher priority.
2. Real-Time RR – real-time threads that are preemptable when their time quantum expires; neither real-time class works with actual deadlines.
3. Timesharing – the standard class for ordinary threads.
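On Linux these classes correspond to the SCHED_FIFO, SCHED_RR, and SCHED_OTHER policies, which Python's os module exposes. A hedged sketch of switching a process between them (Linux only, and the real-time policies need root or CAP_SYS_NICE; the helper names are my own):

```python
import os

def make_realtime_rr(pid=0, priority=None):
    """Move a process (0 = the caller) into the real-time round-robin class."""
    if priority is None:
        priority = os.sched_get_priority_min(os.SCHED_RR)   # lowest RT priority
    os.sched_setscheduler(pid, os.SCHED_RR, os.sched_param(priority))

def back_to_timesharing(pid=0):
    """Return the process to the standard timesharing class (SCHED_OTHER)."""
    os.sched_setscheduler(pid, os.SCHED_OTHER, os.sched_param(0))

if __name__ == "__main__":
    # By default a process reports the timesharing policy (SCHED_OTHER == 0).
    print("current policy:", os.sched_getscheduler(0))
```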