CE01000-3 Operating Systems Lecture 8: Process Scheduling continued and an introduction to process synchronisation

Overview of lecture
In this lecture we will be looking at:
more scheduling algorithms – priority scheduling, round robin scheduling, multilevel queue scheduling
an introduction to process synchronisation, including: the critical-section problem and mutual exclusion; hardware support for process synchronisation

Priority Scheduling
A priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority (in some schemes the smallest integer denotes the highest priority). Priority scheduling can be preemptive or non-preemptive. SJF is effectively priority scheduling in which the priority is the predicted length of the next CPU burst.

Priority Scheduling (Cont.)
Problem: starvation – low-priority processes may never execute. Solution: aging – as time progresses, increase the priority of processes that have been waiting.
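As a rough illustration (not code from the lecture), a priority scheduler with aging might select the next process as in the C sketch below; the process fields, the AGING_INTERVAL constant and the aging formula are assumptions, and a smaller number is taken to mean a higher priority.

#include <stddef.h>

/* Hypothetical process descriptor: a smaller 'priority' value means a higher priority. */
struct process {
    int priority;    /* base priority assigned to the process            */
    int wait_time;   /* how long the process has been in the ready queue */
};

/* Aging: every AGING_INTERVAL time units spent waiting improves (lowers)
 * the effective priority by one, so starved processes eventually get picked. */
enum { AGING_INTERVAL = 100 };

static int effective_priority(const struct process *p)
{
    return p->priority - p->wait_time / AGING_INTERVAL;
}

/* Pick the ready process with the best (lowest) effective priority. */
struct process *select_next(struct process ready[], size_t n)
{
    struct process *best = NULL;
    for (size_t i = 0; i < n; i++)
        if (best == NULL || effective_priority(&ready[i]) < effective_priority(best))
            best = &ready[i];
    return best;
}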

Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), typically of the order of milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units.

Round Robin (RR) (Cont.)
Performance: if q is large, RR behaves much like FCFS; if q is small, the overhead of context switching grows – when q becomes comparable to the context-switch time, little useful work gets done because the CPU spends most of its time switching.
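For illustration, with assumed figures: if a context switch costs about 0.1 ms, then with q = 10 ms only about 0.1 / 10.1, i.e. roughly 1%, of the CPU time is lost to switching, whereas with q = 0.1 ms about half of all CPU time goes on context switches and very little useful work gets done.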

Example: RR with Time Quantum = 20
Process   Burst Time
P1        53
P2        17
P3        68
P4        24
The Gantt chart is:
P1: 0-20, P2: 20-37, P3: 37-57, P4: 57-77, P1: 77-97, P3: 97-117, P4: 117-121, P1: 121-134, P3: 134-154, P3: 154-162
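From this schedule the waiting times can be read off: P1 waits 0 + (77 - 20) + (121 - 97) = 81, P2 waits 20, P3 waits 37 + (97 - 57) + (134 - 117) = 94, and P4 waits 57 + (117 - 77) = 97, giving an average waiting time of (81 + 20 + 94 + 97) / 4 = 73 time units.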

Typically, RR has higher average turnaround than SJF, but better response.

How a Smaller Time Quantum Increases Context Switches

Turnaround Time Varies With The Time Quantum

Multilevel Queue
The ready queue is partitioned into separate queues, with processes placed according to some property, e.g. foreground (interactive) and background (batch) queues, or system, interactive and batch queues. Each queue may have its own scheduling algorithm, e.g. foreground – RR, background – FCFS.

Multilevel Queue (Cont.)
Scheduling must also be done between the queues. Fixed-priority scheduling (i.e. serve everything in the foreground queue, then the background queue) is simple but risks starvation of the lower-priority queue. Time slicing – each queue gets a certain share of the CPU time, which it schedules amongst its own processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS), as sketched below.
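A minimal sketch of that time-slice split (the names and the decision-counting approach are assumptions, not the lecture's design); a real scheduler would account for actual CPU time used and would fall back to the other queue when one is empty:

/* Decide which queue to serve on each scheduling decision: the foreground queue
 * gets 4 out of every 5 decisions (about 80%), the background queue 1 (about 20%). */
enum queue_id { FOREGROUND, BACKGROUND };

enum queue_id pick_queue(unsigned long decision_count)
{
    return (decision_count % 5 == 4) ? BACKGROUND : FOREGROUND;
}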

Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way. A multilevel-feedback-queue scheduler is defined by the following parameters: the number of queues; the scheduling algorithm for each queue; the method used to determine when to upgrade a process;

Multilevel Feedback Queue (Cont.)
the method used to determine when to demote a process; and the method used to determine which queue a process will enter when it needs service.

Example of Multilevel Feedback Queues

Example of Multilevel Feedback Queue
Three queues: Q0 – time quantum 8 milliseconds; Q1 – time quantum 16 milliseconds; Q2 – FCFS. Scheduling: a new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.

Example of Multilevel Feedback Queue (Cont.)
At Q1 the job is again served in FCFS order and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
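A minimal C sketch of this three-queue policy (the structure and names are assumptions, not code from the lecture): a job starts in Q0, is demoted to Q1 if it uses up its 8 ms quantum without finishing, and is demoted again to Q2 if it also uses up the 16 ms quantum; Q2 is treated as FCFS by giving it a quantum of 0, meaning run to completion.

/* Quanta for the three feedback queues; 0 means no preemption (FCFS). */
static const int quantum[3] = { 8, 16, 0 };

struct job {
    int level;          /* current queue: 0, 1 or 2     */
    int remaining_ms;   /* CPU time the job still needs */
};

/* Give 'j' one turn on the CPU and return the time actually used.
 * A job that exhausts its whole quantum without finishing is demoted. */
int run_once(struct job *j)
{
    int q = quantum[j->level];
    int slice = (q == 0 || j->remaining_ms < q) ? j->remaining_ms : q;

    j->remaining_ms -= slice;

    if (j->remaining_ms > 0 && j->level < 2)
        j->level++;     /* used the full quantum: move down one queue */

    return slice;
}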

Process Synchronization
Topics: background; the critical-section problem; synchronization hardware; semaphores (a later lecture); classical problems of synchronization (a later lecture).

Background
Concurrent access to shared data may result in data inconsistency. Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes. For example, consider two processes: a producer that writes data into a buffer, and a consumer that reads data from the buffer.

Background (Cont.)
Suppose that the producer-consumer code has a shared variable counter, initialized to 0, incremented each time a new item is added to the buffer, and decremented when one is removed.

Bounded-Buffer – Shared-Memory Solution
Data structures: buffer – an array holding the shared data, organised as a circular buffer; in-index – where the producer puts the next item; out-index – where the consumer takes the next item; counter – the number of items currently in the buffer; n – the size of the buffer.
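In C-like form these shared data structures might be declared as follows (a sketch that assumes a buffer of integers and these particular names):

#define N 10                /* n: size of the bounded buffer            */

int buffer[N];              /* circular buffer holding the shared data  */
int in_index  = 0;          /* next free slot for the producer to fill  */
int out_index = 0;          /* next full slot for the consumer to empty */
int counter   = 0;          /* number of items currently in the buffer  */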

Bounded-Buffer (Cont.)
Producer process:
repeat
    ...
    produce a data item
    ...
    /* buffer is full when counter == n */
    while counter == n do no-op;
    buffer[in-index] = data item;
    in-index = (in-index + 1) modulo n;
    counter = counter + 1;
until false

Bounded-Buffer (Cont.)
Consumer process:
repeat
    /* buffer is empty when counter == 0 */
    while counter == 0 do no-op;
    data item = buffer[out-index];
    out-index = (out-index + 1) modulo n;
    counter = counter - 1;
    ...
    consume the data item
    ...
until false

Bounded-Buffer (Cont.)
counter = counter + 1; and counter = counter - 1; must each be executed atomically. Code such as counter = counter + 1 may be implemented at machine level by several instructions, e.g.
    move counter, reg1
    add  #1, reg1
    move reg1, counter

Bounded-Buffer (Cont.)
Multiprogrammed execution of processes that each access shared data means that the instructions of the different processes accessing the data will be interleaved in some arbitrary order. The outcome of the execution (the final value of the data) therefore depends on the order in which the accesses occur. To prevent this we need to be able to ensure that only one process can access the shared data at a time.

Example interleaving (P1 increments counter, P2 decrements it; each process has its own saved copy of reg1)
P1: move counter, reg1
    -- P1 interrupted --
P2: move counter, reg1
    -- P2 interrupted --
P1: add #1, reg1
P1: move reg1, counter
    -- P1 interrupted --
P2: sub #1, reg1
P2: move reg1, counter      (P2 overwrites counter, so P1's increment is lost)
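This lost update is easy to reproduce. The small pthreads program below (an illustration, not part of the lecture) has one thread repeatedly increment and another repeatedly decrement an unprotected shared counter; it usually finishes with a value other than 0 because updates are lost exactly as in the interleaving above.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static volatile long counter = 0;    /* shared and deliberately unprotected */

static void *incrementer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        counter = counter + 1;       /* read-modify-write: not atomic */
    return NULL;
}

static void *decrementer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        counter = counter - 1;       /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, incrementer, NULL);
    pthread_create(&t2, NULL, decrementer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* expected 0, but usually is not */
    return 0;
}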

The Critical-Section Problem
Assume you have N processes all competing to use some shared data. Each process has a code segment, called its critical section, in which the shared data is accessed. Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

Critical-Section Problem (Cont.)
Execution of critical sections must be mutually exclusive – this requires synchronisation. The general structure of each process is:
repeat
    entry section        // controls entry to the critical section
    critical section
    exit section         // manages exit from the critical section
    remainder section    // other code
until false;

Solution to Critical-Section Problem
Three conditions must be met by a solution to the critical-section problem:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.

Solution to Critical-Section Problem (Cont.)
3. Bounded waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Synchronization Hardware
Many processors provide instructions that, as part of one single instruction, both test a word for a value and change that value; being a single instruction, it is atomic (indivisible). E.g. the test-and-set instruction, defined below as if it were a method:
boolean method TestAndSet (boolean wordToBeTested) {
    boolean oldValue = wordToBeTested;   // remember the original value of the word
    wordToBeTested = true;               // set the value of the target word to true
    return oldValue;                     // return the original value
}

Mutual Exclusion with Test-and-Set
Shared data: var lock: boolean (initially false)
Process Pi:
repeat
    while TestAndSet(lock) do no-op;   // spin until the lock was previously false
    critical section
    lock := false;                     // release the lock
    remainder section
until false;
However, without further modification this does not satisfy the bounded-waiting condition.
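For comparison, modern C exposes the same idea through C11 atomics; the sketch below is an illustration rather than the lecture's code, using atomic_flag_test_and_set, which atomically returns the old value of a flag and sets it, in the role of TestAndSet. It has the same limitation: it gives mutual exclusion but not bounded waiting.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* shared lock, initially clear (false) */

void enter_critical_section(void)
{
    /* Atomically read the old value of the flag and set it; keep spinning
     * until the old value was false, i.e. the lock was free. */
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy-wait (no-op) */
}

void exit_critical_section(void)
{
    atomic_flag_clear(&lock);                 /* lock := false */
}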

References
Silberschatz, Galvin and Gagne, Operating System Concepts, Chapters 5 and 6.