1
Chapter 5: Process Scheduling
2
Outline
Basic Concepts: CPU-I/O Burst Cycle, CPU Scheduler, Preemptive Scheduling, Dispatcher
Scheduling Criteria
Scheduling Algorithms: First-Come, First-Served Scheduling; Shortest-Job-First Scheduling; Priority Scheduling; Round-Robin Scheduling; Multilevel Queue Scheduling; Multilevel Feedback-Queue Scheduling
3
Basic Concepts
The CPU is one of the primary computer resources.
Maximum CPU utilization is obtained with multiprogramming: several processes are kept in memory at once.
CPU-I/O Burst Cycle: process execution consists of a cycle of CPU execution and I/O wait.
The distribution of CPU bursts determines how to schedule the processes.
4
Alternating Sequence of CPU And I/O Bursts
5
Histogram of CPU-burst Times
Normally a process starts with frequent short CPU bursts and then stabilizes, shifting to longer but less frequent CPU bursts.
I/O-bound processes usually have many short CPU bursts; CPU-bound programs usually have a few long CPU bursts.
6
CPU Scheduler
Selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. Is it the same as the short-term scheduler? Yes.
CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates (of its own will)
Scheduling under circumstances 1 and 4 only is nonpreemptive (cooperative). All other scheduling is preemptive.
7
Preemptive vs Non Preemptive Scheduling
Non-preemptive: once a process is executing, it keeps the CPU until condition 1 or 4 of the previous slide occurs. MS Windows 3.x, which was a limited form of multitasking system, used this method; it is not at all efficient.
Preemptive: a process in the running state can be brought back to the ready state without any of the conditions of the previous slide being met. Can you guess how it is done?
8
Problems with preemptive
Although better than non-preemptive scheduling, preemptive scheduling still suffers from synchronization problems.
E.g., processes P and Q share some data (a memory location or a file). P does some calculations and changes some of the data, but its time expires before it finishes. Q then reads the data, but is the data valid?
9
Dispatcher
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency: the time it takes for the dispatcher to stop one process and start another running. It should be minimal.
10
Scheduling Criteria
The following criteria are considered when designing a scheduler (a small sketch of how they relate follows below):
CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete their execution per unit time
Turnaround time – amount of time to execute a particular process (the sum of time spent loading, waiting in the ready queue, running on the CPU, and doing I/O)
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request was submitted until the first response is produced, not the final output (relevant for interactive environments): the time it takes to start responding, not the time it takes to output the response
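To make these definitions concrete, here is a minimal sketch (my own illustration, not from the slides) of how waiting, turnaround, and response time relate for a single process; the numbers used are hypothetical:

```python
# Illustrative sketch (not from the slides): how the per-process criteria relate,
# given a process's arrival time, CPU demand (burst), first-run time and completion time.

def process_metrics(arrival, burst, first_run, completion):
    """Return (waiting, turnaround, response) times for one process."""
    turnaround = completion - arrival   # total time spent in the system
    waiting = turnaround - burst        # time spent waiting in the ready queue
    response = first_run - arrival      # time until the CPU is first allocated
    return waiting, turnaround, response

# Hypothetical numbers: a process arrives at t=0, needs 3 time units of CPU,
# first runs at t=24 and completes at t=27.
print(process_metrics(arrival=0, burst=3, first_run=24, completion=27))  # (24, 27, 24)
```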
11
Optimization Criteria
So the best CPU scheduling algorithm is the one that:
maximizes CPU utilization
maximizes throughput
minimizes turnaround time
minimizes waiting time
minimizes response time
12
Outline
Basic Concepts: CPU-I/O Burst Cycle, CPU Scheduler, Preemptive Scheduling, Dispatcher
Scheduling Criteria
Scheduling Algorithms: First-Come, First-Served Scheduling; Shortest-Job-First Scheduling; Priority Scheduling; Round-Robin Scheduling; Multilevel Queue Scheduling; Multilevel Feedback-Queue Scheduling
13
First-Come, First-Served (FCFS) Scheduling
Process / Burst Time: P1 = 24, P2 = 3, P3 = 3
Suppose that the processes arrive in the order P1, P2, P3.
The Gantt chart for the schedule is: P1 (0–24), P2 (24–27), P3 (27–30)
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
14
FCFS Scheduling (Cont.)
Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 (0–3), P3 (3–6), P1 (6–30)
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3, much better than the previous case.
Convoy effect: short processes stuck behind a long process; CPU-bound processes get and hold the CPU, giving poor CPU utilization. A small FCFS sketch follows below.
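As referenced above, a short sketch (my own illustration, assuming all processes arrive at t = 0 in the order given) that reproduces the two FCFS averages:

```python
# Sketch of the FCFS waiting-time computation above,
# assuming all processes arrive at t = 0 in the order given.

def fcfs_waiting_times(bursts):
    """Return the waiting time of each process when run first-come, first-served."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier arrivals finish
        clock += burst
    return waits

order1 = fcfs_waiting_times([24, 3, 3])   # P1, P2, P3 -> [0, 24, 27]
order2 = fcfs_waiting_times([3, 3, 24])   # P2, P3, P1 -> [0, 3, 6]
print(sum(order1) / 3, sum(order2) / 3)   # 17.0 3.0
```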
15
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst and use these lengths to schedule the process with the shortest time (it is really a shortest-next-CPU-burst algorithm). Two schemes:
Nonpreemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
SJF is optimal: it gives the minimum average waiting time for a given set of processes.
16
Example of Non-Preemptive SJF
Process / Burst Time: P1 = 6, P2 = 8, P3 = 7, P4 = 3
SJF Gantt chart: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24)
Average waiting time = (3 + 16 + 9 + 0)/4 = 7; FCFS would give 10.25. A sketch of this computation follows below.
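As referenced above, a small sketch of the non-preemptive SJF computation (assuming all four processes arrive at t = 0; the function name is just illustrative):

```python
# Sketch of non-preemptive SJF for the burst times on this slide,
# assuming all four processes arrive at t = 0.

def sjf_waiting_times(bursts):
    """Return {process_index: waiting_time} when jobs run shortest-burst-first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = {}, 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])   # P1..P4
print(waits)                              # {3: 0, 0: 3, 2: 9, 1: 16}
print(sum(waits.values()) / 4)            # 7.0
```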
17
Shortest-Job-First (SJF) Scheduling
SJF is good for the long-term scheduler in batch systems, where the user specifies an estimated process time at job submission: a lower value may mean faster response, but too low a value will cause a time-limit-exceeded error.
It is difficult to implement at the level of the short-term CPU scheduler, because the length of the next CPU burst can only be predicted/estimated. This can be done from the lengths of previous CPU bursts, using exponential averaging.
18
Examples of Exponential Averaging
τ(n+1) = α·t(n) + (1 − α)·τ(n)
t(n): length of the nth CPU burst
τ(n+1): our predicted value for the next burst
α: weight of recent versus past history, 0 ≤ α ≤ 1
If α = 0: τ(n+1) = τ(n), so recent history does not count.
If α = 1: τ(n+1) = t(n), so only the actual last CPU burst counts.
Since both α and (1 − α) are less than or equal to 1, each successive term in the expansion has less weight than its predecessor.
19
Prediction of the Length of the Next CPU Burst
Initial guess: τ(1) = 10 ms; actual first burst: 6 ms.
Prediction for the 2nd burst, based on the actual value (6) and the last guess (10): τ(2) = 0.5 × 6 + 0.5 × 10 = 8 ms.
Prediction for the 3rd burst, based on the actual value (4) and the last guess (8): τ(3) = 0.5 × 4 + 0.5 × 8 = 6 ms.
… Now you can do it.
20
For example: suppose a process p is given a default expected burst length of 5 time units. When it is run, its actual burst lengths are 10, 10, 10, 1, 1, 1 (although this information is not known in advance to any algorithm). With α = 0.5 the prediction of burst times for this process works as follows.
Let e(1) = 5, as a default value. When process p runs, its first burst actually runs 10 time units, so a(1) = 10.
e(2) = 0.5 × e(1) + 0.5 × a(1) = 0.5 × 5 + 0.5 × 10 = 7.5 — this is the prediction for the second CPU burst.
The actual second CPU burst is 10, so the prediction for the third CPU burst is:
e(3) = 0.5 × e(2) + 0.5 × a(2) = 0.5 × 7.5 + 0.5 × 10 = 8.75
e(4) = 0.5 × e(3) + 0.5 × a(3) = 0.5 × 8.75 + 0.5 × 10 = 9.38
So we predict that the next burst will be close to 10 (9.38), because the recent bursts have been of length 10. At this point the process starts having shorter bursts, with a(4) = 1, so the algorithm gradually adjusts its estimated CPU burst (prediction):
e(5) = 0.5 × e(4) + 0.5 × a(4) = 0.5 × 9.38 + 0.5 × 1 = 5.19
e(6) = 0.5 × e(5) + 0.5 × a(5) = 0.5 × 5.19 + 0.5 × 1 = 3.10
e(7) = 0.5 × e(6) + 0.5 × a(6) = 0.5 × 3.10 + 0.5 × 1 = 2.05
A short code sketch of this predictor follows below.
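The same calculation can be expressed as a short loop; this is only an illustrative sketch of the predictor with α = 0.5 and an initial guess of 5, reproducing the numbers above:

```python
# Sketch of the exponential-averaging predictor described above,
# with alpha = 0.5 and an initial guess of 5.

def predict_bursts(actual_bursts, alpha=0.5, initial_guess=5.0):
    predictions = [initial_guess]                       # e(1)
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * predictions[-1]
        predictions.append(tau)
    return predictions

print(predict_bursts([10, 10, 10, 1, 1, 1]))
# [5.0, 7.5, 8.75, 9.375, 5.1875, 3.09375, 2.046875]
```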
21
Example of Preemptive SJF (Shortest Remaining Time first)
Process / Arrival Time / Burst Time: P1 = 0, 8; P2 = 1, 4; P3 = 2, 9; P4 = 3, 5
Preemptive SJF Gantt chart: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26)
Average waiting time = (9 + 0 + 15 + 2)/4 = 6.5
Non-preemptive SJF on the same processes would give an average waiting time of 7.75. A simulation sketch follows below.
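As referenced above, a sketch (an illustration, not the book's code) of an SRTF simulation that advances one time unit at a time and reproduces the 6.5 average:

```python
# Sketch of an SRTF simulation; the (arrival, burst) pairs are taken from this slide.

def srtf_average_wait(procs):
    """procs: list of (arrival, burst). Return the average waiting time."""
    n = len(procs)
    remaining = [burst for _, burst in procs]
    finish = [0] * n
    clock = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if procs[i][0] <= clock and remaining[i] > 0]
        if not ready:                                # CPU idle until the next arrival
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])   # shortest remaining time first
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    waits = [finish[i] - arrival - burst for i, (arrival, burst) in enumerate(procs)]
    return sum(waits) / n

print(srtf_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))   # 6.5
```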
22
Outline
Basic Concepts: CPU-I/O Burst Cycle, CPU Scheduler, Preemptive Scheduling, Dispatcher
Scheduling Criteria
Scheduling Algorithms: First-Come, First-Served Scheduling; Shortest-Job-First Scheduling; Priority Scheduling; Round-Robin Scheduling; Multilevel Queue Scheduling; Multilevel Feedback-Queue Scheduling
23
Priority Scheduling
A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). It can be preemptive or non-preemptive.
SJF is priority scheduling where the priority is the predicted next CPU burst time.
Problem: starvation – low-priority processes may never execute.
Solution: aging – as time progresses, increase the priority of waiting processes (a small sketch follows below).
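As referenced above, a tiny sketch of priority selection with aging. The aging rule used here (every waiting process's priority number drops by 1 per scheduling decision) is a hypothetical choice for illustration:

```python
# Sketch of priority scheduling with a simple aging rule;
# smaller priority number = higher priority.

def pick_next(ready):
    """ready: list of [priority_number, name]; pops the highest-priority entry."""
    ready.sort()                        # smallest priority number runs first
    chosen = ready.pop(0)
    for entry in ready:                 # aging: boost everyone still waiting
        entry[0] = max(0, entry[0] - 1)
    return chosen

queue = [[5, "P1"], [1, "P2"], [9, "P3"]]
while queue:
    print(pick_next(queue))             # P2 first, then P1, then P3
```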
24
Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units.
Performance: if q is very large, RR degenerates to FIFO (FCFS); if q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.
25
Example of RR with Time Quantum = 20
Process / Burst Time: P1 = 53, P2 = 17, P3 = 68, P4 = 24
The Gantt chart is: P1 (0–20), P2 (20–37), P3 (37–57), P4 (57–77), P1 (77–97), P3 (97–117), P4 (117–121), P1 (121–134), P3 (134–154), P3 (154–162)
Typically RR gives higher average turnaround time than SJF, but better response. A simulation sketch follows below.
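As referenced above, a sketch of round robin with q = 20 that reproduces this Gantt chart (assuming all processes arrive at t = 0 in the listed order):

```python
# Sketch of round-robin scheduling with q = 20 for the burst times on this slide.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst_time}. Return (name, start, end) Gantt-chart segments."""
    queue = deque(bursts.items())
    clock, chart = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        chart.append((name, clock, clock + run))
        clock += run
        if remaining > run:                       # not finished: back to the tail
            queue.append((name, remaining - run))
    return chart

for segment in round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, quantum=20):
    print(segment)
# ('P1', 0, 20), ('P2', 20, 37), ('P3', 37, 57), ('P4', 57, 77), ('P1', 77, 97), ...
```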
26
Time Quantum and Context Switch Time
If the context-switch time is approximately 10 percent of the time quantum, then about 10 percent of the CPU time will be spent on context switching.
Most modern systems have time quanta ranging from 10 to 100 milliseconds, and a context switch typically takes less than 10 microseconds.
Remember: context switches take time.
27
Turnaround Time Varies With The Time Quantum
As a rule of thumb, 80% of CPU bursts should be shorter than the time quantum.
Average turnaround time does not necessarily improve as the time-quantum size increases.
28
Outline
Basic Concepts: CPU-I/O Burst Cycle, CPU Scheduler, Preemptive Scheduling, Dispatcher
Scheduling Criteria
Scheduling Algorithms: First-Come, First-Served Scheduling; Shortest-Job-First Scheduling; Priority Scheduling; Round-Robin Scheduling; Multilevel Queue Scheduling; Multilevel Feedback-Queue Scheduling
29
Multilevel Queue
The ready queue is partitioned into separate queues, e.g. foreground (interactive) and background (batch).
Each queue has its own scheduling algorithm: foreground – RR; background – FCFS.
Scheduling must also be done between the queues:
Fixed-priority scheduling (i.e., serve all from foreground, then from background) – possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to foreground in RR and 20% to background in FCFS.
30
Multilevel Queue Scheduling
31
Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way.
A multilevel-feedback-queue scheduler is defined by the following parameters:
the number of queues
the scheduling algorithm for each queue
the method used to determine when to upgrade a process
the method used to determine when to demote a process
the method used to determine which queue a process will enter when it needs service
32
Example of Multilevel Feedback Queue
Consider a case with three queues:
Q0 – time quantum 8 milliseconds
Q1 – time quantum 16 milliseconds
Q2 – FCFS
Scheduling: a new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to the tail of queue Q1.
At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to the tail of queue Q2. A small sketch follows below.
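As referenced above, a small sketch that traces how a single CPU-bound job of a given total demand is demoted through the three queues (illustration only: competition from other jobs and I/O are ignored):

```python
# Sketch of how one CPU-bound job moves through Q0 (q=8), Q1 (q=16) and Q2 (FCFS).

def mlfq_trace(total_burst):
    """Return (queue, cpu_time_used) segments for one job needing total_burst units."""
    segments, remaining = [], total_burst
    for queue, quantum in (("Q0", 8), ("Q1", 16)):
        if remaining <= 0:
            break
        used = min(quantum, remaining)
        segments.append((queue, used))
        remaining -= used
    if remaining > 0:                    # still unfinished: runs to completion in Q2 (FCFS)
        segments.append(("Q2", remaining))
    return segments

print(mlfq_trace(5))    # [('Q0', 5)]  -- finishes within its first 8 ms quantum
print(mlfq_trace(30))   # [('Q0', 8), ('Q1', 16), ('Q2', 6)]
```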
33
Multilevel Feedback Queues
34
Outline
Basic Concepts: CPU-I/O Burst Cycle, CPU Scheduler, Preemptive Scheduling, Dispatcher
Scheduling Criteria
Scheduling Algorithms: First-Come, First-Served Scheduling; Shortest-Job-First Scheduling; Priority Scheduling; Round-Robin Scheduling; Multilevel Queue Scheduling; Multilevel Feedback-Queue Scheduling
Multi-Processor Scheduling
35
Multiple-Processor Scheduling
CPU scheduling is more complex when multiple CPUs are available. We consider homogeneous processors within a multiprocessor system and load sharing.
Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.
Symmetric multiprocessing (SMP) – each processor is self-scheduling; processors may share a common ready queue or have separate ready queues.
36
Multiple-Processor Scheduling – Processor Affinity
Cache memory: the data most recently accessed by a process populates the cache of the processor it runs on. If the process migrates to another processor, that cache must be invalidated and the new processor's cache repopulated.
Because of this, some operating systems attempt to keep a process running on the same processor – processor affinity.
Soft affinity: the OS attempts to keep a process on the same processor but does not guarantee it.
Hard affinity: the process cannot migrate to another processor.
37
Multiple-Processor Scheduling – Load Balancing
To utilize the full benefit of multiple processors, load balancing distributes the workload evenly across all processors.
It is necessary where each processor has its own private queue of eligible processes, and unnecessary where processors share a common run queue. (Do modern operating systems use private queues or a common one?)
Push migration: move processes from overloaded processors to idle processors.
Pull migration: an idle processor pulls a waiting process from a busy processor.
Is there any link with processor affinity?
38
Multiple-Processor Scheduling – Symmetric Multithreading
Provide multiple logical rather than physical processors to run several threads concurrently (hyper-threading).
Logical processors are created on the same physical processor, presenting a view of many processors to the OS. Each logical processor has its own machine-state registers and its own interrupt handling.
(Figure: logical CPUs sharing a physical CPU, connected to the system bus.)
39
Multiple-Processor Scheduling – Symmetric Multithreading
SMT is a feature provided by the hardware; the OS need not be designed differently.
However, performance gains are possible if the OS is aware of it. Why? Because the scheduler can take the logical/physical distinction into account, for example by spreading threads across separate physical processors before doubling them up on logical ones.
40
Outline
Basic Concepts: CPU-I/O Burst Cycle, CPU Scheduler, Preemptive Scheduling, Dispatcher
Scheduling Criteria
Scheduling Algorithms: First-Come, First-Served Scheduling; Shortest-Job-First Scheduling; Priority Scheduling; Round-Robin Scheduling; Multilevel Queue Scheduling; Multilevel Feedback-Queue Scheduling
Multi-Processor Scheduling
OS Examples (Self Study)
Algorithm Evaluation
41
Algorithm Evaluation: How do we select the best algorithm?
First define the criteria: CPU utilization, response time, throughput. The criteria may combine several measures, e.g. maximize CPU utilization under the constraint that the maximum response time stays within a bound (say 1 second), or maximize throughput subject to a constraint on turnaround time.
Analytic evaluation: the algorithm and some system workload are used to produce a formula or number that gives the performance of the algorithm for that workload.
Approaches: deterministic modeling, queuing models, simulations, implementation.
42
Deterministic Modeling
Takes a predetermined workload and defines the performance of each algorithm for that workload, typically using Gantt charts.
Simple and fast, and gives exact numbers for comparison, but the answers apply only to the cases considered; the performance figures may not be true in general.
Suitable if we are running the same programs over and over again, so we can easily select an algorithm.
Over a set of examples, certain trends can still be analyzed, e.g.: if all processes arrive at time 0, the SJF policy always results in the minimum waiting time; if the arrival times differ, SJF may not always give the minimum average waiting time.
43
Deterministic Modeling
Example: four processes P1–P4 with given arrival times and burst times. Draw the Gantt chart for the resulting schedule; for this workload the average waiting time works out to 3.
44
Queuing Modeling
What if the processes that are run vary from day to day? Is the deterministic method still useful?
The distributions of CPU and I/O bursts can be determined, approximated, or estimated, giving a mathematical formula for the probability of a particular CPU burst; the distribution of arrival times can also be estimated.
The computer system is viewed as a network of queues and servers: ready queue, I/O queues, event queues, CPUs, I/O device controllers, etc. For example, the CPU has its ready queue and the I/O system its device queues.
Input: arrival and service rates. Output: CPU utilization, average queue length, average waiting time, …
45
Queuing Modeling – Little's Formula: n = λ × W, where
n = average queue length
λ = average arrival rate
W = average waiting time in the queue
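A quick worked illustration of Little's formula (the arrival rate and waiting time below are hypothetical, not from the slides):

```python
# Worked illustration of Little's formula, n = lambda * W.
arrival_rate = 7.0                   # lambda: processes arriving per second (hypothetical)
avg_wait = 2.0                       # W: average time a process spends in the queue, in seconds
avg_queue_length = arrival_rate * avg_wait
print(avg_queue_length)              # 14.0 processes in the queue on average
```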
46
Queuing Modeling: let the average job arrival rate be λ = 0.5.
Algorithm     Average wait time (W)    Average queue length (n = λW)
FCFS          4.6                      2.3
SJF           3.6                      1.8
SRTF          3.2                      1.6
RR (q = 1)    7.0                      3.5
RR (q = 4)    6.0                      3.0
47
Queuing Modeling involves complicated mathematics: distributions (uniform, exponential, etc.) for the arrival and departure rates can be difficult to work with, the assumptions may not be accurate, and the result is only an approximation of the real system.
48
Simulation: to get a more accurate evaluation.
The workload is generated either by assuming some distribution and using a random-number generator, or by collecting data from the actual system. The distributions can be defined mathematically or empirically.
49
Simulation
50
Simulation Characteristics
Expensive: hours of programming and execution time.
May be erroneous because of the assumptions about the distributions.
51
Implementation
Even simulation is of limited accuracy. The best way is to code the scheduling algorithm, put it into the operating system, and see how it behaves.
Costs: the high cost of coding the algorithm and modifying the OS to support it, plus the reaction of users to a changing OS.
The environment also changes: for example, if short processes are given high priority, users may break their larger processes into many short ones. One response is to design an algorithm that automatically classifies processes as interactive or non-interactive.