
1 Chapter 4 CPU Scheduling
BTT 205 OPERATING SYSTEM

2 Objectives To introduce CPU scheduling, which is the basis for multiprogrammed operating systems To describe various CPU-scheduling algorithms To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

3 Basic Concepts

4 Basic Concepts: Single-Processor System vs Multiprogramming
Single-processor system: only one process can run at a time; any others must wait until the CPU is free and can be rescheduled. A process executes until it must wait, typically for the completion of some I/O request, and while it waits the CPU sits idle and no useful work is accomplished. Multiprogramming tries to use this time productively: several processes are kept in memory at one time, and when one process has to wait, the OS takes the CPU away from that process and gives it to another process. Every time one process has to wait, another process can take over use of the CPU.

5 Basic Concepts (cont.) CPU–I/O Burst Cycle
The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution (a CPU burst) and I/O wait (an I/O burst). Processes alternate between these two states.

6 Alternating Sequence of CPU And I/O Bursts
Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. The final CPU burst ends with a system request to terminate execution.

7 Basic Concepts (cont.) CPU Scheduler
Whenever the CPU becomes idle, the OS must select one of the processes in memory that are ready to execute (the ready queue) and allocate the CPU to it. This selection is carried out by the short-term scheduler (CPU scheduler). The ready queue holds all processes waiting for a chance to run on the CPU. It is not necessarily a first-in, first-out (FIFO) queue: it can be implemented as a FIFO queue, a priority queue, a tree, or an unordered linked list.

8 Basic Concepts (cont.) CPU Scheduling Decisions CPU-scheduling decisions may take place when a process: (1) switches from the running to the waiting state (e.g. an I/O request), (2) switches from the running to the ready state (e.g. an interrupt occurs), (3) switches from the waiting to the ready state (e.g. at completion of I/O), or (4) terminates. When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is nonpreemptive: there is no choice in terms of scheduling, and a new process (if one exists in the ready queue) must be selected for execution. All other scheduling is preemptive.

9 Nonpreemptive Scheduling Preemptive Scheduling
Basic Concepts (cont.) Nonpreemptive Scheduling Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. This method was used by Microsoft Windows 3.x Preemptive Scheduling Allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. This method was used by Microsoft Windows 95 and all subsequent versions

10 Basic Concepts (cont.) Dispatcher
The dispatcher is another component involved in the CPU-scheduling function. The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. The dispatcher should be as fast as possible, since it is invoked during every process switch. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

11 Scheduling Criteria

12 Scheduling Criteria Different CPU-scheduling algorithms have different properties, and the choice of algorithm may favor one class of processes over another. To choose which algorithm to use, we must consider the properties of the various algorithms. Several criteria have been suggested for comparing CPU-scheduling algorithms.

13 Scheduling Criteria (cont.)
CPU utilization – keep the CPU as busy as possible (ranges from 0 to 100%). Throughput – number of processes that complete their execution per time unit (for long processes this may be one process per hour; for short transactions it may be ten processes per second). Turnaround time – amount of time to execute a particular process: the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. Waiting time – the sum of the periods a process spends waiting in the ready queue. Response time – amount of time from when a request was submitted until the first response is produced, not the output; i.e., the time it takes to start responding (used for time-sharing environments).
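These criteria can be computed mechanically once a schedule is known: turnaround time = completion time - arrival time, and waiting time = turnaround time - CPU burst time. A minimal Python sketch of that bookkeeping (the function name and sample values are illustrative additions, not part of the original slides):

    # Per-process scheduling metrics, all in milliseconds.
    # turnaround = completion - arrival; waiting = turnaround - burst
    # (the time the process spent sitting in the ready queue).
    def metrics(arrival, burst, completion):
        turnaround = completion - arrival
        waiting = turnaround - burst
        return turnaround, waiting

    # Example: a process that arrives at t=0, needs 24 ms of CPU and finishes at t=24.
    print(metrics(0, 24, 24))   # -> (24, 0)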

14 Scheduling Criteria (cont.)
It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time and response time. In most cases we optimize the average measure. In some circumstances it is desirable to optimize the minimum or maximum values rather than the average; e.g., to guarantee that all users get good service, we may minimize the maximum response time.

15 Scheduling Algorithms

16 SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. First-Come, First-Served Scheduling (FCFS) Shortest-Job-First Scheduling (SJF) Priority Scheduling Round-Robin Scheduling Multilevel Queue Scheduling Multilevel Feedback Queue Scheduling

17 SCHEDULING ALGORITHMS (cont.)
(1) First-Come, First-Served (FCFS) Scheduling The process that requests the CPU first is allocated the CPU first. Implementation of the FCFS policy is easily managed with a FIFO queue: when a process enters the ready queue, its PCB is linked onto the tail of the queue; when the CPU is free, it is allocated to the process at the head of the queue; the running process is then removed from the queue. Negative side – the average waiting time under FCFS is often quite long.

18 SCHEDULING ALGORITHMS (cont.)
(1) FCFS Scheduling (Cont.) Process Burst Time: P1 = 24, P2 = 3, P3 = 3. Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 (0-24), P2 (24-27), P3 (27-30). Waiting time for P1 = 0ms; P2 = 24ms; P3 = 27ms. Turnaround time for P1 = 24ms; P2 = 27ms; P3 = 30ms. Average waiting time: (0 + 24 + 27)/3 = 17ms
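The figures above can be reproduced with a short FCFS simulation. A hedged Python sketch (the function name and the (name, burst) input format are this example's own choices; it assumes every process arrives at time 0, as in the example):

    def fcfs(processes):
        """processes: list of (name, burst_ms) in arrival order; all arrive at t=0."""
        clock = 0
        results = {}
        for name, burst in processes:
            waiting = clock                 # time already spent in the ready queue
            clock += burst                  # FCFS is nonpreemptive: run to completion
            results[name] = {"waiting": waiting, "turnaround": clock}
        return results

    stats = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
    print(stats)                                                   # P1 waits 0, P2 waits 24, P3 waits 27
    print(sum(s["waiting"] for s in stats.values()) / len(stats))  # 17.0 ms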

19 SCHEDULING ALGORITHMS (cont.)
(1) FCFS Scheduling (Cont.) Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 (0-3), P3 (3-6), P1 (6-30). Waiting time for P1 = 6ms; P2 = 0ms; P3 = 3ms. Turnaround time for P1 = 30ms; P2 = 3ms; P3 = 6ms. Average waiting time: (6 + 0 + 3)/3 = 3ms. Much better than the previous case.

20 SCHEDULING ALGORITHMS (cont.)
(1) FCFS Scheduling (Cont.) There is a convoy effect as all the other processes wait for the one process to get off the CPU. FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O. Troublesome for time-sharing systems because it is important that each user get a share of the CPU at regular intervals.

21 SCHEDULING ALGORITHMS (cont.)
FCFS Scheduling (Cont.) Exercise Process Burst Time P1 5 P2 16 P3 2 Suppose that the processes arrive in the order: P1 , P2 , P3 Draw a Gantt chart by using FCFS scheduling. Calculate the turnaround time for each job & average waiting time.

22 SCHEDULING ALGORITHMS (cont.)
Process Burst Time: P1 = 5, P2 = 16, P3 = 2 (FCFS order P1, P2, P3). Gantt chart: P1 (0-5), P2 (5-21), P3 (21-23). Waiting time for P1 = 0ms; P2 = 5ms; P3 = 21ms. Turnaround time for P1 = 5ms; P2 = 21ms; P3 = 23ms. Average waiting time: (0 + 5 + 21)/3 = 8.67ms

23 SCHEDULING ALGORITHMS (cont.)
Process Name / Burst Time: P1 = 5, P2 = 16, P3 = 2. Gantt chart: P1 (0-5), P2 (5-21), P3 (21-23).
Process | Turnaround Time | Waiting Time
P1 | 5 | 0
P2 | 21 | 5
P3 | 23 | 21
Average Waiting Time = (0 + 5 + 21)/3 = 26/3 = 8.67ms

24 SCHEDULING ALGORITHMS (cont.)
FCFS Scheduling (Cont.) Exercise Process Burst Time P1 8 P2 4 P3 15 Suppose that the processes arrive in the order: P1 , P2 , P3 Draw a FCFS scheduling chart. Calculate the turnaround time for each job & average waiting time.

25 SCHEDULING ALGORITHMS (cont.)
Process Name / Burst Time: P1 = 8, P2 = 4, P3 = 15. ANSWER (FCFS order P1, P2, P3): Gantt chart P1 (0-8), P2 (8-12), P3 (12-27).
Process | Turnaround Time | Waiting Time
P1 | 8 | 0
P2 | 12 | 8
P3 | 27 | 12
Average Waiting Time = (0 + 8 + 12)/3 = 20/3 = 6.67ms

26 SCHEDULING ALGORITHMS (cont.)
(2) Shortest-Job-First (SJF) Scheduling This algorithm associates with each process the length of its next CPU burst and uses these lengths to schedule the process with the shortest time. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.

27 SCHEDULING ALGORITHMS (cont.)
(2) SJF Scheduling (Cont.) Process Burst Time: P1 = 6, P2 = 8, P3 = 7, P4 = 3 (all available at time 0). SJF scheduling chart: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24). Waiting time for P1 = 3ms; P2 = 16ms; P3 = 9ms; P4 = 0ms. Turnaround time for P1 = 9ms; P2 = 24ms; P3 = 16ms; P4 = 3ms. Average waiting time = (3 + 16 + 9 + 0)/4 = 7ms
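A nonpreemptive SJF schedule for processes that are all available at time 0 is just FCFS applied to the burst-sorted list. A small Python sketch (function and variable names are illustrative; ties fall back to submission order, as the slides describe):

    def sjf(processes):
        """Nonpreemptive SJF. processes: list of (name, burst_ms), all available at t=0."""
        clock = 0
        results = {}
        # Sort by burst length; Python's sort is stable, so equal bursts keep FCFS order.
        for name, burst in sorted(processes, key=lambda p: p[1]):
            results[name] = {"waiting": clock, "turnaround": clock + burst}
            clock += burst
        return results

    print(sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]))
    # P4 waits 0, P1 waits 3, P3 waits 9, P2 waits 16 -> average (3+16+9+0)/4 = 7 ms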

28 SCHEDULING ALGORITHMS (cont.)
Process Name / Burst Time: P1 = 6, P2 = 8, P3 = 7, P4 = 3. Gantt chart: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24).
Process | Turnaround Time | Waiting Time
P1 | 9 | 3
P2 | 24 | 16
P3 | 16 | 9
P4 | 3 | 0
Average Waiting Time = (3 + 16 + 9 + 0)/4 = 28/4 = 7ms

29 SCHEDULING ALGORITHMS (cont.)
(2) SJF Scheduling (Cont.) For long-term (job) scheduling, we can use as the length the process time limit that a user specifies when submitting the job; SJF scheduling is therefore used frequently in long-term scheduling. Advantage: it decreases the waiting time of short processes. Disadvantages: it increases the waiting time of long processes, and the real difficulty with the SJF algorithm is knowing the length of the next CPU burst.

30 SCHEDULING ALGORITHMS (cont.)
(2) SJF Scheduling (Cont.) Exercise Process Burst Time P1 5 P2 16 P3 2 Suppose that the processes arrive in the order: P1 , P2 , P3 Draw a SJF scheduling chart. Calculate the turnaround time for each job & average waiting time.

31 SCHEDULING ALGORITHMS (cont.)
Process Burst Time: P1 = 5, P2 = 16, P3 = 2. SJF scheduling chart: P3 (0-2), P1 (2-7), P2 (7-23). Waiting time for P1 = 2ms; P2 = 7ms; P3 = 0ms. Turnaround time for P1 = 7ms; P2 = 23ms; P3 = 2ms. Average waiting time: (2 + 7 + 0)/3 = 9/3 = 3ms

32 SCHEDULING ALGORITHMS (cont.)
Process Name / Burst Time: P1 = 5, P2 = 16, P3 = 2. Gantt chart: P3 (0-2), P1 (2-7), P2 (7-23).
Process | Turnaround Time | Waiting Time
P1 | 7 | 2
P2 | 23 | 7
P3 | 2 | 0
Average Waiting Time = (2 + 7 + 0)/3 = 9/3 = 3ms

33 SCHEDULING ALGORITHMS (cont.)
(2) SJF Scheduling (Cont.) Exercise Process Burst Time P1 6 P2 3 P3 5 Suppose that the processes arrive in the order: P1 , P2 , P3 Draw a SJF scheduling chart . Calculate the turnaround time for each job & average waiting time.

34 SCHEDULING ALGORITHMS (cont.)
Process Name / Burst Time: P1 = 6, P2 = 3, P3 = 5. ANSWER (nonpreemptive SJF, all arriving at time 0): Gantt chart P2 (0-3), P3 (3-8), P1 (8-14).
Process | Turnaround Time | Waiting Time
P1 | 14 | 8
P2 | 3 | 0
P3 | 8 | 3
Average Waiting Time = (8 + 0 + 3)/3 = 11/3 = 3.67ms

35 SCHEDULING ALGORITHMS (cont.)
(2) SJF Scheduling (Cont.) SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. Preemptive SJF algorithm Will preempt the currently executing process Shortest-remaining-time-first scheduling Nonpreemptive SJF algorithm Will allow the currently running process to finish its burst.

36 SCHEDULING ALGORITHMS (cont.)
(2) SJF Scheduling (Cont.) Shortest-remaining-time-first Scheduling. Process / Arrival Time / Burst Time: P1 = 0 / 8, P2 = 1 / 4, P3 = 2 / 9, P4 = 3 / 5. Preemptive SJF scheduling chart: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26). Average waiting time = [P1 (10-1) + P2 (1-1) + P3 (17-2) + P4 (5-3)] / 4 = (9 + 0 + 15 + 2)/4 = 26/4 = 6.5ms

37 SCHEDULING ALGORITHMS (cont.)
Process Name / Arrival Time / Burst Time: P1 = 0 / 8, P2 = 1 / 4, P3 = 2 / 9, P4 = 3 / 5. Gantt chart: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26).
Process | Turnaround Time | Waiting Time
P1 | 17 | 9
P2 | 4 | 0
P3 | 24 | 15
P4 | 7 | 2
Average Waiting Time = (9 + 0 + 15 + 2)/4 = 26/4 = 6.5ms
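Shortest-remaining-time-first can be checked with a millisecond-by-millisecond simulation. A rough Python sketch (illustrative names; it assumes integer arrival and burst times and resolves ties by dictionary order):

    def srtf(processes):
        """Preemptive SJF (shortest remaining time first).
        processes: dict name -> (arrival_ms, burst_ms). Simulated 1 ms at a time."""
        remaining = {name: burst for name, (arrival, burst) in processes.items()}
        completion = {}
        clock = 0
        while remaining:
            # Among arrived, unfinished processes, pick the smallest remaining time.
            ready = [n for n in remaining if processes[n][0] <= clock]
            if not ready:
                clock += 1
                continue
            current = min(ready, key=lambda n: remaining[n])
            remaining[current] -= 1
            clock += 1
            if remaining[current] == 0:
                completion[current] = clock
                del remaining[current]
        return {name: {"turnaround": completion[name] - arrival,
                       "waiting": completion[name] - arrival - burst}
                for name, (arrival, burst) in processes.items()}

    print(srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}))
    # waiting times 9, 0, 15, 2 -> average 26/4 = 6.5 ms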

38 SCHEDULING ALGORITHMS (cont.)
(2) SJF Scheduling (Cont.) Exercise. Process / Arrival Time / Burst Time: P1 = 0 / 7, P2 = 2 / 4, P3 = 4 / 1, P4 = 5 / 4. Draw a preemptive SJF scheduling chart and calculate the turnaround time for each job and the average waiting time. Gantt chart: P1 (0-2), P2 (2-4), P3 (4-5), P2 (5-7), P4 (7-11), P1 (11-16). Average waiting time = [P1(11-2) + P2(5-2-2) + P3(4-4) + P4(7-5)]/4 = (9 + 1 + 0 + 2)/4 = 12/4 = 3ms

39 SCHEDULING ALGORITHMS (cont.)
Process Name / Arrival Time / Processing Time: P1 = 0.0 / 7, P2 = 2.0 / 4, P3 = 4.0 / 1, P4 = 5.0 / 4. Gantt chart: P1 (0-2), P2 (2-4), P3 (4-5), P2 (5-7), P4 (7-11), P1 (11-16).
Process | Turnaround Time | Waiting Time
P1 | 7 + 9 = 16 | 11 - 2 = 9
P2 | 4 + 1 = 5 | 5 - 2 - 2 = 1
P3 | 0 + 1 = 1 | 0
P4 | 2 + 4 = 6 | 7 - 5 = 2
Average Waiting Time = (9 + 1 + 0 + 2)/4 = 12/4 = 3ms

40 SCHEDULING ALGORITHMS (cont.)
(2) SJF Scheduling (Cont.) Exercise Draw a Preemptive SJF scheduling chart. Calculate the turnaround time for each job & average waiting time Process Name Arrival Time Processing Time A 4 5 B 7 C 1 D 2 E 3

41 SCHEDULING ALGORITHMS (cont.)
Process Name Arrival Time Processing Time A 4 5 B 7 C 1 D 2 E 3 Turnaround Time Waiting Time A B C D E Average Waiting Time: =

42 SCHEDULING ALGORITHMS (cont.)
(3) Priority Scheduling FCFS is an equal-priority scheduling algorithm: the process that requests the CPU first is allocated the CPU first. SJF is a priority scheduling algorithm where the priority is the predicted next CPU burst time: the larger the CPU burst, the lower the priority. A priority number (integer) is associated with each process. There is no general agreement on whether 0 is the highest or lowest priority (in this lesson, we assume that low numbers represent high priority).

43 SCHEDULING ALGORITHMS (cont.)
(3) Priority Scheduling (CONT.) Process / Burst Time / Priority: P1 = 10 / 3, P2 = 1 / 1, P3 = 2 / 4, P4 = 1 / 5, P5 = 5 / 2 (all arriving at time 0). Priority scheduling chart: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19). Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2ms

44 SCHEDULING ALGORITHMS (cont.)
Process Name / Burst Time / Priority: P1 = 10 / 3, P2 = 1 / 1, P3 = 2 / 4, P4 = 1 / 5, P5 = 5 / 2. Gantt chart: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19).
Process | Turnaround Time | Waiting Time
P1 | 16 | 6
P2 | 1 | 0
P3 | 18 | 16
P4 | 19 | 18
P5 | 6 | 1
Average Waiting Time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2ms
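Nonpreemptive priority scheduling with all processes available at time 0 is again a sort, this time by priority. A small Python sketch (illustrative names; lower numbers mean higher priority, as assumed in these slides):

    def priority_schedule(processes):
        """Nonpreemptive priority scheduling, all processes available at t=0.
        processes: list of (name, burst_ms, priority); LOWER number = HIGHER priority."""
        clock = 0
        results = {}
        for name, burst, prio in sorted(processes, key=lambda p: p[2]):
            results[name] = {"waiting": clock, "turnaround": clock + burst}
            clock += burst
        return results

    jobs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
    print(priority_schedule(jobs))   # average waiting time (6+0+16+18+1)/5 = 8.2 ms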

45 SCHEDULING ALGORITHMS (cont.)
(3) Priority Scheduling (CONT.) Exercise Draw a Priority scheduling chart. Calculate the turnaround time for each job & average waiting time Process Name Burst Time Priority P1 4 3 P2 7 2 P3 P4 1 P5 5

46 SCHEDULING ALGORITHMS (cont.)
Process Name Burst Time Priority P1 4 3 P2 7 2 P3 P4 1 P5 5 Turnaround Time Waiting Time P1 P2 P3 P4 P5 Average Waiting Time: =

47 SCHEDULING ALGORITHMS (cont.)
(3) Priority Scheduling (CONT.) Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process Preemptive priority scheduling Will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. Nonpreemptive priority scheduling Will put the new process at the head of the ready queue, and will run the new process after the current process is completed.

48 SCHEDULING ALGORITHMS (cont.)
(3) Priority Scheduling (CONT.) Indefinite blocking / starvation problem: a process that is ready to run but waiting for the CPU can be considered blocked, and low-priority processes may never execute (they wait indefinitely). Solution: aging, a technique of gradually increasing the priority of processes that wait in the system for a long time; as time progresses, the priority of a waiting process increases. A small sketch of this idea follows.
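One way to picture aging is a periodic pass over the ready queue that raises the effective priority of anything that has waited a long time. A toy Python sketch (the boost interval, the step size and all field names are arbitrary illustration values, not from the slides):

    def age(ready_queue, clock, boost_every=15):
        """Raise a waiting process's effective priority by 1 for every
        `boost_every` ms it has waited (lower number = higher priority)."""
        for proc in ready_queue:
            waited = clock - proc["arrived"]
            boost = waited // boost_every
            proc["effective_priority"] = max(0, proc["priority"] - boost)
        return ready_queue

    queue = [{"name": "P_low", "priority": 20, "arrived": 0}]
    print(age(queue, clock=150))   # after 150 ms of waiting, priority 20 -> effective 10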

49 SCHEDULING ALGORITHMS (cont.)
(4) Round Robin (RR) Designed especially for time-sharing systems. Similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. Each process gets a small unit of CPU time (a time quantum), usually on the order of 10 to 100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.

50 SCHEDULING ALGORITHMS (cont.)
(4) Round Robin (RR) (cont.) How to implement RR scheduling? Keep the ready queue as a FIFO queue of processes: new processes are added to the tail of the ready queue, and the CPU scheduler picks the first process from the head of the queue, sets a timer to interrupt after one time quantum, and dispatches the process. If the CPU burst is less than or equal to one time quantum, the process releases the CPU voluntarily when it completes. If the CPU burst is longer than one time quantum, the timer goes off and causes an interrupt to the OS; the current process is put at the tail of the ready queue and the scheduler proceeds with the next process in the ready queue.

51 SCHEDULING ALGORITHMS (cont.)
(4) RR Scheduling (CONT.) Time Quantum = 4ms. Process Burst Time: P1 = 13, P2 = 3, P3 = 3. The Gantt chart for the RR schedule is: P1 (0-4), P2 (4-7), P3 (7-10), P1 (10-14), P1 (14-18), P1 (18-19). Waiting time = the sum of the gaps between a process's arrival and its periods of execution. Average waiting time = [P1 (10 - 4 = 6) + P2 (4) + P3 (7)] / 3 = 17/3 = 5.67ms
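The round-robin schedule above can be reproduced with a FIFO queue and a quantum loop. A hedged Python sketch (illustrative names; all processes are assumed to arrive at time 0 in the order given):

    from collections import deque

    def round_robin(processes, quantum):
        """processes: list of (name, burst_ms), all arriving at t=0 in the given order."""
        queue = deque(processes)
        clock = 0
        completion = {}
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)       # run one quantum, or less if the burst ends
            clock += run
            if remaining > run:
                queue.append((name, remaining - run))   # quantum expired: back to the tail
            else:
                completion[name] = clock                # burst finished
        return completion

    bursts = {"P1": 13, "P2": 3, "P3": 3}
    done = round_robin(list(bursts.items()), quantum=4)
    print(done)                                    # P2 done at 7, P3 at 10, P1 at 19
    print({n: done[n] - bursts[n] for n in done})  # waits: P1=6, P2=4, P3=7 (avg 5.67 ms)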

52 SCHEDULING ALGORITHMS (cont.)
Process Name / Burst Time: P1 = 13, P2 = 3, P3 = 3. Gantt chart: P1 (0-4), P2 (4-7), P3 (7-10), P1 (10-14), P1 (14-18), P1 (18-19).
Process | Turnaround Time | Waiting Time
P1 | 19 | 6
P2 | 7 | 4
P3 | 10 | 7
Average Waiting Time = (6 + 4 + 7)/3 = 17/3 = 5.67ms

53 SCHEDULING ALGORITHMS (cont.)
(4) RR Scheduling (CONT.) Exercise Time quantum = 3ms Draw a Round Robin scheduling chart. Calculate the turnaround time for each job & average waiting time Process Name Burst Time P1 4 P2 7 P3 2

54 SCHEDULING ALGORITHMS (cont.)
Time quantum = 3ms. Process Name / Burst Time: P1 = 4, P2 = 7, P3 = 2. ANSWER (assuming all processes arrive at time 0 in the order P1, P2, P3): Gantt chart P1 (0-3), P2 (3-6), P3 (6-8), P1 (8-9), P2 (9-13).
Process | Turnaround Time | Waiting Time
P1 | 9 | 5
P2 | 13 | 6
P3 | 8 | 6
Average Waiting Time = (5 + 6 + 6)/3 = 17/3 = 5.67ms

55 SCHEDULING ALGORITHMS (cont.)
(4) RR Scheduling (CONT.) Exercise Time quantum = 2ms Draw a Round Robin scheduling chart. Calculate the turnaround time for each job & average waiting time Process Name Burst Time A 5 B 6 C 8

56 SCHEDULING ALGORITHMS (cont.)
Time quantum = 2ms. Process Name / Burst Time: A = 5, B = 6, C = 8. ANSWER (assuming all processes arrive at time 0 in the order A, B, C): Gantt chart A (0-2), B (2-4), C (4-6), A (6-8), B (8-10), C (10-12), A (12-13), B (13-15), C (15-19).
Process | Turnaround Time | Waiting Time
A | 13 | 8
B | 15 | 9
C | 19 | 11
Average Waiting Time = (8 + 9 + 11)/3 = 28/3 = 9.33ms

57 SCHEDULING ALGORITHMS (cont.)
(4) RR Scheduling (CONT.) Exercise A job running in a system with variable time quanta per queue needs 18 milliseconds to run to completion. If the first queue has a time quantum of 4 milliseconds and each queue thereafter has a time quantum twice as large as the previous one: How many times will the job be interrupted? At which queue will it finish its execution? Draw a Gantt chart to show the time frame, CPU cycle and interrupt(s). Solution: time quanta = 4ms, 8ms, 16ms, 32ms, ...

58 SCHEDULING ALGORITHMS (cont.)
IMPORTANT Time quanta = 4ms, 8ms, 16ms, 32ms, ... Process P1, Burst Time = 18ms.
Queue | Time quantum | Time frame / burst consumed
1 | 4ms | 4ms (quantum expires: interrupt)
2 | 8ms | 8ms (quantum expires: interrupt)
3 | 16ms | 6ms (job completes)
How many times will the job be interrupted? 2. At which queue will it finish its execution? The third queue (time quantum = 16ms), after running 4ms, 8ms and the final 6ms.
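The interrupt count can be reproduced by walking the job through queues whose quantum doubles at each level. A small Python sketch (the function name is illustrative; it tracks only this single job, as in the exercise):

    def run_with_doubling_quantum(burst, first_quantum=4):
        """Walk one job through queues whose time quantum doubles at each level
        (4, 8, 16, ... ms). Returns (interrupts, finishing_queue, slices)."""
        quantum, interrupts, level, slices = first_quantum, 0, 1, []
        while burst > 0:
            run = min(quantum, burst)
            slices.append((level, run))
            burst -= run
            if burst > 0:               # quantum expired before the job finished
                interrupts += 1
                level += 1
                quantum *= 2
        return interrupts, level, slices

    print(run_with_doubling_quantum(18))
    # -> (2, 3, [(1, 4), (2, 8), (3, 6)]): interrupted twice, finishing in the
    #    third queue (quantum 16 ms) after slices of 4, 8 and 6 ms.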

59 SCHEDULING ALGORITHMS (cont.)
Context Switch (Chapter 3) When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that it can restore that context when its processing is done (suspend the process, then resume it). The context of a process is represented in the PCB and includes the value of the CPU registers, the process state and memory-management information. When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process via a context switch. Context-switch time is overhead, because the system does no useful work while switching.

60 SCHEDULING ALGORITHMS (cont.)
Time Quantum and Context Switch Time In practice, we also need to consider the effect of context switching on the performance of RR scheduling. For example, for a process of 10 time units: with a quantum of 12 time units the process finishes in less than one quantum, with no overhead; with a quantum of 6 it requires 2 quanta, and 1 context switch occurs; with a quantum of 1, 9 context switches occur, slowing the execution of the process.

61 SCHEDULING ALGORITHMS (cont.)
(5) Multilevel Queue Scheduling Processes are classified into different groups. E.g.: Ready queue is partitioned into separate queues: foreground (interactive) background (batch) Each queue may have its own scheduling algorithm foreground – RR background – FCFS

62 SCHEDULING ALGORITHMS (cont.)
(5) Multilevel Queue Scheduling [Figure: the ready queue is partitioned into a foreground queue scheduled with the RR algorithm and a background queue scheduled with the FCFS algorithm]

63 SCHEDULING ALGORITHMS (cont.)
(5) Multilevel Queue Scheduling (cont.) E.g.: use fixed-priority scheduling in a multilevel queue algorithm with five queues, in order of priority: system processes, interactive processes, interactive editing processes, batch processes, student processes. Each queue has absolute priority over lower-priority queues.

64 SCHEDULING ALGORITHMS (cont.)
(5) Multilevel Queue Scheduling (cont.) No process in the batch queue could run unless the queues for system processes, interactive processes and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted. Another possibility is to time-slice among the queues: each queue gets a certain amount of CPU time, which it can schedule among its own processes; e.g., 80% to the foreground queue for RR scheduling and 20% to the background queue for FCFS.

65 SCHEDULING ALGORITHMS (cont.)
(6) Multilevel Feedback Queue Scheduling In the multilevel queue scheduling algorithm, processes are permanently assigned to a queue when they enter the system; processes do not move from one queue to another, since they do not change their foreground or background nature (e.g. RR vs FCFS queues). The multilevel feedback queue allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts: if a process uses too much CPU time, it is moved to a lower-priority queue, and a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

66 SCHEDULING ALGORITHMS (cont.)
Example of Multilevel Feedback Queue Three queues: Q0 – RR with time quantum 8 milliseconds; Q1 – RR with time quantum 16 milliseconds; Q2 – FCFS scheduling. A process in Q0 is given a time quantum of 8ms; if it does not finish within this time, it is moved to the tail of Q1. If Q0 is empty, the process at the head of Q1 is given a quantum of 16ms; if it does not finish, it is preempted and put into Q2. Processes in Q2 are run on an FCFS basis, but only when Q0 and Q1 are empty. A small simulation sketch of this example follows.
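A rough Python sketch of this three-queue example (illustrative names; it assumes all processes arrive at time 0, so it does not model preemption of a lower queue by a newly arriving process):

    from collections import deque

    def multilevel_feedback(processes, quanta=(8, 16)):
        """Q0: RR with 8 ms quantum, Q1: RR with 16 ms quantum, Q2: FCFS.
        processes: list of (name, burst_ms), all arriving at t=0."""
        queues = [deque(processes), deque(), deque()]
        clock, completion = 0, {}
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)    # highest non-empty queue
            name, remaining = queues[level].popleft()
            quantum = quanta[level] if level < 2 else remaining   # Q2 runs to completion
            run = min(quantum, remaining)
            clock += run
            if remaining > run:
                queues[min(level + 1, 2)].append((name, remaining - run))   # demote
            else:
                completion[name] = clock
        return completion

    print(multilevel_feedback([("A", 5), ("B", 30), ("C", 12)]))
    # A finishes at 5 ms in Q0, C at 41 ms in Q1, B at 47 ms in Q2.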

67 SCHEDULING ALGORITHMS (cont.)
Multilevel Feedback Queues This algorithm gives highest priority to any process with a CPU burst of 8ms or less. Processes that need more than 8ms but less than 24ms are also served quickly, although at lower priority. Long processes automatically sink to Q2 and are served in FCFS order with any CPU cycles left over from queues 0 and 1.

68 SCHEDULING ALGORITHMS (cont.)
(6) Multilevel Feedback Queue (cont.) In general, a multilevel-feedback-queue scheduler is defined by the following parameters: the number of queues; the scheduling algorithm for each queue; the method used to determine when to upgrade a process; the method used to determine when to demote a process; and the method used to determine which queue a process will enter when it needs service. The multilevel feedback queue scheduler is the most general CPU-scheduling algorithm: it can be configured to match a specific system under design, but it is also the most complex, since defining the best scheduler requires choosing values for all these parameters.

69 Thread Scheduling

70 THREAD SCHEDULING Thread – a basic unit of CPU utilization
A thread consists of a thread ID, a program counter, a register set and a stack, and shares information with the other threads belonging to the same process. [Figure: a traditional single-threaded process vs a multithreaded process, which can perform more than one task at a time]

71 THREAD SCHEDULING (CONT.)
Motivation: many software packages that run on modern desktop PCs are multithreaded; an application is typically implemented as a separate process with several threads of control. E.g.: a word processor may have one thread for displaying graphics, one thread for responding to keystrokes from the user, and one thread for performing spelling and grammar checking in the background. E.g.: a web browser may have one thread that displays images and one thread that retrieves data from the network.

72 THREAD SCHEDULING (CONT.)
Most OS kernels are now multithreaded: several threads operate in the kernel, and each thread performs a specific task, such as managing devices or interrupt handling. E.g.: when a request is made, a server creates another thread to service the request and resumes listening for additional requests.

73 THREAD SCHEDULING (CONT.)
Benefits: Responsiveness – allows a program to continue running even if part of it is blocked or performing a lengthy operation; e.g., a web browser can allow user interaction in one thread while an image is being loaded in another thread. Resource sharing – threads share the memory and resources of the process to which they belong. Economy – because threads share the resources of their process, creating a process is time consuming compared with creating a thread. Scalability – threads can be used in a multiprocessor architecture to increase parallelism.

74 THREAD SCHEDULING (CONT.)
There is a distinction between user-level and kernel-level threads. User-level threads are managed by a thread library; to run on the CPU, user-level threads must be mapped to an associated kernel-level thread. This is known as process-contention scope (PCS), because the scheduling competition is within the process. Scheduling a kernel thread onto an available CPU uses system-contention scope (SCS), where the competition is among all threads in the system.

75 Multiple-Processor Scheduling

76 Multiple-Processor Scheduling
CPU scheduling is more complex when multiple CPUs are available. Approaches to multiple-processor scheduling: Asymmetric multiprocessing – only one processor accesses the system data structures, reducing the need for data sharing. Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes may be in a common ready queue, or each processor may have its own private queue of ready processes.

77 Operating System Examples

78 Operating System Examples
Solaris scheduling Solaris dispatch table for time-sharing and interactive threads: Priority – a higher number indicates a higher priority. Time quantum – the time quantum for the associated priority. Time quantum expired – the new priority of a thread that has used its entire time quantum. Return from sleep – the priority of a thread that is returning from sleeping (e.g. waiting for I/O).

79 Operating System Examples(CONT.)
Windows XP scheduling Uses a priority-based, preemptive scheduling algorithm. The Windows XP scheduler ensures that the highest-priority thread will always run. The dispatcher handles scheduling for the Windows XP kernel: it selects a thread to run, and that thread runs until it is preempted by a higher-priority thread, until it terminates, until its time quantum ends, or until it makes a blocking call such as I/O. If a higher-priority thread becomes ready while a lower-priority thread is running, the lower-priority thread is preempted.

80 Operating System Examples(CONT.)
Linux scheduling Preemptive, priority-based algorithm with two priority ranges: a real-time range (0-99) and a nice value range (100-140). Numerically lower values indicate higher priorities. Higher-priority tasks are assigned longer time quanta, and lower-priority tasks are assigned shorter time quanta.

81 SUMMARY Basic Concepts Scheduling Criteria Scheduling Algorithms
Thread Scheduling Multiple-Processor Scheduling Operating Systems Examples

82 Q & A

