
1 SCHEDULING Kshama Desai Bijal Shah Kishore Putta Kashyap Sheth

2 SCHEDULING Scheduling manages CPU allocation among a group of ready processes and threads. Concept of scheduling: it is based on the mechanism of context switching and on the sharing of resources among processes and threads.

3 Multiprogramming Allows many processes to be loaded and time-multiplexes their threads on the CPU. On each CPU, a single thread executes at any given time. Threads are needed to perform concurrent I/O operations. A high CPU multiplexing rate gives the impression of concurrent execution of processes / threads.

4 Scheduling Policies and Mechanism The scheduling policy determines which thread the CPU should be allocated to and when it should execute. The scheduling mechanism determines how the process manager multiplexes the CPU and manages thread state. Thread scheduling: the scheduler determines when a thread makes the transition from the running state to the ready state.

5 Steps in Thread Scheduling A thread waits in the ready list for CPU allocation. On CPU allocation, its state changes from ready to running. During execution, if the thread requests an unavailable resource, it waits in the resource manager's pool. After completing execution it leaves the CPU; otherwise it returns to the ready state.

6 Simpler Processor Scheduling Model [Diagram: a new thread enters the ready list; the scheduler allocates the CPU; the running thread either finishes (job done), requests a resource from the resource manager and becomes blocked until the resource is allocated, or is preempted / yields back to the ready list.]

7 A running thread ceases CPU use for the following reasons: The thread completes execution and leaves the system. The thread requests an unavailable resource; its state changes to blocked, and on availability of the resource it changes back to ready. The thread voluntarily decides to release the CPU. The system preempts the thread by changing its state from running to ready.

8 Scheduling Mechanisms The scheduler is implemented in hardware as well as in OS software. It comprises 3 mechanisms: a) Enqueuer b) Context switcher c) Dispatcher. Sequence of the scheduling mechanism: the enqueuer places the process into the ready state and decides its priority; the context switcher removes a process from the CPU and brings in a new one; the dispatcher allocates the CPU to the new incoming process.
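A minimal Python sketch of how these three elements might fit together, assuming a simplified single-CPU model (the names ReadyList, context_switch and dispatch are illustrative, not taken from any particular OS):

```python
from collections import deque

class Thread:
    def __init__(self, name):
        self.name = name
        self.state = "new"
        self.saved_context = None      # stands in for registers, PC, stack pointer

class ReadyList:
    """Enqueuer: places a thread into the ready state (FIFO order here)."""
    def __init__(self):
        self._q = deque()
    def enqueue(self, thread):
        thread.state = "ready"
        self._q.append(thread)
    def dequeue(self):
        return self._q.popleft() if self._q else None

def context_switch(current, ready_list):
    """Context switcher: save the running thread's state and requeue it."""
    if current is not None:
        current.saved_context = "saved registers"
        ready_list.enqueue(current)

def dispatch(ready_list):
    """Dispatcher: allocate the CPU to the next ready thread."""
    nxt = ready_list.dequeue()
    if nxt is not None:
        nxt.state = "running"
    return nxt

ready = ReadyList()
ready.enqueue(Thread("T1"))
running = dispatch(ready)              # T1 now holds the CPU
context_switch(running, ready)         # preemption: T1 goes back to the ready list
```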

9 CPU SHARING Two types of schedulers: Voluntary CPU sharing (non-preemptive scheduler) Involuntary CPU sharing (preemptive scheduler) Voluntary CPU sharing: the hardware includes a special yield machine instruction. The address of the next instruction is saved in a designated memory location.
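As a rough analogue of voluntary CPU sharing, the sketch below uses Python generators: each task runs until it executes yield, at which point its resume point is saved (much as the yield instruction saves the address of the next instruction) and control returns to a tiny cooperative scheduler. The task bodies are invented for illustration.

```python
def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # voluntary yield: save resume point, give up the CPU

ready = [task("A", 2), task("B", 3)]
while ready:
    t = ready.pop(0)               # take the next ready "thread"
    try:
        next(t)                    # run it until its next voluntary yield
        ready.append(t)            # still has work: back to the ready list
    except StopIteration:
        pass                       # task finished and leaves the system
```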

10 Disadvantage of the yield instruction: If a process fails to execute the yield instruction periodically, it blocks other processes from using the CPU until it exits or requests a resource. A solution to this problem is for the system to generate a self-interrupt.

11 Involuntary CPU sharing Incorporates an interval timer device. The interval of time is decided by the system programmer. The interval timer invokes the scheduler periodically. Advantage: a process executing in an infinite loop cannot block other processes from using the CPU.
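A hedged user-space illustration of the interval-timer idea using POSIX signals (this only approximates the mechanism; a real scheduler runs in the kernel, and the 50 ms interval is an arbitrary choice):

```python
import signal, time

def scheduler_tick(signum, frame):
    # In a real OS this interrupt handler would invoke the scheduler.
    print("timer interrupt: scheduler runs")

signal.signal(signal.SIGALRM, scheduler_tick)
signal.setitimer(signal.ITIMER_REAL, 0.05, 0.05)   # fire every 50 ms (POSIX only)

end = time.time() + 0.3
while time.time() < end:
    pass                                           # a busy loop still gets interrupted

signal.setitimer(signal.ITIMER_REAL, 0)            # disarm the timer
```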

12 Performance The scheduler affects the performance of a multiprogrammed CPU to a great extent. Imbalanced process selection for CPU execution by a scheduler will lead to starvation. This leads to the need for careful selection of a scheduling strategy.

13 Strategy Selection The criteria for selecting a scheduling strategy depend on the goals of the OS. Scheduling algorithms for modern OSs use internal priorities. Involuntary CPU sharing requires a time quantum / timeslice length. An optimal schedule can be computed, provided no new entry joins the ready list while the processes already present are being served.

14 Model For Scheduling Performance metrics to compare scheduling strategies: service time, wait time, and turnaround time. The process model and the metrics are used to compare the performance characteristics of each algorithm. The general model must fit each specific class of OS environment. Turnaround time is the most critical performance metric in a batch multiprogrammed system. Time-sharing systems focus on a single phase of thread execution.

15 Simpler Processor Scheduling Model [Diagram repeated from slide 6: new thread, ready list, scheduler, CPU, resource manager, and the blocked / running / preemption-yield transitions.]

16 Types of Scheduling Methods Non-preemptive Preemptive

17 Non-Preemptive Non-preemptive: once a process enters the running state, it is not removed from the processor until it has completed its service time. There are 4 non-preemptive strategies: 1. First Come First Serve (FCFS) 2. Shortest Job Next (SJN) 3. Priority Scheduling 4. Deadline Scheduling

18 Pre-emptive Based on prioritized computation. The process with the highest priority should always be the one using the CPU. If a process is currently using the CPU and a new process with higher priority enters the ready list, the running process is removed from the processor and returned to the ready list until it is once again the highest-priority process in the system.

19 First Come First Serve (FCFS) Processes are assigned the CPU in the order they request it. If the running process blocks, the first process on the queue is run next. When the blocked process becomes ready, it is put at the end of the queue, like a newly arrived job.

20 FCFS Example Example load: t(p0) = 350, t(p1) = 125, t(p2) = 475, t(p3) = 250, t(p4) = 75. Gantt chart: P0 runs 0-350, P1 350-475, P2 475-950, P3 950-1200, P4 1200-1275.

21 Turnaround Time Average of finishing times: T_TRnd(P0) = t(P0) = 350; T_TRnd(P1) = t(P1) + T_TRnd(P0) = 125 + 350 = 475; T_TRnd(P2) = t(P2) + T_TRnd(P1) = 475 + 475 = 950; T_TRnd(P3) = t(P3) + T_TRnd(P2) = 250 + 950 = 1200; T_TRnd(P4) = t(P4) + T_TRnd(P3) = 75 + 1200 = 1275. Average turnaround time: T_TRnd = (350 + 475 + 950 + 1200 + 1275)/5 = 4250/5 = 850. FCFS Example

22 Wait Time Average time in the ready list before the first run. From the Gantt chart: W(P0) = 0; W(P1) = T_TRnd(P0) = 350; W(P2) = T_TRnd(P1) = 475; W(P3) = T_TRnd(P2) = 950; W(P4) = T_TRnd(P3) = 1200. Average wait time: W = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5 = 595. FCFS Example
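The figures above can be reproduced with a short simulation sketch (the process names and service times come from the example; the code itself is illustrative):

```python
# FCFS: all five processes arrive at time 0 and run to completion in arrival order.
service = {"P0": 350, "P1": 125, "P2": 475, "P3": 250, "P4": 75}
order = ["P0", "P1", "P2", "P3", "P4"]

clock, turnaround, wait = 0, {}, {}
for p in order:
    wait[p] = clock            # time spent in the ready list before the first run
    clock += service[p]
    turnaround[p] = clock      # completion time equals turnaround, since arrival is at 0

print(sum(turnaround.values()) / 5)   # 850.0
print(sum(wait.values()) / 5)         # 595.0
```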

23 Advantages and Disadvantages of FCFS Advantages: Easy to understand and easy to program. It is fair. Requires only a single linked list to keep track of all ready processes. Disadvantages: Does not perform well in real systems. Ignores the service time request and all other criteria that may influence performance with respect to turnaround or waiting time.

24 Shortest Job Next (SJN) Each process is associated with the length of its service time. The ready queue is maintained in order of increasing job length. When the current process is done, the one at the head of the queue is picked and run.

25 SJN Example Example load: t(p0) = 350, t(p1) = 125, t(p2) = 475, t(p3) = 250, t(p4) = 75. Gantt chart: P4 runs 0-75, P1 75-200, P3 200-450, P0 450-800, P2 800-1275. Average turnaround time: T_TRnd = (800 + 200 + 1275 + 450 + 75)/5 = 2800/5 = 560. Average wait time: W = (450 + 75 + 800 + 200 + 0)/5 = 1525/5 = 305.
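The same simulation sketch works for SJN by sorting the ready list by service time (data as in the example above; the code is illustrative):

```python
service = {"P0": 350, "P1": 125, "P2": 475, "P3": 250, "P4": 75}
order = sorted(service, key=service.get)   # ['P4', 'P1', 'P3', 'P0', 'P2']

clock, turnaround, wait = 0, {}, {}
for p in order:
    wait[p] = clock
    clock += service[p]
    turnaround[p] = clock

print(sum(turnaround.values()) / 5)   # 560.0
print(sum(wait.values()) / 5)         # 305.0
```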

26 Advantages and Disadvantages of SJN Advantages: Minimizes wait time. Disadvantages: Long-running threads may starve. NOTE: 1) It requires prior knowledge of service time. 2) It is optimal only when all the jobs are available simultaneously.

27 Priority Scheduling Each process is assigned a priority, and the runnable process with the highest priority is allowed to run. Priorities can be assigned statically or dynamically. With static priority, starvation is possible. Dynamic (internal) priority solves the problem of starvation.

28 Priority Scheduling Example Example load (a lower number means a higher priority): t(p0) = 350, priority 5; t(p1) = 125, priority 2; t(p2) = 475, priority 3; t(p3) = 250, priority 1; t(p4) = 75, priority 4. Gantt chart: P3 runs 0-250, P1 250-375, P2 375-850, P4 850-925, P0 925-1275. Average turnaround time: T_TRnd = (1275 + 375 + 850 + 250 + 925)/5 = 3675/5 = 735. Average wait time: W = (925 + 250 + 375 + 0 + 850)/5 = 2400/5 = 480.
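The only change to the simulation sketch for priority scheduling is the sort key (the data and the lower-number-is-higher-priority convention are taken from the example above):

```python
service  = {"P0": 350, "P1": 125, "P2": 475, "P3": 250, "P4": 75}
priority = {"P0": 5, "P1": 2, "P2": 3, "P3": 1, "P4": 4}
order = sorted(service, key=priority.get)   # ['P3', 'P1', 'P2', 'P4', 'P0']

clock, turnaround, wait = 0, {}, {}
for p in order:
    wait[p] = clock
    clock += service[p]
    turnaround[p] = clock

print(sum(turnaround.values()) / 5)   # 735.0
print(sum(wait.values()) / 5)         # 480.0
```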

29 Deadline Scheduling For hard real-time systems, a process must finish by a certain time. Turnaround time and wait time are irrelevant. We need to know the maximum service time of each process. The deadline must be met in each period of a process's life.

30 Deadline Scheduling Example Example load: t(p0) = 350, deadline 575; t(p1) = 125, deadline 550; t(p2) = 475, deadline 1050; t(p3) = 250, no deadline; t(p4) = 75, deadline 200. Gantt chart: P4 runs 0-75, P1 75-200, P0 200-550, P2 550-1025, P3 1025-1275. All deadlines are met.
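One way to obtain the schedule shown is an earliest-deadline-first ordering; that ordering is an assumption here, since the slide only gives the resulting Gantt chart. A job with no deadline is placed last:

```python
service  = {"P0": 350, "P1": 125, "P2": 475, "P3": 250, "P4": 75}
deadline = {"P0": 575, "P1": 550, "P2": 1050, "P3": float("inf"), "P4": 200}
order = sorted(service, key=deadline.get)   # ['P4', 'P1', 'P0', 'P2', 'P3']

clock = 0
for p in order:
    clock += service[p]
    if deadline[p] == float("inf"):
        print(f"{p} finishes at {clock} (no deadline)")
    else:
        print(f"{p} finishes at {clock}, deadline {deadline[p]}",
              "met" if clock <= deadline[p] else "missed")
```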

31 Round Robin The most widely used scheduling algorithm. It tries to be fair by distributing the processing time equally among all the processes. When a process uses up its quantum, it is put at the end of the ready list.
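A minimal round-robin simulation sketch over the same example load (the quantum of 50 time units is an arbitrary choice for illustration):

```python
from collections import deque

service = {"P0": 350, "P1": 125, "P2": 475, "P3": 250, "P4": 75}
QUANTUM = 50

ready = deque(["P0", "P1", "P2", "P3", "P4"])
remaining, clock, finish = dict(service), 0, {}

while ready:
    p = ready.popleft()
    run = min(QUANTUM, remaining[p])   # run for one quantum or until the job is done
    clock += run
    remaining[p] -= run
    if remaining[p] == 0:
        finish[p] = clock              # process completes and leaves the system
    else:
        ready.append(p)                # quantum used up: back to the end of the ready list

print(finish)                          # completion time of each process
```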

32 How to choose the length of the quantum? Setting the quantum too short causes many process switches and lowers CPU efficiency. Setting the quantum too long may cause poor response to short interactive requests. Solution: around 20-50 msec is a reasonable compromise.

33 Multiple-Level Queues An extension of priority scheduling. The ready queue is partitioned into separate queues: foreground and background. Scheduling must be done between the queues. It uses 2 scheduling strategies: one to select the queue, and another to select the process within the queue (such as round robin).
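A minimal sketch of the two-strategy idea, assuming a fixed-priority rule between the queues (foreground always served first) and round robin within the foreground queue; the queue names and the between-queue rule are illustrative assumptions:

```python
from collections import deque

foreground = deque()   # e.g. interactive processes, served round robin
background = deque()   # e.g. batch processes, served FCFS

def pick_next():
    """Between-queue strategy: foreground has strict priority over background."""
    if foreground:
        return foreground.popleft(), "round robin"
    if background:
        return background.popleft(), "FCFS"
    return None, None
```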

34 LINUX Scheduling Mechanism Linux threads are kernel threads, hence scheduling is based on threads. It is based on time-sharing techniques. Each thread has a scheduling priority. The default value is 20, but it can be altered with the nice(value) system call to 20 - value. The value must be in the range -20 to +19, hence the range of priority is between 1 and 40. Quality of service is proportional to priority. The scheduler keeps track of what processes are doing and adjusts the priorities periodically, i.e., the priorities are dynamic.
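From user space, the nice value can be inspected and raised with os.nice; this is a small illustration of the interface only, not of the kernel's internal priority calculation described above (os.nice is available on POSIX systems):

```python
import os

print("nice value:", os.nice(0))   # an increment of 0 just reports the current value
print("after +5:", os.nice(5))     # raise the nice value by 5, i.e. lower the priority
```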

35 Windows NT Scheduling Mechanism Windows NT is a pre-emptive multithreading OS. Here too, the unit of scheduling is the thread. It uses 32 numerical thread priorities, 0 to 31 (0 being reserved for system use): 16 to 31 for time-critical operations, and 1 to 15 (dynamic priorities) for program threads of typical applications.

36 References Nutt, Gary. Operating Systems. Third Edition, Pearson Education Inc., 2004. Tanenbaum, Andrew. Modern Operating Systems. Prentice-Hall of India Pvt. Ltd. http://www.windowsitpro.com/Article/ArticleID/302/302.html

