
1 Lecture 3 CPU Scheduling

2 Lecture Highlights
- Introduction to CPU scheduling
- What is CPU scheduling
- Related Concepts of Starvation, Context Switching and Preemption
- Scheduling Algorithms
- Parameters Involved
- Parameter-Performance Relationships
- Some Sample Results

3 Introduction: What is CPU scheduling
An operating system must select processes for execution in some fashion. CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. The selection is carried out by an appropriate scheduling algorithm.

4 Related Concepts: Starvation
This is the situation where a process waits endlessly for CPU attention. As a result of starvation, the starved process may never complete its designated task.

5 Related Concepts: Context Switch
Switching the CPU to another process requires saving the state of the old process and loading the state of the new process. This task is known as a context switch. Frequent context switching affects performance adversely because the CPU spends more time switching between tasks than it does on the tasks themselves.

6 Related Concepts: Preemption
Preemption involves the CPU suspending the process currently being served in favor of another process. The new process may be favored for a variety of reasons, such as a higher priority or a smaller execution time, compared to the currently executing process.

7 Scheduling Algorithms
Scheduling algorithms deal with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Some commonly used scheduling algorithms:
- First In First Out (FIFO)
- Shortest Job First (SJF) without preemption
- Preemptive SJF
- Priority based without preemption
- Preemptive priority based
- Round robin
- Multilevel feedback queue

8 Scheduling algorithms: A process-mix
When studying scheduling algorithms, we take a set of processes and their parameters and calculate certain performance measures. This set of processes, also termed a process-mix, is described by some of the PCB parameters.

9 Scheduling algorithms: A sample process-mix
Following is a sample process-mix which we'll use to study the various scheduling algorithms.

Process ID | Time of Arrival | Priority | Execution Time
1          | 0               | 20       | 10
2          | 2               | 10       | 1
3          | 4               | 58       | 2
4          | 8               | 40       | 4
5          | 12              | 30       | 3
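As a quick sketch, the process-mix above can be written down as Python data for the simulations that follow. The tuple layout and the convention that a lower priority number means higher priority are assumptions inferred from the priority-scheduling timelines later in the lecture:

```python
# Sample process-mix as Python data: a sketch for the simulations below.
# Tuple layout and the "lower number = higher priority" convention are
# assumptions inferred from the scheduling timelines in this lecture.
PROCESS_MIX = [
    # (pid, arrival_time, priority, execution_time)
    (1, 0, 20, 10),
    (2, 2, 10, 1),
    (3, 4, 58, 2),
    (4, 8, 40, 4),
    (5, 12, 30, 3),
]

total_work = sum(p[3] for p in PROCESS_MIX)   # 20 time units of CPU work
```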

10 Scheduling Algorithms: First In First Out (FIFO)
This is the simplest algorithm: processes are served in the order in which they arrive. The timeline on the following slide should give you a better idea of how the FIFO algorithm works.

11 Scheduling Algorithms: First In First Out (FIFO)
[Timeline, 0–20: P1 runs 0–10, P2 runs 10–11, P3 runs 11–13, P4 runs 13–17, P5 runs 17–20. Arrivals: P1 at 0, P2 at 2, P3 at 4, P4 at 8, P5 at 12.]

12 FIFO Scheduling Algorithm: Calculating Performance Measures
Turnaround Time = End Time – Time of Arrival
Turnaround Time for P1 = 10 – 0 = 10
Turnaround Time for P2 = 11 – 2 = 9
Turnaround Time for P3 = 13 – 4 = 9
Turnaround Time for P4 = 17 – 8 = 9
Turnaround Time for P5 = 20 – 12 = 8
Total Turnaround Time = 10+9+9+9+8 = 45
Average Turnaround Time = 45/5 = 9

13 FIFO Scheduling Algorithm: Calculating Performance Measures
Waiting Time = Turnaround Time – Execution Time
Waiting Time for P1 = 10 – 10 = 0
Waiting Time for P2 = 9 – 1 = 8
Waiting Time for P3 = 9 – 2 = 7
Waiting Time for P4 = 9 – 4 = 5
Waiting Time for P5 = 8 – 3 = 5
Total Waiting Time = 0+8+7+5+5 = 25
Average Waiting Time = 25/5 = 5

14 FIFO Scheduling Algorithm: Calculating Performance Measures
Throughput
Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
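These FIFO figures can be reproduced with a few lines of Python (a sketch; the variable names are illustrative, not from the lecture):

```python
# FIFO: run each process to completion in arrival order.
procs = [(1, 0, 10), (2, 2, 1), (3, 4, 2), (4, 8, 4), (5, 12, 3)]  # (pid, arrival, burst)

clock, turnaround, waiting = 0, {}, {}
for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
    clock = max(clock, arrival)      # the CPU idles until the process arrives
    clock += burst                   # then runs it to completion
    turnaround[pid] = clock - arrival
    waiting[pid] = turnaround[pid] - burst

avg_turnaround = sum(turnaround.values()) / len(procs)   # 45/5 = 9.0
avg_waiting = sum(waiting.values()) / len(procs)         # 25/5 = 5.0
throughput = len(procs) / clock                          # 5/20 = 0.25
```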

15 FIFO Scheduling Algorithm: Pros and cons
Pros:
- FIFO is an easy algorithm to understand and implement.
Cons:
- The average waiting time under the FIFO scheme is not minimal and may vary substantially if the process execution times vary greatly.
- CPU and device utilization is low.
- It is troublesome for time-sharing systems.

16 Scheduling Algorithms: Shortest Job First (SJF) without preemption
The CPU chooses the shortest job available in its queue and processes it to completion. If a shorter job arrives during processing, the CPU does not preempt the current process in favor of the new process. The timeline on the following slide should give you a better idea of how the SJF algorithm without preemption works.

17 Scheduling Algorithms: Shortest Job First (SJF) without preemption
[Timeline, 0–20: P1 runs 0–10, P2 runs 10–11, P3 runs 11–13, P5 runs 13–16, P4 runs 16–20. Arrivals: P1 at 0, P2 at 2, P3 at 4, P4 at 8, P5 at 12.]

18 SJF without preemption: Calculating Performance Measures
Turnaround Time = End Time – Time of Arrival
Turnaround Time for P1 = 10 – 0 = 10
Turnaround Time for P2 = 11 – 2 = 9
Turnaround Time for P3 = 13 – 4 = 9
Turnaround Time for P4 = 20 – 8 = 12
Turnaround Time for P5 = 16 – 12 = 4
Total Turnaround Time = 10+9+9+12+4 = 44
Average Turnaround Time = 44/5 = 8.8

19 SJF without preemption: Calculating Performance Measures
Waiting Time = Turnaround Time – Execution Time
Waiting Time for P1 = 10 – 10 = 0
Waiting Time for P2 = 9 – 1 = 8
Waiting Time for P3 = 9 – 2 = 7
Waiting Time for P4 = 12 – 4 = 8
Waiting Time for P5 = 4 – 3 = 1
Total Waiting Time = 0+8+7+8+1 = 24
Average Waiting Time = 24/5 = 4.8

20 SJF without preemption: Calculating Performance Measures
Throughput
Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
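A non-preemptive SJF run over the same process-mix can be sketched as follows (illustrative names; the scheduler simply picks the shortest arrived job whenever the CPU frees up):

```python
# Non-preemptive SJF: when the CPU is free, start the shortest job among
# those that have already arrived and run it to completion.
procs = [(1, 0, 10), (2, 2, 1), (3, 4, 2), (4, 8, 4), (5, 12, 3)]  # (pid, arrival, burst)

clock, turnaround, waiting = 0, {}, {}
pending = list(procs)
while pending:
    ready = [p for p in pending if p[1] <= clock]
    if not ready:                              # CPU idle: jump to next arrival
        clock = min(p[1] for p in pending)
        continue
    pid, arrival, burst = min(ready, key=lambda p: p[2])   # shortest burst wins
    pending.remove((pid, arrival, burst))
    clock += burst
    turnaround[pid] = clock - arrival
    waiting[pid] = turnaround[pid] - burst

avg_turnaround = sum(turnaround.values()) / len(procs)   # 44/5 = 8.8
avg_waiting = sum(waiting.values()) / len(procs)         # 24/5 = 4.8
```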

21 SJF without preemption: Pros and cons
Pros:
- SJF scheduling is provably optimal, in that it gives the minimum average waiting time for a given set of processes.
Cons:
- Since there is no way of knowing execution times in advance, they must be estimated, which makes the implementation more complicated.
- The scheme suffers from possible starvation.

22 Scheduling Algorithms: Shortest Job First (SJF) with preemption
The CPU chooses the shortest job available in its queue and begins processing it. If a shorter job arrives during processing, the CPU preempts the current process in favor of the new process. Execution times are compared by time remaining, not total execution time. The timeline on the following slide should give you a better idea of how the SJF algorithm with preemption works.

23 Scheduling Algorithms: Shortest Job First (SJF) with preemption
[Timeline, 0–20: P1 runs 0–2, P2 runs 2–3, P1 runs 3–4, P3 runs 4–6, P1 runs 6–8, P4 runs 8–12, P5 runs 12–15, P1 runs 15–20. Arrivals: P1 at 0, P2 at 2, P3 at 4, P4 at 8, P5 at 12.]

24 SJF with preemption: Calculating Performance Measures
Turnaround Time = End Time – Time of Arrival
Turnaround Time for P1 = 20 – 0 = 20
Turnaround Time for P2 = 3 – 2 = 1
Turnaround Time for P3 = 6 – 4 = 2
Turnaround Time for P4 = 12 – 8 = 4
Turnaround Time for P5 = 15 – 12 = 3
Total Turnaround Time = 20+1+2+4+3 = 30
Average Turnaround Time = 30/5 = 6

25 SJF with preemption: Calculating Performance Measures
Waiting Time = Turnaround Time – Execution Time
Waiting Time for P1 = 20 – 10 = 10
Waiting Time for P2 = 1 – 1 = 0
Waiting Time for P3 = 2 – 2 = 0
Waiting Time for P4 = 4 – 4 = 0
Waiting Time for P5 = 3 – 3 = 0
Total Waiting Time = 10+0+0+0+0 = 10
Average Waiting Time = 10/5 = 2

26 SJF with preemption: Calculating Performance Measures
Throughput
Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
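Preemptive SJF (shortest remaining time first) can be sketched with a unit-step simulation; names are illustrative:

```python
# Preemptive SJF (shortest remaining time first), advanced one time unit
# per tick: at each tick the arrived process with the least remaining
# work runs, so a shorter newcomer preempts the current process.
procs = {1: (0, 10), 2: (2, 1), 3: (4, 2), 4: (8, 4), 5: (12, 3)}  # pid: (arrival, burst)

remaining = {pid: burst for pid, (arrival, burst) in procs.items()}
finish, clock = {}, 0
while remaining:
    ready = [pid for pid in remaining if procs[pid][0] <= clock]
    pid = min(ready, key=lambda p: remaining[p])   # least remaining time
    remaining[pid] -= 1
    clock += 1
    if remaining[pid] == 0:
        remaining.pop(pid)
        finish[pid] = clock

turnaround = {pid: finish[pid] - procs[pid][0] for pid in procs}
waiting = {pid: turnaround[pid] - procs[pid][1] for pid in procs}
avg_turnaround = sum(turnaround.values()) / len(procs)   # 30/5 = 6.0
avg_waiting = sum(waiting.values()) / len(procs)         # 10/5 = 2.0
```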

27 SJF with preemption: Pros and cons
Pros:
- Preemptive SJF scheduling further reduces the average waiting time for a given set of processes.
Cons:
- Since there is no way of knowing execution times in advance, they must be estimated, which makes the implementation more complicated.
- Like its non-preemptive counterpart, the scheme suffers from possible starvation.
- Too much preemption may result in higher context switching times, adversely affecting system performance.

28 Scheduling Algorithms: Priority based without preemption
The CPU chooses the highest-priority job available in its queue and processes it to completion. If a higher-priority job arrives during processing, the CPU does not preempt the current process in favor of the new process. The timeline on the following slide should give you a better idea of how the priority based algorithm without preemption works.

29 Scheduling Algorithms: Priority based without preemption
[Timeline, 0–20: P1 runs 0–10, P2 runs 10–11, P4 runs 11–15, P5 runs 15–18, P3 runs 18–20. Arrivals: P1 at 0, P2 at 2, P3 at 4, P4 at 8, P5 at 12.]

30 Priority based without preemption: Calculating Performance Measures
Turnaround Time = End Time – Time of Arrival
Turnaround Time for P1 = 10 – 0 = 10
Turnaround Time for P2 = 11 – 2 = 9
Turnaround Time for P3 = 20 – 4 = 16
Turnaround Time for P4 = 15 – 8 = 7
Turnaround Time for P5 = 18 – 12 = 6
Total Turnaround Time = 10+9+16+7+6 = 48
Average Turnaround Time = 48/5 = 9.6

31 Priority based without preemption: Calculating Performance Measures
Waiting Time = Turnaround Time – Execution Time
Waiting Time for P1 = 10 – 10 = 0
Waiting Time for P2 = 9 – 1 = 8
Waiting Time for P3 = 16 – 2 = 14
Waiting Time for P4 = 7 – 4 = 3
Waiting Time for P5 = 6 – 3 = 3
Total Waiting Time = 0+8+14+3+3 = 28
Average Waiting Time = 28/5 = 5.6

32 Priority based without preemption: Calculating Performance Measures
Throughput
Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
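The non-preemptive priority scheduler is the same selection loop as SJF with the priority field as the key. A sketch, where the numeric priorities and the "lower number = higher priority" convention are assumptions inferred from the timeline above:

```python
# Non-preemptive priority scheduling: when the CPU frees up, pick the
# arrived process with the best priority. Numeric priorities and the
# "lower number = higher priority" convention are assumptions inferred
# from the timeline.
procs = [(1, 0, 20, 10), (2, 2, 10, 1), (3, 4, 58, 2),
         (4, 8, 40, 4), (5, 12, 30, 3)]   # (pid, arrival, priority, burst)

clock, turnaround, waiting = 0, {}, {}
pending = list(procs)
while pending:
    ready = [p for p in pending if p[1] <= clock]
    if not ready:                             # CPU idle: jump to next arrival
        clock = min(p[1] for p in pending)
        continue
    job = min(ready, key=lambda p: p[2])      # numerically lowest = best priority
    pending.remove(job)
    clock += job[3]
    turnaround[job[0]] = clock - job[1]
    waiting[job[0]] = turnaround[job[0]] - job[3]

avg_turnaround = sum(turnaround.values()) / len(procs)   # 48/5 = 9.6
avg_waiting = sum(waiting.values()) / len(procs)         # 28/5 = 5.6
```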

33 Priority based without preemption: Pros and cons
Performance measures in a priority scheme have no meaning.
Cons:
- A major problem with priority based algorithms is indefinite blocking, or starvation.

34 Scheduling Algorithms: Priority based with preemption
The CPU chooses the job with the highest priority available in its queue and begins processing it. If a job with higher priority arrives during processing, the CPU preempts the current process in favor of the new process. The timeline on the following slide should give you a better idea of how the priority based algorithm with preemption works.

35 Scheduling Algorithms: Priority based with preemption
[Timeline, 0–20: P1 runs 0–2, P2 runs 2–3, P1 runs 3–11, P4 runs 11–12, P5 runs 12–15, P4 runs 15–18, P3 runs 18–20. Arrivals: P1 at 0, P2 at 2, P3 at 4, P4 at 8, P5 at 12.]

36 Priority based with preemption: Calculating Performance Measures
Turnaround Time = End Time – Time of Arrival
Turnaround Time for P1 = 11 – 0 = 11
Turnaround Time for P2 = 3 – 2 = 1
Turnaround Time for P3 = 20 – 4 = 16
Turnaround Time for P4 = 18 – 8 = 10
Turnaround Time for P5 = 15 – 12 = 3
Total Turnaround Time = 11+1+16+10+3 = 41
Average Turnaround Time = 41/5 = 8.2

37 Priority based with preemption: Calculating Performance Measures
Waiting Time = Turnaround Time – Execution Time
Waiting Time for P1 = 11 – 10 = 1
Waiting Time for P2 = 1 – 1 = 0
Waiting Time for P3 = 16 – 2 = 14
Waiting Time for P4 = 10 – 4 = 6
Waiting Time for P5 = 3 – 3 = 0
Total Waiting Time = 1+0+14+6+0 = 21
Average Waiting Time = 21/5 = 4.2

38 Priority based with preemption: Calculating Performance Measures
Throughput
Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
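The preemptive variant is the same unit-step loop as preemptive SJF, keyed on the static priority instead of remaining time (numeric priorities, lower = higher, are assumptions inferred from the timeline above):

```python
# Preemptive priority scheduling, one time unit per tick: the
# best-priority arrived process always runs, so a higher-priority
# arrival preempts the current process.
procs = {1: (0, 20, 10), 2: (2, 10, 1), 3: (4, 58, 2),
         4: (8, 40, 4), 5: (12, 30, 3)}   # pid: (arrival, priority, burst)

remaining = {pid: burst for pid, (arrival, prio, burst) in procs.items()}
finish, clock = {}, 0
while remaining:
    ready = [pid for pid in remaining if procs[pid][0] <= clock]
    pid = min(ready, key=lambda p: procs[p][1])   # best (lowest) priority
    remaining[pid] -= 1
    clock += 1
    if remaining[pid] == 0:
        remaining.pop(pid)
        finish[pid] = clock

turnaround = {pid: finish[pid] - procs[pid][0] for pid in procs}
waiting = {pid: turnaround[pid] - procs[pid][2] for pid in procs}
avg_turnaround = sum(turnaround.values()) / len(procs)   # 41/5 = 8.2
avg_waiting = sum(waiting.values()) / len(procs)         # 21/5 = 4.2
```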

39 Priority based with preemption: Pros and cons
Performance measures in a priority scheme have no meaning.
Cons:
- A major problem with priority based algorithms is indefinite blocking, or starvation.
- Too much preemption may result in higher context switching times, adversely affecting system performance.

40 Scheduling Algorithms: Round Robin
The CPU cycles through its process queue and, in succession, gives its attention to every process for a predetermined time slot. A very large time slot results in FIFO behavior, while a very small time slot results in high context switching and a tremendous decrease in performance. The timeline on the following slide should give you a better idea of how the round robin algorithm works.

41 Scheduling Algorithms: Round Robin
[Timeline, 0–20, time slot 2: P1 runs 0–2, P2 runs 2–3, P1 runs 3–5, P3 runs 5–7, P1 runs 7–9, P4 runs 9–11, P1 runs 11–13, P4 runs 13–15, P5 runs 15–17, P1 runs 17–19, P5 runs 19–20. Arrivals: P1 at 0, P2 at 2, P3 at 4, P4 at 8, P5 at 12.]

42 Round Robin Algorithm: Calculating Performance Measures
Turnaround Time = End Time – Time of Arrival
Turnaround Time for P1 = 19 – 0 = 19
Turnaround Time for P2 = 3 – 2 = 1
Turnaround Time for P3 = 7 – 4 = 3
Turnaround Time for P4 = 15 – 8 = 7
Turnaround Time for P5 = 20 – 12 = 8
Total Turnaround Time = 19+1+3+7+8 = 38
Average Turnaround Time = 38/5 = 7.6

43 Round Robin Algorithm: Calculating Performance Measures
Waiting Time = Turnaround Time – Execution Time
Waiting Time for P1 = 19 – 10 = 9
Waiting Time for P2 = 1 – 1 = 0
Waiting Time for P3 = 3 – 2 = 1
Waiting Time for P4 = 7 – 4 = 3
Waiting Time for P5 = 8 – 3 = 5
Total Waiting Time = 9+0+1+3+5 = 18
Average Waiting Time = 18/5 = 3.6

44 Round Robin Algorithm: Calculating Performance Measures
Throughput
Total time for completion of 5 processes = 20
Therefore, Throughput = 5/20 = 0.25 processes per unit time
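A round robin sketch with a time quantum of 2 (the value implied by the timeline; helper names are illustrative). One convention is assumed here: new arrivals enter the ready queue before a preempted process is re-appended.

```python
from collections import deque

# Round robin with a fixed time quantum over the sample process-mix.
procs = {1: (0, 10), 2: (2, 1), 3: (4, 2), 4: (8, 4), 5: (12, 3)}  # pid: (arrival, burst)
QUANTUM = 2

arrivals = deque(sorted(procs, key=lambda p: procs[p][0]))
remaining = {pid: burst for pid, (arrival, burst) in procs.items()}
queue, finish, clock = deque(), {}, 0

def admit(now):
    """Move every process that has arrived by `now` into the ready queue."""
    while arrivals and procs[arrivals[0]][0] <= now:
        queue.append(arrivals.popleft())

admit(clock)
while queue or arrivals:
    if not queue:                        # CPU idle: jump to the next arrival
        clock = procs[arrivals[0]][0]
        admit(clock)
    pid = queue.popleft()
    run = min(QUANTUM, remaining[pid])
    clock += run
    remaining[pid] -= run
    admit(clock)                         # arrivals first, then the preempted job
    if remaining[pid] == 0:
        finish[pid] = clock
    else:
        queue.append(pid)

turnaround = {pid: finish[pid] - procs[pid][0] for pid in procs}
waiting = {pid: turnaround[pid] - procs[pid][1] for pid in procs}
avg_turnaround = sum(turnaround.values()) / len(procs)   # 38/5 = 7.6
avg_waiting = sum(waiting.values()) / len(procs)         # 18/5 = 3.6
```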

45 Round Robin Algorithm: Pros and cons
Pros:
- It is a fair algorithm.
Cons:
- The performance of the scheme depends heavily on the size of the time quantum. If the context-switch time is about 10% of the time quantum, then about 10% of the CPU time will be spent on context switches (the next example will give further insight into this).

46 Comparing Scheduling Algorithms
The following table summarizes the performance measures of the different algorithms:

Algorithm                | Av. Waiting Time | Av. Turnaround Time | Throughput
FIFO                     | 5.0              | 9.0                 | 0.25
SJF (no preemption)      | 4.8              | 8.8                 | 0.25
SJF (preemptive)         | 2.0              | 6.0                 | 0.25
Priority (no preemption) | 5.6              | 9.6                 | 0.25
Priority (preemptive)    | 4.2              | 8.2                 | 0.25
Round robin              | 3.6              | 7.6                 | 0.25

47 Comparing Scheduling Algorithms
- The preemptive counterparts perform better in terms of waiting time and turnaround time. However, they suffer from the disadvantage of possible starvation.
- FIFO and round robin ensure no starvation. FIFO, however, suffers from poor performance measures.
- The performance of round robin depends on the time slot value. It would also be the most affected once context switching time is taken into account, as you'll see in the next example (in the above cases, context switching time = 0).

48 Round Robin Algorithm: An example with context switching
[Timeline, 0–30, time slot 2, context switch time 1 (a switch precedes every dispatch after the first): P1 runs 0–2, P2 runs 3–4, P1 runs 5–7, P3 runs 8–10, P1 runs 11–13, P4 runs 14–16, P5 runs 17–19, P1 runs 20–22, P4 runs 23–25, P5 runs 26–27, P1 runs 28–30. Arrivals: P1 at 0, P2 at 2, P3 at 4, P4 at 8, P5 at 12.]

49 Round Robin Algorithm: An example with context switching
Turnaround Time = End Time – Time of Arrival
Turnaround Time for P1 = 30 – 0 = 30
Turnaround Time for P2 = 4 – 2 = 2
Turnaround Time for P3 = 10 – 4 = 6
Turnaround Time for P4 = 25 – 8 = 17
Turnaround Time for P5 = 27 – 12 = 15
Total Turnaround Time = 30+2+6+17+15 = 70
Average Turnaround Time = 70/5 = 14

50 Round Robin Algorithm: An example with context switching
Waiting Time = Turnaround Time – Execution Time
Waiting Time for P1 = 30 – 10 = 20
Waiting Time for P2 = 2 – 1 = 1
Waiting Time for P3 = 6 – 2 = 4
Waiting Time for P4 = 17 – 4 = 13
Waiting Time for P5 = 15 – 3 = 12
Total Waiting Time = 20+1+4+13+12 = 50
Average Waiting Time = 50/5 = 10

51 Round Robin Algorithm: An example with context switching
Throughput
Total time for completion of 5 processes = 30
Therefore, Throughput = 5/30 = 0.17 processes per unit time
CPU Utilization
% CPU Utilization = 20/30 * 100 = 66.67%
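The round robin sketch extends naturally to charge a 1-unit context switch before every dispatch except the very first; this placement of switches is an assumption, but it reproduces the example's totals:

```python
from collections import deque

# Round robin, quantum 2, with a 1-unit context switch before each
# dispatch after the first (switch placement is an assumption).
procs = {1: (0, 10), 2: (2, 1), 3: (4, 2), 4: (8, 4), 5: (12, 3)}
QUANTUM, CS_TIME = 2, 1

arrivals = deque(sorted(procs, key=lambda p: procs[p][0]))
remaining = {pid: burst for pid, (arrival, burst) in procs.items()}
queue, finish = deque(), {}
clock, busy, first = 0, 0, True

def admit(now):
    while arrivals and procs[arrivals[0]][0] <= now:
        queue.append(arrivals.popleft())

admit(clock)
while queue or arrivals:
    if not queue:
        clock = procs[arrivals[0]][0]
        admit(clock)
    pid = queue.popleft()
    if not first:
        clock += CS_TIME             # pure overhead: no useful work done
    first = False
    run = min(QUANTUM, remaining[pid])
    clock += run
    busy += run
    remaining[pid] -= run
    admit(clock)
    if remaining[pid] == 0:
        finish[pid] = clock
    else:
        queue.append(pid)

turnaround = {pid: finish[pid] - procs[pid][0] for pid in procs}
avg_turnaround = sum(turnaround.values()) / len(procs)   # 70/5 = 14.0
avg_waiting = avg_turnaround - 20 / len(procs)           # 14.0 - 4.0 = 10.0
cpu_utilization = busy / clock                           # 20/30, about 66.67%
```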

52 Comparing Performance Measures of Round Robin Algorithms
The following table summarizes the performance measures of the round robin algorithm with and without context switching:

Performance Measure  | Without Context Switching | With Context Switching
Av. Waiting Time     | 3.6                       | 10
Av. Turnaround Time  | 7.6                       | 14
Throughput           | 0.25                      | 0.17
% CPU Utilization    | 100%                      | 66.67%

53 Context Switching Remarks
The comparison of performance measures for the round robin algorithm with and without context switching clearly indicates that context switching time is pure overhead, because the system does no useful work while switching. Thus, the time quantum should be large with respect to the context switching time. However, if it is too big, round robin scheduling degenerates to a FIFO policy. Consequently, finding the optimal value of the time quantum for a system is crucial.

54 Scheduling Algorithms: Multi-level Feedback Queue
In the previous algorithms, we had a single process queue and applied the same scheduling algorithm to every process. A multi-level feedback queue involves having multiple queues with different levels of priority and a different scheduling algorithm for each. The figure on the following slide depicts a multi-level feedback queue.

55 Scheduling Algorithms: Multi-level Feedback Queue

Queue                      | Process type             | Scheduling algorithm
Queue 1 (highest priority) | System jobs              | Round robin
Queue 2                    | Computation intense      | SJF with preemption
Queue 3                    | Less intense calculation | Priority-based
Queue 4 (lowest priority)  | Multimedia tasks         | FIFO

56 Scheduling Algorithms: Multi-level Feedback Queue
As depicted in the diagram, different kinds of processes are assigned to queues with different scheduling algorithms. The idea is to separate processes by their CPU-burst characteristics and relative priorities. Thus, system jobs are assigned to the highest-priority queue with round robin scheduling. Multimedia tasks, on the other hand, are assigned to the lowest-priority queue, which uses FIFO scheduling, well-suited for processes with short CPU-burst characteristics.

57 Scheduling Algorithms: Multi-level Feedback Queue
Processes are assigned to a queue depending on their importance. Each queue has a different scheduling algorithm that schedules processes for that queue. To prevent starvation, we utilize the concept of aging: processes are upgraded to the next queue after they spend a predetermined amount of time in their original queue. This algorithm, though the most general, is also the most complicated, and high context switching is involved.

58 α-value and initial time estimates
Many algorithms use the execution time of a process to determine which job is processed next. Since it is impossible to know the execution time of a process before it begins execution, this value has to be estimated. α, a first-degree filter, is used to estimate the execution time of a process.

59 Estimating execution time
Estimated execution time is obtained using the following formula:
z_n = α·z_(n-1) + (1 – α)·t_(n-1)
where z is the estimated execution time, t is the actual execution time, and α is the first-degree filter constant, 0 ≤ α ≤ 1.
The following example provides a more complete picture.

60 Estimating execution time
Here, α = 0.5 and z_0 = 10. Then, by the formula, z_1 = α·z_0 + (1 – α)·t_0 = (0.5)(10) + (1 – 0.5)(6) = 8, and similarly z_2, z_3, …, z_5 are calculated.

Process | z_n | t_n
P0      | 10  | 6
P1      | 8   | 4
P2      | 6   | 6
P3      | 6   | 4
P4      | 5   | 17
P5      | 11  | 13
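The z_n column can be reproduced in a couple of lines (a sketch; list names are illustrative):

```python
# First-degree filter: z_n = alpha*z_{n-1} + (1 - alpha)*t_{n-1},
# with alpha = 0.5 and initial estimate z_0 = 10.
ALPHA = 0.5
actual = [6, 4, 6, 4, 17, 13]      # t_0 .. t_5 from the table
estimates = [10.0]                 # z_0
for t in actual[:-1]:              # each z_n uses the previous actual time
    estimates.append(ALPHA * estimates[-1] + (1 - ALPHA) * t)
# estimates == [10.0, 8.0, 6.0, 6.0, 5.0, 11.0]
```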

61 Estimating execution time: Effect of α
The value of α affects the estimation process.
- α = 0 implies that z_n does not depend on z_(n-1) and is equal to t_(n-1).
- α = 1 implies that z_n does not depend on t_(n-1) and is equal to z_(n-1).
Consequently, it is more favorable to start off with a random value of α and use an updating scheme as described in the next slide.

62 Estimating execution time: α-update
Steps of the α-updating scheme:
- Start with a random value of α (here, 0.5)
- Obtain the sum of squared differences, i.e. f(α)
- Differentiate f(α) and find the new value of α
The following slide shows these steps for our example.

z_n                                   | t_n | Squared Difference
10                                    | 6   |
α·10 + (1 – α)·6 = 6 + 4α             | 4   | [(6 + 4α) – 4]² = (2 + 4α)²
α·(6 + 4α) + (1 – α)·4 = 4α² + 2α + 4 | 6   | [(4α² + 2α + 4) – 6]² = (4α² + 2α – 2)²

63 Estimating execution time: α-update
In the above example, at the time of estimating the execution time of P3, we update α as follows. The sum of squared differences is given by:
SSD = (2 + 4α)² + (4α² + 2α – 2)² = 16α⁴ + 16α³ + 4α² + 8α + 8
Setting d/dα [SSD] = 0 gives us:
8α³ + 6α² + α + 1 = 0 (Equation 1)
Solving Equation 1, we get α = 0.7916. Now,
z_3 = α·z_2 + (1 – α)·t_2
Substituting values, we have z_3 = (0.7916)(6) + (1 – 0.7916)(6) = 6

64 Multilevel Feedback Queue: Parameters Involved
- Round robin queue: time slot
- FIFO, priority and SJF queues: aging thresholds
- Preemptive vs. non-preemptive scheduling (2 switches, in the priority and SJF queues)
- Context switching time
- α-values and initial execution time estimates

65 Multilevel Feedback Queue: Effect of round robin time slot
A small round robin quantum lowers system performance through increased context switching time. Large quantum values result in FIFO behavior, with effective CPU utilization and lower turnaround and waiting times, but also the potential for starvation. Finding an optimal time slot value is therefore imperative for maximum CPU utilization with a lowered starvation problem.

66 Multilevel Feedback Queue: Effect of aging thresholds
A very large aging threshold makes the waiting and turnaround times unacceptable; these are signs of processes nearing starvation. On the other hand, a very small value makes the scheme equivalent to a single round robin queue.

67 Multilevel Feedback Queue: Effect of preemption choice
Preemption undoubtedly increases the number of context switches, and an increased number of context switches adversely affects the efficiency of the system. However, preemptive scheduling has been shown to decrease waiting and turnaround time measures in certain instances. In SJF scheduling, the advantage of choosing preemption over non-preemption depends largely on the CPU burst time predictions, which are themselves a difficult proposition.

68 Multilevel Feedback Queue: Effect of context switching time
An increasing context switching time degrades system performance in an almost linear fashion: as the context switching time increases, so do the average turnaround and waiting times, which in turn pulls down CPU utilization.

69 Multilevel Feedback Queue: Effect of α-values and initial execution time estimates
There is no proven trend for this parameter. In one of the studies, for the given process mix, the turnaround time obtained from predicted burst times was reported to be significantly lower than the one obtained from randomly generated burst time estimates. [Courtesy: Su's project report]

70 Multilevel Feedback Queue: Performance Measures
The performance measures used in this module are:
- Average turnaround time
- Average waiting time
- CPU utilization
- CPU throughput

71 Multilevel Feedback Queue: Implementation
As part of Assignment 1, you'll implement a multilevel feedback queue within an operating system satisfying the given requirements. (For complete details refer to Assignment 1.) We'll see a brief explanation of the assignment in the following slides.

72 Multilevel Feedback Queue: Implementation Details
Following are some specifications of the scheduling system you'll implement:
- The scheduler consists of 4 linear queues: the first queue is FIFO, the second is priority-based, the third is SJF, and the fourth (highest priority) is round robin.
- Feedback occurs through aging. Aging parameters differ, i.e., each queue has a different aging threshold (time) before a process can migrate to a higher-priority queue. (This should be a variable parameter for each queue.)
- The time slot for the round robin queue is a variable parameter.
- You should allow a variable context switching time to be specified by the user.

73 Multilevel Feedback Queue: Implementation Details
- Implement a switch (variable parameter) to allow choosing preemptive or non-preemptive scheduling in the SJF and priority-based queues.
- Jobs cannot execute in a queue unless there are no jobs in higher-priority queues.
- Jobs are to be created with the following fields in their PCB: job number, arrival time, actual execution time, priority, queue number (process type 1–4). Creation is done randomly; choose your own ranges for the different fields in the PCB definition.
- Output should indicate a timeline, i.e., at every time step, indicate which processes are created (if any), which ones are completed (if any), processes which aged in different queues, etc.
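The aging requirement above can be sketched structurally as follows. The class layout and threshold values are illustrative assumptions, not part of the assignment spec; queue 4 is taken as the highest-priority (round robin) queue, as in the specification:

```python
from collections import deque

# Aging sketch for the 4-queue scheduler: a process that has waited past
# its queue's threshold migrates one queue up. Queue 4 (round robin) is
# the highest priority; thresholds and field names are illustrative.
class Process:
    def __init__(self, pid, arrival, burst, priority, qnum):
        self.pid, self.arrival, self.burst = pid, arrival, burst
        self.priority, self.qnum = priority, qnum
        self.waited = 0              # time spent waiting in the current queue

AGING_THRESHOLD = {1: 12, 2: 10, 3: 8}   # a variable parameter per queue

def age(queues):
    """Promote processes that waited past their queue's aging threshold."""
    for qnum in (3, 2, 1):           # top-down, so no one jumps two levels per call
        kept = deque()
        while queues[qnum]:
            p = queues[qnum].popleft()
            if p.waited >= AGING_THRESHOLD[qnum]:
                p.qnum, p.waited = qnum + 1, 0
                queues[p.qnum].append(p)   # migrate to the higher-priority queue
            else:
                kept.append(p)
        queues[qnum] = kept

queues = {q: deque() for q in (1, 2, 3, 4)}
starved = Process(pid=7, arrival=0, burst=5, priority=1, qnum=3)
starved.waited = 9                   # past queue 3's threshold of 8
queues[3].append(starved)
age(queues)                          # process 7 moves up to queue 4
```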

74 Multilevel Feedback Queue: Implementation Details
Once you're done with the implementation, think of the problem from an algorithmic design point of view. The implementation involves many performance measures, such as:
- Average waiting time
- Average turnaround time
- CPU utilization
- Maximum turnaround time
- Maximum waiting time
- CPU throughput

75 Multilevel Feedback Queue: Implementation Details
The eventual goal is to optimize the several performance measures listed earlier. Perform several test runs and write a summary indicating how sensitive some of the performance measures are to some of the above parameters.

76 Multilevel Feedback Queue: Sample Screenshot of Simulation
[Screenshot: Sample Run 1, round robin queue time slot = 2]

77 Multilevel Feedback Queue: Sample tabulated data from simulation

RR Time Slot | Av. Turnaround Time | Av. Waiting Time | CPU Utilization | Throughput
2            | 19.75               | 17               | 66.67 %         | 0.026
3            | 22.67               | 20               | 75.19 %         | 0.023
4            | 43.67               | 41               | 80.00 %         | 0.024
5            | 26.5                | 25               | 83.33 %         | 0.017
6            | 38.5                | 37               | 86.21 %         | 0.017

78 Multilevel Feedback Queue: Sample Graphs (using data from simulation)

79 Multilevel Feedback Queue: Conclusions from the sample simulation
The following emerged as the optimizing parameters for the given process mix:
- An optimal value of the round robin quantum
- The smallest possible context switching time
- The α-update had no effect on the performance measures in this case

80 Lecture Summary
- Introduction to CPU scheduling
- What is CPU scheduling
- Related Concepts of Starvation, Context Switching and Preemption
- Scheduling Algorithms
- Parameters Involved
- Parameter-Performance Relationships
- Some Sample Results

81 Preview of next lecture
The following topics shall be covered in the next lecture:
- Introduction to Process Synchronization and Deadlock Handling
- Synchronization Methods
- Deadlocks
  - What are deadlocks
  - The four necessary conditions
  - Deadlock Handling

