
1 Chapter 5, CPU Scheduling 1

2 5.1 Basic Concepts The goal of multi-programming is to maximize the utilization of the CPU as a system resource by having a process running on it at all times Supporting multi-programming means encoding the ability in the O/S to switch between currently running jobs Switching between jobs can be non-preemptive or preemptive 2

3 Simple, non-preemptive scheduling means that a new process can be scheduled on the CPU only when the current job has begun waiting (for I/O, for example) Non-preemptive means that the O/S will not preempt the currently running job in favor of another one I/O is the classic case of waiting, and it is the scenario that is customarily used to explain scheduling concepts 3

4 The CPU-I/O Burst Cycle A CPU burst refers to the period of time when a given process occupies the CPU before making an I/O request or taking some other action which causes it to wait CPU bursts are of varying length and can be plotted in a distribution by length 4

5 Overall system activity can also be plotted as a distribution of CPU and other activity bursts by processes The distribution of CPU burst lengths tends to be exponential or hyperexponential 5

6 [Figure: distribution of CPU burst durations, illustrating the exponential/hyperexponential shape described above] 6

7 The CPU scheduler = the short term scheduler Under non-preemptive scheduling, when the processor becomes idle, a new process has to be picked from the ready queue and have the CPU allocated to it Note that the ready queue doesn’t have to be FIFO, although that is a simple, initial assumption It does tend to be some sort of linked data structure with a queuing discipline which implements the scheduling algorithm 7

8 Preemptive scheduling Preemptive scheduling is more advanced than non-preemptive scheduling. Preemptive scheduling can take into account factors besides I/O waiting when deciding which job should be given the CPU. A list of scheduling points is given next. It is worthwhile to understand what each of them means. 8

9 Scheduling decisions can be made at these points: 1. A process goes from the run state to the wait state (e.g., I/O wait, wait for a child process to terminate) 2. A process goes from the run state to the ready state (e.g., as the result of an interrupt) 3. A process goes from the wait state to the ready state (e.g., I/O completes) 4. A process terminates 9

10 Scheduling has to occur at points 1 and 4. If scheduling occurs only at those points, it is non-preemptive or cooperative scheduling If scheduling is also done at points 2 and 3, this is preemptive scheduling 10

11 Points 1 and 4 are stated in terms of the job that will give up the CPU. Points 2 and 3 concern a process that becomes available to run and could therefore preempt the currently running process. 11

12 Historically, simple systems existed without timers, just like they existed without mode bits, for example It is possible to write a simple, non-preemptive operating system for multi-programming without multi-tasking Without a timer or other signaling, jobs could only be switched when one was waiting for I/O 12

13 However, recall that much of the discussion in the previous chapters assumed the use of interrupts, timers, etc., to trigger a context switch This implies preemptive scheduling Preemptive schedulers are more difficult to write than non-preemptive schedulers, and they raise complex technical questions 13

14 The problem with preemption comes from data sharing between processes If two concurrent processes share data, preemption of one or the other can lead to inconsistent data, lost updates in the shared data, etc. 14

15 Note that kernel data structures hold state for user processes. The user processes do not directly dictate what the kernel data structures contain, but by definition, the kernel loads the state of >1 user process 15

16 This means that the kernel data structures themselves have the characteristic of data shared between processes As a consequence, in order to be correctly implemented, preemptive scheduling has to prevent inconsistent state in the kernel data structures 16

17 Concurrency is rearing its ugly head again, even though it still hasn’t been thoroughly explained. The point is that it will become apparent that concurrency is a condition that is inherent to a preemptive scheduler. Therefore, a complete explanation of operating systems eventually requires a complete explanation of concurrency issues. 17

18 The idea that the O/S is based on shared data about processes can be explained concretely by considering the movement of PCB’s from one queue to another If an interrupt occurs while one system process is moving a PCB, and the PCB has been removed from one queue, but not yet added to another, this is an error state In other words, the data maintained internally by the O/S is now wrong/broken/incorrect… 18

19 Possible solutions to the problem So the question becomes, can the scheduler be coded so that inconsistent queue state cannot occur? One solution would be to only allow switching on I/O blocks. The idea is that interrupts will be queued rather than handled instantaneously (a queuing mechanism will be needed) 19

20 This means that processes will run to a point where they can be moved to an I/O queue and the next process will not be scheduled until that happens This solves the problem of concurrency in preemptive scheduling in a mindless way This solution basically means backing off to non-preemptive scheduling 20

21 Other solutions to the problem 1. Only allow switching after a system call runs to completion. In other words, make kernel processes uninterruptible. If the code that moves PCB’s around can’t be interrupted, inconsistent state can’t result. This solution also assumes a queuing system for interrupts. 21

22 2. Make certain code segments in the O/S uninterruptible. This is the same idea as the previous one, but with finer granularity. It increases concurrency because interrupts can at least occur in parts of kernel code, not just at the ends of kernel code calls. 22

23 Note that interruptibility of the kernel is related to the problem of real time operating systems If certain code blocks are not interruptible, you are not guaranteed a fixed, maximum response time to any particular system request or interrupt that you generate 23

24 You may have to wait an indeterminate amount of time while the uninterruptible code finishes processing This violates the requirement for a hard real-time system 24

25 Scheduling and the dispatcher The dispatcher = the module called by the short term scheduler which – Switches context – Switches to user mode – Jumps to the location in user code to run Speed is desirable. Dispatch latency refers to time lost in the switching process 25

26 Scheduling criteria There are various algorithms for scheduling There are also various criteria for evaluating them Performance is always a trade-off You can never maximize all of the criteria with one scheduling algorithm 26

27 Criteria CPU utilization. The higher, the better. 40%-90% is realistic Throughput = processes completed / unit time Turnaround time = total time for any single process to complete Waiting time = total time spent waiting in O/S queues Response time = time between submission and first visible sign of response to the request— important in interactive systems 27

28 Depending on the criterion, you may want to: Strive to attain an absolute maximum or minimum (utilization, throughput) Minimize or maximize the average (turnaround, waiting) Minimize or maximize the variance (for time- sharing, minimize the variance, for example) 28

29 5.3 Scheduling Algorithms 5.3.1 First-Come, First-Served (FCFS) 5.3.2 Shortest-Job-First (SJF) 5.3.3 Priority 5.3.4 Round Robin (RR) 5.3.5 Multilevel Queue 5.3.6 Multilevel Feedback Queue 29

30 Reality involves a steady stream of many, many CPU bursts Reality involves balancing a number of different performance criteria or measures Examples of the different scheduling algorithms will be given below based on a very few processes and a limited number of bursts The examples will be illustrated using Gantt charts The scheduling algorithms will be evaluated and compared based on a simple measure of average waiting time 30

31 FCFS Scheduling The name, first-come, first-served, should be self-explanatory This is an older, simpler scheduling algorithm It is non-preemptive It is not suitable for interactive time sharing It can be implemented with a simple FIFO queue of PCB’s 31

32 Consider the following scenario. Process (burst length): P1 (24 ms.), P2 (3 ms.), P3 (3 ms.) 32

33 Avg. wait time = (0 + 24 + 27) / 3 = 17 ms. 33

34 Compare with a different arrival order: P2, P3, P1 34

35 Avg. wait time = (0 + 3 + 6) / 3 = 3 ms. 35
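
To make the arithmetic above concrete, here is a minimal sketch (not part of the original slides) of the FCFS wait-time calculation in Java: under FCFS each job waits for the sum of all bursts that ran before it. The class and method names are made up for illustration; the burst lengths are the ones from the example.

public class FcfsDemo {
    // Each job's waiting time is the sum of the bursts scheduled before it.
    static double averageWait(int[] bursts) {
        int elapsed = 0, totalWait = 0;
        for (int burst : bursts) {
            totalWait += elapsed;   // this job waited for everything already run
            elapsed += burst;       // then holds the CPU for its own burst
        }
        return (double) totalWait / bursts.length;
    }

    public static void main(String[] args) {
        System.out.println(averageWait(new int[] {24, 3, 3}));  // order P1, P2, P3 -> 17.0
        System.out.println(averageWait(new int[] {3, 3, 24}));  // order P2, P3, P1 -> 3.0
    }
}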

36 Additional comments on performance analysis It is clear that average wait time varies greatly depending on the arrival order of processes and their varying burst lengths As a consequence, it is also possible to conclude that for any given set of processes and burst lengths, arbitrary FCFS scheduling does not result in a minimal or optimal average wait time 36

37 FCFS scheduling is subject to the convoy effect There is the initial arrival order of process bursts After that, the processes enter the ready queue after I/O waits, etc. Let there be one CPU bound job (long CPU burst) Let there be many I/O bound jobs (short CPU bursts) 37

38 Scenario: The CPU bound job holds the CPU The other jobs finish their I/O waits and enter the ready queue Each of the other jobs is scheduled, FCFS, and is quickly finished with the CPU due to an I/O request The CPU bound job then takes the CPU again 38

39 CPU utilization may be high (good) under this scheme The CPU bound job is a hog The I/O bound jobs spend a lot of their time waiting Therefore, the average wait time will tend to be high Recall that FCFS is not preemptive, so once the jobs have entered, scheduling only occurs when a job voluntarily enters a wait state due to an I/O request or some other condition 39

40 SJF Scheduling The name, shortest-job-first, is not quite self-explanatory Various ideas involved deserve explanation Recall that these thumbnail examples of scheduling are based on bursts, not the overall job time For scheduling purposes, it is the length of the next burst that is important There is no perfect way of predicting the length of the next burst 40

41 Implementing SJF in reality involves devising formulas for predicting the next burst length based on past performance SJF can be a non-preemptive algorithm. The assumption now is that all processes are available at time 0 for scheduling and the shortest is chosen A more descriptive name for the algorithm is “shortest next CPU burst” scheduling 41

42 SJF can also be implemented as a preemptive algorithm. The assumption is that jobs enter the ready queue at different times. If a job with a shorter burst enters the queue when a job with a longer burst is running, the shorter job preempts the longer one Under the preemptive scenario a more descriptive name for the algorithm would be “shortest remaining time first” scheduling 42

43 Non-preemptive Example Consider the following scenario: Process (burst length): P1 (6 ms.), P2 (8 ms.), P3 (7 ms.), P4 (3 ms.) 43

44 SJF order: P4, P1, P3, P2 average wait time = (0 + 3 + 9 + 16) / 4 = 7 ms. 44

45 SJF average wait time is lower than the average wait time for FCFS scheduling of the same processes: FCFS average wait time = (0 + 6 + 14 + 21) / 4 = 10.25 ms. 45
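
Under the assumption that all four jobs are ready at time 0, non-preemptive SJF is just FCFS applied to the bursts sorted in ascending order. A minimal sketch (illustrative only, reusing the burst lengths from the example above):

import java.util.Arrays;

public class SjfDemo {
    public static void main(String[] args) {
        int[] bursts = {6, 8, 7, 3};   // P1..P4, all assumed ready at time 0
        Arrays.sort(bursts);           // shortest (predicted) burst first: 3, 6, 7, 8
        int elapsed = 0, totalWait = 0;
        for (int burst : bursts) {
            totalWait += elapsed;      // each job waits behind every shorter job
            elapsed += burst;
        }
        System.out.println(totalWait / 4.0);   // 7.0 ms., vs. 10.25 ms. for FCFS order
    }
}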

46 In theory, SJF is optimal for average wait time performance Always doing the shortest burst first minimizes the aggregate wait time for all processes This is only theoretical because burst length can’t be known In a batch system user estimates might be used In an interactive system user estimates make no sense 46

47 Devising a formula for predicting burst time The only basis for such a formula is past performance What follows is the definition of an exponential average function for this purpose Let t_n = actual, observed length of the nth CPU burst for a given process Let T_(n+1) = predicted value of the next burst Let a be given such that 0 <= a <= 1 Then define T_(n+1) as follows: T_(n+1) = a*t_n + (1 – a)*T_n 47

48 Explanation: a is a weighting factor. How important is the most recent actual performance vs. the performance before that? To get an idea of the function it serves, consider a = 0, a = ½, and a = 1 48

49 T_n appears in the formula. It is the previous prediction. It includes real past performance because T_n = a*t_(n-1) + (1 – a)*T_(n-1) Ultimately this expansion depends on the initial predicted value, T_0 Some arbitrary constant can be used, a system average can be used, etc. 49

50 Expanding the formula This illustrates why it is known as an exponential average It gives a better feel for the role of the components in the formula T_(n+1) = a*t_n + (1-a)*(a*t_(n-1) + (1-a)*(… a*t_0 + (1-a)*T_0 )…) = a*t_n + (1-a)*a*t_(n-1) + (1-a)^2*a*t_(n-2) + … + (1-a)^n*a*t_0 + (1-a)^(n+1)*T_0 50

51 The first term is: a*t_n The general term is: (1 – a)^j*a*t_(n-j) The last term is: (1 – a)^(n+1)*T_0 51

52 In words The most recent actual performance, t_n, gets weight a All previous performances, t_i, are multiplied by a and by a factor of (1 – a)^j, where the value of j is determined by how far back in time t_i occurred Since (1 – a) < 1, as you go back in time, the weight of a given term on the current prediction is exponentially reduced 52

53 The following graph illustrates the results of applying the formula with T_0 = 10 and a = ½ With a = ½, the exponential coefficients on the terms of the prediction are ½, (½)^2, (½)^3, … Note that the formula tends to produce a lagging, not a leading indicator In other words, as the actual values shift up or down, the prediction gradually approaches the new reality, whatever it might be 53

54 [Figure: observed CPU burst lengths and the corresponding exponential-average predictions over time, with T_0 = 10 and a = ½] 54
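
A minimal sketch of the prediction formula in Java (not from the slides; the observed burst lengths are made-up example values, with T_0 = 10 and a = ½ as above). Notice how the prediction lags behind the jump in the observed values, illustrating the "lagging indicator" point:

public class BurstPredictor {
    // T_(n+1) = a*t_n + (1 - a)*T_n, the exponential average defined above.
    public static void main(String[] args) {
        double a = 0.5;            // weighting factor
        double prediction = 10.0;  // initial guess T_0
        int[] observed = {6, 4, 6, 4, 13, 13, 13};   // hypothetical burst lengths t_0, t_1, ...
        for (int t : observed) {
            System.out.printf("predicted %.1f, observed %d%n", prediction, t);
            prediction = a * t + (1 - a) * prediction;   // fold the newest observation in
        }
        System.out.printf("next prediction: %.1f%n", prediction);
    }
}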

55 Preemptive SJF If a waiting job enters the ready queue with an estimated burst length shorter than the remaining burst time of the currently running job, then the shorter job preempts the one on the CPU. This can be called “shortest remaining time first” scheduling. Unlike in the previous examples, the arrival time of a process now makes a difference 55

56 Consider the following scenario: Process (arrival time, burst length): P1 (0, 8 ms.), P2 (1, 4 ms.), P3 (2, 9 ms.), P4 (3, 5 ms.) 56

57 Preemptive SJF average wait time = ((0 + 9) + 0 + 15 + 2) / 4 = 6.5 ms. 57

58 Walking through the example P1 arrives at t = 0 and starts P2 arrives at t = 1 – P2’s burst length = 4 – P1’s remaining burst length = 8 – 1 = 7 – P2 preempts P3 arrives at t = 2 – P3’s burst length = 9 – P2’s remaining burst length = 4 – 1 = 3 – P1’s remaining burst length = 7 – No preemption 58

59 P4 arrives at t = 3 – P4’s burst length = 5 – P3’s remaining burst length = 9 – P2’s remaining burst length = 3 – 1 = 2 – P1’s remaining burst length = 7 – No preemption P2 runs to completion at t = 5 P4 is scheduled. It runs to completion at t = 10 P1 is rescheduled. It runs to completion at 17 P3 is scheduled. It runs to completion at 26 59

60 Calculating the wait times for the example P1 has 2 episodes – 1st, enters at t = 0, starts at t = 0, wait time = 0 – 2nd, waits from t = 1 to t = 10, wait time = 10 – 1 = 9 – Total P1 wait time = 0 + 9 P2 has 1 episode – Enters at t = 1, starts at t = 1, wait time = 1 – 1 = 0 P3 has 1 episode – Enters at t = 2, starts at t = 17, wait time = 17 – 2 = 15 P4 has 1 episode – Enters at t = 3, starts at t = 5, wait time = 5 – 3 = 2 Total wait time = 0 + 9 + 0 + 15 + 2 = 26 Average wait time = 26 / 4 = 6.5 60
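
The walkthrough above can be checked with a small millisecond-by-millisecond simulation of shortest-remaining-time-first scheduling. This is an illustrative sketch only (the names and structure are invented here); the arrival times and burst lengths are the ones from the example, and it prints the same 6.5 ms. average wait.

public class SrtfDemo {
    public static void main(String[] args) {
        int[] arrival   = {0, 1, 2, 3};   // P1..P4 arrival times
        int[] remaining = {8, 4, 9, 5};   // remaining burst lengths
        int[] waited    = new int[4];
        int done = 0;
        for (int t = 0; done < remaining.length; t++) {
            int run = -1;
            for (int i = 0; i < remaining.length; i++) {
                // among the jobs that have arrived and are unfinished,
                // pick the one with the shortest remaining time
                if (arrival[i] <= t && remaining[i] > 0
                        && (run == -1 || remaining[i] < remaining[run])) {
                    run = i;
                }
            }
            if (run == -1) continue;      // nothing ready during this millisecond
            for (int i = 0; i < remaining.length; i++) {
                if (i != run && arrival[i] <= t && remaining[i] > 0) {
                    waited[i]++;          // every other ready job accumulates wait time
                }
            }
            if (--remaining[run] == 0) done++;
        }
        int total = 0;
        for (int w : waited) total += w;
        System.out.println(total / 4.0);  // 6.5
    }
}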

61 The same processes under non-preemptive SJF 61

62 P1 wait = 0 – 0 = 0 P2 wait = 8 – 1 = 7 P3 wait = 17 – 2 = 15 P4 wait = 12 – 3 = 9 Total wait time = 0 + 7 + 15 + 9 = 31 Average wait time = 31 / 4 = 7.75 62

63 Priority Scheduling A priority is assigned to each process High priority processes are scheduled before low priority ones Processes of equal priority are handled in FCFS order In the textbook a high priority process is given a low number and a low priority process is given a high number, e.g., 0-7, 0-4095 Note that SJF is a type of priority scheduling where the priority is inversely proportional to the predicted length of the next burst 63

64 Priority Example Consider the following scenario: Process (burst length, priority): P1 (10 ms., 3), P2 (1 ms., 1), P3 (2 ms., 4), P4 (1 ms., 5), P5 (5 ms., 2) 64

65 Average wait time = (0 + 1 + 6 + 16 + 18) / 5 = 41 / 5 = 8.2 ms. 65
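
A minimal sketch (illustrative, not from the slides) of non-preemptive priority scheduling with all five jobs ready at time 0 and "low number = high priority", reproducing the 8.2 ms. figure:

import java.util.Arrays;
import java.util.Comparator;

public class PriorityDemo {
    public static void main(String[] args) {
        // {priority, burst length} pairs for P1..P5; lower number = higher priority
        int[][] jobs = {{3, 10}, {1, 1}, {4, 2}, {5, 1}, {2, 5}};
        Arrays.sort(jobs, Comparator.comparingInt((int[] j) -> j[0]));
        int elapsed = 0, totalWait = 0;
        for (int[] job : jobs) {
            totalWait += elapsed;    // waits behind every higher-priority job
            elapsed += job[1];
        }
        System.out.println(totalWait / (double) jobs.length);   // 8.2
    }
}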

66 Internal priority setting SJF is an example Other criteria that have been used: – Time limits – Memory requirements – (I/O burst) / (CPU burst) 66

67 External priority setting: – Importance of process – Type or amount of funding – Sponsoring department – politics 67

68 Priority scheduling can be either preemptive or non-preemptive Priority scheduling can lead to indefinite blocking = process starvation Low priority jobs may be delayed until low load times Low priority jobs might be lost (in system crashes, e.g.) before they’re finished Solution to starvation: aging. Raise a process’s priority by n units for every m time units it’s been in the system 68
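
A tiny sketch of the aging idea (illustrative only; the function, parameter values, and the job below are hypothetical). Using the "low number = high priority" convention from the priority example, raising a priority means lowering the number:

public class AgingDemo {
    // Raise (numerically lower) a waiting job's priority by n units
    // for every m time units it has spent in the system.
    static int agedPriority(int basePriority, int timeInSystem, int m, int n) {
        int boost = (timeInSystem / m) * n;
        return Math.max(0, basePriority - boost);   // 0 is the highest possible priority
    }

    public static void main(String[] args) {
        // Hypothetical job: base priority 127, waiting 45 time units, m = 15, n = 1
        System.out.println(agedPriority(127, 45, 15, 1));   // 124
    }
}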

69 Round Robin Scheduling This is the time-sharing scheduling algorithm It is FCFS with fixed time-slice preemption The time slice, or time quantum, is in the range of 10 ms.-100 ms. The ready queue is a circularly linked list The scheduler goes around the list allocating 1 quantum per process A process may block (I/O, e.g.) before the quantum is over When an unfinished process leaves the CPU, it is added to the “tail” of the circularly linked list The tail “moves”. It is the point behind the currently scheduled process 69


71 RR scheduling depends on a hardware timer The tradeoff in RR scheduling is fairness in dividing up the CPU as a shared resource vs. long average waiting times for all processes contending for it If this is interactive time-sharing, the waiting for human I/O will far outweigh the waiting time for access to the CPU 71

72 RR Example Consider the following scenario: Let the time slice be 4 ms. Process (burst length): P1 (24 ms.), P2 (3 ms.), P3 (3 ms.) 72

73 Average wait time = ((0 + 6) + 4 + 7) / 3 = 17 / 3 = 5 2/3 ms. 73

74 Wait time for P1 = 0 initially Wait time for P1 = 10 – 4 = 6 when scheduled again Wait time for P2 = 4 Wait time for P3 = 7 74
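
A minimal round robin sketch in Java (illustrative, not from the slides), using an ordinary FIFO queue to stand in for the circular list. With the 4 ms. quantum and the bursts above it reproduces the 17/3 ms. average wait:

import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobinDemo {
    public static void main(String[] args) {
        int quantum = 4;
        int[] burst     = {24, 3, 3};    // P1, P2, P3
        int[] remaining = {24, 3, 3};
        int[] finish    = new int[3];
        Queue<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < burst.length; i++) ready.add(i);

        int clock = 0;
        while (!ready.isEmpty()) {
            int i = ready.remove();
            int slice = Math.min(quantum, remaining[i]);   // a job may finish early
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] > 0) ready.add(i);            // back to the tail of the queue
            else finish[i] = clock;
        }
        // All jobs arrive at time 0, so wait = completion time - own burst time.
        double totalWait = 0;
        for (int i = 0; i < burst.length; i++) totalWait += finish[i] - burst[i];
        System.out.println(totalWait / burst.length);      // 5.666...
    }
}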

75 The performance of round robin depends on the length of the time slice If the length of the slice is > any single process burst, then RR = FCFS If the slice is short, then in theory a machine with n users behaves like n machines, each 1/n-th as fast as the actual machine This is the ideal, which ignores the overhead from switching between jobs 75

76 A simple measure to gauge overhead cost is: (context switch time) / (time slice length) In order for time sharing to be practical, this ratio has to be relatively small The size of the ratio is dependent on hardware speed and O/S code efficiency (speed) Note that even if the ratio is acceptable, the number of users determines 1/n. The actual system speed determines how small you can make a time slice (slices per unit time) and how many users you can practically support at one time 76

77 Round robin scheduling conveniently illustrates other performance parameters besides average waiting time Consider overall average process turnaround time as a function of time slice size Smaller time slices mean more context switching overhead on a percentage basis They also mean longer delays as each process has to wait for multiple slices 77

78 On the other hand, if time slices are long, scheduling can degenerate into FCFS FCFS doesn’t fairly allocate the CPU in a time sharing environment The rule of thumb for system design and tuning is that 80% of all process CPU bursts should finish within 1 time slice Empirically, this shares the CPU while still achieving reasonable performance 78

79 RR time slice size variations Consider the following scenario with a 4 ms. time slice: Process (burst length): P1 (6 ms.), P2 (3 ms.), P3 (1 ms.), P4 (7 ms.) 79

80 Average turnaround time = (14 + 7 + 8 + 17) / 4 = 46 / 4 = 11 ½ 80

81 Average wait time = ((0 + 8) + 4 + 7 + (8 + 2)) / 4 = 29 / 4 = 7 ¼ 81

82 Average waiting time and average turnaround time are closely related measures (turnaround time is just waiting time plus total burst time) Average turnaround time varies as the time slice size varies However, it doesn’t vary in a regular fashion Depending on the relative length of process bursts and time slice size, a larger slice may lead to slower turnaround 82

83 [Figure: average turnaround time plotted as a function of time-slice size for the processes above] 83

84 Keep in mind that all of these examples are thumbnails They are designed to give some idea of what’s going on, but they are not realistic in size In real life design and tuning would be based on an analysis of a statistically significant mass (relatively large) of historical or ongoing data 84

85 Multi-level Queue Scheduling A simple example Let interactive jobs be foreground jobs Let batch jobs be background jobs Let foreground and background be distinguished by keeping the jobs in separate queues where the queues have separate queuing disciplines/scheduling algorithms For example, use RR scheduling for foreground jobs Use FCFS for batch jobs 85

86 The follow-up question becomes, how do you coordinate scheduling between the two queues? One possibility: Fixed priority preemptive scheduling. Batch jobs only run if the interactive queue is empty Another possibility: Time slicing. For example, the interactive queue is given 80% of the time slices and the batch queue is given 20% 86

87 Let different classes of jobs be permanently assigned to different queues Let the queues have priorities relative to each other Let each queue implement its own scheduling algorithm for the processes in it, which are of equal priority 87

88 An Example [Figure: multilevel queue scheduling with separate queues in priority order, e.g., system processes, interactive processes, interactive editing processes, batch processes, student processes] 88

89 The coordination between queues would be similar to the interactive/batch example Fixed priority preemptive scheduling would mean that any time a job entered a queue of a higher priority, any currently running job would have to step aside Lower priority jobs could only run if all higher priority queues were empty You could time slice between the queues, giving a certain percent of CPU time to each one 89

90 Multi-level Feedback Queue Scheduling This introduces the possibility that processes move between queues This may be based on characteristics such as CPU or I/O usage or time spent in system In general, CPU greedy processes can be moved to a lower queue This gives interactive jobs and I/O bound jobs with shorter CPU bursts higher priority It can also handle ageing. If a job is in a lower priority queue too long, it can be moved to a higher one, preventing starvation 90

91 An Example [Figure: three queues: queue 0 is RR with time quantum 8 ms., queue 1 is RR with time quantum 16 ms., queue 2 is FCFS] 91

92 Queuing Discipline 1. The relative priority of the queues is fixed. – Jobs in queue 1 execute only if queue 0 is empty. – Jobs in queue 2 execute only if queue 1 is empty. 2. Every new job enters queue 0. – If its burst is <= 8, it stays there. – Otherwise, it’s moved to queue 1. 3. When a job in queue 1 is scheduled – If it has a burst length > 16, it’s preempted and moved to queue 2. 92

93 4. Jobs can move back up to a different queue if their burst lengths are within the quantum of the higher priority queue. 5. Note that in a sense, this queuing scheme predicts future performance on the basis of the most recent burst length. 93
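
A minimal sketch of the demotion rules just described (illustrative only; it interprets "burst length" as the total length of the burst, and the method name and sample bursts are invented here):

public class MlfqDemo {
    // Queue 0: RR, quantum 8 ms.; queue 1: RR, quantum 16 ms.; queue 2: FCFS.
    // Returns the queue in which a burst of the given total length ends up.
    static int queueAfterBurst(int burstLength) {
        if (burstLength <= 8)  return 0;   // finishes within queue 0's quantum, stays there
        if (burstLength <= 16) return 1;   // demoted once, finishes in queue 1
        return 2;                          // preempted again, moved to the FCFS queue
    }

    public static void main(String[] args) {
        int[] bursts = {5, 12, 40};        // hypothetical burst lengths
        for (int b : bursts) {
            System.out.println(b + " ms. burst ends up in queue " + queueAfterBurst(b));
        }
    }
}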

94 Defining Characteristics of a General Multi- Level Feedback Queue Scheduling System 1. The number of queues. 2. The scheduling algorithm for each queue. 3. The method used to determine when to upgrade a process. 4. The method used to determine when to downgrade a process. 5. The method used to determine which queue a job will enter when it needs service (initially). 94

95 Multi-level feedback queue systems are the most general and the most complex. The example given was simply that, an example. In theory, such a system can be configured to perform well for a particular hardware environment and job mix. In reality, there are no ways of setting the scheduling parameters except for experience, analysis, and trial and error. 95

96 5.4 Multiple Processor Scheduling Load sharing = the possibility of spreading work among >1 processor, assuming you can come up with a scheduling algorithm. Homogeneous systems = each processor is the same. Any process can be assigned to any processor in the system. Even in homogeneous systems, a process may be limited to a certain processor if a needed peripheral is attached to that processor 96

97 Approaches to Multiple Processor Scheduling Asymmetric multi-processing = master-slave architecture. The scheduling code runs on one processor only. Symmetric Multi-Processing (SMP) = each processor is self-scheduling. There is still the question of whether the ready queue is local or global. To maximize concurrency, you need a global ready queue. Maintaining a global ready queue requires cooperation (concurrency control). This is a difficult problem, so most systems maintain local ready queues for each processor. Most modern O/S’s support SMP: Windows, Solaris, Linux, Mac OS X. 97

98 Processor Affinity This term refers to trying to keep the same job on the same processor. Moving jobs between processors is expensive. Everything that might have been cached would be lost unless explicitly recovered. Soft affinity = not guaranteed to stay on the same processor. Hard affinity = guaranteed to stay on the same processor. 98

99 Load Balancing This term refers to trying to keep all processors busy at all times. This is an issue if there are at least as many jobs as there are processors. If a global ready queue is implemented, load balancing would naturally be part of the algorithm. 99

100 If a system only maintains local ready queues and there is no hard affinity, there are two approaches to moving jobs among processors: Push migration = a single system process regularly checks processor utilization and pushes processes from busy processors to idle ones. Pull migration = an idle processor reaches into the ready queue of a busy processor and extracts a process for itself. 100

101 Both kinds of migration can be built into a system (Linux for example). By definition, migration and affinity are in opposition. There is a performance trade-off. Some systems try to gauge imbalance in load and only do migration if the imbalance rises above a certain threshold. 101

102 Symmetric multi-threading = SMT Definition: Provide multiple logical processors rather than multiple physical processors. This is known as hyperthreading on Intel chips. At a hardware level: – Each logical processor has its own architecture state (register values). – Each logical processor receives and handles its own interrupts. – All hardware resources are shared. 102

103 An O/S doesn’t have to be designed specifically for SMT. SMT should be transparent—the machine “looks like” an SMP machine. A system may combine SMT and SMP—i.e., there would be >1 logical processor on each of >1 physical processor. If the O/S is SMT aware, it could be written to avoid this scheduling case: >1 process on the logical processors of 1 physical processor while another physical processor is idle. 103

104 Thread Scheduling This is essentially an expansion on ideas raised in the last chapter. The term “contention scope” refers to the level at which scheduling is occurring. Process Contention Scope (PCS) = the scheduling of threads on lightweight processes. In many-to-one or many-to-many schemes, threads of one or more user processes contend with each other to be scheduled. This is usually priority based, but not necessarily preemptive. 104

105 System contention scope (SCS) = the scheduling of kernel level threads on the actual machine. In a one-to-one mapping scheme, these kernel threads happen to represent user threads belonging to one or more processes. 105

106 Operating System Examples In most previous chapters, the O/S example sections have been skipped because they involve needless detail. Concrete examples will be covered here for two reasons: – To give an idea of how complex real systems are. – To show that if you know the basic principles, you can tease apart the different pieces of an actual implementation. 106

107 Solaris Scheduling Solaris scheduling is based on four priority classes: – Real time – System – Time sharing – Interactive 107

108 Practical points of Solaris scheduling: – High numbers = high priority, range of values: 0-59 – The four different priority classes are implemented in three queues (3 and 4 are together). – The distinction between 3 and 4 is that if a process requires the generation of windows, it is given a higher priority. 108

109 There is an inverse relationship between priority and time slice size. – A small time slice = quick response for high priority (interactive type) jobs. – A large time slice = good throughput for low priority (CPU bound) jobs. 109

110 Solaris Scheduling Queue—Notice that Jobs Don’t Move Between Queues 110

111 Solaris Dispatch Table for Interactive and Time-sharing Threads
Starting Priority | Allocated Time Quantum | New Priority after Quantum Expiration | New Priority after Return from Sleep
0 | 200 | 0 | 50
5 | 200 | 0 | 50
10 | 160 | 0 | 51
15 | 160 | 5 | 51
20 | 120 | 10 | 52
25 | 120 | 15 | 52
30 | 80 | 20 | 53
35 | 80 | 25 | 54
40 | 40 | 30 | 55
45 | 40 | 35 | 56
50 | 40 | 40 | 58
55 | 40 | 45 | 58
59 | 20 | 49 | 59
111

112 Later Versions of Solaris Add these Details Fixed priority threads Fair share priority threads System processes don’t change priorities Real-time processes have the absolute highest priorities Each scheduling class has a set of priorities. These are translated into global priorities and the scheduler uses the global priorities to schedule Among threads of equal priority, the scheduler does RR scheduling 112

113 Windows XP Scheduling XP (kernel) thread scheduling is priority based preemptive This supports soft real-time applications There are 32 priorities, 0-31 A high number = a high priority There is a separate queue for each priority Priority 0 is used for memory management and will not come up further 113

114 There is a relationship between priorities in the dispatcher and classes of jobs defined in the Win32 API There are 6 API classes divided into 2 groups according to the priorities they have 114

115 Class: Real time. Priorities: 16-31 Variable (priority) classes: – High priority – Above normal priority – Normal priority – Below normal priority – Idle priority The priorities of these classes can vary from 1-15 115

116 Within each class there are 7 additional subdivisions: – Time critical – Highest – Above normal – Normal – Below normal – Lowest – Idle 116

117 Each thread has a base priority This corresponds to the relative priority it’s given within its class The default base value would be the “normal” relative priority for the class The distribution of values among classes and relative priorities is shown in the following table 117

118 Columns = Priority Classes, Rows = Relative Priorities within Classes The ‘Normal’ row contains the base priorities for the classes.
Relative priority | Real-time | High | Above normal | Normal | Below normal | Idle priority
Time-critical | 31 | 15 | 15 | 15 | 15 | 15
Highest | 26 | 15 | 12 | 10 | 8 | 6
Above normal | 25 | 14 | 11 | 9 | 7 | 5
Normal | 24 | 13 | 10 | 8 | 6 | 4
Below normal | 23 | 12 | 9 | 7 | 5 | 3
Lowest | 22 | 11 | 8 | 6 | 4 | 2
Idle | 16 | 1 | 1 | 1 | 1 | 1
118

119 The scheduling algorithm dynamically changes a thread’s priority if it’s in the variable group If a thread’s time quantum expires, its priority is lowered, but not below its base priority When a thread is released from waiting, its priority is raised. How much it’s raised depends on what it was waiting for. For example: – Waiting for keyboard I/O—large raise – Waiting for disk I/O—smaller raise 119

120 For an interactive process, if the user thread is given a raise, the windowing process it’s running in is also given a raise These policies favor interactive and I/O bound jobs and attempt to control threads that are CPU hogs XP has another feature that aids windowing performance If several process windows are on the screen and one is brought to the foreground, its time quantum is increased by a factor such as 3 so that it can get something done before being preempted. 120

121 Linux Scheduling Skip this Two concrete examples are enough 121

122 Java Scheduling The JVM scheduling specification isn’t detailed Thread scheduling is supposed to be priority based It does not have to be preemptive Round-robin scheduling is not required, but a given implementation may have it 122

123 If a JVM implementation doesn’t have time- slicing or preemption, the programmer can try to devise cooperative multi-tasking in application code The relevant Java API method call is Thread.yield(); This can be called in the run() method of a thread at the point where it is willing to give up the CPU to another thread 123

124 Java Thread class name priorities: – Thread.MIN_PRIORITY (value = 1) – Thread.MAX_PRIORITY (value = 10) – Thread.NORM_PRIORITY (value = 5) A new thread is given the priority of the thread that created it The default priority is NORM The system doesn’t change a thread’s priority 124

125 The programmer can assign a thread a priority value in the range 1-10 The relevant Java API method call is: Thread.currentThread().setPriority(value); This is done in the thread’s run() method This specification isn’t foolproof though Java thread priorities have to be mapped to O/S kernel thread priorities If the difference between Java priorities isn’t great enough, they may be mapped to the same priority in the implementing system The author gives Windows NT as an example where this can happen 125
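
A small, self-contained example of the calls mentioned above (illustrative; how much the priorities and yield() actually affect the interleaving depends on the JVM and the underlying O/S, exactly as the slides warn):

public class PriorityThreads {
    public static void main(String[] args) {
        Runnable politeTask = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
                Thread.yield();   // offer the CPU to some other runnable thread
            }
        };
        Thread low  = new Thread(politeTask, "low");
        Thread high = new Thread(politeTask, "high");
        low.setPriority(Thread.MIN_PRIORITY);    // 1
        high.setPriority(Thread.MAX_PRIORITY);   // 10
        low.start();
        high.start();
    }
}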

126 Algorithm Evaluation In general, algorithm selection is based on multiple criteria. For example: Maximize CPU utilization under the constraint that maximum response time is <1 second Maximize throughput such that turnaround time, on average, is linearly proportional to total execution time 126

127 Deterministic Modeling This is a form of analytic evaluation For a given set of jobs, with all parameters known, you can determine performance under various scheduling scenarios This is OK for developing examples and exploring possibilities It’s not generally a practical way to pick a scheduling algorithm for a real system with an unknown mix of jobs The thumbnail analyses with Gantt charts are a simplified example of deterministic modeling 127

128 Queuing Models These are basically statistical models where the statistical assumptions are based on past observation of real systems The first distribution of interest Arrival of jobs into the system Typically Poisson 128

129 All of the rest of the distributions tend to be exponential – CPU burst occurrence distribution – CPU burst length distribution – I/O burst occurrence distribution – I/O wait length distribution 129

130 Given these distributions it is possible to calculate: Throughput CPU utilization Waiting time 130

131 A Simple Example of an Analysis Formula Let the following parameters be given: N = number of processes in a queue L = arrival rate of processes W = average waiting time in queue Then N = L * W This is known as Little’s formula Given any two of the parameters, the third can be calculated Note that the formula applies when the system is in a steady state—the number of processes entering the queue = the number of processes leaving the queue Increase or decrease in the queue length occurs when the system is not in steady state 131
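
A short worked example (with made-up numbers): if processes arrive at an average rate of L = 7 per second and each spends an average of W = 2 seconds in the queue, then in steady state the queue holds N = L * W = 14 processes on average; equivalently, observing N = 14 and L = 7 lets you solve for W = 2 seconds.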

132 Queuing models are not perfect They are limited by the match or mismatch between the chosen distributions and actual behavior They are simplifications because they aggregate behavior and may overlook some factors They rely on mathematical assumptions (they treat arrival, service, and waiting as mathematically independent distributions, when for each process/burst these successive events are related) They are useful for getting ideas, but they do not perfectly match reality 132

133 Simulation Modeling Basic elements of a simulation: – A clock (discrete event simulation) – Data structures modeling state – Modules which model activity which changes state 133

134 Simulation input: Random number generation based on statistical distributions for processes (again, mathematical simplification) Trace tapes. These are records of events in actual runs. They provide an excellent basis for comparing two algorithms 134

135 Simulation models can generate statistics on all aspects of performance under the simulated workload Obtaining suitable input data and coding the simulation are not trivial tasks. Coding an O/S and living with the implementation choices it embodies are also not trivial Making the model may be worth the cost if it aids in developing the O/S 135

136 Implementation Implementation is the gold standard of algorithm evaluation and system testing You code an algorithm, install it in the O/S, and test it under real conditions Problems: – Coding cost – Installation cost – User reactions to modifications 136

137 User Reactions Changes in O/S code result from perceived shortcomings in performance Performance depends on the algorithm, the mix of jobs, and the behavior of jobs If the algorithm is changed, users will change their code and behavior to adapt to the altered O/S runtime environment This can cause the same performance problem to recur, or a new problem to occur 137

138 Examples: If job priority is gauged by size (smaller jobs given higher priority), programmers may break their applications into separate processes If job priority is gauged by frequency of I/O (I/O bound processes are given higher priority), programmers may introduce (needless) I/O into their applications 138

139 Without resorting to subterfuge, a Java application programmer has some control over behavior in a single application by threading and using calls like yield() and setPriority() A large scale O/S will be tunable. The system administrator will be able to set scheduling algorithms and their parameters to meet the job mix at any given time 139

140 The End 140

