5.1 Basic Concepts
The goal of multi-programming is to maximize the utilization of the CPU as a system resource by having a process running on it at all times. Supporting multi-programming means encoding the ability in the O/S to switch between currently running jobs. Switching between jobs can be non-preemptive or preemptive.
Simple, non-preemptive scheduling means that a new process can be scheduled on the CPU only when the current job has begun waiting, for I/O, for example. Non-preemptive means that the O/S will not preempt the currently running job in favor of another one. I/O is the classic case of waiting, and it is the scenario that is customarily used to explain scheduling concepts.
The CPU-I/O Burst Cycle
A CPU burst refers to the period of time when a given process occupies the CPU before making an I/O request or taking some other action which causes it to wait. CPU bursts are of varying length and can be plotted in a distribution by length.
Overall system activity can also be plotted as a distribution of CPU and other activity bursts by processes. The distribution of CPU burst lengths tends to be exponential or hyperexponential.
The CPU scheduler = the short-term scheduler
Under non-preemptive scheduling, when the processor becomes idle, a new process has to be picked from the ready queue and have the CPU allocated to it. Note that the ready queue doesn't have to be FIFO, although that is a simple, initial assumption. It does tend to be some sort of linked data structure with a queuing discipline which implements the scheduling algorithm.
Preemptive scheduling
Preemptive scheduling is more advanced than non-preemptive scheduling. Preemptive scheduling can take into account factors besides I/O waiting when deciding which job should be given the CPU. A list of scheduling points is given next. It is worthwhile to understand what each of them means.
Scheduling decisions can be made at these points:
1. A process goes from the run state to the wait state (e.g., I/O wait, wait for a child process to terminate)
2. A process goes from the run state to the ready state (e.g., as the result of an interrupt)
3. A process goes from the wait state to the ready state (e.g., I/O completes)
4. A process terminates
Scheduling has to occur at points 1 and 4. If it only occurs then, this is non-preemptive or cooperative scheduling. If scheduling is also done at points 2 and 3, this is preemptive scheduling.
Points 1 and 4 are given in terms of the job that will give up the CPU. Points 2 and 3 relate to which process might become available to run and thus preempt the currently running process.
Historically, simple systems existed without timers, just like they existed without mode bits, for example. It is possible to write a simple, non-preemptive operating system for multi-programming without multi-tasking. Without a timer or other signaling, jobs could only be switched when one was waiting for I/O.
However, recall that much of the discussion in the previous chapters assumed the use of interrupts, timers, etc., to trigger a context switch. This implies preemptive scheduling. Preemptive schedulers are more difficult to write than non-preemptive schedulers, and they raise complex technical questions.
The problem with preemption comes from data sharing between processes
If two concurrent processes share data, preemption of one or the other can lead to inconsistent data, lost updates in the shared data, etc.
Note that kernel data structures hold state for user processes. The user processes do not directly dictate what the kernel data structures contain, but by definition, the kernel loads the state of more than one user process.
This means that the kernel data structures themselves have the characteristic of data shared between processes. As a consequence, in order to be correctly implemented, preemptive scheduling has to prevent inconsistent state in the kernel data structures.
Concurrency is rearing its ugly head again, even though it still hasn't been thoroughly explained. The point is that it will become apparent that concurrency is a condition that is inherent to a preemptive scheduler. Therefore, a complete explanation of operating systems eventually requires a complete explanation of concurrency issues.
The idea that the O/S is based on shared data about processes can be explained concretely by considering the movement of PCB's from one queue to another. If an interrupt occurs while one system process is moving a PCB, and the PCB has been removed from one queue but not yet added to another, this is an error state. In other words, the data maintained internally by the O/S is now wrong/broken/incorrect.
Possible solutions to the problem
So the question becomes, can the scheduler be coded so that inconsistent queue state couldn't occur? One solution would be to only allow switching on I/O blocks. The idea is that interrupts will be queued rather than instantaneous (a queuing mechanism will be needed).
This means that processes will run to a point where they can be moved to an I/O queue, and the next process will not be scheduled until that happens. This solves the problem of concurrency in preemptive scheduling in a mindless way. This solution basically means backing off to non-preemptive scheduling.
Other solutions to the problem
1. Only allow switching after a system call runs to completion. In other words, make kernel processes uninterruptible. If the code that moves PCB's around can't be interrupted, inconsistent state can't result. This solution also assumes a queuing system for interrupts.
2. Make certain code segments in the O/S uninterruptible. This is the same idea as the previous one, but with finer granularity. It increases concurrency because interrupts can at least occur in parts of kernel code, not just at the ends of kernel code calls.
Note that interruptibility of the kernel is related to the problem of real-time operating systems. If certain code blocks are not interruptible, you are not guaranteed a fixed, maximum response time to any particular system request or interrupt that you generate.
You may have to wait an indeterminate amount of time while the uninterruptible code finishes processing. This violates the requirement for a hard real-time system.
Scheduling and the dispatcher
The dispatcher = the module called by the short-term scheduler which:
Switches context
Switches to user mode
Jumps to the location in user code to run
Speed is desirable. Dispatch latency refers to time lost in the switching process.
Scheduling criteria
There are various algorithms for scheduling. There are also various criteria for evaluating them. Performance is always a trade-off: you can never maximize all of the criteria with one scheduling algorithm.
Criteria
CPU utilization. The higher, the better. 40%-90% is realistic.
Throughput = processes completed / unit time
Turnaround time = total time for any single process to complete
Waiting time = total time spent waiting in O/S queues
Response time = time between submission and first visible sign of response to the request. This is important in interactive systems.
Depending on the criterion, you may want to:
Strive to attain an absolute maximum or minimum (utilization, throughput)
Minimize or maximize the average (turnaround, waiting)
Minimize or maximize the variance (for time-sharing, minimize the variance, for example)
Reality involves a steady stream of many, many CPU bursts and a balance among a number of different performance criteria or measures. Examples of the different scheduling algorithms will be given below based on a very few processes and a limited number of bursts. The examples will be illustrated using Gantt charts. The scheduling algorithms will be evaluated and compared based on a simple measure of average waiting time.
FCFS Scheduling
The name, first-come, first-served, should be self-explanatory. This is an older, simpler scheduling algorithm. It is non-preemptive. It is not suitable for interactive time sharing. It can be implemented with a simple FIFO queue of PCB's.
Consider the following scenario:
Process    Burst length
P1         24 ms.
P2         3 ms.
P3         3 ms.
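Assuming the processes arrive in the order listed, the Gantt chart is P1 from 0 to 24, P2 from 24 to 27, and P3 from 27 to 30, so the average wait time = (0 + 24 + 27) / 3 = 17 ms. If the arrival order were P2, P3, P1 instead, the average wait time would be only (0 + 3 + 6) / 3 = 3 ms.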
Additional comments on performance analysis
It is clear that average wait time varies greatly depending on the arrival order of processes and their varying burst lengths. As a consequence, it is also possible to conclude that for any given set of processes and burst lengths, arbitrary FCFS scheduling does not result in a minimal or optimal average wait time.
FCFS scheduling is subject to the convoy effect
There is the initial arrival order of process bursts. After that, the processes enter the ready queue after I/O waits, etc. Let there be one CPU bound job (long CPU burst). Let there be many I/O bound jobs (short CPU bursts).
Scenario:
The CPU bound job holds the CPU. The other jobs finish their I/O waits and enter the ready queue. Each of the other jobs is scheduled, FCFS, and is quickly finished with the CPU due to an I/O request. The CPU bound job then takes the CPU again.
CPU utilization may be high (good) under this scheme. The CPU bound job is a hog. The I/O bound jobs spend a lot of their time waiting. Therefore, the average wait time will tend to be high. Recall that FCFS is not preemptive, so once the jobs have entered, scheduling only occurs when a job voluntarily enters a wait state due to an I/O request or some other condition.
SJF Scheduling
The name, shortest-job-first, is not quite self-explanatory. Various ideas involved deserve explanation. Recall that these thumbnail examples of scheduling are based on bursts, not the overall job time. For scheduling purposes, it is the length of the next burst that is important. There is no perfect way of predicting the length of the next burst.
Implementing SJF in reality involves devising formulas for predicting the next burst length based on past performance. SJF can be a non-preemptive algorithm. The assumption now is that all processes are available at time 0 for scheduling and the shortest is chosen. A more descriptive name for the algorithm is "shortest next CPU burst" scheduling.
SJF can also be implemented as a preemptive algorithm
The assumption is that jobs enter the ready queue at different times. If a job with a shorter burst enters the queue when a job with a longer remaining burst is running, the shorter job preempts the longer one. Under the preemptive scenario a more descriptive name for the algorithm would be "shortest remaining time first" scheduling.
Non-preemptive Example
Consider the following scenario:
Process    Burst length
P1         6 ms.
P2         8 ms.
P3         7 ms.
P4         3 ms.
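Scheduling the shortest burst first gives the order P4, P1, P3, P2 (P4 from 0 to 3, P1 from 3 to 9, P3 from 9 to 16, P2 from 16 to 24), so the SJF average wait time = (3 + 16 + 9 + 0) / 4 = 7 ms.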
SJF average wait time is lower than the average wait time for FCFS scheduling of the same processes: FCFS average wait time = (0 + 6 + 14 + 21) / 4 = 10.25 ms.
In theory, SJF is optimal for average wait time performance. Always doing the shortest burst first minimizes the aggregate wait time for all processes. This is only theoretical because burst length can't be known. In a batch system user estimates might be used. In an interactive system user estimates make no sense.
Devising a formula for predicting burst time
The only basis for such a formula is past performance. What follows is the definition of an exponential average function for this purpose.
Let tn = actual, observed length of the nth CPU burst for a given process
Let Tn+1 = predicted value of the next burst
Let a be given such that 0 <= a < 1
Then define Tn+1 as follows:
Tn+1 = a*tn + (1 - a)*Tn
Explanation:
a is a weighting factor: how important is the most recent actual performance vs. the performance before that? To get an idea of the function it serves, consider a = 0, a = ½, and a = 1. With a = 0, Tn+1 = Tn and recent history has no effect; with a = 1, Tn+1 = tn and only the last burst counts; a = ½ weights them equally.
Tn appears in the formula. It is the previous prediction. It includes real past performance because
Tn = a*tn-1 + (1 - a)*Tn-1
Ultimately this expansion depends on the initial predicted value, T0. Some arbitrary constant can be used, a system average can be used, etc.
Expanding the formula
This illustrates why it is known as an exponential average, and it gives a better feel for the role of the components in the formula:
Tn+1 = a*tn + (1 - a)(a*tn-1 + (1 - a)(... a*t0 + (1 - a)T0)...)
     = a*tn + (1 - a)*a*tn-1 + (1 - a)^2*a*tn-2 + ... + (1 - a)^n*a*t0 + (1 - a)^(n+1)*T0
The first term is a*tn. The general term is (1 - a)^j*a*tn-j. The last term is (1 - a)^(n+1)*T0.
In words
The most recent actual performance, tn, gets weight a. Each earlier performance, tn-j, is multiplied by a and by a factor of (1 - a)^j, where the value of j is determined by how far back in time that burst occurred. Since (1 - a) < 1, as you go back in time, the weight of a given term on the current prediction is exponentially reduced.
The following graph illustrates the results of applying the formula with T0 = 10 and a = ½. With a = ½, the exponential coefficients on the terms of the prediction are ½, (½)^2, (½)^3, ... Note that the formula tends to produce a lagging, not a leading, indicator. In other words, as the actual values shift up or down, the prediction gradually approaches the new reality, whatever it might be.
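Here is a minimal sketch in Java of how such a predictor might be coded. The class and variable names are made up for illustration; the sample burst sequence and the values a = ½ and T0 = 10 are chosen to mirror the graph just described.

    public class BurstPredictor {
        private final double a;        // weighting factor, 0 <= a < 1
        private double prediction;     // Tn, the current predicted burst length

        public BurstPredictor(double a, double initialGuess) {
            this.a = a;
            this.prediction = initialGuess;   // T0: an arbitrary constant or a system average
        }

        // Fold one observed burst tn into the running average: Tn+1 = a*tn + (1 - a)*Tn
        public double update(double observedBurst) {
            prediction = a * observedBurst + (1 - a) * prediction;
            return prediction;
        }

        public static void main(String[] args) {
            BurstPredictor p = new BurstPredictor(0.5, 10.0);   // a = 1/2, T0 = 10
            double[] bursts = {6, 4, 6, 4, 13, 13, 13};         // sample observed burst lengths
            for (double t : bursts) {
                System.out.println("observed " + t + " -> next prediction " + p.update(t));
            }
        }
    }

Running it shows the lag: the predictions go 8, 6, 6, 5, 9, 11, 12, gradually chasing the jump from short bursts to 13 ms. bursts.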
Preemptive SJF
If a waiting job enters the ready queue with an estimated burst length shorter than the time remaining of the burst length of the currently running job, then the shorter job preempts the one on the CPU. This can be called "shortest remaining time first" scheduling. Unlike in the previous examples, the arrival time of a process now makes a difference.
Consider the following scenario:
Process    Arrival time    Burst length
P1         0 ms.           8 ms.
P2         1 ms.           4 ms.
P3         2 ms.           9 ms.
P4         3 ms.           5 ms.
Preemptive SJF average wait time = ((0 + 9) + 0 + 15 + 2) / 4 = 26 / 4 = 6.5 ms.
Walking through the example
P1 arrives at t = 0 and starts.
P2 arrives at t = 1. P2's burst length = 4. P1's remaining burst length = 8 - 1 = 7. P2 preempts.
P3 arrives at t = 2. P3's burst length = 9. P2's remaining burst length = 4 - 1 = 3. P1's remaining burst length = 7. No preemption.
P4 arrives at t = 3. P4's burst length = 5. P3's remaining burst length = 9. P2's remaining burst length = 3 - 1 = 2. P1's remaining burst length = 7. No preemption.
P2 runs to completion at t = 5. P4 is scheduled; it runs to completion at t = 10. P1 is rescheduled; it runs to completion at t = 17. P3 is scheduled; it runs to completion at t = 26.
Calculating the wait times for the example
P1 has 2 episodes. 1st: enters at t = 0, starts at t = 0, wait time = 0. 2nd: waits from t = 1 to t = 10, wait time = 10 - 1 = 9. Total P1 wait time = 0 + 9 = 9.
P2 has 1 episode. Enters at t = 1, starts at t = 1, wait time = 1 - 1 = 0.
P3 has 1 episode. Enters at t = 2, starts at t = 17, wait time = 17 - 2 = 15.
P4 has 1 episode. Enters at t = 3, starts at t = 5, wait time = 5 - 3 = 2.
Total wait time = 9 + 0 + 15 + 2 = 26. Average wait time = 26 / 4 = 6.5 ms.
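To make the mechanics concrete, here is a minimal shortest-remaining-time-first simulation sketch in Java using the four-process scenario above. The class and variable names are made up for illustration, ties are broken by process number, and it reproduces the wait times just calculated.

    import java.util.Arrays;

    public class SRTF {
        public static void main(String[] args) {
            // P1..P4 from the scenario above: arrival times 0,1,2,3; bursts 8,4,9,5.
            int[] arrival   = {0, 1, 2, 3};
            int[] remaining = {8, 4, 9, 5};
            int[] wait = new int[4];
            int done = 0;
            for (int t = 0; done < 4; t++) {
                int run = -1;
                // Pick the arrived, unfinished process with the shortest remaining time.
                for (int i = 0; i < 4; i++) {
                    if (arrival[i] <= t && remaining[i] > 0
                            && (run < 0 || remaining[i] < remaining[run])) {
                        run = i;
                    }
                }
                // Every other arrived, unfinished process waits this millisecond.
                for (int i = 0; i < 4; i++) {
                    if (i != run && arrival[i] <= t && remaining[i] > 0) wait[i]++;
                }
                if (--remaining[run] == 0) done++;
            }
            System.out.println(Arrays.toString(wait));                    // prints [9, 0, 15, 2]
            System.out.println(Arrays.stream(wait).sum() / 4.0 + " ms."); // prints 6.5 ms.
        }
    }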
Priority Scheduling
A priority is assigned to each process. High priority processes are scheduled before low priority ones. Processes of equal priority are handled in FCFS order. In the textbook a high priority process is given a low number and a low priority process is given a high number, e.g., 0-7. Note that SJF is a type of priority scheduling where the priority is inversely proportional to the predicted length of the next burst.
Priority Example
Consider the following scenario:
Process    Burst length    Priority
P1         10 ms.          3
P2         1 ms.           1
P3         2 ms.           4
P4         1 ms.           5
P5         5 ms.           2
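With these values, scheduling by priority gives the order P2, P5, P1, P3, P4 (P2 from 0 to 1, P5 from 1 to 6, P1 from 6 to 16, P3 from 16 to 18, P4 from 18 to 19), so the average wait time = (6 + 0 + 16 + 18 + 1) / 5 = 41 / 5 = 8.2 ms.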
Internal priority setting
SJF is an example. Other criteria that have been used:
Time limits
Memory requirements
(I/O burst) / (CPU burst)
External priority setting:
Importance of process
Type or amount of funding
Sponsoring department
Politics
Priority scheduling can be either preemptive or non-preemptive. Priority scheduling can lead to indefinite blocking = process starvation. Low priority jobs may be delayed until low load times. Low priority jobs might be lost (in system crashes, e.g.) before they're finished. Solution to starvation: aging. Raise a process's priority by n units for every m time units it's been in the system.
Round Robin Scheduling
This is the time-sharing scheduling algorithm. It is FCFS with fixed time-slice preemption. The time slice, or time quantum, is in the range of 10 ms.-100 ms. The ready queue is a circularly linked list. The scheduler goes around the list allocating 1 quantum per process. A process may block (on I/O, e.g.) before the quantum is over. When an unfinished process leaves the CPU, it is added to the "tail" of the circularly linked list. The tail "moves": it is the point behind the currently scheduled process.
RR scheduling depends on a hardware timer. The tradeoff in RR scheduling is fairness in dividing up the CPU as a shared resource vs. long average waiting times for all processes contending for it. If this is interactive time-sharing, the waiting for human I/O will far outweigh the waiting time for access to the CPU.
RR Example
Consider the following scenario. Let the time slice be 4 ms.
Process    Burst length
P1         24 ms.
P2         3 ms.
P3         3 ms.
Wait time for P1 = 0 initially and 10 - 4 = 6 when scheduled again, for a total of 6.
Wait time for P2 = 4.
Wait time for P3 = 7.
Average wait time = (6 + 4 + 7) / 3 = 17 / 3, approximately 5.67 ms.
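The same numbers can be reproduced with a minimal round robin sketch in Java. The class and variable names are made up for illustration, and all three jobs are assumed to be in the ready queue at time 0 and to never block on I/O.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    public class RoundRobin {
        public static void main(String[] args) {
            final int quantum = 4;                        // 4 ms. time slice
            int[] burst = {24, 3, 3};                     // P1, P2, P3
            int[] remaining = burst.clone();
            int[] finish = new int[3];
            Deque<Integer> ready = new ArrayDeque<>(List.of(0, 1, 2));
            int clock = 0;
            while (!ready.isEmpty()) {
                int p = ready.removeFirst();
                int slice = Math.min(quantum, remaining[p]);
                clock += slice;
                remaining[p] -= slice;
                if (remaining[p] > 0) ready.addLast(p);   // unfinished: back to the tail
                else finish[p] = clock;
            }
            // With no I/O waits, wait time = finish time - burst length.
            for (int i = 0; i < 3; i++) {
                System.out.println("P" + (i + 1) + " wait = " + (finish[i] - burst[i]) + " ms.");
            }
        }
    }

It prints wait times of 6, 4, and 7 ms., matching the hand calculation.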
The performance of round robin depends on the length of the time slice
If the length of the slice is greater than every single process burst, then RR = FCFS. If the slice is short, then in theory a machine with n users behaves like n machines, each 1/nth as fast as the actual machine. This is the ideal, which ignores the overhead from switching between jobs.
A simple measure to gauge overhead cost is: (context switch time) / (time slice length)
In order for time sharing to be practical, this ratio has to be relatively small. The size of the ratio is dependent on hardware speed and O/S code efficiency (speed). Note that even if the ratio is acceptable, the number of users determines 1/n. The actual system speed determines how small you can make a time slice (slices per unit time) and how many users you can practically support at one time.
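For example, with made-up but plausible numbers: if a context switch costs 0.1 ms. and the time slice is 10 ms., the overhead ratio is 0.1 / 10 = 1%. Shrinking the slice to 1 ms. with the same switch cost raises the overhead to 10%, which would likely be unacceptable.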
Round robin scheduling conveniently illustrates other performance parameters besides average waiting time. Consider overall average process turnaround time as a function of time slice size. Smaller time slices mean more context switching overhead on a percentage basis. They also mean longer delays as each process has to wait for multiple slices.
On the other hand, if time slices are long, scheduling can degenerate into FCFS. FCFS doesn't fairly allocate the CPU in a time sharing environment. The rule of thumb for system design and tuning is that 80% of all process CPU bursts should finish within 1 time slice. Empirically, this shares the CPU while still achieving reasonable performance.
RR time slice size variations
Consider the following scenario:
Process    Burst length
P1         6 ms.
P2         3 ms.
P3         1 ms.
P4         7 ms.
With a time quantum of 4 ms., average turnaround time = (14 + 7 + 8 + 17) / 4 = 46 / 4 = 11.5 ms.
Average waiting time and average turnaround time ultimately measure the same thing. Average turnaround time varies as the time slice size varies. However, it doesn't vary in a regular fashion. Depending on the relative length of process bursts and time slice size, a larger slice may lead to slower turnaround.
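For example, with the four processes above (all arriving at time 0), a quantum of 3 ms. gives turnaround times of 13, 6, 7, and 17 ms., for an average of 43 / 4 = 10.75 ms., which beats the 11.5 ms. obtained with the larger 4 ms. quantum; a quantum of 6 ms. does better still at 10.5 ms.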
Keep in mind that all of these examples are thumbnails. They are designed to give some idea of what's going on, but they are not realistic in size. In real life, design and tuning would be based on an analysis of a statistically significant mass (relatively large) of historical or ongoing data.
Multi-level Queue Scheduling
A simple example:
Let interactive jobs be foreground jobs. Let batch jobs be background jobs. Let foreground and background be distinguished by keeping the jobs in separate queues, where the queues have separate queuing disciplines/scheduling algorithms. For example, use RR scheduling for foreground jobs. Use FCFS for batch jobs.
The follow-up question becomes, how do you coordinate scheduling between the two queues?
One possibility: fixed priority preemptive scheduling. Batch jobs only run if the interactive queue is empty.
Another possibility: time slicing. For example, the interactive queue is given 80% of the time slices and the batch queue is given 20%.
Let different classes of jobs be permanently assigned to different queues. Let the queues have priorities relative to each other. Let each queue implement its own scheduling algorithm for the processes in it, which are of equal priority.
The coordination between queues would be similar to the interactive/batch example. Fixed priority preemptive scheduling would mean that any time a job entered a queue of a higher priority, any currently running job would have to step aside. Lower priority jobs could only run if all higher priority queues were empty. You could also time slice between the queues, giving a certain percent of CPU time to each one.
Multi-level Feedback Queue Scheduling
This introduces the possibility that processes move between queues. This may be based on characteristics such as CPU or I/O usage or time spent in the system. In general, CPU greedy processes can be moved to a lower queue. This gives interactive jobs and I/O bound jobs with shorter CPU bursts higher priority. It can also handle aging: if a job is in a lower priority queue too long, it can be moved to a higher one, preventing starvation.
Queuing Discipline
1. The relative priority of the queues is fixed. Jobs in queue 1 execute only if queue 0 is empty. Jobs in queue 2 execute only if queue 1 is empty.
2. Every new job enters queue 0. If its burst is <= 8 ms. (one queue 0 quantum), it stays there. Otherwise, it's moved to queue 1.
3. When a job in queue 1 is scheduled, if it has a burst length > 16 ms. (one queue 1 quantum), it's preempted and moved to queue 2.
4. Jobs can move back up to a higher queue if their burst lengths fall within the quantum of the higher priority queue.
5. Note that in a sense, this queuing scheme predicts future performance on the basis of the most recent burst length.
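Here is a minimal sketch in Java of the demotion side of this discipline (queue 0 quantum of 8 ms., queue 1 quantum of 16 ms., queue 2 FCFS). The job names and burst lengths are made up for illustration, and rule 4's promotion is omitted to keep the sketch short.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    public class MLFQ {
        // Queue 0: RR, quantum 8. Queue 1: RR, quantum 16. Queue 2: FCFS (unbounded quantum).
        static final int[] QUANTUM = {8, 16, Integer.MAX_VALUE};

        record Job(String name, int remaining) {}

        public static void main(String[] args) {
            List<Deque<Job>> queues =
                    List.of(new ArrayDeque<>(), new ArrayDeque<>(), new ArrayDeque<>());
            queues.get(0).add(new Job("A", 5));   // short burst: finishes in queue 0
            queues.get(0).add(new Job("B", 30));  // long burst: demoted to queue 1, then queue 2

            while (queues.stream().anyMatch(q -> !q.isEmpty())) {
                int level = 0;                    // fixed priority: highest non-empty queue runs
                while (queues.get(level).isEmpty()) level++;
                Job j = queues.get(level).removeFirst();
                int ran = Math.min(QUANTUM[level], j.remaining());
                int left = j.remaining() - ran;
                System.out.println(j.name() + " ran " + ran + " ms. in queue " + level);
                if (left > 0) {
                    // Didn't finish within the quantum: demote one level (bottom queue is FCFS).
                    queues.get(Math.min(level + 1, 2)).addLast(new Job(j.name(), left));
                }
            }
        }
    }

Job A finishes within its first quantum and stays in queue 0; job B uses up its quantum at each level and sinks to the FCFS queue, which is the intended treatment of a CPU hog.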
Defining Characteristics of a General Multi-Level Feedback Queue Scheduling System
1. The number of queues.
2. The scheduling algorithm for each queue.
3. The method used to determine when to upgrade a process.
4. The method used to determine when to downgrade a process.
5. The method used to determine which queue a job will enter when it needs service (initially).
Multi-level feedback queue systems are the most general and the most complex. The example given was simply that, an example. In theory, such a system can be configured to perform well for a particular hardware environment and job mix. In reality, there is no way of setting the scheduling parameters except experience, analysis, and trial and error.
5.4 Multiple Processor Scheduling
Load sharing = the possibility of spreading work among more than one processor, assuming you can come up with a scheduling algorithm.
Homogeneous systems = each processor is the same. Any process can be assigned to any processor in the system.
Even in homogeneous systems, a process may be limited to a certain processor if a needed peripheral is attached to that processor.
Approaches to Multiple Processor Scheduling
Asymmetric multi-processing = master-slave architecture. The scheduling code runs on one processor only.
Symmetric Multi-Processing (SMP) = each processor is self-scheduling. There is still the question of whether the ready queue is local or global. To maximize concurrency, you need a global ready queue. Maintaining a global ready queue requires cooperation (concurrency control). This is a difficult problem, so most systems maintain local ready queues for each processor.
Most modern O/S's support SMP: Windows, Solaris, Linux, Mac OS X.
Processor Affinity
This term refers to trying to keep the same job on the same processor. Moving jobs between processors is expensive: everything that might have been cached would be lost unless explicitly recovered.
Soft affinity = not guaranteed to stay on the same processor.
Hard affinity = guaranteed to stay on the same processor.
Load Balancing
This term refers to trying to keep all processors busy at all times. This is an issue if there are at least as many jobs as there are processors. If a global ready queue is implemented, load balancing would naturally be part of the algorithm.
If a system only maintains local ready queues and there is no hard affinity, there are two approaches to moving jobs among processors:
Push migration = a single system process regularly checks processor utilization and pushes processes from busy processors to idle ones.
Pull migration = an idle processor reaches into the ready queue of a busy processor and extracts a process for itself.
Both kinds of migration can be built into a system (Linux, for example). By definition, migration and affinity are in opposition. There is a performance trade-off. Some systems try to gauge imbalance in load and only do migration if the imbalance rises above a certain threshold.
Symmetric multi-threading = SMT
Definition: provide multiple logical processors rather than multiple physical processors. This is known as hyperthreading on Intel chips. At a hardware level:
Each logical processor has its own architecture state (register values).
Each logical processor receives and handles its own interrupts.
All other hardware resources are shared.
An O/S doesn't have to be designed specifically for SMT. SMT should be transparent: the machine "looks like" an SMP machine. A system may combine SMT and SMP, i.e., there would be more than one logical processor on each of more than one physical processor. If the O/S is SMT-aware, it could be written to avoid this scheduling case: more than one process on the logical processors of one physical processor while another physical processor is idle.
Thread Scheduling
This is essentially an expansion on ideas raised in the last chapter. The term "contention scope" refers to the level at which scheduling is occurring.
Process Contention Scope (PCS) = the scheduling of threads on lightweight processes. In many-to-one or many-to-many schemes, threads of one or more user processes contend with each other to be scheduled. This is usually priority based, but not necessarily preemptive.
System Contention Scope (SCS) = the scheduling of kernel level threads on the actual machine. In a one-to-one mapping scheme, these kernel threads happen to represent user threads belonging to one or more processes.
Operating System Examples
In most previous chapters, the O/S example sections have been skipped because they involve needless detail. Concrete examples will be covered here for two reasons:
To give an idea of how complex real systems are.
To show that if you know the basic principles, you can tease apart the different pieces of an actual implementation.
Solaris Scheduling
Solaris scheduling is based on four priority classes:
Real time
System
Time sharing
Interactive
Practical points of Solaris scheduling:
High numbers = high priority; range of values: 0-59.
The four different priority classes are implemented in three queues (classes 3 and 4, time sharing and interactive, share one).
The distinction between 3 and 4 is that if a process requires the generation of windows, it is given a higher priority.
There is an inverse relationship between priority and time slice size.
A small time slice = quick response for high priority (interactive type) jobs.
A large time slice = good throughput for low priority (CPU bound) jobs.
Solaris Scheduling Queue: Notice that Jobs Don't Move Between Queues
Solaris Dispatch Table for Interactive and Time-sharing Threads

Starting    Allocated       New Priority after    New Priority after
Priority    Time Quantum    Quantum Expiration    Return from Sleep
0           200             0                     50
5           200             0                     50
10          160             0                     51
15          160             5                     51
20          120             10                    52
25          120             15                    52
30          80              20                    53
35          80              25                    54
40          40              30                    55
45          40              35                    56
50          40              40                    58
55          40              45                    58
59          20              49                    59
Later Versions of Solaris Add these Details
Fixed priority threads
Fair share priority threads
System processes don't change priorities
Real-time processes have the absolute highest priorities
Each scheduling class has a set of priorities. These are translated into global priorities, and the scheduler uses the global priorities to schedule.
Among threads of equal priority, the scheduler does RR scheduling.
Windows XP Scheduling
XP (kernel) thread scheduling is priority based preemptive. This supports soft real-time applications. There are 32 priorities, 0-31. A high number = a high priority. There is a separate queue for each priority. Priority 0 is used for memory management and will not come up further.
There is a relationship between priorities in the dispatcher and classes of jobs defined in the Win32 API. There are 6 API classes divided into 2 groups according to the priorities they have.
Class: Real time. Priorities: 16-31.
Variable (priority) classes:
High priority
Above normal priority
Normal priority
Below normal priority
Idle priority
The priorities of these classes can vary from 1-15.
Within each class there are 7 additional subdivisions:
Time critical
Highest
Above normal
Normal
Below normal
Lowest
Idle
Each thread has a base priority. This corresponds to the relative priority it's given within its class. The default base value would be the "normal" relative priority for the class. The distribution of values among classes and relative priorities is shown in the following table.
Columns = Priority Classes, Rows = Relative Priorities within Classes
The 'Normal' row contains the base priorities for the classes.

                 Real-time    High    Above normal    Normal    Below normal    Idle priority
Time-critical    31           15      15              15        15              15
Highest          26           15      12              10        8               6
Above normal     25           14      11              9         7               5
Normal           24           13      10              8         6               4
Below normal     23           12      9               7         5               3
Lowest           22           11      8               6         4               2
Idle             16           1       1               1         1               1
The scheduling algorithm dynamically changes a thread's priority if it's in the variable group. If a thread's time quantum expires, its priority is lowered, but not below its base priority. When a thread is released from waiting, its priority is raised. How much it's raised depends on what it was waiting for. For example:
Waiting for keyboard I/O: large raise
Waiting for disk I/O: smaller raise
For an interactive process, if the user thread is given a raise, the windowing process it's running in is also given a raise. These policies favor interactive and I/O bound jobs and attempt to control threads that are CPU hogs. XP has another feature that aids windowing performance: if several process windows are on the screen and one is brought to the foreground, its time quantum is increased by a factor such as 3 so that it can get something done before being preempted.
Linux Scheduling
Skip this. Two concrete examples are enough.
Java Scheduling
The JVM scheduling specification isn't detailed. Thread scheduling is supposed to be priority based. It does not have to be preemptive. Round-robin scheduling is not required, but a given implementation may have it.
If a JVM implementation doesn't have time-slicing or preemption, the programmer can try to devise cooperative multi-tasking in application code. The relevant Java API method call is Thread.yield(). This can be called in the run() method of a thread at the point where it is willing to give up the CPU to another thread.
Java Thread class priority constants:
Thread.MIN_PRIORITY (value = 1)
Thread.MAX_PRIORITY (value = 10)
Thread.NORM_PRIORITY (value = 5)
A new thread is given the priority of the thread that created it. The default priority is NORM_PRIORITY. The system doesn't change a thread's priority.
The programmer can assign a thread a priority value in the range 1-10. The relevant Java API method call is:
Thread.currentThread().setPriority(value);
This is done in the thread's run() method. This specification isn't foolproof, though. Java thread priorities have to be mapped to O/S kernel thread priorities. If the difference between Java priorities isn't great enough, they may be mapped to the same priority in the implementing system. The author gives Windows NT as an example where this can happen.
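A minimal runnable sketch of these calls (the thread names and loop are made up for illustration, and here the priorities are set from the creating thread, which is equally legal; as noted above, neither yield() nor the priorities guarantee any particular interleaving):

    public class PriorityDemo {
        public static void main(String[] args) {
            Runnable work = () -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println(Thread.currentThread().getName() + " running, priority "
                            + Thread.currentThread().getPriority());
                    Thread.yield();   // hint: willing to give up the CPU to another thread
                }
            };
            Thread low  = new Thread(work, "low");
            Thread high = new Thread(work, "high");
            low.setPriority(Thread.MIN_PRIORITY);    // 1
            high.setPriority(Thread.MAX_PRIORITY);   // 10
            low.start();
            high.start();
        }
    }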
Algorithm Evaluation
In general, algorithm selection is based on multiple criteria. For example:
Maximize CPU utilization under the constraint that maximum response time is < 1 second.
Maximize throughput such that turnaround time, on average, is linearly proportional to total execution time.
Deterministic Modeling
This is a form of analytic evaluation. For a given set of jobs, with all parameters known, you can determine performance under various scheduling scenarios. This is OK for developing examples and exploring possibilities. It's not generally a practical way to pick a scheduling algorithm for a real system with an unknown mix of jobs. The thumbnail analyses with Gantt charts are a simplified example of deterministic modeling.
Queuing Models
These are basically statistical models where the statistical assumptions are based on past observation of real systems. The first distribution of interest: arrival of jobs into the system. Typically Poisson.
All of the rest of the distributions tend to be exponential:
CPU burst occurrence distribution
CPU burst length distribution
I/O burst occurrence distribution
I/O wait length distribution
Given these distributions it is possible to calculate:
Throughput
CPU utilization
Waiting time
A Simple Example of an Analysis Formula
Let the following parameters be given:
N = average number of processes in the queue
L = arrival rate of processes
W = average waiting time in the queue
Then N = L * W.
This is known as Little's formula. Given any two of the parameters, the third can be calculated. Note that the formula applies when the system is in a steady state: the number of processes entering the queue = the number of processes leaving the queue. Increase or decrease in the queue length occurs when the system is not in steady state.
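For example, if processes arrive at an average rate of L = 7 per second and there are N = 14 processes in the queue on average, then the average waiting time is W = N / L = 14 / 7 = 2 seconds.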
Queuing models are not perfect
They are limited by the match or mismatch between the chosen distributions and actual behavior. They are simplifications because they aggregate behavior and may overlook some factors. They rely on mathematical assumptions (they treat arrival, service, and waiting as mathematically independent distributions, when for each process/burst these successive events are related). They are useful for getting ideas, but they do not perfectly match reality.
Simulation Modeling
Basic elements of a simulation:
A clock (discrete event simulation)
Data structures modeling state
Modules which model activity which changes state
Simulation input:
Random number generation based on statistical distributions for processes (again, a mathematical simplification)
Trace tapes: records of events in actual runs. They provide an excellent basis for comparing two algorithms.
Simulation models can generate statistics on all aspects of performance under the simulated workload. Obtaining suitable input data and coding the simulation are not trivial tasks. Coding an O/S and living with the implementation choices it embodies are also not trivial. Making the model may be worth the cost if it aids in developing the O/S.
Implementation
Implementation is the gold standard of algorithm evaluation and system testing. You code an algorithm, install it in the O/S, and test it under real conditions. Problems:
Coding cost
Installation cost
User reactions to modifications
User Reactions
Changes in O/S code result from perceived shortcomings in performance. Performance depends on the algorithm, the mix of jobs, and the behavior of jobs. If the algorithm is changed, users will change their code and behavior to adapt to the altered O/S runtime environment. This can cause the same performance problem to recur, or a new problem to occur.
Examples:
If job priority is gauged by size (smaller jobs given higher priority), programmers may break their applications into separate processes.
If job priority is gauged by frequency of I/O (I/O bound processes are given higher priority), programmers may introduce (needless) I/O into their applications.
Without resorting to subterfuge, a Java application programmer has some control over behavior in a single application by threading and using calls like yield() and setPriority(). A large scale O/S will be tunable: the system administrator will be able to set scheduling algorithms and their parameters to meet the job mix at any given time.