3 Basic Concepts
- Multiprogramming was introduced to maximize CPU utilization
- CPU–I/O Burst Cycle
  - Process execution consists of a cycle of CPU execution and I/O wait
- CPU and I/O burst distribution
  - Can be viewed as a random variable
  - Can be characterized with an empirical and/or theoretical probability distribution
6 CPU Scheduler
- Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
- CPU scheduling decisions may take place when a process:
  1. Switches from running to waiting state
  2. Switches from running to ready state
  3. Switches from waiting to ready state
  4. Terminates
- Scheduling under 1 and 4 is nonpreemptive
  - Once a process is given the CPU, it holds it until it terminates or switches to the waiting state
- All other scheduling is preemptive
7 Dispatcher
- The dispatcher module gives control of the CPU to the process selected by the CPU (short-term) scheduler
- The dispatcher's function involves:
  - switching context
  - switching to user mode
  - jumping to the proper location in the user program to restart that program
- Dispatch latency
  - The time it takes for the dispatcher to stop one process and start another running
8 Scheduling Algorithms
- Before we look into scheduling algorithms, we need to set criteria (measures) and objectives that will be used in designing and evaluating such algorithms
- Scheduling algorithms should optimize some objectives derived from certain criteria
- Need to understand various properties of scheduling algorithms so that we can choose the algorithm best suited to a particular computing environment
9 Scheduling Criteria
- CPU utilization: percent of time the CPU is busy executing processes
- Throughput: number of processes that complete their execution per time unit
- Turnaround time: amount of time to execute a particular process
- Waiting time: amount of time a process has been waiting in the ready queue
- Response time: amount of time from when a request (program) was submitted until the first response is produced
  - Does not include output time
10 Optimization Objectives
- Maximize CPU utilization
- Maximize throughput
- Minimize the maximum turnaround time
- Minimize average waiting time
- Minimize average response time
- Minimize the variance of response time
- Etc.
- We will consider minimizing the average waiting time
11 First-Come First-Served (FCFS) Scheduling
- The process requesting the CPU first gets it first
- Simple to implement with a single FIFO queue of ready processes
  - Link their PCBs into that queue
- Non-preemptive scheduling algorithm
12 FCFS Scheduling Example
  Process  Burst Time
  P1       24
  P2       3
  P3       3
- Suppose that the processes arrive at time 0 in the order: P1, P2, P3
- The Gantt chart for the schedule is:
  | P1 (0–24) | P2 (24–27) | P3 (27–30) |
- Waiting time for P1 = 0; P2 = 24; P3 = 27
- Average waiting time: (0 + 24 + 27)/3 = 17
13 FCFS Scheduling Example
- Suppose that the processes arrive in the order P2, P3, P1
- The Gantt chart for the schedule is:
  | P2 (0–3) | P3 (3–6) | P1 (6–30) |
- Waiting time for P1 = 6; P2 = 0; P3 = 3
- Average waiting time: (6 + 0 + 3)/3 = 3
- Average waiting time depends on the order of processes in the FIFO queue
- Convoy effect
  - Arises when a number of short (I/O-bound) processes wait behind a long (CPU-bound) process
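The waiting-time arithmetic in the two FCFS examples can be checked with a short script (a sketch; the process names and burst times are those of the examples above):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0.

    `bursts` is a list of (name, burst_time) pairs in queue order."""
    waits = {}
    clock = 0
    for name, burst in bursts:
        waits[name] = clock      # waits until all earlier bursts finish
        clock += burst
    return waits

# Order P1, P2, P3 (first example)
w = fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)])
print(w, sum(w.values()) / len(w))   # {'P1': 0, 'P2': 24, 'P3': 27} 17.0

# Order P2, P3, P1 (second example)
w = fcfs_waiting_times([("P2", 3), ("P3", 3), ("P1", 24)])
print(sum(w.values()) / len(w))      # 3.0
```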
14 Shortest-Job-First (SJF) Scheduling
- Associate with each process the length of its next CPU burst; use these lengths to schedule the process with the shortest time
- Two schemes:
  - Non-preemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst
  - Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt it
    - This scheme is known as Shortest-Remaining-Time-First (SRTF)
- SJF is optimal
  - It gives the minimum average waiting time for a given set of processes
15 Non-Preemptive SJF Example
  Process  Arrival Time  Burst Time
  P1       0             7
  P2       2             4
  P3       4             1
  P4       5             4
- SJF (non-preemptive) schedule:
  | P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |
- Average waiting time = (0 + 6 + 3 + 7)/4 = 4
16 Preemptive SJF Example
  Process  Arrival Time  Burst Time
  P1       0             7
  P2       2             4
  P3       4             1
  P4       5             4
- SJF (preemptive, SRTF) schedule:
  | P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |
- Average waiting time = (9 + 1 + 0 + 2)/4 = 3
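The preemptive (SRTF) schedule above can be reproduced with a small simulator (a sketch that steps one time unit at a time; process names and times are those of the example):

```python
def srtf_waiting_times(procs):
    """Waiting times under preemptive SJF (SRTF).

    `procs` is a list of (name, arrival, burst) triples. At each time
    unit the ready process with the shortest remaining time runs."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    bursts = {name: burst for name, _, burst in procs}
    finish = {}
    clock = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        n = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            finish[n] = clock
            del remaining[n]
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - bursts[n] for n in finish}

waits = srtf_waiting_times([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(waits, sum(waits.values()) / 4)   # average 3.0, matching the slide
```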
17 Difficulties with SJF
- The length of the next CPU burst is generally not known ahead of time
- Can only estimate the length of the next CPU burst
  - Need appropriate estimating functions
  - Can be done by using the lengths of previous CPU bursts, with exponential averaging
19 Examples of Exponential Averaging
With τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the nth CPU burst and τ(n) its predicted length:
- α = 0: τ(n+1) = τ(n)
  - Recent history does not count
- α = 1: τ(n+1) = t(n)
  - Only the actual last CPU burst counts
- If we expand the formula, we get:
  τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0)
- Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
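The two limiting cases above can be demonstrated directly; in this sketch, α and the initial guess τ(0) are illustrative choices, not values from the slides:

```python
def predict_next_burst(prev_bursts, alpha=0.5, tau0=10.0):
    """Exponentially averaged prediction of the next CPU burst.

    Applies tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n) over the
    observed bursts; alpha and tau0 are illustrative choices."""
    tau = tau0
    for t in prev_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# alpha = 0: the prediction never changes; alpha = 1: it is always the
# last observed burst.
print(predict_next_burst([6, 4, 6, 4], alpha=0.0))  # 10.0
print(predict_next_burst([6, 4, 6, 4], alpha=1.0))  # 4.0
print(predict_next_burst([6, 4, 6, 4], alpha=0.5))  # 5.0
```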
20 Priority Scheduling
- A priority number (integer) is associated with each process
- The CPU is allocated to the process with the highest priority
  - Here, we use the convention that smallest integer = highest priority
- It can be preemptive or non-preemptive
- SJF is priority scheduling where priority is determined by the predicted next CPU burst time
- Can cause starvation
  - A low-priority process may never execute
- Aging is a technique that can be used to avoid starvation
  - As time progresses, increase the priority of processes waiting for the CPU
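Aging can be sketched as an adjustment to each waiting process's effective priority. The processes, priorities, and aging rate below are made up for illustration; with aging disabled, the long-waiting low-priority job B keeps losing to the fresher job C:

```python
def priority_with_aging(procs, age_rate=0):
    """Non-preemptive priority scheduling with aging (a sketch).

    `procs`: list of (name, arrival, priority, burst); smaller number =
    higher priority. A waiting process's effective priority improves by
    `age_rate` per time unit waited, which prevents starvation.
    Returns the order in which processes run."""
    pending = {n: (a, p, b) for n, a, p, b in procs}
    order, clock = [], 0
    while pending:
        ready = [n for n, (a, _, _) in pending.items() if a <= clock]
        if not ready:  # idle until the next arrival
            clock = min(a for a, _, _ in pending.values())
            continue
        # effective priority falls (improves) with time spent waiting
        n = min(ready, key=lambda x: pending[x][1] - age_rate * (clock - pending[x][0]))
        order.append(n)
        clock += pending[n][2]   # run to completion (non-preemptive)
        del pending[n]
    return order

procs = [("A", 0, 1, 4), ("B", 0, 9, 3), ("C", 4, 2, 3)]
print(priority_with_aging(procs, age_rate=0))  # ['A', 'C', 'B']: B keeps waiting
print(priority_with_aging(procs, age_rate=2))  # ['A', 'B', 'C']: aging lets B run
```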
21 Round Robin (RR) Scheduling
- Each process gets a small unit of CPU time (time quantum or slice), usually on the order of milliseconds
- After this time slice has elapsed, the process is preempted and added to the end of the ready queue
- If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once
  - No process waits more than (n − 1)q time units
- When deciding on the time slice, the overhead due to the dispatcher must be considered
- Performance characteristics:
  - q large ⇒ behaves like FCFS
  - q small ⇒ q must still be large with respect to the context-switch time, otherwise overhead is too high
22 RR Scheduling Example with Time Quantum = 20
  Process  Burst Time
  P1       53
  P2       17
  P3       68
  P4       24
- The Gantt chart is:
  | P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |
- Typically higher average turnaround time than SJF, but better response time
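The Gantt chart above can be generated mechanically with a FIFO queue (a sketch; processes and bursts are those of the example, all arriving at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin schedule for processes all arriving at time 0.

    `bursts` is a list of (name, burst_time) pairs; returns the Gantt
    chart as (name, start, end) slices."""
    queue = deque(bursts)          # (name, remaining) pairs in FIFO order
    chart, clock = [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        chart.append((name, clock, clock + run))
        clock += run
        if remaining > run:        # unfinished: back to the tail of the queue
            queue.append((name, remaining - run))
    return chart

chart = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
for name, start, end in chart:
    print(f"{name} ({start}-{end})", end=" ")   # ends with P3 (154-162)
```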
27 Multilevel Queue Scheduling
- Scheduling must be done between the queues
- Fixed-priority scheduling
  - Serve all from foreground, then from background
  - Possibility of starvation
- Time slice
  - Each queue gets a certain amount of CPU time which it can schedule amongst its processes
  - e.g., 80% to foreground in RR, 20% to background in FCFS
28 Multilevel Feedback Queue Scheduling
- A process can move between the various queues
  - Aging can be implemented this way
- A multilevel-feedback-queue scheduler is defined by the following parameters:
  - number of queues
  - scheduling algorithm for each queue
  - method used to determine when to promote a process
  - method used to determine when to demote a process
  - method used to determine which queue a process will enter when that process needs service
30 Multilevel Feedback Queue Scheduling Example
- Three queues:
  - Q0: time quantum 8 milliseconds
  - Q1: time quantum 16 milliseconds
  - Q2: FCFS
- Scheduling
  - A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1
  - At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2
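The three-queue example can be sketched in a few lines; the demotion rule and quanta follow the slide, while the sample jobs A and B are made up for illustration:

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Sketch of the three-queue example: the first queues are
    round-robin with the given quanta; the last queue is FCFS.

    `bursts`: (name, burst_time) pairs, all arriving at time 0.
    Returns (name, start, end) slices of the resulting schedule."""
    queues = [deque(bursts), deque(), deque()]
    chart, clock = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty queue
        name, remaining = queues[level].popleft()
        # last level is FCFS: run to completion
        run = remaining if level == len(quanta) else min(quanta[level], remaining)
        chart.append((name, clock, clock + run))
        clock += run
        if remaining > run:        # demote unfinished jobs one level
            queues[level + 1].append((name, remaining - run))
    return chart

# A long job (30 ms) drifts down to Q2; a short job (5 ms) finishes in Q0.
print(mlfq([("A", 30), ("B", 5)]))
```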
31 Multiple-Processor Scheduling
- CPU scheduling is more complex when multiple CPUs are available
- Homogeneous processors
  - All processors are identical within the multiprocessor system
- Load sharing
  - Idle processors share the load of busy processors
  - Maintain a single ready queue shared among all the processors
- Symmetric multiprocessing
  - Each processor schedules a process autonomously from the shared ready queue
- Asymmetric multiprocessing
  - Only one processor accesses the system data structures, alleviating the need for data sharing
32 Real-Time Scheduling
- Hard real-time systems
  - Required to complete a critical task within a guaranteed amount of time
- Soft real-time computing
  - Requires that critical processes receive priority over less fortunate ones
- For soft real-time scheduling:
  - The system must have priority scheduling
  - The dispatch latency must be small
  - Problem: many OSs wait to do a context switch until either a system call completes or an I/O block occurs
33 Keeping the Dispatch Latency Small
- Introduce preemption points within the kernel
  - Where it is safe to preempt a system call by a higher-priority process
  - Trouble is, only a few such points can be practically added
- Make the entire kernel preemptive
  - Need to ensure that it is safe for a higher-priority process to access shared kernel data structures
  - Complex method that is widely used
- What happens when a high-priority process waits for lower-priority processes to finish using a shared kernel data structure (priority inversion)?
  - Priority inheritance
35 Evaluating Scheduling Algorithms
- Deterministic modeling
  - Take a particular predetermined workload and determine the performance of each algorithm for that workload
- Queueing models
  - Use mathematical formulas to analyze the performance of algorithms under some (simple and possibly unrealistic) workloads
- Simulation models
  - Use probabilistic models of workloads
  - Use workloads captured in a running system (traces)
- Implementation
36 Queueing Models
- Assuming that process inter-arrival times and CPU burst times are independent, identically distributed exponential random variables
- FCFS scheduling
37 Evaluation of CPU Scheduling Algorithms by Simulation
40 Linux Scheduling
- Two algorithms: time-sharing and real-time
- Time-sharing
  - Prioritized credit-based: the process with the most credits is scheduled next
  - Credit is subtracted when a timer interrupt occurs
  - When credit = 0, another process is chosen
  - When all processes have credit = 0, recrediting occurs
    - Based on factors including priority and history
- Real-time
  - Soft real-time
  - POSIX.1b compliant: two classes, FCFS and RR
  - The highest-priority process always runs first
41 The Relationship Between Priorities and Time-Slice Length