
1 COT 4600 Operating Systems Fall 2009 Dan C. Marinescu Office: HEC 439 B Office hours: Tu-Th 3:00-4:00 PM

2 Lecture 25
Attention: project phase 4 and HW 6 are due Tuesday, November 24; the final exam is Thursday, December 10, 4:00-6:50 PM.
Last time:
- Multi-level memories
- Memory characterization
- Multilevel memory management using virtual memory
- Adding multi-level memory management to virtual memory
Today:
- Scheduling
Next time:
- Network properties (Chapter 7) - available online from the publisher of the textbook

3 Scheduling
The process of allocating resources, e.g., CPU cycles, to threads/processes. Distinguish:
- Policies
- Mechanisms to implement policies
Scheduling problems have evolved over time:
- Early on: emphasis on CPU scheduling
- Now: more interest in transaction processing and I/O optimization
Scheduling decisions are made at different levels of abstraction and it is not always easy to mediate among them.

4 Example: an overloaded transaction processing system
Incoming transactions are queued in a buffer which may fill up; the interrupt handler is constantly invoked as dropped requests are re-issued; the transaction processing thread has no chance to empty the buffer.
Solution: when the buffer is full, disable the interrupts caused by incoming transactions and allow the transaction processing thread to run.

5 Scheduling objectives
Performance metrics:
- CPU utilization → fraction of time the CPU does useful work over total time
- Throughput → number of jobs finished per unit of time
- Turnaround time → time spent by a job in the system
- Response time → time to get the results
- Waiting time → time waiting to start processing
All of these are random variables → we are interested in averages!!
The objectives, for system managers (M) and users (U):
- Maximize CPU utilization → M
- Maximize throughput → M
- Minimize turnaround time → U
- Minimize waiting time → U
- Minimize response time → U

6 CPU burst
CPU burst → the stretch of time a thread/process executes on the CPU between I/O operations (or until it finishes).

7 Scheduling policies
- First-Come, First-Served (FCFS)
- Shortest Job First (SJF)
- Round Robin (RR)
- Preemptive/non-preemptive scheduling

8 First-Come, First-Served (FCFS)
Thread / burst time: P1 = 24, P2 = 3, P3 = 3
Processes arrive in the order: P1, P2, P3
Gantt chart for the schedule: P1 runs 0-24, P2 runs 24-27, P3 runs 27-30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Convoy effect → short processes wait behind a long process

9 FCFS Scheduling (cont'd)
Now the threads arrive in the order: P2, P3, P1
Gantt chart: P2 runs 0-3, P3 runs 3-6, P1 runs 6-30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better!!
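A minimal sketch (not from the slides) of how these FCFS waiting times can be computed; the thread names and burst lengths mirror the two examples above, and all threads are assumed to arrive at time 0.

```python
def fcfs_waiting_times(bursts):
    """bursts: list of (name, burst) in arrival order, all arriving at time 0.
    Under FCFS a thread waits exactly as long as the bursts queued ahead of it."""
    waits, clock = {}, 0
    for name, burst in bursts:
        waits[name] = clock
        clock += burst
    return waits

# Arrival order P1, P2, P3 -> {'P1': 0, 'P2': 24, 'P3': 27}, average 17
print(fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)]))
# Arrival order P2, P3, P1 -> {'P2': 0, 'P3': 3, 'P1': 6}, average 3
print(fcfs_waiting_times([("P2", 3), ("P3", 3), ("P1", 24)]))
```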

10 Shortest-Job-First (SJF)
Use the length of the next CPU burst to schedule the thread/process with the shortest time.
SJF is optimal → it gives the minimum average waiting time for a given set of threads/processes.
Two schemes:
- Non-preemptive → the thread/process cannot be preempted until it completes its CPU burst
- Preemptive → if a new thread/process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt; known as Shortest-Remaining-Time-First (SRTF)

11 Example of non-preemptive SJF
Thread / arrival time / burst time: P1: 0.0, 7; P2: 2.0, 4; P3: 4.0, 1; P4: 5.0, 4
SJF (non-preemptive) Gantt chart: P1 runs 0-7, P3 runs 7-8, P2 runs 8-12, P4 runs 12-16
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
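A sketch of non-preemptive SJF under the slide's assumptions (single CPU, burst lengths known in advance); it reproduces the waiting times of the example above.

```python
def sjf_nonpreemptive(jobs):
    """jobs: dict name -> (arrival, burst). Returns the waiting time per job.
    Whenever the CPU becomes free, pick the ready job with the shortest burst."""
    remaining = dict(jobs)
    waits, clock = {}, 0
    while remaining:
        ready = [n for n, (a, b) in remaining.items() if a <= clock]
        if not ready:
            clock = min(a for a, b in remaining.values())  # CPU idle until next arrival
            continue
        name = min(ready, key=lambda n: remaining[n][1])   # shortest burst first
        arrival, burst = remaining.pop(name)
        waits[name] = clock - arrival
        clock += burst
    return waits

# -> {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7}, average (0 + 6 + 3 + 7)/4 = 4
print(sjf_nonpreemptive({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}))
```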

12 Example of Shortest-Remaining-Time-First (SRTF) (preemptive SJF)
Thread / arrival time / burst time: P1: 0.0, 7; P2: 2.0, 4; P3: 4.0, 1; P4: 5.0, 4
SRTF Gantt chart: P1 runs 0-2, P2 runs 2-4, P3 runs 4-5, P2 runs 5-7, P4 runs 7-11, P1 runs 11-16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
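The preemptive variant needs to track remaining work; a unit-time simulation sketch (again with the slide's arrival and burst values) yields the same waiting times.

```python
def srtf(jobs):
    """Shortest-Remaining-Time-First sketch. jobs: dict name -> (arrival, burst).
    One time unit per step, always running the ready job with the least remaining work."""
    arrival = {n: a for n, (a, b) in jobs.items()}
    remaining = {n: b for n, (a, b) in jobs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        name = min(ready, key=lambda n: remaining[n])
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = clock
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - jobs[n][1] for n in jobs}

# -> {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2}, average (9 + 1 + 0 + 2)/4 = 3
print(srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}))
```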

13 Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the thread/process is preempted and added to the end of the ready queue.
If there are n threads/processes in the ready queue and the time quantum is q, then each thread/process gets 1/n of the CPU time in chunks of at most q time units at once. No thread/process waits more than (n-1)q time units.
Performance:
- q large → FIFO
- q small → q must still be large with respect to the context switch time, otherwise the overhead is too high

14 RR with time slice q = 20
Thread / burst time: P1 = 53, P2 = 17, P3 = 68, P4 = 24
Gantt chart: P1 runs 0-20, P2 runs 20-37, P3 runs 37-57, P4 runs 57-77, P1 runs 77-97, P3 runs 97-117, P4 runs 117-121, P1 runs 121-134, P3 runs 134-154, P3 runs 154-162
Typically, RR gives a higher average turnaround time than SJF, but better response time.
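A round-robin sketch (all threads assumed to arrive at time 0, as on the slide) that reproduces the chart above for q = 20.

```python
from collections import deque

def round_robin(bursts, q):
    """bursts: list of (name, burst) in arrival order; q: time quantum.
    Returns the schedule as (name, start, end) slices."""
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)
    schedule, clock = [], 0
    while queue:
        name = queue.popleft()
        run = min(q, remaining[name])
        schedule.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)      # not finished: back to the end of the ready queue
    return schedule

# P1 0-20, P2 20-37, P3 37-57, P4 57-77, P1 77-97, P3 97-117, P4 117-121, ...
for piece in round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20):
    print(piece)
```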

15 Time slice (quantum) and context switch time

16 Turnaround time as a function of the time quantum

17 The three jobs below under FCFS, SJF, and RR (summarized on the next slide)
Columns: Job | Arrival time | Work | Start time | Finish time | Wait time till start | Time in system
FCFS:
A | 0 | 3 | 0 | 3 | 0 | 3
B | 1 | 5 | 3 | 3 + 5 = 8 | 3 - 1 = 2 | 8 - 1 = 7
C | 3 | 2 | 8 | 8 + 2 = 10 | 8 - 3 = 5 | 10 - 3 = 7
SJF:
A | 0 | 3 | 0 | 3 | 0 | 3
B | 1 | 5 | 5 | 5 + 5 = 10 | 4 | 10 - 1 = 9
C | 3 | 2 | 3 | 3 + 2 = 5 | 0 | 5 - 3 = 2
RR:
A | 0 | 3 | 0 | 6 | 0 | 6 - 0 = 6
B | 1 | 5 | 1 | 10 | 1 - 1 = 0 | 10 - 1 = 9
C | 3 | 2 | 5 | 8 | 5 - 3 = 2 | 8 - 3 = 5

18 Scheduling policy | Average waiting time till the job started | Average time in system
FCFS | 7/3 | 17/3
SJF | 4/3 | 14/3
RR | 3/3 | 20/3

19 Priority scheduling
Each thread/process has a priority and the one with the highest priority (smallest integer → highest priority) is scheduled next.
- Preemptive
- Non-preemptive
SJF is a priority scheduling scheme where the priority is the predicted next CPU burst time.
Problem → starvation: low-priority threads/processes may never execute.
Solution to starvation → aging: as time progresses, increase the priority of the thread/process.
The priority may be computed dynamically.
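A non-preemptive sketch of priority scheduling with aging; the job set and the aging step are illustrative values, not from the slides.

```python
def priority_with_aging(jobs, aging_step=1):
    """jobs: list of (name, priority, burst); a smaller integer means a higher priority.
    Every time a job is dispatched, each job still waiting has its priority value
    lowered (i.e., its priority raised) by aging_step, which prevents starvation."""
    waiting = {name: [prio, burst] for name, prio, burst in jobs}
    dispatch_order = []
    while waiting:
        name = min(waiting, key=lambda n: waiting[n][0])   # highest-priority ready job
        waiting.pop(name)
        dispatch_order.append(name)
        for entry in waiting.values():                     # aging for the jobs left behind
            entry[0] -= aging_step
    return dispatch_order

# -> ['P2', 'P3', 'P1'] with these illustrative priorities
print(priority_with_aging([("P1", 5, 10), ("P2", 1, 5), ("P3", 3, 8)]))
```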

20 Priority inversion
A lower priority thread/process prevents a higher priority one from running.
T3 has the highest priority, T1 has the lowest priority; T1 and T3 share a lock. T1 acquires the lock, then it is suspended when T3 starts. Eventually T3 requests the lock and is suspended waiting for T1 to release it. T2 has higher priority than T1 and runs; now neither T3 nor T1 can run: T1 because of its low priority, T3 because it needs the lock held by T1.
Solution: allow a low-priority thread holding a lock to run with the higher priority of the thread that requests the lock (priority inheritance).

21 Estimating the length of the next CPU burst
Done using the lengths of previous CPU bursts, with exponential averaging.

22 Exponential averaging
τ(n+1) = α t(n) + (1 - α) τ(n), where t(n) is the measured length of the n-th CPU burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1.
- α = 0 → τ(n+1) = τ(n) → recent history does not count
- α = 1 → τ(n+1) = t(n) → only the actual last CPU burst counts
If we expand the formula, we get:
τ(n+1) = α t(n) + (1 - α) α t(n-1) + ... + (1 - α)^j α t(n-j) + ... + (1 - α)^(n+1) τ(0)
Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor.
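A short sketch of the prediction rule; α = 0.5 and the initial guess τ(0) = 10 are illustrative choices, not values from the slides.

```python
def predict_next_burst(observed_bursts, alpha=0.5, tau0=10.0):
    """Apply tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n) over the observed bursts
    and return the prediction for the next one."""
    tau = tau0
    for t in observed_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Observed bursts 6, 4, 6, 4 with initial guess 10 -> predicted next burst 5.0
print(predict_next_burst([6, 4, 6, 4]))
```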

23 Predicting the length of the next CPU burst

24 Multilevel queue
The ready queue is partitioned into separate queues, each with its own scheduling algorithm:
- foreground (interactive) → RR
- background (batch) → FCFS
Scheduling between the queues:
- Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
- Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

25 Multilevel Queue Scheduling

26 Multilevel feedback queue
A process can move between the various queues; aging can be implemented this way.
A multilevel-feedback-queue scheduler is characterized by:
- the number of queues
- the scheduling algorithm for each queue
- the strategy for deciding when to upgrade/demote a process
- the strategy for deciding which queue a process enters when it needs service

27 Example of a multilevel feedback queue
Three queues:
- Q0 → RR with time quantum 8 milliseconds
- Q1 → RR with time quantum 16 milliseconds
- Q2 → FCFS
Scheduling:
- A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
- At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
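A sketch of this three-level feedback queue (RR with q = 8, RR with q = 16, then FCFS). The job set is hypothetical and all jobs are assumed to arrive at time 0, so preemption of a running lower-level job by a new arrival is not modeled.

```python
from collections import deque

def mlfq(jobs, quanta=(8, 16)):
    """jobs: list of (name, burst). Returns the schedule as (name, start, end) slices."""
    queues = [deque(), deque(), deque()]      # Q0 (RR, 8 ms), Q1 (RR, 16 ms), Q2 (FCFS)
    remaining = dict(jobs)
    for name, _ in jobs:
        queues[0].append(name)                # every new job enters Q0
    schedule, clock = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name = queues[level].popleft()
        # RR levels run at most one quantum; the FCFS level runs the job to completion
        run = remaining[name] if level == 2 else min(quanta[level], remaining[name])
        schedule.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queues[level + 1].append(name)    # did not finish: demote to the next queue
    return schedule

print(mlfq([("A", 30), ("B", 6), ("C", 20)]))
```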

28 Multilevel Feedback Queues

29 Unix scheduler
The higher the number quantifying the priority, the lower the actual process priority.
Priority = (recent CPU usage)/2 + base
Recent CPU usage → how often the process has used the CPU since the last time priorities were calculated.
Does this strategy raise or lower the priority of a CPU-bound process?
Example:
- base = 60
- recent CPU usage: P1 = 40, P2 = 18, P3 = 10
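Finishing the slide's example with the stated rule (recent CPU usage divided by 2, plus the base of 60):

```python
def unix_priority(recent_cpu_usage, base=60):
    """Priority = (recent CPU usage)/2 + base; a larger number means a lower priority."""
    return recent_cpu_usage // 2 + base

# P1: 40/2 + 60 = 80, P2: 18/2 + 60 = 69, P3: 10/2 + 60 = 65.
# The process with the heaviest recent CPU usage (P1) gets the largest number,
# i.e., the lowest priority, so the strategy lowers the priority of CPU-bound processes.
for name, usage in [("P1", 40), ("P2", 18), ("P3", 10)]:
    print(name, unix_priority(usage))
```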

30 Comparison of scheduling algorithms: Round Robin, FCFS, MFQ (multi-level feedback queue), SJF (shortest job first), SRJN (shortest remaining job next)
Throughput:
- Round Robin: may be low if the quantum is too small
- FCFS: not emphasized
- MFQ: may be low if the quantum is too small
- SJF: high
- SRJN: high
Response time:
- Round Robin: shortest average response time if the quantum is chosen correctly
- FCFS: may be poor
- MFQ: good for I/O-bound but poor for CPU-bound processes
- SJF: good for short processes, but may be poor for longer processes
- SRJN: good for short processes, but may be poor for longer processes

31 Comparison of scheduling algorithms (cont'd)
I/O-bound processes:
- Round Robin: no distinction between CPU-bound and I/O-bound
- FCFS: no distinction between CPU-bound and I/O-bound
- MFQ: an I/O-bound process gets a high priority if CPU-bound processes are present
- SJF: no distinction between CPU-bound and I/O-bound
- SRJN: no distinction between CPU-bound and I/O-bound
Infinite postponement:
- Round Robin: does not occur
- FCFS: does not occur
- MFQ: may occur for CPU-bound processes
- SJF: may occur for processes with long estimated running times
- SRJN: may occur for processes with long estimated running times

32 Comparison of scheduling algorithms (cont'd)
Overhead:
- Round Robin: low
- FCFS: the lowest
- MFQ: can be high; complex data structures and processing routines
- SJF: can be high; a routine must find the shortest job at each reschedule
- SRJN: can be high; a routine must find the minimum remaining time at each reschedule
CPU-bound processes:
- Round Robin: no distinction between CPU-bound and I/O-bound
- FCFS: no distinction between CPU-bound and I/O-bound
- MFQ: a CPU-bound process gets a low priority if I/O-bound processes are present
- SJF: no distinction between CPU-bound and I/O-bound
- SRJN: no distinction between CPU-bound and I/O-bound

33 Terminology for scheduling algorithms
A scheduling problem is defined by:
- the machine environment
- a set of side constraints and characteristics
- the optimality criterion
Machine environments:
- 1 → one machine
- P → parallel identical machines
- Q → parallel machines of different speeds
- R → parallel unrelated machines
- O → open shop: m specialized machines; a job requires a number of operations, each demanding processing by a specific machine
- F → flow shop

34 One-machine environment
n jobs: 1, 2, ..., n.
pj → the amount of processing time required by job j.
rj → the release time of job j, the time when job j becomes available for processing.
wj → the weight of job j.
dj → the due time of job j, the time by which job j should be completed.
A schedule S specifies for each job j which pj units of time are used to process the job.
Cj(S) → the completion time of job j under schedule S.
The makespan of S is Cmax(S) = maxj Cj(S).
The average completion time is (1/n) Σj Cj(S).

35 One-machine environment (cont'd)
The average weighted completion time is (1/n) Σj wj Cj(S).
Optimality criteria → minimize:
- the makespan Cmax(S)
- the average completion time (1/n) Σj Cj(S)
- the average weighted completion time (1/n) Σj wj Cj(S)
The lateness of job j under schedule S is Lj(S) = Cj(S) - dj; the maximum lateness of any job under schedule S is Lmax(S) = maxj Lj(S). Another optimality criterion: minimize the maximum lateness.

36 Priority rules for the one-machine environment
Theorem: scheduling jobs according to SPT (shortest processing time first) is optimal for minimizing the average completion time (1/n) Σj Cj(S).
Theorem: scheduling jobs in non-decreasing order of pj/wj is optimal for minimizing the average weighted completion time (1/n) Σj wj Cj(S).
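A tiny sketch of the two rules; the job data is hypothetical.

```python
def spt_order(jobs):
    """jobs: list of (name, p). SPT: sort by processing time."""
    return sorted(jobs, key=lambda job: job[1])

def weighted_spt_order(jobs):
    """jobs: list of (name, p, w). Sort by non-decreasing p/w (Smith's rule)."""
    return sorted(jobs, key=lambda job: job[1] / job[2])

print(spt_order([("J1", 4), ("J2", 1), ("J3", 3)]))                    # J2, J3, J1
print(weighted_spt_order([("J1", 4, 4), ("J2", 1, 1), ("J3", 3, 6)]))  # J3, then J1, J2 (tie at p/w = 1)
```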

37 Real-time schedulers
Soft versus hard real-time systems:
- a control system of a nuclear power plant → hard deadlines
- a music system → soft deadlines
Time to extinction → the time until it no longer makes sense to begin the action.

38 Earliest deadline first (EDF)
A dynamic scheduling algorithm for real-time operating systems. When a scheduling event occurs (a task finishes, a new task is released, etc.), the priority queue is searched for the process closest to its deadline; this process is scheduled for execution next.
EDF is an optimal preemptive scheduling algorithm for uniprocessors, in the following sense: if a collection of independent jobs, each characterized by an arrival time, an execution requirement, and a deadline, can be scheduled (by any algorithm) such that all the jobs complete by their deadlines, then EDF will schedule this collection of jobs such that they all complete by their deadlines.
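A unit-time EDF sketch for independent one-shot jobs; the job set (arrival, execution requirement, deadline) is hypothetical, not from the slides.

```python
import heapq

def edf_meets_deadlines(jobs):
    """jobs: list of (arrival, execution, deadline). Simulates preemptive EDF one time
    unit at a time, always running the released job with the earliest deadline, and
    returns True if every job finishes by its deadline."""
    jobs = sorted(jobs)                      # by arrival time
    ready, clock, i, all_met = [], 0, 0, True
    while i < len(jobs) or ready:
        while i < len(jobs) and jobs[i][0] <= clock:
            arrival, work, deadline = jobs[i]
            heapq.heappush(ready, (deadline, i, work))
            i += 1
        if not ready:
            clock = jobs[i][0]               # idle until the next release
            continue
        deadline, idx, work = heapq.heappop(ready)
        clock += 1                           # run the earliest-deadline job for one unit
        work -= 1
        if work > 0:
            heapq.heappush(ready, (deadline, idx, work))
        elif clock > deadline:
            all_met = False
    return all_met

print(edf_meets_deadlines([(0, 3, 7), (1, 2, 4), (3, 1, 9)]))   # True
```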

39 Schedulability test for Earliest Deadline First
A set of independent, preemptible periodic processes (with deadlines equal to their periods) is schedulable by EDF on one processor if and only if the total utilization U = Σ (execution time / period) ≤ 1.
Process | Execution time | Period
P1 | 1 | 8
P2 | 2 | 5
P3 | 4 | 10
In this case U = 1/8 + 2/5 + 4/10 = 0.925 = 92.5%, so the set is schedulable.
It has been proved that the problem of deciding whether it is possible to schedule a set of periodic processes is NP-hard if the periodic processes use semaphores to enforce mutual exclusion.
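The utilization check for the slide's task set, as a small sketch:

```python
def edf_utilization(tasks):
    """tasks: list of (execution_time, period). Returns (U, schedulable-by-EDF?)."""
    u = sum(c / t for c, t in tasks)
    return u, u <= 1.0

# The slide's processes: P1 = (1, 8), P2 = (2, 5), P3 = (4, 10)
print(edf_utilization([(1, 8), (2, 5), (4, 10)]))   # (0.925, True)
```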

