Presentation on theme: "Lecture 4: CPU Scheduling". Presentation transcript:

1 Lecture 4: CPU Scheduling

2 Basic Concepts: With a single process, only one process runs at a time, so whenever that process waits the CPU sits idle and the waiting time is wasted. Maximum CPU utilization is obtained with multiprogramming.

3 Alternating Sequence of CPU and I/O Bursts (figure)

4 Basic Concepts: Process execution consists of a cycle of CPU execution and I/O wait. CPU burst: a time interval in which a process uses only the CPU. I/O burst: a time interval in which a process uses only I/O devices.

5 CPU Scheduler: When the CPU becomes idle, the operating system selects one process from the ready queue to be executed. The queue can be ordered by many mechanisms: FIFO, priority, and so on. The records in the queue are generally the process control blocks (PCBs) of the processes.

6 CPU Scheduler: Scheduling occurs at two levels: long-term scheduling and short-term scheduling. The short-term scheduler is the CPU scheduler.

7 CPU Scheduler: CPU scheduling decisions may take place when a process: 1. switches from running to waiting state; 2. switches from running to ready state; 3. switches from waiting to ready state; 4. terminates. Scheduling under 1 and 4 is nonpreemptive; all other scheduling is preemptive.

8 Dispatcher: The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. This involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.

9 Scheduling Criteria: CPU utilization: keep the CPU as busy as possible. Throughput: number of processes that complete their execution per time unit. Turnaround time: amount of time to execute a particular process. Waiting time: amount of time a process has been waiting in the ready queue. Response time: amount of time from when a request was submitted until the first response is produced, not until output is complete (for time-sharing environments).
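
The slides define these criteria in words only. As a rough illustration (not part of the lecture), the small Python helper below computes turnaround and waiting time from assumed per-process arrival, burst, and completion times, ignoring any I/O inside the measured interval; the function name criteria and the example numbers are invented for illustration.

    # Hypothetical helper (not from the lecture): derive the criteria for one
    # process from its arrival, burst, and completion times.
    def criteria(arrival, burst, completion):
        """Return (turnaround, waiting) for a single process."""
        turnaround = completion - arrival   # submission to completion
        waiting = turnaround - burst        # time spent in the ready queue
        return turnaround, waiting

    # A process arriving at t=1 that needs 4 units of CPU and finishes at t=8
    # has turnaround 7 and waiting time 3.
    print(criteria(arrival=1, burst=4, completion=8))  # (7, 3)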

10 Scheduling Algorithm Optimization Criteria: maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.

11 Scheduling Algorithms: First-Come, First-Served Scheduling; Shortest-Job-First Scheduling; Priority Scheduling; Round-Robin Scheduling; Multilevel Queue Scheduling; Multilevel Feedback Queue Scheduling.

12 First-Come, First-Served Scheduling

13 First-Come, First-Served (FCFS) Scheduling: By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue.

14 First-Come, First-Served (FCFS) Scheduling: FCFS is the same as FIFO. It is simple and fair, but performance is poor: the average queueing time may be long. What are the average queueing and residence times for this scenario? How do the average queueing and residence times depend on the ordering of these processes in the queue?

15 Example of FCFS. Process / Burst Time: P1 = 24, P2 = 3, P3 = 3. Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 (0-24), P2 (24-27), P3 (27-30). Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17.

16 FCFS Scheduling (Cont): Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 (0-3), P3 (3-6), P1 (6-30). Waiting time for P1 = 6, P2 = 0, P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3, much better than the previous case. Convoy effect: short processes wait behind a long process.

17 FCFS Scheduling (another example). Process (Arrival Time, Service Time): P1 (0, 8), P2 (1, 4), P3 (2, 9), P4 (3, 5). FCFS Gantt chart (residence time at the CPU): P1 (0-8), P2 (8-12), P3 (12-21), P4 (21-26). Average wait = ((0 - 0) + (8 - 1) + (12 - 2) + (21 - 3))/4 = (0 + 7 + 10 + 18)/4 = 35/4 = 8.75.
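
The FCFS examples above can be reproduced in code. The Python sketch below is a minimal simulation written for this transcript (not from the slides); the function name fcfs and the tuple layout (name, arrival, burst) are assumptions. It reproduces the 35/4 = 8.75 average wait of the last example.

    # Minimal FCFS sketch (assumed implementation): serve processes strictly in
    # arrival order; a process starts at the later of its arrival time and the
    # moment the CPU becomes free.
    def fcfs(processes):
        """processes: list of (name, arrival, burst) sorted by arrival time.
        Returns (schedule, average_wait), where schedule is [(name, start, end)]."""
        time, schedule, total_wait = 0, [], 0
        for name, arrival, burst in processes:
            start = max(time, arrival)      # CPU may idle until the process arrives
            end = start + burst
            total_wait += start - arrival   # time spent waiting in the ready queue
            schedule.append((name, start, end))
            time = end
        return schedule, total_wait / len(processes)

    procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
    print(fcfs(procs))
    # ([('P1', 0, 8), ('P2', 8, 12), ('P3', 12, 21), ('P4', 21, 26)], 8.75)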

18 Shortest-Job-First (SJF) Scheduling

19 Shortest-Job-First (SJF) Scheduling: Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. SJF is optimal: it gives the minimum average waiting time for a given set of processes. The difficulty is knowing the length of the next CPU request.

20 Example of SJF. Process / Burst Time: P1 = 6, P2 = 8, P3 = 7, P4 = 3. SJF scheduling chart: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24). Average waiting time = (3 + 16 + 9 + 0)/4 = 7.

21 The same example using FCFS: waiting times are P1 = 0, P2 = 6, P3 = 14, P4 = 21. Average: 41/4 = 10.25.

22 Shortest-Job-First (SJF) Scheduling: Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. Two schemes: nonpreemptive, where once the CPU is given to a process it cannot be preempted until it completes its CPU burst; and preemptive, where a new process whose CPU burst length is less than the remaining time of the currently executing process preempts it. The preemptive scheme is known as Shortest-Remaining-Time-First (SRTF). SJF is optimal: it gives the minimum average waiting time for a given set of processes.

23 Example of Non-Preemptive SJF. Process (Arrival Time, Burst Time): P1 (0.0, 7), P2 (2.0, 4), P3 (4.0, 1), P4 (5.0, 4). SJF (non-preemptive) Gantt chart: P1 (0-7), P3 (7-8), P2 (8-12), P4 (12-16). Average waiting time = (0 + 6 + 3 + 7)/4 = 4.
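
A non-preemptive SJF scheduler can be sketched the same way. The Python code below is an assumed implementation written for this transcript, with an invented function name and a simple arrival-order tie-break; it reproduces the schedule and the average waiting time of 4 from the example above.

    # Non-preemptive SJF sketch (assumed): whenever the CPU goes idle, run the
    # arrived process with the shortest burst; if nothing has arrived, jump ahead.
    def sjf_nonpreemptive(processes):
        """processes: list of (name, arrival, burst). Returns (schedule, average_wait)."""
        remaining = list(processes)
        time, schedule, total_wait = 0, [], 0
        while remaining:
            ready = [p for p in remaining if p[1] <= time]
            if not ready:                              # CPU idle: skip to next arrival
                time = min(p[1] for p in remaining)
                continue
            job = min(ready, key=lambda p: p[2])       # shortest burst; ties by list order
            name, arrival, burst = job
            total_wait += time - arrival
            schedule.append((name, time, time + burst))
            time += burst
            remaining.remove(job)
        return schedule, total_wait / len(processes)

    procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
    print(sjf_nonpreemptive(procs))
    # ([('P1', 0, 7), ('P3', 7, 8), ('P2', 8, 12), ('P4', 12, 16)], 4.0)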

24 Example of Preemptive SJF. Process (Arrival Time, Burst Time): P1 (0.0, 7), P2 (2.0, 4), P3 (4.0, 1), P4 (5.0, 4). SJF (preemptive) Gantt chart: P1 (0-2), P2 (2-4), P3 (4-5), P2 (5-7), P4 (7-11), P1 (11-16). Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
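
The preemptive variant (SRTF) can be simulated one time unit at a time. The sketch below is an assumed simulator written for this transcript, not code from the lecture; with the same four processes it yields the average waiting time of 3 shown above.

    # Unit-time SRTF sketch (assumed): at each tick run the arrived process with
    # the least remaining burst, so a newly arrived shorter job preempts.
    def srtf(processes):
        """processes: list of (name, arrival, burst). Returns average waiting time."""
        remaining = {name: burst for name, _, burst in processes}
        arrived_at = {name: arr for name, arr, _ in processes}
        finish, time = {}, 0
        while remaining:
            ready = [n for n in remaining if arrived_at[n] <= time]
            if not ready:                 # nothing has arrived yet: idle one tick
                time += 1
                continue
            current = min(ready, key=lambda n: remaining[n])   # least remaining time
            remaining[current] -= 1
            time += 1
            if remaining[current] == 0:
                finish[current] = time
                del remaining[current]
        waits = [finish[name] - arr - burst for name, arr, burst in processes]
        return sum(waits) / len(processes)

    procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
    print(srtf(procs))  # 3.0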

25 Determining Length of Next CPU Burst: The real difficulty with the SJF algorithm is knowing the length of the next CPU request. We can only estimate the length: we may not know it, but we can predict it, using the lengths of previous CPU bursts and exponential averaging.

26 Shortest-Job-First (SJF) Scheduling: SJF is optimal for minimizing queueing time, but impossible to implement exactly, so the scheduler tries to predict the next burst of each process based on its previous history. Predicting the time the process will use on its next schedule: T(n+1) = w * t(n) + (1 - w) * T(n). Here T(n+1) is the predicted length of the next burst, t(n) is the measured length of the current burst, T(n) is the previous prediction (an exponentially weighted average of all previous bursts), and w is a weighting factor emphasizing recent or older bursts.
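
As a small illustration of the exponential-averaging formula (written for this transcript; the parameter names alpha and tau0 and the sample burst history are invented), the following Python function folds each observed burst into the running prediction.

    # Exponential averaging sketch: tau is the running prediction, alpha the
    # weight given to the most recently observed burst (the slide's w).
    def predict_next(observed_bursts, alpha=0.5, tau0=10.0):
        """Return the predicted length of the next CPU burst."""
        tau = tau0                                   # initial guess, no history yet
        for t in observed_bursts:
            tau = alpha * t + (1 - alpha) * tau      # T(n+1) = w*t(n) + (1-w)*T(n)
        return tau

    print(predict_next([6, 4, 6, 4, 13, 13, 13]))    # 12.0 for this assumed history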

27 Priority Scheduling

28 Priority Scheduling: A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). Priority scheduling can be preemptive or nonpreemptive. SJF is priority scheduling in which the priority is the predicted next CPU burst time. Problem: starvation; low-priority processes may never execute. Solution: aging; as time progresses, increase the priority of waiting processes.
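
Aging can be illustrated with a toy policy (an assumption made for this transcript, not a rule from the slides): every time the scheduler picks a process, each process left waiting has its priority number reduced, so a starved process eventually reaches the highest priority.

    # Toy aging sketch: smaller number = higher priority, as on the slide.
    def pick_with_aging(ready, step=1):
        """ready: dict mapping process name -> priority number.
        Returns the chosen name and ages (boosts) everyone left waiting."""
        chosen = min(ready, key=ready.get)                 # smallest number wins
        for name in ready:
            if name != chosen:
                ready[name] = max(0, ready[name] - step)   # raise priority of waiters
        return chosen

    queue = {"P1": 3, "P2": 1, "P3": 7}
    print(pick_with_aging(queue), queue)   # P2 {'P1': 2, 'P2': 1, 'P3': 6}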

29 Round Robin (RR)

30 Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n - 1)q time units. Performance: if q is large, RR degenerates to FIFO; if q is small, it approaches processor sharing; q must be large with respect to the context-switch time, otherwise the overhead is too high.

31 Example of RR with Time Quantum = 4. Process / Burst Time: P1 = 24, P2 = 3, P3 = 3. The Gantt chart is: P1 (0-4), P2 (4-7), P3 (7-10), P1 (10-14), P1 (14-18), P1 (18-22), P1 (22-26), P1 (26-30). Typically, RR gives higher average turnaround than SJF, but better response.

32 Example of RR with Time Quantum = 20. Process / Burst Time: P1 = 53, P2 = 17, P3 = 68, P4 = 24. The Gantt chart is: P1 (0-20), P2 (20-37), P3 (37-57), P4 (57-77), P1 (77-97), P3 (97-117), P4 (117-121), P1 (121-134), P3 (134-154), P3 (154-162). Typically, RR gives higher average turnaround than SJF, but better response.

33 Example of RR with Time Quantum = 4 (setup). Process (Arrival Time, Service Time): P1 (0, 8), P2 (1, 4), P3 (2, 9), P4 (3, 5). Round Robin, quantum = 4, no priority-based preemption. Note: this example violates the rules for quantum size, since most processes do not finish in one quantum.

34 Example of RR with Time Quantum = 4 (solution). Process (Arrival Time, Service Time): P1 (0, 8), P2 (1, 4), P3 (2, 9), P4 (3, 5). Round Robin, quantum = 4, no priority-based preemption. Gantt chart: P1 (0-4), P2 (4-8), P3 (8-12), P4 (12-16), P1 (16-20), P3 (20-24), P4 (24-25), P3 (25-26). Average wait = (12 + (4 - 1) + ((8 - 2) + (20 - 12) + (25 - 24)) + ((12 - 3) + (24 - 16)))/4 = (12 + 3 + 15 + 17)/4 = 47/4 = 11.75. Note: this example violates the rules for quantum size, since most processes do not finish in one quantum.
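
The round-robin examples can also be checked in code. The Python sketch below is an assumed implementation (the function name, the tuple layout, and the rule that a preempted process re-joins the queue behind processes that arrived during its slice are choices made here, not stated on the slides); with quantum 4 and the four processes above it reproduces the 47/4 = 11.75 average wait.

    from collections import deque

    # Round-robin sketch (assumed): arrivals join the back of the ready queue;
    # a preempted process re-joins behind them.
    def round_robin(processes, quantum):
        """processes: list of (name, arrival, burst) sorted by arrival.
        Returns (slices, average_wait)."""
        remaining = {n: b for n, a, b in processes}
        last_ready = {n: a for n, a, b in processes}   # when each last became ready
        wait = {n: 0 for n, a, b in processes}
        arrivals = deque(processes)
        queue, slices, time = deque(), [], 0

        def admit(up_to):                              # move arrivals into the queue
            while arrivals and arrivals[0][1] <= up_to:
                queue.append(arrivals.popleft()[0])

        admit(0)
        while queue or arrivals:
            if not queue:                              # CPU idle until next arrival
                time = arrivals[0][1]
                admit(time)
            name = queue.popleft()
            wait[name] += time - last_ready[name]
            run = min(quantum, remaining[name])
            slices.append((name, time, time + run))
            time += run
            remaining[name] -= run
            admit(time)                                # arrivals during this slice
            if remaining[name] > 0:
                last_ready[name] = time
                queue.append(name)                     # preempted: back of the queue
        return slices, sum(wait.values()) / len(processes)

    procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
    print(round_robin(procs, quantum=4))   # average wait 11.75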

35 Multilevel Queue

36 Multilevel Queue: The ready queue is partitioned into separate queues, for example foreground (interactive) and background (batch). Each queue has its own scheduling algorithm: foreground uses RR, background uses FCFS. Scheduling must also be done between the queues: either fixed-priority scheduling (serve all from foreground, then from background), which carries a possibility of starvation, or time slicing, where each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g., 80% to foreground in RR and 20% to background in FCFS.
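
As a toy illustration of fixed-priority scheduling between two queues (a sketch made for this transcript, not the lecture's code), the helper below always serves the foreground queue before the background queue, which is exactly why starvation of background processes is possible.

    from collections import deque

    # Multilevel-queue sketch with fixed priority between the two queues.
    def pick_next(foreground, background):
        """foreground/background: deques of process names.
        Returns the next name to run, or None if both queues are empty."""
        if foreground:
            return foreground.popleft()    # foreground always wins (background can starve)
        if background:
            return background.popleft()
        return None

    fg, bg = deque(["I1", "I2"]), deque(["B1"])
    print(pick_next(fg, bg))   # I1; B1 waits until the foreground queue drains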

37 Multilevel Queue Scheduling

38 Multiple-Processor Scheduling: CPU scheduling is more complex when multiple CPUs are available. Some approaches for multiprocessor scheduling: Asymmetric multiprocessing, where only one processor accesses the system data structures, alleviating the need for data sharing, and all scheduling decisions are handled by that processor. Symmetric multiprocessing (SMP), where each processor is self-scheduling; all processes may be in a common ready queue, or each processor may have its own ready queue.

39 Some issues related to SMP. Processor affinity: when a process is migrated from one processor to another, the contents of the first processor's cache must be invalidated and the cache of the new processor repopulated, which involves a large overhead. To avoid this overhead, most SMP systems try to keep a process running on the same processor rather than migrating it; this is called processor affinity. Load balancing: attempts to keep the workload evenly distributed across all processors. It is especially needed in systems where each processor has its own queue, which is the case in most contemporary operating systems. Note that load balancing counteracts the benefits of processor affinity, so this is not an easy problem to solve.

40 Operating System Examples: Solaris uses priority-based thread scheduling. Windows schedules threads using a priority-based, preemptive scheduling algorithm.

