Processes. Operating Systems, Spring 2004

What is a process?
 An instance of an application execution
 The process is one of the most basic abstractions provided by the OS: an isolated computation context for each application
 Computation context = CPU state + address space + environment

CPU state = register contents
 Process Status Word (PSW): execution mode, last operation outcome, interrupt level
 Instruction Register (IR): the instruction currently being executed
 Program Counter (PC)
 Stack Pointer (SP)
 General-purpose registers

Address space
 Text: program code
 Data: predefined data (known at compile time)
 Heap: dynamically allocated data
 Stack: supports function calls
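As a small illustration (added here, not part of the original slides), the C sketch below prints one address from each region of the address space; the variable and function names are made up:

    #include <stdio.h>
    #include <stdlib.h>

    int initialized = 42;               /* data: predefined, known at compile time */

    void where_things_live(void) {      /* the function's machine code is in text  */
        int local = 7;                  /* stack: lives in this call's frame       */
        int *dynamic = malloc(sizeof *dynamic);   /* heap: allocated at run time   */
        *dynamic = local;

        printf("text : %p\n", (void *)where_things_live);
        printf("data : %p\n", (void *)&initialized);
        printf("heap : %p\n", (void *)dynamic);
        printf("stack: %p\n", (void *)&local);

        free(dynamic);
    }

    int main(void) {
        where_things_live();
        return 0;
    }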

Environment
 External entities
  - Terminal
  - Open files
  - Communication channels (local, and with other machines)

Process control block (PCB)
[Figure: the PCB kept by the kernel records the process state, memory, open files, accounting, priority, and owning user, plus the saved CPU registers (PSW, IR, PC, SP, general-purpose registers); the process image in storage holds the text, data, heap, and stack segments.]
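A hedged sketch of how such a PCB might be declared in C; the field names and sizes are illustrative, not those of any real kernel:

    #include <stdio.h>
    #include <sys/types.h>

    enum proc_state { CREATED, READY, RUNNING, BLOCKED, TERMINATED };

    struct cpu_context {            /* saved register contents                 */
        unsigned long psw;          /* process status word                     */
        unsigned long pc;           /* program counter                         */
        unsigned long sp;           /* stack pointer                           */
        unsigned long gpr[16];      /* general-purpose registers               */
    };

    struct pcb {
        pid_t              pid;
        enum proc_state    state;
        struct cpu_context ctx;     /* CPU state, saved on a context switch    */
        void              *mem_map; /* address-space description               */
        int                open_files[16];
        long               cpu_time_used;   /* accounting                      */
        int                priority;
        uid_t              user;
        struct pcb        *next;    /* link in the ready/blocked queues        */
    };

    int main(void) {
        struct pcb p = { .pid = 1234, .state = READY, .priority = 10 };
        printf("process %d: state %d, priority %d\n",
               (int)p.pid, (int)p.state, p.priority);
        return 0;
    }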

Process states
[Figure: state diagram. created -> ready; ready -> running (schedule); running -> ready (preempt); running -> blocked (wait for event); blocked -> ready (event done); running -> terminated.]

UNIX process states
[Figure: state diagram with states created, ready (user), ready (kernel), running (user), running (kernel), blocked, zombie, and terminated; transitions are labeled schedule, preempt, system call, interrupt, return, wait for event, and event done.]

Multiprocessing mechanisms
 Context switch
 Create a process and dispatch it (in Unix, fork()/exec())
 End a process (in Unix, exit() and wait())
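A minimal, hedged sketch of the Unix pattern the slide names: the parent forks a child, the child replaces itself with a new program via exec, and the parent waits for it to exit. The program run here (/bin/ls) is only an example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                 /* create a new process            */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                     /* child: load a new program image */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");                /* reached only if exec fails      */
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);           /* parent: wait for the child      */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;                           /* exit() is called implicitly     */
    }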

Threads
 Thread: a single execution within a process
 A multithreaded process consists of many co-existing executions
 Separate per thread: CPU state, stack
 Shared: everything else (text, data, heap, environment)
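A small POSIX-threads sketch (illustrative, not from the slides) showing two executions that share the process's data while each has its own stack; compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;                     /* data segment: shared by all threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        int local = 0;                          /* stack: private to this thread       */
        for (int i = 0; i < 100000; i++) {
            local++;
            pthread_mutex_lock(&lock);          /* shared data needs synchronization   */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        (void)arg;
        (void)local;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter);   /* 200000 */
        return 0;
    }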

Thread support
 Operating-system (kernel-level) threads
  - Advantage: thread scheduling is done by the OS, giving better CPU utilization
  - Disadvantage: overhead when there are many threads
 User-level threads
  - Advantage: low overhead
  - Disadvantage: not known to the OS; e.g., a thread that blocks on I/O blocks all the other threads in the same process

Multiprogramming
 Multiprogramming: having multiple jobs (processes) in the system
  - Interleaved (time-sliced) on a single CPU
  - Executed concurrently on multiple CPUs
  - Or both of the above
 Why multiprogramming? Responsiveness, utilization, concurrency
 Why not? Overhead, complexity

Responsiveness
[Figure: timelines for Jobs 1-3 with their arrival and termination times, comparing running each job to completion against interleaving the three jobs on the CPU.]

Workload matters!
 Would CPU sharing improve responsiveness if all jobs took the same time? No, it would make it worse!
 For a given workload, the answer depends on the coefficient of variation (CV) of the distribution of job runtimes
  - CV = standard deviation / mean
  - CV < 1 => CPU sharing does not help
  - CV > 1 => CPU sharing does help
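A small, hedged helper (not from the slides) that computes the coefficient of variation of a set of job runtimes and applies the rule above; the sample runtimes are made up.

    #include <stdio.h>
    #include <math.h>

    /* coefficient of variation = standard deviation / mean */
    double cv(const double *runtimes, int n) {
        double sum = 0.0, sum_sq = 0.0;
        for (int i = 0; i < n; i++) {
            sum += runtimes[i];
            sum_sq += runtimes[i] * runtimes[i];
        }
        double mean = sum / n;
        double variance = sum_sq / n - mean * mean;
        return sqrt(variance) / mean;
    }

    int main(void) {
        /* hypothetical runtimes: many short jobs and one very long one */
        double runtimes[] = { 1, 1, 2, 1, 3, 2, 1, 200 };
        int n = sizeof runtimes / sizeof runtimes[0];
        double c = cv(runtimes, n);
        printf("CV = %.2f => CPU sharing %s\n",
               c, c > 1.0 ? "helps" : "does not help");
        return 0;
    }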

Real workloads
 Exponential distribution: CV = 1; heavy-tailed distribution: CV > 1
 The distribution of job runtimes in real systems is heavy tailed, with CV ranging from 3 to 70
 Conclusion: CPU sharing does improve responsiveness
 CPU sharing is approximated by time slicing: interleaved execution

Utilization
[Figure: CPU and disk timelines. With Job 1 alone, the CPU is idle during each of its I/O operations (1st, 2nd, 3rd); adding Job 2 lets the CPU run one job while the other waits for its I/O to end.]

Workload matters!
 Does it really matter? Yes: if all jobs are CPU bound (or all are I/O bound), multiprogramming does not help to improve utilization
 A suitable job mix is created by long-term scheduling
  - Jobs are classified on-line as CPU bound or I/O bound according to the job's history

Concurrency
 Concurrent programming: several processes interact to work on the same problem
  - e.g., ls -l | more
 Simultaneous execution of related applications
  - e.g., Word + Excel + PowerPoint
 Background execution
  - e.g., polling/receiving while working on something else

The cost of multiprogramming
 Switching overhead: saving/restoring context wastes CPU cycles
 Degraded performance: resource contention, cache misses
 Complexity: synchronization, concurrency control, deadlock avoidance/prevention

Short-term scheduling
[Figure: the process state diagram again (created, ready, running, blocked, terminated); short-term scheduling decides which ready process moves to running.]

Short-term scheduling
 A process's execution pattern consists of alternating CPU activity and I/O waits: CPU burst, I/O burst, CPU burst, I/O burst, ...
 Processes ready for execution are held in a ready (run) queue
 The short-term scheduler (STS) picks a process from the ready queue whenever the CPU becomes idle

Metrics: response time
[Figure: timeline from "job arrives / becomes ready to run", through "starts running", to "job terminates / blocks waiting for I/O", marking T_wait, T_run, and T_resp.]
 T_resp = T_wait + T_run
 Response time (turnaround time) is the average of T_resp over all jobs

Other metrics
 Wait time: the average of T_wait; this is the part that is under the system's control
 Response ratio, or slowdown: slowdown = T_resp / T_run
 Throughput and utilization depend on the user-imposed workload => less useful as scheduler metrics
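As a quick worked example (with made-up numbers): a job that becomes ready at t = 0, starts running at t = 6, and terminates at t = 8 has T_wait = 6, T_run = 2, T_resp = 6 + 2 = 8, and slowdown = 8 / 2 = 4.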

A note about running time (T_run)
 T_run here is the length of the CPU burst
  - When a process requests I/O it is still "running" in the system, but it is no longer part of the STS workload
 From the STS's point of view, I/O-bound processes are short processes, even though a text-editor session may last for hours!

Off-line vs. on-line scheduling
 Off-line algorithms
  - Get all the information about all the jobs to schedule as their input
  - Output the scheduling sequence
  - Preemption is never needed
 On-line algorithms
  - Jobs arrive at unpredictable times
  - Very little information is available in advance
  - Preemption compensates for the lack of knowledge

First-Come-First-Served (FCFS)
 Schedules jobs in the order in which they arrive
  - Off-line FCFS schedules them in the order they appear in the input
 Runs each job to completion
 Works both on-line and off-line
 Simple; a base case for analysis
 Poor response time

Shortest Job First (SJF)
 Best (average) response time
  [Figure: a short job, a long job, and another short job; running the short jobs first improves average response time.]
 Inherently off-line: all the jobs and their run-times must be available in advance
 (A small simulation contrasting FCFS and SJF follows below.)
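A hedged, self-contained sketch (not from the slides) that computes the average response time of the same job set under off-line FCFS and under SJF; the job runtimes are made up, and all jobs are assumed to arrive at time 0.

    #include <stdio.h>
    #include <stdlib.h>

    /* Average response time when jobs (all arriving at t = 0) run to
     * completion in the given order: job i waits for all jobs before it. */
    static double avg_response(const int *runtime, int n) {
        double total = 0.0;
        int clock = 0;
        for (int i = 0; i < n; i++) {
            clock += runtime[i];        /* job i finishes at this time        */
            total += clock;             /* T_resp = finish time - arrival (0) */
        }
        return total / n;
    }

    static int by_runtime(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int fcfs[] = { 24, 3, 3 };      /* arrival order (hypothetical)       */
        int n = sizeof fcfs / sizeof fcfs[0];

        int sjf[] = { 24, 3, 3 };
        qsort(sjf, n, sizeof sjf[0], by_runtime);   /* SJF = shortest first   */

        printf("FCFS average response time: %.2f\n", avg_response(fcfs, n)); /* 27.00 */
        printf("SJF  average response time: %.2f\n", avg_response(sjf, n));  /* 13.00 */
        return 0;
    }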

Preemption
 Preemption is the act of stopping a running job and scheduling another in its place
 Context switch: switching the CPU from one job to another

Using preemption
 On-line short-term scheduling algorithms use preemption for:
  - Adapting to changing conditions (e.g., new jobs arriving)
  - Compensating for lack of knowledge (e.g., unknown job run-times)
 Periodic preemption keeps the system in control
 It improves fairness and gives I/O-bound processes a chance to run

Shortest Remaining Time first (SRT)
 Job run-times are known; job arrival times are not
 When a new job arrives:
  - If its run-time is shorter than the remaining time of the currently executing job, preempt the current job and schedule the newly arrived job
  - Otherwise, continue the current job and insert the new job into a queue sorted by remaining time
 When a job terminates, select the job at the head of the queue for execution (see the decision sketch below)
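A hedged sketch of just the arrival-time decision described above; the job structure, the tiny array-based queue, and the sample jobs are all hypothetical.

    #include <stdio.h>

    #define MAXQ 16

    struct job { int id; int remaining; };

    /* A tiny ready queue kept sorted by remaining time (smallest first). */
    static struct job queue[MAXQ];
    static int qlen = 0;

    static void queue_insert_sorted(struct job j) {
        int i = qlen++;
        while (i > 0 && queue[i - 1].remaining > j.remaining) {
            queue[i] = queue[i - 1];
            i--;
        }
        queue[i] = j;
    }

    static struct job running_slot;
    static struct job *current = NULL;

    /* Arrival rule from the slide: preempt only if the newcomer is shorter
     * than what remains of the currently executing job. */
    static void on_arrival(struct job arrived) {
        if (current == NULL) {
            running_slot = arrived; current = &running_slot;
        } else if (arrived.remaining < current->remaining) {
            queue_insert_sorted(*current);       /* current goes back, sorted */
            running_slot = arrived; current = &running_slot;
            printf("preempt: job %d takes over\n", arrived.id);
        } else {
            queue_insert_sorted(arrived);        /* keep running 'current'    */
        }
    }

    int main(void) {
        on_arrival((struct job){ .id = 1, .remaining = 10 });
        on_arrival((struct job){ .id = 2, .remaining = 3 });   /* preempts job 1 */
        on_arrival((struct job){ .id = 3, .remaining = 8 });   /* queued         */
        printf("running: job %d; queue head: job %d\n", current->id, queue[0].id);
        return 0;
    }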

Round Robin (RR)
 Neither job arrival times nor job run-times are known
 Run each job cyclically for a short time quantum; this approximates CPU sharing
[Figure: timeline of Jobs 1-3 arriving at different times and being interleaved in time slices until each terminates.]
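A hedged sketch (not from the slides) of round-robin scheduling over a fixed job set, all assumed to arrive at time 0, with a hypothetical quantum of 4 time units; it reports each job's finish time.

    #include <stdio.h>

    int main(void) {
        int remaining[] = { 24, 3, 3 };           /* hypothetical run-times   */
        int n = sizeof remaining / sizeof remaining[0];
        const int quantum = 4;

        int clock = 0, left = n;
        while (left > 0) {
            for (int i = 0; i < n; i++) {         /* cycle over the jobs      */
                if (remaining[i] == 0)
                    continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                clock += slice;                   /* job i runs for one slice */
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    printf("job %d finishes at t = %d\n", i + 1, clock);
                    left--;
                }
            }
        }
        return 0;
    }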

Responsiveness (revisited)
[Figure: the responsiveness timelines from earlier, shown again now that time slicing (RR) has been introduced.]

Priority scheduling
 RR is oblivious to a process's past: I/O-bound processes are treated the same as CPU-bound processes
 Solution: prioritize processes according to their past CPU usage, estimating the next CPU burst from the previous ones
  - T_n is the duration of the n-th CPU burst
  - E_{n+1} is the estimate of the next CPU burst (see the sketch below)
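The slide defines T_n and E_{n+1} but the estimator itself appeared in a figure; a common choice (an assumption here, not necessarily the formula on the original slide) is the exponential average E_{n+1} = a*T_n + (1 - a)*E_n. A small sketch with made-up burst lengths:

    #include <stdio.h>

    /* Exponential average of CPU-burst lengths:
     * E_{n+1} = a * T_n + (1 - a) * E_n, with 0 <= a <= 1. */
    static double next_estimate(double alpha, double t_n, double e_n) {
        return alpha * t_n + (1.0 - alpha) * e_n;
    }

    int main(void) {
        double alpha = 0.5;              /* hypothetical weighting factor    */
        double estimate = 10.0;          /* initial guess E_0                */
        double bursts[] = { 6, 4, 6, 4, 13, 13, 13 };   /* observed T_n      */
        int n = sizeof bursts / sizeof bursts[0];

        for (int i = 0; i < n; i++) {
            estimate = next_estimate(alpha, bursts[i], estimate);
            printf("after T_%d = %4.1f, estimate E_%d = %5.2f\n",
                   i, bursts[i], i + 1, estimate);
        }
        return 0;
    }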

Multilevel feedback queues
[Figure: new jobs enter a queue with quantum = 10; below it are queues with quantum = 20 and quantum = 40, and a final FCFS queue; jobs leave the system when terminated.]

Multilevel feedback queues
 Priorities are implicit in this scheme
 Very flexible
 Starvation is possible: if short jobs keep arriving, long jobs get starved
 Solutions: let it be, or use aging (see the sketch below)
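A hedged sketch of the demotion rule suggested by the figure: a job that uses up its whole quantum drops to the next (slower) queue; the quanta match the figure, but the rest is illustrative.

    #include <stdio.h>

    #define NLEVELS 4

    /* Quanta per level, as in the figure; the last level is served FCFS
     * (modeled here as "run to completion"). */
    static const int quantum[NLEVELS] = { 10, 20, 40, -1 };

    struct job {
        int id;
        int remaining;
        int level;          /* current queue: 0 is the highest priority     */
    };

    /* Run one scheduling decision for 'j'; returns 1 if the job finished. */
    static int run_once(struct job *j) {
        int q = quantum[j->level];
        int slice = (q < 0 || j->remaining < q) ? j->remaining : q;
        j->remaining -= slice;
        printf("job %d ran %d units at level %d\n", j->id, slice, j->level);
        if (j->remaining == 0)
            return 1;
        if (j->level < NLEVELS - 1)
            j->level++;     /* used the full quantum: demote to a slower queue */
        return 0;
    }

    int main(void) {
        struct job j = { .id = 1, .remaining = 100, .level = 0 };
        while (!run_once(&j))
            ;               /* 10 + 20 + 40 + 30 = 100 units in total          */
        return 0;
    }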

Priority scheduling in UNIX
 Multilevel feedback queues with the same quantum at each queue; one queue per priority
 Priority is based on past CPU usage: pri = cpu_use + base + nice
 cpu_use is dynamically adjusted
  - Incremented on each clock interrupt (100 times per second)
  - Halved for all processes once per second
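A hedged sketch of that adjustment for a single process; the tick rate, decay interval, and formula follow the slide, but the field names and numbers are otherwise illustrative of classic UNIX schedulers rather than any specific kernel (in classic UNIX, a larger computed value means a lower scheduling priority).

    #include <stdio.h>

    struct proc {
        int cpu_use;        /* recent CPU usage, in clock ticks              */
        int base;           /* base priority of the process class            */
        int nice;           /* user-settable niceness                        */
    };

    /* Called 100 times per second, for the process currently on the CPU. */
    static void clock_tick(struct proc *p) { p->cpu_use++; }

    /* Called once per second, for every process in the system. */
    static void decay(struct proc *p) { p->cpu_use /= 2; }

    static int priority(const struct proc *p) {
        return p->cpu_use + p->base + p->nice;   /* pri = cpu_use + base + nice */
    }

    int main(void) {
        struct proc p = { .cpu_use = 0, .base = 50, .nice = 0 };
        for (int tick = 0; tick < 100; tick++)   /* one second of hogging the CPU */
            clock_tick(&p);
        printf("priority value after 1 busy second: %d\n", priority(&p));  /* 150 */
        decay(&p);
        printf("priority value after decay        : %d\n", priority(&p));  /* 100 */
        return 0;
    }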

Fair share scheduling
 Given a set of processes with associated weights, a fair share scheduler should allocate CPU to each process in proportion to its respective weight
 Used to achieve pre-defined goals:
  - Administrative considerations: paying for machine usage, importance of the project, personal importance, etc.
  - Quality of service and soft real-time: video, audio

Perfect fairness
 A fair share scheduling algorithm achieves perfect fairness in a time interval (t1, t2) if every process receives CPU time in that interval exactly in proportion to its weight (see the formulation below)
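One way to state that condition formally (a reconstruction from the definitions above, with W_i the CPU time process i receives in the interval and S_i its weight; the exact notation on the original slide may differ):

    W_i(t1, t2) = (S_i / sum_j S_j) * (t2 - t1),   for every process i runnable throughout (t1, t2)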

Wellness criterion for FSS
 An ideal fair share scheduling algorithm achieves perfect fairness for all time intervals
 The goal of a practical FSS algorithm is to produce CPU allocations as close as possible to those of an ideal FSS algorithm

Fair share scheduling: algorithms
 Weighted Round Robin
  - Shares are not spread uniformly in time
 Lottery scheduling
  - Each process gets a number of lottery tickets proportional to its CPU allocation
  - The scheduler picks a ticket at random and gives the CPU to the winning client
  - Only statistically fair, and relatively high complexity
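A hedged sketch (illustrative, not the slides' code) of a single lottery drawing over three hypothetical clients whose ticket counts are proportional to their shares:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    struct client { const char *name; int tickets; };

    /* Draw one lottery: pick a ticket uniformly at random and return the
     * client holding it, so CPU time is statistically proportional to tickets. */
    static const struct client *draw(const struct client *c, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += c[i].tickets;
        int winner = rand() % total;          /* the winning ticket number     */
        for (int i = 0; i < n; i++) {
            if (winner < c[i].tickets)
                return &c[i];
            winner -= c[i].tickets;
        }
        return &c[n - 1];                     /* not reached                   */
    }

    int main(void) {
        struct client clients[] = { {"A", 50}, {"B", 30}, {"C", 20} };
        int n = sizeof clients / sizeof clients[0];
        srand((unsigned)time(NULL));
        for (int slot = 0; slot < 10; slot++)           /* 10 time slices      */
            printf("slice %d goes to client %s\n", slot, draw(clients, n)->name);
        return 0;
    }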

Fair share scheduling: VTRR
 Virtual Time Round Robin (VTRR)
  - Order the ready queue by decreasing share (the highest share at the head)
  - Run Round Robin as usual
  - Once a process that has exhausted its share is encountered, go back to the head of the queue

Multiprocessor scheduling
 Homogeneous vs. heterogeneous processors
 Homogeneity allows for load sharing: a separate ready queue for each processor, or one common ready queue?
 Scheduling: symmetric or master/slave

A bank or a supermarket?
[Figure: on one side, arriving jobs wait in a single shared queue served by CPU1-CPU4 (an M/M/4 system, like a bank); on the other, arriving jobs are split into a separate queue per CPU (4 x M/M/1, like supermarket checkout lines); departing jobs leave from each CPU.]

It is a bank!
[Figure: comparison of the M/M/4 shared-queue system with the 4 x M/M/1 per-CPU queues; the single shared queue is the better arrangement.]