Operating Systems Chapter 6
Main functions of an operating system 1. User/computer interface: provides an interface between the user and the computer 2. Resource manager: manages all of the computer's resources. Process manager Memory manager Device manager File manager, etc.
A model of an operating system: [Diagram: the user command interface sits on top of resource management, which comprises the process manager, memory manager, device manager, file manager and network manager.]
Operating system as a user/computer interface A user command such as open, save or print corresponds to a sequence of machine-code instructions. The user does not need to provide these sequences of instructions: the operating system translates the commands into machine-code instructions.
Operating system as a resource manager Resource management Process manager: which program is executed next? How much time is given to each program? Memory manager: best use of the memory to run as many programs as possible I/O device (e.g. printer) manager: which program should use a particular I/O device? Network manager: which computer should execute a particular program?
Types of operating systems Multi-programming Operating system can handle several programs at once. Time-sharing Operating system allows many users to share the same computer and interact with it. Or, in the case of a single-user computer (e.g. PC), the user can work on several programs at the same time.
How does the operating system get started? Main memory has a small section of permanent read-only memory (ROM). ROM contains a program, the bootstrap. At start-up the CPU runs the bootstrap, which directs the CPU to load the operating system from disk and transfer control to it.
[Diagram: at boot, main memory holds the bootstrap program in ROM; the bootstrap loads the operating system from disk storage into RAM and transfers control to it.]
Operating system as a process manager Coordinates the occupation of the main memory by different processes and their data. At any time the operating system may be dealing with many processes. E.g. a process may be executing, waiting in main memory, or swapped out of main memory.
Process State As a process executes, it changes state new: The process is being created. running: Instructions are being executed. waiting: The process is waiting for some event to occur. ready: The process is waiting to be assigned to a processor. terminated: The process has finished execution.
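The five states and the transitions between them can be sketched as a transition table. This is an illustrative model only; the state and event names below are not from any real OS API:

```python
# Hypothetical sketch of the five-state process model as a
# (state, event) -> next-state transition table.
TRANSITIONS = {
    ("new", "admit"): "ready",          # long-term scheduler admits the process
    ("ready", "dispatch"): "running",   # short-term scheduler allocates the CPU
    ("running", "timeout"): "ready",    # time quantum expires
    ("running", "io_wait"): "waiting",  # process requests I/O
    ("waiting", "io_done"): "ready",    # awaited event occurs
    ("running", "exit"): "terminated",  # process finishes execution
}

def next_state(state, event):
    """Return the next process state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")
```

Note that a process never moves directly from waiting to running: the awaited event makes it ready, and only the dispatcher gives it the CPU.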
Process Control Block (PCB) Information associated with each process. Identifier Process state Program counter CPU registers CPU scheduling information Memory-management information Accounting information I/O status information
CPU Switch From Process to Process The PCB is saved when a process is removed from the CPU and another process takes its place (context switch).
Process Scheduling Queues Job queue – set of all processes in the system. Ready queue – set of all processes residing in main memory, ready and waiting to execute. Device queues – set of processes waiting for an I/O device. Process migration between the various queues.
Schedulers Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU.
Medium Term Scheduling Time sharing Operating systems may introduce a medium term scheduler Removes processes from memory (and thus CPU contention) to reduce the degree of multiprogramming – swapping Swapping may be needed to improve the process mix or to free up memory if it has become overcommitted
[Diagram: process requests enter the job queue, move to the ready queue, run on the CPU, cycle through I/O and back to the ready queue, may be moved to an intermediate queue by the medium-term scheduler, and leave the system at the end.]
Scheduling Criteria CPU utilization – keep the CPU as busy as possible Throughput – # of processes that complete their execution per time unit Turnaround time – amount of time to execute a particular process: waiting to get into memory + waiting in the ready queue + executing on the CPU + I/O Waiting time – amount of time a process has been waiting in the ready queue Response time – amount of time from when a request was submitted until the first response is produced
Optimization Criteria Max CPU utilization Max throughput Min turnaround time Min waiting time Min response time In most cases we optimize the average measure
Scheduling Algorithms CPU–I/O burst cycle – process execution consists of a cycle of CPU execution and I/O wait. First-Come, First-Served (FCFS) Process burst times: P1 = 24, P2 = 3, P3 = 3. Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 from 0 to 24, P2 from 24 to 27, P3 from 27 to 30. Waiting time for P1 = 0; P2 = 24; P3 = 27 Average waiting time: (0 + 24 + 27)/3 = 17
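The FCFS waiting times above can be computed with a short sketch (assuming, as on the slide, that all processes arrive at time 0 in the given order; the function name is illustrative):

```python
def fcfs_waiting_times(burst_times):
    """FCFS: each process waits for the sum of all bursts before it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # this process starts when the previous ones end
        elapsed += burst
    return waits

# The slide's example: P1 = 24, P2 = 3, P3 = 3
waits = fcfs_waiting_times([24, 3, 3])   # [0, 24, 27]
average = sum(waits) / len(waits)        # (0 + 24 + 27) / 3 = 17
```

Note how the one long burst (P1) penalizes every process behind it, which is why FCFS average waiting time is sensitive to arrival order.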
Shortest-Job-First (SJF) Scheduling Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time (on a tie use FCFS) Two schemes: nonpreemptive – once the CPU is given to the process it cannot be preempted until it completes its CPU burst. preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF). SJF is optimal – gives minimum average waiting time for a given set of processes.
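A minimal sketch of the nonpreemptive scheme, assuming all processes arrive at time 0 (Python's stable sort gives FCFS ordering on ties, as the slide requires):

```python
def sjf_waiting_times(burst_times):
    """Non-preemptive SJF for processes that all arrive at time 0.
    Sort process indices by burst length; stable sort breaks ties FCFS."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits, elapsed = [0] * len(burst_times), 0
    for i in order:
        waits[i] = elapsed           # process i starts once shorter jobs finish
        elapsed += burst_times[i]
    return waits

# Same bursts as the FCFS example: P1 = 24, P2 = 3, P3 = 3
waits = sjf_waiting_times([24, 3, 3])   # P1 waits 6, P2 waits 0, P3 waits 3
```

With these bursts the average waiting time drops from 17 (FCFS) to (6 + 0 + 3)/3 = 3, illustrating why SJF is optimal for average waiting time.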
Priority Scheduling A priority number (integer) is associated with each process The CPU is allocated to the process with the highest priority (smallest integer = highest priority). Can be preemptive (compares the priority of a process arriving at the ready queue with the priority of the currently running process) or non-preemptive (the new process is put at the head of the ready queue) SJF is priority scheduling where the priority is the predicted next CPU burst time. Problem: Starvation – low-priority processes may never execute. Solution: Aging – as time progresses, increase the priority of the process.
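Aging can be sketched in a few lines. This is an illustrative mechanism only: the waiting-time threshold of 10 time units and the decrement of 1 are assumed values, not fixed by any standard:

```python
def age_priorities(priorities, waiting_times, threshold=10):
    """Aging sketch: improve (decrease) the priority number of any process
    that has waited longer than `threshold` time units. Smaller number =
    higher priority, matching the slide's convention."""
    return [p - 1 if w > threshold and p > 0 else p
            for p, w in zip(priorities, waiting_times)]

# A process at priority 5 that has waited 20 units is promoted to 4;
# one that has waited only 5 units keeps its priority.
age_priorities([5, 3], [20, 5])   # -> [4, 3]
```

Run periodically, this guarantees that even the lowest-priority process eventually reaches the top and cannot starve.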
Round Robin (RR) Each process gets a small unit of CPU time (time quantum), usually a few tens of milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
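The quantum-and-requeue behaviour can be sketched with a simple queue, assuming all processes arrive at time 0:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Round-robin completion times for processes all arriving at time 0.
    Each process runs for at most `quantum` units, then rejoins the queue."""
    queue = deque(enumerate(burst_times))   # (pid, remaining burst time)
    finish = [0] * len(burst_times)
    clock = 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)       # run for a quantum, or until done
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))  # preempted: back of the queue
        else:
            finish[pid] = clock             # process completed
    return finish

# Bursts P1 = 24, P2 = 3, P3 = 3 with quantum 4:
round_robin([24, 3, 3], 4)   # -> [30, 7, 10]
```

Short jobs (P2, P3) finish quickly, while the long job (P1) keeps cycling: RR trades some throughput for much better response time.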
Memory Management When a process is executed it has to be in main memory, as main memory can be accessed more quickly. Efficient use of main memory is an important task of the operating system. Different memory management techniques are used for this purpose.
Memory partition How are processes arranged in main memory before being executed? Fixed-sized partitions Variable-sized partitions
Fixed-sized partitions [Diagram: main memory divided into equal-sized partitions (e.g. 8M each) after the region occupied by the OS.]
Variable-sized partitions [Diagram: main memory divided into partitions of different sizes (e.g. 8M for the OS, then 2M, 4M, 8M and 18M partitions).]
Swapping I/O operations are slow. If a running process requires an I/O operation, the CPU moves to another process in main memory. Suppose main memory is full of processes waiting on I/O: the CPU becomes idle. To solve this problem the swapping technique is used.
[Diagram: without swapping, processes move from the long-term queue into main memory and leave when completed; with swapping, a medium-term queue on disk holds processes swapped out of main memory until they are swapped back in.]
Fragmentation Memory is divided into partitions Each partition has a different size Processes are allocated space and later freed After a while memory will be full of small holes! No free space large enough for a new process even though there is enough free memory in total If we allow free space within a partition we have internal fragmentation Fragmentation: External fragmentation = unused space between partitions Internal fragmentation = unused space within partitions
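External fragmentation can be made concrete with a sketch of first-fit allocation over a list of free holes. The representation (a list of hole sizes) and the function name are illustrative assumptions:

```python
def first_fit(holes, request):
    """First-fit sketch: place the request in the first hole large enough,
    splitting the hole. Returns the updated hole list, or None if no single
    hole fits -- which can happen even when total free space is sufficient
    (external fragmentation)."""
    for i, hole in enumerate(holes):
        if hole >= request:
            leftover = hole - request
            # keep any leftover space as a (smaller) hole
            return holes[:i] + ([leftover] if leftover else []) + holes[i + 1:]
    return None

# Total free space is 4 + 2 + 3 = 9M, yet a 5M request cannot be placed:
first_fit([4, 2, 3], 5)   # -> None (external fragmentation)
first_fit([4, 2, 3], 3)   # -> [1, 2, 3]
```

Compaction (sliding partitions together) or paging are the usual remedies for this effect.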
Problems with swapping Swapping a process means writing it out to disk, and disk I/O is slow, so the swapping process is slow as well. Solution: reduce the amount of code that needs to be swapped. Paging
A program is divided into small fixed-sized chunks (pages). Main memory is divided into small fixed-sized chunks (frames). A page is stored in one frame. A program is stored in a set of frames. These frames do not need to be contiguous.
[Diagram: process A's four pages (page 0 to page 3) on disk are loaded into free, non-contiguous frames of main memory; A's page table records which frame holds each page.]
Logical and physical address [Diagram: an instruction J stored in page 1 at offset 30 has logical address 1:30 (page 1, offset 30); A's page table maps page 1 to frame 14, so the physical address is 14:30 (frame 14, offset 30).]
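The translation from logical to physical address can be sketched as follows. The page size of 1024 bytes and the page-table contents are assumed values chosen to match the slide's example (page 1, offset 30 maps to frame 14, offset 30):

```python
PAGE_SIZE = 1024  # assumed page/frame size in bytes

# Hypothetical page table for process A: page number -> frame number
page_table = {0: 7, 1: 14, 2: 2, 3: 9}

def translate(logical_addr):
    """Split a logical address into (page, offset), look the page up in
    the page table, and recombine with the frame; the offset is unchanged."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

# The slide's example: logical address page 1, offset 30
# translates to physical address frame 14, offset 30.
translate(1 * PAGE_SIZE + 30)   # == 14 * PAGE_SIZE + 30
```

Because every page maps independently, the frames holding a program can be scattered anywhere in memory while the program still sees one contiguous logical address space.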
Simple paging is not efficient Better than fixed and variable-sized partitions. The OS loads all pages of a process into main memory. However, not all pages of a process need to be in main memory for it to be executed. The OS can still execute a process if only some of the pages are loaded: demand paging.
Demand paging The operating system loads a page only when it is required. No swapping in or out of unused pages is needed. Better use of memory. The CPU can access only the loaded pages of a process at one time; it then asks for more pages to be loaded.
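The "load a page only when it is required" behaviour can be sketched by counting page faults over a reference string. FIFO replacement is used here purely as an illustrative eviction policy; real systems use more sophisticated ones:

```python
def count_page_faults(reference_string, num_frames):
    """Demand-paging sketch: a page is loaded only on first use (a page
    fault); when all frames are full, the oldest resident page is evicted
    (FIFO replacement, chosen for simplicity)."""
    frames, faults = [], 0
    for page in reference_string:
        if page not in frames:
            faults += 1                 # page not resident: fault and load it
            if len(frames) == num_frames:
                frames.pop(0)           # evict the oldest page (FIFO)
            frames.append(page)
    return faults

# With 3 frames, pages 1, 2, 3 fault on first use, page 1 is then a hit,
# and page 4 faults (evicting page 1): 4 faults in total.
count_page_faults([1, 2, 3, 1, 4], 3)   # -> 4
```

The fewer frames a process is given, the more faults it takes, which is the cost demand paging pays for its better overall memory utilization.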
Virtual memory Demand paging gives rise to the concept of virtual memory. Only a small part of a process needs to be in main memory at one time. Programs that require more memory than main memory provides can still be executed, giving the impression of a bigger computer memory. This view of main memory is called virtual memory. Demand paging and virtual memory are widely used in today's operating systems (e.g. Windows 2000, XP).
Interrupts Definition of Interrupt Event that disrupts the normal execution of a program and causes the execution of special instructions
I/O devices Called peripherals: Keyboard Mouse Speakers Monitor scanner Printer Disk drive CD-drive. OS – manages all I/O operations and devices
OS - I/O management There are four main I/O operations. Control: tell the device to perform some action (e.g. rewind tape). Test: check the status of the device. Read: read data from the device. Write: write data to the device.
I/O modules [Diagram: the CPU, main memory and I/O modules are connected by the system bus; each I/O module connects to one or more I/O devices.]
Advantages of I/O modules They allow the CPU to view a wide range of devices in a simple, uniform way. The CPU does not need to know details of timing, format, or electronic mechanics. The CPU only needs to work in terms of simple read and write commands. They help the CPU to work more efficiently. There are 3 ways in which I/O modules can work: Programmed I/O Interrupt-driven I/O Direct memory access.
Programmed I/O The CPU controls the I/O device directly via the I/O module. The CPU sends an I/O command to the I/O module and waits until the I/O operation is completed before sending another I/O command. The performance is poor, as the CPU spends too much time waiting for the I/O device.
Programmed I/O [Flowchart: issue a read to the I/O module; check status repeatedly until the device is ready; read the word from the I/O module; write the word to memory; if not done, issue the next read, otherwise continue to the next instruction.]
Interrupt-driven I/O The CPU issues a command to the I/O module and then gets on with executing other instructions. The I/O module interrupts the CPU when it is ready to exchange data with the CPU. The CPU then executes the data transfer. Most computers have interrupt lines to detect and record the arrival of an interrupt request.
Interrupt-driven I/O [Flowchart: issue a read to the I/O module, then the CPU goes off to do other work; when the device status is ready, the I/O module sends an interrupt signal; the CPU then reads the word from the I/O module and writes it to memory; if not done, issue the next read, otherwise continue to the next instruction.]
Disadvantages of interrupt-driven I/O The CPU is responsible for managing the I/O data transfer: every transferred word must go through the CPU. For devices with large transfers, e.g. disk drives, the CPU wastes time dealing with the data transfer. Solution: Direct memory access (DMA).
Direct memory access - DMA A special-purpose processor that handles data transfer. The CPU issues to the DMA: the starting address in main memory to read from/write to, the starting address in the I/O device to read from/write to, and the number of words to be transferred. The DMA transfers data without intervention from the CPU and sends an interrupt to the CPU when the transfer is completed.
DMA/CPU - bus system The DMA takes care of data transfer, leaving the CPU free to do other jobs. However, they cannot use the bus at the same time: the DMA can use the bus only when the CPU is not using it. Sometimes it has to force the CPU to free the bus (cycle stealing).
[Diagram: the CPU, main memory, DMA module and I/O module share the system bus; the I/O device connects through the I/O module, and the DMA transfers data between the device and memory directly over the bus.]
Summary OS as memory manager Fixed-sized partitions: waste of memory. Variable-sized partitions: fragmentation. Swapping: time wasted in swapping whole processes. Simple paging: a process is divided into pages and loaded into main memory (divided into frames). Demand paging: only the required pages are loaded into main memory. OS as I/O manager Programmed I/O: the CPU wastes time waiting for the I/O operation. Interrupt-driven I/O: the CPU is responsible for the data transfer. DMA: takes care of the data transfer instead of the CPU.