
1 Operating Systems Chapter 6

2 What is an operating system?
A program that runs on the hardware and supports resource abstraction and resource sharing. Resource abstraction: abstracts and standardises the interface presented to the user across different types of hardware; the virtual machine hides the messy details which must otherwise be handled. Resource sharing: manages the hardware resources so that each program gets time with a resource and each program gets space on a resource.

3 Introduction The aims of an operating system are: User convenience
System performance, e.g. the number of requests serviced per unit time.

4 Introduction Fundamental tasks of an operating system
Management of programs: organize their execution by sharing the CPU; ensure good user service and efficient use. Management of resources: efficient allocation and de-allocation without constraining user programs. Security and protection: ensure absence of interference with programs and resources by entities within and outside the operating system.

5 Operating Systems Application programs need to access the devices connected to a computer. The operating system (a system program – see slide 19, chapter 2): is a software layer between the hardware and the user; provides a consistent application program interface (API); is the first program that runs when the computer boots up; and is a program that is always running while the machine is on.

6 Main functions of an operating system
User/computer interface: provides an interface between the user and the computer. Resource manager: manages all of the computer's resources (process manager, memory manager, device manager, file manager, etc.).

7 A model of an operating system
[Diagram: the operating system consists of a user command interface and resource management, comprising the process manager, memory manager, device manager, file manager and network manager.]

8 Operating system as a user/computer interface
A user command such as open, save or print corresponds to a sequence of machine-code instructions. The user does not need to provide these sequences of instructions; the operating system translates each command into the corresponding machine-code instructions.

9 Operating system as a resource manager
Process manager: which program should be executed next, and how much time should be given to each program? Memory manager: how to make the best use of the available memory to run as many programs as possible? I/O device manager (e.g. for a printer): which program should use a particular I/O device? Network manager: which computer should execute a particular program?

10 Types of operating systems
Multiprogramming OS: the operating system can handle several programs at once. All the programs to be run are loaded into main memory; the operating system picks one program and executes it, and once it is finished, picks another one and executes it. A program may involve an I/O operation, which is usually slow; if that happens, instead of waiting for the I/O operation to complete, the operating system starts executing another program. Time-sharing OS: the operating system allows many users to share the same computer and interact with it; many users can use one computer through terminals. The OS allocates a very short time to each user program and switches rapidly from one user program to another, so each user has the impression that the entire computer is dedicated to them. Time sharing can also be used on a single-user computer (e.g. a PC), in which case the user can work on several programs at the same time (e.g. printing, editing); the CPU switches rapidly between the programs. A time-sharing operating system is an extension of the multiprogramming operating system.

11 How does the operating system get started?
Main memory has a small section of permanent read-only memory (ROM), which contains a program called the bootstrap. When the computer is turned on, the program counter starts at a predetermined address where the bootstrap is stored, and the CPU runs the bootstrap. The bootstrap directs the CPU to load the operating system from mass storage (disk) into main memory and to transfer control to it. The operating system is then executed by the CPU; it takes over and begins controlling the computer's activities.

12 [Diagram: main memory consists of ROM, holding the bootstrap program, and RAM; the operating system is copied from disk storage into RAM.]

13 Operating system as a process manager
Coordinates the occupation of main memory by different processes and their data. At any time the operating system may be dealing with many processes; e.g. a process may be executing, waiting in main memory, or swapped out of main memory.

14 Processes Definition of a process Process Scheduling
Operations on Processes Cooperating Processes

15 What is a process? Process – a program in execution; process execution must progress in a sequential fashion. A process includes: the program counter, stack, data section and heap.

16 Process State As a process executes, it changes state
new: The process is being created. running: Instructions are being executed. waiting: The process is waiting for some event to occur. ready: The process is waiting to be assigned to a processor. terminated: The process has finished execution.

17 Process Control Block (PCB)
Information associated with each process: identifier, process state, program counter, CPU registers, CPU scheduling information, memory-management information, accounting information, I/O status information.
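As a rough illustration of how these fields might be grouped, here is a minimal Python sketch of a PCB; the field names, types and defaults are illustrative assumptions rather than any real operating system's layout.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal sketch of a Process Control Block (illustrative fields only)."""
    pid: int                                        # identifier
    state: str = "new"                              # new, ready, running, waiting, terminated
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU scheduling information
    memory_limits: tuple = (0, 0)                   # memory-management info (base, limit)
    cpu_time_used: int = 0                          # accounting information
    open_files: list = field(default_factory=list)  # I/O status information
```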

18 CPU Switch From Process to Process
The PCB is saved when a process is removed from the CPU and another process takes its place (context switch).
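A context switch can be sketched as saving the outgoing process's live CPU state into its PCB and restoring the incoming process's saved state; the cpu object and its attributes below are hypothetical stand-ins for hardware registers.

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Sketch of a context switch between two PCBs; `cpu` is a hypothetical object."""
    # Save the state of the process leaving the CPU into its PCB.
    old_pcb.program_counter = cpu.program_counter
    old_pcb.registers = dict(cpu.registers)
    old_pcb.state = "ready"

    # Restore the state of the process taking its place.
    cpu.program_counter = new_pcb.program_counter
    cpu.registers = dict(new_pcb.registers)
    new_pcb.state = "running"
```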

19 Process Scheduling Queues
Job queue – set of all processes in the system. Ready queue – set of all processes residing in main memory, ready and waiting to execute. Device queues – set of processes waiting for an I/O device. Process migration between the various queues.

20 Schedulers Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU.

21 Medium Term Scheduling
Time-sharing operating systems may introduce a medium-term scheduler, which removes processes from memory (and thus from CPU contention) to reduce the degree of multiprogramming – swapping. Swapping may be needed to improve the process mix or to free up memory if it has become overcommitted.

22 [Diagram: processes flow from the job queue to the ready queue to the CPU and then finish; a process that requests I/O joins an I/O device queue, and some processes are moved to an intermediate queue on disk.]
This figure illustrates these queues and the interaction between them. A process is first added to the job queue. If the resources are available, it is put into main memory and joins the ready queue. It is then executed by the CPU. If it requires an I/O operation, it is put into the I/O queue of the relevant I/O device. A process requiring I/O may be temporarily moved out of main memory onto the hard disk (joining the intermediate queue) in order to free some space in main memory for other processes to come in. When a process has been completely executed it is removed from all the queues.

23 Scheduling Criteria CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit. Turnaround time – amount of time to execute a particular process: time spent waiting to get into memory + waiting in the ready queue + executing on the CPU + doing I/O. Waiting time – amount of time a process has been waiting in the ready queue. Response time – amount of time from when a request was submitted until the first response is produced.
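To make the relationship between these measures concrete, the snippet below computes turnaround and waiting time from arrival, burst and completion times, using the figures from the FCFS example on slide 25 (all three processes arrive at time 0).

```python
# (arrival time, CPU burst, completion time) per process, from the FCFS example.
processes = {"P1": (0, 24, 24), "P2": (0, 3, 27), "P3": (0, 3, 30)}

for name, (arrival, burst, completion) in processes.items():
    turnaround = completion - arrival    # total time from submission to completion
    waiting = turnaround - burst         # time spent waiting in the ready queue
    print(f"{name}: turnaround={turnaround}, waiting={waiting}")
```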

24 Optimization Criteria
Max CPU utilization Max throughput Min turnaround time Min waiting time Min response time In most cases we optimize the average measure

25 Scheduling Algorithms First-Come, First-Served (FCFS)
Process Burst Time: P1 24, P2 3, P3 3. Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 (0-24), P2 (24-27), P3 (27-30). Waiting time for P1 = 0; P2 = 24; P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17. (CPU–I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait.)
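A minimal FCFS sketch, assuming all processes arrive at time 0 in the order listed on the slide; it reproduces the waiting times above. The function name is just for illustration.

```python
def fcfs(bursts):
    """FCFS scheduling for processes that all arrive at time 0, in list order."""
    time, waits = 0, {}
    for name, burst in bursts:
        waits[name] = time      # waiting time = start time when arrival is 0
        time += burst           # the CPU is held until the burst completes
    return waits

waits = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(waits)                                # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waits.values()) / len(waits))     # 17.0
```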

26 FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 (0-3), P3 (3-6), P1 (6-30). Waiting time for P1 = 6; P2 = 0; P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3. Much better than the previous case. The average waiting time under FCFS is generally not minimal and may vary substantially if the process CPU-burst times vary greatly.

27 FCFS Scheduling (Cont.)
FCFS is non-preemptive. Not good for time-sharing systems, where each user needs to get a share of the CPU at regular intervals. Short (I/O-bound) processes wait for one long CPU-bound process to complete a CPU burst before they get a turn, which lowers CPU and device utilization: (1) the I/O-bound processes complete their I/O and enter the ready queue, where they wait while the I/O devices sit idle; (2) the CPU-bound process completes its CPU burst and moves to an I/O device; (3) the I/O-bound processes all quickly complete their short CPU bursts and enter the I/O queue, and now the CPU is idle; (4) the CPU-bound process completes its I/O and executes on the CPU again; back to step 1.

28 Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time (on a tie, use FCFS). Two schemes: non-preemptive – once the CPU is given to a process it cannot be preempted until it completes its CPU burst; preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt; this scheme is known as Shortest-Remaining-Time-First (SRTF). SJF is optimal – it gives the minimum average waiting time for a given set of processes.
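A sketch of the non-preemptive scheme, assuming each process is given as a (name, arrival, burst) tuple and that ties on burst length are broken in arrival (FCFS) order; the example values in the comment are illustrative and match the worked example on the next slide.

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns completion time per process."""
    remaining = sorted(procs, key=lambda p: p[1])    # arrival order, for tie-breaking
    time, done = 0, {}
    while remaining:
        # Among processes that have already arrived, pick the shortest burst.
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)      # CPU idles until the next arrival
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        time += burst                                # run the chosen burst to completion
        done[name] = time
        remaining.remove((name, arrival, burst))
    return done

# e.g. sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
# -> {'P1': 7, 'P3': 8, 'P2': 12, 'P4': 16}
```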

29 Example of Non-Preemptive SJF
Process (Arrival Time, Burst Time): P1 (0, 7), P2 (2, 4), P3 (4, 1), P4 (5, 4). SJF (non-preemptive) Gantt chart: P1 (0-7), P3 (7-8), P2 (8-12), P4 (12-16). Average waiting time = (0 + 6 + 3 + 7)/4 = 4.

30 Example of Preemptive SJF
Process (Arrival Time, Burst Time): P1 (0, 7), P2 (2, 4), P3 (4, 1), P4 (5, 4). SJF (preemptive) Gantt chart: P1 (0-2), P2 (2-4), P3 (4-5), P2 (5-7), P4 (7-11), P1 (11-16). Average waiting time = (9 + 1 + 0 + 2)/4 = 3.

31 Priority Scheduling A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). Can be preemptive (the priority of the process arriving at the ready queue is compared with the priority of the currently running process) or non-preemptive (the new process is simply placed at the head of the ready queue). SJF is priority scheduling where the priority is the predicted next CPU burst time. Problem: starvation – low-priority processes may never execute. Solution: aging – as time progresses, increase the priority of the process.
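A small sketch of priority scheduling with aging, assuming a smaller number means a higher priority; the dictionary shape and the aging rate of one level per 10 time units are arbitrary illustrative choices.

```python
def pick_next(ready, clock):
    """Choose the runnable process with the best (lowest) effective priority.
    `ready` is a list of dicts with 'priority' and 'arrived' keys (illustrative shape)."""
    def effective_priority(p):
        waited = clock - p["arrived"]
        return p["priority"] - waited // 10   # aging: boost priority every 10 time units
    return min(ready, key=effective_priority)
```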

32 Round Robin (RR) Each process gets a small unit of CPU time (a time quantum), usually measured in milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.

33 Example of RR with Time Quantum = 20
Process Burst Time: P1 53, P2 17, P3 68, P4 24. The Gantt chart is: P1 (0-20), P2 (20-37), P3 (37-57), P4 (57-77), P1 (77-97), P3 (97-117), P4 (117-121), P1 (121-134), P3 (134-154), P3 (154-162). Typically, RR gives a higher average turnaround than SJF, but better response.
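A minimal round-robin sketch, assuming all four processes arrive at time 0; running it with the burst times above and a quantum of 20 reproduces the Gantt chart boundaries.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: list of (name, burst) for processes all arriving at time 0."""
    queue = deque(bursts)
    time, schedule = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)              # run one quantum or until completion
        time += run
        schedule.append((name, time))              # record when this time slice ends
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back to the tail
    return schedule

print(round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20))
# [('P1', 20), ('P2', 37), ('P3', 57), ('P4', 77), ('P1', 97), ('P3', 117),
#  ('P4', 121), ('P1', 134), ('P3', 154), ('P3', 162)]
```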

34 Memory Management When a process is executed it has to be in main memory, which can be accessed much more quickly than disk. Making efficient use of main memory is an important task of the operating system, and different memory-management techniques are used for this purpose.

35 Memory partition How are processes arranged in main memory before being executed? Fixed-sized partitions or variable-sized partitions.

36 Fixed-sized partitions
[Diagram: main memory divided into a region for the OS plus several fixed 8M partitions.] The main memory is divided into equal-sized chunks whose size is fixed before any process is loaded. Fixed-sized partitioning is not an efficient way of managing main memory: only a few processes will require exactly the amount of memory provided by a partition. For example, a process of 3M occupies one 8M chunk, so 5M is wasted.
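A tiny sketch of the internal waste under fixed-sized partitioning, using the 8M partition and 3M process from the slide; it assumes the process fits in a single partition.

```python
PARTITION = 8  # MB, fixed partition size from the slide

def wasted(process_size_mb):
    """Internal fragmentation: space left unused inside the partition."""
    return PARTITION - process_size_mb if process_size_mb <= PARTITION else None

print(wasted(3))   # 5 MB wasted, as in the slide's example
```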

37 Variable-sized partitions
[Diagram: main memory divided into the OS area and variable-sized partitions, e.g. 8M, 2M, 4M and 8M, with 18M remaining.] A better way of partitioning the main memory is to use variable-sized partitions: each process is allocated exactly the amount of memory it requires.

38 Swapping I/O operations are slow
If a running process requires an I/O operation, the CPU moves to another process in main memory. Suppose main memory is full of processes waiting on I/O: the CPU becomes idle. To solve this problem the swapping technique is used: if the CPU becomes idle, some processes are moved temporarily out of main memory to disk, to free space for new processes to come in and be executed by the CPU.

39 [Diagram: without swapping, processes move from the long-term queue on disk into main memory, where the operating system runs them, and leave as completed processes; with swapping, there is in addition a medium-term queue on disk holding processes temporarily moved out of main memory.]

40 [Diagram: snapshots (a)-(h) of main memory as processes P1-P4 are loaded and swapped.]
Processes P1, P2 and P3 are loaded in (b)-(d), each given exactly the amount of memory it requires. In (e)-(f), P4 needs to be loaded but there is not enough space left to accommodate it; P4 is smaller than P2, and P2 requires an I/O operation, so P2 is swapped out of memory and P4 is loaded. In (g)-(h), P2 becomes ready again after completing its I/O operation; P1, which now requires an I/O operation, is swapped out of memory and P2 is swapped back in. Variable-sized partitioning works well initially but degrades quickly: main memory becomes more and more fragmented. To solve this problem we use the paging technique.

41 Fragmentation Memory is divided into partitions
Each partition has a different size. Processes are allocated space and later freed. After a while memory will be full of small holes: no free space large enough for a new process, even though there is enough free memory in total. If we allow free space within a partition we have internal fragmentation. External fragmentation = unused space between partitions; internal fragmentation = unused space within partitions.

42 Problems with swapping
Swapped processes are typically I/O-bound processes, and I/O operations are slow; the swapping itself is slow as well. Solution: reduce the amount of code that needs to be swapped, which leads to paging.

43 Paging A program is divided into small fixed-sized chunks (pages).
Main memory is divided into small fixed-sized chunks (frames). A page is stored in one frame, so a program is stored in a set of frames. These frames do not need to be contiguous.

44 [Diagram: process A's pages 0-3 on disk are loaded into non-contiguous free frames of main memory (frames numbered 13-20, some already in use), and A's page table records which frame holds each page.]
When a program is loaded into main memory it is stored in frames. The wasted space is at most a fraction of the last page: if a process is divided into 100 pages, the first 99 pages are full and occupy 99 frames, while the last page might not be full but still occupies a whole frame. As shown, before process A is loaded some frames in main memory are already in use, and the frames holding process A are not necessarily contiguous. The operating system must know which frame each page of a process has been loaded into, so that the pages can be found in main memory for the CPU to execute; for this, the operating system maintains a page table for each process.
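A small sketch of how a program is cut into pages and how much of the last frame is left unused; the 4096-byte page size and the program size are illustrative values, not from the slides.

```python
import math

PAGE_SIZE = 4096  # bytes; illustrative page/frame size

def pages_needed(program_bytes):
    return math.ceil(program_bytes / PAGE_SIZE)

def last_page_waste(program_bytes):
    """Internal fragmentation: unused space in the final frame."""
    leftover = program_bytes % PAGE_SIZE
    return 0 if leftover == 0 else PAGE_SIZE - leftover

print(pages_needed(10000), last_page_waste(10000))   # 3 pages, 2288 bytes wasted
```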

45 Logical and physical address
[Diagram: instruction J sits at offset 30 of page 1 of process A; A's page table maps page 1 to frame 14, so the logical address 1:30 corresponds to the physical address 14:30.]
The operating system must also know where each instruction of a process is. An instruction has a logical address, which is its relative position in the program; with paging, it is the relative position within the page containing the instruction, e.g. 1:30 means position 30 in page 1. When an instruction is loaded into main memory it is stored in a frame, so it also has a physical address. The page table gives the physical address corresponding to each logical address; e.g. the physical address of instruction J is 14:30 (frame 14, position 30).
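A minimal sketch of the translation that the page table enables. The mapping of page 1 to frame 14 is taken from the slide; the other page-table entries are hypothetical.

```python
# Page table for process A: page number -> frame number.
# Page 1 -> frame 14 is from the slide; the other entries are made up.
page_table_A = {0: 13, 1: 14, 2: 15, 3: 18}

def translate(page, offset, page_table):
    """Turn a logical address (page:offset) into a physical address (frame:offset)."""
    frame = page_table[page]
    return frame, offset

print(translate(1, 30, page_table_A))   # (14, 30), i.e. physical address 14:30
```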

46 Simple paging is not efficient
Better than fixed- and variable-sized partitions, but the OS loads all pages of a process into main memory. However, not all pages of a process need to be in main memory for it to be executed; the OS can still execute a process if only some of the pages are loaded. This leads to demand paging.

47 Demand paging Operating system – loads a page only when it is required
No swapping in or out of unused pages is needed, giving better use of memory. The CPU accesses only some of a process's pages at any one time and asks for more pages to be loaded as they are required.

48 Virtual memory Demand paging gives rise to the concept of virtual memory.
Only a small part of a process needs to be in main memory at one time, so programs that require more memory than the available main memory can still be executed; this gives the impression of a bigger computer memory. This view of main memory is called virtual memory. Demand paging and virtual memory are widely used in today's operating systems (e.g. Windows 2000, XP).

49 Definition of ‘Interrupt’
An interrupt is an event that disrupts the normal execution of a program and causes the execution of special instructions.

50 Interrupts [Timeline diagram: an interrupt arrives while a program runs along time t.]

51 Interrupts [Timeline diagram: the program's execution shown along time t.]

52 Interrupt Service Routine
[Timeline diagram: the program is suspended, the interrupt service routine executes, then the program resumes.]

53 Interrupts
[Timeline diagram: an interrupt arrives while a program computes fahr = (cent * 9 / 5) + 32, compiled as the instruction sequence: mov R1, cent; mul R1, 9; div R1, 5; add R1, 32; mov fahr, R1.]

54 Interrupt Service Routine
[Timeline diagram: the interrupt arrives between mov R1, cent and mul R1, 9; the interrupt service routine runs, then the program resumes at mul R1, 9.]

55 Interrupt Service Routine
[Timeline diagram: as above, but the context is saved before the interrupt service routine runs and restored afterwards.]

56 Interrupt Service Routine
[Timeline diagram: the context save can be e.g. push R1 and the restore e.g. pop R1, around the interrupt service routine.]

57 I/O devices Called peripherals:
Keyboard, mouse, speakers, monitor, scanner, printer, disk drive, CD drive. The OS manages all I/O operations and devices.

58 OS - I/O management There are four main I/O operations.
Control: tell the device to perform some action (e.g. rewind a tape). Test: check the status of the device. Read: read data from the device. Write: write data to the device.

59 I/O modules
[Diagram: the CPU and main memory are connected via the system bus to I/O modules, each of which connects to one or more I/O devices.]
The CPU and main memory are linked to I/O devices through the system bus. I/O modules are interfaces between the CPU and main memory on one side and an I/O device on the other. The data transfer rate of I/O devices is much lower than that of the CPU and main memory, and different I/O devices use different data formats and word lengths; I/O modules facilitate the communication between the CPU and an I/O device.

60 Advantages of I/O modules
They allow the CPU to view a wide range of devices in a simple, uniform way: the CPU does not need to know details of timing, data format or the electro-mechanics of the device, and only needs to issue simple read and write commands. They thus help the CPU to work more efficiently. There are three ways in which I/O modules can work: programmed I/O, interrupt-driven I/O and direct memory access.

61 Programmed I/O The CPU controls the I/O device directly via the I/O module. The CPU sends an I/O command to the I/O module and waits until the I/O operation is completed before sending another I/O command. The performance is poor, as the CPU spends too much time waiting for the I/O device.

62 Programmed I/O
[Flowchart: issue a read command to the I/O module; check the status; if not ready, keep checking; if ready, read a word from the I/O module and write it to memory; if not done, repeat for the next word; when done, move to the next instruction.]
For example, consider using programmed I/O to read in a block of data consisting of many words, where each step reads a word and writes it to main memory. Reading the block takes a large number of instructions. The CPU first issues a read command to the I/O module; the I/O module then obtains a word from the I/O device, which takes some time, and when it is done the word is put in an I/O register in the module that is linked to the data bus. This procedure continues until the whole block of data has been transferred to main memory. A great deal of CPU time is wasted when a large block of data is transferred.
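A sketch of the polling loop that the flowchart describes; the io_module object and its methods are hypothetical stand-ins for the module's status and data registers.

```python
def programmed_io_read(io_module, memory, n_words):
    """Programmed I/O: the CPU busy-waits on the device for every word."""
    for i in range(n_words):
        io_module.issue_read()              # hypothetical: ask the module to fetch a word
        while not io_module.ready():        # busy-wait: the CPU does nothing useful here
            pass
        memory[i] = io_module.read_word()   # copy the word into main memory
```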

63 Interrupt-driven I/O The CPU issues a command to the I/O module and then gets on with executing other instructions. The I/O module interrupts the CPU when it is ready to exchange data with the CPU, and the CPU then performs the data transfer. Most computers have interrupt lines to detect and record the arrival of an interrupt request.

64 Interrupt-driven I/O
[Flowchart: issue a read command to the I/O module, then the CPU goes off to do other things; when the status is ready, the I/O module sends an interrupt signal; the CPU reads the word from the I/O module and writes it to memory; if not done, repeat; when done, move to the next instruction.]
When the CPU wants to read a word from the I/O device, it issues a read command to the I/O module and then goes on doing other things. When the I/O module finishes reading the word from the device, it sends an interrupt signal to the CPU. The CPU then comes back, gets the word from the I/O module and writes it to main memory. In this way the CPU does many jobs while the I/O operation is taking place; this approach is more efficient than programmed I/O.

65 How does the I/O module send an interrupt to the CPU?
The I/O module is linked to the control bus. It reads a word from the I/O device, puts the word in its data register, which is linked to the data bus, and sends an interrupt signal to the CPU via the control bus.

66 How does the CPU detect an interrupt signal?
The CPU executes an instruction cycle, and an interrupt stage is added at the end of the cycle: at the end of each instruction cycle the CPU checks for interrupts. The CPU hardware has a wire, the interrupt-request line, that the CPU can sense. If there is no interrupt, the CPU carries on executing the next instruction; otherwise it updates and saves the process control block, processes the interrupt, and then resumes the execution of the interrupted process.
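A sketch of the instruction cycle with the interrupt stage appended; the cpu, memory and interrupt_line objects and their methods are hypothetical.

```python
def cpu_loop(cpu, memory, interrupt_line):
    """Instruction cycle with an interrupt check appended to each iteration."""
    while cpu.running:
        instruction = memory[cpu.program_counter]   # fetch
        cpu.program_counter += 1
        cpu.execute(instruction)                    # decode and execute

        # Interrupt stage: sense the interrupt-request line after every instruction.
        if interrupt_line.raised():
            pcb = cpu.save_context()                # update and save the PCB
            cpu.run_interrupt_handler()             # process the interrupt
            cpu.restore_context(pcb)                # resume the interrupted process
```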

67 How does the CPU process interrupts?
On detecting an interrupt, the CPU executes the interrupt-handler program. The interrupt handler makes use of the process control block saved earlier, decides what to do with the interrupt, and then asks the CPU to resume the interrupted execution.

68 Disadvantages of Interrupt-driven I/O
The CPU is responsible for managing the I/O data transfer: every transferred word must go through the CPU. For devices with large transfers, e.g. a disk drive, the CPU wastes a lot of time dealing with the data transfer. Solution: direct memory access (DMA).

69 Direct memory access (DMA)
A special-purpose processor that handles data transfer. The CPU issues to the DMA controller: the starting address in main memory to read from or write to, the starting address on the I/O device to read from or write to, and the number of words to be transferred. The DMA controller transfers the data without intervention from the CPU and sends an interrupt to the CPU when the transfer is completed.
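A sketch of how the CPU might hand a transfer over to a DMA controller, following the three parameters listed above; the classes, method names and callback are hypothetical, not a real device interface.

```python
from dataclasses import dataclass

@dataclass
class DMARequest:
    """The three parameters the CPU hands to the DMA controller (per the slide)."""
    memory_address: int    # starting address in main memory
    device_address: int    # starting address on the I/O device
    word_count: int        # number of words to transfer

def start_transfer(dma_controller, request, on_complete):
    """CPU side: program the controller and return immediately to other work.
    `dma_controller.program(...)` and `on_complete` are hypothetical."""
    dma_controller.program(request, interrupt_handler=on_complete)
    # The CPU is now free; the controller raises an interrupt when the transfer
    # finishes, and on_complete runs as the interrupt handler.
```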

70 DMA/CPU - bus system The DMA controller takes care of the data transfer.
The CPU is free to do other jobs. However, the CPU and the DMA controller cannot use the bus at the same time: the DMA controller can use the bus only when the CPU is not using it, and sometimes it has to force the CPU to free the bus, which is called cycle stealing.

71 DMA/CPU
[Diagram: the DMA controller, CPU, main memory and the I/O module with its I/O device are all connected to the system bus.]

72 Summary OS as memory manager; OS as I/O manager
OS as memory manager. Fixed-sized partitions: waste of memory. Variable-sized partitions: fragmentation. Swapping: time is wasted swapping whole processes. Simple paging: a process is divided into pages and loaded into main memory (which is divided into frames). Demand paging: only the required pages are loaded into main memory. OS as I/O manager. Programmed I/O: the CPU wastes time waiting for the I/O operation. Interrupt-driven I/O: the CPU is still responsible for the data transfer. DMA: takes care of the data transfer instead of the CPU.

