
Operating System

Main goal of OS:
–Run programs efficiently
–Make the computer easier to use
  Provide a user-friendly interface
–Improve the efficiency of hardware utilization
  Manage the resources of the computer

Types of Operating Systems
Classifying OS:
–Number of users that the OS can support at one time
  Single-job system
  Multiprogramming system
  Multiprocessor system
–Access provided to a user
  Batch processing system
  Time-sharing system
  Real-time system
–Organization of a network of computers
  Network operating system
   –User is aware of the existence of the network
  Distributed operating system
   –User views the entire network as a single system

Run-Time Environment
The OS supports a run-time environment for user programs:
–Service routines for use during program execution
–Facilities for resource management
Service routines can be thought of as defining an extended machine for use by programs. The extended machine is easier to use and less error-prone.
User programs can call operating system functions by
–Referring directly to fixed locations in memory
–Using some special hardware instruction, such as a supervisor call (SVC), which generates an interrupt.

Supervisor Mode
Generation of an interrupt causes the CPU to switch from user mode to supervisor mode. In supervisor mode, all machine instructions and features can be used. In user mode, the use of privileged instructions is restricted; the only way for a user program to have such operations performed is to make use of the services provided by the run-time environment.

Interrupt Processing An interrupt is a signal that causes a computer to alter its normal flow of instruction execution. The interrupt automatically transfers control to an interrupt-processing routine (interrupt handler) that is usually a part of OS. Generation and processing of the interrupt may be completely unrelated (asynchronous) to the running program.

SIC/XE Interrupts
Four classes of SIC/XE interrupts:
–Class I: SVC interrupt
  Generated when a program uses an SVC instruction to request OS functions.
–Class II: program interrupt
  Generated by a condition that occurs during program execution, e.g., divide by zero.
–Class III: timer interrupt
  Generated by an interval timer containing a register that can be set by the privileged instruction STI.
–Class IV: I/O interrupt
  Generated by an I/O channel or device to signal the completion or error condition of some I/O operation.

Status Word (SW) of SIC/XE

Interrupt Mask
MASK is used to control whether interrupts are allowed. This control is necessary to prevent loss of the stored processor status information, by prohibiting certain interrupts from occurring while the first one is being processed.
MASK contains one bit for each class of interrupt.
Pending interrupts: masked (inhibited or disabled) interrupts are not lost but saved temporarily.
Masking of interrupts is under the control of the OS:
–Set all the bits in MASK to 0 when processing an interrupt, which prevents the occurrence of any other interrupt.
–Is it necessary to do so?

Nested Interrupts Each class of interrupt is assigned an interrupt priority: SVC > program > timer > I/O The MASK field is set so that all interrupts of equal or lower priority are inhibited; however, interrupts of higher priority are allowed to occur.
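As an illustration of this priority rule, the C sketch below builds such a mask when an interrupt of a given class is accepted: classes of equal or lower priority are inhibited, higher-priority classes stay enabled. The bit layout and numeric class values are assumptions for illustration only, not the actual SIC/XE MASK encoding.

```c
#include <stdio.h>

/* Interrupt classes in decreasing priority: SVC > program > timer > I/O.
   The numeric values are illustrative, not the SIC/XE encoding. */
enum int_class { INT_SVC = 0, INT_PROGRAM = 1, INT_TIMER = 2, INT_IO = 3 };

/* Build a 4-bit MASK: bit i = 1 means interrupts of class i are enabled.
   While handling class c, inhibit c and everything of lower priority
   (larger numeric value); allow strictly higher-priority classes. */
static unsigned mask_for(enum int_class c)
{
    unsigned mask = 0;
    for (int i = 0; i < 4; i++)
        if (i < (int)c)          /* strictly higher priority stays enabled */
            mask |= 1u << i;
    return mask;
}

int main(void)
{
    /* While a timer interrupt is processed, only SVC and program
       interrupts may still occur; timer and I/O remain pending. */
    printf("MASK while handling a timer interrupt: 0x%X\n", mask_for(INT_TIMER));
    return 0;
}
```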

Process Scheduling
A process (task) is a program in execution.
In a multiprogramming system, process scheduling is the management of the CPU (or CPUs) by switching control among the various competing processes according to some scheduling policy.
During its existence, a process can be in one of three states:
–Running: actually executing instructions using the CPU
–Blocked: waiting for some event to occur
–Ready: waiting as a candidate to be assigned the CPU

Process State Transitions
Time-slice: a maximum amount of CPU time the process is allowed to use before giving up control.
Dispatching: the selection of a process in the ready state and the transfer of control to it.
Process status block (PSB): where the status information for each process is saved by the OS
–Process state (running, ready, or blocked)
–Registers (including SW and PC)
–Variety of other information (e.g., system resources used by the process)
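A minimal C sketch of the kind of information a PSB holds; the field names and register layout here are assumptions for illustration, not the actual SIC/XE or OS data structures.

```c
/* Hypothetical process status block (PSB) sketch. */
enum proc_state { RUNNING, READY, BLOCKED };

struct psb {
    int             pid;        /* process identifier                     */
    enum proc_state state;      /* running, ready, or blocked             */
    unsigned        sw;         /* saved status word (mode, MASK, etc.)   */
    unsigned        pc;         /* saved program counter                  */
    unsigned        regs[8];    /* saved general-purpose registers        */
    unsigned        time_slice; /* remaining CPU time before preemption   */
    /* ... other accounting info: resources in use, I/O status, etc.      */
};
```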

Dispatcher Algorithm How to find it?

Dispatcher Algorithm
Process selection policy (sketched below):
–Round robin
  Treat all processes equally and give the same length of time-slice to all processes.
–Priority scheme
  Priorities can be
   –predefined based on the nature of the user job
   –assigned by the OS
   –allowed to vary dynamically, depending on system load and performance.
Preemptive scheduling: a higher-priority process can seize control from a lower-priority process that is currently running.
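The following C sketch shows one way the dispatcher's selection step could look under the policies above; the ready-list layout is hypothetical, and the code reduces to round robin when all priorities are equal.

```c
#include <stddef.h>

struct psb;   /* process status block, as sketched earlier */

/* Ready list kept in FIFO order of arrival; layout is an assumption. */
struct ready_entry {
    struct psb         *proc;
    int                 priority;   /* larger value = higher priority */
    struct ready_entry *next;
};

/* Pick the highest-priority READY process; among equal priorities the
   earliest-queued entry wins, which is round robin for equal priorities. */
struct psb *select_next(struct ready_entry *ready_list)
{
    struct ready_entry *best = NULL;
    for (struct ready_entry *e = ready_list; e != NULL; e = e->next)
        if (best == NULL || e->priority > best->priority)
            best = e;   /* strict '>' preserves FIFO order within a priority */
    return best ? best->proc : NULL;   /* NULL: no ready process, CPU idles */
}
```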

Mutual exclusion Deadlock Synchronization Starvation

I/O Supervision
SIC: the CPU is involved with each byte of data being transferred to or from an I/O device
–Test the status of the I/O device
–Read-data instruction
SIC/XE: I/O operations are performed by special hardware, simple processors known as I/O channels
–Start I/O (SIO) instruction
  Channel number
  Channel program (a series of channel commands)
–The channel generates an I/O interrupt after completing the channel program.

Programmed I/O (Polling I/O)
–CPU busy-waiting
Interrupt-driven I/O
Direct Memory Access (DMA)

I/O Process of OS (figure showing the flow between user programs, the operating system, and the I/O channels: I/O request, I/O completion notification, I/O operation control, I/O interrupt)

Management of Real Memory
Any OS that supports more than one user at a time must provide a mechanism for dividing central memory among the concurrent processes.
The level of multiprogramming is limited only by the number of jobs that can fit into central memory.
Many multiprogramming and multiprocessing systems divide memory into partitions, with each process being assigned to a different partition.
–Fixed partitions: predefined in size and position
–Variable partitions: allocated dynamically according to the requirements of the jobs being executed

User Jobs for Memory Allocation Examples

Fixed Partition Example

Strategy of Fixed Partition
Allocation scheme:
–Load each incoming job into the smallest free partition in which it will fit (sketched below).
Initial selection of the partition sizes:
–Considerations:
  There must be enough large partitions so that large jobs can be run without too much delay.
  If there are too many large partitions, a great deal of memory may be wasted when small jobs are running.
–Scheme: tailor a set of partitions to the expected population of job sizes
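A small C sketch of the "smallest free partition that fits" rule; the partition table and its sizes are invented for illustration.

```c
#include <stdio.h>

#define NPART 4

struct partition { unsigned size; int in_use; };

/* Illustrative fixed partition table (sizes in bytes). */
static struct partition parts[NPART] = {
    { 20000, 0 }, { 40000, 0 }, { 40000, 0 }, { 100000, 0 }
};

/* Return the index of the smallest free partition >= job_size, or -1. */
static int allocate_fixed(unsigned job_size)
{
    int best = -1;
    for (int i = 0; i < NPART; i++)
        if (!parts[i].in_use && parts[i].size >= job_size &&
            (best < 0 || parts[i].size < parts[best].size))
            best = i;
    if (best >= 0)
        parts[best].in_use = 1;
    return best;
}

int main(void)
{
    printf("30 KB job -> partition %d\n", allocate_fixed(30000));  /* a 40000-byte partition  */
    printf("90 KB job -> partition %d\n", allocate_fixed(90000));  /* the 100000-byte partition */
    return 0;
}
```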

Variable Partition Example

Strategy of Variable Partition
Allocation scheme:
–For each job to be loaded, the OS allocates, from the free memory areas, a new partition of exactly the size required.
The OS must keep track of allocated and free areas of memory, usually by maintaining a linked list of free memory areas. This list is scanned when a new partition is to be allocated (sketched below):
–First-fit allocation: the first free area in which the job will fit
–Best-fit allocation: the smallest free area in which the job will fit
When a partition is released, its assigned memory is returned to the free list and combined with adjacent free areas.
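A hedged C sketch of first-fit allocation over such a free list; the node layout is an assumption, and splitting/coalescing details are simplified.

```c
#include <stddef.h>

/* Node of the linked list of free memory areas (layout is an assumption). */
struct free_area {
    unsigned          start;    /* beginning address of the free area */
    unsigned          length;   /* size of the free area in bytes     */
    struct free_area *next;
};

/* First fit: return the start address of the first free area that is
   large enough, shrinking that area; return 0 on failure (assuming
   address 0 is never handed out). */
unsigned alloc_first_fit(struct free_area *free_list, unsigned size)
{
    for (struct free_area *a = free_list; a != NULL; a = a->next) {
        if (a->length >= size) {
            unsigned addr = a->start;
            a->start  += size;      /* remainder stays on the free list */
            a->length -= size;
            return addr;
        }
    }
    return 0;   /* no free area large enough */
}
```

Best-fit allocation would instead scan the entire list and pick the smallest free area that is still large enough.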

Memory Protection
Memory protection:
–When a job is running in one partition, it must be prevented from reading and writing memory locations in any other partition or in the OS.
Approaches (hardware support is necessary):
–Using a pair of bounds registers that contain the beginning and ending addresses of a job's partition
  The OS sets the bounds registers (in supervisor mode) when a partition is assigned to a new job.
  The values in these registers are automatically saved and restored during context switching.
  For every memory reference, the hardware automatically checks the referenced address against the bounds registers (see the sketch below).
–Using a storage protection key
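The bounds-register check amounts to a simple comparison, sketched below in C; in a real machine this is done by hardware circuitry on every reference, not by software.

```c
#include <stdbool.h>

/* Beginning and ending addresses of the job's partition. */
struct bounds { unsigned base; unsigned limit; };

/* A reference outside the bounds would trigger a protection interrupt. */
bool reference_allowed(struct bounds b, unsigned addr)
{
    return addr >= b.base && addr <= b.limit;
}
```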

Storage Protection Key in SIC/XE Each 800-byte block of memory has associated with it a 4-bit storage protection key. These keys can be set by the OS using the privileged instruction SSK (Set Storage Key). When a partition is assigned, OS sets the storage keys for all blocks of memory within the partition to the 4-bit process identifier of the process. For each memory reference by a user program, the hardware automatically compares the process identifier from SW to the protection key for the block of memory being addressed.
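A C sketch of the storage-key comparison just described: memory is treated as 800-byte blocks, each with a 4-bit key that must match the process identifier carried in the status word. The number of blocks is an assumed value for illustration.

```c
#include <stdbool.h>

#define BLOCK_SIZE 800
#define NBLOCKS    128            /* assumed: memory size / BLOCK_SIZE */

static unsigned char storage_key[NBLOCKS];   /* set by the OS via SSK */

/* Hardware comparison performed on each memory reference. */
bool access_allowed(unsigned addr, unsigned proc_id /* 4-bit id from SW */)
{
    return storage_key[addr / BLOCK_SIZE] == (proc_id & 0xF);
}
```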

Memory Fragmentation
Memory fragmentation occurs when the available free memory is split into several separate blocks, with each block being too small to be of use.
One possible solution: relocatable partitions
–After each job terminates, the remaining partitions are moved as far as possible toward one end of memory.
–Pros: more efficient utilization of memory
–Cons: substantial amount of time required to copy jobs; problems with program relocation

Relocatable Partition Example

Program Relocation

Relocatable Partitions
Practical implementation of relocatable partitions requires some hardware support: a special relocation register containing the beginning address of the program currently being executed.
–The value of this register is modified when the process is moved to a new location.
–This register is automatically saved and restored during context switching.
–The value of this register is automatically added to the address for every memory reference made by the user program (see the sketch below).
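The address calculation itself is a single addition, sketched below in C; protection checking is assumed to be handled separately by the mechanisms discussed earlier.

```c
/* The relocation register holds the partition's current start address;
   the hardware adds it to every address generated by the user program. */
unsigned physical_address(unsigned relocation_reg, unsigned program_addr)
{
    return relocation_reg + program_addr;
}
```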

Relocation Register for Address Calculation

Basic Concept of Virtual Memory
A virtual resource is one that appears to a user program to have characteristics that are different from those of the actual implementation of the resource.
User programs are allowed to use a large contiguous virtual memory, or virtual address space.
Virtual memory
–is stored on some external device, the backing store.
–may even be larger than the total amount of real memory available on the computer.
–can increase the level of multiprogramming, because only portions of virtual memory are mapped into real memory as they are needed.

Basic Concept of Virtual Memory

Demand Paging
Demand paging:
–One common method for implementing virtual memory.
–Virtual memory of a process is divided into pages of some fixed length.
–Real memory of the computer is divided into page frames of the same length as the pages.
–Mapping of pages onto page frames is described by a page map table (PMT).
–The PMT is used by the hardware to convert addresses in a program's virtual memory into the corresponding addresses in real memory.
–This address conversion is known as dynamic address translation (see the sketch below).
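A hedged C sketch of dynamic address translation with a PMT; the page size, table layout, and fault signalling are assumptions made for illustration, not the SIC/XE formats.

```c
#include <stddef.h>

#define PAGE_SIZE 2048u   /* assumed page length */

struct pmt_entry {
    int      valid;   /* 1 if the page is currently in a real-memory frame */
    unsigned frame;   /* page frame number when valid                      */
};

/* Translate a virtual address; a negative result means the OS must act
   (load the page on a fault, or abort on an invalid address) and retry. */
long translate(const struct pmt_entry *pmt, size_t npages, unsigned vaddr)
{
    unsigned page   = vaddr / PAGE_SIZE;
    unsigned offset = vaddr % PAGE_SIZE;

    if (page >= npages)
        return -1;                       /* address outside virtual memory */
    if (!pmt[page].valid)
        return -2;                       /* page fault: fetch the page     */
    return (long)pmt[page].frame * PAGE_SIZE + offset;
}
```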

Program for Illustration of Demand Paging

Example of Dynamic Address Translation and Demand Paging

Virtual-to-Real Mapping Using a PMT

Algorithm for Dynamic Address Translation

Algorithm for Page Fault Interrupt Processing
The OS maintains a table describing the status of all page frames. Why?

Page Selection for Removal
Strategies:
–Least recently used (LRU) method
  Keep a record of when each page in memory was last referenced and replace the page that has been unused for the longest time (see the sketch below).
  Because the overhead for this kind of record keeping can be high, simpler approximations to LRU are often used.
–Working set
  Determine the set of pages that are frequently used by the process in question.
  The system attempts to replace pages in such a way that each process always has its working set in memory.
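A C sketch of exact LRU victim selection from a frame table; the table layout is an assumption, and real systems usually settle for the cheaper approximations mentioned above because per-reference timestamps are expensive.

```c
#include <stddef.h>

struct frame_info {
    int           in_use;     /* frame currently holds a page          */
    unsigned long last_used;  /* "time" of the most recent reference   */
};

/* Return the index of the least recently used frame, or -1 if none. */
int select_victim(const struct frame_info *frames, size_t nframes)
{
    int victim = -1;
    for (size_t i = 0; i < nframes; i++)
        if (frames[i].in_use &&
            (victim < 0 || frames[i].last_used < frames[(size_t)victim].last_used))
            victim = (int)i;
    return victim;
}
```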

Implementation of Page Tables
Method 1:
–Implement these tables as arrays in central memory.
–A register is set by the OS to point to the beginning of the PMT for the currently executing process.
–This method can be very inefficient because it requires an extra memory access for each address translation.
Method 2:
–Combine method 1 with a high-speed buffer to improve average access time.
Method 3:
–Implement the page tables in a special high-speed associative memory.
–This is very efficient, but may be too expensive.

Demand-Paging Systems
Advantages:
–Efficient memory utilization
  Avoids most of the memory wasted due to the fragmentation associated with partitioning schemes.
  Parts of a program that are not used during a particular execution need not be loaded.
Disadvantages:
–Vulnerable to the thrashing problem: the computing system spends most of its time swapping pages rather than doing useful work.
Consider a case (worked through below):
–Memory reference: 1 μsec
–Fetch a page from the backing store: on the order of 10,000 μsec
–Page fault rate: 1%
–Only about 1% of the time is spent on useful work.
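The 1% figure can be checked with a short effective-access-time calculation; the 10,000 μsec backing-store fetch time used below is an assumed value chosen to be consistent with the slide's numbers.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed figures matching the slide: a memory reference costs 1 usec,
       servicing a page fault costs roughly 10,000 usec, and 1% of
       references fault. */
    double ref_cost   = 1.0;       /* usec */
    double fault_cost = 10000.0;   /* usec (assumed backing-store fetch) */
    double fault_rate = 0.01;

    double effective = ref_cost + fault_rate * fault_cost;   /* ~101 usec */
    printf("effective time per reference: %.1f usec\n", effective);
    printf("fraction of time doing useful work: %.1f%%\n",
           100.0 * ref_cost / effective);                    /* about 1% */
    return 0;
}
```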

Locality of Reference
To avoid thrashing, the page fault rate has to be much lower. This is possible because locality of reference can be observed in most real programs:
–Memory references are not randomly distributed, but tend to be clustered together in the address space.
–This clustering is due to common program characteristics:
  Sequential instruction execution
  Compact coding of loops
  Sequential processing of data structures

Localized Memory References and Their Effect on Page Fault Rate (figure labels: thrashing region; working set of pages)