Part IV: Memory Management


Operating Systems, Part IV: Memory Management

Main Memory
- Main memory is a large array of words or bytes, each with its own address.
- Several processes must be kept in main memory at once to improve CPU utilization and response time (memory sharing).
- Memory-management algorithms range from simple contiguous allocation to paging and segmentation strategies.

Memory Basics
- Types of memory addresses:
  - Symbolic (e.g. program variables)
  - Relocatable (e.g. 100 bytes from the start of a module)
  - Absolute (e.g. physical address 255)
- Address binding: deciding which address each instruction and data item will occupy. It may happen at any of the following stages:
  - Compile time: the compiler generates absolute code; the process must run at a fixed, known address.
  - Load time: the compiler generates relocatable code, and the loader binds the final addresses when the program is brought into memory.
  - Execution time: binding is delayed until run time, so a process can be moved during execution; this requires hardware support such as base/relocation registers (illustrated below).
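
As a small, hypothetical illustration of execution-time binding (not part of the original slides): on systems with position-independent executables and address-space layout randomization, the absolute address of a variable is fixed only when the program is loaded, so it typically changes from run to run.

```c
/* binding_demo.c -- hypothetical illustration of address binding.
 * In the source, `counter` is a symbolic address; the compiler and
 * linker turn it into a relocatable offset, and the loader/hardware
 * fix the absolute address. With PIE/ASLR the printed address usually
 * differs between runs, showing load/execution-time binding. */
#include <stdio.h>

int counter = 42;   /* symbolic name in the source */

int main(void)
{
    /* the absolute (virtual) address is only known at run time */
    printf("counter lives at %p\n", (void *)&counter);
    return 0;
}
```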

Logical & Physical Addresses
- Logical (virtual) address: the address generated by the CPU.
- Physical address: the address seen by the memory unit, i.e. loaded into its memory-address register.
- Compile-time and load-time binding produce identical logical and physical addresses.
- Execution-time binding produces logical addresses that differ from the physical addresses.
- The memory-management unit (MMU) maps logical to physical addresses at run time.

Improving Memory Utilization
- Dynamic loading: routines are kept on disk in relocatable format and are not loaded until they are actually called (a run-time loading sketch follows).
- Dynamic linking: linking to library routines is postponed until run time; a stub in the program locates the routine in a shared library (DLL). A single copy of the library serves all executing processes, saving memory and disk space, though library versioning becomes an issue.
- Overlays: keep in memory only the instructions and data that are currently needed; other parts overwrite the same addresses when required. Overlays can be implemented by the programmer without special OS support.
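
Dynamic loading can be demonstrated with the POSIX dlopen interface; the sketch below loads the C math library at run time and looks up cos only when it is needed, instead of binding it at link time. The library file name ("libm.so.6") is an assumption and varies by platform.

```c
/* dlopen_demo.c -- sketch of run-time (dynamic) loading with POSIX dlopen.
 * Build with: cc dlopen_demo.c -ldl */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load on demand */
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* look up the routine only when it is actually called */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```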

Swapping
- Used when main memory is no longer sufficient for all processes.
- Rolling a process out (swapping it to disk) and rolling it back in (swapping it into main memory) drastically increases context-switch time.

Swapping
- Needs a fast backing store (secondary storage) for an efficient implementation.
- Waiting processes are good candidates to be swapped out to disk.
- Processes with pending I/O should not be swapped out, since the I/O could be directed into their address space; alternatively, all I/O can be done through OS buffers only.

Swapping
- UNIX uses a modified version of swapping: it is normally disabled and is enabled only when memory runs low because too many processes are running.
- In Linux, only the read/write data segment needs to be written out; the read-only code segment is simply discarded, since it can be reread from the executable on disk.
- Early versions of Microsoft Windows provided a partial form of swapping: if there was not enough memory for a new program, a current program was swapped out to disk, and the user rather than the scheduler decided when to swap it back in.

Memory Protection
- The OS must protect itself from user processes, and must protect user processes from each other.
- Each process is assigned a relocation register and a limit register.
- The relocation register contains the value of the smallest physical address the process may use; the limit register contains the size of its logical address space.
- Every logical address must be less than the value in the limit register before relocation is applied (sketched below).
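
A minimal sketch of the check the relocation/limit hardware performs on every memory reference; the register values below are invented for illustration.

```c
/* protection_demo.c -- sketch of relocation + limit register checking.
 * The MMU does this comparison in hardware on every access; the
 * register values here are invented for the example. */
#include <stdio.h>

#define LIMIT       30004u   /* size of the process's address space   */
#define RELOCATION 300040u   /* smallest physical address of the process */

/* returns the physical address, or "traps" by printing an error */
unsigned translate(unsigned logical)
{
    if (logical >= LIMIT) {
        fprintf(stderr, "trap: addressing error at logical %u\n", logical);
        return 0;
    }
    return RELOCATION + logical;   /* dynamic relocation */
}

int main(void)
{
    printf("logical 1000 -> physical %u\n", translate(1000));
    translate(40000);              /* beyond the limit: the MMU would trap */
    return 0;
}
```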

How memory protection works

Contiguous Allocation
- Allocation scheme in which each process occupies a single contiguous block of main memory.
- Simple two-partition scheme:
  - Low memory: usually holds the resident OS, since the interrupt vector is located there.
  - The rest of memory: available for user processes.

Contiguous Allocation: Multiple-Partition Schemes
- Simplest is multiple fixed partitions: each partition holds exactly one process, so the degree of multiprogramming is bounded by the number of partitions.
- Dynamic partitioning:
  - Starts with one large free block of memory, called a "hole".
  - Arriving processes are allocated blocks carved out of holes large enough to hold them.
  - Holes reappear as processes terminate and release their memory.

Contiguous Allocation: Multiple-Partition Schemes (cont'd)
[Figure: successive snapshots of memory under dynamic partitioning, with the OS in low memory and processes 5, 8, 9, 10, and 2 being allocated and released, leaving holes of varying sizes.]

Contiguous Allocation: Dynamic Storage Allocation
- Problem: finding a freed block of memory for a waiting process; the set of holes is searched to decide which one to allocate.
- First fit: allocate the first hole that is big enough (the search may start from the beginning of the list or from where the previous search ended).
- Best fit: allocate the smallest hole that is big enough.
- Worst fit: allocate the largest hole, which produces the largest leftover hole (sometimes more useful than a tiny one).
- First fit and best fit are generally better than worst fit in terms of time and storage utilization, respectively; first fit is generally faster. A small comparison of the three is sketched below.
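
A simplified comparison of the three placement strategies over an array of hole sizes; the hole sizes and the request are invented, and a real allocator keeps holes in a linked list and must also split and coalesce them.

```c
/* placement_demo.c -- sketch of first-fit, best-fit, and worst-fit
 * selection over a set of free holes (sizes in KB, invented for the
 * example). Each returns the index of the chosen hole, or -1 if none fits. */
#include <stdio.h>

int first_fit(const int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request) return i;          /* first one big enough */
    return -1;
}

int best_fit(const int holes[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = i;                               /* smallest adequate hole */
    return best;
}

int worst_fit(const int holes[], int n, int request)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                              /* largest hole */
    return worst;
}

int main(void)
{
    int holes[] = {100, 500, 200, 300, 600};        /* free blocks, in KB */
    int n = 5, request = 212;
    printf("first-fit -> hole %d\n", first_fit(holes, n, request));
    printf("best-fit  -> hole %d\n", best_fit(holes, n, request));
    printf("worst-fit -> hole %d\n", worst_fit(holes, n, request));
    return 0;
}
```

In practice, "next fit" (resuming the search where the previous one stopped) is a common variant of first fit.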

Contiguous Allocation: Fragmentation
- External fragmentation: enough total memory exists to satisfy a request, but it is not contiguous.
  - 50-percent rule: with first fit, given N allocated blocks, roughly another 0.5N blocks are lost to fragmentation, i.e. about one-third of memory may be unusable.
- Internal fragmentation: when memory is allocated in fixed-size units (say 4 KB blocks), the memory actually allocated to a process may be slightly larger than what it requested; the unused difference is internal fragmentation.

Contiguous Allocation: Compaction
- A solution to external fragmentation: shuffle memory contents so that all free memory sits together in one large block.
- Only possible if relocation is dynamic (done at execution time); otherwise processes cannot be moved.
- Algorithm 1: move all processes toward one end of memory (expensive if many processes must be moved; a toy sketch follows).
- Algorithm 2: move processes so that one big hole forms in the middle.
- Swapping may be combined with compaction: processes are rolled out to the backing store and rolled back in at new addresses. Compacting the backing store itself is impractical because disk access is too slow.
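
A toy sketch of the "move everything to one end" style of compaction; the block layout is invented, and a real implementation would also have to update the relocation registers (or page tables) of every moved process.

```c
/* compaction_demo.c -- toy sketch of compaction: slide every allocated
 * block toward address 0, leaving one large hole at the top. The block
 * layout and sizes are invented for the example. */
#include <stdio.h>

struct block { int base; int size; int allocated; };

void compact(struct block b[], int n)
{
    int next = 0;                        /* next free physical address */
    for (int i = 0; i < n; i++) {
        if (!b[i].allocated) continue;   /* holes are simply dropped   */
        b[i].base = next;                /* "move" the process down    */
        next += b[i].size;
    }
    printf("one free hole from address %d upward\n", next);
}

int main(void)
{
    struct block mem[] = {
        {0, 100, 1}, {100, 300, 0}, {400, 200, 1}, {600, 100, 0}, {700, 50, 1}
    };
    compact(mem, 5);
    for (int i = 0; i < 5; i++)
        if (mem[i].allocated)
            printf("block %d now at base %d (size %d)\n",
                   i, mem[i].base, mem[i].size);
    return 0;
}
```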

Paging
- Allows a process's physical memory to be non-contiguous, which eliminates external fragmentation.
- Logical memory is divided into fixed-size blocks called pages; physical memory is divided into blocks of the same size called frames.
- The page table maps pages to frames: address-translation hardware converts (page number, offset) into (frame number, offset) using the page table (a minimal sketch follows).
- The backing store (e.g. the swap partition in Linux) is organized into blocks of the same size as the frames.
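
A minimal sketch of paging translation, assuming a 4 KiB page size and an invented four-entry page table.

```c
/* paging_demo.c -- sketch of logical-to-physical translation with paging.
 * Assumes 4 KiB pages (12 offset bits) and a tiny example page table. */
#include <stdio.h>

#define OFFSET_BITS 12
#define PAGE_SIZE   (1u << OFFSET_BITS)           /* 4096 bytes */

/* page_table[page number] = frame number (contents invented) */
static const unsigned page_table[] = {5, 6, 1, 2};

unsigned translate(unsigned logical)
{
    unsigned page   = logical >> OFFSET_BITS;      /* high-order bits */
    unsigned offset = logical & (PAGE_SIZE - 1);   /* low-order bits  */
    return (page_table[page] << OFFSET_BITS) | offset;
}

int main(void)
{
    unsigned logical = 2 * PAGE_SIZE + 100;        /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```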

Paging Examples

Internal Fragmentation in Paging
- Paging eliminates external fragmentation but not internal fragmentation, since a process rarely fills the last frame allocated to it.
- Worst case: a process needs n pages plus 1 byte, so it is allocated n + 1 frames and wastes almost an entire frame (page size minus 1 byte).
- The arithmetic for a typical case is sketched below.
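
The waste can be computed directly; the page size and process size below are arbitrary.

```c
/* fragmentation_demo.c -- sketch: frames needed and internal fragmentation
 * for a given process size. Page size and process size are arbitrary. */
#include <stdio.h>

int main(void)
{
    unsigned page_size = 4096;                    /* bytes per page/frame  */
    unsigned proc_size = 72766;                   /* bytes the process needs */

    unsigned frames = (proc_size + page_size - 1) / page_size;  /* round up */
    unsigned waste  = frames * page_size - proc_size;           /* last frame */

    printf("%u frames allocated, %u bytes of internal fragmentation\n",
           frames, waste);
    return 0;
}
```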

Page Table Structure
- Each OS has its own method of storing page tables; most allocate one page table per process.
- Context-switch time increases with paging because page-table information must be saved in each process's PCB.
- Each page-table entry can carry protection bits marking a page read-only (constant data), read/write (variable data), or execute-only (code), giving a further level of memory protection.
- Common page-table organizations: linear, hierarchical (multilevel), hashed, and inverted. A sketch of a two-level walk follows.
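
As an illustration of a hierarchical organization, a 32-bit logical address can be split into a 10-bit outer index, a 10-bit inner index, and a 12-bit offset; the sketch below walks such a two-level table. The split and the table contents are invented for the example, not tied to any particular CPU.

```c
/* two_level_demo.c -- sketch of a two-level (hierarchical) page table walk.
 * 32-bit logical address split as 10 (outer) + 10 (inner) + 12 (offset). */
#include <stdio.h>

#define OFFSET_BITS 12
#define INNER_BITS  10

unsigned walk(unsigned **outer_table, unsigned logical)
{
    unsigned outer  = logical >> (OFFSET_BITS + INNER_BITS);
    unsigned inner  = (logical >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);
    unsigned offset = logical & ((1u << OFFSET_BITS) - 1);

    unsigned *inner_table = outer_table[outer];    /* second-level table */
    unsigned frame = inner_table[inner];           /* frame number       */
    return (frame << OFFSET_BITS) | offset;
}

int main(void)
{
    /* one inner table covering pages 0..1023, mapped to frame = page + 7 */
    unsigned inner[1024];
    for (int i = 0; i < 1024; i++) inner[i] = i + 7;
    unsigned *outer[1024] = { inner };             /* only entry 0 is present */

    unsigned logical = (3u << OFFSET_BITS) | 0x2A; /* page 3, offset 0x2A */
    printf("physical = 0x%x\n", walk(outer, logical));
    return 0;
}
```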

Shared Pages
- Shared read-only / execute-only code: when several instances of a program run at once, a single copy of the code is kept in memory, while each instance keeps its own data pages.
- The shared code must be reentrant (non-self-modifying).
- The possibility of sharing common code is another advantage of paging, similar to the sharing of a task's address space by its threads.

Shared Pages: An Example

Segmentation
- A scheme that divides logical memory into variable-length segments.
- Matches the user's view of memory more closely: a program is seen as a collection of segments (subroutines, procedures, data areas, and so on) of different lengths.

Logical View of Segmentation
[Figure: segments 1-4 of a program in the user's logical space mapped to scattered regions of physical memory.]

Segmentation Hardware
- The logical address space is two-dimensional (segment number, offset) while physical memory is one-dimensional, so the mapping is done through a segment table.
- Each entry holds a segment base (the segment's starting physical address) and a limit (the segment's length).
- Similar in spirit to paging, except that segments do not have a fixed length. A minimal lookup is sketched below.
- The Intel 80x86 architecture is based on segmentation.
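
A minimal sketch of the segment-table lookup: the segment number indexes the table, the offset is checked against the segment's limit, and the base is added. The base and limit values are invented.

```c
/* segmentation_demo.c -- sketch of segmentation hardware: segment-table
 * lookup with a limit check. Base/limit values are invented. */
#include <stdio.h>

struct segment { unsigned base; unsigned limit; };

/* segment table: e.g. 0 = code, 1 = data, 2 = stack */
static const struct segment seg_table[] = {
    {1400, 1000}, {4300, 400}, {6300, 500}
};

int translate(unsigned seg, unsigned offset, unsigned *physical)
{
    if (offset >= seg_table[seg].limit)
        return -1;                         /* trap: addressing error */
    *physical = seg_table[seg].base + offset;
    return 0;
}

int main(void)
{
    unsigned pa;
    if (translate(1, 53, &pa) == 0)
        printf("segment 1, offset 53 -> physical %u\n", pa);
    if (translate(2, 600, &pa) != 0)
        printf("segment 2, offset 600 -> trap (beyond limit)\n");
    return 0;
}
```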

Mapping Segments to Physical Memory

Segments and Fragmentation
- Segmentation may cause external fragmentation: all free blocks may be too small to hold the next segment.
- Compaction can be used to solve the problem.
- A process may have to wait if a large enough block cannot be found for one of its segments.

Segmentation with Paging
- Solves the external-fragmentation problem of pure segmentation.
- Each segment is composed of several equal-size pages, so a segment no longer needs a contiguous region of physical memory. A combined translation is sketched below.
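
A minimal sketch of one way to combine the two schemes, in which each segment-table entry points to a per-segment page table rather than a base address. The page size, limits, and table contents are invented for the example.

```c
/* seg_paging_demo.c -- sketch of segmentation combined with paging:
 * the segment-table entry selects a page table, and the offset within
 * the segment is then translated by paging. */
#include <stdio.h>

#define OFFSET_BITS 12
#define PAGE_SIZE   (1u << OFFSET_BITS)

static const unsigned code_pages[] = {8, 9};        /* frames of segment 0 */
static const unsigned data_pages[] = {3, 4, 5};     /* frames of segment 1 */

struct segment { const unsigned *page_table; unsigned limit; };
static const struct segment seg_table[] = {
    { code_pages, 2 * PAGE_SIZE },
    { data_pages, 3 * PAGE_SIZE },
};

int translate(unsigned seg, unsigned offset, unsigned *physical)
{
    if (offset >= seg_table[seg].limit) return -1;          /* trap */
    unsigned page  = offset >> OFFSET_BITS;
    unsigned frame = seg_table[seg].page_table[page];
    *physical = (frame << OFFSET_BITS) | (offset & (PAGE_SIZE - 1));
    return 0;
}

int main(void)
{
    unsigned pa;
    if (translate(1, PAGE_SIZE + 10, &pa) == 0)   /* segment 1, page 1 */
        printf("physical = %u\n", pa);            /* frame 4, offset 10 */
    return 0;
}
```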