Operating Systems Memory Management (Chapter 9)

Overview Provide Services (done) – processes (done) – files (after memory management) Manage Devices – processor (done) – memory (next!) – disk (done after files)

Simple Memory Management One process in memory, using it all – each program needs its own I/O drivers – until 1960 [Figure: RAM holding one User Prog plus its I/O drivers]

Simple Memory Management Small, protected OS, drivers – DOS [Figure: layouts of RAM with the OS, device drivers in ROM, and one User Prog] “Mono-programming” -- No multiprocessing! - Early efforts used “Swapping”, but slooooow

Multiprocessing w/Fixed Partitions [Figure: (a) and (b): memory divided into the OS plus Partitions 1-4 of 200k, 300k, 500k, and 900k, with input queues of jobs] Unequal queues Waste large partition Skip small jobs Simple! Hey, processes can be in different memory locations!

Address Binding Compile Time – maybe absolute binding (.com) Link Time – dynamic or static libraries Load Time – relocatable code Run Time – relocatable memory segments, overlays, paging [Figure: Source is compiled to Object, linked into a Load Module, loaded into RAM as a Binary, then run]

Normal Linking and Loading [Figure: Main.c and Printf.c are compiled with gcc to Main.o and Printf.o; Printf.o is archived with ar into a Static Library; the Linker produces a.out, which the Loader places in Memory] X Window code: - 500K minimum - 450K libraries

Load Time Dynamic Linking [Figure: same pipeline, but a.out is linked against a Dynamic Library that the Loader binds at load time] Saves disk space. But: libraries move? Moving code? Library versions? Load time still the same.

Run-Time Dynamic Linking [Figure: same pipeline, but unresolved references are bound by a Run-time Loader only when first used] Saves disk space. Startup is fast. Might not need all of the library.
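
A minimal sketch of run-time dynamic linking on a POSIX system using dlopen/dlsym; the library name "libm.so.6", the symbol "cos", and the file name prog.c are illustrative assumptions, not from the slides.

  #include <dlfcn.h>   /* dlopen, dlsym, dlclose, dlerror */
  #include <stdio.h>

  int main(void) {
      /* Bind the library only when it is actually needed, at run time. */
      void *handle = dlopen("libm.so.6", RTLD_LAZY);
      if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

      /* Resolve a symbol out of the library at run time. */
      double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
      if (!cosine) { fprintf(stderr, "dlsym: %s\n", dlerror()); dlclose(handle); return 1; }

      printf("cos(0.0) = %f\n", cosine(0.0));
      dlclose(handle);
      return 0;
  }

On most Linux systems this builds with something like gcc prog.c -ldl.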

Memory Linking Performance Comparisons

Design Technique: Static vs. Dynamic Static solutions – compute ahead of time – for predictable situations Dynamic solutions – compute when needed – for unpredictable situations Some situations use dynamic because static is too restrictive (malloc) ex: memory allocation, type checking
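
A tiny illustration of the same trade-off applied to memory allocation (a sketch; the sizes are arbitrary): a static array is fixed at compile time, while malloc defers the decision to run time when the size is unpredictable.

  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
      int fixed[100];                      /* static: size chosen ahead of time      */
      fixed[0] = 42;

      int n;                               /* dynamic: size not known until run time */
      if (scanf("%d", &n) != 1 || n <= 0) return 1;
      int *grown = malloc(n * sizeof *grown);
      if (grown == NULL) return 1;
      grown[n - 1] = 42;
      printf("%d %d\n", fixed[0], grown[n - 1]);
      free(grown);
      return 0;
  }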

Logical vs. Physical Addresses Compile-time + load-time addresses are the same; run-time addresses are different [Figure: CPU issues logical address 346; the MMU adds the Relocation Register to form the physical address sent to Memory] User addresses go from 0 to max; physical addresses go from R+0 to R+max

Relocatable Code Basics Allow logical addresses Protect other processes [Figure: the MMU compares the logical address against the Limit Reg; if it is smaller, the Reloc Reg is added to form the physical address sent to Memory, otherwise an error is raised] Addresses must be contiguous!
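
A sketch of what the limit/relocation hardware does for every memory reference; the register values below are made-up examples.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  static uint32_t limit_reg = 16384;    /* size of the process's contiguous region   */
  static uint32_t reloc_reg = 140000;   /* where that region starts in physical RAM  */

  /* Translate one logical address the way the MMU would: check, then add. */
  static uint32_t translate(uint32_t logical) {
      if (logical >= limit_reg) {       /* outside the process's address space?      */
          fprintf(stderr, "addressing error: %u\n", logical);
          exit(1);                      /* real hardware traps to the OS instead     */
      }
      return reloc_reg + logical;       /* physical = relocation + logical           */
  }

  int main(void) {
      printf("%u\n", translate(346));   /* -> 140346 */
      return 0;
  }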

Variable-Sized Partitions Idea: want to remove “wasted” memory that is not needed in each partition Definition: –Hole - a block of available memory –scattered throughout physical memory New process allocated memory from hole large enough to fit it

Variable-Sized Partitions [Figure: successive memory layouts as process 8 finishes, process 9 arrives, process 10 arrives, and process 5 finishes, leaving holes between the remaining processes] OS keeps track of: – allocated partitions – free partitions (holes) – queues!
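
A first-fit sketch of how the OS might pick a hole for a new request; the hole list is a plain array here (real allocators keep linked lists of free partitions) and the numbers are invented.

  #include <stdio.h>

  struct hole { unsigned start, size; };   /* one free partition */

  /* First fit: carve the request out of the first hole big enough to hold it. */
  static long alloc_first_fit(struct hole *holes, int nholes, unsigned request) {
      for (int i = 0; i < nholes; i++) {
          if (holes[i].size >= request) {
              long addr = holes[i].start;
              holes[i].start += request;   /* shrink the hole ...                  */
              holes[i].size  -= request;   /* ... the remainder stays on the list  */
              return addr;
          }
      }
      return -1;                           /* no single hole is large enough       */
  }

  int main(void) {
      struct hole holes[] = { {300, 50}, {600, 100} };   /* sizes in KB            */
      printf("%ld\n", alloc_first_fit(holes, 2, 80));    /* -> 600                 */
      printf("%ld\n", alloc_first_fit(holes, 2, 125));   /* -> -1: fragmentation   */
      return 0;
  }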

Memory Request? What if a process requests additional memory? [Figure: OS, process 2, process 3, and process 8 fill memory; one of them calls malloc(20k) with no room to grow]

Internal Fragmentation Have some “empty” space for each process Internal Fragmentation - allocated memory may be slightly larger than requested memory and not being used. [Figure: partition allocated to process A holds A’s program, data, and stack, plus unused room for growth]

External Fragmentation External Fragmentation - total memory space exists to satisfy a request, but it is not contiguous [Figure: holes of 50k and 100k between processes 2, 3, and 8; Process 9 needs 125k - it fits in total, but in no single hole] “But, how much does this matter?”

Analysis of External Fragmentation Assume: – system at equilibrium – look at a process in the middle of memory – the region next to it is half the time a process, half the time a hole ==> with N processes, about N/2 holes! – Fifty-percent rule – Fundamental: adjacent holes are combined, but adjacent processes are not
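
Put another way (the usual reading of the fifty-percent rule): with N allocated partitions and roughly N/2 holes of comparable size, about (N/2) / (N + N/2) = 1/3 of memory can end up unusable due to external fragmentation.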

Compaction Shuffle memory contents to place all free memory together in one large block Only possible if relocation is dynamic! Same I/O DMA problem [Figure: (a) memory with 50k and 100k holes that cannot hold Process 9 (125k); (b) after shuffling, the free space forms one block that can]

Cost of Compaction [Figure: memory layout before and after compaction] At 100 nsec/access, compacting main memory takes on the order of 1.5 seconds! Disk is much slower!
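
A rough model of that cost, assuming each word moved needs one read plus one write; the word size and memory sizes below are assumptions for illustration only.

  #include <stdio.h>

  /* Rough estimate: every word moved costs two memory accesses (read + write). */
  static double compaction_seconds(double bytes_moved, double word_size,
                                   double access_time) {
      return (bytes_moved / word_size) * 2.0 * access_time;
  }

  int main(void) {
      printf("%.1f s to move 64 MB\n",  compaction_seconds(64e6,  8, 100e-9));
      printf("%.1f s to move 128 MB\n", compaction_seconds(128e6, 8, 100e-9));
      return 0;
  }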

Solution? Want to minimize external fragmentation –Large Blocks –But internal fragmentation! Tradeoff –Sacrifice some internal fragmentation for reduced external fragmentation –Paging

Where Are We? Memory Management – fixed partitions (done) – linking and loading (done) – variable partitions (done) Paging (next) Misc

Paging Logical address space noncontiguous; process gets memory wherever available – Divide physical memory into fixed-size blocks + size is a power of 2, between 512 and 8192 bytes + called Frames – Divide logical memory into blocks of the same size + called Pages

Paging Address generated by CPU divided into: – Page number (p) - index into the page table + page table contains the base address (frame) of each page in physical memory – Page offset (d) - offset into the page/frame [Figure: CPU issues (p, d); the page table maps p to frame f; physical memory is accessed at (f, d)]

Paging Example Page size 4 bytes, memory size 32 bytes (8 frames) [Figure: logical memory with Pages 0-3 is mapped through a page table into frames of physical memory]

Paging Example (continued) [Figure: the page table maps each of Pages 0-3 to a frame; a logical address is a page number plus an offset, and the physical address is the mapped frame plus the same offset]

Paging Hardware A logical address is split into a page number p and a page offset d. With an address space of 2^m bytes and a page size of 2^n bytes, there are 2^(m-n) pages: the page number is the upper m-n bits and the offset is the lower n bits. note: not losing any bytes! Physical memory is 2^m bytes.
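
A sketch of that split in C, for an assumed page size of 2^12 = 4096 bytes (n = 12).

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_BITS 12u                       /* n: page size = 2^12 = 4096 bytes  */
  #define PAGE_SIZE (1u << PAGE_BITS)

  int main(void) {
      uint32_t logical = 0x00ABCDEF;

      uint32_t page   = logical >> PAGE_BITS;        /* upper m-n bits              */
      uint32_t offset = logical & (PAGE_SIZE - 1);   /* lower n bits                */

      /* The page table (not shown) would map 'page' to some frame f; the
       * physical address is then (f << PAGE_BITS) | offset.                        */
      printf("page = %u, offset = %u\n", page, offset);
      return 0;
  }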

Paging Example Consider: –Physical memory = 128 bytes –Physical address space = 8 frames How many bits in an address? How many bits for page number? How many bits for page offset? Can a logical address space have only 2 pages? How big would the page table be?
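
One way to work it through (assuming logical addresses span the same 128 bytes): 8 frames in 128 bytes makes each frame/page 16 bytes, so the offset needs 4 bits and the frame number 3 bits, giving 7-bit addresses. A logical address space certainly can have only 2 pages; its page table would then hold just 2 entries, each a 3-bit frame number.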

Another Paging Example Consider: –8 bits in an address –3 bits for the frame/page number How many bytes (words) of physical memory? How many frames are there? How many bytes is a page? How many bits for page offset? If a process’ page table is 12 bits, how many logical pages does it have?

Page Table Example [Figure: Process A and Process B each have Pages 0 and 1 and their own page table; both tables map into the same physical memory. Page number is m-n = 3 bits, offset is n = 4 bits, so addresses are 7 bits]

Paging Tradeoffs Advantages – no external fragmentation (no compaction) – relocation (now pages, before whole processes) Disadvantages – internal fragmentation + consider: 2048-byte pages, 72,766-byte process – 35 pages + 1,086 bytes, so 962 bytes of the last page are wasted + avg: 1/2 page per process + small pages! – overhead + page table per process (context switch + space) + lookup (especially if page to disk)

Implementation of Page Table Page table kept in registers Fast! Only good when the number of frames is small Expensive! [Figure: storage hierarchy - registers, memory, disk]

Implementation of Page Table Page table kept in main memory Page Table Base Register (PTBR) points to it; a Page Table Length register gives its size [Figure: logical memory and a page table located in physical memory via the PTBR] Two memory accesses per data/instruction access. – Solution? Associative Registers

Associative Registers [Figure: CPU issues logical address (p, d); the associative registers are searched for page number p - on a hit they give the frame number directly, on a miss the page table in memory is consulted - and the physical address (f, d) is formed] Lookup takes about 10-20% of memory time (Intel P3 has 32 entries) (Intel P4 has 128 entries)

Associative Register Performance Hit Ratio - percentage of times that a page number is found in the associative registers Effective access time = hit ratio x hit time + miss ratio x miss time hit time = reg time + mem time miss time = reg time + mem time * 2 Example: – 80% hit ratio, reg time = 20 nanosec, mem time = 100 nanosec – .80 * 120 + .20 * 220 = 140 nanoseconds
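
The slide’s numbers as a tiny C check of the formula:

  #include <stdio.h>

  int main(void) {
      double reg = 20.0, mem = 100.0, hit_ratio = 0.80;   /* times in nanoseconds   */
      double hit_time  = reg + mem;           /* registers hit: one memory access   */
      double miss_time = reg + 2.0 * mem;     /* miss: page table + data access     */
      double eat = hit_ratio * hit_time + (1.0 - hit_ratio) * miss_time;
      printf("effective access time = %.0f ns\n", eat);   /* -> 140 ns              */
      return 0;
  }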

Protection Protection bits kept with each frame, stored in the page table Can expand to more permissions [Figure: page table with a valid (v) / invalid (i) protection bit per entry]

Large Address Spaces Typical logical address spaces: – 4 GBytes => 2^32 addresses (32-bit, 4-byte addresses) Typical page size: – 4 KBytes = 2^12 bytes Page table may have: – 2^32 / 2^12 = 2^20 = 1 million entries Each entry 3 bytes => 3 MB per process! Do not want all of that in RAM Solution? Page the page table – Multilevel paging

Multilevel Paging [Figure: the outer page table points to page tables, which point to the pages of logical memory; the page number is split into p1 and p2 (e.g., 10 bits each) plus an offset d]

Multilevel Paging Translation [Figure: p1 indexes the outer page table, p2 indexes the inner page table it points to, and d is the offset into the desired page]
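
A sketch of the two-level split for 32-bit addresses using a 10/10/12 split (the same split the WinNT slide below uses); the example address is arbitrary.

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
      uint32_t logical = 0x00403ABC;

      uint32_t p1 = logical >> 22;             /* top 10 bits: outer page table index  */
      uint32_t p2 = (logical >> 12) & 0x3FF;   /* next 10 bits: inner page table index */
      uint32_t d  = logical & 0xFFF;           /* low 12 bits: offset within the page  */

      /* Translation (tables not shown): outer[p1] locates an inner page table,
       * inner[p2] gives the frame f, and the physical address is (f << 12) | d.      */
      printf("p1=%u p2=%u d=%u\n", p1, p2, d);
      return 0;
  }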

Inverted Page Table One table maps physical frames back to logical pages [Figure: CPU issues (pid, p, d); the table is searched for the entry matching (pid, p); its index i gives the physical address (i, d)] Still need a page table per process on the backing store Memory accesses take longer! (search + swap)
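
A very small sketch of the lookup an inverted table implies: one entry per physical frame, searched for the matching (pid, page) pair (linear search here; real designs hash). All values are invented.

  #include <stdio.h>

  #define NFRAMES 8

  struct ipt_entry { int pid; int page; };             /* one entry per frame       */
  static struct ipt_entry ipt[NFRAMES] = {
      {1, 0}, {2, 0}, {1, 1}, {-1, -1},                /* pid -1 marks a free frame */
      {2, 1}, {-1, -1}, {1, 2}, {-1, -1},
  };

  /* Search the whole table; the index of the matching entry is the frame number. */
  static int lookup(int pid, int page) {
      for (int f = 0; f < NFRAMES; f++)
          if (ipt[f].pid == pid && ipt[f].page == page)
              return f;
      return -1;                                       /* not resident: page fault  */
  }

  int main(void) {
      printf("pid 1, page 2 -> frame %d\n", lookup(1, 2));   /* -> 6  */
      printf("pid 2, page 3 -> frame %d\n", lookup(2, 3));   /* -> -1 */
      return 0;
  }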

Memory View Paging lost the user’s view of memory Need “logical” memory units that grow and contract (e.g., main program, subroutine, stack, symbol table) Solution? Segmentation! ex: stack, shared library

Segmentation Logical address: <segment number, offset> Segment table - maps the two-dimensional user-defined address into a one-dimensional physical address – base - starting physical location – limit - length of segment Hardware support – Segment Table Base Register – Segment Table Length Register

Segmentation [Figure: CPU issues logical address (s, d); the segment table entry for s gives limit and base; if d < limit the physical address is base + d, otherwise error] (“Er, what have we gained?”) => Paged segments!
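
A sketch of the segment-table check the hardware performs; the table contents are made-up values.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct segment { uint32_t base; uint32_t limit; };

  /* Per-process segment table: segment 0 might be main, segment 1 the stack. */
  static struct segment seg_table[] = { {1400, 1000}, {6300, 400} };

  /* Logical address <s, d>: segment number plus offset within that segment. */
  static uint32_t translate(uint32_t s, uint32_t d) {
      if (d >= seg_table[s].limit) {      /* offset past the end of the segment */
          fprintf(stderr, "segmentation error\n");
          exit(1);                        /* real hardware traps to the OS      */
      }
      return seg_table[s].base + d;       /* one-dimensional physical address   */
  }

  int main(void) {
      printf("%u\n", translate(1, 53));   /* -> 6353 */
      return 0;
  }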

Memory Management Outline Basic (done) – Fixed Partitions (done) – Variable Partitions (done) Paging (done) – Basic (done) – Enhanced (done) Specific (next) – WinNT – Linux

Memory Management in WinNT 32-bit addresses (2^32 = 4 GB address space) – Upper 2 GB shared by all processes (kernel mode) – Lower 2 GB private per process Page size is 4 KB (2^12, so offset is 12 bits) Multilevel paging (2 levels) – 10 bits for outer page table (page directory) – 10 bits for inner page table – 12 bits for offset

Memory Management in WinNT Each page-table entry has 32 bits –only 20 needed for address translation –12 bits “left-over” Characteristics –Access: read only, read-write –States: valid, zeroed, free … Inverted page table –points to page table entries –list of free frames

Memory Management in Linux Page size: –Alpha AXP has 8 Kbyte page –Intel x86 has 4 Kbyte page Multilevel paging (3 levels) –Makes code more portable –Even though no hardware support on x86! + “middle-layer” defined to be 1