Principles of Virtual Memory

Principles of Virtual Memory
Virtual Memory, Paging, Segmentation

Overview
1. Virtual Memory
2. Paging
3. Segmentation
4. Combined Segmentation and Paging
Bibliography

1. Virtual Memory
1.1 Why Virtual Memory (VM)?
1.2 What is VM?
1.3 The Mapping Process
1.4 Terms & Notions
1.5 The Principle of Locality
1.6 VM: Features
1.7 VM: Advantages
1.8 VM: Disadvantages
1.9 VM: Implementation

1.1 Why Virtual Memory (VM)?
Shortage of memory: efficient memory management is needed
- A process may be too big for physical memory
- There may be more active processes than physical memory can hold
- External fragmentation of physical memory makes the shortage worse
Requirements of multiprogramming
- An efficient protection scheme
- A simple way of sharing
[Figure: physical memory holding the OS and processes 1-4.]

1.2 What is VM?
- A virtual address issued by the program (e.g. Mov AX, 0xA0F4) is translated by the mapping unit (MMU), using a mapping table (one per process), into a physical address (e.g. 0xC0F4).
- This is not only address-to-address mapping, but piece-to-piece: a "piece" of virtual memory is mapped to a "piece" of physical memory.
- The virtual address space (VAS) is usually much larger than the physical address space (PAS).
Note: It does not matter at which physical address a "piece" of VM is placed, since the corresponding addresses are mapped by the mapping unit.
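The mapping idea can be made concrete with a minimal sketch (not taken from the slides; the piece size, table layout, and the name map_address are illustrative assumptions): a per-process table records, for each piece of virtual memory, which piece of physical memory currently holds it, and the MMU simply splits the address, looks the piece up, and reassembles the physical address.

```c
#include <stdint.h>
#include <stdio.h>

#define PIECE_SIZE 4096u                   /* assumed size of a "piece"                  */
#define NUM_PIECES 16u                     /* assumed size of the toy virtual address space */

/* One mapping table per process: virtual piece -> physical piece (-1 = unmapped). */
typedef struct { int32_t phys_piece[NUM_PIECES]; } mapping_table_t;

/* What the mapping unit (MMU) does: split the virtual address into a piece
 * number and an offset, look the piece up, reassemble the physical address. */
static int map_address(const mapping_table_t *t, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t piece  = vaddr / PIECE_SIZE;
    uint32_t offset = vaddr % PIECE_SIZE;

    if (piece >= NUM_PIECES || t->phys_piece[piece] < 0)
        return -1;                         /* memory access fault: the OS takes over     */
    *paddr = (uint32_t)t->phys_piece[piece] * PIECE_SIZE + offset;
    return 0;
}

int main(void)
{
    mapping_table_t t;
    for (uint32_t i = 0; i < NUM_PIECES; i++)
        t.phys_piece[i] = -1;              /* nothing mapped yet                         */
    t.phys_piece[0xA] = 0xC;               /* virtual piece 0xA sits in physical piece 0xC */

    uint32_t paddr;
    if (map_address(&t, 0xA0F4, &paddr) == 0)
        printf("virtual 0xA0F4 -> physical 0x%X\n", paddr);   /* 0xC0F4, as on the slide */
    return 0;
}
```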

1.3 The Mapping Process
- Usually every process has its own mapping table and thus its own virtual address space (assumed from now on).
- Not every "piece" of VM has to be present in physical memory (PM).
- "Pieces" may be loaded from the HDD as they are referenced.
- Rarely used "pieces" may be discarded or written out to disk (swapping).
[Flowchart: the MMU checks the mapping table; if the piece is in physical memory, the address is translated; otherwise a memory access fault is raised, the OS brings the piece in from the HDD and adjusts the mapping table.]
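A rough sketch of this access loop is shown below (load_piece_from_disk and the table layout are hypothetical stand-ins for OS machinery, not a real interface): on a fault the "OS" brings the piece in and adjusts the table, and the access is then completed.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PIECE_SIZE 4096u
#define NUM_PIECES 16u

static int32_t mapping_table[NUM_PIECES];  /* virtual piece -> physical piece, -1 = not present */

/* Hypothetical stand-in for what the OS would really do on a fault. */
static int32_t load_piece_from_disk(uint32_t piece)
{
    printf("fault: bringing virtual piece %u in from disk\n", piece);
    return (int32_t)(piece % 4);           /* pretend the OS chose this physical piece    */
}

/* The path from the flowchart: check the mapping table; on a miss the OS
 * brings the piece in and adjusts the table, then the access is completed. */
static uint32_t access_memory(uint32_t vaddr)
{
    uint32_t piece  = vaddr / PIECE_SIZE;
    uint32_t offset = vaddr % PIECE_SIZE;

    if (piece >= NUM_PIECES) {
        fprintf(stderr, "illegal reference 0x%X\n", vaddr);
        exit(EXIT_FAILURE);
    }
    if (mapping_table[piece] < 0)          /* memory access fault                         */
        mapping_table[piece] = load_piece_from_disk(piece);   /* OS adjusts the table      */

    return (uint32_t)mapping_table[piece] * PIECE_SIZE + offset;
}

int main(void)
{
    for (uint32_t i = 0; i < NUM_PIECES; i++)
        mapping_table[i] = -1;

    printf("physical address: 0x%X\n", access_memory(0xA0F4));   /* faults once            */
    printf("physical address: 0x%X\n", access_memory(0xA100));   /* piece now present      */
    return 0;
}
```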

1.4 Terms & Notions
Virtual memory (VM)
- Not a physical device but an abstract concept
- Comprised of the virtual address spaces (of all processes)
Virtual address space (VAS) (of one process)
- Set of visible virtual addresses (some systems may use a single VAS for all processes)
Resident set
- Pieces of a process currently in physical memory
Working set
- Set of pieces a process is currently working on

1.5 The Principle of Locality
- Memory references within a process tend to cluster.
- The working set should be part of the resident set for the process to operate efficiently (else: frequent memory access faults).
- Memory management should honor the principle of locality to achieve this.
[Figure: the working set over a process lifetime - initialization code and data in the early phase, code and data in the main phase, finalization code in the final phase; references are mostly repeated within the working set, with only occasional single jumps.]
- The principle of locality is weakened by modern programming techniques: OO leads to references to objects scattered all over memory, and multithreaded apps cause sudden jumps in control flow.
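A classic illustration of the principle (not from the slides): traversing a 2-D array row by row touches consecutive addresses and thus only a few pages at a time, while traversing it column by column jumps a whole row's worth of memory on every access and touches far more pages per unit of work.

```c
#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int a[ROWS][COLS];

int main(void)
{
    long sum = 0;

    /* Good locality: consecutive addresses, the working set at any moment is small. */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += a[i][j];

    /* Poor locality: each access is COLS * sizeof(int) bytes away from the last,
     * so the walk touches a different page on almost every access. */
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            sum += a[i][j];

    printf("%ld\n", sum);
    return 0;
}
```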

1.6 VM: Features - Swapping
On lack of memory:
- Find a rarely used "piece" and adjust the mapping table.
- If the "piece" has been modified, write it out to disk and save its HDD location; otherwise simply discard it.
- There is no need to swap out the complete process!
Danger: thrashing
- A "piece" just swapped out is immediately requested again.
- The system swaps in/out all the time; no real work is done.
- Thus the "piece" for swap-out has to be chosen carefully: keep track of "piece" usage ("age" of a piece).
- A "piece" used frequently lately will hopefully be used again in the near future (principle of locality!).
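A minimal sketch of the victim selection described here (the age counter and helper names are illustrative; real systems approximate this with reference bits): the resident piece unused for the longest time is chosen, and it is written to disk only if it has been modified.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PIECES 8u

typedef struct {
    bool     present;    /* is the piece in physical memory?                    */
    bool     modified;   /* has it been written to since it was brought in?     */
    uint32_t age;        /* ticks since the last reference (bigger = colder)    */
} piece_info_t;

static piece_info_t pieces[NUM_PIECES];

/* Pick the resident piece that has gone unreferenced the longest. */
static int choose_victim(void)
{
    int victim = -1;
    for (uint32_t i = 0; i < NUM_PIECES; i++)
        if (pieces[i].present && (victim < 0 || pieces[i].age > pieces[victim].age))
            victim = (int)i;
    return victim;
}

static void swap_out(int victim)
{
    if (pieces[victim].modified)
        printf("writing piece %d to disk before discarding it\n", victim);
    else
        printf("discarding piece %d (unmodified copy already on disk)\n", victim);
    pieces[victim].present = false;        /* the mapping table would be adjusted here */
}

int main(void)
{
    for (uint32_t i = 0; i < NUM_PIECES; i++)
        pieces[i] = (piece_info_t){ .present = true, .modified = (i % 2 == 0), .age = i * 3 };

    int victim = choose_victim();          /* the oldest piece wins                    */
    if (victim >= 0)
        swap_out(victim);
    return 0;
}
```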

1.6 VM: Features - Protection
- Each process has its own virtual address space; processes are invisible to each other.
- A process cannot access another process's memory.
- The MMU checks protection bits on memory access (during address mapping).
- "Pieces" can be protected from being written to, being executed, or even being read.
- The system can distinguish different protection levels (user / kernel mode).
- Write protection can be used to implement copy-on-write (see Sharing).

1.6 VM: Features - Sharing
- "Pieces" of different processes are mapped to one single "piece" of physical memory.
- Allows sharing of code (saves memory), e.g. libraries; shared code must be reentrant (non-self-modifying).
- Copy-on-write: a "piece" may be used by several processes until one writes to it (then that process gets its own copy).
- Shared memory simplifies interprocess communication (IPC).
[Figure: pieces of the virtual memories of process 1 and process 2 mapped to the same piece of physical memory.]
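On a POSIX-like system this kind of sharing can be demonstrated directly with mmap; the sketch below assumes Linux-style MAP_ANONYMOUS and shares one anonymous "piece" between a parent and its child, so a write made through one virtual address space becomes visible through the other.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One page, mapped shared so parent and child map the same physical frame. */
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                     /* child writes through its own VAS ...      */
        strcpy(buf, "hello from the child");
        return 0;
    }
    wait(NULL);                            /* ... and the parent sees the update,       */
    printf("%s\n", buf);                   /* because both VASs map the same frame      */
    munmap(buf, 4096);
    return 0;
}
```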

1.7 VM: Advantages (1)
VM supports
- Swapping: rarely used "pieces" can be discarded or swapped out; a "piece" can be swapped back in to any free piece of physical memory large enough, since the mapping unit translates the addresses.
- Protection.
- Sharing: common data or code may be shared to save memory.
A process need not be in memory as a whole
- No need for complicated overlay techniques (the OS does the job).
- A process may even be larger than all of physical memory.
- Data / code can be read from disk as needed.

1.7 VM: Advantages (2)
- Code can be placed anywhere in physical memory without relocation (addresses are mapped!).
- Increased CPU utilization: more processes can be held in memory (at least in part), so more processes are in the ready state (consider that 80% HDD I/O wait time is not uncommon).

1.8 VM: Disadvantages
- Memory requirements (mapping tables).
- Longer memory access times (mapping table lookup); can be improved using a TLB.

1.9 VM: Implementation
VM may be implemented using
- Paging
- Segmentation
- A combination of both
Note: Everything said in the first chapter still holds for the following chapters!

2. Paging
2.1 What is Paging?
2.2 Paging: Implementation
2.3 Paging: Features
2.4 Paging: Advantages
2.5 Paging: Disadvantages
2.6 Summary: Conversion of a Virtual Address

2.1 What is Paging?
- Virtual memory is divided into equal-size pages; physical memory is divided into equal-size page frames.
- A page table (one per process, one entry per page, maintained by the OS) records which frame, if any, holds each page (valid entries marked v).
- VM is usually much larger than physical memory, so usually most pages are not mapped.
- A contiguous piece of virtual memory may be mapped all over physical memory.
[Figure: pages 0-7 of a virtual memory mapped through the page table into frames 0-3 of physical memory; only some entries are valid.]

2.2 Paging: Implementation - Typical Page Table Entry
Fields: page frame #, valid (v), read (r), write (w), execute (x), modified (m), referenced (re), shared (s), caching disabled (c), super-page (su), process id (pid), guard data (gd), other.
- Valid: does the entry point to a valid page in physical memory? If not, the OS must decide whether the page is paged out or the access is an error.
- Read protect bits: may define in which mode (kernel / user) the page is accessible.
- Write protect bits: protect code pages from writing (read-only); may be used for copy-on-write.
- Execute bits: define whether the page may be executed; different bits for user/kernel mode are also possible.
- Referenced: lets the OS determine how often the page is used (can it be paged out or not?).
- Modified (dirty): must the page be written out on swap-out, or can it simply be discarded?
- Shared: (obvious).
- Caching disabled: important for machines with memory-mapped I/O, where the actual device must be read, not a cached copy.
- Others: super-pages, guarded page tables, PID.
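One common way to represent such an entry in software is a machine word holding flag bits plus the frame number; the bit positions below are assumptions for illustration, not any real architecture's format.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed 32-bit page table entry layout: low bits are flags, upper bits the frame #. */
#define PTE_VALID       (1u << 0)   /* v                      */
#define PTE_READ        (1u << 1)   /* r                      */
#define PTE_WRITE       (1u << 2)   /* w                      */
#define PTE_EXEC        (1u << 3)   /* x                      */
#define PTE_MODIFIED    (1u << 4)   /* m (dirty)              */
#define PTE_REFERENCED  (1u << 5)   /* re                     */
#define PTE_SHARED      (1u << 6)   /* s                      */
#define PTE_NOCACHE     (1u << 7)   /* c (caching disabled)   */
#define PTE_SUPERPAGE   (1u << 8)   /* su                     */
#define PTE_FRAME_SHIFT 12u         /* frame # in the upper bits */

typedef uint32_t pte_t;

static pte_t make_pte(uint32_t frame, uint32_t flags)
{
    return (frame << PTE_FRAME_SHIFT) | flags;
}

static uint32_t pte_frame(pte_t e) { return e >> PTE_FRAME_SHIFT; }

int main(void)
{
    /* A resident, readable, writable, dirty data page living in frame 0x8. */
    pte_t e = make_pte(0x8, PTE_VALID | PTE_READ | PTE_WRITE | PTE_MODIFIED);

    if (e & PTE_VALID)
        printf("frame %u, %s\n", pte_frame(e),
               (e & PTE_MODIFIED) ? "must be written back on eviction"
                                  : "can simply be discarded");
    return 0;
}
```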

2.2 Paging: Implementation - Single-Level Page Tables
- One table per process, one entry per page.
- The virtual address is split into page # and offset; the page # (times the entry size L) indexes the page table, which starts at the Page Table Base Register (PTBR); the entry yields the frame #, which is combined with the unchanged offset to form the physical address.
[Figure: a virtual address with page # 0x2 and offset 0x14 indexes the page table and yields frame # 0x8, giving the physical address frame 0x8, offset 0x14.]
Problem: page tables can get very large, e.g. a 32-bit address space with 4 KB pages gives 2^20 entries per process, i.e. 4 MB at 4 B per entry; with 64-bit addresses the table would be 16777216 GB!
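With this layout a single-level lookup amounts to a shift, a mask, and one array index; the sketch below assumes 4 KB pages, a 32-bit address, and the hypothetical entry format from the previous example. Note that the statically allocated table is exactly the 4 MB the slide warns about.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u                         /* 4 KB pages                           */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  (1u << 20)                  /* 32-bit VAS / 4 KB = 2^20 pages       */
#define PTE_VALID  1u

/* One entry per page, one table per process; this array alone is 4 MB,
 * which is the size problem mentioned on the slide. */
static uint32_t page_table[NUM_PAGES];

static int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;     /* index into the page table            */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* copied through unchanged             */
    uint32_t pte    = page_table[page];

    if (!(pte & PTE_VALID))
        return -1;                             /* page fault                           */
    *paddr = (pte & ~(PAGE_SIZE - 1u)) | offset;
    return 0;
}

int main(void)
{
    page_table[0x2] = (0x8u << PAGE_SHIFT) | PTE_VALID;   /* page 0x2 -> frame 0x8     */

    uint32_t paddr;
    if (translate(0x2014, &paddr) == 0)
        printf("0x2014 -> 0x%X\n", paddr);     /* 0x8014, as in the slide figure       */
    return 0;
}
```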

2.2 Paging: Implementation - Multilevel Page Tables
- The virtual address is split into several page-number fields (page #1, page #2, page #3) and an offset; each field indexes one level (page directory, page middle directory, page table) until the frame # is found.
- Table size can be restricted to one page, so the tables themselves can more easily be paged out (but this makes multiple page faults per access possible).
- Not all address ranges will be used (principle of locality!), so some lower-level tables need not be present (marked v=0 in the level above); this saves memory.
- Entries can also map an oversized super-page directly.
- Multilevel page tables are still not a fully satisfying solution: every translation step must be made no matter how sparsely a table is filled. They save memory but cost time.
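A two-level walk for the same 32-bit / 4 KB setup might split the page number into a 10-bit directory index and a 10-bit table index (the split and entry encoding are illustrative assumptions); address ranges that are never used simply have no lower-level table.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT 12u
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define LEVEL_BITS 10u                          /* 10 + 10 + 12 = 32 address bits       */
#define LEVEL_SIZE (1u << LEVEL_BITS)
#define PTE_VALID  1u

typedef struct { uint32_t pte[LEVEL_SIZE]; } page_table_t;
typedef struct { page_table_t *tables[LEVEL_SIZE]; } page_directory_t;

/* Walk the directory, then the table; ranges that were never touched have no
 * lower-level table at all, which is where the memory saving comes from. */
static int translate(const page_directory_t *dir, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t idx1   = vaddr >> (PAGE_SHIFT + LEVEL_BITS);
    uint32_t idx2   = (vaddr >> PAGE_SHIFT) & (LEVEL_SIZE - 1);
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    const page_table_t *pt = dir->tables[idx1];
    if (pt == NULL || !(pt->pte[idx2] & PTE_VALID))
        return -1;                              /* page fault                           */
    *paddr = (pt->pte[idx2] & ~(PAGE_SIZE - 1u)) | offset;
    return 0;
}

int main(void)
{
    static page_directory_t dir;                /* zeroed: nothing mapped yet           */
    page_table_t *pt = calloc(1, sizeof *pt);   /* only 1 of 1024 possible tables exists */
    if (pt == NULL)
        return 1;

    dir.tables[0] = pt;
    pt->pte[0x2] = (0x8u << PAGE_SHIFT) | PTE_VALID;   /* page 0x2 -> frame 0x8         */

    uint32_t paddr;
    if (translate(&dir, 0x2014, &paddr) == 0)
        printf("0x2014 -> 0x%X\n", paddr);      /* 0x8014                               */
    free(pt);
    return 0;
}
```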

2.2 Paging: Implementation - Inverted Page Tables
- One entry per frame, one table for all processes; the (PID, page #) pair is run through a hash function to find the entry, and the entry's index gives the frame #.
- Saves memory!
- But the system is not as simple as indexing into a table: the hash function must be evaluated (with possible collisions), which costs time.
- Faster context switch (one table for all processes).
- Problems with sharing!
- Problem: additional information about pages not presently in memory must still be kept somehow (e.g. normal page tables on the HDD).
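A toy hashed lookup is sketched below; the hash function and the collision handling by linear probing are illustrative choices (real systems typically use a hash anchor table with chains), but they show the key point that the index of the matching entry is the frame number.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_FRAMES 16u                          /* one entry per physical frame         */

typedef struct {
    int32_t  pid;                               /* -1 = frame free                      */
    uint32_t page;                              /* virtual page number held in this frame */
} ipte_t;

static ipte_t ipt[NUM_FRAMES];                  /* one table for all processes          */

static uint32_t hash(int32_t pid, uint32_t page)
{
    return ((uint32_t)pid * 31u + page) % NUM_FRAMES;
}

/* The index of the matching entry *is* the frame number. */
static int lookup(int32_t pid, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page = vaddr >> PAGE_SHIFT;
    uint32_t slot = hash(pid, page);

    for (uint32_t i = 0; i < NUM_FRAMES; i++) {
        uint32_t f = (slot + i) % NUM_FRAMES;
        if (ipt[f].pid == pid && ipt[f].page == page) {
            *paddr = (f << PAGE_SHIFT) | (vaddr & (PAGE_SIZE - 1));
            return 0;
        }
        if (ipt[f].pid < 0)
            break;                              /* empty slot: page not in memory       */
    }
    return -1;                                  /* page fault: per-process info on disk */
}

static void map(int32_t pid, uint32_t page)
{
    uint32_t slot = hash(pid, page);
    while (ipt[slot].pid >= 0)                  /* probe for a free frame               */
        slot = (slot + 1) % NUM_FRAMES;
    ipt[slot] = (ipte_t){ .pid = pid, .page = page };
}

int main(void)
{
    for (uint32_t f = 0; f < NUM_FRAMES; f++)
        ipt[f].pid = -1;

    map(7, 0xA);                                /* process 7, virtual page 0xA          */

    uint32_t paddr;
    if (lookup(7, 0xA014, &paddr) == 0)
        printf("PID 7: 0xA014 -> 0x%X\n", paddr);
    return 0;
}
```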

2.2 Paging: Implementation - Guarded Page Tables (1)
- A (guarded) table entry carries a guard and a guard length in addition to the frame # or page-table base.
- During translation the next bits of the virtual address are compared against the guard: if they match, those bits are stripped and translation continues directly with the entry's frame # or next table; if they do not match, a page fault is raised.
- If an intermediate table would contain only one valid entry, that table is not needed at all once the guard is in place.
- Especially interesting on systems that, aside from a TLB, provide no additional hardware support for paging (zero-level paging systems).
[Figure: a virtual address (page #1 = 0x2, page #2 = 0xA, page #3 = 0x5, offset 0x14) is translated by a guarded page directory entry whose guard stands in for the page middle directory and page table levels; a successful guard match yields the frame # (0x8) directly.]

2.2 Paging: Implementation - Guarded Page Tables (2)
- Guarded page tables are especially interesting if the hardware offers only a TLB (zero-level paging, e.g. MIPS).
- The OS then has total flexibility: it may use different sizes of pages and page tables (all powers of 2 are fine), as many levels as desired, guarded page tables, or inverted page tables.
- Optimization: a guarded table entry will usually not contain the guard and the guard length themselves, but equivalent information: an extended guard, the length of the address still to be translated, and that length minus the length of the index into the subordinate table (details: [LIE2]).
- Note that the handling of protection has to be modified accordingly.
- For didactical reasons and for reasons of time, some details and optimizations are left out here; please read up on them yourself.

2.3 Paging: Features - Prepaging
- A process requests consecutive pages (or just one); the OS loads the following pages into memory as well, expecting that they will also be needed.
- Saves time when large contiguous structures are used (e.g. huge arrays).
- Wastes memory and time in case the pages are not needed.
- May also waste time elsewhere: another process generating a page fault at the same time has to wait!
[Figure: the page referenced by the process and the following pages prepaged by the OS.]

2.3 Paging: Features - Demand Paging
- On process startup only the first page is loaded into physical memory.
- Pages are then loaded as they are referenced.
- Saves memory, but may cause frequent page faults until the process has its working set in physical memory.
- The OS may adjust its policy (demand paging / prepaging) depending on the available free physical memory and on process types and history.

2.3 Paging: Features - Cheap Memory Allocation
- No search for a large enough piece of PM is necessary.
- Any requested amount of memory is divided into pages, which can be distributed over the available frames.
- The OS keeps a linked list of free frames; if memory is requested, the first frame is simply taken.
[Figure: a process started with a 6 KB requirement (4 KB pages) gets its two pages placed into arbitrary frames taken from the linked list of free frames.]
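The free-frame list itself can be as simple as a singly linked list threaded through an array indexed by frame number (the sizes and names below are illustrative); allocation then takes the head of the list in constant time.

```c
#include <stdio.h>

#define NUM_FRAMES 8

/* next_free[f] is the free frame following f; free_head starts the list. */
static int next_free[NUM_FRAMES];
static int free_head = -1;

static void init_free_list(void)
{
    for (int f = 0; f < NUM_FRAMES; f++)
        next_free[f] = (f + 1 < NUM_FRAMES) ? f + 1 : -1;
    free_head = 0;
}

/* Allocation is O(1): take the first free frame, no search for a
 * contiguous region is ever needed. */
static int alloc_frame(void)
{
    int f = free_head;
    if (f >= 0)
        free_head = next_free[f];
    return f;                               /* -1 means physical memory is full     */
}

static void free_frame(int f)
{
    next_free[f] = free_head;
    free_head = f;
}

int main(void)
{
    init_free_list();

    /* A process starts and needs 6 KB with 4 KB pages: two frames, anywhere. */
    int f1 = alloc_frame();
    int f2 = alloc_frame();
    printf("page 1 -> frame %d, page 2 -> frame %d\n", f1, f2);

    free_frame(f1);                         /* process exits, frames go back        */
    free_frame(f2);
    return 0;
}
```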

2.3 Paging: Features - Simplified Swapping
- Paging VM system: if a process requires 3 frames, simply swap out the 3 most rarely used pages; any pages will do.
- Non-paging VM system: a process requiring one "piece" of memory cannot be satisfied by swapping out the most rarely used "pieces" if the freed space is not contiguous or large enough; the swap algorithm must try to create free pieces as big as possible (costly!).

2.4 Paging: Advantages
- Allocating memory is easy and cheap: any free frame is fine, the OS can take the first one out of the list it keeps.
- Eliminates external fragmentation: data (page frames) can be scattered all over PM, since the pages are mapped appropriately anyway.
- Allows demand paging and prepaging.
- More efficient swapping: no need for considerations about fragmentation, just swap out the page least likely to be used; without paging, demand paging might be very costly (finding a fitting piece of free memory). Equal-size blocks (pages) are also well suited for the HDD.

2.5 Paging: Disadvantages
- Longer memory access times (page table lookup); can be improved using a TLB, guarded page tables, or inverted page tables (with guarded tables, the time savings depend on how many tables are skipped).
- Memory requirements (one entry per VM page); can be improved using multilevel page tables and variable page sizes (super-pages), or a Page Table Length Register (PTLR) to limit the virtual memory size.
- Internal fragmentation; yet only an average of about half a page per contiguous address range.

2.6 Summary: Conversion of a Virtual Address
Hardware (MMU):
- Look up the virtual address in the TLB; on a hit, check the access rights.
- On a TLB miss, consult the page table; if the page is in memory, check the access rights, update the TLB, and form the physical address.
- If the access rights are violated: on a copy-on-write page, copy the page and update the page table; otherwise raise a protection fault (exception to the process).
- If the page is not in memory, raise a page fault. The hardware hands control to the OS by interrupting the control flow of the process through faults (exceptions / interrupts).
OS (page fault handling):
- If the reference is not legal, deliver an exception to the process.
- If memory is full, swap out a page first.
- Issue an HDD read request to bring the page in and put the process into the blocking state.
- When the HDD I/O completes (interrupt), update the page table and put the process back into the ready state.
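The same flow can be written out as code; every helper below (tlb_lookup, bring_page_in_from_disk, and so on) is a stub standing in for hardware or kernel machinery, so this is only an outline of the control flow, not an implementation of any real system.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stubbed hardware/OS hooks so the control flow compiles and runs. */
static bool tlb_lookup(uint32_t vpage, uint32_t *frame)        { (void)vpage; (void)frame; return false; }
static bool page_table_lookup(uint32_t vpage, uint32_t *frame) { *frame = vpage % 8; return true; }
static bool access_allowed(uint32_t vpage, bool write)         { (void)vpage; (void)write; return true; }
static bool is_copy_on_write(uint32_t vpage)                   { (void)vpage; return false; }
static bool reference_legal(uint32_t vpage)                    { return vpage < 1024; }
static bool memory_full(void)                                  { return false; }
static void swap_out_a_page(void)                              { }
static uint32_t bring_page_in_from_disk(uint32_t vpage)        { return vpage % 8; }  /* process blocks */
static uint32_t copy_page(uint32_t frame)                      { return frame; }      /* copy on write  */
static void update_tlb(uint32_t vpage, uint32_t frame)         { (void)vpage; (void)frame; }

/* One memory access following the flow summarized on this slide. */
static int access_vaddr(uint32_t vaddr, bool write, uint32_t *paddr)
{
    uint32_t vpage = vaddr >> 12, offset = vaddr & 0xFFF, frame;

    if (!tlb_lookup(vpage, &frame) && !page_table_lookup(vpage, &frame)) {
        /* Page fault: the OS takes over. */
        if (!reference_legal(vpage))
            return -1;                           /* exception to the process            */
        if (memory_full())
            swap_out_a_page();
        frame = bring_page_in_from_disk(vpage);  /* process blocks until the I/O completes */
    }
    if (!access_allowed(vpage, write)) {
        if (write && is_copy_on_write(vpage))
            frame = copy_page(frame);            /* ... and the page table is updated    */
        else
            return -2;                           /* protection fault                     */
    }
    update_tlb(vpage, frame);
    *paddr = (frame << 12) | offset;
    return 0;
}

int main(void)
{
    uint32_t paddr;
    if (access_vaddr(0x2014, false, &paddr) == 0)
        printf("0x2014 -> 0x%X\n", paddr);
    return 0;
}
```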

3. Segmentation
3.1 What is Segmentation?
3.2 Segmentation: Advantages
3.3 Segmentation: Disadvantages

3.1 What is Segmentation?
- Virtual memory is divided into segments of different sizes, e.g. Seg 1 (code), Seg 2 (data), Seg 3 (stack).
- A virtual address consists of a segment # and an offset.
- The MMU, using the segment table base and length registers (STBR / STLR), looks up the segment table entry, which holds base, limit, and other bits (as in paging: valid, modified, protection, etc.).
- If offset < limit, the physical address is segment base + offset; otherwise a memory access fault is raised.
- The segments end up scattered across physical memory, with external fragmentation between them.
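A sketch of the segment-table lookup with the limit check (the segment sizes and base addresses are made up for illustration):

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_SEGMENTS 4u

typedef struct {
    uint32_t base;     /* where the segment starts in physical memory           */
    uint32_t limit;    /* its length; offsets must stay below this              */
    /* as in paging: valid, modified, protection bits, ... would go here        */
} segment_entry_t;

static segment_entry_t segment_table[NUM_SEGMENTS] = {
    [1] = { .base = 0x8000, .limit = 0x2000 },   /* Seg 1 (code)  */
    [2] = { .base = 0x1000, .limit = 0x0800 },   /* Seg 2 (data)  */
    [3] = { .base = 0xF000, .limit = 0x0400 },   /* Seg 3 (stack) */
};

/* offset < limit?  yes: base + offset, no: memory access fault. */
static int translate(uint32_t seg, uint32_t offset, uint32_t *paddr)
{
    if (seg >= NUM_SEGMENTS || offset >= segment_table[seg].limit)
        return -1;                               /* memory access fault          */
    *paddr = segment_table[seg].base + offset;
    return 0;
}

int main(void)
{
    uint32_t paddr;
    if (translate(2, 0x14, &paddr) == 0)
        printf("segment 2, offset 0x14 -> 0x%X\n", paddr);   /* 0x1014           */
    if (translate(2, 0x900, &paddr) != 0)
        printf("segment 2, offset 0x900 -> fault (offset exceeds limit)\n");
    return 0;
}
```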

3.2 Segmentation: Advantages
As opposed to paging:
- No internal fragmentation (but: external fragmentation).
- May save memory if segments are very small and should not be combined into one page (e.g. for reasons of protection).
- Segment tables have only one entry per actual segment, as opposed to one per page of VM; since the average segment size is much larger than the average page size, this means less overhead (smaller tables).
- Array boundaries can be checked by placing the array into a fitting segment.
- Swapping: code can be placed anywhere without relocating it again, but finding free memory is a problem; and because the average segment size is much larger than the average page size, the fragmentation problem gets worse.

3.3 Segmentation: Disadvantages
- External fragmentation.
- Costly memory management algorithms: segmentation must find a free memory area big enough (search!), whereas paging just keeps a list of free pages, any of which is fine (take the first!). This is the dynamic storage allocation problem: first-fit or best-fit algorithms, and compaction, may be used (there is no relocation problem).
- Segments of unequal size are not as well suited for swapping; equal-size blocks are better suited for the HDD.
- No linear address space.

4. Combined Segmentation and Paging (CoSP)
4.1 What is CoSP?
4.2 CoSP: Advantages
4.3 CoSP: Disadvantages

4.1 What is CoSP?
- A virtual address consists of a segment #, one or more page numbers (page #1, page #2), and an offset.
- The segment # indexes the segment table; its entry (base, limit) points to a page directory whose size is limited by the segment limit.
- The page numbers then index the page directory and page table as in pure paging; the resulting page frame # plus the offset gives the physical address. Not all lower-level tables need be present.
- The segment tables themselves may also be paged (as in Multics): the segment number is broken into a page number and a segment-table offset; the overall size is limited by the size of the segment number.
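Putting the two together, a sketch with a single paging level per segment (the slide shows two levels; one is used here for brevity, and all sizes are illustrative): the segment limit bounds the size of the per-segment page table.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define PTE_VALID  1u

typedef struct {
    uint32_t *page_table;   /* base: the per-segment page table                     */
    uint32_t  num_pages;    /* limit: its size is bounded by the segment limit      */
} segment_entry_t;

/* Segment # selects the (small) page table, page # indexes it, offset is kept. */
static int translate(const segment_entry_t *segs, uint32_t nsegs,
                     uint32_t seg, uint32_t page, uint32_t offset, uint32_t *paddr)
{
    if (seg >= nsegs || page >= segs[seg].num_pages)
        return -1;                               /* segment limit exceeded             */
    uint32_t pte = segs[seg].page_table[page];
    if (!(pte & PTE_VALID))
        return -2;                               /* page fault                         */
    *paddr = (pte & ~(PAGE_SIZE - 1u)) | offset;
    return 0;
}

int main(void)
{
    /* One data segment of 3 pages; only its own 3 entries exist, not 2^20. */
    uint32_t data_pt[3] = { 0 };
    data_pt[1] = (0x8u << PAGE_SHIFT) | PTE_VALID;   /* page 1 -> frame 0x8            */

    segment_entry_t segs[2] = { { NULL, 0 }, { data_pt, 3 } };

    uint32_t paddr;
    if (translate(segs, 2, 1, 1, 0x14, &paddr) == 0)
        printf("seg 1, page 1, offset 0x14 -> 0x%X\n", paddr);   /* 0x8014             */
    return 0;
}
```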

4.2 CoSP: Advantages
- Reduces memory usage as opposed to pure paging: the page table size is limited by the segment size, and the segment table has only one entry per actual segment.
- Simplifies handling protection and sharing of larger modules (define them as segments).
- Most advantages of paging still hold: simple memory allocation, no external fragmentation, support for swapping, demand paging, prepaging, etc.
- Especially well suited for prepaging: segment limits tell the memory management system where to stop.

4.3 CoSP: Disadvantages
- Internal fragmentation; yet only an average of about half a page per contiguous address range (e.g. a process requesting a 6 KB address range with 4 KB pages wastes 2 KB of its second page).
- No linear address space.

The End