COS 318: Operating Systems Virtual Memory Design Issues

Design Issues
 Thrashing and working set
 Backing store
 Simulate certain PTE bits
 Pin/lock pages
 Zero pages
 Shared pages
 Copy-on-write
 Distributed shared memory
 Virtual memory in Unix and Linux
 Virtual memory in Windows 2000

Virtual Memory Design Implications
 Revisit design goals
  Protection: isolate faults among processes
  Virtualization: use disk to extend physical memory; make virtualized memory user friendly (from 0 to high addresses)
 Implications
  TLB overhead and TLB entry management
  Paging between DRAM and disk
 VM access time
  Access time = h × memory access time + (1 − h) × disk access time
  E.g. suppose memory access time = 100 ns and disk access time = 10 ms; if h = 90%, the VM access time is about 1 ms!
  What's the worst case?
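To make the arithmetic concrete, here is a minimal sketch in C of the effective-access-time formula above, using the slide's example numbers; the function name is illustrative.

    #include <stdio.h>

    /* Effective VM access time = h * t_mem + (1 - h) * t_disk (in ns). */
    static double vm_access_time_ns(double h) {
        const double mem_ns  = 100.0;     /* 100 ns DRAM access           */
        const double disk_ns = 10.0e6;    /* 10 ms disk access, in ns     */
        return h * mem_ns + (1.0 - h) * disk_ns;
    }

    int main(void) {
        /* h = 90%: about 1,000,090 ns, i.e. roughly 1 ms.
         * Worst case h = 0: every access misses, about 10 ms. */
        printf("%.0f ns\n", vm_access_time_ns(0.90));
        return 0;
    }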

Thrashing
 Thrashing
  Paging in and paging out all the time
  Processes block, waiting for pages to be fetched from disk
 Reasons
  A process requires more physical memory than the system has
  A process does not reuse memory well
  A process reuses memory, but it does not fit
  Too many processes, even though they individually fit
 Solution: working set (last lecture)
  Pages referenced by a process in the last T seconds
  Two design questions
   Which working sets should be in memory?
   How to allocate pages?
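As a rough illustration of the working-set definition (pages referenced in the last T time units), the sketch below periodically scans per-page reference bits; the structures, constants, and names are assumptions, not a real kernel interface.

    #include <stdbool.h>
    #include <stdint.h>

    #define NPAGES  1024   /* hypothetical number of virtual pages         */
    #define T_TICKS 4      /* working-set window, measured in scan ticks   */

    struct page_info {
        bool     referenced;   /* reference bit, set by hardware (assumed) */
        uint64_t last_use;     /* tick of the most recent reference        */
    };

    /* Called once per scan tick: a page is in the working set if it was
     * referenced within the last T_TICKS ticks. */
    int working_set_size(struct page_info p[], uint64_t now) {
        int ws = 0;
        for (int i = 0; i < NPAGES; i++) {
            if (p[i].referenced) {        /* touched since the last scan   */
                p[i].last_use   = now;
                p[i].referenced = false;  /* clear for the next interval   */
            }
            if (now - p[i].last_use <= T_TICKS)
                ws++;                     /* still inside the window       */
        }
        return ws;
    }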

Working Set: Fit in Memory
 Maintain two groups of processes
  Active: working set loaded
  Inactive: working set intentionally not loaded
 Two schedulers
  A short-term scheduler schedules processes
  A long-term scheduler decides which processes are active and which are inactive, such that the active working sets fit in memory
 A key design point
  How to decide which processes should be inactive
  A typical method is to use a threshold on waiting time

Working Set: Global vs. Local Page Allocation
 The simplest is global allocation only
  Pros: pool sizes are adaptable
  Cons: too adaptable, little isolation
 A balanced allocation strategy
  Each process has its own pool of pages
  Paging allocates from the process's own pool and replaces from its own working set
  Use a "slow" mechanism to change the allocations to each pool while providing isolation
 Do global and local always make sense?
 Design questions: What is "slow"? How big is each pool? When to migrate?

Backing Store
 Swap space
  Separate partition on disk to handle swap (often a separate disk)
  When a process is created, allocate swap space for it (keep the disk address in the process table entry)
  Need to load or copy executables to the swap space, or page out as needed
 Dealing with process space growth
  Separate swap areas for text, data, and stack, each with > 1 disk chunk
  No pre-allocation; just allocate swap page by page as needed
 Mapping pages to the swap portion of disk
  Fixed locations on disk for pages (easy to compute, no disk address per page), e.g. shadow pages on disk for all pages
  Select disk pages on demand as needed (needs a disk address per page)
 What if no space is available on the swap partition?
 Are text pages different from data in this regard?
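A small sketch of the "fixed locations on disk" option: with shadow pages the swap address is a pure function of the virtual page number, so no per-page disk address needs to be stored (all names here are illustrative).

    #include <stdint.h>

    #define PAGE_SIZE 4096u

    /* Shadow-page mapping: each virtual page has a fixed slot in the
     * process's swap area, computed from its virtual page number. */
    static uint64_t swap_offset(uint64_t swap_base, uint64_t vpn) {
        return swap_base + vpn * PAGE_SIZE;
    }

    /* The on-demand alternative instead records a disk address per page
     * (in the PTE or a parallel table) the first time the page is evicted. */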

Revisit Address Translation
 Map to a page frame or to disk
  If the valid bit = 1, the entry maps to pp# (a physical page number)
  If the valid bit = 0, the entry maps to dp# (a disk page number)
 Page out
  Invalidate the page table entry and the TLB entry
  Copy the page to disk
  Set the disk page number in the PTE
 Page in
  Find an empty page frame (may trigger replacement)
  Copy the page from disk
  Set the page number in the PTE and TLB entry and make them valid
[Figure: a process executing code, its TLB, and a page table whose entries map virtual pages (vp#) either to physical pages (pp#) or to disk pages (dp#)]
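A hedged pseudocode sketch of the page-out and page-in steps listed above; the PTE layout and the helpers (tlb_invalidate, disk_write, disk_read, alloc_frame) are stand-ins for illustration, not a real kernel API.

    struct pte {
        unsigned valid : 1;          /* 1: maps to pp#, 0: maps to dp#      */
        unsigned num   : 31;         /* physical frame or disk page number  */
    };

    void page_out(struct pte *e, unsigned dp) {
        unsigned pp = e->num;
        e->valid = 0;                /* invalidate the PTE ...              */
        tlb_invalidate(e);           /* ... and the TLB entry               */
        disk_write(dp, pp);          /* copy the frame to disk              */
        e->num = dp;                 /* remember the disk page number       */
    }

    void page_in(struct pte *e) {
        unsigned pp = alloc_frame(); /* may trigger replacement             */
        disk_read(e->num, pp);       /* copy the page back from disk        */
        e->num   = pp;               /* point the PTE at the frame          */
        e->valid = 1;                /* valid again; TLB refills on use     */
    }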

Example: x86 Paging Options
 Flags
  PG flag (bit 31 of CR0): enable page translation
  PSE flag (bit 4 of CR4): 0 for 4KB pages only, 1 to enable large pages
  PAE flag (bit 5 of CR4): when PSE = 1, 0 gives 4MB large pages and 1 gives 2MB large pages while extending the physical address space to 36 bits
 2MB and 4MB pages are mapped directly from directory entries
 4KB and 4MB pages can be mixed
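For reference, the bit positions named above can be written as masks; this is only a sketch of the flag layout, not code for actually switching paging modes.

    #define CR0_PG  (1u << 31)  /* CR0 bit 31: enable page translation                 */
    #define CR4_PSE (1u << 4)   /* CR4 bit 4:  enable large pages                      */
    #define CR4_PAE (1u << 5)   /* CR4 bit 5:  with PSE set, large pages become 2MB
                                   and physical addresses extend to 36 bits            */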

Pin (or Lock) Page Frames
 When do you need it?
  When I/O is DMA'ing to memory pages
  If the process doing I/O is suspended and another process pages the I/O buffer page out, the data could be overwritten
 How to design the mechanism?
  A data structure to remember all pinned pages
  The paging algorithm checks this data structure when deciding on page replacement
  Special calls to pin and unpin certain pages
 How would you implement the pin/unpin calls?
 If the entire kernel is in physical memory, do we still need these calls?
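A minimal sketch of the pin/unpin bookkeeping described above, using an assumed per-frame pin count that the replacement algorithm consults; names and sizes are illustrative.

    #include <stdbool.h>

    #define NFRAMES 4096
    static unsigned pin_count[NFRAMES];   /* 0 means the frame may be replaced */

    void pin_frame(unsigned pfn)   { pin_count[pfn]++; }
    void unpin_frame(unsigned pfn) { if (pin_count[pfn] > 0) pin_count[pfn]--; }

    /* The page replacement algorithm skips pinned frames. */
    bool is_replaceable(unsigned pfn) { return pin_count[pfn] == 0; }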

Zero Pages
 Zeroing pages
  Initialize pages with 0's
 How to implement?
  On the first page fault on a data page or stack page, zero it
  Or have a special thread zeroing pages
 Can you get away without zeroing pages?
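Both implementation options can be sketched as below; frame_address, take_free_frame, and add_to_zeroed_pool are assumed helpers, not a real interface.

    #include <string.h>

    #define PAGE_SIZE 4096

    /* Option 1: zero the frame on the first fault on a data or stack page. */
    void zero_on_first_touch(unsigned pfn) {
        memset(frame_address(pfn), 0, PAGE_SIZE);
    }

    /* Option 2: a background thread zeroes free frames during idle time,
     * so the fault path can hand out pre-zeroed frames without the memset. */
    void zero_thread(void) {
        for (;;) {
            unsigned pfn = take_free_frame();
            memset(frame_address(pfn), 0, PAGE_SIZE);
            add_to_zeroed_pool(pfn);
        }
    }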

Shared Pages
 PTEs from two processes map to the same physical pages
  What are the use cases?
 APIs
  Shared memory calls
 Implementation issues
  Destroying a process with shared pages?
  Paging in and paging out shared pages
  Pinning and unpinning shared pages
[Figure: two page tables whose entries point to the same physical pages]
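At user level, the "shared memory calls" typically boil down to POSIX shm_open plus mmap with MAP_SHARED: processes that map the same object end up with PTEs pointing at the same physical pages. A minimal example (error checking omitted; the object name is illustrative):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Create or open a named shared-memory object and size it. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);

        /* MAP_SHARED: writes become visible to every process mapping it. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        p[0] = 'x';

        munmap(p, 4096);
        close(fd);
        shm_unlink("/demo_shm");   /* remove the name when finished */
        return 0;
    }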

Copy-On-Write
 A technique to avoid copying all pages to run a large process
 Method
  The child's address space uses the same mappings as the parent's
  Make all pages read-only
  Make the child process ready
  On a read, nothing happens
  On a write, a protection fault is generated: map to a new page frame, copy the page over, and restart the instruction
  Only written pages are copied
 Issues
  How to destroy an address space?
  How to page in and page out?
  How to pin and unpin?
[Figure: parent and child page tables sharing read-only physical pages; a write fault copies the faulted page into a new frame]
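A hedged sketch of the write-fault path for copy-on-write as described above; the PTE fields, reference counts, and helpers are assumptions for illustration.

    struct cow_pte {
        unsigned valid    : 1;
        unsigned writable : 1;
        unsigned frame    : 20;
    };

    /* Assumed helpers: refcount_of(), refcount_dec(), alloc_frame(), copy_frame(). */
    void cow_write_fault(struct cow_pte *e) {
        unsigned old = e->frame;
        if (refcount_of(old) == 1) {
            e->writable = 1;               /* last sharer: just re-enable writes */
        } else {
            unsigned copy = alloc_frame(); /* may trigger replacement            */
            copy_frame(copy, old);         /* copy only the faulted page         */
            refcount_dec(old);
            e->frame    = copy;
            e->writable = 1;
        }
        /* The faulting instruction is then restarted. */
    }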

Distributed Shared Memory
 Run a shared memory program on a cluster of computers
 Method
  Multiple address spaces are mapped onto a "shared virtual memory"
  Page access bits are set according to coherence rules: one exclusive writer or N readers
  A read fault invalidates the writer, makes the page read-only, and copies the page
  A write fault invalidates the other writer or all readers and copies the page
 Issues
  Thrashing
  Copy page overhead
[Figure: virtual address spaces on multiple machines mapped onto a shared virtual memory]
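Under the coherence rule above (one exclusive writer or N readers), the per-page fault handling on each node can be sketched roughly as follows; the directory, messaging, and helper names are all assumptions.

    enum dsm_state { DSM_INVALID, DSM_READ_ONLY, DSM_READ_WRITE };

    void dsm_read_fault(int page) {
        downgrade_writer(page);        /* current writer (if any) becomes read-only */
        fetch_copy(page);              /* bring a copy of the page to this node     */
        set_state(page, DSM_READ_ONLY);
    }

    void dsm_write_fault(int page) {
        invalidate_all_copies(page);   /* invalidate the other writer or all readers */
        fetch_copy(page);
        set_state(page, DSM_READ_WRITE);
    }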

Address Space in Unix
 Stack
 Data
  Uninitialized: BSS (Block Started by Symbol)
  Initialized
  brk(addr) to grow or shrink
 Text: read-only
 Mapped files
  Map a file into memory: mmap(addr, len, prot, flags, fd, offset)
  munmap(addr, len)
[Figure: address space layout with text, data, BSS, mapped file, and stack regions]
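A small user-level example of the mapped-file calls listed above (error checking omitted; the file name is illustrative):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("input.dat", O_RDONLY);
        struct stat st;
        fstat(fd, &st);

        /* Map the whole file read-only; pages fault in on first touch. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        char c = p[0];                   /* demand-pages the first page */
        (void)c;

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }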

Virtual Memory in BSD 4.x
 Physical memory partition
  Core map (pinned): everything about page frames
  Kernel (pinned): the rest of the kernel memory
  Frames: for user processes
 Page replacement
  Run the page daemon until there are enough free pages
  Early BSD used the basic clock algorithm (FIFO with second chance)
  Later BSD used the two-handed clock algorithm
  The swapper runs if the page daemon cannot get enough free pages
   It looks for processes that have been idle for 20 seconds or more, then at the 4 largest processes
   It also checks when a swapped-out process should be swapped back in
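A compact sketch of the two-handed clock idea: the front hand clears reference bits, and the back hand, a fixed distance behind, reclaims frames that are still unreferenced when it reaches them (the frame table and hand gap are assumptions).

    #include <stdbool.h>

    #define NFRAMES  4096
    #define HAND_GAP 512                      /* fixed distance between the hands */

    struct frame { bool in_use; bool referenced; };
    static struct frame frames[NFRAMES];

    /* One step of the two-handed clock; returns a reclaimable frame or -1. */
    int two_handed_clock_step(int front) {
        int back = (front + NFRAMES - HAND_GAP) % NFRAMES;

        frames[front].referenced = false;     /* front hand clears the bit */

        /* If the frame was not referenced while the hands swept the gap,
         * the back hand can reclaim it. */
        if (frames[back].in_use && !frames[back].referenced)
            return back;
        return -1;
    }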

Virtual Memory in Linux
 Linux address space on 32-bit machines
  3GB user space
  1GB kernel (invisible at user level)
 Backing store
  Text segments and mapped files use the file on disk as backing store
  Other segments get backing store on demand (paging files or swap area)
  Pages are allocated in the backing store only when needed
 Copy-on-write for forking off processes
 Multi-level paging; supports jumbo pages (4MB)
 Replacement
  Keep a certain number of pages free
  Clock algorithm on the paging cache and file buffer cache
  Clock algorithm on unused shared pages
  Modified clock algorithm on memory of user processes

Address Space in Windows 2K/XP
 Win2K user address space
  Upper 2GB for the kernel (shared)
  Lower 2GB minus 256MB for user code and data (Advanced Server uses 3GB instead)
  The 256MB contains system data (counters and stats) for the user to read
  64KB guard regions at both ends
 Virtual pages
  Page size: 4KB for x86, 8 or 16KB for IA64
  States
   Free: not in use; references cause a fault
   Committed: mapped and in use
   Reserved: not mapped but allocated
[Figure: 4GB address space with guard regions, user area, system data, and the upper 2GB kernel region with page tables]

Backing Store in Windows 2K/XP
 Backing store allocation
  Win2K delays backing store page assignments until paging out
  There are up to 16 paging files, each with an initial and a maximum size
 Memory mapped files
  Multiple processes can share mapped files
  Used to implement copy-on-write

Paging in Windows 2K/XP
 Each process has a working set with
  A min size, with a default initial value
  A max size, with a default initial value
 On a page fault
  If the working set is < min, add a page to the working set
  If the working set is > max, replace a page in the working set
 If a process has a lot of paging activity, increase its max
 The working set manager maintains a large number of free pages
  It examines processes in order of process size and idle time
  If a working set is < min, do nothing
  Otherwise, page out the pages with the highest "non-referenced" counters in a working set (on uniprocessors), or the oldest pages in a working set (on multiprocessors)
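A rough sketch of the per-process working-set policy described above; the fields, helper names, and the behavior between min and max are assumptions for illustration.

    struct proc_ws { int size, min, max; };

    void on_page_fault(struct proc_ws *ws) {
        if (ws->size < ws->min) {
            add_frame(ws);            /* below min: simply grow the working set    */
            ws->size++;
        } else if (ws->size > ws->max) {
            replace_frame(ws);        /* above max: replace within the working set */
        } else {
            add_frame(ws);            /* in between: grow (policy detail assumed)  */
            ws->size++;
        }
    }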

Summary
 Must consider many issues
  Global and local replacement strategies
  Management of backing store
  Primitive operations
   Pin/lock pages
   Zero pages
   Shared pages
   Copy-on-write
 Shared virtual memory can be implemented using page access bits
 Real system designs are complex