Lecture 8 Memory Management

Paging
Too slow -> TLB
Too big -> multi-level page table
What if all that stuff does not fit into memory?

How Many Physical Accesses?
Assume a linear page table.
Assume 256-byte pages, 16-bit addresses.
Assume the ASID of the current process is 211.

0xAA10: movl 0x1111, %edi
0xBB13: addl $0x3, %edi
0x0519: movl %edi, 0xFF10

Valid  VPN   PFN   ASID  Prot
1      0xBB  0x91  211   ?
1      0xFF  0x23  211   ?
1      0x05  0x91  112   ?
0      0x05  0x12  211   ?

How Many Physical Accesses?
Assume a 3-level page table.
Assume 256-byte pages, 16-bit addresses.
Assume the ASID of the current process is 211.

0xAA10: movl 0x1111, %edi
0xBB13: addl $0x3, %edi
0x0519: movl %edi, 0xFF10

Valid  VPN   PFN   ASID  Prot
1      0xBB  0x91  211   ?
1      0xFF  0x23  211   ?
1      0x05  0x91  112   ?
0      0x05  0x12  211   ?
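For both versions of this exercise, the address split follows directly from the stated parameters: 256-byte pages give an 8-bit offset, leaving an 8-bit VPN in a 16-bit address. A minimal Python sketch of that split (the helper name and the printed loop are just illustrative, not part of the exercise):

PAGE_SIZE = 256      # bytes per page, from the exercise
OFFSET_BITS = 8      # log2(256)

def split_va(va):
    """Split a 16-bit virtual address into (VPN, offset) for 256-byte pages."""
    return va >> OFFSET_BITS, va & (PAGE_SIZE - 1)

# Addresses touched by the instruction stream above (fetch and data addresses)
for va in (0xAA10, 0x1111, 0xBB13, 0x0519, 0xFF10):
    vpn, off = split_va(va)
    print(f"VA 0x{va:04X} -> VPN 0x{vpn:02X}, offset 0x{off:02X}")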

Page Table Size
The page size is increased by 4x, but the total sizes of physical and virtual memory are unchanged. PTEs are originally 20 bits.
(a) Does the number of virtual pages increase or decrease? By how much?
(b) By what factor does the number of PTEs (rows in the page table) increase or decrease?
(c) Does the number of physical pages increase or decrease? By how much?
(d) How many more (or fewer) bits are needed to store a physical page number?
(e) By what factor does the size of a PTE (columns in the page table) increase or decrease?
(f) By what factor does the size of the page table increase or decrease?

Two-level PT Translations
Assume a 12-bit virtual address.
(a) 0x123
(b) 0xCBA
(c) 0x777

The Memory Hierarchy
Registers
Cache
Main Memory
Secondary Storage

Swap Space
Reserved disk space for moving pages back and forth

How to know where a page lives? The Present Bit
Now Proc 1 accesses VPN 0, …

Translation Steps
H/W: for each mem reference:
    extract VPN from VA
    check TLB for VPN
    TLB hit:
        build PA from PFN and offset
        fetch PA from memory
    TLB miss:
        fetch PTE
        if (!valid): exception [segfault]
        else if (!present): exception [page fault, or page miss]
        else: extract PFN, insert in TLB, retry
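A minimal sketch of these steps in Python, assuming 256-byte pages as in the earlier exercises; the TLB and page-table structures and the exception names are illustrative, not a real hardware interface:

PAGE_SIZE = 256
OFFSET_BITS = 8

class SegFault(Exception): pass
class PageFault(Exception): pass

def translate(va, tlb, page_table):
    """Follow the translation steps above: TLB first, then the page table."""
    vpn, offset = va >> OFFSET_BITS, va & (PAGE_SIZE - 1)
    if vpn in tlb:                                # TLB hit
        return (tlb[vpn] << OFFSET_BITS) | offset
    pte = page_table.get(vpn)                     # TLB miss: fetch the PTE
    if pte is None or not pte["valid"]:
        raise SegFault(hex(va))
    if not pte["present"]:
        raise PageFault(vpn)                      # OS page-fault handler runs here
    tlb[vpn] = pte["pfn"]                         # insert into the TLB, then retry
    return translate(va, tlb, page_table)

pt = {0x05: {"valid": True, "present": True, "pfn": 0x23}}
print(hex(translate(0x0519, {}, pt)))             # -> 0x2319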

The Page Fault
The OS handles the page fault, regardless of whether the TLB is hardware-managed or OS-managed.
Page faults go to disk and are slow, so there is no need to handle them in hardware.
Page faults are complicated to handle, so it is easier to do in the OS.
Where is the page on disk? Store the disk address in the PTE.

Page-Fault Handler (OS)
PFN = FindFreePage()
if (PFN == -1)
    PFN = EvictPage()             <- policy
DiskRead(PTE.DiskAddr, PFN)       <- blocking
PTE.present = 1
PTE.PFN = PFN
retry instruction
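A self-contained Python sketch of the same handler logic; the toy frame table, backing store, and random placeholder policy are illustrative assumptions, not how a real OS structures this:

import random

NUM_FRAMES = 4
memory = {}                        # pfn -> page contents (toy frame table)
disk = {0xA0: b"page contents"}    # hypothetical backing store, keyed by disk address

def find_free_page():
    free = set(range(NUM_FRAMES)) - set(memory)
    return free.pop() if free else None

def evict_page():
    victim = random.choice(list(memory))   # placeholder replacement policy
    del memory[victim]
    return victim

def handle_page_fault(pte):
    """Mirror the pseudocode: find or free a frame, read the page in, fix the PTE."""
    pfn = find_free_page()
    if pfn is None:
        pfn = evict_page()                      # <- the replacement policy decides
    memory[pfn] = disk[pte["disk_addr"]]        # <- blocking disk read fills the frame
    pte["present"], pte["pfn"] = True, pfn      # the faulting instruction is retried next

pte = {"valid": True, "present": False, "pfn": None, "disk_addr": 0xA0}
handle_page_fault(pte)
print(pte)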

When Replacements Really Occur
High watermark (HW) and low watermark (LW)
A background thread (the swap daemon or page daemon):
    frees pages when fewer than LW pages are free
    clusters or groups a number of pages (writing them out at once)
The page-fault handler leverages this

Average Memory Access Time (AMAT)
Hit% = portion of accesses that go straight to RAM
Miss% = portion of accesses that go to disk first
Tm = time for a memory access
Td = time for a disk access
AMAT = (Hit% * Tm) + (Miss% * Td)
If memory-access time is 100 nanoseconds and disk-access time is 10 milliseconds, what is the AMAT when the hit rate is (a) 50% (b) 98% (c) 99% (d) 100%?
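A quick sketch that plugs the slide's numbers into the formula above (all times converted to nanoseconds):

TM = 100              # memory access time: 100 ns
TD = 10_000_000       # disk access time: 10 ms, in ns

def amat(hit_rate):
    """AMAT = (Hit% * Tm) + (Miss% * Td), in nanoseconds."""
    return hit_rate * TM + (1 - hit_rate) * TD

for h in (0.50, 0.98, 0.99, 1.00):
    print(f"hit rate {h:.0%}: AMAT = {amat(h):,.0f} ns")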

The Optimal Replacement Policy
Replace the page that will be accessed furthest in the future
Given the trace 0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1, what is the hit rate? Assume a cache that holds three pages.
Three C's: types of cache misses
    compulsory miss
    capacity miss
    conflict miss
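A small simulation sketch of the optimal policy on the trace above; ties among pages that are never used again are broken arbitrarily:

def opt_hit_rate(trace, frames):
    """Simulate Belady's optimal policy: evict the page used furthest in the future."""
    cache, hits = set(), 0
    for i, page in enumerate(trace):
        if page in cache:
            hits += 1
            continue
        if len(cache) == frames:
            future = trace[i + 1:]
            # Evict the resident page whose next use is furthest away (or never comes).
            victim = max(cache, key=lambda p: future.index(p) if p in future else float("inf"))
            cache.remove(victim)
        cache.add(page)
    return hits / len(trace)

print(opt_hit_rate([0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1], frames=3))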

FIFO
Given the trace 0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1, what is the hit rate? Assume a cache that holds three pages.
Belady’s Anomaly (see the FIFO sketch after this slide):
    1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: hit rate with a 3-page cache?
    1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: hit rate with a 4-page cache?
Random replacement
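A FIFO simulation sketch; running it on the anomaly trace with 3 and then 4 frames shows how the hit rate can drop even though the cache got bigger:

from collections import deque

def fifo_hit_rate(trace, frames):
    """Simulate FIFO replacement and return the hit rate."""
    queue, resident, hits = deque(), set(), 0
    for page in trace:
        if page in resident:
            hits += 1
            continue
        if len(queue) == frames:
            resident.discard(queue.popleft())   # evict the oldest page
        queue.append(page)
        resident.add(page)
    return hits / len(trace)

anomaly = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_hit_rate(anomaly, 3), fifo_hit_rate(anomaly, 4))   # more frames, fewer hits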

Let’s Consider History
Principle of locality: spatial and temporal
LRU evicts the least-recently-used page
LFU evicts the least-frequently-used page
Given the trace 0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1, what is the LRU hit rate? Assume a cache that holds three pages.
Also: MRU, MFU
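An LRU sketch for the same trace; an OrderedDict keeps pages in recency order, so the least-recently-used page is always at the front:

from collections import OrderedDict

def lru_hit_rate(trace, frames):
    """Simulate LRU replacement with an ordered map (front = least recently used)."""
    cache, hits = OrderedDict(), 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # mark as most recently used
            continue
        if len(cache) == frames:
            cache.popitem(last=False)      # evict the least recently used page
        cache[page] = True
    return hits / len(trace)

print(lru_hit_rate([0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1], frames=3))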

Implementing Historical Algorithms
Need to track every page access; an accurate implementation is expensive
Approximating LRU:
    add a reference bit, set by hardware upon access and cleared by the OS
    the clock algorithm
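A sketch of the clock algorithm: a hand sweeps over the frames, clearing reference bits until it finds a page whose bit is already 0, and that page becomes the victim (the frame layout here is just for illustration):

def clock_evict(frames, hand):
    """frames: list of dicts with a 'vpn' and a 'ref' (reference) bit.
    Returns (victim frame index, new hand position)."""
    while True:
        if frames[hand]["ref"] == 0:           # not recently used: evict it
            return hand, (hand + 1) % len(frames)
        frames[hand]["ref"] = 0                # recently used: give it a second chance
        hand = (hand + 1) % len(frames)

frames = [{"vpn": 0xAA, "ref": 1}, {"vpn": 0xBB, "ref": 0}, {"vpn": 0xCC, "ref": 1}]
victim, hand = clock_evict(frames, hand=0)
print("evict frame", victim, "holding VPN", hex(frames[victim]["vpn"]))   # frame 1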

Other Factors
Assume a page is both in RAM and on disk.
Do we have to write it back to disk on eviction?
    Not if the page is clean; track this with a dirty bit

Other VM Policies
Demand paging: don't even load pages into memory until they are first used
    Less I/O needed
    Less memory needed
    Faster response
    More users
Prefetching

Thrashing
A machine is thrashing when there is not enough RAM and it constantly swaps pages in and out
Solutions?
    Admission control (like the scheduler project)
    Buy more memory
    The Linux out-of-memory (OOM) killer!

Next
Other VM-related material not in OSTEP
Concurrency

Discuss
Can Belady’s anomaly happen with LRU?
Stack property: the contents of a smaller cache are always a subset of those of a bigger one
    The set of pages in memory with f frames is always a subset of the set of pages in memory with f+1 frames.
    Said a different way: having more frames lets the algorithm keep additional pages in memory, but it will never choose to throw out a page that would have remained in memory with fewer frames.
Does the optimal policy have the stack property?
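A quick empirical check of the stack property for LRU, reusing the simulation idea above: after every reference, the resident set with f frames should be a subset of the resident set with f+1 frames. This sketch checks one random trace, which is evidence rather than a proof:

import random
from collections import OrderedDict

def lru_resident_sets(trace, frames):
    """Yield the set of resident pages after each reference under LRU."""
    cache = OrderedDict()
    for page in trace:
        if page in cache:
            cache.move_to_end(page)
        else:
            if len(cache) == frames:
                cache.popitem(last=False)
            cache[page] = True
        yield set(cache)

trace = [random.randrange(8) for _ in range(1000)]
for f in range(1, 7):
    small, big = lru_resident_sets(trace, f), lru_resident_sets(trace, f + 1)
    assert all(s <= b for s, b in zip(small, big))
print("LRU kept the stack property on this trace")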