Lecture 9: Caching and Demand-Paged Virtual Memory


SE350: Operating Systems
Lecture 9: Caching and Demand-Paged Virtual Memory

Main Points
- Caching in computer systems: How can we improve the performance of memory accesses?
- Demand-paged virtual memory: Can we provide the illusion of near-infinite memory with limited physical memory?
- Cache replacement policies: On a cache miss, how do we choose which entry to replace?

Definitions
- Cache: a copy of data that is faster to access than the original
- Temporal locality: programs tend to reference the same locations multiple times (e.g., instructions in a loop)
- Spatial locality: programs tend to reference nearby locations (e.g., data in a loop)
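As a small illustration of both properties (my own example, not from the slides), the loop below sums an array: the sum variable and the loop's instructions are reused on every iteration (temporal locality), while consecutive array elements sit next to each other in memory (spatial locality).

    /* Hypothetical example: summing an array exhibits both kinds of locality. */
    #include <stdio.h>

    int main(void) {
        int a[1024];
        for (int i = 0; i < 1024; i++)
            a[i] = i;

        long sum = 0;
        for (int i = 0; i < 1024; i++)
            sum += a[i];    /* spatial locality: a[i] and a[i+1] are adjacent in memory */
                            /* temporal locality: sum and the loop's code are reused    */

        printf("sum = %ld\n", sum);
        return 0;
    }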

Cache Concept (Read)

Cache Concept (Write)
- Write through: changes are sent immediately to the next level of storage
- Write back: changes are stored in the cache until the cache block is replaced
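A minimal sketch of the difference, assuming a one-block software cache and an in-memory stand-in for the next level of storage (all names here are illustrative, not from the lecture):

    /* Illustrative one-block software cache showing the two write policies. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 64

    struct cache_block {
        int  dirty;                     /* meaningful only for write-back */
        char data[BLOCK_SIZE];
    };

    static char next_level[BLOCK_SIZE]; /* stands in for the next level of storage */

    /* Write through: update the cache and immediately update the next level. */
    static void write_through(struct cache_block *b, int off, char v) {
        b->data[off] = v;
        next_level[off] = v;
    }

    /* Write back: update only the cache; the change is pushed when the block is replaced. */
    static void write_back(struct cache_block *b, int off, char v) {
        b->data[off] = v;
        b->dirty = 1;
    }

    /* Called when the block is evicted under write-back. */
    static void evict(struct cache_block *b) {
        if (b->dirty) {
            memcpy(next_level, b->data, BLOCK_SIZE);
            b->dirty = 0;
        }
    }

    int main(void) {
        struct cache_block b = {0};
        write_through(&b, 0, 'a');      /* next level sees 'a' right away         */
        write_back(&b, 1, 'b');         /* next level sees 'b' only after evict() */
        evict(&b);
        printf("%c %c\n", next_level[0], next_level[1]);
        return 0;
    }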

Memory Hierarchy
The Intel Core i9 has a 512 KB first-level cache, a 2 MB second-level cache, and a 16 MB third-level cache.

Main Points
- Caching in computer systems: How can we improve the performance of memory accesses?
- Demand-paged virtual memory: Can we provide the illusion of near-infinite memory with limited physical memory?
- Cache replacement policies: On a cache miss, how do we choose which entry to replace?

Demand Paging (Before)
The page table has an invalid entry for the referenced page, and the data for the page is stored on disk.

Demand Paging (After)
- After the page fault, the page table has a valid entry for the referenced page, with the page frame containing the data that had been stored on disk
- The old contents of the page frame are stored on disk, and the page table entry that previously pointed to that page frame is set to invalid

Demand Paging
1. TLB miss
2. Page table walk
3. Page fault (page invalid in page table)
4. Trap to kernel
5. Convert virtual address to file + offset
6. Allocate page frame (evict a page if needed)
7. Initiate disk block read into page frame
8. Disk interrupt when DMA complete
9. Mark page as valid
10. Resume process at faulting instruction
11. TLB miss
12. Page table walk to fetch translation
13. Execute instruction
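The same sequence can be pictured as a tiny user-space simulation (my own sketch, not kernel code): an array stands in for the backing store, a few frames stand in for physical memory, and a "fault" fills a frame and marks the page table entry valid. The names and the FIFO eviction choice are assumptions made for illustration.

    /* User-space sketch of the demand-paging steps above (not kernel code). */
    #include <stdio.h>
    #include <string.h>

    #define NPAGES    8          /* virtual pages, each backed by "disk"      */
    #define NFRAMES   3          /* physical page frames                      */
    #define PAGE_SIZE 16

    static char disk[NPAGES][PAGE_SIZE];      /* backing store                   */
    static char frames[NFRAMES][PAGE_SIZE];   /* physical memory                 */
    static int  pte[NPAGES];                  /* page -> frame, -1 means invalid */
    static int  owner[NFRAMES];               /* frame -> page, -1 means free    */
    static int  next_victim = 0;              /* simple FIFO eviction pointer    */

    /* "Page fault handler": allocate a frame (evicting if needed), read from
       disk, and mark the page table entry valid. */
    static void page_fault(int page) {
        int f = next_victim;
        if (owner[f] != -1) {
            memcpy(disk[owner[f]], frames[f], PAGE_SIZE);   /* write old page back  */
            pte[owner[f]] = -1;                             /* invalidate old entry */
        }
        memcpy(frames[f], disk[page], PAGE_SIZE);           /* "disk read" into frame */
        pte[page] = f;                                      /* mark page valid        */
        owner[f] = page;
        next_victim = (next_victim + 1) % NFRAMES;
    }

    /* Memory access: translate through the page table, faulting on an invalid entry. */
    static char load(int page, int offset) {
        if (pte[page] == -1)
            page_fault(page);
        return frames[pte[page]][offset];
    }

    int main(void) {
        memset(pte, -1, sizeof pte);
        memset(owner, -1, sizeof owner);
        for (int p = 0; p < NPAGES; p++)
            snprintf(disk[p], PAGE_SIZE, "data-%d", p);

        printf("%c\n", load(5, 0));   /* page fault: brings page 5 in from "disk" */
        printf("%c\n", load(5, 1));   /* hit: page 5 is already resident          */
        return 0;
    }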

Allocating a Page Frame
- Select old page to evict
- Find all page table entries that refer to the old page (in case the page frame is shared)
- Set each page table entry to invalid
- Remove any TLB entries (copies of the now-invalid page table entry)
- Write changes on the page back to disk, if necessary
- Why didn't we need to shoot down the TLB before?

Has the Page been Accessed?
- Every page table entry has some bookkeeping
  - Has the page been modified? Dirty bit: set by hardware on a store instruction, in both the TLB and the page table entry
  - Has the page been recently used? Use bit: set by hardware in the page table entry on every TLB miss
- Bookkeeping bits can be reset by the OS kernel
  - When changes to the page are flushed to disk
  - To track whether the page is recently used
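One way to picture these bits, as a sketch with an assumed page table entry layout (not that of any particular architecture):

    /* Sketch of a page table entry with the bookkeeping bits above. */
    #include <stdint.h>
    #include <stdio.h>

    #define PTE_VALID  (1u << 0)    /* translation may be used                */
    #define PTE_USE    (1u << 1)    /* set by hardware on every TLB miss      */
    #define PTE_DIRTY  (1u << 2)    /* set by hardware on a store instruction */

    typedef uint64_t pte_t;         /* frame number in high bits, flags in low */

    /* Kernel clears the dirty bit after flushing the page's changes to disk. */
    static void mark_clean(pte_t *pte) { *pte &= ~(pte_t)PTE_DIRTY; }

    /* Kernel clears the use bit during a sweep to track recent use. */
    static void clear_use(pte_t *pte)  { *pte &= ~(pte_t)PTE_USE; }

    int main(void) {
        pte_t pte = PTE_VALID | PTE_USE | PTE_DIRTY;   /* as hardware might leave it */
        mark_clean(&pte);
        clear_use(&pte);
        printf("valid=%d use=%d dirty=%d\n",
               !!(pte & PTE_VALID), !!(pte & PTE_USE), !!(pte & PTE_DIRTY));
        return 0;
    }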

Main Points
- Caching in computer systems: How can we improve the performance of memory accesses?
- Demand-paged virtual memory: Can we provide the illusion of near-infinite memory with limited physical memory?
- Cache replacement policies: On a cache miss, how do we choose which entry to replace?

Cache Replacement Policy
- Assume the new entry is more likely to be used in the near future
- Policy goal: reduce cache misses to
  - Improve expected-case performance
  - Reduce the likelihood of very poor performance

MIN, LRU, LFU
- MIN: replace the cache entry that will not be used for the longest time into the future
  - Optimality proof based on exchange: if we evict an entry that is used sooner, that triggers an earlier cache miss
- Least Recently Used (LRU): replace the cache entry that has not been used for the longest time in the past
  - Approximation of MIN
- Least Frequently Used (LFU): replace the cache entry used least often (in the recent past)
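One simple (if slow) way to express LRU is to stamp every entry with the time of its last reference and evict the oldest stamp; real systems approximate this with lists or with the clock algorithm below. This sketch and its names are my own.

    /* Minimal LRU sketch: evict the entry with the oldest last-use time. */
    #include <stdio.h>

    #define NENTRIES 4

    struct entry { int key; long last_used; int valid; };

    static struct entry cache[NENTRIES];
    static long now = 0;

    static int lookup(int key) {
        now++;
        for (int i = 0; i < NENTRIES; i++)
            if (cache[i].valid && cache[i].key == key) {
                cache[i].last_used = now;          /* record the reference */
                return 1;                          /* hit                  */
            }
        /* Miss: evict the least recently used entry (or fill a free slot). */
        int victim = 0;
        for (int i = 1; i < NENTRIES; i++)
            if (!cache[i].valid ||
                (cache[victim].valid && cache[i].last_used < cache[victim].last_used))
                victim = i;
        cache[victim] = (struct entry){ .key = key, .last_used = now, .valid = 1 };
        return 0;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 5, 2};
        for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
            printf("ref %d -> %s\n", refs[i], lookup(refs[i]) ? "hit" : "miss");
        return 0;
    }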

LRU vs MIN for Sequential Scan
Scanning through memory, where the scan is slightly larger than the cache size.

LRU vs FIFO vs MIN
For a reference pattern that exhibits temporal locality.

Belady's Anomaly
Under FIFO replacement, adding capacity can hurt: the larger cache suffers ten cache misses, while the smaller cache has one fewer.
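One way to see this concretely (my own sketch): the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 under FIFO produces nine faults with three frames but ten with four.

    /* FIFO simulation reproducing Belady's anomaly: more frames, more misses. */
    #include <stdio.h>

    static int count_faults(const int *refs, int nrefs, int nframes) {
        int frames[16], head = 0, used = 0, faults = 0;
        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (!hit) {
                faults++;
                if (used < nframes)
                    frames[used++] = refs[i];                   /* free frame   */
                else {
                    frames[head] = refs[i];                     /* FIFO eviction */
                    head = (head + 1) % nframes;
                }
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", count_faults(refs, n, 3));   /* 9  */
        printf("4 frames: %d faults\n", count_faults(refs, n, 4));   /* 10 */
        return 0;
    }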

Clock Algorithm: Estimating LRU
- Periodically, sweep through all pages
- If a page is unused, reclaim it
- If a page is used, mark it as unused
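A minimal sketch of one eviction decision, assuming a use bit per frame that hardware sets on access (the names are illustrative):

    /* Sketch of the clock algorithm: advance the hand until an unused page is found. */
    #include <stdio.h>

    #define NFRAMES 4

    static int use_bit[NFRAMES];     /* in a real system, set by hardware on access */
    static int hand = 0;             /* the clock hand                              */

    /* Returns the frame to reclaim; pages with the use bit set get a second chance. */
    static int clock_evict(void) {
        for (;;) {
            if (use_bit[hand] == 0) {         /* unused since the last sweep: reclaim */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            use_bit[hand] = 0;                /* used: clear the bit and move on      */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        use_bit[0] = use_bit[1] = 1;          /* frames 0 and 1 were recently touched */
        printf("evict frame %d\n", clock_evict());   /* prints frame 2 */
        return 0;
    }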

Nth Chance: Not Recently Used
- Instead of one bit per page, keep an integer notInUseSince: the number of sweeps since the last use
- Periodically sweep through all page frames:

    if (page is used) {
        notInUseSince = 0;
    } else if (notInUseSince < N) {
        notInUseSince++;
    } else {
        reclaim page;
    }
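Filled out slightly (my own sketch, with illustrative names), one sweep over all frames looks like this; with N = 1 it reduces to the clock algorithm above.

    /* Sketch of one Nth-chance sweep over all page frames. */
    #include <stdio.h>

    #define NFRAMES 4
    #define N       2                /* sweeps a page may sit unused before reclaim */

    static int use_bit[NFRAMES];     /* set by hardware when the page is referenced */
    static int not_in_use_since[NFRAMES];

    static void sweep(void) {
        for (int f = 0; f < NFRAMES; f++) {
            if (use_bit[f]) {                        /* referenced since last sweep */
                use_bit[f] = 0;
                not_in_use_since[f] = 0;
            } else if (not_in_use_since[f] < N) {    /* not yet out of chances      */
                not_in_use_since[f]++;
            } else {
                printf("reclaim frame %d\n", f);     /* stand-in for actual eviction */
            }
        }
    }

    int main(void) {
        use_bit[1] = 1;              /* only frame 1 is being actively used            */
        sweep(); sweep(); sweep();   /* frames 0, 2, 3 are reclaimed on the third sweep */
        return 0;
    }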

Implementation Note
- Clock and Nth Chance can run synchronously
  - In the page fault handler, run the algorithm to find the next page to evict
  - Might require writing changes back to disk first
- Or asynchronously
  - Create a thread to maintain a pool of recently unused, clean pages
    - Find recently unused dirty pages and write their modifications back to disk
    - Find recently unused clean pages, mark them as invalid, and move them to the pool
  - On a page fault, check whether the requested page is in the pool; if not, evict a page from the pool
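A rough sketch of the asynchronous variant, assuming a background "cleaner" thread and stand-in helper functions (none of this is from a real kernel): the cleaner keeps the pool topped up, and the fault path grabs a frame from the pool instead of evicting synchronously.

    /* Illustrative background cleaner maintaining a pool of clean, reclaimable frames. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define POOL_TARGET 8

    static int pool_size = 0;                  /* clean frames ready for reuse */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-ins for the real work: pick a victim, flush it, invalidate mappings. */
    static int  pick_recently_unused(void)     { return 0; }
    static void write_back_if_dirty(int frame) { (void)frame; }

    static void *cleaner(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (pool_size < POOL_TARGET) {
                int frame = pick_recently_unused();
                write_back_if_dirty(frame);    /* make the frame clean             */
                pool_size++;                   /* mark invalid and add to the pool */
            }
            pthread_mutex_unlock(&lock);
            usleep(100 * 1000);                /* sleep until the pool runs low    */
        }
        return NULL;
    }

    /* Page fault path: take a frame from the pool instead of evicting synchronously. */
    static int get_frame(void) {
        pthread_mutex_lock(&lock);
        int ok = pool_size > 0;
        if (ok) pool_size--;
        pthread_mutex_unlock(&lock);
        return ok;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, cleaner, NULL);
        usleep(200 * 1000);
        printf("got frame from pool: %s\n", get_frame() ? "yes" : "no");
        return 0;
    }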

Acknowledgment
This lecture is a slightly modified version of the one prepared by Tom Anderson.