Page Replacement. “There’s not always room for one more.” (Brian Bershad)

All Paging Schemes Depend on Locality. Processes tend to reference pages in localized patterns. Temporal locality: locations referenced recently are likely to be referenced again. Spatial locality: locations near recently referenced locations are likely to be referenced soon. The goal of a paging system is to stay out of the way when there is plenty of available memory, and not to bring the system to its knees when there is not.

Demand Paging. Demand paging refers to a technique where program pages are loaded from disk into memory as they are referenced. Each reference to a page not previously touched causes a page fault. The fault occurs because the reference found a page table entry (PTE) with its valid bit off. As a result of the page fault, the OS allocates a new page frame and reads the faulted page in from disk. When the I/O completes, the OS fills in the PTE, sets its valid bit, and restarts the faulting process.
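A minimal sketch of this fault path in C; the structures and helpers (pte_t, allocate_frame, read_page_from_disk, restart_faulting_instruction) are hypothetical stand-ins for real kernel services, not code from the original lecture:

```c
/* Demand-paging fault handler (sketch): allocate a frame, read the
 * page in from backing store, then make the PTE valid and retry. */
typedef struct {
    int  valid;       /* 1 if the page is resident in memory        */
    int  dirty;       /* 1 if the resident copy has been modified   */
    long frame;       /* physical frame number, when valid          */
    long disk_block;  /* where the page lives on backing store      */
} pte_t;

extern long allocate_frame(void);                       /* may evict a victim */
extern void read_page_from_disk(long disk_block, long frame);
extern void restart_faulting_instruction(void);

void handle_page_fault(pte_t *pte) {
    long frame = allocate_frame();
    read_page_from_disk(pte->disk_block, frame);        /* blocking I/O */
    pte->frame = frame;
    pte->dirty = 0;
    pte->valid = 1;                                     /* future references hit */
    restart_faulting_instruction();
}
```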

Demand Paging vs. Prepaging. Demand paging: don't load a page until it is absolutely necessary. This is what most systems do, but doing things one at a time can be slower than batching them. Prepaging: anticipate the fault before it happens and overlap the fetch with computation. The future is hard to predict, but some simple schemes can work, such as hints from the programmer or from observed program behavior (e.g. vm_advise) or a larger "virtual" page size.
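On modern Unix-like systems, the programmer-hint idea the slide calls vm_advise roughly corresponds to posix_madvise; a small sketch of asking the kernel to prefetch a mapped file (the filename "data.bin" is just a placeholder):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDONLY);            /* placeholder input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the file; pages are brought in on demand as they are touched. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hint: we expect to read the whole mapping soon, so the kernel may
     * prefetch it (prepaging) instead of faulting one page at a time.
     * This is a best-effort hint; the return value is ignored here. */
    posix_madvise(p, st.st_size, POSIX_MADV_WILLNEED);

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)          /* touch every page */
        sum += p[i];
    printf("checksum: %ld\n", sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```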

High Level. Imagine that when a program starts, it has no pages in memory and a page table with all valid bits off. The first instruction to be executed faults, loading the first page. Instructions continue to fault until the program has enough pages in memory to execute for a while; it then runs until the next page fault. Faults are expensive, so once the program is up and running they should not occur frequently, assuming the program is "well behaved" (has good locality).

Page Replacement. When a fault occurs, the OS loads the faulted page from disk into a free page frame. At some point, the process has used all of the page frames it is allowed to use. From then on, the OS must replace a page for each new page faulted in: it must select a page to throw out of primary memory to make room. How it does this is determined by the page replacement algorithm, whose goal is to reduce the fault rate by selecting the best victim page to remove.

Finding the Best Page. A good property: if you put more memory on the machine, then your page fault rate should go down; increasing the size of the resource pool helps everyone. The best page to toss out is the one you'll never need again; that way, no faults. Never is a long time, so picking the one whose next use is closest to "never" is the next best thing. Replacing the page that won't be used for the longest period of time absolutely minimizes the number of page faults (see the Optimal vs. FIFO example below).

What do you do to the victim page? If the page is dirty, you have to write it out to disk and record the disk block number for the page in the PTE. If the page is clean, you don't have to do anything: just overwrite the page frame with new data, making sure you remember where the old copy of the page came from. You want to avoid THRASHING, which is what happens when a paging algorithm breaks down: most of the OS's time is spent ferrying pages to and from disk and no time is spent doing useful work. The system is overcommitted and has no idea which pages should be resident in order to run effectively. Solutions include swapping out entire processes or buying more memory.
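A hedged sketch of that eviction decision, reusing the hypothetical pte_t from the fault-handler sketch above; write_page_to_disk is likewise an assumed helper:

```c
/* Eviction step (sketch): a dirty victim must be written back to
 * backing store before its frame can be reused; a clean victim can
 * simply be overwritten, since a valid copy already exists on disk. */
extern void write_page_to_disk(long disk_block, long frame);

void evict(pte_t *victim) {
    if (victim->dirty) {
        write_page_to_disk(victim->disk_block, victim->frame);  /* write-back */
        victim->dirty = 0;
    }
    /* Either way, the PTE keeps the disk block number so a later fault
     * on this page can be serviced; mark the page non-resident. */
    victim->valid = 0;
}
```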

Optimal Algorithm. The optimal algorithm, called Belady's algorithm, has the lowest fault rate for any reference string. Basic idea: replace the page that will not be used for the longest time in the future. Basic problem: phone calls to psychics are expensive. Basic use: it gives us an idea of how well any implementable algorithm is doing relative to the best possible one. Compare the fault rate of any proposed algorithm to Optimal: if Optimal does not do much better, then your proposed algorithm is pretty good. If your proposed algorithm doesn't do much better than Random, go home.

Evaluating Replacement Policies. Effective Access Time: EAT = (1 - p) * Tm + p * Td, where p is the fault rate, Tm is the time to access main memory, and Td is the time to service a fault. Execution time is (roughly) the number of memory references times the EAT. [Figure: execution time vs. number of physical page frames for Random, LRU, and Opt. With very few frames, forget it; with plenty of frames, it doesn't matter much what you do; in the middle range, the choice of policy has a real effect.]
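The formula is worth plugging numbers into. A tiny runnable sketch with assumed, purely illustrative values (Tm = 100 ns, Td = 8 ms) shows how even a tiny fault rate dominates the effective access time:

```c
#include <stdio.h>

int main(void) {
    double Tm = 100e-9;   /* main-memory access time: 100 ns (assumed)    */
    double Td = 8e-3;     /* time to service a page fault: 8 ms (assumed) */
    double rates[] = { 0.0, 1e-6, 1e-4, 1e-3 };

    for (int i = 0; i < 4; i++) {
        double p   = rates[i];
        double eat = (1.0 - p) * Tm + p * Td;       /* effective access time */
        printf("p = %-8g EAT = %10.3f us  slowdown = %8.1fx\n",
               p, eat * 1e6, eat / Tm);
    }
    return 0;
}
```

With these assumed numbers, even one fault per thousand references makes memory look roughly eighty times slower, which is why the replacement policy matters so much.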

FIFO. FIFO is an obvious algorithm and simple to implement. Basic idea: maintain a list or queue of pages in the order in which they were paged into memory; on replacement, remove the one brought in the longest time ago. Why might it work? Maybe the page brought in longest ago is one we're not using now. Why might it not work? Maybe it's not; we have no real information telling us whether the page is being used or not. FIFO also suffers from "Belady's anomaly": the fault rate might actually increase when the algorithm is given more memory, which is a bad property (see section 8.5 for more details). [Figure: the FIFO queue of pages, from the page brought in a long time ago at one end to the page brought in recently at the other.]

An Example of Optimal and FIFO in Action. Reference stream: A B C A B D A D B C, with three page frames.

OPTIMAL: A, B, and C each fault when first referenced; A and B then hit. D faults and tosses C, since C is the resident page used furthest in the future. A, D, and B hit. The final reference to C faults and tosses A or D (neither is used again). Total: 5 faults.

FIFO: A, B, and C each fault when first referenced; A and B then hit. D faults and tosses A (the oldest). A faults and tosses B. D hits. B faults and tosses C, and the final C faults and tosses D. Total: 7 faults.
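A small, self-contained C simulation of both policies on this reference string, with three frames as in the example above; it is a sketch written for this transcript, not code from the original lecture, and it reports 5 faults for Optimal and 7 for FIFO:

```c
#include <stdio.h>
#include <string.h>

#define NFRAMES 3

static const char refs[] = "ABCABDADBC";   /* reference stream from the slide */

/* Index of the frame whose page is used furthest in the future
 * (or never again), scanning the reference string from 'pos'. */
static int belady_victim(const char frames[], int nframes, int pos) {
    int victim = 0, farthest = -1;
    for (int f = 0; f < nframes; f++) {
        int next = (int)strlen(refs);                 /* "never used again" */
        for (int i = pos; refs[i] != '\0'; i++)
            if (refs[i] == frames[f]) { next = i; break; }
        if (next > farthest) { farthest = next; victim = f; }
    }
    return victim;
}

static int simulate(int use_optimal) {
    char frames[NFRAMES];
    int loaded = 0, faults = 0, fifo_next = 0;

    for (int pos = 0; refs[pos] != '\0'; pos++) {
        char page = refs[pos];
        int hit = 0;
        for (int f = 0; f < loaded; f++)
            if (frames[f] == page) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (loaded < NFRAMES) {
            frames[loaded++] = page;                  /* free frame available */
        } else if (use_optimal) {
            frames[belady_victim(frames, NFRAMES, pos + 1)] = page;
        } else {
            frames[fifo_next] = page;                 /* evict oldest arrival */
            fifo_next = (fifo_next + 1) % NFRAMES;
        }
    }
    return faults;
}

int main(void) {
    printf("Optimal: %d faults\n", simulate(1));      /* expect 5 */
    printf("FIFO:    %d faults\n", simulate(0));      /* expect 7 */
    return 0;
}
```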

Least Recently Used (LRU). Basic idea: we can't look into the future, but we can look at past experience to make a good guess. LRU: on replacement, remove the page that has not been used for the longest time in the past. Our "bet" is that pages you used recently are ones you will use again (the principle of locality) and, by implication, that those you didn't use recently, you won't. Implementation: to implement this exactly, we would need to time stamp every reference or maintain a stack that is updated on every reference, which would be too costly. So we can't implement LRU exactly, but we can approximate it. Why is an approximate solution perfectly acceptable? [Figure: pages ordered by recency of use, from least recently used to most recently used.]
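To make the cost argument concrete, here is a hedged sketch of exact LRU with per-page timestamps; the per-reference update is the part that is too expensive to do in software on every memory access:

```c
/* Exact LRU bookkeeping (sketch): every reference must update a
 * timestamp, which is why real systems only approximate LRU. */
static unsigned long now;                 /* logical clock, ticks once per reference */

typedef struct { int page; unsigned long last_used; } frame_t;

void on_reference(frame_t *f) { f->last_used = ++now; }    /* on every access! */

int lru_victim(const frame_t frames[], int nframes) {
    int victim = 0;
    for (int i = 1; i < nframes; i++)
        if (frames[i].last_used < frames[victim].last_used)
            victim = i;                   /* oldest timestamp = least recently used */
    return victim;
}
```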

Using the Reference Bit. Various LRU approximations use the PTE reference bit. Keep a counter for each page, and at regular intervals do the following for every page: if the reference bit is 0, increment its counter; if the reference bit is 1, zero its counter; then zero the reference bit. The counter thus contains the number of intervals since the last reference to the page, and the page with the largest counter is the least recently used one. If we don't have a reference bit, we can simulate it using the valid bit and taking a few extra faults; we therefore want the impact to be low when there is plenty of memory.
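A sketch of that periodic sweep, assuming a hypothetical array of reference bits and counters; it runs from a timer, not on every reference:

```c
/* Periodic sweep for the reference-bit LRU approximation (sketch).
 * counters[i] ends up holding the number of intervals since page i
 * was last referenced; the largest counter marks the LRU candidate. */
#define NPAGES 1024

static unsigned char ref_bit[NPAGES];     /* stand-in for the PTE reference bits */
static unsigned int  counters[NPAGES];

void sweep(void) {                        /* called at regular intervals */
    for (int i = 0; i < NPAGES; i++) {
        if (ref_bit[i])
            counters[i] = 0;              /* referenced during this interval */
        else
            counters[i]++;                /* one more interval untouched     */
        ref_bit[i] = 0;                   /* reset for the next interval     */
    }
}

int lru_candidate(void) {
    int best = 0;
    for (int i = 1; i < NPAGES; i++)
        if (counters[i] > counters[best]) best = i;
    return best;                          /* largest counter = least recently used */
}
```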

LRU Clock (Not Recently Used). Basic idea: reflect the passage of time in the actual data structures and the sweeping method. Arrange all of the physical pages in a big circle (a clock). A clock hand is used to select a good LRU candidate: sweep through the pages in circular order, like a clock; if a page's reference bit is off, it's a good victim; otherwise, turn the reference bit off and try the next page. The hand moves quickly when pages are needed and imposes low overhead when there is plenty of memory. If memory is big, the "accuracy" of the information degrades; one fix is to add additional hands. [Figure: pages P0, P1, P2, ... arranged in a circle with the clock hand sweeping around them.]
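A minimal sketch of the clock sweep, again over a hypothetical frame table with software copies of the reference bits:

```c
/* Clock (second-chance) replacement sketch: sweep the circular list of
 * frames; a frame with its reference bit set gets a second chance, and
 * a frame with it clear is chosen as the victim. */
#define NFRAMES 256

static unsigned char referenced[NFRAMES];  /* stand-in for hardware ref bits */
static int hand;                           /* current clock-hand position    */

int clock_victim(void) {
    for (;;) {
        if (!referenced[hand]) {           /* not referenced since last sweep */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        referenced[hand] = 0;              /* clear the bit: second chance */
        hand = (hand + 1) % NFRAMES;
    }
}
```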

Fixed Space vs. Variable Space. In a multiprogramming system, we need a way to allocate memory to the competing processes. The question is: how do we determine how much memory to give to each process? In a fixed-space algorithm, each process is given a limit of pages it can use; once it reaches its limit, it services new faults by replacing its own pages. This is called local replacement; some processes may do well while others suffer. In variable-space algorithms, each process can grow or shrink dynamically, displacing other processes' pages. This is global replacement; one process can ruin it for the rest.

Page Fault Frequency (PFF). PFF is a variable-space algorithm that uses a more ad hoc approach. Basic idea: monitor the fault rate for each process. If the fault rate is above a high threshold, give the process more memory; it should then fault less, though it doesn't always. If the fault rate is below a low threshold, take memory away; it should then fault more. It is hard to tell the difference between changes in locality and changes in the size of the working set. [Figure: fault rate over time, oscillating between the high and low thresholds.]
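A hedged sketch of the PFF control loop; the thresholds, the sampling interval, and the helpers for granting or reclaiming frames are all assumptions made for illustration:

```c
/* Page Fault Frequency control loop (sketch). Run periodically for each
 * process: compare its recent fault rate against two thresholds and
 * grow or shrink its allocation of frames accordingly. */
#define PFF_HIGH 10.0   /* faults/sec: above this, grant frames (assumed)  */
#define PFF_LOW   1.0   /* faults/sec: below this, reclaim frames (assumed) */

typedef struct {
    unsigned long faults_this_interval;
    unsigned long frames_allocated;
} proc_t;

extern void grant_frames(proc_t *p, int n);    /* hypothetical allocator calls */
extern void reclaim_frames(proc_t *p, int n);

void pff_adjust(proc_t *p, double interval_seconds) {
    double rate = p->faults_this_interval / interval_seconds;

    if (rate > PFF_HIGH)
        grant_frames(p, 1);        /* faulting too much: should fault less  */
    else if (rate < PFF_LOW && p->frames_allocated > 1)
        reclaim_frames(p, 1);      /* plenty of headroom: should fault more */

    p->faults_this_interval = 0;   /* start the next sampling interval */
}
```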