Chapter 9: Virtual Memory


1 Chapter 9: Virtual Memory
As each process requests memory, the memory manager must satisfy or deny the request
If free memory is not available, the request is denied
If the physical memory of the system is not sufficient, overlays must be used to allow a process to overlay its allocated memory
Physical memory thus limits the number and size of ready processes
How can we more efficiently use our limited physical memory?
Virtual address – separation of a user's logical memory address from its physical memory address
Virtual memory – separation of user logical memory from physical memory
We can extend the concept of "swapping": instead of swapping the entire memory space of a process in or out, we swap parts of the memory space in or out on demand
The logical address space can therefore be much larger than the physical address space
CEG 433/633 - Operating Systems I

2 Demand Paging
Only part of the program needs to be in memory for execution
Need to allow pages to be swapped in and out
Parts of a program (rarely used features, error-handling routines, large static data structures, etc.) may never be needed in a particular run
Virtual memory can be implemented via:
Demand paging (swapper replaced with a pager)
Demand segmentation
Bring a page into memory (via the pager) only when it is needed:
Less I/O needed (initially)
Less memory needed
Faster response
More users/processes
When memory is accessed, one of the following occurs:
Page is memory resident → satisfy the reference
Page is not memory resident → bring the page into memory (I/O) → page fault
Page is invalid → abort

3 Valid-Invalid Bit
With each page table entry a valid-invalid bit is associated (1 → in memory, 0 → not in memory)
Initially the valid-invalid bit is set to 0 on all entries
Example of a page table snapshot: (figure: page table with a frame # column and a valid-invalid bit column)
During address translation, if the valid-invalid bit in the page table entry is 0 → page fault

4 Page Fault
The first reference to a non-resident page traps to the OS → page fault
The OS looks at another table to decide: invalid reference, or just not currently in memory?
Processing a page fault:
If the reference is invalid, abort the process
Find (or make) an empty frame
Swap the page into the frame (I/O)
Reset tables, set the valid bit = 1
Restart the instruction
Systems which start executing a process with no frames initially in memory use the pure demand paging scheme
The hardware to support demand paging is the same as the hardware for paged memory (page table) and swapping (backing store)

5 What happens if there is no free frame?
When a page fault occurs, the OS must find a frame for the new page
If no frames are free, a victim must be selected for eviction
Page replacement – find some page in memory that is not currently in use and page (swap) it out
This requires a decision, thus we need an algorithm
Performance goal: minimum number of page faults (effectively the same as increasing the hit ratio)
If the victim page has been modified (dirty), it must first be written to disk (Is RAM a cache?)
Each page must be brought into memory at least once; try to avoid paging the same page several times
Use the modify (dirty) bit to reduce the overhead of page transfers – only modified pages are written back to disk
Page replacement completes the separation between logical memory and physical memory – a large virtual memory can be provided on a smaller physical memory

6 Performance of Demand Paging
Page fault rate: 0 ≤ p ≤ 1.0
if p = 0, no page faults
if p = 1, every reference is a fault
Effective Access Time (EAT):
EAT = (1 – p) × memory access time + p × (page fault overhead + [swap page out (if dirty)] + swap page in + restart overhead)
Goal: reduce the page-fault rate to decrease EAT
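The formula is easy to evaluate numerically. The costs below (100 ns per memory access, 8 ms total fault service) are assumed textbook-style figures, not from the slide, but they show why even a tiny fault rate dominates the average:

```python
def eat(p, mem_ns=100, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

print(eat(0.0))    # 100.0 ns: no faults, pure memory speed
print(eat(1e-6))   # ~108 ns: one fault per million accesses adds 8%
```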

7 Optimal Page-Replacement Algorithm
Page-replacement algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string
Optimal algorithm: remove the page which will be referenced farthest in the future
i.e., the page that will not be used for the longest period of time
Requires an oracle, so it cannot be implemented in practice
Used as a metric to judge how well other algorithms perform
Example: consider the following reference string on a system with 4 frames:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
(figure: frame contents over time) → 6 page faults
CEG 433/633 - Operating Systems I
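With the full reference string in hand, the oracle is trivial to simulate after the fact. This sketch evicts the resident page whose next use lies farthest ahead (pages never referenced again sort last) and reproduces the slide's count of 6 faults:

```python
def opt_faults(refs, nframes):
    """Count faults under the optimal (farthest-future-use) policy."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # evict the page referenced farthest ahead; pages never
            # referenced again get the largest possible key
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future))
            frames.remove(victim)
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 4))  # 6 page faults, matching the slide
```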

8 First-In-First-Out (FIFO) Algorithm
Keep a list, remove the oldest page
Example reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames: 9 page faults
4 frames: 10 page faults
Note: more frames does not imply fewer page faults!
FIFO replacement suffers from Belady's Anomaly
CEG 433/633 - Operating Systems I
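A short FIFO simulation reproduces both counts and therefore Belady's Anomaly on this reference string:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count faults under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popleft()       # evict the oldest resident page
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# Belady's Anomaly: adding a frame *increases* the fault count
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10
```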

9 Least Recently Used (LRU) Algorithm
The victim is the page not used for the longest time
Example reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Problem: how do we implement this? Updating a list on every memory reference is expensive
Solutions:
Counter implementation
Add a hardware counter; when a page is referenced, copy the counter value into its page table entry
When a page needs to be replaced, look at the counters to find the oldest value
Software/stack implementation – keep a stack of page numbers as a doubly linked list, updated on every page table lookup:
When a page is referenced, move it to the top (changes 6 pointers)
No search is needed for replacement – the victim is at the bottom
CEG 433/633 - Operating Systems I
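The stack idea can be modeled with an ordinary Python list kept in recency order; this is a sketch of the policy, not the pointer-based hardware-assisted implementation the slide describes:

```python
def lru_faults(refs, nframes):
    """Count faults under LRU; frames is ordered oldest -> most recent."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            frames.remove(page)    # hit: move to most-recent position
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)      # evict the least recently used page
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # 8 page faults with 4 frames
```

On this string LRU (8 faults) lands between optimal (6) and FIFO (10) for 4 frames.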

10 LRU Approximation Algorithms
Reference bit
With each page associate a bit, initially cleared to 0
When the page is referenced, the bit is set to 1
Select as a victim a page whose reference bit = 0
Clear the bits periodically
FIFO – second chance algorithm
If the victim has been referenced lately (R bit set), clear the bit and put the page on the tail of the FIFO; clear reference bits periodically
Slightly better, but still suffers from Belady's Anomaly
LRU with aging
Requires an n-bit register for each entry; the MSB is the reference bit
Right shift periodically (~100 ms or so)
Frames with the largest value are (approximately) the most recently referenced
CEG 433/633 - Operating Systems I
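The aging register can be sketched as one integer per page that is right-shifted each interval, with the reference bit ORed into the MSB; the 8-bit width and page names are assumptions for illustration:

```python
def age_tick(counters, referenced, bits=8):
    """One clock interval: shift right, then OR the R bit into the MSB."""
    for page in counters:
        counters[page] >>= 1
        if page in referenced:
            counters[page] |= 1 << (bits - 1)

counters = {p: 0 for p in "abc"}
age_tick(counters, {"a", "b"})   # interval 1: a and b referenced
age_tick(counters, {"b"})        # interval 2: only b referenced
# smallest counter = (approximately) least recently used
victim = min(counters, key=counters.get)
print(victim, counters)          # c was never referenced, so it is evicted
```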

11 Not-Recently-Used (NRU) Algorithm
AKA: enhanced second chance, an LRU approximation algorithm
R bit is set when the page is referenced (memory read)
M bit is set when the page is modified (memory write)
Clear all R bits on each clock interrupt
Each page is a member of one of four classes:
Class 0: R=0, M=0
Class 1: R=0, M=1
Class 2: R=1, M=0
Class 3: R=1, M=1
Select (randomly) a page in the lowest non-empty class
Easy to implement, "OK" performance (used in Macintosh OS)
Paging daemon: many OSes invoke a paging daemon when memory is nearly full to select "victims" and write back their data in preparation for normal eviction
CEG 433/633 - Operating Systems I
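The four classes and the random pick fall out directly from the R and M bits; the page names and bit values below are illustrative:

```python
import random

def nru_class(r, m):
    """Class number = 2*R + M: class 0 (R=0,M=0) ... class 3 (R=1,M=1)."""
    return (r << 1) | m

def pick_victim(pages):
    """pages: {name: (R, M)}; choose randomly within the lowest class."""
    lowest = min(nru_class(r, m) for r, m in pages.values())
    return random.choice([p for p, (r, m) in pages.items()
                          if nru_class(r, m) == lowest])

pages = {"a": (1, 1), "b": (0, 1), "c": (0, 0), "d": (1, 0)}
print(pick_victim(pages))  # "c" is the only class-0 page
```

The ordering reflects the policy's preference: an unreferenced, clean page is the cheapest eviction, since it needs no writeback and is unlikely to be needed soon.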

12 Processing a Page Fault
Hardware traps to the kernel (saves PC on the stack or in a special register)
Enter monitor mode; an assembly routine saves the registers and calls the OS
The OS "discovers" the page fault and determines which page is needed (from a hardware register or by examining the saved PC)
Check whether the page is valid and access is allowed; if not, abort the process
Otherwise, find a free frame; if necessary, select a victim
If the victim is dirty, schedule I/O, mark the frame as busy, and switch context until done
Look up the disk address for the page and request a transfer into the clean frame
Switch context and wait until the disk interrupt for the finished I/O arrives
Update tables and mark the frame as normal (not busy)
Back up the faulting instruction to its original state and reset the PC
Schedule the process for execution (take it off the wait queue) and return to the assembly routine that invoked the OS
Restore registers and return to user mode as if no fault had occurred
CEG 433/633 - Operating Systems I

13 Allocation of Frames
How do we decide how many frames each process gets?
Working set: the subset of its pages that a process requires to be memory resident at a given point in its execution
A system that uses memory segmentation with paging may require a minimum of 3 pages per process (text, data, stack); some processes require far more
All processes "want" their working set to be fully resident
Major allocation schemes:
Equal fixed allocation: frames divided evenly among processes
Proportional fixed allocation: partition weighted by process size
Priority allocation: proportion is based on a non-size priority
Major replacement policies:
Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another
Local replacement: each process selects from only its own set of allocated frames
CEG 433/633 - Operating Systems I

14 Thrashing
If a process does not have enough frames to hold its current working set, the page-fault rate is very high
This leads to low CPU utilization, so the operating system thinks it needs to increase the degree of multiprogramming, and another process is added to the system (argh!)
A process is thrashing when it spends more time paging than executing
With local replacement algorithms, a process may thrash even though memory is available
With global replacement algorithms, the entire system may thrash
Less thrashing in general, but is it fair?
CEG 433/633 - Operating Systems I

15 Thrashing Diagram
Why does paging work? The locality model
A process migrates from one locality to another; localities may overlap
Why does thrashing occur? Σ size of localities > total memory size
What should we do? Suspend one or more processes!
CEG 433/633 - Operating Systems I

16 Page-Fault Frequency Scheme
Establish an "acceptable" page-fault frequency (PFF) range
If a process's PFF falls below the minimum, remove a frame from that process
If its PFF is too high, allocate a new frame to the process
If no free frames are available, suspend the process
CEG 433/633 - Operating Systems I

17 Program Structure
How should we arrange memory references to large arrays? Is the array stored in row-major or column-major order?
Example: array A[1024, 1024] of type integer
Page size = 1K, so each row is stored in one page
The system has one frame
Program 1:
for i := 1 to 1024 do
  for j := 1 to 1024 do
    A[i,j] := 0;
1024 page faults
Program 2:
for j := 1 to 1024 do
  for i := 1 to 1024 do
    A[i,j] := 0;
1024 x 1024 page faults
CEG 433/633 - Operating Systems I
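The two access patterns can be simulated with a one-frame model in which row i lives on page i (row-major storage, as on the slide); a 64x64 array keeps the run short but shows the same n vs n² ratio:

```python
def faults(order):
    """Count faults with one frame: a fault whenever the row changes."""
    resident, count = None, 0
    for i, _j in order:
        if i != resident:          # row i lives on page i (row-major)
            count += 1
            resident = i
    return count

n = 64  # smaller than the slide's 1024, same access patterns
row_major = [(i, j) for i in range(n) for j in range(n)]  # Program 1
col_major = [(i, j) for j in range(n) for i in range(n)]  # Program 2
print(faults(row_major), faults(col_major))  # 64 vs 64*64 = 4096
```

Program 1 touches each page exactly once; Program 2 changes pages on every single reference.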

18 Demand Segmentation
Used when there is insufficient hardware to implement demand paging
OS/2 allocates memory in segments, which it tracks through segment descriptors
A segment descriptor contains a valid bit indicating whether the segment is currently in memory
If the segment is in main memory, access continues; if not, a segment fault occurs
Hybrid scheme: segmentation with demand paging
CEG 433/633 - Operating Systems I
