
Slide 1: Virtual Memory
CS-3013, Operating Systems, C-term 2008
(Slides include materials from Operating System Concepts, 7th ed., by Silberschatz, Galvin, & Gagne, and from Modern Operating Systems, 2nd ed., by Tanenbaum)

Slide 2: Review
– Virtual memory: the address space in which a process "thinks," as distinguished from physical memory, the address space of the hardware memory system
– Multiple forms: base and limit registers, segmentation, paging
– Memory Management Unit (MMU)
  Present in most modern processors
  Converts all virtual addresses to physical addresses
  Transparent to the execution of (almost all) programs

Slide 3: Review (continued)
– Translation Lookaside Buffer (TLB): hardware associative memory for very fast page-table lookup of a small set of active pages
  Can be managed in the OS or in hardware (or both)
  Must be flushed on a context switch
– Page table organizations
  Direct: maps virtual address → physical address
  Hashed
  Inverted: maps physical address → virtual address

Slide 4: Useful Terms
– Thrashing: too many page faults per unit time; results from insufficient physical memory to support the working set; the system spends most of its time swapping pages in and out rather than executing processes
– Working set: the set of pages needed to keep a process from thrashing
– Caching: the art and science of keeping the most active elements in fast storage for better execution speed; depends upon locality of reference

Slide 5: Outline for This Topic
– Performance metrics
– Swap-in strategies
– Page replacement strategies
– Miscellaneous topics
  More on segmentation
  Kernel memory allocation
  VM summary: Linux & Windows

Slide 6: Demand Paging Performance
– Page fault rate p, 0 ≤ p ≤ 1.0 (measured in average number of faults per reference)
– Page fault overhead = fault service time + read page time + restart process time
  Fault service time ≈ 0.1–10 µsec
  Restart process time ≈ 0.1–100 µsec
  Read page time ≈ 8–20 milliseconds!
– Dominated by the time to read the page in from disk!

Slide 7: Demand Paging Performance (continued)
Effective Access Time (EAT) = (1 − p) × (memory access time) + p × (page fault overhead)
– Want EAT to degrade no more than, say, 10% from the true memory access time
– i.e., EAT < (1 + 10%) × memory access time

Slide 8: Performance Example
– Memory access time = 100 nanosec = 10⁻⁷ sec
– Page fault overhead = 25 millisec = 0.025 sec
– Page fault rate p = 1/1000 = 10⁻³
EAT = (1 − p) × 10⁻⁷ + p × 0.025 = 0.999 × 10⁻⁷ + 10⁻³ × 0.025 ≈ 25 microseconds per reference
I.e., ≈ 250 × the memory access time!

Slide 9: Performance Example (continued)
Goal: achieve less than 10% degradation.
(1 − p) × 10⁻⁷ + p × 0.025 < 1.1 × 10⁻⁷
i.e., p < (0.1 × 10⁻⁷) / (0.025 − 10⁻⁷) ≈ 0.0000004
I.e., fewer than 1 fault in 2,500,000 accesses!
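The two examples above can be checked numerically. Here is a minimal sketch using the slides' constants (100 ns memory access, 25 ms page-fault overhead); the function name is my own, not from the slides:

```python
def effective_access_time(p, mem_access, fault_overhead):
    """EAT = (1 - p) * memory access time + p * page fault overhead."""
    return (1 - p) * mem_access + p * fault_overhead

mem = 100e-9          # 100 nanoseconds = 1e-7 sec
overhead = 0.025      # 25 milliseconds
p = 1e-3              # 1 fault per 1000 references

eat = effective_access_time(p, mem, overhead)
print(eat)            # ~2.51e-05 sec, i.e. ~250x the memory access time

# Fault rate for at most 10% degradation:
# (1-p)*m + p*F < 1.1*m  =>  p < 0.1*m / (F - m)
p_max = 0.1 * mem / (overhead - mem)
print(p_max)          # ~4e-07, i.e. about 1 fault per 2,500,000 accesses
```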

Slide 10: Working Set Size
Assume an average swap time of 25 millisec.
– For memory access time = 100 nanoseconds: require < 1 page fault per 2,500,000 accesses
– For memory access time = 1 microsecond: require < 1 page fault per 250,000 accesses
– For memory access time = 10 microseconds: require < 1 page fault per 25,000 accesses

Slide 11: Object Lesson
– Working sets must get larger in proportion to memory speed!
– Disk speed is (nearly) constant
– I.e., faster computers need larger physical memories to exploit their speed!

Slide 12: Class Discussion
1. What is the first thing to do when the PC you bought last year becomes too slow?
2. What else might help?
3. Can we do the same analysis on TLB performance?

Slide 13: TLB Fault Performance
Assumptions:
– m = memory access time = 100 nsec
– t = TLB load time from memory = 300 nsec = 3 × m
Goal: < 5% penalty for TLB misses
– I.e., EAT < 1.05 × m
How low does the TLB fault rate need to be?

Slide 14: TLB Fault Performance (continued)
With the same assumptions (m = 100 nsec, t = 3 × m) and the goal EAT < 1.05 × m:
EAT = (1 − p) × m + p × t < 1.05 × m
⇒ p < (0.05 × m) / (t − m) = (0.05 × m) / (2 × m) = 0.025
I.e., the TLB fault rate should be < 1 per 40 accesses!
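The TLB bound falls out of the same arithmetic. A sketch, using the slides' model EAT = (1 − p) × m + p × t:

```python
m = 100e-9            # memory access time (100 nsec)
t = 3 * m             # TLB reload cost (300 nsec)

# Require (1-p)*m + p*t < 1.05*m  =>  p < 0.05*m / (t - m)
p_max = 0.05 * m / (t - m)
print(p_max)          # 0.025
print(1 / p_max)      # 40.0 -- fewer than 1 TLB miss per 40 accesses
```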

Slide 15: TLB Fault Performance (continued)
Q: How large should the TLB be so that TLB faults are not onerous, in these circumstances?
A: Somewhat less than 40 entries, assuming a reasonable degree of locality!

Slide 16: Summary of This Topic
A quantitative way of estimating how large a cache needs to be to avoid excessive thrashing, where
– Cache = working set in physical memory
– Cache = TLB size in hardware
Applicable to all forms of caching.

Slide 17: Next Topic

Slide 18: Caching Issues (from the previous topic)
– When to put something in the cache
– What to throw out to create cache space for new items
– How to keep a cached item and the stored item in sync after one or the other is updated
– How to keep multiple caches in sync across processors or machines
– Size of cache needed to be effective
– Size of cache items for efficiency

Slide 19: Physical Memory Is a Cache of Virtual Memory, so …
– When to swap in a page: on demand? or in anticipation?
– What to throw out: page replacement policy
– Keeping dirty pages in sync with disk: flushing strategy
– Keeping pages in sync across processors or machines: deferred to another time
– Size of physical memory to be effective: see the previous discussion
– Size of pages for efficiency: one size fits all, or multiple sizes?

Slide 20: Physical Memory as a Cache of Virtual Memory (repeats the outline of slide 19)

Slide 21: VM Page Replacement
If there is an unused frame, use it. If no unused frames are available, select a victim (according to policy) and:
– If it contains a dirty page (M == 1), write it to disk
– Invalidate its PTE and TLB entry
– Load the new page from disk (or create a new page)
– Update the PTE and TLB entry!
– Restart the faulting instruction
What is the cost of replacing a page? How does the OS select the page to be evicted?

Slide 22: Page Replacement Algorithms
– Want the lowest page-fault rate
– Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string
– Reference string: the ordered list of pages accessed as the process executes
  Ex.: the reference string is A B C A B D A D B C B

Slide 23: The Best Page to Replace
The best page to replace is the one that will never be accessed again.
Optimal algorithm (Belady's Rule):
– Lowest fault rate for any reference string
– Basically, replace the page that will not be used for the longest time in the future
– Belady's Rule is a yardstick; we want to find close approximations
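Belady's Rule is easy to simulate when the whole reference string is known in advance, which is exactly why it is only a yardstick. A sketch (function name is mine):

```python
def optimal_faults(refs, nframes):
    """Count faults under Belady's optimal policy: evict the resident
    page whose next use lies farthest in the future (or never occurs)."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

# The slide's reference string with 3 frames:
print(optimal_faults(list("ABCABDADBCB"), 3))   # 5 faults
```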

Slide 24: Page Replacement – NRU (Not Recently Used)
Periodically (e.g., on a clock interrupt), clear the R bit in all PTEs.
When a frame is needed, rank-order pages as follows:
1. R = 0, M = 0
2. R = 0, M = 1
3. R = 1, M = 0
4. R = 1, M = 1
Evict a page at random from the lowest non-empty class (writing it out if M = 1).
Characteristics: easy to understand and implement; not optimal, but adequate in some cases.
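The four-class ranking can be sketched in a few lines. The (page_id, R, M) tuples below are hypothetical stand-ins for PTEs, not a real kernel structure:

```python
import random

def nru_victim(pages):
    """pages: list of (page_id, R, M). Pick a random page from the
    lowest non-empty class, where class = 2*R + M (0 is best to evict)."""
    best_class = min(2 * r + m for _, r, m in pages)
    candidates = [pid for pid, r, m in pages if 2 * r + m == best_class]
    return random.choice(candidates)

pages = [("a", 1, 1), ("b", 0, 1), ("c", 1, 0), ("d", 0, 1)]
# Lowest class present is R=0, M=1, so the victim is "b" or "d".
print(nru_victim(pages))
```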

Slide 25: PTE Structure
– Valid bit (V): gives the state of this PTE; says whether or not the virtual address is valid (in memory and within the VA range). If not set, the page might not be in memory or may not even exist!
– Reference bit (R): says whether the page has been accessed; set by hardware whenever the page is read or written
– Modify bit (M): says whether or not the page is dirty; set by hardware on every write to the page
– Protection bits: control which operations are allowed (read, write, execute, etc.)
– Page frame number (PFN): determines the physical page (physical page start address)
– Other bits depend on the machine architecture
(Figure: PTE layout | page frame number (20) | prot (2) | M (1) | R (1) | V (1) |)

Slide 26: Page Replacement – FIFO (First In, First Out)
– Easy to implement: when swapping a page in, place its page id at the end of a list; evict the page at the head of the list
– The page to be evicted has been in memory the longest time, but maybe it is still being used, perhaps very actively; we just don't know
– A weird phenomenon, Belady's Anomaly (§9.4.2): the fault rate may increase when there is more physical memory!
– FIFO is rarely used in practice
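Belady's Anomaly is easy to reproduce. A sketch of FIFO replacement run on the classic reference string that exhibits the anomaly (this string is the standard textbook example, not one from these slides):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the page resident longest
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more memory, more faults!
```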

Slide 27: FIFO Illustrating Belady's Anomaly (figure)

Slide 28: Second Chance
Maintain a FIFO page list. When a page frame is needed, check the reference bit of the top page in the list:
– If R == 1, move the page to the end of the list, clear R, and repeat
– If R == 0, evict the page
I.e., a page has to move to the top of the list at least twice, once after the last time its R bit was cleared.
Disadvantage: moves pages around on the list a lot (bookkeeping overhead).

Slide 29: Clock Replacement
(A slight variation of Second Chance.)
Create a circular list of PTEs in FIFO order.
One-handed clock: the pointer starts at the oldest page.
– Algorithm: FIFO, but check the reference bit
  If R == 1, set R = 0 and advance the hand
  Evict the first page with R == 0
– Looks like a clock hand sweeping the PTE entries
– Fast, but the worst case may take a lot of time
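The one-handed sweep can be sketched directly; the [page_id, R] lists below stand in for PTEs, and the data is hypothetical:

```python
def clock_evict(frames, hand):
    """frames: circular list of [page_id, R] entries; hand: current index.
    Returns (victim_index, new_hand). Clears R bits as it sweeps."""
    n = len(frames)
    while True:
        page, r = frames[hand]
        if r == 0:
            return hand, (hand + 1) % n   # evict this page
        frames[hand][1] = 0               # second chance: clear R, advance
        hand = (hand + 1) % n

frames = [["a", 1], ["b", 1], ["c", 0], ["d", 1]]
victim, hand = clock_evict(frames, 0)
print(frames[victim][0])   # "c": first page found with R == 0
```

Note that "a" and "b" survive this pass but have had their R bits cleared, so they are candidates on the next sweep unless referenced again.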

Slide 30: Clock Algorithm (illustrated)

Slide 31: Enhanced Clock Algorithm
Two-handed clock: add another hand that is n PTEs ahead.
– The extra hand clears the reference bit
– Allows very active pages to stay in longer
Also rank-order the frames:
1. R = 0, M = 0
2. R = 0, M = 1
3. R = 1, M = 0
4. R = 1, M = 1
Select the first entry in the lowest category (may require multiple passes); gives preference to modified pages.

Slide 32: Least Recently Used (LRU)
Replace the page that has not been used for the longest time.
3 page frames; reference string: A B C A B D A D B C
LRU – 5 faults
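The 5-fault count on this slide can be reproduced with an exact-LRU simulation. A sketch using an ordered dict as the "stack" of page numbers (the stack implementation described on the next slide):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under exact LRU replacement."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # referenced: move to MRU end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)    # evict the LRU end
            frames[page] = True
    return faults

print(lru_faults(list("ABCABDADBC"), 3))      # 5 faults, as on the slide
```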

Slide 33: LRU
Past experience may indicate future behavior.
Perfect LRU requires some form of timestamp to be associated with a PTE on every memory reference!
– Counter implementation: every page entry has a counter; each time the page is referenced through this entry, copy the clock into the counter. When a page needs to be replaced, look at the counters to determine which page to select.
– Stack implementation: keep a stack of page numbers in doubly linked form. When a page is referenced, move it to the top. No search is needed for replacement.

Slide 34: LRU Approximations
Aging:
– Keep a counter for each PTE
– Periodically (on a clock interrupt), check the R bit:
  If R = 0, increment the counter (the page has not been used)
  If R = 1, clear the counter (the page has been used)
  Clear R
– The counter contains the number of intervals since the last access
– Replace the page having the largest counter value
Alternatives: §§9.4.4–9.4.5 in Silberschatz.
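The aging variant described on this slide (a counter of intervals since last use, as opposed to the shift-register form of aging) can be sketched as follows; the page dicts are hypothetical stand-ins for PTEs:

```python
def tick(pages):
    """One clock interval. pages: dict page_id -> {'R': bit, 'counter': int}."""
    for p in pages.values():
        if p["R"]:
            p["counter"] = 0    # referenced this interval: reset the count
        else:
            p["counter"] += 1   # one more interval without a reference
        p["R"] = 0              # clear R for the next interval

def victim(pages):
    """Replace the page with the largest counter (longest unreferenced)."""
    return max(pages, key=lambda pid: pages[pid]["counter"])

pages = {"a": {"R": 0, "counter": 0}, "b": {"R": 0, "counter": 0}}
tick(pages)             # neither referenced: both counters become 1
pages["a"]["R"] = 1     # "a" is referenced during the next interval
tick(pages)             # a's counter resets to 0, b's grows to 2
print(victim(pages))    # "b"
```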

Slide 35: When to Evict Pages (Cleaning Policy)
An OS process called the paging daemon:
– wakes periodically to inspect the pool of frames
– if there is an insufficient number of free frames:
  Mark pages for eviction according to policy; set their valid bits to zero
  Schedule the disk to write dirty pages
On a page fault:
– If the desired page is marked but still in memory, use it
– Otherwise, replace the first clean marked page in the pool
Advantage: writing out dirty pages is not on the critical path to swapping in.

Slide 36: Physical Memory as a Cache of Virtual Memory (repeats the outline of slide 19)

Slide 37: What to Page In
– Demand paging brings in the faulting page; to bring in more pages, we would need to know the future
– Users don't really know the future, but a few OSs have user-controlled pre-fetching
– In real systems: load the initial page and start running
– Some systems (e.g., Windows NT & XP) bring in additional neighboring pages (clustering)
– Alternatively: figure out the working set from previous activity, and page in the entire working set of a swapped-out process

Slide 38: Working Set
The working set of a process is used to model the dynamic locality of its memory usage.
– Working set = the set of pages a process currently needs to execute without too many page faults (Denning, late 1960s)
Definition:
– WS(t, w) = the set of pages referenced in the interval between time t − w and time t
  t is time and w is the working-set window (measured in page references)
– A page is in the working set only if it was referenced in the last w references

Slide 39: Working Set
w ≡ working-set window ≡ a fixed number of page references
– Example: 10,000–2,000,000 instructions
WSᵢ (the working set of process Pᵢ) = the set of pages referenced in the most recent w references (varies in time)
– if w is too small, it will not encompass the entire locality
– if w is too large, it will encompass several localities
– as w → ∞, it encompasses the entire program

Slide 40: Working Set Example
Assume 3 page frames and let the window be w = 5.
Reference string: 1 2 3 2 3 1 2 4 3 4 7 4 3 3 4 1 1 2 2 2 1
– Early in the string the working set is {1, 2, 3}; in the middle it is {3, 4, 7}; at the end it is {1, 2}
– if w is too small, it will not encompass a locality
– if w is too large, it will encompass several localities
– if w → ∞, it will encompass the entire program
If the total WS > physical memory ⇒ thrashing
– Need to free up some physical memory
– E.g., suspend a process and swap all of its pages out
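The WS(t, w) definition applied to the slide's reference string can be sketched directly (positions here are 1-based, counting references rather than wall-clock time):

```python
def working_set(refs, t, w):
    """WS(t, w): the set of distinct pages among the last w references
    ending at (1-based) position t."""
    return set(refs[max(0, t - w):t])

refs = [1, 2, 3, 2, 3, 1, 2, 4, 3, 4, 7, 4, 3, 3, 4, 1, 1, 2, 2, 2, 1]

print(working_set(refs, 5, 5))    # {1, 2, 3}: first five references
print(working_set(refs, 15, 5))   # {3, 4, 7}: references 11..15
print(working_set(refs, 21, 5))   # {1, 2}: the last five references
```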

Slide 41: Working Set Page Replacement
In practice, convert references into time:
– E.g., at 100 ns/reference, 100,000 references ≈ 10 msec
The WS algorithm in practice:
– On each clock tick, clear all R bits and record the process virtual time t
– When looking for eviction candidates, scan all pages of the process in physical memory:
  If R == 1, store t in the LTU (last time used) field of the PTE and clear R
  If R == 0 and (t − LTU) > WS_interval (i.e., w), evict the page (because it is not in the working set)
  Else select the page with the largest difference

Slide 42: WSClock
(Combines the Clock and WS algorithms.)
– Circular list of entries containing R, M, and the time of last use
  R and the time are updated on each clock tick
– The clock "hand" progresses around the list:
  If R == 1, reset R and update the time
  If R == 0, the age > WS_interval, and the page is clean, claim it
  If R == 0, the age > WS_interval, and the page is dirty, schedule a disk write
  Step the hand to the next entry on the list
– Very common in practice

Slide 43: Review of Page Replacement Algorithms

Slide 44: More on Segmentation
– Paging is invisible to the programmer (even paging with two-level page tables), but segmentation is not
– Segment: an open-ended piece of VM
  Multics (H6000): 2¹⁸ segments of 64K words each
  Pentium: 16K segments of 2³⁰ bytes each
    8K global segments, plus 8K local segments per process
    Each segment may be paged or not
    Each segment is assigned to one of four protection levels
– A program consciously loads segment descriptors when accessing a new segment
– Only OS/2 used the full power of Pentium segments; Linux concatenates 3 segments to create contiguous VM

Slide 45: OS Design Issue – Where Does the Kernel Execute?
In physical memory:
– Old systems (e.g., IBM 360/67)
– Extra effort needed to look inside the VM of any process
In virtual memory:
– Most modern systems: a shared segment among all processes
Advantages of the kernel in virtual memory:
– Easy to access and transfer to/from the VM of any process
– No context switch needed for traps or page faults
– No context switch needed for purely kernel interrupts

Slide 46: Kernel Memory Requirements
– Interrupt handlers: must be pinned into physical memory, at locations known to the hardware
– Critical kernel code: pinned, never swapped out
– I/O buffers (user and kernel): must be pinned and in contiguous physical memory
– Kernel data (e.g., PCBs, file objects, semaphores, etc.): pinned in physical memory; dynamically allocated and freed; not multiples of the page size, so fragmentation is an issue

Slide 47: Kernel Memory Allocation
E.g., the Linux PCB (struct task_struct):
– 1.7 Kbytes each
– Created on every fork and every thread create (clone())
– Deleted on every exit
Kernel memory allocators: the buddy system and slab allocation.

Slide 48: Buddy System
– Maintain a segment of contiguous pinned VM
– Round up each request to the nearest power of 2
– Recursively divide a chunk of size 2ᵏ into two "buddies" of size 2ᵏ⁻¹ to reach the desired size
– When freeing an object, recursively coalesce its block with adjacent free buddies
Problem: still a lot of internal fragmentation
– E.g., an 11 Kbyte page table requires 16 Kbytes
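Two details of the buddy system can be sketched in a few lines: the power-of-two rounding that causes the internal fragmentation, and the usual trick (an assumption here, not stated on the slide) that a block's buddy address is found by flipping one address bit:

```python
def buddy_size(request):
    """Round a request up to the nearest power of two, as the buddy
    system does. This rounding is the source of internal fragmentation."""
    size = 1
    while size < request:
        size *= 2
    return size

def buddy_of(addr, size):
    """The buddy of a block of the given (power-of-two) size is the block
    whose address differs only in the bit corresponding to that size."""
    return addr ^ size

print(buddy_size(11 * 1024))               # 16384: 11 KB consumes 16 KB
print(buddy_size(11 * 1024) - 11 * 1024)   # 5120 bytes lost internally
print(hex(buddy_of(0x4000, 0x1000)))       # 0x5000
```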

Slide 49: Buddy System (illustrated)

Slide 50: Slab Allocation
– Maintain a separate cache for each major data type (e.g., task_struct, inode in Linux)
– Slab: a fixed number of contiguous physical pages assigned to one particular cache
Upon a kernel memory allocation request:
– Recycle an existing object if possible
– Allocate a new one within a slab if possible
– Else, create an additional slab for that cache
When finished with an object, return it to its cache for recycling.
Benefits: minimizes fragmentation of kernel memory; most kernel memory requests can be satisfied quickly.

Slide 51: Slab Allocation (repeats slide 50, annotated: a "misuse of the word cache")

Slide 52: Slab Allocation (illustrated)

Slide 53: Unix VM
Physical memory:
– Core map (pinned): page frame info
– Kernel (pinned): the rest of the kernel
– Frames: the remainder of memory
Page replacement:
– The page daemon runs periodically to free up page frames
  Global replacement, with multiple parameters
  The current BSD system uses a 2-handed clock
– The swapper helps the page daemon:
  Look for processes idle 20 seconds or more and swap out the longest idle
  Next, swap out one of the 4 largest (the one in memory the longest)
  Check for processes to swap in

Slide 54: Linux VM
– The kernel is pinned; the rest of the frames are used for processes, the buffer cache, and the page cache
– Multilevel paging (3 levels); contiguous (slab) memory allocation using the buddy algorithm
– Replacement goal: keep a certain number of pages free
  A daemon (kswapd) runs once per second:
    Clock algorithm on the page and buffer caches
    Clock on unused shared pages
    Modified clock (by VA order) on user processes (by number of frames)

Slide 55: Windows NT
– Uses demand paging with clustering; clustering brings in pages surrounding the faulting page
– Processes are assigned a working-set minimum (20–50 pages) and a working-set maximum (45–345 pages)
  The working-set minimum is the minimum number of pages the process is guaranteed to have in memory
  A process may be assigned pages up to its working-set maximum
– When the amount of free memory in the system falls below a threshold, automatic working-set trimming restores the amount of free memory (the balance set manager)
  Working-set trimming removes pages from processes that have pages in excess of their working-set minimum

Slide 56: VM Summary
– Memory management ranges from simple multiprogramming support to efficient use of multiple system resources
– Models and measurements exist to determine the goodness of an implementation
– In real systems, one must trade off implementation complexity, management overhead, and access-time overhead
– What are the system goals? What models seem appropriate? Select one, measure, tune.

Slide 57: Reading Assignment
Silberschatz, the rest of Chapter 9 (i.e., §§9.4–9.11)

Slide 58: More on Thrashing
With multiprogramming, physical memory is shared among:
– Kernel code
– System buffers and system information (e.g., page tables)
– Some frames for each process
Each process needs a minimum number of pages in memory to avoid thrashing.

Slide 59: Thrashing
(Figure: CPU utilization vs. degree of multiprogramming)

Slide 60: Cause of Thrashing
Why does thrashing occur?
– The sum of the working sets of active processes > the total physical memory available (excluding the kernel and pinned pages)
Bounding the problem: the page replacement algorithm does not matter
– if the system has more than enough memory (overprovisioned), or
– if the system has far too little memory (overcommitted)

Slide 61: Page Fault Frequency
(Figure: page-fault rate vs. number of frames, with an upper and a lower bound; above the upper bound, increase the process's number of frames; below the lower bound, decrease it)

Slide 62: What Should the OS Do?
Establish an "acceptable" page-fault rate for each process:
– If a process's rate is too low, the process loses a frame
– If a process's rate is too high, the process gains a frame
If the overall system rate is too high, suspend a process.

Slide 63: Frame Allocation – Policies
– Equal allocation: e.g., with 100 frames and 5 processes, give each process 20 frames
– Proportional allocation: allocate according to the size of the process; bigger processes are allocated more frames than smaller ones
– Priority allocation: use a proportional allocation scheme based on process priorities rather than size
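The two size-based policies can be sketched in a few lines. The process sizes (10 and 127 pages, with 62 free frames) are an assumption borrowed from a common textbook example, not from these slides:

```python
def proportional(nframes, sizes):
    """Give process i roughly (size_i / total_size) * nframes frames."""
    total = sum(sizes)
    return [nframes * s // total for s in sizes]

# Equal allocation: 100 frames, 5 processes, 20 frames each
print([100 // 5] * 5)            # [20, 20, 20, 20, 20]

# Proportional allocation: the bigger process gets far more frames
print(proportional(62, [10, 127]))   # [4, 57]
```

Integer division means a few frames may go unallocated; a real allocator would distribute the remainder, which this sketch omits.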

Slide 64: Frame Allocation and Page Replacement
Frame allocation and page replacement interact.
– Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another
– Local replacement: each process selects only from its own set of allocated frames

