
1 Virtual Memory
2015/11/26 \course\cpeg323-08F\Topic7e

2 Virtual Memory: Motivation
Historically, there were two major motivations for virtual memory: to allow efficient and safe sharing of memory among multiple programs, and to remove the programming burden of a small, limited amount of main memory. [Patterson & Hennessy 2004]
"... a system has been devised to make the core-drum combination appear to the programmer as a single-level store, the requisite transfers taking place automatically." (Kilburn et al.)

3 Purpose of Virtual Memory
- Provide sharing.
- Automatically manage the memory hierarchy (as a "one-level" store).
- Simplify loading (relocation).
Datapath: MAIN PROCESSOR -> (logical address) MEMORY MANAGEMENT UNIT -> (physical address) HIGH-SPEED CACHE -> MAIN MEMORY -> BACKING STORE, with control and data lines between the processor and memory.

4 Structure of Virtual Memory
A virtual address from the processor passes through the address translator, which produces the physical address sent to memory. On a page fault, an elaborate software page-fault handling algorithm takes over.

5 A Paging System
A 64 KB virtual address space is mapped onto 32 KB of main memory in 4 KB pages: each virtual address is translated to a main-memory address.

6 Page Table
The page table maps each virtual page to a page frame in main memory. A valid bit per entry indicates presence: 1 = present in main memory, 0 = not present in main memory.

7 Address Translation
In virtual memory, blocks of memory (called pages) are mapped from one set of addresses (called virtual addresses) to another set (called physical addresses). See P&H Fig. 7.19 3rd Ed or 5.19 4th Ed.
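The page-table lookup described above can be sketched in a few lines. This is a minimal illustration, not the scheme of any particular machine: the page size, table contents, and fault behavior below are made-up assumptions.

```python
# Minimal sketch of virtual-to-physical translation (illustrative values;
# a real MMU does this in hardware).
PAGE_SIZE = 4096  # assume 4 KB pages, so the offset is the low 12 bits

# Page table: virtual page number -> (valid bit, physical frame number)
page_table = {0: (1, 5), 1: (1, 2), 2: (0, None)}

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)    # split into page number + offset
    valid, frame = page_table.get(vpn, (0, None))
    if not valid:
        # valid bit off: in a real system the OS page-fault handler runs here
        raise RuntimeError(f"page fault on virtual page {vpn}")
    return frame * PAGE_SIZE + offset         # frame base + offset within page

print(hex(translate(0x1234)))  # virtual page 1, offset 0x234 -> 0x2234
```

Note that the offset passes through unchanged; only the page number is remapped.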

8 Page Faults
If the valid bit for a virtual page is off, a page fault occurs and the operating system is given control. Once the operating system gets control, it must find the page in the next level of the hierarchy (usually magnetic disk) and decide where to place the requested page in main memory. See P&H Fig. 7.22 3rd Ed or 5.22 4th Ed.

9 Technology in 2008
See P&H Fig., p. 453, 4th Ed.

10 Typical Ranges of Parameters for Virtual Memory in 2008
See P&H Fig. 5.29 4th Ed.

11 Virtual Address Mapping
A virtual address consists of a page number and a displacement. The page number indexes the page map, which yields the base address of the page in memory; the displacement selects the address within that page.

12 Terminology
- Page
- Page fault
- Virtual address
- Physical address
- Memory mapping or address translation

13 VM Simplifies Loading
VM provides a relocation function: address mapping allows programs to be loaded at any location in physical memory. Under VM, relocation no longer needs the special OS and hardware support it required in the past.

14 Address Translation Considerations
- Direct mapping using register sets.
- Indirect mapping using tables.
- Associative mapping of frequently used pages.

15 Design of Virtual Memory
The page table (PT) must have one entry for each page in virtual memory! How many pages are there? How large is the PT?

16 4 Key Design Decisions in VM Design
- Pages should be large enough to amortize the high access time (4 KB to 16 KB are typical, and some designers are considering sizes as large as 64 KB).
- Organizations that reduce the page-fault rate are attractive. The primary technique used here is to allow flexible placement of pages (e.g., fully associative).

17 4 Key Design Decisions in VM Design (cont'd)
- Page faults (misses) in a virtual memory system can be handled in software, because the overhead is small compared with the access time to disk. Furthermore, the software can afford to use clever algorithms for choosing where to place pages, because even small reductions in the miss rate pay for the cost of such algorithms.
- Write-through cannot be used to manage writes in virtual memory, since writes take too long. Instead, we need a scheme that reduces the number of disk writes.

18 What Happens on a Write?
Write-through to secondary storage is impractical for VM, so write-back is used:
- Advantages: reduces the number of writes to disk and amortizes their cost.
- Requires a dirty bit per page.

19 Page Size Selection Constraints
- Efficiency of the secondary memory device.
- Page table size.
- Internal fragmentation: the last part of the last page is wasted.
- Program logic structure: logical block size roughly 1 KB to 4 KB.
- Table fragmentation [Kai, p. 68]: the PT itself occupies memory.

20 Page Size Selection
- PT size.
- Miss ratio.
- Efficiency of PT transfer from disk to memory.
- Internal fragmentation: with separate text, heap, and stack segments, about half a page is wasted in each, i.e. 3 x 0.5 = 1.5 pages per process.
- Process start-up time: the smaller the page, the faster!

21 An Example
Case 1: VM page size 512 bytes, VM address space 64 KB.
Total virtual pages = 64K / 512 = 128 pages.

22 An Example (cont'd)
Case 2: VM page size 512 bytes, VM address space 4 GB = 2^32.
Total virtual pages = 4G / 512 = 2^32 / 2^9 = 2^23 = 8M pages.
Note: assuming main memory is 4 MB, it holds 4M / 512 = 2^22 / 2^9 = 2^13 frames, so a frame number takes 13 bits; rounding each PTE up to 4 bytes gives a total PT size of about 8M x 4 = 32 MB.
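The arithmetic in Case 2 can be checked directly (a sanity check, not part of the original slides):

```python
# Case 2 numbers: 4 GB virtual space, 512-byte pages, 4 MB main memory.
page   = 512            # 2**9 bytes per page
vspace = 4 * 2**30      # 4 GB = 2**32 bytes of virtual address space
mem    = 4 * 2**20      # 4 MB = 2**22 bytes of main memory

pages    = vspace // page   # number of virtual pages
frames   = mem // page      # number of physical frames
pte_size = 4                # 13-bit frame number rounded up to 4 bytes
pt_bytes = pages * pte_size # total page-table size

print(pages == 2**23, frames == 2**13, pt_bytes == 32 * 2**20)  # True True True
```

The 8M-entry table at 4 bytes per entry is 32 MB, larger than the assumed 4 MB main memory, which motivates the PT-size reduction techniques on the later slide.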

23 An Example (cont'd)
What about a VM address space of 2^52 (R-6000, 4 petabytes) with a 4 KB page size? Total number of virtual pages: 2^52 / 2^12 = 2^40.

24 Techniques for Reducing PT Size
- Set a lower limit, and permit dynamic growth.
- Permit growth from both directions.
- Inverted page table (a hash table).
- Multi-level page table (segments and pages).
- The PT itself can be paged, i.e., placed in the virtual address space (note: some small portion of its pages should stay in main memory and never be paged out).

25 Two-Level Address Mapping
A virtual address is split into an 11-bit segment number, an 11-bit page number, and a 10-bit displacement. The segment number indexes the segment table (from the base of the segment table) to obtain the base address of a page table; the page number indexes that page table to obtain the base address of a page in memory; the displacement then selects the address within the page.
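The two-level walk above can be sketched as follows. The 11/11/10-bit split follows the slide; the table contents are made-up values for illustration.

```python
# Two-level translation: segment table -> page table -> page + displacement.
SEG_BITS, PAGE_BITS, DISP_BITS = 11, 11, 10

# Made-up tables: segment 3 has a page table in which page 7 starts at 0x40000.
segment_table = {3: {7: 0x40000}}

def translate2(vaddr):
    disp = vaddr & ((1 << DISP_BITS) - 1)                 # low 10 bits
    page = (vaddr >> DISP_BITS) & ((1 << PAGE_BITS) - 1)  # middle 11 bits
    seg  = vaddr >> (DISP_BITS + PAGE_BITS)               # top 11 bits
    page_table = segment_table[seg]   # first-level lookup (segment table)
    page_base  = page_table[page]     # second-level lookup (page table)
    return page_base + disp           # address within the page

vaddr = (3 << 21) | (7 << 10) | 0x25
print(hex(translate2(vaddr)))  # -> 0x40025
```

Only the page tables of segments actually in use need to exist, which is how the two-level scheme shrinks the total table size.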

26 Placement
OS designers always pick lower miss rates over a simpler placement algorithm, so placement is fully associative: VM pages can go anywhere in main memory (compare with a sector cache). Question: why not use associative hardware? (The number of PT entries is too big!)

27 VM: Implementation Issues
- Page fault handling.
- Translation lookaside buffer (TLB).
- Protection issues.

28 Fast Address Translation
With the PT in main memory, each memory reference requires at least two memory accesses. Improvements:
- Store the PT in fast registers (example: Xerox, 256 registers?).
- Use a TLB.
For multiprogramming, the pid should be stored as part of the tag in the TLB.
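A toy TLB with pid-tagged entries, as suggested above. The structure is a sketch: a real TLB is a small associative hardware memory, not a dictionary, and the eviction rule here is arbitrary.

```python
# Sketch: TLB entries tagged with (pid, vpn) so that translations from
# different processes in a multiprogrammed system do not clash.
class TLB:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}            # (pid, vpn) -> physical frame number

    def lookup(self, pid, vpn):
        return self.entries.get((pid, vpn))   # None signals a TLB miss

    def insert(self, pid, vpn, frame):
        if len(self.entries) >= self.capacity:
            # evict an arbitrary entry (real TLBs may even use random choice)
            self.entries.pop(next(iter(self.entries)))
        self.entries[(pid, vpn)] = frame

tlb = TLB()
tlb.insert(pid=1, vpn=7, frame=42)
print(tlb.lookup(1, 7))   # -> 42 (hit)
print(tlb.lookup(2, 7))   # -> None (same vpn, different process: miss)
```

Without the pid in the tag, a context switch would require flushing the whole TLB, which is exactly the trade-off discussed on the later context-switching slide.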

29 Page Fault Handling
When a virtual page number is not in the TLB, the PT in memory is accessed (through the PTBR) to find the PTE. If the PTE indicates that the page is missing, a page fault occurs, and a context switch follows!

30 Making Address Translation Fast
The TLB acts as a cache on the page table, holding only entries that map to physical pages. See P&H Fig. 7.23 3rd Ed or 5.23 4th Ed.

31 Typical Values for a TLB in 2008
See P&H Fig. 5.29 4th Ed. Although the range of values is wide, this is partially because many of the values that have shifted over time are related; for example, as caches become larger to overcome larger miss penalties, block sizes also grow.

32 TLB Design
- Placement policy: small TLBs can be fully associative; for large TLBs, full associativity may be too slow.
- Replacement policy: sometimes even a random policy is used for speed/simplicity.

33 Example: FastMATH
Processing a read or a write-through in the Intrinsity FastMATH TLB and cache. See P&H Fig. 7.25 3rd Ed or 5.25 4th Ed.

34 Virtual-to-Real Address Translation Using a Page Map
A virtual address (pid, page number p, displacement w) is presented to the TLB and page map; the resulting page frame address (PFA), either in memory or in secondary memory, is combined with the displacement w to form the physical address. Each page map entry (PME) supports operation validation (RWX protection bits checked against the requested access type, raising an access fault or page fault on violation) and the replacement policy:
- s/u = 1: supervisor mode.
- PME(x).C = 1: the page at PFA has been modified.
- PME(x).P = 1: the page is private to the process.
- PME(x).pid: process identification number.
- PME(x).PFA: page frame address.

35 Translation Lookaside Buffer
- The TLB miss rate is low: Clark-Emer data [85] show it is 3-4 times smaller than typical cache miss ratios.
- When a TLB miss occurs, the penalty is relatively low: a TLB miss usually results in a cache fetch.

36 Translation Lookaside Buffer (cont'd)
- A TLB miss implies a higher miss rate for the main cache.
- TLB translation is process-dependent. Strategies for context switching:
  1. Tagging entries by context.
  2. Flushing: a complete purge by context (for a shared TLB).
There is no absolute answer.

37 Integrating VM, TLBs and Caches
See P&H Fig. 7.24 3rd Ed or 5.24 4th Ed. The TLB and cache implement the process of going from a virtual address to a data item in the Intrinsity FastMATH. The figure shows the organization of the TLB and the data cache, assuming a 4 KB page size, and focuses on a read.

