1 Virtual Memory (© 2004, D. J. Foreman)

2 Objectives
 • Avoid copy/restore of the entire address space
 • Avoid unusable holes in memory
 • Increase program RAM past physical limits
 • Allocation based on virtual-memory policy
   ■ Freedom from user requirements
   ■ Extended abstraction for users
 • Strategies
   ■ Paging - fixed-size blocks called pages
   ■ Segmentation - variable-size segments

3 Address Translation Mapping
 • Done at runtime
 • Only the part being used is loaded
   ■ Actually a small initial page-set > 1 page
   ■ Page size determined by the O/S
 • Instruction execution proceeds until an "addressing" or "missing data" fault
   ■ O/S gets control
   ■ Loads the missing page
   ■ Re-starts the instruction with the new data address

4 Mapping - 2
 • Formally: β_t : virtual address → physical address ∪ {Ω}
   ■ β_t is a time-varying map
   ■ t is the process's virtual time
 • The virtual memory manager implements the mapping; ANY mechanism is valid if it follows the definition
 • β_t(i) will be either:
   ■ the real address of virtual address i, or
   ■ Ω (no real address is currently bound to i)
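A minimal sketch (not from the slides) of β_t as a page-table lookup, assuming a Python dict as the table; OMEGA, PAGE_SIZE, and page_table are illustrative names:

    # Sketch of beta_t: virtual address -> physical address, or Omega if unmapped.
    OMEGA = None          # stands in for the "missing translation" result
    PAGE_SIZE = 4096      # 2**h with h = 12

    # page_table maps a virtual page number to a physical frame number (or OMEGA)
    page_table = {0: 7, 1: 3, 2: OMEGA}

    def beta(vaddr):
        page, offset = divmod(vaddr, PAGE_SIZE)
        frame = page_table.get(page, OMEGA)
        if frame is OMEGA:
            return OMEGA              # caller must fault the page in and retry
        return frame * PAGE_SIZE + offset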

5 Concepts
 • Entire virtual address space is on disk
 • A small set of virtual addresses is bound to real addresses at any instant
 • Virtual addresses are scattered
 • Page size depends on hardware
 • Page size usually = page-frame size
   ■ Counter-example: OS/VS2 (2K pages) on VM (4K pages)
 • # of page frames is computed from physical memory constraints

6 Computations
 • Page size = 2^h and page-frame size = 2^h
   ■ Usually constrained by hardware protection
 • Number of system pages: n = 2^g
 • Number of process pages/frames: m = 2^j
 • For a process:
   ■ Number of virtual addresses: G = 2^g × 2^h = 2^(g+h)
   ■ Number of physical addresses: H = 2^(j+h)
 • FYI:
   ■ For Pentiums: g = 20, h = 12 (page size = 4K)
   ■ So G = 4 GB (max program size, including the O/S)
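As a worked example of these formulas, using the Pentium values quoted above (g = 20, h = 12):

    # Worked computation of the slide's quantities for g = 20, h = 12.
    g, h = 20, 12                 # g = virtual-page-number bits, h = offset bits
    page_size = 2 ** h            # 4096 bytes (4K pages)
    G = 2 ** (g + h)              # number of virtual addresses per process
    print(page_size)              # 4096
    print(G)                      # 4294967296, i.e. 4 GB of virtual address space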

7 Processing Ω
 • If the function returns Ω
   ■ Find location i on disk
   ■ Bring it into main memory
   ■ Re-translate i
   ■ Re-start the instruction
 • Significant overhead
   ■ O/S context switch
   ■ Table search
   ■ I/O for the missing address

8 Segmentation & Paging
 • Segmentation
   ■ Programs divided into segments
   ■ Location references are two-part (segment, offset)
   ■ I/O (swap) whole segments (variable sized)
   ■ Programmer can control swapping
   ■ External fragmentation can occur
 • Paging
   ■ I/O (page) fixed-size blocks
   ■ Location references are linear
   ■ No programmer control of paging

9 Description of Translation
 • n = pages in virtual space
 • m = allocated frames
 • i is a virtual address, 0 <= i < G, where G = 2^(g+h)
 • k = a physical memory address = U × 2^h + V, with 0 <= V < 2^h
 • c = page size = 2^h
 • Page number = ⌊i / c⌋
 • U is the page frame number
 • V = line number (offset in page) = i mod c
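The arithmetic above can be sketched directly; page_table here is an illustrative stand-in mapping a page number to its frame number U:

    # Translation arithmetic from slide 9: k = U * 2**h + V.
    h = 12
    c = 2 ** h                     # c = page size
    page_table = {0: 5, 1: 2}      # virtual page -> physical frame (example data)

    def translate(i):
        page = i // c              # page number = floor(i / c)
        V = i % c                  # V = offset within the page
        U = page_table[page]       # U = frame number (KeyError if page not resident)
        return U * c + V           # k = U * 2**h + V

    print(hex(translate(0x1234))) # page 1 -> frame 2, so 0x2234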

10 Policies
 • Fetch - when to load a page
 • Replace - victim selection (when memory is full)
 • Position (placement) - where to put a page (when memory is not full)
 • # of available frames is constant
 • Page reference stream - the page numbers a process references, in order of reference

11 Paging Algorithm
 1. Page fault occurs
 2. Process with the missing page is interrupted
 3. Memory manager locates the missing page
 4. A page frame is unloaded (replacement policy)
 5. Page is loaded into the vacated page frame
 6. Page table is updated
 7. Process is restarted
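A hedged sketch of these seven steps; the structures (backing_store, frames, page_table, choose_victim) are illustrative, and the FIFO-style victim choice merely stands in for a real replacement policy:

    backing_store = {n: "page %d data" % n for n in range(16)}  # every page lives on disk
    frames = {}          # frame number -> page number currently loaded
    page_table = {}      # page number -> frame number
    NUM_FRAMES = 4

    def choose_victim():
        return next(iter(frames))                # placeholder policy: oldest loaded frame

    def handle_page_fault(page):                 # step 1: fault on `page`
        # step 2: the faulting process is suspended (implicit here)
        data = backing_store[page]               # step 3: locate the missing page on disk
        if len(frames) >= NUM_FRAMES:            # step 4: unload a frame if none are free
            victim = choose_victim()
            del page_table[frames[victim]]
            del frames[victim]
        frame = next(f for f in range(NUM_FRAMES) if f not in frames)
        frames[frame] = page                     # step 5: load page into the vacated frame
        page_table[page] = frame                 # step 6: update the page table
        return frame                             # step 7: the process can now be restarted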

12 Demand Paging Algorithms
 • Random - many 'missing page' faults
 • Belady (optimal) - for comparisons only
 • Least Recently Used - if recently used, it will be used again, so dump the LRU page
 • Least Frequently Used - dump the most useless page; influenced by locality, slow to react
 • LRU and LFU are both stack algorithms
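A minimal LRU simulation (an assumed implementation, not from the slides) that counts page faults for a reference stream:

    from collections import OrderedDict

    def simulate_lru(reference_stream, num_frames):
        frames = OrderedDict()                    # page -> None, ordered oldest to newest
        faults = 0
        for page in reference_stream:
            if page in frames:
                frames.move_to_end(page)          # used again, so it is now most recent
            else:
                faults += 1
                if len(frames) == num_frames:
                    frames.popitem(last=False)    # dump the least recently used page
                frames[page] = None
        return faults

    print(simulate_lru([0, 1, 2, 0, 3, 0, 4], num_frames=3))   # 5 faults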

13 2nd Chance
 • Adds two bits to the page table
   ■ R - page was referenced
   ■ M - page was modified
 • If (R == 1) { R = 0; try the next page } else take this page
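A sketch of second-chance victim selection over a circular set of frames, using the R bit described above; the data layout and the clock "hand" are illustrative assumptions:

    def second_chance(frames, hand):
        # frames: list of dicts like {'page': n, 'R': 0 or 1}; hand: current clock position.
        # Returns (index of the victim frame, new hand position).
        while True:
            entry = frames[hand]
            if entry['R'] == 1:
                entry['R'] = 0                            # give the page a second chance
                hand = (hand + 1) % len(frames)           # try the next page
            else:
                return hand, (hand + 1) % len(frames)     # take this page

    frames = [{'page': 3, 'R': 1}, {'page': 7, 'R': 0}, {'page': 9, 'R': 1}]
    victim, hand = second_chance(frames, 0)
    print(frames[victim]['page'])                         # 7; page 3's R bit was cleared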

14 Page Mgmt Structures
 • [Diagram] Virtual address = (Page #, Disp); page-table entry (PTE) = (A, D, K, Frame #)
   ■ A - Assigned
   ■ D - Dirty
   ■ K - Protection key

15 Page Table Lookup
 • [Diagram] The virtual address is split into (Pg#, Disp); the process page-table pointer (a register) locates the page table (1 entry per page); Pg# indexes the entry holding (flags, Frame#); the physical address is (Frame#, Disp)

16 Inverted Page Tables
 • Useful for locating pages that are in memory
 • Extract the virtual page # (VPN) from the address
 • Hash the VPN to an index
 • Search the table for this index
 • Each entry has a VPN and a frame # (PPN)
 • Efficient for small memories
 • Collisions must be resolved
 • Note: uses the page #, not the full address
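A hedged sketch of this lookup: hash the VPN to an index and search from there for a matching entry; the linear-probe collision handling is an illustrative choice, not from the slides:

    TABLE_SIZE = 8
    inverted = [None] * TABLE_SIZE          # one entry per frame: (VPN, frame#) or None

    def insert(vpn, frame):
        idx = hash(vpn) % TABLE_SIZE
        while inverted[idx] is not None:    # resolve collisions by probing forward
            idx = (idx + 1) % TABLE_SIZE
        inverted[idx] = (vpn, frame)

    def lookup(vpn):
        idx = hash(vpn) % TABLE_SIZE
        for _ in range(TABLE_SIZE):
            entry = inverted[idx]
            if entry is not None and entry[0] == vpn:
                return entry[1]             # frame number (PPN)
            idx = (idx + 1) % TABLE_SIZE
        return None                         # not in memory: fall back to a disk lookup

    insert(17, 1)
    print(lookup(17))                       # 1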

17 Inverted Page Tables - 2
 • Regular page table
   ■ Uses the virtual page # directly
   ■ One entry per page
 • Inverted table
   ■ Uses the virtual page # as hash input
   ■ Sparse lookup table
   ■ Finds the frame if it is in memory
   ■ Followed by a disk-address lookup if needed
   ■ One entry per frame

18 Inverted Page Tables - example
 • [Diagram] The Pg# from the virtual address (Pg#, Disp) is hashed; the table (1 entry per frame, each holding a page# or link) is searched; e.g. Pg# 17 matches in row 1 of the table, so Frame# = 1 and the physical address is (Frame# 1, Disp)

19 Translation Lookaside Buffer
 • [Diagram] The VPN from the virtual address (page, offset) is presented to the TLB, a hardware cache of (VPN, full PTE) pairs with simultaneous (associative) lookup; a hit yields the Frame# directly, giving the physical address (frame, offset); if there is no TLB 'hit', the software lookup table is searched line by line
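A minimal sketch of a TLB in front of the page table: check the associative cache first, and fall back to the slower table (refilling the TLB) on a miss; both structures here are illustrative dicts:

    PAGE_SIZE = 4096
    page_table = {0: 9, 1: 4, 2: 11}      # VPN -> frame, the software lookup table
    tlb = {}                              # VPN -> frame, modelling the associative cache

    def translate(vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        frame = tlb.get(vpn)
        if frame is None:                 # no TLB 'hit': line-by-line table lookup
            frame = page_table[vpn]
            tlb[vpn] = frame              # refill the TLB for next time
        return frame * PAGE_SIZE + offset

    print(hex(translate(0x1010)))         # VPN 1 -> frame 4: 0x4010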

