1 Memory Management Virtual Memory

2 Virtual Memory Key Idea
Disassociate addresses referenced in a running process from addresses available in storage

3 Features
Can address a storage space larger than primary storage
Creates the illusion that a process is placed contiguously in memory
Two Methods
Paging: memory allocated in fixed-size blocks
Segmentation: memory allocated in different sizes

4 Terms
Virtual Addresses: addresses referenced by a running process
Real Addresses: addresses available in primary storage
Virtual Address Space: range of virtual addresses that a process may reference
Real Address Space: range of real addresses available on a particular computer
Implication: processes are referenced in virtual memory, but run in real memory

5 Mapping
Virtual memory is contiguous
Physical memory need not be contiguous
Virtual memory can exceed physical memory
The third point implies that only part of a process has to be in physical memory
Virtual memory is limited by address size
Physical memory has (what else?) physical limits

6 Issues
How are addresses mapped?
Page Fault: a virtual memory reference has no corresponding item in physical memory
Physical memory is full, but a process needs something in secondary storage

7 NL-MAP/MMU: A First Look
Suppose every virtual address is mapped to a real address
Far beyond base/limit registers
Problem: the page map table is as large as the process
Solution: break the process into fixed-size blocks, say 512 bytes to 8 KB, and map the blocks in hardware
Virtual memory blocks: pages
Real memory blocks: page frames

8 Page Table
Page: chunk in virtual memory
Page Frame: chunk in physical memory
The relation between virtual addresses and physical memory addresses is given by the page table
With 4K pages, every page begins on a multiple of 4096 and ends 4095 addresses higher
So 4K-8K really means 4096-8191
8K to 12K means 8192-12287
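
To make the ranges concrete, here is a tiny illustrative C snippet (not part of the slides) that prints the byte range of each 4K page:

#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void) {
    /* For 4K pages, page n occupies bytes n*4096 through n*4096 + 4095. */
    for (unsigned n = 0; n < 4; n++) {
        unsigned start = n * PAGE_SIZE;
        unsigned end = start + PAGE_SIZE - 1;
        printf("page %u: %u - %u\n", n, start, end);
    }
    return 0;  /* prints 0-4095, 4096-8191, 8192-12287, 12288-16383 */
}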

9 A Little Help From Hardware
(Figure: the position and function of the MMU)
The MMU is shown as being a part of the CPU chip
It could just as easily be a separate chip

10 NL-MAP (1)
NL-MAP is really a look-up into a page table
Each process has its own page table
A register in the CPU is loaded with the real address, A, of the process page table
The page table contains 1 entry for each page of the process

11 NL-MAP (2)
Virtual address has two parts: (p, d)
page number (p)
offset from the start of the page (d)
A page table entry has (at least) five parts:
present/absent bit
protection bits (rwe)
referenced and modified bits: if dirty when evicted, the page must be written to disk
secondary storage address
page frame address
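
One way to picture such an entry is as a C bit-field struct. This is a sketch only: the field names and widths (for example, a 20-bit frame number for 32-bit addresses and 4K pages) are assumptions made for illustration, not a real MMU's layout.

#include <stdint.h>

/* Illustrative page table entry; widths assume 32-bit addresses and 4K pages. */
struct pte {
    uint32_t present    : 1;   /* 0 means the page is not in memory: a reference page faults */
    uint32_t prot       : 3;   /* r/w/e protection bits */
    uint32_t referenced : 1;   /* set when the page is referenced */
    uint32_t modified   : 1;   /* dirty: must be written back to disk when evicted */
    uint32_t frame      : 20;  /* page frame number, valid when present = 1 */
};

/* The secondary storage (disk) address is usually kept in a separate OS table
   rather than packed into the hardware-visible entry. */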

12 NL-MAP (3)
Consequences:
The page table can be large
Mapping must be fast
Entry fields:
ref: page has been referenced
mod: page has been modified
prot: what kinds of access are permitted
pres/abs: if set to 0, page fault

13 Speed
A direct-mapped page table is kept in main memory
Two memory accesses are required to satisfy a reference:
1. Page table
2. Primary storage

14 Size
Suppose we have 32-bit addresses
Some of the 32 bits will be the offset (displacement) within the page table
Some of the 32 bits will be the offset (displacement) within the page frame
Suppose our pages are 4K (4096 bytes)
Offsets from 0 to 4K-1 are necessary
These are displacements from the start of the page frame, the 'd' part of the real address
Since 4K = 2^12 we reserve the low-order 12 bits for this
That leaves 20 bits to address entries in the page table
Each process would have a page table with 2^20 entries
If the page size is smaller, we have more entries!
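
For this 32-bit, 4K-page layout, splitting an address into p and d is one shift and one mask. A minimal sketch (the example address is the one used later in the slides):

#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12u                        /* 4K pages: 12-bit displacement d */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

int main(void) {
    uint32_t vaddr = 0x00403004u;
    uint32_t p = vaddr >> OFFSET_BITS;         /* high 20 bits: index into the page table   */
    uint32_t d = vaddr & OFFSET_MASK;          /* low 12 bits: offset within the page frame */
    printf("p = %u, d = %u\n", p, d);          /* p = 1027, d = 4 */
    return 0;
}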

15 Implications of the direct mapping model
Large page tables are impractical from two perspectives:
They require lots of memory
Loading an entire page table at each context switch would kill performance

16 More Help from Hardware
Translation lookaside buffer (TLB)
Associative memory
Searchable in parallel
Very small number of entries, say 8 to 256

17 TLB

18 TLB close-up

19 Principle of Locality
A TLB with only 16 entries achieves 90% of the performance of a system with the entire page table in associative memory
Why? A page referenced in the recent past is likely to be referenced again

20 Paging with TLB (1)
Small associative memory with 16 or 32 registers
Store there the most recently referenced page frames
Technique:
1. Process references a virtual address (p, d)
2. Do a parallel search of the TLB for p
   if found, return p', the frame number corresponding to p
   else look up p in the page table and update the TLB with (p, p'), replacing the least recently used entry
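
A software model of this technique is sketched below. It is illustrative only: a real TLB compares all entries in parallel in hardware, while this sketch uses a linear scan, an assumed page_table_lookup helper with made-up frame numbers, and a simple counter standing in for LRU.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 16

struct tlb_entry {
    uint32_t page;       /* virtual page number p */
    uint32_t frame;      /* page frame number p'  */
    bool     valid;
    uint64_t last_used;  /* stand-in for LRU hardware */
};

static struct tlb_entry tlb[TLB_ENTRIES];
static uint64_t tick;

/* Stand-in for the in-memory page table walk (assumed helper, made-up frames). */
static uint32_t page_table_lookup(uint32_t page) {
    static const uint32_t page_table[4] = { 7, 9, 2, 5 };
    return page_table[page & 3];
}

static uint32_t translate(uint32_t page) {
    int victim = 0;
    tick++;
    /* Search the TLB for p (hardware checks every entry at once). */
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            tlb[i].last_used = tick;
            return tlb[i].frame;                 /* hit: return p' */
        }
        if (!tlb[i].valid || tlb[i].last_used < tlb[victim].last_used)
            victim = i;                          /* remember a free or least recently used slot */
    }
    /* Miss: consult the page table, then replace the least recently used entry. */
    uint32_t frame = page_table_lookup(page);
    tlb[victim] = (struct tlb_entry){ page, frame, true, tick };
    return frame;
}

int main(void) {
    printf("page 2 -> frame %u\n", translate(2));   /* miss: filled from the page table */
    printf("page 2 -> frame %u\n", translate(2));   /* hit in the TLB */
    return 0;
}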

21 Paging with TLB (2)
When p is not found in the TLB,
the least recently used slot in the TLB is filled with p's entry from the page table

22 The Whole Picture

23 Page Tables Can be Large
Suppose: 32-bit machine, 4K page size, 12-bit displacement
2^20 page table entries, each (at least) 8 bytes
8 MB page table per process
Now suppose: 64-bit machine, 4K page size, 12-bit displacement
2^52 page table entries, each (at least) 8 bytes
(2^55 bytes)/(2^40 bytes/TB) = 2^15 TB page table per process

24 Indirection: Two Levels (There Could Be More)
Virtual address = (p, t, d)
p: page number at the first level
t: page number at the second level
d: displacement into the page frame

25 The Two Level Model
(Diagram) Level 1, Level 2: the virtual address (p, t, d) is translated in two steps
A register holds the address A of the 1st level page table; p indexes it to find p', the address of a 2nd level page table
t indexes that 2nd level table to find p'', the page frame address; d is added to p'' to form the real address
There is 1 table for each entry in level 1
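
In code, the two steps are just two array lookups plus the displacement. The sketch below assumes the 10/10/12-bit split of the next slide and made-up table contents; it is illustrative, not how an MMU is actually built.

#include <stdint.h>
#include <stdio.h>

#define PT_ENTRIES 1024   /* 2^10 entries per table */

static uint32_t level2_table[PT_ENTRIES];                     /* holds p'': page frame base addresses    */
static uint32_t *level1_table[PT_ENTRIES] = { level2_table }; /* holds p': addresses of 2nd level tables */

static uint32_t translate(uint32_t vaddr) {
    uint32_t p = (vaddr >> 22) & 0x3FF;   /* index into the level 1 table          */
    uint32_t t = (vaddr >> 12) & 0x3FF;   /* index into the selected level 2 table */
    uint32_t d = vaddr & 0xFFF;           /* displacement within the page frame    */

    uint32_t *level2 = level1_table[p];   /* p': address of a 2nd level table */
    uint32_t frame_base = level2[t];      /* p'': page frame address          */
    return frame_base + d;                /* real address                     */
}

int main(void) {
    level2_table[3] = 0x9000;                  /* pretend page (p=0, t=3) maps to frame 0x9000 */
    printf("0x%x\n", translate(0x00003004));   /* prints 0x9004 */
    return 0;
}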

26 How Many Actual Tables?
Suppose 32-bit addresses, 4K page size
P (10-bit displacement into the first level table)
T (10-bit displacement into the second level table)
D (12-bit displacement into the page frame)
P has 2^10 entries; T has 2^10 entries
Each entry in the top level table references 4 MB of memory because
It references a second level page table of 2^10 entries
Each of which points to a 4K page
At most: 1 top level table with 1024 entries
At most: 1024 second level tables, each entry pointing to a 4K page
At most: 1024 x 1024 pages x 4K/page = a 2^32 byte process

27 Key Idea
Not all of the tables are necessary
With direct mapping, each process requires a page table with up to 2^20 entries, whether they are needed or not
With a two-level page table the savings can be substantial

28 Example
Suppose a 12 MB process:
bottom 4 MB for code
next 4 MB for data
top 4 MB for stack
Hole between the top of the data and the bottom of the stack
Top level page table has 2^10 slots, 2^3 bytes each
Only three are used
These point to three second level page tables, each with 2^10 slots, each slot requiring 2^3 bytes
Total page table memory = 2^10 slots x 2^3 bytes/slot + 3 x 2^10 slots x 2^3 bytes/slot = 2^15 bytes = 32 KB for all page tables
Direct mapping: 8 MB page table
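
A quick check of that arithmetic (a throwaway snippet, assuming the 8-byte slots stated above):

#include <stdio.h>

int main(void) {
    unsigned long slot_bytes = 1ul << 3;   /* 2^3 = 8 bytes per slot             */
    unsigned long slots      = 1ul << 10;  /* 2^10 = 1024 slots per table        */
    unsigned long tables     = 1 + 3;      /* one top level + three second level */
    printf("%lu bytes\n", tables * slots * slot_bytes);   /* 32768 = 2^15 = 32 KB */
    return 0;
}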

29 Multilevel Page Tables
Each arrow on the right points to a 4K page. The low 12 bits of the virtual address are a displacement into the page

30 Sample Page Reference
MMU receives this virtual address: 0x00403004 (hex)
In binary: 0000|0000|0100|0000|0011|0000|0000|0100
P = 1, T = 3, D = 4
P=1: 1st entry, starting at 0, in the first level page table. Find p', the address of the 2nd level page table
T=3: 3rd entry, starting at 0, in the 2nd level page table whose address is p'. Find p'', the address of the page frame
D=4: 4th byte offset within the page frame
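
The same decoding with shifts and masks, plus the position this address occupies in the virtual address space (an illustrative snippet using the 10/10/12-bit split from the earlier slides):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t vaddr = 0x00403004u;
    uint32_t p = vaddr >> 22;             /* top 10 bits    */
    uint32_t t = (vaddr >> 12) & 0x3FF;   /* middle 10 bits */
    uint32_t d = vaddr & 0xFFF;           /* low 12 bits    */
    /* p selects a 4 MB chunk, t a 4K page inside it, d a byte inside the page. */
    uint32_t pos = p * (4u << 20) + t * 4096u + d;
    printf("P=%u T=%u D=%u, byte %u of the virtual address space\n", p, t, d, pos);
    /* prints P=1 T=3 D=4, byte 4206596 */
    return 0;
}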

31 Where in Virtual Memory?
P indexes the top level page table. P = 1 corresponds to the 2nd 4M of virtual memory: 4M to 8M-1
T indexes the 2nd level page table. T = 3 corresponds to the 4th 4K page: 3*4K to 4*4K-1, i.e., 12K to 16K-1, or 12,288 to 16,383 within its 4M chunk
But since P = 1, we are in the 2nd 4M chunk: 12,288 + 4M to 16,383 + 4M, or absolute addresses 4,206,592 to 4,210,687
The entry found in the 2nd level page table contains the frame address corresponding to the virtual page that starts at 0x00403000
To this we add d = 4 to translate virtual address 0x00403004, which is absolute address 4,206,596 within the virtual address space
If the present/absent bit is 0: page fault
Note:
a) The virtual address space has (2^32 bytes)/(2^12 bytes/page) = 2^20 pages
b) But we are using only 12M/(4K/page) = 3K pages
c) Only 4 page tables are necessary to handle all of the address references

32 More Levels are Possible
If we refer to the 1st level page table as the page directory, then the scheme we have been describing is Intel's from 1985
The Pentium Pro in 1995 added another level, the Page Directory Pointer Table
With 4 PDPT entries, 512-entry page tables, and 4K page frames: 2^2 x 2^9 x 2^9 x 2^12 = 4 GB (as before, but with more flexibility)
x86-64 uses 4 levels of 512-entry page tables: 2^9 x 2^9 x 2^9 x 2^9 x 2^12 = 2^48 = 256 TB

33 Getting Out of Hand? Inverted Page Tables
The page table contains 1 entry per page frame of real memory
Suppose we want to address 1 GB of real memory using 4K page frames
The inverted page table requires 2^18 entries because 2^18 x 2^12 = 1 GB
Seems big, but this is for all processes
So, when process n refers to virtual page p, p is no longer an index into the table
The entire 256K-entry table must be searched for the frame holding (n, p)

34 TLB and Hash Tables On TLB miss, entire inverted table has to be searched Hash Solution Search TLB If found, retrieve page frame if not found Hash page number to find entry in hash table All pages hashing to the same number are chained as tuples (page, page frame) Enter entry in TLB Retrieve page frame

35 Inverted Page Tables
(Figure) a) Direct mapping b) Inverted page table c) Inverted page table with hash chains

