
1 EECS 470 Virtual Memory Lecture 15

2 Why Use Virtual Memory?
Decouples the size of physical memory from the programmer-visible virtual memory
Provides a convenient point to implement memory protection
Creates a flexible mechanism to implement shared-memory communication

3 Address Translation
Partition memory into pages
– Typically 4 KB or 8 KB
– Trade-offs?
VPN is an index into the page table
– Produces a page table entry (PTE)
– Holds the address, permissions, and availability
Physical page number (PPN) replaces the VPN
– To form the physical address
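To make the mechanics concrete, here is a minimal C sketch of the translation, assuming a flat single-level page table, 4 KB pages, and hypothetical PTE field names; the lecture does not prescribe this layout.

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT       12          /* 4 KB pages: low 12 bits are the page offset */
#define PAGE_OFFSET_MASK 0xFFFu

/* Hypothetical PTE layout: valid bit, permission bits, and the PPN. */
typedef struct {
    bool     valid;
    bool     writable;
    bool     user;
    uint64_t ppn;                    /* physical page number */
} pte_t;

/* Translate a virtual address with a single-level page table.
 * Returns true and fills *paddr on success; false means a page fault. */
bool translate(const pte_t *page_table, uint64_t vaddr, uint64_t *paddr)
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;        /* VPN indexes the page table */
    uint64_t offset = vaddr & PAGE_OFFSET_MASK;   /* offset passes through unchanged */

    pte_t pte = page_table[vpn];                  /* fetch the PTE */
    if (!pte.valid)
        return false;                             /* page not resident: fault */

    *paddr = (pte.ppn << PAGE_SHIFT) | offset;    /* PPN replaces VPN */
    return true;
}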

4 Fast Address Translation
Cache recent translations
– In a translation look-aside buffer (TLB)
– Provides single-cycle access
TLB is typically small
– 32 to 128 entries
– As a result, highly associative
Loaded with PTEs when page table translations occur
– What is replaced?
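A rough model of a small fully associative TLB, reusing the hypothetical pte_t from the sketch above; the entry count and field names are assumptions, not the lecture's.

#define TLB_ENTRIES 64

typedef struct {
    bool     valid;
    uint64_t vpn;
    pte_t    pte;                 /* cached translation */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Fully associative lookup: compare the VPN against every entry.
 * In hardware all comparisons happen in parallel in a single cycle;
 * this loop just models the behavior. */
bool tlb_lookup(uint64_t vpn, pte_t *out)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *out = tlb[i].pte;
            return true;          /* TLB hit */
        }
    }
    return false;                 /* TLB miss: walk the page tables */
}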

5 TLB Miss Handling
If the TLB access misses
– This is a new translation, or a previously replaced translation
Initiate the TLB miss handler
– Walk the page tables
– Replace an entry in the TLB with the PTE
– Possible to implement the page walker in H/W or S/W
– Trade-offs?
If the PTE is marked invalid
– Page is not resident in physical memory
– Declare a page fault exception
– OS will now do its thing…
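A sketch of a software miss handler built on the hypothetical structures above; tlb_replace_index() stands in for whatever replacement policy the TLB actually uses.

/* Hypothetical replacement policy: here just round-robin over the entries. */
static int tlb_replace_index(void)
{
    static int next;
    return next++ % TLB_ENTRIES;
}

/* Service a TLB miss in software: walk the page table, refill the TLB,
 * or report a page fault if the PTE is invalid. */
bool handle_tlb_miss(const pte_t *page_table, uint64_t vaddr, uint64_t *paddr)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;

    if (!page_table[vpn].valid) {
        /* Page not resident: the OS page-fault handler takes over here
         * (allocate or swap in a frame, update the PTE, retry). */
        return false;
    }

    int victim = tlb_replace_index();   /* choose an entry to evict */
    tlb[victim] = (tlb_entry_t){ .valid = true, .vpn = vpn, .pte = page_table[vpn] };

    *paddr = (page_table[vpn].ppn << PAGE_SHIFT) | (vaddr & PAGE_OFFSET_MASK);
    return true;                        /* retry of the access now hits */
}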

6 Maintaining a Coherent TLB
TLB must reflect changes to the address mapping
– Physical page replacement [use TLB entry invalidation]
– Physical page allocation [use TLB entry initialization]
Context switches
– Essentially replace every entry in the TLB
– How does the hardware recognize a context switch?
– Naïve approaches can lead to expensive context switches, due to the many accompanying TLB misses
– Optimize context switches with address space IDs (ASIDs)
  Processor control state records the current process ASID, updated by the OS at context switches
  ASID fields are included in TLB entries; only match TLB entries whose ASID equals the current process ASID
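A sketch of the ASID optimization, extending the hypothetical TLB entry above with an assumed 8-bit ASID field; a hit now requires the ASID to match, so a context switch only has to update current_asid instead of flushing the whole TLB.

/* TLB entry extended with an ASID tag (hypothetical field names and widths). */
typedef struct {
    bool     valid;
    uint8_t  asid;                /* address space ID of the owning process */
    uint64_t vpn;
    pte_t    pte;
} tagged_tlb_entry_t;

static tagged_tlb_entry_t tagged_tlb[TLB_ENTRIES];
static uint8_t current_asid;      /* written by the OS on each context switch */

/* A hit requires both the VPN and the ASID to match the current process. */
bool tagged_tlb_lookup(uint64_t vpn, pte_t *out)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tagged_tlb[i].valid &&
            tagged_tlb[i].asid == current_asid &&
            tagged_tlb[i].vpn  == vpn) {
            *out = tagged_tlb[i].pte;
            return true;
        }
    }
    return false;
}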

7 Implementing VM with Caches
Uses a virtually indexed, physically tagged cache
What is the advantage of a virtually indexed cache?
What is the disadvantage of a virtually tagged cache?
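A rough model of a virtually indexed, physically tagged access, assuming a 32 KB, 8-way cache with 64-byte lines so that every index bit lies inside the page offset; it reuses the hypothetical TLB helpers above.

#define LINE_SIZE 64
#define NUM_SETS  64              /* 32 KB, 8-way: one way = 4 KB = page size */
#define WAYS      8

typedef struct {
    bool     valid;
    uint64_t ptag;                /* physical tag: the PPN of the cached line */
} cache_line_t;

static cache_line_t cache[NUM_SETS][WAYS];

/* Virtually indexed, physically tagged access: the set index comes from
 * untranslated (page-offset) bits of the virtual address, so the cache can
 * be indexed while the TLB translates in parallel; the tag compare then
 * uses the physical page number delivered by the TLB. */
bool vipt_hit(uint64_t vaddr)
{
    unsigned set = (unsigned)((vaddr / LINE_SIZE) % NUM_SETS);  /* bits [11:6]: no translation needed */

    pte_t pte;
    if (!tlb_lookup(vaddr >> PAGE_SHIFT, &pte))    /* in hardware, overlapped with indexing */
        return false;                              /* TLB miss handled elsewhere */

    for (int w = 0; w < WAYS; w++)
        if (cache[set][w].valid && cache[set][w].ptag == pte.ppn)
            return true;                           /* cache hit */
    return false;                                  /* cache miss */
}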

8 Virtual Address Synonyms
Problem: processes can share physical memory at different virtual address locations
– What if these virtual addresses map to different locations in the cache? Cache aliases
Solutions:
– Don't let processes share memory (e.g., no DLLs)
– Use a physically indexed cache (S L O W)
– Force shared memory to be aligned to the set size of the cache (i.e., the translated bits used to index the cache will be equal to the physical address bits)
– Force all cache set sizes <= page size (i.e., no translated bits are allowed to index the cache); the most popular solution
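A small sanity-check sketch of the most popular constraint, reading "set size" as the size of one cache way; the cache parameters are made up for illustration.

#include <assert.h>
#include <stdint.h>

/* Check that no translated address bits are needed to index the cache:
 * the size of one way (cache size / associativity) must not exceed the
 * page size, so every index bit lies inside the page offset. */
int main(void)
{
    const uint64_t cache_size = 32 * 1024;   /* hypothetical: 32 KB */
    const uint64_t ways       = 8;
    const uint64_t page_size  = 4 * 1024;    /* 4 KB pages */

    uint64_t way_size = cache_size / ways;   /* bytes covered by index + offset bits */
    assert(way_size <= page_size);           /* 4 KB <= 4 KB: no aliasing possible */
    return 0;
}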

9 Case Study – Pentium 4

10 Pentium 4 Page Directory/Table Entries
Global pages are not flushed from the TLB
– Reduces the impact of context switches
Accessed bit used by the OS to implement the clock algorithm
Cache-disable bit used to specify memory-mapped I/O
Present bit used to implement swapping
Dirty bit tracks writes to the page
U/S and R/W bits implement page permissions
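A sketch of decoding these flags from a 32-bit PTE, using the standard IA-32 bit positions for 4 KB pages rather than anything specific to this lecture.

#include <stdint.h>
#include <stdio.h>

/* Standard IA-32 page table entry flag bits (4 KB pages). */
#define PTE_PRESENT   (1u << 0)   /* page is resident: used to implement swapping */
#define PTE_RW        (1u << 1)   /* 0 = read-only, 1 = read/write */
#define PTE_US        (1u << 2)   /* 0 = supervisor only, 1 = user accessible */
#define PTE_PCD       (1u << 4)   /* cache disable: e.g., memory-mapped I/O */
#define PTE_ACCESSED  (1u << 5)   /* set on any access: input to the clock algorithm */
#define PTE_DIRTY     (1u << 6)   /* set on a write to the page */
#define PTE_GLOBAL    (1u << 8)   /* global: survives a TLB flush / CR3 reload */

void dump_pte(uint32_t pte)
{
    printf("ppn=%05x P=%d RW=%d US=%d PCD=%d A=%d D=%d G=%d\n",
           (unsigned)(pte >> 12),
           !!(pte & PTE_PRESENT), !!(pte & PTE_RW), !!(pte & PTE_US),
           !!(pte & PTE_PCD),     !!(pte & PTE_ACCESSED),
           !!(pte & PTE_DIRTY),   !!(pte & PTE_GLOBAL));
}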

11 Application-Specific Super-page Entries
Used to map large objects treated as a single unit
– Operating system code
– Video frame buffer
A super-page works just like a normal page, but maps a larger space
– Reduces "pressure" on the TLB
– Implications for TLB design?
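A back-of-the-envelope sketch of the TLB-pressure argument, assuming the 4 MB x86 large-page size and a hypothetical 16 MB frame buffer.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t frame_buffer = 16ull * 1024 * 1024;   /* hypothetical 16 MB object */
    const uint64_t small_page   = 4ull  * 1024;          /* 4 KB base page */
    const uint64_t super_page   = 4ull  * 1024 * 1024;   /* 4 MB super-page */

    /* Number of TLB entries needed to map the same object each way. */
    printf("4 KB pages: %llu TLB entries\n",
           (unsigned long long)(frame_buffer / small_page));   /* 4096 entries */
    printf("4 MB pages: %llu TLB entries\n",
           (unsigned long long)(frame_buffer / super_page));   /* 4 entries */
    return 0;
}

With these numbers the super-page mapping needs 4 entries instead of 4096, which is the "pressure" reduction the slide refers to.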

12 Case Study – Alpha 21264

