CSE 471 Autumn 1998 Virtual memory

1 CSE 471 Autumn 1998 Virtual memory
Paging (and segmented) systems predate caches; they raise the same type of questions (mapping, replacement, write policy).
Paging: fixed block sizes
- Easy to map, translate, replace, and transfer to/from disk
- Does not correspond to semantic objects, hence internal fragmentation
Segmentation: variable block sizes
- "Difficult" to translate (need to check the segment length), to place and replace (external fragmentation), and to transfer to/from disk
- Corresponds to semantic objects
Segmentation (somewhat of a misnomer) with paging
- Good for large objects
- Breaks up page tables
5/2/2019 CSE 471 Autumn 1998 Virtual memory

2 Two extremes in the memory hierarchy

3 Other extreme differences
Mapping: restricted (L1 cache) vs. general (paging)
Hardware assist for virtual address translation (the TLB)
Miss handler:
- Hardware only for caches
- Software only for paging systems (context switch)
- Hardware and/or software for TLBs
Replacement algorithm:
- Not that important for caches
- Very important for paging systems
Write policy:
- Always write back for paging systems

4 Virtual Memory Mapping: Page tables
Page tables contain page table entries (PTEs):
- Virtual page number (implicit or explicit), physical page number, valid, protection, dirty, and use bits (for LRU-like replacement), etc.
A hardware register points to the page table of the running process.
Various page table organizations:
- Single level (contiguous); multi-level (segmented), etc.
- In some systems, inverted page tables (with a hash table)
- In all modern systems, a TLB caches some PTEs

5 TLBs
A TLB is a cache for PTEs.
- A TLB miss is handled by hardware or by software (e.g., PAL code in the Alpha), depending on the implementation
- A TLB miss takes few enough cycles that no context switch is needed
- Accessed in parallel with the access to the L1 cache; since it is smaller, it is faster; it is on the critical path
- For a given TLB size (number of entries), a larger page size -> a larger mapping range

6 Choosing a page size
Large page size:
- Smaller page table
- Better coverage in the TLB
- Amortizes the time to bring in a page from disk
- Allows fast access to a larger L1 cache (virtually indexed, physically tagged)
- BUT more internal fragmentation (unused portions of the page) and a longer start-up time
Recent microprocessors support multiple (e.g., two) page sizes; better use of the TLB is the main reason.

7 Protection
Protect user processes from each other:
- Disallow an unauthorized process from accessing (reading, writing, executing) part of the address space of another process
- Disallow the use of some instructions by user processes, e.g., disallow process A from changing the access rights of process B
This leads to:
- Kernel mode (operating system) vs. user mode; only kernel mode can use certain privileged instructions (e.g., disabling interrupts, I/O functions)
- System calls, whereby the CPU can switch between user and kernel mode

8 Where to put access rights
Associate protection bits with each PTE:
- e.g., disallow the user to read/write/execute in kernel space
- e.g., allow one process to share-read some data with another, but not modify it
The user/kernel protection model can be extended with rings of protection; further extension leads to the concept of capabilities.

9 I/O and caches (a first look at cache coherence)
If I/O data passes through the cache on its way from/to memory:
- No coherency problem, but contention for cache access
If I/O interacts directly with main memory (DMA), coherence must be enforced. Case 1: software solution.
- Output: (1) write-through cache: no problem; (2) write-back cache: purge the cache via O.S. intervention
- Input (WT and WB): use a non-cacheable buffer space; after the input is finished, make the buffer cacheable

10 I/O consistency -- hardware approach
A subset of the shared-bus multiprocessor cache coherence protocol (see Chapter 9 and forthcoming slides):
- The cache has a duplicate set of tags
- The cache controller snoops on the bus
- On input, if there is a match in the tags, store the data in both the cache and main memory
- On output, if there is a match in the tags: if WT, invalidate the entry; if WB, take the cache entry instead of the memory contents and invalidate it

