
1 Week 5: Virtual Memory CS 162

2 Today's Section
– Administrivia
– Quiz
– Review of Lecture
– Worksheet and Discussion

3 Administrivia
Project 2 – Initial Design Documents due next Thursday, March 6th!
– Get started early; it is not as straightforward as Project 1
– Signups should be up by this weekend!
– Fill out midterm course surveys!
– Check out the Project 2 Overview for Nachos on Piazza!
Midterm 1 is 3/12, 4:00-5:30pm in 245 Li Ka Shing (A-L) and 105 Stanley (M-Z)
– Covers lectures 1-12, readings, handouts, and Projects 1 & 2

4 LECTURE REVIEW: Address Translation and Caching

5 Address Translation
Translation: mapping a virtual address to a physical one
– Lets a program behave as if it has more memory than is physically present
– Provides a way to overlap and share memory between processes when needed

6 Example of General Address Translation
[Figure: two programs, each with its own virtual address space (code, data, heap, stack), mapped through per-process translation maps (Translation Map 1, Translation Map 2) into interleaved regions of one physical address space, alongside the OS code, OS data, and OS heap & stacks]

7 Issues with Simple Segmentation Method
Fragmentation problem
– Not every process is the same size
– Over time, memory space becomes fragmented
Hard to do inter-process sharing
– Want to share code segments when possible
– Want to share memory between processes
– Helped by providing multiple segments per process
[Figure: snapshots of physical memory over time as processes are loaded and removed, leaving external fragmentation between the surviving segments]

8 Schematic View of Swapping
Q: What if not all processes fit in memory?
A: Swapping, an extreme form of context switch
– In order to make room for the next process, some or all of the previous process is moved to disk
– This greatly increases the cost of context switching
Desirable alternative?
– Some way to keep only the active portions of a process in memory at any one time
– Needs finer-grained control over physical memory

9 Page Table Types
Page table
– Maps a virtual page to a physical page
Multi-level page tables
– The virtual address is split into one index per level of the table, so sparse address spaces need far fewer entries, at the cost of more memory accesses per translation
Inverted page table
– A hash table used to map virtual addresses to physical ones
Remember: a page table has to fit within one page!

10 Address Translation Comparison (Lec 9.10, 2/24/2014, Anthony D. Joseph, CS162 ©UCB Spring 2014)

Scheme                     | Advantages                                                       | Disadvantages
Segmentation               | Fast context switching: segment mapping maintained by CPU       | External fragmentation
Paging (single-level page) | No external fragmentation; fast, easy allocation                 | Large table size ~ virtual memory
Paged segmentation         | Table size ~ # of pages in virtual memory; fast, easy allocation | Multiple memory references per page access
Two-level pages            | (same as paged segmentation)                                     | (same as paged segmentation)
Inverted table             | Table size ~ # of pages in physical memory                       | Hash function more complex

11 Caching Concept (Lec 10.11, 2/26/14, Anthony D. Joseph, CS162 ©UCB Spring 2014)
Cache: a repository for copies that can be accessed more quickly than the original
– Make the frequent case fast and the infrequent case less dominant
Caching occurs at many levels
– Can cache: memory locations, address translations, pages, file blocks, file names, network routes, etc.
Only good if:
– The frequent case is frequent enough, and
– The infrequent case is not too expensive
Important measure: Average Access Time = (Hit Rate x Hit Time) + (Miss Rate x Miss Time)

12 Why Does Caching Help? Locality!
Temporal Locality (Locality in Time):
– Keep recently accessed data items closer to the processor
Spatial Locality (Locality in Space):
– Move contiguous blocks to the upper levels
[Figures: probability of reference concentrated over a small part of the address space 0..2^n-1; blocks X and Y moving between upper-level memory (near the processor) and lower-level memory]

13 Types of Caches
Direct mapped cache
– Every address maps to exactly one place in the cache; if something else maps to the same place, the old entry gets replaced
N-way set associative cache
– N-way means that n entries can be stored in each set; compared to direct mapped, log2(n) bits move from the index into the tag
Fully associative cache
– Same as set associative except with no index bits; essentially, the number of ways equals the number of cache lines

14 What Happens on a Write?
Write through: the information is written both to the block in the cache and to the block in lower-level memory
Write back: the information is written only to the block in the cache
– The modified cache block is written to main memory only when it is replaced
– Question: is the block clean or dirty?
Pros and cons of each?
– Write through:
» PRO: read misses cannot result in writes
» CON: processor held up on writes unless writes are buffered
– Write back:
» PRO: repeated writes are not sent to DRAM; processor not held up on writes
» CON: more complex; a read miss may require writeback of dirty data

15 Translation Lookaside Buffer (TLB)
Essentially, a cache for the page table
– A fixed number of slots containing page table entries, so a hit avoids going to memory for the entry
Nowadays, the TLB lookup is done in parallel with the cache lookup
– Works as long as the cache index bits fit within the page offset
On a context switch, all of the TLB's entries must be invalidated

