

CS 162 Discussion Section Week 5 10/7 – 10/11

Today’s Section
● Project discussion (5 min)
● Quiz (5 min)
● Lecture Review (20 min)
● Worksheet and Discussion (20 min)

Project 1
● Autograder is still up
● submit proj1-test — due 10/8 (Today!) at 11:59 PM
● submit proj1-code — due 10/9 (Tomorrow!) at 11:59 PM
● Final design doc & Project 1 Group Evals
– Template posted on Piazza last week
– Will post Group Evals link on Piazza Wednesday afternoon
● Questions?

Quiz…

Short Answer
[Use this one]
1. Name the 4 types of cache misses discussed in class, given their causes: [0.5 each]
   a) Program initialization, etc. (nothing you can do about them) [Compulsory Misses]
   b) Two addresses map to the same cache line [Conflict Misses]
   c) The cache size is too small [Capacity Misses]
   d) External processor or I/O interference [Coherence Misses]
[Choose 1]
2. Which is better when a small number of items are modified frequently: write-back caching or write-through caching? [Write-back]
3. Name one of the two types of locality discussed in lecture that can benefit from some type of caching. [Temporal or Spatial]
True/False
[Choose 2]
4. Memory is typically allocated in finer-grained units with segmentation than with paging. [False]
5. TLB lookups can be performed in parallel with data cache lookups. [True]
6. The size of an inverted page table is proportional to the number of pages in virtual memory. [False]
7. Conflict misses are possible in a 3-way set-associative cache. [True]

Lecture Review

10/2/2013 Anthony D. Joseph and John Canny CS162 ©UCB Fall 2013

Virtualizing Resources
● Physical reality: processes/threads share the same hardware
– Need to multiplex the CPU (CPU scheduling)
– Need to multiplex use of memory (today)
● Why worry about memory multiplexing?
– The complete working state of a process and/or the kernel is defined by its data in memory (and registers)
– Consequently, we cannot just let different processes use the same memory
– We probably don’t want different processes to even have access to each other’s memory (protection)

Important Aspects of Memory Multiplexing
● Controlled overlap:
– Processes should not collide in physical memory
– Conversely, we would like the ability to share memory when desired (for communication)
● Protection:
– Prevent access to the private memory of other processes
» Different pages of memory can be given special behavior (read-only, invisible to user programs, etc.)
» Kernel data protected from user programs
● Translation:
– Ability to translate accesses from one address space (virtual) to a different one (physical)
– When translation exists, the process uses virtual addresses; physical memory uses physical addresses

Two Views of Memory
● Address space: all the addresses and state a process can touch; each process and the kernel has a different address space
● Consequently, two views of memory:
– View from the CPU (what the program sees: virtual memory)
– View from memory (physical memory)
– A translation box (MMU) converts between the two views
● Translation helps to implement protection: if task A cannot even gain access to task B’s data, there is no way for A to adversely affect B
● With translation, every program can be linked/loaded into the same region of the user address space
[Diagram: the CPU issues virtual addresses, which the MMU translates into physical addresses; only untranslated reads/writes bypass the MMU]

Address Segmentation
[Diagram: the virtual memory view (code, data, heap, stack) is mapped through a segment table — each seg# has a base and a limit — onto scattered regions of the physical memory view; a virtual address is split into a seg# and an offset]
● What happens if the stack grows beyond its segment?
● No room to grow! Either a buffer-overflow error, or resize the segment and move segments around to make room
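The base-and-limit translation a segment table performs can be sketched as follows. The segment numbers, bases, and limits here are illustrative assumptions, not the exact values from the slide’s diagram:

```python
# Segmentation sketch: virtual address = seg# | offset; physical = base + offset,
# faulting when the offset exceeds the segment's limit.
# Segment table values below are hypothetical.
SEG_TABLE = {
    0b00: (0x10, 0x40),  # code:  base 0x10, limit 0x40
    0b01: (0x50, 0x40),  # data
    0b10: (0x80, 0x40),  # heap
    0b11: (0xE0, 0x10),  # stack
}

def seg_translate(vaddr, offset_bits=6):
    seg = vaddr >> offset_bits                 # top bits select the segment
    offset = vaddr & ((1 << offset_bits) - 1)  # low bits are the offset
    base, limit = SEG_TABLE[seg]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset beyond limit")
    return base + offset
```

Growing the stack past its limit makes `seg_translate` fault, which is exactly the “no room to grow” case on the slide.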

Paging
[Diagram: a virtual address is split into page# and offset; a page table — most of whose entries are null — maps each virtual page to a physical page, scattering the code, data, heap, and stack pages across physical memory]
● What happens if the stack grows? Allocate new pages wherever there is room!
● Challenge: the table size equals the number of pages in virtual memory!
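A single-level page-table lookup can be sketched like this. The page-table contents and the 4-bit field widths are hypothetical, chosen only to keep the example small:

```python
# Single-level paging sketch: one entry per virtual page (most entries null),
# mapping virtual page number (vpn) -> physical page number (ppn).
PAGE_TABLE = {0b0000: 0b0001, 0b0001: 0b0101, 0b1111: 0b0010}

def page_translate(vaddr, offset_bits=4):
    vpn = vaddr >> offset_bits
    offset = vaddr & ((1 << offset_bits) - 1)
    ppn = PAGE_TABLE.get(vpn)
    if ppn is None:                # null entry: page not mapped
        raise MemoryError("page fault")
    return (ppn << offset_bits) | offset
```

Note the challenge from the slide: `PAGE_TABLE` must (conceptually) have a slot for every virtual page, mapped or not.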

Two-Level Paging
[Diagram: a virtual address is split into page1#, page2#, and offset; a small level-1 page table points to level-2 page tables — most entries null — which in turn map to physical pages]
● In the best case, the total size of the page tables ≈ the number of pages used by the program’s virtual memory
● Cost: requires one additional memory access per translation!
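The two-level walk can be sketched as below. The table contents and the 2/2/4-bit split of the virtual address are illustrative assumptions; the point is that unmapped regions cost no level-2 table at all:

```python
# Two-level paging sketch: level-1 index -> level-2 table; level-2 index ->
# physical page number. Missing entries at either level mean "not mapped".
LEVEL1 = {
    0b00: {0b01: 0b0110},   # level-2 table for the low region
    0b11: {0b10: 0b1001},   # level-2 table for the high region (e.g., stack)
}

def two_level_translate(vaddr, p2_bits=2, offset_bits=4):
    offset = vaddr & ((1 << offset_bits) - 1)
    p2 = (vaddr >> offset_bits) & ((1 << p2_bits) - 1)
    p1 = vaddr >> (offset_bits + p2_bits)
    l2_table = LEVEL1.get(p1)
    if l2_table is None or p2 not in l2_table:
        raise MemoryError("page fault")   # either level can be null
    return (l2_table[p2] << offset_bits) | offset
```

The two dictionary lookups correspond to the extra memory access the slide warns about.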

Inverted Table
[Diagram: hash(virtual page #) = physical page #, with one table entry per physical page]
● Total size of the page table ≈ the number of pages used by the program in physical memory
● Trade-off: the hash function is more complex
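One possible shape for an inverted page table is sketched below. The toy hash (modulo) and linear-probing collision strategy are my assumptions for illustration; real designs typically use a stronger hash and chaining. The key property matches the slide: the table has one slot per physical frame, not per virtual page.

```python
# Inverted page table sketch: one slot per physical frame, found by hashing
# the virtual page number (vpn). Collisions resolved by linear probing.
NUM_FRAMES = 8
frames = [None] * NUM_FRAMES       # frame index -> resident vpn (None = free)

def ipt_insert(vpn):
    start = vpn % NUM_FRAMES       # toy hash function (hypothetical)
    for i in range(NUM_FRAMES):
        idx = (start + i) % NUM_FRAMES
        if frames[idx] is None or frames[idx] == vpn:
            frames[idx] = vpn
            return idx
    raise MemoryError("no free frame")

def ipt_lookup(vpn):
    start = vpn % NUM_FRAMES
    for i in range(NUM_FRAMES):
        idx = (start + i) % NUM_FRAMES
        if frames[idx] == vpn:
            return idx             # physical frame holding this page
    raise MemoryError("page fault")
```

The probing loop is the “hash more complex” cost: a lookup may need several comparisons instead of one direct index.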

Address Translation Comparison
● Segmentation
– Advantages: fast context switching (segment mapping maintained by CPU)
– Disadvantages: external fragmentation
● Paging (single-level page)
– Advantages: no external fragmentation; fast, easy allocation
– Disadvantages: large table size ~ virtual memory
● Paged segmentation / Two-level pages
– Advantages: table size ~ # of pages in virtual memory; fast, easy allocation
– Disadvantages: multiple memory references per page access
● Inverted table
– Advantages: table size ~ # of pages in physical memory
– Disadvantages: hash function more complex

Caching Concept
● Cache: a repository for copies that can be accessed more quickly than the original
– Make the frequent case fast and the infrequent case less dominant
● Caching happens at many levels: memory locations, address translations, pages, file blocks, file names, network routes, etc.
● Only good if:
– the frequent case is frequent enough, and
– the infrequent case is not too expensive
● Important measure: Average Access Time = (Hit Rate x Hit Time) + (Miss Rate x Miss Time)
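The average-access-time formula above is easy to evaluate; the numbers in the example are illustrative, not from the slides:

```python
def avg_access_time(hit_rate, hit_time, miss_time):
    """Average Access Time = (Hit Rate x Hit Time) + (Miss Rate x Miss Time)."""
    return hit_rate * hit_time + (1.0 - hit_rate) * miss_time

# e.g. a 90% hit rate with a 1 ns hit and a 100 ns miss averages
# 0.9 * 1 + 0.1 * 100 = 10.9 ns
```

Even a 10% miss rate dominates the average here, which is why the infrequent case must not be too expensive.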

Why Does Caching Help? Locality!
● Temporal Locality (locality in time): keep recently accessed data items closer to the processor
● Spatial Locality (locality in space): move contiguous blocks to the upper levels
[Diagram: probability of reference plotted across the address space 0..2^n - 1; blocks X and Y move between lower-level and upper-level memory, to and from the processor]

Sources of Cache Misses
● Compulsory (cold start): first reference to a block
– “Cold” fact of life: not a whole lot you can do about it
– Note: when running billions of instructions, compulsory misses are insignificant
● Capacity: the cache cannot contain all blocks accessed by the program
– Solution: increase cache size
● Conflict (collision): multiple memory locations mapped to the same cache location
– Solutions: increase cache size, or increase associativity
● Two others:
– Coherence (invalidation): another process (e.g., I/O) updates memory
– Policy: due to a non-optimal replacement policy

Where Does a Block Get Placed in a Cache?
Example: block 12 placed in an 8-block cache
● Direct mapped: block 12 (01100) can go only into block 4 (12 mod 8)
● Set associative (2-way, sets 0–3): block 12 can go anywhere in set 0 (12 mod 4)
● Fully associative: block 12 can go anywhere
[Diagram: the block address divides into tag and index in the direct-mapped and set-associative cases, and is all tag in the fully associative case]
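The placement rule on this slide is just a modulus over the number of sets, which shrinks as associativity grows:

```python
def cache_set(block_addr, num_blocks, assoc):
    """Which set a memory block maps to, for a cache of num_blocks blocks
    organized into num_blocks // assoc sets."""
    num_sets = num_blocks // assoc
    return block_addr % num_sets

# Block 12 in an 8-block cache:
assert cache_set(12, 8, 1) == 4   # direct mapped: block 4 (12 mod 8)
assert cache_set(12, 8, 2) == 0   # 2-way (4 sets): set 0 (12 mod 4)
assert cache_set(12, 8, 8) == 0   # fully associative: the single set
```

Direct mapped is the special case assoc = 1 (every set holds one block), and fully associative is assoc = num_blocks (one set holding everything).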

Which Block Should Be Replaced on a Miss?
● Easy for direct mapped: only one possibility
● Set associative or fully associative:
– Random
– LRU (Least Recently Used)
Miss rates by size, associativity, and policy:
Size    | 2-way LRU | 2-way Random | 4-way LRU | 4-way Random | 8-way LRU | 8-way Random
16 KB   | 5.2%      | 5.7%         | 4.7%      | 5.3%         | 4.4%      | 5.0%
64 KB   | 1.9%      | 2.0%         | 1.5%      | 1.7%         | 1.4%      | 1.5%
256 KB  | 1.15%     | 1.17%        | 1.13%     | 1.13%        | 1.12%     | 1.12%
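LRU for a single set can be sketched with an ordered map: hits move a block to the most-recently-used end, and misses evict from the least-recently-used end. This is a minimal sketch, not how hardware implements it:

```python
from collections import OrderedDict

def lru_misses(accesses, ways):
    """Count misses for one cache set managed with LRU replacement."""
    cache = OrderedDict()
    misses = 0
    for block in accesses:
        if block in cache:
            cache.move_to_end(block)        # hit: mark most recently used
        else:
            misses += 1
            if len(cache) == ways:
                cache.popitem(last=False)   # evict least recently used
            cache[block] = True
    return misses
```

For example, `lru_misses([1, 2, 1, 3, 1, 4], 2)` is 4: block 1 keeps hitting because every access to it refreshes its recency.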

What Happens on a Write?
● Write through: the information is written both to the block in the cache and to the block in lower-level memory
● Write back: the information is written only to the block in the cache
– The modified cache block is written to main memory only when it is replaced
– Question: is the block clean or dirty?
● Pros and cons of each?
– Write through:
» PRO: read misses cannot result in writes
» CON: processor held up on writes unless writes are buffered
– Write back:
» PRO: repeated writes are not sent to DRAM; processor not held up on writes
» CON: more complex; a read miss may require writeback of dirty data
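The dirty-bit mechanics of write-back can be sketched in a few lines (class and method names are hypothetical):

```python
# Write-back sketch: writes only update the cache and set a dirty bit;
# lower-level memory is updated once, when the dirty block is evicted.
class WriteBackLine:
    def __init__(self, tag, data):
        self.tag, self.data, self.dirty = tag, data, False

    def write(self, data):
        self.data = data
        self.dirty = True              # memory is now stale

    def evict(self, memory):
        if self.dirty:                 # write back only if modified
            memory[self.tag] = self.data
            self.dirty = False
```

Repeated writes to the same line touch `memory` zero times until eviction — the “repeated writes not sent to DRAM” advantage. Write-through would instead update `memory` inside `write` on every call.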

Caching Applied to Address Translation
● The question is one of page locality: does it exist?
– Instruction accesses spend a lot of time on the same page (since accesses are sequential)
– Stack accesses have definite locality of reference
– Data accesses have less page locality, but still some…
● Can we have a TLB hierarchy? Sure: multiple levels at different sizes/speeds
[Diagram: the CPU sends a virtual address to the TLB; if cached, the physical address is used directly; otherwise the MMU translates, the result is saved in the TLB, and the read/write proceeds to physical memory]

Overlapping TLB & Cache Access (1/2)
● Main idea: the offset in the virtual address exactly covers the “cache index” and “byte select” bits
● Thus the cached byte(s) can be selected in parallel with address translation
[Diagram: virtual address = virtual page # + offset; physical address = tag/page # + index + byte; the offset bits are identical in both]
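The overlap only works when the cache index plus byte-select bits fit inside the page offset, so the index is the same before and after translation. That condition can be checked directly (function name and the sample sizes are illustrative; sizes are assumed to be powers of two):

```python
# Overlap condition: index bits + byte-select bits <= page-offset bits.
def can_overlap(page_bytes, cache_bytes, block_bytes, assoc=1):
    num_sets = cache_bytes // (block_bytes * assoc)
    index_bits = num_sets.bit_length() - 1     # log2 for powers of two
    select_bits = block_bytes.bit_length() - 1
    offset_bits = page_bytes.bit_length() - 1
    return index_bits + select_bits <= offset_bits
```

With 4 KB pages, a 4 KB direct-mapped cache with 16-byte blocks overlaps (8 index + 4 select = 12 offset bits), but doubling the cache to 8 KB breaks the overlap unless associativity also doubles.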

Putting Everything Together: Address Translation
[Diagram: the virtual address is split into virtual P1 index, virtual P2 index, and offset; PageTablePtr locates the 1st-level page table, whose entry locates a 2nd-level page table, whose entry supplies the physical page # that is combined with the offset to form the physical address]

Putting Everything Together: TLB
[Diagram: the same two-level translation, with a TLB consulted first — on a hit the physical page # comes straight from the TLB, skipping both page-table lookups]

Putting Everything Together: Cache
[Diagram: the resulting physical address is split into tag, index, and byte; the index selects a cache entry, and the tag comparison determines a hit on the cached block]

Worksheet …