Caches III CSE 351 Spring 2019 Instructor: Ruth Anderson


Caches III
CSE 351 Spring 2019
Instructor: Ruth Anderson
Teaching Assistants: Gavin Cai, Jack Eggleston, John Feltrup, Britt Henderson, Richard Jiang, Jack Skalitzky, Sophie Tian, Connie Wang, Sam Wolfson, Casey Xing, Chin Yeoh
https://what-if.xkcd.com/111/

Administrivia
- Lab 3, due Wednesday (5/15)
- Homework 4 (structs, caches), due Wednesday (5/22)

Making memory accesses fast!
- Cache basics
- Principle of locality
- Memory hierarchies
- Cache organization
  - Direct-mapped (sets; index + tag): divide addresses into "index" and "tag"
  - Associativity (ways)
  - Replacement policy
  - Handling writes
- Program optimizations that consider caches

Direct-Mapped Cache
[diagram: 16 memory blocks (addresses 0b0000–0b1111) mapping into a 4-line cache with Index, Tag, and Block Data columns]
Here 𝐾 = 4 B and 𝐶/𝐾 = 4.
- Hash function: (block address) mod (# of blocks in cache)
- Each memory address maps to exactly one index in the cache
- Fast (and simple) to find an address
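To make the hash function concrete, here is a minimal sketch in C of the index computation (the constants and names are illustrative, not from the slides):

    // Direct-mapped index selection: (block address) mod (# of blocks).
    // K = block size in bytes, NUM_BLOCKS = C/K; values are illustrative.
    #include <stdio.h>

    #define K 4
    #define NUM_BLOCKS 4

    int main(void) {
        unsigned addr = 0x2C;                     // example byte address
        unsigned block_addr = addr / K;           // strip the offset bits
        unsigned index = block_addr % NUM_BLOCKS; // hash into the cache
        printf("address 0x%X -> block %u -> cache index %u\n",
               addr, block_addr, index);
        return 0;
    }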

Direct-Mapped Cache Problem
[diagram: the same 16-block memory and 4-line cache; two blocks with the same index collide while the rest of the cache sits unused]
Here 𝐾 = 4 B and 𝐶/𝐾 = 4.
What happens if we access the following addresses? 8, 24, 8, 24, 8, …
- Conflict in cache (misses!)
- Rest of cache goes unused
Solution?

Associativity
What if we could store data in any place in the cache?
- More complicated hardware = more power consumed, slower
So we combine the two ideas:
- Each address maps to exactly one set
- Each set can store a block in more than one way
[diagram: "Where is address 2?" for each layout below]
- 1-way: 8 sets, 1 block each (direct-mapped)
- 2-way: 4 sets, 2 blocks each
- 4-way: 2 sets, 4 blocks each
- 8-way: 1 set, 8 blocks (fully associative)

Cache Organization (3)
Associativity (𝐸): # of ways for each set. Such a cache is called an "𝐸-way set associative cache". (Note: the textbook uses "b" for offset bits.)
- We now index into cache sets, of which there are 𝑆 = 𝐶/𝐾/𝐸
- Use the lowest log₂(𝐶/𝐾/𝐸) = 𝒔 bits of the block address
  - Direct-mapped: 𝐸 = 1, so 𝒔 = log₂(𝐶/𝐾) as we saw previously
  - Fully associative: 𝐸 = 𝐶/𝐾, so 𝒔 = 0 bits
Address fields: Tag (𝒕, used for tag comparison) | Index (𝒔, selects the set) | Offset (𝒌, selects the byte from the block)
Associativity ranges from direct-mapped (𝐸 = 1, only one way) up to fully associative (only one set).
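To make the Tag/Index/Offset split concrete, here is a minimal C sketch that extracts the three fields with shifts and masks (the parameter values are made-up examples, not tied to any slide):

    // Split an address into tag | index | offset fields.
    // Illustrative parameters: 32 B blocks (k = 5), 4 sets (s = 2).
    #include <stdio.h>

    #define K_BITS 5   // k = log2(K)
    #define S_BITS 2   // s = log2(S)

    int main(void) {
        unsigned addr = 0x2AF;
        unsigned offset = addr & ((1u << K_BITS) - 1);             // lowest k bits
        unsigned index  = (addr >> K_BITS) & ((1u << S_BITS) - 1); // next s bits
        unsigned tag    = addr >> (K_BITS + S_BITS);               // remaining t bits
        printf("tag=0x%X index=%u offset=%u\n", tag, index, offset);
        return 0;
    }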

Example Placement
Where would data from address 0x1833 be placed?
- Block size: 16 B; capacity: 8 blocks; address width: 16 bits
- Binary: 0b 0001 1000 0011 0011
𝒎-bit address split: Tag (𝒕) | Index (𝒔) | Offset (𝒌), with 𝒔 = log₂(𝐶/𝐾/𝐸), 𝒌 = log₂(𝐾), 𝒕 = 𝒎 − 𝒔 − 𝒌
𝒔 = ? for each configuration:
- Direct-mapped (8 sets of 1)
- 2-way set associative (4 sets of 2)
- 4-way set associative (2 sets of 4)
[diagram: Set/Tag/Data table for each configuration]
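One way to work this out (a sketch of the arithmetic, not the official solution): 𝒌 = log₂(16) = 4 in every configuration, so the offset is the low nibble 0b0011. Direct-mapped: 𝒔 = log₂(8) = 3, index = 0b011. 2-way: 𝒔 = log₂(4) = 2, index = 0b11. 4-way: 𝒔 = log₂(2) = 1, index = 0b1. The remaining high-order bits form the tag in each case.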

Block Replacement
- Any empty block in the correct set may be used to store the block
- If there are no empty blocks, which one should we replace?
  - No choice for direct-mapped caches
  - Caches typically use something close to least recently used (LRU); hardware usually implements "not most recently used"
[diagram: Set/Tag/Data tables for direct-mapped, 2-way, and 4-way caches]

Peer Instruction Question
We have a cache of size 2 KiB with a block size of 128 B. If our cache has 2 sets, what is its associativity?
Vote at http://pollev.com/rea
- 2
- 4
- 8
- 16
- We're lost…
Follow-up: if addresses are 16 bits wide, how wide is the Tag field?
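A minimal sketch of how to check an answer like this in C; it just encodes the formulas from the Notation Review slide (all variable names are illustrative):

    // Derive associativity and field widths from C, K, S, and m.
    #include <stdio.h>

    static int ilog2(unsigned x) {      // integer log2 for powers of two
        int n = 0;
        while (x > 1) { x >>= 1; n++; }
        return n;
    }

    int main(void) {
        unsigned C = 2048, K = 128, S = 2, m = 16;  // this question's numbers
        unsigned E = C / (K * S);      // associativity: blocks per set
        int k = ilog2(K);              // offset bits
        int s = ilog2(S);              // index bits
        int t = (int)m - s - k;        // tag bits
        printf("E = %u-way, t = %d, s = %d, k = %d\n", E, t, s, k);
        return 0;
    }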

General Cache Organization (𝑆, 𝐸, 𝐾)
"Layers" of a cache:
- Block (data): 𝐾 = bytes per block
- Line: a block plus management bits (valid bit, tag)
- Set: 𝐸 lines, based on associativity
- Cache: 𝑆 = 2^𝒔 sets, based on cache size & associativity
The valid bit lets us know if this line has been initialized ("is valid").
Cache size: 𝐶 = 𝐾×𝐸×𝑆 data bytes (doesn't include valid bits or tags).
[diagram: 𝑆 sets × 𝐸 lines; each line holds V, Tag, and data bytes 0 … 𝐾−1]

Notation Review
We just introduced a lot of new variable names! Please be mindful of block size notation when you look at past exam questions or watch videos.

Variable              This Quarter     Formulas
Block size            𝐾 (𝐵 in book)    𝐾 = 2^𝒌 ↔ 𝒌 = log₂(𝐾)
Cache size            𝐶                𝐶 = 𝐾×𝐸×𝑆
Associativity         𝐸
Number of sets        𝑆                𝑆 = 2^𝒔 ↔ 𝒔 = log₂(𝑆); 𝒔 = log₂(𝐶/𝐾/𝐸)
Address space         𝑀                𝑀 = 2^𝒎 ↔ 𝒎 = log₂(𝑀)
Address width         𝒎                𝒎 = 𝒕 + 𝒔 + 𝒌
Tag field width       𝒕
Index field width     𝒔
Offset field width    𝒌 (𝒃 in book)

Example Cache Parameters Problem
4 KiB address space, 125 cycles to go to memory. Fill in the following table:

Cache Size     256 B
Block Size     32 B
Associativity  2-way
Hit Time       3 cycles
Miss Rate      20%
Tag Bits       ?
Index Bits     ?
Offset Bits    ?
AMAT           ?
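For reference, AMAT (average memory access time) = hit time + miss rate × miss penalty. A sketch of the arithmetic (not the official solution): offset bits = log₂(32) = 5; sets = 256 / (32 × 2) = 4, so index bits = 2; a 4 KiB address space means 𝒎 = 12, so tag bits = 12 − 2 − 5 = 5; AMAT = 3 + 0.2 × 125 = 28 cycles.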

Cache Read
Address of byte in memory: 𝒕 bits (tag) | 𝒔 bits (set index) | 𝒌 bits (block offset)
1. Locate the set using the set index bits
2. Check if any line in the set is valid and has a matching tag: hit
3. Locate the data in the block, starting at the offset
[diagram: 𝑆 = 2^𝒔 sets of 𝐸 lines each; a line holds a valid bit, a tag, and 𝐾 = bytes per block; data begins at the block offset]

Example: Direct-Mapped Cache (𝐸 = 1)
Direct-mapped: one line per set. Block size 𝐾 = 8 B.
Address of int: 𝒕 bits | 0…01 | 100
Step 1: use the set index bits (0…01) to find the set.
[diagram: 𝑆 = 2^𝒔 sets, each a single line with valid bit, tag, and bytes 0–7]

Example: Direct-Mapped Cache (𝐸 = 1)
Direct-mapped: one line per set. Block size 𝐾 = 8 B.
Address of int: 𝒕 bits | 0…01 | 100
Step 2: valid? + tag match? Yes = hit.
[diagram: the selected line, with the block offset pointing into its data]

Example: Direct-Mapped Cache (𝐸 = 1)
Direct-mapped: one line per set. Block size 𝐾 = 8 B.
Address of int: 𝒕 bits | 0…01 | 100
Step 3: valid? + match? Yes = hit; the block offset (0b100 = 4) selects the data, so the int (4 B) occupies bytes 4–7. This is why we want alignment!
No match? Then the old line gets evicted and replaced.

Example: Set-Associative Cache (𝐸 = 2)
2-way: two lines per set. Block size 𝐾 = 8 B.
Address of short int: 𝒕 bits | 0…01 | 100
Step 1: use the set index bits (0…01) to find the set.
[diagram: 𝑆 sets of two lines, each line with valid bit, tag, and bytes 0–7]

Example: Set-Associative Cache (𝐸 = 2)
2-way: two lines per set. Block size 𝐾 = 8 B.
Address of short int: 𝒕 bits | 0…01 | 100
Step 2: compare both lines in the set: valid? + tag match? Yes = hit.
[diagram: both tags compared in parallel; the block offset points into the matching line]

Example: Set-Associative Cache (𝐸 = 2)
2-way: two lines per set. Block size 𝐾 = 8 B.
Address of short int: 𝒕 bits | 0…01 | 100
Step 3: the block offset (0b100 = 4) selects the data, so the short int (2 B) occupies bytes 4–5.
No match? One line in the set is selected for eviction and replacement.
- Replacement policies: random, least recently used (LRU), …

Types of Cache Misses: 3 C's!
- Compulsory (cold) miss: occurs on the first access to a block.
- Conflict miss: occurs when the cache is large enough, but multiple data objects all map to the same slot.
  - e.g. referencing blocks 0, 8, 0, 8, … could miss every time (see the sketch below)
  - Direct-mapped caches have more conflict misses than 𝐸-way set-associative caches (where 𝐸 > 1)
- Capacity miss: occurs when the set of active cache blocks (the working set) is larger than the cache (it just won't fit, even if the cache were fully associative).
Note: a fully associative cache has only compulsory and capacity misses.
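As a concrete (hypothetical) illustration of the "blocks 0, 8, 0, 8, …" pattern, the following sketch alternates between two addresses exactly one cache-size apart, so in a direct-mapped cache they map to the same set and keep evicting each other (the array and sizes are made up for this example):

    // Hypothetical conflict-miss pattern: two addresses CACHE_SIZE bytes
    // apart share a set in a direct-mapped cache and evict each other.
    #include <stdio.h>

    #define CACHE_SIZE 1024          // pretend direct-mapped cache, in bytes

    int main(void) {
        static char data[2 * CACHE_SIZE];
        long sum = 0;
        for (int n = 0; n < 100; n++) {
            sum += data[0];              // maps to set 0
            sum += data[CACHE_SIZE];     // also maps to set 0: conflict!
        }
        printf("%ld\n", sum);
        return 0;
    }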

Example Code Analysis Problem
Assuming the cache starts cold (all blocks invalid) and sum is stored in a register, calculate the miss rate.
Cache parameters: 𝒎 = 12 bits, 𝐶 = 256 B, 𝐾 = 32 B, 𝐸 = 2

    #define SIZE 8
    long ar[SIZE][SIZE], sum = 0;  // &ar = 0x800 (given)

    int main(void) {
        for (int i = 0; i < SIZE; i++)
            for (int j = 0; j < SIZE; j++)
                sum += ar[i][j];
        return 0;
    }
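A sketch of the reasoning (not the official solution): the array holds 64 longs = 512 B starting at 0x800, which is 32 B-aligned, so it spans 16 blocks. Each 32 B block holds 4 longs, and the row-major traversal visits the longs in address order, so each block costs 1 cold miss followed by 3 hits: 16 misses / 64 accesses = 25% miss rate. The pattern never revisits an evicted block, so associativity doesn't change the count.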

What about writes?
Multiple copies of the data can exist: L1, L2, possibly L3, main memory.
What to do on a write-hit?
- Write-through: write immediately to the next level
- Write-back: defer the write to the next level until the line is evicted (replaced)
  - Must track which cache lines have been modified ("dirty bit")
What to do on a write-miss?
- Write-allocate ("fetch on write"): load the block into the cache, then update the line in the cache
  - Good if more writes or reads to the location follow
- No-write-allocate ("write around"): just write immediately to memory
Typical caches:
- Write-back + write-allocate, usually
- Write-through + no-write-allocate, occasionally
Why? Reuse is common: typically you read a value, then write it (e.g. an increment), or after you initialize a value (say, to 0) you will likely read and write it again soon. (A toy simulator sketch follows below.)
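A minimal sketch of the usual write-back + write-allocate policy as it might appear in a toy cache simulator (everything here is a made-up illustration, not CSE 351 starter code):

    // Toy direct-mapped, write-back + write-allocate cache simulator.
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define K 32                      // block size (bytes)
    #define NUM_SETS 8                // direct-mapped: one line per set

    typedef struct {
        bool valid, dirty;
        uint32_t tag;
        uint8_t data[K];
    } Line;

    static Line cache[NUM_SETS];
    static uint8_t memory[1 << 16];   // toy backing store

    static void write_byte(uint32_t addr, uint8_t byte) {
        uint32_t offset = addr % K;
        uint32_t set = (addr / K) % NUM_SETS;
        uint32_t tag = addr / K / NUM_SETS;
        Line *line = &cache[set];

        if (!line->valid || line->tag != tag) {       // write-miss
            if (line->valid && line->dirty)           // write-back: flush now
                memcpy(&memory[(line->tag * NUM_SETS + set) * K], line->data, K);
            memcpy(line->data, &memory[addr - offset], K);  // write-allocate
            line->valid = true;
            line->tag = tag;
        }
        line->data[offset] = byte;    // update only the cache...
        line->dirty = true;           // ...and defer the memory write
    }

    int main(void) {
        write_byte(0x100, 42);        // miss: allocate the block, mark dirty
        write_byte(0x101, 43);        // hit: no memory traffic at all
        printf("dirty=%d\n", cache[(0x100 / K) % NUM_SETS].dirty);
        return 0;
    }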