CS203 – Advanced Computer Architecture Cache

Memory Hierarchy Design
Memory hierarchy design becomes more crucial with recent multi-core processors:
- Aggregate peak bandwidth grows with the number of cores: an Intel Core i7 can generate two data references per core per clock
- With four cores and a 3.2 GHz clock: 25.6 billion 64-bit data references/second plus 12.8 billion 128-bit instruction references/second = 409.6 GB/s!
- DRAM bandwidth is only 6% of this (25 GB/s)
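The peak-bandwidth figure follows directly from the reference rates and operand widths quoted above; a quick worked check (arithmetic only, not part of the original slide):

\[
\underbrace{25.6\times10^{9}\times 8\,\mathrm{B}}_{\text{data}}
+ \underbrace{12.8\times10^{9}\times 16\,\mathrm{B}}_{\text{instructions}}
= 204.8\ \mathrm{GB/s} + 204.8\ \mathrm{GB/s} = 409.6\ \mathrm{GB/s}
\]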

Performance and Power
Requires:
- Multi-port, pipelined caches
- Two levels of cache per core
- A shared third-level cache on chip
High-end microprocessors have >10 MB of on-chip cache, which consumes a large amount of the area and power budget.

Typical Memory Hierarchy
Principle of locality: a program accesses a relatively small portion of the address space at a time.
Two different types of locality:
- Temporal locality: if an item is referenced, it will tend to be referenced again soon
- Spatial locality: if an item is referenced, items whose addresses are close tend to be referenced soon
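A small illustrative C loop (not from the slides) shows both kinds of locality at once:

/* The scalar 'total' is reused every iteration (temporal locality), while
 * a[i] walks through consecutive addresses (spatial locality), so each
 * cache block fetched for a[] serves several subsequent iterations. */
double sum_array(const double *a, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}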

Typical Memory Hierarchy
(Figure: levels of the memory hierarchy)

CACHE BASICS

Memory Hierarchy Basics
When a word is not found in the cache, a miss occurs:
- Fetch the word from the lower level in the hierarchy, requiring a higher-latency reference
- The lower level may be another cache or the main memory
- Also fetch the other words contained within the block; this takes advantage of spatial locality
- Place the block into the cache in any location within its set, determined by the address

Cache Sets and Ways
- Sets: a block is mapped to a set by its address
- Ways: within its set, a block can go anywhere
- n-way set associative
(Figure: 4-way set-associative cache, cache size = 16 blocks/lines)

Direct-mapped Cache
- Each block maps to only one cache line
- aka 1-way set associative
(Figure: direct-mapped cache with 16 sets, 1 way)

Set Associative Cache
- n-way set associative: each block can be mapped to any of the n lines of a set
- The set number is based on the block address
(Figure: 4-way set-associative cache with 4 sets)

Fully Associative Cache
- Each block can be mapped to any cache line
- aka m-way set associative, where m = size of the cache in blocks
(Figure: fully associative cache with 1 set, 16 ways)

Cache Addressing
- s sets: a block is mapped to a set by its address; n ways: within the set, the block can go anywhere
- m = size of cache in blocks, n = number of ways, b = block size in bytes
- Cache size = s * n * b; number of sets s = m / n
- Address fields (32-bit address): Offset (block size) bits = log2(b), Index (sets) bits = log2(s), Tag (remainder) bits = 32 - log2(s) - log2(b)

Cache Addressing
Ex. 64 KB cache, direct mapped, 16-byte block:
- m = 64 KB / 16 B = 4096 blocks; n = 1, so s = 4096 sets
- Offset = log2(16) = 4 bits, Index = log2(4096) = 12 bits, Tag = 32 - 12 - 4 = 16 bits
- Address split: Tag 16 | Index 12 | Offset 4

Cache Addressing
Ex. 64 KB cache, 2-way set associative, 16-byte block:
- m = 4096 blocks; n = 2, so s = 2048 sets
- Offset = 4 bits, Index = log2(2048) = 11 bits, Tag = 32 - 11 - 4 = 17 bits
- Address split: Tag 17 | Index 11 | Offset 4

Cache Addressing
Ex. 64 KB cache, fully associative, 16-byte block:
- m = 4096 blocks; n = 4096, so s = 1 set
- Offset = 4 bits, Index = log2(1) = 0 bits, Tag = 32 - 0 - 4 = 28 bits
- Address split: Tag 28 | Index 0 | Offset 4
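The three examples above can be reproduced with a few lines of C. This is only a sketch of the arithmetic (the helper names are made up for illustration), not part of the original slides:

#include <stdio.h>
#include <stdint.h>

/* log2 of a power of two */
static unsigned log2u(uint32_t x) { unsigned n = 0; while (x > 1) { x >>= 1; n++; } return n; }

/* Print tag/index/offset widths for a 32-bit address. */
static void cache_fields(uint32_t cache_bytes, uint32_t ways, uint32_t block_bytes)
{
    uint32_t blocks = cache_bytes / block_bytes;   /* m */
    uint32_t sets   = blocks / ways;               /* s = m / n */
    unsigned offset = log2u(block_bytes);          /* log2(b) */
    unsigned index  = log2u(sets);                 /* log2(s) */
    unsigned tag    = 32 - index - offset;
    printf("tag=%u index=%u offset=%u\n", tag, index, offset);
}

int main(void)
{
    cache_fields(64 * 1024, 1,    16);   /* direct mapped     -> tag=16 index=12 offset=4 */
    cache_fields(64 * 1024, 2,    16);   /* 2-way associative -> tag=17 index=11 offset=4 */
    cache_fields(64 * 1024, 4096, 16);   /* fully associative -> tag=28 index=0  offset=4 */
    return 0;
}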

Cache Replacement Policies
- Picks which block to replace within the set
- Examples: Random, First In First Out (FIFO), Least Recently Used (LRU), Pseudo-LRU
Example: LRU state (2-bit counters per line) for a 4-line set
- Initial state: Line 0: 01, Line 1: 00, Line 2: 11, Line 3: 10
- Hit on Line 3: Line 0: 10, Line 1: 01, Line 2: 11, Line 3: 00
- Miss:          Line 0: 10, Line 1: 01, Line 2: 00, Line 3: 11
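A minimal sketch of LRU bookkeeping with per-line age counters in one set (not from the slides; the structure and names are illustrative, with the largest counter marking the victim):

#define WAYS 4

/* age[i] counts how long since line i was used; the largest age is the LRU victim. */
struct lru_set { unsigned tag[WAYS]; unsigned age[WAYS]; int valid[WAYS]; };

/* On an access, return the way that hit or was filled, updating the LRU state. */
static int access_set(struct lru_set *s, unsigned tag)
{
    int way = -1;
    for (int i = 0; i < WAYS; i++)
        if (s->valid[i] && s->tag[i] == tag) { way = i; break; }   /* hit? */

    if (way < 0) {                     /* miss: pick an invalid line, else the oldest one */
        way = 0;
        for (int i = 0; i < WAYS; i++) {
            if (!s->valid[i]) { way = i; break; }
            if (s->age[i] > s->age[way]) way = i;
        }
        s->tag[way] = tag;
        s->valid[way] = 1;
    }
    for (int i = 0; i < WAYS; i++) s->age[i]++;   /* every line ages ... */
    s->age[way] = 0;                              /* ... except the one just used */
    return way;
}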

Cache Write Policies
Writing to the cache: two strategies
- Write-through: immediately update the lower levels of the hierarchy
- Write-back: only update the lower levels of the hierarchy when an updated block is replaced
Both strategies use a write buffer to make writes asynchronous.
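A small sketch contrasting the two policies on a store that hits in the cache (not from the slides; the line structure and the write_buffer_push helper are hypothetical stand-ins for the next level of the hierarchy):

#include <stdio.h>

enum policy { WRITE_THROUGH, WRITE_BACK };

struct line { unsigned addr; unsigned data[4]; int dirty; };

/* Stub standing in for the write buffer / next memory level. */
static void write_buffer_push(unsigned addr, unsigned value)
{
    printf("write-through: forwarding %u to address %u\n", value, addr);
}

/* Store to a line that hit in the cache, under either write policy. */
static void handle_store(struct line *ln, unsigned word, unsigned value, enum policy p)
{
    ln->data[word] = value;                          /* update the cached copy */
    if (p == WRITE_THROUGH)
        write_buffer_push(ln->addr + word, value);   /* propagate immediately */
    else
        ln->dirty = 1;                               /* write-back: defer until the block is evicted */
}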

Cache Performance
Miss rate: the fraction of cache accesses that result in a miss.
Causes of misses:
- Compulsory (cold): first reference to a block
- Capacity: the cache space is not sufficient to hold the data or code
- Conflict: two memory blocks map to the same cache line (or set) in direct-mapped or set-associative caches

Measuring/Classifying Misses
How to find out?
- Cold misses: simulate a fully associative cache of infinite size
- Capacity misses: simulate a fully associative cache of the same size, then deduct the cold misses
- Conflict misses: simulate the target cache configuration, then deduct the cold and capacity misses
The classification is useful to understand how to eliminate misses:
- High conflict misses → need higher associativity
- High capacity misses → need a larger cache
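In outline, the classification is just three simulations and two subtractions. The sketch below assumes a hypothetical trace-driven simulator simulate_cache (declared but not defined here), so it is illustrative rather than a complete tool:

#include <limits.h>

/* Hypothetical helper: number of misses for a cache of 'size' bytes,
 * 'ways'-way associative (0 = fully associative), 'block'-byte lines,
 * over the given address trace. */
long simulate_cache(const unsigned *trace, long n, long size, int ways, int block);

struct miss_breakdown { long cold, capacity, conflict; };

static struct miss_breakdown classify(const unsigned *trace, long n,
                                      long size, int ways, int block)
{
    struct miss_breakdown m;
    long inf  = simulate_cache(trace, n, LONG_MAX, 0, block);  /* infinite, fully associative */
    long full = simulate_cache(trace, n, size,     0, block);  /* same size, fully associative */
    long real = simulate_cache(trace, n, size,  ways, block);  /* target configuration */
    m.cold     = inf;            /* first references: unavoidable for any cache */
    m.capacity = full - inf;     /* extra misses caused by limited size */
    m.conflict = real - full;    /* extra misses caused by limited associativity */
    return m;
}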

Cache Performance
Note that speculative and multithreaded processors may execute other instructions during a miss, which reduces the performance impact of misses.

CACHE OPTIMIZATIONS

Cache Optimizations Basics
Six basic cache optimizations:
- Larger block size: reduces compulsory misses; increases capacity and conflict misses and the miss penalty
- Larger total cache capacity to reduce miss rate: increases hit time and power consumption
- Higher associativity: reduces conflict misses; increases hit time and power consumption
- Higher number of cache levels: reduces overall memory access time
- Giving priority to read misses over writes: reduces miss penalty
- Avoiding address translation in cache indexing: reduces hit time

Ten Advanced Optimizations
Small and simple first-level caches:
- Critical timing path: addressing the tag memory, then comparing tags, then selecting the correct set
- Direct-mapped caches can overlap tag compare and transmission of data
- Lower associativity reduces power because fewer cache lines are accessed

L1 Size and Associativity
(Figure: access time vs. cache size and associativity)

L1 Size and Associativity
(Figure: energy per read vs. cache size and associativity)

Way Prediction
- To improve hit time, predict the way so the mux can be pre-set
- A mis-prediction gives a longer hit time
- Prediction accuracy: > 90% for two-way, > 80% for four-way; the I-cache has better accuracy than the D-cache
- First used on the MIPS R10000 in the mid-90s; used on the ARM Cortex-A8
- Extended to predict the block as well ("way selection"); increases the mis-prediction penalty
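A rough sketch of a way-predicted lookup in a 2-way set (not from the slides; the structure and field names are illustrative): probe the predicted way first, and fall back to checking the other ways on a wrong guess.

#define WAYS 2

struct pset { unsigned tag[WAYS]; int valid[WAYS]; unsigned pred_way; };

/* Returns the matching way (updating the predictor), or -1 on a miss.
 * A correct prediction needs a single tag compare; a wrong one costs
 * extra time to check the remaining ways. */
static int lookup(struct pset *s, unsigned tag)
{
    unsigned guess = s->pred_way;
    if (s->valid[guess] && s->tag[guess] == tag)
        return (int)guess;                        /* fast hit in the predicted way */

    for (unsigned w = 0; w < WAYS; w++) {         /* slow path: the other ways */
        if (w == guess) continue;
        if (s->valid[w] && s->tag[w] == tag) {
            s->pred_way = w;                      /* retrain the predictor */
            return (int)w;
        }
    }
    return -1;                                    /* miss */
}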

Pipelining Cache
- Pipeline cache access to improve bandwidth
- Examples: Pentium: 1 cycle; Pentium Pro – Pentium III: 2 cycles; Pentium 4 – Core i7: 4 cycles
- Increases the branch mis-prediction penalty
- Makes it easier to increase associativity

Non-blocking Caches
- The cache is a 2-ported device: memory side and processor side
- A lockup-free cache does not block on a miss: it handles the miss and keeps accepting accesses from the processor
- Allows concurrent processing of multiple misses and hits
- The cache has to bookkeep all pending misses: MSHRs (miss status handling registers) contain the address of the pending miss, the destination block in the cache, and the destination register
- The number of MSHRs limits the number of pending misses
- Data dependencies eventually block the processor
- Non-blocking caches are required in dynamically scheduled processors and to support prefetching
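A plausible shape for the MSHR bookkeeping described above (a sketch under assumptions; the field names and sizes are illustrative, not any particular processor's design):

#define MAX_MSHRS   8   /* pending primary misses the cache can track */
#define MAX_TARGETS 4   /* secondary misses merged into one entry */

/* One consumer waiting on the miss: which word of the block and which register to fill. */
struct mshr_target { unsigned block_offset; unsigned dest_reg; };

/* One in-flight miss: which block is being fetched, where it will be placed
 * in the cache, and every consumer waiting for it. */
struct mshr {
    int      valid;
    unsigned block_addr;              /* address of the pending miss */
    unsigned cache_set, cache_way;    /* destination block in the cache */
    int      ntargets;
    struct mshr_target targets[MAX_TARGETS];
};

struct mshr mshr_file[MAX_MSHRS];     /* all entries full => further misses must stall */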

Primary/Secondary Misses
- Primary miss: the first miss to a block
- Secondary miss: a following access to a block whose primary miss is still pending
- Many more misses overall (a blocking cache only has primary misses)
- Needs MSHR entries for both primary and secondary misses
- Misses are overlapped with computation and with other misses

Multibanked Caches
- Organize the cache as independent banks to support simultaneous accesses
- ARM Cortex-A8 supports 1-4 banks for L2; Intel i7 supports 4 banks for L1 and 8 banks for L2
- Interleave banks according to block address
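Sequential interleaving simply sends consecutive block addresses to consecutive banks; a one-function sketch (the bank count and block size below are illustrative, assumed powers of two):

#define NBANKS      4
#define BLOCK_BYTES 64

/* With sequential interleaving, memory block i lives in bank (i mod NBANKS). */
static unsigned bank_of(unsigned addr)
{
    unsigned block = addr / BLOCK_BYTES;   /* block address */
    return block % NBANKS;                 /* bank index */
}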

Critical Word First, Early Restart
- Critical word first: request the missed word from memory first and send it to the processor as soon as it arrives
- Early restart: request the words in normal order, but send the missed word to the processor as soon as it arrives
- The effectiveness of these strategies depends on the block size and on the likelihood of another access to the portion of the block that has not yet been fetched

Merging Write Buffer
- When storing to a block that is already pending in the write buffer, update the existing write-buffer entry
- Reduces stalls due to a full write buffer
- Does not apply to I/O addresses
(Figure: write buffer contents with no write merging vs. with write merging)
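A small sketch of the merge check (not from the slides; the buffer geometry and field names are illustrative): if the store's block already has an entry, the word is folded into it and no new entry is consumed.

#define WB_ENTRIES      4
#define WORDS_PER_ENTRY 4
#define WORD_BYTES      8
#define BLOCK_BYTES     (WORDS_PER_ENTRY * WORD_BYTES)

struct wb_entry {
    int           valid;
    unsigned      block_addr;                  /* which block this entry holds */
    unsigned long data[WORDS_PER_ENTRY];
    unsigned char present[WORDS_PER_ENTRY];    /* which words have been written */
};

static struct wb_entry wb[WB_ENTRIES];

/* Try to merge a store into an entry already holding the same block;
 * returns 1 on success, 0 if a new entry (or a stall) is needed. */
static int wb_merge(unsigned addr, unsigned long value)
{
    unsigned block = addr / BLOCK_BYTES;
    unsigned word  = (addr % BLOCK_BYTES) / WORD_BYTES;
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (wb[i].valid && wb[i].block_addr == block) {
            wb[i].data[word]    = value;       /* overwrite or fill this word */
            wb[i].present[word] = 1;
            return 1;                          /* merged */
        }
    }
    return 0;
}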

Compiler Optimizations
- Loop interchange: swap nested loops to access memory in sequential order (see the sketch below)
- Blocking: instead of accessing entire rows or columns, subdivide matrices into blocks; requires more memory accesses but improves the locality of the accesses
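A standard loop-interchange illustration (not taken from these slides): C stores x[i][j] row-major, so making j the inner loop turns strided accesses into sequential ones.

#define N 512
static double x[N][N];

/* Before: the inner loop walks down a column, striding N*8 bytes per access. */
void scale_column_order(void)
{
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            x[i][j] = 2.0 * x[i][j];
}

/* After interchange: the inner loop walks along a row, so consecutive
 * accesses fall in the same cache block (spatial locality). */
void scale_row_order(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            x[i][j] = 2.0 * x[i][j];
}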

Blocking (1)
/* Before */
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++) {
        r = 0;
        for (k = 0; k < N; k++)
            r = r + Y[i][k] * Z[k][j];
        X[i][j] = r;
    }
2N^3 + N^2 memory words accessed.
Figure 2.8: A snapshot of the three arrays x, y, and z when N = 6 and i = 1. The age of accesses to the array elements is indicated by shade: white means not yet touched, light means older accesses, and dark means newer accesses. Compared to Figure 2.9, elements of y and z are read repeatedly to calculate new elements of x. The variables i, j, and k are shown along the rows or columns used to access the arrays.

Blocking (2)
/* After */
for (jj = 0; jj < N; jj = jj + B)
    for (kk = 0; kk < N; kk = kk + B)
        for (i = 0; i < N; i++)
            for (j = jj; j < min(jj + B, N); j++) {
                r = 0;
                for (k = kk; k < min(kk + B, N); k++)
                    r = r + Y[i][k] * Z[k][j];
                X[i][j] = X[i][j] + r;
            }
2N^3/B + N^2 memory words accessed.
Figure 2.9: The age of accesses to the arrays x, y, and z when B = 3. Note that, in contrast to Figure 2.8, a smaller number of elements is accessed.

Hardware Prefetching
- Fetch two blocks on a miss (the requested block and the next sequential block)
(Figure: hardware prefetching on the Pentium 4)

Compiler Prefetching
- Insert prefetch instructions before the data is needed
- Non-faulting: the prefetch doesn't cause exceptions
- Register prefetch: loads the data into a register
- Cache prefetch: loads the data into the cache
- Combine with loop unrolling and software pipelining
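As an illustration (not from the slides), GCC and Clang expose a cache-prefetch intrinsic, __builtin_prefetch, which compiles to a non-faulting prefetch instruction on targets that have one; the prefetch distance AHEAD below is a tuning assumption.

#define AHEAD 16   /* iterations ahead: should roughly cover the miss latency */

double sum_with_prefetch(const double *a, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++) {
        if (i + AHEAD < n)
            __builtin_prefetch(&a[i + AHEAD], 0, 1);  /* read access, low temporal locality */
        total += a[i];
    }
    return total;
}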

Summary