CES 524, May 6: Eleven Advanced Cache Optimizations (Ch 5) and Parallel Architectures (Ch 4). Slides adapted from Patterson, UC Berkeley.

Review: Basic Cache Optimizations

Reducing hit time
1. Giving reads priority over writes (e.g., a read completes before earlier writes still sitting in the write buffer)
2. Lower associativity

Reducing miss penalty
3. Multilevel caches

Reducing miss rate
4. Larger block size (fewer compulsory misses)
5. Larger cache size (fewer capacity misses)
6. Higher associativity (fewer conflict misses)

Eleven Advanced Cache Optimizations

Reducing hit time
1. Small and simple caches
2. Way prediction
3. Trace caches

Increasing cache bandwidth
4. Pipelined caches
5. Multibanked caches
6. Nonblocking caches

Reducing miss penalty
7. Critical word first
8. Merging write buffers

Reducing miss rate
9. Compiler optimizations

Reducing miss penalty or miss rate via parallelism
10. Hardware prefetching
11. Compiler prefetching

1. Fast Hit Times via Small, Simple Caches

Indexing the tag memory and then comparing tags takes time.
- A small cache helps hit time, since a smaller memory takes less time to index and to find the right set of block(s).
  - E.g., the fast L1 caches stayed the same small size across three generations of AMD microprocessors: K6, Athlon, and Opteron.
  - Also, an L2 cache small enough to fit on-chip with the processor avoids the time penalty of going off chip (~10X longer data latency off-chip).
- Simple => direct mapped.
  - The tag check can be overlapped with data transmission, since there is no way to choose (kill the data if the tag turns out to be bad).
- Access time estimates for 90 nm using the CACTI 4.0 model: median access times relative to a direct-mapped cache are 1.32, 1.39, and 1.43 for 2-way, 4-way, and 8-way caches.

2. Fast Hit Times via Way Prediction

How can we combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache?
- Way prediction: keep extra bits in the cache to predict the "way" (block within the set) of the next cache access.
  - The multiplexor is set early to select the predicted block, so only 1 tag comparison is performed that clock cycle, in parallel with reading the cache data.
  - On a misprediction, check the other blocks for matches in the next clock cycle.
- Prediction accuracy is about 85%.
- Drawback: it is hard to tune the CPU pipeline if the hit time varies between 1 and 2 cycles.

4. Increase Cache Bandwidth by Pipelining

Pipeline cache accesses to maintain bandwidth, even though pipelining gives higher latency for each access.
Number of instruction-cache access pipeline stages:
- 1 for the Pentium
- 2 for the Pentium Pro through Pentium III
- 4 for the Pentium 4
Costs:
- Greater penalty on mispredicted branches: the pipelined stream of memory addresses must be restarted at the new PC.
- More clock cycles between the issue of a load and the availability of the loaded data.

5. Increase Cache Bandwidth: Non-Blocking Caches

A non-blocking (lockup-free) cache allows the data cache to keep supplying cache hits during a miss.
- Helps if the CPU has Full/Empty bits on registers (so execution can continue until the missed datum is actually used) or out-of-order completion.
- "Hit under miss" reduces the effective miss penalty by doing useful work during the miss instead of ignoring CPU requests.
- "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses.
  - Significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses.
  - Requires multiple main memory banks (otherwise multiple outstanding misses cannot be serviced in parallel).
  - The Pentium Pro allows 4 outstanding memory misses.

Value of Hit Under Miss for SPEC92

[Figure: average memory access time (AMAT) for "hit under i misses" (i = 1, 2, 64) versus a blocking cache, for the SPEC92 integer and floating-point benchmarks; direct-mapped data cache, 32-byte blocks, 16-cycle miss penalty. Floating-point programs benefit more from allowing hits under multiple outstanding misses than integer programs do.]
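As a worked example with assumed numbers (not the figure's data): with a 1-cycle hit time, a 5% miss rate, and a 16-cycle miss penalty, AMAT = 1 + 0.05 * 16 = 1.8 cycles. If hit-under-miss work overlaps half of each miss on average, the effective penalty drops to 8 cycles and AMAT = 1 + 0.05 * 8 = 1.4 cycles.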

6. Increase Cache Bandwidth via Multiple Banks

Rather than treating the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses.
- E.g., the Sun T1 ("Niagara") L2 has 4 banks.
- Banking works best when accesses naturally spread themselves across the banks, so the mapping of addresses to banks affects the behavior of the memory system.
- A simple mapping that works well is sequential interleaving: spread block indices sequentially across the banks, so the next block of memory goes to the next bank.
  - E.g., with 4 banks, bank 0 holds all blocks whose index modulo 4 is 0, bank 1 holds all blocks whose index modulo 4 is 1, and so on.

7. Reduce Miss Penalty: Early Restart and Critical Word First

Do not wait for the full block before restarting the CPU.
- Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
  - Spatial locality means the CPU tends to want the next sequential word soon: the first access to a block is normally to the 1st word, but the next is often to the 2nd word, which may stall again, so the benefit of early restart alone is not clear.
- Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while the rest of the words in the block are filled in.
- Long blocks are more popular today, so critical word first is widely used.
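A rough worked example with assumed parameters: suppose a 64-byte block is refilled over an 8-byte bus, with 30 cycles to the first chunk and 4 cycles per chunk thereafter. Waiting for the whole block costs 30 + 7*4 = 58 cycles before the CPU can restart; with critical word first plus early restart, the CPU can restart after about 30 cycles, as soon as the requested 8-byte chunk arrives.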

8. Reduce Miss Penalty by Merging Adjacent New Words in the Write Buffer

- A write buffer lets the processor continue without waiting for a write to finish in the next lower level of memory or cache.
- If the buffer holds blocks of modified words rather than a single word per entry, addresses can be checked to see whether the address of a newly written datum matches an address already in a write-buffer entry.
- If so, the new datum is combined with that existing entry (write merging).
- For write-through caches, merging turns writes of individual words into writes of several sequential words, which uses the memory system more efficiently.
- The Sun T1 (Niagara) processor, among many others, uses write merging.
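The C sketch below models the merging idea in software, purely for illustration; the 4-entry, 8-words-per-block geometry and all names are assumptions, not the organization of any real processor.

    /* Toy model of a merging write buffer. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define WB_ENTRIES     4      /* number of buffer entries (assumed) */
    #define WORDS_PER_LINE 8      /* 8 x 8-byte words = 64-byte block (assumed) */

    typedef struct {
        bool     valid;                 /* entry holds a block in flight */
        uint64_t block_addr;            /* block-aligned address */
        uint8_t  word_valid;            /* one bit per word already written */
        uint64_t data[WORDS_PER_LINE];  /* buffered write data */
    } wb_entry;

    static wb_entry write_buffer[WB_ENTRIES];

    /* Returns true if the new write merged into an existing entry,
     * false if it allocated a new entry (a real buffer would stall when full). */
    bool wb_write(uint64_t addr, uint64_t value)
    {
        uint64_t block = addr / (WORDS_PER_LINE * sizeof(uint64_t));
        unsigned word  = (unsigned)((addr / sizeof(uint64_t)) % WORDS_PER_LINE);

        /* Merge: the address of the new datum matches an existing entry. */
        for (int i = 0; i < WB_ENTRIES; i++) {
            if (write_buffer[i].valid && write_buffer[i].block_addr == block) {
                write_buffer[i].data[word] = value;
                write_buffer[i].word_valid |= (uint8_t)(1u << word);
                return true;
            }
        }
        /* No match: allocate a free entry. */
        for (int i = 0; i < WB_ENTRIES; i++) {
            if (!write_buffer[i].valid) {
                memset(&write_buffer[i], 0, sizeof write_buffer[i]);
                write_buffer[i].valid      = true;
                write_buffer[i].block_addr = block;
                write_buffer[i].data[word] = value;
                write_buffer[i].word_valid = (uint8_t)(1u << word);
                return false;
            }
        }
        return false; /* buffer full: the processor would stall here */
    }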

9. Reduce Misses by Compiler Optimizations

McFarling [1989] used software to reduce cache misses by 75% for an 8 KB direct-mapped cache with 4-byte blocks.
Instructions
- Reorder procedures in memory so as to reduce conflict misses.
- Use profiling to find the conflicts (with tools they developed).
Data
- Merging arrays: improve spatial locality by using a single array of compound elements instead of 2 separate arrays.
- Loop interchange: change the nesting of loops to access data in the order in which it is stored in memory.
- Loop fusion: combine 2 independent loops that have the same looping structure, so each iteration makes more accesses to the variables they share.
- Blocking: improve temporal locality by repeatedly accessing sub-blocks of data small enough to fit in the cache, instead of sweeping down whole columns or rows and moving from cache block to cache block rapidly.

Merging Arrays Example

    /* Before: 2 sequential arrays */
    int val[SIZE];
    int key[SIZE];

    /* After: 1 array of structures */
    struct merge {
        int val;
        int key;
    };
    struct merge merged_array[SIZE];

Reduces conflicts between val and key and improves spatial locality.
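A hypothetical traversal that benefits from the merged layout (target and sum are illustrative names, not part of the original example):

    int sum = 0, target = 42;               /* illustrative values */
    for (int i = 0; i < SIZE; i++) {
        /* key and val of element i now sit in the same cache block */
        if (merged_array[i].key == target)
            sum += merged_array[i].val;
    }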

Loop Interchange Example

    /* Before */
    for (k = 0; k < 100; k = k+1)
        for (j = 0; j < 100; j = j+1)
            for (i = 0; i < 5000; i = i+1)
                x[i][j] = 2 * x[i][j];

    /* After, since x[i][j+1] follows x[i][j] in memory */
    for (k = 0; k < 100; k = k+1)
        for (i = 0; i < 5000; i = i+1)
            for (j = 0; j < 100; j = j+1)
                x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improves spatial locality.

Loop Fusion Example

    /* Before */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1)
            a[i][j] = 1/b[i][j] * c[i][j];
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1)
            d[i][j] = a[i][j] + c[i][j];

    /* After */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1) {
            a[i][j] = 1/b[i][j] * c[i][j];
            d[i][j] = a[i][j] + c[i][j];
        }

Before: two misses per access to a and c. After: one miss per access, since the second use of a[i][j] and c[i][j] now hits in the cache (better temporal locality).

Blocking Example

    /* Before */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1) {
            r = 0;
            for (k = 0; k < N; k = k+1)
                r = r + y[i][k]*z[k][j];
            x[i][j] = r;
        }

Two inner loops:
- Read all N x N elements of z[]
- Read N elements of 1 row of y[] repeatedly
- Write N elements of 1 row of x[]
Capacity misses are a function of N and the cache size:
- roughly 2N^3 + N^2 memory words are accessed (assuming no conflicts; otherwise worse)
For large N, these long access streams repeatedly flush cache blocks that are needed again soon.
Better way: compute on a B x B submatrix that fits in the cache.

Blocking Example (continued)

    /* After */
    for (jj = 0; jj < N; jj = jj+B)
        for (kk = 0; kk < N; kk = kk+B)
            for (i = 0; i < N; i = i+1)
                for (j = jj; j < min(jj+B, N); j = j+1) {
                    r = 0;
                    for (k = kk; k < min(kk+B, N); k = k+1)
                        r = r + y[i][k]*z[k][j];
                    x[i][j] = x[i][j] + r;
                }

B is called the blocking factor.
Capacity misses fall from 2N^3 + N^2 to 2N^3/B + N^2.
Do conflict misses fall as well?

Reduce Conflict Misses by Blocking

[Figure: conflict misses in caches that are not fully associative, versus blocking size.]
- Lam et al. [1991] found that a blocking factor of 24 had one fifth the misses of a factor of 48, even though both working sets fit in the cache.

Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

10. Reduce Misses by Hardware Prefetching

Prefetching relies on having extra memory bandwidth that can be used without penalty, since some prefetched values go unused.
Instruction prefetching
- Typically, the CPU fetches 2 blocks on a miss: the requested block and the next consecutive block.
- The requested block is placed in the instruction cache when it returns, and the prefetched block is placed in an instruction stream buffer.
Data prefetching
- The Pentium 4 can prefetch data into the L2 cache from up to 8 streams from 8 different 4 KB pages.
- Prefetching is invoked whenever there are 2 successive L2 cache misses to a page, if the distance between the missed cache blocks is < 256 bytes.

11. Reduce Misses by Software Prefetching of Data

Data prefetch
- Load data into a register (HP PA-RISC loads).
- Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v. 9).
- Special prefetching instructions cannot cause faults; they are a form of speculative execution.
Issuing prefetch instructions takes time
- Is the cost of issuing prefetch instructions less than the savings from reduced misses?
- Wider superscalar processors make the extra issue bandwidth less of a problem.
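A minimal sketch of compiler-style software prefetching written by hand, using the GCC/Clang intrinsic __builtin_prefetch(addr, rw, locality). The prefetch distance PF_DIST is an assumption and would normally be tuned for the target machine.

    #include <stddef.h>

    #define PF_DIST 16   /* prefetch this many elements ahead (assumed) */

    double sum_with_prefetch(const double *a, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + PF_DIST < n)
                /* rw = 0 (read), locality = 1 (low temporal reuse) */
                __builtin_prefetch(&a[i + PF_DIST], 0, 1);
            sum += a[i];
        }
        return sum;
    }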

Compiler Optimization vs. Memory Hierarchy Search

A compiler tries to figure out memory hierarchy optimizations statically.
New approach, "auto-tuners": first run variations of the program on the target computer to find the best combination of optimizations (blocking, padding, ...) and algorithms, then produce C code to be compiled for that computer.
Auto-tuners are typically targeted at numerical kernels:
- E.g., PHiPAC (BLAS), ATLAS (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFTW.
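The toy C "auto-tuner" below times a blocked matrix multiply for a few candidate block sizes and reports the fastest; real auto-tuners such as ATLAS or PHiPAC search a far larger space (unrolling, padding, register blocking). The matrix size, candidate list, and all names here are arbitrary assumptions for illustration.

    #include <stdio.h>
    #include <time.h>

    #define N 512   /* matrix dimension (assumed) */

    static void blocked_mm(int n, int b, const double *A, const double *B, double *C)
    {
        for (int jj = 0; jj < n; jj += b)
            for (int kk = 0; kk < n; kk += b)
                for (int i = 0; i < n; i++)
                    for (int j = jj; j < jj + b && j < n; j++) {
                        double r = 0.0;
                        for (int k = kk; k < kk + b && k < n; k++)
                            r += A[i*n + k] * B[k*n + j];
                        C[i*n + j] += r;
                    }
    }

    int main(void)
    {
        static double A[N*N], B[N*N], C[N*N];
        int candidates[] = {16, 32, 64, 128};   /* block sizes to try (assumed) */
        for (int i = 0; i < N*N; i++) { A[i] = 1.0; B[i] = 2.0; }

        int best_b = 0; double best_t = 1e30;
        for (size_t c = 0; c < sizeof candidates / sizeof candidates[0]; c++) {
            for (int i = 0; i < N*N; i++) C[i] = 0.0;
            clock_t t0 = clock();
            blocked_mm(N, candidates[c], A, B, C);
            double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("b = %3d: %.3f s\n", candidates[c], t);
            if (t < best_t) { best_t = t; best_b = candidates[c]; }
        }
        printf("best block size on this machine: %d\n", best_b);
        return 0;
    }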

Sparse Matrix: Search for Blocking

[Figure: Mflop/s of sparse matrix-vector multiply for a finite element problem across candidate register block sizes; the reference code versus the best blocking found by search, 4x2. Im, Yelick, Vuduc, 2005.]

Why Matrix Multiplication?
- An important kernel in scientific problems: it appears in many linear algebra algorithms and is closely related to other algorithms, e.g., transitive closure on a graph using Floyd-Warshall.
- Optimization ideas carry over to other problems.
- The best case for optimization payoffs.
- The most-studied algorithm in high performance computing.

Matrix-multiply, optimized several ways Speed of n-by-n matrix multiply on Sun Ultra-1/170, peak = 330 MFlops

Outline
- Idealized and actual costs in modern processors
- Memory hierarchies
- Case study: matrix multiplication
  - Simple cache model
  - Warm-up: matrix-vector multiplication
  - Blocking algorithms
  - Other techniques
- Automatic performance tuning

Note on Matrix Storage

A matrix is a 2-D array of elements, but memory addresses are "1-D". Conventions for matrix layout:
- By column, or "column major" (Fortran default): A(i,j) is at A + i + j*n
- By row, or "row major" (C default): A(i,j) is at A + i*n + j
- Recursive layouts (later)
We use column major for now. [Figure: a column-major matrix in memory; a row of the matrix is spread across many cachelines. Figure source: Larry Carter, UCSD.]
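A minimal C sketch of the two indexing conventions above (zero-based; the macro names are illustrative):

    /* Column major (Fortran default): A(i,j) at A + i + j*n */
    #define IDX_CM(A, i, j, n)  ((A)[(i) + (j) * (n)])
    /* Row major (C default): A(i,j) at A + i*n + j */
    #define IDX_RM(A, i, j, n)  ((A)[(i) * (n) + (j)])

With column-major storage, walking down a column touches consecutive addresses, while walking along a row strides by n elements and touches a new cacheline almost every step.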

Using a Simple Model of Memory to Optimize

Assume just 2 levels in the hierarchy, fast (cache) and slow (DRAM), with all data initially in slow memory.
- m = number of memory elements (words) moved between fast and slow memory
- t_m = time per slow memory operation
- f = number of arithmetic operations
- t_f = time per arithmetic operation, with t_f << t_m
- q = f / m = average number of flops per slow memory access
Minimum possible time = f * t_f, when all data is in fast memory.
Actual time = f * t_f + m * t_m = f * t_f * (1 + (t_m/t_f) * (1/q))
Larger q means the time is closer to the minimum f * t_f:
- q >= t_m/t_f is needed to get at least half of peak speed.
Computational intensity q is the key to algorithm efficiency; machine balance t_m/t_f is the key to machine efficiency.
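As a worked example with an assumed machine balance: if t_m/t_f = 10 and q = 2 (as for matrix-vector multiply below), the time is f * t_f * (1 + 10/2) = 6 * f * t_f, six times the all-in-fast-memory minimum; reaching half of peak would require q >= 10.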

Warm up: Matrix-Vector Multiplication

    {implements y = y + A*x}
    for i = 1:n
        for j = 1:n
            y(i) = y(i) + A(i,j)*x(j)

[Figure: y(i) is updated with the product of row A(i,:) and vector x(:).]
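A runnable C version of the same loop, under the usual assumptions of zero-based indexing and row-major storage of A as a flat array (function and variable names are illustrative):

    void matvec(int n, const double *A, const double *x, double *y)
    {
        for (int i = 0; i < n; i++)          /* row i of A */
            for (int j = 0; j < n; j++)
                y[i] += A[i*n + j] * x[j];   /* y(i) = y(i) + A(i,j)*x(j) */
    }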

Warm up: Matrix-Vector Multiplication (memory traffic)

    {read x(1:n) into fast memory}
    {read y(1:n) into fast memory}
    for i = 1:n
        {read row i of A into fast memory}
        for j = 1:n
            y(i) = y(i) + A(i,j)*x(j)
    {write y(1:n) back to slow memory}

m = number of slow memory refs = 3n + n^2
f = number of arithmetic operations = 2n^2
q = f / m ~ 2

Matrix-vector multiplication is limited by slow memory speed.

Modeling Matrix-Vector Multiplication

Compute the time for an n x n = 1000 x 1000 matrix:

Time = f * t_f + m * t_m = f * t_f * (1 + (t_m/t_f) * (1/q))
     = 2*n^2 * t_f * (1 + (t_m/t_f) * (1/2))

The machine balance t_m/t_f is the value q must reach for at least half of peak speed.

Simplifying Assumptions

What simplifying assumptions did we make in this analysis?
- Ignored parallelism between memory and arithmetic within the processor.
  - Sometimes the arithmetic term is dropped entirely in this type of analysis.
- Assumed fast memory was large enough to hold the three vectors.
  - Reasonable if we are talking about any level of cache.
  - Not if we are talking about registers (~32 words).
- Assumed the cost of a fast memory access is 0.
  - Reasonable if we are talking about registers.
  - Not necessarily for cache (1-2 cycles for L1).
- Assumed memory latency is constant.
Could simplify even further by ignoring the memory operations on the x and y vectors:
- Mflop rate/element = 2 / (2*t_f + t_m)

Validating the Model

How well does the model predict actual performance?
- Actual DGEMV: the most highly optimized code for each platform.
- The model is sufficient to compare across machines, but it under-predicts on the most recent ones because of the latency estimate.

Naïve Matrix Multiply

    {implements C = C + A*B}
    for i = 1 to n
        for j = 1 to n
            for k = 1 to n
                C(i,j) = C(i,j) + A(i,k) * B(k,j)

[Figure: C(i,j) accumulates the product of row A(i,:) and column B(:,j).]

The algorithm performs 2*n^3 = O(n^3) flops and operates on 3*n^2 words of memory, so q is potentially as large as 2*n^3 / (3*n^2) = O(n).

Naïve Matrix Multiply (memory traffic)

    {implements C = C + A*B}
    for i = 1 to n
        {read row i of A into fast memory}
        for j = 1 to n
            {read C(i,j) into fast memory}
            {read column j of B into fast memory}
            for k = 1 to n
                C(i,j) = C(i,j) + A(i,k) * B(k,j)
            {write C(i,j) back to slow memory}

Naïve Matrix Multiply (analysis)

Number of slow memory references in the unblocked matrix multiply:

m = n^3   (read each column of B n times)
  + n^2   (read each row of A once)
  + 2n^2  (read and write each element of C once)
  = n^3 + 3n^2

So q = f / m = 2n^3 / (n^3 + 3n^2) ~ 2 for large n: no improvement over matrix-vector multiply.

Matrix-multiply, optimized several ways Speed of n-by-n matrix multiply on Sun Ultra-1/170, peak = 330 MFlops

Naïve Matrix Multiply on RS/6000

[Figure: performance versus matrix size. Measured time grows as T = N^4.7; O(N^3) performance would give constant cycles/flop, so performance looks like O(N^4.7). Size 2000 took 5 days; a further annotation reads "would take 1095 years". Slide source: Larry Carter, UCSD.]

Naïve Matrix Multiply on RS/6000 (annotated)

[Figure annotations on the same plot: page miss every iteration; TLB miss every iteration; cache miss every 16 iterations; page miss every 512 iterations. Slide source: Larry Carter, UCSD.]

Blocked (Tiled) Matrix Multiply

Consider A, B, C to be N-by-N matrices of b-by-b subblocks, where b = n/N is called the block size.

    for i = 1 to N
        for j = 1 to N
            {read block C(i,j) into fast memory}
            for k = 1 to N
                {read block A(i,k) into fast memory}
                {read block B(k,j) into fast memory}
                C(i,j) = C(i,j) + A(i,k) * B(k,j)   {do a matrix multiply on blocks}
            {write block C(i,j) back to slow memory}
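A C sketch of the blocked multiply above, with zero-based indices, row-major storage, and the assumption (to keep the sketch short) that n is a multiple of the block size b:

    void blocked_matmul(int n, int b, const double *A, const double *B, double *C)
    {
        for (int i0 = 0; i0 < n; i0 += b)
            for (int j0 = 0; j0 < n; j0 += b)
                for (int k0 = 0; k0 < n; k0 += b)
                    /* C block (i0,j0) += A block (i0,k0) * B block (k0,j0) */
                    for (int i = i0; i < i0 + b; i++)
                        for (int j = j0; j < j0 + b; j++) {
                            double r = C[i*n + j];
                            for (int k = k0; k < k0 + b; k++)
                                r += A[i*n + k] * B[k*n + j];
                            C[i*n + j] = r;
                        }
    }

If b is chosen so that three b-by-b blocks fit in cache, the inner three loops reuse the A and B blocks from cache, which is exactly where the q ~= b improvement derived on the next slide comes from.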

Blocked (Tiled) Matrix Multiply (analysis)

Recall: m is the amount of memory traffic between slow and fast memory; the matrix has n x n elements and N x N blocks, each of size b x b; f = 2n^3 is the number of floating point operations; q = f / m measures the algorithm's efficiency in the memory system.

m = N*n^2   (read each block of B N^3 times: N^3 * b^2 = N^3 * (n/N)^2 = N*n^2)
  + N*n^2   (read each block of A N^3 times)
  + 2n^2    (read and write each block of C once)
  = (2N + 2) * n^2

So the computational intensity q = f / m = 2n^3 / ((2N + 2) * n^2) ~= n/N = b for large n.
So we can improve performance by increasing the block size b.
This can be much faster than matrix-vector multiply (q = 2).

Using Analysis to Understand Machines

The blocked algorithm has computational intensity q ~= b: the larger the block size, the more efficient the algorithm.
Limit: all three blocks from A, B, C must fit in fast memory (cache), so we cannot make the blocks arbitrarily large.
Assume the fast memory has size M_fast. Then

3b^2 <= M_fast, so q ~= b <= sqrt(M_fast / 3)

To run matrix multiply at 1/2 the peak arithmetic speed of the machine, we need a fast memory of size

M_fast >= 3b^2 ~= 3q^2 = 3 (t_m/t_f)^2

This size is reasonable for an L1 cache, but not for a register set.
Note: the analysis assumes it is possible to schedule the instructions perfectly.
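As a worked example with an assumed machine balance of t_m/t_f = 16: M_fast >= 3 * 16^2 = 768 words, or about 6 KB for 8-byte words, which is indeed L1-cache-sized rather than register-file-sized.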

Limits to Optimizing Matrix Multiply

The blocked algorithm changes the order in which values are accumulated into each C[i,j] by applying associativity.
- It gives slightly different answers than the naïve code because of roundoff, which is OK.
The previous analysis showed that the blocked algorithm has computational intensity q ~= b <= sqrt(M_fast / 3).
There is a lower bound result saying we cannot do any better than this (using only associativity):
Theorem (Hong & Kung, 1981): any reorganization of this algorithm (that uses only associativity) is limited to q = O(sqrt(M_fast)).

Technique                                     | Hit time | Bandwidth | Miss penalty | Miss rate | HW cost/complexity | Comment
Small and simple caches                       | +        |           |              | -         | 0                  | Trivial; widely used
Way-predicting caches                         | +        |           |              |           | 1                  | Used in Pentium 4
Trace caches                                  | +        |           |              |           | 3                  | Used in Pentium 4
Pipelined cache access                        | -        | +         |              |           | 1                  | Widely used
Nonblocking caches                            |          | +         | +            |           | 3                  | Widely used
Banked caches                                 |          | +         |              |           | 1                  | Used in L2 of Opteron and Niagara
Critical word first and early restart         |          |           | +            |           | 2                  | Widely used
Merging write buffer                          |          |           | +            |           | 1                  | Widely used with write through
Compiler techniques to reduce cache misses    |          |           |              | +         | 0                  | Software is a challenge; some computers have compiler option
Hardware prefetching of instructions and data |          |           | +            | +         | 2 instr., 3 data   | Many prefetch instructions; AMD Opteron prefetches data
Compiler-controlled prefetching               |          |           | +            | +         | 3                  | Needs nonblocking cache; in many CPUs