COMP 206: Computer Architecture and Implementation
Montek Singh
Wed., Nov. 10, 2003
Topic: Caches (contd.)


Slide 2: Outline
- Improving Cache Performance
  - Reducing misses (contd. from previous lecture)
  - Reducing miss penalty
- Reading: HP3 Sections 5.3-5.7

Slide 3: 5. Reduce Misses by Hardware Prefetching
- Instruction prefetching: the Alpha 21064 fetches 2 blocks on a miss. The extra block is placed in a stream buffer, and on a miss the stream buffer is checked before going to memory.
- Works with data blocks too: Jouppi [1990] found that 1 data stream buffer caught 25% of the misses from a 4KB cache, and 4 stream buffers caught 43%. Palacharla & Kessler [1994] found that, for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set-associative caches.
- Prefetching relies on extra memory bandwidth that can be used without penalty, e.g., up to 8 prefetch stream buffers in the UltraSPARC III. (A toy software model of a stream buffer follows below.)
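
A minimal, self-contained toy model in C of a one-entry stream buffer, sketched only to illustrate the idea above; the memory size, block size, and data structures are assumptions made for this example, not details of the Alpha 21064 or the lecture.

/* Toy model: on a miss, check the stream buffer first, then memory,
 * and always prefetch the next sequential block into the buffer. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NBLOCKS     64      /* toy "memory" of 64 blocks (assumption) */
#define BLOCK_WORDS 4

static int  memory[NBLOCKS][BLOCK_WORDS];     /* backing store */
static int  cache_data[NBLOCKS][BLOCK_WORDS]; /* toy direct-mapped cache data */
static bool cache_valid[NBLOCKS];

static struct {                               /* one-entry stream buffer */
    bool valid;
    int  block;
    int  data[BLOCK_WORDS];
} sb;

static void handle_miss(int block)
{
    if (sb.valid && sb.block == block)
        memcpy(cache_data[block], sb.data, sizeof sb.data);   /* stream-buffer hit */
    else
        memcpy(cache_data[block], memory[block], sizeof memory[block]);
    cache_valid[block] = true;

    if (block + 1 < NBLOCKS) {                /* prefetch the next sequential block */
        memcpy(sb.data, memory[block + 1], sizeof sb.data);
        sb.block = block + 1;
        sb.valid = true;
    }
}

int main(void)
{
    handle_miss(10);        /* ordinary miss; block 11 is prefetched */
    handle_miss(11);        /* served from the stream buffer */
    printf("block 11 valid in cache: %d\n", cache_valid[11]);
    return 0;
}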

Slide 4: 6. Reducing Misses by Software Prefetching
- Data prefetch: the compiler inserts special "prefetch" instructions into the program.
  - Register prefetch: load data into a register (HP PA-RISC loads).
  - Cache prefetch: load data into the cache (MIPS IV, PowerPC, SPARC v9).
- A form of speculative execution: we don't really know whether the data will be needed or whether it is already in the cache. The most effective prefetches are "semantically invisible" to the program: they do not change registers or memory, and they cannot cause a fault/exception; if they would fault, they are simply turned into NOPs.
- Issuing prefetch instructions takes time: is the cost of issuing prefetches less than the savings from reduced misses? (A small example using a compiler intrinsic follows below.)
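
A sketch of software prefetching using the GCC/Clang __builtin_prefetch intrinsic, which compiles down to the kind of cache-prefetch instructions named above on machines that have them; the array size and prefetch distance (PFDIST) are illustrative assumptions, and the right distance depends on the machine's miss latency.

#include <stdio.h>

#define N      (1 << 20)
#define PFDIST 16               /* prefetch 16 iterations ahead (assumption) */

static double a[N], b[N];

double dot_with_prefetch(void)
{
    double s = 0.0;
    for (int i = 0; i < N; i++) {
        if (i + PFDIST < N)
            __builtin_prefetch(&b[i + PFDIST], 0, 1);  /* read, low temporal locality */
        s += a[i] * b[i];
    }
    return s;
}

int main(void)
{
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }
    printf("%f\n", dot_with_prefetch());
    return 0;
}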

Slide 5: 7. Reduce Misses by Compiler Optimizations
- Instructions:
  - Reorder procedures in memory so as to reduce misses.
  - Use profiling to look at conflicts.
  - McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks.
- Data:
  - Merging arrays: improve spatial locality with a single array of compound elements instead of 2 separate arrays.
  - Loop interchange: change the nesting of loops to access data in the order it is stored in memory.
  - Loop fusion: combine two independent loops that have the same looping structure and some overlapping variables.
  - Blocking: improve temporal locality by accessing "blocks" of data repeatedly instead of going down whole columns or rows.

Slide 6: Merging Arrays Example
- Reduces conflicts between val and key.
- The addressing expressions are different.

/* Before */
int val[SIZE];
int key[SIZE];

/* After */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Slide 7: Loop Interchange Example
- Sequential accesses instead of striding through memory every 100 words.

/* Before */
for (k = 0; k < 100; k++)
    for (j = 0; j < 100; j++)
        for (i = 0; i < 5000; i++)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k++)
    for (i = 0; i < 5000; i++)
        for (j = 0; j < 100; j++)
            x[i][j] = 2 * x[i][j];

Slide 8: Loop Fusion Example
- Before: 2 misses per access to a and c.
- After: 1 miss per access to a and c.

/* Before */
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

Slide 9: Blocking Example
- Two inner loops:
  - Read all NxN elements of z[].
  - Read N elements of 1 row of y[] repeatedly.
  - Write N elements of 1 row of x[].
- Capacity misses are a function of N and cache size: if all 3 NxN matrices fit, there are no capacity misses; otherwise...
- Idea: compute on a BxB submatrix that fits in the cache.

/* Before */
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++) {
        r = 0;
        for (k = 0; k < N; k++)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

Slide 10: Blocking Example (contd.)
- Age of accesses (figure):
  - White means not touched yet.
  - Light gray means touched a while ago.
  - Dark gray means newer accesses.

Slide 11: Blocking Example (contd.)
- Work with BxB submatrices: the smaller working set can fit within the cache, giving fewer capacity misses.

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i++)
            for (j = jj; j < min(jj+B,N); j++) {
                r = 0;
                for (k = kk; k < min(kk+B,N); k++)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

Slide 12: Blocking Example (contd.)
- Capacity misses go from (2N^3 + N^2) to (2N^3/B + N^2).
- B = "blocking factor".
- What happens to conflict misses?
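
With illustrative numbers (not from the lecture), say N = 1000 and B = 10: the capacity-miss count drops from about 2x10^9 + 10^6 to about 2x10^8 + 10^6, roughly a 10x reduction. Larger B helps further as long as the BxB working set still fits in the cache.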

Slide 13: Reducing Conflict Misses by Blocking
- Conflict misses in non-fully-associative caches vs. blocking factor (figure).
- Lam et al. [1991] found that a blocking factor of 24 had one fifth the misses of a factor of 48, despite the fact that both working sets fit in the cache.

Slide 14: Summary: Compiler Optimizations to Reduce Cache Misses (figure)

Slide 15: Reducing Miss Penalty
1. Read Priority over Write on Miss
- Write through:
  - Using write buffers creates RAW conflicts with reads on cache misses.
  - If we simply wait for the write buffer to empty, the read miss penalty might increase by 50% (old MIPS 1000).
  - Instead, check the write buffer contents before the read; if there are no conflicts, let the memory access continue. (A sketch of this check appears below.)
- Write back:
  - A read miss may replace a dirty block.
  - Normal approach: write the dirty block to memory, and then do the read.
  - Instead, copy the dirty block to a write buffer, then do the read, and then do the write.
  - The CPU stalls less since it restarts as soon as the read completes.
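
An illustrative C sketch (not from the lecture) of checking a write buffer for a matching address before servicing a read miss; the buffer size and entry layout are assumptions made for the example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WB_ENTRIES 4

typedef struct {
    bool     valid;
    uint64_t addr;
    uint32_t data;
} wb_entry_t;

static wb_entry_t write_buffer[WB_ENTRIES];

/* Returns true (and the buffered value) if the read address matches a pending
 * write; otherwise the read may go to memory without draining the buffer. */
bool read_hits_write_buffer(uint64_t addr, uint32_t *value)
{
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (write_buffer[i].valid && write_buffer[i].addr == addr) {
            *value = write_buffer[i].data;   /* forward the buffered value */
            return true;
        }
    }
    return false;                            /* no RAW conflict: read can proceed */
}

int main(void)
{
    write_buffer[0] = (wb_entry_t){ true, 0x1000, 42 };
    uint32_t v;
    if (read_hits_write_buffer(0x1000, &v))
        printf("forwarded %u from write buffer\n", v);
    return 0;
}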

Slide 16: 2. Fetching Subblocks to Reduce Miss Penalty
- Don't have to load the full block on a miss.
- Keep valid bits per subblock to indicate which subblocks hold valid data.
- (Figure: cache with tags 100, 200, 300 and a valid bit for each subblock; a small data-structure sketch follows below.)
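
A sketch in C of a cache line with per-subblock valid bits; the block/subblock sizes and the fill routine are illustrative assumptions, not part of any particular machine.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SUBBLOCKS       4
#define SUBBLOCK_BYTES  16

typedef struct {
    uint64_t tag;
    bool     valid[SUBBLOCKS];            /* one valid bit per subblock */
    uint8_t  data[SUBBLOCKS][SUBBLOCK_BYTES];
} cache_line_t;

/* On a miss, fetch only the requested subblock and mark it valid;
 * the other subblocks stay invalid until they are demanded. */
void fill_subblock(cache_line_t *line, uint64_t tag, int sub, const uint8_t *mem)
{
    if (line->tag != tag) {                /* new block: invalidate all subblocks */
        line->tag = tag;
        memset(line->valid, 0, sizeof line->valid);
    }
    memcpy(line->data[sub], mem, SUBBLOCK_BYTES);
    line->valid[sub] = true;
}

int main(void)
{
    static cache_line_t line;
    uint8_t mem[SUBBLOCK_BYTES] = {0};
    fill_subblock(&line, 0x100, 2, mem);   /* only subblock 2 becomes valid */
    return 0;
}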

Slide 17: 3. Early Restart and Critical Word First
- Don't wait for the full block to be loaded before restarting the CPU.
  - Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
  - Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue while filling the rest of the words in the block. Also called "wrapped fetch" and "requested word first".
- Generally useful only with large blocks.
- Spatial locality can be a problem: the CPU tends to want the next sequential word, so it is not clear how much early restart helps.
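
A tiny C sketch of the "wrapped fetch" word ordering used by critical-word-first: given the offset of the missed word, the words of the block are delivered starting at that offset and wrapping around. The block size is an assumption for the example.

#include <stdio.h>

#define WORDS_PER_BLOCK 8

/* Print the order in which words would be delivered for a given critical word. */
void wrapped_fetch_order(int critical_word)
{
    for (int i = 0; i < WORDS_PER_BLOCK; i++)
        printf("%d ", (critical_word + i) % WORDS_PER_BLOCK);
    printf("\n");
}

int main(void)
{
    wrapped_fetch_order(5);   /* prints: 5 6 7 0 1 2 3 4 */
    return 0;
}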

Slide 18: 4. Non-blocking Caches
- A non-blocking cache (or lockup-free cache) allows the data cache to continue to supply cache hits during a miss.
  - "Hit under miss" reduces the effective miss penalty by being helpful during a miss instead of ignoring the requests of the CPU.
  - "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses.
  - This significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses.
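
A toy C sketch (not from the lecture) of how a controller might track multiple outstanding misses, in the spirit of miss status holding registers (MSHRs); the entry count and fields are assumptions made for illustration.

#include <stdbool.h>
#include <stdint.h>

#define MAX_OUTSTANDING 4

typedef struct {
    bool     busy;
    uint64_t block_addr;     /* block currently being fetched from memory */
} mshr_t;

static mshr_t mshr[MAX_OUTSTANDING];

/* Record a new outstanding miss. Returns the entry index, or -1 if the
 * cache must stall because all miss-tracking entries are in use. */
int allocate_miss(uint64_t block_addr)
{
    for (int i = 0; i < MAX_OUTSTANDING; i++)        /* secondary miss: merge */
        if (mshr[i].busy && mshr[i].block_addr == block_addr)
            return i;
    for (int i = 0; i < MAX_OUTSTANDING; i++)        /* primary miss: allocate */
        if (!mshr[i].busy) {
            mshr[i].busy = true;
            mshr[i].block_addr = block_addr;
            return i;
        }
    return -1;                                       /* all entries busy: stall */
}

int main(void)
{
    allocate_miss(0x40);    /* primary miss: new entry */
    allocate_miss(0x40);    /* secondary miss: merged with the existing entry */
    return 0;
}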

Slide 19: Value of Hit Under Miss for SPEC
- FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26.
- Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19.
- Configuration: 8 KB data cache, direct-mapped, 32-byte blocks, 16-cycle miss penalty.
- (Figure: "Hit under i Misses" for the integer and floating-point benchmarks.)

Slide 20: 5. Miss Penalty Reduction: L2 Cache
L2 equations:
  AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
  Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
  AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)
Definitions:
  Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2).
  Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2).
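
For concreteness, a small worked example with assumed (illustrative) values, not taken from the lecture: Hit Time_L1 = 1 cycle, Miss Rate_L1 = 4%, Hit Time_L2 = 10 cycles, local Miss Rate_L2 = 50%, Miss Penalty_L2 = 100 cycles. Then AMAT = 1 + 0.04 x (10 + 0.5 x 100) = 1 + 0.04 x 60 = 3.4 cycles, and the global L2 miss rate is 0.04 x 0.5 = 2%.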

Slide 21: Reducing Misses: Which Apply to L2 Cache?
Reducing miss rate:
1. Reduce misses via larger block size.
2. Reduce conflict misses via higher associativity.
3. Reduce conflict misses via a victim cache.
4. Reduce conflict misses via pseudo-associativity.
5. Reduce misses by hardware prefetching of instructions and data.
6. Reduce misses by software prefetching of data.
7. Reduce capacity/conflict misses by compiler optimizations.

Slide 22: L2 Cache Block Size and A.M.A.T.
- (Figure) Assumes a 32KB L1 cache and an 8-byte path to memory.

Slide 23: Reducing Miss Penalty Summary
- Five techniques:
  1. Read priority over write on miss.
  2. Subblock placement.
  3. Early restart and critical word first on miss.
  4. Non-blocking caches (hit under miss).
  5. Second-level cache: reducing the L2 miss rate effectively reduces the L1 miss penalty!
- Can be applied recursively to multilevel caches. The danger is that the time to reach DRAM grows with multiple levels in between.

Slide 24: Review: Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache (next lecture).

