COMP 206: Computer Architecture and Implementation
Montek Singh
Wed., Oct. 30, 2002
Topic: Caches (contd.)
7. Reduce Misses by Compiler Optimizations

Instructions:
- Reorder procedures in memory so as to reduce conflict misses.
- Use profiling to look at conflicts.
- McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks.

Data:
- Merging Arrays: improve spatial locality with a single array of compound elements instead of two arrays.
- Loop Interchange: change the nesting of loops to access data in the order it is stored in memory.
- Loop Fusion: combine two independent loops that have the same looping structure and overlapping variables.
- Blocking: improve temporal locality by accessing "blocks" of data repeatedly instead of walking down whole columns or rows.
Merging Arrays Example

    /* Before: two separate arrays */
    int val[SIZE];
    int key[SIZE];

    /* After: a single array of compound elements */
    struct merge {
        int val;
        int key;
    };
    struct merge merged_array[SIZE];

Merging reduces conflict misses between val and key. Note that the addressing expressions for the two layouts differ (val[i] vs. merged_array[i].val).
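As a hedged illustration (this traversal and all of its names are not from the slides), the merged layout pays off when val and key for the same index are used together, since both fields usually land in the same cache block:

    #include <stddef.h>

    #define SIZE 1024                  /* illustrative size */

    struct merge { int val; int key; };
    struct merge merged_array[SIZE];

    /* Illustrative traversal: key and val for index i are adjacent in
       memory, so the read of merged_array[i].val after the test on
       merged_array[i].key is almost always a cache hit. */
    int sum_matching(int target)
    {
        int sum = 0;
        for (size_t i = 0; i < SIZE; i++)
            if (merged_array[i].key == target)
                sum += merged_array[i].val;
        return sum;
    }

With two separate arrays, the same loop alternates between two memory regions and can suffer conflict misses whenever val[i] and key[i] map to the same cache set.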
Loop Interchange Example

    /* Before: strides through memory every 100 words */
    for (k = 0; k < 100; k++)
        for (j = 0; j < 100; j++)
            for (i = 0; i < 5000; i++)
                x[i][j] = 2 * x[i][j];

    /* After: sequential accesses */
    for (k = 0; k < 100; k++)
        for (i = 0; i < 5000; i++)
            for (j = 0; j < 100; j++)
                x[i][j] = 2 * x[i][j];

Interchanging the two inner loops yields sequential accesses instead of striding through memory every 100 words.
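Since C stores two-dimensional arrays in row-major order, the address arithmetic makes the stride explicit (a worked sketch, assuming x is declared as int x[5000][100]):

    &x[i][j] = base + (i*100 + j) * sizeof(int)

Before the interchange, the inner loop varies i, so consecutive accesses are 100*sizeof(int) bytes apart and touch a new cache block almost every iteration; after it, the inner loop varies j, so consecutive accesses are sizeof(int) bytes apart and each block yields many hits.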
Loop Fusion Example

    /* Before: 2 misses per access to a and c */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            a[i][j] = 1/b[i][j] * c[i][j];
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            d[i][j] = a[i][j] + c[i][j];

    /* After: 1 miss per access to a and c */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = 1/b[i][j] * c[i][j];
            d[i][j] = a[i][j] + c[i][j];
        }

After fusion, the second statement's reads of a[i][j] and c[i][j] hit in the cache (or even in registers), because the first statement touched those values an instant earlier.
Blocking Example

    /* Before */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            r = 0;
            for (k = 0; k < N; k++)
                r = r + y[i][k] * z[k][j];
            x[i][j] = r;
        }

The two inner loops:
- read all N×N elements of z[]
- read the N elements of one row of y[] repeatedly
- write the N elements of one row of x[]

Capacity misses are a function of N and the cache size: if the cache can hold all three N×N matrices, there are no capacity misses; otherwise they occur.
Idea: compute on a B×B submatrix that fits in the cache.
Blocking Example (contd.)

Work with B×B submatrices: the smaller working set can fit within the cache, giving fewer capacity misses.

    /* After */
    for (jj = 0; jj < N; jj = jj+B)
        for (kk = 0; kk < N; kk = kk+B)
            for (i = 0; i < N; i++)
                for (j = jj; j < min(jj+B, N); j++) {
                    r = 0;
                    for (k = kk; k < min(kk+B, N); k++)
                        r = r + y[i][k] * z[k][j];
                    x[i][j] = x[i][j] + r;
                }

Note that x[][] must start out zeroed: unlike the original loop, partial sums for x[i][j] now accumulate across the kk blocks.
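A hedged rule of thumb for choosing B (not from the slides): the blocked working set is roughly one B×B tile each of y, z, and x, so pick B such that about 3·B²·sizeof(element) fits in the cache. A minimal sketch:

    #include <stdio.h>

    /* Illustrative only: largest power-of-two B whose three B x B
       integer tiles fit in a cache of the given capacity. */
    int pick_blocking_factor(int cache_bytes)
    {
        int B = 1;
        while (3 * (2 * B) * (2 * B) * (int)sizeof(int) <= cache_bytes)
            B *= 2;
        return B;
    }

    int main(void)
    {
        printf("B = %d\n", pick_blocking_factor(8192));  /* prints B = 16 */
        return 0;
    }

In practice conflict misses cut the usable fraction of the cache well below its capacity (see the Lam et al. result two slides ahead), so a smaller B than this bound often performs better.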
Blocking Example (contd.)

Blocking reduces the capacity misses from roughly 2N³ + N² to 2N³/B + N², where B is the "blocking factor".
What happens to conflict misses?
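For a sense of scale, a worked example (the numbers are illustrative, not from the slides): with N = 512 and B = 16,

    2N³ + N²   = 2·512³ + 512²      ≈ 2.69 × 10⁸ misses
    2N³/B + N² = 2·512³/16 + 512²   ≈ 1.70 × 10⁷ misses

roughly a 16× reduction in capacity misses.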
Reducing Conflict Misses by Blocking

Conflict misses in non-fully-associative caches depend on the blocking factor, not just on whether the working set fits: Lam et al. [1991] found that a blocking factor of 24 had one fifth the misses of a factor of 48, despite the fact that both fit in the cache.
Summary: Compiler Optimizations to Reduce Cache Misses
Reducing Miss Penalty

1. Read Priority over Write on Miss

Write through:
- Write buffers create RAW hazards: a read miss may need data that is still sitting in the write buffer.
- Simply waiting for the write buffer to empty can increase the read miss penalty by about 50% (old MIPS 1000).
- Better: check the write buffer contents before the read; if there are no conflicts, let the memory access continue.

Write back:
- A read miss may replace a dirty block.
- Normal approach: write the dirty block to memory, then do the read.
- Instead: copy the dirty block to a write buffer, do the read, and then do the write.
- The CPU stalls less, since it restarts as soon as the read completes.
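A minimal software model of the check-before-read idea (illustrative; real hardware performs this check with address comparators in the write buffer, and every name below is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    #define WB_ENTRIES 4

    struct wb_entry {
        bool     valid;
        uint32_t addr;    /* block-aligned address of the pending write */
        uint32_t data;
    };

    struct wb_entry write_buffer[WB_ENTRIES];

    /* On a read miss, scan the write buffer. If no pending write matches
       the missed block address (both assumed block-aligned), the memory
       read can proceed immediately instead of waiting for the buffer to
       drain. */
    bool read_may_proceed(uint32_t miss_addr)
    {
        for (int i = 0; i < WB_ENTRIES; i++)
            if (write_buffer[i].valid && write_buffer[i].addr == miss_addr)
                return false;   /* RAW conflict: drain or forward first */
        return true;
    }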
2. Fetching Subblocks to Reduce Miss Penalty

Don't load the full block on a miss; keep a valid bit per subblock to indicate which subblocks hold valid data.

[Figure: cache blocks with tags 100, 200, 300, each divided into subblocks with individual valid bits]
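A hedged sketch of the bookkeeping this requires (field names and sizes are illustrative, not from the slides):

    #include <stdbool.h>
    #include <stdint.h>

    #define SUBBLOCKS_PER_BLOCK 4
    #define SUBBLOCK_BYTES      16

    struct cache_block {
        uint32_t tag;
        /* One valid bit per subblock instead of one per block: a miss
           can fetch only the needed subblock and set only its bit. */
        bool     subblock_valid[SUBBLOCKS_PER_BLOCK];
        uint8_t  data[SUBBLOCKS_PER_BLOCK][SUBBLOCK_BYTES];
    };

    /* A hit now requires both a tag match and the valid bit of the
       addressed subblock to be set. */
    bool subblock_hit(const struct cache_block *b, uint32_t tag, int sub)
    {
        return b->tag == tag && b->subblock_valid[sub];
    }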
3. Early Restart and Critical Word First

Don't wait for the full block to be loaded before restarting the CPU:
- Early Restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
- Critical Word First: request the missed word first from memory and send it to the CPU as soon as it arrives; the CPU continues while the rest of the words in the block are filled in. Also called "wrapped fetch" or "requested word first".

Generally useful only with large blocks. Spatial locality is a problem here: programs tend to want the next sequential word soon anyway, so it is not clear how much early restart actually buys.
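A sketch of the wrapped-fetch word ordering (illustrative; the names are not from the slides):

    #define WORDS_PER_BLOCK 8

    /* Fetch starting at the missed word and wrap around the block, so
       the critical word arrives first. With 8 words and a miss on
       word 5, the order is 5, 6, 7, 0, 1, 2, 3, 4. */
    void wrapped_fetch_order(int missed_word, int order[WORDS_PER_BLOCK])
    {
        for (int i = 0; i < WORDS_PER_BLOCK; i++)
            order[i] = (missed_word + i) % WORDS_PER_BLOCK;
    }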
4. Non-blocking Caches

A non-blocking (lockup-free) cache allows the data cache to continue to supply hits during a miss:
- "Hit under miss" reduces the effective miss penalty by servicing the CPU's requests during a miss instead of ignoring them.
- "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses.
- Both significantly increase the complexity of the cache controller, since there can be multiple outstanding memory accesses.
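The usual hardware bookkeeping for outstanding misses is a set of miss status holding registers (MSHRs); the slides don't show this structure, so the sketch below is a hedged software model with illustrative names and sizes:

    #include <stdbool.h>
    #include <stdint.h>

    #define MSHR_ENTRIES 4

    /* One entry per outstanding miss. A miss on a block that is already
       being fetched merges with the existing entry; a miss on a new
       block allocates a free entry; the cache stalls only when all
       entries are in use. */
    struct mshr {
        bool     valid;
        uint32_t block_addr;    /* block being fetched from memory */
    };

    struct mshr mshrs[MSHR_ENTRIES];

    int mshr_lookup_or_allocate(uint32_t block_addr)
    {
        int free_slot = -1;
        for (int i = 0; i < MSHR_ENTRIES; i++) {
            if (mshrs[i].valid && mshrs[i].block_addr == block_addr)
                return i;                 /* merge with in-flight miss */
            if (!mshrs[i].valid)
                free_slot = i;
        }
        if (free_slot >= 0) {
            mshrs[free_slot].valid = true;
            mshrs[free_slot].block_addr = block_addr;
        }
        return free_slot;                 /* -1: all busy, must stall */
    }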
Value of Hit Under Miss for SPEC

8 KB data cache, direct mapped, 32-byte blocks, 16-cycle miss penalty. Each arrow chain allows progressively more outstanding misses ("hit under i misses"):
- FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
- Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19

[Figure: AMAT for the SPEC integer and floating-point benchmarks under increasing numbers of outstanding misses]
5. Miss Penalty Reduction: L2 Cache

L2 equations:

    AMAT = Hit Time_L1 + Miss Rate_L1 × Miss Penalty_L1
    Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2
    AMAT = Hit Time_L1 + Miss Rate_L1 × (Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2)

Definitions:
- Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (this is Miss Rate_L2).
- Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (this is Miss Rate_L1 × Miss Rate_L2).
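Plugging in hypothetical numbers (illustrative only, not from the slides): with a 1-cycle L1 hit time, 4% L1 miss rate, 10-cycle L2 hit time, 50% local L2 miss rate, and a 100-cycle L2 miss penalty,

    AMAT = 1 + 0.04 × (10 + 0.5 × 100) = 1 + 0.04 × 60 = 3.4 cycles

and the global L2 miss rate is 0.04 × 0.5 = 2% of all CPU accesses.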
Reducing Misses: Which Apply to L2 Cache?

Reducing miss rate:
1. Reduce misses via larger block size
2. Reduce conflict misses via higher associativity
3. Reduce conflict misses via victim cache
4. Reduce conflict misses via pseudo-associativity
5. Reduce misses by HW prefetching of instructions and data
6. Reduce misses by SW prefetching of data
7. Reduce capacity/conflict misses by compiler optimizations
L2 Cache Block Size and A.M.A.T.

[Figure: average memory access time vs. L2 block size, for a 32KB L1 cache and an 8-byte path to memory]
Reducing Miss Penalty Summary

Five techniques:
- Read priority over write on miss
- Subblock placement
- Early restart and critical word first on miss
- Non-blocking caches (hit under miss)
- Second-level cache
  - Reducing the L2 miss rate effectively reduces the L1 miss penalty!
  - Can be applied recursively to multilevel caches.
  - The danger is that the time to DRAM grows with multiple levels in between.
Review: Improving Cache Performance

1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache (next lecture).