Presentation on theme: "Reducing Cache Misses (Sec. 5.3)" — Presentation transcript:

1 Reducing Cache Misses (Sec. 5.3)
Three categories of cache misses:
1. Compulsory – the very first access to a block cannot be in the cache
2. Capacity – the cache cannot hold all the blocks the program needs, so some blocks are discarded and later re-fetched
3. Conflict – too many blocks map to the same set, so some blocks are discarded and later re-fetched

2 Miss Rate Reduction Techniques
1. Larger Block Size:
– Increasing the block size decreases compulsory misses, because larger blocks take advantage of spatial locality
– But larger blocks increase the miss penalty, and may also increase conflict misses
– There is a trade-off between the miss-rate reduction and the miss-penalty increase

3 Miss Rate Reduction Techniques (Cont’d)
(Example on page 394) The memory system takes 40 clock cycles of overhead, then delivers 16 bytes every 2 clock cycles. Which block size in Fig. 5.12 has the smallest average access time for each cache size? Assume hit time = 1 clock cycle.
Average access time (in clock cycles) = Hit time + Miss rate × Miss penalty, where Miss penalty = 40 + 2 × (Block size / 16)
High latency and high bandwidth encourage larger block sizes
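The trade-off can be sketched numerically. A minimal Python sketch of the calculation above; the miss rates below are hypothetical placeholders, since the actual values come from Fig. 5.12, which is not reproduced here:

```python
def avg_access_time(block_size, miss_rate, hit_time=1,
                    overhead=40, cycles_per_16_bytes=2):
    """Average access time in clock cycles for the memory system above."""
    # Miss penalty: 40-cycle overhead, then 2 cycles per 16 bytes transferred
    miss_penalty = overhead + (block_size // 16) * cycles_per_16_bytes
    return hit_time + miss_rate * miss_penalty

# Hypothetical miss rates per block size (NOT the Fig. 5.12 numbers):
# larger blocks first lower the miss rate, then raise it again
miss_rates = {16: 0.041, 32: 0.035, 64: 0.032, 128: 0.033, 256: 0.037}
best = min(miss_rates, key=lambda b: avg_access_time(b, miss_rates[b]))
```

With these placeholder rates the 64-byte block wins: its lower miss rate outweighs its larger miss penalty, which is exactly the trade-off the slide describes.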

4 Miss Rate Reduction Techniques (Cont’d)
2. Higher Associativity:
Two rules of thumb:
i. An 8-way set-associative cache is about as effective as a fully associative cache
ii. A direct-mapped cache of size N has about the same miss rate as a 2-way set-associative cache of size N/2
Greater associativity may increase the hit time

5 Miss Rate Reduction Techniques (Cont’d)
(Example on page 396) For the miss rates, see Fig. 5.9 on page 391

6 Miss Rate Reduction Techniques (Cont’d)
Larger Caches:
– Increasing the capacity of the cache reduces capacity misses
– The drawbacks are a longer hit time and higher cost
Pseudo-Associative Caches:
– The first cache access proceeds just as in a direct-mapped cache
– If the first access is not a hit, another cache entry (i.e., in the pseudo set) is checked for a match (see Fig. 5.16)
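A behavioral sketch of the pseudo-associative lookup. The details are assumptions for illustration: flipping the most significant index bit to form the pseudo set, and filling the pseudo slot when the primary slot is occupied, are one common variant, not necessarily the exact design of Fig. 5.16:

```python
class PseudoAssocCache:
    """Direct-mapped cache with a second 'pseudo' probe (hypothetical sketch)."""
    def __init__(self, num_sets=8):
        self.num_sets = num_sets
        self.blocks = [None] * num_sets  # stores full block addresses

    def access(self, block_addr):
        index = block_addr % self.num_sets
        alt = index ^ (self.num_sets >> 1)  # flip MSB of index -> pseudo set
        if self.blocks[index] == block_addr:
            return "hit"        # fast hit, normal direct-mapped hit time
        if self.blocks[alt] == block_addr:
            return "slow hit"   # found in the pseudo set: costs extra cycle(s)
        # Miss: fill the primary slot if free, otherwise the pseudo slot
        slot = index if self.blocks[index] is None else alt
        self.blocks[slot] = block_addr
        return "miss"
```

Two blocks that conflict in a direct-mapped cache (e.g., block addresses 3 and 11 with 8 sets) can now coexist, turning one of the conflict misses into a slow hit.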

7 Miss Rate Reduction Techniques (Cont’d) –Do example on page 399

8 Miss Rate Reduction Techniques (Cont’d)
Victim Caches:
– A small fully associative cache, called the victim cache, is placed between the main cache and its refill path (see Fig. 5.15)
– Blocks discarded from the main cache because of a miss are kept in the victim cache
– The victim cache is checked after a miss in the main cache, before going to the next lower-level memory
– Victim caches reduce conflict misses; a four-entry victim cache removed 20% to 95% of the conflict misses in a 4-KB direct-mapped cache
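The lookup path can be sketched behaviorally. The replacement policy (FIFO in the victim cache) and the swap on a victim hit are hypothetical details chosen for illustration:

```python
from collections import deque

class DirectMappedWithVictim:
    """Direct-mapped cache backed by a small fully associative victim cache (sketch)."""
    def __init__(self, num_sets=8, victim_entries=4):
        self.num_sets = num_sets
        self.blocks = [None] * num_sets
        self.victim = deque(maxlen=victim_entries)  # FIFO victim cache

    def access(self, block_addr):
        index = block_addr % self.num_sets
        if self.blocks[index] == block_addr:
            return "hit"
        if block_addr in self.victim:
            # Victim hit: promote the block, demote the conflicting one
            self.victim.remove(block_addr)
            if self.blocks[index] is not None:
                self.victim.append(self.blocks[index])
            self.blocks[index] = block_addr
            return "victim hit"
        # True miss: the evicted block becomes a "victim"
        if self.blocks[index] is not None:
            self.victim.append(self.blocks[index])
        self.blocks[index] = block_addr
        return "miss"
```

Two blocks ping-ponging in the same direct-mapped set (block addresses 3 and 11 with 8 sets) now alternate between fast hits and victim hits instead of missing every time.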

9 Miss Rate Reduction Techniques (Cont’d)
Hardware Prefetching of Instructions and Data:
– On a miss, in addition to the requested block, the next consecutive block is also fetched
– The extra block is kept in an instruction stream buffer
– Example on page 401

10 Miss Rate Reduction Techniques (Cont’d)
The Alpha 21064 uses instruction prefetching. It takes 1 extra CC if the instruction is found in the instruction stream buffer; prefetch hit rate = 25%
Miss rate for an 8-KB instruction cache = 1.1%; hit time = 2 CC; miss penalty = 50 CC
Memory access time = 2 + (0.011 × 0.25) × 1 + (0.011 × 0.75) × 50 = 2.415 CC
For the effective miss rate, use Memory access time = Hit time + Miss rate × Miss penalty:
Effective miss rate = (2.415 − 2) / 50 = 0.83%
This is better than 1.1%, the miss rate of the 8-KB instruction cache alone
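The arithmetic above can be checked directly. A minimal Python sketch of the slide's calculation (the function name and parameter names are my own):

```python
def amat_with_prefetch(hit_time, miss_rate, prefetch_hit_rate,
                       buffer_hit_cost, miss_penalty):
    """Average memory access time with an instruction stream buffer."""
    return (hit_time
            + miss_rate * prefetch_hit_rate * buffer_hit_cost       # found in buffer
            + miss_rate * (1 - prefetch_hit_rate) * miss_penalty)   # true miss

amat = amat_with_prefetch(2, 0.011, 0.25, 1, 50)  # Alpha 21064 numbers
effective_miss_rate = (amat - 2) / 50             # back out the miss rate
```

Evaluating gives amat ≈ 2.415 CC and an effective miss rate ≈ 0.83%, matching the slide.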

11 Miss Rate Reduction Techniques (Cont’d)
Compiler Optimizations:
Code is rearranged by the compiler to reduce miss rates, particularly conflict misses. Two examples:
i. Loop Interchange – in multiply nested loops, changing the order of the nesting can reduce the miss rate; this approach makes use of spatial locality
ii. Blocking – loop interchange does not always work, e.g., matrix multiplication; blocked algorithms operate on submatrices, or blocks, which can fit within the cache
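Blocking can be illustrated with a tiled matrix multiply. A minimal Python sketch on plain nested lists (the block size is a tunable assumption; in C or Fortran the same transformation is what actually keeps the working set in cache, since rows are contiguous in memory). Note the inner loop order i, k, j is itself a loop interchange so that the innermost loop walks rows of B and C sequentially:

```python
def matmul_blocked(A, B, block=32):
    """Tiled n x n matrix multiply: each (ii, kk, jj) step touches only
    block x block submatrices, which are small enough to stay in the cache."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for jj in range(0, n, block):
                # multiply one pair of submatrices into the C tile
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, n)):
                        a_ik = A[i][k]
                        for j in range(jj, min(jj + block, n)):
                            C[i][j] += a_ik * B[k][j]
    return C
```

With block = n this degenerates to the ordinary triple loop; shrinking the block trades loop overhead for a working set of roughly 3 × block² elements.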

