
1 Cache memory

2 Cache memory – Overview. [Figure: CPU, cache, and main memory; words are transferred between the CPU and the cache, and blocks of words between the cache and main memory.]

3 Cache memory – Organization. [Figure: main memory as an array of addressed words grouped into blocks of K words each; the cache as C lines, numbered 0 through C−1, each holding a tag plus a block of K words.]

4 Elements of cache memory design – Cache size. The cache should be small enough that the overall average cost per bit is close to that of main memory alone, yet large enough that the overall average access time is close to that of the cache alone.

5 Elements of cache memory design – Cache size – hit ratio. Average access time: Ts = H·T1 + (1 − H)·(T1 + T2), where T1 and T2 are the access times of the cache (M1) and main memory (M2). Hit ratio: H = N1 / (N1 + N2), where N1 and N2 are the numbers of references satisfied by the cache and by main memory, respectively.
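
To make the formula concrete, a minimal sketch in C with made-up values (a 10 ns cache, 100 ns main memory, 950 hits out of 1000 references); none of these numbers come from the slides:

```c
#include <stdio.h>

int main(void) {
    double T1 = 10.0;   /* ns: cache (M1) access time */
    double T2 = 100.0;  /* ns: main memory (M2) access time */
    double N1 = 950.0;  /* references satisfied by the cache */
    double N2 = 50.0;   /* references that had to go to main memory */

    double H  = N1 / (N1 + N2);                  /* hit ratio */
    double Ts = H * T1 + (1.0 - H) * (T1 + T2);  /* average access time */

    printf("H = %.2f, Ts = %.1f ns\n", H, Ts);   /* prints H = 0.95, Ts = 15.0 ns */
    return 0;
}
```

Raising H toward 1 drives Ts toward T1, which is exactly the "average access time close to the cache alone" goal from slide 4.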

6 Elements of cache memory design – Cache size – hit ratio. [Figure: two-level memory hierarchy; the CPU accesses memory M1 of size S1 (the cache), backed by memory M2 of size S2 (main memory).]

7 Elements of cache memory design – Mapping function. Three techniques are used: direct mapping, associative mapping, and set-associative mapping.

8 Elements of cache memory design – Mapping function – direct mapping. [Figure: main-memory block addresses mapped many-to-one onto cache lines.]

9 Elements of cache memory design – Mapping function – direct mapping. The direct mapping technique is simple and inexpensive to implement. Its main disadvantage is that there is a fixed cache location for any given block: if a program happens to repeatedly reference words from two different blocks that map into the same line, the blocks will be continually swapped in and out of the cache, and the hit ratio will be low.
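
The fixed mapping is just arithmetic on the address. A minimal sketch, with an assumed geometry of 64-byte blocks and 1024 lines (neither figure is from the slides):

```c
#include <stdio.h>

/* Assumed geometry (not from the slides): 64-byte blocks, 1024 cache lines. */
#define BLOCK_BYTES 64u
#define NUM_LINES   1024u

int main(void) {
    unsigned addr   = 0x12345678u;
    unsigned offset = addr % BLOCK_BYTES;   /* word within the block */
    unsigned block  = addr / BLOCK_BYTES;   /* main-memory block number */
    unsigned line   = block % NUM_LINES;    /* the one cache line this block can use */
    unsigned tag    = block / NUM_LINES;    /* distinguishes the blocks sharing that line */

    printf("addr 0x%08x -> tag %u, line %u, offset %u\n", addr, tag, line, offset);

    /* Any two addresses exactly NUM_LINES * BLOCK_BYTES bytes apart get the
     * same line but different tags, so alternating between them thrashes. */
    return 0;
}
```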

10 Elements of cache memory design – Mapping function – associative mapping. Associative mapping overcomes the disadvantage of direct mapping by permitting each main-memory block to be loaded into any line of the cache. Its main disadvantage is the complex circuitry required to examine the tags of all cache lines in parallel.
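
Hardware performs this comparison across every line at once; modeled sequentially in software, the lookup would look like the sketch below (names and sizes are invented for illustration):

```c
#include <stdbool.h>

#define NUM_LINES 1024u

struct line { bool valid; unsigned tag; };

/* Returns the index of the line holding `tag`, or -1 on a miss.  Hardware
 * performs all NUM_LINES comparisons simultaneously; this loop is only a
 * sequential software model of that parallel tag check. */
static int lookup(const struct line cache[], unsigned tag) {
    for (unsigned i = 0; i < NUM_LINES; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return (int)i;
    return -1;
}

int main(void) {
    static struct line cache[NUM_LINES];       /* zero-initialized: all invalid */
    cache[7].valid = true;
    cache[7].tag   = 0xABC;
    return lookup(cache, 0xABC) == 7 ? 0 : 1;  /* hit in line 7 */
}
```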

11 Elements of cache memory design – Mapping function – set-associative mapping. A compromise that exhibits the strengths of both the direct and associative approaches without their disadvantages: the cache is divided into sets of lines, each block maps to a fixed set, and it may occupy any line within that set.

12 Elements of cache memory design – Mapping function – set-associative mapping.
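
As an illustration of the compromise, here is the address arithmetic for an assumed 4-way cache with 256 sets and 64-byte blocks (parameters invented, not from the slides): the set is fixed by the address, but only the few ways of one set need tag comparison.

```c
#include <stdio.h>

/* Assumed geometry (not from the slides): 4-way set-associative,
 * 256 sets, 64-byte blocks (1024 lines in total). */
#define BLOCK_BYTES 64u
#define NUM_SETS    256u
#define NUM_WAYS    4u

int main(void) {
    unsigned addr  = 0x12345678u;
    unsigned block = addr / BLOCK_BYTES;
    unsigned set   = block % NUM_SETS;  /* the set is fixed by the address... */
    unsigned tag   = block / NUM_SETS;  /* ...but the block may sit in any of the set's ways */

    printf("addr 0x%08x -> tag %u, set %u (compare %u tags)\n", addr, tag, set, NUM_WAYS);
    return 0;
}
```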

13 Elements of cache memory design – Write policy. Write through: all write operations are made to main memory as well as to the cache, ensuring that main memory is always valid. Write back: updates are made only in the cache; an UPDATE bit is associated with each slot, and when a block is replaced it is written back to main memory only if its update bit is set.
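
A toy model of the two policies, with all structure and function names invented for illustration:

```c
#include <stdbool.h>
#include <string.h>

/* Toy model of one cache slot; every name here is invented for illustration. */
struct slot { bool valid; bool update; unsigned tag; unsigned char data[64]; };

/* Write through: the cache and main memory are updated together,
 * so main memory is always valid. */
static void write_through(struct slot *s, unsigned char *mem_block,
                          unsigned off, unsigned char v) {
    s->data[off]   = v;
    mem_block[off] = v;
}

/* Write back: only the cache is updated; the UPDATE (dirty) bit remembers
 * that main memory is now stale. */
static void write_back(struct slot *s, unsigned off, unsigned char v) {
    s->data[off] = v;
    s->update    = true;
}

/* On replacement, the block is copied back to main memory only if its
 * update bit is set. */
static void replace(struct slot *s, unsigned char *mem_block) {
    if (s->update)
        memcpy(mem_block, s->data, sizeof s->data);
    s->valid  = false;
    s->update = false;
}

int main(void) {
    static unsigned char mem[64];
    struct slot s = { .valid = true };
    write_through(&s, mem, 0, 7);  /* main memory updated immediately */
    write_back(&s, 3, 42);         /* main memory still stale here */
    replace(&s, mem);              /* the write reaches main memory only now */
    return (mem[0] == 7 && mem[3] == 42) ? 0 : 1;
}
```

The trade-off: write through keeps main memory valid at the cost of memory traffic on every write, while write back reduces traffic but leaves main memory temporarily stale.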

14 Elements of cache memory design – Block size

15 As block size increases, the hit ratio increases at first because of the principle of locality of reference. At some point the hit ratio begins to decrease, because the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. A block size of 4 to 8 addressable units seems reasonably close to optimum.

16 Elements of cache memory design – Number of caches. Several studies have shown that, in general, the use of a second level of cache does improve performance. Inclusion policy: a multilevel cache may be inclusive (blocks in the inner cache are also held in the outer cache) or exclusive (a block is held in at most one level).
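
The slide-5 formula extends naturally to a second cache level. A sketch with assumed timings (10 ns L1, 40 ns L2, 100 ns main memory) and assumed hit ratios:

```c
#include <stdio.h>

int main(void) {
    double T1 = 10.0, T2 = 40.0, T3 = 100.0;  /* ns: L1, L2, main memory */
    double H1 = 0.90, H2 = 0.80;              /* assumed hit ratios of L1 and L2 */

    /* Slide 5's model rearranges to Ts = T1 + (1 - H) * T2; applying the
     * same idea at each level in turn: */
    double Ts = T1 + (1.0 - H1) * (T2 + (1.0 - H2) * T3);

    printf("Ts = %.1f ns\n", Ts);  /* 10 + 0.1 * (40 + 0.2 * 100) = 16.0 ns */
    return 0;
}
```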

17 Elements of cache memory design – Unified vs. split cache. A unified cache contains both instructions and data; a split cache has dedicated cache memories for storing instructions and data separately. Which one is better?

18 Elements of cache memory design – Unified vs. split cache. For a given cache size, a unified cache has a higher hit rate than a split cache, because it automatically balances the load between instruction and data fetches. In addition, only one cache needs to be designed and implemented.

19 Elements of cache memory design – Unified vs. split cache. Split caches are nevertheless the current norm, mainly because of the parallel-processing mechanisms employed in modern general-purpose processors (superscalar execution, pipelining): a split cache eliminates, to some degree, contention between instruction fetches and data accesses.

20 Elements of cache memory design – Advanced subjects. Hardware prefetching; software prefetching (cacheability-control instructions); streaming load and store instructions; cache coherency in the context of parallel-processing mechanisms.
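
As one concrete example of software prefetching: GCC and Clang expose a __builtin_prefetch intrinsic that compiles to the target's prefetch instruction. The sketch below prefetches a fixed distance ahead of a streaming sum; the 16-element distance is an assumption that would need tuning on real hardware.

```c
#include <stddef.h>

/* Streaming sum with an explicit software prefetch a fixed distance ahead.
 * __builtin_prefetch is a GCC/Clang intrinsic; the 16-element distance is
 * an assumed value that would need tuning for a real machine. */
static long sum(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 1);  /* for read, low temporal locality */
        s += a[i];
    }
    return s;
}

int main(void) {
    static long a[1000];
    for (size_t i = 0; i < 1000; i++)
        a[i] = (long)i;
    return sum(a, 1000) == 499500 ? 0 : 1;  /* 0 + 1 + ... + 999 */
}
```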

