Lecture: Cache Hierarchies


Lecture: Cache Hierarchies Topics: cache innovations (Sections B.1-B.3, 2.1)

Accessing the Cache Byte address 101000; 8-byte words, so the low 3 bits are the offset; 8 words in the data array, so 3 index bits select the set Direct-mapped cache: each address maps to a unique location in the cache [Figure: the index bits of the address selecting one set of the data array]

The Tag Array Byte address 101000; the remaining high-order bits form the tag, which is stored in a tag array and compared against the incoming address on every access Direct-mapped cache: each address maps to a unique location, so a single tag comparison decides hit or miss [Figure: tag array alongside the data array, with a comparator on the indexed entry]

Increasing Line Size Byte address 10100000 with a 32-byte cache line (block) size: the low 5 bits are the offset A large cache line size → smaller tag array, fewer misses because of spatial locality [Figure: tag/offset fields of the address feeding the tag array and data array]
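
As a rough illustration (not from the lecture), this Python sketch splits a byte address into tag, index, and offset; the function name and parameters are invented for the example, and power-of-two geometry is assumed.

def split_address(addr, num_sets, block_size):
    # log2 via bit_length: assumes num_sets and block_size are powers of two
    offset_bits = block_size.bit_length() - 1
    index_bits = num_sets.bit_length() - 1
    offset = addr & (block_size - 1)
    index = (addr >> offset_bits) & (num_sets - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset

# The slide's example: binary address 10100000, 32-byte blocks, 8 sets (assumed)
print(split_address(0b10100000, num_sets=8, block_size=32))   # -> (0, 5, 0)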

Associativity Byte address 10100000 Set associativity → fewer conflicts; wasted power because multiple data blocks and tags are read out in parallel [Figure: a 2-way set-associative cache with Way-1 and Way-2 in the tag and data arrays; both tags are compared simultaneously]

Problem 2 Assume a direct-mapped cache with just 4 sets. Assume that block A maps to set 0, B to 1, C to 2, D to 3, E to 0, and so on. For the following access pattern, estimate the hits and misses: A B B E C C A D B F A E G C G A

Problem 2 Assume a direct-mapped cache with just 4 sets. Assume that block A maps to set 0, B to 1, C to 2, D to 3, E to 0, and so on. For the following access pattern, estimate the hits and misses:
A B B E C C A D B F A E G C G A
M M H M M H M M H M H M M M M M (4 hits, 12 misses)
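
A minimal Python sketch to check the answer above; the simulator and its names are illustrative, but the set mapping follows the problem statement.

def simulate_direct_mapped(accesses, num_sets, set_of):
    cache = [None] * num_sets             # one block per set
    outcome = []
    for block in accesses:
        if cache[set_of[block]] == block:
            outcome.append('H')
        else:
            outcome.append('M')
            cache[set_of[block]] = block  # replace whatever was there
    return ''.join(outcome)

set_of = {b: i % 4 for i, b in enumerate("ABCDEFG")}  # A->0, B->1, C->2, D->3, E->0, ...
print(simulate_direct_mapped("ABBECCADBFAEGCGA", 4, set_of))
# -> MMHMMHMMHMHMMMMM (4 hits, 12 misses)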

Problem 3 Assume a 2-way set-associative cache with just 2 sets. Assume that block A maps to set 0, B to 1, C to 0, D to 1, E to 0, and so on. For the following access pattern, estimate the hits and misses: A B B E C C A D B F A E G C G A

Problem 3 Assume a 2-way set-associative cache with just 2 sets. Assume that block A maps to set 0, B to 1, C to 0, D to 1, E to 0, and so on. For the following access pattern, estimate the hits and misses:
A B B E C C A D B F A E G C G A
M M H M M H M M H M H M M M H M (5 hits, 11 misses)
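
The same pattern through a 2-way set-associative simulator; the slide does not name a replacement policy, so LRU is assumed here (it reproduces the answer above).

from collections import OrderedDict

def simulate_set_assoc(accesses, num_sets, ways, set_of):
    sets = [OrderedDict() for _ in range(num_sets)]  # insertion order doubles as LRU state
    outcome = []
    for block in accesses:
        s = sets[set_of[block]]
        if block in s:
            outcome.append('H')
            s.move_to_end(block)            # mark most recently used
        else:
            outcome.append('M')
            if len(s) == ways:
                s.popitem(last=False)       # evict the least recently used block
            s[block] = True
    return ''.join(outcome)

set_of = {b: i % 2 for i, b in enumerate("ABCDEFG")}  # A->0, B->1, C->0, D->1, ...
print(simulate_set_assoc("ABBECCADBFAEGCGA", 2, 2, set_of))
# -> MMHMMHMMHMHMMMHM (5 hits, 11 misses)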

Problem 4 64 KB 16-way set-associative data cache array with a 64-byte line size; assume a 40-bit address How many sets? How many index bits, offset bits, tag bits? How large is the tag array?

Problem 4 64 KB 16-way set-associative data cache array with a 64-byte line size; assume a 40-bit address How many sets? 64 (1024 lines / 16 ways) How many index bits (6), offset bits (6), tag bits (40 - 6 - 6 = 28)? How large is the tag array? 1024 lines x 28 bits = 28 Kbits (3.5 KB)

Problem 5 8 KB fully-associative data cache array with a 64-byte line size; assume a 40-bit address How many sets? How many ways? How many index bits, offset bits, tag bits? How large is the tag array?

Problem 5 8 KB fully-associative data cache array with a 64-byte line size; assume a 40-bit address How many sets? 1 How many ways? 128 (8 KB / 64 B) How many index bits (0), offset bits (6), tag bits (34)? How large is the tag array? 128 x 34 bits = 4352 bits = 544 bytes
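
The arithmetic behind Problems 4 and 5, as a small sketch (the helper name is made up):

from math import log2

def cache_geometry(capacity_bytes, ways, block_bytes, addr_bits):
    lines = capacity_bytes // block_bytes
    sets = lines // ways
    offset_bits = int(log2(block_bytes))
    index_bits = int(log2(sets))
    tag_bits = addr_bits - index_bits - offset_bits
    return sets, index_bits, offset_bits, tag_bits, lines * tag_bits  # last value: tag array size in bits

print(cache_geometry(64 * 1024, 16, 64, 40))   # Problem 4: (64, 6, 6, 28, 28672) -> 28 Kbits
print(cache_geometry(8 * 1024, 128, 64, 40))   # Problem 5: (1, 0, 6, 34, 4352)   -> 544 bytes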

Types of Cache Misses Compulsory misses: happen the first time a memory word is accessed – these are the misses even an infinite cache would incur Capacity misses: happen because the program touched many other words before re-touching the same word – these are the misses a fully-associative cache of the same size would incur Conflict misses: happen because two words map to the same location in the cache – the additional misses generated when moving from a fully-associative to a direct-mapped cache Sidenote: can a fully-associative cache have more misses than a direct-mapped cache of the same size?

What Influences Cache Misses?

                             Compulsory    Capacity       Conflict
Increasing cache capacity    -             reduces        reduces
Increasing number of sets    -             -              reduces
Increasing block size        reduces       may increase   may increase
Increasing associativity     -             -              reduces

(- : no first-order effect)

Reducing Miss Rate Large block size – reduces compulsory misses and reduces the miss penalty when there is spatial locality – but increases traffic between levels, wastes space, and adds conflict misses Large cache – reduces capacity/conflict misses – but pays an access time penalty High associativity – reduces conflict misses – rule of thumb: a 2-way cache of capacity N/2 has about the same miss rate as a 1-way cache of capacity N – but costs more energy

More Cache Basics L1 caches are split into instruction and data caches; L2 and L3 are unified The L1/L2 hierarchy can be inclusive, exclusive, or non-inclusive On a write, you can do write-allocate or write-no-allocate On a write, you can do write-back or write-through; write-back reduces traffic, write-through simplifies coherence (see the sketch below) Reads get higher priority; writes are usually buffered L1 does parallel tag/data access; L2/L3 do serial tag/data access
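
A loose sketch of the write-policy distinction; dictionaries stand in for the cache and memory, and none of these names come from the slides.

def write_allocate_writeback(cache, memory, addr, data):
    if addr not in cache:                      # write miss: fetch the block first
        cache[addr] = {'data': memory.get(addr), 'dirty': False}
    cache[addr]['data'] = data
    cache[addr]['dirty'] = True                # memory is updated only on eviction

def write_no_allocate_writethrough(cache, memory, addr, data):
    memory[addr] = data                        # every write goes to memory
    if addr in cache:                          # the cache is updated only on a hit
        cache[addr]['data'] = data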

Tolerating Miss Penalty Out-of-order execution: can do other useful work while waiting for the miss – can have multiple cache misses outstanding – the cache controller has to keep track of multiple outstanding misses (non-blocking cache), as sketched below Hardware and software prefetching into prefetch buffers – aggressive prefetching can increase contention for buses
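
A minimal sketch of the bookkeeping a non-blocking cache keeps per outstanding miss (commonly called MSHRs, miss status holding registers); the class and method names are invented for illustration.

class MSHRFile:
    def __init__(self, num_entries):
        self.num_entries = num_entries
        self.entries = {}                      # block address -> requests waiting on it

    def handle_miss(self, block_addr, request):
        if block_addr in self.entries:         # secondary miss: merge, no new fetch
            self.entries[block_addr].append(request)
            return "merged"
        if len(self.entries) == self.num_entries:
            return "stall"                     # out of MSHRs: the access must wait
        self.entries[block_addr] = [request]   # primary miss: start a fetch
        return "fetch"

    def fill(self, block_addr):
        return self.entries.pop(block_addr)    # data arrived: wake the waiting requests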

Techniques to Reduce Cache Misses Victim caches Better replacement policies – pseudo-LRU, NRU, DRRIP Cache compression

Victim Caches A direct-mapped cache suffers from misses because multiple pieces of data map to the same location The processor often tries to access data that it recently discarded – so all discards are placed in a small victim cache (4 or 8 entries) – the victim cache is checked before going to L2 Can be viewed as additional associativity for the few sets that tend to have the most conflicts
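
A sketch of the victim-cache lookup path; the 4-entry FIFO and the swap on a victim hit are common choices assumed here, not taken from the slide.

from collections import deque

class DirectMappedWithVictim:
    def __init__(self, num_sets, victim_entries=4):
        self.cache = [None] * num_sets
        self.victim = deque(maxlen=victim_entries)   # recently discarded blocks

    def access(self, block, set_index):
        if self.cache[set_index] == block:
            return "hit"
        if block in self.victim:                     # checked before going to L2
            self.victim.remove(block)
            if self.cache[set_index] is not None:
                self.victim.append(self.cache[set_index])  # swap the conflicting block out
            self.cache[set_index] = block
            return "victim hit"
        if self.cache[set_index] is not None:
            self.victim.append(self.cache[set_index])      # discards go to the victim cache
        self.cache[set_index] = block
        return "miss"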

Replacement Policies Pseudo-LRU: maintain a tree and keep track of which side of the tree was touched more recently; needs only simple bit operations NRU: every block in a set has a bit; the bit is cleared when the block is touched; if all bits are zero, they are all set to one; a block with its bit set to 1 is evicted DRRIP: use multiple (say, 3) bits per block instead of NRU's one; incoming blocks are set to a high value (say, 6), so they are close to being evicted; similar to placing an incoming block near the head of the LRU list instead of near the tail
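
A sketch of NRU for one set, following the description above; keeping the just-touched block's bit at zero after the reset is a small refinement, assumed here.

class NRUSet:
    def __init__(self, ways):
        self.bits = [1] * ways                 # 1 = candidate for eviction

    def touch(self, way):
        self.bits[way] = 0                     # cleared when the block is touched
        if all(b == 0 for b in self.bits):     # if all are zero, make all one...
            self.bits = [1] * len(self.bits)
            self.bits[way] = 0                 # ...except the block just touched (assumption)

    def victim(self):
        return self.bits.index(1)              # evict a block whose bit is set to 1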

Prefetching Hardware prefetching can be employed for any of the cache levels It can introduce cache pollution – prefetched data is often placed in a separate prefetch buffer to avoid pollution – this buffer must be looked up in parallel with the cache access Aggressive prefetching increases “coverage” but reduces “accuracy” → wasted memory bandwidth Prefetches must be timely: they must be issued sufficiently in advance to hide the latency, but not too early (to avoid pollution and eviction before use)

Stream Buffers Simplest form of prefetch: on every miss, bring in multiple sequential cache lines When you read the top of the queue, bring in the next line [Figure: the L1 cache backed by a FIFO stream buffer holding sequential lines]
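
A sketch of a single stream buffer (depth 4 assumed): on a miss it queues the next sequential line addresses, and a hit at the head pops the line and extends the stream.

from collections import deque

class StreamBuffer:
    def __init__(self, depth=4):
        self.depth = depth
        self.queue = deque()

    def on_miss(self, line_addr):
        self.queue.clear()                     # restart the stream at the miss address
        self.queue.extend(line_addr + i for i in range(1, self.depth + 1))

    def lookup(self, line_addr):
        if self.queue and self.queue[0] == line_addr:
            self.queue.popleft()               # move this line into L1...
            self.queue.append(line_addr + self.depth)   # ...and fetch the next one
            return True
        return False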

Stride-Based Prefetching For each load, keep track of the last address accessed by the load and a possibly consistent stride An FSM detects a consistent stride and issues prefetches [Figure: a prefetch table indexed by PC tag, with prev_addr, stride, and state fields per entry; the FSM has states init, trans(ient), steady, and no-pred: “correct” transitions move toward steady, “incorrect” transitions update the stride and move toward no-pred]
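
A sketch of one prefetch-table entry driving the FSM above; exact transitions vary across descriptions, so this is one plausible reading (prefetches are issued only in the steady state).

class StrideEntry:
    def __init__(self, addr):
        self.prev_addr, self.stride, self.state = addr, 0, "init"

    def access(self, addr):
        correct = (addr - self.prev_addr == self.stride)
        if self.state == "init":
            self.state = "steady" if correct else "trans"
        elif self.state == "steady":
            if not correct:
                self.state = "init"
        elif self.state == "trans":
            self.state = "steady" if correct else "no-pred"
        elif self.state == "no-pred":
            if correct:
                self.state = "trans"
        if not correct:
            self.stride = addr - self.prev_addr    # incorrect: update the stride
        self.prev_addr = addr
        if self.state == "steady":
            return addr + self.stride              # address to prefetch
        return None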
