
1 Chapter 5 Memory Hierarchy Design

2 Many Levels in Memory Hierarchy
– Pipeline registers
– Register file
– 1st-level cache (on-chip)
– 2nd-level cache (on same MCM as CPU); there can also be a 3rd (or more) cache level here
– Physical memory (usually mounted on same board as CPU)
– Virtual memory (on hard disk, often in same enclosure as CPU)
– Disk files (on hard disk, often in same enclosure as CPU)
– Network-accessible disk files (often in the same building as the CPU)
– Tape backup/archive system (often in the same building as the CPU)
– Data warehouse: robotically accessed room full of shelves of tapes (usually on the same planet as the CPU)
Callouts from the slide's figure: "Our focus in Chapter 5", "Usually made invisible to the programmer (even assembly programmers)", and "Invisible only to high-level language programmers".

3 Simple Hierarchy Example
Note the many orders of magnitude change in characteristics between levels.
[Figure: capacities and access times across the levels, with successive levels differing by factors of roughly ×4, ×8, ×100, ×128, ×200, ×8192, and ×50,000 (for random access); physical memory shown as 2 GB, disk as 1 TB with 10 ms access.]

4 Why More on Memory Hierarchy?
The processor-memory performance gap keeps growing.

5 Three Types of Misses
– Compulsory: during a program, the very first access to a block will not be in the cache (unless it was prefetched)
– Capacity: the working set of blocks accessed by the program is too large to fit in the cache
– Conflict: unless the cache is fully associative, blocks may sometimes be evicted too early (compared to a fully associative cache) because too many frequently accessed blocks map to the same limited set of frames

6 An Alternative Metric
(Average memory access time) = (Hit time) + (Miss rate) × (Miss penalty), i.e., T_acc = T_hit + (Miss rate) × T_+miss
– The times T_acc, T_hit, and T_+miss can be expressed either in real time (e.g., nanoseconds) or in clock cycles
– T_+miss means the extra (not total) time (or cycles) for a miss, in addition to T_hit, which is incurred by all accesses
[Figure: CPU connected to the cache (hit time) and the cache connected to the lower levels of the hierarchy (miss penalty).]
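A quick worked example (illustrative numbers, not from the slide): with T_hit = 1 cycle, a miss rate of 5%, and T_+miss = 100 cycles, T_acc = 1 + 0.05 × 100 = 6 cycles.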

7 Multiple-Level Caches
Avg mem access time = Hit time(L1) + Miss rate(L1) × Miss penalty(L1)
Miss penalty(L1) = Hit time(L2) + Miss rate(L2) × Miss penalty(L2)
Plugging the second equation into the first:
– Avg mem access time = Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))
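A worked two-level example with assumed illustrative numbers: Hit time(L1) = 1 cycle, Miss rate(L1) = 5%, Hit time(L2) = 10 cycles, Miss rate(L2) = 20%, Miss penalty(L2) = 200 cycles. Then Miss penalty(L1) = 10 + 0.20 × 200 = 50 cycles, and Avg mem access time = 1 + 0.05 × 50 = 3.5 cycles.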

8 Eleven Advanced Optimizations of Cache Performance

9 1. Small & simple caches - Reduce hit time
– Smaller is faster: lower indexing cost, and an L2 small enough to fit on the processor chip
– Direct mapping is simple: the tag check can overlap with transmitting the data
– CACTI: simulate the impact on hit time, e.g., Fig. 5.4, access time vs. size & associativity
– The figures suggest (hit time): direct mapped is 1.2 - 1.5x faster than 2-way set associative; 2-way is 1.02 - 1.1x 4-way; 4-way is 1.0 - 1.08x fully associative

10 Access Time versus Cache Size

11 2. Way prediction - Reduce hit time
– Extra bits kept in the cache predict the way (block within the set) of the next cache access
– The multiplexor is set early to select the desired block; a single tag compare is done that cycle in parallel with reading the cache data
– On a miss of the prediction, check the other blocks for matches in the next cycle
– Saves pipeline stages; correct for about 85% of accesses on a 2-way cache, so it is a good match for speculative processors
– The Pentium 4 uses it

12 3. Trace caches - Reduce hit time
– ILP challenge: finding enough instructions to execute every cycle without dependences
– A trace cache holds dynamic traces of executed instructions, not static sequences of instructions as laid out in memory
– Branch prediction is folded into the instruction cache
– More complicated address mapping, but better use of long blocks
– Disadvantage: conditional branches making different choices put the same instructions in separate traces
– The Pentium 4 uses a trace cache of decoded micro-instructions

13 4. Pipelined cache access - Increase cache bandwidth
– Pipeline the cache access, so the effective latency of an L1 cache hit is multiple clock cycles
– Gives a fast clock cycle and high bandwidth, but slow hits
– A Pentium 4 L1 cache hit takes 4 cycles
– Increases the number of pipeline stages

14 5. Nonblocking cache (hit under miss) - Increase cache bandwidth
– With out-of-order completion, the processor need not stall on a cache miss; it continues fetching instructions while waiting for the cache data
– If the cache does not block, it can supply data for hits while still processing a miss
– Reduces the effective miss penalty
– Overlapping multiple misses ("hit under multiple misses" or "miss under miss") requires a memory system that can service multiple misses simultaneously

15 6. Multi-banked caches - Increase cache bandwidth
– Independent banks support simultaneous accesses (originally used in main memory)
– AMD Opteron has 2 banks of L2; Sun Niagara has 4 banks of L2
– Works best when accesses are spread across the banks
– Sequential interleaving: spread block addresses sequentially across the banks
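A minimal C sketch of the sequential-interleaving mapping (the 4-bank, 64-byte-block parameters are assumptions for illustration, not taken from the slide):

    #include <stdint.h>

    enum { NUM_BANKS = 4, BLOCK_SIZE = 64 };      /* assumed parameters */

    /* Sequential interleaving: consecutive block numbers map to consecutive banks. */
    static inline unsigned bank_of(uint64_t addr)
    {
        uint64_t block = addr / BLOCK_SIZE;       /* cache block number */
        return (unsigned)(block % NUM_BANKS);     /* bank index */
    }

Block 0 goes to bank 0, block 1 to bank 1, and so on, so a unit-stride stream of accesses touches every bank in turn.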

16 7. Critical word first, Early restart - Reduce miss penalty
– The processor needs just 1 word of the block; give it what it needs first
– How is the block retrieved from memory?
– Critical word first: fetch the requested word first, return it, then continue with the rest of the memory transfer
– Early restart: fetch the words in normal order, but as soon as the requested word arrives, return it
– Benefits only with large blocks. Why?
– Disadvantage: spatial locality. Why?
– The miss penalty becomes hard to estimate

17 8. Merging write buffers - Reduce miss penalty
– Write-through relies on write buffers: all stores are sent to the lower level
– Write-back uses a simple buffer when a block is replaced
– Case: write buffer is empty – data & addresses are written from the cache block into the buffer; the cache considers the write done
– Case: write buffer already contains modified blocks – is this block's address already in the write buffer? Write merging: combine the newly modified data with the buffer contents
– Case: buffer full & no address match – must wait for an empty buffer entry
– Uses memory more efficiently (multi-word writes)
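Write merging is a hardware mechanism, but a toy C model of the merging decision may make the cases above concrete (the entry layout, 4-word width, and function names are assumptions, not the actual hardware):

    #include <stdint.h>
    #include <stdbool.h>

    #define WORDS_PER_ENTRY 4                 /* assumed: one 4-word block per entry */

    struct wb_entry {
        uint64_t block_addr;                  /* aligned address of the block */
        uint32_t data[WORDS_PER_ENTRY];       /* buffered store data */
        bool     valid[WORDS_PER_ENTRY];      /* which words have been written */
        bool     in_use;
    };

    /* Try to merge a one-word store into an existing entry with the same
     * block address; returns true if the store was merged. */
    bool try_merge(struct wb_entry *buf, int n_entries,
                   uint64_t block_addr, int word_idx, uint32_t value)
    {
        for (int i = 0; i < n_entries; i++) {
            if (buf[i].in_use && buf[i].block_addr == block_addr) {
                buf[i].data[word_idx]  = value;
                buf[i].valid[word_idx] = true;
                return true;                  /* merged: no new buffer entry used */
            }
        }
        return false;                         /* no match: allocate a new entry,
                                                 or wait if the buffer is full */
    }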

18 9. Compiler optimizations - Reduce miss rate
– Compiler research has improved both instruction misses and data misses
– Optimizations include code & data rearrangement:
–– Reorder procedures – might reduce conflict misses
–– Align basic blocks to the beginning of a cache block – decreases the chance of a cache miss
–– Branch straightening – change the sense of the branch test and swap the basic blocks of the branch
–– Data – arrange data to improve spatial & temporal locality, e.g., process arrays block by block

19 9. Compiler optimizations - Reduce miss rate
Loop interchange – make the code access data in the order it is stored, e.g.:

    /* Before, stride 100 */
    for (j = 0; j < 100; j++)
        for (i = 0; i < 500; i++)
            x[i][j] = 2 * x[i][j];

    /* After, stride 1 */
    for (i = 0; i < 500; i++)
        for (j = 0; j < 100; j++)
            x[i][j] = 2 * x[i][j];

How does this compare with blocking, e.g., for Gaussian elimination? (A sketch of blocking follows below.)
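Blocking is only mentioned as a question on the slide; as a hedged illustration, here is the textbook-style blocked matrix multiply (the matrix size N, tile size B, and function name are assumptions chosen for the sketch):

    #define N 512   /* assumed matrix dimension, divisible by B */
    #define B 32    /* assumed blocking factor: three B x B tiles should fit in cache */

    /* x += y * z, computed tile by tile so the parts of y, z, and x being
     * reused stay resident in the cache instead of being evicted between uses. */
    void blocked_matmul(double x[N][N], double y[N][N], double z[N][N])
    {
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                for (int i = 0; i < N; i++)
                    for (int j = jj; j < jj + B; j++) {
                        double r = 0.0;
                        for (int k = kk; k < kk + B; k++)
                            r += y[i][k] * z[k][j];
                        x[i][j] += r;
                    }
    }

Unlike loop interchange, which only changes the traversal order, blocking restructures the computation so each tile is reused many times while it is still cached.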

20 10. Hardware prefetching of instructions & data - Reduce miss penalty or miss rate
– Prefetch instructions and data before the processor requests them
– Fetching by block already tries this; on a miss, fetch the missed block and the next one as well
– Block prediction? Data accesses are prefetched similarly
– Multiple streams? E.g., matrix × matrix
– The Pentium 4 can prefetch data into L2 from 8 streams in 8 different 4 KB pages

21 11. Compiler-controlled prefetching - Reduce miss penalty or miss rate
– The compiler inserts instructions to prefetch data, either into a register or into the cache
– Faulting or nonfaulting? Should a prefetch be allowed to cause a page fault or memory protection fault?
– Assume a nonfaulting cache prefetch: it does not change the contents of registers or memory and does not cause a memory fault
– Goal: overlap execution with prefetching
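A minimal sketch of what the inserted prefetches look like in source form, using the GCC/Clang __builtin_prefetch intrinsic (the loop, function name, and prefetch distance of 16 elements are assumptions for illustration):

    /* Scale every element of a; prefetch a[i + 16] ahead of its use. */
    void scale(double *a, int n)
    {
        for (int i = 0; i < n; i++) {
            /* Nonfaulting read prefetch (rw = 0, low temporal locality = 1);
               prefetching past the end of the array is harmless because the
               instruction cannot fault. */
            __builtin_prefetch(&a[i + 16], 0, 1);
            a[i] = 2.0 * a[i];
        }
    }

The prefetch distance would normally be tuned so the data arrives roughly when the loop reaches it, overlapping execution with the memory transfer.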

