Chapter 5 Memory Hierarchy Design

1 Chapter 5 Memory Hierarchy Design
Introduction
Cache performance
Advanced cache optimizations
Memory technology and DRAM optimizations
Virtual machines
Conclusion

2 Many Levels in Memory Hierarchy
Pipeline registers and register file (invisible only to high-level-language programmers)
1st-level cache (on-chip)
2nd-level cache (on same MCM as CPU); there can also be a 3rd (or more) cache level here
The cache levels are usually made invisible to the programmer (even assembly programmers)
Our focus in Chapter 5 is the cache and main-memory levels
Physical memory (usually mounted on same board as CPU)
Virtual memory (on hard disk, often in same enclosure as CPU)
Disk files (on hard disk, often in same enclosure as CPU)
Network-accessible disk files (often in the same building as the CPU)
Tape backup/archive system (often in the same building as the CPU)
Data warehouse: robotically accessed room full of shelves of tapes (usually on the same planet as the CPU)

3 Simple Hierarchy Example
(Figure: a simple hierarchy example. Note the many orders of magnitude change in characteristics between levels, in both capacity and access time, e.g., from a 2 GB main memory to a 1 TB disk with roughly 10 ms random-access time.)

4 Why More on Memory Hierarchy?
The processor-memory performance gap keeps growing (figure: performance on the y-axis, time on the x-axis). Note that x86 processors did not have an on-chip cache until 1989.

5 Three Types of Misses
Compulsory: during a program, the very first access to a block will not be in the cache (unless it is prefetched).
Capacity: the working set of blocks accessed by the program is too large to fit in the cache.
Conflict: unless the cache is fully associative, blocks may sometimes be evicted too early (compared to a fully associative cache) because too many frequently accessed blocks map to the same limited set of frames.

6 Misses by Type
Conflict misses are significant in a direct-mapped cache. Going from direct-mapped to 2-way associativity helps about as much as doubling the cache size; going from direct-mapped to 4-way is better than doubling the cache size.

7 As fraction of total misses

8 Cache Performance
Consider memory delays when calculating CPU time.
Example: ideal CPI = 1, 1.5 memory references per instruction, miss rate = 2%, miss penalty = 100 CPU cycles; an in-order pipeline is assumed.
The lower the ideal CPI, the higher the relative impact of cache misses.
Because the penalty is measured in CPU cycles, a faster cycle time means the same memory latency costs more cycles.
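A worked version of this example; the slide's own formula was an image, so this is a reconstruction using the standard CPU-time-with-memory-stalls equation:

    CPU time = IC × (CPI_ideal + refs/inst × miss rate × miss penalty) × cycle time
    Memory-stall CPI = 1.5 × 0.02 × 100 = 3.0
    CPI with misses = 1 + 3.0 = 4.0

so cache misses make this machine run at one quarter of its ideal speed.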

9 Cache Performance Example
Ideal (no-miss) CPI = 2.0, 1.5 memory references per instruction, cache size = 64 KB, miss penalty = 75 ns, hit time = 1 clock cycle.
Compare the performance of two caches:
  Direct-mapped (1-way): cycle time = 1 ns, miss rate = 1.4%
  2-way set associative: cycle time = 1.25 ns, miss rate = 1.0%
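The slide's equations were images; here is one way to run the numbers under the stated assumptions (miss penalty expressed in whole cycles):

    Direct-mapped: miss penalty = 75 ns / 1 ns = 75 cycles
      CPU time per instruction = (2.0 + 1.5 × 0.014 × 75) × 1 ns = 3.575 ns
    2-way: miss penalty = 75 ns / 1.25 ns = 60 cycles
      CPU time per instruction = (2.0 + 1.5 × 0.010 × 60) × 1.25 ns = 3.625 ns

The direct-mapped cache comes out slightly faster despite its higher miss rate, because of its shorter cycle time.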

10 Out-Of-Order Processor
With out-of-order execution, define a new “miss penalty” that accounts for overlap: we must decide how much latency counts as memory latency and how much is overlapped latency, which is not straightforward.
Example (from the previous slide): assume 30% of the 75 ns penalty can be overlapped, but the 1-way design now needs the longer 1.25 ns cycle time because of the OOO hardware.
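A sketch of the arithmetic (the slide's worked equation was an image; this reconstruction assumes the figures above):

    Effective miss penalty = (1 - 0.30) × 75 ns = 52.5 ns = 52.5 / 1.25 = 42 cycles
    CPU time per instruction = (2.0 + 1.5 × 0.014 × 42) × 1.25 ns ≈ 3.60 ns

so the out-of-order 1-way design ends up slightly faster than the in-order 2-way design (3.625 ns) of the previous slide.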

11 An Alternative Metric
(Average memory access time) = (Hit time) + (Miss rate) × (Miss penalty)
The times T_acc, T_hit, and T_miss can be expressed either in real time (e.g., nanoseconds) or in clock cycles.
T_miss means the extra (not total) time for a miss, over and above T_hit, which is incurred by every access.
Because average memory access time leaves the non-memory part of instruction execution out of the formula, the same example can give different results than CPU time does.
(Figure: CPU, cache, and the lower levels of the hierarchy, labeled with hit time and miss penalty.)
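For the comparison of slide 9, AMAT does indeed rank the two designs the other way around (a quick check using the same numbers):

    1-way: AMAT = (1 + 0.014 × 75) × 1 ns = 2.05 ns
    2-way: AMAT = (1 + 0.010 × 60) × 1.25 ns = 2.00 ns

The 2-way cache wins on average memory access time even though the 1-way cache wins on CPU time.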

12 Another View (Multi-cycle Cache)
Instead of increasing the cycle time, the 2-way cache can take two cycles per cache access.
In reality, not every second cycle of a 2-cycle load stalls a dependent instruction; suppose only 20% of them do. (Note that the basic formula assumes the extra cycle impacts every access.)
A longer cycle time, by contrast, always impacts every cycle.
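A sketch of how the numbers might work out; the slide's own equations were images, so the assumptions here (a 2-cycle access at the original 1 ns clock, the extra cycle charged to all 1.5 references per instruction in the pessimistic case and to only 20% of them in the optimistic case) are illustrative:

    All accesses pay the extra cycle: (2.0 + 1.0 × 1.5 + 1.5 × 0.010 × 75) × 1 ns = 4.625 ns
    Only 20% cause stalls: (2.0 + 0.2 × 1.5 + 1.5 × 0.010 × 75) × 1 ns = 3.425 ns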

13 Cache Performance
Consider the cache performance equation:
(Average memory access time) = (Hit time) + (Miss rate) × (Miss penalty)
The product (Miss rate) × (Miss penalty) is the “amortized miss penalty.”
It follows that there are four basic ways to improve cache performance:
  Reducing miss penalty
  Reducing miss rate
  Reducing miss penalty or miss rate via parallelism
  Reducing hit time
Note that, by Amdahl's Law, there are diminishing returns from reducing only the hit time or only the amortized miss penalty by itself, instead of both together.

14 6 Basic Cache Optimizations
Reducing hit time
  1. Giving reads priority over writes (e.g., a read completes before earlier writes still waiting in the write buffer)
  2. Avoiding address translation during cache indexing
Reducing miss penalty
  3. Multilevel caches
Reducing miss rate
  4. Larger block size (compulsory misses)
  5. Larger cache size (capacity misses)
  6. Higher associativity (conflict misses)

15 Multiple-Level Caches
Avg mem access time = Hit time(L1) + Miss rate(L1) × Miss penalty(L1)
Miss penalty(L1) = Hit time(L2) + Miss rate(L2) × Miss penalty(L2)
Plugging the second equation into the first:
Avg mem access time = Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))
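A quick numeric illustration (the numbers are invented for the example, not taken from the slides): suppose Hit time(L1) = 1 cycle, Miss rate(L1) = 4%, Hit time(L2) = 10 cycles, local Miss rate(L2) = 25%, and Miss penalty(L2) = 100 cycles. Then

    Avg mem access time = 1 + 0.04 × (10 + 0.25 × 100) = 1 + 0.04 × 35 = 2.4 cycles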

16 Multi-level Cache Terminology
“Local miss rate”: the miss rate of one hierarchy level by itself, i.e., (# of misses at that level) / (# of accesses to that level), e.g., Miss rate(L1), Miss rate(L2).
“Global miss rate”: the miss rate of a whole group of hierarchy levels, i.e., (# of accesses passed out of that group to lower levels) / (# of accesses made to that group).
Generally this is the product of the miss rates at each level in the group:
Global L2 miss rate = Miss rate(L1) × Local miss rate(L2)
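An illustrative count (hypothetical numbers): out of 1000 CPU references, 40 miss in L1 and 20 of those also miss in L2. Then the local (and global) L1 miss rate is 40/1000 = 4%, the local L2 miss rate is 20/40 = 50%, and the global L2 miss rate is 20/1000 = 2% = 4% × 50%.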

17 Effect of 2-level Caching
L2 is usually much bigger than L1, so it provides a reasonable hit rate.
It decreases the miss penalty of the 1st-level cache, but may increase the L2 miss penalty.
Multiple-level cache inclusion property:
  Inclusive caches: L1 is a subset of L2; this simplifies the cache-coherence mechanism; effective cache size = L2.
  Exclusive caches: L1 and L2 hold disjoint blocks; effective cache size = L1 + L2.
Enforcing the inclusion property requires backward invalidation of L1 copies when a block is replaced in L2.

18 11 Advanced Cache Optimizations
Reducing hit time
  1. Small and simple caches
  2. Way prediction
  3. Trace caches
Increasing cache bandwidth
  4. Pipelined caches
  5. Multibanked caches
  6. Nonblocking caches
Reducing miss penalty
  7. Critical word first
  8. Merging write buffers
Reducing miss rate
  9. Compiler optimizations
Reducing miss penalty or miss rate via parallelism
  10. Hardware prefetching
  11. Compiler prefetching

19 1. Fast Hit times via Small and Simple Caches
Indexing the tag memory and then comparing takes time.
A small cache helps hit time, since a smaller memory takes less time to index; e.g., the L1 caches are the same size across three generations of AMD microprocessors: K6, Athlon, and Opteron.
An L2 cache small enough to fit on the chip with the processor also avoids the time penalty of going off chip.
Simple means direct mapped: the tag check can be overlapped with data transmission, since there is no choice of block to make.
Access-time estimates for 90 nm using the CACTI 4.0 model: the median ratios of access time relative to a direct-mapped cache are 1.32, 1.39, and 1.43 for 2-way, 4-way, and 8-way caches.

20 2. Fast Hit times via Way Prediction
How can we combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache?
Way prediction: keep extra bits in the cache to predict the “way” (the block within the set) of the next cache access.
The multiplexor is set early to select the predicted block, so only one tag comparison is performed that clock cycle, in parallel with reading the cache data.
On a way mispredict, check the other blocks for matches in the next clock cycle.
Prediction accuracy is about 85% (it can be higher with a bigger history).
Drawback: the CPU pipeline is harder to design if a hit can take 1 or 2 cycles, so way prediction is used for instruction caches more than for data caches.
(Figure: hit time, way-miss hit time, and miss penalty.)
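To make the mechanism concrete, here is a minimal software model of the lookup logic. It is purely illustrative: the structure, names (cache_set_t, access_with_way_prediction), and sizes are invented here and do not come from the slides or from any real design.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_SETS 256
    #define NUM_WAYS 2

    typedef struct {
        uint32_t tag[NUM_WAYS];
        bool     valid[NUM_WAYS];
        int      predicted_way;   /* the extra way-prediction bits kept per set */
    } cache_set_t;

    static cache_set_t cache[NUM_SETS];

    /* Returns the number of cycles the access takes: 1 on a correct way
       prediction, 2 when another way hits on the second try, and
       2 + miss_penalty on a genuine cache miss. */
    int access_with_way_prediction(uint32_t set, uint32_t tag, int miss_penalty)
    {
        cache_set_t *s = &cache[set];
        int w = s->predicted_way;

        /* Cycle 1: read data from the predicted way while checking only its tag. */
        if (s->valid[w] && s->tag[w] == tag)
            return 1;

        /* Cycle 2: check the remaining way(s). */
        for (int i = 0; i < NUM_WAYS; i++) {
            if (i != w && s->valid[i] && s->tag[i] == tag) {
                s->predicted_way = i;          /* retrain the predictor */
                return 2;
            }
        }
        return 2 + miss_penalty;               /* genuine miss: go to the next level */
    }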

21 3. Fast Hit times via Trace Cache (Pentium 4 only; and last time?)
How can we find more instruction-level parallelism and avoid re-translating from x86 to micro-ops?
The trace cache in the Pentium 4 holds dynamic traces of the executed instructions rather than the static sequences determined by their layout in memory, and has a built-in branch predictor.
It caches micro-ops rather than x86 instructions; decoding/translating from x86 to micro-ops happens only on a trace-cache miss.
+ Long blocks are used better (a trace does not exit in the middle of a block or enter at a label in the middle of a block).
- Complicated address mapping, since addresses are no longer aligned to power-of-2 multiples of the word size.
- Instructions may appear multiple times in multiple dynamic traces because of different branch outcomes.

22 4. Increasing Cache Bandwidth by Pipelining
Pipeline the cache access to maintain bandwidth, at the cost of higher latency.
Instruction-cache access pipeline stages:
  1 stage: Pentium
  2 stages: Pentium Pro through Pentium III
  4 stages: Pentium 4
This means a greater penalty on mispredicted branches and more clock cycles between the issue of a load and the use of its data.
(Note: there are 3 stages between a branch and the new instruction fetch and 2 stages between a load and its use, even though the inserted stages alone would suggest 3 for the load and 2 for the branch. Reasons: (1) for loads, the first cache stage does only the tag check and the data is available after the second, so the data is supplied and forwarded, with the pipeline restarted on a data-cache miss; (2) the EX stage still does the address calculation even though a stage was added; presumably, to keep the clock cycle fast, the designers did not want the register-fetch stage to both read registers and test for zero, so the branch test was moved back one stage.)

23 5. Increasing Cache Bandwidth: Non-Blocking Caches
A non-blocking (lockup-free) cache allows the data cache to continue supplying hits during a miss; this requires full/empty bits on registers or out-of-order execution, and multi-bank memories.
“Hit under miss” reduces the effective miss penalty by doing useful work during a miss instead of ignoring CPU requests.
“Hit under multiple miss” or “miss under miss” may further lower the effective miss penalty by overlapping multiple misses.
This significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses, and it requires multiple memory banks (otherwise it cannot be supported).
The Pentium Pro allows 4 outstanding memory misses.
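A rough sketch of the bookkeeping a non-blocking cache needs, modeled as a small table of outstanding misses (MSHR-style). Everything here, including the names and the table size, is illustrative rather than a real controller; the 4-entry limit simply mirrors the Pentium Pro figure above.

    #include <stdint.h>
    #include <stdbool.h>

    #define MAX_OUTSTANDING 4          /* e.g., the Pentium Pro allowed 4 */

    typedef struct {
        bool     valid;
        uint64_t block_addr;           /* block currently being fetched */
    } mshr_t;

    static mshr_t mshr[MAX_OUTSTANDING];

    /* On a cache miss: returns true if the miss can be handled without
       blocking (either merged with an outstanding miss to the same block,
       or allocated a free entry); false means the cache must stall. */
    bool handle_miss(uint64_t block_addr)
    {
        int free_slot = -1;
        for (int i = 0; i < MAX_OUTSTANDING; i++) {
            if (mshr[i].valid && mshr[i].block_addr == block_addr)
                return true;           /* merge: miss to an already-pending block */
            if (!mshr[i].valid && free_slot < 0)
                free_slot = i;
        }
        if (free_slot < 0)
            return false;              /* too many outstanding misses: stall */
        mshr[free_slot].valid = true;  /* miss under miss: track a new request */
        mshr[free_slot].block_addr = block_addr;
        return true;
    }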

24 Nonblocking Cache Stats
(Figure: average stall time as a percentage of a blocking cache, for hit-under-1-miss, hit-under-2-misses, and hit-under-64-misses configurations, with averages.)

25 Value of Hit Under Miss for SPEC
(Figure: SPEC92 integer and floating-point benchmarks, comparing the base blocking cache against “hit under n misses” for n = 1, 2, and 64.)
On average, the floating-point programs reach an AMAT figure of 0.26 with hit under 64 misses; the integer programs reach 0.19.
Configuration: 8 KB data cache, direct mapped, 32 B blocks, 16-cycle miss penalty, SPEC92.

26 6. Increasing Cache Bandwidth via Multiple Banks
Rather than treating the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses; e.g., the T1 (“Niagara”) L2 has 4 banks.
Banking works best when the accesses naturally spread themselves across the banks, so the mapping of addresses to banks affects the behavior of the memory system.
A simple mapping that works well is “sequential interleaving”: spread block addresses sequentially across the banks. For example, with 4 banks, bank 0 has all blocks whose address modulo 4 is 0, bank 1 has all blocks whose address modulo 4 is 1, and so on (see the sketch below).
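A one-line version of that mapping in C; the block size and bank count are example values, not taken from any particular design:

    #include <stdint.h>

    #define BLOCK_SIZE 64   /* bytes per cache block (example value) */
    #define NUM_BANKS   4

    /* Sequential interleaving: which bank holds the block containing addr? */
    unsigned bank_of(uint64_t addr)
    {
        uint64_t block_addr = addr / BLOCK_SIZE;
        return (unsigned)(block_addr % NUM_BANKS);
    }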

27 7. Reduce Miss Penalty: Early Restart and Critical Word First
Don't wait for the full block before restarting the CPU.
Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; the CPU continues execution while the rest of the words in the block are filled in.
Because of spatial locality, the CPU tends to want the next sequential word soon, so the benefit of early restart alone is not clear-cut; with the longer blocks popular today, critical word first is the widely used variant.

28 8. Merging Write Buffer to Reduce Miss Penalty
A write buffer allows the processor to continue while waiting for writes to reach memory.
If the buffer already contains modified blocks, the addresses can be checked to see whether the address of the new data matches the address of a valid write-buffer entry; if so, the new data are combined with that entry (write merging, sketched below).
This increases the effective block size of writes for a write-through cache when writes go to sequential words or bytes, and multiword writes are more efficient to memory.
The Sun T1 (Niagara) processor, among many others, uses write merging.
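A toy model of the merge check, with the entry layout, sizes, and names invented for illustration:

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define WB_ENTRIES      4
    #define WORDS_PER_ENTRY 4          /* one entry covers a 4-word block */

    typedef struct {
        bool     valid;
        uint64_t block_addr;                 /* aligned block address */
        uint32_t data[WORDS_PER_ENTRY];
        bool     word_valid[WORDS_PER_ENTRY];
    } wb_entry_t;

    static wb_entry_t write_buffer[WB_ENTRIES];

    /* Try to merge a one-word write into an existing entry; otherwise
       allocate a new entry; return false if the buffer is full (stall). */
    bool write_buffer_insert(uint64_t addr, uint32_t value)
    {
        uint64_t block = addr / (4 * WORDS_PER_ENTRY);
        unsigned word  = (addr / 4) % WORDS_PER_ENTRY;

        for (int i = 0; i < WB_ENTRIES; i++) {
            if (write_buffer[i].valid && write_buffer[i].block_addr == block) {
                write_buffer[i].data[word] = value;      /* merge into same block */
                write_buffer[i].word_valid[word] = true;
                return true;
            }
        }
        for (int i = 0; i < WB_ENTRIES; i++) {
            if (!write_buffer[i].valid) {
                memset(&write_buffer[i], 0, sizeof write_buffer[i]);
                write_buffer[i].valid = true;
                write_buffer[i].block_addr = block;
                write_buffer[i].data[word] = value;
                write_buffer[i].word_valid[word] = true;
                return true;
            }
        }
        return false;                                    /* buffer full: stall */
    }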

29 9. Reducing Misses by Compiler Optimizations
McFarling [1989] reduced cache misses by 75% in software, on an 8 KB direct-mapped cache with 4-byte blocks.
Instructions:
  Reorder procedures in memory so as to reduce conflict misses.
  Use profiling to look at conflicts (using tools they developed).
Data:
  Merging arrays: improve spatial locality by using a single array of compound elements instead of 2 separate arrays.
  Loop interchange: change the nesting of loops to access data in the order it is stored in memory.
  Loop fusion: combine 2 independent loops that have the same looping structure and overlapping variables.
  Blocking: improve temporal locality by accessing “blocks” of data repeatedly instead of going down whole columns or rows.

30 Merging Arrays Example
/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

This reduces conflicts between val and key and improves spatial locality when the two are accessed in an interleaved fashion.

31 Loop Interchange Example
/* Before */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality.

32 Loop Fusion Example
/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

Two misses per access to a and c become one miss per access, improving temporal locality.

33 Blocking Example
/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

Two inner loops:
  Read all N×N elements of z[]
  Read N elements of one row of y[] repeatedly
  Write N elements of one row of x[]
Capacity misses are a function of N and the cache size: 2N³ + N² words accessed in the worst case (assuming no conflict misses; otherwise it is worse).
Idea: compute on a B×B submatrix that fits in the cache.

34 Blocking Example
/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B-1,N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B-1,N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

B is called the blocking factor.
Capacity misses drop from 2N³ + N² to 2N³/B + N².
Conflict misses, too?

35 Loop Blocking – Matrix Multiply
(Figure: the matrix-multiply access pattern before and after loop blocking.)

36 Reducing Conflict Misses by Blocking
Conflict misses in caches that are not fully associative depend on the blocking factor, not just the cache size: Lam et al. [1991] found that a blocking factor of 24 had one fifth the misses of a factor of 48, even though both fit in the cache.

37 Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

38 10. Reducing Misses by Hardware Prefetching of Instructions & Data
Prefetching relies on having extra memory bandwidth that can be used without penalty.
Instruction prefetching:
  Typically the CPU fetches 2 blocks on a miss: the requested block and the next consecutive block.
  The requested block is placed in the instruction cache when it returns, and the prefetched block is placed into an instruction stream buffer.
Data prefetching:
  The Pentium 4 can prefetch data into the L2 cache from up to 8 streams from 8 different 4 KB pages.
  Prefetching is invoked if there are 2 successive L2 cache misses to a page and the distance between those cache blocks is less than 256 bytes.

39 11. Reducing Misses by Software Prefetching Data
Data prefetch comes in two forms:
  Register prefetch: load data into a register (HP PA-RISC loads).
  Cache prefetch: load data into the cache (MIPS IV, PowerPC, SPARC v9).
Special prefetching instructions cannot cause faults; they are a form of speculative execution (see the sketch below).
Issuing prefetch instructions takes time: is the cost of the prefetch issues lower than the savings from reduced misses? Wider superscalar issue reduces the difficulty of finding the issue bandwidth.
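As an illustration of cache prefetching from software, here is a sketch using the __builtin_prefetch intrinsic available in GCC and Clang; the function name and the prefetch distance of 16 elements are arbitrary example choices, and in practice the distance would be tuned to hide the miss latency:

    /* Sum an array while prefetching ahead of the current element. */
    long sum_with_prefetch(const int *a, int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&a[i + 16], 0, 1);  /* read access, low temporal locality */
            sum += a[i];
        }
        return sum;
    }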

40 Compiler Optimization vs. Memory Hierarchy Search
Compilers try to figure out memory hierarchy optimizations on their own.
A newer approach is “auto-tuners”: first run variations of the program on the target computer to find the best combinations of optimizations (blocking, padding, ...) and algorithms, then produce C code to be compiled for that computer.
Auto-tuners are typically targeted at numerical methods, e.g.:
  PHiPAC (Portable High Performance ANSI C): dense linear algebra (BLAS)
  Atlas: dense linear algebra, now the de facto standard for many BLAS implementations (used in Matlab, for example)
  Sparsity and OSKI: sparse linear algebra
  Spiral: DSP algorithms, including FFTs and other transforms
  FFTW: the Fastest Fourier Transform in the West
Several groups are also working on compilers that include an auto-tuning (search-based) optimization phase.
Note: Figure 5.11 summarizes the impact of these methods on cache performance and complexity.

41 Main Memory
Some definitions:
  Bandwidth (BW): bytes read or written per unit time.
  Latency, described by:
    Access time: delay between access initiation and completion (for reads: from presenting the address until the result is ready).
    Cycle time: minimum interval between separate requests to memory.
  Address lines: a separate bus from the CPU to memory that carries addresses (not usually counted in bandwidth figures).
  RAS (Row Access Strobe): first half of the address, sent first.
  CAS (Column Access Strobe): second half of the address, sent second.

42 RAS vs. CAS (saves address pins)
DRAM bit-cell array:
  1. RAS selects a row.
  2. All the bits in that row are read out in parallel.
  3. CAS selects a column to read.
  4. The selected bit is driven onto the memory bus.

43 Types of Memory
DRAM (Dynamic Random Access Memory):
  The cell design needs only 1 transistor per bit stored.
  Cell charges leak away and may dynamically (over time) drift from their initial levels, so periodic refreshing is required to correct the drift, e.g., every 8 ms.
  Time spent refreshing is kept to less than 5% of the bandwidth.
SRAM (Static Random Access Memory):
  Cell voltages are statically (unchangingly) tied to power-supply references, so there is no drift and no refresh.
  But it needs 4-6 transistors per bit.
Compared with SRAM, DRAM is 4-8× larger in capacity, 8-16× slower, and 8-16× cheaper per bit.
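A rough sanity check on the refresh overhead, using illustrative numbers that are not from the slides: if a DRAM has 8192 rows, each row must be refreshed every 8 ms, and one refresh takes about 50 ns, then refresh occupies 8192 × 50 ns ≈ 0.41 ms out of every 8 ms, i.e., roughly 5% of the time, in line with the target above.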

44 Typical DRAM Organization
(Figure: organization of a typical 256 Mbit DRAM, with the 28-bit address split into a low 14 bits and a high 14 bits for column and row selection.)

45 Amdahl/Case Rule
Memory size (and I/O bandwidth) should grow linearly with CPU speed; typically 1 MB of main memory and 1 Mbps of I/O bandwidth per 1 MIPS of CPU performance.
It takes a fairly constant ~8 seconds to scan the entire memory (if memory bandwidth = I/O bandwidth, 4 bytes per load, 1 load per 4 instructions, and latency is not a problem).
Moore's Law: DRAM size doubles every 18 months (up 60%/yr), which tracks processor speed improvements.
Unfortunately, DRAM latency has only decreased 7%/yr. Latency is a big deal.

46 Some DRAM Trends
Since 1998, the rate of increase in chip capacity has slowed to 2× per 2 years:
  128 Mb in 1998
  256 Mb in 2000
  512 Mb in 2002
See Figure 5.13 for more up-to-date data.

47 Improving DRAM Bandwidth
Fast page mode: access the same row repeatedly without resending the row address.
Synchronous DRAM (SDRAM): adds a clocked interface and a controller.
Double data rate (DDR): transfers data on both the rising and falling edges of the DRAM clock.
  DDR: 2.5 V
  DDR2: 1.8 V
  DDR3: 1.5 V
See Figure 5.14 for the detailed naming conventions, clock rates, and bandwidths.

