
1 Modern Computer Architecture
Instructor: Prof. Zhang Gang, School of Computer Science, Tianjin University
Slides, homework, and discussion: http://glearning.tju.edu.cn/
2017

2 The Main Contents (Course Outline)
Chapter 1. Fundamentals of Quantitative Design and Analysis
Chapter 2. Memory Hierarchy Design
Chapter 3. Instruction-Level Parallelism and Its Exploitation
Chapter 4. Data-Level Parallelism in Vector, SIMD, and GPU Architectures
Chapter 5. Thread-Level Parallelism
Chapter 6. Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism
Appendix A. Pipelining: Basic and Intermediate Concepts

3 Many Levels in Memory Hierarchy
Pipeline registers
Register file
(These levels are invisible only to high-level-language programmers.)
1st-level cache (on-chip)
2nd-level cache (on the same MCM as the CPU)
(There can also be a 3rd or more cache level here; caches are usually made invisible to the programmer, even assembly programmers.)
Physical memory (usually mounted on the same board as the CPU)
Virtual memory (on hard disk, often in the same enclosure as the CPU)
Disk files (on hard disk, often in the same enclosure as the CPU)
Network-accessible disk files (often in the same building as the CPU)
Tape backup/archive system (often in the same building as the CPU)
Data warehouse: robotically accessed room full of shelves of tapes (usually on the same planet as the CPU)

4 Simple Hierarchy Example
Note the many orders of magnitude change in characteristics between levels: ×128 → ×8192 → ×200 → ×4 → ×100 → ×50,000 (for random access).
(Figure: an example hierarchy; labels preserved from it include a 2 GB level and a 1 TB disk with ~10 ms random access.)

5 Memory Hierarchy

6 Why More on Memory Hierarchy?
The processor-memory performance gap keeps growing (y-axis: performance; x-axis: time). Latency cliché: note that the x86 didn't have on-chip cache until 1989.

7 Review for Cache
Blocks or lines? How do you find a block in a cache (the mapping)? Write strategies? Write buffer? Miss and miss rate? Hit and hit rate?

8 Memory Hierarchy Basics

9 Memory Hierarchy Basics

10 Memory Hierarchy Basics

11 Memory Hierarchy Basics

12 Three Types of Misses
Compulsory: during a program, the very first access to a block cannot be in the cache (unless the block is pre-fetched).
Capacity: the working set of blocks accessed by the program is too large to fit in the cache.
Conflict: unless the cache is fully associative, blocks may sometimes be evicted too early (compared to a fully associative cache) because too many frequently accessed blocks map to the same limited set of frames.

13 An Alternative Metric
(Average memory access time) = (Hit time) + (Miss rate) × (Miss penalty), i.e., Tacc = Thit + (miss rate) × T+miss.
The times Tacc, Thit, and T+miss can be expressed either in real time (e.g., nanoseconds) or in clock cycles. T+miss means the extra (not total) time for a miss, in addition to Thit, which is incurred by all accesses.
(Figure: the CPU accesses the cache with some hit time; misses go to the lower levels of the hierarchy at a miss penalty.)
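A quick worked example with assumed, illustrative numbers (not from the slides): if the hit time is 1 cycle, the miss rate 5%, and the miss penalty 20 cycles, then
AMAT = 1 + 0.05 × 20 = 2 cycles per access.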

14 6 Basic Cache Optimizations (New)
Reducing hit time:
1. Avoiding address translation during cache indexing.
Reducing miss penalty:
2. Giving reads priority over writes (e.g., a read completes before earlier writes still sitting in the write buffer).
3. Multilevel caches.
Reducing miss rate:
4. Larger block size (reduces compulsory misses).
5. Larger cache size (reduces capacity misses).
6. Higher associativity (reduces conflict misses).

15 Multiple-Level Caches
Avg mem access time = Hit time(L1) + Miss rate(L1) × Miss penalty(L1)
Miss penalty(L1) = Hit time(L2) + Miss rate(L2) × Miss penalty(L2)
Plugging the second equation into the first:
Avg mem access time = Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))
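As a minimal sketch, the two-level formula can be evaluated directly in C; the function name and the example numbers below are illustrative assumptions, not values from the course:

#include <stdio.h>

/* Two-level average memory access time, in clock cycles:
   AMAT = HitTime(L1) + MissRate(L1) x (HitTime(L2) + MissRate(L2) x MissPenalty(L2)) */
double amat2(double hit_l1, double mr_l1,
             double hit_l2, double mr_l2, double penalty_l2)
{
    double penalty_l1 = hit_l2 + mr_l2 * penalty_l2;  /* cost of an L1 miss */
    return hit_l1 + mr_l1 * penalty_l1;
}

int main(void)
{
    /* Assumed numbers: 1-cycle L1 hit, 5% L1 miss rate,
       10-cycle L2 hit, 20% local L2 miss rate, 200-cycle memory access. */
    printf("AMAT = %.2f cycles\n", amat2(1.0, 0.05, 10.0, 0.20, 200.0));
    return 0;
}

With these numbers the program prints AMAT = 3.50 cycles.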

16 Ten Advanced Optimizations of Cache Performance

17 1. Small & Simple First-Level Caches to Reduce Hit Time and Power
The cost comes from indexing; smaller is faster.
Keep L2 small enough to fit on the processor chip.
Direct mapping is simple and lets the tag check overlap with the data transmission.
CACTI simulates the impact on hit time. E.g., Fig. 2.4 (access time vs. size and associativity) suggests for hit time: direct mapped is x times faster than 2-way set associative, 2-way is x times faster than 4-way, and 4-way is x times faster than fully associative.

18 Access Time versus Cache Size and Associativity
(Figure: access time in picoseconds as a function of cache size and associativity.)

19 Energy per read vs. size and associativity

20 2. Way Prediction - Reduce Hit Time
Extra bits are kept in the cache to predict the way (the block within the set) of the next cache access.
The multiplexor is set early to select the desired block, and a single tag comparison is done that cycle in parallel with reading the cache data.
Mispredicted? Check the other blocks for matches in the next cycle.
This saves pipeline stages; prediction is correct on about 85% of accesses for 2-way caches, a good match for speculative processors. The Pentium 4 uses it.
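A minimal C sketch of the idea, assuming a 2-way set-associative cache (the structure, field names, and sizes here are illustrative, not from the slides): each set remembers the way expected to hit next, the predicted way's tag is checked first, and only on a misprediction is the other way checked in an extra cycle.

#include <stdint.h>
#include <stdbool.h>

#define NSETS 64
#define NWAYS 2

typedef struct {
    uint32_t tag[NWAYS];
    bool     valid[NWAYS];
    int      predicted_way;   /* way-prediction bits: way expected to hit next */
} cache_set_t;

static cache_set_t sets[NSETS];

/* Probe the cache; returns cycles spent on tag checks (1 on a correct
   prediction, 2 when the predicted way misses but the other way hits),
   or -1 if the access misses in both ways. Assumes 64-byte blocks. */
int probe(uint32_t addr)
{
    uint32_t set_idx = (addr / 64) % NSETS;
    uint32_t tag     = addr / 64 / NSETS;
    cache_set_t *s   = &sets[set_idx];

    int w = s->predicted_way;
    if (s->valid[w] && s->tag[w] == tag)
        return 1;                     /* predicted way hit: fast hit */

    int other = 1 - w;                /* check the remaining way next cycle */
    if (s->valid[other] && s->tag[other] == tag) {
        s->predicted_way = other;     /* retrain the predictor */
        return 2;                     /* slow hit: one extra cycle */
    }
    return -1;                        /* cache miss */
}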

21 3. Pipelined cache access - Increase cache bandwidth
Pipelining cache access makes the effective latency of an L1 cache hit multiple clock cycles, trading slower hits for a fast clock and high bandwidth. A Pentium cache hit took 1 cycle; a Core i7 cache hit takes 4 cycles. The cost is increased pipeline stages.

22 4. Nonblocking cache (hit under miss) - Increase cache bandwidth
With out-of-order execution, the processor need not stall on a cache miss: it continues fetching instructions while waiting for the cache data. If the cache does not block, it can supply data for hits while still processing a miss ("hit under miss"), which reduces the effective miss penalty. Overlapping multiple misses ("hit under multiple misses" or "miss under miss") requires the memory system to service multiple misses simultaneously.
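One common way to implement this is a small table of miss-status holding registers (MSHRs); the sketch below is an assumption-laden illustration, not an implementation prescribed by the slides. Each outstanding miss gets an entry, and a new miss to a block already in flight is merged into the existing entry instead of blocking.

#include <stdint.h>
#include <stdbool.h>

#define NMSHR 8   /* max outstanding misses the cache can track (assumed) */

typedef struct {
    bool     valid;
    uint64_t block_addr;   /* address of the in-flight block */
    int      waiters;      /* loads/stores merged onto this miss */
} mshr_t;

static mshr_t mshr[NMSHR];

/* Called on a cache miss. Returns 0 if the miss was merged with an
   outstanding one (miss under miss), 1 if a new MSHR was allocated,
   -1 if all MSHRs are busy (only then must the cache stall). */
int handle_miss(uint64_t block_addr)
{
    for (int i = 0; i < NMSHR; i++)
        if (mshr[i].valid && mshr[i].block_addr == block_addr) {
            mshr[i].waiters++;        /* merge with the in-flight miss */
            return 0;
        }
    for (int i = 0; i < NMSHR; i++)
        if (!mshr[i].valid) {
            mshr[i] = (mshr_t){ true, block_addr, 1 };  /* new miss */
            return 1;
        }
    return -1;  /* structural stall: too many outstanding misses */
}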

23 5. Multibanked caches - Increase cache bandwidth
Independent banks support simultaneous accesses. Banking was originally used in main memory; the AMD Opteron has 2 banks of L2 and the Sun Niagara has 4 banks of L2. Banking works best when accesses spread across the banks, so addresses are spread sequentially across banks ("sequential interleaving").
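With sequential interleaving, consecutive block addresses go to consecutive banks. A minimal sketch (the block size and bank count are assumed for illustration):

#include <stdint.h>

#define BLOCK_SIZE 64   /* bytes per cache block (assumed) */
#define NBANKS     4    /* e.g., the Niagara's 4 L2 banks */

/* Sequential interleaving: block 0 -> bank 0, block 1 -> bank 1, ... */
unsigned bank_of(uint64_t addr)
{
    uint64_t block = addr / BLOCK_SIZE;
    return (unsigned)(block % NBANKS);
}

Consecutive blocks thus land in different banks, so a streaming access pattern keeps all banks busy at once.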

24 6. Critical Word First, Early Restart - Reduce Miss Penalty
Don't wait for the full block before restarting the CPU.
Early restart - as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
Critical word first - request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while the rest of the words in the block fill in.
Spatial locality means the CPU tends to want the next sequential word soon, so the size of the benefit of early restart alone is not clear. With the long blocks popular today, critical word first is widely used.

25 6. Critical Word First, Early Restart - Reduce Miss Penalty
The processor needs 1 word of the block, so give it what it needs first. How is the block retrieved from memory?
Critical word first - get the requested word first, return it, then continue with the rest of the memory transfer.
Early restart - fetch in normal order; when the requested word arrives, return it.
These techniques benefit only large blocks: with a small block there is little left to wait for. Spatial locality is the disadvantage: the next access is often to the next word, which may still be in transfer, so the miss penalty is hard to estimate.
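A sketch of the fill order under critical word first (block size and names are illustrative assumptions): the requested word is transferred first, and the remaining words wrap around the block; the CPU restarts as soon as the first word arrives.

#include <stdio.h>

#define WORDS_PER_BLOCK 8   /* assumed block of 8 words */

/* Fill a block starting at the requested (critical) word, wrapping
   around: e.g., critical word 5 gives the order 5,6,7,0,1,2,3,4. */
void fill_order(int critical, int order[WORDS_PER_BLOCK])
{
    for (int i = 0; i < WORDS_PER_BLOCK; i++)
        order[i] = (critical + i) % WORDS_PER_BLOCK;
}

int main(void)
{
    int order[WORDS_PER_BLOCK];
    fill_order(5, order);
    for (int i = 0; i < WORDS_PER_BLOCK; i++)
        printf("%d ", order[i]);   /* prints: 5 6 7 0 1 2 3 4 */
    return 0;
}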

26 7. Merge write buffers - Reduce miss penalty
Write-through relies on write buffers: all stores are sent to the lower level. Write-back uses a simple buffer when a block is replaced.
Case: the write buffer is empty. Data and address are written from the cache block to the buffer, and the cache considers the write done.
Case: the write buffer contains other modified blocks. Is this block's address already in the write buffer? If so, write merging combines the newly modified data with the buffer contents.
Case: the buffer is full and there is no address match. The cache must wait for an empty buffer entry.
Merging uses memory more efficiently, since it produces multi-word writes.
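A sketch of the merging check, under assumptions (the entry format and sizes are illustrative): each buffer entry holds one block-aligned address plus per-word valid bits, and a store whose address matches an existing entry just sets its word in place rather than taking a new entry.

#include <stdint.h>
#include <stdbool.h>

#define NENTRIES        4
#define WORDS_PER_ENTRY 4   /* each entry covers an aligned 4-word region */

typedef struct {
    bool     valid;
    uint64_t base;                    /* aligned address of the region */
    uint32_t data[WORDS_PER_ENTRY];
    bool     word_valid[WORDS_PER_ENTRY];
} wbuf_entry_t;

static wbuf_entry_t wbuf[NENTRIES];

/* Insert a 1-word store; returns false only when the buffer is full
   and no entry matches (the CPU must then stall). */
bool wbuf_insert(uint64_t addr, uint32_t value)
{
    uint64_t base = addr & ~(uint64_t)(WORDS_PER_ENTRY * 4 - 1);
    int      word = (addr / 4) % WORDS_PER_ENTRY;

    for (int i = 0; i < NENTRIES; i++)          /* try to merge */
        if (wbuf[i].valid && wbuf[i].base == base) {
            wbuf[i].data[word] = value;
            wbuf[i].word_valid[word] = true;
            return true;
        }
    for (int i = 0; i < NENTRIES; i++)          /* else allocate an entry */
        if (!wbuf[i].valid) {
            wbuf[i] = (wbuf_entry_t){ .valid = true, .base = base };
            wbuf[i].data[word] = value;
            wbuf[i].word_valid[word] = true;
            return true;
        }
    return false;                                /* buffer full: stall */
}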

27 7. Merge write buffers - Reduce miss penalty

28 8. Compiler optimizations - Reduce miss rate
Compiler research has produced improvements for both instruction misses and data misses. Optimizations include code and data rearrangement:
Reorder procedures - might reduce conflict misses.
Align basic blocks to the beginning of a cache block - decreases the chance of a cache miss.
Branch straightening - change the sense of the branch test and swap the basic blocks of the branch.
Data - arrange to improve spatial and temporal locality, e.g., process arrays by block.

29 8. Compiler optimizations - Reduce miss rate
Instructions:
Reorder procedures in memory so as to reduce conflict misses; use profiling to look at conflicts (using tools developed for the purpose).
Data (self-study: read the relevant sections of the textbook):
Merging arrays: improve spatial locality by using a single array of compound elements instead of 2 arrays.
Loop interchange: change the nesting of loops to access data in the order it is stored in memory.
Loop fusion: combine 2 independent loops that have the same looping structure and overlapping variables.
Blocking: improve temporal locality by accessing "blocks" of data repeatedly instead of going down whole columns or rows.

30 Merging Arrays Example
/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val and key; improves spatial locality.

31 Loop Interchange Example
/* Before */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality.

32 Loop Fusion Example
/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
    {   a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j]; }

2 misses per access to a and c versus one miss per access; improves temporal locality.

33 Blocking Example
/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
    {   r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    };

Two inner loops:
Read all N×N elements of z[].
Read N elements of 1 row of y[] repeatedly.
Write N elements of 1 row of x[].
Capacity misses are a function of N and cache size: 2N³ + N² words accessed (assuming no conflicts; otherwise …).
Idea: compute on a B×B submatrix that fits in the cache.

34 Blocking Example
/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1)
            {   r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            };

B is called the blocking factor. Capacity misses drop from 2N³ + N² to 2N³/B + N². Conflict misses too?

35 Loop Blocking – Matrix Multiply
(Figure: access patterns to the x, y, and z arrays before and after blocking in matrix multiply.)

36 9. Hardware Prefetching of Instructions & Data - Reduce Miss Penalty or Miss Rate
On an instruction miss, the processor fetches two blocks: the requested block is placed in the I-cache, and the next sequential block is placed in an instruction stream buffer.
A similar approach can be applied to data accesses.
The Intel Core i7 supports hardware prefetching into both L1 and L2.

37 10. Compiler-controlled prefetch - Reduce miss penalty or miss rate
The compiler inserts instructions to prefetch data, either into a register or into the cache.
Faulting or nonfaulting? That is, should a prefetch be allowed to cause a page fault or memory-protection fault? Here we assume nonfaulting cache prefetch, which does not change the contents of registers or memory and does not cause a memory fault.
Goal: overlap execution with prefetching.

38 10. Compiler-controlled prefetch - Reduce miss penalty or miss rate
/* Before */
for (i = 0; i < 3; i = i+1)
    for (j = 0; j < 100; j = j+1)
        a[i][j] = b[j][0] * b[j+1][0];

/* After */
for (j = 0; j < 100; j = j+1) {
    prefetch(b[j+7][0]);   /* b(j,0) for 7 iterations later */
    prefetch(a[0][j+7]);   /* a(0,j) for 7 iterations later */
    a[0][j] = b[j][0] * b[j+1][0];
}
for (i = 1; i < 3; i = i+1)
    for (j = 0; j < 100; j = j+1) {
        prefetch(a[i][j+7]);   /* a(i,j) for 7 iterations later */
        a[i][j] = b[j][0] * b[j+1][0];
    }

39 Homework 4 (problem 2.1 in the 5th edition)

40

