CS252/Culler Lec 4.1 1/31/02 CS203A Graduate Computer Architecture Lecture 14 Cache Design Taken from Prof. David Culler’s notes.



CS252/Culler Lec 4.2 1/31/02 How to Improve Cache Performance? 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.
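These three targets fall directly out of the standard average memory access time (AMAT) decomposition; stating it explicitly (textbook form, not reproduced on the slide) makes the rest of the lecture easier to follow:

\[ \text{AMAT} = \text{Hit time} + \text{Miss rate} \times \text{Miss penalty} \]

Shrinking any one of the three terms shrinks AMAT, which is exactly how the optimizations on the following slides are grouped.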

CS252/Culler Lec 4.3 1/31/02 Where do misses come from? Classifying Misses: the 3 Cs –Compulsory—The first access to a block cannot be in the cache, so the block must be brought in. Also called cold-start misses or first-reference misses. (These occur even in an infinite cache.) –Capacity—If the cache cannot contain all the blocks needed during execution of a program, capacity misses occur as blocks are discarded and later retrieved. (Misses in a fully associative cache of size X.) –Conflict—If the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory and capacity misses) occur because a block can be discarded and later retrieved when too many blocks map to its set. Also called collision misses or interference misses. (Misses in an N-way set-associative cache of size X.) A 4th "C": –Coherence—Misses caused by cache coherence (blocks invalidated by other processors).
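A concrete way to see a conflict miss: in a direct-mapped cache, two block addresses exactly one cache-size apart map to the same set and keep evicting each other. A minimal sketch of the index arithmetic (the 4 KB / 32-byte parameters are made up for illustration, not taken from the slides):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical direct-mapped cache: 4 KB total, 32-byte blocks -> 128 sets. */
#define CACHE_SIZE 4096
#define BLOCK_SIZE 32
#define NUM_SETS   (CACHE_SIZE / BLOCK_SIZE)

static unsigned set_index(uintptr_t addr)
{
    return (unsigned)((addr / BLOCK_SIZE) % NUM_SETS);  /* set a block maps to */
}

int main(void)
{
    uintptr_t a = 0x10000;             /* some block-aligned address          */
    uintptr_t b = a + CACHE_SIZE;      /* exactly one cache size further on   */

    /* Same index, different tags: alternating accesses to a and b miss every
       time in a direct-mapped cache (conflict misses), even though only two
       blocks are live and the cache could hold 128. */
    printf("index(a) = %u, index(b) = %u\n", set_index(a), set_index(b));
    return 0;
}

In a fully associative cache of the same size both blocks would stay resident, which is why these misses count as conflict rather than capacity misses.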

CS252/Culler Lec 4.4 1/31/02 3Cs Absolute Miss Rate (SPEC92) [Figure: absolute miss rate vs. cache size for SPEC92, broken into compulsory, capacity, and conflict components]

CS252/Culler Lec 4.5 1/31/02 Cache Size Old rule of thumb: 2x size => 25% cut in miss rate What does it reduce?

CS252/Culler Lec 4.6 1/31/02 Cache Organization? Assume the total cache size is not changed. What happens if we: 1) change the block size, 2) change the associativity, 3) change the compiler? Which of the 3Cs is obviously affected?

CS252/Culler Lec 4.7 1/31/02 Larger Block Size (fixed total size & associativity): reduces compulsory misses, increases conflict misses. What else drives up block size?

CS252/Culler Lec 4.8 1/31/02 Associativity [Figure: 3Cs miss rate vs. cache size at varying associativity; higher associativity shrinks the conflict component]

CS252/Culler Lec 4.9 1/31/02 3Cs Relative Miss Rate [Figure: the same 3Cs data, normalized to show each component's share of the total miss rate] Flaws: for fixed block size Good: insight => invention

CS252/Culler Lec 4.10 1/31/02 Fast Hit Time + Low Conflict => Victim Cache How to combine the fast hit time of a direct-mapped cache yet still avoid conflict misses? Add a small buffer to hold data discarded from the cache. Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache. Used in Alpha and HP machines. [Diagram: direct-mapped cache backed by a small fully associative victim cache — four entries, each with a tag/comparator and one cache line of data — placed between the cache and the next lower level of the hierarchy]
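A rough behavioral sketch of the victim-cache idea (my own illustration, not Jouppi's circuit or the Alpha's implementation; the sizes and the FIFO replacement are placeholders): a direct-mapped cache backed by a tiny fully associative buffer that catches the most recent evictions.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS     128    /* direct-mapped main cache: 128 blocks            */
#define VICTIM_SLOTS 4      /* tiny fully associative victim cache             */

typedef struct { bool valid; uintptr_t tag; } Line;

static Line cache[NUM_SETS];       /* main cache, indexed by set               */
static Line victim[VICTIM_SLOTS];  /* victim entries hold full block addresses */
static int  victim_next;           /* FIFO replacement pointer                  */

/* Returns true on a hit in either the main cache or the victim cache. */
static bool access_block(uintptr_t block)
{
    unsigned  idx = (unsigned)(block % NUM_SETS);
    uintptr_t tag = block / NUM_SETS;

    if (cache[idx].valid && cache[idx].tag == tag)
        return true;                                  /* ordinary fast hit */

    /* Miss in the direct-mapped cache: probe the victim cache. */
    for (int i = 0; i < VICTIM_SLOTS; i++) {
        if (victim[i].valid && victim[i].tag == block) {
            /* Swap: promote the victim line, demote the conflicting line. */
            Line displaced = cache[idx];
            cache[idx] = (Line){ true, tag };
            victim[i]  = displaced.valid
                         ? (Line){ true, displaced.tag * NUM_SETS + idx }
                         : (Line){ false, 0 };
            return true;                              /* hit in victim cache */
        }
    }

    /* Miss everywhere: the displaced line goes into the victim cache. */
    if (cache[idx].valid) {
        victim[victim_next] = (Line){ true, cache[idx].tag * NUM_SETS + idx };
        victim_next = (victim_next + 1) % VICTIM_SLOTS;
    }
    cache[idx] = (Line){ true, tag };
    return false;
}

int main(void)
{
    /* Two blocks that collide in the direct-mapped cache (same index):
       with the victim cache only the first two accesses miss. */
    uintptr_t a = 1000, b = a + NUM_SETS;
    for (int i = 0; i < 6; i++)
        printf("%s\n", access_block(i % 2 ? b : a) ? "hit" : "miss");
    return 0;
}

The two conflicting blocks ping-pong between the main cache and the victim buffer and hit after the initial cold misses, which is the behavior Jouppi exploited for small direct-mapped caches.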

CS252/Culler Lec 4.11 1/31/02 Alpha Cache Organization [Figure: Alpha cache organization]

CS252/Culler Lec 4.12 1/31/02 Reducing Misses by Hardware Prefetching of Instructions & Data E.g., instruction prefetching: –Alpha fetches 2 blocks on a miss (sequential prefetch) –Cache pollution if the prefetched block goes unused — it may have displaced useful data! –Solution: put the incoming block in a "stream buffer", and check the stream buffer on a miss Works with data blocks too: –Jouppi [1990]: 1 data stream buffer caught 25% of the misses from a 4KB cache; 4 streams caught 43% –Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set-associative caches –Data prediction is difficult Prefetching relies on having extra memory bandwidth that can be used without penalty Question: what to prefetch, and when? Instruction prefetch is fine, but data? (A rough sketch of the stream-buffer idea follows.)
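A minimal sketch of the sequential stream-buffer idea referred to above (my own simplification, not Jouppi's design: a real stream buffer prefetches asynchronously and may track several streams; here it is just a window of the next few block addresses, refilled whenever it is consulted):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define STREAM_DEPTH 4   /* how many sequential blocks the buffer runs ahead */

typedef struct {
    bool      valid;
    uintptr_t next_block;   /* oldest prefetched block address          */
    int       count;        /* how many sequential blocks are buffered  */
} StreamBuffer;

/* Called on a cache miss for 'block'. Returns true if the stream buffer
   already holds the block (the miss is serviced cheaply from the buffer);
   in either case the buffer restarts prefetching past 'block'. */
static bool stream_buffer_lookup(StreamBuffer *sb, uintptr_t block)
{
    bool hit = sb->valid &&
               block >= sb->next_block &&
               block <  sb->next_block + (uintptr_t)sb->count;

    sb->valid      = true;
    sb->next_block = block + 1;     /* prefetch the blocks after this one */
    sb->count      = STREAM_DEPTH;
    return hit;
}

int main(void)
{
    StreamBuffer sb = {0};
    /* A purely sequential miss stream: after the first cold miss, every
       following block is found in the stream buffer. */
    for (uintptr_t b = 100; b < 106; b++)
        printf("block %lu: %s\n", (unsigned long)b,
               stream_buffer_lookup(&sb, b) ? "stream-buffer hit" : "miss");
    return 0;
}

Note that prefetched blocks live in the buffer, not in the cache, so useless prefetches do not pollute the cache — which is exactly the motivation given on the slide.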

CS252/Culler Lec 4.13 1/31/02 Leave it to the Programmer? Software Prefetching of Data Data prefetch – explicit prefetch instructions –Load data into a register (HP PA-RISC loads) –Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v9) –Special prefetching instructions cannot cause faults; a form of speculative execution Prefetching comes in two flavors: –Binding prefetch: requests load directly into a register. »Must be the correct address and register! –Non-binding prefetch: load into the cache. »Can be incorrect, and cannot fault. Issuing prefetch instructions takes time –Is the cost of issuing prefetches < the savings in reduced misses? –Wider superscalar issue reduces the difficulty of finding issue bandwidth
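What a non-binding software prefetch looks like in source code (illustrative only: this uses the GCC/Clang __builtin_prefetch hint rather than any particular ISA's prefetch instruction, and the prefetch distance of 16 elements is a guess that would have to be tuned):

#include <stddef.h>

/* Sum an array while prefetching data a few iterations ahead. The prefetch
   is only a hint: it cannot fault, and if the address turns out to be
   useless the cost is just the issue slot and possible cache pollution. */
double sum_with_prefetch(const double *a, size_t n)
{
    const size_t dist = 16;                    /* prefetch distance (tunable) */
    double s = 0.0;

    for (size_t i = 0; i < n; i++) {
        if (i + dist < n)
            __builtin_prefetch(&a[i + dist], 0 /* read */, 1 /* low locality */);
        s += a[i];
    }
    return s;
}

For a simple streaming loop like this a hardware prefetcher usually does the job already; explicit prefetches pay off mainly for irregular patterns (e.g. pointer chasing) where the next address is computable early but not sequential — which echoes the slide's question about whether the issue cost is repaid.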

CS252/Culler Lec 4.14 1/31/02 Summary: Miss Rate Reduction 3 Cs: Compulsory, Capacity, Conflict 0. Larger cache 1. Reduce misses via larger block size 2. Reduce misses via higher associativity 3. Reduce misses via a victim cache 4. Reduce misses via pseudo-associativity 5. Reduce misses by HW prefetching of instructions and data 6. Reduce misses by SW prefetching of data 7. Reduce misses by compiler optimizations (example below) Prefetching comes in two flavors: –Binding prefetch: requests load directly into a register. »Must be the correct address and register! –Non-binding prefetch: load into the cache. »Can be incorrect. Frees HW/SW to guess!
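As a concrete instance of item 7, here is the classic loop-blocking (tiling) transformation, which attacks capacity misses by keeping a small tile of data resident while it is reused; a sketch under assumed sizes (N and the tile size B are placeholders that would be tuned to the machine):

#define N 512
#define B  32      /* tile size: pick it so roughly 3*B*B doubles fit in cache */

/* Blocked matrix multiply, x += y * z. Each B x B tile of y and z is reused
   many times while it is still resident, instead of streaming entire rows
   and columns through the cache on every outer iteration. */
void matmul_blocked(double x[N][N], double y[N][N], double z[N][N])
{
    for (int jj = 0; jj < N; jj += B)
        for (int kk = 0; kk < N; kk += B)
            for (int i = 0; i < N; i++)
                for (int j = jj; j < jj + B; j++) {
                    double r = x[i][j];
                    for (int k = kk; k < kk + B; k++)
                        r += y[i][k] * z[k][j];
                    x[i][j] = r;
                }
}

Other transformations in this family (merging arrays, loop interchange, loop fusion) work the same way: they reorganize code or data so that accesses made close together in time touch fewer distinct cache blocks.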

CS252/Culler Lec 4.15 1/31/02 Review: Improving Cache Performance 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

CS252/Culler Lec 4.16 1/31/02 1. Reducing Miss Penalty: Read Priority over Write on Miss Write-through with write buffers => RAW conflicts with main-memory reads on cache misses –If we simply wait for the write buffer to empty, the read miss penalty can increase (by 50% on the old MIPS 1000) –Instead, check the write-buffer contents before the read; if there are no conflicts, let the memory access continue Write-back: we want the buffer to hold displaced blocks –Read miss replacing a dirty block –Normal: write the dirty block to memory, then do the read –Instead: copy the dirty block to a write buffer, do the read, and then do the write –The CPU stalls less since it restarts as soon as the read is done
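A sketch of the "check the write buffer before the read" policy (illustrative data structures only, not any real controller): a read miss may bypass the buffered writes only when none of them target the missing block.

#include <stdint.h>
#include <stdbool.h>

#define WB_ENTRIES 4

typedef struct {
    bool      valid;
    uintptr_t block;        /* block address targeted by the buffered write */
} WriteBufferEntry;

static WriteBufferEntry write_buffer[WB_ENTRIES];

/* On a read miss: if no buffered write conflicts with the missing block, the
   read may go to memory immediately (read priority over write). Otherwise a
   simple design drains the buffer first; a fancier one forwards the data. */
static bool read_may_bypass_writes(uintptr_t miss_block)
{
    for (int i = 0; i < WB_ENTRIES; i++)
        if (write_buffer[i].valid && write_buffer[i].block == miss_block)
            return false;              /* RAW hazard through memory */
    return true;
}

int main(void)
{
    write_buffer[0] = (WriteBufferEntry){ true, 42 };  /* pending write to block 42 */
    /* Block 7 can bypass the buffer; block 42 cannot. */
    return (read_may_bypass_writes(7) && !read_may_bypass_writes(42)) ? 0 : 1;
}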

CS252/Culler Lec 4.17 1/31/02 2. Reduce Miss Penalty: Early Restart and Critical Word First Don't wait for the full block to be loaded before restarting the CPU –Early restart—As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution –Critical word first—Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch or requested word first Generally useful only for large blocks. Spatial locality means the CPU tends to want the next sequential word soon, so it is not clear how much early restart benefits.
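To make the two policies concrete, a tiny illustration (my own, assuming an 8-word block) of the order in which words come back under critical-word-first with wrapped fetch:

#include <stdio.h>

#define WORDS_PER_BLOCK 8

/* Print the order in which the words of a block are returned when word
   'critical' caused the miss: the requested word first, then the rest of
   the block wrapping around (wrapped fetch / requested word first). */
static void print_fill_order(int critical)
{
    for (int i = 0; i < WORDS_PER_BLOCK; i++)
        printf("%d ", (critical + i) % WORDS_PER_BLOCK);
    printf("\n");
}

int main(void)
{
    print_fill_order(5);   /* prints: 5 6 7 0 1 2 3 4 */
    return 0;
}

Early restart alone keeps the normal 0..7 fill order but hands word 5 to the CPU the moment it arrives; critical word first additionally reorders the fetch so that word 5 arrives first.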

CS252/Culler Lec 4.18 1/31/02 3. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses A non-blocking (lockup-free) cache allows the data cache to continue to supply hits during a miss –requires full/empty bits on registers or out-of-order execution –requires multi-bank memories "Hit under miss" reduces the effective miss penalty by working during a miss instead of ignoring CPU requests "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses –Significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses –Requires multiple memory banks (otherwise the overlap cannot be supported) –The Pentium Pro allows 4 outstanding memory misses

CS252/Culler Lec 4.19 1/31/02 4: Add a second-level cache L2 equations: AMAT = Hit Time L1 + Miss Rate L1 x Miss Penalty L1, where Miss Penalty L1 = Hit Time L2 + Miss Rate L2 x Miss Penalty L2, so AMAT = Hit Time L1 + Miss Rate L1 x (Hit Time L2 + Miss Rate L2 x Miss Penalty L2) Definitions: –Local miss rate—misses in this cache divided by the total number of memory accesses to this cache (this is Miss Rate L2 above) –Global miss rate—misses in this cache divided by the total number of memory accesses generated by the CPU –The global miss rate is what matters
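The same equations with the subscripts restored (they were lost in the transcript); this is the standard two-level AMAT composition:

\[
\begin{aligned}
\text{AMAT} &= \text{Hit time}_{L1} + \text{Miss rate}_{L1} \times \text{Miss penalty}_{L1} \\
\text{Miss penalty}_{L1} &= \text{Hit time}_{L2} + \text{Miss rate}_{L2} \times \text{Miss penalty}_{L2} \\
\text{AMAT} &= \text{Hit time}_{L1} + \text{Miss rate}_{L1} \times \big(\text{Hit time}_{L2} + \text{Miss rate}_{L2} \times \text{Miss penalty}_{L2}\big)
\end{aligned}
\]

Here the L2 miss rate in the formula is the local miss rate; the global L2 miss rate is Miss rate_{L1} x Miss rate_{L2}.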

CS252/Culler Lec 4.20 1/31/02 Comparing Local and Global Miss Rates 32 KByte 1st-level cache; increasing 2nd-level cache size Global miss rate is close to the single-level cache rate, provided L2 >> L1 Don't use the local miss rate L2 is not tied to the CPU clock cycle! Cost & A.M.A.T.: generally want fast hit times and fewer misses; since hits are few, target miss reduction [Figure: local and global L2 miss rates vs. cache size, on linear and log scales]

CS252/Culler Lec 4.21 1/31/02 Example For every 1000 memory references, 40 misses in L1 and 20 misses in L2; the hit time in L1 is 1 cycle and in L2 is 10 cycles; the miss penalty from L2 to memory is 100 cycles; there are 1.5 memory references per instruction. What are the AMAT and the average stall cycles per instruction? –AMAT = 1 + 40/1000 x (10 + 20/40 x 100) = 3.4 cycles –AMAT without L2 = 1 + 40/1000 x 100 = 5 cycles => an improvement of 1.6 cycles due to L2 –Average stall cycles per instruction = 1.5 x 40/1000 x 10 + 1.5 x 20/1000 x 100 = 3.6 cycles Note: we have not distinguished reads and writes; L2 is accessed only on an L1 miss, i.e. a write-back cache
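The same computation with the miss rates written out per memory reference (4% for L1; the local L2 miss rate is 20/40 = 50%, the global L2 miss rate 20/1000 = 2%):

\[
\begin{aligned}
\text{AMAT} &= 1 + 0.04 \times (10 + 0.5 \times 100) = 3.4 \text{ cycles} \\
\text{Stall cycles per instruction} &= 1.5 \times (0.04 \times 10 + 0.02 \times 100) = 3.6
\end{aligned}
\]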

CS252/Culler Lec 4.22 1/31/02 Reducing Misses: Which Apply to the L2 Cache? Reducing miss rate: 1. Reduce misses via larger block size 2. Reduce conflict misses via higher associativity 3. Reduce conflict misses via a victim cache 4. Reduce conflict misses via pseudo-associativity 5. Reduce misses by HW prefetching of instructions and data 6. Reduce misses by SW prefetching of data 7. Reduce capacity/conflict misses by compiler optimizations

CS252/Culler Lec 4.23 1/31/02 L2 cache block size & A.M.A.T. 32KB L1, 8-byte path to memory [Figure: relative execution time vs. L2 block size]

CS252/Culler Lec 4.24 1/31/02 Reducing Miss Penalty Summary Four techniques: –Read priority over write on miss –Early restart and critical word first on miss –Non-blocking caches (hit under miss, miss under miss) –Second-level cache Can be applied recursively to multilevel caches –The danger is that the time to DRAM will grow with multiple levels in between –First attempts at L2 caches can make things worse, since the increased worst-case penalty hurts

CS252/Culler Lec 4.25 1/31/02 What is the Impact of What You've Learned About Caches? 1960–1985: Speed = f(no. of operations) 1990: –Pipelined execution & fast clock rate –Out-of-order execution –Superscalar instruction issue 1998: Speed = f(non-cached memory accesses) Superscalar, out-of-order machines hide the L1 data cache miss (~5 clocks) but not the L2 cache miss (~50 clocks)

CS252/Culler Lec 4.26 1/31/02 Cache Optimization Summary

Technique                           MR   MP   HT   Complexity
Larger Block Size                   +    –         0
Higher Associativity                +         –    1
Victim Caches                       +              2
Pseudo-Associative Caches           +              2
HW Prefetching of Instr/Data        +              2
Compiler Controlled Prefetching     +              3
Compiler Reduce Misses              +              0
Priority to Read Misses                  +         1
Early Restart & Critical Word 1st        +         2
Non-Blocking Caches                      +         3
Second Level Caches                      +         2

(MR = miss rate, MP = miss penalty, HT = hit time; + means the technique improves that factor, – means it hurts it)

CS252/Culler Lec 4.27 1/31/02 How to Improve Cache Performance? 1. Reduce the miss rate, 2. Reduce the miss penalty, or 3. Reduce the time to hit in the cache.

CS252/Culler Lec 4.28 1/31/02 1. Small and simple caches Small on-chip L1 caches – less access time A direct-mapped cache is faster because only one tag has to be compared and the data can be read in parallel with the tag check, but it has a higher miss ratio Compromise – direct-mapped L1 cache and set-associative L2 cache How to predict cache access time at the design stage? – use the CACTI program

CS252/Culler Lec 4.29 1/31/02 2. Virtual Caches No virtual-to-physical address translation on the hit path – no TLB lookup before the cache access Problems (read Section 5.7): How to ensure protection across processes? Flushing the cache on every process switch is too much overhead and increases the miss rate; instead, add a process-identifier tag (PID) to each block The OS and user programs can map two or more virtual addresses to the same physical address, so the same data can live in two places in the cache – the aliasing (synonym) problem!

CS252/Culler Lec 4.30 1/31/02 3. Pipeline the cache access – doesn't this increase hit latency? 4. Trace caches – what are they? Read pp. – Homework