Summary
The 3 Cs: Compulsory, Capacity, and Conflict misses.

Reducing miss rate:
1. Larger block size
2. Higher associativity
3. Victim cache (see the sketch after this list)
4. Pseudo-associativity
5. HW prefetching of instructions and data
6. SW prefetching of data
7. Compiler optimizations

Danger: concentrating on just one parameter when evaluating performance.
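
As a concrete illustration of item 3, here is a minimal behavioral sketch of a direct-mapped cache backed by a small fully associative victim cache. The structure sizes, field names, and the swap-on-victim-hit policy are illustrative assumptions, not a specific hardware design.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sizes (assumptions, not taken from the slides). */
#define CACHE_LINES  256   /* direct-mapped main cache            */
#define VICTIM_LINES 4     /* small fully associative victim cache */

typedef struct { bool valid; uint32_t tag;   } Line;       /* main cache: tag per index   */
typedef struct { bool valid; uint32_t block; } VictimLine; /* victim: full block address  */

static Line       cache[CACHE_LINES];
static VictimLine victim[VICTIM_LINES];
static int        victim_next;  /* trivial FIFO replacement in the victim cache */

/* Returns true on a hit in either structure; models tag behavior only. */
bool access_block(uint32_t block)
{
    uint32_t index = block % CACHE_LINES;
    uint32_t tag   = block / CACHE_LINES;

    if (cache[index].valid && cache[index].tag == tag)
        return true;                                   /* ordinary hit */

    /* Conflict misses often find their block in the victim cache. */
    for (int i = 0; i < VICTIM_LINES; i++) {
        if (victim[i].valid && victim[i].block == block) {
            /* Swap: the promoted block moves into the main cache, and the
               line it displaces takes its place in the victim cache. */
            uint32_t displaced = cache[index].tag * CACHE_LINES + index;
            bool     had_line  = cache[index].valid;
            cache[index].valid = true;
            cache[index].tag   = tag;
            victim[i].valid = had_line;
            victim[i].block = displaced;
            return true;                               /* saved by the victim cache */
        }
    }

    /* Full miss: evict the old line into the victim cache, then fill. */
    if (cache[index].valid) {
        victim[victim_next].valid = true;
        victim[victim_next].block = cache[index].tag * CACHE_LINES + index;
        victim_next = (victim_next + 1) % VICTIM_LINES;
    }
    cache[index].valid = true;
    cache[index].tag   = tag;
    return false;
}
```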

Overview

Reducing miss penalty:
1. Giving priority to read misses over writes
2. Sub-block placement
3. Early restart and critical word first
4. Nonblocking caches
5. Second-level caches

Reducing hit time:
1. Small and simple caches
2. Avoiding address translation
3. Pipelining writes
4. Small subblocks

1. Giving Priority to Read Misses over Writes

Write buffers complicate memory access: a buffered write can create a RAW hazard in main memory on a cache miss.

    SW 512(R0), R3    ; cache index 0
    LW R1, 1024(R0)   ; cache index 0 (evicts the block holding M[512])
    LW R2, 512(R0)    ; cache index 0 (read miss)

If the SW is still sitting in the write buffer when the last LW misses, reading memory directly would return a stale value into R2.

- Wait for the write buffer to empty? That might increase the read miss penalty.
- Better: check the write buffer contents before the read; if there are no conflicts, let the memory access continue (see the sketch below).

Write-back caches: read miss replacing a dirty block
- Normal: write the dirty block to memory, then do the read.
- Optimized: copy the dirty block to the write buffer, then do the read.
- Further optimization: write merging.
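
A minimal sketch of the "check write buffer before read" policy. The buffer depth, field names, and the `read_from_memory` stub are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4   /* assumed write buffer depth */

typedef struct {
    bool     valid;
    uint32_t addr;
    uint32_t data;
} WriteBufferEntry;

static WriteBufferEntry write_buffer[WB_ENTRIES];

/* Stand-in for the next level of the hierarchy. */
static uint32_t read_from_memory(uint32_t addr) { (void)addr; return 0; }

/* On a read miss, scan the write buffer. If the address matches a
   buffered (not yet retired) write, forward that value instead of
   waiting for the buffer to drain; otherwise the read proceeds to
   memory immediately. */
uint32_t read_miss(uint32_t addr)
{
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (write_buffer[i].valid && write_buffer[i].addr == addr)
            return write_buffer[i].data;   /* avoid the RAW hazard */
    }
    return read_from_memory(addr);         /* no conflict: don't wait */
}
```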

2. Sub-block Placement

- Don't have to load the full block on a miss.
- Valid bits per sub-block indicate which parts of the block hold valid data (a struct sketch follows).

[Figure: one cache line with a single tag, a valid bit per sub-block, and the sub-blocks of data.]
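
The sketch below models a cache line with per-sub-block valid bits; the block and sub-block sizes are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define SUBBLOCKS_PER_BLOCK 8   /* assumed geometry */
#define SUBBLOCK_BYTES      8

typedef struct {
    uint32_t tag;
    bool     valid[SUBBLOCKS_PER_BLOCK];   /* one valid bit per sub-block */
    uint8_t  data[SUBBLOCKS_PER_BLOCK][SUBBLOCK_BYTES];
} CacheLine;

/* A hit requires both a tag match and a valid sub-block. A miss on an
   otherwise-matching line only needs to fetch one sub-block, not the
   whole block, which is the point of sub-block placement. */
bool subblock_hit(const CacheLine *line, uint32_t tag, unsigned subblock)
{
    return line->tag == tag && line->valid[subblock];
}
```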

3. Early Restart and Critical Word First

- Don't wait for the full block to be loaded before resuming the CPU.
- Early restart: as soon as the requested word arrives, send it to the CPU and let the CPU continue execution.
- Critical word first: request the missed word from memory first and send it to the CPU as soon as it arrives; then fill in the rest of the words in the block (fill-order sketch below).
- Generally useful only for large blocks.
- Extremely good spatial locality can reduce the impact: back-to-back reads to the two halves of a cache block do not save much (see the example in the book).
- The compiler needs to schedule instructions to take advantage of it!
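
A small runnable illustration of the critical-word-first fill order: memory returns the missed word first, then wraps around the block. The block size is an assumption.

```c
#include <stdio.h>

#define WORDS_PER_BLOCK 8   /* assumed block geometry */

/* Given the word the CPU actually missed on, compute the order in which
   memory returns the words of the block: critical word first, wrapping. */
void fill_order(unsigned critical_word, unsigned order[WORDS_PER_BLOCK])
{
    for (unsigned i = 0; i < WORDS_PER_BLOCK; i++)
        order[i] = (critical_word + i) % WORDS_PER_BLOCK;
}

int main(void)
{
    unsigned order[WORDS_PER_BLOCK];
    fill_order(5, order);                  /* miss on word 5 of the block */
    for (unsigned i = 0; i < WORDS_PER_BLOCK; i++)
        printf("%u ", order[i]);           /* prints: 5 6 7 0 1 2 3 4 */
    printf("\n");
    return 0;
}
```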

4. Nonblocking Caches

- A nonblocking cache continues to supply cache hits during a miss; this requires an out-of-order execution CPU.
- "Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring CPU requests.
- "Hit under multiple miss" may further lower the effective miss penalty by overlapping multiple misses.
  - Significantly increases the complexity of the cache controller (the MSHR sketch below illustrates the bookkeeping).
  - Requires multiple memory banks; otherwise multiple outstanding misses cannot be serviced in parallel.
- The Pentium Pro allows 4 outstanding memory misses.
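
Nonblocking caches are commonly described as tracking outstanding misses in Miss Status Holding Registers (MSHRs). The sketch below is an illustrative model of MSHR allocation; the register count, field names, and the stub tag check are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define MSHR_COUNT 4   /* e.g. up to 4 outstanding misses, as on the Pentium Pro */

typedef struct {
    bool     busy;
    uint32_t block_addr;   /* block currently being fetched from the next level */
} MSHR;

static MSHR mshr[MSHR_COUNT];

/* Stand-in for the ordinary tag check; always misses in this sketch. */
static bool tag_lookup(uint32_t block_addr) { (void)block_addr; return false; }

typedef enum { HIT, MISS_ISSUED, MISS_MERGED, STALL } Outcome;

Outcome access(uint32_t block_addr)
{
    if (tag_lookup(block_addr))
        return HIT;                    /* hit under miss: serviced immediately */

    int free_slot = -1;
    for (int i = 0; i < MSHR_COUNT; i++) {
        if (mshr[i].busy && mshr[i].block_addr == block_addr)
            return MISS_MERGED;        /* secondary miss to a block already in flight */
        if (!mshr[i].busy && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return STALL;                  /* all MSHRs busy: the cache finally blocks */

    mshr[free_slot].busy = true;       /* primary miss: issue to the next level */
    mshr[free_slot].block_addr = block_addr;
    return MISS_ISSUED;
}
```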

Hit Under Miss vs. Hit Under i Misses

- Configuration: 8 KB data cache, direct mapped, 32 B blocks, 16-cycle miss penalty.
- FP benchmarks: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26 as more outstanding misses are allowed.
- Integer benchmarks: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19.

[Figure: average memory access time per benchmark (ear, wave5, nasa7, ora, xlisp, doduc, su2cor, eqntott, espresso, compress, mdljsp2, fpppp, swm256, mdljdp2, hydro2d, alvinn, spice2g6, tomcatv) for four configurations: base (blocking), hit under 1 miss (0->1), hit under 2 misses (1->2), and hit under 64 misses (2->64).]

5. Second-Level Caches

L2 equations:

    AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
    Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
    AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)

Definitions:
- Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (this is Miss Rate_L2 above).
- Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2).
- The global miss rate is what matters (worked example below).
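
To make the equations concrete, here is a small worked example; every number in it is an illustrative assumption, not data from the slides.

```c
#include <stdio.h>

/* Worked instance of the two-level AMAT equation. */
int main(void)
{
    double hit_time_l1     = 1.0;    /* cycles */
    double miss_rate_l1    = 0.04;   /* local = global for L1 */
    double hit_time_l2     = 10.0;   /* cycles */
    double miss_rate_l2    = 0.25;   /* LOCAL miss rate of L2 */
    double miss_penalty_l2 = 100.0;  /* cycles to main memory */

    /* Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2 */
    double miss_penalty_l1 = hit_time_l2 + miss_rate_l2 * miss_penalty_l2;

    /* AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1 */
    double amat = hit_time_l1 + miss_rate_l1 * miss_penalty_l1;

    printf("Miss penalty L1 = %.1f cycles\n", miss_penalty_l1);  /* 35.0 */
    printf("AMAT            = %.1f cycles\n", amat);             /* 2.4  */
    return 0;
}
```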

Local and Global Miss Rates

- Figure: for a 32 KB L1 cache, the global miss rate stays close to the single-level cache miss rate, provided L2 >> L1.
- The local L2 miss rate looks alarmingly high: do not use it to measure impact; use it in the AMAT equation (worked numbers below).
- L2 is not tied to the CPU clock cycle, so the design target is miss reduction rather than hit time.
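
A quick illustration of why the two rates diverge; the event counts are made up for the example.

```c
#include <stdio.h>

int main(void)
{
    long cpu_accesses = 1000;  /* memory accesses generated by the CPU    */
    long l1_misses    = 40;    /* only these accesses ever reach L2       */
    long l2_misses    = 10;

    double l2_local  = (double)l2_misses / l1_misses;    /* 0.25: looks bad    */
    double l2_global = (double)l2_misses / cpu_accesses; /* 0.01: what matters */

    printf("L2 local miss rate:  %.2f\n", l2_local);
    printf("L2 global miss rate: %.2f\n", l2_global);
    return 0;
}
```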

Reducing the L2 Miss Rate

1. Reduce misses via larger block size.
   - Inclusion is useful (for consistency) but can complicate the design: an L2 cache miss may require invalidating blocks in L1.
2. Reduce conflict misses via higher associativity.
3. Reduce conflict misses via a victim cache.
4. Reduce conflict misses via pseudo-associativity.
5. Reduce misses by HW prefetching of instructions and data.
6. Reduce misses by SW prefetching of data (see the sketch after this list).
7. Reduce capacity/conflict misses by compiler optimizations.
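
For item 6, a minimal software-prefetching sketch using the GCC/Clang `__builtin_prefetch` builtin. The prefetch distance of 16 elements is a tuning assumption, not a value from the slides.

```c
#include <stddef.h>

#define PREFETCH_DISTANCE 16   /* assumed; must be tuned per machine */

double sum_array(const double *a, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n)
            /* Hint: start bringing a[i+16] toward the cache while we work
               on a[i]. Prefetches are non-faulting, so a bad hint only
               wastes bandwidth. Arguments: address, rw=0 (read),
               locality=3 (keep in all cache levels). */
            __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 3);
        sum += a[i];
    }
    return sum;
}
```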

Miss Penalty Summary

Five techniques:
1. Read priority over writes on a miss
2. Sub-block placement
3. Early restart and critical word first on a miss
4. Nonblocking caches (hit under miss)
5. L2 caches

- These techniques can be applied recursively to build multilevel caches.
- Danger: the time to DRAM will keep growing even with multiple levels of cache.

Reducing Hit Time

Hit time affects the CPU clock rate, even for machines that take multiple cycles to access the cache.

Techniques:
1. Small and simple caches
2. Avoiding address translation
3. Pipelining writes

1. Small and Simple Caches

- Small hardware is faster, and a small cache fits on the same chip as the processor.
  - The Alpha 21164 has an 8 KB instruction cache and an 8 KB data cache plus a 96 KB second-level cache on chip.
- A small data cache enables a fast clock rate.
- Direct mapped, on chip: overlap the tag check with the data transmission.
- For L2, keep the tag check on chip with the data off chip: a fast tag check plus the large capacity of a separate memory chip.

2. Avoiding Address Translation

- Virtually addressed cache (vs. physical cache): send the virtual address straight to the cache.
  - Every time the process is switched, the cache must be flushed. Cost: time to flush plus the "compulsory" misses from an empty cache.
  - Must deal with aliases: two different virtual addresses that map to the same physical address.
  - I/O uses physical addresses, so it needs virtual addresses to interact with the cache.
- Solutions to aliases:
  - HW guarantees that every cache block has a unique physical address.
  - SW guarantee (page coloring): the lower n bits of the virtual and physical addresses must match; as long as those bits cover the index field and the cache is direct mapped, aliases land on the same cache block.
- Solution to cache flushes: a PID tag that identifies the process as well as the address within the process.

Virtual Addressed Caches CPU CPU CPU VA VA VA VA Tags TB $ PA Tags $ TB PA VA PA L2 $ $ TB MEM PA PA MEM MEM Overlap $ access with VA translation: requires $ index to remain invariant across translation Conventional Organization Virtually Addressed Cache Translate only on miss Synonym Problem

Process ID Impact

[Figure: effect of process-ID tags on the miss rate.]

Index with the Physical Portion of the Address

- If the index comes from the physical part of the address (the page offset), the tag access can start in parallel with the translation, and the result is compared against the physical tag.
- This limits the cache size to the page size (per way): what if we want bigger caches while keeping the same trick?
  - Larger page sizes
  - Higher associativity: Index = log2(Cache Size / (Block Size x Associativity)), so more ways means fewer index bits (the check below illustrates the constraint)
  - Page coloring

[Address breakdown: bits 31..12 form the page address, used as the physical address tag; bits 11..0 form the page offset, which contains the index and the block offset.]
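
A runnable check of whether a virtually indexed, physically tagged cache can be indexed entirely from the page offset; all the parameters are illustrative assumptions.

```c
#include <stdio.h>

static int log2u(unsigned x) { int n = 0; while (x > 1) { x >>= 1; n++; } return n; }

int main(void)
{
    unsigned page_size  = 4096;    /* 4 KB pages: 12 untranslated offset bits */
    unsigned cache_size = 32768;   /* 32 KB cache (assumed)                   */
    unsigned block_size = 32;      /* bytes                                   */
    unsigned assoc      = 8;       /* 8-way set associative                   */

    int offset_bits = log2u(page_size);
    /* Index = log2(Cache Size / (Block Size x Associativity)) */
    int index_bits  = log2u(cache_size / (block_size * assoc));
    int block_bits  = log2u(block_size);

    printf("index+block bits = %d, page offset bits = %d\n",
           index_bits + block_bits, offset_bits);
    if (index_bits + block_bits <= offset_bits)
        printf("OK: index fits in the page offset; tag check overlaps translation\n");
    else
        printf("Too big: need larger pages, more ways, or page coloring\n");
    return 0;
}
```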

3. Pipelined Writes

- A write hit needs a tag check (=?) before the data can be stored, which would normally make write hits slower than read hits.
- Pipeline the two steps: check the tag for write W1 now, but hold W1's data in a delayed write buffer; the actual write into the data array happens during the tag check of the next write W2.
- A subsequent read must also check the delayed write buffer (via the mux) so that it sees the pending data (behavioral sketch below).

[Diagram: CPU address and data-in/data-out paths, tag compare (=?), a delayed write buffer in front of the data array, a mux selecting between the array and the buffer, and a write buffer to the lower-level memory.]
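
A behavioral sketch of the pipelined-write idea with a one-entry delayed write buffer; the field names, single-entry depth, and miss handling are assumptions rather than a specific design.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINES 256   /* assumed cache geometry */

typedef struct { uint32_t tag; uint32_t data; bool valid; } Line;

static Line cache[LINES];

static struct {            /* the delayed write buffer: one pending write */
    bool     pending;
    uint32_t index, tag, data;
} dwb;

/* The deferred data write retires at the start of the NEXT cache
   operation, overlapping with that operation's tag check. */
static void retire_pending_write(void)
{
    if (dwb.pending) {
        cache[dwb.index].data = dwb.data;
        dwb.pending = false;
    }
}

void cache_write(uint32_t index, uint32_t tag, uint32_t data)
{
    retire_pending_write();                               /* overlaps this tag check */
    if (cache[index].valid && cache[index].tag == tag) {  /* tag check (=?) */
        dwb.pending = true;                               /* defer the data write */
        dwb.index = index; dwb.tag = tag; dwb.data = data;
    }
    /* On a write miss the write would go to the lower-level memory. */
}

bool cache_read(uint32_t index, uint32_t tag, uint32_t *out)
{
    /* Reads must see pending data, so check the delayed write buffer too. */
    if (dwb.pending && dwb.index == index && dwb.tag == tag) {
        *out = dwb.data;
        return true;
    }
    if (cache[index].valid && cache[index].tag == tag) {
        *out = cache[index].data;
        return true;
    }
    return false;
}
```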