5.2 Eleven Advanced Optimizations of Cache Performance

1. Small & simple caches - Reduce hit time
Smaller is faster: the dominant cost in hit time is indexing, and a smaller cache is quicker to index.
Keep L2 small enough to fit on the processor chip.
Direct mapping is simple: the tag check can overlap with the data transmission.
CACTI - a tool that simulates the impact of cache organization on hit time.
E.g., Fig. 5.4 (access time vs. size & associativity) suggests for hit time:
    Direct mapped is 1.2 - 1.5x faster than 2-way set associative
    2-way is 1.02 - 1.1x faster than 4-way
    4-way is 1.0 - 1.08x faster than fully associative
A worked comparison follows below.
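
To see why a faster hit time can outweigh a higher miss rate, here is a minimal sketch using the standard average-memory-access-time formula, AMAT = hit time + miss rate x miss penalty. The miss rates and the 20-cycle penalty are assumed, illustrative numbers, not figures from the text; only the ~1.25x hit-time ratio echoes the slide.

    #include <stdio.h>

    /* AMAT = hit time + miss rate * miss penalty.
       The hit-time ratio (~1.25x) follows the slide; the miss rates
       and the 20-cycle penalty are assumed for illustration. */
    int main(void) {
        double penalty = 20.0;
        double amat_dm = 1.00 + 0.014 * penalty;  /* direct mapped: fast hit, more misses */
        double amat_2w = 1.25 + 0.010 * penalty;  /* 2-way: slower hit, fewer misses */
        printf("direct mapped: AMAT = %.2f cycles\n", amat_dm);
        printf("2-way assoc.:  AMAT = %.2f cycles\n", amat_2w);
        return 0;
    }

With these numbers the direct-mapped cache wins (1.28 vs. 1.45 cycles) despite missing more often; a larger miss penalty shifts the balance back toward associativity.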

2. Way prediction - Reduce hit time
Extra bits are kept in the cache to predict the way (the block within the set) of the next cache access.
The multiplexor is set early to select the predicted block, and a single tag compare is done that cycle in parallel with reading the cache data.
On a mispredict, the other blocks are checked for matches in the next cycle.
Saves pipeline stages; the prediction is right for about 85% of accesses on a 2-way cache.
Good match for speculative processors; the Pentium 4 uses it.
A behavioral sketch follows below.
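
A minimal behavioral sketch of way prediction for a 2-way set-associative cache. The structure layout, block size, and retraining policy are assumptions for illustration; real hardware steers the output multiplexor from the prediction bits rather than running a loop.

    #include <stdbool.h>
    #include <stdint.h>

    #define NSETS 256   /* 2-way, 64-byte blocks assumed */

    typedef struct {
        uint32_t tag[2];
        bool     valid[2];
        uint8_t  predicted_way;   /* the extra prediction bit per set */
    } CacheSet;

    static CacheSet cache[NSETS];

    /* Returns the cycles the access takes: 1 on a correctly predicted
       hit, 2 when the other way must be checked in the next cycle. */
    int access_cycles(uint32_t addr) {
        uint32_t set = (addr >> 6) % NSETS;
        uint32_t tag = addr >> 14;        /* 6 offset bits + 8 index bits */
        CacheSet *s = &cache[set];

        int w = s->predicted_way;         /* check the predicted way first */
        if (s->valid[w] && s->tag[w] == tag)
            return 1;                     /* fast hit: looks direct mapped */

        int other = 1 - w;
        if (s->valid[other] && s->tag[other] == tag) {
            s->predicted_way = (uint8_t)other;   /* retrain the predictor */
            return 2;                     /* slow hit: one extra cycle */
        }
        return 2;                         /* miss; penalty handled elsewhere */
    }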

3. Trace caches - Reduce hit time
ILP challenge: finding enough instructions to execute every cycle without dependencies.
A trace cache holds dynamic traces of executed instructions, not the static sequences of instructions as laid out in memory.
Branch prediction is folded into the instruction cache.
More complicated address mapping, but better use of long blocks.
Disadvantage: conditional branches that make different choices put the same instructions into separate traces.
The Pentium 4 uses a trace cache of decoded micro-instructions.

4. Pipelined cache access - Increase cache bandwidth
Pipelining the cache access makes the effective latency of an L1 cache hit multiple clock cycles.
The result is a fast clock and high bandwidth, but slow hits.
E.g., a Pentium 4 L1 cache hit takes 4 cycles.
Cost: more pipeline stages.

5. Nonblocking cache (hit under miss) - Increase cache bandwidth
With out-of-order completion, the processor need not stall on a cache miss; it continues fetching instructions while waiting for the cache data.
If the cache does not block, it can supply data to later hits while still processing a miss, which reduces the effective miss penalty.
Overlap multiple misses? Called "hit under multiple misses" or "miss under miss"; this requires a memory system that can service multiple misses simultaneously.
A sketch of the required bookkeeping follows below.
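
Nonblocking caches are usually built around miss status holding registers (MSHRs) that track the outstanding misses. The following is a minimal sketch of that bookkeeping; the structure, its size, and the function name are assumptions for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    #define NMSHR 4   /* up to 4 outstanding misses (assumed) */

    typedef struct {
        bool     busy;
        uint64_t block_addr;   /* block currently being fetched */
    } MSHR;

    static MSHR mshr[NMSHR];

    /* On a miss: record it in an MSHR so the cache can keep serving
       hits (and further misses, up to NMSHR) while memory works.
       Returns false only when every MSHR is busy - the one case in
       which the processor must stall. */
    bool handle_miss(uint64_t block_addr) {
        for (int i = 0; i < NMSHR; i++)
            if (mshr[i].busy && mshr[i].block_addr == block_addr)
                return true;   /* merge: this block is already in flight */
        for (int i = 0; i < NMSHR; i++)
            if (!mshr[i].busy) {
                mshr[i].busy = true;
                mshr[i].block_addr = block_addr;
                return true;   /* miss issued; cache stays nonblocking */
            }
        return false;          /* structural stall: no free MSHR */
    }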

6. Multi-banked caches - Increase cache bandwidth
Independent banks support simultaneous accesses; banking was originally used in main memory.
The AMD Opteron has 2 banks of L2; the Sun Niagara has 4 banks of L2.
Works best when accesses are spread across the banks.
Spread addresses sequentially across banks - sequential interleaving (see the sketch below).
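
Sequential interleaving maps consecutive block addresses to consecutive banks, so a sequential access stream keeps all banks busy. A one-function sketch, with the block size and bank count assumed:

    #include <stdint.h>

    #define NBANKS     4   /* e.g., like Niagara's 4 L2 banks */
    #define BLOCK_BITS 6   /* 64-byte blocks assumed */

    /* Block 0 -> bank 0, block 1 -> bank 1, ..., block 4 -> bank 0:
       consecutive blocks land in different banks and can be
       accessed in parallel. */
    unsigned bank_of(uint64_t addr) {
        return (unsigned)((addr >> BLOCK_BITS) % NBANKS);
    }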

7. Critical word first, early restart - Reduce miss penalty
The processor needs only 1 word of the block at first, so give it what it needs first.
How is the block retrieved from memory?
Critical word first - request the needed word first, return it to the processor, then continue the memory transfer for the rest of the block.
Early restart - fetch the words in normal order, but return the requested word to the processor as soon as it arrives.
Benefits appear only with large blocks. Why? With a small block there is little transfer time left to hide.
Disadvantage: with spatial locality the next access often wants the rest of the block, so it may still have to wait. This also makes the miss penalty hard to estimate.
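
A back-of-the-envelope comparison of when the requested word arrives under the three policies. All the latencies here are assumed, illustrative numbers:

    #include <stdio.h>

    int main(void) {
        int latency  = 100;  /* cycles until the first word arrives (assumed) */
        int words    = 16;   /* 64-byte block of 4-byte words */
        int per_word = 2;    /* cycles per additional word on the bus (assumed) */
        int wanted   = 9;    /* processor asked for word index 9 */

        int whole_block = latency + words * per_word;        /* wait for it all */
        int early       = latency + (wanted + 1) * per_word; /* early restart */
        int cwf         = latency + per_word;                /* critical word first */

        printf("whole block: %d, early restart: %d, critical word first: %d\n",
               whole_block, early, cwf);
        return 0;
    }

With a 16-word block the gap is 132 vs. 120 vs. 102 cycles; with a 2-word block all three are nearly identical, which is why the technique pays off only for large blocks.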

8. Merge write buffers - Reduce miss penalty
Write-through relies on write buffers: all stores are sent to the lower level.
Write-back uses a simple buffer when a block is replaced.
Case: write buffer is empty - the data & address are written from the cache block into the buffer, and the cache treats the write as done.
Case: write buffer contains modified blocks - is this block's address already in the buffer? If so, write merging: combine the newly modified data with the buffer entry.
Case: buffer full & no address match - must wait for an empty buffer entry.
Merging uses memory more efficiently, since multi-word writes are faster than one word at a time (see the sketch below).
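
A minimal sketch of a merging write buffer. The entry count, block geometry, and function name are assumptions for illustration; a real buffer also drains entries to memory in the background.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define NENTRIES  4
    #define WORDS_PER 4   /* each entry covers one aligned 4-word block */

    typedef struct {
        bool     valid;
        uint64_t block;                /* aligned block address */
        uint32_t word[WORDS_PER];
        bool     written[WORDS_PER];   /* per-word valid bits */
    } WBEntry;

    static WBEntry wb[NENTRIES];

    /* Returns false only when the buffer is full and no entry matches -
       the one case where the processor must wait for a drain. */
    bool buffer_write(uint64_t addr, uint32_t data) {
        uint64_t block = addr / (4 * WORDS_PER);
        unsigned w     = (unsigned)(addr / 4) % WORDS_PER;

        for (int i = 0; i < NENTRIES; i++)       /* try to merge first */
            if (wb[i].valid && wb[i].block == block) {
                wb[i].word[w]    = data;
                wb[i].written[w] = true;
                return true;
            }
        for (int i = 0; i < NENTRIES; i++)       /* else take a free entry */
            if (!wb[i].valid) {
                memset(&wb[i], 0, sizeof wb[i]);
                wb[i].valid      = true;
                wb[i].block      = block;
                wb[i].word[w]    = data;
                wb[i].written[w] = true;
                return true;
            }
        return false;                            /* full and no match: stall */
    }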

9. Compiler optimizations - Reduce miss rate
Compiler research has produced improvements for both instruction misses and data misses.
Optimizations include code & data rearrangement:
    Reorder procedures - might reduce conflict misses
    Align basic blocks to the beginning of a cache block - decreases the chance of a cache miss on sequential code
    Branch straightening - change the sense of the branch test and swap the basic blocks of the branch
Data optimizations arrange data to improve spatial & temporal locality, e.g., processing arrays by block.

9. Compiler optimizations (continued) - Reduce miss rate
Loop interchange - make the code access data in the order it is stored, e.g.:

    /* Before: inner loop strides through x with stride 100 */
    for (j = 0; j < 100; j++)
        for (i = 0; i < 500; i++)
            x[i][j] = 2 * x[i][j];

    /* After: inner loop has stride 1 */
    for (i = 0; i < 500; i++)
        for (j = 0; j < 100; j++)
            x[i][j] = 2 * x[i][j];

vs. blocking for Gaussian elimination? (A blocked loop-nest sketch follows below.)
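
Blocking goes further than interchange: it restructures the loop nest so a submatrix is reused many times while it still fits in the cache. Below is the standard blocked matrix multiply commonly used to illustrate the idea; the matrix size N and blocking factor B are assumed here, with N a multiple of B so no edge cases are needed.

    #define N 512
    #define B 32   /* chosen so three BxB tiles fit in the cache (assumed) */

    /* x += y * z, computed in BxB tiles: each tile of y and z is
       reused B times while it is still resident in the cache. */
    void matmul_blocked(double x[N][N], double y[N][N], double z[N][N]) {
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                for (int i = 0; i < N; i++)
                    for (int j = jj; j < jj + B; j++) {
                        double r = 0.0;
                        for (int k = kk; k < kk + B; k++)
                            r += y[i][k] * z[k][j];
                        x[i][j] += r;
                    }
    }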

10. Hardware prefetch of instructions & data - Reduce miss penalty or miss rate
Prefetch instructions and data before the processor requests them.
Instruction fetch by block already tries this: on a miss, fetch the missed block and also the next sequential block.
Block prediction? Data accesses can be prefetched similarly.
Multiple streams? E.g., matrix * matrix walks several arrays at once.
The Pentium 4 can prefetch data into L2 from 8 streams in 8 different 4 KB pages (a one-stream sketch follows below).
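
A behavioral sketch of the simplest hardware scheme above, one-block-lookahead prefetch into a stream buffer. Everything here (the single-entry buffer, the fetch_from_memory stub) is an illustrative assumption, not a description of any particular machine.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical stand-in for the memory system. */
    static void fetch_from_memory(uint64_t block) { (void)block; }

    static uint64_t stream_buffer;         /* address of the prefetched block */
    static bool     stream_valid = false;

    /* Called on a cache miss to block b. */
    void miss_handler(uint64_t b) {
        if (!(stream_valid && stream_buffer == b))
            fetch_from_memory(b);          /* demand fetch: prefetch missed */
        /* Hit or not, prefetch the next sequential block. */
        stream_buffer = b + 1;
        stream_valid  = true;
        fetch_from_memory(b + 1);
    }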

11. Compiler-controlled prefetch - Reduce miss penalty or miss rate
The compiler inserts instructions to prefetch, either into a register or into the cache.
Faulting or nonfaulting? Should a prefetch be allowed to cause a page fault or memory protection fault?
Assume a nonfaulting cache prefetch: it does not change the contents of registers or memory, and it does not cause a memory fault.
Goal: overlap execution with prefetching (see the sketch below).
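
A small example using GCC/Clang's __builtin_prefetch, which emits exactly this kind of nonfaulting cache prefetch. The prefetch distance of 16 iterations is an assumed tuning parameter, not a universal constant:

    /* Sum an array, prefetching ahead so later loads hit in the cache.
       The prefetch is nonfaulting, so the out-of-range addresses formed
       near the end of the array are simply ignored by the hardware. */
    double sum(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) {
            __builtin_prefetch(&a[i + 16]);  /* ~16 iterations ahead (assumed) */
            s += a[i];
        }
        return s;
    }

The distance must be large enough to cover memory latency but small enough that prefetched data is not evicted before it is used.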