Lecture 08: Memory Hierarchy Cache Performance

Lecture 08: Memory Hierarchy Cache Performance Kai Bu kaibu@zju.edu.cn http://list.zju.edu.cn/kaibu/comparch2016

Lab 2 Report due May 04

[memory hierarchy figure] processor: data processing & temporary storage; memory: temporary storage; disk: permanent storage; cache: faster temporary storage

Memory Hierarchy

Wait, but what’s cache?

Preview What’s cache? Why does data movement in and out of the cache matter? How can we benefit more from the cache?

Appendix B.1-B.3

So, what’s cache?

Cache The highest, or first, level of the memory hierarchy encountered once the address leaves the processor; it employs buffering to reuse commonly occurring data items

Cache Hit/Miss When the processor can/cannot find a requested data item in the cache

Block/Line A fixed-size collection of data containing the requested word, retrieved from main memory and placed into the cache

Cache Locality Temporal locality the requested word is likely to be needed again soon Spatial locality other data in the same block are likely to be needed soon

Cache Miss The time required for a cache miss depends on: Latency: the time to retrieve the first word of the block Bandwidth: determines the time to retrieve the rest of the block

How does cache performance matter?

Cache Performance: Equations CPU execution time = (CPU clock cycles + Memory stall cycles) x Clock cycle time Assumption: CPU clock cycles include the time to handle a cache hit; the processor is stalled during a cache miss

Cache Miss Metrics Memory stall cycles the number of cycles during which the processor is stalled waiting for a memory access Miss rate the number of misses divided by the number of accesses Miss penalty the cost per miss (the number of extra clock cycles to wait)

Cache Performance: Example a computer with CPI = 1 on cache hits; 50% of instructions are loads and stores; 2 cc per memory access; 2% miss rate, 25 cc miss penalty; Q: how much faster would the computer be if all instructions were cache hits?

Cache Performance: Example Answer always hit: CPU execution time = (IC x CPI + 0) x Clock cycle time = IC x 1.0 x Clock cycle time

Cache Performance: Example Answer with misses: memory accesses per instruction = 1 + 0.5 Memory stall cycles = IC x 1.5 x 2% x 25 = IC x 0.75 CPU execution time_cache = IC x (1.0 + 0.75) x Clock cycle time = IC x 1.75 x Clock cycle time

Cache Performance: Example Answer CPU execution time_cache / CPU execution time = 1.75, so the computer with no cache misses is 1.75x faster
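The speedup arithmetic can be checked with a short Python sketch (an illustration, not from the slides; it assumes 1.5 memory accesses per instruction: one instruction fetch plus the 50% of instructions that are loads/stores):

```python
cpi_hit = 1.0                       # CPI when every access hits
accesses_per_instr = 1.0 + 0.5      # one fetch + 50% loads/stores
miss_rate, miss_penalty = 0.02, 25  # 2% misses, 25 cc each

# extra cycles per instruction spent waiting on misses
stall_cpi = accesses_per_instr * miss_rate * miss_penalty  # 0.75
speedup = (cpi_hit + stall_cpi) / cpi_hit
print(round(speedup, 2))  # 1.75: the all-hit machine is 1.75x faster
```

The instruction count cancels out of the ratio, so it never appears in the computation.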

Hit or Miss: Where to find a block?

Block Placement Before we can find a block, we first need to know how blocks are placed: Direct mapped only one place Fully associative anywhere Set associative anywhere within only one set

Block Placement

Block Placement: Generalized n-way set associative: n blocks per set Direct mapped = one-way set associative i.e., one block per set Fully associative = m-way set associative i.e., the entire cache is one set with m blocks

Block Identification Block address = tag + index Index: selects the set Tag: compared, together with a valid bit, against all blocks in the set Block offset: the address of the desired data within the block Fully associative caches have no index field
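A sketch of how the three fields fall out of a byte address (the cache geometry here, 64-byte blocks and 128 sets, is hypothetical and chosen only for illustration):

```python
def split_address(addr, block_size, num_sets):
    """Decompose a byte address into (tag, index, offset).
    block_size and num_sets are assumed to be powers of two."""
    offset = addr % block_size                # position within the block
    index = (addr // block_size) % num_sets   # which set to look in
    tag = addr // (block_size * num_sets)     # compared against stored tags
    return tag, index, offset

print(split_address(0x1234, 64, 128))  # (0, 72, 52)
```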

Block Replacement upon a cache miss, which cache block should be replaced to make room for the loaded data? Direct-mapped placement only one block can be replaced, so no choice is needed

Block Replacement Fully/set associative Random simple to build LRU: Least Recently Used replace the block that has been unused for the longest time; exploits temporal locality; complicated/expensive to track exactly; FIFO: first in, first out
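A minimal sketch of LRU bookkeeping for one set, using Python's OrderedDict (illustrative only; real hardware typically approximates LRU with cheaper schemes such as pseudo-LRU bits):

```python
from collections import OrderedDict

class LRUSet:
    """One set of an n-way set-associative cache with LRU replacement."""
    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()  # tag -> data; least recently used first

    def access(self, tag):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)       # mark most recently used
            return True                        # hit
        if len(self.blocks) >= self.ways:
            self.blocks.popitem(last=False)    # evict least recently used
        self.blocks[tag] = None                # allocate the new block
        return False                           # miss

s = LRUSet(ways=2)
print([s.access(t) for t in (1, 2, 1, 3, 2)])
# [False, False, True, False, False]: tag 2 was the LRU victim when 3 arrived
```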

Write Strategy Write-through data is written to both the block in the cache and the block in the lower-level memory Write-back data is written only to the block in the cache; it reaches main memory only when the modified cache block is replaced;

Write Strategy Options on a write miss Write allocate the block is loaded into the cache on a write miss No-write allocate a write miss does not affect the cache; the block is modified only in lower-level memory; it stays out of the cache until the program tries to read it;

Write Strategy: Example assume a fully associative write-back cache, initially empty, and the sequence: Write Mem[100]; Write Mem[100]; Read Mem[200]; Write Mem[200]; Write Mem[100]; Q: how many hits and misses with no-write allocate vs write allocate?

Write Strategy: Example No-write allocate: 4 misses + 1 hit the writes to [100] miss because write misses do not affect the cache; Read [200] misses and allocates the block, so the following Write [200] hits; the final Write [100] misses again; Write allocate: 2 misses + 3 hits the first accesses to [100] and [200] miss; every later access hits;
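The hit/miss counts can be reproduced with a toy simulation (a sketch: the access sequence below is the classic Appendix B one, two writes to [100], a read then write of [200], then a write to [100], and the cache is assumed large enough that the two addresses never conflict, consistent with the counts on the slide):

```python
def count_hits(accesses, write_allocate):
    cache = set()   # addresses of the blocks currently cached
    hits = misses = 0
    for op, addr in accesses:
        if addr in cache:
            hits += 1
        else:
            misses += 1
            # reads always allocate; writes allocate only under write allocate
            if op == "read" or write_allocate:
                cache.add(addr)
    return hits, misses

seq = [("write", 100), ("write", 100), ("read", 200),
       ("write", 200), ("write", 100)]
print(count_hits(seq, write_allocate=False))  # (1, 4)
print(count_hits(seq, write_allocate=True))   # (3, 2)
```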

Hit or Miss: How long will it take?

Avg Mem Access Time Average memory access time =Hit time + Miss rate x Miss penalty
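The formula is a one-liner in Python (the numbers in the example call are made up, purely for illustration):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, everything in clock cycles."""
    return hit_time + miss_rate * miss_penalty

# e.g., 1 cc hit, 2% miss rate, 100 cc miss penalty (hypothetical values)
print(round(amat(1, 0.02, 100), 1))  # 3.0
```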

Avg Mem Access Time Example 16KB instr cache + 16KB data cache vs a 32KB unified cache; 36% of instructions are data transfers; (a load/store takes 1 extra cc on the unified cache, since its single port must serve the instruction fetch and the data access in turn) 1 cc hit; 200 cc miss penalty; Q1: does the split cache or the unified cache have the lower miss rate? Q2: average memory access time?

Example: miss per 1000 instructions

Avg Mem Access Time Q1 Overall miss rate

Avg Mem Access Time Q2

Cache vs Processor Processor Performance Lower avg memory access time may correspond to higher CPU time (Example on Page B.19)

Out-of-Order Execution in out-of-order execution, stalls apply only to instructions that depend on the incomplete result; other instructions can continue, so the effective average miss penalty is smaller

How to optimize cache performance?

Average Memory Access Time = Hit Time + Miss Rate x Miss Penalty Larger block size; Larger cache size; Higher associativity;

Reducing Miss Rate 3 categories of misses / root causes Compulsory cold-start/first-reference misses; the very first access to a block can never hit Capacity the cache cannot hold all the blocks the program needs; blocks are discarded and later retrieved Conflict collision misses due to limited associativity; a block is discarded and later retrieved because too many blocks map to its set

Opt #1: Larger Block Size Reduces compulsory misses Leverages spatial locality Increases conflict/capacity misses Fewer blocks fit in the cache

Example given the above miss rates; assume memory takes 80 cc of overhead, then delivers 16 bytes every 2 cc; Q: which block size has the smallest average memory access time for each cache size?

Answer avg mem access time =hit time + miss rate x miss penalty *assume 1-cc hit time for a 256-byte block in a 256 KB cache: = 1 + 0.49% x (80 + 2x256/16) = 1 + 0.49% x 112 = 1.549 cc

Answer average memory access time
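The per-configuration arithmetic can be sketched in Python using the example's memory assumptions (80 cc overhead plus 2 cc per 16 bytes delivered); the 0.49% miss rate is the slide's figure for a 256-byte block in a 256 KB cache:

```python
def miss_penalty(block_size_bytes):
    # 80 cc overhead + 2 cc for each 16 bytes of the block delivered
    return 80 + 2 * block_size_bytes // 16

def amat(hit_time, miss_rate, penalty):
    return hit_time + miss_rate * penalty

print(miss_penalty(256))                             # 112 cc
print(round(amat(1, 0.0049, miss_penalty(256)), 3))  # 1.549 cc
```

Repeating this over every (cache size, block size) pair, with the corresponding miss rates, reproduces the answer table.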

Opt #2: Larger Cache Reduce capacity misses Increase hit time, cost, and power

Opt #3: Higher Associativity Reduce conflict misses Increase hit time

Example assume higher associativity -> higher clock cycle time: assume 1-cc hit time, 25-cc miss penalty, and miss rates in the following table;

Miss rates

Question: for which cache sizes is each of the statements true?

Answer for a 512 KB, 8-way set associative cache: avg mem access time =hit time + miss rate x miss penalty =1.52x1 + 0.006 x 25 =1.67

Answer average memory access time

Average Memory Access Time = Hit Time + Miss Rate x Miss Penalty Multilevel caches; Prioritize reads over writes;

Opt #4: Multilevel Cache Reduces miss penalty Motivation a faster/smaller cache to keep pace with the speed of processors? or a larger cache to overcome the widening gap between processor and main memory? Answer: both, by adding levels of cache

Opt #4: Multilevel Cache Two-level cache Add another level of cache between the original cache and memory L1: small enough to match the clock cycle time of the fast processor; L2: large enough to capture many accesses that would go to main memory, lessening miss penalty

Opt #4: Multilevel Cache Average memory access time =Hit time_L1 + Miss rate_L1 x Miss penalty_L1 =Hit time_L1 + Miss rate_L1 x (Hit time_L2 + Miss rate_L2 x Miss penalty_L2) Average mem stalls per instruction =Misses per instruction_L1 x Hit time_L2 + Misses per instruction_L2 x Miss penalty_L2

Opt #4: Multilevel Cache Local miss rate the number of misses in a cache divided by the total number of memory accesses to this cache; Miss rate_L1, Miss rate_L2 Global miss rate the number of misses in the cache divided by the number of memory accesses generated by the processor; Miss rate_L1, Miss rate_L1 x Miss rate_L2

Example 1000 mem references -> 40 misses in L1 and 20 misses in L2; miss penalty from L2 is 200 cc; hit time of L2 is 10 cc; hit time of L1 is 1 cc; 1.5 mem references per instruction; Q: 1. various miss rates? 2. avg mem access time? 3. avg stall cycles per instruction?

Answer 1. various miss rates? L1: local = global 40/1000 = 4% L2: local: 20/40 = 50% global: 20/1000 = 2%

Answer 2. avg mem access time? average memory access time =Hit time_L1 + Miss rate_L1 x (Hit time_L2 + Miss rate_L2 x Miss penalty_L2) =1 + 4% x (10 + 50% x 200) =5.4

Answer 3. avg stall cycles per instruction? average stall cycles per instruction =Misses per instruction_L1 x Hit time_L2 + Misses per instruction_L2 x Miss penalty_L2 =(1.5x40/1000)x10 + (1.5x20/1000)x200 = 0.6 + 6.0 = 6.6
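The whole two-level example fits in a few lines of Python (same numbers as the slides: 40 L1 misses and 20 L2 misses per 1000 references, 1/10 cc hit times, 200 cc L2 miss penalty, 1.5 references per instruction):

```python
def multilevel(refs, miss_l1, miss_l2, hit_l1, hit_l2, pen_l2, refs_per_instr):
    mr_l1 = miss_l1 / refs           # L1 local = global miss rate
    mr_l2_local = miss_l2 / miss_l1  # L2 local miss rate
    amat = hit_l1 + mr_l1 * (hit_l2 + mr_l2_local * pen_l2)
    stalls_per_instr = (refs_per_instr * miss_l1 / refs) * hit_l2 \
                     + (refs_per_instr * miss_l2 / refs) * pen_l2
    return amat, stalls_per_instr

amat, stalls = multilevel(1000, 40, 20, 1, 10, 200, 1.5)
print(round(amat, 1), round(stalls, 1))  # 5.4 6.6
```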

Opt #5: Prioritize read misses over writes Reduces miss penalty instead of simply stalling a read miss until the write buffer empties, check the contents of the write buffer; let the read miss continue if it conflicts with nothing in the write buffer and the memory system is available

Opt #5: Prioritize read misses over writes Why consider the code sequence SW R3, 512(R0); LW R1, 1024(R0); LW R2, 512(R0); assume a direct-mapped, write-through cache that maps 512 and 1024 to the same block, and a four-word write buffer that is not checked on a read miss. R2.value ≡ R3.value? not necessarily: the write of R3 may still sit in the write buffer when LW R2 misses and reads the stale M[512] from memory

Average Memory Access Time = Hit Time + Miss Rate x Miss Penalty Avoid address translation during indexing of the cache

Opt #6: Avoid address translation during indexing cache Cache addressing virtual address – virtual cache physical address – physical cache The processor/program uses virtual addresses: Processor -> address translation -> Cache so should the cache be virtual or physical?

Opt #6: Avoid address translation during indexing cache Virtually indexed, physically tagged use the page offset to index the cache; use the physical address for the tag match; a direct-mapped cache can then be no bigger than the page size. Reference: CPU Cache http://zh.wikipedia.org/wiki/CPU%E9%AB%98%E9%80%9F%E7%BC%93%E5%AD%98
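The size limit follows from the field widths: all index + offset bits must lie within the page offset, so cache size ≤ page size x associativity. A sketch of that constraint (assuming 4 KB pages, a hypothetical but common page size):

```python
def max_vipt_cache_size(page_size_bytes, associativity):
    """Largest virtually indexed, physically tagged cache whose index
    bits all fall inside the page offset."""
    return page_size_bytes * associativity

print(max_vipt_cache_size(4096, 1))  # 4096: direct-mapped capped at one page
print(max_vipt_cache_size(4096, 8))  # 32768: an 8-way cache can reach 32 KB
```

This is why raising associativity is one way to grow a VIPT cache without waiting for address translation.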

?

ACE YOUR EXAMS!

#What’s More The Power of Introverts by Susan Cain