Cache Memory and Performance


Many of the following slides are taken with permission from the Complete PowerPoint Lecture Notes for Computer Systems: A Programmer's Perspective (CS:APP), by Randal E. Bryant and David R. O'Hallaron: http://csapp.cs.cmu.edu/public/lectures.html

The book is used explicitly in CS 2505 and CS 3214, and as a reference in CS 2506.

Cache Memories

Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory: the CPU looks first for data in the caches (e.g., L1, L2, and L3), then in main memory.

Typical system structure: the CPU chip contains the register file, ALU, and cache memories; its bus interface connects via the system bus and I/O bridge to the memory bus and main memory.

General Cache Organization (S, E, B)

A cache is an array of S = 2^s sets. Each set contains E = 2^e lines (blocks). Each line holds a valid bit, a tag, and a cache block of B = 2^b bytes (the data).

Cache size: C = S x E x B data bytes

Cache Organization Types

The "geometry" of the cache is defined by:
- S = 2^s, the number of sets in the cache
- E = 2^e, the number of lines (blocks) in a set
- B = 2^b, the number of bytes in a line (block)

E = 1 (e = 0): direct-mapped cache
- only one possible location in the cache for each DRAM block

S > 1, E = K > 1: K-way set-associative cache
- K possible locations (in the same cache set) for each DRAM block

S = 1 (only one set): fully-associative cache
- E = # of cache blocks; each DRAM block can be at any location in the cache

(A sketch of how an address maps onto this geometry follows below.)
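To make the geometry concrete, here is a minimal C sketch that splits an address into the tag, set-index, and block-offset fields implied by (S, E, B). The constants (s = 6 set bits, b = 6 offset bits) and the address in main are assumptions chosen for illustration, not values from the slides.

#include <stdint.h>
#include <stdio.h>

/* Assumed geometry for illustration: S = 64 sets (s = 6),
   B = 64-byte blocks (b = 6). */
#define S_BITS 6
#define B_BITS 6

/* Split an address into (tag, set index, block offset). */
static void decompose(uint64_t addr,
                      uint64_t *tag, uint64_t *set, uint64_t *off)
{
    *off = addr & ((1ULL << B_BITS) - 1);              /* low b bits     */
    *set = (addr >> B_BITS) & ((1ULL << S_BITS) - 1);  /* next s bits    */
    *tag = addr >> (B_BITS + S_BITS);                  /* remaining bits */
}

int main(void)
{
    uint64_t tag, set, off;
    decompose(0x7ffc1234, &tag, &set, &off);
    printf("tag=%#llx set=%llu offset=%llu\n",
           (unsigned long long)tag,
           (unsigned long long)set,
           (unsigned long long)off);
    return 0;
}

The same decomposition applies to all three organization types; what changes is only how many candidate lines must be searched within the selected set (1, K, or E).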

Cache Performance Metrics

miss rate: fraction of memory references not found in the cache; (# misses / # accesses) = 1 - hit rate
- Typical miss rates: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on cache size and locality

hit time: time to deliver a line in the cache to the processor; includes the time to determine whether the line is in the cache
- Typical times: 1-2 clock cycles for L1; 5-20 clock cycles for L2

miss penalty: additional time required for a data access because of a cache miss
- Typically 50-200 cycles for main memory; the trend is toward an increasing number of cycles… why?

Calculating Average Access Time

Let's say that we have two levels of cache, backed by DRAM:
- L1 cache costs 1 cycle to access and has a miss rate of 10%
- L2 cache costs 10 cycles to access and has a miss rate of 2%
- DRAM costs 80 cycles to access (and has a miss rate of 0%)

Then the average memory access time (AMAT) would be:

  1                  always access the L1 cache
+ 0.10 * 10          probability of a miss in L1 * time to access L2
+ 0.10 * 0.02 * 80   probability of a miss in L1 * probability of a miss in L2 * time to access DRAM
= 2.16 cycles
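The same arithmetic, as a small C function. The parameters mirror the slide's numbers; this is just a restatement of the AMAT expansion above, not a general-purpose model.

#include <stdio.h>

/* Two-level AMAT: every access pays L1; L1 misses also pay L2;
   accesses that miss in both L1 and L2 also pay DRAM. */
static double amat(double l1_time, double l1_miss_rate,
                   double l2_time, double l2_miss_rate,
                   double dram_time)
{
    return l1_time
         + l1_miss_rate * l2_time
         + l1_miss_rate * l2_miss_rate * dram_time;
}

int main(void)
{
    /* The slide's numbers: prints 2.16 */
    printf("AMAT = %.2f cycles\n", amat(1.0, 0.10, 10.0, 0.02, 80.0));
    return 0;
}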

Let's see that again…

Same two-level hierarchy (CPU → L1 → L2 → DRAM), with the access costs and miss rates above. On a memory access:

- L1: we go here 100% of the time; 1 cycle to access; average penalty 1 cycle
- L2: we go here 10% of the time; 10 cycles to access; average penalty 1 cycle
- DRAM: we go here 0.2% of the time; 80 cycles to access; average penalty 0.16 cycles

Let's think about those numbers

There can be a huge difference between the cost of a hit and a miss: it could be 100x with just an L1 and main memory.

Would you believe 99% hits is twice as good as 97%? Consider:
- L1 cache hit time of 1 cycle
- L1 miss penalty of 100 cycles (to DRAM)

Average access time:
- 97% L1 hits: 1 cycle + 0.03 * 100 cycles = 4 cycles
- 99% L1 hits: 1 cycle + 0.01 * 100 cycles = 2 cycles

Measuring Cache Performance

Components of CPU time:
- Program execution cycles (includes cache hit time)
- Memory stall cycles (mainly from cache misses)

With simplifying assumptions:

CPU time = (CPU execution cycles + memory stall cycles) x clock cycle time

memory stall cycles = (memory accesses / program) x miss rate x miss penalty
                    = (instructions / program) x (misses / instruction) x miss penalty

Cache Performance Example

Given:
- Instruction-cache miss rate = 2%
- Data-cache miss rate = 4%
- Miss penalty = 100 cycles
- Base CPI (with an ideal cache) = 2
- Loads & stores are 36% of instructions

Miss cycles per instruction:
- Instruction cache: 0.02 × 100 = 2
- Data cache: 0.36 × 0.04 × 100 = 1.44

Actual CPI = 2 + 2 + 1.44 = 5.44
- With an ideal cache, execution would be 5.44/2 = 2.72 times faster
- We spend 3.44/5.44 = 63% of our execution time on memory stalls!

Reduce Ideal CPI

What if we improved the datapath so that the base (ideal) CPI was reduced?
- Instruction-cache miss rate = 2%
- Data-cache miss rate = 4%
- Miss penalty = 100 cycles
- Base CPI (with an ideal cache) = 1.5
- Loads & stores are 36% of instructions

Miss cycles per instruction are still the same as before.

Actual CPI = 1.5 + 2 + 1.44 = 4.94
- With an ideal cache, execution would be 4.94/1.5 = 3.29 times faster
- We now spend 3.44/4.94 = 70% of our execution time on memory stalls!

(Both examples are reproduced by the sketch below.)
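The two examples above differ only in the base CPI, so one small C sketch reproduces both. The miss rates, miss penalty, and load/store fraction are the slides' numbers.

#include <stdio.h>

/* Actual CPI = base CPI + memory stall cycles per instruction, where
   stall cycles per instruction are:
     I-cache: miss rate x penalty
     D-cache: (load/store fraction) x miss rate x penalty */
static double actual_cpi(double base_cpi)
{
    const double i_miss   = 0.02;   /* instruction-cache miss rate */
    const double d_miss   = 0.04;   /* data-cache miss rate        */
    const double penalty  = 100.0;  /* cycles per miss             */
    const double mem_frac = 0.36;   /* loads & stores              */
    double stalls = i_miss * penalty + mem_frac * d_miss * penalty; /* 3.44 */
    return base_cpi + stalls;
}

int main(void)
{
    printf("base CPI 2.0 -> actual CPI %.2f\n", actual_cpi(2.0)); /* 5.44 */
    printf("base CPI 1.5 -> actual CPI %.2f\n", actual_cpi(1.5)); /* 4.94 */
    return 0;
}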

Performance Summary

As CPU performance increases, the effect of the miss penalty becomes more significant:
- Decreasing the base CPI: a greater proportion of time is spent on memory stalls
- Increasing the clock rate: memory stalls account for more CPU cycles

We can't neglect cache behavior when evaluating system performance.

Multilevel Cache Considerations

Primary (L1) cache:
- Focus on minimal hit time

L2 cache:
- Focus on a low miss rate, to avoid main memory accesses
- Hit time has less overall impact

Results:
- The L1 cache is usually smaller than a single-level cache would be
- The L1 block size is smaller than the L2 block size

Intel Core i7 Cache Hierarchy (2700 series)

- L1 i-cache and d-cache: 32 KB, 8-way; access: 4 cycles
- L2 unified cache: 256 KB, 8-way; access: 11 cycles
- L3 unified cache: 8 MB, 16-way; access: 25-40 cycles
- Block size: 64 bytes for all caches

Within the processor package, each core has its own registers, L1 i-cache and d-cache, and L2 unified cache; the L3 unified cache is shared by all cores and sits in front of main memory.

Intel Core i7 (Sandy Bridge): figure not reproduced in the transcript.

What about writes?

Multiple copies of the data may exist in:
- L1
- L2
- DRAM
- Disk

Remember: each level of the hierarchy is a subset of the one below it.

Suppose we write to a data block that's in L1:
- If we update only the copy in L1, we will have multiple, inconsistent versions!
- If we update all the copies, we'll incur a substantial time penalty!

And what if we write to a data block that's not in L1?

What about writes?

What to do on a write hit?
- Write-through: write immediately to memory
- Write-back: defer the write to memory until the line is replaced; requires a dirty bit (does the cached line differ from memory?)

What about writes?

What to do on a write miss?
- Write-allocate: load the block into the cache, then update the line in the cache
  - Good if more writes to the location follow
- No-write-allocate: write immediately to memory, bypassing the cache

Typical combinations:
- Write-through + no-write-allocate
- Write-back + write-allocate

(A sketch of the write-back + write-allocate bookkeeping follows.)
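To make the write-back + write-allocate combination concrete, here is a minimal C sketch of the metadata bookkeeping for a store in a toy direct-mapped cache. The geometry is hypothetical, the tag is simplified (it keeps the set bits), and data movement is reduced to stub functions; the point is only to show when a line is allocated, marked dirty, and written back.

#include <stdbool.h>
#include <stdint.h>

#define NSETS  64   /* assumed: 64 sets        */
#define B_BITS 6    /* assumed: 64-byte blocks */

struct line { bool valid, dirty; uint64_t tag; };
static struct line cache[NSETS];

/* Stubs standing in for actual data movement. */
static void write_back_block(uint64_t tag, unsigned set) { (void)tag; (void)set; }
static void fetch_block(uint64_t tag, unsigned set)      { (void)tag; (void)set; }

/* A store: on a hit, just mark the line dirty (write-back);
   on a miss, evict (writing back the old line if dirty), fetch
   the block (write-allocate), then perform the write in the cache. */
static void store(uint64_t addr)
{
    unsigned set = (unsigned)((addr >> B_BITS) % NSETS);
    uint64_t tag = addr >> B_BITS;  /* simplified: includes set bits */
    struct line *l = &cache[set];

    if (!(l->valid && l->tag == tag)) {       /* write miss           */
        if (l->valid && l->dirty)
            write_back_block(l->tag, set);    /* flush old dirty line */
        fetch_block(tag, set);                /* write-allocate       */
        l->valid = true;
        l->tag   = tag;
    }
    l->dirty = true;  /* defer the memory update until replacement */
}

int main(void)
{
    store(0x1000);  /* miss: write back if needed, allocate, mark dirty */
    store(0x1008);  /* hit in the same block: line simply stays dirty   */
    return 0;
}

With write-through + no-write-allocate, the miss path would skip the fetch entirely and the hit path would update memory immediately, so no dirty bit is needed.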