Cache Memories.

Outline
- Cache memory organization and operation
- Performance impact of caches
  - The memory mountain
  - Rearranging loops to improve spatial locality
  - Using blocking to improve temporal locality
Suggested reading: 6.4, 6.5, 6.6

Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory, and the CPU looks first for data in the cache. Typical system structure: the CPU chip (register file, ALU, and cache memory) connects through its bus interface to the system bus, and via the I/O bridge and memory bus to main memory.

How It Really Looks
[Photos: a desktop PC, a CPU (Intel Core i7), a motherboard, and main memory (DRAM). Sources: Dell, PC Magazine, techreport.com]

General Cache Organization (S, E, B)
- S = 2^s sets
- E = 2^e lines per set
- B = 2^b bytes per cache block (the data)
Each line consists of a valid bit, a tag, and a B-byte block (bytes 0, 1, ..., B-1).
Cache size: C = S × E × B data bytes.

Cache Read
1. Locate the set using the set-index bits.
2. Check whether any line in the set has a matching tag.
3. If a line matches and its valid bit is set: hit.
4. Locate the data starting at the block offset.
The address of the word divides into t tag bits, s set-index bits, and b block-offset bits; the data begins at that offset within the B = 2^b-byte block.

Example: Direct-Mapped Cache (E = 1)
Direct mapped: one line per set. Assume cache block size B = 8 bytes. For the address of an int (t tag bits, set index 0...01, block offset 100), the set-index bits select one of the S = 2^s sets.

Within the selected set, check the valid bit and compare the address's tag bits against the line's tag. Valid? + match: assume yes (= hit).

On a hit, the block offset (100 = 4) locates the data: the int (4 bytes) occupies bytes 4-7 of the block. If the tag doesn't match (= miss), the old line is evicted and replaced.

Direct-Mapped Cache Simulation
4-bit addresses (address space size M = 16 bytes); S = 4 sets, E = 1 line/set, B = 2 bytes/block; hence t = 1 tag bit, s = 2 set-index bits, b = 1 offset bit.
Address trace (reads, one byte per read):
  0 [0000]  miss
  1 [0001]  hit
  7 [0111]  miss
  8 [1000]  miss
  0 [0000]  miss
Final cache contents: Set 0: v=1, tag=0, block M[0-1] (address 8 loaded M[8-9] here, then the final read of 0 evicted it again); Set 3: v=1, tag=0, block M[6-7]; Sets 1 and 2: empty.
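The simulation is small enough to run in code. Here is a minimal sketch of my own (the names and table-driven style are not from the slides) that replays the same trace through a direct-mapped cache with S = 4, B = 2:

    #include <stdio.h>

    #define S 4                          /* sets */
    #define B 2                          /* bytes per block */

    int main(void)
    {
        int valid[S] = {0}, tag[S] = {0};
        int trace[] = {0, 1, 7, 8, 0};   /* the address trace above */

        for (int i = 0; i < 5; i++) {
            int addr = trace[i];
            int set = (addr / B) % S;    /* s = 2 middle bits */
            int t   = addr / (B * S);    /* t = 1 high bit */
            if (valid[set] && tag[set] == t) {
                printf("addr %d: hit\n", addr);
            } else {
                printf("addr %d: miss\n", addr);
                valid[set] = 1;          /* load (or replace) the block */
                tag[set] = t;
            }
        }
        return 0;
    }

Running it prints miss, hit, miss, miss, miss, matching the trace above.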

A Higher-Level Example
Ignore the variables sum, i, j. Assume a cold (empty) cache; a[0][0] maps to the first set; one 32 B block holds 4 doubles.

int sum_array_rows(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (i = 0; i < 16; i++)
        for (j = 0; j < 16; j++)
            sum += a[i][j];
    return sum;
}

int sum_array_cols(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (j = 0; j < 16; j++)
        for (i = 0; i < 16; i++)
            sum += a[i][j];
    return sum;
}

(Worked on the blackboard in lecture.)
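The slide defers the analysis to the blackboard; the upshot, using the layout rules quantified later in "Layout of C Arrays in Memory", is that sum_array_rows walks memory with stride 1 and misses only on the first double of each 4-double block (miss rate 1/4), while sum_array_cols jumps 128 bytes between accesses, so with the small direct-mapped cache pictured, every access touches a new block (miss rate 1).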

E-way Set-Associative Cache (Here: E = 2)
E = 2: two lines per set. Assume cache block size B = 8 bytes. For the address of a short int (t tag bits, set index 0...01, block offset 100), the set-index bits select one of the S sets; each set holds two lines.

Compare both lines in the selected set. Valid? + tag match: yes (= hit).

On a hit, the block offset locates the data: the short int (2 bytes) occupies bytes 4-5 of the block. No match or not valid (= miss): one line in the set is selected for eviction and replacement. Replacement policies: random, least recently used (LRU), ...

2-Way Set-Associative Cache Simulation
4-bit addresses (M = 16 bytes); S = 2 sets, E = 2 lines/set, B = 2 bytes/block; hence t = 2 tag bits, s = 1 set-index bit, b = 1 offset bit.
Address trace (reads, one byte per read):
  0 [0000]  miss
  1 [0001]  hit
  7 [0111]  miss
  8 [1000]  miss
  0 [0000]  hit
Final cache contents: Set 0: {v=1, tag=00, M[0-1]} and {v=1, tag=10, M[8-9]}; Set 1: {v=1, tag=01, M[6-7]}, second line empty. With two lines per set, addresses 0 and 8 no longer conflict, so the final read of 0 hits.

A Higher-Level Example
The same sum_array_rows and sum_array_cols functions as above (row-order vs. column-order traversal), now analyzed for the 2-way set-associative cache. Again assume a cold (empty) cache, a[0][0] mapping to the first set, and 32 B blocks = 4 doubles. (Worked on the blackboard in lecture.)

What About Writes?
Each cache line now carries a dirty bit d alongside the valid bit v, the tag, and the B = 2^b-byte block. Multiple copies of the data exist: L1, L2, L3, main memory, disk.
What to do on a write-hit?
- Write-through: write immediately to memory.
- Write-back: defer the write to memory until the line is replaced. Each cache line needs a dirty bit (set if the cached data differs from memory).
What to do on a write-miss?
- Write-allocate: load the block into the cache, then update the line in the cache. Good if more writes to the location will follow.
- No-write-allocate: write straight to memory; do not load into the cache.
Typical pairings: write-through + no-write-allocate, or write-back + write-allocate.
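A small sketch of the write-back + write-allocate pairing for a single line; the struct and the flush/fill callbacks are illustrative assumptions of mine, not an API from the slides:

    #include <stddef.h>

    #define BLOCK 8

    typedef struct {
        int valid, dirty;        /* dirty: cached copy differs from memory */
        unsigned long tag;
        unsigned char data[BLOCK];
    } line_t;

    /* flush: write a dirty block back to memory; fill: load a block. */
    void cache_write(line_t *line, unsigned long tag, size_t off,
                     unsigned char byte,
                     void (*flush)(line_t *),
                     void (*fill)(line_t *, unsigned long))
    {
        if (!(line->valid && line->tag == tag)) {   /* write-miss */
            if (line->valid && line->dirty)
                flush(line);    /* write-back: flush evicted dirty block */
            fill(line, tag);    /* write-allocate: fetch the block first */
            line->valid = 1;
            line->tag = tag;
        }
        line->data[off] = byte; /* update the cached copy only */
        line->dirty = 1;        /* memory copy is now stale */
    }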

Why Index Using Middle Bits?
Direct mapped: one line per set; assume cache block size 8 bytes.
Standard method: middle-bit indexing. The set-index bits sit between the tag and the block offset (t bits | 0...01 | 100), and select the set.
Alternative method: high-bit indexing. The set index comes from the high-order address bits (1...11 | 100 selects the set).

Illustration of Indexing Approaches
64-byte memory, 6-bit addresses; a 16-byte direct-mapped cache with block size 4, thus 4 sets. Why 4? 16 bytes / 4 bytes per block = 4 blocks. Address fields: 2 tag bits, 2 index bits, 2 offset bits, so every address has the form xxxxxx with the low two bits selecting a byte within a block.

Middle-Bit Indexing
Addresses have the form TTSSBB: TT tag bits, SS set-index bits, BB offset bits. Consecutive blocks (0000xx, 0001xx, 0010xx, 0011xx, ...) map to sets 0, 1, 2, 3 in rotation, so this makes good use of spatial locality.

High-Bit Indexing
Addresses have the form SSTTBB: SS set-index bits, TT tag bits, BB offset bits. A whole quarter of memory (0000xx through 0011xx) maps to set 0, so a program with high spatial locality would generate lots of conflicts.

Intel Core i7 Cache Hierarchy
Each core (Core 0 ... Core 3) in the processor package has its own registers, L1 d-cache, L1 i-cache, and unified L2 cache; the unified L3 cache is shared by all cores and sits in front of main memory.
- L1 i-cache and d-cache: 32 KB, 8-way; access: 4 cycles
- L2 unified cache: 256 KB, 8-way; access: 10 cycles
- L3 unified cache: 8 MB, 16-way; access: 40-75 cycles
- Block size: 64 bytes for all caches

Example: Core i7 L1 Data Cache
Given: 32 KB, 8-way set associative, 64 bytes/block, 47-bit address range. Fill in: B = ?, S = ?, s = ?, E = ?, e = ?, C = ?. For the stack address 0x00007f7262a1e010: block offset = ? (? bits), set index = ? (? bits), tag = ? (? bits).

Example: Core i7 L1 Data Cache (solution)
B = 64; S = 64, s = 6; E = 8, e = 3; C = 64 × 64 × 8 = 32,768 bytes = 32 KB.
For the stack address 0x00007f7262a1e010 (low 12 bits: 0000 0001 0000): block offset = 0x10 (6 bits), set index = 0x0 (6 bits), tag = 0x7f7262a1e (35 bits).
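The field split is easy to check mechanically; a quick sketch of mine, with masks derived from b = 6 and s = 6:

    #include <stdio.h>

    int main(void)
    {
        unsigned long addr = 0x00007f7262a1e010UL;
        unsigned long off  = addr & 0x3f;         /* low  b = 6 bits */
        unsigned long set  = (addr >> 6) & 0x3f;  /* next s = 6 bits */
        unsigned long tg   = addr >> 12;          /* remaining tag bits */
        printf("offset=0x%lx set=0x%lx tag=0x%lx\n", off, set, tg);
        /* prints: offset=0x10 set=0x0 tag=0x7f7262a1e */
        return 0;
    }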

Cache Performance Metrics
Miss rate: fraction of memory references not found in the cache (misses / accesses) = 1 - hit rate. Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit time: time to deliver a line in the cache to the processor, including the time to determine whether the line is in the cache. Typical numbers: 1-2 clock cycles for L1; 5-20 clock cycles for L2.
Miss penalty: additional time required because of a miss; typically 50-200 cycles for main memory (trend: increasing!).

Let's Think About Those Numbers
There is a huge difference between a hit and a miss: it could be 100x with just L1 and main memory. Would you believe 99% hits is twice as good as 97%? Consider this simplified example: cache hit time of 1 cycle, miss penalty of 100 cycles. Average access time:
- 97% hits: 1 cycle + 0.03 × 100 cycles = 4 cycles
- 99% hits: 1 cycle + 0.01 × 100 cycles = 2 cycles
This is why "miss rate" is used instead of "hit rate".
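The arithmetic above is an instance of the standard average-memory-access-time formula (not spelled out on the slide): T_avg = T_hit + miss_rate × T_penalty. Because T_penalty dwarfs T_hit, the miss term dominates, so cutting the miss rate from 3% to 1% halves the average access time even though the hit rate barely moves.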

Writing Cache-Friendly Code
Make the common case go fast: focus on the inner loops of the core functions. Minimize the misses in the inner loops:
- Repeated references to variables are good (temporal locality).
- Stride-1 reference patterns are good (spatial locality).
Key idea: our qualitative notion of locality is quantified through our understanding of cache memories.

Outline
- Locality of reference
- Cache organization and operation
- Performance impact of caches
  - The memory mountain
  - Rearranging loops to improve spatial locality
  - Using blocking to improve temporal locality

The Memory Mountain
Read throughput (read bandwidth): the number of bytes read from memory per second (MB/s). The memory mountain is the measured read throughput as a function of spatial and temporal locality; it is a compact way to characterize memory system performance.

Memory Mountain Test Function

long data[MAXELEMS];    /* Global array to traverse */

/* test - Iterate over first "elems" elements of
 *        array "data" with stride of "stride",
 *        using 4x4 loop unrolling.
 */
int test(int elems, int stride)
{
    long i, sx2 = stride*2, sx3 = stride*3, sx4 = stride*4;
    long acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
    long length = elems, limit = length - sx4;

    /* Combine 4 elements at a time */
    for (i = 0; i < limit; i += sx4) {
        acc0 = acc0 + data[i];
        acc1 = acc1 + data[i+stride];
        acc2 = acc2 + data[i+sx2];
        acc3 = acc3 + data[i+sx3];
    }

    /* Finish any remaining elements */
    for (; i < length; i++)
        acc0 = acc0 + data[i];

    return ((acc0 + acc1) + (acc2 + acc3));
}

Call test() with many combinations of elems and stride. For each elems and stride: 1. call test() once to warm up the caches; 2. call test() again and measure the read throughput (MB/s). (mountain/mountain.c)
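A rough sketch of such a driver, assuming it is compiled together with test() above. This is my simplification: the real mountain.c harness uses hardware cycle counters rather than clock(), and the bytes-touched estimate is approximate:

    #include <time.h>

    int test(int elems, int stride);   /* the function above */

    /* Read throughput in MB/s for a working set of `size` bytes. */
    double run(int size, int stride)
    {
        int elems = size / sizeof(long);
        test(elems, stride);                     /* 1. warm up the cache */
        clock_t t0 = clock();
        test(elems, stride);                     /* 2. timed run */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        double bytes = (double)size / stride;    /* bytes actually touched */
        return bytes / (1024 * 1024) / secs;     /* coarse; fine for large sizes */
    }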

The Memory Mountain
[Figure: read-throughput surface measured on a Core i7 Haswell, 2.1 GHz (32 KB L1 d-cache, 256 KB L2 cache, 8 MB L3 cache, 64 B block size). Ridges of temporal locality mark working-set sizes that fit in L1, L2, L3, and main memory; slopes of spatial locality show throughput falling as stride grows; aggressive prefetching keeps small-stride throughput high.]

Outline
- Locality of reference
- Cache organization and operation
- Performance impact of caches
  - The memory mountain
  - Rearranging loops to improve spatial locality
  - Using blocking to improve temporal locality

Matrix Multiplication Example
Description: multiply N × N matrices; matrix elements are doubles (8 bytes); O(N^3) total operations; N reads per source element; N values summed per destination, but the sum may be held in a register.

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

(matmult/mm.c; variable sum held in a register)

Miss Rate Analysis for Matrix Multiply
Assume: block size = 32 B (big enough for four doubles); matrix dimension N is very large (approximate 1/N as 0.0); the cache is not even big enough to hold multiple rows.
Analysis method: look at the access pattern of the inner loop. [Diagram: C(i,j) = row i of A times column j of B.]

Layout of C Arrays in Memory (review)
C arrays are allocated in row-major order: each row in contiguous memory locations.
Stepping through columns in one row:
    for (i = 0; i < N; i++)
        sum += a[0][i];
accesses successive elements; if block size B > sizeof(a_ij) bytes, this exploits spatial locality: miss rate = sizeof(a_ij) / B.
Stepping through rows in one column:
    for (i = 0; i < n; i++)
        sum += a[i][0];
accesses distant elements: no spatial locality! Miss rate = 1 (i.e., 100%).
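Row-major order means &a[i][j] sits (i × NCOLS + j) elements past the start of the array; a two-line check (my example, not the slides'):

    #include <stdio.h>

    int main(void)
    {
        double a[3][4];
        /* &a[i][j] == &a[0][0] + (i*4 + j) in row-major order */
        printf("%d\n", &a[1][2] == &a[0][0] + (1*4 + 2));  /* prints 1 */
        return 0;
    }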

Matrix Multiplication (ijk)

for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise; B (*,j) column-wise; C (i,j) fixed.
Misses per inner-loop iteration (block size = 32 B, four doubles): A = 0.25, B = 1.0, C = 0.0. (matmult/mm.c)

Matrix Multiplication (kij)

for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed; B (k,*) row-wise; C (i,*) row-wise.
Misses per inner-loop iteration (block size = 32 B, four doubles): A = 0.0, B = 0.25, C = 0.25. (matmult/mm.c)

Matrix Multiplication (jki)

/* jki */
for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A (*,k) column-wise; B (k,j) fixed; C (*,j) column-wise.
Misses per inner-loop iteration (block size = 32 B, four doubles): A = 1.0, B = 0.0, C = 1.0. (matmult/mm.c)

Summary of Matrix Multiplication

ijk (and jik): 2 loads, 0 stores; avg misses/iter = 1.25
    for (i=0; i<n; i++) {
        for (j=0; j<n; j++) {
            sum = 0.0;
            for (k=0; k<n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

kij (and ikj): 2 loads, 1 store; avg misses/iter = 0.5
    for (k=0; k<n; k++) {
        for (i=0; i<n; i++) {
            r = a[i][k];
            for (j=0; j<n; j++)
                c[i][j] += r * b[k][j];
        }
    }

jki (and kji): 2 loads, 1 store; avg misses/iter = 2.0
    for (j=0; j<n; j++) {
        for (k=0; k<n; k++) {
            r = b[k][j];
            for (i=0; i<n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Core i7 Matrix Multiply Performance
[Graph: cycles per inner-loop iteration vs. matrix size. The variants cluster by miss behavior: jki/kji slowest (2.0 misses/iter), ijk/jik in the middle (1.25), kij/ikj fastest (0.5).]
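The ranking is easy to reproduce yourself. Below is a self-contained harness of my own (not the course's matmult driver) that times the worst (jki) and best (kij) variants; compile with modest optimization, e.g. gcc -O1, so vectorization doesn't mask the cache effect:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 512

    static double a[N][N], b[N][N], c[N][N];

    static void mm_jki(void)             /* 2.0 misses/iter: worst */
    {
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++) {
                double r = b[k][j];
                for (int i = 0; i < N; i++)
                    c[i][j] += a[i][k] * r;
            }
    }

    static void mm_kij(void)             /* 0.5 misses/iter: best */
    {
        for (int k = 0; k < N; k++)
            for (int i = 0; i < N; i++) {
                double r = a[i][k];
                for (int j = 0; j < N; j++)
                    c[i][j] += r * b[k][j];
            }
    }

    static double time_run(void (*mm)(void))
    {
        for (int i = 0; i < N; i++)      /* reset the product matrix */
            for (int j = 0; j < N; j++)
                c[i][j] = 0.0;
        clock_t t0 = clock();
        mm();
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                a[i][j] = (double)rand() / RAND_MAX;
                b[i][j] = (double)rand() / RAND_MAX;
            }
        printf("jki: %.3f s\n", time_run(mm_jki));
        printf("kij: %.3f s\n", time_run(mm_kij));
        return 0;
    }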

Today
- Locality of reference
- Cache organization and operation
- Performance impact of caches
  - The memory mountain
  - Rearranging loops to improve spatial locality
  - Using blocking to improve temporal locality

Example: Matrix Multiplication

c = (double *) calloc(sizeof(double), n*n);

/* Multiply n x n matrices a and b */
void mmm(double *a, double *b, double *c, int n)
{
    int i, j, k;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            for (k = 0; k < n; k++)
                c[i*n + j] += a[i*n + k] * b[k*n + j];
}

[Diagram: c = a × b; element (i,j) of c comes from row i of a and column j of b.]

Cache Miss Analysis
Assume: matrix elements are doubles; cache block = 8 doubles; cache size C << n (much smaller than n).
First iteration (computing one element of c): n/8 misses on the stride-1 row of a, plus n misses on the stride-n column of b, so n/8 + n = 9n/8 misses. Afterwards in cache (schematic): the row of a and an 8-wide sliver of b.

Cache Miss Analysis (continued)
Same assumptions. Second iteration: again n/8 + n = 9n/8 misses. Total misses: 9n/8 × n² = (9/8)n³.

Blocked Matrix Multiplication

c = (double *) calloc(sizeof(double), n*n);

/* Multiply n x n matrices a and b */
void mmm(double *a, double *b, double *c, int n)
{
    int i, j, k, i1, j1, k1;
    for (i = 0; i < n; i += B)
        for (j = 0; j < n; j += B)
            for (k = 0; k < n; k += B)
                /* B x B mini matrix multiplications */
                for (i1 = i; i1 < i+B; i1++)
                    for (j1 = j; j1 < j+B; j1++)
                        for (k1 = k; k1 < k+B; k1++)
                            c[i1*n + j1] += a[i1*n + k1] * b[k1*n + j1];
}

(matmult/bmm.c) [Diagram: block (i1,j1) of c accumulates the product of a block row of a and a block column of b; block size B × B.]

Cache Miss Analysis (blocked)
Assume: cache block = 8 doubles; cache size C << n (much smaller than n); three B × B blocks fit into the cache: 3B² < C.
First (block) iteration: B²/8 misses for each block; 2n/B blocks of a and b are touched, so 2n/B × B²/8 = nB/4 misses (omitting matrix c). Afterwards in cache (schematic): one block row of a and one block column of b, n/B blocks each.

Cache Miss Analysis (blocked, continued)
Same assumptions. Second (block) iteration: same as the first, 2n/B × B²/8 = nB/4 misses. Total misses: nB/4 × (n/B)² = n³/(4B).

Blocking Summary
No blocking: (9/8)n³ misses. Blocking: (1/(4B))n³ misses. Use the largest block size B such that 3B² < C.
Reason for the dramatic difference: matrix multiplication has inherent temporal locality. The input data is 3n² elements, but the computation is 2n³ operations, so every array element is used O(n) times! But the program has to be written properly to exploit this.
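To make that concrete (my numbers, reusing the 32 KB L1 from earlier, i.e. C = 4096 doubles): 3B² < 4096 gives B ≤ 36, so B = 32 is a natural choice, and the miss count drops by a factor of (9/8)n³ ÷ (n³/(4 × 32)) = (9/8) × 128 = 144.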

Summary
Cache memories can have a significant performance impact, and the programmer can optimize for cache performance:
- how data structures are organized;
- how data are accessed: nested loop structure, and blocking as a general technique.
All systems favor "cache-friendly code". Getting absolute optimum performance is very platform-specific (cache sizes, line sizes, associativities, etc.), but you can get most of the advantage with generic code: keep the working set reasonably small (temporal locality) and use small strides (spatial locality).