Code and Caches 1: Cache Memory and Performance

Computer Organization II, © 2005-2013 CS:APP & McQuain

Many of the following slides are taken with permission from Complete PowerPoint Lecture Notes for Computer Systems: A Programmer's Perspective (CS:APP) by Randal E. Bryant and David R. O'Hallaron. The book is used explicitly in CS 2505 and CS 3214 and as a reference in CS 2506.

Code and Caches 2: Locality Example (1)

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;
    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Claim: Being able to look at code and get a qualitative sense of its locality is a key skill for a professional programmer.

Question: Which of these functions has good locality?

Code and Caches 3: Layout of C Arrays in Memory

C arrays are allocated in contiguous memory locations, with addresses ascending with the array index:

int32_t A[20] = {0, 1, 2, 3, 4, ..., 8, 9};

[Memory diagram: the elements of A in consecutive 4-byte cells at ascending addresses.]

Code and Caches 4: Layout of C Arrays in Memory

Two-dimensional C arrays are allocated in row-major order: each row occupies contiguous memory locations:

int32_t A[3][5] = { {  0,  1,  2,  3,  4},
                    { 10, 11, 12, 13, 14},
                    { 20, 21, 22, 23, 24} };

[Memory diagram: the three rows stored back to back, so A[1][0] immediately follows A[0][4].]
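To make the row-major rule concrete, here is a minimal sketch (our own illustration, not from the slides) that asserts &A[i][j] lies exactly (i*5 + j) elements past the base of the array:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    int32_t A[3][5];
    char *base = (char *) A;                 /* address of A[0][0] */

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 5; j++)
            /* row-major: skip i full rows of 5 elements, then j more */
            assert((char *) &A[i][j] ==
                   base + (i * 5 + j) * sizeof(int32_t));
    return 0;
}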

Code and Caches 5: Layout of C Arrays in Memory

int32_t A[3][5] = { {  0,  1,  2,  3,  4},
                    { 10, 11, 12, 13, 14},
                    { 20, 21, 22, 23, 24} };

Stepping through columns in one row:

for (i = 0; i < 3; i++)
    for (j = 0; j < 5; j++)
        sum += A[i][j];

- accesses successive elements in memory
- if cache block size B > 4 bytes, exploits spatial locality: compulsory miss rate = 4 bytes / B (for example, 4/64 = 6.25% with 64-byte blocks)

[Diagram: the traversal sweeps row i = 0, then i = 1, then i = 2, through consecutive addresses.]

Code and Caches 6: Layout of C Arrays in Memory

int32_t A[3][5] = { {  0,  1,  2,  3,  4},
                    { 10, 11, 12, 13, 14},
                    { 20, 21, 22, 23, 24} };

Stepping through rows in one column:

for (j = 0; j < 5; j++)
    for (i = 0; i < 3; i++)
        sum += A[i][j];

- accesses distant elements: no spatial locality!
- compulsory miss rate = 1 (i.e. 100%)

[Diagram: the traversal jumps down column j = 0, then column j = 1, touching addresses 20 bytes apart.]

Code and Caches 7: Writing Cache Friendly Code

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Repeated references to variables are good (temporal locality). Stride-1 reference patterns are good (spatial locality).

Assume an initially-empty cache with 16-byte cache blocks; each block holds four ints. With N = 5 as in the earlier example, the first block covers the accesses from i = 0, j = 0 to i = 0, j = 3, the next covers i = 0, j = 4 to i = 1, j = 2, and so on: one miss per four accesses.

Miss rate = 1/4 = 25%
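One way to check the 25% figure is to count block boundaries during the scan. Here is a small sketch (our own; it assumes the array is block-aligned, models only compulsory misses on a stride-1 scan, and picks dimensions so the array is a whole number of blocks):

#include <stdio.h>

#define M 4
#define N 8
#define BLOCK 16    /* bytes per cache block (assumed) */

int main(void)
{
    int a[M][N];
    long misses = 0, accesses = 0, last_block = -1;

    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++) {
            /* block number relative to the start of the array */
            long blk = ((char *) &a[i][j] - (char *) a) / BLOCK;
            if (blk != last_block) { misses++; last_block = blk; }
            accesses++;
        }
    printf("%ld misses / %ld accesses = %.0f%%\n",
           misses, accesses, 100.0 * misses / accesses);   /* prints 25% */
    return 0;
}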

Code and Caches 8: Writing Cache Friendly Code

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;
    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

"Skipping" accesses down the rows of a column do not provide good locality:

Miss rate = 100% (that's actually somewhat pessimistic... depending on cache geometry)

Code and Caches 9: Locality Example (2)

int sumarray3d(int a[M][N][N])
{
    int i, j, k, sum = 0;
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                sum += a[k][i][j];
    return sum;
}

Question: Can you permute the loops so that the function scans the 3D array a[] with a stride-1 reference pattern (and thus has good spatial locality)?

Code and Caches 10: Layout of C Arrays in Memory

It's easy to write an array traversal and see the addresses at which the array elements are stored:

int A[5] = {0, 1, 2, 3, 4};

for (i = 0; i < 5; i++)
    printf("%d: %X\n", i, (unsigned)&A[i]);

i  address
0: 28ABE0
1: 28ABE4
2: 28ABE8
3: 28ABEC
4: 28ABF0

We see that for a 1D array, the index varies in a stride-1 pattern. Stride-1: addresses differ by the size of an array cell (4 bytes, here).

Code and Caches 11: Layout of C Arrays in Memory

int B[3][5] = { ... };

for (i = 0; i < 3; i++)
    for (j = 0; j < 5; j++)
        printf("%d %3d: %X\n", i, j, (unsigned)&B[i][j]);

i-j order:
i j  address
0 0: 28ABA4
0 1: 28ABA8
0 2: 28ABAC
0 3: 28ABB0
0 4: 28ABB4
1 0: 28ABB8
1 1: 28ABBC
1 2: 28ABC0

We see that for a 2D array, the second index varies in a stride-1 pattern.

j-i order:
i j  address
0 0: 28CC9C
1 0: 28CCB0
2 0: 28CCC4
0 1: 28CCA0
1 1: 28CCB4
2 1: 28CCC8
0 2: 28CCA4
1 2: 28CCB8

But the first index does not vary in a stride-1 pattern: it is stride-5 (0x14 bytes / 4 bytes per int), while the second index remains stride-1.

Code and Caches 12: Layout of C Arrays in Memory

int C[2][3][5] = { ... };

for (i = 0; i < 2; i++)
    for (j = 0; j < 3; j++)
        for (k = 0; k < 5; k++)
            printf("%3d %3d %3d: %X\n", i, j, k, (unsigned)&C[i][j][k]);

i-j-k order:
i j k  address
0 0 0: 28CC1C
0 0 1: 28CC20
0 0 2: 28CC24
0 0 3: 28CC28
0 0 4: 28CC2C
0 1 0: 28CC30
0 1 1: 28CC34
0 1 2: 28CC38

We see that for a 3D array, the third index varies in a stride-1 pattern. But if we change the order of access, we no longer have a stride-1 pattern:

k-j-i order:
i j k  address
0 0 0: 28CC24
1 0 0: 28CC60
0 1 0: 28CC38
1 1 0: 28CC74
0 2 0: 28CC4C
1 2 0: 28CC88
0 0 1: 28CC28
1 0 1: 28CC64

i-k-j order:
i j k  address
0 0 0: 28CC24
0 1 0: 28CC38
0 2 0: 28CC4C
0 0 1: 28CC28
0 1 1: 28CC3C
0 2 1: 28CC50
0 0 2: 28CC2C
0 1 2: 28CC40

Code and Caches 13: Locality Example (2)

int sumarray3d(int a[M][N][N])
{
    int i, j, k, sum = 0;
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                sum += a[k][i][j];
    return sum;
}

Question: Can you permute the loops so that the function scans the 3D array a[] with a stride-1 reference pattern (and thus has good spatial locality)?

This code does not yield good locality at all: the inner loop is varying the first index, the worst case!
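For reference, one permutation that achieves the stride-1 scan is to nest the loops in the same order as the indices appear in a[k][i][j] (a sketch; note the loop bounds here assume k ranges over the first dimension of size M, which the original bounds conflate with N):

int sumarray3d(int a[M][N][N])
{
    int i, j, k, sum = 0;
    for (k = 0; k < M; k++)             /* first index: slowest */
        for (i = 0; i < N; i++)         /* second index */
            for (j = 0; j < N; j++)     /* third index: stride-1 */
                sum += a[k][i][j];
    return sum;
}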

Code and Caches 14: Locality Example (3)

// struct of arrays
struct soa { float *x; float *y; float *z; float *r; };

void compute_r(struct soa s)
{
    for (i = 0; …) {
        s.r[i] = s.x[i] * s.x[i]
               + s.y[i] * s.y[i]
               + s.z[i] * s.z[i];
    }
}

// array of structs
struct aos { float x; float y; float z; float r; };

void compute_r(struct aos *s)
{
    for (i = 0; …) {
        s[i].r = s[i].x * s[i].x
               + s[i].y * s[i].y
               + s[i].z * s[i].z;
    }
}

Question: Which of these two exhibits better spatial locality?

Code and Caches 15: Locality Example (3)

// struct of arrays
struct soa { float *x; float *y; float *z; float *r; };

struct soa s;
s.x = malloc(8*sizeof(float));
...

// array of structs
struct aos { float x; float y; float z; float r; };

struct aos s[8];

[Memory diagram: the struct-of-arrays version holds four separate arrays of eight floats, 32 bytes each; the array-of-structs version holds eight 16-byte structs, with the fields x, y, z, r of each element adjacent.]

Code and Caches 16: Locality Example (3)

// struct of arrays
void compute_r(struct soa s)
{
    for (i = 0; …) {
        s.r[i] = s.x[i] * s.x[i]
               + s.y[i] * s.y[i]
               + s.z[i] * s.z[i];
    }
}

// array of structs
void compute_r(struct aos *s)
{
    for (i = 0; …) {
        s[i].r = s[i].x * s[i].x
               + s[i].y * s[i].y
               + s[i].z * s[i].z;
    }
}

[Memory diagram: eight x-y-z-r structs back to back versus four separate field arrays.]

Question: Which of these two exhibits better spatial locality? Both scan memory with small strides: the array-of-structs version uses all 16 bytes of each struct it touches, and the struct-of-arrays version advances four separate stride-1 streams, so both behave well here.

Code and Caches 17: Locality Example (4)

// struct of arrays
void sum_r(struct soa s)
{
    float sum = 0;
    for (i = 0; …) {
        sum += s.r[i];
    }
}

// array of structs
void sum_r(struct aos *s)
{
    float sum = 0;
    for (i = 0; …) {
        sum += s[i].r;
    }
}

[Memory diagram: eight x-y-z-r structs back to back versus four separate field arrays.]

Question: Which of these two exhibits better spatial locality? Here the struct-of-arrays version wins: it scans only the dense r array, while the array-of-structs version fetches a whole 16-byte struct to use just 4 bytes of it.
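A minimal timing sketch to observe the gap (our own; the element count, calloc initialization, and clock()-based timing are illustrative choices, not from the slides):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define COUNT (1 << 22)

struct aos { float x, y, z, r; };

int main(void)
{
    struct aos *a = calloc(COUNT, sizeof *a);   /* array of structs */
    float *r = calloc(COUNT, sizeof *r);        /* just the r column */
    float sum = 0.0f;
    clock_t t;

    t = clock();
    for (long i = 0; i < COUNT; i++)
        sum += a[i].r;                  /* fetches 16 bytes to use 4 */
    printf("AoS: %.3f s\n", (double) (clock() - t) / CLOCKS_PER_SEC);

    t = clock();
    for (long i = 0; i < COUNT; i++)
        sum += r[i];                    /* stride-1: every fetched byte used */
    printf("SoA: %.3f s (sum = %f)\n",
           (double) (clock() - t) / CLOCKS_PER_SEC, sum);

    free(a);
    free(r);
    return 0;
}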

Code and Caches 18: Locality Example (5)

// array of pointers to structs
struct aos { float x; float y; float z; float r; };

struct aos *apos[8];

for (i = 0; i < 8; i++)
    apos[i] = malloc(sizeof(struct aos));

QTP: How would this compare to the previous two? Each struct lives wherever malloc happens to place it, so consecutive elements need not be adjacent in memory, and spatial locality across elements is generally worse than with either previous layout.
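If an interface requires pointers to structs, one common remedy (a sketch; the backing-array name is ours) is to allocate all the structs in a single contiguous block, so the pointed-to data is still laid out stride-1:

struct aos *backing = malloc(8 * sizeof(struct aos));  /* one contiguous block */
struct aos *apos[8];

for (i = 0; i < 8; i++)
    apos[i] = &backing[i];   /* pointers differ, but the structs are adjacent */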

Code and Caches 19: Writing Cache Friendly Code

Key idea: our qualitative notion of locality is quantified through our understanding of cache memories.

Make the common case go fast:
- Focus on the inner loops of the core functions.

Minimize the misses in the inner loops:
- Repeated references to variables are good (temporal locality).
- Stride-1 reference patterns are good (spatial locality).

Code and Caches 20: Miss Rate Analysis for Matrix Multiply

[Diagram: A scanned along row i (varying k), B scanned down column j (varying k), C accessed at element (i, j).]

Assume:
- Line size = 32 B (big enough for four 64-bit words)
- Matrix dimension (N) is very large: approximate 1/N as 0.0
- Cache is not even big enough to hold multiple rows

Analysis method: look at the access pattern of the inner loop. Under these assumptions, a stride-1 scan of doubles misses once per line of four (0.25 misses per access), a scan down a column misses on every access (1.0), and a value held in a register costs no misses (0.0).

Code and Caches 21: Matrix Multiplication Example

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Variable sum is held in a register.

Description:
- Multiply N x N matrices
- O(N^3) total operations
- N reads per source element
- N values summed per destination element

Code and Caches 22: Matrix Multiplication (ijk)

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop (k varies):
- A (i,*): row-wise
- B (*,j): column-wise
- C (i,j): fixed

Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0

Code and Caches 23: Matrix Multiplication (kij)

/* kij */
for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop (j varies):
- A (i,k): fixed
- B (k,*): row-wise
- C (i,*): row-wise

Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25

Code and Caches 24: Matrix Multiplication (jki)

/* jki */
for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop (i varies):
- A (*,k): column-wise
- B (k,j): fixed
- C (*,j): column-wise

Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0

Code and Caches 25: Summary of Matrix Multiplication

ijk (& jik): 2 loads, 0 stores; misses/iter = 1.25

for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (& ikj): 2 loads, 1 store; misses/iter = 0.5

for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji): 2 loads, 1 store; misses/iter = 2.0

for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Code and Caches 26: Core i7 Matrix Multiply Performance

[Figure: cycles per inner-loop iteration versus matrix size for the three pairs of loop orders; jki/kji is slowest, ijk/jik is in between, and kij/ikj is fastest, matching the miss-rate analysis.]

Code and Caches 27: Concluding Observations

Programmer can optimize for cache performance:
- How data structures are organized
- How data are accessed: nested loop structure; blocking is a general technique (a sketch follows below)

All systems favor "cache friendly code":
- Getting absolute optimum performance is very platform specific: cache sizes, line sizes, associativities, etc.
- Can get most of the advantage with generic code: keep the working set reasonably small (temporal locality) and use small strides (spatial locality)
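As a concrete illustration of blocking, here is a sketch of a tiled matrix multiply (ours, not from the slides; the tile size B is illustrative and would need tuning so that the working tiles fit in the target cache):

#define B 32   /* tile size (illustrative): three 32x32 double tiles is 24 KB */

void matmul_blocked(int n, double a[n][n], double b[n][n], double c[n][n])
{
    for (int ii = 0; ii < n; ii += B)
      for (int jj = 0; jj < n; jj += B)
        for (int kk = 0; kk < n; kk += B)
          /* multiply the (ii,kk) tile of a by the (kk,jj) tile of b,
             accumulating into the (ii,jj) tile of c; each tile is
             reused many times while it is still cached */
          for (int i = ii; i < ii + B && i < n; i++)
            for (int j = jj; j < jj + B && j < n; j++) {
              double sum = c[i][j];
              for (int k = kk; k < kk + B && k < n; k++)
                sum += a[i][k] * b[k][j];
              c[i][j] = sum;
            }
}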