
1 Cache Memory

2 Outline
– Cache mountain
– Matrix multiplication
Suggested Reading: 6.6, 6.7

3 6.6 Putting it Together: The Impact of Caches on Program Performance
6.6.1 The Memory Mountain

4 The Memory Mountain (P512)
Read throughput (read bandwidth)
– The rate at which a program reads data from the memory system
Memory mountain
– A two-dimensional function of read bandwidth versus temporal and spatial locality
– Characterizes the capabilities of the memory system for each computer

5 Memory mountain main routine (Figure 6.41, P513)

/* mountain.c - Generate the memory mountain. */
#define MINBYTES (1 << 10)   /* Working set size ranges from 1 KB */
#define MAXBYTES (1 << 23)   /* ... up to 8 MB */
#define MAXSTRIDE 16         /* Strides range from 1 to 16 */
#define MAXELEMS MAXBYTES/sizeof(int)

int data[MAXELEMS];          /* The array we'll be traversing */

6 Memory mountain main routine

int main()
{
    int size;       /* Working set size (in bytes) */
    int stride;     /* Stride (in array elements) */
    double Mhz;     /* Clock frequency */

    init_data(data, MAXELEMS);   /* Initialize each element in data to 1 */
    Mhz = mhz(0);                /* Estimate the clock frequency */

7 Memory mountain main routine

    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}

8 Memory mountain test function (Figure 6.40, P512)

/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result;   /* So compiler doesn't optimize away the loop */
}

9 Memory mountain test function

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                       /* warm up the cache */
    cycles = fcyc2(test, elems, stride, 0);    /* call test(elems, stride) */
    return (size / stride) / (cycles / Mhz);   /* convert cycles to MB/s */
}

10 The Memory Mountain
Data
– Size: MAXBYTES (8 MB), or MAXELEMS (2 M) words
– Partially accessed
  Working set: from 8 MB down to 1 KB
  Stride: from 1 to 16

11 The Memory Mountain (Figure 6.42, P514)

12 Ridges of temporal locality
Slice through the memory mountain with stride = 1
– illuminates the read throughputs of the different caches and of main memory
(Ridge: the crest running along the top of a mountain)

13 Ridges of temporal locality (Figure 6.43, P515)

14 A slope of spatial locality
Slice through the memory mountain with size = 256KB
– shows the effect of the cache block size on read throughput

15 A slope of spatial locality (Figure 6.44, P516)

16 6.6 Putting it Together: The Impact of Caches on Program Performance
6.6.2 Rearranging Loops to Increase Spatial Locality

17 Matrix Multiplication (P517)

18 Matrix Multiplication Implementation (Figure 6.45 (a), P518)

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        c[i][j] = 0.0;
        for (k=0; k<n; k++)
            c[i][j] += a[i][k] * b[k][j];
    }
}

O(n^3) adds and multiplies
Each of the n^2 elements of A and B is read n times

19 Matrix Multiplication (P517)
Assumptions:
– Each array is an n × n array of double, with sizeof(double) == 8
– There is a single cache with a 32-byte block size (B = 32)
– The array size n is so large that a single matrix row does not fit in the L1 cache
– The compiler stores local variables in registers, and thus references to local variables inside loops do not require any load or store instructions

20 Matrix Multiplication (Figure 6.45 (a), P518)

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Variable sum held in register

21 Matrix multiplication (ijk) (Figure 6.46, P519) Class 1) (AB)

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop access patterns:
– A: row-wise, (i,*)
– B: column-wise, (*,j)
– C: fixed, (i,j)

Misses per inner loop iteration:
A: 0.25   B: 1.0   C: 0.0

22 Matrix multiplication (jik) (Figure 6.45 (b) P518, Figure 6.46 P519) Class 1) (AB)

/* jik */
for (j=0; j<n; j++) {
    for (i=0; i<n; i++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop access patterns:
– A: row-wise, (i,*)
– B: column-wise, (*,j)
– C: fixed, (i,j)

Misses per inner loop iteration:
A: 0.25   B: 1.0   C: 0.0

23 Matrix multiplication (kij) (Figure 6.45 (e) P518, Figure 6.46 P519) Class 3) (BC)

/* kij */
for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop access patterns:
– A: fixed, (i,k)
– B: row-wise, (k,*)
– C: row-wise, (i,*)

Misses per inner loop iteration:
A: 0.0   B: 0.25   C: 0.25

24 Matrix multiplication (ikj) (Figure 6.45 (f) P518, Figure 6.46 P519) Class 3) (BC)

/* ikj */
for (i=0; i<n; i++) {
    for (k=0; k<n; k++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop access patterns:
– A: fixed, (i,k)
– B: row-wise, (k,*)
– C: row-wise, (i,*)

Misses per inner loop iteration:
A: 0.0   B: 0.25   C: 0.25

25 Matrix multiplication (jki) (Figure 6.45 (c) P518, Figure 6.46 P519) Class 2) (AC)

/* jki */
for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop access patterns:
– A: column-wise, (*,k)
– B: fixed, (k,j)
– C: column-wise, (*,j)

Misses per inner loop iteration:
A: 1.0   B: 0.0   C: 1.0

26 Matrix multiplication (kji) (Figure 6.45 (d) P518, Figure 6.46 P519) Class 2) (AC)

/* kji */
for (k=0; k<n; k++) {
    for (j=0; j<n; j++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop access patterns:
– A: column-wise, (*,k)
– B: fixed, (k,j)
– C: column-wise, (*,j)

Misses per inner loop iteration:
A: 1.0   B: 0.0   C: 1.0

27 Pentium matrix multiply performance (Figure 6.47, P519)
Three performance classes: 1) (AB), 2) (AC), 3) (BC)

28 Pentium matrix multiply performance
Notice that miss rates are helpful but not perfect predictors.
– Code scheduling matters, too.

29 Summary of matrix multiplication

ijk (& jik): 2 loads, 0 stores; misses/iter = 1.25; class 1) (AB)

for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (& ikj): 2 loads, 1 store; misses/iter = 0.5; class 3) (BC)

for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji): 2 loads, 1 store; misses/iter = 2.0; class 2) (AC)

for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

30 6.6 Putting it Together: The Impact of Caches on Program Performance
6.6.3 Using Blocking to Increase Temporal Locality

31 Improving temporal locality by blocking (P520)
Example: blocked matrix multiplication
– "block" (in this context) does not mean "cache block"
– Instead, it means a sub-block within the matrix
– Example: N = 8; sub-block size = 4

32 Improving temporal locality by blocking

    | A11 A12 |   | B11 B12 |   | C11 C12 |
    | A21 A22 | x | B21 B22 | = | C21 C22 |

C11 = A11 B11 + A12 B21    C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21    C22 = A21 B12 + A22 B22

Key idea: sub-blocks (i.e., Axy) can be treated just like scalars.

33 Blocked matrix multiply (bijk) (Figure 6.48, P521)

for (jj=0; jj<n; jj+=bsize) {
    for (i=0; i<n; i++)
        for (j=jj; j < min(jj+bsize,n); j++)
            c[i][j] = 0.0;
    for (kk=0; kk<n; kk+=bsize) {
        for (i=0; i<n; i++) {
            for (j=jj; j < min(jj+bsize,n); j++) {
                sum = 0.0;
                for (k=kk; k < min(kk+bsize,n); k++) {
                    sum += a[i][k] * b[k][j];
                }
                c[i][j] += sum;
            }
        }
    }
}

34 Blocked matrix multiply analysis
Innermost loop pair multiplies a 1 × bsize sliver of A by a bsize × bsize block of B and accumulates into a 1 × bsize sliver of C
– Loop over i steps through n row slivers of A & C, using the same block of B
(Sliver: a long, thin strip)

35 Blocked matrix multiply analysis (Figure 6.49, P522)

Innermost loop pair:

for (i=0; i<n; i++) {
    for (j=jj; j < min(jj+bsize,n); j++) {
        sum = 0.0;
        for (k=kk; k < min(kk+bsize,n); k++) {
            sum += a[i][k] * b[k][j];
        }
        c[i][j] += sum;
    }
}

– The block of B is reused n times in succession
– Each row sliver of A is accessed bsize times
– Successive elements of a sliver of C are updated

36 Pentium blocked matrix multiply performance (Figure 6.50, P523)

37 6.7 Putting it Together: Exploring Locality in Your Programs

38 Techniques (P523)
– Focus your attention on the inner loops
– Try to maximize the spatial locality in your programs by reading data objects sequentially, in the order they are stored in memory
– Try to maximize the temporal locality in your programs by using a data object as often as possible once it has been read from memory
– Remember that performance is determined by both miss rates and the total number of memory accesses

