
1 Multithreaded Programming in Cilk, LECTURE 2. Charles E. Leiserson, Supercomputing Technologies Research Group, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology.

2 Minicourse Outline
● LECTURE 1: Basic Cilk programming: Cilk keywords, performance measures, scheduling.
● LECTURE 2: Analysis of Cilk algorithms: matrix multiplication, sorting, tableau construction.
● LABORATORY: Programming matrix multiplication in Cilk (Dr. Bradley C. Kuszmaul).
● LECTURE 3: Advanced Cilk programming: inlets, abort, speculation, data synchronization, and more.

3 LECTURE 2 Outline: Recurrences (Review), Matrix Multiplication, Merge Sort, Tableau Construction, Conclusion.

4 The Master Method
The Master Method for solving recurrences applies to recurrences of the form
  T(n) = a T(n/b) + f(n),
where a ≥ 1, b > 1, and f is asymptotically positive. (The unstated base case is T(n) = Θ(1) for sufficiently small n.)
IDEA: Compare n^(log_b a) with f(n).

5 Master Method, CASE 1: n^(log_b a) ≫ f(n)
T(n) = a T(n/b) + f(n)
Specifically, f(n) = O(n^(log_b a - ε)) for some constant ε > 0.
Solution: T(n) = Θ(n^(log_b a)).

6 Master Method, CASE 2: n^(log_b a) ≈ f(n)
T(n) = a T(n/b) + f(n)
Specifically, f(n) = Θ(n^(log_b a) lg^k n) for some constant k ≥ 0.
Solution: T(n) = Θ(n^(log_b a) lg^(k+1) n).

7 Master Method, CASE 3: n^(log_b a) ≪ f(n)
T(n) = a T(n/b) + f(n)
Specifically, f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and f(n) satisfies the regularity condition a f(n/b) ≤ c f(n) for some constant c < 1.
Solution: T(n) = Θ(f(n)).

8 Master Method Summary
T(n) = a T(n/b) + f(n)
CASE 1: f(n) = O(n^(log_b a - ε)), constant ε > 0 ⇒ T(n) = Θ(n^(log_b a)).
CASE 2: f(n) = Θ(n^(log_b a) lg^k n), constant k ≥ 0 ⇒ T(n) = Θ(n^(log_b a) lg^(k+1) n).
CASE 3: f(n) = Ω(n^(log_b a + ε)), constant ε > 0, and regularity condition ⇒ T(n) = Θ(f(n)).

9 Master Method Quiz
T(n) = 4 T(n/2) + n:        n^(log_b a) = n² ≫ n ⇒ CASE 1: T(n) = Θ(n²).
T(n) = 4 T(n/2) + n²:       n^(log_b a) = n² = n² lg⁰ n ⇒ CASE 2: T(n) = Θ(n² lg n).
T(n) = 4 T(n/2) + n³:       n^(log_b a) = n² ≪ n³ ⇒ CASE 3: T(n) = Θ(n³).
T(n) = 4 T(n/2) + n²/lg n:  the Master Method does not apply, because n²/lg n is smaller than n² but not polynomially smaller (it is not O(n^(2-ε)) for any ε > 0), so the recurrence falls in the gap between CASE 1 and CASE 2.

10 LECTURE 2 Outline: Recurrences (Review), Matrix Multiplication, Merge Sort, Tableau Construction, Conclusion.

11 Square-Matrix Multiplication
C = A × B, where C = [c_ij], A = [a_ij], and B = [b_ij] are n × n matrices and
  c_ij = Σ_{k=1..n} a_ik · b_kj.
Assume for simplicity that n = 2^k.
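For reference, here is a minimal serial implementation of this definition (plain C, row-major storage; the function name and layout are assumptions, not code from the slides):

void matmul_serial(float *C, const float *A, const float *B, int n) {
  /* C[i][j] = sum over k of A[i][k] * B[k][j], for n x n row-major matrices. */
  for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
      float sum = 0.0f;
      for (int k = 0; k < n; k++)
        sum += A[i*n + k] * B[k*n + j];
      C[i*n + j] = sum;
    }
  }
}

The divide-and-conquer Cilk versions on the following slides compute the same result; this serial loop is also the natural coarsened base case.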

12 Recursive Matrix Multiplication
Divide and conquer: partition each n × n matrix into four (n/2) × (n/2) quadrants:

  [ C11 C12 ]   [ A11 A12 ]   [ B11 B12 ]
  [ C21 C22 ] = [ A21 A22 ] × [ B21 B22 ]

              [ A11·B11  A11·B12 ]   [ A12·B21  A12·B22 ]
            = [ A21·B11  A21·B12 ] + [ A22·B21  A22·B22 ]

This requires 8 multiplications of (n/2) × (n/2) matrices and 1 addition of n × n matrices.

13 Matrix Multiply in Pseudo-Cilk (C = A · B)

cilk void Mult(*C, *A, *B, n) {
  float *T = Cilk_alloca(n*n*sizeof(float));
  ⟨base case & partition matrices⟩
  spawn Mult(C11,A11,B11,n/2);
  spawn Mult(C12,A11,B12,n/2);
  spawn Mult(C22,A21,B12,n/2);
  spawn Mult(C21,A21,B11,n/2);
  spawn Mult(T11,A12,B21,n/2);
  spawn Mult(T12,A12,B22,n/2);
  spawn Mult(T22,A22,B22,n/2);
  spawn Mult(T21,A22,B21,n/2);
  sync;
  spawn Add(C,T,n);
  sync;
  return;
}

Note the absence of type declarations; this is pseudo-Cilk.

14 Matrix Multiply in Pseudo-Cilk (C = A · B), continued. Same Mult code as slide 13; in practice, coarsen the base cases for efficiency.

15 Matrix Multiply in Pseudo-Cilk (C = A · B), continued. Same Mult code as slide 13. Submatrices are produced by pointer calculation, not by copying of elements, and the code also needs a row-size argument for array indexing; a sketch of this is given below.
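One plausible reading of the elided ⟨base case & partition matrices⟩ step, using the pointer calculation and row-size argument the slide mentions (the name rowsize and the cutoff 16 are illustrative assumptions, not from the slides):

  /* Quadrants of an n x n submatrix A that lives inside a larger row-major
     matrix with rowsize elements per full row, addressed without copying: */
  float *A11 = A;                           /* upper-left  */
  float *A12 = A + n/2;                     /* upper-right */
  float *A21 = A + (n/2) * rowsize;         /* lower-left  */
  float *A22 = A + (n/2) * rowsize + n/2;   /* lower-right */
  /* ...and similarly for B, C, and T.  Every recursive call passes rowsize
     through unchanged while n shrinks.  A coarsened base case might read:
     if (n <= 16) { serial triple-loop multiply of the n x n block; return; } */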

16 Matrix Multiply in Pseudo-Cilk, continued. The helper Add computes C = C + T:

cilk void Add(*C, *T, n) {
  ⟨base case & partition matrices⟩
  spawn Add(C11,T11,n/2);
  spawn Add(C12,T12,n/2);
  spawn Add(C21,T21,n/2);
  spawn Add(C22,T22,n/2);
  sync;
  return;
}

17 Work of Matrix Addition
Work: A₁(n) = 4 A₁(n/2) + Θ(1)
            = Θ(n²),
since n^(log_b a) = n^(log_2 4) = n² ≫ Θ(1) (CASE 1).

18 Span of Matrix Addition
Span: A∞(n) = A∞(n/2) + Θ(1)
            = Θ(lg n),
by CASE 2: n^(log_b a) = n^(log_2 1) = 1 ⇒ f(n) = Θ(n^(log_b a) lg⁰ n). Only one A∞(n/2) term appears because the span of the four parallel recursive calls is their maximum.

19 Work of Matrix Multiplication
Work: M₁(n) = 8 M₁(n/2) + A₁(n) + Θ(1)
            = 8 M₁(n/2) + Θ(n²)
            = Θ(n³),
since n^(log_b a) = n^(log_2 8) = n³ ≫ Θ(n²) (CASE 1).

20 Span of Matrix Multiplication
Span: M∞(n) = M∞(n/2) + A∞(n) + Θ(1)
            = M∞(n/2) + Θ(lg n)
            = Θ(lg² n),
by CASE 2: n^(log_b a) = n^(log_2 1) = 1 ⇒ f(n) = Θ(n^(log_b a) lg¹ n). The eight recursive multiplies run in parallel, so only one M∞(n/2) term lies on the critical path.

21 Parallelism of Matrix Multiply
Work: M₁(n) = Θ(n³)
Span: M∞(n) = Θ(lg² n)
Parallelism: M₁(n)/M∞(n) = Θ(n³/lg² n)
For 1000 × 1000 matrices, parallelism ≈ (10³)³/10² = 10⁷.

22 Stack Temporaries
float *T = Cilk_alloca(n*n*sizeof(float));
In hierarchical-memory machines (especially chip multiprocessors), memory accesses are so expensive that minimizing storage often yields higher performance.
IDEA: Trade off parallelism for less storage.

23 No-Temp Matrix Multiplication

cilk void MultA(*C, *A, *B, n) {
  // C = C + A * B
  ⟨base case & partition matrices⟩
  spawn MultA(C11,A11,B11,n/2);
  spawn MultA(C12,A11,B12,n/2);
  spawn MultA(C22,A21,B12,n/2);
  spawn MultA(C21,A21,B11,n/2);
  sync;
  spawn MultA(C21,A22,B21,n/2);
  spawn MultA(C22,A22,B22,n/2);
  spawn MultA(C12,A12,B22,n/2);
  spawn MultA(C11,A12,B21,n/2);
  sync;
  return;
}

Saves space, but at what expense?

24 Work of No-Temp Multiply
Work: M₁(n) = 8 M₁(n/2) + Θ(1)
            = Θ(n³)  (CASE 1).

25 Span of No-Temp Multiply
Span: M∞(n) = 2 M∞(n/2) + Θ(1)
            = Θ(n)  (CASE 1).
The two sync statements divide the procedure into two phases that run in series; within each phase the span of the four parallel calls is their maximum, which gives the coefficient 2.

26 Parallelism of No-Temp Multiply
Work: M₁(n) = Θ(n³)
Span: M∞(n) = Θ(n)
Parallelism: M₁(n)/M∞(n) = Θ(n²)
For 1000 × 1000 matrices, parallelism ≈ (10³)³/10³ = 10⁶. Faster in practice!

27 Testing Synchronization
Cilk language feature: a programmer can check whether a Cilk procedure is "synched" (without actually performing a sync) by testing the pseudovariable SYNCHED:
  SYNCHED = 0 ⇒ some spawned children might not have returned.
  SYNCHED = 1 ⇒ all spawned children have definitely returned.

28 Best of Both Worlds

cilk void Mult1(*C, *A, *B, n) { // multiply & store
  ⟨base case & partition matrices⟩
  spawn Mult1(C11,A11,B11,n/2); // multiply & store
  spawn Mult1(C12,A11,B12,n/2);
  spawn Mult1(C22,A21,B12,n/2);
  spawn Mult1(C21,A21,B11,n/2);
  if (SYNCHED) {
    spawn MultA1(C11,A12,B21,n/2); // multiply & add
    spawn MultA1(C12,A12,B22,n/2);
    spawn MultA1(C22,A22,B22,n/2);
    spawn MultA1(C21,A22,B21,n/2);
  } else {
    float *T = Cilk_alloca(n*n*sizeof(float));
    spawn Mult1(T11,A12,B21,n/2); // multiply & store
    spawn Mult1(T12,A12,B22,n/2);
    spawn Mult1(T22,A22,B22,n/2);
    spawn Mult1(T21,A22,B21,n/2);
    sync;
    spawn Add(C,T,n); // C = C + T
  }
  sync;
  return;
}

This code is just as parallel as the original, but it only uses more space if runtime parallelism actually exists.

29 Ordinary Matrix Multiplication
c_ij = Σ_{k=1..n} a_ik · b_kj
IDEA: Spawn n² inner products in parallel. Compute each inner product in parallel.
Work: Θ(n³).  Span: Θ(lg n).  Parallelism: Θ(n³/lg n).
BUT this algorithm exhibits poor locality and does not exploit the cache hierarchy of modern microprocessors, especially CMPs.
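One way to realize the Θ(lg n) span for a single inner product, sketched here as an assumption about how the slide's idea would be coded (the function name, signature, and stride parameter are illustrative, not from the lecture):

cilk float InnerProd(const float *a, const float *b, int n, int bstride) {
  /* Dot product of a[0..n-1] (a contiguous row of A) with a column of B,
     whose elements are bstride apart, by parallel divide and conquer. */
  float x, y;
  if (n == 1) return a[0] * b[0];
  x = spawn InnerProd(a, b, n/2, bstride);
  y = spawn InnerProd(a + n/2, b + (n/2)*bstride, n - n/2, bstride);
  sync;
  return x + y;
}

The recursion halves n at each level, so each inner product has span Θ(lg n); spawning the n² inner products themselves by divide and conquer over the output cells adds only another Θ(lg n).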

30 LECTURE 2 Outline: Recurrences (Review), Matrix Multiplication, Merge Sort, Tableau Construction, Conclusion.

31 Merging Two Sorted Arrays
[Figure: merging the sorted arrays 3 12 19 46 and 4 14 21 23 into 3 4 12 14 19 21 23 46.]

void Merge(int *C, int *A, int *B, int na, int nb) {
  while (na > 0 && nb > 0) {
    if (*A <= *B) {
      *C++ = *A++; na--;
    } else {
      *C++ = *B++; nb--;
    }
  }
  while (na > 0) {
    *C++ = *A++; na--;
  }
  while (nb > 0) {
    *C++ = *B++; nb--;
  }
}

Time to merge n elements = Θ(n).

32 Merge Sort
[Figure: merge sort recursively sorts each half of the input and then merges the two sorted halves.]

cilk void MergeSort(int *B, int *A, int n) {
  if (n == 1) {
    B[0] = A[0];
  } else {
    int *C;
    C = (int*) Cilk_alloca(n*sizeof(int));
    spawn MergeSort(C, A, n/2);
    spawn MergeSort(C+n/2, A+n/2, n-n/2);
    sync;
    Merge(B, C, C+n/2, n/2, n-n/2);
  }
}

33 Work of Merge Sort
Work: T₁(n) = 2 T₁(n/2) + Θ(n)
            = Θ(n lg n),
by CASE 2: n^(log_b a) = n^(log_2 2) = n ⇒ f(n) = Θ(n^(log_b a) lg⁰ n).

34 Span of Merge Sort
Span: T∞(n) = T∞(n/2) + Θ(n)
            = Θ(n),
by CASE 3: n^(log_b a) = n^(log_2 1) = 1 ≪ Θ(n).

35 Parallelism of Merge Sort
Work: T₁(n) = Θ(n lg n)
Span: T∞(n) = Θ(n)
Parallelism: T₁(n)/T∞(n) = Θ(lg n)
We need to parallelize the merge!

36 Parallel Merge
Assume na ≥ nb (otherwise swap the roles of A and B). Pick the median element A[na/2] of A, and binary-search B for the index j such that B[0..j-1] ≤ A[na/2] ≤ B[j..nb-1]. Recursively merge the elements ≤ A[na/2] and, in parallel, recursively merge the elements ≥ A[na/2].
KEY IDEA: If the total number of elements to be merged in the two arrays is n = na + nb, the total number of elements in the larger of the two recursive merges is at most (3/4)n. (Since na ≥ nb, we have na ≥ n/2; each recursive merge receives at least na/2 ≥ n/4 elements, so neither receives more than (3/4)n.)

37 Parallel Merge

cilk void P_Merge(int *C, int *A, int *B, int na, int nb) {
  if (na < nb) {
    spawn P_Merge(C, B, A, nb, na);
  } else if (na == 1) {
    if (nb == 0) {
      C[0] = A[0];
    } else {
      C[0] = (A[0]<B[0]) ? A[0] : B[0]; /* minimum */
      C[1] = (A[0]<B[0]) ? B[0] : A[0]; /* maximum */
    }
  } else {
    int ma = na/2;
    int mb = BinarySearch(A[ma], B, nb);
    spawn P_Merge(C, A, B, ma, mb);
    spawn P_Merge(C+ma+mb, A+ma, B+mb, na-ma, nb-mb);
    sync;
  }
}

Coarsen base cases for efficiency.
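The BinarySearch helper is used but not shown on the slides. A plausible version, under the assumed contract that it returns the number of elements of B that are strictly less than the key (i.e., the insertion point of the key in B):

int BinarySearch(int key, int *B, int nb) {
  /* Returns the count of elements of B[0..nb-1] that are < key. */
  int lo = 0, hi = nb;
  while (lo < hi) {
    int mid = lo + (hi - lo) / 2;
    if (B[mid] < key) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

With this contract, every element handed to the first recursive merge is at most A[ma] and every element handed to the second is at least A[ma], so the concatenated output is sorted. Its running time is Θ(lg nb), which is where the Θ(lg n) terms in the P_Merge recurrences come from.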

38 Span of P_Merge
Span: T∞(n) ≤ T∞(3n/4) + Θ(lg n)
            = Θ(lg² n),
by CASE 2: n^(log_b a) = n^(log_{4/3} 1) = 1 ⇒ f(n) = Θ(n^(log_b a) lg¹ n).

39 Work of P_Merge
Work: T₁(n) = T₁(αn) + T₁((1-α)n) + Θ(lg n), where 1/4 ≤ α ≤ 3/4.
CLAIM: T₁(n) = Θ(n).

40 Analysis of Work Recurrence
Recurrence: T₁(n) = T₁(αn) + T₁((1-α)n) + Θ(lg n), where 1/4 ≤ α ≤ 3/4.
Substitution method: the inductive hypothesis is T₁(k) ≤ c₁k - c₂ lg k, where c₁, c₂ > 0. Prove that the relation holds, and solve for c₁ and c₂.

41 Analysis of Work Recurrence (continued)
T₁(n) = T₁(αn) + T₁((1-α)n) + Θ(lg n)
      ≤ c₁(αn) - c₂ lg(αn) + c₁((1-α)n) - c₂ lg((1-α)n) + Θ(lg n)

42 Analysis of Work Recurrence (concluded)
      ≤ c₁n - c₂ lg(αn) - c₂ lg((1-α)n) + Θ(lg n)
      ≤ c₁n - c₂ (lg(α(1-α)) + 2 lg n) + Θ(lg n)
      ≤ c₁n - c₂ lg n - (c₂ (lg n + lg(α(1-α))) - Θ(lg n))
      ≤ c₁n - c₂ lg n,
by choosing c₁ and c₂ large enough.

43 Parallelism of P_Merge
Work: T₁(n) = Θ(n)
Span: T∞(n) = Θ(lg² n)
Parallelism: T₁(n)/T∞(n) = Θ(n/lg² n)

44 Parallel Merge Sort

cilk void P_MergeSort(int *B, int *A, int n) {
  if (n == 1) {
    B[0] = A[0];
  } else {
    int *C;
    C = (int*) Cilk_alloca(n*sizeof(int));
    spawn P_MergeSort(C, A, n/2);
    spawn P_MergeSort(C+n/2, A+n/2, n-n/2);
    sync;
    spawn P_Merge(B, C, C+n/2, n/2, n-n/2);
  }
}

45 Work of Parallel Merge Sort
Work: T₁(n) = 2 T₁(n/2) + Θ(n)
            = Θ(n lg n)  (CASE 2).

46 Span of Parallel Merge Sort
Span: T∞(n) = T∞(n/2) + Θ(lg² n)
            = Θ(lg³ n),
by CASE 2: n^(log_b a) = n^(log_2 1) = 1 ⇒ f(n) = Θ(n^(log_b a) lg² n).

47 Parallelism of Merge Sort
Work: T₁(n) = Θ(n lg n)
Span: T∞(n) = Θ(lg³ n)
Parallelism: T₁(n)/T∞(n) = Θ(n/lg² n)

48 LECTURE 2 Outline: Recurrences (Review), Matrix Multiplication, Merge Sort, Tableau Construction, Conclusion.

49 Tableau Construction
Problem: Fill in an n × n tableau A, where
  A[i, j] = f( A[i, j-1], A[i-1, j], A[i-1, j-1] ).
[Figure: an 8 × 8 tableau; each cell depends on its left, upper, and upper-left neighbors.]
Dynamic-programming applications: longest common subsequence, edit distance, time warping.
Work: Θ(n²).
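For concreteness, a serial fill of the tableau might look like the following sketch (the function f, the boundary initialization, and the row-major layout are assumptions; the lecture leaves them abstract):

void tableau_serial(double *A, int n, double (*f)(double, double, double)) {
  /* Assumes row 0 and column 0 of A are already initialized by the caller.
     Filling in row-major order guarantees every dependency is ready. */
  for (int i = 1; i < n; i++)
    for (int j = 1; j < n; j++)
      A[i*n + j] = f(A[i*n + (j-1)], A[(i-1)*n + j], A[(i-1)*n + (j-1)]);
}

The recursive constructions on the next slides expose parallelism by filling subsquares whose dependencies have already been satisfied.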

50 Recursive Construction
Divide the n × n tableau into four (n/2) × (n/2) quadrants: I (upper left), II (upper right), III (lower left), IV (lower right). Quadrant I must be filled first; II and III can then be filled in parallel; IV is filled last.
Cilk code:
  spawn I;
  sync;
  spawn II;
  spawn III;
  sync;
  spawn IV;
  sync;
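A sketch of a recursive procedure behind this spawn pattern (the global array A, the function f, the coordinate parameters, and the base case are illustrative assumptions; true boundary cells with i0 = 0 or j0 = 0 would need separate initialization):

cilk void R_Tableau(int i0, int j0, int n) {
  /* Fill the n x n subsquare whose upper-left corner is (i0, j0),
     assuming all cells above it and to its left are already filled. */
  if (n == 1) {
    A[i0][j0] = f(A[i0][j0-1], A[i0-1][j0], A[i0-1][j0-1]);
    return;
  }
  spawn R_Tableau(i0,       j0,       n/2);   /* I   */
  sync;
  spawn R_Tableau(i0,       j0 + n/2, n/2);   /* II  */
  spawn R_Tableau(i0 + n/2, j0,       n/2);   /* III */
  sync;
  spawn R_Tableau(i0 + n/2, j0 + n/2, n/2);   /* IV  */
  sync;
}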

51 Recursive Construction: Work
Work: T₁(n) = 4 T₁(n/2) + Θ(1)
            = Θ(n²)  (CASE 1).

52 Recursive Construction: Span
Span: T∞(n) = 3 T∞(n/2) + Θ(1)
            = Θ(n^(lg 3))  (CASE 1).
The coefficient is 3 because the critical path passes through quadrant I, then through II and III (which run in parallel), then through IV.

53 Analysis of Tableau Construction
Work: T₁(n) = Θ(n²)
Span: T∞(n) = Θ(n^(lg 3)) ≈ Θ(n^1.58)
Parallelism: T₁(n)/T∞(n) ≈ Θ(n^0.42)

54 A More-Parallel Construction
Divide the tableau into nine (n/3) × (n/3) subsquares, numbered I through IX along successive anti-diagonals (I in the upper-left corner, IX in the lower-right); subsquares on the same anti-diagonal can be filled in parallel.
Cilk code:
  spawn I;
  sync;
  spawn II;
  spawn III;
  sync;
  spawn IV;
  spawn V;
  spawn VI;
  sync;
  spawn VII;
  spawn VIII;
  sync;
  spawn IX;
  sync;

55 A More-Parallel Construction: Work
Work: T₁(n) = 9 T₁(n/3) + Θ(1)
            = Θ(n²)  (CASE 1).

56 A More-Parallel Construction: Span
Span: T∞(n) = 5 T∞(n/3) + Θ(1)
            = Θ(n^(log_3 5))  (CASE 1).

57 Analysis of Revised Construction
Work: T₁(n) = Θ(n²)
Span: T∞(n) = Θ(n^(log_3 5)) ≈ Θ(n^1.46)
Parallelism: T₁(n)/T∞(n) ≈ Θ(n^0.54)
More parallel by a factor of Θ(n^0.54)/Θ(n^0.42) = Θ(n^0.12).

58 Puzzle
What is the largest parallelism that can be obtained for the tableau-construction problem using Cilk? You may use only basic Cilk control constructs (spawn, sync) for synchronization: no locks, no synchronizing through memory, etc.

59 LECTURE 2 Outline: Recurrences (Review), Matrix Multiplication, Merge Sort, Tableau Construction, Conclusion.

60 Key Ideas
● Cilk is simple: cilk, spawn, sync, SYNCHED
● Recurrences, recurrences, recurrences, …
● Work & span

61 Minicourse Outline
● LECTURE 1: Basic Cilk programming: Cilk keywords, performance measures, scheduling.
● LECTURE 2: Analysis of Cilk algorithms: matrix multiplication, sorting, tableau construction.
● LABORATORY: Programming matrix multiplication in Cilk (Dr. Bradley C. Kuszmaul).
● LECTURE 3: Advanced Cilk programming: inlets, abort, speculation, data synchronization, and more.

