CS 140: Matrix multiplication
  - Linear algebra problems
  - Matrix multiplication I: cache issues
  - Matrix multiplication II: parallel issues

Thanks to Jim Demmel and Kathy Yelick (UCB) for some of these slides.

Problems in linear algebra

Solving linear equations: find x such that A*x = b
  - The workhorse of computational modeling
  - Dense: electromagnetics; also the kernel in sparse solvers
  - Sparse: differential equations, etc.

Eigenvalue / eigenvector: find λ and x such that A*x = λ*x
  - Dynamics of systems; vibration; information retrieval

Today: matrix-matrix multiplication, C = A * B
  - Kernel in linear equation solvers, etc.

Sequential Matrix Multiplication

The mathematics is simple, but getting good performance is complicated by the memory hierarchy: cache issues.

Avoiding data movement: reuse and locality

  - Large memories are slow; fast memories are small
  - Parallel processors, collectively, have large, fast cache; the slow accesses to "remote" data are what we call "communication"
  - An algorithm should do most of its work on local data

[Figure: conventional storage hierarchy, repeated per processor (Proc / Cache / L2 Cache / L3 Cache / Memory), with potential interconnects between the nodes]

Simplified model of hierarchical memory

Assume just 2 levels in the hierarchy, fast and slow, with all data initially in slow memory.

  m   = number of memory elements (words) moved between fast and slow memory
  t_m = time per slow memory operation
  f   = number of arithmetic operations
  t_f = time per arithmetic operation; t_f << t_m
  q   = f / m = average number of flops per slow-memory access

Minimum possible time = f*t_f, when all data is in fast memory.
Actual time = f*t_f + m*t_m = f*t_f * (1 + (t_m/t_f) * (1/q))
Larger q means a time closer to the minimum f*t_f.
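
To see how q controls the gap between actual and minimum time, here is a small C sketch that evaluates the model above; the machine ratio t_m/t_f = 100 and the flop count are illustrative assumptions, not measured values.

```c
#include <stdio.h>

/* Evaluate the two-level model: time = f*t_f * (1 + (t_m/t_f)/q). */
int main(void) {
    double tf = 1.0;          /* time per flop (arbitrary units)            */
    double tm = 100.0 * tf;   /* assumed: slow memory 100x slower per word  */
    double f  = 2e9;          /* flops, e.g. 2*n^3 for n = 1000             */
    double q[] = {2.0, 32.0, 1000.0};  /* sample computational intensities  */
    for (int i = 0; i < 3; i++) {
        double time = f * tf * (1.0 + (tm / tf) / q[i]);
        printf("q = %7.1f -> time / minimum = %.2f\n", q[i], time / (f * tf));
    }
    return 0;
}
```

With q = 2 (the naïve matmul below) the run is 51x slower than the arithmetic minimum; with q = 1000 it is within 10%.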

"Naïve" Matrix Multiply

{implements C = C + A*B}
for i = 1 to n
  for j = 1 to n
    for k = 1 to n
      C(i,j) = C(i,j) + A(i,k) * B(k,j)

[Figure: C(i,j) = C(i,j) + A(i,:) * B(:,j)]

The algorithm has 2*n^3 = O(n^3) flops and operates on 3*n^2 words of memory.
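
For concreteness, a minimal, runnable C version of the triply nested loop above; the n = 512 problem size and the flat row-major layout are choices of this sketch, not from the slides.

```c
#include <stdlib.h>

/* Naive C = C + A*B on n-by-n row-major matrices: the i,j,k loop order
   from the slide, one multiply-add per innermost iteration. */
void matmul_naive(int n, const double *A, const double *B, double *C) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                C[i*n + j] += A[i*n + k] * B[k*n + j];
}

int main(void) {
    int n = 512;              /* assumed problem size */
    double *A = calloc((size_t)n * n, sizeof *A);
    double *B = calloc((size_t)n * n, sizeof *B);
    double *C = calloc((size_t)n * n, sizeof *C);
    matmul_naive(n, A, B, C);
    free(A); free(B); free(C);
    return 0;
}
```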

Matrix Multiply on RS/6000

[Figure: cycles per flop vs. matrix size N; the measured time fits T = N^4.7]

O(N^3) performance would show constant cycles per flop, but the measured performance looks much closer to O(N^5). Size 2000 took 5 days; at that growth rate, a much larger size would take 1095 years.

Slide source: Larry Carter, UCSD

Matrix Multiply on RS/6000 (continued)

[Figure: the same plot, annotated with the memory-system events behind each regime: a page miss every iteration; a TLB miss every iteration; a cache miss every 16 iterations; a page miss every 512 iterations]

Slide source: Larry Carter, UCSD

"Naïve" Matrix Multiply

{implements C = C + A*B}
for i = 1 to n
  {read row i of A into fast memory}
  for j = 1 to n
    {read C(i,j) into fast memory}
    {read column j of B into fast memory}
    for k = 1 to n
      C(i,j) = C(i,j) + A(i,k) * B(k,j)
    {write C(i,j) back to slow memory}

[Figure: C(i,j) = C(i,j) + A(i,:) * B(:,j)]

"Naïve" Matrix Multiply: how many references to slow memory?

m = n^3      read each column of B n times
  + n^2      read each row of A once
  + 2*n^2    read and write each element of C once
  = n^3 + 3*n^2

So q = f/m = 2*n^3 / (n^3 + 3*n^2) ≈ 2 for large n: no improvement over matrix-vector multiply.

Blocked Matrix Multiply

Consider A, B, C to be N-by-N matrices of b-by-b subblocks, where b = n/N is called the block size.

for i = 1 to N
  for j = 1 to N
    {read block C(i,j) into fast memory}
    for k = 1 to N
      {read block A(i,k) into fast memory}
      {read block B(k,j) into fast memory}
      C(i,j) = C(i,j) + A(i,k) * B(k,j)   {do a matrix multiply on blocks}
    {write block C(i,j) back to slow memory}

[Figure: C(i,j) = C(i,j) + A(i,k) * B(k,j)]
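
A minimal C sketch of the blocked loop above, assuming the block size b divides n evenly (the slides' b = n/N); hoisting the C element into a scalar inside the block multiply is a choice of this sketch.

```c
/* Blocked C = C + A*B on n-by-n row-major matrices. The three outer
   loops walk b-by-b blocks; the three inner loops multiply one pair of
   blocks, so each block loaded into cache is reused b times. */
void matmul_blocked(int n, int b, const double *A, const double *B, double *C) {
    for (int i = 0; i < n; i += b)
        for (int j = 0; j < n; j += b)
            for (int k = 0; k < n; k += b)
                /* C(i,j) block += A(i,k) block * B(k,j) block */
                for (int ii = i; ii < i + b; ii++)
                    for (int jj = j; jj < j + b; jj++) {
                        double s = C[ii*n + jj];
                        for (int kk = k; kk < k + b; kk++)
                            s += A[ii*n + kk] * B[kk*n + jj];
                        C[ii*n + jj] = s;
                    }
}
```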

Blocked Matrix Multiply: memory traffic

m is the amount of memory traffic between slow and fast memory. The matrix has n×n elements and N×N blocks, each of size b×b; f is the number of floating point operations, 2*n^3 for this problem, and q = f/m measures the algorithm's efficiency in the memory system.

m = N*n^2    read each block of B N^3 times (N^3 * n/N * n/N)
  + N*n^2    read each block of A N^3 times
  + 2*n^2    read and write each block of C once
  = (2N + 2) * n^2

So the computational intensity q = f/m = 2*n^3 / ((2N + 2) * n^2) ≈ n/N = b for large n.

We can improve performance by increasing the block size b. Blocked matmul can be much faster than matrix-vector multiply (q = 2).
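
The block size cannot grow without bound: one block each of A, B, and C must fit in fast memory at once, so roughly 3*b^2 <= M_fast, giving q ≈ b <= sqrt(M_fast/3). A small helper applying this bound; treating fast memory as a single cache of a given byte size is a simplifying assumption of this sketch.

```c
#include <math.h>

/* Largest b such that three b-by-b blocks of doubles fit in a fast
   memory of m_fast_bytes: 3*b*b*sizeof(double) <= m_fast_bytes. */
int pick_block_size(long m_fast_bytes) {
    long words = m_fast_bytes / (long)sizeof(double);
    return (int)floor(sqrt(words / 3.0));
}
/* e.g. a 256 KB cache holds 32768 doubles, so b is about 104 */
```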

BLAS: Basic Linear Algebra Subroutines

Industry standard interface; vendors and others supply optimized implementations.

History:
  - BLAS1 (1970s): vector operations: dot product, saxpy (y = α*x + y), etc.
      m = 2*n, f = 2*n, q ≈ 1 or less
  - BLAS2 (mid 1980s): matrix-vector operations: matrix-vector multiply, etc.
      m = n^2, f = 2*n^2, q ≈ 2; less overhead, so somewhat faster than BLAS1
  - BLAS3 (late 1980s): matrix-matrix operations: matrix-matrix multiply, etc.
      m >= n^2, f = O(n^3), so q can possibly be as large as n

BLAS3 is potentially much faster than BLAS2. Good algorithms use BLAS3 when possible (LAPACK).
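
In practice one calls an optimized BLAS3 routine rather than hand-tuning the loops. A minimal sketch using the standard CBLAS interface (any conforming implementation, e.g. OpenBLAS, provides cblas_dgemm; the n = 512 size is illustrative):

```c
#include <stdlib.h>
#include <cblas.h>

int main(void) {
    int n = 512;   /* assumed problem size */
    double *A = calloc((size_t)n * n, sizeof *A);
    double *B = calloc((size_t)n * n, sizeof *B);
    double *C = calloc((size_t)n * n, sizeof *C);

    /* BLAS3 dgemm: C = 1.0*A*B + 1.0*C, row-major, no transposes. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 1.0, C, n);

    free(A); free(B); free(C);
    return 0;
}
```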

BLAS speeds on an IBM RS6000/590

[Figure: Mflop rate vs. problem size for BLAS 3 (n-by-n matrix-matrix multiply) vs. BLAS 2 (n-by-n matrix-vector multiply) vs. BLAS 1 (saxpy of n vectors); peak speed = 266 Mflops]

Parallel Matrix Multiplication

Parallel Matrix Multiply

Computing C = C + A*B, using the basic algorithm: 2*n^3 flops.

The variables are:
  - Data layout
  - Structure of communication
  - Schedule of communication

Simple model for analyzing algorithm performance:
  t_comm = communication time
         = t_startup + #words * t_data
         = latency + #words * 1/(words/second)
         = α + w*β

Latency-Bandwidth Model

Network of a fixed number p of processors, fully connected, each with local memory.
  - Latency (α) accounts for performance varying with the number of messages
  - Inverse bandwidth (β) accounts for performance varying with the volume of data

Parallel efficiency: e(p) = serial time / (p * parallel time); perfect speedup ⇔ e(p) = 1.

Matrix Multiply with 1D Column Layout

Assume matrices are n x n and n is divisible by p. (This may be a reasonable assumption for analysis, but not for code.)
  - A(i) refers to the n by n/p block column that processor i owns (similarly for B(i) and C(i))
  - B(i,j) is the n/p by n/p subblock of B(i), in rows j*n/p through (j+1)*n/p - 1

The algorithm uses the formula
  C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)

[Figure: the matrix divided into p = 8 block columns, owned by processors p0 through p7]

Matrix Multiply: 1D Layout on Bus or Ring

The algorithm uses the formula
  C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)

First, consider a bus-connected machine without broadcast: only one pair of processors can communicate at a time (as on Ethernet).
Second, consider a machine with processors on a ring: all processors may communicate with nearest neighbors simultaneously.

MatMul: 1D layout on Bus without Broadcast

Naïve algorithm:

  C(myproc) = C(myproc) + A(myproc)*B(myproc,myproc)
  for i = 0 to p-1
    for j = 0 to p-1 except i
      if (myproc == i) send A(i) to processor j
      if (myproc == j)
        receive A(i) from processor i
        C(myproc) = C(myproc) + A(i)*B(i,myproc)
      barrier

Cost of inner loop:
  computation: 2*n*(n/p)^2 = 2*n^3/p^2
  communication: α + β*n^2/p
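
An MPI sketch of this naive algorithm, with each processor storing its n-by-n/p block columns row-major in Aloc, Bloc, Cloc; the names, the layout, and the point-to-point sends standing in for the one-pair-at-a-time bus are all assumptions of this sketch.

```c
#include <mpi.h>
#include <stdlib.h>

/* Naive 1D matmul: each processor i in turn sends its block column A(i)
   to every other processor, which multiplies it into its own C block. */
void matmul_1d_naive(int n, double *Aloc, const double *Bloc, double *Cloc,
                     MPI_Comm comm) {
    int p, me;
    MPI_Comm_size(comm, &p);
    MPI_Comm_rank(comm, &me);
    int nb = n / p;                        /* block-column width */
    double *Atmp = malloc((size_t)n * nb * sizeof *Atmp);

    for (int i = 0; i < p; i++) {
        double *Ai = (i == me) ? Aloc : Atmp;
        if (i == me) {                     /* owner sends A(i) to everyone */
            for (int j = 0; j < p; j++)
                if (j != me)
                    MPI_Send(Aloc, n * nb, MPI_DOUBLE, j, 0, comm);
        } else {
            MPI_Recv(Atmp, n * nb, MPI_DOUBLE, i, 0, comm, MPI_STATUS_IGNORE);
        }
        /* C(me) += A(i)*B(i,me); B(i,me) is rows i*nb..(i+1)*nb-1 of Bloc. */
        for (int r = 0; r < n; r++)
            for (int c = 0; c < nb; c++)
                for (int k = 0; k < nb; k++)
                    Cloc[r*nb + c] += Ai[r*nb + k] * Bloc[(i*nb + k)*nb + c];
        MPI_Barrier(comm);
    }
    free(Atmp);
}
```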

Naïve MatMul (continued)

Cost of inner loop:
  computation: 2*n*(n/p)^2 = 2*n^3/p^2
  communication: α + β*n^2/p ... approximately

Only one pair of processors (i and j) is active on any iteration, and of those, only i is doing computation, so the algorithm is almost entirely serial.

Running time = (p*(p-1) + 1) * computation + p*(p-1) * communication
             ≈ 2*n^3 + p^2*α + p*n^2*β

This is worse than the serial time, and it grows with p.

Parallel Efficiency = 2*n^3 / (p * Total Time)
                    = 1 / (p + α*p^3/(2*n^3) + β*p^2/(2*n))
                    = 1 / (p + ...)

Matmul for 1D layout on a Processor Ring

Pairs of processors can communicate simultaneously.

  Copy A(myproc) into Tmp
  C(myproc) = C(myproc) + Tmp*B(myproc, myproc)
  for j = 1 to p-1
    send Tmp to processor myproc+1 mod p
    receive Tmp from processor myproc-1 mod p
    C(myproc) = C(myproc) + Tmp*B(myproc-j mod p, myproc)

Avoiding deadlock: use nonblocking sends, or a more complicated schedule. In practice you may want double buffering, for overlap.

Time of inner loop = 2*(α + β*n^2/p) + 2*n*(n/p)^2
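
An MPI sketch of the ring algorithm, following the same assumed layout as the previous sketch; MPI_Sendrecv_replace pairs the send and receive, which is one way to avoid the deadlock noted above.

```c
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* 1D ring matmul: the local block column of A circulates around the
   ring; after p steps every processor has multiplied all of A into
   its own block column of C. */
void matmul_1d_ring(int n, const double *Aloc, const double *Bloc,
                    double *Cloc, MPI_Comm comm) {
    int p, me;
    MPI_Comm_size(comm, &p);
    MPI_Comm_rank(comm, &me);
    int nb = n / p;
    int next = (me + 1) % p, prev = (me + p - 1) % p;

    double *Tmp = malloc((size_t)n * nb * sizeof *Tmp);
    memcpy(Tmp, Aloc, (size_t)n * nb * sizeof *Tmp);

    for (int j = 0; j < p; j++) {
        int i = (me - j + p) % p;      /* Tmp currently holds A(i) */
        /* C(me) += A(i) * B(i,me) */
        for (int r = 0; r < n; r++)
            for (int c = 0; c < nb; c++)
                for (int k = 0; k < nb; k++)
                    Cloc[r*nb + c] += Tmp[r*nb + k] * Bloc[(i*nb + k)*nb + c];
        if (j < p - 1)                 /* shift Tmp one step around the ring */
            MPI_Sendrecv_replace(Tmp, n * nb, MPI_DOUBLE, next, 0,
                                 prev, 0, comm, MPI_STATUS_IGNORE);
    }
    free(Tmp);
}
```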

Matmul for 1D layout on a Processor Ring (continued)

Time of inner loop = 2*(α + β*n^2/p) + 2*n*(n/p)^2

Total Time = 2*n*(n/p)^2 + (p-1) * (time of inner loop)
           ≈ 2*n^3/p + 2*p*α + 2*β*n^2

This is optimal for a 1D layout on a ring or bus, even with broadcast:
  - Perfect speedup for the arithmetic
  - A(myproc) must move to each other processor, which costs at least (p-1) * (cost of sending n*(n/p) words)

Parallel Efficiency = 2*n^3 / (p * Total Time)
                    = 1 / (1 + α*p^2/(2*n^3) + β*p/(2*n))
                    = 1 / (1 + O(p/n))

Efficiency grows to 1 as n/p increases (or as α and β shrink).

MatMul with 2D Layout

Consider processors in a 2D grid (physical or logical); each processor can communicate with its 4 nearest neighbors, and broadcast along rows and columns. Assume p is a perfect square, arranged as an s x s grid.

[Figure: C = A * B with each matrix divided among a 3x3 grid of processors p(0,0) through p(2,2)]

Cannon's Algorithm

... C(i,j) = C(i,j) + Σ_k A(i,k)*B(k,j)
... assume s = sqrt(p) is an integer

forall i = 0 to s-1                      ... "skew" A
  left-circular-shift row i of A by i
  ... so that A(i,j) is overwritten by A(i, (j+i) mod s)
forall i = 0 to s-1                      ... "skew" B
  up-circular-shift column i of B by i
  ... so that B(i,j) is overwritten by B((i+j) mod s, j)
for k = 0 to s-1                         ... sequential
  forall i = 0 to s-1 and j = 0 to s-1   ... all processors in parallel
    C(i,j) = C(i,j) + A(i,j)*B(i,j)
    left-circular-shift each row of A by 1
    up-circular-shift each column of B by 1
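
An MPI sketch of Cannon's algorithm: a periodic 2D Cartesian communicator makes the circular shifts map directly onto MPI_Cart_shift plus MPI_Sendrecv_replace. The one-block-per-rank layout, p a perfect square, and the naive local block multiply are assumptions of this sketch.

```c
#include <mpi.h>

/* Local block multiply: Cb += Ab * Bb, all nb-by-nb row-major. */
static void block_mult(int nb, const double *Ab, const double *Bb, double *Cb) {
    for (int i = 0; i < nb; i++)
        for (int k = 0; k < nb; k++)
            for (int j = 0; j < nb; j++)
                Cb[i*nb + j] += Ab[i*nb + k] * Bb[k*nb + j];
}

/* Cannon's algorithm on an s-by-s process grid (p = s*s a perfect
   square); each rank owns one nb-by-nb block of A, B, and C. */
void cannon(int nb, double *A, double *B, double *C, MPI_Comm comm) {
    int p, s = 0;
    MPI_Comm_size(comm, &p);
    while (s * s < p) s++;                      /* s = sqrt(p), assumed exact */

    int dims[2] = {s, s}, periods[2] = {1, 1};  /* torus => circular shifts */
    MPI_Comm grid;
    MPI_Cart_create(comm, 2, dims, periods, 0, &grid);

    int me, coords[2], src, dst;
    MPI_Comm_rank(grid, &me);
    MPI_Cart_coords(grid, me, 2, coords);

    /* Skew A: left-circular-shift row i of A by i (dim 1 = columns). */
    MPI_Cart_shift(grid, 1, -coords[0], &src, &dst);
    MPI_Sendrecv_replace(A, nb*nb, MPI_DOUBLE, dst, 0, src, 0,
                         grid, MPI_STATUS_IGNORE);
    /* Skew B: up-circular-shift column j of B by j (dim 0 = rows). */
    MPI_Cart_shift(grid, 0, -coords[1], &src, &dst);
    MPI_Sendrecv_replace(B, nb*nb, MPI_DOUBLE, dst, 0, src, 0,
                         grid, MPI_STATUS_IGNORE);

    for (int k = 0; k < s; k++) {
        block_mult(nb, A, B, C);
        /* Shift A left by 1 and B up by 1 for the next step. */
        MPI_Cart_shift(grid, 1, -1, &src, &dst);
        MPI_Sendrecv_replace(A, nb*nb, MPI_DOUBLE, dst, 0, src, 0,
                             grid, MPI_STATUS_IGNORE);
        MPI_Cart_shift(grid, 0, -1, &src, &dst);
        MPI_Sendrecv_replace(B, nb*nb, MPI_DOUBLE, dst, 0, src, 0,
                             grid, MPI_STATUS_IGNORE);
    }
    MPI_Comm_free(&grid);
}
```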

Cannon's Matrix Multiplication: example

  C(1,2) = A(1,0)*B(0,2) + A(1,1)*B(1,2) + A(1,2)*B(2,2)

Initial Step to Skew Matrices in Cannon

[Figure: 3x3 block layouts of A and B, showing the initial blocked input and the layout after skewing, before the initial block multiplies]

Skewing Steps in Cannon

[Figure: the A and B block layouts after the first, second, and third circular shifts]

Cost of Cannon's Algorithm

forall i = 0 to s-1                          ... recall s = sqrt(p)
  left-circular-shift row i of A by i        ... cost = s*(α + β*n^2/p)
forall i = 0 to s-1
  up-circular-shift column i of B by i       ... cost = s*(α + β*n^2/p)
for k = 0 to s-1
  forall i = 0 to s-1 and j = 0 to s-1
    C(i,j) = C(i,j) + A(i,j)*B(i,j)          ... cost = 2*(n/s)^3 = 2*n^3/p^(3/2)
    left-circular-shift each row of A by 1   ... cost = α + β*n^2/p
    up-circular-shift each column of B by 1  ... cost = α + β*n^2/p

Total Time = 2*n^3/p + 4*s*α + 4*β*n^2/s

Parallel Efficiency = 2*n^3 / (p * Total Time)
                    = 1 / (1 + α*2*(s/n)^3 + β*2*(s/n))
                    = 1 / (1 + O(sqrt(p)/n))

Efficiency grows to 1 as n/s = n/sqrt(p) = sqrt(data per processor) grows.
Better than the 1D layout, which had Efficiency = 1/(1 + O(p/n)).

Drawbacks to Cannon

Hard to generalize for:
  - p not a perfect square
  - A and B not square
  - dimensions of A, B not perfectly divisible by s = sqrt(p)
  - A and B not "aligned" in the way they are stored on the processors
  - block-cyclic layouts

Also a memory hog (extra copies of local matrices).

SUMMA Algorithm

SUMMA = Scalable Universal Matrix Multiply
  - Slightly less efficient than Cannon, but simpler and easier to generalize
  - Presentation from van de Geijn and Watts; similar ideas appeared many times
  - Used in practice in PBLAS = Parallel BLAS

SUMMA

[Figure: C(I,J) accumulates the product of block column A(I,k) and block row B(k,J)]

  - I, J represent all the rows and columns owned by a processor
  - k is a single row or column, or a block of b rows or columns
  - C(I,J) = C(I,J) + Σ_k A(I,k)*B(k,J)

Assume a p_r by p_c processor grid (p_r = p_c = 4 in the figure); the grid need not be square.

SUMMA

for k = 0 to n-1          ... or n/b-1, where the block size b
                          ... = # cols in A(I,k) = # rows in B(k,J)
  for all I = 1 to p_r ... in parallel
    owner of A(I,k) broadcasts it to its whole processor row
  for all J = 1 to p_c ... in parallel
    owner of B(k,J) broadcasts it to its whole processor column
  Receive A(I,k) into Acol
  Receive B(k,J) into Brow
  C(myproc, myproc) = C(myproc, myproc) + Acol * Brow

[Figure: the same block-column / block-row picture as on the previous slide]
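
An MPI sketch of SUMMA: row and column communicators built with MPI_Comm_split carry the panel broadcasts via MPI_Bcast. The square s-by-s grid, the row-major local blocks, and the requirement that b divide the local dimension nloc are simplifying assumptions of this sketch.

```c
#include <mpi.h>
#include <stdlib.h>

/* SUMMA on an s-by-s process grid; each rank owns an nloc-by-nloc
   block of A, B, C (row-major). Block size b must divide nloc. */
void summa(int nloc, int b, const double *A, const double *B, double *C,
           MPI_Comm comm, int s) {
    int me;
    MPI_Comm_rank(comm, &me);
    int myrow = me / s, mycol = me % s;

    /* Communicators spanning my processor row and my processor column;
       the key argument makes rank-in-rowcomm == mycol, and so on. */
    MPI_Comm rowcomm, colcomm;
    MPI_Comm_split(comm, myrow, mycol, &rowcomm);
    MPI_Comm_split(comm, mycol, myrow, &colcomm);

    double *Acol = malloc((size_t)nloc * b * sizeof *Acol);  /* nloc x b */
    double *Brow = malloc((size_t)b * nloc * sizeof *Brow);  /* b x nloc */

    int n = nloc * s;                     /* global dimension */
    for (int kg = 0; kg < n; kg += b) {
        int kowner = kg / nloc;           /* grid row/column owning panel k */
        int koff   = kg % nloc;           /* panel offset inside that block */

        /* Owner of A(I,k) packs its nloc-by-b panel; broadcast along row. */
        if (mycol == kowner)
            for (int i = 0; i < nloc; i++)
                for (int j = 0; j < b; j++)
                    Acol[i*b + j] = A[i*nloc + koff + j];
        MPI_Bcast(Acol, nloc * b, MPI_DOUBLE, kowner, rowcomm);

        /* Owner of B(k,J) packs its b-by-nloc panel; broadcast along column. */
        if (myrow == kowner)
            for (int i = 0; i < b; i++)
                for (int j = 0; j < nloc; j++)
                    Brow[i*nloc + j] = B[(koff + i)*nloc + j];
        MPI_Bcast(Brow, b * nloc, MPI_DOUBLE, kowner, colcomm);

        /* Rank-b update: C += Acol * Brow. */
        for (int i = 0; i < nloc; i++)
            for (int kk = 0; kk < b; kk++)
                for (int j = 0; j < nloc; j++)
                    C[i*nloc + j] += Acol[i*b + kk] * Brow[kk*nloc + j];
    }
    free(Acol); free(Brow);
    MPI_Comm_free(&rowcomm); MPI_Comm_free(&colcomm);
}
```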

SUMMA performance

for k = 0 to n/b-1
  for all I = 1 to s               ... s = sqrt(p)
    owner of A(I,k) broadcasts it to its whole processor row
      ... time = log s * (α + β*b*n/s), using a tree
  for all J = 1 to s
    owner of B(k,J) broadcasts it to its whole processor column
      ... time = log s * (α + β*b*n/s), using a tree
  Receive A(I,k) into Acol
  Receive B(k,J) into Brow
  C(myproc, myproc) = C(myproc, myproc) + Acol * Brow
      ... time = 2*(n/s)^2*b

Total time = 2*n^3/p + α * log p * n/b + β * log p * n^2/s

(To simplify the analysis only, assume s = sqrt(p).)

SUMMA performance (continued)

Total time = 2*n^3/p + α * log p * n/b + β * log p * n^2/s

Parallel Efficiency = 1 / (1 + α * log p * p/(2*b*n^2) + β * log p * s/(2*n))

  - Same β term as Cannon, except for the log p factor; log p grows slowly, so this is OK
  - The latency (α) term can be larger, depending on b:
      when b = 1, the term is α * log p * n
      as b grows to n/s, the term shrinks to α * log p * s (log p times Cannon's)
  - Temporary storage grows like 2*b*n/s
  - Can change b to trade off latency cost against memory

ScaLAPACK Parallel Library