
1 04/13/2009CS267 Lecture 20 CS 267: Applications of Parallel Computers Lecture 20 -- Structured Grids Horst D. Simon hdsimon@lbl.gov www.cs.berkeley.edu/~demmel/cs267_Spr09

2 04/13/09CS267 Lecture 20 Motifs The Motifs (formerly “Dwarfs”) from “The Berkeley View” (Asanovic et al.) Motifs form key computational patterns; structured grids, the topic of this lecture, are one of them

3 04/13/09CS267 Lecture 20 Outline °Review PDEs °Overview of Methods for Poisson Equation °Jacobi’s method °Red-Black SOR method °Conjugate Gradient °Sparse LU (Lecture 18, Sherry Li) °FFT (Lecture 21, Horst Simon) °Multigrid

4 04/13/09CS267 Lecture 20 Solving PDEs °Hyperbolic problems (waves): Sound wave(position, time) Use explicit time-stepping Solution at each point depends on neighbors at previous time °Elliptic (steady state) problems: Electrostatic Potential(position) Everything depends on everything else This means locality is harder to find than in hyperbolic problems °Parabolic (time-dependent) problems: Temperature(position, time) Involves an elliptic solve at each time-step °Focus on elliptic problems Canonical example is the Poisson equation: ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = f(x,y,z)

5 04/13/09CS267 Lecture 20 Explicit Solution of PDEs °Explicit methods often used for hyperbolic PDEs °Stability limits size of time-step °Computation corresponds to matrix-vector multiply Combine nearest neighbors on grid °Use finite differences with u[j,i] as the solution at time t = i*δ (i = 0,1,2,…) and position x = j*h (j = 0,1,…,N = 1/h) initial conditions on u[j,0] boundary conditions on u[0,i] and u[N,i] (Figure: space-time grid with time index i and space index j; initial data u[0,0]…u[5,0] along i=0) °At each timestep i = 0,1,2,...: For j = 1 to N-1: u[j,i+1] = z*u[j-1,i] + (1-2*z)*u[j,i] + z*u[j+1,i], where z = C*δ/h²
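
The explicit update above fits in a few lines of code. A minimal sketch in Python/NumPy (grid size, number of timesteps, and the coefficient C are illustrative assumptions; δ is chosen so that z ≤ 1/2, the usual stability limit):

```python
import numpy as np

# Explicit time-stepping for the 1D heat equation u_t = C * u_xx.
N = 100                       # grid points j = 0..N, spacing h = 1/N (assumed size)
h = 1.0 / N
C = 1.0                       # assumed diffusion coefficient
delta = 0.4 * h**2 / C        # timestep chosen so z = C*delta/h^2 <= 1/2 (stability)
z = C * delta / h**2

u = np.zeros(N + 1)
u[N // 2] = 1.0               # initial condition u[j,0]: a spike in the middle
# boundary conditions u[0,i] = u[N,i] = 0 are kept by updating only the interior

for i in range(1000):         # timesteps
    # u[j,i+1] = z*u[j-1,i] + (1-2z)*u[j,i] + z*u[j+1,i] for interior j
    u[1:N] = z * u[0:N-1] + (1 - 2*z) * u[1:N] + z * u[2:N+1]
```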

6 04/13/09CS267 Lecture 20 Matrix View of Explicit Method for Heat Equation u[j,i+1] = z*u[j-1,i] + (1-2*z)*u[j,i] + z*u[j+1,i] u[:, i+1] = T * u[:, i], where T is tridiagonal:

T = I - z*L =  [ 1-2z    z                 ]
               [   z   1-2z    z           ]
               [          z  1-2z    z     ]
               [                 z  1-2z   ]

L = [  2  -1          ]
    [ -1   2  -1      ]
    [     -1   2  -1  ]
    [         -1   2  ]

L is called the Laplacian (in 1D). Graph and “3 point stencil”: each grid point couples to its two neighbors with stencil (z, 1-2z, z). For a 2D mesh (5 point stencil) the Laplacian is pentadiagonal (5 nonzeros per row, along diagonals). For a 3D mesh there are 7 nonzeros per row.
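
A small sketch (sizes are made up) checking that the stencil update and the matrix view u[:, i+1] = T * u[:, i] with T = I - z*L agree, using scipy.sparse to build the tridiagonal matrices:

```python
import numpy as np
import scipy.sparse as sp

N = 8                         # assumed example size
z = 0.3
# 1D Laplacian L: 2 on the diagonal, -1 on the off-diagonals
L = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csr")
T = sp.identity(N, format="csr") - z * L        # tridiagonal, "3 point stencil"

u = np.random.rand(N)
u_matrix = T @ u              # matrix view of one timestep

u_stencil = u.copy()          # stencil view (interior points only)
u_stencil[1:-1] = z*u[:-2] + (1 - 2*z)*u[1:-1] + z*u[2:]

print(np.allclose(u_matrix[1:-1], u_stencil[1:-1]))    # True: same operation
```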

7 04/13/09CS267 Lecture 20 Poisson’s equation in 1D: d²u/dx² = f(x)

T = [  2  -1          ]
    [ -1   2  -1      ]
    [     -1   2  -1  ]
    [         -1   2  ]

Graph: a chain of grid points. “Stencil”: (-1, 2, -1).

8 04/13/09CS267 Lecture 20 2D Poisson’s equation °Similar to the 1D case, but the matrix T is now block tridiagonal (shown here for a 3x3 grid of unknowns):

T = [  4 -1  .   -1  .  .    .  .  .  ]
    [ -1  4 -1    . -1  .    .  .  .  ]
    [  . -1  4    .  . -1    .  .  .  ]
    [ -1  .  .    4 -1  .   -1  .  .  ]
    [  . -1  .   -1  4 -1    . -1  .  ]
    [  .  . -1    . -1  4    .  . -1  ]
    [  .  .  .   -1  .  .    4 -1  .  ]
    [  .  .  .    . -1  .   -1  4 -1  ]
    [  .  .  .    .  . -1    . -1  4  ]

Graph and “stencil”: each grid point couples to its 4 nearest neighbors, with 4 at the center and -1 to the N, S, E, and W. °3D is analogous (7 point stencil).

9 04/13/09CS267 Lecture 20 Algorithms for 2D (3D) Poisson Equation (N = n² (n³) vars)

Algorithm        Serial              PRAM                        Memory              #Procs
Dense LU         N³                  N                           N²                  N²
Band LU          N² (N^(7/3))        N                           N^(3/2) (N^(5/3))   N (N^(4/3))
Jacobi           N² (N^(5/3))        N (N^(2/3))                 N                   N
Explicit Inv.    N²                  log N                       N²                  N²
Conj. Gradients  N^(3/2) (N^(4/3))   N^(1/2) (N^(1/3)) * log N   N                   N
Red/Black SOR    N^(3/2) (N^(4/3))   N^(1/2) (N^(1/3))           N                   N
Sparse LU        N^(3/2) (N²)        N^(1/2)                     N*log N (N^(4/3))   N
FFT              N*log N             log N                       N                   N
Multigrid        N                   log² N                      N                   N
Lower bound      N                   log N                       N

PRAM is an idealized parallel model with zero-cost communication.
Reference: James Demmel, Applied Numerical Linear Algebra, SIAM, 1997.

10 04/13/09CS267 Lecture 20 Overview of Algorithms on n x n grid (n x n x n grid) (1) °Sorted in two orders (roughly): from slowest to fastest on sequential machines. from most general (works on any matrix) to most specialized (works on matrices “like” T). °Dense LU: Gaussian elimination; works on any N-by-N matrix. °Band LU: Exploits the fact that T is nonzero only on N^(1/2) (N^(2/3)) diagonals nearest main diagonal. °Jacobi: Essentially does matrix-vector multiply by T in inner loop of iterative algorithm. °Explicit Inverse: Assume we want to solve many systems with T, so we can precompute and store inv(T) “for free”, and just multiply by it (but still expensive!). °Conjugate Gradient: Uses matrix-vector multiplication, like Jacobi, but exploits mathematical properties of T that Jacobi does not. °Red-Black SOR (successive over-relaxation): Variation of Jacobi that exploits yet different mathematical properties of T. Used in multigrid schemes. °Sparse LU: Gaussian elimination exploiting particular zero structure of T. °FFT (fast Fourier transform): Works only on matrices very like T. °Multigrid: Also works on matrices like T, that come from elliptic PDEs. °Lower Bound: Serial (time to print answer); parallel (time to combine N inputs). °Details in class notes and www.cs.berkeley.edu/~demmel/ma221.

11 04/13/09CS267 Lecture 20 Overview of Algorithms on n x n grid (n x n x n grid) (2) °Dimension = N = n² in 2D (= n³ in 3D) °Bandwidth = BW = n in 2D (= n² in 3D) °Condition number = k = n² in 2D or 3D °#its = number of iterations °SpMV = Sparse-matrix-vector-multiply °Dense LU: cost = N³ (serial) or N (parallel, with N² procs) °Band LU: cost = N * BW² (serial) or N (parallel with BW² procs) °Jacobi: cost = cost(SpMV) * #its, #its = k. °Conjugate Gradient: cost = (cost(SpMV) + cost(dot prod)) * #its, #its = sqrt(k) °Red-Black SOR: cost = cost(SpMV) * #its, #its = sqrt(k) °Iterative algorithms need different assumptions for convergence analysis SpMV cost is cost of nearest neighbor relaxation, which is O(N)

12 04/13/09CS267 Lecture 20 Jacobi’s Method °To derive Jacobi’s method, write Poisson as: u(i,j) = (u(i-1,j) + u(i+1,j) + u(i,j-1) + u(i,j+1) + b(i,j))/4 °Let u(i,j,m) be approximation for u(i,j) after m steps u(i,j,m+1) = (u(i-1,j,m) + u(i+1,j,m) + u(i,j-1,m) + u(i,j+1,m) + b(i,j)) / 4 °I.e., u(i,j,m+1) is a weighted average of neighbors °Motivation: u(i,j,m+1) chosen to exactly satisfy equation at (i,j) °Steps to converge proportional to problem size, N = n² See Lecture 24 of www.cs.berkeley.edu/~demmel/cs267_Spr99 °Therefore, serial complexity is O(N²)
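
A minimal Jacobi sweep for the 2D problem, written as a Python/NumPy sketch (grid size, right-hand side, and iteration count are illustrative assumptions):

```python
import numpy as np

n = 64                                   # interior grid is n x n (assumed size)
u = np.zeros((n + 2, n + 2))             # includes the fixed zero boundary layer
b = np.zeros((n + 2, n + 2))
b[n // 2, n // 2] = 1.0                  # point source, as in the convergence example

for m in range(500):                     # Jacobi iterations
    # u(i,j,m+1) = (u(i-1,j,m) + u(i+1,j,m) + u(i,j-1,m) + u(i,j+1,m) + b(i,j)) / 4
    u_new = u.copy()                     # keep 2 copies, for steps m and m+1
    u_new[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                         u[1:-1, :-2] + u[1:-1, 2:] + b[1:-1, 1:-1]) / 4.0
    u = u_new
```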

13 04/13/09CS267 Lecture 20 Convergence of Nearest Neighbor Methods °Jacobi’s method involves nearest neighbor computation on an n x n grid (N = n²) So it takes O(n) = O(sqrt(N)) iterations for information to propagate °E.g., consider a rhs (b) that is 0, except the center is 1 °The exact solution looks like: Even in the best case, any nearest neighbor computation will take n/2 steps to propagate on an n x n grid

14 04/13/09CS267 Lecture 20 Convergence of Nearest Neighbor Methods

15 04/13/09CS267 Lecture 20 Parallelizing Jacobi’s Method °Reduces to sparse-matrix-vector multiply by (nearly) T U(m+1) = (I - T/4) * U(m) + B/4 °Each value of U(m+1) may be updated independently keep 2 copies for iterations m and m+1 °Requires that boundary values be communicated if each processor owns n²/p elements to update amount of data communicated, n/p^(1/2) per neighbor, is relatively small if n >> p

16 04/13/09CS267 Lecture 20 Locality Optimizations in Jacobi °Nearest neighbor update in Jacobi has: Good spatial locality (for regular mesh): traverse rows/columns Poor temporal locality: few flops per mesh point -E.g., on 2D mesh: 4 adds and 1 multiply for 5 loads and 1 store °For both parallel and single processors, may trade off extra computation for reduced memory operations Idea: for each subgrid, do multiple Jacobi iterations Size of subgrid shrinks on each iteration, so start with larger one Used for uniprocessors: -Rescheduling for Locality, Michelle Mills Strout: –www-unix.mcs.anl.gov/~mstrout -Cache-efficient multigrid algorithms, Sid Chatterjee And parallel machines, including “tuning” for grids -Ian Foster et al, SC2001

17 04/13/09CS267 Lecture 20 Redundant Ghost Nodes in Jacobi °Overview of Memory Hierarchy Optimization °Can be used on unstructured meshes °Size of ghost region (and redundant computation) depends on network/memory speed vs. computation (Figure: to compute the green region, copy the yellow ghost layer from the neighbor, then compute the blue region redundantly)
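
A sketch of the ghost-layer exchange for a 1D strip decomposition of the grid, written with mpi4py; the library choice, the strip layout, and the array sizes are assumptions for illustration, not the lecture's code. It exchanges a single ghost row, whereas the slide's point is that a deeper ghost region buys several local iterations at the cost of redundant computation:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, p = comm.Get_rank(), comm.Get_size()

rows, cols = 100, 64                     # local block size per rank (assumed)
u = np.zeros((rows + 2, cols))           # one ghost row above and one below

up   = rank - 1 if rank > 0     else MPI.PROC_NULL
down = rank + 1 if rank < p - 1 else MPI.PROC_NULL

# Send my top interior row up, receive my bottom ghost row from below ...
comm.Sendrecv(sendbuf=u[1, :],  dest=up,   sendtag=0,
              recvbuf=u[-1, :], source=down, recvtag=0)
# ... and symmetrically for the other direction.
comm.Sendrecv(sendbuf=u[-2, :], dest=down, sendtag=1,
              recvbuf=u[0, :],  source=up,   recvtag=1)

# After the exchange, a Jacobi sweep over rows 1..rows uses only local data.
```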

18 04/13/09CS267 Lecture 20 Improvements on Jacobi °Similar to Jacobi: u(i,j,m+1) is computed as a linear combination of neighbors °Numeric coefficients and update order are different °Based on 2 improvements over Jacobi Use “most recent values” of u that are available, since these are probably more accurate Update value of u(m+1) “more aggressively” at each step °First, note that while evaluating sequentially u(i,j,m+1) = (u(i-1,j,m) + u(i+1,j,m) … some of the values for m+1 are already available u(i,j,m+1) = (u(i-1,j,latest) + u(i+1,j,latest) … where latest is either m or m+1

19 04/13/09CS267 Lecture 20 Gauss-Seidel °Updating left-to-right row-wise order, we get the Gauss-Seidel algorithm for i = 1 to n for j = 1 to n u(i,j,m+1) = (u(i-1,j,m+1) + u(i+1,j,m) + u(i,j-1,m+1) + u(i,j+1,m) + b(i,j)) / 4 °Cannot be parallelized, because of dependencies, so instead we use a “red-black” order forall black points u(i,j) u(i,j,m+1) = (u(i-1,j,m) + … forall red points u(i,j) u(i,j,m+1) = (u(i-1,j,m+1) + … ° For general graph, use graph coloring ° Can use repeated Maximal Independent Sets to color ° Graph(T) is bipartite => 2 colorable (red and black) ° Nodes for each color can be updated simultaneously ° Still Sparse-matrix-vector multiply, using submatrices

20 04/13/09CS267 Lecture 20 Successive Overrelaxation (SOR) °Red-black Gauss-Seidel converges twice as fast as Jacobi, but there are twice as many parallel steps, so the same in practice °To motivate next improvement, write basic step in algorithm as: u(i,j,m+1) = u(i,j,m) + correction(i,j,m) °If “correction” is a good direction to move, then one should move even further in that direction by some factor w>1 u(i,j,m+1) = u(i,j,m) + w * correction(i,j,m) °Called successive overrelaxation (SOR) °Parallelizes like Jacobi (still sparse-matrix-vector multiply…) °Can prove w = 2/(1 + sin(π/(n+1))) for best convergence Number of steps to converge = parallel complexity = O(n), instead of O(n²) for Jacobi Serial complexity O(n³) = O(N^(3/2)), instead of O(n⁴) = O(N²) for Jacobi
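
A sketch of red-black SOR in Python/NumPy (setting w = 1 gives the red-black Gauss-Seidel of the previous slide; grid size, right-hand side, and iteration count are illustrative assumptions):

```python
import numpy as np

n = 64                                            # assumed interior grid size
w = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))         # the optimal relaxation factor
u = np.zeros((n + 2, n + 2))                      # zero boundary layer included
b = np.zeros((n + 2, n + 2))
b[n // 2, n // 2] = 1.0

# Checkerboard coloring of the interior: red where (i + j) is even
i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
red = (i + j) % 2 == 0

def half_sweep(mask):
    # Gauss-Seidel value from the current neighbors (all of the other color)
    gs = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
          + b[1:-1, 1:-1]) / 4.0
    interior = u[1:-1, 1:-1]
    # SOR: move further along the correction direction by the factor w
    interior[mask] += w * (gs[mask] - interior[mask])

for it in range(200):
    half_sweep(red)        # red points read the current black values
    half_sweep(~red)       # black points then read the freshly updated red values
```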

21 04/13/09CS267 Lecture 20 Conjugate Gradient (CG) for solving A*x = b °This method can be used when the matrix A is symmetric, i.e., A = A^T positive definite, defined equivalently as: -all eigenvalues are positive -x^T * A * x > 0 for all nonzero vectors x -a Cholesky factorization, A = L*L^T exists °Algorithm maintains 3 vectors x = the approximate solution, improved after each iteration r = the residual, r = b - A*x p = search direction, also called the conjugate gradient °One iteration costs Sparse-matrix-vector multiply by A (major cost) 3 dot products, 3 saxpys (scale*vector + vector) °Converges in O(n) = O(N^(1/2)) steps, like SOR Serial complexity = O(N^(3/2)) Parallel complexity = O(N^(1/2) log N), log N factor from dot-products

22 04/13/09CS267 Lecture 20 Conjugate Gradient Algorithm for Solving Ax=b
°Initial guess x
°r = b - A*x, j = 1
°Repeat
   rho = r^T * r                        … dot product
   If j = 1, p = r, else beta = rho/old_rho, p = r + beta*p, endif   … saxpy
   q = A*p                              … sparse matrix vector multiply
   alpha = rho / (p^T * q)              … dot product
   x = x + alpha * p                    … saxpy
   r = r - alpha * q                    … saxpy
   old_rho = rho; j = j + 1
°Until rho small enough
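
The same loop as a runnable Python/NumPy sketch; the matrix-vector product is passed in as a function so a stencil or sparse matrix could be used, and the tolerance and the tiny test system are assumptions:

```python
import numpy as np

def cg(A_mul, b, tol=1e-8, max_iter=1000):
    """Solve A*x = b for symmetric positive definite A, where A_mul(v) returns A*v."""
    x = np.zeros_like(b)
    r = b - A_mul(x)                   # residual
    p = r.copy()                       # search direction
    rho = r @ r                        # dot product
    for _ in range(max_iter):
        q = A_mul(p)                   # sparse matrix-vector multiply (major cost)
        alpha = rho / (p @ q)          # dot product
        x += alpha * p                 # saxpy
        r -= alpha * q                 # saxpy
        rho_new = r @ r
        if np.sqrt(rho_new) < tol:     # "until rho small enough"
            break
        p = r + (rho_new / rho) * p    # saxpy
        rho = rho_new
    return x

# Tiny SPD example (illustrative values only)
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
x = cg(lambda v: A @ v, np.array([1.0, 2.0]))
```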

23 04/13/09CS267 Lecture 20 Algorithms for 2D (3D) Poisson Equation (N = n² (n³) vars)

Algorithm        Serial              PRAM                        Memory              #Procs
Dense LU         N³                  N                           N²                  N²
Band LU          N² (N^(7/3))        N                           N^(3/2) (N^(5/3))   N (N^(4/3))
Jacobi           N² (N^(5/3))        N (N^(2/3))                 N                   N
Explicit Inv.    N²                  log N                       N²                  N²
Conj. Gradients  N^(3/2) (N^(4/3))   N^(1/2) (N^(1/3)) * log N   N                   N
Red/Black SOR    N^(3/2) (N^(4/3))   N^(1/2) (N^(1/3))           N                   N
Sparse LU        N^(3/2) (N²)        N^(1/2)                     N*log N (N^(4/3))   N
FFT              N*log N             log N                       N                   N
Multigrid        N                   log² N                      N                   N
Lower bound      N                   log N                       N

PRAM is an idealized parallel model with zero-cost communication.
Reference: James Demmel, Applied Numerical Linear Algebra, SIAM, 1997.

24 04/13/09CS267 Lecture 20 2D Poisson’s equation °Similar to the 1D case, but the matrix T is now block tridiagonal (shown here for a 3x3 grid of unknowns):

T = [  4 -1  .   -1  .  .    .  .  .  ]
    [ -1  4 -1    . -1  .    .  .  .  ]
    [  . -1  4    .  . -1    .  .  .  ]
    [ -1  .  .    4 -1  .   -1  .  .  ]
    [  . -1  .   -1  4 -1    . -1  .  ]
    [  .  . -1    . -1  4    .  . -1  ]
    [  .  .  .   -1  .  .    4 -1  .  ]
    [  .  .  .    . -1  .   -1  4 -1  ]
    [  .  .  .    .  . -1    . -1  4  ]

Graph and “stencil”: each grid point couples to its 4 nearest neighbors, with 4 at the center and -1 to the N, S, E, and W. °3D is analogous (7 point stencil).

25 04/13/09CS267 Lecture 20 Multigrid Motivation °Recall that Jacobi, SOR, CG, or any other sparse-matrix-vector-multiply-based algorithm can only move information one grid cell at a time Take at least n steps to move information across n x n grid °Can show that decreasing error by fixed factor c<1 takes Ω(log n) steps Convergence to fixed error < 1 takes Ω(log n) steps °Therefore, converging in O(1) steps requires moving information across grid faster than to one neighboring grid cell per step

26 04/13/09CS267 Lecture 20 Multigrid Methods We studied several iterative methods Jacobi, SOR, Gauss-Seidel, Red-Black variations, Conjugate Gradients (CG) All use sparse matrix-vector multiply (nearest neighbor communication on grid) Key problem with iterative methods is that: detail (short wavelength) is correct convergence controlled by coarse (long wavelength) structure In simple methods one needs of order N² iterations to get good results Ironically, one goes to large N (fine mesh) to get detail If all you wanted was coarse structure, a smaller mesh would be fine Basic idea in multigrid is key in many areas of science Solve a problem at multiple scales We get coarse structure from small N and fine detail from large N Good qualitative idea but how do we implement? Slide source: Geoffrey Fox and (indirectly) Ulrich Ruede

27 04/13/09CS267 Lecture 20 Gauss Seidel is Slow I °Take Laplace’s equation in the Unit Square with initial guess 0 and boundary conditions that are zero except on one side °For a N = 31 x 31 grid it takes around 1000 (≈ N²) iterations to get a reasonable answer (Figures: Boundary Conditions, Exact Solution) Slide source: Geoffrey Fox and (indirectly) Ulrich Ruede

28 04/13/09CS267 Lecture 20 Gauss Seidel is Slow II (Figure panels: solution after 1, 10, 100, and 1000 iterations) Slide source: Geoffrey Fox and (indirectly) Ulrich Ruede

29 04/13/09CS267 Lecture 20 Multigrid Overview °Basic Algorithm: Replace problem on fine grid by an approximation on a coarser grid Solve the coarse grid problem approximately, and use the solution as a starting guess for the fine-grid problem, which is then iteratively updated Solve the coarse grid problem recursively, i.e. by using a still coarser grid approximation, etc. °Success depends on coarse grid solution being a good approximation to the fine grid

30 04/13/09CS267 Lecture 20 Same Big Idea used elsewhere °Replace fine problem by coarse approximation, recursively °Multilevel Graph Partitioning (METIS): Replace graph to be partitioned by a coarser graph, obtained via Maximal Independent Set Given partitioning of coarse grid, refine using Kernighan-Lin °Barnes-Hut (and Fast Multipole Method) for computing gravitational forces on n particles in O(n log n) time: Approximate particles in box by total mass, center of gravity Good enough for distant particles; for close ones, divide box recursively °All examples depend on coarse approximation being accurate enough (at least if we are far enough away)

31 04/13/09CS267 Lecture 20 Multigrid uses Divide-and-Conquer in 2 Ways °First way: Solve problem on a given grid by calling Multigrid on a coarse approximation to get a good guess to refine °Second way: Think of error as a sum of sine curves of different frequencies Each call to Multigrid responsible for suppressing coefficients of sine curves of the lower half of the frequencies in the error (pictures later)

32 04/13/09CS267 Lecture 20 Multigrid Sketch in 1D Consider a 2^m + 1 grid in 1D for simplicity Let P(i) be the problem of solving the discrete Poisson equation on a 2^i + 1 grid in 1D Write linear system as T(i) * x(i) = b(i) P(m), P(m-1), …, P(1) is sequence of problems from finest to coarsest

33 04/13/09CS267 Lecture 20 Multigrid Sketch in 2D Consider a 2^m + 1 by 2^m + 1 grid in 2D Let P(i) be the problem of solving the discrete Poisson equation on a 2^i + 1 by 2^i + 1 grid in 2D Write linear system as T(i) * x(i) = b(i) P(m), P(m-1), …, P(1) is sequence of problems from finest to coarsest

34 04/13/09CS267 Lecture 20 Multigrid Hierarchy (Figure: grids at successive levels, connected by Relax, Restrict, and Interpolate operations) Slide source: Geoffrey Fox

35 04/13/09CS267 Lecture 20 Basic Multigrid Ideas In picture, relax is application of standard iteration scheme to “solve” the short wavelength solution at a given level i.e. use Jacobi, Gauss-Seidel, Conjugate Gradient Interpolation is taking a solution at a coarser grid and interpolating to find a solution at half the grid size Restriction is taking solution at given grid and averaging to find solution at coarser grid (Figure: levels k and k+1 connected by Interpolate and Restrict) Slide source: Geoffrey Fox

36 04/13/09CS267 Lecture 20 Multigrid Operators For problem P(i) at varying coarsening levels (i; grid size grows with i): b(i) is the Right Hand Side (RHS) and x(i) is the current estimated solution All the following operators just average values on neighboring grid points (so information moves fast on coarse grids) The restriction operator R(i) maps P(i) to P(i-1): restricts problem on fine grid P(i) to coarse grid P(i-1) using sampling or averaging; b(i-1) = R(i)(b(i)) The interpolation operator In(i-1) maps approx. solution x(i-1) to x(i): interpolates solution on coarse grid P(i-1) to fine grid P(i); x(i) = In(i-1)(x(i-1)) The solution operator S(i) takes P(i) and improves solution x(i), using “weighted” Jacobi or SOR on a single level of grid; x_improved(i) = S(i)(b(i), x(i)) Overall algorithm, then details of operators; both live on grids of size 2^i - 1


40 04/13/09CS267 Lecture 20 Multigrid V-Cycle Algorithm
Function MGV ( b(i), x(i) )   … Solve T(i)*x(i) = b(i) given b(i) and an initial guess for x(i); return an improved x(i)
   if (i = 1)
      compute exact solution x(1) of P(1)          … only 1 unknown
      return x(1)
   else
      x(i) = S(i) ( b(i), x(i) )                   … improve solution by damping high frequency error
      r(i) = T(i)*x(i) - b(i)                      … compute residual
      d(i) = In(i-1) ( MGV( R(i) ( r(i) ), 0 ) )   … solve T(i)*d(i) = r(i) recursively
      x(i) = x(i) - d(i)                           … correct fine grid solution
      x(i) = S(i) ( b(i), x(i) )                   … improve solution again
      return x(i)
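
A compact 1D version of this V-cycle as a Python/NumPy sketch, using the weighted-Jacobi smoother, full-weighting restriction, and linear interpolation described on the operator slides later in the lecture. Grid sizes (2^i - 1 unknowns per level), the number of smoothing steps, and the explicit 1/h² scaling are assumptions made for a self-contained example:

```python
import numpy as np

def apply_T(x, h):
    """1D Poisson operator, zero Dirichlet boundaries: (T x)_j = (2x_j - x_{j-1} - x_{j+1})/h^2."""
    xp = np.pad(x, 1)
    return (2*xp[1:-1] - xp[:-2] - xp[2:]) / h**2

def smooth(x, b, h, steps=2, w=2.0/3.0):
    """S(i): weighted Jacobi, which damps the high-frequency half of the error."""
    for _ in range(steps):
        xp = np.pad(x, 1)
        jacobi = (xp[:-2] + xp[2:] + h**2 * b) / 2.0
        x = (1 - w) * x + w * jacobi
    return x

def restrict(r):
    """R(i): full weighting onto every other point."""
    return 0.25*r[0:-2:2] + 0.5*r[1:-1:2] + 0.25*r[2::2]

def interpolate(xc, n_fine):
    """In(i-1): linear interpolation from the coarse grid to the fine grid."""
    xf = np.zeros(n_fine)
    xf[1::2] = xc                            # coarse points carry straight over
    xcp = np.pad(xc, 1)
    xf[0::2] = 0.5 * (xcp[:-1] + xcp[1:])    # in-between points: average of neighbors
    return xf

def mgv(b, x, h):
    """One V-cycle for T(i)*x = b on a grid with 2^i - 1 unknowns and spacing h."""
    if len(x) == 1:                          # P(1): only 1 unknown, solve exactly
        return np.array([h**2 * b[0] / 2.0])
    x = smooth(x, b, h)                      # damp high-frequency error
    r = apply_T(x, h) - b                    # compute residual
    d = interpolate(mgv(restrict(r), np.zeros((len(x) - 1) // 2), 2*h), len(x))
    x = x - d                                # correct fine grid solution
    return smooth(x, b, h)                   # improve solution again
```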

41 04/13/09CS267 Lecture 20 This is called a V-Cycle °Just a picture of the call graph °In time a V-cycle looks like the following

42 04/13/09CS267 Lecture 20 Complexity of a V-Cycle °On a serial machine Work at each point in a V-cycle is O(the number of unknowns) Cost of level i is (2^i - 1)² = O(4^i) for a 2D grid If finest grid level is m, total time is: Σ_{i=1}^{m} O(4^i) = O(4^m) for a 2D grid = O(# unknowns) in general °On a parallel machine (PRAM) with one processor per grid point and free communication, each step in the V-cycle takes constant time Total V-cycle time is O(m) = O(log #unknowns)

43 04/13/09CS267 Lecture 20 Full Multigrid (FMG) °Intuition: improve solution by doing multiple V-cycles avoid expensive fine-grid (high frequency) cycles analysis of why this works is beyond the scope of this class
Function FMG ( b(m), x(m) )   … return improved x(m) given initial guess
   compute the exact solution x(1) of P(1)
   for i = 2 to m
      x(i) = MGV ( b(i), In(i-1) ( x(i-1) ) )
°In other words: Solve the problem with 1 unknown Given a solution to the coarser problem, P(i-1), map it to starting guess for P(i) Solve the finer problem using the Multigrid V-cycle
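
Continuing the 1D sketch given after the V-cycle slide, a Full Multigrid driver; obtaining the coarser right-hand sides by repeated restriction of the finest one is one common choice, made here as an assumption:

```python
import numpy as np

def fmg(b_fine, m, h_fine):
    """Full multigrid for the 1D model problem; the finest grid has 2**m - 1 unknowns.
    Uses mgv, restrict, and interpolate from the V-cycle sketch above."""
    bs = [b_fine]
    for _ in range(m - 1):                    # right-hand sides for every level
        bs.append(restrict(bs[-1]))
    bs = bs[::-1]                             # bs[0] is the coarsest (1 unknown)
    hs = [h_fine * 2**(m - 1 - i) for i in range(m)]

    x = np.array([hs[0]**2 * bs[0][0] / 2.0]) # exact solution of P(1)
    for i in range(1, m):                     # "for i = 2 to m" in the slide's numbering
        x = interpolate(x, 2**(i + 1) - 1)    # starting guess In(i-1)(x(i-1))
        x = mgv(bs[i], x, hs[i])              # one V-cycle on the finer grid
    return x

# Example: point source on a 2**6 - 1 point grid (illustrative sizes)
m = 6
n = 2**m - 1
h = 1.0 / (n + 1)
b = np.zeros(n); b[n // 2] = 1.0 / h
x = fmg(b, m, h)
```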

44 04/13/09CS267 Lecture 20 Full Multigrid Cost Analysis °One V for each call to FMG people also use Ws and other compositions °Serial time: Σ_{i=1}^{m} O(4^i) = O(4^m) = O(# unknowns) °PRAM time: Σ_{i=1}^{m} O(i) = O(m²) = O(log² # unknowns)

45 04/13/09CS267 Lecture 20 Complexity of Solving Poisson’s Equation Theorem: error after one FMG call <= .5 * error before, independent of # unknowns ⇒ we gain at least 1 bit of accuracy each time Corollary: We can make the error < any fixed tolerance in a fixed number of steps, independent of size of finest grid This is the most important convergence property of MG, distinguishing it from all other methods, which converge more slowly for large grids Total complexity just proportional to cost of one FMG call

46 04/13/09CS267 Lecture 20 The Solution Operator S(i) - Details °The solution operator, S(i), is a weighted Jacobi °Consider the 1D problem °At level i, pure Jacobi replaces: x(j) := 1/2 (x(j-1) + x(j+1) + b(j) ) ° Weighted Jacobi uses: x(j) := 1/3 (x(j-1) + x(j) + x(j+1) + b(j) ) °In 2D, similar average of nearest neighbors
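
A quick numerical check of why this weighting is used (sizes and the choice of error modes are illustrative assumptions): apply one weighted-Jacobi step with b = 0, so the update acts directly on the error, to a pure high-frequency mode and a pure low-frequency mode.

```python
import numpy as np

n = 63
j = np.arange(1, n + 1)

def weighted_jacobi_step(e):
    # x(j) := 1/3 * (x(j-1) + x(j) + x(j+1) + b(j)) with b = 0, zero boundaries
    ep = np.pad(e, 1)
    return (ep[:-2] + ep[1:-1] + ep[2:]) / 3.0

high = np.sin(np.pi * n * j / (n + 1))     # highest-frequency sine mode
low  = np.sin(np.pi * 1 * j / (n + 1))     # lowest-frequency sine mode

for name, e in [("high", high), ("low", low)]:
    ratio = np.linalg.norm(weighted_jacobi_step(e)) / np.linalg.norm(e)
    print(name, round(ratio, 3))   # high-frequency error shrinks by ~3x per step,
                                   # low-frequency error barely changes
```

The high-frequency component of the error shrinks by roughly a factor of 3 per step while the smooth component is barely touched; removing the smooth part is the job of the coarser grids in the V-cycle.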

47 04/13/09CS267 Lecture 20 Weighted Jacobi chosen to damp high frequency error Initial error “Rough” Lots of high frequency components Norm = 1.65 Error after 1 weighted Jacobi step “Smoother” Less high frequency component Norm = 1.06 Error after 2 weighted Jacobi steps “Smooth” Little high frequency component Norm =.92, won’t decrease much more

48 04/13/09CS267 Lecture 20 Multigrid as Divide and Conquer Algorithm °Each level in a V-Cycle reduces the error in one part of the frequency domain

49 04/13/09CS267 Lecture 20 The Restriction Operator R(i) - Details °The restriction operator, R(i), takes a problem P(i) with RHS b(i) and maps it to a coarser problem P(i-1) with RHS b(i-1) °In 1D, average values of neighbors x_coarse(i) = 1/4 * x_fine(i-1) + 1/2 * x_fine(i) + 1/4 * x_fine(i+1) °In 2D, average with all 8 neighbors (N,S,E,W,NE,NW,SE,SW)

50 04/13/09CS267 Lecture 20 Interpolation Operator In(i-1): details °The interpolation operator In(i-1) takes a function on a coarse grid P(i-1) and produces a function on a fine grid P(i) °In 1D, linearly interpolate nearest coarse neighbors x_fine(i) = x_coarse(i) if the fine grid point i is also a coarse one, else x_fine(i) = 1/2 * x_coarse(left of i) + 1/2 * x_coarse(right of i) °In 2D, interpolation requires averaging with 4 nearest neighbors (NW,SW,NE,SE)

51 04/13/09CS267 Lecture 20 Convergence Picture of Multigrid in 1D

52 04/13/09CS267 Lecture 20 Convergence Picture of Multigrid in 2D

53 04/13/09CS267 Lecture 20 Parallel 2D Multigrid °Multigrid on 2D requires nearest neighbor (up to 8) computation at each level of the grid °Start with n = 2^m + 1 by 2^m + 1 grid (here m = 5) °Use an s by s processor grid (here s = 4)

54 04/13/09CS267 Lecture 20 Performance Model of parallel 2D Multigrid (V-cycle) °Assume 2^m + 1 by 2^m + 1 grid of points, n = 2^m - 1, N = n² °Assume p = 4^k processors, arranged in 2^k by 2^k grid Processors start with 2^(m-k) by 2^(m-k) subgrid of unknowns °Consider V-cycle starting at level m At levels m through k of V-cycle, each processor does some work At levels k-1 through 1, some processors are idle, because a 2^(k-1) by 2^(k-1) grid of unknowns is too small to occupy them all °Cost of one level in V-cycle If level j >= k, then cost = O(4^(j-k)) …. flops, proportional to the number of grid points/processor + O(1) α …. send a constant # messages to neighbors + O(2^(j-k)) β …. number of words sent If level j < k, then cost = O(1) …. flops + O(1) α …. send a constant # messages to neighbors + O(1) β …. number of words sent °Sum over all levels in all V-cycles in FMG to get complexity

55 04/13/09CS267 Lecture 20 Comparison of Methods (in O(.) sense)

        # Flops             # Messages                    # Words sent
MG      N/p + (log N)²      (N/p)^(1/2) + log p * log N   log p * log N
FFT     N log N / p         p^(1/2)                       N/p
SOR     N^(3/2) / p         N^(1/2)                       N/p

°SOR is slower than others on all counts °Flops for MG and FFT depends on accuracy of MG °MG communicates less total data (bandwidth) °Total messages (latency) depends … This coarse analysis can’t say whether MG or FFT is better when α >> β

56 04/13/09CS267 Lecture 20 Practicalities °In practice, we don’t go all the way to P(1) °In sequential code, the coarsest grids are negligibly cheap, but on a parallel machine they are not. Consider 1000 points per processor In 2D, the surface to communicate is 4*sqrt(1000) ~= 128, or 13% In 3D, the surface is 1000 - 8³ ~= 500, or 50% °See Tuminaro and Womble, SIAM J. Sci. Comp., v14, n5, 1993 for analysis of MG on a 1024-processor nCUBE2 on 64x64 grid of unknowns, only 4 per processor -efficiency of 1 V-cycle was .02, and of FMG .008 on 1024x1024 grid -efficiencies were .7 (MG V-cycle) and .42 (FMG) -although it has worse parallel efficiency, FMG is 2.6 times faster than V-cycles alone nCUBE had fast communication, slow processors

57 04/13/09CS267 Lecture 20 Multigrid on an Adaptive Mesh °For problems with very large dynamic range, another level of refinement is needed °Build adaptive mesh and solve multigrid (typically) at each level °Can’t afford to use finest mesh everywhere

58 04/13/09CS267 Lecture 20 Multiblock Applications °Solve system of equations on a union of rectangles subproblem of AMR °E.g.,

59 04/13/09CS267 Lecture 20 Adaptive Mesh Refinement °Data structures in AMR °Usual parallelism is to assign grids on each level to processors °Load balancing is a problem

60 04/13/09CS267 Lecture 20 Support for AMR °Domains in Titanium designed for this problem °Kelp, Boxlib, and AMR++ are libraries for this °Primitives to help with boundary value updates, etc.

61 04/13/09CS267 Lecture 20 Multigrid on an Unstructured Mesh °Another approach to variable activity is to use an unstructured mesh that is more refined in areas of interest °Adaptive rectangular or unstructured? Numerics easier on rectangular Supposedly easier to implement (arrays without indirection) but boundary cases tend to dominate code Up to 39M unknowns on 960 processors, With 50% efficiency (Source: M. Adams)

62 04/13/09CS267 Lecture 20 Multigrid on an Unstructured Mesh °Need to partition graph for parallelism °What does it mean to do Multigrid anyway? °Need to be able to coarsen grid (hard problem) Can’t just pick “every other grid point” anymore Use “maximal independent sets” again How to make coarse graph approximate fine one °Need to define R() and In() How do we convert from coarse to fine mesh and back? °Need to define S() How do we define coarse matrix (no longer formula, like Poisson) °Dealing with coarse meshes efficiently Should we switch to using fewer processors on coarse meshes? Should we switch to another solver on coarse meshes?

63 04/13/09CS267 Lecture 20 Source of Unstructured Finite Element Mesh: Vertebra Source: M. Adams, H. Bayraktar, T. Keaveny, P. Papadopoulos, A. Gupta

64 04/13/09CS267 Lecture 20 Multigrid for nonlinear elastic analysis of bone Mechanical testing for material properties Micro Computed Tomography @ 22 µm resolution 3D image → FE mesh 2.5 mm³, 44 µm elements Up to 537M unknowns 4088 Processors (ASCI White) 70% parallel efficiency Source: M. Adams et al

65 04/13/2009CS267 Lecture 20 Extra Slides

66 04/13/09CS267 Lecture 20 Preconditioning °One can change A and b by preconditioning M1^(-1) A (M2^(-1) M2) x = M1^(-1) b °is same equation as before for any choice of matrices M1 and M2 °All these choices are designed to accelerate convergence of iterative methods °A_new = M1^(-1) A M2^(-1) °x_new = M2 x °b_new = M1^(-1) b °We choose M1 and M2 to make our standard methods perform better °There are specialized preconditioning ideas and perhaps better general approaches such as multigrid and Incomplete LU (ILU) decomposition A_new x_new = b_new has same form as above and we can apply any of the methods that we used on A x = b Slide source: Geoffrey Fox
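
As one concrete instance of the idea, here is a sketch of CG accelerated by the simplest preconditioner, a diagonal (Jacobi) M = diag(A); the preconditioner choice, tolerance, and the tiny test matrix are illustrative assumptions, not the lecture's code:

```python
import numpy as np

def preconditioned_cg(A, b, tol=1e-8, max_iter=1000):
    """CG on A*x = b accelerated by the Jacobi preconditioner M = diag(A)."""
    Minv = 1.0 / np.diag(A)            # applying M^-1 is an elementwise scaling
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r                       # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        q = A @ p
        alpha = rz / (p @ q)
        x += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
x = preconditioned_cg(A, np.array([1.0, 0.0, 2.0]))
```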

67 04/13/09CS267 Lecture 20 Multigrid Philosophically °Suppose we have a finest level M(1) with N by N points (in 2D) °Then the k’th coarsest approximation M(k) to this has N/2^k by N/2^k points °One way to think about Multigrid is that M(k+1) can form a preconditioner to M(k) so that one can replace natural matrix A(k) by A^(-1)(k+1)A(k) A^(-1)(k+1)A(k) is a nicer matrix than A(k) and iterative solvers converge faster as long wavelength terms have been removed °Basic issue is that A(k) and A(k+1) are of different size so we need prolongation and restriction to map solutions at level k and k+1 We apply this idea recursively Slide source: Geoffrey Fox and (indirectly) Ulrich Ruede

68 04/13/09CS267 Lecture 20 Multigrid Algorithm: procedure MG(level, A, u, f) °if level = coarsest then solve coarsest grid equation by another method “exactly” °else smooth A_level u = f (m1 times) Compute residual r = f - A_level u Restrict F = R r (R is Restriction Operator) Call MG(level + 1, A_(level+1), V, F) (m_c times) Interpolate v = P V (interpolate new solution at this level) Correct u_new = u + v Smooth A_level u_new = f (m2 times) and set u = u_new °endif °endprocedure (Side relations from the figure: A_level u_exact = f, u_exact = u + v, so A_level v = r = f - A_level u) Slide source: Geoffrey Fox

69 04/13/09CS267 Lecture 20 Multigrid Cycles °The parameter m_c determines the exact strategy for how one iterates through different meshes °One ends up with a full cycle as shown (Figure: V Cycle, W Cycle, and Full Multigrid Cycle, drawn between the finest and coarsest grids) Slide source: Geoffrey Fox

70 04/13/2009CS267 Lecture 20 Extra Slides

71 04/13/09CS267 Lecture 20 Templates for choosing an iterative algorithm for Ax=b °Use only matrix-vector-multiplication, dot, saxpy, … °www.netlib.org/templates (Decision tree, roughly: A symmetric and definite → try CG; symmetric, indefinite, eigenvalues known → try CG or Chebyshev; symmetric, indefinite, eigenvalues unknown → try Minres or CG; nonsymmetric, A^T available → try QMR; nonsymmetric, A^T unavailable, storage expensive → try CGS or Bi-CGSTAB; otherwise → try GMRES)

72 04/13/09CS267 Lecture 20 Summary of Jacobi, SOR and CG °Jacobi, SOR, and CG all perform sparse-matrix-vector multiply °For Poisson, this means nearest neighbor communication on an n-by-n grid °It takes n = N^(1/2) steps for information to travel across an n-by-n grid °Since solution on one side of grid depends on data on other side of grid faster methods require faster ways to move information FFT Multigrid

