
1 Numerical Software Libraries for the Scalable Solution of PDEs: Introduction to PETSc. Satish Balay, Bill Gropp, Lois C. McInnes, Barry Smith, Mathematics and Computer Science Division, Argonne National Laboratory. Full tutorial as presented at the SIAM Parallel Processing Conference, March 1999, available via http://www.mcs.anl.gov/petsc

2 Philosophy
Writing hand-parallelized application codes from scratch is extremely difficult and time-consuming. Scalable parallelizing compilers for real application codes are very far in the future. We can ease the development of parallel application codes by developing general-purpose, parallel numerical PDE libraries.
Caveats
–Developing parallel, non-trivial PDE solvers that deliver high performance is still difficult and requires months (or even years) of concentrated effort.
–PETSc is a toolkit that can reduce the development time, but it is not a black-box PDE solver nor a silver bullet.

3 PETSc Concepts
As illustrated via a complete nonlinear PDE example: flow in a driven cavity
How to specify the mathematics of the problem
–Data objects: vectors, matrices
How to solve the problem
–Solvers: linear, nonlinear, and timestepping (ODE) solvers
Parallel computing complications
–Parallel data layout: structured and unstructured meshes
Debugging, profiling, extensibility

4 Tutorial Approach
From the perspective of an application programmer:
Beginner (1)
–basic functionality, intended for use by most programmers
Intermediate (2)
–selecting options, performance evaluation and tuning
Advanced (3)
–user-defined customization of algorithms and data structures
Developer (4)
–advanced customizations, intended primarily for use by library developers

5 Incremental Application Improvement
Beginner
–Get the application “up and walking”
Intermediate
–Experiment with options
–Determine opportunities for improvement
Advanced
–Extend algorithms and/or data structures as needed
Developer
–Consider interface and efficiency issues for integration and interoperability of multiple toolkits

6 Component Interactions for Numerical PDEs
[Diagram: interactions among the components of a numerical PDE code: Grids, Steering, Optimization, PDE Discretization, Algebraic Solvers, Derivative Computation, and Visualization; the PETSc emphasis is highlighted.]

7 CFD on an Unstructured Grid
3D incompressible Euler
Tetrahedral grid
Up to 11 million unknowns
Based on a legacy NASA code, FUN3D, developed by W. K. Anderson
Fully implicit steady-state
Primary PETSc tools: nonlinear solvers (SNES) and vector scatters (VecScatter)
Results courtesy of collaborators Dinesh Kaushik and David Keyes, Old Dominion Univ., partially funded by NSF and an ASCI level 2 grant

8 Scalability for Fixed-Size Problem
[Chart: scalability for a fixed-size problem with 11,047,096 unknowns; one data point is missing.]

9 Scalability Study
[Chart: scalability study on a 600 MHz T3E with a 2.8M-vertex mesh.]

10 Multiphase Flow
Oil reservoir simulation: fully implicit, time-dependent
First, and only, fully implicit, parallel compositional simulator
3D EOS model (8 DoF per cell)
Structured Cartesian mesh
Over 4 million cell blocks, 32 million DoF
Primary PETSc tools: linear solvers (SLES)
–restarted GMRES with Block Jacobi preconditioning
–Point-block ILU(0) on each processor
Over 10.6 gigaflops sustained performance on 128 nodes of an IBM SP; 90+ percent parallel efficiency
Results courtesy of collaborators Peng Wang and Jason Abate, Univ. of Texas at Austin, partially funded by DOE ER FE/MICS

11 PC and SP Comparison
179,000 unknowns (22,375 cell blocks)
PC: Fast Ethernet (100 megabits/second) network of 300 MHz Pentium PCs with a 66 MHz bus
SP: 128-node IBM SP with 160 MHz Power2 Super processors and 2 memory cards

12 Speedup Comparison
[Chart: speedup comparison between the PC cluster and the IBM SP.]

13 The PETSc Programming Model
Goals
–Portable, runs everywhere
–Performance
–Scalable parallelism
Approach
–Distributed memory, “shared-nothing”: requires only a compiler (single node or processor); access to data on remote machines is through MPI
–Can still exploit “compiler-discovered” parallelism on each node (e.g., SMP)
–Hide the details of communication within parallel objects
–The user orchestrates communication at a higher abstraction level than message passing

14 [Diagram: PETSc layered architecture. PDE Application Codes sit atop PETSc's object-oriented components (Matrices, Vectors, Indices; Grid Management; Linear Solvers: Preconditioners + Krylov Methods; Nonlinear Solvers, Unconstrained Minimization; ODE Integrators), which in turn build on the computation and communication kernels (MPI, MPI-IO, BLAS, LAPACK). The Profiling Interface and Visualization Interface cut across the layers.]

15 PETSc Numerical Components
Matrices: Compressed Sparse Row (AIJ), Blocked Compressed Sparse Row (BAIJ), Block Diagonal (BDIAG), Dense, Other
Index Sets: Indices, Block Indices, Stride, Other
Vectors
Nonlinear Solvers: Newton-based Methods (Line Search, Trust Region), Other
Preconditioners: Additive Schwarz, Block Jacobi, ILU, ICC, LU (sequential only), Others
Time Steppers: Euler, Backward Euler, Pseudo Time Stepping, Other
Krylov Subspace Methods: GMRES, CG, CGS, Bi-CG-STAB, TFQMR, Richardson, Chebychev, Other

16 Mesh Definitions: For Our Purposes
Structured
–Neighbor relationships are determined purely from logical I, J, K coordinates
–PETSc support via DA
Semi-Structured
–No direct PETSc support; could work with tools such as SAMRAI, Overture, KeLP
Unstructured
–Do not explicitly use logical I, J, K coordinates
–PETSc support via lower-level VecScatter utilities
One is always free to manage the mesh data as if it were unstructured.

17 A Freely Available and Supported Research Code
Available via http://www.mcs.anl.gov/petsc
Usable in C, C++, and Fortran 77/90 (with minor limitations in Fortran 77/90 due to their syntax)
Users manual in PostScript and HTML formats
Hyperlinked manual pages for all routines
Many tutorial-style examples
Support via email: petsc-maint@mcs.anl.gov

18 True Portability
Tightly coupled systems
–Cray T3D/T3E
–SGI/Origin
–IBM SP
–Convex Exemplar
Loosely coupled systems, e.g., networks of workstations
–SunOS, Solaris 2.5
–IBM AIX
–DEC Alpha
–HP
–Linux
–FreeBSD
–Windows NT/95

19 PETSc History
–September 1991: begun by Bill and Barry
–May 1992: first release of PETSc 1.0
–January 1993: joined by Lois
–September 1994: PETSc 2.0 begun
–June 1995: first release of version 2.0
–October 1995: joined by Satish
–Now: over 4,000 downloads of version 2.0
PETSc Funding and Support
–Department of Energy, MICS Program
–National Science Foundation, Multidisciplinary Challenge Program, CISE

20 A Complete Example: Driven Cavity
Velocity-vorticity formulation, with flow driven by lid and/or buoyancy
Finite difference discretization with 4 DoF per mesh point: [u, v, z, T]
–velocity: u, v
–vorticity: z
–temperature: T
Primary PETSc tools: SNES, DA
Example code: petsc/src/snes/examples/tutorials/ex8.c
Application code author: D. E. Keyes

21 Driven Cavity Solution Approach
[Diagram: the PETSc main routine drives the nonlinear solvers (SNES), which call the linear solvers (SLES: KSP + PC) to solve F(u) = 0. User code supplies the application initialization, function evaluation, Jacobian evaluation, and post-processing; the labels A, B, C, D mark the program parts detailed on the next slide.]

22 Driven Cavity Program
Part A: Data objects (Vec and Mat), Solvers (SNES)
Part B: Parallel data layout (DA)
Part C: Nonlinear function evaluation
–ghost point updates
–local function computation
Part D: Jacobian evaluation
–default colored finite differencing approximation
Experimentation

23 Collectivity
MPI communicators (MPI_Comm) specify collectivity (the processors involved in a computation)
All PETSc creation routines for solver and data objects are collective with respect to a communicator, e.g., VecCreate(MPI_Comm comm, int m, int M, Vec *x)
Some operations are collective, while others are not, e.g.,
–collective: VecNorm()
–not collective: VecGetLocalSize()
If a sequence of collective routines is used, they must be called in the same order on each processor
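As an illustration, a minimal sketch of these rules in C (assuming the PETSc 2.0-era signatures shown above; the global length 100 and the use of PETSC_DECIDE for the local size are illustrative):

   Vec    x;
   double norm;
   int    nlocal;

   /* Collective: every process in the communicator must make this call */
   VecCreate(MPI_COMM_WORLD, PETSC_DECIDE, 100, &x);

   /* Collective: computing a norm requires a global reduction */
   VecNorm(x, NORM_2, &norm);

   /* Not collective: a purely local query, no communication */
   VecGetLocalSize(x, &nlocal);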

24 Vectors
Fundamental objects for storing field solutions, right-hand sides, etc.
VecCreateMPI(..., Vec *)
–MPI_Comm: processors that share the vector
–number of elements local to this processor
–total number of elements
Each process locally owns a subvector of contiguously numbered global indices
[Figure: a vector partitioned across five processes, proc 0 through proc 4.]
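For illustration, a minimal creation sketch (the local size 100 is hypothetical; PETSC_DETERMINE asks PETSc to compute the global size from the local sizes):

   Vec x;
   /* each process owns 100 contiguous entries of the global vector */
   VecCreateMPI(MPI_COMM_WORLD, 100, PETSC_DETERMINE, &x);
   /* ... use the vector ... */
   VecDestroy(x);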

25 Sparse Matrices
Fundamental objects for storing linear operators (e.g., Jacobians)
MatCreateMPIAIJ(..., Mat *)
–MPI_Comm: processors that share the matrix
–number of local rows and columns
–number of global rows and columns
–optional storage pre-allocation information
Each process locally owns a submatrix of contiguously numbered global rows
[Figure: a matrix partitioned by rows across five processes, proc 0 through proc 4.]
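A hedged creation sketch (sizes and preallocation estimates are illustrative; passing PETSC_NULL for the per-row arrays makes PETSc use the scalar estimates for every row):

   Mat A;
   /* 1000 local rows and columns; global sizes computed by PETSc;
      expect roughly 5 nonzeros per row in the diagonal block and
      2 per row in the off-diagonal block (illustrative estimates) */
   MatCreateMPIAIJ(MPI_COMM_WORLD, 1000, 1000,
                   PETSC_DETERMINE, PETSC_DETERMINE,
                   5, PETSC_NULL, 2, PETSC_NULL, &A);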

26 Parallel Matrix and Vector Assembly
Processes may generate any entries in vectors and matrices; entries need not be generated by the process on which they ultimately will be stored
PETSc automatically moves data during assembly if necessary
Vector example: VecSetValues(Vec, …)
–number of entries to insert/add
–indices of entries
–values to add
–mode: [INSERT_VALUES, ADD_VALUES]
VecAssemblyBegin(Vec)
VecAssemblyEnd(Vec)
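A small assembly sketch (hypothetical indices and values; x is an existing parallel vector):

   int    idx[2];
   Scalar vals[2];

   idx[0] = 0;   vals[0] = 1.0;   /* these entries may be owned ... */
   idx[1] = 99;  vals[1] = 2.0;   /* ... by other processes */
   VecSetValues(x, 2, idx, vals, INSERT_VALUES);

   /* any off-process entries are communicated to their owners here */
   VecAssemblyBegin(x);
   VecAssemblyEnd(x);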

27 Matrix Assembly
MatSetValues(Mat, …)
–number of rows to insert/add
–indices of rows
–number of columns to insert/add
–indices of columns
–values to add
–mode: [INSERT_VALUES, ADD_VALUES]
MatAssemblyBegin(Mat)
MatAssemblyEnd(Mat)
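A corresponding sketch for matrices (hypothetical row/column indices and values; A is an existing parallel matrix; note that the full MatAssemblyBegin/End calls take an assembly-type flag, MAT_FINAL_ASSEMBLY, which the slide elides):

   int    row = 0, cols[3] = {0, 1, 2};
   Scalar vals[3] = {2.0, -1.0, -1.0};

   /* insert one row with three column entries */
   MatSetValues(A, 1, &row, 3, cols, vals, INSERT_VALUES);

   MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
   MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);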

28 Blocked Sparse Matrices
For multi-component PDEs
MatCreateMPIBAIJ(…, Mat *)
–MPI_Comm: processes that share the matrix
–block size
–number of local rows and columns
–number of global rows and columns
–optional storage pre-allocation information
[Chart: performance of a 3D compressible Euler code with block size 5 on an IBM Power2.]
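A hedged creation sketch following the argument list above (sizes and preallocation are illustrative; the local row count must be a multiple of the block size):

   Mat A;
   /* block size 5 (e.g., 5 DoF per node); 1000 local rows = 200 block rows;
      illustrative block-nonzero estimates for diagonal/off-diagonal parts */
   MatCreateMPIBAIJ(MPI_COMM_WORLD, 5, 1000, 1000,
                    PETSC_DETERMINE, PETSC_DETERMINE,
                    10, PETSC_NULL, 4, PETSC_NULL, &A);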

29 Solvers: Usage Concepts (important concepts)
Solver classes
–Linear (SLES)
–Nonlinear (SNES)
–Timestepping (TS)
Usage concepts
–Context variables
–Solver options
–Callback routines
–Customization

30 Nonlinear Solvers (SNES)
Goal: For problems arising from PDEs, support the general solution of F(u) = 0
Newton-based methods, including
–Line search strategies
–Trust region approaches
–Pseudo-transient continuation
–Matrix-free variants
User provides:
–Code to evaluate F(u)
–Code to evaluate the Jacobian of F(u) (optional); or use a sparse finite difference approximation; or use automatic differentiation (coming soon!)
User can customize all phases of the solution process

31 Context Variables
Context variables are the key to solver organization. They contain the complete state of an algorithm, including
–parameters (e.g., convergence tolerance)
–functions that run the algorithm (e.g., convergence monitoring routine)
–information about the current state (e.g., iteration number)

32 Creating the SNES Context
C/C++ version:
ierr = SNESCreate(MPI_COMM_WORLD, SNES_NONLINEAR_EQUATIONS, &snes);
Fortran version:
call SNESCreate(MPI_COMM_WORLD, SNES_NONLINEAR_EQUATIONS, snes, ierr)
Provides an identical user interface for all nonlinear solvers
–uniprocessor and parallel
–real and complex numbers

33 Basic Nonlinear Solver Code (C/C++)
SNES snes;                  /* nonlinear solver context */
Mat J;                      /* Jacobian matrix */
Vec x, F;                   /* solution, residual vectors */
int n, its;                 /* problem dimension, number of iterations */
ApplicationCtx usercontext; /* user-defined application context */
...
MatCreate(MPI_COMM_WORLD, n, n, &J);
VecCreate(MPI_COMM_WORLD, n, &x);
VecDuplicate(x, &F);
SNESCreate(MPI_COMM_WORLD, SNES_NONLINEAR_EQUATIONS, &snes);
SNESSetFunction(snes, F, EvaluateFunction, usercontext);
SNESSetJacobian(snes, J, EvaluateJacobian, usercontext);
SNESSetFromOptions(snes);
SNESSolve(snes, x, &its);

34 Basic Nonlinear Solver Code (Fortran)
SNES snes
Mat J
Vec x, F
integer n, its
...
call MatCreate(MPI_COMM_WORLD, n, n, J, ierr)
call VecCreate(MPI_COMM_WORLD, n, x, ierr)
call VecDuplicate(x, F, ierr)
call SNESCreate(MPI_COMM_WORLD, SNES_NONLINEAR_EQUATIONS, snes, ierr)
call SNESSetFunction(snes, F, EvaluateFunction, PETSC_NULL, ierr)
call SNESSetJacobian(snes, J, EvaluateJacobian, PETSC_NULL, ierr)
call SNESSetFromOptions(snes, ierr)
call SNESSolve(snes, x, its, ierr)

35 Solvers Based on Callbacks (important concept)
The user provides routines to perform actions that the library requires. For example, SNESSetFunction(SNES, ...) takes:
–uservector: vector to store function values
–userfunction: name of the user’s function
–usercontext: pointer to private data for the user’s function
Now, whenever the library needs to evaluate the user’s nonlinear function, the solver may call the application code directly with its own local state.
usercontext serves as an application context object. Data are handled through such opaque objects; the library never sees irrelevant application data.

36 Sample Application Context: Driven Cavity Problem
typedef struct {
   /* ----------------- basic application data ----------------- */
   double   lid_velocity, prandtl, grashof;  /* problem parameters */
   int      mx, my;                          /* discretization parameters */
   int      mc;                              /* number of DoF per node */
   int      draw_contours;                   /* flag - drawing contours */
   /* ---------------------- parallel data --------------------- */
   MPI_Comm comm;                            /* communicator */
   DA       da;                              /* distributed array */
   Vec      localF, localX;                  /* local ghosted vectors */
} AppCtx;

37 Sample Function Evaluation Code: Driven Cavity Problem
UserComputeFunction(SNES snes, Vec X, Vec F, void *ptr)
{
   AppCtx *user = (AppCtx *) ptr;  /* user-defined application context */
   int istart, iend, jstart, jend; /* local starting and ending grid points */
   Scalar *f;                      /* local vector data */
   ...
   /* Communicate nonlocal ghost point data */
   VecGetArray(F, &f);
   /* Compute local function components; insert into f[] */
   VecRestoreArray(F, &f);
   ...
   return 0;
}

38 Sample Local Computational Loops: Driven Cavity Problem
#define U(i)     4*(i)
#define V(i)     4*(i)+1
#define Omega(i) 4*(i)+2
#define Temp(i)  4*(i)+3
...
for (j = jstart; j < jend; j++) {
   row = (j - gys)*gxm + istart - gxs - 1;
   for (i = istart; i < iend; i++) {
      row++;
      u   = x[U(row)];
      uxx = (two*u - x[U(row-1)] - x[U(row+1)])*hydhx;
      uyy = (two*u - x[U(row-gxm)] - x[U(row+gxm)])*hxdhy;
      f[U(row)] = uxx + uyy - p5*(x[Omega(row+gxm)] - x[Omega(row-gxm)])*hx;
   }
}
...

39 Customization Options
Procedural interface
–Provides a great deal of control on a usage-by-usage basis inside a single code
–Gives full flexibility inside an application
Command-line interface
–Applies the same rule to all queries via a database
–Enables the user to have complete control at runtime, with no extra coding

40 Setting Solver Options within Code
SNESSetTolerances(SNES snes, double atol, double rtol, double stol, int maxits, int maxf)
SNESSetType(SNES snes, SNESType type)
etc.
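For example (illustrative tolerance values, matching the signature above):

   /* absolute tol, relative tol, step tol, max iterations,
      max function evaluations (illustrative values) */
   SNESSetTolerances(snes, 1.e-10, 1.e-8, 1.e-8, 50, 1000);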

41 Uniform access to all linear and nonlinear solvers:
-ksp_type [cg,gmres,bcgs,tfqmr,…]
-pc_type [lu,ilu,jacobi,sor,asm,…]
-snes_type [ls,tr,…]
-snes_line_search
-sles_ls
-snes_rtol
etc...
(levels: beginner, intermediate)
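For example, a hypothetical invocation combining several of these options (the executable name and option values are illustrative):

   mpirun -np 4 ex8 -snes_type ls -ksp_type gmres -pc_type asm -snes_rtol 1.e-6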

42 Runtime Script Example
[Screenshot: sample runtime script.]

43 Customization via Callbacks: Setting a user-defined line search routine
SNESSetLineSearch(SNES snes, int (*ls)(…), void *lsctx)
Available line search routines ls() include:
–SNESCubicLineSearch() - cubic line search
–SNESQuadraticLineSearch() - quadratic line search
–SNESNoLineSearch() - full Newton step
–SNESNoLineSearchNoNorms() - full Newton step, but calculates no norms (faster in parallel; useful when using a fixed number of Newton iterations instead of the usual convergence testing)
–YourOwnFavoriteLineSearchRoutine() - whatever you like
(levels: intermediate, advanced)

44 SNES: Review of Basic Usage
SNESCreate() - Create SNES context
SNESSetFunction() - Set function evaluation routine
SNESSetJacobian() - Set Jacobian evaluation routine
SNESSetFromOptions() - Set runtime solver options for [SNES, SLES, KSP, PC]
SNESSolve() - Run nonlinear solver
SNESView() - View solver options actually used at runtime (alternative: -snes_view)
SNESDestroy() - Destroy solver

45 SNES: Review of Selected Options
[Table: selected SNES runtime options.] And many more options... (levels: beginner, intermediate)

46 SNES: Example Programs
ex1.c, ex1f.F - basic uniprocessor codes
ex4.c, ex4f.F - uniprocessor nonlinear PDE (1 DoF per node)
ex5.c, ex5f.F - parallel nonlinear PDE (1 DoF per node)
ex8.c - parallel nonlinear PDE (multiple DoF per node, driven cavity problem)
And many more examples...
Location: petsc/src/snes/examples/tutorials/
(levels: beginner, intermediate)

47 Data Layout and Ghost Values: Usage Concepts (important concepts)
Managing field data layout and required ghost values is the key to high performance in most PDE-based parallel programs.
Mesh types
–Structured: DA objects
–Unstructured: VecScatter objects
Usage concepts
–Geometric data
–Data structure creation
–Ghost point updates
–Local numerical computation

48 Ghost Values
To evaluate a local function f(x), each process requires its local portion of the vector x as well as its ghost values: bordering portions of x that are owned by neighboring processes.
[Figure: local nodes and ghost nodes along a process boundary.]

49 Communication and Physical Discretization
Structured meshes
–Geometric data: DA, AO
–Data structure creation: DACreate()
–Ghost point updates: DAGlobalToLocal()
–Local numerical computation: loops over I, J, K indices; stencil [implicit]
Unstructured meshes
–Geometric data: VecScatter, AO
–Data structure creation: VecScatterCreate()
–Ghost point updates: VecScatter()
–Local numerical computation: loops over entities (elements, edges, vertices)
(levels: beginner, intermediate)

50 Global and Local Representations
Global: each process stores a unique local set of vertices (and each vertex is owned by exactly one process)
Local: each process stores a unique local set of vertices as well as ghost nodes from neighboring processes
[Figure: global and local numberings of local and ghost nodes.]

51 Distributed Arrays
Data layout and ghost values
[Figure: ghost-point regions for a box-type stencil and a star-type stencil, each distributed across processes.]

52 Logically Regular Meshes
DA - Distributed Array: object containing information about vector layout across processes and communication of ghost values
Form a DA (1, 2, or 3 dimensions): DACreate2d(…, DA *)
–DA_[NON,X,Y,XY]PERIODIC
–number of grid points in x- and y-directions
–processors in x- and y-directions
–degrees of freedom per node
–stencil width
–...
Update ghost points
–DAGlobalToLocalBegin(DA, …)
–DAGlobalToLocalEnd(DA, …)
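A creation sketch following the argument list above (grid sizes and DoF count are illustrative; DA_STENCIL_BOX selects the box-type stencil shown earlier, and PETSC_DECIDE lets PETSc choose the processor layout):

   DA da;
   /* 2D, non-periodic, box stencil; 100 x 100 grid; processor layout
      chosen by PETSc; 4 DoF per node; stencil width 1; default
      per-dimension distributions (PETSC_NULL) */
   DACreate2d(MPI_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX,
              100, 100, PETSC_DECIDE, PETSC_DECIDE, 4, 1,
              PETSC_NULL, PETSC_NULL, &da);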

53 Vectors and DAs
The DA object contains information about the data layout and ghost values, but not the actual field data, which is contained in PETSc vectors
Global vector: parallel
–each process stores a unique local portion
–DACreateGlobalVector(DA da, Vec *gvec);
Local work vector: sequential
–each processor stores its local portion plus ghost values
–DACreateLocalVector(DA da, Vec *lvec);
–uses “natural” local numbering of indices (0, 1, … nlocal-1)

54 Updating the Local Representation
DAGlobalToLocalBegin(DA, …)
–Vec global_vec
–INSERT_VALUES or ADD_VALUES
–Vec local_vec
DAGlobalToLocalEnd(DA, …)
This two-step process enables overlapping computation and communication
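A sketch of the overlap pattern, combining the DA and vector routines from the preceding slides (da is an existing DA):

   Vec gvec, lvec;
   DACreateGlobalVector(da, &gvec);
   DACreateLocalVector(da, &lvec);
   /* ... fill gvec ... */
   DAGlobalToLocalBegin(da, gvec, INSERT_VALUES, lvec);
   /* ... communication-independent local work can proceed here ... */
   DAGlobalToLocalEnd(da, gvec, INSERT_VALUES, lvec);
   /* lvec now holds the local portion of gvec plus its ghost values */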

55 Profiling
Integrated monitoring of
–time
–floating-point performance
–memory usage
–communication
All PETSc events are logged if compiled with -DPETSC_LOG (the default); application code segments can also be profiled
Print summary data with option: -log_summary
See the supplementary handout with summary data
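For example, a run of the driven cavity example that prints the performance summary at exit (the other flags are those used on the next slide):

   mpirun -np 2 ex8 -snes_mf -snes_monitor -log_summary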

56 Driven Cavity: Running the Program (1)
1 processor (thermally-driven flow):
mpirun -np 1 ex8 -snes_mf -snes_monitor -grashof 1000.0 -lidvelocity 0.0
2 processors, view the DA (pausing for mouse input):
mpirun -np 2 ex8 -snes_mf -snes_monitor -da_view_draw -draw_pause -1
View contour plots of converging iterates:
mpirun ex8 -snes_mf -snes_monitor -snes_vecmonitor
The matrix-free Jacobian approximation with no preconditioning (via -snes_mf) does not use explicit Jacobian evaluation.

57 Driven Cavity: Running the Program (2)
Use MatFDColoring for a sparse finite difference Jacobian approximation; view SNES options used at runtime:
mpirun ex8 -snes_view -mat_view_info
Set the trust region Newton method instead of the default line search:
mpirun ex8 -snes_type tr -snes_view -snes_monitor
Set transpose-free QMR as the Krylov method; set the relative KSP convergence tolerance to .01:
mpirun ex8 -ksp_type tfqmr -ksp_rtol .01 -snes_monitor

58 Sample Error Traceback
[Screenshot: error traceback showing a breakdown in ILU factorization due to a zero pivot.]

59 Sample Memory Corruption Error
[Screenshot: memory corruption error output.]

60 Sample Out-of-Memory Error
[Screenshot: out-of-memory error output.]

61 Sample Floating Point Error
[Screenshot: floating point error output.]

62 Summary
Using callbacks to set up the problems for nonlinear solvers
Managing data layout and ghost point communication with DAs (for structured meshes) and VecScatters (for unstructured meshes)
Evaluating parallel functions and Jacobians
Consistent profiling and error handling

63 Extensibility Issues
Most PETSc objects are designed to allow one to “drop in” a new implementation with a new set of data structures (similar to implementing a new class in C++).
Heavily commented example codes include
–Krylov methods: petsc/src/sles/ksp/impls/cg
–preconditioners: petsc/src/sles/pc/impls/jacobi
Feel free to discuss more details with us in person.

64 Caveats Revisited
Developing parallel, non-trivial PDE solvers that deliver high performance is still difficult and requires months (or even years) of concentrated effort.
PETSc is a toolkit that can ease these difficulties and reduce the development time, but it is not a black-box PDE solver nor a silver bullet.
Users are invited to interact directly with us regarding correctness or performance issues by writing to petsc-maint@mcs.anl.gov.

65 Where Is PETSc Headed Next?
As parallel programming models continue to evolve, so will PETSc
New algorithms
Extended tools for parallel mesh/vector manipulation
Numerical software interoperability issues, e.g., in collaboration with
–Argonne colleagues in the ALICE project: http://www.mcs.anl.gov/alice
–DOE colleagues in the Common Component Architecture (CCA) Forum: http://www.acl.lanl.gov/cca-forum

66 References
PETSc Users Manual, by Balay, Gropp, McInnes, and Smith
Using MPI, by Gropp, Lusk, and Skjellum
MPI homepage: http://www.mpi-forum.org
Domain Decomposition, by Smith, Bjorstad, and Gropp

67 PETSc at NERSC
Installed on the CRAY T3E-900: mcurie.nersc.gov
Info available via http://acts.nersc.gov/petsc
Access via the modules package: module load petsc
Sample makefile: $PETSC_DIR/src/sles/examples/tutorials/makefile
Sample example program: $PETSC_DIR/src/sles/examples/tutorials/ex2.c
–this example is discussed in detail in Part I of the PETSc Users Manual, available via http://www.mcs.anl.gov/petsc

