1 A Software Framework for Easy Parallelization of PDE Solvers
Hans Petter Langtangen, Xing Cai, Dept. of Informatics, University of Oslo

2 Parallel CFD 2000 Outline of the Talk

3 Parallel CFD 2000 The Question
Starting point: sequential PDE solvers. How to do the parallelization?
The resulting parallel solvers should have
–good parallel efficiency
–good overall numerical performance
We need
–a good parallelization strategy
–a good and simple implementation of the strategy

4 Parallel CFD 2000 Problem Domain
Partial differential equations
Finite elements/differences
Communication through message passing

5 Parallel CFD 2000 A Known Problem
"The hope among early domain decomposition workers was that one could write a simple controlling program which would call the old PDE software directly to perform the subdomain solves. This turned out to be unrealistic because most PDE packages are too rigid and inflexible."
– Smith, Bjørstad and Gropp
One remedy: use of object-oriented programming techniques

6 Parallel CFD 2000 Domain Decomposition
Solution of the original large problem through iteratively solving many smaller subproblems
Can be used as solution method or preconditioner
Flexibility -- localized treatment of irregular geometries, singularities etc.
Very efficient numerical methods -- even on sequential computers
Suitable for coarse-grained parallelization

7 Parallel CFD 2000 Overlapping DD
Alternating Schwarz method for two subdomains
Example: solving an elliptic boundary value problem in the union of two overlapping subdomains
A sequence of approximations is computed, where each subdomain solve takes its artificial boundary values from the other subdomain
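The slide's formulas are not preserved in this transcript; as a reconstruction of the textbook alternating Schwarz iteration for the Poisson problem on two overlapping subdomains Omega_1 and Omega_2 (the notation here is generic, not the slide's):

\[ -\nabla^2 u_1^{n+1} = f \ \text{in } \Omega_1, \qquad u_1^{n+1} = u_2^{n} \ \text{on } \partial\Omega_1 \cap \Omega_2, \]
\[ -\nabla^2 u_2^{n+1} = f \ \text{in } \Omega_2, \qquad u_2^{n+1} = u_1^{n+1} \ \text{on } \partial\Omega_2 \cap \Omega_1, \]

with the original physical boundary conditions kept on the remaining parts of the subdomain boundaries. Each subdomain solve uses the latest available values from the other subdomain on its artificial boundary.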

8 Parallel CFD 2000 Additive Schwarz Method
Subproblems can be solved in parallel
Subproblems are of the same form as the original large problem, with possibly different boundary conditions on the artificial boundaries
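In the additive variant (again a generic reconstruction, not the slide's own notation), both artificial boundary conditions are taken from the previous iterate,

\[ -\nabla^2 u_i^{n+1} = f \ \text{in } \Omega_i, \qquad u_i^{n+1} = u_j^{n} \ \text{on } \partial\Omega_i \cap \Omega_j \ (j \neq i), \]

so the subdomain solves within one iteration are independent of each other and can run concurrently.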

9 Parallel CFD 2000 Convergence of the Solution
Single-phase groundwater flow

10 Parallel CFD 2000 Coarse Grid Correction
This DD algorithm is a kind of block Jacobi iteration
Problem: often (very) slow convergence
Remedy: coarse grid correction
A kind of two-grid multigrid algorithm
Coarse grid solve on each processor
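The combination of subdomain solves and a coarse grid solve corresponds to the standard two-level additive Schwarz preconditioner; in the usual textbook matrix notation (not taken from the slide), with R_i the restriction to subdomain i, R_0 the restriction to the coarse grid, and A_i = R_i A R_i^T,

\[ M^{-1} = R_0^T A_0^{-1} R_0 + \sum_{i=1}^{M} R_i^T A_i^{-1} R_i . \]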

11 Parallel CFD 2000 Observations
DD is a good parallelization strategy
A program for the original global problem can be reused (modulo B.C.) for each subdomain
Communication of overlapping point values is required
The approach is not PDE-specific
No need for global data
Data distribution is implied
Explicit temporal schemes are a special case where no iteration is needed ("exact DD")

12 Parallel CFD 2000 Goals for the Implementation
Reuse sequential solver as subdomain solver
Add DD management and communication as separate modules
Collect common operations in generic library modules
Flexibility and portability
Simplified parallelization process for the end-user

13 Parallel CFD 2000 Generic Programming Framework

14 Parallel CFD 2000 The Administrator
Parameters: solution method or preconditioner, max iterations, stopping criterion, etc.
DD algorithm: subdomain solve + coarse grid correction
Operations: matrix-vector product, inner product, etc.

15 Parallel CFD 2000 The Subdomain Simulator
Subdomain Simulator -- a generic representation
C++ class hierarchy
Interface of generic member functions
Figure: Subdomain Simulator = sequential solver + communication add-on

16 Parallel CFD 2000 The Communicator
Need functionality for exchanging point values inside the overlapping regions
Build a generic communication module: the Communicator
Encapsulation of communication-related code; the concrete communication model is hidden. MPI in use, but easy to change
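As an illustration only (generic MPI code with invented class and member names, not Diffpack's actual communicator interface), the core operation is an exchange of overlapping point values with each neighbouring subdomain:

#include <mpi.h>
#include <cstddef>
#include <vector>

// Hypothetical sketch: exchange solution values in the overlap zones with
// each neighbouring subdomain. 'send_ids'/'recv_ids' hold the local indices
// of the points to send to / receive from neighbour 'nbr'.
struct OverlapPattern {
  int nbr;                      // rank of the neighbouring subdomain
  std::vector<int> send_ids;    // local indices we own and must send
  std::vector<int> recv_ids;    // local indices updated by the neighbour
};

void exchangeOverlap(std::vector<double>& u,
                     const std::vector<OverlapPattern>& patterns,
                     MPI_Comm comm)
{
  for (const OverlapPattern& p : patterns) {
    std::vector<double> sendbuf(p.send_ids.size());
    std::vector<double> recvbuf(p.recv_ids.size());
    for (std::size_t i = 0; i < p.send_ids.size(); ++i)
      sendbuf[i] = u[p.send_ids[i]];

    // Combined send/receive with the neighbour avoids deadlock.
    MPI_Sendrecv(sendbuf.data(), static_cast<int>(sendbuf.size()), MPI_DOUBLE, p.nbr, 0,
                 recvbuf.data(), static_cast<int>(recvbuf.size()), MPI_DOUBLE, p.nbr, 0,
                 comm, MPI_STATUS_IGNORE);

    for (std::size_t i = 0; i < p.recv_ids.size(); ++i)
      u[p.recv_ids[i]] = recvbuf[i];
  }
}

Hiding the concrete communication model behind such a module means that switching away from MPI only requires replacing the body of this exchange routine, which is the point made on the slide.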

17 Parallel CFD 2000 Realization
Object-oriented programming (C++, Java, Python)
Use inheritance
–Simplifies modularization
–Supports reuse of sequential solver (without touching its source code!)

18 Parallel CFD 2000 Generic Subdomain Simulators
SubdomainSimulator
–abstract interface to all subdomain simulators, as seen by the Administrator
SubdomainFEMSolver
–special case of SubdomainSimulator for finite element-based simulators
These are generic classes, not restricted to specific application areas
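A minimal sketch of what such a generic interface might look like (the member functions below are hypothetical, except createLocalMatrix, which appears on the next slide; these are not the actual Diffpack declarations):

// Hypothetical sketch of the generic interface the Administrator programs against.
class SubdomainSimulator
{
public:
  virtual ~SubdomainSimulator () {}
  virtual void initialize ()        = 0;  // set up local grid, fields, etc.
  virtual void createLocalMatrix () = 0;  // assemble the subdomain system
  virtual void solveLocal ()        = 0;  // perform one subdomain solve
  virtual void updateBoundary ()    = 0;  // apply values received on artificial boundaries
};

// Special case for finite element based simulators; may provide default
// implementations of the above in terms of finite element data structures.
class SubdomainFEMSolver : public SubdomainSimulator
{
  // ...
};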

19 Parallel CFD 2000 Making the Simulator Parallel

class SimulatorP : public SubdomainFEMSolver,
                   public Simulator
{
  // ... just a small amount of code
  virtual void createLocalMatrix ()
    { Simulator::makeSystem (); }
};

Figure: class hierarchy relating SubdomainSimulator, SubdomainFEMSolver, Simulator, SimulatorP and the Administrator

20 Parallel CFD 2000 Performance
Algorithmic efficiency
–efficiency of original sequential simulator(s)
–efficiency of domain decomposition method
Parallel efficiency
–communication overhead (low)
–coarse grid correction overhead (normally low)
–load balancing: subproblem size, work on subdomain solves

21 Parallel CFD 2000 Summary So Far
A generic approach
Works if the DD algorithm works for the problem at hand
Implementation in terms of class hierarchies
The new parallel-specific code, SimulatorP, is very small and simple to write

22 Parallel CFD 2000 Application
Single-phase groundwater flow
DD as the global solution method
Subdomain solvers use CG+FFT
Fixed number of subdomains M = 32 (independent of P)
Straightforward parallelization of an existing simulator
P: number of processors

23 Parallel CFD 2000 Two-phase Porous Media Flow
PEQ: pressure equation
SEQ: saturation equation
DD as preconditioner for global BiCGStab solving the pressure eq.
Multigrid V-cycle in subdomain solves
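The slide's equations are not reproduced in the transcript. A common fractional-flow formulation of incompressible two-phase porous media flow, which the PEQ/SEQ labels presumably refer to, reads

\[ \text{PEQ:}\quad \nabla\cdot\mathbf{v} = q, \qquad \mathbf{v} = -\lambda(s)\,K\,\nabla p, \]
\[ \text{SEQ:}\quad \phi\,\frac{\partial s}{\partial t} + \nabla\cdot\big(f_w(s)\,\mathbf{v}\big) = q_w, \]

with pressure p, water saturation s, total velocity v, absolute permeability K, total mobility lambda, water fractional-flow function f_w and porosity phi.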

24 Parallel CFD 2000 Two-Phase Porous Media Flow Simulation result obtained on 16 processors

25 Parallel CFD 2000 Two-phase Porous Media Flow History of saturation for water and oil

26 Parallel CFD 2000 Nonlinear Water Waves
Fully nonlinear 3D water waves
Primary unknowns: velocity potential and free-surface elevation (see the sketch below)
Parallelization based on an existing sequential Diffpack simulator
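For fully nonlinear potential-flow water waves the primary unknowns are typically the velocity potential and the free-surface elevation; the slide's own symbols are not preserved, so the following is a generic reconstruction of the governing equations:

\[ \nabla^2\varphi = 0 \quad \text{in the fluid domain}, \]
\[ \frac{\partial\eta}{\partial t} + \nabla_h\varphi\cdot\nabla_h\eta = \frac{\partial\varphi}{\partial z} \quad \text{on } z=\eta, \]
\[ \frac{\partial\varphi}{\partial t} + \tfrac{1}{2}\,|\nabla\varphi|^2 + g\eta = 0 \quad \text{on } z=\eta, \]

i.e. a Laplace equation in the interior (consistent with the next slide, where DD preconditions a CG solver for the Laplace equation) plus kinematic and dynamic free-surface conditions.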

27 Parallel CFD 2000 Nonlinear Water Waves
DD as preconditioner for global CG solving the Laplace eq.
Multigrid V-cycle as subdomain solver
Fixed number of subdomains M = 16 (independent of P)
Subgrids from partition of a global 41x41x41 grid

28 Parallel CFD 2000 Nonlinear Water Waves 3D Poisson equation in water wave simulation

29 Parallel CFD 2000 Application
Test case: 2D linear elasticity, 241 x 241 global grid
Vector equation (see the sketch below)
Straightforward parallelization based on an existing Diffpack simulator
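The vector equation itself is not reproduced in the transcript; a 2D linear elasticity test case would typically solve the Navier equation for the displacement field u,

\[ \mu\,\nabla^2\mathbf{u} + (\lambda+\mu)\,\nabla(\nabla\cdot\mathbf{u}) + \mathbf{f} = \mathbf{0}, \]

with Lame constants lambda, mu and body force f.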

30 Parallel CFD 2000 2D Linear Elasticity

31 Parallel CFD 2000 2D Linear Elasticity
DD as preconditioner for a global BiCGStab method
Multigrid V-cycle in subdomain solves
I: number of global BiCGStab iterations needed
P: number of processors (P = #subdomains)

32 Parallel CFD 2000 Diffpack
O-O software environment for scientific computation
Rich collection of PDE solution components - portable, flexible, extensible
www.diffpack.com
H. P. Langtangen, Computational Partial Differential Equations, Springer 1999

33 Parallel CFD 2000 Straightforward Parallelization
Develop a sequential simulator, without paying attention to parallelism
Follow the Diffpack coding standards
Use add-on libraries for parallelization-specific functionality
Add a few new statements for transformation to a parallel simulator

34 Parallel CFD 2000 Linear-algebra-level Approach
Parallelize matrix/vector operations
–inner product of two vectors (see the sketch after this list)
–matrix-vector product
–preconditioning - block contribution from subgrids
Easy to use
–access to all existing Diffpack iterative methods, preconditioners and convergence monitors
–"hidden" parallelization; need only to add a few lines of new code
–arbitrary choice of number of procs at run-time
–less flexibility than DD
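A minimal sketch of the parallel inner product mentioned in the list above (generic MPI code, not the Diffpack implementation; with overlapping subgrids the duplicated points would additionally have to be skipped or weighted so they are not counted twice):

#include <mpi.h>
#include <cstddef>
#include <vector>

// Each process computes the inner product of its local parts of x and y,
// then a global reduction sums the contributions from all processes.
double parallelInnerProd(const std::vector<double>& x,
                         const std::vector<double>& y,
                         MPI_Comm comm)
{
  double local = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i)
    local += x[i] * y[i];

  double global = 0.0;
  MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
  return global;
}

The matrix-vector product can be handled analogously: each process multiplies with its local block, and the communication module then refreshes the values in the overlap regions.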

35 Parallel CFD 2000 New Library Tool
class GridPartAdm
–generate overlapping or non-overlapping subgrids
–prepare communication patterns
–update global values
–matvec, innerProd, norm

36 Parallel CFD 2000 Mesh Partition Example

37 Parallel CFD 2000 A Simple Coding Example

Handle(GridPartAdm) adm;  // access to parallelization functionalities
Handle(LinEqAdm) lineq;   // administrator for linear system & solver
//...
#ifdef PARALLEL_CODE
adm->scan (menu);
adm->prepareSubgrids ();
adm->prepareCommunication ();
lineq->attachCommAdm (*adm);
#endif
//...
lineq->solve ();

set subdomain list = DEFAULT
set global grid = grid1.file
set partition-algorithm = METIS
set number of overlaps = 0

38 Parallel CFD 2000 Single-phase Groundwater Flow
Highly unstructured grid
Discontinuity in the coefficient K

39 Parallel CFD 2000 Measurements
130,561 degrees of freedom
Overlapping subgrids
Global BiCGStab using (block) ILU prec.

40 Parallel CFD 2000 A Fast FEM N-S Solver
Operator splitting in the tradition of pressure correction, velocity correction, Helmholtz decomposition
This version is due to Ren & Utnes

41 Parallel CFD 2000 A Fast FEM N-S Solver
Calculation of an intermediate velocity
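The formulas on this and the next slide are not preserved in the transcript; in a generic pressure-correction splitting (a textbook reconstruction, not necessarily the exact Ren & Utnes variant) the intermediate velocity u* is obtained by advancing the momentum equation with the old pressure,

\[ \frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t} + (\mathbf{u}^{n}\cdot\nabla)\,\mathbf{u}^{n} = -\nabla p^{n} + \nu\,\nabla^2\mathbf{u}^{*} + \mathbf{f}^{n}, \]

so u* does not yet satisfy the divergence-free constraint.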

42 Parallel CFD 2000 A Fast FEM N-S Solver
Solution of a Poisson equation
Correction of the intermediate velocity
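Continuing the same generic reconstruction: a Poisson equation is solved for a pressure increment phi, and the intermediate velocity is then projected onto the divergence-free space,

\[ \nabla^2\phi = \frac{1}{\Delta t}\,\nabla\cdot\mathbf{u}^{*}, \qquad \mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\,\nabla\phi, \qquad p^{n+1} = p^{n} + \phi, \]

which gives div u^{n+1} = 0. It is this Poisson/pressure equation that is solved by the parallel CG method in the CPU measurements later in the talk.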

43 Parallel CFD 2000 Test Case: Vortex-Shedding

44 Parallel CFD 2000 Simulation Snapshots Pressure

45 Parallel CFD 2000 Simulation Snapshots Pressure

46 Parallel CFD 2000 Animated Pressure Field

47 Parallel CFD 2000 Simulation Snapshots Velocity

48 Parallel CFD 2000 Simulation Snapshots Velocity

49 Parallel CFD 2000 Animated Velocity Field

50 Parallel CFD 2000 Some CPU-Measurements The pressure equation is solved by the CG method

51 Parallel CFD 2000 Summary
Goal: provide software and programming rules for easy parallelization of sequential simulators
Two parallelization strategies:
–domain decomposition: very flexible, compact visible code/algorithm
–parallelization at the linear algebra level: "automatic" hidden parallelization
Performance: satisfactory speed-up

52 Parallel CFD 2000 Future Application
DD with different PDEs and local solvers
–Out in deep sea: Eulerian, finite differences, Boussinesq PDEs, F77 code
–Near shore: Lagrangian, finite elements, shallow water PDEs, C++ code

