
1 Algorithms and Software for Terascale Computation of PDEs and PDE-constrained Optimization
David E. Keyes, Center for Computational Science, Old Dominion University; Institute for Computer Applications in Science & Engineering (ICASE), NASA Langley Research Center; Institute for Scientific Computing Research (ISCR), Lawrence Livermore National Laboratory

2 INRIA Sophia-Antipolis Colloquium In memoriam, Joseph-Louis Lagrange
- Born 1736, Turin
- Died 10 April 1813, Paris
- Winner, Prix de l’Académie des Sciences, 1772, 1774, 1780
- Mécanique Analytique, 1788
- Elected F.R.S., 1791
- Théorie des Fonctions Analytiques, 1797
“If I had been rich, I probably would not have devoted myself to mathematics.”

3 INRIA Sophia-Antipolis Colloquium Another Quotation of Lagrange
“As long as algebra and geometry have been separated, their progress has been slow and their uses limited; but since these two sciences have been united, they have lent each other mutual forces, and have marched together towards perfection.”

4 INRIA Sophia-Antipolis Colloquium Plan of presentation
- Imperative of “optimal” algorithms for terascale computing
- Basic domain decomposition and multilevel algorithmic concepts
- Illustrations of solver performance on ASCI platforms (with a little help from my friends…)
- Terascale Optimal PDE Simulations (TOPS) software project of the U.S. DOE
- Conclusions

5–11 INRIA Sophia-Antipolis Colloquium Terascale simulation has been “sold” (slide build)
Scientific simulation application areas: Environment (global climate, contaminant transport); Lasers & Energy (combustion, ICF); Engineering (crash testing, aerodynamics); Biology (drug design, genomics); Applied Physics (radiation transport, supernovae).
In these, and many other areas, simulation is an important complement to experiment: experiments may be controversial, dangerous, prohibited or impossible, difficult to instrument, or expensive.
However, simulation is far from proven! To meet expectations, we need to handle problems of multiple physical scales.

12 Large platforms have been provided
The ASCI program of the U.S. DOE has a roadmap to go to 100 Tflop/s by 2006: www.llnl.gov/asci/platforms
[Roadmap chart: capability vs. calendar year 1997-2006, with plan/develop/use phases per platform; sites: Sandia, Los Alamos, Livermore]
Red: 1+ Tflop / 0.5 TB; Blue: 3+ Tflop / 1.5 TB; White: 10+ Tflop / 4 TB; then 30+ Tflop / 10 TB, 50+ Tflop / 25 TB, and 100+ Tflop / 30 TB.

13 INRIA Sophia-Antipolis Colloquium NSF’s 13.6 TF TeraGrid coming on line
TeraGrid: NCSA, SDSC, Caltech, Argonne; www.teragrid.org (c/o I. Foster)
[Diagram of site resources and external networks: SDSC 4.1 TF, 225 TB; NCSA/PACI 8 TF, 240 TB; Caltech; Argonne; archival storage via HPSS and UniTree]

14 INRIA Sophia-Antipolis Colloquium Bird’s-eye View of the Earth Simulator System
[Diagram of the 65 m x 50 m machine hall: processor node (PN) cabinets, interconnection network (IN) cabinets, disks, cartridge tape library system, power supply system, air conditioning system, and a double floor for IN cables]

15 INRIA Sophia-Antipolis Colloquium Cross-section of Earth Simulator Building
[Diagram: air-conditioning system and return duct, double floor for IN cables and air conditioning, lightning protection system, power supply system, seismic isolation system]

16 INRIA Sophia-Antipolis Colloquium Earth Simulator site
[Aerial view: building for the computer system, building for operation and research, power plant]

17 INRIA Sophia-Antipolis Colloquium Building platforms is the “easy” part
- Algorithms must be
  - highly concurrent and straightforward to load balance
  - latency tolerant
  - cache friendly (temporal and spatial locality of reference)
  - highly scalable (in the sense of convergence)
- Goal for algorithmic scalability: fill up memory of arbitrarily large machines while preserving constant* running times with respect to a proportionally smaller problem on one processor
- Domain-decomposed multilevel methods “natural” for all of these
- Domain decomposition also “natural” for software engineering
* or logarithmically growing

18 INRIA Sophia-Antipolis Colloquium Algorithmic requirements from architecture
- Must run on physically distributed memory units connected by message-passing network, each serving one or more processors with multiple levels of cache
[“Horizontal” aspects: message passing, shared-memory threads. “Vertical” aspects: register blocking, cache blocking, prefetching. Diagram: Cray T3E node and memory hierarchy]

19 INRIA Sophia-Antipolis Colloquium Keyword: “Optimal”
- Convergence rate nearly independent of discretization parameters
  - Multilevel schemes for rapid linear convergence of linear problems
  - Newton-like schemes for quadratic convergence of nonlinear problems
- Convergence rate as independent as possible of physical parameters
  - Continuation schemes
  - Physics-based preconditioning
[Plot: time to solution vs. problem size (increasing with the number of processors); a scalable method stays nearly flat while an unscalable one grows. Example: parallel multigrid on a steel/rubber composite, c/o M. Adams, Berkeley-Sandia]
The solver is a key part, but not the only part, of the simulation that needs to be scalable

20 INRIA Sophia-Antipolis Colloquium Why Optimal Algorithms?
- The more powerful the computer, the greater the importance of optimality
- Example:
  - Suppose Alg1 solves a problem in time CN^2, where N is the input size
  - Suppose Alg2 solves the same problem in time CN
  - Suppose that the machine on which Alg1 and Alg2 have been parallelized to run has 10,000 processors
- In constant time (compared to serial), Alg1 can run a problem 100X larger, whereas Alg2 can run a problem 10,000X larger
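The arithmetic behind the last bullet is easy to check. Here is a minimal Python sketch; the cost models C*N^2 and C*N and the ideal P-fold speedup are the slide's own assumptions, and the helper name is invented for the illustration.

```python
# A back-of-the-envelope check of the claim on this slide (a sketch, not a
# benchmark): Alg1 costs C*N**2 operations, Alg2 costs C*N, and an ideal
# P-fold parallel speedup is assumed (communication costs ignored).

def max_size_at_constant_time(exponent: float, P: int, N: float = 1.0) -> float:
    """Largest problem size N' with C*N'**exponent / P == C*N**exponent,
    i.e. the same wall-clock time as the serial size-N run."""
    return N * P ** (1.0 / exponent)

P = 10_000
print(max_size_at_constant_time(2.0, P))  # Alg1:   100.0x larger problem
print(max_size_at_constant_time(1.0, P))  # Alg2: 10000.0x larger problem
```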

21 INRIA Sophia-Antipolis Colloquium Why Optimal?, cont.
- Alternatively, filling the machine’s memory, Alg1 requires 100X time, whereas Alg2 runs in constant time
- Is 10,000 processors a reasonable expectation?
  - Yes, we have it today (ASCI White)!
- Could computational scientists really use 10,000X?
  - Of course; we are approximating the continuum
  - A grid for weather prediction allows points every 1 km versus every 100 km on the earth’s surface
  - In 2D 10,000X disappears fast; in 3D even faster
- However, these machines are expensive (ASCI White is $100M, plus ongoing operating costs), and optimal algorithms are the only algorithms that we can afford to run on them

22 INRIA Sophia-Antipolis Colloquium Decomposition strategies for Lu = f in Ω
- Operator decomposition
- Function space decomposition
- Domain decomposition
Consider, e.g., the implicitly discretized parabolic case u_t + Lu = f, in which each time step requires a solve with an operator of the form (I + Δt·L).

23 INRIA Sophia-Antipolis Colloquium Operator decomposition
- Consider ADI: for L = L_x + L_y, alternate directions within each time step, e.g. (I + (Δt/2)L_x) u^{n+1/2} = (I − (Δt/2)L_y) u^n, then (I + (Δt/2)L_y) u^{n+1} = (I − (Δt/2)L_x) u^{n+1/2}
- The iteration matrix consists of four sequential (“multiplicative”) substeps per timestep
  - two sparse matrix-vector multiplies
  - two sets of unidirectional bandsolves
- Parallelism within each substep
- But global data exchanges between bandsolve substeps
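As a concrete (hypothetical) instance of the four substeps, the sketch below advances the 2D heat equation u_t = u_xx + u_yy one Peaceman-Rachford ADI step with NumPy/SciPy; the grid size, time step, and initial data are arbitrary demo choices, not anything from the talk.

```python
import numpy as np
from scipy.linalg import solve_banded

# One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on the unit square
# with homogeneous Dirichlet data (illustrative sketch only).
n, dt = 64, 1e-3
h = 1.0 / (n + 1)
r = 0.5 * dt / h**2

# 1D second-difference operator D (the 1/h**2 factor is absorbed into r).
main, off = -2.0 * np.ones(n), np.ones(n - 1)
D = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Banded storage of A = I - r*D for the tridiagonal bandsolves.
ab = np.zeros((3, n))
ab[0, 1:] = -r * off           # superdiagonal
ab[1, :] = 1.0 - r * main      # main diagonal
ab[2, :-1] = -r * off          # subdiagonal

x = np.arange(1, n + 1) * h
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # initial condition

# Substep pair 1: explicit in y (sparse matvec), implicit in x (bandsolves).
rhs1 = U + r * (U @ D)                     # matrix-vector products with D_y
Ustar = solve_banded((1, 1), ab, rhs1)     # n independent tridiagonal solves along x
# Substep pair 2: explicit in x, implicit in y.
rhs2 = Ustar + r * (D @ Ustar)             # matrix-vector products with D_x
U = solve_banded((1, 1), ab, rhs2.T).T     # n independent tridiagonal solves along y

print("max after one step:", U.max())
```

Each bandsolve call is a set of independent tridiagonal solves, so there is plenty of parallelism within a substep; switching from x-direction solves to y-direction solves requires the transpose, which on a distributed machine is the global data exchange the slide warns about.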

24 INRIA Sophia-Antipolis Colloquium Function space decomposition
- Consider a spectral Galerkin method: expand u(x,t) = Σ_j a_j(t) Φ_j(x) and test against the same basis
- This yields a system of ordinary differential equations for the coefficients, M (da/dt) + K a = f, with M_ij = (Φ_i, Φ_j) and K_ij = (Φ_i, L Φ_j)
- Perhaps M and K are diagonal matrices (e.g., for an orthogonal eigenbasis of L)
- Perfect parallelism across the spectral index
- But global data exchanges to transform back to physical variables at each step
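A minimal sketch of the same idea with a Fourier basis (periodic heat equation, NumPy FFT); the grid size and final time are arbitrary. Each spectral coefficient evolves independently, and the inverse transform at the end is the global exchange back to physical variables.

```python
import numpy as np

# Function-space (spectral) decomposition for u_t = u_xx with periodic
# boundary conditions: in a Fourier basis the Galerkin system decouples,
# so each coefficient evolves on its own ("parallelism across the
# spectral index").
n, T = 128, 0.1
x = 2 * np.pi * np.arange(n) / n
u0 = np.exp(np.cos(x))             # any smooth periodic initial condition

k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers
a0 = np.fft.fft(u0)                # transform to spectral coefficients
aT = a0 * np.exp(-k**2 * T)        # each mode: da_k/dt = -k^2 a_k, solved exactly
uT = np.fft.ifft(aT).real          # global data exchange: back to physical space

print(uT[:4])
```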

25 INRIA Sophia-Antipolis Colloquium Domain decomposition
- Consider restriction and extension operators R_i and R_i^T for subdomains Ω_i, and R_0, R_0^T for a possible coarse grid
- Replace the discretized system Au = f with B^{-1}Au = B^{-1}f, where B^{-1} = R_0^T A_0^{-1} R_0 + Σ_i R_i^T A_i^{-1} R_i and A_i = R_i A R_i^T
- Solve by a Krylov method, e.g., CG
- Matrix-vector multiplies with B^{-1}A involve
  - parallelism on each subdomain
  - nearest-neighbor exchanges, global reductions
  - a possible small global system (not needed for the parabolic case)

26 INRIA Sophia-Antipolis Colloquium Comparison
- Operator decomposition (ADI)
  - natural row-based assignment requires all-to-all, bulk data exchanges in each step (for transpose)
- Function space decomposition (Fourier)
  - natural mode-based assignment requires all-to-all, bulk data exchanges in each step (for transform)
- Domain decomposition (Schwarz)
  - natural domain-based assignment requires local (nearest neighbor) data exchanges, global reductions, and an optional small global problem

27 INRIA Sophia-Antipolis Colloquium Theoretical scaling of domain decomposition (for three common network topologies)
- With logarithmic-time (hypercube- or tree-based) global reductions and scalable nearest-neighbor interconnects:
  - optimal number of processors scales linearly with problem size (“scalable”; assumes one subdomain per processor)
- With power-law-time (3D torus-based) global reductions and scalable nearest-neighbor interconnects:
  - optimal number of processors scales as the three-fourths power of problem size (“almost scalable”)
- With linear-time (common bus) network:
  - optimal number of processors scales as the one-fourth power of problem size (*not* scalable)
  - bad news for conventional Beowulf clusters, but see the 2000 & 2001 Bell Prize “price-performance awards” using multiple commodity NICs per Beowulf node!

28 INRIA Sophia-Antipolis Colloquium Three Basic Concepts
- Iterative correction
- Schwarz preconditioning
- Schur preconditioning
Some “Advanced” Concepts (the meat of the talk)
- Polynomial combinations of Schwarz projections
- Schwarz-Schur combinations
  - Schwarz on Schur-reduced system
  - Schwarz inside Schur-reduced system
- Nonlinear Schwarz (new!)

29 INRIA Sophia-Antipolis Colloquium Iterative correction
- The most basic idea in iterative methods: evaluate the residual accurately, but solve only approximately, u ← u + B^{-1}(f − Au), where B^{-1} is an approximate inverse to A
- A sequence of complementary solves can be used; e.g., applying B_1^{-1} first and then B_2^{-1}, one has u ← u + (B_1^{-1} + B_2^{-1} − B_2^{-1} A B_1^{-1})(f − Au)
- Optimal polynomials of B^{-1}A lead to various preconditioned Krylov methods
- Scale recurrence, e.g., with a B derived from a coarser grid, leads to multilevel methods
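A minimal sketch of iterative correction, with B^{-1} taken to be the Jacobi approximate inverse and a 1D Poisson matrix as a stand-in problem; both are illustrative choices, not anything prescribed by the slide.

```python
import numpy as np

# Basic iterative correction u <- u + B^{-1}(f - A u), with B^{-1} the
# Jacobi approximate inverse diag(A)^{-1}. Problem size and sweep count
# are arbitrary demo choices.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
Binv = np.diag(1.0 / np.diag(A))       # crude approximate inverse of A

u = np.zeros(n)
print("initial residual:", np.linalg.norm(f - A @ u))
for _ in range(1000):
    r = f - A @ u                      # residual evaluated accurately
    u += Binv @ r                      # solve only approximately
print("residual after 1000 sweeps:", np.linalg.norm(f - A @ u))
# The slow decrease of the low-frequency error components is exactly what
# the multilevel (coarse-grid) corrections on the next slide are for.
```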

30 INRIA Sophia-Antipolis Colloquium Multilevel Preconditioning: a Multigrid V-cycle
[Diagram of the V-cycle]
- Smoother on the Finest Grid
- Restriction: transfer from fine to coarse grid; the coarser grid has fewer cells (less work & storage)
- Recursively apply this idea until we have an easy problem to solve
- Prolongation: transfer from coarse to fine grid
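The following is a small self-contained sketch of a geometric multigrid V-cycle for 1D Poisson (weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation); the problem size and smoothing counts are arbitrary demo choices, not taken from the talk.

```python
import numpy as np

# Minimal multigrid V-cycle for -u'' = f on (0,1) with zero Dirichlet data.
def poisson(n):
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def smooth(A, u, f, sweeps=2, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d        # weighted Jacobi
    return u

def restrict(r):                               # fine residual -> coarse grid
    return 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])

def prolong(e, n_fine):                        # coarse correction -> fine grid
    u = np.zeros(n_fine)
    u[1::2] = e                                # coarse points sit at odd fine indices
    u[0:-2:2] += 0.5 * e
    u[2::2] += 0.5 * e
    return u

def v_cycle(A, u, f):
    n = len(f)
    if n <= 3:                                 # coarsest grid: an "easy problem"
        return np.linalg.solve(A, f)
    u = smooth(A, u, f)                        # pre-smooth
    rc = restrict(f - A @ u)
    ec = v_cycle(poisson(len(rc)), np.zeros_like(rc), rc)   # recurse
    u = u + prolong(ec, n)
    return smooth(A, u, f)                     # post-smooth

n = 2**7 - 1
A, f = poisson(n), np.ones(n)
u = np.zeros(n)
for k in range(8):
    u = v_cycle(A, u, f)
    print(k, np.linalg.norm(f - A @ u))        # residual drops sharply each cycle
```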

31 INRIA Sophia-Antipolis Colloquium Schwarz Preconditioning
- Given Ax = b, partition x into subvectors x_i, corresponding to subdomains Ω_i of the domain Ω of the PDE, nonempty, possibly overlapping, whose union is all of the elements of x
- Let the Boolean rectangular matrix R_i extract the i-th subset of x: x_i = R_i x
- Let A_i = R_i A R_i^T and B^{-1} = Σ_i R_i^T A_i^{-1} R_i
- The Boolean matrices R_i and R_i^T are gather/scatter operators, mapping between a global vector and its subdomain support
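A minimal sketch of the one-level additive Schwarz preconditioner B^{-1} = Σ_i R_i^T A_i^{-1} R_i used inside SciPy's CG, with the R_i built explicitly as Boolean gather matrices; the 1D test matrix, subdomain count, and overlap are demo choices.

```python
import numpy as np
from scipy.sparse import diags, csr_matrix
from scipy.sparse.linalg import cg, LinearOperator, splu

# One-level additive Schwarz preconditioning of CG on a 1D Poisson matrix.
n, nsub, overlap = 400, 8, 4
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Overlapping partition of the index set {0, ..., n-1}.
size = n // nsub
index_sets = [np.arange(max(0, i * size - overlap),
                        min(n, (i + 1) * size + overlap)) for i in range(nsub)]

# R_i as Boolean rectangular gather matrices; A_i = R_i A R_i^T factored once.
R = [csr_matrix((np.ones(len(ix)), (np.arange(len(ix)), ix)), shape=(len(ix), n))
     for ix in index_sets]
local_lu = [splu((Ri @ A @ Ri.T).tocsc()) for Ri in R]

def apply_Binv(r):
    """Additive Schwarz: gather, solve locally, scatter, and sum."""
    z = np.zeros_like(r)
    for Ri, lu in zip(R, local_lu):       # embarrassingly parallel over subdomains
        z += Ri.T @ lu.solve(Ri @ r)
    return z

M = LinearOperator((n, n), matvec=apply_Binv)
x, info = cg(A, b, M=M)
print("converged" if info == 0 else "not converged", np.linalg.norm(b - A @ x))
```

Because there is no coarse (R_0) component, the iteration count of this one-level method grows with the number of subdomains, consistent with the one-level row of the table on the next slide.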

32 INRIA Sophia-Antipolis Colloquium Iteration count estimates from the Schwarz theory
- Krylov-Schwarz iterative methods typically converge in a number of iterations that scales as the square root of the condition number of the Schwarz-preconditioned system
- In terms of N and P, where for d-dimensional isotropic problems N = h^{-d} and P = H^{-d}, for mesh parameter h and subdomain diameter H, iteration counts may be estimated as follows:

Preconditioning Type        | in 2D         | in 3D
Point Jacobi                | O(N^{1/2})    | O(N^{1/3})
Domain Jacobi (overlap = 0) | O((NP)^{1/4}) | O((NP)^{1/6})
1-level Additive Schwarz    | O(P^{1/2})    | O(P^{1/3})
2-level Additive Schwarz    | O(1)          | O(1)

33 INRIA Sophia-Antipolis Colloquium Schur Preconditioning
- Given a partition of the unknowns into interior and interface blocks, u = [u_I; u_Γ], so that A = [A_II, A_IΓ; A_ΓI, A_ΓΓ]
- Condense: S = A_ΓΓ − A_ΓI A_II^{-1} A_IΓ, g = f_Γ − A_ΓI A_II^{-1} f_I
- Let M be a good preconditioner for S
- Then a preconditioner for A is obtained by replacing S with M in the block factorization of A (e.g., the block-triangular operator [A_II, A_IΓ; 0, M])
- Moreover, solves with A_II may be done approximately if all degrees of freedom are retained
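A small NumPy sketch of the condensation step on a random SPD test matrix (a stand-in for a substructured PDE operator); in practice S is never formed explicitly, and the interface solve is preconditioned by M as described above.

```python
import numpy as np

# Schur-complement reduction for a 2x2-blocked SPD system
#   [A_II  A_IG] [u_I]   [f_I]
#   [A_GI  A_GG] [u_G] = [f_G]     (I = interior, G = interface)
rng = np.random.default_rng(0)
nI, nG = 12, 4
M0 = rng.standard_normal((nI + nG, nI + nG))
A = M0 @ M0.T + (nI + nG) * np.eye(nI + nG)      # SPD test matrix
AII, AIG = A[:nI, :nI], A[:nI, nI:]
AGI, AGG = A[nI:, :nI], A[nI:, nI:]
f = rng.standard_normal(nI + nG)
fI, fG = f[:nI], f[nI:]

# Condense: S = A_GG - A_GI A_II^{-1} A_IG, g = f_G - A_GI A_II^{-1} f_I.
AII_inv_AIG = np.linalg.solve(AII, AIG)
S = AGG - AGI @ AII_inv_AIG
g = fG - AGI @ np.linalg.solve(AII, fI)

uG = np.linalg.solve(S, g)                       # interface solve (precondition S here)
uI = np.linalg.solve(AII, fI - AIG @ uG)         # back-substitute for the interiors
u = np.concatenate([uI, uG])
print("residual:", np.linalg.norm(A @ u - f))    # ~machine precision
```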

34 INRIA Sophia-Antipolis Colloquium Schwarz polynomials
- Polynomials of Schwarz projections that are combinations of additive and multiplicative may be appropriate for certain implementations
- We may solve the fine subdomains concurrently and follow with a coarse grid (redundantly/cooperatively)
- This leads to algorithm “Hybrid II” in Smith, Bjørstad & Gropp (1996)
- Convenient for “SPMD” (single program/multiple data) implementations

35 INRIA Sophia-Antipolis Colloquium Schwarz-on-Schur
- Preconditioning the Schur complement is complex in and of itself; Schwarz can be used on the reduced problem
- “Neumann-Neumann” algorithm
- “Balancing Neumann-Neumann” algorithm
- Multigrid on the Schur complement

36 INRIA Sophia-Antipolis Colloquium Schwarz-inside-Schur
- Consider Newton’s method for solving the nonlinear rootfinding problem derived from the necessary conditions for constrained optimization
- Constraint: c(x, u) = 0
- Objective: φ(x, u)
- Lagrangian: L(x, u, λ) = φ(x, u) + λ^T c(x, u)
- Form the gradient of the Lagrangian with respect to each of x, u, and λ:
  ∇_x L = ∇_x φ + (∂c/∂x)^T λ = 0,  ∇_u L = ∇_u φ + (∂c/∂u)^T λ = 0,  ∇_λ L = c = 0

37 INRIA Sophia-Antipolis Colloquium Schwarz-inside-Schur
- Equality constrained optimization leads to the KKT system for states x, designs u, and multipliers λ
- Newton’s method on the KKT conditions then gives a block 3x3 linear system at each iteration (written out below)
- Newton Reduced SQP solves the Schur complement system H δu = g, where H is the reduced Hessian
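The equations on this slide did not survive the transcript; the LaTeX below writes out the standard Newton/KKT system and reduced Hessian that the text describes, with assumed notation (W for the Lagrangian Hessian blocks, J_x and J_u for the constraint Jacobian blocks).

```latex
% Newton (KKT) system for L(x,u,\lambda) = \phi(x,u) + \lambda^T c(x,u):
\begin{equation*}
\begin{bmatrix}
W_{xx} & W_{xu} & J_x^T \\
W_{ux} & W_{uu} & J_u^T \\
J_x    & J_u    & 0
\end{bmatrix}
\begin{bmatrix} \delta x \\ \delta u \\ \delta \lambda \end{bmatrix}
= -
\begin{bmatrix} \nabla_x L \\ \nabla_u L \\ c \end{bmatrix}
\end{equation*}
% Block elimination of \delta x and \delta\lambda (with J_x assumed invertible)
% leaves the Schur-complement system H\,\delta u = g with reduced Hessian
\begin{equation*}
H = W_{uu} - W_{ux} J_x^{-1} J_u - J_u^T J_x^{-T} W_{xu}
    + J_u^T J_x^{-T} W_{xx} J_x^{-1} J_u .
\end{equation*}
```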

38 INRIA Sophia-Antipolis Colloquium Schwarz-inside-Schur, cont.
- Problems
  - J_x, the Jacobian of a PDE, is huge!
  - the W blocks involve Hessians of the objective and constraints: second derivatives, and huge
  - H is unreasonable to form, store, or invert
- Solutions
  - Use Schur preconditioning on the full system
  - Form the forward action of the Hessians by automatic differentiation (vector-to-vector map)
  - Form the approximate inverse action of the state Jacobian and its transpose by Schwarz

39 INRIA Sophia-Antipolis Colloquium Nonlinear Schwarz preconditioning
- Nonlinear Schwarz has Newton both inside and outside and is fundamentally Jacobian-free
- It replaces F(u) = 0 with a new nonlinear system possessing the same root, ℱ(u) = 0
- Define a correction δ_i(u) to the i-th partition (e.g., subdomain) of the solution vector by solving the following local nonlinear system: R_i F(u + R_i^T δ_i(u)) = 0, where R_i^T δ_i(u) is nonzero only in the components of the i-th partition
- Then sum the corrections: ℱ(u) = Σ_i R_i^T δ_i(u)

40 INRIA Sophia-Antipolis Colloquium Nonlinear Schwarz, cont.
- It is simple to prove that if the Jacobian of F(u) is nonsingular in a neighborhood of the desired root, then F(u) = 0 and ℱ(u) = 0 have the same unique root
- To lead to a Jacobian-free Newton-Krylov algorithm we need to be able to evaluate, for any u and v:
  - The residual ℱ(u) = Σ_i R_i^T δ_i(u)
  - The Jacobian-vector product ℱ′(u) v
- Remarkably (Cai-Keyes, 2000), it can be shown that ℱ′(u) ≈ (Σ_i R_i^T J_i^{-1} R_i) J, where J = F′(u) and J_i = R_i J R_i^T
- All required actions are available in terms of F!
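A minimal ASPIN-style sketch of the preconditioned residual ℱ(u) = Σ_i R_i^T δ_i(u): the local nonlinear solves use scipy.optimize.fsolve, and the test nonlinearity, partition, and overlap are illustrative choices, not the Cai-Keyes test problem.

```python
import numpy as np
from scipy.optimize import fsolve

# Nonlinear-Schwarz (ASPIN-style) preconditioned residual: for each subdomain
# solve the local nonlinear system R_i F(u + R_i^T d_i) = 0, then sum R_i^T d_i.
n, nsub, overlap = 60, 4, 3
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

def F(u):                              # simple nonlinear residual: Poisson + cubic term
    return A @ u + 0.1 * u**3 - b

size = n // nsub
index_sets = [np.arange(max(0, i * size - overlap), min(n, (i + 1) * size + overlap))
              for i in range(nsub)]

def preconditioned_residual(u):
    Fu_pre = np.zeros_like(u)
    for ix in index_sets:              # local solves, parallel across subdomains
        def local(d_loc, ix=ix):
            d = np.zeros_like(u)
            d[ix] = d_loc              # R_i^T d_i: nonzero only on partition i
            return F(u + d)[ix]        # R_i F(u + R_i^T d_i)
        d_loc = fsolve(local, np.zeros(len(ix)))
        Fu_pre[ix] += d_loc            # accumulate R_i^T d_i(u)
    return Fu_pre

u = np.zeros(n)
print(np.linalg.norm(F(u)), np.linalg.norm(preconditioned_residual(u)))
# An outer Jacobian-free Newton iteration on preconditioned_residual, with the
# Jacobian approximation quoted on this slide, is the ASPIN algorithm.
```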

41 INRIA Sophia-Antipolis Colloquium Experimental example of nonlinear Schwarz
[Convergence plots comparing Newton’s method with Additive Schwarz Preconditioned Inexact Newton (ASPIN): Newton has difficulty at the critical Re and stagnates beyond the critical Re, while ASPIN converges for all Re]

42 INRIA Sophia-Antipolis Colloquium “Unreasonable effectiveness” of Schwarz
- When does the sum of partial inverses equal the inverse of the sums? When the decomposition is right!
- Let the rows r_i be a complete set of orthonormal eigenvectors for A: r_i A = a_i r_i, or A = Σ_i r_i^T a_i r_i
- Then A^{-1} = Σ_i r_i^T a_i^{-1} r_i = Σ_i r_i^T (r_i A r_i^T)^{-1} r_i, which is exactly the Schwarz formula!
- Good decompositions are a compromise between conditioning and parallel complexity, in practice

43 INRIA Sophia-Antipolis Colloquium Some anecdotal successes
- Newton-Krylov-Schwarz: computational aerodynamics (Anderson et al., NASA, ODU, ANL)
- FETI (Schwarz on Schur): computational structural dynamics (Farhat & Pierson, CU-Boulder)
- LNKS (Schwarz inside Schur): PDE-constrained optimization (Biros, NYU & Ghattas, CMU)

44 INRIA Sophia-Antipolis Colloquium Newton-Krylov-Schwarz
- Newton: nonlinear solver, asymptotically quadratic
- Krylov: accelerator, spectrally adaptive
- Schwarz: preconditioner, parallelizable
Popularized in parallel Jacobian-free form under this name by Cai, Gropp, Keyes & Tidriri (1994)

45 INRIA Sophia-Antipolis Colloquium Jacobian-Free Newton-Krylov Method
- In the Jacobian-Free Newton-Krylov (JFNK) method, a Krylov method solves the linear Newton correction equation, requiring Jacobian-vector products J(u)v
- These are approximated by the Fréchet derivative J(u)v ≈ [F(u + εv) − F(u)] / ε, so that the actual Jacobian elements are never explicitly needed, where ε is chosen with a fine balance between approximation and floating-point rounding error
- Schwarz preconditions, using approximate elements
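A minimal JFNK sketch with SciPy's GMRES and a finite-difference Jacobian-vector product; the Bratu-like test problem, problem size, and choice of ε are illustrative, and no Schwarz preconditioner is applied here (in practice it would be passed to GMRES as M).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Jacobian-free Newton-Krylov: GMRES solves J(u) du = -F(u), with J(u)v
# approximated by the Frechet difference (F(u + eps*v) - F(u)) / eps,
# so the Jacobian is never formed.
n = 32
A = (n + 1)**2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def F(u):                                       # nonlinear residual (Bratu-like)
    return A @ u - np.exp(u)

u = np.zeros(n)
for k in range(20):
    Fu = F(u)
    print(k, np.linalg.norm(Fu))                # roughly quadratic decrease
    if np.linalg.norm(Fu) < 1e-8:
        break
    eps = 1e-7 * (1.0 + np.linalg.norm(u))      # balance truncation vs. rounding

    def jv(v, u=u, Fu=Fu, eps=eps):             # matrix-free Jacobian-vector product
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        return (F(u + eps * v / nv) - Fu) * (nv / eps)

    J = LinearOperator((n, n), matvec=jv, dtype=float)
    du, info = gmres(J, -Fu)                    # inner (unpreconditioned) Krylov solve
    u = u + du
```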

46 INRIA Sophia-Antipolis Colloquium Computational Aerodynamics
[Unstructured mesh (c/o D. Mavriplis, ICASE) and transonic “lambda” shock, shown as Mach contours on surfaces]
Implemented in PETSc: www.mcs.anl.gov/petsc

47 INRIA Sophia-Antipolis Colloquium Fixed-size Parallel Scaling Results
[Chart: four orders of magnitude in 13 years]
- 128 nodes: 43 min
- 3072 nodes: 2.5 min, 226 Gflop/s
- 11M unknowns, 15 µs/unknown, 70% efficient
This scaling study, featuring our widest range of processor number, was done for the incompressible case. (c/o K. Anderson, W. Gropp, D. Kaushik, D. Keyes and B. Smith)

48 INRIA Sophia-Antipolis Colloquium Fixed-size Parallel Scaling Results on ASCI Red ONERA M6 Wing Test Case, Tetrahedral grid of 2.8 million vertices on up to 3072 ASCI Red Nodes (Pentium Pro 333 MHz processors)

49 INRIA Sophia-Antipolis Colloquium PDE Workingsets
- Smallest: data for a single stencil
- Largest: data for the entire subdomain
- Intermediate: data for a neighborhood collection of stencils, reused as possible

50 INRIA Sophia-Antipolis Colloquium Improvements Resulting from Locality Reordering Factor of Five!

51 INRIA Sophia-Antipolis Colloquium Cache Traffic for PDEs
- As successive workingsets “drop” into a level of memory, capacity misses (and, with effort, conflict misses) disappear, leaving only compulsory misses and reducing demand on main memory bandwidth
[Chart: traffic decreases as the cache gets bigger or the subdomains get smaller]

52 INRIA Sophia-Antipolis Colloquium FETI-DP for structural mechanics
[Scaling charts for problems of 1, 4, 9, 18, 30, and 60 million degrees of freedom; c/o C. Farhat and K. Pierson]
- Numerically scalable, hardware scalable solutions for realistic solid/shell models
- Used in Sandia codes Salinas, Adagio, Andante

53 INRIA Sophia-Antipolis Colloquium PDE-constrained Optimization
Lagrange-Newton-Krylov-Schur, implemented in Veltisto/PETSc (c/o G. Biros and O. Ghattas)
- Optimal control of laminar viscous flow
  - optimization variables are surface suction/injection
  - objective is minimum drag
  - 700,000 states; 4,000 controls
  - 128 Cray T3E processors
  - ~5 hrs for optimal solution (~1 hr for analysis)
[Figures: wing tip vortices, no control (left) vs. optimal control (right); optimal boundary controls shown as velocity vectors]
www.cs.nyu.edu/~biros/veltisto/

54 INRIA Sophia-Antipolis Colloquium
- Lab-university collaborations to develop “Integrated Software Infrastructure Centers” (ISICs) and partner with application groups
- For FY2002, 51 new projects at $57M/year total
  - Approximately one-third for ISICs
  - A third for grid infrastructure and collaboratories
  - A third for applications groups
- 5 Tflop/s IBM SP platforms “Seaborg” at NERSC (#3 in latest “Top 500”) and “Cheetah” at ORNL (being installed now) available for SciDAC

55 INRIA Sophia-Antipolis Colloquium Introducing “Terascale Optimal PDE Simulations” (TOPS) ISIC Nine institutions, $18M, five years, 24 co-PIs

56 INRIA Sophia-Antipolis Colloquium TOPS
- Not just algorithms, but vertically integrated software suites
- Portable, scalable, extensible, tunable, modular implementations
- Starring PETSc and hypre, among other existing packages
- Driven by three SciDAC applications groups
  - LBNL-led “21st Century Accelerator” designs
  - ORNL-led core collapse supernovae simulations
  - PPPL-led magnetic fusion energy simulations
  and intended for many others

57 INRIA Sophia-Antipolis Colloquium Background of PETSc Library (in which the FUN3D example was implemented)
- Developed by Balay, Gropp, McInnes & Smith (ANL) to support research, prototyping, and production parallel solutions of operator equations in message-passing environments; now joined by four additional staff (Buschelman, Kaushik, Knepley, Zhang) under SciDAC
- Distributed data structures as fundamental objects: index sets, vectors/gridfunctions, and matrices/arrays
- Iterative linear and nonlinear solvers, combinable modularly, recursively, and extensibly
- Portable, and callable from C, C++, Fortran
- Uniform high-level API, with multi-layered entry
- Aggressively optimized: copies minimized, communication aggregated and overlapped, caches and registers reused, memory chunks preallocated, inspector-executor model for repetitive tasks (e.g., gather/scatter)
See http://www.mcs.anl.gov/petsc

58 INRIA Sophia-Antipolis Colloquium User Code/PETSc Library Interactions
[Diagram: the user code supplies application initialization, function evaluation, Jacobian evaluation, and post-processing; the PETSc library supplies the main routine, timestepping solvers (TS), nonlinear solvers (SNES), linear solvers (SLES), Krylov methods (KSP), and preconditioners (PC)]

59 INRIA Sophia-Antipolis Colloquium User Code/PETSc Library Interactions, cont.
[Same diagram, with part of the user-supplied evaluation code marked “to be AD code”, i.e., to be generated by automatic differentiation]

60 INRIA Sophia-Antipolis Colloquium Background of Hypre Library (to be combined with PETSc under SciDAC)
- Developed by Chow, Cleary & Falgout (LLNL) to support research, prototyping, and production parallel solutions of operator equations in message-passing environments; now joined by seven additional staff (Henson, Jones, Lambert, Painter, Tong, Treadway, Yang) under ASCI and SciDAC
- Object-oriented design similar to PETSc
- Concentrates on linear problems only
- Richer in preconditioners than PETSc, with a focus on algebraic multigrid
- Includes other preconditioners, including sparse approximate inverse (ParaSails) and parallel ILU (Euclid)
See http://www.llnl.gov/CASC/hypre/

61 INRIA Sophia-Antipolis Colloquium Hypre’s “Conceptual Interfaces”
[Diagram: linear system interfaces connect data layouts (structured, composite, block-structured, unstructured, CSR) to linear solvers (GMG, FAC, Hybrid, AMGe, ILU, ...). Slide c/o E. Chow, LLNL]

62 INRIA Sophia-Antipolis Colloquium Sample of Hypre’s Scaled Efficiency Slide c/o E. Chow, LLNL

63 INRIA Sophia-Antipolis Colloquium Scope for TOPS
- Design and implementation of “solvers”
  - Time integrators, with sensitivity analysis
  - Nonlinear solvers, with sensitivity analysis
  - Optimizers
  - Linear solvers
  - Eigensolvers
- Software integration
- Performance optimization
[Diagram of dependences among the optimizer, sensitivity analyzer, time integrator, nonlinear solver, eigensolver, and linear solver]

64 INRIA Sophia-Antipolis Colloquium TOPS Philosophy on PDEs
- Solution of a system of PDEs is rarely a goal in itself
  - PDEs are solved to derive various outputs from specified inputs
  - The actual goal is characterization of a response surface or a design or control strategy
  - Together with analysis, sensitivities and stability are often desired
- Therefore, tools for PDE solution should also support these related desires

65 INRIA Sophia-Antipolis Colloquium TOPS Philosophy on Operators
- A continuous operator may appear in a discrete code in many different instances
  - Optimal algorithms tend to be hierarchical and nested iterative
  - Processor-scalable algorithms tend to be domain-decomposed and concurrent iterative
  - The majority of progress towards the desired highly resolved, high-fidelity result occurs through cost-effective low-resolution, low-fidelity, parallel-efficient stages
- Therefore, operator abstractions and recurrence must be supported

66 INRIA Sophia-Antipolis Colloquium It’s 2002; do you know what your solver is up to?
- Has your solver not been updated in the past five years?
- Is your solver running at 1-10% of machine peak?
- Do you spend more time in your solver than in your physics?
- Is your discretization or model fidelity limited by the solver?
- Is your time stepping limited by stability?
- Are you running loops around your analysis code?
- Do you care how sensitive to parameters your results are?
If the answer to any of these questions is “yes”, you are a potential customer!

67 INRIA Sophia-Antipolis Colloquium TOPS project goals/success metrics
TOPS will have succeeded if users:
- Understand the range of algorithmic options and their tradeoffs (e.g., memory vs. time, inner iteration work vs. outer)
- Can try all reasonable options from different sources easily without recoding or extensive recompilation
- Know how their solvers are performing
- Spend more time in their physics than in their solvers
- Are intelligently driving solver research, and publishing joint papers with TOPS researchers
- Can simulate truly new physics, as solver limits are steadily pushed back (finer meshes, higher fidelity models, complex coupling, etc.)

68 INRIA Sophia-Antipolis Colloquium Conclusions
- Domain decomposition and multilevel iteration are the dominant paradigm in contemporary terascale PDE simulation
- Several freely available software toolkits exist, and successfully scale to thousands of tightly coupled processors for problems on quasi-static meshes
- Concerted efforts are underway to make elements of these toolkits interoperate, and to allow expression of the best methods, which tend to be modular, hierarchical, recursive, and, above all, adaptive!
- Many challenges loom at the “next scale” of computation
- Undoubtedly, new theory/algorithms will be part of the solution

69 INRIA Sophia-Antipolis Colloquium Acknowledgments
- Collaborators or contributors: George Biros (NYU), Xiao-Chuan Cai (Univ. Colorado, Boulder), Omar Ghattas (Carnegie-Mellon), Dinesh Kaushik (ODU), Dana Knoll (LANL), Dimitri Mavriplis (ICASE), Kendall Pierson (Sandia), and the PETSc team at Argonne National Laboratory: Satish Balay, Bill Gropp, Lois McInnes, Barry Smith
- Sponsors: DOE, NASA, NSF
- Computer resources: LLNL, LANL, SNL, NERSC

70 INRIA Sophia-Antipolis Colloquium Related URLs
- Personal homepage (papers, talks, etc.): http://www.math.odu.edu/~keyes
- SciDAC initiative: http://www.science.doe.gov/scidac
- TOPS project: http://www.math.odu.edu/~keyes/scidac
- PETSc project: http://www.mcs.anl.gov/petsc
- Hypre project: http://www.llnl.gov/CASC/hypre
- ASCI platforms: http://www.llnl.gov/asci/platforms

71 INRIA Sophia-Antipolis Colloquium Bibliography
- Jacobian-Free Newton-Krylov Methods: Approaches and Applications, Knoll & Keyes, 2002, to be submitted to J. Comp. Phys.
- Nonlinearly Preconditioned Inexact Newton Algorithms, Cai & Keyes, 2002, to appear in SIAM J. Sci. Comp.
- High Performance Parallel Implicit CFD, Gropp, Kaushik, Keyes & Smith, 2001, Parallel Computing 27:337-362
- Four Horizons for Enhancing the Performance of Parallel Simulations Based on Partial Differential Equations, Keyes, 2000, Lect. Notes Comp. Sci., Springer, 1900:1-17
- Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel CFD, Gropp, Keyes, McInnes & Tidriri, 2000, Int. J. High Performance Computing Applications 14:102-136
- Achieving High Sustained Performance in an Unstructured Mesh CFD Application, Anderson, Gropp, Kaushik, Keyes & Smith, 1999, Proceedings of SC'99
- Prospects for CFD on Petaflops Systems, Keyes, Kaushik & Smith, 1999, in “Parallel Solution of Partial Differential Equations,” Springer, pp. 247-278
- How Scalable is Domain Decomposition in Practice?, Keyes, 1998, in “Proceedings of the 11th Intl. Conf. on Domain Decomposition Methods,” Domain Decomposition Press, pp. 286-297

72 INRIA Sophia-Antipolis Colloquium EOF

73 INRIA Sophia-Antipolis Colloquium Agenda for future research
- High concurrency (100,000 processors)
- Asynchrony
- Fault tolerance
- Automated tuning
- Integration of simulation with studies of sensitivity, stability, and optimization

74 INRIA Sophia-Antipolis Colloquium High Concurrency
Today:
- 10,000 processors in a single room with a tightly coupled network
- DD computations scale well, when provided with
  - a network rich enough for parallel near-neighbor communication
  - fast global reductions (complexity sublinear in processor count)
Future:
- 100,000 processors, in a room or as part of a grid
- Most phases of DD computations scale well (favorable surface-to-volume comm-to-comp ratio)
- However, latencies will nix frequent exact reductions
- Paradigm: extrapolate data in retarded messages; correct (if necessary) when the message arrives, such as in the C(p,q,j) schemes of Garbey and Tromeur-Dervout

75 INRIA Sophia-Antipolis Colloquium Asynchrony
Today:
- A priori partitionings for quasi-static meshes provide load-balanced computational tasks between frequent synchronization points
- Good load balance is critical to parallel scalability on 1,000 processors and more
Future:
- Adaptivity requirements and far-flung, nondedicated networks will lead to idleness and imbalance at synchronization points
- Need algorithms with looser outer loops than global Newton-Krylov
- Can we design algorithms that are robust with respect to incomplete convergence of inner tasks, like inexact Newton?
- Paradigm: nonlinear Schwarz with regional (not global) nonlinear solvers, where most execution time is spent

76 INRIA Sophia-Antipolis Colloquium Fault Tolerance (c/o A. Geist)
Today:
- Fault tolerance is not a driver in most scientific application code projects
- FT is handled as follows:
  - Detection of wrong
    - System: in hardware
    - Framework: by runtime environment
    - Library: in math or comm lib
  - Notification of application
    - Interrupt: signal sent to job
    - Error code returned by app process
  - Recovery
    - Restart from checkpoint
    - Migration of task to new hardware
    - Reassignment of work to remaining tasks
Future:
- With 100,000 processors or worldwide networks, MTBF will be in minutes
- Checkpoint-restart could take longer than the time to next failure
- Paradigm: naturally fault-tolerant algorithms, robust with respect to failure, such as a new FD algorithm at ORNL

77 INRIA Sophia-Antipolis Colloquium Automated Tuning
Today:
- Knowledgeable user-developers parameterize their solvers with experience and theoretically informed intuition for:
  - problem size/processor ratio
  - outer solver type
  - Krylov solver type
  - DD preconditioner type
  - maximum subspace dimensions
  - overlaps
  - fill levels
  - inner tolerances
  - potentially many others
Future:
- Less knowledgeable users will be required to employ parallel iterative solvers in taxing applications
- Need safe defaults and automated tuning strategies
- Paradigm: parallel direct search (PDS) derivative-free optimization methods, using overall parallel computational complexity as the objective function and algorithm tuning parameters as design variables, to tune the solver in preproduction trial executions

78 INRIA Sophia-Antipolis Colloquium Integrated Software
Today:
- Each analysis is a “special effort”
- Optimization, sensitivity analysis (e.g., for uncertainty quantification), and stability analysis to fully exploit and contextualize scientific results are rare
Future:
- Analysis increasingly an “inner loop” around which more sophisticated science-driven tasks are wrapped
- Need PDE task functionality (e.g., residual evaluation, Jacobian evaluation, Jacobian inverse) exposed to optimization/sensitivity/stability algorithms
- Paradigm: integrated software based on common distributed data structures

