Presentation on theme: "Bill Cochran Oak Ridge National Laboratory The AMP Backplane Discreet Management of Numerical Libraries and Multiphysics Data."— Presentation transcript:

1 The AMP Backplane: Discreet Management of Numerical Libraries and Multiphysics Data
Bill Cochran, Oak Ridge National Laboratory (cochranwk@ornl.gov)

2 The AMP Backplane: Discreet Management of Numerical Libraries and Multiphysics Data

Developers:
Oak Ridge National Lab: Kevin Clarno, Bobby Philip, Bill Cochran, Srdjan Simunovic, Rahul Sampath, Srikanth Allu, Gokan Yesilyurt, Jung Ho Lee, James Banfield, Pallab Barai
Los Alamos National Lab: Gary Dilts, Bogdan Mihaila
Idaho National Lab: Richard Martineau, Glen Hansen, Samet Kadioglu, Ray Berry

Collaborators:
Oak Ridge National Lab: Sreekanth Pannala, Phani Nukala, Larry Ott, Jay Billings
Los Alamos National Lab: Mike Rogers
Argonne National Lab: Abdellatif Yacout

Advisors:
Los Alamos National Lab: Cetin Unal, Steven Lee
Oak Ridge National Lab: Larry Ott, John Turner
Argonne National Lab: Marius Stan

3 The AMP Backplane: Vectors

Epetra_Vector x;
Vec y;
N_Vector z;
VecAXPBYPCZ ( z, alpha, 1, 0, x, y );

4 Why So Many Libraries? AMP uses:
Moertel (contact)
ML (preconditioning)
SNES and KSP (JFNK)
IDA (time integration)

5 The AMP Backplane: Vectors, Matrices, Meshes

Vectors:
Epetra_Vector x; Vec y; N_Vector z;
VecAXPBYPCZ ( z, alpha, 1, 0, x, y );

Matrices:
Epetra_CrsMatrix P; Mat A; N_Vector x, y, z;
P.Multiply ( false, x, y );
MatMult ( A, y, z );

Meshes:
stk::mesh::Entity curElement;
libMesh::FE integrator;
integrator.reinit ( &curElement );

Multiphysics data (Mechanics, Temperature, Oxygen Diffusion, Burn Up, Neutronics, etc.):
Vec multiPhysicsSolution;
Vec tempPellet, displacementPellet;
Vec thermoMechanicsPellet;
SolveThermoMechanics ( thermoMechanicsPellet );

6 How Does It Work?
Virtual methods
Polymorphism
Templates
Iterators
Standard template library

7 How Do I Use It? Master six classes:
AMP::Vector: linear combinations, norms, get/set, etc.
AMP::Matrix: matrix-vector products, scaling, etc.
AMP::MeshManager: multiple domains, parallel management, I/O, space allocation, etc.
AMP::MeshManager::Adapter: entity iteration, boundary conditions, memory management, vector indexing, etc.
AMP::Variable: describes desired memory layout, indexes individual physics
AMP::DOFMap: maps mesh entities to indices in vectors and matrices

8 How Do I Use It In Parallel?
Step 1: makeConsistent()
Step 2: ???
Step 3: Profit!
Multicore, multi-multicore

9 How Discreet Is It?

AMP::Vector::shared_ptr thermalResidual;
AMP::Vector::shared_ptr thermalSolution;
thermalResidual = residual->subsetVectorForVariable ( temperatureVar );
thermalSolution = solution->subsetVectorForVariable ( temperatureVar );

AMP::Vector::shared_ptr petscView;
petscView = AMP::PetscVector::view ( vector );
Vec petscVec;
petscVec = petscView->castTo<AMP::PetscVector> ().getVec();
(Most vector functionality; enough matrix functionality; works with SNES and KSP)

AMP::Vector::shared_ptr sundialsView;
sundialsView = AMP::SundialsVector::view ( vector );
N_Vector sundialsVec;
sundialsVec = sundialsView->castTo<AMP::SundialsVector> ().getNVector();
(Most vector functionality; works with IDA)

AMP::Vector::shared_ptr epetraView;
epetraView = AMP::EpetraVector::view ( vector );
Epetra_Vector &epetraVec = epetraView->castTo<AMP::EpetraVector> ().getEpetra_Vector();
(Single domain/single physics; default linear algebra engine; hopefully, limitation eased by Tpetra)

Variables describe: memory layout, physics, discretization

10 What About Performance?
1) C++
2) Virtual methods
Clever compiler optimizations
Iterative access: L2Norm(), dot(), min(), axpy(), scale(), ...
Non-iterative access:
for ( i = 0 ; i != numElems ; i++ )
  for ( j = 0 ; j != 8 ; j++ )
    vector->addValue ( elem[8*i+j], phi[j] );
FORTRAN-esque speed

11 Digression
Time to perform dot product of 2 vectors: 0.05 secs
Virtual method penalty: 50%
Time to perform tight-loop virtual method dot product: 0.075 secs
Dot product # floating point ops: 2n-1
Dot product FLOPS (FORTRAN style): 40n-20
Similar sized matvec w.r.t. FLOPS: 24n-12
matvec cache penalty: 40%

12 What About Performance?
1) C++
2) Virtual methods
Clever compiler optimizations
Iterative access: FORTRAN-esque speed
Non-iterative access:
for ( i = 0 ; i != numElems ; i++ )
  for ( j = 0 ; j != 8 ; j++ )
    vector->addValue ( elem[8*i+j], phi[j] );
becomes
for ( i = 0 ; i != numElems ; i++ )
  vector->addValues ( 8, elem + 8*i, phi );

13 Does It Work? 100,000+ unit tests covering:
Interfaces: AMP interface, AMP interface vs PETSc, AMP interface vs Epetra, PETSc wrappers, SUNDIALS wrappers, Epetra vs PETSc
Configurations: single physics/single domain, multiple physics/single domain, single physics/multiple domains, multiple physics/multiple domains, multiple linear algebra engines
Vector forms: AMP vectors, PETSc views, Sundials views, clones, clones of views, views of clones
Serial and parallel
Various bugs found in development

14 What Can It Do?
SUNDIALS IDA time integration
PETSc SNES JFNK quasi-static
Trilinos ML preconditioning

15 Superscaling
[Figure: time (s) to read meshes vs. number of cores, with curves for 10.75k, 21.5k, and 43k elements/core, on 32 domains (22M elements) and 128 domains (88M elements)]

16 What Can It Do?
Multiphysics
Multidomain
Multicore

17 What’s On The Horizon?
“PMPIO” checkpointing and restart
Rudimentary contact search
On-the-fly d.o.f. extraction
Hot-swap linear algebra engines
Better interface for multi* data

18 What’s Left To Do?
Performance testing and tuning
More libraries
Generalized discretizations
Bringing everything together

