DOE 2000: The Advanced Computational Testing and Simulation Toolkit
James R. McGraw, Lawrence Livermore National Laboratory


2 ACTS Toolkit: Context
[Diagram: ACTS Toolkit Contents and ACTS Toolkit Support for Customers sit between Customer Applications in Fortran or C++ and the Operating System, Network, Hardware Layer (homogeneous and heterogeneous distributed parallel systems)]

3 ACTS Toolkit: Current Contents
Tool categories: Application Support Tools, Numerics, Code Development Tools, Code Execution Tools
Round 1 Funding
» Adapt previously autonomous numerical libraries to permit interoperable use on selected DP applications.
» Adapt and integrate object-oriented libraries for defining and manipulating parallel distributed arrays.
» Develop a portable, parallel run-time class library and integrate it with tools for remote steering and visualization.
» Develop object-oriented tools for assisting code developers to write particle simulation-based applications.
Round 2 Funding
» Expand the interoperable behavior with more numerical algorithms relevant to ER and other DOE applications.
» Expand tools for managing arrays on complex memory hierarchies and enable run-time linking to diverse class libraries.
» Incorporate use of parallel program debugging and analysis tools with the current system and improve the tools for use on computational grids.

4 Metrics for Success
» More clients using the software tools
» Improved performance of tools for clients
» Reduced duplication of effort in tool development
» Providing new capabilities
» Publications

5 Overarching Theme: Interoperability
[Diagram: numerical libraries (PETSc, Aztec, PVODE, Hypre, Opt++, SuperLU) interoperating with collaboration and security services (Encrypt, Authenticate, Authorize, Control, Conference, Share)]

6 Collaborating Projects
[Diagram: participating sites (LBNL, Berkeley, U. Tenn., USC, Indiana, U. Ore., LANL, ANL, LLNL, PNNL, ORNL, SNL) mapped to project areas (Frameworks, Run-Time, Numerics) and projects (Linear Algebra, ODEs/PDEs, Optimization, Comp. Arch. (CCA), Data Layout, Apps. Dev. Env., OO Execution, Distr. Computing, Test and Evaluate)]

9 Numerical Toolkit Efforts
Large-scale (100,000,000+ DOF) simulations
» computational fluid dynamics
» combustion
» structures
» materials
» usually PDE based
Large-scale optimization
» often involving simulation
» may be stochastic

10 PVODE-PETSc (LLNL-ANL)
Complementary functionality
» Parallel ODE integrators (PVODE)
 – sophisticated time-step control for accuracy
 – special scaled non-linear solver
 – object based
» Scalable preconditioners and Krylov methods (PETSc)
 – run in parallel on many processors
 – highly efficient block matrix storage formats
 – object oriented
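To make the PETSc half of this pairing concrete, below is a minimal sketch of a Krylov solve through PETSc's KSP interface. It is illustrative only: it uses the present-day C API (PetscCall, the two-matrix KSPSetOperators), which differs from the 1998-era interface these slides describe, and the 1-D Laplacian is a stand-in problem rather than code from any ACTS application. Build details depend on the local PETSc installation (typically via PETSc's makefiles).

    // ksp_sketch.cpp -- hedged sketch of a Krylov solve with PETSc's KSP interface
    #include <petscksp.h>

    int main(int argc, char **argv) {
      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

      const PetscInt n = 100;          // size of a toy 1-D Laplacian system
      Mat A;  Vec x, b;  KSP ksp;

      // Assemble a tridiagonal 1-D Laplacian as a stand-in operator
      PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
      PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
      PetscCall(MatSetFromOptions(A));
      PetscCall(MatSetUp(A));
      PetscInt rstart, rend;
      PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
      for (PetscInt i = rstart; i < rend; ++i) {
        if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
        if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
        PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
      }
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

      PetscCall(MatCreateVecs(A, &x, &b));
      PetscCall(VecSet(b, 1.0));

      // Krylov solver + preconditioner, configurable at run time
      PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
      PetscCall(KSPSetOperators(ksp, A, A));
      PetscCall(KSPSetFromOptions(ksp));
      PetscCall(KSPSolve(ksp, b, x));

      PetscCall(KSPDestroy(&ksp));
      PetscCall(MatDestroy(&A));
      PetscCall(VecDestroy(&x));
      PetscCall(VecDestroy(&b));
      PetscCall(PetscFinalize());
      return 0;
    }

The solver and preconditioner are chosen at run time (e.g. -ksp_type gmres -pc_type bjacobi), which is the kind of configurability the "scalable preconditioners and Krylov methods" bullet refers to.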

11 PVODE-PETSc Use
Sintering: simulating the dynamics of microstructural interactions via the method of lines, requiring the solution of a large set of coupled ODEs
Previously used LSODE, limited to 100s of DOF; now can handle 10,000s
Wen Zhang, Oakland University, and Joachim Schneibel, Oak Ridge
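As background on the "method of lines" mentioned above: the PDE is discretized in space, leaving a large coupled ODE system in time. The toy sketch below shows that pattern for a 1-D heat equation with a simple explicit Euler step; it is a conceptual stand-in only, since the sintering work used PVODE's implicit, adaptive integrators on far larger and stiffer systems.

    // mol_sketch.cpp -- conceptual method-of-lines illustration (not the sintering code or PVODE):
    // semi-discretize u_t = u_xx on [0,1] into N coupled ODEs, then advance with explicit Euler.
    #include <cstdio>
    #include <vector>

    int main() {
      const int    N  = 100;                 // number of interior grid points (degrees of freedom)
      const double h  = 1.0 / (N + 1);       // grid spacing
      const double dt = 0.4 * h * h;         // explicit-Euler stability limit for the heat equation
      std::vector<double> u(N, 1.0), dudt(N);

      for (double t = 0.0; t < 0.01; t += dt) {
        for (int i = 0; i < N; ++i) {        // du_i/dt = (u_{i-1} - 2 u_i + u_{i+1}) / h^2, u = 0 at the ends
          double left  = (i > 0)     ? u[i - 1] : 0.0;
          double right = (i < N - 1) ? u[i + 1] : 0.0;
          dudt[i] = (left - 2.0 * u[i] + right) / (h * h);
        }
        for (int i = 0; i < N; ++i) u[i] += dt * dudt[i];
      }
      std::printf("u at midpoint after integration: %g\n", u[N / 2]);
      return 0;
    }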

12 Utah ASCI ASAP Center
Center for the Simulation of Accidental Fires and Explosions: “...problem-solving environment in which fundamental chemistry and engineering physics are fully coupled with non-linear solvers, optimization, computational steering, …”
PETSc + SAMRAI (LLNL)
Using the PETSc nonlinear PDE solvers
Has already fed back into PETSc nonlinear solver enhancements

13 Lab-Related Toolkit Usage
ALE3D test problems run with a PETSc-based parallel multigrid solver
» Run on the NERSC 512-processor T3E and LLNL ASCI Blue Pacific
» Simulation with 16 million+ degrees of freedom
» Mark Adams (Berkeley)
Versions of ARES and Teton run with PETSc
Groundwater flow simulation (gimrt, LLNL)
Multi-phase flow (LANL), sequential legacy code

14 System Software: Distributed Computing and Communications
Build on Nexus communication library to provide ACTS toolkit with
» Integrated support for multithreading and communication
» High-performance multi-method communication
» Dynamic process creation
» Integration with distributed computing
Apply these capabilities to ACTS toolkit components & applications
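Nexus itself is not shown here, but the first bullet, integrated multithreading and communication, can be illustrated with a present-day analogue: initialize MPI with full thread support and let several threads on each process send and receive concurrently. This is a hedged sketch of the idea, not the Nexus API.

    // threads_mpi_sketch.cpp -- rough analogue of integrated multithreading + communication
    // (plain MPI + C++ threads, not the Nexus library described in the slides)
    #include <mpi.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main(int argc, char **argv) {
      int provided = 0;
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      int rank = 0, size = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (provided >= MPI_THREAD_MULTIPLE && size >= 2) {
        const int nthreads = 4;
        std::vector<std::thread> workers;
        for (int t = 0; t < nthreads; ++t) {
          workers.emplace_back([=] {
            int msg = rank * 100 + t;
            if (rank == 0) {                 // each thread on rank 0 sends on its own tag
              MPI_Send(&msg, 1, MPI_INT, 1, /*tag=*/t, MPI_COMM_WORLD);
            } else if (rank == 1) {          // the matching thread on rank 1 receives it
              MPI_Recv(&msg, 1, MPI_INT, 0, /*tag=*/t, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              std::printf("rank 1, thread %d received %d\n", t, msg);
            }
          });
        }
        for (auto &w : workers) w.join();
      }
      MPI_Finalize();
      return 0;
    }

Run with at least two ranks, e.g. mpirun -np 2 ./threads_mpi_sketch.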

15 Connections: Nexus
[Diagram: Nexus linked to DOE2000 collaboratory projects, DOE2000 ACTS toolkit projects, numerical libraries, and other parallel tools, including MPICH, HPC++, PST, CIF, ALICE, ManyWorlds, NEOS, CC++, MOL, Globus, RIO, SCIRun, DAVE-2, AutoPilot, Akenti, Paradyn, PAWS, and CAVERN, among others]

16 Remote Browsing of Large Datasets
Problem: interactively explore very large (TB+) datasets
» Interactive client VRUI with view-management support
» Data reduction at the remote client (subsampling)
» Use of Globus to authenticate, transfer data, and access data
Participants: ANL, USC/ISI, UC ASAP, CIT ASAP
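The data-reduction step amounts to thinning a field before it is shipped to or rendered by the interactive client. A minimal, hypothetical stride-based version of that reduction is sketched below; the real system layered this over Globus-authenticated data access rather than anything shown here.

    // subsample_sketch.cpp -- hypothetical stride-based reduction of a 3-D scalar field
    // before handing it to an interactive viewer (not code from the ANL / USC-ISI system)
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Keep every 'stride'-th sample in each dimension of an nx*ny*nz scalar field.
    std::vector<float> subsample(const std::vector<float> &field,
                                 int nx, int ny, int nz, int stride) {
      std::vector<float> reduced;
      for (int k = 0; k < nz; k += stride)
        for (int j = 0; j < ny; j += stride)
          for (int i = 0; i < nx; i += stride)
            reduced.push_back(field[(std::size_t)k * ny * nx + (std::size_t)j * nx + i]);
      return reduced;
    }

    int main() {
      const int nx = 256, ny = 256, nz = 256;
      std::vector<float> field((std::size_t)nx * ny * nz, 1.0f);   // stand-in for one tile of a huge dataset
      std::vector<float> view = subsample(field, nx, ny, nz, 4);   // 4x per axis = 64x fewer samples
      std::printf("reduced %zu samples to %zu\n", field.size(), view.size());
      return 0;
    }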

17 Applications in Flash ASCI ASAP Center
Prototype “global shell” that permits:
» Sign-on once via public key technology
» Locate available computers
» Start computation on an appropriate system
» Monitor progress of computation
» Get [subsampled] output files
» Manipulate locally

18 CUMULVS: Collaborative Infrastructure for Interacting with Scientific Simulations
Coordinates the collection and dissemination of information between parallel tasks and multiple viewers.
[Diagram: CUMULVS connecting tasks on Unix Host A, NT Host B, and Unix Host C with viewers: a remote person using AVS, a remote person using a VR interface, and a local viewer with a custom GUI]

19 CUMULVS Capabilities
» Multiple distinct data views
» Links to a variety of visualization tools
» Dynamic linking to a running application
» Coordinated computational steering
» Heterogeneous checkpointing
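As a rough illustration of "coordinated computational steering" (and explicitly not the CUMULVS API), the sketch below shows a simulation loop that picks up a steerable parameter at a well-defined point in each time step while a stand-in "viewer" thread changes it.

    // steering_sketch.cpp -- conceptual illustration of computational steering
    // (hypothetical names; CUMULVS provides this through its own library calls, not shown here)
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<double> steer_timestep{1.0e-3};   // parameter a remote viewer is allowed to adjust

    void viewer_stub() {                          // stands in for a remote viewer pushing a new value
      std::this_thread::sleep_for(std::chrono::milliseconds(50));
      steer_timestep = 5.0e-4;
    }

    int main() {
      std::thread viewer(viewer_stub);
      double t = 0.0;
      for (int step = 0; step < 200; ++step) {
        double dt = steer_timestep.load();        // pick up steering changes at a safe point each step
        t += dt;                                  // ... advance the simulation with the current dt ...
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
      }
      viewer.join();
      std::printf("final simulated time: %g\n", t);
      return 0;
    }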

20 CUMULVS Integration & Interoperability
Integration with the InDEPS code development environment (ORNL & SNL)
» Combustion simulations
» Material impact and deformation
» Smooth particle hydrodynamics
Remotely monitored T3E applications (ORNL, LBL, and LLNL)
Tcl/Tk language bindings for CUMULVS (NCSA, ORNL)
» Viewers: VTK & Tango, VR/OpenGL viewer, ImmersaDesk, and CAVE
» Apps.: Chesapeake Bay simulation, neutron star collision, DOD codes

21 PADRE: Parallel Asynchronous Data Routing Engine
[Figure: Caterpillar® engine geometry, modeled using Pro/ENGINEER®]

22 PADRE Purpose: Data Distribution Interoperability
» Data distribution libraries, e.g. Multi-block PARTI (UM), KeLP (UCSD), PGSLib (LANL), Global Arrays (PNNL)
» Communication libraries, e.g. MPI, PVM
» Application libraries, e.g. Overture, POOMA, AMR++

23 PADRE Services
Dynamic distribution (redistribution, etc.)
Generation and execution of communication schedules
» General array language support
» TULIP interface support
» Cached schedules for performance
» Lazy evaluation
» Message latency hiding
Subarray operation support for different distributions (AMR)
Gather/scatter support for indirect addressing (overlapping grid apps)
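The "cached schedules for performance" bullet is the familiar inspector/executor pattern: work out the element moves once, then replay that plan every time step. The sketch below shows the shape of the idea in a single process with hypothetical types; PADRE's real schedules drive MPI or PVM traffic between distributed arrays.

    // schedule_sketch.cpp -- build-once / execute-many "communication schedule" idea
    // (illustrative only; not the PADRE API)
    #include <cstdio>
    #include <vector>

    struct CopyPlan { std::size_t src, dst; };      // one element move between two layouts
    using Schedule = std::vector<CopyPlan>;

    // Plan the moves needed to gather every 'stride'-th element into a contiguous buffer.
    Schedule build_gather_schedule(std::size_t n, std::size_t stride) {
      Schedule plan;
      for (std::size_t i = 0, j = 0; i < n; i += stride, ++j) plan.push_back({i, j});
      return plan;
    }

    // Execute the cached plan; in a real engine this step issues the message traffic.
    void execute(const Schedule &plan, const std::vector<double> &src, std::vector<double> &dst) {
      for (const CopyPlan &m : plan) dst[m.dst] = src[m.src];
    }

    int main() {
      std::vector<double> field(1000, 2.5), gathered(250);
      Schedule plan = build_gather_schedule(field.size(), 4);   // computed once...
      for (int step = 0; step < 100; ++step)                    // ...reused every time step
        execute(plan, field, gathered);
      std::printf("schedule of %zu moves executed repeatedly\n", plan.size());
      return 0;
    }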

24 PADRE in Use
Default data distribution mechanism in Overture
» Permits multiple distribution mechanisms within Overture for specialized applications (e.g. AMR)
» Optimized communication using standard MPI and the ROSE optimizing preprocessor
Global Arrays
» Will provide an alternative to the GA interface for C++ users of GA
» Provides access to geometry in a form identical to what they already use
» Gives PADRE access to one-sided message passing (important for parallel grid generation mechanisms within Overture)
Ready for use in POOMA II
» KeLP distributions in PADRE are close to the domain layout in POOMA
» Can provide geometry and AMR capabilities to POOMA apps.

25 Interoperability Goal: Scientific Apps. via Plug-and-Play
[Diagram: interchangeable pieces of an application: Discretization, Algebraic Solvers, Parallel I/O, Grids, Data Reduction, Physics Modules, Optimization, Derivative Computation, Collaboration, Diagnostics, Steering, Visualization, Adaptive Solution]

26 Common Component Architecture: Active Participants
» Cal Tech
» Indiana University
» University of Utah
» Sandia National Laboratory
» Argonne National Laboratory
» Lawrence Berkeley National Laboratory
» Lawrence Livermore National Laboratory
» Los Alamos National Laboratory
» Oak Ridge National Laboratory
» Pacific Northwest Laboratories

27 Basic Idea
Component concepts
» Autonomous, interchangeable parts
» Links between components that create an application
» Builders
» Framework provides context or environment to components
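A minimal sketch of these four concepts, using hypothetical C++ types rather than anything from the CCA specification: one component provides a port, another uses it, and a tiny framework plays the role of the builder that links them.

    // component_sketch.cpp -- minimal illustration of ports, links, a builder, and a framework
    // (hypothetical types; not the CCA specification)
    #include <cstdio>
    #include <map>
    #include <memory>
    #include <string>

    struct Port { virtual ~Port() = default; };                 // a typed connection point

    struct SolvePort : Port { virtual double solve(double rhs) = 0; };

    struct SolverComponent : SolvePort {                        // provides a "solve" port
      double solve(double rhs) override { return rhs / 2.0; }   // stand-in numerics
    };

    struct DriverComponent {                                    // uses a "solve" port supplied by the framework
      SolvePort *solver = nullptr;
      void run() { if (solver) std::printf("result: %g\n", solver->solve(8.0)); }
    };

    struct Framework {                                          // provides the context; the builder wires links
      std::map<std::string, std::shared_ptr<Port>> provided;
      void provide(const std::string &name, std::shared_ptr<Port> p) { provided[name] = std::move(p); }
      Port *lookup(const std::string &name) { return provided.count(name) ? provided[name].get() : nullptr; }
    };

    int main() {
      Framework fw;
      fw.provide("solve", std::make_shared<SolverComponent>());        // component registers its port
      DriverComponent driver;
      driver.solver = dynamic_cast<SolvePort *>(fw.lookup("solve"));   // builder links user to provider
      driver.run();
      return 0;
    }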

28 Existing Component Models: The Good, Bad, and Ugly
What’s good about them?
» Everyone knows how they should work.
» They are accepted standards.
What’s bad about them?
» Not geared toward high-performance computing.
» Often based on a single framework.
» Geared toward a single language (e.g. Beans).
» Meant for a single environment (e.g. COM).

29 CCA Extensions to Existing Models
gPorts (generalized Ports)
» Similar to the CORBA 3.0 User/Provider Port spec.
» Uses a type of IUnknown pattern similar to COM.
» Stolen from data-flow component models.
» Defined by linked interfaces.
» Draft gPort spec in pre-RFC stage:
Scientific IDL
» Once the component model is defined, language interoperability is the issue.
 – Include legacy programs.
 – Include legacy programmers.
» Draft IDL spec in pre-RFC stage:

30 ACTS Toolkit: Remaining Holes
Tool categories: Application Support Tools, Numerics, Code Development Tools, Code Execution Tools, Visualization & Analysis Tools
» Expansion of tools to provide portable performance, multi-language programming environments, and tools for software maintenance of scientific applications.
» Integration of tools for helping application users to visualize and analyze the results of complex 3D computations.
» Expansion of tools for helping application developers share common domain-specific tools, such as adaptive mesh refinement and unstructured grids.
» Automated differentiation tools and ???
» Expansion of tools to provide parallel I/O, dynamic load-balancing, and intelligent storage management.