Slide 1: CCA Status, Code Walkthroughs, and Demonstrations
Center for Component Technology for Terascale Simulation Software (CCTTSS)
CCTTSS Tutorial Working Group
CCA Forum, Townsend, TN, 10 April 2002
This work has been sponsored by the Mathematics, Information and Computational Sciences (MICS) Program of the U.S. Dept. of Energy, Office of Science.

Slide 2: Contributors
- Ben Allan, SNL
- Rob Armstrong, SNL
- David Bernholdt, ORNL
- Lori Freitag, ANL
- Jim Kohl, ORNL
- Lois Curfman McInnes, ANL
- Boyana Norris, ANL (Presenter)
- Craig Rasmussen, LANL
- Jaideep Ray, SNL

Slide 3: Prototype CCA Frameworks
- XCAT, Indiana University (Dennis Gannon)
  - Distributed
  - Network connection
- CCAFFEINE, Sandia National Laboratories (Rob Armstrong)
  - SPMD/SCMD parallel
  - Direct connection
- SCIRun/Uintah, University of Utah (Steve Parker)
  - Parallel, multithreaded
  - Direct connection
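
All three frameworks implement the same basic model from the CCA specification: a component advertises "provides" ports and declares "uses" ports through a Services handle that the framework passes to its setServices method, and connected peers are reached by asking the framework for a port. The self-contained C++ sketch below illustrates that pattern; the Port, Services, and Component classes are simplified stand-ins for the spec's gov::cca interfaces (not the real headers), and the IntegratorPort name is invented for the example.

```cpp
#include <iostream>
#include <map>
#include <string>

// Simplified stand-ins for the CCA spec's gov::cca interfaces (illustration only).
struct Port { virtual ~Port() {} };

struct Services {
  std::map<std::string, Port*> provided;                    // ports this component offers
  void addProvidesPort(Port* p, const std::string& name) { provided[name] = p; }
  void registerUsesPort(const std::string& /*name*/) {}     // framework records the dependency
  Port* getPort(const std::string& name) { return provided[name]; }  // framework resolves connections
};

struct Component {
  virtual void setServices(Services* svc) = 0;              // called once when the framework loads the component
  virtual ~Component() {}
};

// A provides-port interface and a component that implements it.
struct IntegratorPort : Port { virtual void integrate(double t0, double t1) = 0; };

class IntegratorComponent : public Component, public IntegratorPort {
public:
  void setServices(Services* svc) override {
    svc->addProvidesPort(this, "IntegratorPort");           // offer functionality to peer components
    svc->registerUsesPort("ModelPort");                     // declare what this component needs
  }
  void integrate(double t0, double t1) override {
    std::cout << "integrating from " << t0 << " to " << t1 << "\n";
  }
};

int main() {
  Services svc;
  IntegratorComponent integrator;
  integrator.setServices(&svc);                             // in a real run the framework makes this call
  static_cast<IntegratorPort*>(svc.getPort("IntegratorPort"))->integrate(0.0, 1.0);
  return 0;
}
```

The frameworks differ mainly in how a port lookup is realized: CCAFFEINE and SCIRun/Uintah resolve it to an in-process reference (direct connection), while XCAT resolves it across a network connection.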

Slide 4: Current Status of CCA
- Specification version 0.5
- Working prototype frameworks
- Working multi-component parallel and distributed demonstration applications
- Draft specifications for:
  - Basic scientific data objects
  - MxN parallel data redistribution
- SC01 demonstrations available for download:
  - Four different "direct connect" applications, plus additional distributed applications
  - Direct-connect demos: 31 distinct components, up to 17 in any single application, 6 used in more than one application
- Components leverage and extend parallel software tools including CUMULVS, GrACE, LSODE, MPICH, PAWS, PETSc, PVM, SUMAA3d, TAO, and Trilinos.

Slide 5: Solution of an unconstrained minimization problem (determining minimal surface area given boundary constraints) using the TAOSolver optimization component.
TAOSolver uses linear solver components that incorporate abstract interfaces under development by the Equation Solver Interface (ESI) working group; the underlying implementations are provided via the new ESI interfaces to parallel linear solvers within the PETSc and Trilinos libraries. These linear solver components are employed in the other two applications as well.
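
The way TAOSolver reaches those linear solver components is the usual uses-port handshake: fetch the connected provider from the framework, call it, and release it. The sketch below illustrates only that calling discipline under assumed names (LinearSolverPort, solve, a toy DiagonalSolver); it is not the actual ESI or TAO interface, and the Services class is again a simplified stand-in for the framework.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical port interface; the real components use ESI-based linear solver
// interfaces, which look different from this simplified illustration.
struct LinearSolverPort {
  virtual void solve(const std::vector<double>& rhs, std::vector<double>& x) = 0;
  virtual ~LinearSolverPort() {}
};

// Stand-in for the framework's port lookup (gov::cca::Services in the real spec).
struct Services {
  LinearSolverPort* solver = nullptr;
  LinearSolverPort* getPort(const std::string&) { return solver; }  // framework resolves the connection
  void releasePort(const std::string&) {}                           // tell the framework we are done with it
};

// A trivial provider so the example runs: "solves" the diagonal system A = 2*I.
struct DiagonalSolver : LinearSolverPort {
  void solve(const std::vector<double>& rhs, std::vector<double>& x) override {
    x.resize(rhs.size());
    for (std::size_t i = 0; i < rhs.size(); ++i) x[i] = rhs[i] / 2.0;
  }
};

// Inside an optimization component, each Newton step fetches whichever solver
// component is connected (PETSc- or Trilinos-backed in the real demo), uses it,
// and releases it again.
void newtonStep(Services& svc, const std::vector<double>& gradient, std::vector<double>& step) {
  LinearSolverPort* ls = svc.getPort("LinearSolverPort");
  ls->solve(gradient, step);
  svc.releasePort("LinearSolverPort");
}

int main() {
  Services svc;
  DiagonalSolver provider;
  svc.solver = &provider;                     // in practice the framework wires this connection
  std::vector<double> g{2.0, 4.0}, s;
  newtonStep(svc, g, s);
  std::cout << "step = (" << s[0] << ", " << s[1] << ")\n";
  return 0;
}
```

Because the optimizer sees only the port, the PETSc- and Trilinos-backed implementations are interchangeable without changing the optimization component.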

Slide 6: Solution of a two-dimensional heat equation on a square domain using an adaptive structured method.
IntegratorLSODE provides a second-order implicit time integrator, and Model provides a discretization. The remaining components are essentially utilities that construct the global ODE system, or adaptors that convert the patch-based data structures of the mesh to the globally distributed array structure used for runtime visualization.
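
Assembling an application like this is a separate composition step: each component is instantiated and its uses ports are wired to matching provides ports, either interactively or from a script. The toy builder below mimics that step; its createInstance/connect methods are a simplified illustration of what a CCA framework's builder service does (not the spec's actual signatures), and the instance and port names are hypothetical apart from the component classes named on the slide.

```cpp
#include <iostream>
#include <map>
#include <string>

// Toy stand-in for a framework's builder service (illustration only).
class Builder {
  std::map<std::string, std::string> instances;   // instance name -> component class
public:
  void createInstance(const std::string& name, const std::string& cls) {
    instances[name] = cls;
    std::cout << "instantiated " << name << " (" << cls << ")\n";
  }
  void connect(const std::string& user, const std::string& usesPort,
               const std::string& provider, const std::string& providesPort) {
    std::cout << "connected " << user << "." << usesPort
              << " -> " << provider << "." << providesPort << "\n";
  }
};

int main() {
  Builder b;
  // Component classes as named on the slide; instance and port names are hypothetical.
  b.createInstance("integrator", "IntegratorLSODE");
  b.createInstance("model",      "Model");
  b.createInstance("viz",        "GlobalArrayAdaptor");
  b.connect("integrator", "ModelPort",    "model", "ModelPort");
  b.connect("viz",        "MeshDataPort", "model", "MeshDataPort");
  return 0;
}
```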

Slide 7: Solution of a time-dependent PDE using a finite element discretization on an unstructured mesh.
IntegratorLSODE provides a second-order implicit time integrator, and FEMDiscretization provides a discretization. This application (and the other two applications as well) uses the DADFactory component to describe the parallel data layout so that the CumulvsMxN data redistribution component can then collate the data from a multi-processor run onto a single processor for runtime visualization.
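
The MxN machinery needs a description of how the array is laid out on each side before it can move data. The sketch below shows, under hypothetical names, the kind of information such a distributed-array descriptor carries (global dimensions plus the index block each rank owns) and the block intersections an MxN component computes when collating data onto a single visualization process; the real DADFactory and CumulvsMxN interfaces differ from this illustration.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical descriptor of one side of an MxN redistribution; the real
// DADFactory/CumulvsMxN interfaces differ, this only illustrates the idea.
struct BlockExtent { int lo[2], hi[2]; };            // inclusive index range owned by one rank

struct DistArrayDescriptor {
  int globalDims[2];                                 // global array size
  std::vector<BlockExtent> blocks;                   // one entry per owning process
};

// Which part of a source block overlaps a destination block; an MxN component
// computes such intersections (plus the communication) for every
// source/destination pair when collating data onto the visualization process.
bool intersect(const BlockExtent& a, const BlockExtent& b, BlockExtent& out) {
  for (int d = 0; d < 2; ++d) {
    out.lo[d] = a.lo[d] > b.lo[d] ? a.lo[d] : b.lo[d];
    out.hi[d] = a.hi[d] < b.hi[d] ? a.hi[d] : b.hi[d];
    if (out.lo[d] > out.hi[d]) return false;         // empty overlap in this dimension
  }
  return true;
}

int main() {
  // An 8x8 array split row-wise over two source ranks, collated onto one rank.
  DistArrayDescriptor src;
  src.globalDims[0] = 8; src.globalDims[1] = 8;
  src.blocks = { {{0, 0}, {3, 7}},                   // rank 0 owns rows 0..3
                 {{4, 0}, {7, 7}} };                 // rank 1 owns rows 4..7
  BlockExtent dest = {{0, 0}, {7, 7}}, overlap;      // the single visualization rank owns everything
  for (int r = 0; r < static_cast<int>(src.blocks.size()); ++r)
    if (intersect(src.blocks[r], dest, overlap))
      std::printf("source rank %d sends rows %d..%d\n", r, overlap.lo[0], overlap.hi[0]);
  return 0;
}
```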