Presentation transcript:

Slide 1: Scientific Discovery through Advanced Computation: Performance Engineering Research Institute
David H. Bailey, Lawrence Berkeley National Laboratory
November 13, 2006 (www.peri-scidac.org)

Slide 2: Participating Institutions
- Argonne National Laboratory
- Lawrence Berkeley National Laboratory
- Lawrence Livermore National Laboratory
- Oak Ridge National Laboratory
- Rice University
- University of California at San Diego
- University of Maryland
- University of North Carolina
- University of Southern California
- University of Tennessee

Slide 3: SciDAC: Scientific Discovery through Advanced Computation
- DOE Office of Science's path to petascale computational science
- Maximizing performance is getting harder:
  - Systems are more complicated: O(100K) multi-core CPUs, SIMD extensions
  - Codes are more complicated: multi-disciplinary, multi-scale
[Images: IBM BlueGene at LLNL; Cray XT3 at ORNL; BeamBeam3D accelerator modeling; POP model of El Niño]

Slide 4: Example of the Challenge
[Figure courtesy of Roger Logan, LLNL]

Slide 5: SciDAC-1: PERC
- Performance Evaluation Research Center (PERC)
- Initial goal was to develop performance-related tools: benchmarks, analysis, modeling, optimization
- Second phase refocused on SciDAC applications, including:
  - Community Climate System Model
  - Plasma Microturbulence Project
  - Omega3P accelerator model

Slide 6: Some Lessons Learned
- Performance portability is critical:
  - Codes outlive machines
  - Scientists can't publish that they migrated code
- Computational scientists were not interested in tools:
  - They wanted experts to work with them
  - Such experts are not scalable

Slide 7: SciDAC-2: PERI
- Performance Engineering Research Institute
- Performance modeling of applications: how fast do we expect to go?
- Automatic tuning:
  - Long-term research goal
  - Remove the burden from scientific programmers
- Application engagement: near-term impact on SciDAC applications

Slide 8: Performance Modeling
- Modeling is critical for automation of tuning:
  - Need to know where to focus effort (where are the bottlenecks?)
  - Need to know when we're done (how fast can we hope to go?)
- Obvious improvements: greater accuracy, reduced cost
- Modeling efforts contribute to procurements and other activities beyond PERI automatic tuning
(A minimal illustrative model is sketched below.)
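
To make the "how fast can we hope to go?" question concrete, here is a minimal sketch of one simple bound-based model: predicted time is the larger of compute time and memory-transfer time. This is an illustration only, not one of the models actually developed under PERI, and every machine parameter in it is a hypothetical example.

```c
/* A minimal, illustrative bound-based performance model.
 * NOT PERI's actual modeling methodology; all numbers are
 * hypothetical examples. */
#include <stdio.h>

/* Lower bound on run time from operation and data-movement counts
 * and the machine's peak rates. */
double predicted_time(double flops, double bytes,
                      double peak_flop_rate, double peak_bandwidth)
{
    double compute_time = flops / peak_flop_rate;
    double memory_time  = bytes / peak_bandwidth;
    /* The kernel can go no faster than its dominant bottleneck. */
    return compute_time > memory_time ? compute_time : memory_time;
}

int main(void)
{
    /* Hypothetical kernel: 2e9 flops moving 8e9 bytes, on a node with
     * 10 GFLOP/s peak and 5 GB/s memory bandwidth (invented numbers). */
    printf("predicted lower bound: %.3f s\n",
           predicted_time(2e9, 8e9, 10e9, 5e9));
    return 0;
}
```

Comparing such a bound against measured time tells a tuner both where the bottleneck lies (compute vs. memory) and how much headroom remains.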

Slide 9: Performance Tuning
- Humans have been doing this for 50 years
- Compilers have been doing it statically for 40 years
- Recent self-tuning libraries: PHIPAC, ATLAS, FFTW, SPIRAL, SPOOLES (see the FFTW sketch below)
- PERI goal: automatic performance tuning of applications
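
As a concrete example of what "self-tuning" means in the libraries listed above, the sketch below uses FFTW, which empirically times candidate FFT algorithms when a plan is created with the FFTW_MEASURE flag and then reuses the winner. The transform size and input values are arbitrary examples.

```c
/* Self-tuning as exposed by FFTW: planning with FFTW_MEASURE runs an
 * empirical search over candidate algorithms on this machine.
 * Link with -lfftw3 -lm. */
#include <fftw3.h>

int main(void)
{
    int n = 1024;  /* example transform size */
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

    /* FFTW_MEASURE times several candidate plans and keeps the fastest;
     * FFTW_ESTIMATE would skip the search and use a heuristic. */
    fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_MEASURE);

    /* Initialize input after planning: FFTW_MEASURE overwrites arrays. */
    for (int i = 0; i < n; i++) {
        in[i][0] = (double)i;  /* real part */
        in[i][1] = 0.0;        /* imaginary part */
    }

    fftw_execute(p);  /* the tuned transform, reusable many times */

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```

The search cost is paid once at planning time and amortized over every subsequent execution, the same trade-off PERI aims to automate for whole applications.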

Slide 10: Automatic Performance Tuning of Scientific Code
Long-term goals for PERI:
- Automate the process of tuning software to maximize its performance
- Reduce the performance-portability challenge facing computational scientists
- Address the problem that performance experts are in short supply
- Build on forty years of human experience and recent success with linear algebra libraries
[Figure: PERI automatic tuning framework]

Slide 11: Automatic Tuning Steps
- Triage: where to focus effort
- Semantic analysis: traditional compiler analysis
- Transformation: code restructuring
- Code generation: domain-specific code
- Off-line search: empirical experiments
- Assembly: choose the best components
- Training runs: performance data for feedback
- On-line search: optimize long-running jobs
(A sketch of the off-line search step follows the list.)
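
The transformation and off-line search steps can be illustrated together with a toy autotuner: parameterize one code transformation (loop tiling of a matrix multiply), time the variant for each candidate tile size, and keep the fastest. Everything below, the kernel, the candidate tile sizes, and the timing harness, is invented for illustration; PERI's actual framework coordinates these steps across multiple research tools.

```c
/* Toy off-line search: empirically pick the best tile size for a tiled
 * matrix multiply.  An illustration of the idea only; not PERI's
 * framework, and all sizes here are hypothetical. */
#include <stdio.h>
#include <time.h>

#define N 512

static double A[N][N], B[N][N], C[N][N];

/* One transformed variant: loop tiling with tile size T
 * (the "transformation" step, here parameterized by hand). */
static void matmul_tiled(int T)
{
    for (int ii = 0; ii < N; ii += T)
        for (int kk = 0; kk < N; kk += T)
            for (int jj = 0; jj < N; jj += T)
                for (int i = ii; i < ii + T && i < N; i++)
                    for (int k = kk; k < kk + T && k < N; k++)
                        for (int j = jj; j < jj + T && j < N; j++)
                            C[i][j] += A[i][k] * B[k][j];
}

int main(void)
{
    int candidates[] = {8, 16, 32, 64, 128};  /* hypothetical search space */
    int best_tile = candidates[0];
    double best_time = 1e30;

    for (int i = 0; i < N; i++)          /* arbitrary input data */
        for (int j = 0; j < N; j++) {
            A[i][j] = 1.0;
            B[i][j] = 2.0;
        }

    /* The "off-line search" step: run and time each variant. */
    for (int v = 0; v < (int)(sizeof candidates / sizeof candidates[0]); v++) {
        clock_t start = clock();
        matmul_tiled(candidates[v]);
        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("tile %3d: %.3f s\n", candidates[v], elapsed);
        if (elapsed < best_time) {
            best_time = elapsed;
            best_tile = candidates[v];
        }
    }
    printf("best tile size: %d (%.3f s)\n", best_tile, best_time);
    return 0;
}
```

A real system would generate the variants automatically, repeat timings to reduce noise, and search a much larger space than this exhaustive sweep.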

Slide 12: Early Results
- Empirical optimization of dense matrix-matrix multiplication (Hall, USC)
- Empirical optimization of the MADNESS kernel (Moore, UTK)

Slide 13: PERI Portal
- Automatic tuning is the common goal of half a dozen research projects
- No hope of actually integrating them into one system (e.g., the Open64, SUIF, and ROSE compilers)
- Instead, PERI will bring up a Web portal as our interface to application developers
- There will often be "a man behind the curtain"
- Goal is a research demonstration of capability

Slide 14: Application Engagement
- Application engagement:
  - Work directly with DOE computational scientists
  - Ensure successful performance porting of scientific software
  - Focus PERI research on real problems
- Application liaisons: build long-term personal relationships between PERI researchers and scientific code teams
- Tiger teams: focus on DOE's highest priorities (SciDAC-2, INCITE)
[Images: maximizing scientific throughput; optimizing arithmetic kernels]

Slide 15: Summary
SciDAC-2 Performance Engineering Research Institute:
- Performance modeling of scientific applications, so that we understand what performance is possible
- Automatic performance tuning, to relieve computational scientists of this recurring burden
- Near-term impact via direct engagement with SciDAC application teams