
1 Performance Engineering Technology for Complex Scientific Component Software
Allen D. Malony, Sameer Shende
{malony,shende}@cs.uoregon.edu
Department of Computer and Information Science
Computational Science Institute
University of Oregon
Argonne CCA Meeting, June 24, 2002

2 Outline
- Complexity and performance technology
- Developing performance interfaces for CCA
- Performance knowledge repository
- Performance observation
- TAU performance system
- Applications
- Implementation
- Concluding remarks

3 Problem Statement
- How do we create robust and ubiquitous performance technology for the analysis and tuning of component software in the presence of (evolving) complexity challenges?
- How do we apply performance technology effectively for the variety and diversity of performance problems that arise in the context of CCA components?

4 Extended Component Design
- PKC: Performance Knowledge Component
- POC: Performance Observability Component
[Figure: a generic component extended with PKC and POC]

5 Performance Knowledge
- Describe and store a component's "known" performance
  - Benchmark characterizations in a performance database
  - Empirical or analytical performance models
  - Saved information about component performance
- Use for performance-guided selection and deployment
- Use for runtime adaptation
- Representation must be in common forms, with standard means for accessing the performance information (see the sketch below)
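To make the stored-knowledge idea concrete, here is a minimal hedged sketch of one way a performance-knowledge entry for a component might be represented. Every type and field name here (ScalingModel, PerformanceKnowledge, benchmarkTimes) is invented for illustration and is not part of CCA or TAU.

```cpp
// Hypothetical sketch only: one possible shape for a performance-knowledge
// entry, combining benchmark characterizations with a fitted model.
#include <cmath>
#include <map>
#include <string>

// Empirical model fitted from benchmark data: t(n) ~ a + b*n + c*n*log(n).
struct ScalingModel {
  double a = 0.0, b = 0.0, c = 0.0;
  double predictSeconds(double n) const { return a + b * n + c * n * std::log(n); }
};

// A repository entry: measured characterizations keyed by platform, plus the
// model used for performance-guided selection and runtime adaptation.
struct PerformanceKnowledge {
  std::string componentName;                     // e.g. "LinearSolver"
  std::map<std::string, double> benchmarkTimes;  // platform -> measured seconds
  ScalingModel model;
};
```

A deployment tool holding such entries could rank candidate component implementations by model.predictSeconds(problemSize) before composing them.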

6 Performance Knowledge Repository and Component
- Component performance repository
  - Implement in the component architecture framework
  - Similar to the CCA component repository [Alexandria]
  - Access by component infrastructure
- View performance knowledge as a component (PKC)
  - PKC ports give access to performance knowledge (illustrative port sketch below)
    - to other components
    - back to the original component
- Static/dynamic component control and composition
- Component composition performance knowledge
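As an illustration of what a PKC port could look like, here is a hedged C++ sketch; the interface name and methods (PerformanceKnowledgePort, predictTime, characterizedPlatforms) are assumptions for this example, not a CCA specification.

```cpp
// Hypothetical sketch only: a "provides" port through which a PKC might expose
// stored performance knowledge to other components and to its own component.
#include <string>
#include <vector>

class PerformanceKnowledgePort {
 public:
  virtual ~PerformanceKnowledgePort() = default;

  // Predicted execution time of a named method at a given problem size,
  // drawn from the repository's empirical or analytical models.
  virtual double predictTime(const std::string& method, double problemSize) const = 0;

  // Platforms for which benchmark characterizations are stored.
  virtual std::vector<std::string> characterizedPlatforms() const = 0;
};
```

The component framework could use such a port for performance-guided selection at composition time or for runtime adaptation decisions.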

7 Performance Observation
- Ability to observe execution performance is important
  - Empirically derived performance knowledge does not require measurement integration in the component
  - Monitoring during execution to make dynamic decisions does; measurement integration is key
- Performance observation integration
  - Component integration: core and variant
  - Runtime measurement and data collection
  - Online and offline performance analysis

8 Performance Observation Component (POC)
- Performance observation in a performance-engineered component model
- Functional extension of the original component design
- Includes new component methods and ports for other components to access measured performance data
- Allows the original component to access its own performance data
- Encapsulated as a tightly coupled, co-resident performance observation object
- POC "provides" ports allow use of optimized interfaces to access "internal" performance observations (sketch below)
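The following hedged sketch suggests what a POC query port might offer; the names (PerformanceObservationPort, TimerSummary, setMeasurementEnabled) are illustrative assumptions, not part of CCA or TAU.

```cpp
// Hypothetical sketch only: a "provides" port a POC might expose so that other
// components, or the original component itself, can query measured data.
#include <cstddef>
#include <string>

struct TimerSummary {
  std::size_t calls = 0;     // number of measured invocations
  double inclusiveSecs = 0;  // time including callees
  double exclusiveSecs = 0;  // time excluding callees
};

class PerformanceObservationPort {
 public:
  virtual ~PerformanceObservationPort() = default;

  // Summary of measurements for one instrumented method of the component.
  virtual TimerSummary query(const std::string& method) const = 0;

  // Enable or disable measurement at runtime (observation control).
  virtual void setMeasurementEnabled(bool on) = 0;
};
```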

9 Component Composition Performance Engineering
- Performance of component-based scientific applications depends on the interplay of component functions and the computational resources available
- Management of component compositions throughout execution is critical to successful deployment and use
- Identify key technological capabilities needed to support the performance engineering of component compositions
- Two model concepts
  - performance awareness
  - performance attention

10 Performance Awareness of Component Ensembles
- Composition performance knowledge and observation
- Composition performance knowledge
  - Can come from empirical and analytical evaluation
  - Can utilize information provided at the component level
  - Can be stored in repositories for future review
- Extends the notion of component observation to ensemble-level performance monitoring
  - Associate monitoring components with hierarchical component groupings
  - Build upon component-level observation support
  - Monitoring components act as performance integrators and routers
  - Use component framework mechanisms

11 Performance Engineered Component
- Four parts
  - Performance knowledge
    - Characterization
    - Model
  - Performance observation
    - Measurement
    - Analysis
  - Performance query
  - Performance control
- Extend component design for performance engineering
- Keep consistent with the CCA model

12 TAU Performance System Framework
- Tuning and Analysis Utilities
- Performance system framework for scalable parallel and distributed high-performance computing
- Targets a general complex system computation model
  - nodes / contexts / threads
  - Multi-level: system / software / parallelism
  - Measurement and analysis abstraction
- Integrated toolkit for performance instrumentation, measurement, analysis, and visualization (instrumentation sketch below)
  - Portable, configurable performance profiling/tracing facility
- Open software approach
  - University of Oregon, LANL, FZJ Germany
  - http://www.cs.uoregon.edu/research/paracomp/tau
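For flavor, here is a minimal sketch of manual source instrumentation with TAU's profiling macros, assuming a TAU installation and a TAU-configured build; the routine name and the single-node setup are illustrative, not taken from the talk.

```cpp
// Minimal sketch of manual TAU instrumentation (assumes a TAU-configured build;
// the routine name and single-node setup are illustrative).
#include <TAU.h>   // older TAU distributions use <Profile/Profiler.h>

void solve(int n) {
  // Declares and starts a timer for this routine; it stops when the scope
  // exits, and the measurement appears in the generated profile.
  TAU_PROFILE("solve", "void (int)", TAU_USER);
  for (int i = 0; i < n; ++i) {
    /* ... component work ... */
  }
}

int main(int argc, char** argv) {
  TAU_PROFILE("main", "int (int, char**)", TAU_DEFAULT);
  TAU_PROFILE_SET_NODE(0);  // single-node run; an MPI code would use its rank
  solve(100000);
  return 0;
}
```

In practice, the same instrumentation can be inserted automatically by the PDT-based tau_instrumentor mentioned on slide 17.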

13 General Complex System Computation Model
- Node: physically distinct shared-memory machine
  - Message-passing node interconnection network
- Context: distinct virtual memory space within a node
- Thread: execution threads (user/system) in a context
[Figure: physical and model views of SMP nodes (node memory, VM-space contexts, threads) connected by an interconnection network for inter-node message communication]

14 TAU Performance System Architecture
[Figure: TAU performance system architecture; diagram labels include EPILOG and Paraver]

15 TAU Status
- Instrumentation supported: source, preprocessor, compiler, MPI, runtime, virtual machine
- Languages supported: C++, C, F90, Java, Python, HPF, ZPL, HPC++, pC++, ...
- Packages supported:
  - PAPI [UTK], PCL [FZJ] (hardware performance counter access)
  - Opari, PDT [UO, LANL, FZJ], DyninstAPI [U. Maryland] (instrumentation)
  - EXPERT, EPILOG [FZJ], Vampir [Pallas], Paraver [CEPBA] (visualization)
- Platforms supported: IBM SP, SGI Origin, Sun, HP Superdome, Compaq ES, Linux clusters (IA-32, IA-64, PowerPC, Alpha), Apple, Windows, Hitachi SR8000, NEC SX, Cray T3E, ...
- Compiler suites supported: GNU, Intel KAI (KCC, KAP/Pro), Intel, SGI, IBM, Compaq, HP, Fujitsu, Hitachi, Sun, Apple, Microsoft, NEC, Cray, PGI, Absoft, ...
- Thread libraries supported: Pthreads, SGI sproc, OpenMP, Windows, Java, SMARTS

16 Program Database Toolkit
[Figure: PDT toolchain. Application/library sources go through C/C++ and Fortran 77/90 parsers, whose IL output is processed by C/C++ and Fortran 77/90 IL analyzers to produce program database (PDB) files accessed through DUCTAPE. Tools built on top: PDBhtml (program documentation), SILOON (application component glue), CHASM (C++/F90 interoperability), TAU_instr (automatic source instrumentation).]

17 Program Database Toolkit (PDT)
- Program code analysis framework for developing source-based tools for C99, C++, and F90
- High-level interface to source code information
- Widely portable: IBM, SGI, Compaq, HP, Sun, Linux clusters, Windows, Apple, Hitachi, Cray T3E, ...
- Integrated toolkit for source code parsing, database creation, and database query
  - Commercial-grade front-end parsers (EDG for C99/C++, Mutek for F90)
  - Intel/KAI C++ headers for the standard C++ library distributed with PDT
  - Portable IL analyzer, database format, and access API
  - Open software approach for tool development
- Targets and integrates multiple source languages
- Used in CCA for automated generation of SIDL
- Used in TAU to build automated performance instrumentation tools (tau_instrumentor); a hedged sketch of that idea follows
- Can be used to generate code for performance ports in CCA
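The sketch below shows only the general shape of a database-driven instrumentor: walk the routines recorded in a program database and decide where timers go. The RoutineInfo and loadRoutines names are invented for illustration and are NOT the actual PDT/DUCTAPE API.

```cpp
// Hypothetical sketch only: the shape of a source-instrumentation tool driven
// by a program database. RoutineInfo and loadRoutines are invented for
// illustration; they are NOT the actual PDT/DUCTAPE API.
#include <iostream>
#include <string>
#include <vector>

struct RoutineInfo {
  std::string name;
  std::string signature;
  int bodyBeginLine = 0;  // where a timer start would be inserted
};

// Stand-in for reading a parsed program database; a real tool would parse the
// .pdb file here. Returns a fixed sample so the sketch stays self-contained.
std::vector<RoutineInfo> loadRoutines(const std::string& /*pdbFile*/) {
  return { {"solve", "void (int)", 42} };
}

int main(int argc, char** argv) {
  if (argc < 2) return 1;
  // For each routine recorded in the database, report where instrumentation
  // (e.g. a TAU timer) would be inserted into the original source.
  for (const RoutineInfo& r : loadRoutines(argv[1])) {
    std::cout << "instrument " << r.name << " " << r.signature
              << " at line " << r.bodyBeginLine << "\n";
  }
  return 0;
}
```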

18 Performance Database Framework
[Figure: raw performance data in an XML profile representation, described by PerfDML, is converted by PerfDML translators into a multiple-experiment performance database (ORDB, PostgreSQL) that is accessed by a performance analysis and query toolkit and by performance analysis programs]
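To give a feel for the multi-experiment database idea, here is a hedged C++ sketch of querying such a store through libpq; the connection string, table, and column names (experiments, routine, mean_exclusive_secs) are invented for illustration and do not describe the actual PerfDML schema.

```cpp
// Hypothetical sketch only: querying a multi-experiment performance database
// with libpq. The table/column names are invented; not the PerfDML schema.
#include <libpq-fe.h>
#include <cstdio>

int main() {
  PGconn* conn = PQconnectdb("dbname=perfdb");  // connection string is an assumption
  if (PQstatus(conn) != CONNECTION_OK) { PQfinish(conn); return 1; }

  // Compare one routine's mean exclusive time across stored experiments.
  PGresult* res = PQexec(conn,
      "SELECT experiment_name, mean_exclusive_secs "
      "FROM experiments WHERE routine = 'solve' ORDER BY experiment_name");
  if (PQresultStatus(res) == PGRES_TUPLES_OK) {
    for (int i = 0; i < PQntuples(res); ++i)
      std::printf("%s: %s s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
  }
  PQclear(res);
  PQfinish(conn);
  return 0;
}
```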

19 Integrated Performance Evaluation Environment

20 Applications: VTF (ASCI ASAP Caltech)
- C++, C, F90, Python
- PDT, MPI

21 Applications: SAMRAI (LLNL)
- C++
- PDT, MPI
- SAMRAI timers (groups)

22 Applications: Uintah (U. Utah ASCI L1 Center)
- C++
- Mapping performance data, EXPARE experiment system
- MPI, sproc

23 Applications: Uintah (U. Utah)
- TAU uses SCIRun [U. Utah] for visualization of performance data (online/offline)

24 Applications: Uintah (contd.)
- Scalability analysis

25 Implementation
- We need the CCA forum to help:
  - standardize the component performance knowledge repository specification to facilitate sharing
  - define protocols for accessing performance data
  - define the interface for performance ports
  - support this effort
- Prototype implementation using TAU
- Identify target CCA projects

26 Concluding Remarks
- Complex component systems pose challenging performance analysis problems that require robust methodologies and tools
- New performance problems will arise
  - Instrumentation and measurement
  - Data analysis and presentation
  - Diagnosis and tuning
- Performance engineered components
  - Performance knowledge, observation, query, and control

27 References
- A. Malony and S. Shende, "Performance Technology for Complex Parallel and Distributed Systems," Proc. 3rd Workshop on Parallel and Distributed Systems (DAPSYS), pp. 37-46, Aug. 2000.
- S. Shende, A. Malony, and R. Ansell-Bell, "Instrumentation and Measurement Strategies for Flexible and Portable Empirical Performance Evaluation," Proc. Int'l Conf. on Parallel and Distributed Processing Techniques and Applications (PDPTA), CSREA, pp. 1150-1156, July 2001.
- S. Shende, "The Role of Instrumentation and Mapping in Performance Measurement," Ph.D. Dissertation, University of Oregon, Aug. 2001.
- J. de St. Germain, A. Morris, S. Parker, A. Malony, and S. Shende, "Integrating Performance Analysis in the Uintah Software Development Cycle," ISHPC 2002, Nara, Japan, May 2002.
- URL: http://www.cs.uoregon.edu/research/paracomp/tau

28 Support Acknowledgement
- TAU and PDT support:
  - Department of Energy (DOE)
    - DOE 2000 ACTS contract
    - DOE MICS contract
    - DOE ASCI Level 3 (LANL, LLNL)
    - U. of Utah DOE ASCI Level 1 subcontract
  - DARPA
  - NSF National Young Investigator (NYI) award

