Online Performance Monitoring, Analysis, and Visualization of Large-Scale Parallel Applications
Allen D. Malony, Sameer Shende, Robert Bell
Department of Computer and Information Science, Computational Science Institute, NeuroInformatics Center
University of Oregon

Outline
- Problem description
- Scaling and performance observation
- Concern for measurement intrusion
- Interest in online performance analysis
- General online performance system architecture
- Access models
- Profiling and tracing issues
- Experiments with the TAU performance system
  - Online profiling
  - Online tracing
- Conclusions and future work

Problem Description
- Need for parallel performance observation
  - Instrumentation, measurement, analysis, visualization
- In general, there is a concern about intrusion
  - Seen as a tradeoff with accuracy of performance diagnosis
- Scaling complicates observation and analysis
  - Issues of data size, processing time, and presentation
- Online approaches add capabilities as well as problems
  - Performance interaction, but at what cost?
- Tools for online, large-scale performance observation
  - Supporting performance system architecture
  - Tool integration, effective usage, and portability

Scaling and Performance Observation
- Consider "traditional" measurement methods
  - Profiling: summary statistics calculated during execution
  - Tracing: time-stamped sequence of execution events
- More parallelism → more performance data overall
  - Performance specific to each thread of execution
  - Possible increase in the number of interactions between threads
  - Harder to manage the data (memory, transfer, storage, ...)
  - Instrumentation more difficult with greater parallelism?
- More parallelism / performance data → harder analysis
  - More time consuming to analyze
  - More difficult to visualize (meaningful displays)
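To make the profiling/tracing contrast concrete, here is a minimal sketch of the two kinds of per-thread measurement records (hypothetical C++ types for illustration only, not TAU's actual data structures):

```cpp
#include <cstdint>
#include <vector>

// Profiling: one summary record per instrumented event (e.g., a routine),
// fixed size, updated in place during execution.
struct ProfileRecord {
    uint64_t calls = 0;        // number of invocations
    double   inclusive = 0.0;  // total time including callees (seconds)
    double   exclusive = 0.0;  // total time excluding callees (seconds)
};

// Tracing: one time-stamped record appended per event occurrence,
// so data volume grows with execution time, not just event count.
struct TraceEvent {
    uint64_t timestamp;  // when the event occurred
    uint32_t eventId;    // which event (routine enter/exit, message, ...)
    uint32_t threadId;   // which thread of execution
};

// Per-thread measurement state: the profile stays bounded by the number
// of events, while the trace buffer keeps growing (and must overflow).
struct ThreadMeasurement {
    std::vector<ProfileRecord> profile;      // indexed by event id
    std::vector<TraceEvent>    traceBuffer;  // appended in time order
};
```

The scaling problem follows from the shapes: profile storage is bounded by the number of instrumented events, while trace storage grows with execution time on every thread.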

Concern for Measurement Intrusion
- Performance measurement can affect the execution
  - Perturbation of "actual" performance behavior
  - Minor intrusion can lead to major execution effects
  - Problems exist even with a small degree of parallelism
- Intrusion is an accepted consequence of standard practice
  - Consider the intrusion (perturbation) of trace buffer overflow
- Scale exacerbates the problem ... or does it?
  - Traditional measurement techniques tend to be localized
  - Suggests scale may not compound local intrusion globally
  - Measuring parallel interactions likely will be affected
- Use accepted measurement techniques intelligently

Why Complicate Matters with Online Methods?
- Adds interactivity to the performance analysis process
- Opportunity for dynamic performance observation
  - Instrumentation change
  - Measurement change
- Allows for control of performance data volume
- Post-mortem analysis may be "too late"
  - View the status of long-running jobs
  - Allow for early termination
  - Computation steering to achieve "better" results
  - Performance steering to achieve "better" performance
- Hmm, isn't online performance observation intrusive?

Related Ideas
- Computational steering
  - Falcon (Schwan, Vetter): computational steering
- Dynamic instrumentation and performance search
  - Paradyn (Miller): online performance bottleneck analysis
- Adaptive control and performance steering
  - Autopilot (Reed): performance steering
- Peridot (Gerndt): automatic online performance analysis
- OMIS/OCM (Ludwig): monitoring system infrastructure
- Cedar (Malony): system/hardware monitoring
- Virtue (Reed): immersive performance visualization
- ...

General Online Performance Observation System
- Instrumentation and measurement components
- Analysis and visualization components
- Performance control and access
- Monitoring = measurement + access

[Architecture diagram: Performance Instrumentation, Performance Measurement, Performance Data, Performance Control, Performance Analysis, and Performance Visualization components]

Models of Performance Data Access (Monitoring)
- Push model
  - Producer/consumer style of access and transfer
  - Application decides when/what/how much data to send
  - External analysis tools only consume performance data
  - Availability of new data is signaled passively or actively
- Pull model
  - Client/server style of performance data access and transfer
  - Application is a performance data server
  - Access decisions are made externally by analysis tools
  - Two-way communication is required
- Push/pull models
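A minimal sketch of the two access models as C++ interfaces (all names are hypothetical; this is not TAU's or any existing monitor's actual API):

```cpp
#include <vector>

struct ProfileSample { /* performance data for one thread, one interval */ };

// Push model: the application decides when and what to send;
// the analysis tool passively consumes whatever arrives.
class PushConsumer {
public:
    virtual ~PushConsumer() = default;
    virtual void onData(const std::vector<ProfileSample>& samples) = 0;
};

class PushProducer {
public:
    explicit PushProducer(PushConsumer& sink) : sink_(sink) {}
    // Called from the application, e.g., at iteration boundaries.
    void emit(const std::vector<ProfileSample>& samples) { sink_.onData(samples); }
private:
    PushConsumer& sink_;
};

// Pull model: the application acts as a performance data server; the
// analysis tool decides externally when and what to request, which is
// why two-way communication is required.
class PerformanceDataServer {
public:
    virtual ~PerformanceDataServer() = default;
    // Tool-initiated request: samples collected since sequence number 'sinceSeq'.
    virtual std::vector<ProfileSample> query(unsigned long sinceSeq) = 0;
};
```

A push/pull combination would let the application emit data at its own pace while also answering tool-initiated queries.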

Online Profiling Issues
- Profiles are summary statistics of performance
  - Kept with respect to some unit of parallel execution
- Profiles are distributed across the machine (in memory)
  - Must be gathered and delivered to the profile analysis tool
  - Profile merging must take place (possibly in parallel)
- Consistency checking of profile data
  - Callstack must be updated to generate correct profile data
  - Correct communication statistics may require completion
  - Event identification (not necessary if event names are saved)
- A sequence of profile samples allows interval analysis (see the sketch below)
  - Interval frequency depends on profile collection delay
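As a sketch of the interval analysis mentioned above: successive profile samples are cumulative, so per-interval behavior comes from subtracting consecutive samples (hypothetical types; assumes events are keyed by name):

```cpp
#include <map>
#include <string>

// Cumulative statistics for one event, as read from one profile sample.
struct EventStats {
    unsigned long calls = 0;
    double exclusive = 0.0;  // seconds
};

using Profile = std::map<std::string, EventStats>;  // keyed by event name

// Interval analysis: subtracting the previous cumulative sample from the
// current one isolates the activity of that interval alone.
Profile intervalDelta(const Profile& prev, const Profile& curr) {
    Profile delta;
    for (const auto& [name, now] : curr) {
        EventStats d = now;
        if (auto it = prev.find(name); it != prev.end()) {
            d.calls     -= it->second.calls;
            d.exclusive -= it->second.exclusive;
        }
        delta[name] = d;  // events absent from 'prev' are new this interval
    }
    return delta;
}
```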

Online Tracing Issues
- Tracing gathers a time sequence of events
  - Possibly includes performance data in the event record
- Trace buffers are distributed across the machine
  - Must be gathered and delivered to the trace analysis tool
  - Trace merging is necessary (possibly in parallel; see the sketch below)
  - Trace buffers overflow to files (happens even offline)
- Consistency checking of trace data
  - May need to generate "ghost events" before and after
  - What portion of the trace to access (since the last access)?
- Trace analysis may be done in parallel
- Trace buffer storage volume can be controlled
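A minimal sketch of the gathering step: merging per-thread trace buffers into one time-ordered stream is a standard k-way merge (shown sequentially here, though as noted above it may also be done in parallel):

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct TraceEvent {
    uint64_t timestamp;
    uint32_t eventId;
    uint32_t threadId;
};

// k-way merge of per-thread trace buffers (each already time-ordered)
// into one global, time-ordered event stream.
std::vector<TraceEvent> mergeTraces(const std::vector<std::vector<TraceEvent>>& buffers) {
    using Cursor = std::pair<uint64_t, size_t>;  // (next timestamp, buffer index)
    std::priority_queue<Cursor, std::vector<Cursor>, std::greater<Cursor>> heap;
    std::vector<size_t> pos(buffers.size(), 0);

    for (size_t b = 0; b < buffers.size(); ++b)
        if (!buffers[b].empty()) heap.push({buffers[b][0].timestamp, b});

    std::vector<TraceEvent> merged;
    while (!heap.empty()) {
        size_t b = heap.top().second;  // buffer holding the earliest pending event
        heap.pop();
        merged.push_back(buffers[b][pos[b]]);
        if (++pos[b] < buffers[b].size())
            heap.push({buffers[b][pos[b]].timestamp, b});
    }
    return merged;
}
```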

Performance Control
- Instrumentation control
  - Dynamic instrumentation: inserts / removes instrumentation at runtime
- Measurement control
  - Dynamic measurement: enabling / disabling / changing of measurement code
  - Dynamic instrumentation or measurement variables
- Data access control
  - Selection of what performance data to access
  - Control of frequency of access
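As a sketch of measurement control from the application side, using TAU's C/C++ macro API (the macro names below follow TAU's documentation, but treat the exact spellings and behavior as assumptions):

```cpp
#include <TAU.h>  // TAU instrumentation and measurement API

void compute_phase();

int main(int argc, char** argv) {
    TAU_PROFILE("main", "int (int, char**)", TAU_DEFAULT);

    // Measurement control: enable/disable an event group at runtime,
    // e.g., stop measuring MPI events during a phase of no interest.
    TAU_DISABLE_GROUP_NAME("MPI");
    compute_phase();
    TAU_ENABLE_GROUP_NAME("MPI");

    // Coarser control: suspend all measurement temporarily.
    TAU_DISABLE_INSTRUMENTATION();
    compute_phase();
    TAU_ENABLE_INSTRUMENTATION();
    return 0;
}
```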

TAU Performance System Framework
- Tuning and Analysis Utilities (a.k.a. Tools Are Us)
- Performance system framework for scalable parallel and distributed high-performance computing
- Targets a general complex system computation model
  - Nodes / contexts / threads
  - Multi-level: system / software / parallelism
  - Measurement and analysis abstraction
- Integrated toolkit for performance instrumentation, measurement, analysis, and visualization
  - Portable performance profiling/tracing facility
  - Open software approach

TAU Performance System Architecture

[Architecture diagram, showing EPILOG, Paraver, and ParaProf among the analysis components]

Online Profile Measurement and Analysis in TAU
- Standard TAU profiling
  - Per node / context / thread
- Profile "dump" routine (see the sketch below)
  - Context-level
  - Profile file per thread in the context
  - Appends to the profile file
  - Selective event dumping
- Analysis tools access the files through a shared file system
- Application-level profile "access" routine
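A sketch of how an application might invoke the profile dump routine periodically so analysis tools can pick up fresh files; TAU_DB_DUMP is TAU's documented profile-dump macro, while the surrounding loop is purely illustrative:

```cpp
#include <TAU.h>

void timestep();

int main(int argc, char** argv) {
    TAU_PROFILE("main", "int (int, char**)", TAU_DEFAULT);

    const int nsteps = 1000;
    for (int step = 0; step < nsteps; ++step) {
        timestep();
        // Periodically write the current profile state to files (one per
        // thread in this context); an online analysis tool then reads the
        // files through the shared file system.
        if (step % 50 == 0) {
            TAU_DB_DUMP();
        }
    }
    return 0;
}
```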

ParaProf Framework Architecture
- Portable, extensible, and scalable tool for profile analysis
- Offers "best of breed" capabilities to performance analysts
- Built as a profile analysis framework for extensibility

ParaProf Profile Display (VTF)
- Virtual Testshock Facility (VTF), Caltech, ASCI Center
- Dynamic measurement, online analysis, visualization

Full Profile Display (SAMRAI++)
- Structured AMR toolkit (SAMRAI++), LLNL
- 512 processes

Online Performance Profile Analysis (K. Li, UO)

[Dataflow diagram: the application, measured by the TAU performance system, writes performance data output to the file system; a performance data reader (sample sequencing, reader synchronization) feeds a performance data integrator (accumulated samples), followed by a performance analyzer and performance visualizer built in SCIRun (Univ. of Utah); performance steering closes the loop back to the application]
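A toy sketch of the reader end of this pipeline, assuming the file-system-based transport used in these experiments: poll the dump directory and notice when new sample data has been appended (hypothetical code, not the actual SCIRun reader):

```cpp
#include <chrono>
#include <cstdint>
#include <filesystem>
#include <iostream>
#include <thread>

namespace fs = std::filesystem;

// Poll the directory where the application dumps profile samples; growth
// in total file size signals that a new sample can be read and sequenced.
void watchProfiles(const fs::path& dir) {
    std::uintmax_t lastSeen = 0;
    for (;;) {
        std::uintmax_t total = 0;
        for (const auto& entry : fs::directory_iterator(dir))
            if (entry.is_regular_file()) total += entry.file_size();

        if (total > lastSeen) {
            std::cout << "new profile sample data available\n";
            lastSeen = total;  // a real reader would parse the new records here
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
```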

Performance Visualization in SCIRun

[Screenshot: SCIRun program]

Uintah Computational Framework (UCF)
- University of Utah
- UCF analysis
  - Scheduling
  - MPI library
  - Components
- 500 processes
- Use for online and offline visualization
- Apply SCIRun steering

Online Uintah Performance Profiling
- Demonstration of the online profiling capability
- Colliding elastic disks
  - Test material point method (MPM) code
  - Executed on 512 processors of ASCI Blue Pacific at LLNL
- Example 1 (terrain visualization)
  - Exclusive execution time across event groups
  - Multiple time steps
- Example 2 (bargraph visualization)
  - MPI execution time and performance mapping
- Example 3 (domain visualization)
  - Task time allocation to "patches"

Example 1
[Screenshot: terrain visualization]

Example 2
[Screenshot: bargraph visualization]

Example 2 (continued)
[Screenshot]

Example 3
[Screenshot: domain visualization]

Online Trace Analysis and Visualization
- Tracing is more challenging to do online
  - Trace buffer overflow can already be viewed as "online"
    - Write to the file system (local/remote) on overflow
    - Causes large intrusion on the execution (not synchronized)
  - There is potentially a lot more data to move around
- TAU does dynamic event registration
  - Requires trace merging to make event ids consistent (see the sketch below)
  - Track events that actually occur
  - Static schemes must predefine all possible events
- Decision on whether to keep trace data
  - Traces can be analyzed to produce statistics
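A sketch of why dynamic event registration forces id unification during merging: each thread numbers events in first-occurrence order, so local ids must be mapped to global ids through the event names (hypothetical code, not taumerge's actual implementation):

```cpp
#include <cstdint>
#include <map>
#include <string>

// With dynamic event registration, each thread assigns local ids in the
// order events first occur, so the same routine can end up with different
// ids on different threads. Merging unifies them through the event names.
class EventUnifier {
public:
    // Map one thread's (local id -> event name) table to global ids.
    std::map<uint32_t, uint32_t> unify(const std::map<uint32_t, std::string>& local) {
        std::map<uint32_t, uint32_t> localToGlobal;
        for (const auto& [localId, name] : local) {
            auto [it, inserted] = globalIds_.try_emplace(name, nextId_);
            if (inserted) ++nextId_;  // first time this event name is seen anywhere
            localToGlobal[localId] = it->second;
        }
        return localToGlobal;
    }
private:
    std::map<std::string, uint32_t> globalIds_;  // name -> global id
    uint32_t nextId_ = 0;
};
```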

VNG Parallel Distributed Trace Analysis
- Holger Brunst, Technical University Dresden
  - In association with Wolfgang Nagel (ASCI PathForward)
  - Brunst currently visiting the University of Oregon
- Based on experience in the development and use of Vampir
- Client-server model with parallel analysis servers
  - Allows parallel analysis servers and remote visualization
  - Keep trace data close to where it was produced
  - Utilize parallel computing and storage resources
  - Hope to gain speedup efficiencies
- Split analysis and visualization functionality
- Accepts VTF, STF, and TAU trace formats

VNG System Architecture
- Client-server model with parallel analysis servers
- Allows parallel analysis servers and remote analysis

[Diagram: vgn client connected via sockets to the vgnd analysis server, which is parallelized with MPI and pthreads]

Online Trace Analysis with TAU and VNG
- TAU measurement of the application generates traces
- Traces are (currently) written to NFS files and unified with taumerge
  - Needed for event consistency
- Trace access control (not yet implemented)

[Diagram: TAU measurement system → taumerge → vgnd → vgn]

Experimental Online Tracing Setup
- 32-processor Linux cluster

Online Trace Analysis of PERC EVH1 Code
- Enhanced Virginia Hydrodynamics #1 (EVH1)
- Strange behavior seen on Linux platforms

Evaluation of Experimental Approaches
- Currently supporting only the push model
- File system solution for moving performance data
  - Is this a scalable solution?
  - Robust solution that can leverage high-performance I/O
  - May result in high intrusion
  - However, does not require IPC
- Resolving identifiers in trace events is a real problem
- Should be relatively portable

Possible Improvements
- Profile merging at the context level to reduce the number of files
  - Merging at the node level may require explicit processing
- Concurrent trace merging could also reduce files
  - Hierarchical merge tree
  - Will require explicit processing
- Could consider IPC transfer (see the sketch below)
  - MPI (e.g., used in mpiP for profile merging)
    - Create own communicators
  - Sockets
  - PACX between compute server and performance analyzer
  - ...
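As a sketch of the MPI-based IPC option: merge per-rank profiles over a communicator dedicated to the tool, so measurement traffic stays separate from the application's own communication (hypothetical; not mpiP's actual code):

```cpp
#include <mpi.h>
#include <vector>

// Merge per-rank exclusive-time profiles onto rank 0 over a communicator
// dedicated to the tool, keeping measurement traffic out of the
// application's messages on MPI_COMM_WORLD.
// Assumes MPI is initialized and every rank indexes the same events in
// the same order.
std::vector<double> mergeProfiles(const std::vector<double>& localExclusive) {
    MPI_Comm perfComm;
    MPI_Comm_dup(MPI_COMM_WORLD, &perfComm);  // the tool's own communicator

    std::vector<double> summed(localExclusive.size());
    MPI_Reduce(localExclusive.data(), summed.data(),
               static_cast<int>(localExclusive.size()),
               MPI_DOUBLE, MPI_SUM, 0, perfComm);

    MPI_Comm_free(&perfComm);
    return summed;  // meaningful on rank 0 only
}
```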

Large-Scale System Support
- Larger parallel systems will have better infrastructure
  - Higher-performance I/O systems and multiple I/O nodes
  - Faster, higher-bandwidth networks (possibly several)
  - Processors devoted to system operations
- Hitachi SR8000
  - System processor per node (plus 8 computational processors)
  - Remote DMA (RDMA)
    - RDMA may be becoming available on InfiniBand
- Blue Gene/L
  - 1024 I/O nodes (one per 64 processors) with large memory
  - Tree network for I/O operations, and GigE as well

Concluding Remarks
- Interest in online performance monitoring, analysis, and visualization for large-scale parallel systems
- Need to use it intelligently
  - Benefit from other scalability considerations of the system software and system architecture
  - See it as an extension of the parallel system architecture
  - Avoid solutions that have portability difficulties
- In part, this is an engineering problem
  - Need to work with the system configuration you have
  - Need to understand whether an approach is applicable to the problem
  - Not clear there is a single solution

Future Work
- Build online support into the TAU performance system
  - Extend to support pull-model capabilities
  - Hierarchical data access solutions
  - Performance studies
- Integrate with SuperMon (Matt Sottile, LANL)
  - Scalable system performance monitor
- Integration with other performance tools
- ...
