1 PAPI Evaluation
Patricia J. Teller, Maria G. Aguilera, Thientam Pham, and Roberto Araiza
(Leonardo Salayandia, Alonso Bayona, Manuel Nieto, and Michael Maxwell)
PCAT (Performance Counter Assessment Team), Computer Science Department, University of Texas at El Paso
PAPI Development Team
SC 2003, Phoenix, AZ, November 17-20, 2003
Supported by the Department of Defense PET Program

2 Main Objectives
Provide DoD users with a set of documentation that enables them to easily collect, analyze, and interpret hardware performance data that is highly relevant to analyzing and improving the performance of applications on HPC platforms.

3 Evaluation: Objectives
- Understand and explain counts obtained for various PAPI metrics
- Determine reasons why counts may differ from what is expected
- Calibrate counts, excluding PAPI overhead
- Work with vendors and/or the PAPI team to fix errors
- Provide DoD users with information that will allow them to effectively use collected performance data

4 Evaluation: Methodology - 1
1. Micro-benchmark: design and implement a micro-benchmark that facilitates event count prediction
2. Prediction: predict event counts using tools and/or mathematical models
3. Data collection-1: collect hardware-reported event counts using PAPI (see the sketch after this list)
4. Data collection-2: collect predicted event counts using a simulator (not always necessary or possible)
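As an illustration of steps 1 and 3, here is a minimal sketch of a PAPI-instrumented micro-benchmark using the classic PAPI high-level counter interface available in PAPI releases of this era. The event choice and the loop body are illustrative assumptions, not taken from the actual PCAT benchmarks.

```c
#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void)
{
    int events[1] = { PAPI_TOT_INS };   /* instructions completed */
    long_long counts[1];                /* PAPI's 64-bit counter type */

    /* Start counting around a workload whose event count is easy to
       predict: a fixed-trip-count loop. */
    if (PAPI_start_counters(events, 1) != PAPI_OK) {
        fprintf(stderr, "PAPI_start_counters failed\n");
        return EXIT_FAILURE;
    }

    volatile double x = 0.0;
    for (int i = 0; i < 1000; i++)   /* illustrative workload */
        x += 1.0;

    if (PAPI_stop_counters(counts, 1) != PAPI_OK) {
        fprintf(stderr, "PAPI_stop_counters failed\n");
        return EXIT_FAILURE;
    }

    /* The reported count includes any overhead of the PAPI interface
       itself; see the calibration examples later in this deck. */
    printf("reported instructions completed: %lld\n", counts[0]);
    return EXIT_SUCCESS;
}
```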

5 Evaluation: Methodology - 2
5. Comparison: compare predicted and hardware-reported event counts (a sketch follows this list)
6. Analysis: analyze results to identify and possibly quantify differences
7. Alternate approach: when analysis indicates that prediction is not possible, use an alternate means to either verify reported event count accuracy or demonstrate that the reported event count seems reasonable
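The comparison in step 5 amounts to summarizing repeated runs of each configuration (the PCAT scripts run each benchmark 100 times) and reporting the deviation from the prediction. A minimal sketch, with illustrative values rather than measured data:

```c
#include <math.h>
#include <stdio.h>

/* Summarize repeated runs of one benchmark configuration: mean,
   standard deviation, and % difference from the predicted count. */
static void summarize(const long long *reported, int runs, long long predicted)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < runs; i++)
        mean += (double)reported[i];
    mean /= runs;
    for (int i = 0; i < runs; i++) {
        double d = (double)reported[i] - mean;
        var += d * d;
    }
    var /= runs;
    printf("mean %.2f, stddev %.6f, %% difference %.4f\n",
           mean, sqrt(var),
           100.0 * (mean - (double)predicted) / (double)predicted);
}

int main(void)
{
    /* Illustrative values only (not measured data). */
    long long reported[] = { 3539, 3539, 3539 };
    summarize(reported, 3, 3400);
    return 0;
}
```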

6 Example Findings - 1
- Some hardware-reported event counts mirror expected behavior, e.g., the number of floating-point instructions on the MIPS R10K and R12K.
- Other hardware-reported event counts can be calibrated to mirror expected behavior by subtracting the part of the count associated with the interface (overhead, or bias error), e.g., the number of load instructions on the MIPS and POWER processors and instructions completed on the POWER3.
- In some cases, compiler optimizations affect event counts, e.g., the number of floating-point instructions on the IBM POWER platforms.

7 Example Findings - 2
- Very long instruction words can affect event counts, e.g., on the Itanium architecture the numbers of instruction cache misses and instructions retired are dilated by the no-ops used to compose very long instruction words.
- The definition of an event count may be non-standard and, thus, the associated performance data may be misleading, e.g., instruction cache hits on the POWER3.
- The complexity of hardware features and the lack of documentation can make it difficult to understand how to tune performance based on information gleaned from event counts, e.g., data prefetching and the page walker.

8 Example Findings - 3
- Although we have not been able to determine the algorithms used for prefetching, the ingenuity and performance of these mechanisms are striking.
- In some cases, more instructions are completed than issued on the R10K.
- The DTLB miss count on the POWER3 varies depending upon the method used to allocate memory (i.e., static, calloc, or malloc).
- Hardware SQRT on the POWER3 is not counted in total floating-point operations unless combined with another floating-point operation.

9 Publications
Papers
- DoD Users Group Conference (with Shirley Moore), June 2003.
- LACSI 2002 (with Shirley Moore), October 2002.
- DoD Users Group Conference (with members of the PAPI team), June 2002.
- "Hardware Performance Metrics and Compiler Switches: What you see is not always what you get," with Luiz Derose, submitted for publication.
Posters
- "Hardware Performance Counters: Is what you see, what you get?," SC2003.
Presentations
- PTools Workshop, September 2002.
- Conference presentations for the papers above.

10 Calibration Example - 1
Instructions completed; PAPI overhead: 139 on the POWER3-II

Number of C-level instructions   0 (base)   10    100   1000   10000   100000
Predicted count                  0          34    340   3400   34000   340000
Mean reported count              139        173   479   3539   34139   340139
Standard deviation               0          0     0     0      0       0
Reported - predicted             139        139   139   139    139     139

11 Calibration Example - 2
Instructions completed; PAPI overhead: 141 for small micro-benchmarks

Number of C-level instructions   0 (base)   10    100   1000      10000      100000
Predicted count                  0          34    340   3400      34000      340000
Mean reported count              141        175   481   3541.04   34152.57   340267.9
Standard deviation               0          0     0     1.14e-5   0.000339   0.000373
Reported - predicted             141        141   141   141.04    152.57     267.90
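The calibration itself is a single subtraction: measure the count reported for an empty instrumented region (the "0 (base)" column) and subtract that overhead from subsequent measurements. A minimal sketch; the helper and workload are illustrative assumptions, and the values in comments come from the POWER3-II table above:

```c
#include <stdio.h>
#include <papi.h>

/* Measure one event around an optional workload; error checking
   omitted for brevity. */
static long_long measure(void (*work)(void))
{
    int events[1] = { PAPI_TOT_INS };   /* instructions completed */
    long_long count[1];
    PAPI_start_counters(events, 1);
    if (work != NULL)
        work();
    PAPI_stop_counters(count, 1);
    return count[0];
}

static void workload(void)
{
    volatile int x = 0;
    for (int i = 0; i < 10000; i++)   /* illustrative workload */
        x += 1;
}

int main(void)
{
    long_long base = measure(NULL);      /* e.g., 139 on the POWER3-II */
    long_long raw  = measure(workload);  /* e.g., 34139 */
    printf("calibrated count: %lld\n", raw - base);   /* e.g., 34000 */
    return 0;
}
```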

12 RIB/OKC for Evaluation Resources
- Object-oriented data model to store benchmarks, results, and analyses
- Information organized for ease of use by colleagues external to PCAT
- To be web-accessible to members
- Objects linked to one another as appropriate
Object types:
- Benchmark: general description of a benchmark
- Machine: description of a platform
- Case: specific implementation and results
- Organization: contact information

13 PCAT RIB/OKC Data Repository Example
Benchmark name: DTLB misses
Development date: 12/2002
Benchmark type: Array
Abstract: The code traverses through an array of integers once at regular strides of PAGESIZE. The intention is to cause a compulsory miss on each array access. The input parameters are page size (bytes) and array size (bytes). The number of misses normally expected is: array size / page size.
Files included: dtlbmiss.c, dtlbmiss.pl
About included files:
- dtlbmiss.c: benchmark source code in C; requires pagesize and arraysize parameters as input and outputs the PAPI event count.
- dtlbmiss.pl: Perl script that executes the benchmark 100 times for increasing arraysize values and saves the benchmark output to a text file. The script should be customized for the pagesize parameter and the arraysize range.
Links to files
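The actual dtlbmiss.c is not reproduced in this transcript; the following is a minimal sketch of what a benchmark matching the abstract could look like, with the PAPI calls and program structure as assumptions based on the description above:

```c
#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s pagesize arraysize\n", argv[0]);
        return EXIT_FAILURE;
    }
    size_t pagesize  = (size_t)atol(argv[1]);   /* bytes */
    size_t arraysize = (size_t)atol(argv[2]);   /* bytes */

    int *array = malloc(arraysize);
    if (array == NULL)
        return EXIT_FAILURE;

    int events[1] = { PAPI_TLB_DM };   /* data TLB misses */
    long_long count[1];
    PAPI_start_counters(events, 1);    /* error checking omitted */

    /* One access per page: stride through the array at PAGESIZE
       intervals so each reference should be a compulsory DTLB miss.
       The values read are irrelevant; only the page touches matter. */
    size_t stride = pagesize / sizeof(int);
    volatile int sink = 0;
    for (size_t i = 0; i < arraysize / sizeof(int); i += stride)
        sink += array[i];

    PAPI_stop_counters(count, 1);

    printf("predicted misses: %zu, reported: %lld\n",
           arraysize / pagesize, count[0]);
    free(array);
    return EXIT_SUCCESS;
}
```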

14 PCAT RIB/OKC Example Case Object
Name: DTLB misses on Itanium
Date: 12/2002
Compiler and options: gcc ver 2.96 20000731 (Red Hat Linux 7.1 2.96-101), -O0
PAPI event: PAPI_TLB_DM, data TLB misses
Native event: DTLB_MISSES
Experimental methodology: ran the benchmark 100 times with a Perl script; averages and standard deviations reported
Input parameters used: page size = 16K; array size = 16K to 160M (incremented by multiples of 10)
Platform used: HP01.cs.utk.edu (Itanium)
Developed by: PCAT
Benchmark used: DTLB misses
Links to other objects

15 PCAT RIB/OKC Example Case Object
Results summary: Reported counts closely match the predicted counts, showing differences close to 0% even in the cases with a small number of data references, which may be more susceptible to external perturbation. The counts indicate that prefetching is not performed at the DTLB level.
Included files and description:
- dtlbmiss.itanium.c: source code of the benchmark, instrumented with PAPI to count PAPI_TLB_DM
- dtlbmiss.itanium.pl: Perl script used to run the benchmark
- dtlbmiss.itanium.txt: raw data obtained; each column contains results for a particular array size, and each case is run 100 times (i.e., 100 rows included)
- dtlbmiss.itanium.xls: includes raw data, averages of runs, standard deviations, and a graph of the % difference between reported and predicted counts
- dtlbmiss.itanium.pdf: same as dtlbmiss.itanium.xls

16 Contributions
- Infrastructure that facilitates user access to hardware performance data that is highly relevant for analyzing and improving the performance of their applications on HPC platforms.
- Information that allows users to use the collected data effectively and with confidence.

17 QUESTIONS?

