Höchstleistungsrechenzentrum Stuttgart, Matthias Müller: Current Efforts of SPEC HPG. Application Benchmarks for High Performance Computing. IPSJ SIGMPS 2003.


Current Efforts of SPEC HPG: Application Benchmarks for High Performance Computing. IPSJ SIGMPS 2003. Matthias Mueller, High Performance Computing Center Stuttgart. Kumaran Kalyanasundaram, G. Gaertner, W. Jones, R. Eigenmann, R. Lieberman, M. van Waveren, and B. Whitney, SPEC High Performance Group.

Outline
–What is SPEC and SPEC HPG?
–Why do we need benchmarks?
–Benchmarks currently produced by SPEC HPG
–What do we need for the future?

What is SPEC? The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain, and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops suites of benchmarks and also reviews and publishes submitted results from its member organizations and other benchmark licensees. For more details, see the SPEC web site.

SPEC Members Members: 3DLabs * Advanced Micro Devices * Apple Computer, Inc. * ATI Research * Azul Systems, Inc. * BEA Systems * Borland * Bull S.A. * Dell * Electronic Data Systems * EMC * Encorus Technologies * Fujitsu Limited * Fujitsu Siemens * Fujitsu Technology Solutions * Hewlett-Packard * Hitachi Data Systems * IBM * Intel * ION Computer Systems * Johnson & Johnson * Microsoft * Mirapoint * Motorola * NEC - Japan * Network Appliance * Novell, Inc. * Nvidia * Openwave Systems * Oracle * Pramati Technologies * PROCOM Technology * SAP AG * SGI * Spinnaker Networks * Sun Microsystems * Sybase * Unisys * Veritas Software * Zeus Technology *

SPEC HPG = SPEC High-Performance Group. Founded in 1994. Mission: to establish, maintain, and endorse a suite of benchmarks that are representative of real-world high-performance computing applications. SPEC HPG includes members from both industry and academia. Benchmark products:
–SPEC OMP (OMPM2001, OMPL2001)
–SPEC HPC2002, released at SC2002

Currently active SPEC HPG members: Fujitsu, HP, IBM, Intel, SGI, Sun, Unisys, Purdue University, University of Stuttgart.

Where is SPEC Relative to Other Benchmarks? There are many metrics; each one has its purpose. They form a spectrum ranging from computer hardware to applications:
–Raw machine performance: Tflops
–Microbenchmarks: Stream
–Algorithmic benchmarks: Linpack
–Compact apps/kernels: NAS benchmarks
–Application suites: SPEC
–User-specific applications: custom benchmarks

Why do we need benchmarks?
–Identify problems: measure machine properties
–Time evolution: verify that we make progress
–Coverage: help the vendors to have representative codes: increase competition through transparency, and drive future development (see SPEC CPU2000)
–Relevance: help the customers to choose the right computer

Comparison of different benchmark classes

              Coverage   Relevance   Identify problems   Time evolution
Micro            0           0              ++                 +
Algorithmic      -           0              +                  ++
Kernels          0           0              +                  +
SPEC             +           +              +                  +
Apps             -           ++             0                  0

SPEC OMP
–Benchmark suite developed by SPEC HPG
–Benchmark suite for performance testing of shared-memory processor systems
–Uses OpenMP versions of SPEC CPU2000 benchmarks
–SPEC OMP mixes integer and FP in one suite
–OMPM is focused on 4-way to 16-way systems
–OMPL is targeting 32-way and larger systems

SPEC OMP Applications

Code      Application                          Language   Lines
ammp      Molecular dynamics                   C
applu     CFD, partial LU                      Fortran     4000
apsi      Air pollution                        Fortran     7500
art       Image recognition / neural networks  C           1300
fma3d     Crash simulation                     Fortran
gafort    Genetic algorithm                    Fortran     1500
galgel    CFD, Galerkin FE                     Fortran
equake    Earthquake modeling                  C           1500
mgrid     Multigrid solver                     Fortran      500
swim      Shallow water modeling               Fortran      400
wupwise   Quantum chromodynamics               Fortran     2200

CPU2000 vs. OMPL2001

SPEC OMP Results: 66 submitted results for OMPM, 24 submitted results for OMPL.

Vendor         HP          HP          Sun              SGI
Architecture   Superdome   Superdome   Fire 15K         O3800
CPU            PA-8700+    Itanium 2   UltraSPARC III   R12000
Speed
L1 Inst        0.75 MB     16 KB       32 KB
L1 Data        1.5 MB      16 KB       64 KB            32 KB
L2             -           256 KB      8 MB
L3             -           -           -                -

SPEC OMPL Results: Applications with scaling to 128

SPEC OMPL Results: Superlinear scaling of applu
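The scaling results above are read off speedup-versus-CPU-count curves; superlinear scaling, as observed for applu, usually means the per-CPU working set dropped into cache at the higher CPU counts. A minimal sketch of the arithmetic behind such charts, with made-up timings (these are illustrative numbers, not published SPEC results):

```python
# Speedup and parallel efficiency from wall-clock timings. Efficiency
# above 1.0 relative to the baseline configuration marks superlinear
# scaling (e.g. from cache effects, as seen for applu in OMPL).

def speedup(t_base: float, t_parallel: float) -> float:
    """Speedup of a run relative to the baseline run."""
    return t_base / t_parallel

def efficiency(s: float, cpus: int, base_cpus: int = 1) -> float:
    """Parallel efficiency: speedup per added CPU, vs. the baseline CPU count."""
    return s / (cpus / base_cpus)

# Hypothetical timings in seconds, keyed by CPU count (16 CPUs = baseline):
timings = {16: 1000.0, 32: 480.0, 64: 230.0, 128: 115.0}
base = timings[16]
for cpus, t in sorted(timings.items()):
    s = speedup(base, t)
    e = efficiency(s, cpus, base_cpus=16)
    tag = " (superlinear)" if e > 1.0 else ""
    print(f"{cpus:4d} CPUs: speedup vs 16 CPUs {s:5.2f}, efficiency {e:4.2f}{tag}")
```

With these example numbers, halving the runtime when doubling the CPUs gives efficiency 1.0; anything faster than that is superlinear.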

SPEC OMPL Results: Applications with scaling to 64

SPEC HPC2002 Benchmark
–Full application benchmarks (including I/O) targeted at HPC platforms
–Currently three applications:
–SPECenv: weather forecast
–SPECseis: seismic processing, used in the search for oil and gas
–SPECchem: computational chemistry, used in the chemical and pharmaceutical industries (GAMESS)
–Serial and parallel (OpenMP and/or MPI)
–All codes include several data sizes

SPEC ENV2002 is based on the WRF weather model, a state-of-the-art, non-hydrostatic mesoscale weather model. The WRF (Weather Research and Forecasting) Modeling System development project is a multi-year project being undertaken by several agencies. Members of the WRF Scientific Board include representatives from EPA, FAA, NASA, NCAR, NOAA, NRL, USAF, and several universities. The source code consists of C and Fortran 90.

SPEC ENV2002 medium data set (SPECenvM2002):
–260x164x35 grid over the continental United States
–22 km resolution
–Full physics
–I/O associated with startup and the final result
–Simulates weather for a 24-hour period starting Saturday, November 3rd, 2001 at 12:00 A.M.
SPECenvS2002 is provided for benchmark researchers interested in smaller problems. Test and train data sets are included for porting and feedback. The benchmark runs use restart files that are created after the model has run for several simulated hours. This ensures that cumulus and microphysics schemes are fully developed during the benchmark runs.
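The stated grid dimensions allow a back-of-the-envelope estimate of the memory footprint per model field, which is one reason the medium data set stresses the memory system more than the small one. A sketch, assuming double-precision (8-byte) values per grid point:

```python
# Memory estimate for one 3-D field of the SPECenvM2002 grid
# (260 x 164 x 35 points), assuming 64-bit floating point per value.
nx, ny, nz = 260, 164, 35
bytes_per_value = 8

points = nx * ny * nz
mb_per_field = points * bytes_per_value / 2**20
print(f"{points} grid points, ~{mb_per_field:.1f} MB per double-precision field")
```

A full WRF run carries many such fields (wind components, temperature, moisture, and so on), so the total working set is a large multiple of this per-field figure.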

SPECenv execution models on a Sun Fire 6800
–Medium data set scales better
–OpenMP is best for the small size
–MPI is best for the medium size

SPEC HPC2002 Results: SPECenv scaling

SPECseis execution models on a Sun Fire 6800
–Medium data set scales better
–OpenMP scales better than MPI

SPEC HPC2002 Results: SPECseis scaling

SPECchem execution models on a Sun Fire 6800
–Medium data set shows better scalability
–MPI is better than OpenMP

SPEC HPC2002 Results: SPECchem scaling

Hybrid Execution for SPECchem
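A hybrid run factors a fixed CPU count into MPI processes times OpenMP threads per process, and the experiments behind this slide compare such factorizations. A minimal sketch of enumerating the candidate layouts (the launch-command strings are illustrative, not a prescribed SPEC invocation):

```python
# Enumerate MPI-process x OpenMP-thread combinations for a hybrid run
# on a fixed number of CPUs. Which split performs best depends on the
# hardware, the program, and the data-set size.
def hybrid_layouts(total_cpus: int):
    """All (processes, threads) pairs whose product equals total_cpus."""
    return [(p, total_cpus // p)
            for p in range(1, total_cpus + 1)
            if total_cpus % p == 0]

for procs, threads in hybrid_layouts(16):
    print(f"mpirun -np {procs}  OMP_NUM_THREADS={threads}")
```

For 16 CPUs this yields the five layouts 1x16, 2x8, 4x4, 8x2, and 16x1; the pure-MPI and pure-OpenMP runs are the two endpoints of that range.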

Current and Future Work of SPEC HPG
SPEC HPC:
–Update of SPECchem
–Improving portability, including tools
–Larger data sets
New release of SPEC OMP:
–Inclusion of alternative sources
–Merge OMPM and OMPL onto one CD

Adoption of new benchmark codes. Remember that we need to drive future development! Updates and new codes are important to stay relevant. Possible candidates:
–Should represent a type of computation that is regularly performed on HPC systems
–We are currently examining CPU2004 for candidates
–Applications from Japan are very welcome!
Please contact SPEC HPG or me if you have a code for us.

Conclusion and Summary
Results of OMPL and HPC2002:
–Many programs scale to 128 CPUs
–Larger data sets show better scalability
–The best choice of programming model (MPI, OpenMP, hybrid) depends on the hardware, the program, and the data-set size
SPEC HPG will continue to update and improve the benchmark suites so that they remain representative of the work you do with your applications!

BACKUP

SPEC Members Members: 3DLabs * Advanced Micro Devices * Apple Computer, Inc. * ATI Research * Azul Systems, Inc. * BEA Systems * Borland * Bull S.A. * Dell * Electronic Data Systems * EMC * Encorus Technologies * Fujitsu Limited * Fujitsu Siemens * Fujitsu Technology Solutions * Hewlett-Packard * Hitachi Data Systems * IBM * Intel * ION Computer Systems * Johnson & Johnson * Microsoft * Mirapoint * Motorola * NEC - Japan * Network Appliance * Novell, Inc. * Nvidia * Openwave Systems * Oracle * Pramati Technologies * PROCOM Technology * SAP AG * SGI * Spinnaker Networks * Sun Microsystems * Sybase * Unisys * Veritas Software * Zeus Technology * Associates: Argonne National Laboratory * CSC - Scientific Computing Ltd. * Cornell University * CSIRO * Defense Logistics Agency * Drexel University * Duke University * Fachhochschule Gelsenkirchen, University of Applied Sciences * Harvard University * JAIST * Leibniz Rechenzentrum - Germany * Los Alamos National Laboratory * Massey University, Albany * NASA Glenn Research Center * National University of Singapore * North Carolina State University * PC Cluster Consortium * Purdue University * Queen's University * Seoul National University * Stanford University * Technical University of Darmstadt * Tsinghua University * University of Aizu - Japan * University of California - Berkeley * University of Edinburgh * University of Georgia * University of Kentucky * University of Illinois - NCSA * University of Maryland * University of Miami * University of Modena * University of Nebraska - Lincoln * University of New Mexico * University of Pavia * University of Pisa * University of South Carolina * University of Stuttgart * University of Tsukuba * Villanova University * Yale University *

CPU2000 vs. OMPM2001

CPU2000 vs. OMPL2001

Program Memory Footprints

SPEC ENV2002 – data generation. The WRF datasets used in SPEC ENV2002 are created using the WRF Standard Initialization (SI) software and standard sets of data used in numerical weather prediction. The benchmark runs use restart files that are created after the model has run for several simulated hours. This ensures that cumulus and microphysics schemes are fully developed during the benchmark runs.