HPC in 2029: Will The March to ZettaFLOPS Succeed? William Gropp www.cs.uiuc.edu/~wgropp.

Presentation transcript:

HPC in 2029: Will The March to ZettaFLOPS Succeed? William Gropp

Extrapolation is Risky

1989 (T minus 20 years):
- Intel introduces the 486DX
- Eugene Brooks writes “Attack of the Killer Micros”
- 4 years before the TOP500 list
- Top systems at about 2 GF peak

1999 (T minus 10 years):
- NVIDIA introduces the GPU (GeForce 256); programming GPUs is still a challenge
- Top system: ASCI Red, 9632 cores, 3.2 TF peak
- MPI is 7 years old

HPC Today

- High(est)-end systems
  - 1 PF (10^15 Ops/s) achieved on a few “peak friendly” applications
  - Much worry about scalability and how we’re going to get to an ExaFLOPS
  - Systems are all oversubscribed: DOE INCITE awarded almost 900M processor hours in 2009 and turned many requests away; NSF PRAC awards for Blue Waters are similarly competitive
- Widespread use of clusters, many with accelerators; cloud computing services
- Laptops (far) more powerful than the supercomputers I used as a graduate student

NSF’s Strategy for High-end Computing
[Chart: science and engineering capability (logarithmic scale) vs. fiscal year, FY’07 through FY’11]
- Track 1 system: UIUC/NCSA (~1 PF sustained)
- Track 2 systems: TACC (500+ TF peak), UT/ORNL (~1 PF peak), Track 2d PSC (?)
- Track 3 systems: leading university HPC centers ( TF)

HPC in 2011

- Sustained-PF systems
  - NSF Track 1 “Blue Waters” at Illinois
  - “Sequoia” Blue Gene/Q at LLNL
  - Undoubtedly others
- Still programmed with MPI and MPI+other (e.g., MPI+OpenMP; a minimal sketch follows this slide)
  - But in many cases using toolkits, libraries, and other approaches
  - And that is not so bad: applications will be able to run when the system is turned on
  - Replacing MPI will require some compromise, e.g., domain-specific approaches (higher-level but less general)
- Still can’t compile single-threaded code to reliably get good performance; see the work on autotuners
- Lesson: there’s a limit to what can be automated. Pretending that there’s an automatic solution will stand in the way of a real solution
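To make the “MPI+OpenMP” combination concrete, here is a minimal hybrid sketch in C. It is illustrative only and not taken from the talk; the vector-sum kernel and the local problem size are assumptions.

```c
/* Minimal MPI+OpenMP hybrid sketch (illustrative, not from the talk).
 * Each MPI rank sums its local share of work with an OpenMP parallel
 * loop; the per-rank partial sums are then combined with MPI_Allreduce.
 * Typical build: mpicc -fopenmp hybrid.c -o hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;            /* local elements per rank (assumed) */
    double local = 0.0, global = 0.0;

    /* Node-level parallelism: OpenMP threads share the local loop. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < n; i++)
        local += 1.0;                 /* stand-in for real per-element work */

    /* System-level parallelism: combine partial sums across MPI ranks. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %.0f (ranks = %d, threads/rank = %d)\n",
               global, size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

The split is the usual one for this style: OpenMP threads share memory within a node, while MPI carries data between nodes, so only the MPI layer has to know the machine’s full scale.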

HPC in 2019

- Exascale (10^18 Ops/s) systems arrive
  - Issues include power, concurrency, fault resilience, memory capacity
- Likely features
  - Memory per core (or functional unit) smaller than in today’s systems
  - Very large numbers of threads
  - Heterogeneous processing elements
- Software will be different
  - You can use MPI, but constraints will get in your way
  - Likely a combination of tools, with domain-specific solutions and some automated code generation
- Algorithms need to change/evolve
  - Extreme scalability, reduced memory
  - Managed locality (see the sketch after this slide)
  - Participate in fault tolerance
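The “managed locality” item can be illustrated in a few lines. Below is a hedged sketch, not from the talk, of cache blocking (tiling) applied to a matrix-matrix multiply; the problem size N and tile size B are assumed values that a real code would tune per architecture.

```c
/* Illustrative cache-blocked (tiled) matrix multiply: an example of
 * explicitly managed locality.  N and B are assumed values; real codes
 * tune B to the cache (or autotune it).  Build: cc -O2 tiled.c -o tiled
 */
#include <stdio.h>

#define N 512    /* matrix dimension (assumed) */
#define B 64     /* tile size (assumed; must divide N here) */

static double A[N][N], Bm[N][N], C[N][N];

static void matmul_tiled(void)
{
    for (int ii = 0; ii < N; ii += B)
        for (int kk = 0; kk < N; kk += B)
            for (int jj = 0; jj < N; jj += B)
                /* Work on B x B tiles small enough to stay in cache,
                 * so each loaded element is reused many times. */
                for (int i = ii; i < ii + B; i++)
                    for (int k = kk; k < kk + B; k++) {
                        double aik = A[i][k];
                        for (int j = jj; j < jj + B; j++)
                            C[i][j] += aik * Bm[k][j];
                    }
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = 1.0;
            Bm[i][j] = 1.0;
            C[i][j] = 0.0;
        }
    matmul_tiled();
    printf("C[0][0] = %.0f (expected %d)\n", C[0][0], N);
    return 0;
}
```

The tiling changes no arithmetic; it only reorders it so that data is reused from fast memory before being evicted, which is the kind of explicit locality management the slide expects algorithms to adopt.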

HPC in 2029

- Will we even have ZettaFLOPS (10^21 Ops/s)?
  - Unlikely (but not impossible) in a single (even highly parallel) system
  - Power (again): you need an extra 1000-fold improvement in results/Joule (see the rough arithmetic after this slide)
  - Concurrency: very large numbers of threads (!)
  - See the Zettaflops workshops
  - Will require new device technology
- Will the high end have reached a limit after exascale systems?
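A rough check on the 1000-fold figure: 10^21 Ops/s is 1000x an exaFLOPS system, so at an unchanged power budget the energy per operation must drop by about 1000x. Assuming the commonly cited exascale power target of roughly 20 MW (an assumption added here, not from the slides), a zettaFLOPS machine built at that same efficiency would draw on the order of 20 GW.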

The HPC Pyramid in 1993
[Pyramid, base to apex:]
- High-performance workstations
- Mid-range parallel processors and networked workstations
- Center supercomputers (TeraFLOP class)

The HPC Pyramid in 2029 (?)
[Pyramid, base to apex:]
- Laptops, phones, wristwatches, eyeglasses…
- Single-cabinet petascale systems (or attack of the killer GPU successors)
- Center exascale supercomputers

Blue Waters Project Petascale Allocation Awards

1. Computational Chemistry at the Petascale
   Monica Lamm, Mark Gordon, Theresa Windus, Masha Sosonkina, Brett Bode, Iowa State University
2. Testing Hypotheses about Climate Prediction at Unprecedented Resolutions on the Blue Waters System
   David Randall, Ross Heikes, Colorado State University; William Large, Richard Loft, John Dennis, Mariana Vertenstein, National Center for Atmospheric Research; Cristiana Stan, James Kinter, Institute for Global Environment and Society; Benjamin Kirtman, University of Miami
3. Petascale Research in Earthquake System Science on Blue Waters
   Thomas Jordan, Jacobo Bielak, University of Southern California
4. Breakthrough Petascale Quantum Monte Carlo Calculations
   Shiwei Zhang, College of William and Mary
5. Electronic Properties of Strongly Correlated Systems Using Petascale Computing
   Sergey Savrasov, University of California, Davis; Kristjan Haule, Gabriel Kotliar, Rutgers University

Blue Waters Project Petascale Allocation Awards (continued)

6. Understanding Tornadoes and Their Parent Supercells Through Ultra-High Resolution Simulation/Analysis
   Robert Wilhelmson, Brian Jewett, Matthew Gilmore, University of Illinois at Urbana-Champaign
7. Petascale Simulation of Turbulent Stellar Hydrodynamics
   Paul Woodward, Pen-Chung Yew, University of Minnesota, Twin Cities
8. Petascale Simulations of Complex Biological Behavior in Fluctuating Environments
   Ilias Tagkopoulos, University of California, Davis
9. Computational Relativity and Gravitation at Petascale: Simulating and Visualizing Astrophysically Realistic Compact Binaries
   Manuela Campanelli, Carlos Lousto, Hans-Peter Bischof, Joshua Faber, Yosef Ziochower, Rochester Institute of Technology
10. Enabling Science at the Petascale: From Binary Systems and Stellar Core Collapse to Gamma-Ray Bursts
    Eric Schnetter, Gabrielle Allen, Mayank Tyagi, Peter Diener, Christian Ott, Louisiana State University

Blue Waters Project Petascale Allocation Awards (continued)

11. Petascale Computations for Complex Turbulent Flows
    Pui-Kuen Yeung, James Riley, Robert Moser, Amitava Majumdar, Georgia Institute of Technology
12. Computational Microscope
    Klaus Schulten, Laxmikant Kale, University of Illinois at Urbana-Champaign
13. Simulation of Contagion on Very Large Social Networks with Blue Waters
    Keith Bisset, Xizhou Feng, Virginia Polytechnic Institute and State University
14. Formation of the First Galaxies: Predictions for the Next Generation of Observatories
    Brian O’Shea, Michigan State University; Michael Norman, University of California at San Diego
15. Super Instruction Architecture for Petascale Computing
    Rodney Bartlett, Erik Duemens, Beverly Sanders, University of Florida; Ponnuswamy Sadayappan, Ohio State University
16. Peta-Cosmology: Galaxy Formation and Virtual Astronomy
    Kentaro Nagamine, University of Nevada at Las Vegas; Jeremiah Ostriker, Princeton University; Renyue Cen, Greg Bryan