Erik P. DeBenedictis, Organizer Sandia National Laboratories Los Alamos Computer Science Institute Symposium 2004 The Path To Extreme Computing Sandia.


Erik P. DeBenedictis, Organizer, Sandia National Laboratories
Los Alamos Computer Science Institute Symposium 2004: The Path To Extreme Computing

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL. Sandia National Laboratories report SAND C. Unclassified Unlimited Release.

Editor’s note: These slides were presented by Erik DeBenedictis to organize the workshop.

Overall Motivation

[Chart: system performance (100 teraflops through 1 zettaflops) vs. year. Technology curves: microprocessors (µP, green) and best-case logic at the 100 k_BT limit (red), with quantum dots/reversible logic operating below the 100 k_BT limit [DeBenedictis 04]. Application requirements (no schedule provided by the sources): plasma fusion simulation [Jardin 03], full global climate [Malone 03], geodata Earth station range [NASA 02], "compute as fast as the engineer can think" [NASA 99], and the SCaLeS estimates of roughly 100x to 1000x current capability [SCaLeS 03].]

References
–[Jardin 03] S.C. Jardin, “Plasma Science Contribution to the SCaLeS Report,” Princeton Plasma Physics Laboratory, PPPL-3879 UC-70, available on Internet.
–[Malone 03] Robert C. Malone, John B. Drake, Philip W. Jones, Douglas A. Rotman, “High-End Computing in Climate Modeling,” contribution to SCaLeS report.
–[NASA 99] R. T. Biedron, P. Mehrotra, M. L. Nelson, F. S. Preston, J. J. Rehder, J. L. Rogers, D. H. Rudy, J. Sobieski, and O. O. Storaasli, “Compute as Fast as the Engineers Can Think!” NASA/TM, available on Internet.
–[NASA 02] NASA Goddard Space Flight Center, “Advanced Weather Prediction Technologies: NASA’s Contribution to the Operational Agencies,” available on Internet.
–[SCaLeS 03] Workshop on the Science Case for Large-scale Simulation, June 24-25, proceedings on Internet.
–[DeBenedictis 04] Erik P. DeBenedictis, “Matching Supercomputing to Progress in Science,” July 2004 presentation at Lawrence Berkeley National Laboratory, also published as Sandia National Laboratories report SAND P. Sandia technical reports are available through the Sandia technical library.
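The performance curves in the chart follow a roughly constant doubling cadence. A minimal sketch of that extrapolation, with an assumed ~1.5-year doubling time and a ~100-teraflops baseline in 2004 (both illustrative assumptions, not figures from the slides):

```python
import math

def year_reached(target_flops, base_flops=1e14, base_year=2004, doubling_years=1.5):
    """Extrapolate the year a performance target is crossed,
    assuming a fixed doubling cadence (illustrative only)."""
    doublings = math.log2(target_flops / base_flops)
    return base_year + doublings * doubling_years

for name, flops in [("1 petaflops", 1e15), ("1 exaflops", 1e18), ("1 zettaflops", 1e21)]:
    print(f"{name}: ~{year_reached(flops):.0f}")
```

Under these assumptions, a zettaflops lands in the late 2030s, which is why the chart pairs the technology curves against application demands rather than assuming the trend alone suffices.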

The Back and Forth of Computing Limits

1. Public: Moore’s Law continues forever
–Justification: California real estate prices go up
–Stock prices go up and up (in the late 1990s)
2. Industry (ITRS): Moore’s Law ends
–Justification: The types of technology my management is currently investing in are limited to level ____
3. 1000 physicists, including a dozen with Nobel prizes: No upper limit on computing per watt
–Constructive solution: Reversible Logic
4. A couple of skeptical physicists: Nobody has demonstrated devices below the k_BT limit
–Could there be an undiscovered physical law?
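The k_BT limit in items 3 and 4 is the Landauer bound: erasing one bit of information must dissipate at least k_B·T·ln(2). A back-of-the-envelope sketch at room temperature (my arithmetic, not from the slides) shows why it caps irreversible computing per watt:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Landauer bound: minimum energy to erase one bit
e_bit = k_B * T * math.log(2)   # about 2.87e-21 J
ops_per_watt = 1.0 / e_bit      # irreversible bit erasures per second per watt

print(f"Energy per bit erasure: {e_bit:.2e} J")
print(f"Bound: {ops_per_watt:.2e} irreversible bit-ops/s per watt")
```

Roughly 3.5e20 bit erasures per second per watt is the ceiling for logic that throws information away; reversible logic (item 3's "constructive solution") avoids the bound by not erasing bits.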

Morning Session

Organizational
–9:00 Rob Leland: Host comments
–9:05 Erik DeBenedictis: Workshop organization
A Big Application
–9:15 Philip Jones: Climate Modeling
Current Technology Limits
–10:00 Erik DeBenedictis: ITRS Roadmap
Break
Advanced Architecture
–11:00 Peter Kogge: PIM architecture
Software
–11:45 Bill Gropp: Software

Afternoon Session

Logic
–2:00 Michael Frank: Reversible Logic
Post-Transistor Devices
–2:45 Craig Lent: Quantum Dots
4:00 Panel Session
–Thomas Sterling, Caltech/JPL
–Horst Simon, LBL/NERSC
–David Koester, MITRE/DARPA HPCS
–Terry Michalske, Center for Integrated NanoTechnology
–Fred Johnson, DOE
–Rob Leland, Sandia

Climate Modeling

About the Speaker
–Philip Jones, Project Leader of the Climate, Ocean and Sea Ice Modeling (COSIM) project, Los Alamos National Laboratory
–Phil was part of the SCaLeS report study on computational requirements for climate modeling
Notes
–A very important problem for humanity
–Many levels of increasing sophistication, up to 1 zettaflops
–Independent validation: NASA study for ground processing of Earth sciences data
Challenge questions

NASA Climate

Earth Station
–“Advanced Weather Prediction Technologies: NASA’s Contribution to the Operational Agencies,” Gap Analysis Appendix, May 31, 2002

Technology Limits

About the Speaker
–Erik DeBenedictis, staff member at Sandia National Laboratories
Notes
–The “limits of current technology” involve two questions:
–The social question of where the dividing line between “current” and “future” technology lies
–The technical limits of the technology itself

Advanced Architectures

About the Speaker
–Peter Kogge, Professor at Notre Dame, first holder of the endowed McCourtney Chair in Computer Science and Engineering (CSE)
–IBM Federal Systems Division from 1968 until 1994
–IEEE and IBM fellow
–Ph.D. Stanford, 1973
Notes
–Advanced architectures such as PIM appear to be the first option beyond simple continuation of Moore’s Law
–Upside potential: about 100x

Software

About the Speaker
–Bill Gropp, Associate Division Director and Senior Computer Scientist, Mathematics and Computer Science Division, Argonne National Laboratory
–Ph.D. Stanford, 1982
–Well known for MPI
Notes:

Reversible Logic

About the Speaker
–Michael Frank, Assistant Professor, Florida State University
–Ph.D. MIT 1999, “Reversibility for Efficient Computing”
Notes
–Reversible logic is essential to beat the limits Erik described
–It works by recycling energy instead of turning it into heat
–Reversible logic is widely accepted in the physics community, but not broadly understood
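One way to see the "recycling energy" point: abruptly charging a logic node of capacitance C to voltage V dissipates ½CV² no matter how fast the switch is, while adiabatic (reversible-style) charging through resistance R over a ramp time T dissipates roughly (RC/T)·CV², which shrinks as the ramp slows. A sketch with illustrative component values (C, V, and R are my assumptions, not figures from the talk):

```python
def conventional_loss(C, V):
    # Abrupt switching: 0.5 * C * V^2 dissipated per transition,
    # independent of switching speed.
    return 0.5 * C * V**2

def adiabatic_loss(C, V, R, T_ramp):
    # Slow ramp: dissipation scales as (RC / T_ramp) * C * V^2,
    # so it can be made arbitrarily small by slowing the ramp.
    return (R * C / T_ramp) * C * V**2

C, V, R = 1e-15, 1.0, 1e3   # 1 fF node, 1 V swing, 1 kOhm channel (illustrative)
print(conventional_loss(C, V))
for T_ramp in (1e-12, 1e-10, 1e-8):
    print(T_ramp, adiabatic_loss(C, V, R, T_ramp))
```

With these numbers, a 10 ns ramp dissipates thousands of times less than abrupt switching; the energy not dissipated is returned to the power supply rather than turned into heat, which is the recycling the slide refers to.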

Quantum Dots

About the Speaker
–Craig Lent, Freimann Professor of Engineering, University of Notre Dame
Notes
–Quantum dots for computation are a promising device technology that could reach zettaflops
–Published material on quantum dots is mature enough to estimate logic performance, and that performance is quite good

Panel Session and Group Process

Panel Session
–Question: “How much should we change supercomputing to enable the applications that are important to us, and how fast?”
Results of Workshop
–Each panelist may propose one or two concluding statements, e.g. “Moore’s Law will/won’t solve all problems if you wait long enough”
–The audience will vote
–The statements and the degree of agreement will be the conclusion of the workshop

Organizer’s View of Appropriate Answers*

[Chart: FLOPS vs. year, showing current status and a projected progression through Cluster/MPP, Advanced Architecture, Reversible Logic, and possibly QCA, annotated with discovery milestones (“Discovery 1,” “Discovery 2”) and the dates codes are ready.]

Notes:
* Not necessarily one machine; different applications may require different machines
* The specifics are just my ideas
Editor’s note: Which were generally ignored.

Thomas Sterling

Zettaflops at nano-scale technology is possible [Vote: 11/22]
–Size requirements tolerable, but packaging is a challenge [Vote: 21/21]
Major obstacles [vote for only 2 of the options below]
–Power [Vote: 13/22]
–Latency [Vote: 0/22]
–Parallelism [Vote: 12/22]
–Reliability [Vote: 4/22]
–Programming [Vote: 8/22]

Horst Simon

A Zettaflops computer will have emergent intelligent behavior. [Vote: 8/22]
The first sustained Petaflops application that wins the Gordon Bell award will use MPI. [Vote: 12/22]
The first sustained Exaflops application that wins the Gordon Bell award will use MPI. [Vote: 2/22]
[Editor’s note: The qualification of winning the Gordon Bell award implies a general-purpose computer and software written in a high-level language.]

David Koester

Moore’s Law doesn’t matter as long as we must invest the increase in transistors into machine state (i.e., overhead) instead of real use. We keep putting more transistors out there while chip power stays the same (130 W/chip), all of it going into overhead; useful work increases only a little, through clock rate. Moore’s Law doesn’t translate into an increase in value.
[Vote on all of the above: 6 agree, 2 disagree]

Terry Michalske

Will we always have a fixed architecture that we put an operating system onto, or will the architecture (hardware) reconfigure itself to run the application?
[Editor’s note: software identifies the parts of the hardware that have faults and configures itself to avoid those parts. Vote: 15/22]

Fred Johnson

We also need to consider multi-scale math as a new initiative. [Vote: 21/21]
Yet again, I/O has been left in the closet. [Vote: 21/21]

Rob Leland

Early speakers take all the time presenting, so the last speaker need not have slides. [Editor’s note: …and so Rob didn’t. Vote: 21/21]
Do we want to be leaders in changing supercomputing? [Vote: 21/21]
Do we want to create possibilities that have not been imagined? [Vote: 21/21]
How aggressively do we go after Zettaflops? It takes 20 years to develop a supercomputing technology to full production; if the case for Zettaflops in 2025 is strong, we need to start now.
Statement: “It is exactly the right time to start thinking about Zettaflops seriously.” [Vote: 21/21]

Erik DeBenedictis

The workshop articulated a series of constructive steps toward very high performance computers, at the level of Zettaflops. [Vote: 10/21]
The issues deserve more investigation. [Vote: 21/21]
Reversible logic needs a more thorough understanding. [Vote: 15/21]