Scientific Discovery Mesoscale materials and chemical sciences


0 Exascale Computing Initiative (ECI)
Steve Binkley (DOE/ASCR), Bob Meisner (NNSA/ASC). April 1, 2015

1 Exascale Applications Respond to DOE/NNSA Missions in Discovery, Design, and National Security
Scientific Discovery: mesoscale materials and chemical sciences; improved climate models with reduced uncertainty.
Engineering Design: nuclear power reactors; advanced energy technologies; resilient power grid.
National Security: stockpile stewardship; real-time cybersecurity and incident response; advanced manufacturing.
(Blue bold text indicates planned or existing exascale application projects.)
[Speaker notes] Segue to the broad range of science and engineering applications that exascale will address. These are but a few examples in scientific discovery, engineering design, and national security. We will next address four of these important DOE mission areas.

2 Stockpile Stewardship Challenges
Nuclear Stockpile: safety, surety, reliability, robustness.
Weapons Science: thermonuclear burn (p, D, T, He-3, He-4); atomic physics; burning plasma; Coulomb collisions; quantum interference and diffraction; Debye screening; radiation (photons); spontaneous and stimulated emission; hydrodynamics.
Non-Proliferation and Nuclear Counter-Terrorism.

3 Exascale Computing Will Underpin Future Scientific Innovations
Mission: Extreme Scale Science, the next generation of scientific innovation.
DOE's mission is to push the frontiers of science and technology to:
- Enable scientific discovery
- Provide state-of-the-art scientific tools
- Plan, implement, and operate user facilities
The next generation of advancements will require Extreme Scale Computing: 1,000X the capabilities of today's petaflop computers within a similar size and power footprint. Extreme Scale Computing, however, cannot be achieved by a "business-as-usual" evolutionary approach; it will require major novel advances in computing technology.
[Speaker notes] The Need: DOE needs exascale computing to continue to fulfill its mission into the next generation of scientific breakthroughs. This includes multi-physics, multi-scale simulations with increasingly high spatial and temporal resolution, and scientific discovery from experimental and other data. Our future will be determined, driven, and (if we fail) restricted by how we form and employ HPC at exascale. Extreme scale science means the next generation of scientific breakthroughs, and these breakthroughs are critical to future progress in all areas of our lives. But these breakthroughs won't happen without exascale computing, and exascale computing won't happen via business as usual: there are no easy fixes, no magic bullets left, no low-hanging fruit. We need a major rethinking of computing technology. We've done numerous studies to identify requirements, and have determined that exascale computing is challenging but feasible.

4 Exascale Computing Initiative
Top-Line Messages: This effort is driven by the need for significant improvements in computer performance to enable future scientific discoveries. The Department is developing a plan that will result in the deployment of exascale-capable systems by early in the next decade. The budget request preserves options consistent with that timeline and keeps the U.S. globally competitive in high performance computing. It is important to emphasize that this is a major research and development effort to address and influence significant changes in computing hardware and software, and our ability to use computers for scientific discovery and engineering. It is not a race to deploy the first exaflop machine.

5 Exascale Challenges and Issues
Four primary challenges must be overcome:
- Parallelism / concurrency
- Reliability / resiliency
- Energy efficiency
- Memory / storage
Productivity issues: managing system complexity; portability / generality.
System design issues: scalability; time to solution; efficiency.
Extensive exascale studies: US (DOE, DARPA, …), Europe, Japan, …

6 Impact of No ECI: What’s at Stake?
- Power restrictions will limit the performance of future computing systems.
- Without ECI, industry will build an energy- and footprint-inefficient point solution.
- Declining US leadership in science, engineering, and national security: HPC is the foundation of the nation's nuclear security and economic leadership.
- International R&D investment is already surpassing that of the US. Asia and Europe: China's Tianhe-2 is #1 (HPL); the EU's Mont Blanc project is building on ARM.
- Increasing dependence on foreign technology: other countries could enforce export controls against the US.
- There will be unacceptable cybersecurity and computer supply chain risks.

7 DOE Exascale Computing Initiative (ECI) R&D Goals
- Develop a new era of computers, exascale computers: sustained 10^18 operations/second and the required storage for a broader range of mission-critical applications.
- Create extreme-scale computing: approximately 1,000X the performance of today's computers within a similar size, cost, and power footprint.
- Foster a new generation of scientific, engineering, and large-data applications.
- Create dramatically more productive systems: usable by a wide variety of scientists and engineers for more problem areas; simplified efficiency and scalability for shorter time to solution and science results.
- Develop marketable technologies: set industry on a new trajectory of progress; exploit economies of scale and the trickle-bounce effect.
- Prepare for "Beyond Exascale."
[Speaker notes] This is the vision: high performance through energy efficiency; ease of use; broad utility, not just a one-of-a-kind, one-shot system; a sustainable market.

8 What is Exascale Computing?
What exascale computing is not:
- An "exaflops Linpack benchmark computer"
- Just a billion floating-point arithmetic units packaged together
What exascale computing is:
- 1,000X performance over a "petaflop" system (exaflops sustained performance on complex, real-world applications)
- Similar power and space requirements as a petaflops computer
- High programmability, generality, and performance portability
[Speaker notes] Summary: developing an exascale computer requires significant intellectual and financial investments in four areas: system design and integration, enabling hardware technology, software, and collaboration. May want to clarify when presenting that enabling technologies are hardware technologies related to processors, memory, and interconnects.

9 Key Performance Goals for an exascale computer (ECI)
Parameter: Target
- Sustained performance: 1 – 10 ExaOPS
- Power: 20 MW
- Cabinets:
- System memory: 128 PB – 256 PB
- Reliability: consistent with current platforms
- Productivity: better than or consistent with current platforms
- Scalable benchmarks: target speedup over "current" systems …
- Throughput benchmarks:
(ExaOPS = 10^18 operations/sec)

10 Exascale Target System Characteristics
- 20 pJ per average operation
- Billion-way concurrency (current systems have million-way)
- Ecosystem to support new application development and collaborative work, enable transparent portability, and accommodate legacy applications
- High reliability and resilience through self-diagnostics and self-healing
- Programming environments (high-level languages, tools, …) to increase scientific productivity
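The 20 pJ-per-operation target is the 20 MW power budget from the goals slide divided by the 10^18 operations/second rate; a quick sanity check of that arithmetic (both input values are taken from the slides):

```python
# Sanity check: energy per operation implied by the ECI targets.
# 1 ExaOPS sustained and a 20 MW power budget are the figures from the deck;
# the rest is unit conversion.

EXAOPS = 1e18          # operations per second (1 ExaOPS)
power_watts = 20e6     # 20 MW system power budget

joules_per_op = power_watts / EXAOPS        # energy per operation, in joules
picojoules_per_op = joules_per_op * 1e12    # convert joules to picojoules

print(f"{picojoules_per_op:.0f} pJ per operation")  # 20 pJ
```

So the "20 pJ per average operation" bullet and the 20 MW / 1 ExaOPS goals are two statements of the same constraint.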

11 Exascale Computing: We Need to Reinvent Computing
- The traditional path of 2x performance improvement every 18 months has ended. For decades, Moore's Law plus Dennard scaling provided more, faster transistors in each new process technology. This is no longer true: we have hit a power wall! The result is unacceptable power requirements for increased performance.
- We cannot procure an exascale system based on today's or projected future commodity technology. Existing HPC solutions cannot be usefully scaled up to exascale; energy consumption would be prohibitive (~300 MW).
- Exascale will require partnering with the U.S. computing industry to chart the future. Industry is at a crossroads and is open to new paths; the time is right to push energy efficiency into the marketplace.
[Speaker notes] The Problem: the world has changed; we can't continue to make progress on the same path we've become accustomed to. We need to blaze new trails through new territory.
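The ~300 MW figure is roughly what you get by linearly scaling a current system's power draw up to exascale; a minimal sketch, assuming Titan-era numbers (27 PF peak at ~9 MW, from the comparison table elsewhere in the deck):

```python
# Naive scale-up: power required for 1 exaflop at Titan-era energy efficiency.
# The 27 PF / 9 MW figures are Titan's numbers from the deck's comparison table;
# linear scaling is a deliberate simplification to illustrate the power wall.

titan_peak_pflops = 27.0
titan_power_mw = 9.0

target_pflops = 1000.0   # 1 exaflop = 1,000 petaflops
scaled_power_mw = titan_power_mw * target_pflops / titan_peak_pflops

print(f"~{scaled_power_mw:.0f} MW")  # roughly 333 MW, consistent with the slide's ~300 MW
```

This is why the slide concludes that exascale cannot simply be a bigger petascale machine: the power budget target is 20 MW, more than an order of magnitude below the naive extrapolation.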

12 Exascale vs. Predecessor Computers
Parameter | Sequoia (CPU) | Titan (CPU-GPU) | Summit & Sierra (CPU-GPU) | Exascale
Accepted | 2013 | | 2018 |
Power (MW) | 8 | 9 | 10 | ~20
Peak performance (PF) | 20.13 | 27.11 | 150 | > 1,000
Cabinets | 96 | 200 | 192 | > 200
Nodes | 98,304 | 18,688 | 3,500 | TBD
System memory (TB) | 1,573 | 710 | 2,100 | > 128,000
Linpack performance (PF) | 17.17 | 17.59 | |

13 ECI Strategy
- Integrate applications, acquisitions, and research and development
- Exploit the co-design process, driven by the full application workflow
- Develop exascale software stacks
- Partner with and fund vendors to transition research to product space
- Collaborate with other government agencies and other countries, as advantageous

14 Partnership with Industry is Vital
We need industry involvement:
- We don't want one-off, stove-piped solutions that are obsolete before they're deployed.
- We need continued "product" availability and upgrade potential beyond the lifetime of this initiative.
Industry needs us:
- The business model obligates industry to optimize for profit and beat competitors.
- Internal investments are heavily weighted towards near-term, evolutionary improvements with a small margin over competitors.
- Funding for far-term technology is limited ($) and constrained in scope.
How do we impact industry?
- Work with those that have strong advocate(s) within the company.
- Fund development and demonstration of far-term technologies that clearly show potential as future mass-market products (or mass-market components of families of products).* (*Corollary: do not fund product development.)
- Industry has demonstrated that it will incorporate promising technologies into future product lines.*
* Industrial contractor, private communication.
[Speaker notes] How do we engage industry? You need to have some examples ready for the last bullet (industry has demonstrated that it will incorporate new technologies into product lines). Maybe use the Xeon Phi example. Did you and Shekhar have anything to do with that? If not, use Apple's use of Siri, the productization of voice command from the DARPA/PAL program. The biggest problem you will have with Intel is accommodation of legacy S/W by any revolutionary ECI processor designs, but maybe NVIDIA will prove to be more flexible, or maybe one of Intel's competitors.

15 DOE Progress Towards Exascale
FY2011:
- MOU between SC and NNSA for the coordination of exascale activities
- Exascale co-design centers funded
- Request for Information: critical and platform technologies
FY2012:
- Programming environments (X-Stack)
- FastForward 1: vendor partnerships on critical component technologies
FY2013:
- Exascale strategy plan to Congress
- Operating System / Runtime (OS/R)
- DesignForward 1: vendor partnerships on critical system-level technologies
- Meeting with Secretary Moniz: "go get a solid plan with defendable cost"
FY2014:
- Meetings with HPC vendors to validate the ECI timeline and update exascale plans and costs
- Established the Nexus / Plexus lab structure to determine software plans and costs
- FastForward 2: exascale node designs
- External review of the "Exascale Preliminary Project Design Document (EPPDD)"
FY2015:
- DesignForward 2: conceptual designs of exascale systems
- Release to ASCAC of the "Preliminary Conceptual Design for an Exascale Computing Initiative"
- Generate requirements for exascale systems to be developed and deployed in FY2023
- Develop and release FOAs and RFPs, for funding in FY2016
FY2016:
- Initiate the Exascale Computing Initiative (ECI)

16 Schedule Baseline

17 Exploit Co-Design Process
Exascale Co-Design Center for Materials in Extreme Environments (ExMatEx). Director: Timothy Germann (LANL)
Center for Exascale Simulation of Advanced Reactors (CESAR). Director: Andrew Siegel (ANL). https://cesar.mcs.anl.gov
Center for Exascale Simulation of Combustion in Turbulence (ExaCT). Director: Jacqueline Chen (SNL)
[Speaker notes] We funded three exascale co-design centers. Their mission-critical applications are:
ExMatEx goal: develop algorithms, system software, and hardware that will enable pervasive embedding of microscopic behavior modeling and simulation into meso- and macro-scale materials simulations at the exascale, for the simulation and modeling of materials subjected to extreme mechanical environments.
CESAR goal: enable predictive modeling and simulation for advanced nuclear reactor design at the exascale by integrating three distinct software components (weakly compressible hydrodynamics, particle transport, and structural mechanics) into one code environment, TRIDENT, for detailed analysis of next-generation reactors.
ExaCT goal: simultaneously redesign all aspects of the combustion simulation process, from algorithms to programming models to hardware architecture, to enable detailed modeling and simulations at realistic operating conditions at the exascale for transportation and energy production applications.
Attached is an animation of a reactive syngas turbulent jet flame in a cross-stream of heated air, from a petascale Direct Numerical Simulation (DNS: the Navier-Stokes equations without a turbulence model) performed on Jaguar. The simulation is important to understanding safety issues in fuel injection processes in gas turbines for electricity generation using syngases for pre-combustion carbon capture and storage relevant to industry (GE, Alstom, Siemens). The DNS provides details of the chemistry and turbulent mixing needed to understand the mechanism of flame anchoring (or not) in the pre-mixer section of a gas turbine.
Combustion: The Center for Exascale Simulation of Combustion in Turbulence ("ExaCT") sees its scientific challenge in reducing the design cycle cost for new combustion technologies and new fuels. Exascale computing will allow for improved direct numerical simulation capabilities, especially in the fidelity of the reactive chemistry models. What you are seeing is a reactive syngas turbulent jet flame in a cross-stream of heated air, from a petascale simulation performed on Jaguar. The simulation shows some of the turbulence and chemistry details in the flame and is important to understanding safety issues in fuel injection processes in gas turbines for electricity generation.
Reactors: CESAR's application focus is predictive simulation of the operation of new, advanced nuclear reactors, not only to reduce the costs associated with the design and testing of a new nuclear reactor, but also to provide independent verification of the safety margins needed by regulatory agencies to certify the new design. Understanding safety scenarios with confidence will require exascale computing resources. What you are seeing is a simulation of water flow past a bundle of reactor fuel rods in a scaled-down, experimental version of an actual small modular reactor (SMR) design being considered. Experimental data are available for this test reactor, so the simulations may be validated. The picture represents a volume rendering of the velocity in the lower half of the reactor. The colors represent the speed (red is high, blue is low). The flow goes down laterally in the outer annular cylinder and upward in the inner rod bundle representing the core. While flowing through the core, the flow is accelerated and heated.
Materials: Understanding how materials deform in various extreme environments is one of the scientific challenges of the ExMatEx center. An example shown here is what happens to a nanocrystalline sample of a metal being shocked. As the shock wave passes through the sample, the crystal grains respond to compression by forming line defects (points on the surface of the crystal) and by reorienting themselves (regions changing to a different color). Our experimental facilities, such as the light sources at Argonne and at Stanford, will soon be probing these phenomena with ultrahigh space and time resolutions, converging with the length and time scales of such simulations.

18 Current partnerships with vendors (jointly funded by SC & NNSA)
Fast Forward Program – node technologies
- Phase 1: two-year contracts, started July 1, 2012 ($64M)
- Phase 2: two-year contracts, started Fall 2014 ($100M)
- Performers: AMD, Cray, IBM, Intel, NVIDIA
- Project goals and objectives: initiate partnerships with multiple companies to accelerate the R&D of critical node technologies and designs needed for extreme-scale computing; fund technologies targeted for productization in the 5–10 year timeframe.
Design Forward Program – system technologies
- Phase 1: two-year contracts, started Fall 2013 ($23M)
- Phase 2: two-year contracts, started Winter 2015 ($10M)
- Initiate partnerships with multiple companies to accelerate the R&D of interconnect architectures and conceptual designs for future extreme-scale computers.
[Speaker notes] All of the efforts listed above are looking at some aspect of the hardware requirements for energy-efficient processing; reduction in data transport energy; and rapid, efficient memory and storage. For example, Intel is looking at low-power, highly efficient memory hierarchies, and at how next-generation memory architectures, combined with processing power, provide optimal, energy-efficient performance.

19 FY-2016 ECI Cross-cut (in $K)

20 FY-2016 ECI Cross-cut (in $K)

21 ECI Major Risks
- Maintaining strong leadership and commitment from the US government.
- Achieving the extremely challenging power and productivity goals.
- Decreasing reliability as power efficiency and system complexity/concurrency increase.
- Vendor commitment and stability; deployment of the developed technology.

22 Summary
- Leadership in high-performance computing (HPC) and large-scale data analysis will advance national competitiveness in a wide array of strategic sectors, including basic science, national security, energy technology, and economic prosperity.
- The U.S. semiconductor and HPC industries have the ability to develop the necessary technologies for an exascale computing capability early in the next decade.
- An integrated approach to the development of hardware, software, and applications is required for the development of exascale computers.
- ECI's goal is to deploy two capable exascale computing systems.

23 END

