Exascale Computing Initiative (ECI). Steve Binkley (DOE/ASCR) and Bob Meisner (NNSA/ASC). April 1, 2015.

1 Exascale Computing Initiative (ECI)
Steve Binkley (DOE/ASCR) and Bob Meisner (NNSA/ASC)
April 1, 2015

2 Exascale Applications Respond to DOE/NNSA Missions in Discovery, Design, and National Security
Scientific Discovery
– Mesoscale materials and chemical sciences
– Improved climate models with reduced uncertainty
Engineering Design
– Nuclear power reactors
– Advanced energy technologies
– Resilient power grid
National Security
– Stockpile stewardship
– Real-time cybersecurity and incident response
– Advanced manufacturing
(Blue bold text on the original slide indicates planned or existing exascale application projects.)

3 Stockpile Stewardship Challenges
Weapons Science
– Atomic physics; radiation (photons); spontaneous and stimulated emission
– Hydrodynamics; burning plasma
Nuclear Stockpile
– Safety, surety, reliability, robustness
Non-Proliferation and Nuclear Counter-Terrorism

4 Mission: Extreme Scale Science and the Next Generation of Scientific Innovation
DOE's mission is to push the frontiers of science and technology to:
– Enable scientific discovery
– Provide state-of-the-art scientific tools
– Plan, implement, and operate user facilities
The next generation of advancements will require Extreme Scale Computing: 1,000x the capabilities of today's petaflop computers within a similar size and power footprint.
Extreme Scale Computing, however, cannot be achieved by a "business-as-usual" evolutionary approach; it will require major novel advances in computing technology, i.e., Exascale Computing.
Exascale Computing will underpin future scientific innovations.

5 Exascale Computing Initiative Top-Line Messages
– This effort is driven by the need for significant improvements in computer performance to enable future scientific discoveries.
– The Department is developing a plan that will result in the deployment of exascale-capable systems by early in the next decade.
– The budget request preserves options consistent with that timeline and keeps the U.S. globally competitive in high performance computing.
– This is a major research and development effort to address and influence significant changes in computing hardware and software, and in our ability to use computers for scientific discovery and engineering. It is not a race to deploy the first exaflop machine.

6 Exascale Challenges and Issues
Four primary challenges must be overcome:
– Parallelism / concurrency
– Reliability / resiliency
– Energy efficiency
– Memory / storage
Productivity issues:
– Managing system complexity
– Portability / generality
System design issues:
– Scalability
– Time to solution
– Efficiency
Extensive exascale studies: US (DOE, DARPA, ...), Europe, Japan, ...

7 Impact of No ECI: What's at Stake?
Power restrictions will limit the performance of future computing systems
– Without ECI, industry will build an energy- and footprint-inefficient point solution
Increasing dependence on foreign technology
– Other countries could enforce export controls against the US
– Unacceptable cybersecurity and computer supply-chain risks
Declining US leadership in science, engineering, and national security
– HPC is the foundation of the nation's nuclear security and economic leadership
– International R&D investment is already surpassing that of the US
– Asia and Europe: China's Tianhe-2 is #1 (HPL); the EU's Mont-Blanc project is building on ARM

8 DOE Exascale Computing Initiative (ECI) R&D Goals
Develop a new era of computers: exascale computers
– Sustained 10^18 operations/second, with the storage required for a broader range of mission-critical applications
– Create extreme-scale computing: approximately 1,000x the performance of today's computers within a similar size, cost, and power footprint
– Foster a new generation of scientific, engineering, and large-data applications
Create dramatically more productive systems
– Usable by a wide variety of scientists and engineers across more problem areas
– Simplify efficiency and scalability for shorter time to solution and science results
Develop marketable technologies
– Set industry on a new trajectory of progress
– Exploit economies of scale and the trickle-bounce effect
Prepare for "Beyond Exascale"

9 What is Exascale Computing?
What exascale computing is not:
– An exaflops Linpack benchmark computer
– Just a billion floating-point arithmetic units packaged together
What exascale computing is:
– 1,000x the performance of a "petaflop" system (exaflops sustained performance on complex, real-world applications)
– Similar power and space requirements to a petaflops computer
– High programmability, generality, and performance portability

10 Key Performance Goals for an Exascale Computer (ECI)
– Performance: sustained 1 - 10 ExaOPS (ExaOPS = 10^18 operations/second)
– Power: 20 MW
– Cabinets: 200 - 300
– System memory: 128 PB - 256 PB
– Reliability: consistent with current platforms
– Productivity: better than or consistent with current platforms
– Scalable benchmarks: target speedup over "current" systems
– Throughput benchmarks: target speedup over "current" systems

11 Exascale Target System Characteristics
– 20 pJ per average operation
– Billion-way concurrency (current systems have million-way)
– Ecosystem to support new application development and collaborative work, enable transparent portability, and accommodate legacy applications
– High reliability and resilience through self-diagnostics and self-healing
– Programming environments (high-level languages, tools, ...) to increase scientific productivity
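The 20 pJ-per-operation target is consistent with the 20 MW power goal on the previous slide. A quick sanity check, using only the figures given on these two slides:

```python
# Sanity check: 10^18 operations/second at 20 picojoules per average
# operation should land exactly on the 20 MW power goal.
ops_per_second = 1e18          # 1 ExaOPS, per the performance-goals slide
energy_per_op_joules = 20e-12  # 20 pJ per average operation

power_watts = ops_per_second * energy_per_op_joules
power_megawatts = power_watts / 1e6
print(power_megawatts)  # 20.0, matching the 20 MW goal
```

In other words, the two targets are not independent: hitting sustained exaops within 20 MW requires the average operation to cost no more than 20 pJ, including data movement, not just the arithmetic itself.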

12 Exascale Computing: We Need to Reinvent Computing
– The traditional path of 2x performance improvement every 18 months has ended. For decades, Moore's Law plus Dennard scaling provided more, faster transistors in each new process technology. This is no longer true: we have hit a power wall, and the result is unacceptable power requirements for increased performance.
– We cannot procure an exascale system based on today's or projected future commodity technology. Existing HPC solutions cannot be usefully scaled up to exascale; energy consumption would be prohibitive (~300 MW).
– Exascale will require partnering with the U.S. computing industry to chart the future. Industry is at a crossroads and is open to new paths; the time is right to push energy efficiency into the marketplace.
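The ~300 MW figure can be reproduced by naively scaling the efficiency of a then-current system to an exaflop. A rough estimate, assuming Titan's figures from the comparison slide (~27.11 PF peak at ~9 MW):

```python
# Why straight scaling is prohibitive: extrapolate Titan-era energy
# efficiency (figures from the comparison slide) to 10^18 FLOP/s.
titan_peak_flops = 27.11e15  # peak performance, FLOP/s
titan_power_watts = 9e6      # power draw, W

joules_per_flop = titan_power_watts / titan_peak_flops
exaflop_power_mw = (1e18 * joules_per_flop) / 1e6
print(round(exaflop_power_mw))  # ~332 MW, consistent with the ~300 MW claim
```

So without roughly a 15x improvement in energy per operation, an exaflop machine built on that generation's technology would need its own power plant.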

13 Exascale vs. Predecessor Computers
– Sequoia (CPU): accepted 2013; power 8 MW; peak 20.13 PF; 96 cabinets; 98,304 nodes; 1,573 TB system memory; Linpack 17.17 PF
– Titan (CPU-GPU): accepted 2013; power 9 MW; peak 27.11 PF; 200 cabinets; 18,688 nodes; 710 TB system memory; Linpack 17.59 PF
– Summit & Sierra (CPU-GPU): accepted 2018; power 10 MW; peak 150 PF; 192 cabinets; 3,500 nodes; 2,100 TB system memory
– Exascale: power ~20 MW; peak > 1,000 PF; > 200 cabinets; nodes TBD; > 128,000 TB system memory
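The comparison above implies a steep energy-efficiency trajectory. A small sketch computing peak GFLOP/s per watt from the slide's own peak-performance and power figures:

```python
# Energy efficiency implied by the comparison slide: peak GFLOP/s per watt,
# using the peak (PF) and power (MW) figures as given on the slide.
systems = {
    "Sequoia":       {"peak_pf": 20.13,  "power_mw": 8},
    "Titan":         {"peak_pf": 27.11,  "power_mw": 9},
    "Summit/Sierra": {"peak_pf": 150.0,  "power_mw": 10},
    "Exascale goal": {"peak_pf": 1000.0, "power_mw": 20},
}

efficiency = {
    # (PF * 1e6 GF/PF) / (MW * 1e6 W/MW) simplifies to PF / MW in GF/W
    name: s["peak_pf"] * 1e6 / (s["power_mw"] * 1e6)
    for name, s in systems.items()
}

for name, gfw in efficiency.items():
    print(f"{name}: {gfw:.1f} GFLOP/s per watt")
```

Sequoia comes out near 2.5 GF/W and the exascale goal near 50 GF/W: roughly a 20x efficiency gain over one machine generation's baseline, which is the core of the "reinvent computing" argument.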

14 ECI Strategy
– Integrate applications, acquisitions, and research and development
– Exploit the co-design process, driven by the full application workflow
– Develop exascale software stacks
– Partner with and fund vendors to transition research to the product space
– Collaborate with other government agencies and other countries, as advantageous

15 Partnership with Industry is Vital
We need industry involvement
– We don't want one-off, stove-piped solutions that are obsolete before they're deployed
– We need continued "product" availability and upgrade potential beyond the lifetime of this initiative
Industry needs us
– Its business model obligates industry to optimize for profit and beat competitors
– Internal investments are heavily weighted toward near-term, evolutionary improvements with a small margin over competitors
– Funding for far-term technology is limited and constrained in scope
How do we impact industry?
– Work with those that have strong advocate(s) within the company
– Fund development and demonstration of far-term technologies that clearly show potential as future mass-market products, or mass-market components of families of products* (corollary: do not fund product development)
– Industry has demonstrated that it will incorporate promising technologies into future product lines
* Industrial contractor, private communication.

16 DOE Progress Towards Exascale
FY2011:
– MOU between SC and NNSA for the Coordination of Exascale Activities
– Exascale Co-Design Centers funded
– Request for Information: critical and platform technologies
FY2012:
– Programming environments (X-Stack)
– FastForward 1: vendor partnerships on critical component technologies
FY2013:
– Exascale strategy plan to Congress
– Operating System / Runtime (OS/R)
– DesignForward 1: vendor partnerships on critical system-level technologies
– Meeting with Secretary Moniz: "go get a solid plan with defendable cost"
FY2014:
– Meetings with HPC vendors to validate the ECI timeline and update exascale plans and costs
– Established Nexus / Plexus lab structure to determine software plans and costs
– FastForward 2: exascale node designs
– External review of the "Exascale Preliminary Project Design Document (EPPDD)"
FY2015:
– DesignForward 2: conceptual designs of exascale systems
– Release to ASCAC of the "Preliminary Conceptual Design for an Exascale Computing Initiative"
– Generate requirements for exascale systems to be developed and deployed in FY2023
– Develop and release FOAs and RFPs, for funding in FY2016
FY2016:
– Initiate the Exascale Computing Initiative (ECI)

17 Schedule Baseline

18 Exploit Co-Design Process
Exascale Co-Design Center for Materials in Extreme Environments (ExMatEx)
– Director: Timothy Germann (LANL)
– http://www.exmatex.org
Center for Exascale Simulation of Advanced Reactors (CESAR)
– Director: Andrew Siegel (ANL)
– https://cesar.mcs.anl.gov
Center for Exascale Simulation of Combustion in Turbulence (ExaCT)
– Director: Jacqueline Chen (SNL)
– http://exactcodesign.org

19 Current Partnerships with Vendors (jointly funded by SC & NNSA)
FastForward Program (node technologies)
– Phase 1: two-year contracts, started July 1, 2012 ($64M)
– Phase 2: two-year contracts, started Fall 2014 ($100M)
– Performers: AMD, Cray, IBM, Intel, NVIDIA
– Goals: initiate partnerships with multiple companies to accelerate the R&D of critical node technologies and designs needed for extreme-scale computing; fund technologies targeted for productization in the 5-10 year timeframe.
DesignForward Program (system technologies)
– Phase 1: two-year contracts, started Fall 2013 ($23M)
– Phase 2: two-year contracts, started Winter 2015 ($10M)
– Performers: AMD, Cray, IBM, Intel, NVIDIA
– Goals: initiate partnerships with multiple companies to accelerate the R&D of interconnect architectures and conceptual designs for future extreme-scale computers; fund technologies targeted for productization in the 5-10 year timeframe.

20 FY-2016 ECI Cross-cut (in $K)

21 FY-2016 ECI Cross-cut (in $K)

22 ECI Major Risks
– Maintaining strong leadership and commitment from the US government.
– Achieving the extremely challenging power and productivity goals.
– Decreasing reliability as power efficiency and system complexity/concurrency increase.
– Vendor commitment and stability; deployment of the developed technology.

23 Summary
– Leadership in high-performance computing (HPC) and large-scale data analysis will advance national competitiveness in a wide array of strategic sectors, including basic science, national security, energy technology, and economic prosperity.
– The U.S. semiconductor and HPC industries have the ability to develop the necessary technologies for an exascale computing capability early in the next decade.
– An integrated approach to the development of hardware, software, and applications is required for the development of exascale computers.
– ECI's goal is to deploy two capable exascale computing systems.

24 END

