Leadership Computing Facility (LCF) Roadmap
Presented by Buddy Bland, Leadership Computing Facility Project Director, National Center for Computational Sciences

2 Outline
- Systems
- Facilities upgrade
- Systems infrastructure
  - Overview
  - Networking
  - Storage
- Software and science

3 Systems: CCS firsts (1991–2008)

4 Facilities upgrade: Preparing computer center for petascale computing
- Clear 7,000 ft² for the computer
- Install taps for liquid cooling
- Add 10 MW of power
- Upgrade chilled water plant
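The power and cooling figures go hand in hand: the added 10 MW must ultimately be removed by the chilled-water plant. A minimal back-of-envelope sketch (not from the slides) of what that implies, assuming essentially all of the electrical load ends up as heat in the machine room:

```python
# Rough chilled-water sizing implied by the 10 MW power addition (illustrative
# only; actual ORNL plant figures are not given in this presentation).

ADDED_POWER_MW = 10.0    # power addition quoted on the slide
KW_PER_TON = 3.517       # 1 ton of refrigeration removes 3.517 kW of heat

def cooling_tons(power_mw: float, heat_fraction: float = 1.0) -> float:
    """Tons of cooling needed if `heat_fraction` of the electrical load
    is rejected as heat (assumed ~1.0 for a computer floor)."""
    return power_mw * 1000.0 * heat_fraction / KW_PER_TON

if __name__ == "__main__":
    print(f"{ADDED_POWER_MW:.0f} MW of compute load ≈ "
          f"{cooling_tons(ADDED_POWER_MW):,.0f} tons of chilled-water cooling")
```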

5 Systems infrastructure: Overview – Current and projected

Networking             FY 2007   FY 2008   FY 2009
External B/W (GB/s)    3         4         5
LAN B/W (GB/s)         60        240       240

Archival storage       FY 2007   FY 2008   FY 2009
Capacity (PB)          4         10        18
Bandwidth (GB/s)       4         10        19

Central storage        FY 2007   FY 2008   FY 2009
Capacity (PB)          0.25      1         10
Bandwidth (GB/s)       10        60        240
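To make the projected growth easier to see, here is a minimal sketch that prints the FY 2007 to FY 2009 growth factor for each quantity in the table above; note that the LAN and central-storage cells are the ones reconstructed from the per-system values quoted on the following slides:

```python
# Projected NCCS infrastructure growth, FY 2007 -> FY 2009, taken from the
# overview table above (LAN and central-storage rows reconstructed from the
# per-system values quoted on later slides).

roadmap = {
    "External network B/W (GB/s)": (3, 5),
    "LAN B/W (GB/s)":              (60, 240),
    "Archival capacity (PB)":      (4, 18),
    "Archival bandwidth (GB/s)":   (4, 19),
    "Central capacity (PB)":       (0.25, 10),
    "Central bandwidth (GB/s)":    (10, 240),
}

for name, (fy07, fy09) in roadmap.items():
    print(f"{name:30s} {fy07:>6} -> {fy09:<6}  ({fy09 / fy07:.1f}x)")
```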

6 Systems infrastructure: Network
- Shifting to a hybrid InfiniBand/Ethernet network.
- InfiniBand-based network helps meet the bandwidth and scaling needs for the center.
- Wide-area network will scale to meet user demand using currently deployed routers and switches.
[Figure: LAN/WAN bandwidth at each system milestone – 100 TF: 60 GB/s LAN, 3 GB/s WAN; 250 TF: 240 GB/s LAN, 5 GB/s WAN; 1000 TF: 240 GB/s LAN, 5 GB/s WAN]
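To give a feel for what the quoted rates mean in practice, here is a hedged sketch of transfer-time lower bounds at the 1000 TF LAN and WAN bandwidths; the dataset sizes are hypothetical examples, not figures from the slides:

```python
# Illustrative transfer times at the 1000 TF network rates quoted above
# (240 GB/s LAN, 5 GB/s WAN). Dataset sizes are hypothetical examples.

LAN_GB_S = 240.0   # center-wide InfiniBand LAN bandwidth at 1000 TF
WAN_GB_S = 5.0     # external (wide-area) bandwidth at 1000 TF

def transfer_hours(dataset_tb: float, rate_gb_s: float) -> float:
    """Hours to move `dataset_tb` TB at a sustained rate, ignoring protocol
    overhead and contention, so these are optimistic lower bounds."""
    return dataset_tb * 1000.0 / rate_gb_s / 3600.0

for size_tb in (10, 100, 1000):
    print(f"{size_tb:5d} TB   LAN: {transfer_hours(size_tb, LAN_GB_S):8.2f} h   "
          f"WAN: {transfer_hours(size_tb, WAN_GB_S):8.2f} h")
```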

7 Systems infrastructure: Network (continued)
- Consistent planned growth in ORNL external network bandwidth
[Figure: ORNL and LCF backbone connectivity]

8 Systems infrastructure: Storage – Archival storage
- HPSS software has already demonstrated the ability to scale to many PB.
- Add two silos per year.
- Tape capacity and bandwidth and disk capacity and bandwidth are all scaled to maintain a balanced system.
- Use new methods to improve data transfer speeds between the parallel file systems and the archival system.
[Figure: archival storage at each system milestone – 100 TF: 4 PB, 4 GB/s; 250 TF: 10 PB, 10 GB/s; 1000 TF: 18 PB, 19 GB/s]
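The "balanced system" point can be checked directly from the capacity and bandwidth pairs above: if both grow together, the time needed to stream the entire archive to or from tape stays roughly constant across upgrades. A minimal sketch of that check, using only the numbers on this slide:

```python
# Archive drain time at each system milestone, using the capacity/bandwidth
# pairs quoted on the slide. A balanced system keeps this roughly constant.

milestones = {             # system: (capacity in PB, bandwidth in GB/s)
    "100 TF":  (4.0,  4.0),
    "250 TF":  (10.0, 10.0),
    "1000 TF": (18.0, 19.0),
}

for system, (capacity_pb, bandwidth_gb_s) in milestones.items():
    seconds = capacity_pb * 1.0e6 / bandwidth_gb_s   # PB -> GB, then divide by GB/s
    print(f"{system:8s} full-archive drain time ≈ {seconds / 86400.0:5.1f} days")
```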

9 Systems infrastructure: Storage – Central storage
- Increase scientific productivity by providing single repository for simulation data
- Connect to all major LCF resources
- Connect to both InfiniBand and Ethernet networks
- Potentially becomes the primary file system for the 1000 TF system
[Figure: central storage at each system milestone – 100 TF: 250 TB, 10 GB/s; 250 TF: 1 PB, 60 GB/s; 1000 TF: 10 PB, 240 GB/s]
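One way to read the central-storage bandwidth targets is as a statement about how many storage servers the center-wide file system needs. A hedged sketch of that sizing; the per-server throughput below is a hypothetical assumption, not a figure from the slides:

```python
import math

# Rough server-count estimate for a center-wide parallel file system.
# The aggregate target comes from the slide; the per-server rate is assumed.

TARGET_AGGREGATE_GB_S = 240.0   # central-storage bandwidth at 1000 TF (from the slide)
PER_SERVER_GB_S = 2.0           # assumed sustained throughput per storage server (hypothetical)

servers = math.ceil(TARGET_AGGREGATE_GB_S / PER_SERVER_GB_S)
print(f"~{servers} storage servers at {PER_SERVER_GB_S} GB/s each "
      f"to sustain {TARGET_AGGREGATE_GB_S} GB/s aggregate")
```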

10 Software and science: Overview
- Cutting-edge hardware lays the foundation for science at the petascale: scientists using a production petascale system with petascale application development software.
- Establishing a fully integrated computing environment.
- Developing software infrastructure to enable productive utilization and system management.
- Empowering scientific and engineering progress and allied educational activities using the petascale system.
- Developing and educating the next generation of computational scientists to use the petascale system.
- The NCCS Management Plan coordinates the transition to petascale production.

11 Software and science: Roadmap to deliver Science Day
[Roadmap chart: scale tests using Jaguar, with Science Day 1 milestones on the 100 TF, 250 TF, and 1000 TF systems]
- Hardware: 50 TF, 100 TF, 250 TF, 1000 TF
- Testbeds: dual core, quad-core
- Operating system (key releases): dual-core LWK; quad-core Linux LWK; quad-core Catamount; Baker LWK; petascale LWK
- File system: establish CFS Lustre center of excellence at ORNL; XT4 LW IO interface; MPI-Lustre failover; LCF-wide file system; SIO Lustre clients and external cluster of Lustre servers
- RAS: expand hardware supervisory system; Mazama system; fault monitor and prediction system; C/R on XT3

12 Science drivers
- Advanced energy systems
  - Fuel cells
  - Fusion and fission energy
- Bioenergy
  - Ethanol production
- Environmental modeling
  - Climate prediction
  - Pollution remediation
- Nanotechnology
  - Sensors
  - Storage devices
“University, laboratory, and industrial researchers using a broad array of disciplinary perspectives are making use of the leadership computing resources to generate remarkable consequences for American competitiveness.” – Dr. Raymond L. Orbach, Undersecretary for Science, U.S. Department of Energy

13 Software and science: Fusion
Expected outcomes
- 5 years:
  - Full-torus, electromagnetic simulation of turbulent transport with kinetic electrons for simulation times approaching the transport time scale
  - Develop understanding of internal reconnection events in extended MHD, with assessment of RF heating and current drive techniques for mitigation
- 10 years:
  - Develop quantitative, predictive understanding of disruption events in large tokamaks
  - Begin integrated simulation of burning plasma devices – multi-physics predictions for ITER
[Chart: computer performance (TFLOPS) vs. years, with milestones – gyrokinetic ion turbulence in a flux tube; 1D wave/plasma, reduced equation MHD; gyrokinetic ion turbulence in full torus; 2D wave/plasma-mode conversion, all orders; extended MHD of moderate scale devices; turbulence simulation at transport time scale; wave/plasma coupled to MHD and transport; MHD at long time scales; integrated simulation – burning plasma]

14 Software and science: Biology
Expected outcomes
- 5 years:
  - Metabolic flux modeling for hydrogen and carbon fixation pathways
  - Constrained flexible docking simulations of interacting proteins
- 10 years:
  - Multiscale stochastic simulations of microbial metabolic, regulatory, and protein interaction networks
  - Dynamic simulations of complex molecular machines
[Chart: computer performance (TFLOPS) vs. years, with milestones – comparative genomics; genome-scale protein threading; constrained rigid docking; constraint-based flexible docking; community metabolic, regulatory, and signaling simulations; cell pathway and network simulation; molecular machine classical simulation; protein machine interactions]

15 Software and science: Climate
Expected outcomes
- 5 years:
  - Fully coupled carbon-climate simulation
  - Fully coupled sulfur-atmospheric chemistry simulation
- 10 years:
  - Cloud-resolving 30-km spatial resolution atmosphere climate simulation
  - Fully coupled physics, chemistry, and biology earth system model
[Chart: computer performance (TFLOPS) and Tbytes vs. years, with milestones – tropospheric chemistry; interactive; eddy-resolving; dynamic vegetation; biogeochemistry; stratospheric chemistry; cloud-resolving]

16 Software and science: Nanoscience
Expected outcomes
- 5 years:
  - Realistic simulation of self-assembly and single-molecule electron transport
  - Finite-temperature properties of nanoparticles/quantum corrals
- 10 years:
  - Multiscale modeling of molecular electronic devices
  - Computation-guided search for new materials/nanostructures
[Chart: computer performance (TFLOPS) vs. years, with milestones – magnetic nanoclusters; thermodynamics and kinetics of small molecules; magnetic structure of quantum corrals; kinetics and catalysis of nano and enzyme systems; spin dynamics switching of nanobits; molecular and nanoelectronics; magnetic phase diagrams; multiscale modeling of hysteresis; first principles of domain wall interactions in nanowires; design of materials and structures; computation-guided search for new nanostructures]

17 Contact
Arthur S. (Buddy) Bland
Leadership Computing Facility Project Director
National Center for Computational Sciences
(865)