MCS Vision: Petascale Computing, Grid Computing, Computational Science and Engineering
Pioneering Science and Technology – Office of Science, U.S. Department of Energy

Presentation transcript:

Slide 1: MCS Vision
- Petascale Computing: Increase by several orders of magnitude the computing power that can be applied to individual scientific problems, thus enabling progress in understanding complex physical and biological systems.
- Grid Computing: Interconnect the world's most important scientific databases, computing systems, instruments, and facilities to improve scientific productivity and remove barriers to collaboration.
- Computational Science and Engineering: Make high-end computing a core tool for challenging modeling, simulation, and analysis problems.

Slide 2: MCS Products/Resources
- Enabling technologies: "middleware", "tools", "support applications"
- Scientific applications
- Hardware
- Other fundamental CS research

Slide 3: Enabling Technologies
- Globus Toolkit: software infrastructure and standards for Grid computing
- MPICH: our free implementation of MPI (see the minimal sketch after this list)
- Jumpshot: software for analysis of message passing
- pNetCDF: high-performance parallel I/O library
- PETSc: toolkit for parallel matrix solves
- Visualization ("Futures Lab"): scalable parallel visualization software, large-scale displays
- Access Grid: collaboration environment
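To make the MPICH item concrete, here is a minimal, hedged sketch (not taken from the deck) of an MPI program in C: each rank contributes one value and rank 0 prints the global sum. It uses only the standard MPI API, so it should build with MPICH's mpicc and run under mpiexec.

/* mpi_sum.c - minimal illustrative MPI program: global sum across ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;   /* each rank's contribution */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g\n", size, total);

    MPI_Finalize();
    return 0;
}

Build and run, for example: mpicc mpi_sum.c -o mpi_sum && mpiexec -n 4 ./mpi_sum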

Slide 4: Collaboration Technology – the Access Grid
- Multi-way meetings and conferences over the Internet using high-quality video/audio technology
- Large-format displays; 200+ installations worldwide
- Easily replicated configurations, open-source software
- www.accessgrid.org

Slide 5: The Grid Links People with Distributed Resources on a National Scale
[Diagram; the only label captured in the transcript is "Sensors".]

Slide 6: Some Key Scientific Applications
- Flash: community code for general astrophysical phenomena; ASCI project with the University of Chicago
- Nek5: biological fluids
- pNeo: neocortex simulations for the study of epileptic seizures
- QMC: Monte Carlo simulations of atomic nuclei
- Nuclear reactor simulations

Slide 7: Hardware
- Chiba City – software scalability R&D: addresses scalability issues in system software, open-source software, and application code. 512 CPUs, 256 nodes, Myrinet, 2 TB storage, Linux. DOE OASCR funded; installed in 1999.
- Jazz – Linux cluster for ANL applications: supports and enhances the ANL application community; 50+ projects from a spectrum of S&E divisions. 350 CPUs, Myrinet, 20 TB storage. ANL funded; installed in 2002. Achieved 1.1 TF sustained.
- Blue Gene prototype – coming soon: two-rack system scalable to twenty racks.

Slide 8: Other HPC Areas
- Architecture and performance evaluation
- Programming models and languages
- Systems software
- Numerical methods and optimization
- Software components
- Software verification
- Automatic differentiation

Slide 9: I-WIRE Impact
Two concrete examples of the impact of I-WIRE.
[Diagram labels: I-WIRE, Flash, Blue Gene, NLCF, TeraGrid.]

Slide 10: I-WIRE
I-WIRE (Illinois Wired/Wireless Infrastructure for Research and Education): state-funded dark-fiber optical infrastructure to support networking and applications research.
- $11.5M total funding: $6.5M FY00-03, $5M in process for FY04-05
- Application driven: Access Grid (telepresence and media), TeraGrid (computational and data grids)
- New technologies proving ground: optical network technologies, middleware and computer science research
- Deliverable: a flexible infrastructure to support advanced applications and networking research
[Map labels: UIUC/NCSA; Starlight International Optical Network Hub (NU-Chicago); Argonne National Laboratory; U Chicago; IIT; UIC; U Chicago Gleacher Center; James R. Thompson Center – Illinois Century Network; 40 Gb/s Distributed Terascale Facility Network; Commercial Fiber Hub.]

Slide 11: Status – I-WIRE Geography
[Map of the Chicago area showing I-WIRE sites: UI-Chicago, Illinois Inst. of Tech., Northwestern Univ.-Chicago ("Starlight"), U of Chicago (main campus and Gleacher Center), the Illinois Century Network, a commercial fiber hub, UIUC/NCSA in Urbana (approx. 140 miles south), and Argonne Nat'l Lab (approx. 25 miles SW), with interstates I-55, I-90/94, I-290, and I-294.]

Slide 12: TeraGrid Vision – A Unified National HPC Infrastructure that Is Persistent and Reliable
- Largest NSF compute resources
- Largest DOE instrument (SNS)
- Fastest network
- Massive storage
- Visualization instruments
- Science gateways
- Community databases, e.g. geosciences: four data collections including high-resolution CT scans, global telemetry data, worldwide hydrology data, and regional LIDAR terrain data

Slide 13: Resources and Services (33 TF, 1.1 PB disk, 12 PB tape)

Slide 14: Current TeraGrid Network
Resources: compute, data, instruments, science gateways.
[Network map with hubs in Los Angeles, Chicago, and Atlanta connecting SDSC, TACC, NCSA, Purdue, ORNL, UC/ANL, PSC, Caltech, and IU.]

Slide 15: Flash
Flash Project
- Community astrophysics code
- DOE-funded ASCI program at UC/Argonne
- $4 million per year over ten years; currently in its 7th year
Flash Code/Framework
- Heavy emphasis on software engineering, performance, and usability
- 500+ downloads; active user community
- Runs on all major HPC platforms
- Public automated testing facility; extensive user documentation

Slide 16: Flash – Simulating Astrophysical Processes
- Cellular detonation
- Compressed turbulence
- Helium burning on neutron stars
- Richtmyer-Meshkov instability
- Laser-driven shock instabilities
- Nova outbursts on white dwarfs
- Rayleigh-Taylor instability
- Flame-vortex interactions
- Gravitational collapse/Jeans instability
- Wave breaking on white dwarfs
- Shortly: relativistic accretion onto neutron stars, Orszag-Tang MHD vortex, Type Ia supernovae, intracluster interactions, magnetic Rayleigh-Taylor

Slide 17: How Has the Fast Network Helped Flash?
- Flash has been in production for five years, generating terabytes of data
- Currently done "by hand":
  - Data transferred locally from supercomputing centers for visualization/analysis
  - Data remotely visualized at UC using Argonne servers
  - Can harness data storage across several sites
- Not just "visionary" grid ideas are useful; immediate "mundane" things help as well.

Slide 18: Buoyed Progress in HPC
FLASH is a flagship application for BG/L
- Currently being run on 4K processors at Watson
- Will run on 16K processors in several months
Argonne partnership with Oak Ridge for the National Leadership Class Computing Facility
- Non-classified computing
- Blue Gene at Argonne; X1 and Black Widow at ORNL
- Application focus groups apply for time

Slide 19: Petaflops Hardware Is Just Around the Corner
[Timeline from teraflops to petaflops with a "we are here" marker.]

Slide 20: Diverse Architectures for Petaflop Systems
✓ IBM – Blue Gene
- Puts processors + cache + network interfaces on the same chip
- Achieves high packaging density, low power consumption
✓ Cray – Red Storm and X1
- 10K-processor (40 TF) Red Storm at SNL
- 1K-processor (20 TF) X1 to ORNL
Emerging
- Field Programmable Gate Arrays
- Processor in Memory
- Streams …
(✓ = systems slated for the DOE National Leadership Computing Facility)

Slide 21: NLCF Target Application Areas
- SciDAC Fusion
- SciDAC Astrophysics
- Genomes to Life
- Nanophase Materials
- SciDAC Climate
- SciDAC Chemistry

Slide 22: The Blue Gene Consortium – Goals
- Provide new capabilities to selected applications partnerships.
- Provide functional requirements for a petaflop/sec version of Blue Gene.
- Build a community around a new class of architecture.
  - Thirty university and lab partners
  - About ten hardware partners and about twenty software collaborators
- Develop a new, sustainable model of partnership.
  - "Research product" bypassing the normal "productization" process and costs
  - Community-based support model (hub and spoke)
- Engage (or re-engage) computer science researchers with high-performance computing architecture.
  - Broad community access to hardware systems
  - Scalable operating system research and novel software research
- A partnership of DOE, NSF, NIH, NNSA, and IBM will work on computer science, computational science, and architecture development.
- Kickoff meeting was April 27, 2004, in Chicago.

Slide 23: Determining Application Fit
How will applications map onto different petaflop architectures?
[Diagram mapping applications onto architecture families: BG/L, BG/P, BG/Q (massively parallel, fine grain – our focus); vector/parallel (X1, Black Widow); cluster (Red Storm); some mappings are marked "?".]

Slide 24: Application Analysis Process
[Flow diagram; steps and annotations:]
- A priori algorithmic performance analysis: model 0th-order behavior; may be based on data from conventional systems or BGSim; extrapolate conclusions.
- Scalability analysis (Jumpshot, FPMPI): look for inauspicious scalability bottlenecks; detailed message-passing statistics on BG hardware; surfaces 0th-order scaling problems.
- Performance analysis (PAPI, hpmlib): use of memory hierarchy, use of pipeline, instruction mix, bandwidth vs. latency, etc.; drives detailed architectural adjustments (a minimal PAPI sketch follows this list).
- Iterate: refine the model, tune the code, validate the model on BG/L hardware, and decide whether to rewrite algorithms.
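As a hedged illustration of the performance-analysis step above, the C sketch below wraps a stand-in kernel with PAPI preset counters (total cycles and floating-point operations). The kernel and the event choices are assumptions for illustration only; preset availability such as PAPI_FP_OPS varies by platform, and hpmlib on IBM systems exposes a different interface.

/* papi_kernel.c - illustrative PAPI instrumentation around a stand-in kernel. */
#include <stdio.h>
#include <papi.h>

int main(void) {
    int eventset = PAPI_NULL;
    long long counts[2];

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI initialization failed\n");
        return 1;
    }
    PAPI_create_eventset(&eventset);
    PAPI_add_event(eventset, PAPI_TOT_CYC);  /* total cycles */
    PAPI_add_event(eventset, PAPI_FP_OPS);   /* FP operations; this preset may be unavailable on some CPUs */

    PAPI_start(eventset);

    /* Stand-in kernel: the real measurement target would be, e.g., a FLASH hydrodynamics sweep. */
    double sum = 0.0;
    for (long i = 1; i <= 10000000L; i++)
        sum += 1.0 / (double)i;

    PAPI_stop(eventset, counts);

    printf("cycles = %lld, fp ops = %lld (checksum %.6f)\n", counts[0], counts[1], sum);
    return 0;
}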

Slide 25: High-Performance Resources Operate in Complex and Highly Distributed Environments
Argonne co-founded the Grid
- Establishing a persistent, standards-based infrastructure and application interfaces that enable high-performance access to computation and data
- ANL created the Global Grid Forum, an international standards body
- We lead development of the Globus Toolkit
ANL staff are PIs and key contributors in many grid technology development and application projects
- High-performance data transport, Grid security, virtual organization management, Open Grid Services Architecture, …
- Access Grid – group-to-group collaboration via large multimedia displays

