Presentation on theme: "SAN DIEGO SUPERCOMPUTER CENTER, CSE190 Honors Seminar in High Performance Computing, Spring 2000, Prof. Sid Karin, x45075". Presentation transcript:

1 SAN DIEGO SUPERCOMPUTER CENTER
CSE190 Honors Seminar in High Performance Computing, Spring 2000
Prof. Sid Karin, skarin@ucsd.edu, x45075

2 Outline: Definitions, History, SDSC/NPACI, Applications

3 Definitions of Supercomputers
- The most powerful machines available.
- Machines that cost about $25M in year-2000 dollars.
- Machines sufficiently powerful to model physical processes with accurate laws of nature, realistic geometry, and large quantities of observational/experimental data.

4 Supercomputer Performance Metrics
- Benchmarks: applications, kernels, selected algorithms
- Theoretical peak speed ("guaranteed not to exceed" speed)
- TOP 500 list
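The gap between a benchmark and theoretical peak shows up even on a toy kernel. A minimal sketch (the kernel and operation count are illustrative, not tied to any benchmark on the slide): time a dot product and convert the operation count into an achieved MFLOPS rate.

```python
import time

def dot(a, b):
    # Dot product: 2*n floating-point operations (n multiplies + n adds).
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

n = 200_000
a = [1.0] * n
b = [2.0] * n

t0 = time.perf_counter()
result = dot(a, b)
elapsed = time.perf_counter() - t0

flops = 2 * n                   # operation count for a dot product
mflops = flops / elapsed / 1e6  # achieved rate, far below any machine's peak
print(f"dot = {result}, achieved {mflops:.1f} MFLOPS")
```

The achieved rate depends on the kernel, the data size, and the implementation, which is exactly why the slide distinguishes application benchmarks, kernel benchmarks, and peak speed.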

5 "Misleading Performance Specifications in the Supercomputer Field," David H. Bailey, RNR Technical Report RNR-92-005, December 1, 1992. http://www.nas.nasa.gov/Pubs/TechReports/RNRreports/dbailey/RNR-92-005/RNR-92-005.html

6 Outline: Definitions, History, SDSC/NPACI, Applications

7 Applications
- Cryptography
- Nuclear weapons design
- Weather / climate
- Scientific simulation
- Petroleum exploration
- Aerospace design
- Automotive design
- Pharmaceutical design
- Data mining
- Data assimilation

8 Applications, cont'd.
- Processes too complex to instrument: automotive crash testing, air flow
- Processes too fast to observe: molecular interactions
- Processes too small to observe: molecular interactions
- Processes too slow to observe: astrophysics

9 Applications, cont'd.
- Performance
- Price
- Performance / price

10 [Diagram] Theory, Experiment, Simulation
- Numerically intensive computing
- Data-intensive computing (mining)
- Data-intensive computing (assimilation)

11 Supercomputer Architectures
- Vector
- Parallel vector, shared memory
- Parallel: hypercubes, meshes, clusters
- SIMD vs. MIMD
- Shared vs. distributed memory
- Cache-coherent memory vs. message passing
- Clusters of shared-memory parallel systems
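The shared-memory vs. message-passing distinction above can be sketched in a few lines. In this toy example (threads stand in for nodes and queues stand in for the interconnect; it is not MPI and not any machine from the slides), each "node" owns its data privately and exchanges only explicit boundary messages with its neighbor, the way distributed-memory machines trade halo values:

```python
import threading
from queue import Queue

def node(my_values, send_q, recv_q, results, name):
    # Message-passing style: no shared data; only explicit send/receive.
    send_q.put(my_values[-1])       # ship my boundary element to the neighbor
    neighbor_edge = recv_q.get()    # receive the neighbor's boundary element
    results[name] = sum(my_values) + neighbor_edge

a_to_b, b_to_a = Queue(), Queue()   # the "interconnect": one channel per direction
results = {}
t1 = threading.Thread(target=node, args=([1, 2, 3], a_to_b, b_to_a, results, "A"))
t2 = threading.Thread(target=node, args=([10, 20, 30], b_to_a, a_to_b, results, "B"))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # node A: 6 + 30, node B: 60 + 3
```

In a shared-memory (cache-coherent) design, both workers would instead read the neighbor's element directly from one common array; the programming models differ even when the computation is identical.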

12 The Cray-1
- A vector computer that worked
- A balanced computing system: CPU, memory, I/O
- A photogenic computer

13 1976: the supercomputing "island"; today: a continuum. [Chart: number of machines vs. performance]

14 [image slide]

15 [image slide]

16 The Cray X-MP
- Shared-memory parallel vector
- Followed by the Cray Y-MP, C-90, J-90, T90, ...

17 [image slide]

18 The Cray-2
- Parallel vector, shared memory
- Very large memory: 256 MW (actually 256K Kwords = 262 million words; one word = 8 bytes)
- Liquid-immersion cooling

19 [image slide]

20 Cray Companies
- Control Data
- Cray Research Inc.
- Cray Computer Company Inc.
- SRC Inc.

21 Thinking Machines
- SIMD vs. MIMD
- Evolution from CM-1 to CM-2
- ARPA involvement

22 [image slide]

23 1st Teraflops System for US Academia: "Blue Horizon"
- 1 TFLOPS IBM SP
- 144 8-processor compute nodes; 12 2-processor service nodes
- 1,176 Power3 processors at 222 MHz
- > 640 GB memory (4 GB/node), 10.6 GB/s bandwidth; upgrade to > 1 TB later
- 6.8 TB switch-attached disk storage
- Largest SP with 8-way nodes
- High-performance access to HPSS
- Trailblazer switch interconnect (currently ~115 MB/s bandwidth), with a subsequent upgrade

24 UCSD currently #10 on Dongarra's Top 500 list
- Actual Linpack benchmark: sustained 558 Gflops on 120 nodes
- Projected Linpack benchmark: 650 Gflops on 144 nodes
- Theoretical peak: 1.023 Tflops
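The peak figure above is consistent with Blue Horizon's node counts on slide 23 if each Power3 CPU is assumed to retire 4 floating-point operations per cycle (an assumption about the Power3 core, not stated on the slide). A quick arithmetic check, including the sustained-vs-peak efficiency ratio the Top 500 list is built on:

```python
cpus = 144 * 8              # Blue Horizon compute CPUs (service nodes excluded)
clock_hz = 222e6            # 222 MHz Power3
flops_per_cycle = 4         # assumed flops per cycle per CPU

peak = cpus * clock_hz * flops_per_cycle
print(f"theoretical peak: {peak / 1e12:.3f} TFLOPS")   # matches the slide's 1.023

sustained = 558e9           # measured Linpack result (on 120 of 144 nodes)
efficiency = sustained / peak
print(f"Linpack sustained vs. full-system peak: {efficiency:.1%}")
```

Sustained Linpack at roughly half of peak illustrates why the slides call peak the "guaranteed not to exceed" speed.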

25 First Tera MTA is at SDSC

26 Tera MTA Architectural Characteristics
- Multithreaded architecture
- Randomized, flat, shared memory
- 8 CPUs, 8 GB RAM now, going to 16 later this year
- High bandwidth to memory (one word per cycle per CPU)
Benefits:
- Reduced programming effort: a single parallel model for one or many processors
- Good scalability
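The MTA's central idea is tolerating memory latency by switching among many hardware threads instead of relying on caches. A software analogy (Python threads with an artificial 50 ms "memory latency"; the numbers are illustrative, not MTA specifications): when many long-latency references are in flight at once, total wall time approaches one latency rather than the sum.

```python
import threading
import time

LATENCY = 0.05   # pretend every memory reference takes 50 ms

def load_and_add(results, i):
    time.sleep(LATENCY)      # stand-in for a long-latency memory reference
    results[i] = i * i       # the computation that depends on the loaded value

results = [0] * 8
t0 = time.perf_counter()
threads = [threading.Thread(target=load_and_add, args=(results, i)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - t0

# All 8 "references" overlap, so wall time is close to one latency, not eight.
print(f"8 overlapped references took {elapsed * 1000:.0f} ms (serial: ~400 ms)")
```

The MTA does this in hardware on every cycle, which is what makes a flat, uncached memory practical.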

27 SDSC's road to terascale computing

28 ASCI Blue Mountain Site Prep: 12,000 sq. ft. (120 ft x 100 ft)

29 ASCI Blue Mountain Site Prep (continued)

30 ASCI Blue Mountain Facilities Accomplishments
- 12,000 sq. ft. of floor space
- 1.6 MW of power
- 530 tons of cooling capability
- 384 cabinets to house the 6,144 CPUs
- 48 cabinets for metarouters
- 96 cabinets for disks
- 9 cabinets for 36 HIPPI switches
- About 348 miles of fiber cable

31 ASCI Blue Mountain SST System Final Configuration
- Cray Origin 2000: 3.072 TeraFLOPS peak
- 48 x 128-CPU Origin 2000 (250 MHz R10K)
- 6,144 CPUs: 48 x 128-CPU SMPs
- 1,536 GB memory total: 32 GB per 128-CPU SMP
- 76 TB Fibre Channel RAID disks
- 36 x HIPPI-800 switch cluster interconnect
- To be deployed later this year: 9 x HIPPI-6400 32-way switch cluster interconnect

32 ASCI Blue Mountain Accomplishments
- On-site integration of the 48 x 128 system completed (including upgrades)
- HiPPI-800 interconnect completed
- 18 GB Fibre Channel disk completed
- Integrated visualization (16 IR pipes)
- Most site prep completed
- System integrated into the LANL secure computing environment
- Web-based tool for tracking status

33 ASCI Blue Mountain Accomplishments, cont'd.
- Linpack: achieved 1.608 TeraFLOPS on an accelerated schedule, 2 weeks after install; system validation run on a 40 x 126 configuration; f90/MPI version run of over 6 hours
- sPPM (turbulence-modeling code) validated full-system integration; used all 12 HiPPI boards per SMP and 36 switches; used a special "MPI" HiPPI bypass library
- ASCI codes scaling

34 [image slide]

35 Summary
- Installed the ASCI Blue Mountain computer ahead of schedule and achieved the Linpack record two weeks after install.
- ASCI application codes are being developed and used.

36 Half

37 Rack

38 Trench

39 Network Design Principles
- Connect any pair of the DSM computers through the crossbar switches
- Connect computers directly only to switches, optimizing latency and bandwidth (there are no direct DSM-to-DSM or switch-to-switch links)
- Support a 3-D toroidal 4x4x3 DSM configuration by establishing non-blocking simultaneous links across all six faces of the computer grid
- Maintain full interconnect bandwidth for subsets of DSM computers (48 DSM computers divided into 2, 3, 4, 6, 8, 12, 24, or 48 separate, non-interacting groups)
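The 4x4x3 toroidal arrangement can be checked in a few lines: each of the 48 DSMs has exactly six face neighbors (one per face of the grid), and wraparound links make every link bidirectional. A sketch of the topology only (the switch routing itself is not modeled here):

```python
from itertools import product

DIMS = (4, 4, 3)   # the 4x4x3 DSM torus from the slide

def neighbors(node):
    # Six face neighbors of a node in a 3-D torus, with wraparound.
    x, y, z = node
    out = []
    for axis in range(3):
        for delta in (-1, 1):
            coords = [x, y, z]
            coords[axis] = (coords[axis] + delta) % DIMS[axis]
            out.append(tuple(coords))
    return out

nodes = list(product(*(range(d) for d in DIMS)))
# Every link should be bidirectional: if n neighbors m, then m neighbors n.
links_symmetric = all(m in neighbors(n) for m in nodes for n in neighbors(m))
print(len(nodes), len(set(neighbors((0, 0, 0)))), links_symmetric)  # 48 6 True
```

Because every dimension here has size at least 3, the six neighbors of each node are distinct; a dimension of size 2 would collapse the +1 and -1 links into one.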

40 [Diagram] 6 groups of 8 computers each; 18 16x16 crossbar switches; 18 separate networks

41 sPPM Hydro on 6144 CPUs
- [Diagram] Problem subdomain: 8x4x4 process layout (128 CPUs / 1 DSM); 12 HiPPI-800 NICs; router CPU; 1 HiPPI-800 NIC and router CPU on the neighbor SMP
- Problem domain: 4x4x3 DSM layout (48 DSMs / 6,144 CPUs)

42 sPPM Scaling on Blue Mountain

43 Outline: Definitions, History, SDSC/NPACI, Applications

44 SDSC: A National Laboratory for Computational Science and Engineering

45 A Distributed National Laboratory for Computational Science and Engineering

46 Continuing Evolution (1985 to 2000)
- [Timeline diagram] Individuals; SDSC; NPACI
- Resources; Education, Outreach & Training; Partners
- Technology & applications thrusts; enabling technologies; applications

47 NPACI is a Highly Leveraged National Partnership of Partnerships
- 46 institutions, 20 states, 4 countries, 5 national labs
- Many projects
- Vendors and industry
- Government agencies

48 Mission: Accelerate scientific discovery through the development and implementation of computational and computer science techniques

49 Vision: Changing How Science is Done
- Collect data from digital libraries, laboratories, and observation
- Analyze the data with models run on the grid
- Visualize and share data over the Web
- Publish results in a digital library

50 Goals: Fulfilling the Mission (Embracing the Scientific Community)
- Capability computing: provide compute and information resources of exceptional capability
- Discovery environments: develop and deploy novel, integrated, easy-to-use computational environments
- Computational literacy: extend the excitement, benefits, and opportunities of computational science

51 NPACI Objectives
- Deploy teraflops-scale computers
- Create a national metacomputing infrastructure
- Enable data-intensive computing
- Create persistent intellectual infrastructure
- Conduct outreach to new communities
- Advance computing technology in support of science

52 Partnership Organizing Principle: "Thrusts"
- APPLICATIONS (Discovery Environments): Molecular Science; Neuroscience; Earth Systems Science; Engineering
- TECHNOLOGIES (Discovery Environments): Metasystems; Programming Tools & Environments; Data-Intensive Computing; Interaction Environments
- RESOURCES (Capability Computing)
- EOT: Computational Literacy

53 Technology Thrusts
- Metasystems
- Programming Tools and Environments
- Data-Intensive Computing
- Interaction Environments

54 Applications Thrusts
- Neuroscience
- Engineering
- Molecular Science
- Earth Systems Science

55 Projects Meld Applications and Technology
- Data-Intensive Computing + Neuroscience: brain databases
- Metasystems and Parallel Tools + Engineering

56 Outline: Definitions, History, SDSC/NPACI, Applications

57 Modeling Galaxy Dynamics and Evolution
- Project leaders: Lars Hernquist (Harvard) and John Dubinski (U Toronto); Stuart Johnson and Bob Leary (SDSC); SAC program
- First images from Blue Horizon
- Simulation: 24M particles = 10M stars + 2M dark-matter halo in each galaxy
- Working on a 120M-particle run
- Run on all 1,152 processors during acceptance
- Animation: collision between the Milky Way and Andromeda
- "Due to SAC efforts, our simulations run two to three times faster, we can ask more precise questions and get better answers." (Hernquist)

58 Bioinformatics Infrastructure for Large-Scale Analyses
- Next-generation tools for accessing, manipulating, and analyzing biological data
- Russ Altman, Stanford University; Reagan Moore, SDSC
- Analysis of the Protein Data Bank, GenBank, and other databases
- Accelerate key discoveries for health and medicine

59 Protein Folding in a Distributed Computing Environment
- Simulating the protein movement governing reactions within cells
- Andrew Grimshaw, U Virginia; Charles Brooks, The Scripps Research Institute; Bernard Pailthorpe, UCSD/SDSC
- Computationally intensive
- Distributed computing power from Legion

60 Telescience for Advanced Tomography Applications
- Integrates remote instrumentation, distributed computing, federated databases, image archives, and visualization tools
- Mark Ellisman, UCSD; Fran Berman, UCSD; Carl Kesselman, USC
- 3-D tomographic reconstruction of biological specimens

61 MICE: Transparent Supercomputing
- Molecular Interactive Collaborative Environment
- Gallery allows researchers and students to search for, visualize, and manipulate molecular structures
- Integrates key SDSC technological strengths: biological databases, transparent supercomputing, Web-based Virtual Reality Modeling Language

62 The Protein Data Bank
- The world's single scientific resource for depositing and searching protein structures
- Protein structure data growing exponentially: 10,500 structures in the PDB today, 20,000 by the year 2001
- Vital to the advancement of the biological sciences
- Working toward a digital continuum from primary data to final scientific publication
- Capture of primary data from high-energy synchrotrons (e.g., Stanford Linear Accelerator Center) requires 50 Mbps network bandwidth
- [Image: 1CD3, the PDB's 10,000th structure]

63 Biological-Scale Modeling
- Biodiversity Species Workshop: Web interface to modeling tools; provides geographic, climate, and other base data
- Species Analyst: compiles data from online museum collections
- Scientists can focus on the scientific questions
- NPACI partners: U Kansas, U New Mexico, and SDSC
- [Image: predicted distribution of the mountain trogon; data points (pins) are from 14 museum collections]

64 Digital Galaxy
- Collaboration with the Hayden Planetarium, American Museum of Natural History; support from NASA
- Linking SDSC's mass storage to the Hayden Planetarium requires 155 Mbps
- MPIRE Galaxy Renderer: scalable volume visualization; linked to a database of astronomical objects; produces translucent, filament-like objects
- [Image: an artificial nebula, modeled after a planetary nebula]

65 Looking Out for San Diego's Regional Ecology
- Unique partnership: 31 federal, state, regional, and local agencies; John Helly, et al., SDSC
- Combines technologies and multi-agency data: sensing, analysis, VRML; physical, chemical, and biological data
- Web-based tool for science and public policy

66 AMICO: The Art of Managing Art
- Art Museum Image Consortium (AMICO): 28 art museums working toward educational use of digital multimedia
- Launch of the AMICO Library includes more than 50,000 works of art
- AMICO, CDL, SDSC
- XML information mediation; SDSC SRB data management
- Links between images, scholarly research, and educational material

67 BNCT & The Treatment Planning Process
- Boron Neutron Capture Therapy: beam port, borated tumor, epithermal neutrons (10^13/s)
- [Diagram] CT image (512x512x250) feeds the MacNCTPlan treatment-planning software; MCNP particle-transport simulation (typically 21x21x25) yields beam characteristics and voxel energy deposition
- MIT Nuclear Engineering Department; Beth Israel Deaconess Medical Center, Boston

68 MCNP BNCT Simulation Results
- Typical MCNP BNCT simulation: 1 cm resolution (21x21x25), 1 million particles, 1 hour on a 200 MHz PC
- ASCI Blue Mountain MCNP simulation: 4 mm resolution (64x64x62), 100 million particles, 1/2 hour on 6,048 CPUs
- ASCI Blue Mountain MCNP simulation: 1 mm resolution (256x256x250), 100 million particles, 1-2 hours on 3,072 CPUs
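The three runs above can be compared with a crude throughput metric, particles per CPU-hour (a back-of-the-envelope sketch; it ignores that finer grids cost more work per particle, which is exactly what the comparison ends up showing):

```python
def particles_per_cpu_hour(particles, cpus, hours):
    # Crude throughput metric for comparing the reported MCNP runs.
    return particles / (cpus * hours)

pc       = particles_per_cpu_hour(1e6,   1,    1.0)  # 200 MHz PC, 1 cm grid
blue_4mm = particles_per_cpu_hour(100e6, 6048, 0.5)  # Blue Mountain, 4 mm grid
blue_1mm = particles_per_cpu_hour(100e6, 3072, 1.5)  # Blue Mountain, 1 mm grid (midpoint of 1-2 h)

print(f"PC: {pc:.0f}, 4 mm: {blue_4mm:.0f}, 1 mm: {blue_1mm:.0f} particles/CPU-hour")
```

Per-CPU throughput on Blue Mountain is far below the PC's because the resolution is 10 to 100 times finer in each dimension; the machine's value is turning a week-scale serial problem into a half-hour run at clinically useful resolution.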

