1 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
Michael L. Norman, Principal Investigator and Interim Director, SDSC
Allan Snavely, Co-Principal Investigator and Project Scientist

2 What is Gordon?
- A "data-intensive" supercomputer based on SSD flash memory and virtual shared memory
- Emphasizes memory (MEM) and I/O over FLOPS
- A system designed to accelerate access to the massive databases being generated in all fields of science, engineering, medicine, and social science
- The NSF's most recent Track 2 award to the San Diego Supercomputer Center (SDSC)
- Coming Summer 2011

3 Why Gordon?
- Growth of digital data is exponential: a "data tsunami"
- Driven by advances in digital detectors, networking, and storage technologies
- Making sense of it all is the new imperative:
  - data analysis workflows
  - data mining
  - visual analytics
  - multiple-database queries on demand
  - data-driven applications

4 The Memory Hierarchy
- Flash SSD: O(TB) capacity, ~1000-cycle access latency
- Potential 10x speedup for random I/O to large files and databases
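
To give a feel for where a "potential 10x" figure for random I/O comes from, here is a minimal effective-access-time sketch. The latency numbers are illustrative assumptions (roughly 100 ns DRAM, 100 microseconds flash, 5 ms spinning disk), not measurements of Gordon hardware; the point is only how much closer flash sits to DRAM than disk does.

```python
# Minimal sketch of an effective-access-time estimate for random 4 KB reads.
# The latency figures below are illustrative assumptions, not measured Gordon
# numbers; they only show the shape of the argument.
LATENCY_US = {"dram": 0.1, "flash_ssd": 100.0, "disk": 5000.0}  # rough per-access latencies (microseconds)

def effective_us(dram_hit_rate, backing_tier):
    """Average latency when some random reads hit DRAM and the rest go to a backing tier."""
    miss = 1.0 - dram_hit_rate
    return dram_hit_rate * LATENCY_US["dram"] + miss * LATENCY_US[backing_tier]

hit = 0.5  # assumed DRAM hit rate for a working set much larger than RAM
speedup = effective_us(hit, "disk") / effective_us(hit, "flash_ssd")
print(f"{speedup:.0f}x")  # ~50x with these assumptions; real workloads land lower, hence the slide's "potential 10x"
```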

5 Gordon Architecture: "Supernode"
- 32 Appro Extreme-X compute nodes
  - Dual-processor Intel Sandy Bridge, 240 GFLOPS, 64 GB RAM each
- 2 Appro Extreme-X I/O nodes
  - Intel SSD drives, 4 TB each, 560,000 IOPS
- ScaleMP vSMP virtual shared memory
  - 2 TB RAM aggregate
  - 8 TB SSD aggregate

6 Gordon Architecture: Full Machine
- 32 supernodes = 1024 compute nodes
- Dual-rail QDR InfiniBand network, 3D torus (4x4x4)
- 4 PB rotating-disk parallel file system, >100 GB/s
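
As a side note on the 4x4x4 torus named above, the short sketch below computes minimum hop counts in a generic 3D torus; it assumes nothing about Gordon's actual switch layout or routing, only the plain torus geometry on the slide, and shows that no two positions are more than 6 hops apart.

```python
# Minimal sketch: minimum hop count between two coordinates in a 3D torus.
# Assumes a plain 4x4x4 torus as named on the slide, not Gordon's actual routing.
def torus_hops(a, b, dims=(4, 4, 4)):
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

# Worst case: 2 hops in each of the three dimensions.
print(max(torus_hops((0, 0, 0), (i, j, k))
          for i in range(4) for j in range(4) for k in range(4)))  # -> 6
```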

7 Gordon Peak Capabilities
Speed                   245 TFLOPS
Memory (RAM)            64 TB
Memory (SSD)            256 TB
Memory (RAM + SSD)      320 TB
Ratio (memory/speed)    1.31 bytes/FLOP
I/O rate to SSDs        35 million IOPS
Network bandwidth       16 GB/s bidirectional
Network latency         1 microsecond
Disk storage            4 PB
Disk I/O bandwidth      >100 GB/s
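
The peak figures in this table follow directly from the per-node numbers on the two architecture slides; the sketch below simply re-derives them (the inputs come from those slides, only the arithmetic is added here).

```python
# Minimal cross-check of the peak-capability table from per-node figures
# given on the supernode and full-machine slides.
compute_nodes, io_nodes = 1024, 64               # 32 supernodes x (32 compute + 2 I/O) nodes
gflops_per_compute_node, ram_gb_per_compute_node = 240, 64
ssd_tb_per_io_node, iops_per_io_node = 4, 560_000

peak_tflops = compute_nodes * gflops_per_compute_node / 1000   # ~245 TFLOPS
ram_tb = compute_nodes * ram_gb_per_compute_node / 1024        # 64 TB
ssd_tb = io_nodes * ssd_tb_per_io_node                         # 256 TB
total_iops = io_nodes * iops_per_io_node                       # ~35.8 million IOPS
bytes_per_flop = (ram_tb + ssd_tb) / peak_tflops               # ~1.3 bytes/FLOP

print(peak_tflops, ram_tb, ssd_tb, total_iops, round(bytes_per_flop, 2))
```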

8 Gordon is designed specifically for data-intensive HPC applications
- Such applications involve "very large data-sets or very large input-output requirements"
- Two data-intensive application classes are important and growing:
  - Data mining: "the process of extracting hidden patterns from data... with the amount of data doubling every three years, data mining is becoming an increasingly important tool to transform this data into information." (Wikipedia)
  - Data-intensive predictive science: solution of scientific problems via simulations that generate large amounts of data

9 High Performance Computing (HPC) vs. High Performance Data (HPD)
Attribute                  HPC                                  HPD
Key HW metric              Peak FLOPS                           Peak IOPS
Architectural features     Many small-memory multicore nodes    Fewer large-memory SMP nodes
Typical application        Numerical simulation                 Database query; data mining
Concurrency                High concurrency                     Low concurrency or serial
Data structures            Easily partitioned (e.g., grid)      Not easily partitioned (e.g., graph)
Typical disk I/O pattern   Large-block sequential               Small-block random
Typical usage mode         Batch processing                     Interactive
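
The last two rows of the table are easy to demonstrate on any machine: the sketch below times a large-block sequential pass against small-block random reads over the same file. The file name, file size, and block sizes are arbitrary choices for illustration, not Gordon benchmark settings, and on a freshly written file the OS page cache will mask device latency, so treat it as an illustration of the two access patterns rather than a storage benchmark.

```python
# Minimal, self-contained sketch of the two disk-access patterns contrasted above.
import os, random, time

PATH, FILE_MB = "scratch.bin", 256                     # hypothetical scratch file
with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_MB * 1024 * 1024))         # create test data

def timed_reads(block_size, offsets):
    """Read block_size bytes at each offset and return elapsed seconds."""
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(block_size)
    return time.perf_counter() - start

size = FILE_MB * 1024 * 1024
seq = timed_reads(4 * 1024 * 1024, range(0, size, 4 * 1024 * 1024))             # HPC-style: large-block sequential
rnd = timed_reads(4 * 1024, random.sample(range(0, size - 4096, 4096), 1000))   # HPD-style: small-block random
print(f"sequential: {seq:.3f} s   random: {rnd:.3f} s")
os.remove(PATH)
```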

10 Data mining applications will benefit from Gordon
- De novo genome assembly from sequencer reads, and analysis of galaxies from cosmological simulations and observations
  - Will benefit from large shared memory
- Federations of databases, and interaction-network analysis for drug discovery, social science, biology, epidemiology, etc.
  - Will benefit from low-latency I/O from flash

11 Data-intensive predictive science will benefit from Gordon
- Solution of inverse problems in oceanography, atmospheric science, and seismology
  - Will benefit from a balanced system, especially large RAM per core and fast I/O
- Modestly scalable codes in quantum chemistry and structural engineering
  - Will benefit from large shared memory

12 Dash: towards a supercomputer for data-intensive computing

13 Project Timeline
- Phase 1: Dash development (9/09-7/11)
- Phase 2: Gordon build and acceptance (3/11-7/11)
- Phase 3: Gordon operations (7/11-6/14)

14 Comparison of the Dash and Gordon systems
[Chart residue: "Doubling capacity halves accessibility to any random data on a given media"; labels: IOPS per tpm-C, 1.2 TB per 100K tpm-C, 150K tpm-C, 1996 = 0.7-0.8, 2006 = 0.3-0.5, smaller memory subsystem]
System component                  Dash                        Gordon
Node (sockets, cores, DRAM)       2 sockets, 8 cores, 48 GB   2 sockets, TBD cores, 64 GB
Compute nodes (#)                 64                          1024
Processor type                    Nehalem                     Sandy Bridge
Clock speed (GHz)                 2.4                         TBD
Peak speed (TFLOPS)               4.9                         245
DRAM (TB)                         3                           64
I/O nodes (#)                     2                           64
I/O controllers per node          2 with 8 ports              1 with 16 ports
Flash (TB)                        2                           256
Total memory, DRAM + flash (TB)   5                           320
vSMP                              Yes                         32-node
Supernodes                        2                           32
Interconnect                      InfiniBand                  InfiniBand
Disk                              0.5 PB                      4.5 PB

15 Gordon project wins storage challenge at SC09 with Dash

16 We won the SC09 Storage Challenge with Dash!
With these numbers (IOR, 4 KB transfers):
- RAMFS: 4 million+ IOPS on up to 0.75 TB of DRAM (one supernode's worth)
- Flash: 88K+ IOPS on up to 1 TB of flash (one supernode's worth)
- Sped up Palomar Transients database searches by 10x to 100x
- Best IOPS per dollar
Since then we have boosted flash IOPS to 540K, hitting our 2011 performance targets (it is now 2009).

17 Dash Update: early vSMP test results

18 Dash Update: early vSMP test results

19 Next Steps
- Continue vSMP and flash SSD assessment and development on Dash
- Prototype Gordon application profiles using Dash:
  - new application domains
  - new usage modes and operational support mechanisms
  - new user support requirements
- Work with the TRAC to identify candidate applications
- Assemble the Gordon User Advisory Committee
- International Data-Intensive Conference, Fall 2010

