State-of-the-art Storage Solutions...and more than that. Fabrizio Magugliani EMEA HPC Business Development and Sales

1 State-of-the-art Storage Solutions...and more than that. Fabrizio Magugliani, EMEA HPC Business Development and Sales. European AFS Workshop 2009, September 28th–30th, 2009, Department of Computer Science and Automation, University Roma Tre

2 What does E4 Computer Engineering stand for? E4 = Engineering 4 (for) Computing. E4 builds the solutions that meet its users' requirements.

3 Products and Services
- Workstation (fluid dynamics, video editing, …)
- Server (firewall, computing node, scientific apps, …)
- Storage (from small DBs up to big-data requirements)
- SAN – Storage Area Network
- HPC cluster, GPU cluster, interconnect
Wide – Reliable – Advanced. System configuration and optimization.

4 Technology Partners

5 Customer References

6 Choosing the right computing node
- AMD (Non-Uniform Memory Access architecture): form factor 1U–7U; sockets: 1, 2, 4, or 8; cores: 4 or 6; memory size; accelerators (GPUs)
- Intel (Uniform Memory Access architecture): form factor 1U–7U; sockets: 1, 2, or 4; cores: 4 or 6; memory size; accelerators (GPUs)
- Form factors: workstation (graphics), rack-mount server, blade

7 Choosing the right accelerator

8 Choosing and connecting the right accelerator

9 Choosing the right accelerator: Tesla S1070 architecture
- Four Tesla GPUs, each with 4 GB of GDDR3 DRAM
- Two PCIe x16 Gen2 switches; each switch multiplexes the PCIe bus between two GPUs
- Each 2-GPU sub-system can be connected to a different host via PCI Express cables to the host system(s)
- Integrated power supply, thermal management, and system monitoring

10 Choosing the right accelerator: performance. GFLOPS on 16 GPUs, ~99% scaling (© NVIDIA Corporation)
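The ~99% scaling figure quoted above is a statement about parallel efficiency: speedup divided by device count. A minimal sketch of that calculation (the runtimes below are illustrative assumptions, not figures from the slide):

```python
def parallel_efficiency(t1, tn, n):
    """Parallel efficiency on n devices: speedup (t1/tn) divided by n."""
    speedup = t1 / tn
    return speedup / n

# Illustrative: a job taking 160 s on 1 GPU and 10.1 s on 16 GPUs
eff = parallel_efficiency(160.0, 10.1, 16)
print(f"efficiency on 16 GPUs: {eff:.1%}")
```

An efficiency near 1.0 (100%) means the workload scales almost linearly across the GPUs.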

11 Choosing the right interconnection technologies
- Gigabit Ethernet: entry level on every solution; ideal for codes with low inter-process communication requirements
- InfiniBand DDR: 20 Gb/s, integrable on the motherboard (first InfiniBand cluster in 2005, at CASPUR)
- 10 Gb/s Ethernet
- Quadrics, Myrinet
Comparative latency, bandwidth, and bisectional bandwidth figures for 1 GbE, 10 GbE, 10 GbE RDMA (Chelsio), IB DDR (InfiniHost), and IB QDR (ConnectX): latency ranges from ~50 µs on 1 GbE down to ~1.2 µs on IB QDR.

12 Interconnect
- Gigabit Ethernet: ideal for applications requiring moderate bandwidth among processes
- InfiniBand DDR: 20 Gb/s, motherboard-based; InfiniPath on an HTX slot, tested with latencies below 2 microseconds
- Myrinet, Quadrics
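The trade-off between these interconnects can be made concrete with the standard point-to-point cost model, time = latency + size/bandwidth. A sketch with illustrative figures (the latencies echo the slides; the bandwidth values are assumptions for the example):

```python
def transfer_time_us(msg_bytes, latency_us, bandwidth_MBps):
    """Message time in microseconds: startup latency plus serialization.
    bytes / (MB/s) = bytes / (bytes/us) = microseconds."""
    return latency_us + msg_bytes / bandwidth_MBps

# Assumed figures: GbE ~50 us / ~110 MB/s vs. InfiniBand DDR ~2 us / ~1400 MB/s
for size in (1_024, 1_048_576):
    gbe = transfer_time_us(size, 50.0, 110.0)
    ib = transfer_time_us(size, 2.0, 1400.0)
    print(f"{size:>9} B: GbE {gbe:9.1f} us, IB DDR {ib:9.1f} us")
```

For small messages latency dominates (where InfiniBand's ~25x advantage matters), while for large messages bandwidth dominates; this is why GbE remains adequate for codes with low inter-process communication.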

13 Choosing the right storage. Storage type / performance:
- Disk server (ETH interface): MB/s class
- SAN – FC: up to 1 GB/s
- HPC storage (ETH interface): MB/s per chassis
- HPC storage (FC, IB interface): 6 GB/s; storage space up to PBs (DataDirect)

14 Storage
- Disk subassembly — RAID controller, PCI-Ex — 200 MB/s
- Disk server SATA/SAS — ETH — 300–800 MB/s
- Storage SAS/SATA — FC, ETH — up to 1 GB/s
- HPC storage, ETH i/f (ideal for HPC applications) — ETH — 500 MB/s per chassis
- HPC storage, FC/IB i/f (ideal for HPC applications) — InfiniBand, FC — up to 3 GB/s
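A simple way to compare these tiers is the time to stream a fixed dataset at each sustained rate. A minimal sketch (rates taken from the slide's figures and treated as exact sustained throughput, which is an idealization):

```python
def seconds_to_stream(dataset_gb, rate_mb_per_s):
    """Time to read or write dataset_gb gigabytes at a sustained MB/s rate."""
    return dataset_gb * 1024 / rate_mb_per_s

# Streaming 1 TB (1024 GB) through each tier
for name, rate in [("disk subassembly", 200), ("disk server", 800),
                   ("SAN FC", 1024), ("HPC storage FC/IB", 3072)]:
    print(f"{name:>18}: {seconds_to_stream(1024, rate) / 60:6.1f} min")
```

The spread between tiers (roughly 15x from 200 MB/s to 3 GB/s) is what drives the choice of interface for data-intensive HPC workloads.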

15 Storage Server: a high-flexibility, low-power-consumption solution engineered by E4 for high-bandwidth requirements. COTS-based (2 Intel Nehalem CPUs); RAM configurable to the user's requirements (up to 144 GB DDR3); multi-lane SAS/SATA controller; 48 TB in 4U; 1 GbE (n links via trunking), 10 GbE, InfiniBand DDR/QDR. 374 units installed at CERN (Geneva), 70 more at several other customers.

16 HPC Storage Systems
DataDirect Networks:
- Interface: FC / IB; performance: up to 6 GB/s; 560 TB per storage system
- Ideal areas: real-time data acquisition, simulation, biomedicine and genomics, oil & gas, rich media, finance
Panasas Cluster Storage:
- Clustered storage system based on the Panasas file system: parallel, asynchronous, object-based, with snapshots
- Interface: 4x 1 GbE, 1x 10 GbE, IB (router); performance per shelf: 500–600 MB/s, up to hundreds of GB/s aggregate (sequential)
- 20 TB per shelf, 200 TB per rack, up to PBs; SSD (optimal for random I/O)

17 HPC Storage Systems – File systems: NFS, Lustre, GPFS, Panasas, AFS

18 Storage Area Network. E4 is a QLogic Signature Partner. Latest technology, based on high-performance Fibre Channel I/F, 4+4 Gb multipath; HA failover for mission-critical applications (finance, biomedicine, …); Oracle RAC.

19 Systems validation – rigid quality procedure. Reliability is a basic requirement, guaranteed by E4's production cycle:
- Selection of quality components
- Production process taken care of in every detail
- Burn-in to prevent infant mortality of components: at least 72 h of accelerated stress testing in a hot room (35 °C), 24 h of individual testing of each sub-system, 48 h of simultaneous testing of all sub-systems
- OS installation to prevent HW/SW incompatibility

20 Case Histories

21 Case History – Oracle RAC

22 Case History – Intel / EnginSoft. May 2007, Intel Infinicluster: 96 computing nodes, Intel quad-core 2.66 GHz, 4 TFLOPS, 1.5 TB RAM; interconnection: InfiniBand 4x DDR, 20 Gb/s; 30 TB FC storage. Application field: Computer-Aided Engineering.
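The 4 TFLOPS figure is consistent with the theoretical peak of 96 quad-core 2.66 GHz processors, assuming 4 double-precision FLOPs per core per cycle (the SSE issue rate of that CPU generation; an assumption, since the slide does not state it):

```python
def peak_gflops(cpus, cores_per_cpu, clock_ghz, flops_per_cycle):
    """Theoretical peak: CPUs x cores x clock (GHz) x FLOPs issued per cycle."""
    return cpus * cores_per_cpu * clock_ghz * flops_per_cycle

peak = peak_gflops(96, 4, 2.66, 4)
print(f"peak: {peak:.0f} GFLOPS (~{peak / 1000:.1f} TFLOPS)")
```

This yields roughly 4.1 TFLOPS, matching the slide's rounded 4 TFLOPS claim.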

23 Case History – CERN computing servers. 1U servers with high computing capacity. Application field: educational, academic research. Customer: CERN (Geneva) and major national computing and research centres. Dual Xeon® 2.8 GHz nodes, 4.6 TFLOPS; Xeon® Woodcrest 3 GHz nodes, 6 TFLOPS; 2 TB RAM. Systems installed up to July 2008: over 3000 units.

24 Case History – AMD / CASPUR. June 2005, AMD Infinicluster: 24 computing nodes, dual-core Opteron 2.4 GHz, 460 GFLOPS, 192 GB RAM, InfiniBand interconnection; expanded to 64 nodes: 1.2 TFLOPS, 512 GB RAM. 2004, Cluster SLACS: 24 Opteron computing nodes, 200 GFLOPS, 128 GB RAM; managed by CASPUR on behalf of SLACS (Sardinian Laboratory for Computational Materials Science) and INFM (Istituto Nazionale di Fisica della Materia).

25 Case History – CRS4. 96-core cluster, February: dual-core Opteron computing nodes, 384 GFLOPS, 192 GB RAM in total. Application fields: environmental sciences, renewable energy, fuel cells, bioinformatics.

26 Case History – HPC cluster with Myrinet. 2005, HPC cluster with Myrinet interconnection: 16 computing nodes, dual Intel® Xeon® 3.2 GHz; high-speed Myrinet interconnection; SCSI-to-SATA storage, 5 TB; KVM monitor; 2 Ethernet switches, 24 ports, layer 3. Application fields: educational, research. Customer: ICAR CNR, Palermo.

27 Case History – CNR/ICAR hybrid cluster (CPU + GPU). 12 compute nodes, 96 cores: 24 Intel Nehalem 5520 CPUs, peak 920 GFLOPS, 288 GB RAM. 6 nVIDIA Tesla S1070 GPU servers: 24 Tesla GPUs, 5760 single-precision cores, 720 double-precision cores, peak 24 TFLOPS.
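The 24 TFLOPS single-precision peak can be reproduced from the core count; the 1.44 GHz shader clock and 3 FLOPs per core per cycle (dual-issue MAD + MUL on that Tesla generation) are assumptions not stated on the slide:

```python
def gpu_peak_tflops(sp_cores, shader_clock_ghz, flops_per_cycle):
    """Aggregate single-precision peak across all GPU cores, in TFLOPS."""
    return sp_cores * shader_clock_ghz * flops_per_cycle / 1000

# 5760 single-precision cores across 24 Tesla GPUs, assumed 1.44 GHz, 3 FLOPs/cycle
print(f"{gpu_peak_tflops(5760, 1.44, 3):.1f} TFLOPS")
```

This gives about 24.9 TFLOPS, consistent with the slide's rounded 24 TFLOPS.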

28 Case History – CNR/ICAR hybrid cluster (CPU + GPU): 1 front-end node, 48-port Gb Ethernet switch, 24-port InfiniBand 20 Gb/s switch.

29 Hybrid CPU/GPU cluster – ICAR CNR, Cosenza – ALEPH

30 Case History – CNR/ICAR

31 Case History – EPFL

32 E4: The right partner for HPC

33 Questions?

34 Feel free to contact me: Fabrizio Magugliani

35 Thank you! E4 Computer Engineering SpA, Via Martiri della Libertà, Scandiano (RE), Italy. Switchboard:

36 E4 Computer Engineering: The perfect partner for HPC
