Trends and Perspectives for HPC infrastructures


1 Trends and Perspectives for HPC infrastructures
Carlo Cavazzoni, CINECA

2 Outline
HPC resources in Europe (PRACE)
Today's HPC architectures
Technology trends
CINECA roadmap (toward 50 PFlops)
EuroExa project

3 The PRACE Research Infrastructure
The PRACE RI provides access to distributed, persistent, pan-European, world-class HPC computing and data-management resources and services. Expertise in efficient use of the resources is available through participating centers throughout Europe. Available resources are announced for each Call for Proposals.
Tier structure: Tier 0 (European), Tier 1 (National), Tier 2 (Local)
Peer-reviewed open access: PRACE Projects (Tier-0), PRACE Preparatory Access (Tier-0), DECI Projects (Tier-1)

4 TIER-0 Systems, PRACE regular calls
CURIE (GENCI, FR): Bull cluster, Intel Xeon, Nvidia cards, Infiniband network
FERMI (CINECA, IT) & JUQUEEN (Jülich, DE): IBM BG/Q, Power processors, custom 5D torus network
MARENOSTRUM (BSC, ES): IBM iDataPlex, Intel Xeon nodes, Infiniband network
HERMIT (HLRS, DE): Cray XE6, AMD processors, custom 3D torus network, 1 PFlops
SuperMUC (LRZ, DE): IBM iDataPlex, Intel Xeon nodes, Infiniband network

5 TIER-1 Systems, DECI calls

DECI site                | Machine name | System type       | Chip                        | Peak perf. (TFlops) | GPU cards
Bulgaria (NCSA)          | EA "ECNIS"   | IBM BG/P          | PowerPC 450                 | 27                  |
Czech Republic (VSB-TUO) | Anselm       | Bull Bullx        | Intel Sandy Bridge-EP       | 66                  | 23 nVIDIA Tesla, 4 Intel Xeon Phi P5110
Finland (CSC)            | Sisu         | Cray XC30         | Intel Sandy Bridge          | 244.9               |
France (CINES)           | Jade         | SGI ICE EX8200    | Intel Quad-Core E5472/X5560 | 267.88              |
France (IDRIS)           | Babel        |                   |                             | 139                 |
Germany (Jülich)         | JuRoPA       | Intel cluster     | Intel Xeon X5570            | 207                 |
Germany (RZG)            | Genius       | IBM BG/P          |                             | 54                  |
Germany (RZG)            |              | iDataPlex         |                             | 200                 |
Ireland (ICHEC)          | Stokes       | SGI ICE 8200EX    | Intel Xeon E5650            |                     |
Italy (CINECA)           | PLX          |                   | Intel Westmere              | 293                 | 548 nVIDIA Tesla M2070/M2070Q
Norway (SIGMA)           | Abel         | MegWare cluster   |                             | 260                 |
Poland (WCSS)            | Supernova    | Cluster           | Intel Westmere-EP           | 51.58               |
Poland (PSNC)            | chimera      | SGI UV1000        | Intel Xeon E7-8837          | 21.8                |
Poland (PSNC)            | cane         | AMD+GPU cluster   | AMD Opteron 6234            | 224.3               | 334 NVIDIA Tesla M2050
Poland (ICM)             | boreasz      | IBM Power 775     | IBM Power7                  | 74.5                |
Poland (Cyfronet)        | Zeus-gpgpu   | Linux cluster     | Intel Xeon X5670/E5645      | 136.8               | 48 M2050 / 160 M2090
Spain (BSC)              | MinoTauro    | Bull CUDA cluster | Intel Xeon E5649            | 182                 | 256 nVIDIA Tesla M2090
Sweden (PDC)             | Lindgren     | Cray XE6          | AMD Opteron                 | 305                 |
Switzerland (CSCS)       | Monte Rosa   |                   |                             | 402                 |
The Netherlands (SARA)   | Huygens      | IBM pSeries 575   | Power 6                     | 65                  |
Turkey (UYBHM)           | Karadeniz    | HP cluster        | Intel Xeon 5550             | 2.5                 |
UK (EPCC)                | HECToR       |                   |                             | 829.03              |
UK (ICE-CSE)             | ICE Advance  | IBM BG/Q          | PowerPC A2                  | 1250                |

6 HPC Architectures
Two models:
Hybrid:
Server-class nodes
Special-purpose nodes
Accelerator devices: Nvidia, Intel, AMD, FPGA
Homogeneous:
Server-class nodes: standard processors
Special-purpose nodes: special-purpose processors

7 Networks
Standard/switched: Infiniband
Special purpose/topology: BGQ, Cray, Tofu (Fujitsu), TH Express-2 (Tianhe-2)

8 Programming Models
Fundamental paradigms: message passing, multi-threading
Consolidated standards: MPI & OpenMP
New task-based programming models
Special purpose for accelerators: CUDA, Intel offload directives, OpenACC, OpenCL, etc. (no consolidated standard)
Scripting: Python
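The two fundamental paradigms above can be illustrated side by side. The sketch below is an assumed illustration, not from the slides: real HPC codes use MPI and OpenMP in C/Fortran, but Python's standard library lets the contrast be shown compactly. Message passing exchanges data explicitly between processes that share nothing; multi-threading shares memory and must synchronize access.

```python
# Assumed illustration of the two paradigms from the slide, in plain Python:
# message passing (explicit send/receive between processes, MPI-style) versus
# multi-threading (shared memory protected by a lock, OpenMP-style).
import threading
from multiprocessing import Process, Pipe

def mp_worker(conn):
    n = conn.recv()          # receive work from the parent process
    conn.send(n * n)         # send the result back explicitly
    conn.close()

def message_passing_demo():
    parent, child = Pipe()
    p = Process(target=mp_worker, args=(child,))
    p.start()
    parent.send(7)           # no shared state: only messages cross the boundary
    result = parent.recv()
    p.join()
    return result

def threading_demo():
    total = [0]
    lock = threading.Lock()
    def worker(n):
        with lock:           # shared accumulator needs explicit synchronization
            total[0] += n * n
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

if __name__ == "__main__":
    print(message_passing_demo(), threading_demo())
```

The hybrid MPI+OpenMP model on the slide combines exactly these two: message passing between nodes, threads within a node.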

9 Roadmap to Exascale (architectural trends)

10 Dennard scaling law (downscaling)
Old VLSI generation -> new VLSI generation:
L' = L / 2
V' = V / 2
F' = F * 2
D' = 1 / L'^2 = 4 * D
P' = P
This does not hold anymore! Core frequency and performance no longer grow following Moore's law:
L' = L / 2
V' = ~V
F' = ~F * 2
D' = 1 / L'^2 = 4 * D
P' = 4 * P
Increase the number of cores to keep architectural evolution on Moore's law.
The power crisis! The programming crisis!
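The two regimes above can be checked with the standard dynamic-power model. As a minimal sketch (assuming the textbook relation P ~ N * C * V^2 * F, where N is device count, C per-device capacitance, V supply voltage, F frequency; the per-generation factors are the idealized ones from the slide):

```python
# Relative chip power after one technology generation (L -> L/2),
# using the dynamic-power model P ~ N * C * V^2 * F.
def power_ratio(n, c, v, f):
    """n, c, v, f are the scaling factors applied to device count,
    per-device capacitance, supply voltage, and frequency."""
    return n * c * v**2 * f

# Classical Dennard scaling: 4x devices, half capacitance, half voltage, double frequency
dennard = power_ratio(n=4, c=0.5, v=0.5, f=2)   # chip power stays constant
print(dennard)

# Post-Dennard: voltage no longer scales (V' ~ V), everything else as before
post = power_ratio(n=4, c=0.5, v=1.0, f=2)      # chip power quadruples: the power crisis
print(post)
```

The 4x power at constant chip area is why vendors froze frequency and spent the extra transistors on more cores instead.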

11 Moore's Law: an economic and market law
Stacy Smith, Intel's chief financial officer, later gave some more detail on the economic benefits of staying in the Moore's Law race. The cost per chip "is going down more than the capital intensity is going up," Smith said, suggesting Intel's profit margins should not suffer because of heavy capital spending. "This is the economic beauty of Moore's Law."
And Intel has a good handle on the next production shift, shrinking circuitry to 10 nanometers. Holt said the company has test chips running on that technology. "We are projecting similar kinds of improvements in cost out to 10 nanometers," he said.
So, despite the challenges, Holt could not be induced to say there's any looming end to Moore's Law, the invention race that has been a key driver of electronics innovation since first defined by Intel's co-founder in the mid-1960s. (From the WSJ.)
It is all about the number of chips per Si wafer!

12 But!
14 nm VLSI, 0.54 nm Si lattice: 300 atoms!
There are still 4~6 cycles (technology generations) left until we reach the 11 ~ 5.5 nm technologies, at which point we will hit the downscaling limit, in some year between (H. Iwai, IWJT2008).

13 What about applications?
In a massively parallel context, an upper limit for the scalability of parallel applications is determined by the fraction of the overall execution time spent in non-scalable (serial) operations (Amdahl's law).
Maximum speedup tends to 1 / (1 - P), where P = parallel fraction and 1 - P = serial fraction.
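The bound above is easy to evaluate numerically. A minimal sketch (an assumed illustration, not from the slides):

```python
# Amdahl's law: speedup on n cores for a code whose parallel fraction is p.
# As n grows, the speedup saturates at the slide's upper bound 1 / (1 - p).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even 99%-parallel code can never exceed 1 / (1 - 0.99) = 100x,
# no matter how many cores the machine has.
for n in (10, 100, 1000, 1_000_000):
    print(n, round(amdahl_speedup(0.99, n), 1))
```

This is why the serial fraction, not the core count, dominates application design at exascale.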

14 Architectural trends
Peak performance: Moore's law
FPU performance: Dennard's law
Number of FPUs: Moore + Dennard
App. parallelism: Amdahl's law

15 HPC Architectures
Two models: hybrid, but… homogeneous, but…
What 100 PFlops systems will we see? My guess:
IBM (hybrid): Power8 + Nvidia GPU
Cray (homogeneous/hybrid): with Intel only!
Intel (hybrid): Xeon + MIC
ARM (homogeneous): ARM chips only, but…
Nvidia/ARM (hybrid): ARM + Nvidia
Fujitsu (homogeneous): SPARC, high density, low power
China (homogeneous/hybrid): with Intel only
Room for AMD console chips

16 Chip Architecture
Strongly market driven: mobile, TV sets, screens, video/image processing
Intel: new architectures to compete with ARM; less Xeon, but PHI
ARM: main focus on low-power mobile chips (Qualcomm, Texas Instruments, Nvidia, ST, etc.); new HPC market, server market
NVIDIA: GPU alone will not last long; ARM+GPU, Power+GPU
Power: embedded market; Power+GPU the only chance for HPC
AMD: console market; still some chance for HPC

17 CINECA Roadmaps

18 Roadmap to 50 PFlops
Power consumption:
EURORA 50 KW, PLX 350 KW, BGQ 1000 KW, + ENI
EURORA or PLX upgrade 400 KW; BGQ 1000 KW; data repository 200 KW; - ENI
R&D:
Eurora
EuroExa STM/ARM board
EuroExa STM/ARM prototype
PCP prototype, 1 PF in a rack
EuroExa STM/ARM PF platform
ETP prototype towards exascale board
Deployment:
Eurora industrial prototype (TF)
Eurora or PLX upgrade: 1 PF peak, 350 TF scalar
Multi-petaflop system
Tier-0 50 PF
Tier-1 towards exascale
Timeline: 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020

19 Tier-1 CINECA procurement, Q2014: high-level system requirements
Electrical power consumption: 400 KW
Physical size of the system: 5 racks
Peak performance of the system (CPU+GPU): on the order of 1 PFlops
Peak performance of the system (CPU only): on the order of 300 TFlops

20 Tier-1 CINECA: high-level system requirements
CPU architecture: Intel Xeon Ivy Bridge
Cores per CPU: >3 GHz, or 2.4 GHz; the choice of frequency and core count depends on the socket TDP, the system density, and the cooling capacity
Number of servers: (peak perf = 600 * 2 sockets * 12 cores * 3 GHz * 8 Flop/clk = 345 TFlops); the number of servers may depend on cost or on the configuration geometry, in terms of CPU-only vs CPU+GPU nodes
GPU architecture: Nvidia K40
Number of GPUs: >500 (peak perf = 700 * 1.43 TFlops = 1 PFlops); the number of GPU cards may depend on cost or on the configuration geometry, in terms of CPU-only vs CPU+GPU nodes
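The peak-performance arithmetic in the requirements above can be reproduced directly (a sketch; the server, socket, core, and GPU counts are the slide's own figures):

```python
# CPU peak = servers * sockets/server * cores/socket * clock (GHz) * Flop/cycle.
# GHz * Flop/cycle per core gives GFlops, so divide by 1000 for TFlops.
def cpu_peak_tflops(servers, sockets, cores, ghz, flop_per_clk):
    return servers * sockets * cores * ghz * flop_per_clk / 1000.0

cpu = cpu_peak_tflops(servers=600, sockets=2, cores=12, ghz=3.0, flop_per_clk=8)
print(cpu)   # 345.6 TFlops, the ~345 TFlops quoted on the slide

# GPU peak: 700 K40 cards at 1.43 TFlops (double precision) each
gpu_tflops = 700 * 1.43
print(gpu_tflops)   # ~1000 TFlops, i.e. the ~1 PFlops target
```

Note that the GPU partition delivers roughly three times the CPU partition's peak; this ratio is what makes the hybrid configuration attractive within the 400 KW power budget.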

21 Tier-1 CINECA: high-level system requirements (cont.)
Identified vendors: IBM, Eurotech
DRAM memory: 1 GByte/core; the option of a subset of nodes with a larger amount of memory will be requested
Local non-volatile memory: >500 GByte SSD/HD, depending on cost and system configuration
Cooling: liquid cooling system with a free-cooling option
Scratch disk space: >300 TByte (provided by CINECA)

22 Thank you
