The PRACE project and the Application Development Programme (WP8-2IP) Claudio Gheller (ETH-CSCS)


1 The PRACE project and the Application Development Programme (WP8-2IP) Claudio Gheller (ETH-CSCS)

2 PRACE - Partnership for Advanced Computing in Europe PRACE has the aim of creating a European Research Infrastructure providing world-class systems and services and coordinating their use throughout Europe.

3 PRACE History – An Ongoing Success Story
[Timeline graphic, 2004–2013:]
– 2004: Creation of the Scientific Case (HPCEUR)
– 2006: HPC becomes part of the ESFRI Roadmap; creation of a vision involving 15 European countries (HET)
– 2007: Signature of the MoU (PRACE Initiative)
– 2008: PRACE Preparatory Phase Project
– 2010: Creation of the PRACE Research Infrastructure (PRACE RI); PRACE-1IP
– 2011: PRACE-2IP
– 2012: PRACE-3IP

4 PRACE architecture
European HPC facilities at the top of an HPC provisioning pyramid:
– Tier-0: European Centres for Petaflop/s
– Tier-1: National Centres
– Tier-2: Regional/University Centres
Creation of a European HPC ecosystem:
– HPC service providers on all tiers
– Scientific and industrial user communities
– The European HPC hardware and software industry
– Other e-Infrastructures
[Pyramid diagram: capability grows towards Tier-0, number of systems grows towards Tier-2.]

5 PRACE-2IP
22 partners (21 countries), funding 18 million €
Preparation/Coordination: FZJ/JSC/PRACE PMO
Duration: 1.9.2011 – 31.8.2013, extended to 31.8.2014 (only selected WPs)
Main objectives:
– Provision of access to HPC resources
– Refactoring and scaling of major user codes
– Tier-1 integration (DEISA → PRACE)
– Consolidation of the Research Infrastructure

6 PRACE-3IP
Funding 20 million €
Started: summer 2013
Objectives:
– Provision of access to HPC resources
– Planned: pre-commercial procurement exercise
– Planned: industry application focus

7 Access to Tier-0 supercomputers
[Peer-review workflow:]
Open Call for Proposals
→ Technical Peer Review (~2 months): technical experts in PRACE systems and software
→ Scientific Peer Review (~3 months): researchers with expertise in the scientific field of the proposal
→ Access Committee: prioritisation + resource allocation; the PRACE director decides on the proposal of the Access Committee
→ Project + final report (~1 year)

8 Distribution of resources

9 PRACE-2IP WP8: Enabling Scientific Codes to the Next Generation of HPC Systems

10 PRACE-2IP work packages
– WP1 Management
– WP2 Framework for Resource Interchange
– WP3 Dissemination
– WP4 Training
– WP5 Best Practices for HPC Systems Commissioning
– WP6 European HPC Infrastructure Operation and Evolution
– WP7 Scaling Applications for Tier-0 and Tier-1 Users
– WP8 Community Code Scaling (the WP led by ETH)
– WP9 Industrial Application Support
– WP10 Advancing the Operational Infrastructure
– WP11 Prototyping
– WP12 Novel Programming Techniques

11 WP8: involved centers

12 WP8 objectives
– Initiate a sustainable programme of application development for the coming generation of supercomputing architectures, with a selection of community codes targeted at problems of high scientific impact that require HPC.
– Refactor community codes in order to optimally map applications to future supercomputing architectures.
– Integrate and validate these new developments into the existing application communities.

13 WP8 principles
– Scientific communities, with their high-end research challenges, are the main drivers for software development.
– Synergy between HPC experts and application developers from the communities.
– Supercomputing centres have to recast their service activities in order to support, guide and enable scientific program developers and researchers in refactoring codes and re-engineering algorithms.
– A strong commitment from the scientific community has to be secured.

14 WP8 workflow
– Task 1: scientific domains and communities selection; scientific communities engagement; codes screening; codes performance analysis and modelling; codes and kernels selection.
– Task 2: communities build-up and consolidation; codes refactoring; prototypes experimentation.
– Task 3: code validation and reintegration.

15 Communities selection (task 1)
Criteria:
– the candidate community must have high impact on science and/or society;
– the candidate community must rely on and leverage high performance computing;
– WP8 can have a high impact on the candidate community;
– the candidate community must be willing to actively invest in software refactoring and algorithm re-engineering.
Selected domains:
o Astrophysics
o Climate
o Material Science
o Particle Physics
o Engineering

16 Codes and kernels selection methodology (task 1)
Performance modelling: an objective and quantitative way to select codes and estimate possible performance improvements.
– The goal of performance modelling is to gain insight into an application's performance on a given computer system.
– This is achieved first by measurement and analysis, and then by synthesis of the application and computing-system characteristics.
– The model also serves as a predictive tool, estimating the behaviour on a different computing architecture and identifying the most promising areas for performance improvement.
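
To make the "measure, then predict" idea concrete, here is a minimal sketch of such a model. Everything here is a hypothetical illustration (the Machine struct, predict_time, and all numbers are assumptions, not WP8's actual model): measured per-cell cost and communication counts are combined with the specifications of a target system to estimate a runtime.

#include <cstdio>

struct Machine {
    double flops;      // sustained flop/s per core (assumed)
    double bandwidth;  // network bandwidth, bytes/s (assumed)
    double latency;    // network latency, seconds (assumed)
};

// Predicted time per step = compute term + communication term.
double predict_time(double flops_per_cell, long long cells, int cores,
                    double halo_bytes, int messages, const Machine& m) {
    double t_comp = flops_per_cell * cells / (cores * m.flops);
    double t_comm = messages * (m.latency + halo_bytes / m.bandwidth);
    return t_comp + t_comm;
}

int main() {
    Machine target{10e9, 5e9, 2e-6};        // hypothetical target system
    long long cells = 512LL * 512 * 512;    // a 512^3 base grid, as in the test later
    double t = predict_time(2.0e3, cells, 1024, 1 << 20, 50, target);
    std::printf("predicted time per step: %.3f s\n", t);
    return 0;
}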

17 Selected codes and institutions in charge (task 1)
RAMSES – ETH
PFARM – STFC
EAF-PAMR – UC-LCA
OASIS – CEA
I/O – ICHEC
ICON – ETH
NEMO – STFC
Fluidity/ICOM – STFC
ABINIT – CEA
QuantumESPRESSO – CINECA
YAMBO – UC-LCA
SIESTA – BSC
OCTOPUS – UC-LCA
EXCITING/ELK – ETH
PLQCD – CASTORC
ELMER – VSB-TUO
CODE_SATURN – STFC
ALYA – BSC
ZFS – HLRS

18 Codes refactoring (task 2)
– Still running (in its last few weeks).
– Specific code kernels are being re-designed and re-implemented according to the workplans defined in task 1.
– Each group works independently, with checkpoints at face-to-face workshops and all-hands meetings.
– A dedicated wiki website was set up to report progress, to collect and exchange information and documents, and to manage and release the implemented code: http://prace2ip-wp8.hpcforge.org

19 Codes validation and re-introduction (task 3)
– Collaborative work (on a daily basis) involving code developers and HPC experts
– Dedicated workshops
– Face-to-face meetings
– Participation in and contribution to conferences
As a result, no special re-integration procedure was needed.

20 Case study: RAMSES
The RAMSES code was developed to study the evolution of the large-scale structure of the universe and the process of galaxy formation.
– Adaptive mesh refinement (AMR), multi-species code: baryons (hydrodynamics) plus dark matter (N-body).
– Gravity couples the two components; it is solved with a multigrid approach.
– Other components are supported (e.g. MHD, radiative transfer) but are not the subject of our analysis.

21 Performance analysis example
Parallel profiling, large test (512³ base grid, 9 refinement levels, 250 GB): strong scaling.
For this test, communication becomes the most relevant part, and it is dominated by synchronisations, due to the difficulty of load balancing the AMR-multigrid algorithms. Strong improvements can be obtained by tuning the load balance among computational elements (nodes?).

22 Performance analysis: conclusions
The performance analysis identified the critical kernels of the code:
– Hydro: all the functions needed to solve the hydrodynamic problem. These include the functions that collect, from grids at different resolutions, the data necessary to update each single cell; the functions that calculate fluxes to solve the conservation equations; Riemann solvers; and finite-volume solvers.
– Gravity: the functions needed to calculate the gravitational potential at different resolutions using a multigrid relaxation approach.
– MPI: all the communication-related MPI calls (data transfer, synchronisation, management).

23 HPC architectures model

24 Performance improvements
Two main objectives:
1. Hybrid OpenMP+MPI parallelisation, to exploit systems of distributed nodes, each comprising cores with shared memory.
2. Exploitation of accelerators, in particular GPUs, adopting different paradigms (CUDA, OpenCL, directives).
From the analysis of the performance and of the characteristics of the kernels under investigation:
– The Hydro kernel is suitable for both approaches. Specific care must be paid to memory-access issues.
– The Gravity kernel can benefit from the hybrid implementation. Due to the multigrid structure, however, an efficient GPU version can be particularly challenging, so it will be considered only if time and resources permit.
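
A minimal sketch of the hybrid pattern of objective 1 (all names and the update loop are hypothetical placeholders, not RAMSES code): one MPI rank per node, OpenMP threads across that node's cores, with only the halo exchange left to MPI.

#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    int provided;
    // FUNNELED: only the master thread issues MPI calls
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    if (rank == 0) std::printf("running on %d nodes\n", nranks);

    int ncells = 1 << 20;                  // local cells owned by this rank
    std::vector<double> u(ncells, 1.0);    // conserved variables, flattened

    for (int step = 0; step < 100; ++step) {
        // ... halo exchange with neighbouring ranks via MPI here ...

        #pragma omp parallel for schedule(static)
        for (int i = 0; i < ncells; ++i)
            u[i] += 0.1 * u[i];            // placeholder for the real flux update
    }

    MPI_Finalize();
    return 0;
}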

25 Performance modeling
Hybrid version (trivial model), scaling the pure-MPI time by the measured parallel efficiencies ε:
T_HYBRID = T_MPI · ε_MPI,NTOT / (ε_OMP,Ncores · ε_MPI,Nnodes)
GPU version:
T_TOT = T_CPU + T_CPU-GPU + T_GPU-GPU + T_GPU
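
A hedged illustration of how the two models evaluate. Every number below is an assumption chosen for the example (the transfer and kernel figures merely echo the magnitudes in the results table on slide 30); the efficiencies ε would in practice come from the measured scaling runs.

#include <cstdio>

int main() {
    // Hybrid model: scale the pure-MPI time by measured parallel efficiencies
    double T_mpi        = 100.0;  // pure-MPI runtime on NTOT cores (assumed)
    double eps_mpi_ntot = 0.70;   // MPI efficiency on all NTOT cores (assumed)
    double eps_omp      = 0.90;   // OpenMP efficiency within a node (assumed)
    double eps_mpi_node = 0.95;   // MPI efficiency across nodes only (assumed)
    double T_hybrid = T_mpi * eps_mpi_ntot / (eps_omp * eps_mpi_node);
    std::printf("T_hybrid = %.2f s\n", T_hybrid);   // prints 81.87

    // GPU model: host work + host-device transfer + device-device + kernels
    double T_tot = 10.0 /*CPU*/ + 9.2 /*CPU-GPU*/ + 1.0 /*GPU-GPU*/ + 23.2 /*GPU*/;
    std::printf("T_tot    = %.2f s\n", T_tot);      // prints 43.40
    return 0;
}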

26 Performance model example

27 Results: Hybrid code (OpenMP+MPI)

28 GPU implementation – approach 1

29 (Approach 1, continued)
Step 2: solve the Hydro equations for cell i,j,k, producing the new Hydro variables.
Step 3: compose the results array.
Step 4: copy the results back to the CPU.
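
A sketch of what approach 1 amounts to in CUDA (hydro_update, step_on_gpu and the update formula are hypothetical, not RAMSES code): blocking copies bracket a single kernel over all cells, so data transfer can never overlap computation — the pitfall analysed on the next slide.

#include <cuda_runtime.h>

// Placeholder flux update over a flat array of cell values.
__global__ void hydro_update(const double* in, double* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] + 0.1 * in[i];
}

// One time step: copy everything in, run one kernel, copy everything out.
void step_on_gpu(const double* h_in, double* h_out, int n) {
    double *d_in, *d_out;
    size_t bytes = n * sizeof(double);
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);

    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);   // blocking
    hydro_update<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost); // blocking

    cudaFree(d_in);
    cudaFree(d_out);
}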

30 Results
Sedov blast wave test (hydro only, unigrid); times in seconds:
ACC | NVECTOR | Ttot | Tacc | Ttransf | Eff. speed-up
OFF | - (1 PE) | 1094.54 | 0 | 0 | -
ON | 512 | 55.83 | 38.22 | 9.2 | 2.012820513
ON | 1024 | 45.66 | 29.27 | 9.2 | 2.669969252
ON | 2048 | 42.08 | 25.36 | 9.2 | 3.068611987
ON | 4096 | 41.32 | 23.2 | 9.2 | 3.293965517
ON | 8192 | 41.19 | 23.15 | 9.2 | 3.304535637
20 GB transferred in/out (constant overhead).

31 Performance pitfalls
– Amount of transferred data: overhead increases linearly with the data size.
– Data structure, irregular data distribution: prevents any asynchronous operation (no overlap of computation and data transfer); ineffective memory access; prevents coalesced memory access.
– Low flops-per-byte ratio: this is intrinsic to the algorithm.
– Asynchronous operations not permitted: see above.
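
Two toy kernels illustrating the coalescing pitfall (hypothetical names, not RAMSES code): the indirect gather that irregular AMR data forces makes neighbouring threads in a warp touch scattered addresses, costing many memory transactions per warp, while a contiguous layout is served by one or a few.

// Uncoalesced: idx[] (e.g. AMR cells gathered one by one) jumps around memory.
__global__ void gather_scattered(const double* u, const int* idx,
                                 double* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = u[idx[i]];
}

// Coalesced: a warp reads consecutive addresses in one memory transaction.
__global__ void gather_contiguous(const double* u, double* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = u[i];
}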

32 GPU implementation – approach 2
Step 1: compose data chunks on the CPU from the Hydro variables, the gravitational forces and the other quantities in CPU memory.
Data chunks are the basic building blocks of the RAMSES AMR hierarchy: octs and their refinements.

33 (Approach 2, continued)
Data is moved to and from the GPU in chunks; data transfer and computation can be overlapped.
Step 2: copy multiple data chunks to the GPU.
Step 3: solve the Hydro equations for chunks N, M, …, producing the new Hydro variables.
Step 4: compose the results array and copy it back to the CPU.
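
A sketch of the chunked scheme using CUDA streams (names and sizes hypothetical; the real chunking is RAMSES-specific): cycling chunks over a few streams lets the copy of one chunk overlap the kernel of another, provided the host buffer is pinned.

#include <cuda_runtime.h>

__global__ void hydro_update(const double* in, double* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] + 0.1 * in[i];   // placeholder update
}

// h_data must be pinned (cudaMallocHost) for the async copies to overlap.
void step_chunked(double* h_data, int nchunks, int chunk_cells) {
    const int NS = 4;                          // streams, cycled round-robin
    cudaStream_t s[NS];
    for (int i = 0; i < NS; ++i) cudaStreamCreate(&s[i]);

    size_t bytes = chunk_cells * sizeof(double);
    double *d_in, *d_out;                      // one device buffer slot per stream
    cudaMalloc(&d_in,  NS * bytes);
    cudaMalloc(&d_out, NS * bytes);

    for (int k = 0; k < nchunks; ++k) {
        int j = k % NS;
        double* h_chunk = h_data + (size_t)k * chunk_cells;
        // copy-in, kernel and copy-out of chunk k queue on stream j, so the
        // copy of chunk k+1 (on stream j+1) overlaps the kernel on chunk k
        cudaMemcpyAsync(d_in + j * chunk_cells, h_chunk, bytes,
                        cudaMemcpyHostToDevice, s[j]);
        hydro_update<<<(chunk_cells + 255) / 256, 256, 0, s[j]>>>(
            d_in + j * chunk_cells, d_out + j * chunk_cells, chunk_cells);
        cudaMemcpyAsync(h_chunk, d_out + j * chunk_cells, bytes,
                        cudaMemcpyDeviceToHost, s[j]);
    }
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    for (int i = 0; i < NS; ++i) cudaStreamDestroy(s[i]);
}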

34 Advantages over the previous implementation
– Data is regularly distributed within each chunk, so memory access is efficient.
– Improved flops-per-byte ratio.
– Effective usage of the GPU computing architecture.
– Data re-organisation is performed on the CPU, and its overhead is hidden by asynchronous processing.
– Data-transfer overhead is almost completely hidden.
– AMR is naturally supported.
Drawback: a much more complex implementation.

35 Conclusions
– PRACE is providing European scientists with top-level HPC services.
– PRACE-2IP WP8 successfully introduced a methodology for code development relying on a close synergy between scientists, community-code developers and HPC experts.
– Many community codes have been re-designed and re-implemented to exploit novel HPC architectures (see http://prace2ip-wp8.hpcforge.org/ for details).
– Most of the WP8 results are already available to the community.
– WP8 is going to be extended by one more year (there is no similar activity in PRACE-3IP).

