
1 April 10, 2008, Garching Claudio Gheller CINECA (c.gheller@cineca.it) The DEISA HPC Grid for Astrophysical Applications

2 Disclaimer
My background: computer science applied to astrophysics.
My involvement in DEISA: support for scientific extreme computing projects (DECI).
I am not a systems expert or a networking expert.

3 Conclusions
DEISA is not Grid computing: it is (super) supercomputing.

4 The DEISA project: overview
What it is: DEISA (Distributed European Infrastructure for Supercomputing Applications) is a consortium of leading national EU supercomputing centres.
Goals: deploy and operate a persistent, production-quality, distributed supercomputing environment with continental scope.
When: the project is funded by the European Commission, May 2004 - April 2008. It has been re-funded (DEISA2): May 2008 - April 2010.

5 The DEISA project: drivers
- Support High Performance Computing.
- Integrate Europe's most powerful supercomputing systems.
- Enable scientific discovery across a broad spectrum of science and technology.
- Exploit the resources as well as possible, both at site level and at European level.
- Promote openness and the use of standards.

6 The DEISA project: what it is NOT
- DEISA is not a middleware development project.
- DEISA, actually, is not a Grid: it does not support Grid computing. Rather, it supports cooperative computing.

7 The DEISA project: core partners
- BSC, Barcelona Supercomputing Centre, Spain
- CINECA, Consorzio Interuniversitario, Italy
- CSC, Finnish Information Technology Centre for Science, Finland
- EPCC/HPCx, University of Edinburgh and CCLRC, UK
- ECMWF, European Centre for Medium-Range Weather Forecasts, UK
- FZJ, Research Centre Juelich, Germany
- HLRS, High Performance Computing Centre Stuttgart, Germany
- LRZ, Leibniz Rechenzentrum Munich, Germany
- RZG, Rechenzentrum Garching of the Max Planck Society, Germany
- IDRIS, Institut du Développement et des Ressources en Informatique Scientifique, CNRS, France
- SARA, Dutch National High Performance Computing, Netherlands

8 The DEISA project: project organization
Three activity areas:
- Networking: management, coordination and dissemination.
- Service Activities: running the infrastructure.
- Joint Research Activities: porting and running scientific applications on the DEISA infrastructure.

9 DEISA activities, some (maybe too many…) details (1)
Service Activities:
- Network Operation and Support (FZJ leader): deployment and operation of a gigabit-per-second network infrastructure for a European distributed supercomputing platform.
- Data Management with Global File Systems (RZG leader): deployment and operation of global distributed file systems, as basic building blocks of the "inner" super-cluster, and as a way of implementing global data management in a heterogeneous Grid.
- Resource Management (CINECA leader): deployment and operation of global scheduling services for the European super-cluster, as well as for its heterogeneous Grid extension.
- Applications and User Support (IDRIS leader): enabling the adoption by the scientific community of the distributed supercomputing infrastructure as an efficient instrument for the production of leading computational science.
- Security (SARA leader): providing administration, authorization and authentication for a heterogeneous cluster of HPC systems, with special emphasis on single sign-on.

10 DEISA activities, some (maybe too many…) details (2)
The DEISA Extreme Computing Initiative (DECI): see http://www.deisa.org/applications

11 JRA2: Cosmological Applications
Goals:
- give the Virgo Consortium the most advanced features of Grid computing by porting their production applications, GADGET and FLASH;
- make effective use of the DEISA infrastructure;
- lay the foundations of a Theoretical Virtual Observatory.
Led by EPCC, which works in close partnership with the Virgo Consortium:
- JRA2 is managed jointly by Gavin Pringle (EPCC/DEISA) and Carlos Frenk (co-PI of both Virgo and VirtU);
- work progressed after gathering clear user requirements from the Virgo Consortium;
- requirements and results are published as public DEISA deliverables.

12 Current DEISA status
- A variety of systems connected via GÉANT/GÉANT2 (Premium IP).
- Centres contribute 5% to 10% of their CPU cycles to DEISA, running projects selected from the DEISA Extreme Computing Initiative (DECI) calls.
- Premium IP is a service on GÉANT whose traffic takes priority over all other services.

13 DEISA HPC systems
- IDRIS: IBM P4
- ECMWF: IBM P4
- FZJ: IBM P4
- RZG: IBM P4
- HLRS: NEC SX8
- HPCX: IBM P5
- SARA: SGI Altix
- LRZ: SGI Altix
- BSC: IBM PPC
- CSC: IBM P4
- CINECA: IBM P5

14 DEISA technical hints: software stack
UNICORE is the grid "glue":
- not built on Globus;
- EPCC is developing a UNICORE command-line interface.
Other components:
- IBM's General Parallel File System (GPFS): multicluster GPFS can span different systems over a WAN; recent developments cover Linux as well as AIX.
- IBM's Load Leveler for job scheduling: Multicluster Load Leveler can re-route batch jobs to different machines, and is also available on Linux.

15 DEISA model
- Large parallel jobs run on a single supercomputer, so network latency between machines is not a significant issue.
- Jobs are submitted, ideally via UNICORE, in practice via Load Leveler, and re-routed where appropriate to remote resources.
- Single sign-on access via GSI-SSH.
- GPFS is absolutely crucial to this model: jobs have access to their data no matter where they run, and no source code changes are required, just standard fread/fwrite (or READ/WRITE) calls to Unix files.
- There is also a Common Production Environment: it defines a common set of environment variables, each defined locally to map to the appropriate resources. E.g. $DEISA_WORK will point to the local workspace (see the sketch below).
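A minimal sketch, in C, of what that promise means for application code. The file name and payload are invented for illustration; $DEISA_WORK is the DCPE variable named above, and the I/O is ordinary stdio, with no DEISA-specific API involved.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* $DEISA_WORK is set by the DEISA Common Production Environment */
    const char *work = getenv("DEISA_WORK");
    if (work == NULL) {
        fprintf(stderr, "DEISA_WORK is not set\n");
        return 1;
    }

    char path[4096];
    snprintf(path, sizeof path, "%s/result.dat", work); /* hypothetical file name */

    double data[1024] = { 0.0 }; /* placeholder payload */
    FILE *fp = fopen(path, "wb");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    /* a plain fwrite: the shared GPFS makes the file visible at every site */
    fwrite(data, sizeof(double), 1024, fp);
    fclose(fp);
    return 0;
}
```

Because $DEISA_WORK resolves to a sensible local path on each machine, the same unmodified source behaves correctly wherever the scheduler places the job.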

16 Running ideally on DEISA
Fill all the gaps:
- restart/continue jobs on any machine from file checkpoints, with no need to recompile the application program and no need to manually stage data (see the sketch below);
- multi-step jobs running on multiple machines;
- easy access to data for post-processing after a run.
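A sketch of the file-checkpoint pattern this relies on; the state layout and file name are invented for illustration. One caveat worth hedging: a raw binary dump such as this only restarts cleanly across machines of the same architecture, which fits the later remark that moving jobs works best on homogeneous systems; a portable format (e.g. HDF5) would be needed to cross endianness boundaries.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    long   step;          /* last completed iteration */
    double field[1000];   /* stand-in for the real simulation state */
} State;

/* Returns 1 if a checkpoint was found and loaded, 0 for a fresh start. */
static int load_checkpoint(const char *path, State *s)
{
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        return 0;
    size_t ok = fread(s, sizeof *s, 1, fp);
    fclose(fp);
    return ok == 1;
}

static void save_checkpoint(const char *path, const State *s)
{
    FILE *fp = fopen(path, "wb");
    if (fp == NULL) {
        perror("fopen");
        exit(1);
    }
    fwrite(s, sizeof *s, 1, fp);
    fclose(fp);
}

int main(void)
{
    /* In practice the checkpoint would live on the shared GPFS
       (e.g. under $DEISA_WORK) so any machine can pick it up. */
    const char *ckpt = "checkpoint.bin";
    State s = { 0 };

    if (!load_checkpoint(ckpt, &s))
        s.step = 0; /* no checkpoint found: start from scratch */

    for (; s.step < 100000; s.step++) {
        /* ... advance the simulation by one step ... */
        if (s.step % 1000 == 0)
            save_checkpoint(ckpt, &s); /* a restart point for any machine */
    }
    return 0;
}
```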

17 Running on DEISA: Load Leveler
[Diagram: a job enters the Load Leveler layer and is routed among the DEISA systems of slide 13, each driven by its local batch system: Load Leveler, in places Multicluster Load Leveler, on the AIX IBM systems; NQS II under Super-UX on the NEC SX8; LSF, PBS Pro or Linux Load Leveler on the Linux systems.]

18 Running ideally on DEISA: UNICORE
[Diagram: the UNICORE architecture across DEISA. Each site (BSC, CINECA, CSC, ECMWF, FZJ, HLRS, HPCX, IDRIS, LRZ, RZG, SARA) exposes a UNICORE Gateway; behind each Gateway, a Network Job Supervisor (NJS) with its Incarnation Database (IDB) and UNICORE User Database (UUDB) drives the local batch system of that site's machine.]

19 GPFS Multicluster
- HPC systems mount /deisa/sitename (the diagram shows /deisa/idr, /deisa/cne, /deisa/rzg, /deisa/fzj, /deisa/csc).
- Users read/write directly from/to these file systems (see the sketch below).
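A sketch of the access pattern, assuming an invented project directory and file name: a post-processing job running at any site reads data written at CINECA simply by opening CINECA's exported path, with no transfer step.

```c
#include <stdio.h>

int main(void)
{
    /* /deisa/cne is CINECA's exported file system (listed above);
       the project directory and file name are hypothetical. */
    FILE *fp = fopen("/deisa/cne/myproject/snapshot_042.dat", "rb");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    double buf[4096];
    /* a plain fread over the wide-area GPFS mount */
    size_t n = fread(buf, sizeof(double), 4096, fp);
    printf("read %zu values\n", n);
    fclose(fp);
    return 0;
}
```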

20 DEISA Common Production Environment (DCPE)
What is the DCPE? Both a set of software (the software stack) and a generic interface for accessing that software (based on the Modules tool).
It is required both to offer a common interface to the users and to hide the differences between local installations. It is an essential feature for job migration inside homogeneous super-clusters.
The DCPE includes: shells (Bash and Tcsh), compilers (C, C++, Fortran and Java), libraries (for numerical analysis, data formatting, etc.), tools (debuggers, profilers, editors, development tools) and applications.

21 Modules framework
- The Modules tool was chosen because it is well known by many sites and many users.
- It is public-domain software; the Tcl implementation is used.
Modules:
- offer a common interface to different software components on different computers;
- hide different names and configurations;
- manage each piece of software individually, loading only those required into the user environment;
- let each user change the version of each piece of software independently of the others;
- let each user switch independently between the current default version of a piece of software and another one (older or newer).

22 The HPC users' vision
Initial vision: "full" distributed computing.
[Diagram: Tasks 1, 2 and 3 of a single application spread simultaneously across all the DEISA systems of slide 13.]

23 The HPC users' vision
Initial vision: "full" distributed computing. Impossible!!!!
[Same diagram as the previous slide.]

24 The HPC users' vision
Jump computing.
[Diagram: a single task hops from one DEISA system to another.]

25 The HPC users' vision
Jump computing. Difficult… HPC applications are… HPC applications!!! They are finely tuned to their architectures.
[Same diagram as the previous slide.]

26 So… what…
Jump computing is useful to reduce queue waiting times. Find the gap… and fill it… It can work, and works better on homogeneous systems.
[Diagram: the batch-system picture of slide 17, with the job routed into a free slot.]

27 So… what…
A single-image file system is a great solution!!!!! (even if data still has to move underneath…)
[Diagram: the systems of slide 17 joined by the DEISA GPFS shared file system.]

28 So… what…
- The usual Grid solutions require learning new stuff… and scientists are often not willing to.
- DEISA relies on Load Leveler (or other common scheduling systems): the same scripts and the same commands you are used to!!! However, only IBM systems support LL…
- The Common Production Environment offers a shared (and friendly) set of tools to the users. However, compromises must be accepted…

29 Summing up…
[Diagram: a spectrum running from high latency and low integration to low latency and high integration: the Internet; Grid distributed computing and data grids (EGEE); capacity clusters and capacity supercomputers; distributed supercomputing (DEISA); capability supercomputers and enabling computing at HPC centres.]
Growing up, DEISA is moving away from being a Grid. In order to fulfil the needs of HPC users, it is trying to become one huge supercomputer. On the other hand, DEISA2 must lead to a service infrastructure, and users' expectations MUST be met (no more time for experiments…).

30 DECI: enabling science on DEISA
- Identification, deployment and operation of a number of "flagship" applications requiring the infrastructure's services, in selected areas of science and technology.
- A European call for proposals in May - June every year. Applications are selected on the basis of scientific excellence, innovation potential and relevance, with the collaboration of the national HPC evaluation committees.
- DECI users are supported by the Applications Task Force (ATASKF), whose objective is to enable and deploy the Extreme Computing applications.

31 LFI-SIM DECI project (2006)
Planck (useless) overview: Planck is the 3rd-generation space mission for the mapping and analysis of the microwave sky: its unprecedented combination of sky and frequency coverage, accuracy, stability and sensitivity is designed to achieve the most efficient detection of the Cosmic Microwave Background (CMB) in both temperature and polarisation. In order to achieve the ambitious goals of the mission, unanimously acknowledged by the scientific community to be of the highest importance, data processing of extreme accuracy is needed.
Principal Investigators: Fabio Pasian (INAF-O.A.T.), Hannu Kurki-Suonio (Univ. of Helsinki)
Leading institutions: INAF-O.A. Trieste and Univ. of Helsinki
Partner institutions:
- INAF-IASF Bologna
- Consejo Superior de Investigaciones Cientificas (Instituto de Fisica de Cantabria)
- Max-Planck-Institut für Astrophysik, Garching
- SISSA Trieste
- University of Milano
- University "Tor Vergata", Rome
DEISA home site: CINECA

32 The need for simulations in Planck
NOT the typical DECI HPC project!!! Simulations are used to:
- assess likely science outcomes;
- set requirements on instruments in order to achieve the expected scientific results;
- test the performance of the data analysis algorithms and infrastructure;
- help understand the instrument and its noise properties;
- analyze known and unforeseen systematic effects;
- deal with known physics and new physics.
Predicting the data is fundamental to understanding them.

33 Simulation pipeline
[Diagram: from the cosmological parameters, generate the CMB sky and add foregrounds to obtain reference sky maps; "observe" the sky with LFI, using the instrument parameters, to produce Time-Ordered Data; data reduction yields frequency sky maps; frequency merging and component separation yield component maps; C(l) evaluation yields the C(l), from which the cosmological parameters are re-evaluated.]
Knowledge and details increase over time, therefore the whole computational chain must be iterated many times. HUGE COMPUTATIONAL RESOURCES ARE NEEDED. The GRID can be a solution!!! (See the sketch below.)
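The shape of that iteration as a structural sketch: every type, function and parameter below is an invented stand-in for a real pipeline stage, not a Planck/LFI API. The point is only that the whole chain re-runs end to end for each refined set of cosmological and instrument parameters.

```c
#include <stdio.h>

typedef struct { double omega_m, h, sigma8; } CosmoParams; /* illustrative */
typedef struct { double noise, beam_fwhm; }  InstrParams;  /* illustrative */

/* Hypothetical stage stubs, one per box of the diagram above. */
static void generate_cmb_sky(const CosmoParams *p) { (void)p; }
static void add_foregrounds(void) {}                  /* -> reference sky maps */
static void observe_with_lfi(const InstrParams *i) { (void)i; } /* -> TOD */
static void reduce_data(void) {}                      /* TOD -> frequency maps */
static void separate_components(void) {}              /* -> component maps */
static void evaluate_cl(void) {}                      /* -> C(l) */
/* Re-evaluate the parameters; returns 0 once no further refinement is needed. */
static int refine(CosmoParams *p, InstrParams *i) { (void)p; (void)i; return 0; }

int main(void)
{
    CosmoParams cosmo = { 0.3, 0.7, 0.8 };
    InstrParams instr = { 1.0, 0.5 };

    do { /* iterated many times as knowledge of the instrument improves */
        generate_cmb_sky(&cosmo);
        add_foregrounds();
        observe_with_lfi(&instr);
        reduce_data();
        separate_components();
        evaluate_cl();
    } while (refine(&cosmo, &instr)); /* parameter evaluation feeds back */

    printf("pipeline iteration finished\n");
    return 0;
}
```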

34 Planck & DEISA
DEISA was expected to be used to:
- simulate the whole mission of Planck's LFI instrument many times, on the basis of different scientific and instrumental hypotheses;
- reduce, calibrate and analyse the simulated data down to the production of the final products of the mission, in order to evaluate the impact of possible LFI instrumental effects on the quality of the scientific results, and consequently to refine the data processing algorithms appropriately.
[Diagram: Model 1, Model 2, Model 3, … Model N each run through the pipeline.]

35 Outcomes
- Planck simulations are essential to get the best possible understanding of the mission and to have a "conscious expectation of the unexpected".
- They also allow the Data Processing Centre resources to be planned properly.
- The EGEE grid turned out to be more suitable for such a project, since it provides fast access to small/medium computing resources; most of the Planck pipeline is happy with such resources!!!
- However, DEISA was useful to produce massive sets of simulated data and to perform and test the data processing steps that require large computing resources (lots of coupled processors, large memories, large bandwidth…).
- Interoperation between the two grid infrastructures (possibly based on the gLite middleware) is expected in the next years.

