High Performance Computing on the GRID Infrastructure of COMETA
S. Orlando (1,2), G. Peres (3,1,2), F. Reale (3,1,2), F. Bocchino (1,2), G.G. Sacco (2), M. Miceli (2)
(1) INAF - Osservatorio Astronomico di Palermo, Italy; (2) Consorzio COMETA, Italy; (3) Dip.S.F.A., Università di Palermo, Italy

1. Rationale
Our group has long-term, solid experience in developing and applying numerical models to study astrophysical plasmas (e.g. the solar corona, supernova remnants, and protostellar jets) and in optimizing hydrodynamic and magnetohydrodynamic (MHD) codes for efficient execution on High Performance Computing (HPC) systems. As a natural development of our activity, and given our strong interest in HPC modeling of astrophysical plasmas, we have been among the promoters of the constitution of the COMETA consortium and of the implementation and development of an e-infrastructure in Sicily based on the GRID paradigm. In particular, we have contributed to setting up the infrastructure to run HPC applications on the GRID. In this contribution, we report on our experience in porting HPC applications to the GRID and on the first HPC simulations performed.

2. The FLASH Code
Framework: Advanced Simulation and Computing (ASC) Academic Strategic Alliances Program (ASAP) Center (USA).
Main development site: FLASH Center, The University of Chicago, USA.
Main features: modular, multi-dimensional, adaptive-mesh, parallel code capable of handling general compressible flow problems in astrophysical environments.
Collaboration OAPa/FLASH Center: to upgrade, expand, and extensively apply FLASH to astrophysical systems. New FLASH modules implemented at OAPa: non-equilibrium ionization, Spitzer thermal conduction, Spitzer viscosity, radiative losses.

3. HPC vs. GRID
In general, HPC applications require parallel computers or computer clusters, i.e. multi-processor computing systems with a low-latency interconnection network. An HPC simulation may require more than 32 processors and more than 5000 h of CPU time (summed over all processors). Typically, an HPC application requires about 10000 h of CPU time, i.e. ~7 days of execution time using 64 processors. Examples of HPC applications are multi-dimensional hydrodynamic and MHD simulations; an example of an HPC platform is the IBM SP5 hosted at CINECA (Italy). In general, GRID infrastructures are not designed for HPC, and porting HPC applications to the GRID is considered a technological challenge. Our group, together with the COMETA support team, has successfully ported HPC applications to the GRID infrastructure of the COMETA consortium.
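The ~7 day figure quoted above follows directly from the CPU-hour budget, assuming near-ideal parallel scaling on 64 processors (a rough estimate, not a measured benchmark):

\[
t_{\rm wall} \simeq \frac{T_{\rm CPU}}{N_{\rm proc}} = \frac{10000\ {\rm h}}{64} \approx 156\ {\rm h} \approx 6.5\ {\rm days},
\]

which, allowing for realistic parallel efficiency and I/O overhead, corresponds to roughly one week of wall-clock time.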
4. Porting HPC Applications to GRID
Porting HPC applications to the GRID is one of the aims of the COMETA consortium. Each cluster of the infrastructure has been designed and equipped with a low-latency interconnection network (InfiniBand) to allow the best performance of HPC applications. Since most HPC applications are based on the MPI library, MPI-1 and MPI-2 libraries have been deployed across the GRID infrastructure (a minimal MPI sketch is given after this list). In our experience, GRID infrastructures can be used to execute HPC applications if the following requirements are satisfied:
- an HPC queue with preemption capability over the other queues;
- use of a watchdog utility for job monitoring during execution;
- a long-term proxy to run jobs whose execution is time-consuming (~21 days).
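To illustrate the kind of parallel structure such applications rely on, the sketch below shows a minimal, self-contained MPI program in C: each process works on its own slice of a domain and the partial results are combined with a collective operation. It is purely illustrative (the domain size and the per-cell "work" are placeholders), not code taken from FLASH or from the COMETA middleware.

    /* Minimal MPI sketch (illustrative only, not FLASH code):
     * each rank processes a slice of a 1-D domain and the partial
     * sums are combined with a collective reduction. */
    #include <stdio.h>
    #include <mpi.h>

    #define N_GLOBAL 64000            /* hypothetical total number of cells */

    int main(int argc, char **argv)
    {
        int rank, nproc;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);

        /* Each rank owns a contiguous slice of the global domain. */
        int n_local = N_GLOBAL / nproc;
        double local_sum = 0.0;
        for (int i = 0; i < n_local; i++) {
            int ig = rank * n_local + i;   /* global cell index */
            local_sum += (double)ig;       /* stand-in for the real per-cell work */
        }

        /* Combine the partial results across all processors; on the
         * COMETA clusters this traffic goes over InfiniBand. */
        double global_sum = 0.0;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("nproc = %d, global sum = %g\n", nproc, global_sum);

        MPI_Finalize();
        return 0;
    }

Such a program would be compiled with an MPI wrapper compiler (e.g. mpicc) and launched on the number of processors requested by the job.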
5. First HPC Runs
As a first HPC application on the GRID infrastructure, we used the FLASH code to explore the importance of magnetic-field-oriented thermal conduction in the interaction of supernova remnant (SNR) shocks with radiative gas clouds (Orlando et al. 2008, ApJ 678, 274). As part of this project, we performed two demanding MHD simulations describing the shock-cloud interaction on the GRID infrastructure of COMETA. Each simulation required 64 processors and ~10000 CPU hours (summed over all processors) to cover 20000 yr of SNR evolution.
The model describes the impact of a planar supernova shock front on an isobaric cloud by numerically solving the time-dependent MHD equations of mass, momentum, and energy conservation. The model takes into account thermal conduction with flux "saturation" and the radiative losses from an optically thin plasma. The thermal conductivity in the presence of an organized ambient magnetic field is known to be highly anisotropic, and it can be drastically reduced in the direction transverse to the field.
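For reference, the equations solved are the standard time-dependent MHD conservation laws with source terms for thermal conduction and optically thin radiative losses. The sketch below gives their usual form (the notation is ours, in units where factors of 4π are absorbed into B; see Orlando et al. 2008 for the exact formulation adopted):

\[
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0,
\qquad
\frac{\partial (\rho\mathbf{u})}{\partial t} + \nabla\cdot\left(\rho\mathbf{u}\mathbf{u} - \mathbf{B}\mathbf{B} + P_{*}\mathbf{I}\right) = 0,
\]
\[
\frac{\partial E}{\partial t} + \nabla\cdot\left[(E + P_{*})\,\mathbf{u} - \mathbf{B}\,(\mathbf{u}\cdot\mathbf{B})\right]
= -\nabla\cdot\mathbf{q} - n_{e} n_{\rm H}\,\Lambda(T),
\qquad
\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{u}\times\mathbf{B}),
\]
where \(P_{*} = p + B^{2}/2\) is the total (thermal plus magnetic) pressure, \(E\) is the total energy density, and \(\Lambda(T)\) describes the optically thin radiative losses. The conductive flux is field-aligned, with the classical Spitzer conductivity along the field,
\[
\mathbf{q} = -\kappa_{\parallel}\,(\hat{\mathbf{b}}\cdot\nabla T)\,\hat{\mathbf{b}},
\qquad \kappa_{\parallel} \propto T^{5/2},
\]
and is limited to the free-streaming (saturated) value wherever the temperature scale length becomes comparable to the electron mean free path; the transverse component is negligible by comparison.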
6. Results
As expected, we found that the thermal exchanges between the cloud and the surrounding medium depend on the initial orientation of B (Orlando et al. 2008, ApJ 678, 274). The figure shows the mass density distribution in the (x,y) plane, on a log scale, at the labeled times for the case of a magnetic field oriented along the x axis. The primary shock propagates upward and impacts on the cloud. In this case, the magnetic field is trapped at the nose of the cloud, leading to a continuous increase of the magnetic pressure and field tension there. The magnetic field, B, gradually envelops the cloud, and the main consequences can be summarized as follows:
- B reduces the heat conduction through the cloud surface;
- B partially suppresses the hydrodynamic instabilities that would otherwise develop at the cloud boundary;
- the cloud expansion and evaporation induced by thermal conduction are limited by the confining effect of B;
- the thermal insulation promotes radiative cooling and condensation of the plasma, inducing thermal instabilities.

Acknowledgments: The software used in this work was in part developed by the DOE-supported ASC / Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago. This work makes use of results produced by the PI2S2 Project managed by the Consorzio COMETA, a project co-funded by the Italian Ministry of University and Research (MIUR) within the Piano Operativo Nazionale "Ricerca Scientifica, Sviluppo Tecnologico, Alta Formazione" (PON 2000-2006).

