Special Jobs: MPI Alessandro Costa INAF Catania


1 Special Jobs: MPI
Alessandro Costa, INAF Catania
Corso di Calcolo Parallelo Grid Computing
Catania, Italy, 25-29 September 2006

2 Outline
Overview of MPI
- What an MPI job is
- An example of an MPI job
- How to submit an MPI job

3 About MPI
The execution of parallel jobs is an essential requirement for modern computing and applications. The most widely used library for parallel job support is MPI (Message Passing Interface). At the current state of the art, parallel jobs can run only inside a single Computing Element (CE); several projects are studying the possibility of executing parallel jobs on Worker Nodes (WNs) belonging to different CEs.

4 Requirements & Settings
In order to guarantee that MPI jobs can run, the following requirement MUST BE satisfied: the MPICH software must be installed, and placed in the PATH environment variable, on all the WNs of the CE.

5 I/O
The VO_<name of VO>_SW_DIR environment variable, defined on each WN, contains the name of a directory shared among the WNs belonging to the same CE (MPICH tag).
Distributed I/O: YES
Master-Slave I/O: YES
Parallel I/O: NO

6 An example of an MPI job

7 An example of an MPI job: two attributes
JobType = "MPICH";
NodeNumber = 4;
The UI automatically requires the MPICH runtime environment to be installed on the CE and a number of CPUs at least equal to the required number of nodes. This is done by adding an expression like the following to the job requirements:
(other.GlueCEInfoTotalCPUs >= <NodeNumber>) && Member("MPICH",other.GlueHostApplicationSoftwareRunTimeEnvironment)
(this addition is performed automatically by the UI)

8 An example of an MPI job: the JDL file
Type = "Job";
JobType = "MPICH";
NodeNumber = 3;
Executable = "MPIpi";
Arguments = " ";
StdOutput = "MPIpi.out";
StdError = "MPIpi.err";
InputSandbox = {"MPIpi"};
OutputSandbox = {"MPIpi.err","MPIpi.out"};
RetryCount = 6;
ShallowRetryCount = 6;

9 How to choose a LRMS I
prova_mpi]$ glite-job-list-match mpi.jdl
Selected Virtual Organisation name (from proxy certificate extension): gilda
Connecting to host glite-rb2.ct.infn.it, port 7772
***************************************************************************
COMPUTING ELEMENT IDs LIST
The following CE(s) matching your job requirements have been found:
*CEId*
iceage-ce-01.ct.infn.it:2119/jobmanager-lcgpbs-infinite
iceage-ce-01.ct.infn.it:2119/jobmanager-lcgpbs-long
iceage-ce-01.ct.infn.it:2119/jobmanager-lcgpbs-short
grid011f.cnaf.infn.it:2119/jobmanager-lcgpbs-infinite
grid011f.cnaf.infn.it:2119/jobmanager-lcgpbs-long
grid011f.cnaf.infn.it:2119/jobmanager-lcgpbs-short
…..

10 How to choose a LRMS II
Type = "Job";
JobType = "MPICH";
NodeNumber = 3;
Executable = "MPIpi";
Arguments = " ";
StdOutput = "MPIpi.out";
StdError = "MPIpi.err";
InputSandbox = {"MPIpi"};
OutputSandbox = {"MPIpi.err","MPIpi.out"};
RetryCount = 6;
ShallowRetryCount = 6;
Requirements = RegExp("LRMS",other.GlueCEUniqueID);
e.g. grid011f.cnaf.infn.it

11 Example: MPI code on the Grid
Exercise: use the Leibniz formula for pi and write an MPICH job for pi computation. ….. The accuracy should be a parameter given as argv[1] ……
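For reference, the Leibniz series that the exercise is based on is:

```latex
\pi \;=\; \sum_{i=0}^{\infty} \frac{4\,(-1)^{i}}{2i+1}
    \;=\; 4 - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \cdots
```

The series converges slowly (the error after n terms is bounded by the first omitted term, 4/(2n+1)), which is what makes distributing the terms across several processes worthwhile.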

12 A simple solution: MPItest.c
#include "mpi.h"
#include <stdio.h>
#include <math.h>

int main( int argc, char *argv[] )
{
    int n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, sum;

    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);

13 A simple solution
    n = atoi(argv[1])*numprocs;
    sum = 0.0;
    for (i = myid; i < n; i += numprocs) {
        sum += pow((-1),i)*(4.0 / (2*i+1));
    }
    mypi = sum;
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT));
    MPI_Finalize();
    return 0;
}

14 /* better performance than the pow(,) solution */
    n = atoi(argv[1])*numprocs;
    sum = 0.0;
    for (i = myid; i < n; i += numprocs) {
        if (i % 2)
            sum += -4.0 / (2*i+1);
        else
            sum += 4.0 / (2*i+1);
    }
    mypi = sum;
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT));
    MPI_Finalize();
    return 0;
}

15 NodeNumber * "Arguments"
Type = "Job";
JobType = "MPICH";
NodeNumber = 3;
Executable = "MPIpi";
Arguments = " ";
StdOutput = "MPIpi.out";
StdError = "MPIpi.err";
InputSandbox = {"MPIpi"};
OutputSandbox = {"MPIpi.err","MPIpi.out"};
RetryCount = 6;
ShallowRetryCount = 6;
This job will evaluate the first NodeNumber * "Arguments" addends of the Leibniz formula.

16 NodeNumber * "Arguments"
…………….. for (i = myid; i < n; i += numprocs) ……………
e.g. numprocs = 4, arguments = 3, n = 3*4 = 12
myid=0 → {0,4,8}
myid=1 → {1,5,9}
myid=2 → {2,6,10}
myid=3 → {3,7,11}
This job will evaluate the first NodeNumber * "Arguments" addends of the Leibniz formula.

17 Questions…

18 References
gLite 3.0 User Guide manuals series, "Special Jobs" section

