
1 MPI-1 (3/12/2013, Computer Engg, IIT(BHU))

2 MESSAGE PASSING INTERFACE
- A message passing library specification.
- An extended message-passing model.
- Not a language or compiler specification.
- Not a specific implementation; several implementations exist (as with pthreads).
- A standard for distributed-memory, message-passing, parallel computing.

3 GOALS OF MPI SPECIFICATION
- Provide source-code portability.
- Allow efficient implementations.
- Be flexible enough to port different algorithms to different hardware environments.
- Support heterogeneous architectures (processors need not be identical).

4 REASONS FOR USING MPI
- Standardization: supported on virtually all HPC platforms.
- Portability: the same code runs on other platforms.
- Performance: vendor implementations should exploit native hardware features.
- Functionality: 115 routines.
- Availability: a variety of implementations are available.

5 BASIC MODEL: COMMUNICATORS AND GROUPS
Group:
- An ordered set of processes.
- Each process is associated with a unique integer rank, from 0 to (N-1) for N processes.
- A group is an object in system memory, accessed by a handle.
- Predefined constants: MPI_GROUP_EMPTY (a valid group with no members) and MPI_GROUP_NULL (an invalid group handle, e.g. the value returned when a group is freed).

6 BASIC MODEL (CONTD.)
Communicator:
- A group of processes that may communicate with each other.
- Every MPI message must specify a communicator.
- A communicator is an object in memory, accessed through a handle.
- A default communicator, MPI_COMM_WORLD, is defined automatically; it identifies the group of all processes.

7 COMMUNICATORS
- Intra-communicator: all processes belong to the same group.
- Inter-communicator: processes are drawn from two different groups.

8 COMMUNICATORS AND GROUPS
- Allow you to organize tasks, based on function, into task groups.
- Enable collective communication operations (covered later) across a subset of related tasks.
- Provide safe communication.
- Many communicators can exist at the same time.
- Dynamic: communicators and groups can be created and destroyed at run time (see the sketch below).
- A process may be in more than one group/communicator, with a unique rank in every group/communicator it belongs to.
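A minimal sketch of creating such a communicator at run time from a subset of processes. The even-rank selection is only an illustration, not from the slides; error handling is omitted.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Extract the group behind MPI_COMM_WORLD. */
    MPI_Group world_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    /* Put the even ranks into a new group. */
    int n_even = (world_size + 1) / 2;
    int even_ranks[n_even];
    for (int i = 0; i < n_even; i++)
        even_ranks[i] = 2 * i;

    MPI_Group even_group;
    MPI_Group_incl(world_group, n_even, even_ranks, &even_group);

    /* Create a communicator for that group; processes outside the
       group receive MPI_COMM_NULL. */
    MPI_Comm even_comm;
    MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

    if (even_comm != MPI_COMM_NULL) {
        int even_rank;
        MPI_Comm_rank(even_comm, &even_rank);
        printf("world rank %d has rank %d in even_comm\n",
               world_rank, even_rank);
        MPI_Comm_free(&even_comm);
    }

    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}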

9 VIRTUAL TOPOLOGIES
- Attach Cartesian (grid) or graph topology information to an existing communicator.
- Example 2 x 2 grid: coord (0,0): rank 0, coord (0,1): rank 1, coord (1,0): rank 2, coord (1,1): rank 3.
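The coordinate-to-rank mapping above corresponds to a 2 x 2 Cartesian topology. A minimal sketch using MPI_Cart_create, assuming the program is run with exactly 4 processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[2]    = {2, 2};   /* 2 x 2 grid                      */
    int periods[2] = {0, 0};   /* non-periodic in both dimensions */

    MPI_Comm cart_comm;
    /* reorder = 0: keep the ranks from MPI_COMM_WORLD */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart_comm);

    if (cart_comm != MPI_COMM_NULL) {
        int rank, coords[2];
        MPI_Comm_rank(cart_comm, &rank);
        MPI_Cart_coords(cart_comm, rank, 2, coords);
        printf("coord (%d,%d): rank %d\n", coords[0], coords[1], rank);
        MPI_Comm_free(&cart_comm);
    }

    MPI_Finalize();
    return 0;
}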

10 SEMANTICS
- Header file: #include <mpi.h> (C); include 'mpif.h' (Fortran).
- Bindings for Java, Python, etc. are also available.

11 MPI PROGRAM STRUCTURE
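The slide's structure diagram is not reproduced in this transcript. A minimal C skeleton of the usual structure (serial code, MPI_Init, parallel region, MPI_Finalize, serial code) might look like this:

#include <mpi.h>

int main(int argc, char **argv)
{
    /* Serial code: no MPI calls allowed yet. */

    MPI_Init(&argc, &argv);        /* initialize the MPI environment */

    /* Parallel region: all MPI communication happens here. */

    MPI_Finalize();                /* terminate the MPI environment  */

    /* Serial code: no MPI calls after MPI_Finalize. */
    return 0;
}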

12 MPI FUNCTIONS: MINIMAL SUBSET
- MPI_Init: initialize MPI.
- MPI_Comm_size: size of the group associated with a communicator.
- MPI_Comm_rank: identify the calling process.
- MPI_Send: send a message.
- MPI_Recv: receive a message.
- MPI_Finalize: terminate MPI.

13 CLASSIFICATION OF MPI ROUTINES
- Environment management: MPI_Init, MPI_Finalize
- Point-to-point communication: MPI_Send, MPI_Recv
- Collective communication: MPI_Reduce, MPI_Bcast
- Information on the processes: MPI_Comm_rank, MPI_Get_processor_name

14 MPI_INIT
- All MPI programs call this before using any other MPI function.
- int MPI_Init(int *pargc, char ***pargv);
- Must be called in every MPI program, exactly once, and before any other MPI function.
- Passes the command-line arguments to all processes.

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    ...
}

15 MPI_COMM_SIZE
- Returns the number of processes in the group associated with a communicator.
- int MPI_Comm_size(MPI_Comm comm, int *psize);
- Used to find out how many processes your application is using.

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    ...
}

16 MPI_COMM_RANK
- Returns the rank of the calling process within the communicator: a unique rank between 0 and (p-1), often called the task ID.
- int MPI_Comm_rank(MPI_Comm comm, int *rank);
- A process has a unique rank in each communicator it belongs to.
- Typically used to identify the work assigned to each process.

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    ...
}

17 MPI_FINALIZE
- Terminates the MPI execution environment.
- The last MPI routine called in any MPI program.
- int MPI_Finalize(void);

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("no. of processes: %d, rank: %d\n", p, rank);
    MPI_Finalize();
    return 0;
}
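MPI_Send and MPI_Recv from the minimal subset are not shown in code on these slides. A minimal sketch, assuming at least two processes (the payload 42 and message tag 0 are arbitrary choices), in which rank 0 sends one integer to rank 1:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;                    /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 1,    /* 1 int to rank 1   */
                 0, MPI_COMM_WORLD);       /* tag 0             */
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0,    /* 1 int from rank 0 */
                 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}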

18 How To Compile an MPI Program
- Open MPI is the implementation on our cluster.
- mpicc -o test test.c
- Works like gcc; mpicc is not a special compiler:
  $ mpicc
  gcc: no input files
- MPI is implemented just as any other library; mpicc is just a wrapper around gcc that adds the required command-line parameters.

19 How To Run a Compiled MPI Program
- mpirun -np X test
- Runs X copies of the program in your current run-time environment.
- The -np option specifies the number of copies of the program.

20 MPIRUN
- Only the rank 0 process can receive standard input; mpirun redirects the standard input of all other processes to /dev/null.
- Open MPI redirects the standard input of mpirun to the standard input of the rank 0 process.
- The node that invoked mpirun need not be the same as the node running the MPI_COMM_WORLD rank 0 process.
- mpirun directs the standard output and error of remote nodes to the node that invoked mpirun.
- SIGTERM and SIGKILL kill all processes in the job; all other signals are ignored.

21 SOME MORE FUNCTIONS
- int MPI_Initialized(int *flag)
  Checks whether MPI_Init has been called.
- double MPI_Wtime(void)
  Returns the elapsed wall-clock time in seconds (double precision) on the calling processor.
- double MPI_Wtick(void)
  Returns the resolution, in seconds (double precision), of MPI_Wtime().
- Message-passing functionality: that is what MPI is meant for!
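A minimal sketch of timing a piece of work with MPI_Wtime and MPI_Wtick (the loop being timed is only a placeholder):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double resolution = MPI_Wtick();   /* clock resolution in seconds */
    double t_start = MPI_Wtime();

    /* Placeholder for the work being timed. */
    double sum = 0.0;
    for (long i = 0; i < 10000000; i++)
        sum += (double)i;

    double t_end = MPI_Wtime();
    printf("elapsed: %f s (timer resolution %g s), sum = %f\n",
           t_end - t_start, resolution, sum);

    MPI_Finalize();
    return 0;
}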

