
1 MPI-2 (Computer Engg, IIT(BHU), 3/12/2013)

2 POINT-TO-POINT COMMUNICATION
Communication between two and only two processes: one sending and one receiving. Types:
– Synchronous send
– Blocking send / blocking receive
– Non-blocking send / non-blocking receive
– Buffered send
– Combined send/receive
– "Ready" send

3 POINT-TO-POINT COMMUNICATION
– Processes can be collected into groups.
– Each message is sent in a context and must be received in the same context.
– A group and a context together form a communicator.
– A process is identified by its rank in the group associated with a communicator.
– Messages are sent with an accompanying user-defined integer tag, to assist the receiving process in identifying the message; MPI_ANY_TAG matches any tag.
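A minimal sketch, not from the slides, of how rank, source, and tag wildcards are typically used on the receiving side; the function name and the assumption that some process has sent an int are for illustration only.

#include <mpi.h>
#include <stdio.h>

/* Receive one int from any sender with any tag, then inspect the
   envelope (source rank and tag) recorded in the MPI_Status. */
void receive_any(void)
{
    int value;
    MPI_Status status;

    MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);

    printf("got %d from rank %d with tag %d\n",
           value, status.MPI_SOURCE, status.MPI_TAG);
}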

4 BLOCKING SEND/RECEIVE
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm communicator)
– buf: pointer to the data to send
– count: number of elements in the buffer
– datatype: the type of the elements in the buffer
– dest: rank of the receiver
– tag: the label of the message
– communicator: set of processes involved (e.g. MPI_COMM_WORLD)
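A minimal, self-contained sketch of a blocking exchange between two ranks (assuming the program is launched with at least two processes, e.g. mpirun -np 2); the tag value and the message contents are arbitrary.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Blocking send: returns once 'value' is safe to reuse. */
        MPI_Send(&value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocking receive: returns once the data has arrived. */
        MPI_Recv(&value, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}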


11 BLOCKING SEND/RECEIVE (CONTD.)

12 "return" after it is safe to modify the application buffer. Safe modifications will not affect the data intended for the receive task does not imply that the data was actually received Blocking send can be synchronous which means there is handshaking occurring with the receive task to confirm a safe send. A blocking send can be asynchronous if a system buffer is used to hold the data for eventual delivery to the receive. A blocking receive only "returns" after the data has arrived and is ready for use by the program.

13 NON-BLOCKING SEND/RECEIVE
– Non-blocking calls return almost immediately: they simply "request" that the MPI library perform the operation when it is able. You cannot predict when that will happen.
– You can request a send/receive and start doing other work.
– It is unsafe to modify the application buffer (your variable space) until you know that the non-blocking operation has completed.
MPI_Isend(&buf, count, datatype, dest, tag, comm, &request)
MPI_Irecv(&buf, count, datatype, source, tag, comm, &request)

14 NON-BLOCKING SEND/RECEIVE (CONTD.)

15 NON-BLOCKING SEND/RECEIVE (CONTD.)
To check whether the send/receive operations have completed:
int MPI_Irecv(void *buf, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Request *req);
int MPI_Wait(MPI_Request *req, MPI_Status *status);
A call to MPI_Wait causes the code to wait until the communication pointed to by req is complete.
req: input/output, identifier associated with a communication event (initiated by MPI_ISEND or MPI_IRECV).
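A minimal sketch (assuming at least two ranks) of overlapping local computation with a non-blocking exchange; the "do other work" step is only a placeholder.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, sendval, recvval;
    MPI_Request sreq, rreq;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    sendval = rank;

    if (rank < 2) {
        int partner = 1 - rank;   /* ranks 0 and 1 exchange values */

        /* Post both operations; they return almost immediately. */
        MPI_Irecv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &rreq);
        MPI_Isend(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &sreq);

        /* ... do other, independent work here ... */

        /* Block until both operations complete; only now is it safe
           to reuse sendval or to read recvval. */
        MPI_Wait(&sreq, MPI_STATUS_IGNORE);
        MPI_Wait(&rreq, MPI_STATUS_IGNORE);

        printf("rank %d received %d\n", rank, recvval);
    }

    MPI_Finalize();
    return 0;
}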

16 NON-BLOCKING SEND/RECEIVE (CONTD.)
int MPI_Test(MPI_Request *req, int *flag, MPI_Status *status);
A call to this subroutine sets flag to true if the communication pointed to by req is complete, and to false otherwise.
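A sketch of polling with MPI_Test while doing useful work between checks; it assumes a request has already been started with MPI_Isend or MPI_Irecv, and the helper do_some_work is a hypothetical placeholder.

#include <mpi.h>

static void do_some_work(void) { /* placeholder for useful local work */ }

/* Poll a pending request, interleaving local work until it completes. */
void poll_until_done(MPI_Request *req)
{
    int flag = 0;
    MPI_Status status;

    while (!flag) {
        MPI_Test(req, &flag, &status);   /* flag becomes true when done */
        if (!flag)
            do_some_work();              /* overlap communication with work */
    }
}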

17 SYNCHRONOUS MODE
– The send can be started whether or not a matching receive was posted.
– The send completes successfully only after a matching receive has been posted and has started to receive the message.
– A blocking send and blocking receive in synchronous mode simulate a synchronous communication.
– A synchronous send is non-local: its completion depends on the receiver.
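A minimal sketch of a synchronous-mode send using MPI_Ssend (assuming at least two ranks); the call on rank 0 does not complete until rank 1 has posted and started the matching receive.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 7;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Synchronous send: completes only once the matching receive
           has been posted and has started receiving the message. */
        MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 got %d\n", value);
    }

    MPI_Finalize();
    return 0;
}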

18 BUFFERED MODE
– The send operation can be started whether or not a matching receive has been posted, and it may complete before a matching receive is posted.
– The operation is local: MPI must buffer the outgoing message.
– An error will occur if there is insufficient buffer space.
– The amount of available buffer space is controlled by the user.

19 BUFFER MANAGEMENT
int MPI_Buffer_attach(void *buffer, int size)
Provides to MPI a buffer in the user's memory to be used for buffering outgoing messages.
int MPI_Buffer_detach(void *buffer_addr, int *size)
Detaches the buffer currently associated with MPI.
MPI_Buffer_attach(malloc(BUFFSIZE), BUFFSIZE);  /* a buffer of BUFFSIZE bytes can now be used by MPI_Bsend */
MPI_Buffer_detach(&buff, &size);                /* buffer size reduced to zero */
MPI_Buffer_attach(buff, size);                  /* buffer of BUFFSIZE bytes available again */
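A minimal sketch of a buffered-mode send (assuming at least two ranks); the buffer is sized for a single pending message plus MPI_BSEND_OVERHEAD bytes, and the tag and payload are arbitrary.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, value = 3;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Attach user-provided buffer space; each pending MPI_Bsend
           needs room for its data plus MPI_BSEND_OVERHEAD bytes. */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        /* Buffered send: local operation, may complete before the
           matching receive is posted. */
        MPI_Bsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

        /* Detach waits until buffered messages have been transmitted. */
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 got %d\n", value);
    }

    MPI_Finalize();
    return 0;
}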

20 READY MODE
– A send may be started only if the matching receive has already been posted; the user must be sure of this.
– If the receive is not already posted, the operation is erroneous and its outcome is undefined.
– Completion of the send operation does not depend on the status of a matching receive; it merely indicates that the send buffer can be reused.
– A ready send could be replaced by a standard send with no effect on the behavior of the program other than performance.
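A sketch of a ready-mode send (assuming at least two ranks); here a barrier after the receiver posts MPI_Irecv is one assumed way to guarantee that the receive is posted before MPI_Rsend starts.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 5;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1)
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

    /* All ranks synchronize, so rank 0 knows the receive is posted. */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0) {
        /* Ready send: erroneous unless the matching receive exists. */
        MPI_Rsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1 got %d\n", value);
    }

    MPI_Finalize();
    return 0;
}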

21 ORDER AND FAIRNESS
Order: MPI messages are non-overtaking.
– If a receive matches two messages sent from the same process, it receives the one sent first.
– If a sent message matches two receive statements posted by the same process, the receive posted first is satisfied first.
– Message-passing code is deterministic, unless the processes are multi-threaded or the wild-card MPI_ANY_SOURCE is used in a receive statement.
Fairness:
– MPI does not guarantee fairness.
– Example: task 0 sends a message to task 2. However, task 1 sends a competing message that matches task 2's receive. Only one of the sends will complete.
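A small sketch (assuming two ranks) illustrating the non-overtaking rule: two sends from the same source that match the same receives are delivered in the order they were sent.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int first = 1, second = 2;
        MPI_Send(&first,  1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send(&second, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int a, b;
        /* Both receives match both messages (same source, same tag),
           but non-overtaking guarantees a == 1 and b == 2. */
        MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d then %d\n", a, b);
    }

    MPI_Finalize();
    return 0;
}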

22 MPI References
The Standard itself: http://www.mpi-forum.org (all MPI official releases, in both postscript and HTML).
Books:
– Using MPI: Portable Parallel Programming with the Message-Passing Interface, 2nd Edition, by Gropp, Lusk, and Skjellum, MIT Press, 1999. Also Using MPI-2, with R. Thakur.
– MPI: The Complete Reference, 2 vols., MIT Press, 1999.
– Designing and Building Parallel Programs, by Ian Foster, Addison-Wesley, 1995.
– Parallel Programming with MPI, by Peter Pacheco, Morgan Kaufmann, 1997.
Other information on the Web: http://www.mcs.anl.gov/mpi
Man pages of Open MPI on the web: http://www.open-mpi.org/doc/v1.4/ (or use apropos mpi).

