
1 Point-to-Point Communication

2 Kinds of Communication
 Blocking and Nonblocking Communications

3 Kinds of Communication
 Blocking
 - The sender does not return from the MPI call until the message buffer (the user's container for the message) can be reused, i.e. the message has been sent.
 - The receiver does not return until the receiving message buffer contains all of the message.
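A minimal sketch of these blocking semantics (an illustrative example, not from the original deck; it assumes the program runs with at least two processes):

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char **argv)
    {
        int rank, data = 42;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            /* returns only when 'data' can safely be reused */
            MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* returns only when 'data' holds the whole message */
            MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        }
        MPI_Finalize();
        return 0;
    }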

4 Kinds of Communication
 Non-blocking
 - The sender's call returns after the nonblocking send start call initiates the send operation. A separate send complete call is needed to complete the communication, i.e., to verify that the data has been copied out of the send buffer.
 - The receiver's call returns after the nonblocking receive start call initiates the receive operation. A separate receive complete call is needed to complete the receive operation and verify that the data has been received into the receive buffer.
 - Other MPI procedures test or wait for the completion of sends and receives.
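A sketch of the start/complete pairing (an illustrative fragment; buf, count, dest and tag are placeholder names, not from the original slides):

    MPI_Request req;
    MPI_Status status;

    /* start call: initiates the send and returns immediately */
    MPI_Isend(buf, count, MPI_FLOAT, dest, tag, MPI_COMM_WORLD, &req);
    /* ... other work; buf must not be touched here ... */
    /* complete call: returns once the data is out of buf */
    MPI_Wait(&req, &status);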

5 Communication Modes
 Standard
 Buffered
 Synchronous
 Ready

6 Communication Modes
 Modes are determined by the name of the MPI send procedure used, e.g. MPI_BSEND specifies a buffered send.
 Standard (no letter)
 - It is up to MPI to decide whether outgoing messages will be buffered.
 - MPI may buffer outgoing messages: the send call may then complete before a matching receive is invoked.
 - MPI may choose not to buffer outgoing messages, for performance reasons: the send call will then not complete until a matching receive has been posted.
 - Non-local operation: another process may have to do something before this operation completes, e.g. successful completion of the send operation may depend on the occurrence of a matching receive.

7 Communication Modes
 Buffered (letter B)
 - It can be started whether or not a matching receive has been posted, and it may complete before a matching receive is posted.
 - Therefore MPI must buffer the outgoing message, so as to allow the send call to complete.
 - Local operation: another process does not have to do anything before this operation completes, e.g. a buffered mode send may complete before a matching receive is posted.

8 Communication Modes
 Synchronous (letter S)
 - It can be started whether or not a matching receive was posted.
 - It completes successfully only if a matching receive is posted and the receive operation has started to receive the message.
 - Thus, the completion of a synchronous send not only indicates that the send buffer can be reused, but also that the receiver has reached a certain point in its execution.
 - Non-local operation.

9 Communication Modes
 Ready (letter R)
 - The send starts only if the matching receive has already been posted.
 - Otherwise, the operation is erroneous and its outcome is undefined.
 - The completion of the send operation does not depend on the status of a matching receive, and merely indicates that the send buffer can be reused.
 - Non-local operation.
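Because the matching receive must already be posted, ready mode is usually paired with explicit synchronization. A hedged sketch, assuming two processes (the barrier is one illustrative way to guarantee the ordering; not from the original slides):

    int rank, data = 0;
    MPI_Request req;
    MPI_Status status;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 1)
        MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
    MPI_Barrier(MPI_COMM_WORLD);   /* after this, the receive is known to be posted */
    if (rank == 0) {
        data = 7;
        MPI_Rsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Wait(&req, &status);
    }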

10 Communication Modes
 A possible communication protocol for the various communication modes is outlined below.
 - ready send: The message is sent as soon as possible.
 - synchronous send: The sender sends a request-to-send message. The receiver stores this request. When a matching receive is posted, the receiver sends back a permission-to-send message, and the sender then sends the message.
 - standard send: The first protocol may be used for short messages, and the second protocol for long messages.
 - buffered send: The sender copies the message into a buffer and then sends it with a nonblocking send (using the same protocol as for standard send).

11 Graphical Representation of the Implementation Models
 [figure: sender and receiver user data; send buffer used, no receive buffer used]

12 Graphical Representation of the Implementation Models
 [figure: sender and receiver user data; send buffer used, receive buffer used]

13 Graphical Representation of the Implementation Models
 [figure: sender and receiver user data; no send buffer used, no receive buffer used]

14 Graphical Representation of the Implementation Models
 [figure: sender and receiver user data; no send buffer used, receive buffer used]

15 Point-to-Point Communication – Blocking Functions

16 Blocking Functions – MPI_SEND
 Standard send
 This routine may block until the message is received.
 C: int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
 Input parameters:
 - buf: initial address of send buffer (choice)
 - count: number of elements in send buffer (non-negative integer)
 - datatype: datatype of each send buffer element (handle)
 - dest: rank of destination (integer)
 - tag: message tag (integer)
 - comm: communicator (handle)
 Fortran: MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

17 Blocking Functions – MPI_RECV
 Basic receive
 MPI_ANY_SOURCE receives from any source in the communicator
 MPI_ANY_TAG accepts any incoming message tag
 C: int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
 Input parameters:
 - count: maximum number of elements in receive buffer (integer)
 - datatype: datatype of each receive buffer element (handle)
 - source: rank of source (integer)
 - tag: message tag (integer)
 - comm: communicator (handle)
 Output parameters:
 - buf: initial address of receive buffer (choice)
 - status: status object (Status); contains information on the data that was actually received, e.g. MPI_SOURCE, MPI_TAG, MPI_ERROR, and other information not directly accessible to the programmer

18 Blocking Functions – MPI_RECV
 Fortran: MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
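A short usage sketch combining the wildcards with the status object (an illustrative fragment, not from the original slides):

    MPI_Status status;
    int value;

    /* receive from any sender with any tag, then inspect who sent it */
    MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    printf("got a message from rank %d with tag %d\n",
           status.MPI_SOURCE, status.MPI_TAG);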

19 Blocking Functions – MPI_BSEND
 Send in buffered mode
 All parameters are the same as MPI_SEND
 C: int MPI_Bsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
 Fortran: MPI_BSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

20 Blocking Functions – MPI_SSEND
 Send in synchronous mode
 All parameters are the same as MPI_SEND
 C: int MPI_Ssend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
 Fortran: MPI_SSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

21 Blocking Functions – MPI_RSEND
 Send in ready mode
 All parameters are the same as MPI_SEND
 C: int MPI_Rsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
 Fortran: MPI_RSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

22 Blocking Functions – MPI_SENDRECV
 The blocking send-receive operation combines in one call the sending of a message to one destination and the receiving of another message from another process.
 Very useful for executing a shift operation across a chain of processes.
 C: int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, void *recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm, MPI_Status *status)

23 Blocking Functions – MPI_SENDRECV
 Input parameters:
 - sendbuf: initial address of send buffer (choice)
 - sendcount: number of elements in send buffer (integer)
 - sendtype: type of elements in send buffer (handle)
 - dest: rank of destination (integer)
 - sendtag: send tag (integer)
 - recvcount: number of elements in receive buffer (integer)
 - recvtype: type of elements in receive buffer (handle)
 - source: rank of source (integer)
 - recvtag: receive tag (integer)
 - comm: communicator (handle)

24 Blocking Functions – MPI_SENDRECV
 Output parameters:
 - recvbuf: initial address of receive buffer (choice)
 - status: status object (Status)
 Fortran: MPI_SENDRECV(SENDBUF, SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVBUF, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS, IERROR)
   <type> SENDBUF(*), RECVBUF(*)
   INTEGER SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR

25 Blocking Functions – MPI_PROBE
 Blocking test for a message. It does not "receive" the message; a subsequent MPI_Recv() will receive it.
 MPI_PROBE behaves like MPI_IPROBE except that it is a blocking call that returns only after a matching message has been found.
 C: int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
 Input parameters:
 - source: source rank, or MPI_ANY_SOURCE (integer)
 - tag: tag value, or MPI_ANY_TAG (integer)
 - comm: communicator (handle)
 Output parameter:
 - status: status object (Status)
 Fortran: MPI_PROBE(SOURCE, TAG, COMM, STATUS, IERROR)
   INTEGER SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
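A common pattern, sketched here as an illustrative fragment (an MPI_CHAR payload is assumed), is to probe first so the receive buffer can be sized with MPI_Get_count before receiving:

    MPI_Status status;
    int count;
    char *buf;

    MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_CHAR, &count);     /* message length in entries */
    buf = (char *)malloc(count);
    MPI_Recv(buf, count, MPI_CHAR, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, &status);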

26 Blocking Functions – MPI_BUFFER_ATTACH
 Used with MPI_BSEND
 Provides to MPI a buffer in the user's memory to be used for buffering outgoing messages
 Only one buffer can be attached to a process at a time
 C: int MPI_Buffer_attach(void* buffer, int size)
 Input parameters:
 - buffer: initial buffer address (choice)
 - size: buffer size, in bytes (integer)
 Fortran: MPI_BUFFER_ATTACH(BUFFER, SIZE, IERROR)
   <type> BUFFER(*)
   INTEGER SIZE, IERROR

27 Blocking Functions – MPI_BUFFER_DETACH
 Used with MPI_BSEND
 Detaches the buffer currently associated with MPI
 C: int MPI_Buffer_detach(void* buffer_addr, int* size)
 Output parameters:
 - buffer_addr: initial buffer address (choice)
 - size: buffer size, in bytes (integer)
 Fortran: MPI_BUFFER_DETACH(BUFFER_ADDR, SIZE, IERROR)
   <type> BUFFER_ADDR(*)
   INTEGER SIZE, IERROR
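A sketch of the attach/detach pairing around buffered sends (an illustrative fragment; 1024 floats is an arbitrary size):

    int size = 1024 * sizeof(float) + MPI_BSEND_OVERHEAD;
    void *buf = malloc(size);

    MPI_Buffer_attach(buf, size);
    /* ... one or more MPI_Bsend calls ... */
    MPI_Buffer_detach(&buf, &size);   /* blocks until buffered messages are transmitted */
    free(buf);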

28 MPI_GET_COUNT
 Returns the number of entries received. (We count entries, each of type datatype, not bytes.)
 C: int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
 Input parameters:
 - status: return status of receive operation (Status)
 - datatype: datatype of each receive buffer entry (handle)
 Output parameter:
 - count: number of received entries (integer)
 Fortran: MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
   INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR

29 MPI_SEND 1
 The root node sends a message to process 1; process 1 sends it back to process 0.
 MPI functions used: MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize, MPI_Send, MPI_Recv, MPI_Get_count

30 MPI_SEND 1 (C)

/*
 * The root node sends a message to process 1. Process 1 sends it back to process 0.
 */
#include <stdio.h>      /* for input/output */
#include "mpi.h"        /* for MPI routines */
#define BUFSIZE 64      /* the size of the message being passed */

int main(int argc, char** argv)
{
    int my_rank;            /* the rank of this process */
    int n_processes;        /* the total number of processes */
    char buf[BUFSIZE];      /* a buffer for the message */
    int tag = 0;
    int count;
    MPI_Status status;

    MPI_Init(&argc, &argv);                        /* initialize MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &n_processes);   /* get # of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);       /* get my rank */
    if (my_rank == 0)
    {
        /* send to the next node */
        printf("Hello world! I am %d of %d, sending to Proc %d\n", my_rank, n_processes, my_rank+1);
        MPI_Send(buf, BUFSIZE, MPI_CHAR, my_rank+1, tag, MPI_COMM_WORLD);

31 MPI_SEND 1 (C)

        /* receive from the last node */
        MPI_Recv(buf, BUFSIZE, MPI_CHAR, n_processes-1, tag, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_CHAR, &count);
        printf("Hello world! I am %d of %d, recv %d entries from Proc %d\n", my_rank, n_processes, count, n_processes-1);
    }
    else
    {
        /* receive from proc 0 */
        MPI_Recv(buf, BUFSIZE, MPI_CHAR, 0, tag, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_CHAR, &count);
        printf("Hello world! I am %d of %d, recv %d entries from Proc 0\n", my_rank, n_processes, count);
        /* send back to proc 0 */
        printf("Hello world! I am %d of %d, sending to Proc 0\n", my_rank, n_processes);
        MPI_Send(buf, BUFSIZE, MPI_CHAR, 0, tag, MPI_COMM_WORLD);
    }
    MPI_Finalize();     /* finalize MPI */
    return 0;
}

32 MPI_SEND 1 (Fortran)

C
C     The root node sends a message to process 1. Process 1 sends it back to process 0.
C
      program main
      include 'mpif.h'
      parameter (BUFSIZE = 64)
      integer my_rank, n_processes, tag
      integer ierr
      integer count, size
      integer status(MPI_STATUS_SIZE)
      double precision buf(BUFSIZE)

      size = BUFSIZE
      tag = 2001
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, n_processes, ierr)
      if (my_rank .eq. 0) then
C        send to the next node
         print *, 'Hello world! I am ', my_rank, ' of ', n_processes,
     1   ', sending to Proc ', my_rank+1

33 MPI_SEND 1 (Fortran)

         call MPI_SEND(buf, size, MPI_DOUBLE_PRECISION, my_rank+1,
     1   tag, MPI_COMM_WORLD, ierr)
C        receive from the last node
         call MPI_RECV(buf, size, MPI_DOUBLE_PRECISION, n_processes-1,
     1   tag, MPI_COMM_WORLD, status, ierr)
         call MPI_GET_COUNT(status, MPI_DOUBLE_PRECISION, count, ierr)
         print *, 'Hello world! I am ', my_rank, ' of ', n_processes,
     1   ', recv ', count, ' entries from Proc ', n_processes-1
      else
C        receive from the previous node
         call MPI_RECV(buf, size, MPI_DOUBLE_PRECISION, my_rank-1, tag,
     1   MPI_COMM_WORLD, status, ierr)
         call MPI_GET_COUNT(status, MPI_DOUBLE_PRECISION, count, ierr)
         print *, 'Hello world! I am ', my_rank, ' of ', n_processes,
     1   ', recv ', count, ' entries from Proc ', my_rank-1
C        send back to Proc 0
         print *, 'Hello world! I am ', my_rank, ' of ', n_processes,
     1   ' sending to Proc 0'
         call MPI_SEND(buf, size, MPI_DOUBLE_PRECISION, 0, tag,
     1   MPI_COMM_WORLD, ierr)
      endif
      call MPI_FINALIZE(ierr)
      end


35 MPI_BSEND 1
 Uses MPI_Bsend to send a single message in buffered mode.
 A buffered mode send operation can be started whether or not a matching receive has been posted.
 In this program, the MPI_Bsend completes before a matching receive is posted.
 MPI functions used: MPI_Init, MPI_Comm_rank, MPI_Finalize, MPI_Bsend, MPI_Recv, MPI_Buffer_attach, MPI_Buffer_detach

36 MPI_BSEND 1 (C)

/*
 * Use MPI_Bsend to send a single message in buffered mode.
 * Note that a buffered mode send operation can be started
 * whether or not a matching receive has been posted.
 * In this case, the MPI_Bsend completes before a matching receive is posted.
 */
#include <stdio.h>      /* for input/output */
#include <stdlib.h>     /* for malloc/free */
#include <unistd.h>     /* for sleep */
#include "mpi.h"        /* for MPI routines */
#define BUFSIZE 2048    /* the size of the message being passed */

int main(int argc, char** argv)
{
    int rank;           /* the rank of this process */
    int tag = 0;
    int bufsize, abufsize;
    float *buf, *abuf, *message;
    MPI_Status status;

    bufsize = BUFSIZE * sizeof(float) + MPI_BSEND_OVERHEAD;
    buf = (float *)malloc(bufsize);
    message = (float *)malloc(sizeof(float) * BUFSIZE);

37 MPI_BSEND 1 (C)

    MPI_Init(&argc, &argv);                   /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* get my rank */
    if (rank == 0)
    {
        printf("Hello world! I am proc 0, sending to proc 1..\n");
        MPI_Buffer_attach(buf, bufsize);
        /* a buffer can now be used by MPI_Bsend */
        /* send to proc 1 */
        MPI_Bsend(message, BUFSIZE, MPI_FLOAT, 1, tag, MPI_COMM_WORLD);
        MPI_Buffer_detach(&abuf, &abufsize);
        /* buffer size reduced to zero */
        free(abuf);
    }
    else if (rank == 1)
    {
        /* sleep for 3 sec */
        sleep(3);
        printf("Hello world! I am proc 1, just wake up!!\n");
        /* receive from proc 0 */
        MPI_Recv(message, BUFSIZE, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &status);
        printf("Hello world! Received 1 message from proc 0!!\n");
    }
    printf("Proc %d finished!!\n", rank);
    MPI_Finalize();
    return 0;
}

38 MPI_BSEND 1 (Fortran)

C
C     The buffer size passed to MPI_BSEND must be an integer;
C     passing a parameter such as BUFSIZE directly may cause an error.
C
      call MPI_BSEND(message, messagesize, MPI_REAL,
     1   1, tag, MPI_COMM_WORLD, ierr)
      call MPI_BUFFER_DETACH(BUFFER, size, ierr)
      else if (rank == 1) then
         call SLEEP(3)
         print *, 'Hello world! I am proc ', rank,
     1   ', just wake up!!'
         call MPI_RECV(message, BUFSIZE, MPI_REAL,
     1   0, tag, MPI_COMM_WORLD, status, ierr)
         print *, 'Hello world! Received 1 message from ',
     1   'proc 0!!'
      endif
      print *, 'Proc ', rank, ' finished!!'
      call MPI_FINALIZE(ierr)
      end

39 MPI_BSEND 1 (Fortran)

      program main
      include 'mpif.h'
      parameter (BUFSIZE = 2048)
      integer ierr, rank
      integer tag, status(MPI_STATUS_SIZE)
      integer size, messagesize
      real message(BUFSIZE)
C     attach buffer: must hold the message plus MPI_BSEND_OVERHEAD bytes
      real BUFFER(BUFSIZE + MPI_BSEND_OVERHEAD)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      tag = 0
      messagesize = BUFSIZE
      if (rank == 0) then
         print *, 'Hello world! I am proc ', rank,
     1   ', sending to proc 1..'
         size = BUFSIZE * 4 + MPI_BSEND_OVERHEAD
         call MPI_BUFFER_ATTACH(BUFFER, size, ierr)

40 MPI_BSEND 2
 Uses MPI_Bsend to send several messages in buffered mode.
 The total buffer space for all outstanding messages must be attached first; otherwise an error is produced.
 MPI functions used: MPI_Init, MPI_Comm_rank, MPI_Finalize, MPI_Bsend, MPI_Recv, MPI_Buffer_attach, MPI_Buffer_detach

41 MPI_BSEND 2 (C)

/*
 * Use MPI_Bsend to send several messages in buffered mode.
 * Note that the total sum of buffer memory must be allocated first,
 * otherwise an error is produced.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "mpi.h"
#define M 3     /* the number of messages to send */

int main(int argc, char** argv)
{
    int n, i;
    int rank;
    int size;
    int *buf;
    int *abuf;
    int blen;
    int ablen;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (size != 2) {
        if (rank == 0) {
            printf("Error: 2 processes required\n");
            fflush(stdout);
        }
        MPI_Abort(MPI_COMM_WORLD, MPI_ERR_OTHER);
    }

42 MPI_BSEND 2 (C)

    if (rank == 0) {
        blen = M * (sizeof(int) + MPI_BSEND_OVERHEAD);
        buf = (int*) malloc(blen);
        MPI_Buffer_attach(buf, blen);
        printf("attached %d bytes\n", blen);
        fflush(stdout);
        for (i = 0; i < M; i++) {
            printf("starting send %d...\n", i);
            fflush(stdout);
            n = i;
            MPI_Bsend(&n, 1, MPI_INT, 1, i, MPI_COMM_WORLD);
            printf("complete send %d\n", i);
            fflush(stdout);
            sleep(1);
        }
        MPI_Buffer_detach(&abuf, &ablen);
        printf("detached %d bytes\n", ablen);
        free(abuf);
    } else {
        for (i = M - 1; i >= 0; i--) {
            printf("starting recv %d...\n", i);
            fflush(stdout);
            MPI_Recv(&n, M, MPI_INT, 0, i, MPI_COMM_WORLD, &status);
            printf("complete recv: %d. received %d\n", i, n);
            fflush(stdout);
        }
    }
    MPI_Finalize();
    return 0;
}

43 MPI_BSEND 2 (Fortran)

      program main
      include 'mpif.h'
      parameter (M = 3)
      integer ierr, rank
      integer tag, status(MPI_STATUS_SIZE)
      integer size, messagesize
C     attach buffer: M integers plus per-message overhead
      real buf(M * (1 + MPI_BSEND_OVERHEAD))
      integer n, i
      integer blen

      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (size .ne. 2) then
         if (rank .eq. 0) then
            print *, 'Error: 2 processes required'
         endif
         call MPI_ABORT(MPI_COMM_WORLD, MPI_ERR_OTHER, ierr)
      endif

44 MPI_BSEND 2 (Fortran)

      if (rank == 0) then
         blen = M * (4 + MPI_BSEND_OVERHEAD)
         call MPI_BUFFER_ATTACH(buf, blen, ierr)
         print *, 'attached ', blen, ' bytes'
         do i = 0, M - 1
            print *, 'starting send ', i
            n = i
            call MPI_BSEND(n, 1, MPI_INTEGER,
     1      1, i, MPI_COMM_WORLD, ierr)
            print *, 'complete send ', i
            call SLEEP(1)
         enddo
         call MPI_BUFFER_DETACH(buf, blen, ierr)
         print *, 'detached ', blen, ' bytes'
      else if (rank == 1) then
         do i = M - 1, 0, -1
            print *, 'starting recv...', i
            call MPI_RECV(n, M, MPI_INTEGER,
     1      0, i, MPI_COMM_WORLD, status, ierr)
            print *, 'complete recv: ', i, ' received ', n
         enddo
      endif
      print *, 'Proc ', rank, ' finished!!'
      call MPI_FINALIZE(ierr)
      end

45 MPI_SENDRECV
 All processes send a message to the next process simultaneously, then receive a message from the previous process.
 MPI functions used: MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Sendrecv, MPI_Finalize

46 MPI_SENDRECV (C)

/*
 * Illustrate the usage of MPI_Sendrecv.
 * All processes send a message to the next process simultaneously,
 * and then receive a message from the previous process.
 */
#include <stdio.h>      /* for input/output */
#include "mpi.h"        /* for MPI routines */

int main(int argc, char** argv)
{
    int rank;               /* the rank of this process */
    int tag = 0;
    int nproc;
    int sendbuf[1], recvbuf[1];
    int dest;               /* stores the destination */
    MPI_Status status;

47 MPI_SENDRECV (C)

    MPI_Init(&argc, &argv);                   /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* get my rank */
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);    /* get total no. of processes */
    dest = (rank + 1) % nproc;
    printf("Proc %d: Sending message to %d...\n", rank, dest);
    if (rank == 0)
    {
        MPI_Sendrecv(sendbuf, 1, MPI_INT, dest, tag, recvbuf, 1, MPI_INT, nproc - 1, tag, MPI_COMM_WORLD, &status);
    }
    else
    {
        MPI_Sendrecv(sendbuf, 1, MPI_INT, dest, tag, recvbuf, 1, MPI_INT, rank - 1, tag, MPI_COMM_WORLD, &status);
    }
    printf("Proc %d: Receive message from %d...\n", rank, status.MPI_SOURCE);
    MPI_Finalize();     /* finalize MPI */
    return 0;
}

48 MPI_SENDRECV (Fortran)

C
C     Illustrate the usage of MPI_SENDRECV.
C     All processes send a message to the next process simultaneously,
C     and then receive a message from the previous process.
C
      program main
      include 'mpif.h'
      integer rank, tag
      integer ierr, nproc, dest
      integer sendbuf(1), recvbuf(1)
      integer status(MPI_STATUS_SIZE)

      tag = 0
      CALL MPI_INIT(ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)

49 MPI_SENDRECV (Fortran)

      dest = MOD((rank + 1), nproc)
      print *, 'Proc ', rank, ': Sending message to ', dest, '...'
      IF (rank .EQ. 0) THEN
         CALL MPI_SENDRECV(sendbuf, 1, MPI_INTEGER, dest, tag,
     1   recvbuf, 1, MPI_INTEGER, nproc - 1, tag,
     1   MPI_COMM_WORLD, status, ierr)
      ELSE
         CALL MPI_SENDRECV(sendbuf, 1, MPI_INTEGER, dest, tag,
     1   recvbuf, 1, MPI_INTEGER, rank - 1, tag,
     1   MPI_COMM_WORLD, status, ierr)
      END IF
      print *, 'Proc ', rank, ': Receive message from ',
     1 status(MPI_SOURCE), '...'
      call MPI_FINALIZE(ierr)
      end

50 MPI_PROBE
 Processes 0 and 1 each send a message to process 2. Process 2 calls MPI_PROBE to find out whether a message has arrived; if so, it calls MPI_RECV to receive it.
 MPI functions used: MPI_Init, MPI_Comm_rank, MPI_Finalize, MPI_Send, MPI_Recv, MPI_Probe

51 MPI_PROBE (C)

/*
 * Procs 0 and 1 each send a message to proc 2.
 * Proc 2 calls MPI_Probe to find out whether a message has arrived;
 * if so, it calls MPI_Recv to receive it.
 * Compile: mpicc mpi_probe01.c -o mpi_probe01
 * Run: mpirun -np 3 mpi_probe01
 */
#include <stdio.h>      /* for input/output */
#include <unistd.h>     /* for sleep */
#include "mpi.h"        /* for MPI routines */

int main(int argc, char** argv)
{
    int rank;           /* the rank of this process */
    int tag = 0;
    int n, i[1];
    float x[1];
    MPI_Status status;

    MPI_Init(&argc, &argv);                   /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* get my rank */
    if (rank == 0)
    {
        printf("Proc 0: Sleep for 5 sec before send...\n");
        sleep(5);
        printf("Proc 0: Sending message...\n");
        MPI_Send(i, 1, MPI_INT, 2, tag, MPI_COMM_WORLD);
    }

52 MPI_PROBE (C)

    else if (rank == 1)
    {
        printf("Proc 1: Sleep for 3 sec before send...\n");
        sleep(3);
        printf("Proc 1: Sending message...\n");
        MPI_Send(x, 1, MPI_FLOAT, 2, tag, MPI_COMM_WORLD);
    }
    else /* rank == 2 */
    {
        for (n = 1; n <= 2; n++)
        {
            printf("Proc 2: Wait for message to come...\n");
            MPI_Probe(MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &status);
            printf("Proc 2: New message...\n");
            if (status.MPI_SOURCE == 0)
            {
                MPI_Recv(i, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
            }
            else
            {
                MPI_Recv(x, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &status);
            }
            printf("Proc 2: Received message from Proc %d successfully!!\n", status.MPI_SOURCE);
        }
    }
    MPI_Finalize();     /* finalize MPI */
    return 0;
}

53 MPI_PROBE (Fortran)

C
C     Procs 0 and 1 each send a message to proc 2.
C     Proc 2 calls MPI_PROBE to find out whether a message has arrived;
C     if so, it calls MPI_RECV to receive it.
C     Compile: mpif77 mpi_probe01.f -o mpi_probe01
C     Run: mpirun -np 3 mpi_probe01
C
      program main
      include 'mpif.h'
      integer rank, tag
      integer ierr, n, i
      real x
      integer status(MPI_STATUS_SIZE)

      tag = 0
      CALL MPI_INIT(ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      IF (rank .EQ. 0) THEN
         print *, 'Proc 0: Sleep for 5 sec before send...'
         CALL SLEEP(5)
         print *, 'Proc 0: Sending message...'
         CALL MPI_SEND(i, 1, MPI_INTEGER, 2, tag, MPI_COMM_WORLD, ierr)

54 MPI_PROBE (Fortran)

      ELSE IF (rank .EQ. 1) THEN
         print *, 'Proc 1: Sleep for 3 sec before send...'
         CALL SLEEP(3)
         print *, 'Proc 1: Sending message...'
         CALL MPI_SEND(x, 1, MPI_REAL, 2, tag, MPI_COMM_WORLD, ierr)
      ELSE ! rank .EQ. 2
         DO n = 1, 2
            print *, 'Proc 2: Wait for message to come...'
            CALL MPI_PROBE(MPI_ANY_SOURCE, tag, MPI_COMM_WORLD,
     1      status, ierr)
            print *, 'Proc 2: New message...'
            IF (status(MPI_SOURCE) .EQ. 0) THEN
               CALL MPI_RECV(i, 1, MPI_INTEGER, 0, tag,
     1         MPI_COMM_WORLD, status, ierr)
            ELSE
               CALL MPI_RECV(x, 1, MPI_REAL, 1, tag,
     1         MPI_COMM_WORLD, status, ierr)
            END IF
            print *, 'Proc 2: Received message from Proc ',
     1      status(MPI_SOURCE), ' successfully!!'
         END DO
      END IF
      call MPI_FINALIZE(ierr)
      end

55 Point-to-Point Communication – Nonblocking Functions

56 Nonblocking Send and Receive
 A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not access any part of the send buffer after a nonblocking send operation is called, until the send completes.
 A nonblocking receive call indicates that the system may start writing data into the receive buffer. The receiver should not access any part of the receive buffer after a nonblocking receive operation is called, until the receive completes.

57 Nonblocking Functions – MPI_ISEND
 Start a standard mode, nonblocking send (immediate send)
 C: int MPI_Isend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
 Input parameters:
 - buf: initial address of send buffer (choice)
 - count: number of elements in send buffer (non-negative integer)
 - datatype: datatype of each send buffer element (handle)
 - dest: rank of destination (integer)
 - tag: message tag (integer)
 - comm: communicator (handle)

58 Nonblocking Functions – MPI_ISEND
 Output parameter:
 - request: communication request (handle)
 The request can be used later to query the status of the communication or to wait for its completion.
 Fortran: MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

59 Nonblocking Functions – MPI_IBSEND
 Start a buffered mode, nonblocking send
 Parameters are the same as MPI_ISEND
 C: int MPI_Ibsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
 Fortran: MPI_IBSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

60 Nonblocking Functions – MPI_ISSEND
 Start a synchronous mode, nonblocking send
 Parameters are the same as MPI_ISEND
 C: int MPI_Issend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
 Fortran: MPI_ISSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

61 Nonblocking Functions – MPI_IRSEND
 Start a ready mode, nonblocking send
 Parameters are the same as MPI_ISEND
 C: int MPI_Irsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
 Fortran: MPI_IRSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

62 Nonblocking Functions – MPI_IRECV
 Start a nonblocking receive
 C: int MPI_Irecv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
 Input parameters:
 - count: number of elements in receive buffer (integer)
 - datatype: datatype of each receive buffer element (handle)
 - source: rank of source (integer); MPI_ANY_SOURCE receives from any source in the communicator
 - tag: message tag (integer); MPI_ANY_TAG accepts any incoming message tag
 - comm: communicator (handle)

63 Nonblocking Functions – MPI_IRECV
 Output parameters:
 - buf: initial address of receive buffer (choice)
 - request: communication request (handle)
 Fortran: MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR)
   <type> BUF(*)
   INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR

64 Nonblocking Functions – MPI_WAIT
 A blocking call that completes the MPI_Isend() or MPI_Irecv() call.
 This call will make your process hang until the operation identified by the request is complete.
 Following MPI_Isend immediately with MPI_Wait is equivalent to calling MPI_Send, but splitting the call in two lets you do a number of other things between MPI_Isend and MPI_Wait.

65 Nonblocking Functions – MPI_WAIT
 C: int MPI_Wait(MPI_Request *request, MPI_Status *status)
 Input/output parameter:
 - request: request (handle)
 Output parameter:
 - status: status object (Status)
 Fortran: MPI_WAIT(REQUEST, STATUS, IERROR)
   INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
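A sketch of the point made above, with work overlapped between the start and complete calls (an illustrative fragment; do_independent_work is a placeholder for computation that does not touch sendbuf):

    MPI_Request req;
    MPI_Status status;

    MPI_Isend(sendbuf, n, MPI_FLOAT, dest, tag, MPI_COMM_WORLD, &req);
    do_independent_work();      /* runs while the send is in flight */
    MPI_Wait(&req, &status);    /* sendbuf may be reused only after this returns */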

66 Nonblocking Functions – MPI_TEST
 A nonblocking call that tests for completion of an MPI_Isend() or MPI_Irecv() call.
 Unlike MPI_Wait, this call does not hang waiting for the communication request to complete.
 It returns right away with flag = true if the operation is complete, and the value of request is then set to MPI_REQUEST_NULL.
 Otherwise flag = false and the value of request remains unchanged.
 Most commonly you are likely to use MPI_Test in a loop: checking if the communication has completed, then doing something else, then checking again, and so on.

67 Nonblocking Functions – MPI_TEST
 C: int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
 Input/output parameter:
 - request: communication request (handle)
 Output parameters:
 - flag: true if the operation completed (logical)
 - status: status object (Status)
 Fortran: MPI_TEST(REQUEST, FLAG, STATUS, IERROR)
   LOGICAL FLAG
   INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR

68 Nonblocking Functions – MPI_IPROBE
 MPI_IPROBE(source, tag, comm, flag, status) returns flag = true if there is a message that can be received and that matches the pattern specified by the arguments source, tag, and comm.
 MPI_IPROBE behaves like MPI_PROBE except that it is a nonblocking call.
 C: int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status)
 Input parameters:
 - source: source rank, or MPI_ANY_SOURCE (integer)
 - tag: tag value, or MPI_ANY_TAG (integer)
 - comm: communicator (handle)
 Output parameters:
 - flag: true if a matching message is available (logical)
 - status: status object (Status)
 Fortran: MPI_IPROBE(SOURCE, TAG, COMM, FLAG, STATUS, IERROR)
   LOGICAL FLAG
   INTEGER SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
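A polling sketch with MPI_Iprobe (an illustrative fragment; do_other_work is a placeholder, and an MPI_INT payload is assumed):

    int flag = 0, value;
    MPI_Status status;

    while (!flag) {
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
        if (!flag)
            do_other_work();    /* useful work while no message has arrived */
    }
    MPI_Recv(&value, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, &status);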

69 Notes on Nonblocking Communications
 Typically used in situations where a lot of computation can be performed while a process is waiting for a send/receive to complete
 Must ensure the arguments to a send/receive are unmodified until completion
 NOT a fast alternative to traditional send/receive
 They won't hang a program
 They can be interleaved with useful work

70 Nonblocking 1
 Demonstrates nonblocking communication using a nonblocking send and receive, with MPI_Wait used to ensure the communication is complete.
 MPI functions used: MPI_Init, MPI_Comm_rank, MPI_Isend, MPI_Irecv, MPI_Wait, MPI_Finalize

71 Nonblocking 1 (C)

/*
 * Proc 0 uses MPI_Isend to send a message,
 * proc 1 uses MPI_Irecv to receive it;
 * MPI_Wait makes sure the message has been sent or received.
 */
#include <stdio.h>      /* for input/output */
#include <stdlib.h>     /* for malloc/free */
#include "mpi.h"        /* for MPI routines */
#define BUFSIZE 2048    /* the size of the message being passed */

int main(int argc, char** argv)
{
    int rank;           /* the rank of this process */
    int tag = 0;
    float *sendbuf, *recvbuf;
    MPI_Status status;
    MPI_Request request;

    sendbuf = (float *)malloc(sizeof(float)*BUFSIZE);
    recvbuf = (float *)malloc(sizeof(float)*BUFSIZE);

72 Nonblocking 1 (C)

    MPI_Init(&argc, &argv);                   /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* get my rank */
    if (rank == 0)
    {
        printf("Hello world! I am proc 0, sending to proc 1\n");
        /* send to proc 1 */
        MPI_Isend(sendbuf, BUFSIZE, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }
    else if (rank == 1)
    {
        /* receive from proc 0 */
        MPI_Irecv(recvbuf, BUFSIZE, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }
    printf("Proc %d finished!!\n", rank);
    MPI_Finalize();
    free(sendbuf);
    free(recvbuf);
    return 0;
}

73 Nonblocking 1 (Fortran)

      program main
      include 'mpif.h'
      integer ierr, rank
      real sendbuf(2048)
      real recvbuf(2048)
      integer count, tag, request
      integer status(MPI_STATUS_SIZE)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      count = 2048
      tag = 0
      if (rank == 0) then
         print *, 'Hello world! I am proc ', rank,
     1   ', sending to proc 1'
         call MPI_ISEND(sendbuf, count, MPI_REAL,
     1   1, tag, MPI_COMM_WORLD, request, ierr)
         call MPI_WAIT(request, status, ierr)
      else if (rank == 1) then
         call MPI_IRECV(recvbuf, count, MPI_REAL,
     1   0, tag, MPI_COMM_WORLD, request, ierr)
         call MPI_WAIT(request, status, ierr)
      endif
      print *, 'Proc ', rank, ' finished!!'
      call MPI_FINALIZE(ierr)
      end

74 Nonblocking 2
 Demonstrates nonblocking communication using a nonblocking send and receive, with MPI_Test used to test whether the communication is complete.
 MPI functions used: MPI_Init, MPI_Comm_rank, MPI_Isend, MPI_Irecv, MPI_Wait, MPI_Test, MPI_Finalize

75 Nonblocking 2 (C)

/*
 * Proc 0 uses MPI_Isend to send a message,
 * proc 1 uses MPI_Irecv to receive it.
 */
#include <stdio.h>      /* for input/output */
#include <stdlib.h>     /* for malloc/free */
#include <unistd.h>     /* for sleep */
#include "mpi.h"        /* for MPI routines */
#define BUFSIZE 2048    /* the size of the message being passed */

int main(int argc, char** argv)
{
    int rank;           /* the rank of this process */
    int i = 0;
    int tag = 0;
    int flag = 0;
    float *sendbuf, *recvbuf;
    MPI_Status status;
    MPI_Request request;

    sendbuf = (float *)malloc(sizeof(float)*BUFSIZE);
    recvbuf = (float *)malloc(sizeof(float)*BUFSIZE);
    MPI_Init(&argc, &argv);     /* initialize MPI */

76 Nonblocking 2 (C)

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* get my rank */
    if (rank == 0)
    {
        /* sleep for 3 sec */
        sleep(3);
        printf("Hello world! I am proc 0, sending to proc 1\n");
        /* send to proc 1 */
        MPI_Isend(sendbuf, BUFSIZE, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }
    else if (rank == 1)
    {
        /* receive from proc 0 */
        MPI_Irecv(recvbuf, BUFSIZE, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &request);
        do
        {
            printf("Wait %d\n", i++);
            MPI_Test(&request, &flag, &status);
        } while (flag == 0);
    }
    printf("Proc %d finished!!\n", rank);
    MPI_Finalize();
    free(sendbuf);
    free(recvbuf);
    return 0;
}

77 Nonblocking 2 (Fortran)

      program main
      include 'mpif.h'
      integer ierr, rank
      real sendbuf(2048)
      real recvbuf(2048)
      integer count, tag, request, i
      integer status(MPI_STATUS_SIZE)
      logical flag

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      count = 2048
      tag = 0
      i = 0
      flag = .FALSE.
      if (rank == 0) then
         call SLEEP(3)
         print *, 'Hello world! I am proc ', rank,
     1   ', sending to proc 1'

78 Nonblocking 2 (Fortran)

         call MPI_ISEND(sendbuf, count, MPI_REAL,
     1   1, tag, MPI_COMM_WORLD, request, ierr)
         call MPI_WAIT(request, status, ierr)
      else if (rank == 1) then
         call MPI_IRECV(recvbuf, count, MPI_REAL,
     1   0, tag, MPI_COMM_WORLD, request, ierr)
         DO WHILE (.NOT. flag)
            print *, 'Wait ', i
            call MPI_TEST(request, flag, status, ierr)
            i = i + 1
         END DO
      endif
      print *, 'Proc ', rank, ' finished!!'
      call MPI_FINALIZE(ierr)
      end

79 Case Study
 MPI_SEND and MPI_RECV
 Let's look at 3 reasonable ways to perform communication between 2 processes that exchange messages:
 - One always works
 - One always deadlocks; that is, both processes hang waiting for the other to communicate
 - One may or may not work, depending on the actual protocols used by the MPI implementation

80 Case Study – One Always Works
 Algorithm:
 - Determine what rank the process is
 - If rank == 0: send a message from send_buffer to the process with rank 1, then receive a message into recv_buffer from the process with rank 1
 - Else if rank == 1: receive a message into recv_buffer from the process with rank 0, then send a message from send_buffer to the process with rank 0
 [timeline: Processor 0 sends first, then receives; Processor 1 receives first, then sends]

81 Pseudo code – One Always Works
 Determine the rank of the process
 If rank == 0 then
   Send message to rank 1
   Receive message from rank 1
 Else if rank == 1 then
   Receive message from rank 0
   Send message to rank 0
 End
 C: casestudy01.c
   Compilation: mpicc casestudy01.c -o casestudy01
   Run: mpirun -np 2 casestudy01
 Fortran: casestudy01.f
   Compilation: mpif77 casestudy01.f -o casestudy01
   Run: mpirun -np 2 casestudy01
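casestudy01.c itself is not reproduced in this transcript; a minimal C sketch consistent with the pseudo code might look like this (the buffer size N is an arbitrary choice):

    #include <stdio.h>
    #include "mpi.h"
    #define N 1024

    int main(int argc, char **argv)
    {
        int rank;
        double sendbuf[N], recvbuf[N];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {        /* send first, receive next */
            MPI_Send(sendbuf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(recvbuf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) { /* receive first, send next */
            MPI_Recv(recvbuf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(sendbuf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }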

82 Case Study – One Always Deadlocks
 Algorithm:
 - Determine what rank the process is
 - If rank == 0: receive a message into recv_buffer from the process with rank 1, then send a message from send_buffer to the process with rank 1
 - Else if rank == 1: receive a message into recv_buffer from the process with rank 0, then send a message from send_buffer to the process with rank 0
 [timeline: both Processor 0 and Processor 1 receive first, then send]

83 Pseudo code – One Always Deadlocks
 Determine the rank of the process
 If rank == 0 then
   Receive message from rank 1
   Send message to rank 1
 Else if rank == 1 then
   Receive message from rank 0
   Send message to rank 0
 End
 C: casestudy02.c
   Compilation: mpicc casestudy02.c -o casestudy02
   Run: mpirun -np 2 casestudy02
 Fortran: casestudy02.f
   Compilation: mpif77 casestudy02.f -o casestudy02
   Run: mpirun -np 2 casestudy02
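Again the distributed casestudy02 file is not in the transcript; relative to the sketch above only the branch ordering changes, and the result always hangs:

    if (rank == 0) {            /* receives first: blocks forever */
        MPI_Recv(recvbuf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &status);
        MPI_Send(sendbuf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {     /* also receives first: deadlock */
        MPI_Recv(recvbuf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Send(sendbuf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }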

84 Case Study – the Worst Case, One May or May Not Work
 Algorithm:
 - Determine what rank the process is
 - If rank == 0: send a message from send_buffer to the process with rank 1, then receive a message into recv_buffer from the process with rank 1
 - Else if rank == 1: send a message from send_buffer to the process with rank 0, then receive a message into recv_buffer from the process with rank 0
 [timeline: both Processor 0 and Processor 1 send first, then receive]

85 Pseudo code – the Worst Case, One May or May Not Work
 Determine the rank of the process
 If rank == 0 then
   Send message to rank 1
   Receive message from rank 1
 Else if rank == 1 then
   Send message to rank 0
   Receive message from rank 0
 End
 C: casestudy03.c
   Compilation: mpicc casestudy03.c -o casestudy03
   Run: mpirun -np 2 casestudy03
 Fortran: casestudy03.f
   Compilation: mpif77 casestudy03.f -o casestudy03
   Run: mpirun -np 2 casestudy03
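A corresponding sketch of the implementation-dependent ordering (again standing in for the distributed casestudy03 file, not reproducing it): both ranks send first, which completes only if MPI buffers the outgoing message:

    if (rank == 0) {            /* sends first; completes only if buffered */
        MPI_Send(sendbuf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(recvbuf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &status);
    } else if (rank == 1) {     /* also sends first */
        MPI_Send(sendbuf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        MPI_Recv(recvbuf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
    }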

86 Reasons for Work and Deadlock
 The program was tested in LAM 6.5.9.
 In standard mode, it is up to MPI to decide whether outgoing messages will be buffered.
 MPI may buffer outgoing messages (here, those smaller than 2048 bytes). In such a case, the send call may complete before a matching receive is invoked.
 On the other hand (messages larger than 2048 bytes), buffer space may be unavailable, or MPI may choose not to buffer outgoing messages for performance reasons. In this case, the send call will not complete until a matching receive has been posted and the data has been moved to the receiver.
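One way to check whether a program depends on such buffering (a suggestion beyond the original slides, not part of the tested setup): replace MPI_Send with MPI_Ssend. Synchronous mode never buffers, so the third case then deadlocks regardless of message size:

    /* partner is 1 on rank 0 and 0 on rank 1 */
    MPI_Ssend(sendbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);  /* both ranks hang here */
    MPI_Recv(recvbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &status);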

87 The End

