1 MA471 Fall 2003, Lecture 5

2 More Point To Point Communications in MPI
Note: so far we have covered
– MPI_Init, MPI_Finalize
– MPI_Comm_size, MPI_Comm_rank
– MPI_Send, MPI_Recv
– MPI_Barrier
Only MPI_Send and MPI_Recv actually communicate messages. These are "point to point" communications, i.e. process-to-process communication.
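As a refresher, a minimal sketch using only the routines listed above might look like the following (the file name and print message are illustrative, not taken from the course code):

/* recap_skeleton.c -- illustrative sketch only */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv){
  int Nprocs, procID;

  MPI_Init(&argc, &argv);                  /* start MPI */
  MPI_Comm_size(MPI_COMM_WORLD, &Nprocs);  /* how many processes in total */
  MPI_Comm_rank(MPI_COMM_WORLD, &procID);  /* which one is this process */

  MPI_Barrier(MPI_COMM_WORLD);             /* everyone synchronizes here */
  fprintf(stdout, "Process %d of %d checking in\n", procID, Nprocs);

  MPI_Finalize();                          /* shut MPI down */
  return 0;
}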

3 MPI_Isend
Unlike MPI_Send, MPI_Isend does not wait for the output buffer to be free for further use before returning. This mode of operation is known as "non-blocking".
http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Isend.html

4 MPI_Isend details
MPI_Isend: Begins a nonblocking send
Synopsis:
int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
Input Parameters:
buf       initial address of send buffer (choice)
count     number of elements in send buffer (integer)
datatype  datatype of each send buffer element (handle)
dest      rank of destination (integer)
tag       message tag (integer)
comm      communicator (handle)
Output Parameter:
request   communication request (handle)
http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Isend.html
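A minimal usage sketch of this call (buffer, destination rank, and tag are illustrative; error checking omitted):

/* illustrative fragment: post a nonblocking send, then wait before reusing the buffer */
double data[100];               /* hypothetical send buffer */
int dest = 1, tag = 99;         /* hypothetical destination rank and message tag */
MPI_Request sendRequest;
MPI_Status  sendStatus;

MPI_Isend(data, 100, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &sendRequest);

/* ... do NOT modify data here; the send may still be reading it ... */

MPI_Wait(&sendRequest, &sendStatus);   /* after this, data may safely be reused */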

5 MPI_Isend analogy
Analogy time… Isend is like calling the mailperson to take a letter away and receiving a tracking number. You don't know if the letter has gone until you check (i.e. look it up online with the tracking number). Once you know the letter has gone you can use the letterbox again… (a strained analogy).

6 MPI_Irecv
Posts a non-blocking receive request. This routine returns without necessarily completing the message receive; we use MPI_Wait to find out whether the requested message has arrived.

7 MPI_Irecv details
MPI_Irecv: Begins a nonblocking receive
Synopsis:
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
Input Parameters:
buf       initial address of receive buffer (choice)
count     number of elements in receive buffer (integer)
datatype  datatype of each receive buffer element (handle)
source    rank of source (integer)
tag       message tag (integer)
comm      communicator (handle)
Output Parameter:
request   communication request (handle)
http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Irecv.html
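One common pattern is to post the receive before the matching send is expected and only read the buffer after waiting, sketched below with illustrative buffer, rank, and tag names:

/* illustrative fragment: pre-post a receive, overlap other work, then wait */
double incoming[100];              /* hypothetical receive buffer */
int source = 0, tag = 99;          /* hypothetical source rank and message tag */
MPI_Request recvRequest;
MPI_Status  recvStatus;

MPI_Irecv(incoming, 100, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &recvRequest);

/* ... other work; do not read incoming yet, it may not have arrived ... */

MPI_Wait(&recvRequest, &recvStatus);   /* after this, incoming holds the message */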

8 MPI_Irecv analogy
Analogy time… Irecv is like telling the mailbox to anticipate the delivery of a letter. You don't know whether the letter has arrived until you check your mailbox (i.e. check online with the tracking number). Once you know the letter is here you can open it and read it.

9 MPI_Wait
Waits for a requested MPI operation to complete.

10 MPI_Wait details
MPI_Wait: Waits for an MPI send or receive to complete
Synopsis:
int MPI_Wait(MPI_Request *request, MPI_Status *status)
Input Parameter:
request  request (handle)
Output Parameter:
status   status object (Status). May be MPI_STATUS_IGNORE if the status is not needed.
http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Wait.html
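The status object filled in by MPI_Wait can be queried afterwards; a brief sketch (the receive request is assumed to have been posted earlier, and the names are illustrative):

/* illustrative fragment: wait on a posted receive and inspect its status */
MPI_Status status;
int receivedCount;

MPI_Wait(&recvRequest, &status);                    /* block until the receive completes */
MPI_Get_count(&status, MPI_CHAR, &receivedCount);   /* how many elements actually arrived */
fprintf(stdout, "Got %d chars from rank %d (tag %d)\n",
        receivedCount, status.MPI_SOURCE, status.MPI_TAG);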

11 MPI_Test
MPI_Test: Determines whether an MPI send or receive has completed, without blocking.
Synopsis:
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
Input Parameter:
request  request (handle)
Output Parameters:
flag     true (nonzero) if the requested operation has completed (logical)
status   status object (Status). May be MPI_STATUS_IGNORE if the status is not needed.
http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Test.html
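A typical polling pattern keeps computing until the test succeeds; a sketch with illustrative names, assuming the request was posted earlier with MPI_Irecv:

/* illustrative fragment: poll a pending receive without blocking */
int arrived = 0;
MPI_Status status;

while(!arrived){
  MPI_Test(&recvRequest, &arrived, &status);   /* returns immediately, sets arrived */
  if(!arrived){
    /* do a small piece of useful work, then test again */
  }
}
/* at this point the message is sitting in the receive buffer */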

12 Example: Isend, Irecv, Wait Sequence

13
MPI_Request ISENDrequest;
MPI_Status  ISENDstatus;
MPI_Request IRECVrequest;
MPI_Status  IRECVstatus;

/* outgoing and incoming buffers */
char *bufout = strdup("Hello");
int bufoutlength = strlen(bufout);
int bufinlength  = bufoutlength; /* for convenience */
char *bufin = (char*) calloc(bufinlength, sizeof(char));

/* exchange with the "mirror" process; tags are chosen to match at both ends */
int TOprocess   = (Nprocs-1) - procID;
int TOtag       = 10000*procID + TOprocess;
int FROMprocess = (Nprocs-1) - procID;
int FROMtag     = 10000*FROMprocess + procID;

fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

/* post the nonblocking send and receive, then wait on both */
info = MPI_Isend(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag,
                 MPI_COMM_WORLD, &ISENDrequest);
info = MPI_Irecv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag,
                 MPI_COMM_WORLD, &IRECVrequest);

fprintf(stdout, "Process %d just about to wait for requests to finish\n", procID);

MPI_Wait(&IRECVrequest, &IRECVstatus);
MPI_Wait(&ISENDrequest, &ISENDstatus);

fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);

The wait on the Isend request is a courtesy: it makes sure the message has gone out before we go on to finalize.

14 Example: Game of Snap
Rules:
1) Each player receives 26 cards.
2) Each player plays a card onto their own stack.
3) If the two played cards have the same value (e.g. the 2 of hearts and the 2 of clubs), the first player to shout "snap" wins both piles of cards…
4) Return to 2) unless one player has all the cards.

15 Possible (non-optimal) Parallel Setup
One processor plays the role of dealer; two further processors each play the role of a player.

16 Symbolic Pseudo-Code
Dealer / Player 1 / Player 2 (diagram: dealer deals; each player shows a card; players say "snap" to the dealer)
Here's the difficulty: if the dealer is sitting in a blocked call (MPI_Wait) waiting to receive (MPI_Irecv) from a predetermined player, then the test of snap is unfair.
Solution: the dealer should call MPI_Test repeatedly to find out whose message arrives first (this works because MPI_Test is non-blocking); a sketch of this polling loop follows.
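A minimal sketch of the dealer's polling loop, assuming the two snap-message receives were already posted with MPI_Irecv into snapReq[0] and snapReq[1] (all names here are illustrative, not from the course code):

/* illustrative fragment: dealer polls both players' snap messages without blocking */
MPI_Request snapReq[2];   /* assumed already filled in by two earlier MPI_Irecv calls */
MPI_Status  snapStatus;
int flag = 0, winner = -1;

while(winner < 0){
  for(int p = 0; p < 2; ++p){                    /* check each player in turn */
    MPI_Test(&snapReq[p], &flag, &snapStatus);   /* non-blocking check */
    if(flag){
      winner = p;                                /* player p's snap arrived first */
      break;
    }
  }
}
/* winner now indexes the first player whose snap message was seen */

The pseudo-code on the next slide tests the two players in random order on each pass to avoid bias; that refinement is omitted here for brevity.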

17
If (procID == dealer)
  A) MPI_Isend the cards to each player
  B) post MPI_Wait to make sure the cards have gone
  C) post MPI_Test for the winning flag
Else
  A) MPI_Irecv the cards from the dealer
  B) MPI_Wait to make sure the cards arrived
End

If (procID != 0)   /* players */
  a) MPI_Isend top card
  b) MPI_Irecv bottom card
  c) MPI_Wait on Isend request
  d) MPI_Wait on Irecv request
  e) add received card to accumulating pile
  f) check to see if sent card matches received card
     i) if match, MPI_Isend snap message to dealer
  MPI_Wait for winner message from dealer
  if winner: collect all cards from accumulating pile
  if card total = 52 or 0: send winning flag to dealer
  endif
  return to a)
Else               /* dealer */
  a) MPI_Irecv for snap messages from the two player processes
  b) MPI_Test on the messages from the two players (in random order)
  c) when an MPI_Test returns true, send the winning procID to the two player processes
  d) MPI_Test for the winning flag
End

(Dealer / Player 1 / Player 2 diagram as on slide 16: deal, show card, say snap to dealer)
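A rough C sketch of one pass of the player loop above, loosely following the pseudo-code: cards are reduced to plain integers, the ranks, tags, and helper functions are all hypothetical, and blocking MPI_Send/MPI_Recv are used for the snap and verdict exchange to keep the sketch short where the pseudo-code uses nonblocking calls.

/* illustrative fragment: one round of the player loop, cards reduced to ints */
/* otherPlayer, dealer, cardTag, snapTag, winnerTag and the helpers are hypothetical */
int myCard = top_of_my_stack();        /* hypothetical helper: next card to play */
int theirCard, winnerID;
MPI_Request sendReq, recvReq;

MPI_Isend(&myCard,    1, MPI_INT, otherPlayer, cardTag, MPI_COMM_WORLD, &sendReq);
MPI_Irecv(&theirCard, 1, MPI_INT, otherPlayer, cardTag, MPI_COMM_WORLD, &recvReq);
MPI_Wait(&sendReq, MPI_STATUS_IGNORE);
MPI_Wait(&recvReq, MPI_STATUS_IGNORE);

add_to_pile(theirCard);                /* hypothetical helper: grow the accumulating pile */

if(myCard == theirCard){               /* a match: race to tell the dealer "snap" */
  int snap = 1;
  MPI_Send(&snap, 1, MPI_INT, dealer, snapTag, MPI_COMM_WORLD);
}

/* every round ends by hearing the dealer's verdict before playing again */
MPI_Recv(&winnerID, 1, MPI_INT, dealer, winnerTag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);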

18 Profiling Your Code Using Upshot
With these parallel codes it can be difficult to foresee every way the code can behave. In the following we will see upshot in action. upshot is part of the MPI (MPICH) release, for the most part.

19 Example 1: Profiling MPI_Send and MPI_Recv

20 Instructions For Using Upshot
Add -mpilog to the compile flags so that MPE logging is built into the executable.
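With MPICH's compiler wrapper this might look like the following; the source and program names are illustrative, and the exact flag handling and log-file suffix depend on your MPI installation:

# illustrative: compile with MPE logging via MPICH's wrapper, then run
mpicc -mpilog -O2 -o mycode mycode.c
mpirun -np 4 ./mycode     # the run should leave a log file (e.g. mycode.clog2) for upshot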

21 Clean and Recompile
ON BLACKBEAR (BB):
1) cp -r ~cs471aa/MA471Lec3F03 ~/
2) cd ~/MA471Lec3F03
3) make -f Makefile.mpeBB clean
4) make -f Makefile.mpeBB
5) qsub MPIcommuning
6) use 'qstat' to make sure the run has finished
7) clog2alog MPIcommuning
8) make sure that a file MPIcommuning.alog has been created
9) set up an X server on your current PC
10) upshot MPIcommuning.alog
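Makefile.mpeBB itself is not reproduced in these slides; a minimal sketch of what such a makefile might contain is given below. The target name "game" and all flags are assumptions, not the course file.

# illustrative sketch only -- NOT the actual Makefile.mpeBB from the course
# (recipe lines must begin with a tab character)
CC     = mpicc
CFLAGS = -O2 -mpilog      # -mpilog turns on MPE logging so upshot can read the run

game: game.c
	$(CC) $(CFLAGS) -o game game.c

clean:
	rm -f game *.clog2 *.alog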

22
/* initiate MPI */
int info = MPI_Init(&argc, &argv);

/* NOW we can do stuff */
int Nprocs, procID;

/* find the number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &Nprocs);

/* find the unique identity of this process */
MPI_Comm_rank(MPI_COMM_WORLD, &procID);

/* insist that all processes have to go through this routine before the next commands */
info = MPI_Barrier(MPI_COMM_WORLD);

/* test a send and recv pair of operations */
{
  MPI_Status recvSTATUS;

  char *bufout = strdup("Hello");
  int bufoutlength = strlen(bufout);
  int bufinlength  = bufoutlength; /* for convenience */
  char *bufin = (char*) calloc(bufinlength, sizeof(char));

  /* exchange with the "mirror" process; tags are chosen to match at both ends */
  int TOprocess   = (Nprocs-1) - procID;
  int TOtag       = 10000*procID + TOprocess;
  int FROMprocess = (Nprocs-1) - procID;
  int FROMtag     = 10000*FROMprocess + procID;

  fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

  info = MPI_Send(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag,
                  MPI_COMM_WORLD);
  info = MPI_Recv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag,
                  MPI_COMM_WORLD, &recvSTATUS);

  fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);
}

info = MPI_Finalize();

23 Results Viewed In Upshot
Click on "Setup".

24 The Main Upshot Viewer
This window should appear after pressing "Setup".

25 Time History
The horizontal axis is physical time, running left to right.

26 Time History
Each MPI call is color coded on each process. (The vertical axis labels the processes.)

27 Zoom in on Profile
(1) Process 1 sends a message to process 4.
(2) Process 4 receives the message from process 1.

28 Zoom in on Profile
(1) Process 2 sends a message to process 3.
(2) Process 3 receives the message from process 2.
Observations

29 Example 2: Profiling MPI_Isend and MPI_Irecv

30
MPI_Request ISENDrequest;
MPI_Status  ISENDstatus;
MPI_Request IRECVrequest;
MPI_Status  IRECVstatus;

/* outgoing and incoming buffers */
char *bufout = strdup("Hello");
int bufoutlength = strlen(bufout);
int bufinlength  = bufoutlength; /* for convenience */
char *bufin = (char*) calloc(bufinlength, sizeof(char));

/* exchange with the "mirror" process; tags are chosen to match at both ends */
int TOprocess   = (Nprocs-1) - procID;
int TOtag       = 10000*procID + TOprocess;
int FROMprocess = (Nprocs-1) - procID;
int FROMtag     = 10000*FROMprocess + procID;

fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

/* post the nonblocking send and receive, then wait on both */
info = MPI_Isend(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag,
                 MPI_COMM_WORLD, &ISENDrequest);
info = MPI_Irecv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag,
                 MPI_COMM_WORLD, &IRECVrequest);

fprintf(stdout, "Process %d just about to wait for requests to finish\n", procID);

MPI_Wait(&IRECVrequest, &IRECVstatus);
MPI_Wait(&ISENDrequest, &ISENDstatus);

fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);

31 Profile for the Isend, Irecv, Wait sequence
Notice: before calling Wait the process could have done a bunch of operations, i.e. it could have avoided all that wasted compute time while the message was in transit! Notice also that very little time is spent in Irecv itself.

32 With Work Between (Isend, Irecv) and Wait
The neat point here is that while the message was in transit the process could get on and do some computations… (a sketch of this overlap pattern follows).
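A minimal sketch of that overlap, reusing the buffer and rank names from the earlier example code and leaving the computation as a placeholder:

/* illustrative fragment: overlap computation with communication */
MPI_Request sendReq, recvReq;

MPI_Isend(bufout, bufoutlength, MPI_CHAR, TOprocess,   TOtag,   MPI_COMM_WORLD, &sendReq);
MPI_Irecv(bufin,  bufinlength,  MPI_CHAR, FROMprocess, FROMtag, MPI_COMM_WORLD, &recvReq);

/* ... do useful local computation here while the messages are in transit ... */

MPI_Wait(&recvReq, MPI_STATUS_IGNORE);   /* bufin is now ready to read */
MPI_Wait(&sendReq, MPI_STATUS_IGNORE);   /* bufout may now be reused */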

33 Close-up of the Isends and Irecvs in the profile

34 Lab Activity
We will continue with the parallel implementation of your card games. Use upshot to profile your code's parallel activity and include the results in your presentations and reports. Anyone ready to report yet?

35 Next Lecture
– Global MPI communication routines
– Building a simple finite element solver for Poisson's equation
– Making the Poisson solver parallel
…

