
1 Parallel Computing
A task is broken down into subtasks, performed by separate workers or processes.
Processes interact by exchanging information.
What do we basically need?
– The ability to start the tasks
– A way for them to communicate

2 Communication
Cooperative
– All parties agree to transfer data
– Message passing is cooperative: data must be explicitly sent and received, and any change in the receiver’s memory is made with the receiver’s participation
One-sided
– One worker performs the transfer of data
– Data can be accessed without waiting for another process

3 What is MPI?
A message passing library specification
– A message-passing model
– Not a compiler specification (i.e. not a language)
– Not a specific product
Designed for parallel computers, clusters, and heterogeneous networks

4 The MPI Process
Development began in early 1992
Open process / broad participation
– IBM, Intel, TMC, Meiko, Cray, Convex, nCUBE
– PVM, p4, Express, Linda, …
– Laboratories, universities, government
Final version of the draft in May 1994
Public and vendor implementations are now widely available

5 Point-to-Point Communication
A message is sent from a sender to a receiver.
There are several variations on how the sending of a message can interact with the program.

6 Synchronous
A synchronous communication does not complete until the message has been received.
– A FAX or registered mail

7 Asynchronous
An asynchronous communication completes as soon as the message is on the way.
– A post card or email

8 Blocking and Non-blocking
Blocking operations only return when the operation has been completed
– Normal FAX machines
Non-blocking operations return right away and allow the program to do other work
– Receiving a FAX

9 Collective Communications
Point-to-point communications involve pairs of processes.
Many message passing systems provide operations which allow larger numbers of processes to participate.

10 Types of Collective Transfers
Barrier
– Synchronizes processes
– No data is exchanged, but the barrier blocks until all processes have called the barrier routine
Broadcast (sometimes multicast)
– A broadcast is a one-to-many communication
– One processor sends one message to several destinations
Reduction
– Combines data from many processes into a single result (e.g. a sum or maximum); often useful in a many-to-one communication
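A minimal sketch in C of the three kinds of transfer just listed, using MPI_Barrier, MPI_Bcast, and MPI_Reduce; the values sent, the root rank 0, and the MPI_SUM reduction are illustrative choices, not something specified on the slide.

#include <mpi.h>
#include <stdio.h>

int main( int argc, char **argv )
{
    int rank, value, sum;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* Barrier: no data moves; every process blocks here until all have arrived */
    MPI_Barrier( MPI_COMM_WORLD );

    /* Broadcast: rank 0 sends one integer to every process in the communicator */
    value = (rank == 0) ? 42 : 0;
    MPI_Bcast( &value, 1, MPI_INT, 0, MPI_COMM_WORLD );

    /* Reduction: combine one integer from every process into a single sum on rank 0 */
    MPI_Reduce( &rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );
    if ( rank == 0 )
        printf( "broadcast value = %d, sum of ranks = %d\n", value, sum );

    MPI_Finalize();
    return 0;
}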

11 What’s in a Message?
An MPI message is an array of elements of a particular MPI datatype.
All MPI messages are typed
– The type of the contents must be specified in both the send and the receive

12 Basic C Datatypes in MPI
MPI datatype          C datatype
MPI_CHAR              signed char
MPI_SHORT             signed short int
MPI_INT               signed int
MPI_LONG              signed long int
MPI_UNSIGNED_CHAR     unsigned char
MPI_UNSIGNED_SHORT    unsigned short int
MPI_UNSIGNED          unsigned int
MPI_UNSIGNED_LONG     unsigned long int
MPI_FLOAT             float
MPI_DOUBLE            double
MPI_LONG_DOUBLE       long double
MPI_BYTE              (no C equivalent; raw bytes)
MPI_PACKED            (no C equivalent; packed data)

13 MPI Handles
MPI maintains internal data structures which are referenced by the user through handles.
Handles can be returned by and passed to MPI procedures.
Handles can be copied by the usual assignment operation.

14 MPI Errors
MPI routines return an int that can contain an error code.
The default action on the detection of an error is to cause the parallel operation to abort
– The default can be changed to return an error code instead
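A minimal sketch of changing the default abort-on-error behaviour so that routines return error codes instead. It uses MPI_Comm_set_errhandler with MPI_ERRORS_RETURN (the MPI-2 spelling; MPI-1 named the routine MPI_Errhandler_set); the deliberately invalid destination rank is only there to trigger an error.

#include <mpi.h>
#include <stdio.h>

int main( int argc, char **argv )
{
    MPI_Init( &argc, &argv );

    /* Replace the default handler (MPI_ERRORS_ARE_FATAL) with MPI_ERRORS_RETURN */
    MPI_Comm_set_errhandler( MPI_COMM_WORLD, MPI_ERRORS_RETURN );

    /* Invalid destination rank, so the call fails and returns a code instead of aborting */
    int err = MPI_Send( NULL, 0, MPI_INT, -99, 0, MPI_COMM_WORLD );
    if ( err != MPI_SUCCESS ) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string( err, msg, &len );
        printf( "MPI_Send failed: %s\n", msg );
    }

    MPI_Finalize();
    return 0;
}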

15 Initializing MPI
The first MPI routine called in any MPI program must be the initialization routine MPI_INIT.
MPI_INIT is called once by every process, before any other MPI routines.

int MPI_Init( int *argc, char ***argv );

16 Skeleton MPI Program

#include <mpi.h>

int main( int argc, char **argv )
{
    MPI_Init( &argc, &argv );

    /* main part of the program */

    MPI_Finalize();
    return 0;
}
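In practice (not shown on the slide), such a program is typically compiled with an MPI wrapper compiler, e.g. mpicc skeleton.c -o skeleton, and launched with mpirun or mpiexec, e.g. mpirun -np 4 ./skeleton; the file name and process count here are arbitrary, and the exact commands depend on the MPI implementation.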

17 Communicators
A communicator handle defines which processes a particular command will apply to.
All MPI communication calls take a communicator handle as a parameter, which is effectively the context in which the communication will take place.
MPI_INIT defines a communicator called MPI_COMM_WORLD for each process that calls it.

18 Communicators
Every communicator contains a group, which is a list of processes.
The processes are ordered and numbered consecutively from 0; the number of each process is known as its rank.
– The rank identifies each process within the communicator
The group of MPI_COMM_WORLD is the set of all MPI processes.
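A minimal sketch of how a process typically discovers its own rank and the size of the group of MPI_COMM_WORLD, using MPI_Comm_rank and MPI_Comm_size; the printed message is illustrative.

#include <mpi.h>
#include <stdio.h>

int main( int argc, char **argv )
{
    int rank, size;

    MPI_Init( &argc, &argv );

    /* rank: this process's number within MPI_COMM_WORLD (0 .. size-1) */
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    /* size: how many processes belong to the communicator's group */
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    printf( "Process %d of %d\n", rank, size );

    MPI_Finalize();
    return 0;
}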

19 Point-to-point Communication
Always involves exactly two processes.
The destination is identified by its rank within the communicator.
There are four communication modes provided by MPI (these modes refer to sending, not receiving)
– Buffered
– Synchronous
– Standard
– Ready

20 Standard Send
When using standard-mode send:
– It is up to MPI to decide whether outgoing messages will be buffered.
– Completes once the message has been sent, which may or may not imply that the message has arrived at its destination.
– Can be started whether or not a matching receive has been posted. It may complete before a matching receive is posted.
– Has non-local completion semantics, since successful completion of the send operation may depend on the occurrence of a matching receive.

21 Standard Send
MPI_Send( buf, count, datatype, dest, tag, comm )
where
– buf is the address of the data to be sent
– count is the number of elements of the MPI datatype which buf contains
– datatype is the MPI datatype
– dest is the destination process for the message, specified by the rank of the destination within the group associated with the communicator comm
– tag is a marker used by the sender to distinguish between different types of messages
– comm is the communicator shared by the sender and the receiver
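A minimal sketch pairing MPI_Send with the matching MPI_Recv (run with at least two processes); the buffer length, tag value 0, and ranks 0 and 1 are arbitrary choices, not part of the slide.

#include <mpi.h>
#include <stdio.h>

int main( int argc, char **argv )
{
    int rank;
    double data[4] = { 1.0, 2.0, 3.0, 4.0 };
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == 0 ) {
        /* Send 4 doubles to rank 1 with tag 0 */
        MPI_Send( data, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD );
    } else if ( rank == 1 ) {
        /* Receive 4 doubles from rank 0 with tag 0 */
        MPI_Recv( data, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status );
        printf( "rank 1 received %.1f %.1f %.1f %.1f\n",
                data[0], data[1], data[2], data[3] );
    }

    MPI_Finalize();
    return 0;
}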

22 Synchronous Send
MPI_Ssend( buf, count, datatype, dest, tag, comm )
– Can be started whether or not a matching receive was posted.
– Will complete successfully only if a matching receive is posted, and the receive operation has started to receive the message sent by the synchronous send.
– Provides synchronous communication semantics: a communication does not complete at either end before both processes rendezvous at the communication.
– Has non-local completion semantics.

23 Buffered Send
A buffered-mode send:
– Can be started whether or not a matching receive has been posted. It may complete before a matching receive is posted.
– Has local completion semantics: its completion does not depend on the occurrence of a matching receive.
– In order to complete the operation, it may be necessary to buffer the outgoing message locally. For that purpose, buffer space is provided by the application.
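A minimal sketch of application-provided buffer space for buffered-mode sends, using MPI_Buffer_attach, MPI_Bsend, and MPI_Buffer_detach; the one-int message, tag 0, and buffer size calculation are illustrative assumptions.

#include <mpi.h>
#include <stdlib.h>

int main( int argc, char **argv )
{
    int rank, msg = 7;
    int bufsize = MPI_BSEND_OVERHEAD + sizeof(int);  /* room for one int plus MPI overhead */
    void *buffer, *detached;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == 0 ) {
        /* The application supplies the buffer used by buffered-mode sends */
        buffer = malloc( bufsize );
        MPI_Buffer_attach( buffer, bufsize );

        MPI_Bsend( &msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );

        /* Detach blocks until buffered messages have been transmitted */
        MPI_Buffer_detach( &detached, &bufsize );
        free( detached );
    } else if ( rank == 1 ) {
        MPI_Recv( &msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
    }

    MPI_Finalize();
    return 0;
}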

24 Ready Mode Send
A ready-mode send:
– Completes immediately.
– May be started only if the matching receive has already been posted.
– Has the same semantics as a standard-mode send.
– Saves on overhead by avoiding handshaking and buffering.
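A minimal sketch of the ordering a ready-mode send requires: the receiver posts a non-blocking receive first, a barrier guarantees that the receive is in place, and only then does the sender call MPI_Rsend. The ranks, tag, and message value are arbitrary choices, not part of the slide.

#include <mpi.h>
#include <stdio.h>

int main( int argc, char **argv )
{
    int rank, value = 0;
    MPI_Request request;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == 1 ) {
        /* Post the receive before the sender is allowed to start its ready send */
        MPI_Irecv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request );
    }

    /* The barrier guarantees the receive above has been posted */
    MPI_Barrier( MPI_COMM_WORLD );

    if ( rank == 0 ) {
        value = 99;
        MPI_Rsend( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
    } else if ( rank == 1 ) {
        MPI_Wait( &request, MPI_STATUS_IGNORE );
        printf( "rank 1 received %d\n", value );
    }

    MPI_Finalize();
    return 0;
}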

