1 Tuesday, October 10, 2006. "To err is human, and to blame it on a computer is even more so." - Robert Orben

2 MPI (Message Passing Interface)
- MPI is a specification for message passing libraries:
  - Standardized (replaced virtually all earlier message passing libraries)
  - Practical
  - Portable (vendor independent)
  - Efficient
- It is the industry standard for writing message passing programs.
- Both vendor and public-domain implementations are available.

3 MPI (Message Passing Interface)
- 1980s to early 1990s: a number of incompatible software tools existed for writing message passing programs for distributed memory systems, and the need for a standard arose.
- The MPI Forum: 175 individuals from 40 organizations, including parallel computer vendors, software developers, academics and application scientists.

4 MPI (Message Passing Interface)
- Originally, MPI targeted distributed memory systems.
- The popularity of shared memory systems (SMP / NUMA architectures) led to MPI implementations for those platforms as well.
- MPI is now used on just about any common parallel architecture, including massively parallel machines, SMP clusters, workstation clusters and heterogeneous networks.

5 MPI (Message Passing Interface)
- All parallelism is explicit: the programmer is responsible for correctly identifying parallelism and implementing it using MPI routines.

6 MPI (Message Passing Interface)
- Format of MPI calls:

  ret = MPI_Xxxx(parameter, ...)

  ret is MPI_SUCCESS if the call succeeded.
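A minimal sketch of this convention (note that with MPI's default error handler, errors abort the program, so explicit checks like the one below mainly matter once the error handler has been changed):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int ret, rank;

      MPI_Init(&argc, &argv);

      /* Every MPI call returns an error code; MPI_SUCCESS means it worked. */
      ret = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (ret != MPI_SUCCESS)
          fprintf(stderr, "MPI_Comm_rank failed with code %d\n", ret);

      MPI_Finalize();
      return 0;
  }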

7 MPI (Message Passing Interface)

8 Communicators
- Communicators define which collection of processes may communicate with each other.
- MPI_COMM_WORLD is the predefined communicator that contains all processes.

9 MPI (Message Passing Interface)
- Rank
  - A unique integer identifier for each process within a communicator (ranks begin at zero and are contiguous).
  - Often used conditionally to control program execution, as in the sketch below.
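A minimal sketch of rank-conditional control flow (the master/worker split shown is illustrative, not from the slides):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          printf("I am the master\n");   /* rank 0 often plays the coordinator role */
      } else {
          printf("I am worker %d\n", rank);
      }

      MPI_Finalize();
      return 0;
  }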

10 MPI: the Message Passing Interface
The minimal set of MPI routines:

  MPI_Init         Initializes MPI.
  MPI_Finalize     Terminates MPI.
  MPI_Comm_size    Determines the number of processes.
  MPI_Comm_rank    Determines the label (rank) of the calling process.
  MPI_Send         Sends a message.
  MPI_Recv         Receives a message.

MPI itself is a rich set of more than 100 routines.

11 MPI_Init
- Initializes the MPI execution environment.
- Must be called in every MPI program, before any other MPI function, and only once.
- May be used to pass the command line arguments to all processes.

12 MPI_Finalize
- Terminates the MPI execution environment.
- Must be the last MPI routine called in every MPI program.

13 Hello World MPI Program

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      printf("Hello, world! I am %d of %d\n", rank, size);
      MPI_Finalize();
      return 0;
  }

Processes are viewed as arranged in one dimension, with ranks 0 through size-1.
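With a typical MPI installation, this program would be compiled and launched along these lines (exact command names vary by implementation):

  mpicc hello.c -o hello
  mpirun -np 4 ./hello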

14 Point-to-Point Communication
- Message passing between two different MPI tasks.

15 - There are different types of send and receive routines.

16 - Communication can be synchronous or asynchronous.

17 The Building Blocks: Send and Receive Operations
The prototypes of these operations are as follows:

  send(void *sendbuf, int nelems, int dest)
  receive(void *recvbuf, int nelems, int source)

Consider the following code segments:

  P0:
  a = 100;
  send(&a, 1, 1);
  a = 0;

  P1:
  receive(&b, 1, 0);
  printf("%d\n", b);

The semantics of the send operation require that the value received by process P1 must be 100, as opposed to 0. This motivates the design of the send and receive protocols.
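In MPI terms, the generic send and receive above map onto MPI_Send and MPI_Recv. A minimal sketch of the semantics point (tag 0 and MPI_INT are illustrative choices, not from the slide):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, a, b;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          a = 100;
          MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          a = 0;   /* must not change what P1 receives */
      } else if (rank == 1) {
          MPI_Recv(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("%d\n", b);   /* prints 100, never 0 */
      }

      MPI_Finalize();
      return 0;
  }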

18 Non-Buffered Blocking Message Passing Operations
Handshake for a blocking non-buffered send/receive operation. It is easy to see that in cases where the sender and receiver do not reach their communication points at similar times, there can be considerable idling overhead (the synchronous communication overhead).

20 - Deadlocks can occur in blocking non-buffered operations, as the sketch below illustrates.
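A hedged sketch of how such a deadlock arises, using MPI_Ssend (MPI's synchronous send) to force non-buffered behavior; the data value and tag are illustrative:

  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, other, a = 1, b;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      other = 1 - rank;   /* assumes exactly two processes */

      /* Each MPI_Ssend blocks until the matching receive has started.
         Both ranks send first, so neither ever reaches MPI_Recv: deadlock. */
      MPI_Ssend(&a, 1, MPI_INT, other, 0, MPI_COMM_WORLD);
      MPI_Recv(&b, 1, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      MPI_Finalize();
      return 0;
  }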

21 - In a perfect world, every send operation would be timed perfectly with a matching receive operation.

22 Suppose ...
- A send operation occurs 5 seconds before the receive is ready. Where is the message while the receive is pending?
- Multiple sends arrive at the same receiving task, which can only accept one send at a time.

23 Buffered Blocking Message Passing Operations
- A simple solution to the idling and deadlocking problems outlined above is to rely on buffers at the sending and receiving ends.
- The sender simply copies the data into the designated buffer and returns after the copy operation has completed.
- The data must be buffered at the receiving end as well.
- Buffering trades off idling overhead for buffer copying overhead.
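In MPI, explicitly buffered sends are available via MPI_Bsend with a user-attached buffer. A minimal sketch (the buffer sizing and message value are illustrative):

  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char *argv[])
  {
      int rank, data = 42, bufsize;
      char *buf;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          /* Attach a user buffer; MPI_Bsend copies the data into it and
             returns without waiting for the receiver. */
          bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
          buf = malloc(bufsize);
          MPI_Buffer_attach(buf, bufsize);
          MPI_Bsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          MPI_Buffer_detach(&buf, &bufsize);   /* waits for buffered sends to drain */
          free(buf);
      } else if (rank == 1) {
          MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      }

      MPI_Finalize();
      return 0;
  }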

24
- The MPI implementation (not the MPI standard) decides what happens to data in these types of cases.
- Typically, a system buffer area is reserved to hold data in transit.

26 Blocking
- Most of the MPI point-to-point routines can be used in either blocking or non-blocking mode.
- Blocking:
  - A blocking send routine will only "return" after it is safe to modify the application buffer (your send data) for reuse.
  - Safe means that modifications will not affect the data intended for the receive task. Safe does not imply that the data was actually received; it may very well be sitting in a system buffer.

27 Blocking
- A blocking send can be synchronous, which means there is handshaking with the receive task to confirm a safe send.
- A blocking send can be asynchronous if a system buffer is used to hold the data for eventual delivery to the receiver.
- A blocking receive only "returns" after the data has arrived and is ready for use by the program.

28 Buffered Blocking Message Passing Operations
Bounded buffer sizes can have a significant impact on performance.

  P0:
  for (i = 0; i < 1000; i++) {
      produce_data(&a);
      send(&a, 1, 1);
  }

  P1:
  for (i = 0; i < 1000; i++) {
      receive(&a, 1, 0);
      consume_data(&a);
  }

What if the consumer were much slower than the producer? With bounded buffers, the producer's sends eventually block once the buffers fill, so the pair proceeds at the consumer's rate.

29 Buffered Blocking Message Passing Operations
Consider the following exchange:

  P0:
  receive(&a, 1, 1);
  send(&b, 1, 1);

  P1:
  receive(&a, 1, 0);
  send(&b, 1, 0);

30 Buffered Blocking Message Passing Operations
Deadlocks are still possible with buffering, since receive operations block: in the code above, both processes block in their receives and neither ever reaches its send.

31 MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

32 MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
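A minimal sketch putting the two routines together (the message value and tag 7 are illustrative; assumes at least two processes):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, value;
      MPI_Status status;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          value = 123;
          /* send one int to rank 1 with tag 7 */
          MPI_Send(&value, 1, MPI_INT, 1, 7, MPI_COMM_WORLD);
      } else if (rank == 1) {
          /* receive one int from rank 0 with tag 7 */
          MPI_Recv(&value, 1, MPI_INT, 0, 7, MPI_COMM_WORLD, &status);
          printf("rank 1 received %d from rank %d\n", value, status.MPI_SOURCE);
      }

      MPI_Finalize();
      return 0;
  }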

33 MPI Datatypes

  MPI Datatype          C Datatype
  MPI_CHAR              signed char
  MPI_SHORT             signed short int
  MPI_INT               signed int
  MPI_LONG              signed long int
  MPI_UNSIGNED_CHAR     unsigned char
  MPI_UNSIGNED_SHORT    unsigned short int
  MPI_UNSIGNED          unsigned int
  MPI_UNSIGNED_LONG     unsigned long int
  MPI_FLOAT             float
  MPI_DOUBLE            double
  MPI_LONG_DOUBLE       long double
  MPI_BYTE              (no C equivalent)
  MPI_PACKED            (no C equivalent)
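For example, a message of 64 doubles is described by its element count and MPI_DOUBLE, not by a byte size. A fragment, assuming the usual rank setup from the earlier examples:

  double vec[64];
  /* count is in elements of the given datatype, not bytes */
  MPI_Send(vec, 64, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);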

