1 Tuesday, October 10, 2006 To err is human, and to blame it on a computer is even more so. -Robert Orben

2 MPI (Message Passing Interface)
- MPI is a specification for message passing libraries:
  - Standardized (it has replaced virtually all earlier message passing libraries)
  - Practical
  - Portable (vendor independent)
  - Efficient
- Industry standard for writing message passing programs.
- Implementations are available in both the vendor and public domains.

3 MPI (Message Passing Interface)
- 1980s - early 1990s: a number of incompatible software tools existed for writing message passing programs for distributed memory systems, and the need for a standard arose.
- The MPI Forum: 175 individuals from 40 organizations
  - Parallel computer vendors, software programmers, academia and application scientists.

4 MPI (Message Passing Interface)
- Originally, MPI was targeted at distributed memory systems.
- The popularity of shared memory systems (SMP / NUMA architectures) led to MPI implementations for these platforms as well.
- MPI is now used on just about any common parallel architecture, including massively parallel machines, SMP clusters, workstation clusters and heterogeneous networks.

5 MPI (Message Passing Interface)
- All parallelism is explicit
  - The programmer is responsible for correctly identifying parallelism and expressing it using MPI routines.

6 MPI (Message Passing Interface)
- Format of MPI calls:

    ret = MPI_Xxxx(parameter, ...);

- ret is MPI_SUCCESS if the call was successful.
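A minimal sketch of this calling convention (assuming the communicator's error handler has been set to MPI_ERRORS_RETURN; by default MPI aborts on error rather than returning a code):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, ret;
        MPI_Init(&argc, &argv);
        /* Let MPI calls return error codes instead of aborting. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
        /* Every MPI routine returns an int; MPI_SUCCESS means success. */
        ret = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (ret != MPI_SUCCESS)
            fprintf(stderr, "MPI_Comm_rank failed (code %d)\n", ret);
        MPI_Finalize();
        return 0;
    }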

7 MPI (Message Passing Interface)

8
- Communicators define which collection of processes may communicate with each other.
- MPI_COMM_WORLD is the predefined communicator that contains all of the MPI processes.

9 MPI (Message Passing Interface)
- Rank
  - A unique integer identifier for each process (ranks begin at zero and are contiguous).
  - Often used conditionally to control program execution.
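Since ranks are often used conditionally, a common pattern (a minimal sketch, not from the slides) is to let rank 0 act as the master while the other ranks do the work:

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* master: distribute work and collect results */
    } else {
        /* workers: compute on their share of the data */
    }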

10 MPI: the Message Passing Interface
The minimal set of MPI routines:

    MPI_Init         Initializes MPI.
    MPI_Finalize     Terminates MPI.
    MPI_Comm_size    Determines the number of processes.
    MPI_Comm_rank    Determines the label (rank) of the calling process.
    MPI_Send         Sends a message.
    MPI_Recv         Receives a message.

MPI itself provides a rich set of 100+ routines.

11 MPI_Init
- Initializes the MPI execution environment.
- Must be called in every MPI program, before any other MPI function, and only once per program.
- May be used to pass the command line arguments to all processes.

12 MPI_Finalize
- Terminates the MPI execution environment.
- Must be the last MPI routine called in every MPI program.
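For reference, the C prototypes of these two routines are:

    int MPI_Init(int *argc, char ***argv);
    int MPI_Finalize(void);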

13 Hello World MPI Program

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello, world! I am %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Processes are viewed as arranged in one dimension.
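To compile and run such a program, MPI implementations provide a compiler wrapper and a launcher; with MPICH or Open MPI this typically looks like the following (file and executable names, and the process count, are illustrative):

    mpicc hello.c -o hello
    mpirun -np 4 ./hello

Each of the 4 processes prints its own rank.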

14 Point-to-Point Communication
- Message passing between two different MPI tasks.

15
- There are different types of send and receive routines (for example, blocking and non-blocking variants, and several send modes).

16
- Communication can be synchronous or asynchronous.

17 The Building Blocks: Send and Receive Operations
- The prototypes of these operations are as follows:

    send(void *sendbuf, int nelems, int dest)
    receive(void *recvbuf, int nelems, int source)

- Consider the following code segments:

    P0:                     P1:
    a = 100;                receive(&b, 1, 0);
    send(&a, 1, 1);         printf("%d\n", b);
    a = 0;

- The semantics of the send operation require that the value received by process P1 must be 100, as opposed to 0.
- This motivates the design of the send and receive protocols.

18 Non-Buffered Blocking Message Passing Operations
Handshake for a blocking non-buffered send/receive operation. In cases where the sender and receiver do not reach their communication points at similar times, there can be considerable idling overhead: the synchronous communication overhead.

19 Non-Buffered Blocking Message Passing Operations
(Same handshake figure, annotated to highlight the idling overhead.)

20 Deadlocks in Blocking Non-Buffered Operations
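How such a deadlock arises, in the pseudocode notation used earlier (a sketch; the buffers and message sizes are illustrative): if both processes issue their blocking non-buffered send first, each send waits for the matching receive on the other side, which is never posted.

    P0:                     P1:
    send(&a, 1, 1);         send(&a, 1, 0);
    receive(&b, 1, 1);      receive(&b, 1, 0);

Both processes block in send and never reach their receive.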

21 In a perfect world, every send operation would be timed perfectly with a matching receive operation.

22 Suppose…
- A send operation occurs 5 seconds before the receive is ready - where is the message while the receive is pending?
- Multiple sends arrive at the same receiving task, which can only accept one send at a time.

23 Buffered Blocking Message Passing Operations
- A simple solution to the idling and deadlocking problems outlined above is to rely on buffers at the sending and receiving ends.
- The sender simply copies the data into the designated buffer and returns after the copy operation has been completed.
- The data must be buffered at the receiving end as well.
- Buffering trades off idling overhead for buffer copying overhead.

24
- The MPI implementation (not the MPI standard) decides what happens to the data in these cases.
- Typically, a system buffer area is reserved to hold data in transit.
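MPI also lets the program supply such a buffer explicitly through buffered-mode sends. A sketch of the sending side (a matching receive on rank 1 and the usual includes are assumed; the buffer sizing follows the standard MPI_Pack_size plus MPI_BSEND_OVERHEAD recipe):

    int bufsize;
    char *buf;
    int a = 42;

    /* Size and attach a user buffer for buffered sends. */
    MPI_Pack_size(1, MPI_INT, MPI_COMM_WORLD, &bufsize);
    bufsize += MPI_BSEND_OVERHEAD;
    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);

    /* MPI_Bsend returns as soon as the data is copied into the buffer. */
    MPI_Bsend(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

    /* Detach blocks until buffered messages have been transmitted. */
    MPI_Buffer_detach(&buf, &bufsize);
    free(buf);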

25

26 Blocking
- Most of the MPI point-to-point routines can be used in either blocking or non-blocking mode.
- Blocking:
  - A blocking send routine will only "return" after it is safe to modify the application buffer (your send data) for reuse.
  - Safe means that modifications will not affect the data intended for the receive task. Safe does not imply that the data was actually received - it may very well be sitting in a system buffer.

27 Blocking
- A blocking send can be synchronous, which means there is handshaking with the receive task to confirm a safe send.
- A blocking send can be asynchronous if a system buffer is used to hold the data for eventual delivery to the receive task.
- A blocking receive only "returns" after the data has arrived and is ready for use by the program.
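MPI exposes these modes directly. A minimal sketch contrasting a synchronous blocking send with a non-blocking send (destination, tag and data are illustrative; a matching receive on rank 1 is assumed):

    int a = 1;
    MPI_Request req;
    MPI_Status status;

    /* Synchronous send: completes only after the matching receive
       has started, so the handshake is guaranteed to have occurred. */
    MPI_Ssend(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

    /* Non-blocking send: returns immediately; the buffer must not be
       modified until MPI_Wait reports completion. */
    MPI_Isend(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
    /* ... useful computation can overlap the communication here ... */
    MPI_Wait(&req, &status);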

28 Buffered Blocking Message Passing Operations
Bounded buffer sizes can have a significant impact on performance.

    P0:                              P1:
    for (i = 0; i < 1000; i++) {     for (i = 0; i < 1000; i++) {
        produce_data(&a);                receive(&a, 1, 0);
        send(&a, 1, 1);                  consume_data(&a);
    }                                }

What if the consumer is much slower than the producer?

29 Buffered Blocking Message Passing Operations

    P0:                     P1:
    receive(&a, 1, 1);      receive(&a, 1, 0);
    send(&b, 1, 1);         send(&b, 1, 0);

30 Buffered Blocking Message Passing Operations
Deadlocks are still possible with buffering, since receive operations block.

    P0:                     P1:
    receive(&a, 1, 1);      receive(&a, 1, 0);
    send(&b, 1, 1);         send(&b, 1, 0);
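In real MPI code, one robust way to avoid this cyclic wait is MPI_Sendrecv, which combines the send and the receive into a single call and lets the implementation order them safely (a sketch assuming exactly two processes exchanging one int each):

    int other = 1 - rank;   /* partner rank, assuming ranks 0 and 1 */
    int b = rank;           /* value to send */
    int a;                  /* value to receive */
    MPI_Status status;

    /* Combined send and receive: deadlock-free regardless of ordering. */
    MPI_Sendrecv(&b, 1, MPI_INT, other, 0,
                 &a, 1, MPI_INT, other, 0,
                 MPI_COMM_WORLD, &status);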

31 MPI_Send

    MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)

32 MPI_Recv

    MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status)
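Putting the two together, a minimal sketch of a two-process exchange (message contents and tag are illustrative):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 100;
            /* Send one int to rank 1 with tag 0. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Receive one int from rank 0 with tag 0. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }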

33 MPI Datatypes

    MPI Datatype         C Datatype
    MPI_CHAR             signed char
    MPI_SHORT            signed short int
    MPI_INT              signed int
    MPI_LONG             signed long int
    MPI_UNSIGNED_CHAR    unsigned char
    MPI_UNSIGNED_SHORT   unsigned short int
    MPI_UNSIGNED         unsigned int
    MPI_UNSIGNED_LONG    unsigned long int
    MPI_FLOAT            float
    MPI_DOUBLE           double
    MPI_LONG_DOUBLE      long double
    MPI_BYTE             (no C equivalent)
    MPI_PACKED           (no C equivalent)