1 Overview on Send and Receive Routines in MPI
Kamyar Miremadi, November 2004

2 Function MPI_Send
int MPI_Send(void *message, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

3 Function MPI_Recv
int MPI_Recv(void *message, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

4 Coding Send/Receive
…
if (ID == j) {
   …
   Receive from i
   …
}
…
if (ID == i) {
   …
   Send to j
   …
}
…
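A minimal, self-contained sketch of this pattern, assuming ranks i = 0 and j = 1 and a single integer payload (the variable names value and status are illustrative, not from the slides):

/* Sketch: rank 0 sends one integer to rank 1 using the prototypes above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);           /* dest = 1, tag = 0 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);  /* source = 0, tag = 0 */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}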

5 Send/Receive Not Collective
A point-to-point Send/Receive involves only the two processes named in the call; unlike a collective operation, the other processes in the communicator do not participate.

6 Inside MPI_Send and MPI_Recv
[Diagram: the message travels from the sending process's program memory into a system buffer via MPI_Send, across the network to the receiving process's system buffer, and into the receiving process's program memory via MPI_Recv.]

7 Inside MPI_Send and MPI_Recv
Return from MPI_Send
- Function blocks until the message buffer is free
- The message buffer is free when the message has been copied to a system buffer, or the message has been transmitted
Return from MPI_Recv
- Function blocks until the message is in the buffer
- If the message never arrives, the function never returns

8 Deadlock
Deadlock: a process waits for a condition that will never become true.
It is easy to write send/receive code that deadlocks:
- Two processes: both receive before sending
- Send tag doesn't match the receive tag
- A process sends the message to the wrong destination process

9 Deadlock Example
This code will always deadlock, no matter the buffering mode: both processes block in MPI_Recv, so neither ever reaches its MPI_Send.
if (rank == 0) {
   MPI_Recv(...);
   MPI_Send(...);
} else if (rank == 1) {
   MPI_Recv(...);
   MPI_Send(...);
}
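One common fix, sketched below for two processes exchanging one integer each: reverse the call order on one rank so that one process sends while the other receives (MPI_Sendrecv is another standard remedy). The fragment assumes an initialized MPI program in which rank has already been obtained; variable names are illustrative.

/* Sketch: deadlock-free exchange between ranks 0 and 1. */
int sendval = rank, recvval;
int partner = (rank == 0) ? 1 : 0;
MPI_Status status;

if (rank == 0) {
    /* rank 0 sends first, then receives */
    MPI_Send(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
    MPI_Recv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);
} else if (rank == 1) {
    /* rank 1 receives first, then sends */
    MPI_Recv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);
    MPI_Send(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
}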

10 Send and Receive Synchronization
Fully Synchronized (Rendezvous)
- Send and Receive complete simultaneously; whichever side reaches the Send/Receive first waits
- Provides a synchronization point (up to network delays)
Buffered
- Receive must wait until the message is received
- Send completes when the message is moved to the buffer, clearing the message memory for reuse
Asynchronous
- Sending process may proceed immediately: it does not need to wait until the message is copied to a buffer, but must check for completion before reusing the message memory
- Receiving process may proceed immediately: it will not have the message to use until it is received, and must check for completion before using the message

11 Communication modes
MPI defines four communication modes:
- synchronous mode ("safest")
- ready mode (lowest system overhead)
- buffered mode (decouples sender from receiver)
- standard mode (compromise)
The communication mode is selected with the send routine.
Calls are also blocking or non-blocking:
- Blocking stops the program until the message buffer is safe to use
- Non-blocking separates communication from computation

12 Communication modes
Buffered mode: the send operation can be started whether or not a matching receive has been posted. It may complete before a matching receive is posted.
Synchronous mode: the send can be started whether or not a matching receive has been posted. However, the send will complete successfully only if a matching receive is posted and the receive operation has started to receive the message sent by the synchronous send.
Ready mode: the send may be started only if the matching receive has already been posted.
Standard mode: it is up to MPI to decide whether outgoing messages will be buffered. A send in standard mode can be started whether or not a matching receive has been posted.
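The mode is selected by the send routine used. Below is a sender-side sketch of the four blocking variants; it is a fragment, not a complete program, and assumes <mpi.h> and <stdlib.h> are included, MPI is initialized, and rank 1 posts matching receives. Message size and tag are illustrative. Buffered mode additionally requires the user to attach a buffer with MPI_Buffer_attach.

/* Sketch: the four blocking send modes applied to the same message. */
int data[100];
char *buffer;
int bufsize;

MPI_Send (data, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* standard: MPI decides whether to buffer              */
MPI_Ssend(data, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* synchronous: completes only after the receive starts */
MPI_Rsend(data, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* ready: matching receive must already be posted       */

/* buffered: attach a user-supplied buffer before MPI_Bsend */
bufsize = 100 * sizeof(int) + MPI_BSEND_OVERHEAD;
buffer  = malloc(bufsize);
MPI_Buffer_attach(buffer, bufsize);
MPI_Bsend(data, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);
MPI_Buffer_detach(&buffer, &bufsize);
free(buffer);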

13 Communication modes (cont.)
Synchronous
- Advantages: safest, and therefore most portable; SEND/RECV order not critical; amount of buffer space irrelevant
- Disadvantages: can incur substantial synchronization overhead
Ready
- Advantages: lowest total overhead; SEND/RECV handshake not required
- Disadvantages: RECV must precede SEND
Buffered
- Advantages: decouples SEND from RECV; no sync overhead on SEND; order of SEND/RECV irrelevant; programmer can control the size of buffer space
- Disadvantages: additional system overhead incurred by the copy to the buffer
Standard
- Advantages: good for many cases
- Disadvantages: your program may not be suitable

14 Blocking and Non-Blocking
Send and receive can be blocking or non-blocking.
A blocking send can be used with a non-blocking receive, and vice versa.
Non-blocking sends can use any mode: synchronous, buffered, standard, or ready.

15 Communication modes (cont.)
Communication Mode   Blocking Routines   Non-Blocking Routines
Synchronous          MPI_SSEND           MPI_ISSEND
Ready                MPI_RSEND           MPI_IRSEND
Buffered             MPI_BSEND           MPI_IBSEND
Standard             MPI_SEND            MPI_ISEND
                     MPI_RECV            MPI_IRECV

16 Completion Tests
Waiting vs. Testing:
int MPI_Wait(MPI_Request *request, MPI_Status *status)
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
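Each non-blocking call returns an MPI_Request that is later completed with one of these routines. A minimal send-side sketch with a standard-mode MPI_Isend (a fragment assuming an initialized MPI program; the buffer, destination, and tag are illustrative):

/* Sketch: post a non-blocking send, compute, then complete it. */
MPI_Request request;
MPI_Status  status;
int data[100];

MPI_Isend(data, 100, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
/* ... computation that does not modify data ... */
MPI_Wait(&request, &status);   /* after this returns, data may safely be reused */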

17 Comparisons & General Use
Blocking:
call MPI_RECV(x, N, MPI_Datatype, …, status, …)

Non-Blocking (wait):
call MPI_IRECV(x, N, MPI_Datatype, …, request, …)
… do work that does not involve array x
call MPI_WAIT(request, status)
… do work that does involve array x

Non-Blocking (test):
call MPI_IRECV(x, N, MPI_Datatype, …, request, …)
call MPI_TEST(request, flag, status, …)
do while (flag .eq. FALSE)
   … work that does not involve the array x …
   call MPI_TEST(request, flag, status, …)
end do
… do work that does involve the array x …
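The same receive-side patterns as a C sketch: do_other_work() and use_x() are placeholders for the application work, and x, N, source, and tag are assumed to be defined in the surrounding program.

/* Pattern 1: post the receive, overlap computation, then wait. */
MPI_Request request;
MPI_Status  status;
int flag = 0;

MPI_Irecv(x, N, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &request);
do_other_work();                        /* work that does not involve x */
MPI_Wait(&request, &status);
use_x();                                /* work that does involve x */

/* Pattern 2: poll with MPI_Test until the message has arrived. */
MPI_Irecv(x, N, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &request);
MPI_Test(&request, &flag, &status);
while (!flag) {
    do_other_work();                    /* work that does not involve x */
    MPI_Test(&request, &flag, &status);
}
use_x();                                /* work that does involve x */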

18 Questions?