Specialized Sending and Receiving
David Monismith, CS599
Based upon notes from Chapter 3 of the MPI 3.0 Standard: www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf

MPI Message Passing
Recall that messages are passed in MPI using MPI_Send and MPI_Recv.
MPI_Send - sends a message of a given size and type to a process with a specific rank.
MPI_Recv - receives a message of at most a given maximum size and a given type from a process with a specific rank.
MPI_COMM_WORLD - the predefined communicator, the "world" containing all of the processes in the job. This is a constant.

Sending and Receiving Messages
MPI_Send and MPI_Recv have the following parameters:
MPI_Send(pointer to message, message size, message type, rank of process to send to, message tag or id, MPI_COMM_WORLD)
MPI_Recv(pointer to variable used to receive, maximum receive size, message type, rank of process to receive from, message tag or id, MPI_COMM_WORLD, MPI_STATUS_IGNORE)
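
The following is a minimal sketch (not taken from the course materials) showing these parameter lists in use: rank 0 sends one int to rank 1. The payload value and the tag of 0 are illustrative, and the program assumes it is run with at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 42;                 /* illustrative payload */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* buffer, count, type, destination rank, tag, communicator */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* buffer, max count, type, source rank, tag, communicator, status */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}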

MPI_Send and MPI_Recv
Recall that we discussed that MPI_Send and MPI_Recv are blocking send and receive functions. These functions do block; however, MPI_Send may block only until the message to be sent has been copied into a temporary buffer. It is up to the MPI implementation to decide whether to buffer the outgoing message. If MPI does not buffer the outgoing message, the send blocks until a matching receive has been posted. It is possible (and is currently true on our system) that MPI_Send returns after handing its message off to the MPI environment (i.e. the Open MPI runtime environment). For this reason we often treat MPI_Send as a buffered send.

MPI_Sendrecv
Recall that when discussing Odd-Even sort, we used a function that sends and receives messages in one operation.
Syntax: MPI_Sendrecv(data_to_send, send_size, send_type, destination, send_tag, data_to_recv, recv_size, recv_type, source, recv_tag, communicator, status)
Recall that this function allows data to be both sent and received between processes without worrying about the ordering of the send and the receive. It also avoids the problem of two blocking sends being issued at the same time (i.e. it helps to avoid deadlock).

MPI_Sendrecv Parameters
– data_to_send – reference to the data to be sent (input)
– send_size – amount of data to send (input)
– send_type – MPI type of the data to send (input)
– destination – rank of the process to which the data will be sent (input)
– send_tag – message identifier for the data to be sent (input)
– data_to_recv – reference to the location where data will be received (output)
– recv_size – maximum amount of data to receive (input)
– recv_type – MPI type of the data to receive (input)
– source – rank of the process from which data will be received (input)
– recv_tag – message identifier for the message to be received (input)
– communicator – the world in which communication will occur (input, typically MPI_COMM_WORLD)
– status – status of the message receipt (output, either an MPI_Status variable or MPI_STATUS_IGNORE)
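
As a hedged illustration of this parameter order (not from the original slides), the sketch below has every rank send its own rank to its right neighbor and receive from its left neighbor in a single MPI_Sendrecv call; the neighbor arithmetic and the tag of 0 are illustrative choices.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, send_val, recv_val = -1;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    send_val = rank;
    int right = (rank + 1) % size;        /* destination */
    int left  = (rank - 1 + size) % size; /* source */

    /* Send to the right neighbor and receive from the left neighbor in
     * one call, so no manual ordering of send and receive is needed. */
    MPI_Sendrecv(&send_val, 1, MPI_INT, right, 0,
                 &recv_val, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("Rank %d received %d from rank %d\n", rank, recv_val, left);

    MPI_Finalize();
    return 0;
}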

Other Methods to Send and Receive Messages
Since programmers often need control over how sends and receives occur, MPI provides true blocking and true non-blocking send and receive functions.
MPI_Ssend – MPI's synchronous send.
– This function, when used with MPI_Recv, always blocks until the matching receive has been posted and has started to receive the message.
MPI_Bsend – MPI's buffered send.
– This function, when used with MPI_Recv, blocks only until the message has been copied into a buffer that the MPI framework can use.
MPI_Isend and MPI_Irecv – MPI's non-blocking send and receive functions.
– These functions return immediately after being called.
– MPI_Isend starts sending the message and does not wait for it to complete.
– MPI_Irecv tells the MPI environment that a message will be received, but does not wait for the message to arrive.
Non-blocking send and receive functions must be paired with MPI_Test and/or MPI_Wait in order to complete the communications that were started.

Synchronous Send
This function, when used with MPI_Recv, always blocks until the matching receive has been posted and has started to receive the message.
Syntax: MPI_Ssend(buffer, count, type, destination, tag, communicator)
– buffer – the data to be sent
– count – the number of elements being sent
– type – the MPI type of the data being sent
– destination – the rank of the process to which the data will be sent
– tag – an identifier for the message
– communicator – currently MPI_COMM_WORLD

Buffered Send
This function, when used with MPI_Recv, blocks only until the message has been copied into a buffer that the MPI framework can use (a user-supplied buffer attached with MPI_Buffer_attach).
Syntax: MPI_Bsend(buffer, count, type, destination, tag, communicator)
– buffer – the data to be sent
– count – the number of elements being sent
– type – the MPI type of the data being sent
– destination – the rank of the process to which the data will be sent
– tag – an identifier for the message
– communicator – currently MPI_COMM_WORLD
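
A small sketch of a buffered send, assuming the standard requirement that MPI_Bsend draws its space from a user buffer attached with MPI_Buffer_attach; the message contents are illustrative and at least two processes are assumed.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, value = 7;                  /* illustrative payload */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Attach enough space for one int plus MPI's bookkeeping overhead. */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        MPI_Bsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

        /* Detach blocks until buffered messages have been transmitted. */
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}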

Examples
We will investigate using MPI_Bsend and MPI_Ssend with in-class examples. First, we will re-implement the ring program from worksheet 6, problem 2. Then we will modify the Odd-Even Sort program to make use of these functions.
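
The worksheet 6 program itself is not reproduced here; the following is one possible sketch of such a ring, assuming a token is passed from rank 0 around to every process with synchronous sends. Receiving before sending on the non-zero ranks keeps MPI_Ssend from deadlocking.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, token;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* assumes at least two processes */

    int next = (rank + 1) % size;
    int prev = (rank - 1 + size) % size;

    if (rank == 0) {
        token = 100;                         /* illustrative starting value */
        MPI_Ssend(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        /* Receive the token first, then synchronously pass it along. */
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Ssend(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }
    printf("Rank %d saw token %d\n", rank, token);

    MPI_Finalize();
    return 0;
}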

Return Status of a Message
It is possible to specify wildcard values in place of the message source and message tag:
– MPI_ANY_TAG – allow a message with any tag
– MPI_ANY_SOURCE – allow a message from any source
In this case, the properties of the received message can be determined using the MPI_Status structure.

MPI_Status
The MPI_Status struct exposes three fields in C. Given a variable of type MPI_Status named status, these values can be retrieved as:
– status.MPI_SOURCE – the source of the message
– status.MPI_TAG – the tag of the message
– status.MPI_ERROR – the error code of the message
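
A hedged example of using these fields (not from the original slides): rank 0 collects one message from every other rank using wildcard source and tag, then reads the sender and tag back out of the status. The payload values are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, data;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        data = rank * 10;                              /* illustrative payload */
        MPI_Send(&data, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);
    } else {
        MPI_Status status;
        for (int i = 1; i < size; i++) {
            /* Accept the next message regardless of who sent it. */
            MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("Got %d from rank %d with tag %d\n",
                   data, status.MPI_SOURCE, status.MPI_TAG);
        }
    }

    MPI_Finalize();
    return 0;
}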

MPI_Get_count
Question: what if we don't know exactly how much data we will receive? Assume for now that our buffer is big enough to hold all the data we need.
– This could be a problem if we are transferring gigabytes of data at a time, though.
We could first send a separate message containing the size, or we could use MPI_Get_count to determine the size from the status of a message that was just received.

MPI_Get_count
Syntax – MPI_Get_count(status, data_type, count)
– status – reference to the status variable (input)
– data_type – MPI type of the receive buffer elements (input)
– count – reference to the variable where the count will be stored (output)

Example
Let's write a simple MPI example with two processes in which an unknown amount of data is sent from one process to the other.
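
One possible version of that example (the count of 37 and the buffer bound are illustrative assumptions, and at least two processes are assumed): rank 0 sends a number of ints that rank 1 does not know in advance, and rank 1 recovers the count from the status with MPI_Get_count.

#include <mpi.h>
#include <stdio.h>

#define MAX_ITEMS 1000   /* assumed upper bound on the message size */

int main(int argc, char **argv) {
    int rank;
    int buffer[MAX_ITEMS];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int n = 37;                        /* rank 1 does not know this value */
        for (int i = 0; i < n; i++) buffer[i] = i;
        MPI_Send(buffer, n, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        int count;
        /* Receive into a buffer assumed large enough, then ask how much arrived. */
        MPI_Recv(buffer, MAX_ITEMS, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        printf("Rank 1 received %d ints\n", count);
    }

    MPI_Finalize();
    return 0;
}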

Non-Blocking Sending and Receiving
Performance can be improved by allowing computation to be performed while waiting for a message to be sent or while waiting to receive a message. A non-blocking send operation starts a send but does not complete it. Similarly, a non-blocking receive operation starts a receive but does not complete it. Separate send-completion and/or receive-completion operations must then be issued. Be aware that non-blocking operations use MPI_Request objects to identify a communication operation and to match that communication with the operation that completes it.

Non-Blocking Send
MPI_Isend (and similarly MPI_Ibsend and MPI_Issend) is used to send a message in a non-blocking fashion.
Syntax – MPI_Isend(buffer, count, type, destination, tag, communicator, request)
– buffer – the data to be sent (input)
– count – the number of elements being sent (input)
– type – the MPI type of the data being sent (input)
– destination – the rank of the process to which the data will be sent (input)
– tag – an identifier for the message (input)
– communicator – the communication world, currently MPI_COMM_WORLD (input)
– request – the communication request handle (output)

Non-Blocking Receive
MPI_Irecv is used to request receipt of a message in a non-blocking fashion.
Syntax – MPI_Irecv(buffer, count, type, source, tag, communicator, request)
– buffer – the location where the data will be received (output)
– count – the maximum number of elements to receive (input)
– type – the MPI type of the data being received (input)
– source – the rank of the process from which the data will be received (input)
– tag – an identifier for the message (input)
– communicator – the communication world, currently MPI_COMM_WORLD (input)
– request – the communication request handle (output)

But wait…
If MPI_Irecv is non-blocking, how do we know if our message has been received? Non-blocking operations simply continue after the operation has been posted and leave it up to MPI to complete the operation. So, to truly know whether the message has been received, we need to test and/or wait for its receipt.

MPI_Wait
MPI_Wait is used to wait for delivery of a message. It requires the request returned by MPI_Irecv to be passed in as a parameter.
Syntax: MPI_Wait(request, status)
– request – a reference to the request handle for the message to be received (i.e. from MPI_Irecv) (input and output)
– status – the status of the message that was received (output)

MPI_Wait
Calling MPI_Wait forces a process to block until the message identified by the request is received. It is possible for a communication to be cancelled or to cause an error. These issues can be handled by examining the result stored in the status variable.
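
A minimal sketch of the non-blocking pattern, assuming exactly two participating ranks (run with at least two processes): each rank posts MPI_Irecv and MPI_Isend, could overlap other computation, and then calls MPI_Wait on both requests before using the received value. The payload values are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank < 2) {                       /* only ranks 0 and 1 take part */
        int other = 1 - rank;
        int send_val = rank + 100, recv_val = -1;
        MPI_Request send_req, recv_req;

        /* Post the receive and the send; both return immediately. */
        MPI_Irecv(&recv_val, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &recv_req);
        MPI_Isend(&send_val, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &send_req);

        /* Useful computation could overlap with the communication here. */

        MPI_Wait(&send_req, MPI_STATUS_IGNORE);
        MPI_Wait(&recv_req, MPI_STATUS_IGNORE);
        printf("Rank %d received %d\n", rank, recv_val);
    }

    MPI_Finalize();
    return 0;
}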

MPI_Test
Generally, it is possible to complete other work while waiting for a message to be delivered. As previously mentioned in class, messages are sent point-to-point with send and receive. MPI acts in a fashion similar to a postal service: the sender puts the message in an envelope and sends it by handing it off to MPI, and the receiver waits for delivery of the message in its mailbox from the postal service (MPI). Instead of "standing by the mailbox," it is possible to "check the mailbox every once in a while" and do something else if the message has not yet been delivered. MPI_Test can be used to check for delivery of a message.

MPI_Test
MPI_Test is used to check whether a message has been delivered.
Syntax: MPI_Test(request, flag, status)
– request – reference to the request handle; note that this identifies the message (input)
– flag – reference to an integer (output); note that this is set to one (true) if the message was delivered
– status – reference to the status object associated with the message (output); note that this may be used to determine whether delivery of the message was cancelled or resulted in an error

Example
There are several non-blocking MPI examples on the course website. We will look at these examples and modify them to use MPI_Wait and MPI_Test. It is recommended that you implement a while loop that tests for delivery of a message, does other work if the message has not been delivered, and exits the loop after delivery has occurred.
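
A sketch of such a loop (not one of the course website examples), with do_other_work as a hypothetical placeholder for useful computation; the payload value is illustrative and at least two processes are assumed.

#include <mpi.h>
#include <stdio.h>

/* Hypothetical stand-in for useful computation done while waiting. */
static void do_other_work(void) {
    volatile double x = 0.0;
    for (int i = 0; i < 1000; i++) x += i * 0.5;
}

int main(int argc, char **argv) {
    int rank, data, flag = 0;
    MPI_Request request;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        data = 123;                        /* illustrative payload */
        MPI_Send(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        MPI_Irecv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        while (!flag) {
            MPI_Test(&request, &flag, &status);
            if (!flag)
                do_other_work();           /* message not here yet, keep computing */
        }
        /* At this point the message identified by request has been delivered. */
        printf("Rank 0 eventually received %d\n", data);
    }

    MPI_Finalize();
    return 0;
}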

Advanced Testing and Waiting
It is possible to test for many messages and to wait for many messages. MPI provides the following functions to test and wait for any, some, and/or all of your messages:
– MPI_Testany
– MPI_Testsome
– MPI_Testall
– MPI_Waitany
– MPI_Waitsome
– MPI_Waitall
You should read about each of these functions in the MPI 3.0 Standard.
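
For example, MPI_Waitall completes a whole array of requests at once. The sketch below is an illustration, not from the slides: rank 0 posts one MPI_Irecv per other rank and then waits for all of them with a single call.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int *data = malloc((size - 1) * sizeof(int));
        MPI_Request *reqs = malloc((size - 1) * sizeof(MPI_Request));

        /* Post one non-blocking receive per other rank. */
        for (int i = 1; i < size; i++)
            MPI_Irecv(&data[i - 1], 1, MPI_INT, i, 0, MPI_COMM_WORLD, &reqs[i - 1]);

        /* Block until every posted receive has completed. */
        MPI_Waitall(size - 1, reqs, MPI_STATUSES_IGNORE);

        for (int i = 1; i < size; i++)
            printf("Received %d from rank %d\n", data[i - 1], i);

        free(data);
        free(reqs);
    }

    MPI_Finalize();
    return 0;
}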

Reading Assignment
Read Chapter 3 of the MPI 3.0 Standard. Pay particular attention to sections 3.2, 3.4, 3.5, and 3.7 – 3.10.