
MPI Send/Receive Blocked/Unblocked
Message Passing Interface

Josh Alexander, University of Oklahoma
Ivan Babic, Earlham College
Andrew Fitz Gibbon, Shodor Education Foundation Inc.
Henry Neeman, University of Oklahoma
Charlie Peck, Earlham College
Skylar Thompson, University of Washington
Aaron Weeden, Earlham College

NCSI Intro Parallel: Non-blocking calls, Sunday June 26 – Friday July

Where are we headed?
- Blocking: easiest, but might waste time
  - Send communication modes (same for Receive)
- Non-blocking: extra things that might go wrong
  - Might be able to overlap the wait with other work
- Send/Receive and their friends
(focusing on Send and Receive)

From where'd we come?
- MPI_Init(int *argc, char ***argv)
- MPI_Comm_rank(MPI_Comm comm, int *rank)
- MPI_Comm_size(MPI_Comm comm, int *size)
- MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
- MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
- MPI_Finalize()
(the 6 basic MPI commands)
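
Putting those six calls together, here is a minimal, hedged sketch of a two-process program: rank 0 sends one integer to rank 1 with the standard blocking MPI_Send, and rank 1 receives it with MPI_Recv. The value 42 and tag 0 are arbitrary choices for illustration.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        value = 42;                                  /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank 1 of %d processes received %d\n", size, value);
    }

    MPI_Finalize();
    return 0;
}

Run it with at least two processes (e.g., mpirun -np 2). The later sketches reuse this boilerplate and show only the communication calls.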

Four Blocking Send Modes
- Send is the focus: MPI_Recv works with all Sends
- Four Send modes to answer the questions ...
  - Do an extra copy to dodge the synchronization delay?
  - How do Sends/Receives start/finish together?
- No change to the parameters passed to send or receive
  - What changes is the name of the function: MPI_Ssend, MPI_Bsend, MPI_Rsend, and MPI_Send
(basically synchronous communication)

4 Blocking Send modes
- Synchronous – the stoplight intersection
  - No buffer, but both sides wait for the other
- Buffered – the roundabout you construct
  - Explicit user buffer; all's well as long as you stay within the buffer
- Ready – the fire truck stoplight override
  - No buffer, no handshake; the Send is the fire truck
- Standard – the roundabout
  - Not-so-standard blend of Synchronous and Buffered
  - Internal buffer?
(all use the same blocking receive)

Synchronous MPI_Ssend
- Send can initiate before the Receive starts
- Receive must start before the Send sends anything
- Safest and most portable
  - Doesn't care about the order of Send/Receive
  - Doesn't care about any hidden internal buffer
- May have high synchronization overhead
(no buffer)
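
Because MPI_Ssend cannot complete until the matching receive has started, two ranks that both Ssend first and Recv second will deadlock. A common fix, sketched below assuming the boilerplate from the first example, is to order the calls by rank so one side receives first; the tag and variable names are illustrative.

/* Hedged sketch: pairwise exchange with MPI_Ssend between ranks 0 and 1.
   Ordering by rank avoids the deadlock of both ranks Ssend-ing first. */
int sendval = rank, recvval = -1;
int partner = 1 - rank;          /* 0 <-> 1 */

if (rank == 0) {
    MPI_Ssend(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
    MPI_Recv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else if (rank == 1) {
    MPI_Recv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Ssend(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
}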

Buffered MPI_Bsend
- Send can complete before the Receive even starts
- Explicit buffer allocation, via MPI_Buffer_attach
- Error if the buffer overflows
- Eliminates synchronization overhead, at the cost of an extra copy
(explicit user-defined buffer)
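
A hedged sketch of the buffered path: attach a user buffer sized with MPI_BSEND_OVERHEAD (as the standard requires per pending message), send through it, and detach when done. The payload, tag, and single-message sizing are illustrative; <stdlib.h> is needed for malloc, and the boilerplate is as before.

/* Hedged sketch: buffered send from rank 0 to rank 1. */
int payload = 7;                                   /* arbitrary data            */
int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;    /* room for one such message */
char *buffer = malloc(bufsize);

MPI_Buffer_attach(buffer, bufsize);

if (rank == 0) {
    /* Completes as soon as the data is copied into the attached buffer. */
    MPI_Bsend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
} else if (rank == 1) {
    MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

/* Detach blocks until any buffered messages have been delivered. */
MPI_Buffer_detach(&buffer, &bufsize);
free(buffer);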

Ready MPI_Rsend
- Receive must initiate before the Send starts
- Minimum idle time for the Sender, at the expense of the Receiver
- Lowest sender overhead
  - No Sender/Receiver handshake (as with Synchronous)
  - No extra copy to a buffer (as with Buffered and Standard)
(no buffer, no synchronization)
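
MPI_Rsend is only legal when the matching receive is already posted, so this sketch (same assumed boilerplate, illustrative tag and values) posts a non-blocking receive on rank 1 and uses a barrier to guarantee it is in place before rank 0 sends.

/* Hedged sketch: ready-mode send from rank 0 to rank 1. */
int data = 3, result = -1;       /* arbitrary values */
MPI_Request req;

if (rank == 1)
    MPI_Irecv(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);  /* post first */

MPI_Barrier(MPI_COMM_WORLD);     /* receive is now guaranteed to be posted */

if (rank == 0)
    MPI_Rsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
else if (rank == 1)
    MPI_Wait(&req, MPI_STATUS_IGNORE);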

Standard MPI_Send
- Buffer may be on the send side, receive side, or both
- Could be Synchronous, but users expect Buffered
- Goes Synchronous if you exceed the hidden buffer size
- Potential for unexpected timing behavior
(mysterious internal buffer)
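
The classic way that hidden buffer bites is a head-to-head exchange in which both ranks call MPI_Send before MPI_Recv, sketched below with the same assumed boilerplate. It usually works while the message fits in the implementation's internal buffering and can deadlock once it does not, so N here is just a placeholder.

/* Hedged sketch: exchange that depends on the mysterious internal buffer.
   If N is large enough that MPI_Send behaves synchronously, both ranks
   block in MPI_Send and never reach MPI_Recv: deadlock. */
enum { N = 4 };                  /* placeholder message length */
int out[N] = {0}, in[N];
int partner = 1 - rank;

MPI_Send(out, N, MPI_INT, partner, 0, MPI_COMM_WORLD);   /* may or may not block */
MPI_Recv(in,  N, MPI_INT, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);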

Non-Blocking Send/Receive
- Call returns immediately, which allows for overlapping other work
- User must worry about whether ...
  - Data to be sent is out of the send buffer
  - Data to be received has finished arriving
- For sends and receives in flight
  - MPI_Wait: blocking, you go synchronous
  - MPI_Test: non-blocking, a status check
- Check for the existence of data to receive
  - Blocking: MPI_Probe
  - Non-blocking: MPI_Iprobe
(basically asynchronous communication)

Non-Blocking Call Sequence
- Restricts other work you can do
  - Don't use the data till the receive completes
  - Don't write to the send buffer till the send completes
- Sender: MPI_Isend returns a requestID; later, pass that requestID to MPI_Wait
- Receiver: MPI_Irecv returns a requestID; later, pass that requestID to MPI_Wait

Non-blocking Send/Receive
- MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
- MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
(request ID for status checks)
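
A hedged sketch of that call sequence for two ranks exchanging one integer each: post the receive and the send, do unrelated work, then wait on each request (boilerplate, tag, and variable names as before).

/* Hedged sketch: non-blocking exchange between ranks 0 and 1. */
int sendval = rank, recvval = -1;
int partner = 1 - rank;
MPI_Request reqs[2];

MPI_Irecv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);

/* ... overlap: do work that touches neither sendval nor recvval ... */

MPI_Wait(&reqs[0], MPI_STATUS_IGNORE);   /* receive done: recvval is valid   */
MPI_Wait(&reqs[1], MPI_STATUS_IGNORE);   /* send done: sendval may be reused */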

Return to blocking
- Waiting on a single send (or receive)
  - MPI_Wait(MPI_Request *request, MPI_Status *status)
- Waiting on multiple sends/receives (get the status of all)
  - Till all complete, as a barrier: MPI_Waitall(int count, MPI_Request *requests, MPI_Status *statuses)
  - Till at least one completes: MPI_Waitany(int count, MPI_Request *requests, int *index, MPI_Status *status)
  - Helps manage progressive completions: MPI_Waitsome(int incount, MPI_Request *requests, int *outcount, int *indices, MPI_Status *statuses)
(waiting for a send/receive to complete)
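
MPI_Waitany is a natural fit for a manager that wants each worker's reply as soon as it lands. The sketch below assumes the usual boilerplate plus size from MPI_Comm_size, no more than 16 workers, and a reply tag of 0, all purely illustrative.

/* Hedged sketch: rank 0 collects one int from every other rank, handling
   replies in whatever order they complete. */
enum { MAXW = 16 };              /* illustrative cap on worker count */
int nworkers = size - 1;
int results[MAXW];
MPI_Request reqs[MAXW];

for (int w = 1; w <= nworkers; w++)
    MPI_Irecv(&results[w - 1], 1, MPI_INT, w, 0, MPI_COMM_WORLD, &reqs[w - 1]);

for (int done = 0; done < nworkers; done++) {
    int which;
    MPI_Waitany(nworkers, reqs, &which, MPI_STATUS_IGNORE);   /* whichever is ready */
    printf("Result %d arrived from rank %d\n", results[which], which + 1);
}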

Tests don't block
- Flag true means completed
  - MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
  - MPI_Testall(int count, MPI_Request *requests, int *flag, MPI_Status *statuses)
  - MPI_Testany(int count, MPI_Request *requests, int *index, int *flag, MPI_Status *status)
- Like a non-blocking MPI_Waitsome
  - MPI_Testsome(int incount, MPI_Request *requests, int *outcount, int *indices, MPI_Status *statuses)
(but they give you the same info as a wait)
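
A hedged sketch of the test-instead-of-wait pattern: keep computing while polling a pending receive. Here req is assumed to be the request from an earlier MPI_Irecv, and do_some_work() is a hypothetical placeholder for work that does not touch the receive buffer.

/* Hedged sketch: overlap computation with a pending receive. */
int flag = 0;
MPI_Status status;

while (!flag) {
    do_some_work();                  /* hypothetical: must not use the buffer */
    MPI_Test(&req, &flag, &status);  /* non-blocking completion check         */
}
/* flag is nonzero: the receive finished; status holds source, tag, etc. */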

Probe to Receive
- Probes yield the incoming size
- Blocking probe, waits till a match
  - MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
- Non-blocking probe, flag true if ready
  - MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status)
(you can know something's there)
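
The usual probe pattern, sketched with an assumed MPI_INT payload and wildcards for source and tag: block in MPI_Probe, read the length out of the status with MPI_Get_count, allocate, then receive (needs <stdlib.h> for malloc).

/* Hedged sketch: receive a message whose length is not known in advance. */
MPI_Status status;
int incoming = 0;

MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_INT, &incoming);          /* how many ints arrived */

int *buf = malloc(incoming * sizeof(int));
MPI_Recv(buf, incoming, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
/* ... use buf ... */
free(buf);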

Non-Blocking Advantages
- Avoids deadlock
- Decreases synchronization overhead
- Best to
  - Post non-blocking sends and receives as early as possible
  - Do the waits as late as possible
- Otherwise, consider using blocking calls
(fine-tuning your sends and receives)