Parallel Computing
A task is broken down into subtasks, performed by separate workers or processes
Processes interact by exchanging information
What do we basically need?
–The ability to start the tasks
–A way for them to communicate

Communication
Cooperative
–All parties agree to transfer data
–Message passing is cooperative
–Data must be explicitly sent and received
–Any change in the receiver’s memory is made with the receiver’s participation
One-sided
–One worker performs the transfer of data
–Data can be accessed without waiting for another process
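For concreteness, here is a minimal hedged sketch of the one-sided style using the MPI-2 remote-memory-access calls MPI_Win_create, MPI_Win_fence, and MPI_Put; these routines go beyond what this presentation covers, and the one-integer window is purely illustrative:

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch only: rank 0 writes into rank 1's memory window without
       rank 1 posting a matching receive. Run with at least 2 processes. */
    int main( int argc, char** argv )
    {
        int rank, value = 0;
        MPI_Win win;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        /* every process exposes one int as a window */
        MPI_Win_create( &value, sizeof(int), sizeof(int),
                        MPI_INFO_NULL, MPI_COMM_WORLD, &win );

        MPI_Win_fence( 0, win );
        if ( rank == 0 ) {
            int data = 42;
            /* put 1 int into rank 1's window at displacement 0 */
            MPI_Put( &data, 1, MPI_INT, 1, 0, 1, MPI_INT, win );
        }
        MPI_Win_fence( 0, win );   /* completes the transfer */

        if ( rank == 1 )
            printf( "Rank 1 received %d via one-sided put\n", value );

        MPI_Win_free( &win );
        MPI_Finalize();
        return 0;
    }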

What is MPI?
A message passing library specification
–Message-passing model
–Not a compiler specification (i.e. not a language)
–Not a specific product
Designed for parallel computers, clusters, and heterogeneous networks

The MPI Process
Development began in early 1992
Open process / broad participation
–IBM, Intel, TMC, Meiko, Cray, Convex, nCUBE
–PVM, p4, Express, Linda, …
–Laboratories, universities, government
Final version of the draft in May 1994
Public and vendor implementations are now widely available

Point-to-Point Communication
A message is sent from a sender to a receiver
There are several variations on how the sending of a message can interact with the program

Synchronous
A synchronous communication does not complete until the message has been received
–A FAX or registered mail

Asynchronous
An asynchronous communication completes as soon as the message is on its way
–A post card or an e-mail

Blocking and Non-blocking
Blocking operations only return when the operation has been completed
–Normal FAX machines
Non-blocking operations return right away and allow the program to do other work
–Receiving a FAX
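As a sketch of the difference, the non-blocking call MPI_Isend (not covered further on these slides) returns immediately and is completed later with MPI_Wait; the buffer name and message size below are illustrative only:

    #include <mpi.h>

    /* Sketch: start a non-blocking send, overlap it with other work,
       then wait for completion before reusing the buffer. */
    void nonblocking_example( int dest, MPI_Comm comm )
    {
        double work_buf[100];          /* illustrative data */
        MPI_Request request;
        MPI_Status  status;

        MPI_Isend( work_buf, 100, MPI_DOUBLE, dest, 0, comm, &request );

        /* ... do other useful computation while the message is in flight ... */

        MPI_Wait( &request, &status ); /* blocks only until the send completes */
    }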

Collective Communications
Point-to-point communications involve pairs of processes
Many message passing systems provide operations which allow larger numbers of processes to participate

Types of Collective Transfers
Barrier
–Synchronizes processes
–No data is exchanged, but the barrier blocks until all processes have called the barrier routine
Broadcast (sometimes multicast)
–A broadcast is a one-to-many communication
–One process sends one message to several destinations
Reduction
–Often useful in a many-to-one communication
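A short sketch of these three operations using the standard MPI collectives MPI_Barrier, MPI_Bcast, and MPI_Reduce; the variable names and values are illustrative:

    #include <mpi.h>
    #include <stdio.h>

    int main( int argc, char** argv )
    {
        int rank, value, sum;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        /* barrier: no data moves, every process waits until all arrive */
        MPI_Barrier( MPI_COMM_WORLD );

        /* broadcast: rank 0 sends one value to every process */
        value = (rank == 0) ? 42 : 0;
        MPI_Bcast( &value, 1, MPI_INT, 0, MPI_COMM_WORLD );

        /* reduction: many-to-one, sum of all ranks collected at rank 0 */
        MPI_Reduce( &rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );
        if ( rank == 0 )
            printf( "broadcast value = %d, sum of ranks = %d\n", value, sum );

        MPI_Finalize();
        return 0;
    }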

What’s in a Message?
An MPI message is an array of elements of a particular MPI datatype
All MPI messages are typed
–The type of the contents must be specified in both the send and the receive

Basic C Datatypes in MPI
MPI datatype           C datatype
MPI_CHAR               signed char
MPI_SHORT              signed short int
MPI_INT                signed int
MPI_LONG               signed long int
MPI_UNSIGNED_CHAR      unsigned char
MPI_UNSIGNED_SHORT     unsigned short int
MPI_UNSIGNED           unsigned int
MPI_UNSIGNED_LONG      unsigned long int
MPI_FLOAT              float
MPI_DOUBLE             double
MPI_LONG_DOUBLE        long double
MPI_BYTE               (no corresponding C datatype)
MPI_PACKED             (no corresponding C datatype)

MPI Handles
MPI maintains internal data structures which are referenced by the user through handles
Handles can be returned by and passed to MPI procedures
Handles can be copied by the usual assignment operation

MPI Errors
MPI routines return an int that can contain an error code
The default action on the detection of an error is to cause the parallel operation to abort
–The default can be changed to return an error code
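As a sketch, the abort-on-error default can be changed with the standard call MPI_Comm_set_errhandler, after which return codes can be checked explicitly; the choice of MPI_Barrier as the call being checked is arbitrary:

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch: switch MPI_COMM_WORLD from "abort on error" to "return error
       codes", then check the code returned by an MPI call. */
    void check_errors_example( void )
    {
        int err;
        MPI_Comm_set_errhandler( MPI_COMM_WORLD, MPI_ERRORS_RETURN );

        err = MPI_Barrier( MPI_COMM_WORLD );
        if ( err != MPI_SUCCESS ) {
            char msg[MPI_MAX_ERROR_STRING];
            int  len;
            MPI_Error_string( err, msg, &len );
            fprintf( stderr, "MPI error: %s\n", msg );
        }
    }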

Initializing MPI
The first MPI routine called in any MPI program must be the initialization routine MPI_INIT
MPI_INIT is called once by every process, before any other MPI routines
int MPI_Init( int *argc, char ***argv );

Skeleton MPI Program
#include <mpi.h>

int main( int argc, char** argv )
{
    MPI_Init( &argc, &argv );

    /* main part of the program */

    MPI_Finalize();
    return 0;
}
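With a typical MPI installation this skeleton would be compiled with the implementation’s compiler wrapper and launched with its process starter, for example mpicc skeleton.c -o skeleton followed by mpiexec -n 4 ./skeleton; the exact command names (mpicc, mpiexec, mpirun) vary between MPI implementations, and the file name here is only illustrative.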

Communicators
A communicator handle defines which processes a particular command will apply to
All MPI communication calls take a communicator handle as a parameter, which is effectively the context in which the communication will take place
MPI_INIT defines a communicator called MPI_COMM_WORLD for each process that calls it

Communicators
Every communicator contains a group, which is a list of processes
The processes are ordered and numbered consecutively from 0; the number of each process is known as its rank
–The rank identifies each process within the communicator
The group of MPI_COMM_WORLD is the set of all MPI processes
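A small sketch of how a process discovers its own rank and the size of the group of MPI_COMM_WORLD, using the standard calls MPI_Comm_rank and MPI_Comm_size:

    #include <mpi.h>
    #include <stdio.h>

    int main( int argc, char** argv )
    {
        int rank, size;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );  /* this process's number, 0..size-1 */
        MPI_Comm_size( MPI_COMM_WORLD, &size );  /* how many processes in the group */

        printf( "Hello from rank %d of %d\n", rank, size );

        MPI_Finalize();
        return 0;
    }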

Point-to-point Communication
Always involves exactly two processes
The destination is identified by its rank within the communicator
There are four communication modes provided by MPI (these modes refer to sending, not receiving)
–Buffered
–Synchronous
–Standard
–Ready

Standard Send
When using standard-mode send:
–It is up to MPI to decide whether outgoing messages will be buffered
–Completes once the message has been sent, which may or may not imply that the message has arrived at its destination
–Can be started whether or not a matching receive has been posted; it may complete before a matching receive is posted
–Has non-local completion semantics, since successful completion of the send operation may depend on the occurrence of a matching receive

Standard Send
MPI_Send( buf, count, datatype, dest, tag, comm )
where
–buf is the address of the data to be sent
–count is the number of elements of the MPI datatype which buf contains
–datatype is the MPI datatype
–dest is the destination process for the message, specified by the rank of the destination within the group associated with the communicator comm
–tag is a marker used by the sender to distinguish between different types of messages
–comm is the communicator shared by the sender and the receiver
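A hedged end-to-end sketch of a standard-mode send matched by a receive; MPI_Recv is not described on these slides, but every send needs a matching receive, and the tag value and array size here are illustrative:

    #include <mpi.h>
    #include <stdio.h>

    /* Run with at least 2 processes. */
    int main( int argc, char** argv )
    {
        int rank;
        double data[10];
        MPI_Status status;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        if ( rank == 0 ) {
            for ( int i = 0; i < 10; i++ ) data[i] = i;
            /* send 10 doubles to rank 1 with tag 99 on MPI_COMM_WORLD */
            MPI_Send( data, 10, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD );
        } else if ( rank == 1 ) {
            /* matching receive: same datatype, tag, and communicator */
            MPI_Recv( data, 10, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status );
            printf( "Rank 1 received data[9] = %f\n", data[9] );
        }

        MPI_Finalize();
        return 0;
    }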

Synchronous Send
MPI_Ssend( buf, count, datatype, dest, tag, comm )
–Can be started whether or not a matching receive was posted
–Will complete successfully only if a matching receive is posted, and the receive operation has started to receive the message sent by the synchronous send
–Provides synchronous communication semantics: a communication does not complete at either end before both processes rendezvous at the communication
–Has non-local completion semantics

Buffered Send
A buffered-mode send
–Can be started whether or not a matching receive has been posted; it may complete before a matching receive is posted
–Has local completion semantics: its completion does not depend on the occurrence of a matching receive
–In order to complete the operation, it may be necessary to buffer the outgoing message locally; for that purpose, buffer space is provided by the application
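A sketch of how the application provides that buffer space, using the standard calls MPI_Buffer_attach, MPI_Bsend, and MPI_Buffer_detach; the buffer-size arithmetic and function name are illustrative:

    #include <mpi.h>
    #include <stdlib.h>

    /* Sketch: attach user-provided buffer space, do a buffered-mode send,
       then detach (detach blocks until buffered messages have been delivered). */
    void buffered_send_example( double *data, int count, int dest, MPI_Comm comm )
    {
        int   pack_size, buf_size;
        void *buffer;

        /* space needed for one message of 'count' doubles, plus MPI's overhead */
        MPI_Pack_size( count, MPI_DOUBLE, comm, &pack_size );
        buf_size = pack_size + MPI_BSEND_OVERHEAD;
        buffer   = malloc( buf_size );

        MPI_Buffer_attach( buffer, buf_size );
        MPI_Bsend( data, count, MPI_DOUBLE, dest, 0, comm );  /* completes locally */
        MPI_Buffer_detach( &buffer, &buf_size );              /* waits for delivery */

        free( buffer );
    }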

Ready Mode Send
A ready-mode send
–Completes immediately
–May be started only if the matching receive has already been posted
–Has the same semantics as a standard-mode send
–Saves on overhead by avoiding handshaking and buffering