An Introduction to Parallel Programming with MPI

March 22, 24, 29, 31, 2005
David Adams (daadams3@vt.edu)
http://research.cs.vt.edu/lasca/schedule

Outline
- Disclaimers
- Overview of basic parallel programming on a cluster with the goals of MPI
- Batch system interaction
- Startup procedures
- Quick review: blocking and non-blocking message passing
- Collective communications

Review
Functions we have covered in detail: MPI_INIT, MPI_FINALIZE, MPI_COMM_SIZE, MPI_COMM_RANK, MPI_SEND, MPI_RECV, MPI_ISEND, MPI_IRECV, MPI_WAIT, MPI_TEST
Useful constants: MPI_COMM_WORLD, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_SUCCESS, MPI_REQUEST_NULL, MPI_TAG_UB
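
To tie the reviewed calls together, here is a small C sketch (not from the original slides; the value and tag are arbitrary) that initializes MPI, queries rank and size, and passes one integer from rank 0 to rank 1 with a blocking send and receive:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size, token = 0;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size > 1) {
            if (rank == 0) {
                token = 42;
                MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);    /* blocking send to rank 1 */
            } else if (rank == 1) {
                MPI_Recv(&token, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &status);                     /* blocking receive */
                printf("Rank 1 got %d from rank %d\n", token, status.MPI_SOURCE);
            }
        }

        MPI_Finalize();
        return 0;
    }

The non-blocking variants (MPI_ISEND, MPI_IRECV) follow the same pattern but return a request handle that is completed later with MPI_WAIT or MPI_TEST.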

Collective Communications Transmit data among all processes within a communicator domain (all processes in MPI_COMM_WORLD, for example). Called by every member of a communicator, but cannot be relied on to synchronize the processes (except MPI_BARRIER). Come only in blocking versions and standard-mode semantics. Collective communications are SLOW, but they are a convenient way of passing the optimization of data transfer to the vendor instead of the end user. Everything accomplished with collective communications could also be done using the functions we have already gone over; they are simply shortcuts and implementer optimizations for communication patterns that parallel programmers use often.

BARRIER
MPI_BARRIER(COMM, IERROR)
  IN INTEGER COMM
  OUT INTEGER IERROR
Blocks the caller until all processes in the group have entered the call to MPI_BARRIER. Allows for process synchronization, and it is the only collective operation that guarantees synchronization at the call, even though others may synchronize as a side effect.
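
As an illustration (my own sketch, not from the talk), a common use of the barrier is to line all ranks up before starting a timer, so that MPI_WTIME measures roughly the same interval on every process:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* ... each rank does its own setup work here ... */

        MPI_Barrier(MPI_COMM_WORLD);   /* no rank passes this point until all ranks have reached it */
        double t0 = MPI_Wtime();       /* timers now start at (roughly) the same moment everywhere */

        /* ... timed computation ... */

        double t1 = MPI_Wtime();
        printf("Rank %d: elapsed %f seconds\n", rank, t1 - t0);

        MPI_Finalize();
        return 0;
    }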

Broadcast
MPI_BCAST(BUFFER, COUNT, DATATYPE, ROOT, COMM, IERROR)
  INOUT <type> BUFFER(*)
  IN INTEGER COUNT, DATATYPE, ROOT, COMM
  OUT INTEGER IERROR
Broadcasts a message from the process with rank ROOT to all processes of the communicator group. The one call serves as both the blocking send (at the root) and the blocking receive (everywhere else), so it must be made by every process in the communicator group. Conceptually, this can be viewed as the root sending a single message to every process in the group, but MPI implementations are free to do something more efficient. On return, the contents of the root process's BUFFER have been copied into the BUFFER of every process in the group.
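
A small C sketch (mine; the value 1000 is arbitrary) in which rank 0 sets a problem size and broadcasts it to every rank:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, n = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            n = 1000;   /* e.g. read from an input file on the root only */

        /* every rank makes the same call; on return, n == 1000 everywhere */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("Rank %d sees n = %d\n", rank, n);

        MPI_Finalize();
        return 0;
    }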

[Figure: broadcast data movement. Before the call, only the root holds the block A0; after the call, every process in the group holds a copy of A0.]

Gather
MPI_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
  IN <type> SENDBUF(*)
  OUT <type> RECVBUF(*)
  IN INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM
  OUT INTEGER IERROR
Each process (including the root) sends the contents of its send buffer to the root process. The root collects the messages in rank order and stores them in RECVBUF. If there are n processes in the communicator group, RECVBUF must therefore be n times larger than SENDBUF (the receive buffer is significant only at the root). Note that RECVCOUNT equals SENDCOUNT: it is the number of items of type RECVTYPE received from each individual process, not the total.
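
A small C sketch (mine, with arbitrary per-rank values) in which every rank contributes one integer and rank 0 collects them in rank order:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = rank * rank;                  /* this rank's contribution */
        int *all = NULL;
        if (rank == 0)
            all = malloc(size * sizeof(int));    /* receive buffer is needed only at the root */

        /* SENDCOUNT = RECVCOUNT = 1: one int is received from EACH rank, not in total */
        MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("all[%d] = %d\n", i, all[i]);
            free(all);
        }

        MPI_Finalize();
        return 0;
    }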

[Figure: gather data movement. Before the call, process i holds its own block; after the call, the root holds all of the blocks A0 through A5 in rank order (six-process example).]

Scatter
MPI_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
  IN <type> SENDBUF(*)
  OUT <type> RECVBUF(*)
  IN INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM
  OUT INTEGER IERROR
MPI_SCATTER is the inverse of MPI_GATHER. The root takes its SENDBUF, splits it into n equal segments numbered 0 through n-1, and delivers the ith segment to the ith process in the group (the send buffer is significant only at the root).
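
A small C sketch (mine; CHUNK is an arbitrary illustrative size) in which rank 0 hands each rank an equal slice of an array:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define CHUNK 4   /* elements delivered to each rank (illustrative value) */

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *full = NULL;
        if (rank == 0) {                                    /* send buffer matters only at the root */
            full = malloc(size * CHUNK * sizeof(double));
            for (int i = 0; i < size * CHUNK; i++)
                full[i] = i;
        }

        double piece[CHUNK];
        /* segment i of the root's SENDBUF lands in rank i's piece[] */
        MPI_Scatter(full, CHUNK, MPI_DOUBLE, piece, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        double local_sum = 0.0;
        for (int i = 0; i < CHUNK; i++)
            local_sum += piece[i];
        printf("Rank %d: local sum = %g\n", rank, local_sum);

        if (rank == 0)
            free(full);
        MPI_Finalize();
        return 0;
    }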

[Figure: scatter data movement. Before the call, the root holds segments A0 through A5; after the call, process i holds segment Ai (six-process example).]

ALLGATHER
MPI_ALLGATHER is like MPI_GATHER except that every process, not just the root, ends up with the complete set of gathered blocks, so the call takes no ROOT argument.
[Figure: allgather data movement. Before the call, process i holds one block (A0, B0, ..., F0 in a six-process example); after the call, every process holds all of the blocks A0, B0, ..., F0 in rank order.]
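
A small C sketch (mine) of this pattern; note that MPI_Allgather takes the same arguments as MPI_Gather except that there is no ROOT, since every rank receives the result:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = 10 * rank;                    /* this rank's block */
        int *all = malloc(size * sizeof(int));   /* every rank needs the full receive buffer */

        /* like MPI_Gather, but the gathered array ends up on every rank */
        MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

        printf("Rank %d sees:", rank);
        for (int i = 0; i < size; i++)
            printf(" %d", all[i]);
        printf("\n");

        free(all);
        MPI_Finalize();
        return 0;
    }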

ALLTOALL
MPI_ALLTOALL sends a distinct block from every process to every other process: block j sent by process i is received by process j and stored as block i of its receive buffer.
[Figure: all-to-all data movement. Before the call, process 0 holds blocks A0 through A5, process 1 holds B0 through B5, and so on; after the call, process 0 holds A0, B0, ..., F0, process 1 holds A1, B1, ..., F1, and so on (a transpose of the blocks).]
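
A small C sketch (mine, with arbitrary values) in which every rank sends one distinct integer to every other rank:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int j = 0; j < size; j++)
            sendbuf[j] = 100 * rank + j;   /* block j is destined for rank j */

        /* after the call, recvbuf[i] on rank j holds the block that rank i sent to rank j */
        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        printf("Rank %d received:", rank);
        for (int i = 0; i < size; i++)
            printf(" %d", recvbuf[i]);
        printf("\n");

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }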

Global Reductions MPI can perform a global reduction operation across all members of a communicator group. Reduction operations include maximum, minimum, sum, product, and logical and bitwise ANDs and ORs.

MPI_REDUCE
MPI_REDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, ROOT, COMM, IERROR)
  IN <type> SENDBUF(*)
  OUT <type> RECVBUF(*)
  IN INTEGER COUNT, DATATYPE, OP, ROOT, COMM
  OUT INTEGER IERROR
Combines the elements provided in the input buffer of each process in the group, using the operation OP, and returns the combined value in the output buffer of the process with rank ROOT. Predefined operations include: MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, MPI_LAND, MPI_BAND, MPI_LOR, MPI_BOR, MPI_LXOR, MPI_BXOR.
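
A small C sketch (mine) of the classic use: summing a per-rank partial result onto rank 0:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each rank computes a partial result; here it is just its own rank number */
        double partial = (double) rank;
        double total = 0.0;

        /* element-wise MPI_SUM over all ranks; the result is defined only on ROOT (rank 0) */
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum of ranks 0..%d = %g\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }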

Helpful Online Information
Man pages for MPI: http://www-unix.mcs.anl.gov/mpi/www/
MPI homepage at Argonne National Lab: http://www-unix.mcs.anl.gov/mpi/
Some more sample programs: http://www-unix.mcs.anl.gov/mpi/usingmpi/examples/main.htm
Other helpful books: http://fawlty.cs.usfca.edu/mpi/ and http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3614
Some helpful UNIX commands: http://www.ee.surrey.ac.uk/Teaching/Unix/