CSCI-4320/6340: Parallel Programming & Computing
MPI Point-2-Point and Collective Operations
Prof. Chris Carothers, Computer Science Department, MRC 309a
chrisc@cs.rpi.edu
www.mcs.anl.gov/research/projects/mpi/www/www3/

Topic Overview
Review: Principles of Message-Passing Programming
MPI Point-2-Point Messages:
  MPI Datatypes
  MPI_Isend / MPI_Irecv
  MPI_Testsome
  MPI_Cancel
  MPI_Finalize
Collective Operations

MPI Datatypes
MPI_BYTE – untyped byte of data
MPI_CHAR – 8-bit character
MPI_DOUBLE – 64-bit floating point
MPI_FLOAT – 32-bit floating point
MPI_INT – signed 32-bit integer
MPI_LONG – signed long integer (32 or 64 bits, platform dependent)
MPI_LONG_LONG – signed 64-bit integer
MPI_UNSIGNED – unsigned 32-bit integer
MPI_UNSIGNED_SHORT / MPI_UNSIGNED_LONG / MPI_UNSIGNED_LONG_LONG – unsigned variants of SHORT, LONG, and LONG LONG
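As a quick sanity check of how these MPI datatypes map to C types on a given platform, the small sketch below queries each size with MPI_Type_size; the exact numbers printed (especially for MPI_LONG) depend on the platform and MPI implementation:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size_int, size_long, size_double;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI_Type_size reports the size in bytes of each MPI datatype. */
    MPI_Type_size(MPI_INT,    &size_int);
    MPI_Type_size(MPI_LONG,   &size_long);   /* 4 or 8 bytes, platform dependent */
    MPI_Type_size(MPI_DOUBLE, &size_double);

    if (rank == 0)
        printf("MPI_INT=%d bytes, MPI_LONG=%d bytes, MPI_DOUBLE=%d bytes\n",
               size_int, size_long, size_double);

    MPI_Finalize();
    return 0;
}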

Review: Mental Model (figure)

Review: Principles of Message-Passing Programming
The logical view of a machine supporting the message-passing paradigm consists of p processes, each with its own exclusive address space.
Each data element must belong to one of the partitions of the space; hence, data must be explicitly partitioned and placed.
All interactions (read-only or read/write) require the cooperation of two processes: the process that has the data and the process that wants to access it.
These two constraints, while onerous, make the underlying costs very explicit to the programmer.

Point-2-Point: MPI_Isend
MPI_Isend only hands the send buffer to MPI and returns immediately; it is non-blocking. The send is complete only when the request status indicates so, and the buffer must not be reused until then.
int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
Inputs:
  buf – initial address of the send buffer
  count – number of elements in the send buffer
  datatype – datatype of each send buffer element
  dest – rank of the destination
  tag – message tag
  comm – communicator handle
Outputs:
  request – request handle used to check the status of the send
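A minimal sketch of the usual Isend pattern, assuming two processes and an illustrative tag of 99: rank 0 posts the send, then waits on the request before touching the buffer again, while rank 1 receives normally.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data[4] = {1, 2, 3, 4};
    MPI_Request req;

    MPI_Init(&argc, &argv);                 /* run with 2 processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Post the send; MPI_Isend returns immediately. */
        MPI_Isend(data, 4, MPI_INT, 1, 99, MPI_COMM_WORLD, &req);
        /* The send buffer must not be modified until the request completes. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(data, 4, MPI_INT, 0, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d %d %d %d\n", data[0], data[1], data[2], data[3]);
    }

    MPI_Finalize();
    return 0;
}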

Point-2-Point: MPI_Irecv
MPI_Irecv only posts a receive to the MPI layer and returns immediately; it is non-blocking and helps to avoid deadlock. The receive is complete only when the request status indicates so.
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
Inputs:
  buf – initial address of the receive buffer
  count – maximum number of elements in the receive buffer
  datatype – datatype of each receive buffer element
  source – rank of the source (or MPI_ANY_SOURCE)
  tag – message tag (or MPI_ANY_TAG)
  comm – communicator handle
Outputs:
  request – request handle used to check the status of the receive
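A minimal non-blocking exchange sketch (run with at least 2 processes; the tag and values are illustrative): each of ranks 0 and 1 posts its Irecv first, then its Isend, and waits on both requests with MPI_Waitall, which avoids the deadlock that two blocking receives could cause.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (nprocs >= 2 && rank < 2) {
        int peer = 1 - rank;              /* ranks 0 and 1 exchange one int */
        int sendval = rank * 100, recvval = -1;

        /* Post the receive first, then the send; both calls return at once. */
        MPI_Irecv(&recvval, 1, MPI_INT, peer, 7, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendval, 1, MPI_INT, peer, 7, MPI_COMM_WORLD, &reqs[1]);

        /* Neither buffer may be reused until both requests complete. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        printf("rank %d received %d from rank %d\n", rank, recvval, peer);
    }

    MPI_Finalize();
    return 0;
}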

Point-2-Point: MPI_Testsome
MPI_Testsome lets you test the completion of posted send and receive requests. It is non-blocking.
int MPI_Testsome(int incount, MPI_Request array_of_requests[], int *outcount, int array_of_indices[], MPI_Status array_of_statuses[])
Inputs:
  incount – length of array_of_requests
  array_of_requests – array of request handles
Outputs:
  outcount – number of completed requests
  array_of_indices – array of indices of the requests that completed
  array_of_statuses – array of status objects for the operations that completed
Other related functions include MPI_Testall.
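A sketch of a typical Testsome polling loop, with illustrative buffers and tag: rank 0 posts one Irecv per worker rank and repeatedly calls MPI_Testsome, handling whichever receives have completed, until all are done.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0) {
        int n = nprocs - 1, done = 0;
        int *vals = malloc(n * sizeof(int));
        int *indices = malloc(n * sizeof(int));
        MPI_Request *reqs = malloc(n * sizeof(MPI_Request));
        MPI_Status *stats = malloc(n * sizeof(MPI_Status));

        /* Post one non-blocking receive per worker rank. */
        for (int i = 0; i < n; i++)
            MPI_Irecv(&vals[i], 1, MPI_INT, i + 1, 0, MPI_COMM_WORLD, &reqs[i]);

        /* Poll until every posted receive has completed. */
        while (done < n) {
            int outcount;
            MPI_Testsome(n, reqs, &outcount, indices, stats);
            for (int i = 0; i < outcount; i++)
                printf("got %d from rank %d\n",
                       vals[indices[i]], stats[i].MPI_SOURCE);
            done += outcount;
            /* ... other useful work could go here between polls ... */
        }
        free(vals); free(indices); free(reqs); free(stats);
    } else {
        int v = rank * rank;              /* illustrative payload */
        MPI_Send(&v, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}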

Point-2-Point: MPI_Iprobe
Non-blocking test for a pending message without actually having to receive it, e.g., an Iprobe followed by an Irecv. It can be an expensive operation on some platforms.
int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status)
Inputs:
  source – source rank or MPI_ANY_SOURCE
  tag – tag value or MPI_ANY_TAG
  comm – communicator handle
Outputs:
  flag – true if a matching message is pending
  status – status object describing the pending message
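One common Iprobe pattern is probing so the receiver can size its buffer before receiving; a hedged sketch, assuming two processes and an illustrative tag and message:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);               /* run with 2 processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        int msg[3] = {10, 20, 30};
        MPI_Send(msg, 3, MPI_INT, 0, 5, MPI_COMM_WORLD);
    } else if (rank == 0) {
        int flag = 0, count;
        MPI_Status status;

        /* Spin until a matching message is pending (flag becomes true). */
        while (!flag)
            MPI_Iprobe(1, 5, MPI_COMM_WORLD, &flag, &status);

        /* Use the probed status to size the buffer, then actually receive. */
        MPI_Get_count(&status, MPI_INT, &count);
        int *buf = malloc(count * sizeof(int));
        MPI_Recv(buf, count, MPI_INT, 1, 5, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d ints, first = %d\n", count, buf[0]);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}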

Point-2-Point: MPI_Cancel
Enables the cancellation of a pending request, either sending or receiving. It is most commonly used for Irecvs; it can be used for Isends, but that can be very expensive.
int MPI_Cancel(MPI_Request *request);
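A small sketch of cancelling a pending receive: a request that will never be matched is posted, cancelled, and completed, and MPI_Test_cancelled reports whether the cancellation succeeded. The tag is illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf, cancelled;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Post a receive that no one will ever send to. */
    MPI_Irecv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 42, MPI_COMM_WORLD, &req);

    /* Cancel the pending receive and complete the request. */
    MPI_Cancel(&req);
    MPI_Wait(&req, &status);

    MPI_Test_cancelled(&status, &cancelled);
    if (rank == 0)
        printf("receive cancelled: %s\n", cancelled ? "yes" : "no");

    MPI_Finalize();
    return 0;
}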

Collective Communication and Computation Operations
MPI provides an extensive set of functions for performing common collective communication operations.
Each of these operations is defined over the group corresponding to the communicator.
All processes in a communicator must call these operations; otherwise, the program will deadlock!

Collective Communication Operations
The barrier synchronization operation is performed in MPI using:
int MPI_Barrier(MPI_Comm comm)
The one-to-all broadcast operation is:
int MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int source, MPI_Comm comm)
The all-to-one reduction operation is:
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int target, MPI_Comm comm)
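A minimal sketch combining these three collectives, with illustrative values: rank 0 broadcasts a number, every rank adds its own rank to it, and the sum is reduced back to rank 0.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, value, local, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Every rank must call the collective, including the root. */
    value = (rank == 0) ? 1000 : 0;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each rank contributes its own term; the sum lands on rank 0. */
    local = value + rank;
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);          /* purely illustrative sync point */
    if (rank == 0)
        printf("sum over %d ranks = %d\n", nprocs, sum);

    MPI_Finalize();
    return 0;
}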

Predefined Reduction Operations
Operation     Meaning                   Datatypes
MPI_MAX       Maximum                   C integers and floating point
MPI_MIN       Minimum                   C integers and floating point
MPI_SUM       Sum                       C integers and floating point
MPI_PROD      Product                   C integers and floating point
MPI_LAND      Logical AND               C integers
MPI_BAND      Bit-wise AND              C integers and byte
MPI_LOR       Logical OR                C integers
MPI_BOR       Bit-wise OR               C integers and byte
MPI_LXOR      Logical XOR               C integers
MPI_BXOR      Bit-wise XOR              C integers and byte
MPI_MAXLOC    Max value and location    Data-pairs
MPI_MINLOC    Min value and location    Data-pairs

Collective Communication Operations
The operation MPI_MAXLOC combines pairs of values (v_i, l_i) and returns the pair (v, l) such that v is the maximum among all the v_i and l is the corresponding l_i (if the maximum is attained by more than one pair, l is the smallest of those l_i).
MPI_MINLOC does the same, except that it selects the minimum value of v_i.

Collective Communication Operations
MPI datatypes for data-pairs used with the MPI_MAXLOC and MPI_MINLOC reduction operations:
MPI Datatype           C Datatype
MPI_2INT               pair of ints
MPI_SHORT_INT          short and int
MPI_LONG_INT           long and int
MPI_LONG_DOUBLE_INT    long double and int
MPI_FLOAT_INT          float and int
MPI_DOUBLE_INT         double and int
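A short sketch of a MAXLOC reduction using MPI_DOUBLE_INT, where each rank contributes an illustrative value paired with its own rank as the location:

#include <mpi.h>
#include <stdio.h>

/* Layout required for MPI_DOUBLE_INT: the value first, then its location. */
struct double_int { double val; int loc; };

int main(int argc, char **argv)
{
    int rank;
    struct double_int in, out;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Some per-rank value (illustrative) plus the rank as its "location". */
    in.val = (rank % 3) * 1.5;
    in.loc = rank;

    MPI_Reduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("max value %.2f found on rank %d\n", out.val, out.loc);

    MPI_Finalize();
    return 0;
}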

Collective Communication Operations
If the result of the reduction operation is needed by all processes, MPI provides:
int MPI_Allreduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
To compute prefix-sums, MPI provides:
int MPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
e.g., with MPI_SUM: <3, 1, 4, 0, 2> -> <3, 4, 8, 8, 10>
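A sketch reproducing the prefix-sum example above (intended to be run with 5 processes; rank i contributes the i-th element of <3, 1, 4, 0, 2>):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Run with 5 processes to match the example <3,1,4,0,2> -> <3,4,8,8,10>. */
    int vals[5] = {3, 1, 4, 0, 2};
    int rank, mine, prefix, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    mine = vals[rank % 5];

    /* Each rank gets the inclusive prefix sum of the values up to its rank. */
    MPI_Scan(&mine, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Every rank gets the full sum. */
    MPI_Allreduce(&mine, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: value=%d prefix=%d total=%d\n", rank, mine, prefix, total);

    MPI_Finalize();
    return 0;
}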

Collective Communication Operations
The gather operation is performed in MPI using:
int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int recvcount, MPI_Datatype recvdatatype, int target, MPI_Comm comm)
MPI also provides the MPI_Allgather function, in which the data are gathered at all the processes:
int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int recvcount, MPI_Datatype recvdatatype, MPI_Comm comm)
The corresponding scatter operation is:
int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int recvcount, MPI_Datatype recvdatatype, int source, MPI_Comm comm)
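A small scatter/gather sketch with illustrative data: rank 0 scatters one integer to each rank, each rank squares it locally, and the results are gathered back on rank 0 in rank order.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs, mine;
    int *senddata = NULL, *recvdata = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0) {
        senddata = malloc(nprocs * sizeof(int));
        recvdata = malloc(nprocs * sizeof(int));
        for (int i = 0; i < nprocs; i++)
            senddata[i] = i + 1;              /* data to distribute */
    }

    /* Each rank receives one int from rank 0's send buffer. */
    MPI_Scatter(senddata, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);

    mine = mine * mine;                       /* local work */

    /* Rank 0 collects one int back from every rank, in rank order. */
    MPI_Gather(&mine, 1, MPI_INT, recvdata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < nprocs; i++)
            printf("square from rank %d = %d\n", i, recvdata[i]);
        free(senddata);
        free(recvdata);
    }

    MPI_Finalize();
    return 0;
}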

Collective Communication Operations
The all-to-all personalized communication operation is performed by:
int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype senddatatype, void *recvbuf, int recvcount, MPI_Datatype recvdatatype, MPI_Comm comm)
Using this core set of collective operations, a number of programs can be greatly simplified.
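A minimal all-to-all sketch with illustrative values: element i of each rank's send buffer goes to rank i, and element i of its receive buffer arrives from rank i.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int *sendbuf = malloc(nprocs * sizeof(int));
    int *recvbuf = malloc(nprocs * sizeof(int));

    /* Element i of sendbuf goes to rank i; element i of recvbuf came from rank i. */
    for (int i = 0; i < nprocs; i++)
        sendbuf[i] = rank * 100 + i;

    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d received from rank 0: %d, from last rank: %d\n",
           rank, recvbuf[0], recvbuf[nprocs - 1]);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}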