Message Passing Programming Based on MPI

Message Passing Programming Based on MPI: Derived Data Types. Bora AKAYDIN, 15.06.2012

Outline: Derived Datatypes; Packing/Unpacking Data

Derived Datatypes. How can we send only the red elements of V(0) V(1) V(2) V(3) V(4) V(5) V(6) V(7) V(8) V(9) in a single communication? One possibility is to copy these elements to a temporary array T(0) T(1) T(2) before sending with MPI_Send. This method, however, requires an inefficient copy of the non-contiguous data.
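A minimal sketch of this temporary-array approach, assuming an every-other-element pattern for the red elements; V's contents, dest, and tag are illustrative:

    double V[10];                      /* source vector (values omitted) */
    double T[5];                       /* temporary contiguous buffer */
    int dest = 1, tag = 0;             /* illustrative destination and tag */
    for (int i = 0; i < 5; i++)
        T[i] = V[2 * i];               /* gather every other element */
    MPI_Send(T, 5, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD);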

Non-struct Derived Data Types. The MPI library provides routines that are well suited to array- or vector-like data structures: MPI_Type_contiguous, MPI_Type_vector, MPI_Type_indexed. All of these constructors work with a single underlying data type.

MPI_Type_contiguous (C). Constructs a new type consisting of the replication of a data type into contiguous locations. int MPI_Type_contiguous(int count, MPI_Datatype old_type, MPI_Datatype *newtype). (Figure: an old type replicated count = 4 times forms the new type.)

mpi_type_contiguous (Fortran). Constructs a new type consisting of the replication of a data type into contiguous locations. MPI_TYPE_CONTIGUOUS(count, old_type, newtype, ierr). (Figure: an old type replicated count = 4 times forms the new type.)

MPI_Type_contiguous. In C, a matrix created with static memory allocation, e.g. double A[3][3], occupies contiguous memory, stored row by row: (0,0) (0,1) (0,2) (1,0) (1,1) (1,2) (2,0) (2,1) (2,2). Fortran stores the same matrix column by column: (0,0) (1,0) (2,0) (0,1) (1,1) (2,1) (0,2) (1,2) (2,2).
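A quick way to convince yourself of the row-major claim; this sketch relies only on standard C array layout:

    #include <assert.h>
    double A[3][3];
    /* Row-major, contiguous storage: the element that follows
       A[0][2] in memory is A[1][0]. */
    assert(&A[0][2] + 1 == &A[1][0]);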

MPI_Type_contiguous (C). For double A[3][3] stored row by row, (0,0) (0,1) (0,2) (1,0) (1,1) (1,2) (2,0) (2,1) (2,2), a row type is built with count = 3, old_type = MPI_DOUBLE, new_type = rowtype: MPI_Type_contiguous(int count, MPI_Datatype old_type, MPI_Datatype *newtype);
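Putting the pieces together, a minimal sketch that builds rowtype and sends one row of A in a single message; dest and tag are illustrative, and MPI_Type_commit and MPI_Type_free are covered later in these slides:

    double A[3][3];
    MPI_Datatype rowtype;
    int dest = 1, tag = 0;
    MPI_Type_contiguous(3, MPI_DOUBLE, &rowtype);    /* one row = 3 doubles */
    MPI_Type_commit(&rowtype);
    MPI_Send(&A[1][0], 1, rowtype, dest, tag, MPI_COMM_WORLD);  /* send row 1 */
    MPI_Type_free(&rowtype);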

mpi_type_contiguous (Fortran). Fortran stores the matrix column by column, so a column, e.g. A(0,0), A(1,0), A(2,0), is contiguous in memory. A column type is built with count = 3, old_type = MPI_REAL, new_type = columntype: call mpi_type_contiguous(count, old_type, newtype, ierr)

Handling Non-Contiguous Data. How can we send only the red elements of V(0) V(1) V(2) V(3) V(4) V(5) V(6) V(7) V(8) V(9) while avoiding a copy of the non-contiguous data to a temporary array? Define a new data type, in this case a vector (vType) with a stride of two over the original.

MPI_Type_vector. Similar to contiguous, but allows regular gaps (a stride) in the displacements. (Figure, data constructor with MPI_Type_vector: from an old type of MPI_INT, count = 2 blocks of blocklength = 3 elements, with block starts stride = 5 elements apart, form the new type.)
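The pictured layout as code, a sketch; newtype is an illustrative name:

    MPI_Datatype newtype;
    /* 2 blocks of 3 MPI_INTs each, block starts 5 elements apart. */
    MPI_Type_vector(2, 3, 5, MPI_INT, &newtype);
    MPI_Type_commit(&newtype);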

Suppose we want to send a column to each task instead of a row. In C, double A[3][3] with static allocation is stored contiguously, row by row: (0,0) (0,1) (0,2) (1,0) (1,1) (1,2) (2,0) (2,1) (2,2). The elements of a column are therefore not adjacent in memory, and we can use MPI_Type_vector to create a vector (strided) data type.

MPI_Type_vector. For double A[3][3] stored row by row, one column is described by count = 3, blocklength = 1, stride = 3.

MPI_Type_vector (C). int MPI_Type_vector(int count, int blocklength, int stride, MPI_Datatype old_type, MPI_Datatype *new_type);
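A sketch of the column case from the previous slide, with dest and tag as illustrative placeholders: one column of the row-major 3x3 matrix is 3 blocks of 1 double, successive elements 3 doubles apart.

    double A[3][3];
    MPI_Datatype columntype;
    int dest = 1, tag = 0;
    MPI_Type_vector(3, 1, 3, MPI_DOUBLE, &columntype);
    MPI_Type_commit(&columntype);
    MPI_Send(&A[0][0], 1, columntype, dest, tag, MPI_COMM_WORLD);  /* column 0 */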

mpi_type_vector (Fortran). MPI_TYPE_VECTOR(count, blocklength, stride, old_type, new_type, ierr)

MPI_Type_vector (C): sending the reds of V(0) V(1) V(2) V(3) V(4) V(5). With count = 3, blocklength = 1, stride = 2, old_type = MPI_DOUBLE, new_type = vType:
MPI_Type_vector(count, blocklength, stride, old_type, &vType);
MPI_Type_commit(&vType);
MPI_Send(&V[2], 1, vType, dest, tag, MPI_COMM_WORLD);
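On the receive side the same derived type can scatter the incoming values back into strided locations, since the type map applies to receives as well as sends. A sketch using the vType constructed above; R, source, and tag are illustrative:

    double R[10];
    MPI_Status status;
    int source = 0, tag = 0;
    /* The 3 received doubles land at R[2], R[4], R[6]. */
    MPI_Recv(&R[2], 1, vType, source, tag, MPI_COMM_WORLD, &status);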

mpi_type_vector (Fortran): sending the reds. With count = 3, blocklength = 1, stride = 2, old_type = MPI_INTEGER, new_type = vtype:
call mpi_type_vector(count, blocklength, stride, old_type, vtype, ierr)
call mpi_type_commit(vtype, ierr)
call mpi_send(V(2), 1, vtype, dest, tag, MPI_COMM_WORLD, ierr)

MPI_Type_indexed. The indexed constructor allows one to specify a non-contiguous data layout where the displacements between successive blocks need not be equal. Example: double A[4][4], with elements (0,0) (0,1) (0,2) (0,3) (1,0) (1,1) (1,2) (1,3) (2,0) (2,1) (2,2) (2,3) (3,0) (3,1) (3,2) (3,3).

MPI_Type_indexed. This allows: gathering arbitrary entries from an array and sending them in one message, or receiving one message and scattering the received entries into arbitrary locations in an array.

MPI_Type_indexed (C). int MPI_Type_indexed(int count, int blocklength[], int indices[], MPI_Datatype old_type, MPI_Datatype *newtype). count: number of blocks; blocklength: number of elements in each block; indices: displacement of each block, measured in number of elements.

mpi_type_indexed (Fortran). MPI_TYPE_INDEXED(count, blocklength, indices, old_type, newtype, ierr). count: number of blocks; blocklength: number of elements in each block; indices: displacement of each block, measured in number of elements.

MPI_Type_indexed. (Figure, old type to new type with count = 3: blen[0] = 2, blen[1] = 3, blen[2] = 1; indices[0] = 0, indices[1] = 3, indices[2] = 8.)

MPI_Type_indexed. Suppose we have a matrix A(4x4), double A[4][4], and we want to send the upper triangular part. With old type = MPI_DOUBLE, new type = upper, count = 4, blocklen[] = {4, 3, 2, 1}, indices[] = {0, 5, 10, 15}:
MPI_Type_indexed(count, blocklen, indices, MPI_DOUBLE, &upper);
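A complete sketch for the upper-triangle case; dest and tag are illustrative placeholders. Note that the C call takes the new type by address:

    double A[4][4];
    MPI_Datatype upper;
    int blocklen[4] = {4, 3, 2, 1};
    int indices[4]  = {0, 5, 10, 15};
    int dest = 1, tag = 0;
    MPI_Type_indexed(4, blocklen, indices, MPI_DOUBLE, &upper);
    MPI_Type_commit(&upper);
    MPI_Send(&A[0][0], 1, upper, dest, tag, MPI_COMM_WORLD);
    MPI_Type_free(&upper);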

MPI_Type_commit. Commits the new datatype to the system; required for all user-constructed (derived) data types before they are used in communication. C: int MPI_Type_commit(MPI_Datatype *datatype). Fortran: MPI_TYPE_COMMIT(datatype, ierr)

MPI_Type_free. Deallocates the specified data type object. Using this routine is especially important to prevent memory exhaustion when many data type objects are created, as in a loop. C: int MPI_Type_free(MPI_Datatype *datatype). Fortran: MPI_TYPE_FREE(datatype, ierr)
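The typical lifecycle, especially inside a loop, is construct, commit, use, free. A sketch; count, blocklen, stride, and nsteps are placeholders:

    for (int step = 0; step < nsteps; step++) {
        MPI_Datatype t;
        MPI_Type_vector(count, blocklen, stride, MPI_DOUBLE, &t);
        MPI_Type_commit(&t);
        /* ... communication using t ... */
        MPI_Type_free(&t);   /* free each step to avoid exhausting type objects */
    }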

Packing Different Data. Sometimes users need to send non-contiguous data of mixed types in a single package. MPI lets them explicitly pack data into a contiguous buffer before sending it, and unpack it from a contiguous buffer after receiving it. Several messages can be successively packed into one packing unit.

MPI_Pack (C). int MPI_Pack(void *packdata, int count, MPI_Datatype datatype, void *buffer, int size, int *position, MPI_Comm comm). packdata: data to be buffered; count: number of input data items; datatype: datatype of each input data item; buffer: output buffer start; size: size of buffer, in bytes; position: current position in buffer, in bytes; comm: communicator for the packed message.

mpi_pack (Fortran). MPI_PACK(packdata, count, datatype, buffer, size, position, comm, ierr). Arguments as in C: packdata is the data to be buffered, count the number of input data items, datatype the type of each item, buffer the output buffer start, size the buffer size in bytes, position the current position in the buffer in bytes, and comm the communicator for the packed message.

Packing Data. Example: a char array with 25 elements, char c[25] = "Today is wonderful!", and an integer array with 3 elements, int date[3] = {18, 7, 2007}. With char taking 1 byte and int taking 4 bytes, both arrays fit into a single 37-byte buffer (25*1 + 3*4).

Packing Data (C). At the beginning, position = 0.
MPI_Pack(c, 25, MPI_CHAR, buffer, 37, &position, comm); position is updated by MPI_Pack to 25.
MPI_Pack(date, 3, MPI_INT, buffer, 37, &position, comm); position is updated by MPI_Pack to 37.
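The whole sending side in one sketch; dest and tag are illustrative, and MPI_COMM_WORLD stands in for the generic comm above:

    char c[25] = "Today is wonderful!";
    int  date[3] = {18, 7, 2007};
    char buffer[37];                   /* 25*1 + 3*4 = 37 bytes */
    int  position = 0;
    int  dest = 1, tag = 0;
    MPI_Pack(c, 25, MPI_CHAR, buffer, 37, &position, MPI_COMM_WORLD);   /* position: 0 -> 25 */
    MPI_Pack(date, 3, MPI_INT, buffer, 37, &position, MPI_COMM_WORLD);  /* position: 25 -> 37 */
    MPI_Send(buffer, position, MPI_PACKED, dest, tag, MPI_COMM_WORLD);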

Packing Data (Fortran). At the beginning, position = 0.
call MPI_PACK(c, 25, MPI_CHARACTER, buffer, 37, position, comm, ierr); position is updated to 25.
call MPI_PACK(date, 3, MPI_INTEGER, buffer, 37, position, comm, ierr); position is updated to 37.

Sending Packed Data. The MPI_Send function is used to send packed data, with MPI_PACKED as the datatype. The size of the data must be specified in bytes; here the buffer size is 37 bytes.

Sending Packed Data (C). MPI_Send(buffer, position, MPI_PACKED, dest, tag, comm);

Sending Packed Data (Fortran). call MPI_SEND(buffer, position, MPI_PACKED, dest, tag, comm, ierr)

Receiving Packed Data. The MPI_Recv function is used to receive packed data, with MPI_PACKED as the datatype. The size of the data must be specified in bytes.

Receiving Packed Data (C). MPI_Recv(Rbuffer, 37, MPI_PACKED, source, tag, comm, &status);

Receiving Packed Data (Fortran). call MPI_RECV(Rbuffer, 37, MPI_PACKED, source, tag, comm, status, ierr)

MPI_Unpack (C). int MPI_Unpack(void *buffer, int size, int *position, void *packdata, int count, MPI_Datatype datatype, MPI_Comm comm). buffer: input buffer start; size: size of buffer, in bytes; position: current position in buffer, in bytes; packdata: output buffer start; count: number of items to be unpacked; datatype: datatype of each output data item; comm: communicator for the packed message.

mpi_unpack (Fortran). MPI_UNPACK(buffer, size, position, packdata, count, datatype, comm, ierr). Arguments as in C: buffer is the input buffer start, size the buffer size in bytes, position the current position in the buffer in bytes, packdata the output buffer start, count the number of items to be unpacked, datatype the type of each output item, and comm the communicator for the packed message.

MPI_Unpack. The output buffer can be any communication buffer allowed in MPI_Recv. The input buffer is a contiguous storage area containing size bytes, starting at address buffer. The input value of position is the first location in the buffer occupied by the packed message; position is incremented by the size of the packed message, so that its output value is the first location in the buffer after the locations occupied by the message that was unpacked.

Unpacking Packed Data (C). MPI_Unpack(Rbuffer, 37, &position, Rc, 25, MPI_CHAR, comm); initial value position = 0, updated to position = 25. The result is char Rc[25]: "Today is wonderful!".

Unpacking Packed Data (Fortran). call MPI_UNPACK(Rbuffer, 37, position, Rc, 25, MPI_CHARACTER, comm, ierr); initial value position = 0, updated to position = 25.

Unpacking Packed Data (C). MPI_Unpack(Rbuffer, 37, &position, Rdate, 3, MPI_INT, comm); initial value position = 25, updated to position = 37. The result is int Rdate[3]: {18, 7, 2007}.
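The whole receiving side in one sketch, unpacking in the same order the data was packed; source and tag are illustrative, and MPI_COMM_WORLD stands in for the generic comm above:

    char Rbuffer[37];
    char Rc[25];
    int  Rdate[3];
    int  position = 0;
    int  source = 0, tag = 0;
    MPI_Status status;
    MPI_Recv(Rbuffer, 37, MPI_PACKED, source, tag, MPI_COMM_WORLD, &status);
    MPI_Unpack(Rbuffer, 37, &position, Rc, 25, MPI_CHAR, MPI_COMM_WORLD);   /* position: 0 -> 25 */
    MPI_Unpack(Rbuffer, 37, &position, Rdate, 3, MPI_INT, MPI_COMM_WORLD);  /* position: 25 -> 37 */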

Unpacking Packed Data (Fortran). call MPI_UNPACK(Rbuffer, 37, position, Rdate, 3, MPI_INTEGER, comm, ierr); initial value position = 25, updated to position = 37.

Programming Activities. Write parallel MPI codes using the following routines: contiguous, indexed, vector, and pack/unpack.