1 Process Groups & Communicators
 A communicator is a group of processes that can communicate with one another.
 Sub-groups of processes, or sub-communicators, can be created.
 Process groups and sub-communicators cannot be created from scratch.
 Sub-groups and sub-communicators must be derived from some existing group or communicator,
   i.e. you need to start from MPI_COMM_WORLD (or MPI_COMM_SELF).

2 Sub-communicators
 (Diagram) A communicator (MPI_Comm) has an associated process group (MPI_Group); a process sub-group defines a sub-communicator.

3 Communicator → Process Group
 MPI_Group is a handle representing a process group in C; in Fortran it is an integer.
 MPI_GROUP_EMPTY: predefined, a group with no members.
 MPI_GROUP_NULL: predefined, an invalid handle.
 MPI_COMM_GROUP gets the group associated with a communicator.

    int MPI_Comm_group(MPI_Comm comm, MPI_Group *group)

    MPI_COMM_GROUP(COMM, GROUP, IERROR)
        integer COMM, GROUP, IERROR

    ...
    MPI_Group group;
    MPI_Comm_group(MPI_COMM_WORLD, &group);
    ...

4 Process Groups
 MPI_Group_size(): number of processes in a group.
 MPI_Group_rank(): rank of the calling process in the group; if the caller does not belong to the group, MPI_UNDEFINED is returned.

    int MPI_Group_size(MPI_Group group, int *size)
    int MPI_Group_rank(MPI_Group group, int *rank)

    int ncpus, rank;
    MPI_Group MPI_GROUP_WORLD;
    ...
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    MPI_Group_size(MPI_GROUP_WORLD, &ncpus);
    MPI_Group_rank(MPI_GROUP_WORLD, &rank);
    ...

5 Group Constructor/Destructor
 MPI_Group_incl: create a new group newgroup consisting of the n processes of group whose ranks are given in ranks[].
 MPI_Group_excl: create a new group newgroup consisting of the processes of group excluding the n processes given in ranks[].

    int MPI_Group_incl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
    int MPI_Group_excl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
    int MPI_Group_free(MPI_Group *group)

    MPI_Group MPI_GROUP_WORLD, slave;
    int ranks[1];
    ranks[0] = 0;
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &slave);
    ...
    MPI_Group_free(&slave);
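To complement the MPI_Group_excl snippet above, here is a minimal sketch using MPI_Group_incl; the function name make_lower_half_group and its behaviour (keeping the lower half of the ranks) are illustrative assumptions, not part of the original slides.

    #include <mpi.h>
    #include <stdlib.h>

    /* Build a group containing ranks 0 .. size/2 - 1 of comm's group. */
    static MPI_Group make_lower_half_group(MPI_Comm comm)
    {
        MPI_Group group, lower;
        int size, i, *ranks;

        MPI_Comm_group(comm, &group);
        MPI_Comm_size(comm, &size);

        ranks = (int *) malloc((size / 2) * sizeof(int));
        for (i = 0; i < size / 2; i++)
            ranks[i] = i;                     /* keep the lower half of the ranks */

        MPI_Group_incl(group, size / 2, ranks, &lower);

        free(ranks);
        MPI_Group_free(&group);               /* 'lower' is independent of 'group' */
        return lower;                         /* caller must MPI_Group_free(&lower) */
    }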

6 Process Group → Communicator
 Only communicators can be used in communication routines.
 MPI_Comm_create(): create a communicator from a process group.
 group must be a sub-group of the group associated with comm.
 Collective operation.

    int MPI_Comm_create(MPI_Comm comm, MPI_Group group, MPI_Comm *newcomm)

    MPI_Group MPI_GROUP_WORLD, slave;
    MPI_Comm slave_comm;
    int ranks[1];
    ranks[0] = 0;
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &slave);
    MPI_Comm_create(MPI_COMM_WORLD, slave, &slave_comm);
    ...
    MPI_Group_free(&slave);
    if (slave_comm != MPI_COMM_NULL)
        MPI_Comm_free(&slave_comm);
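Note that on processes not in the group (rank 0 above), MPI_Comm_create returns MPI_COMM_NULL, so the new communicator should only be used and freed where it is valid. A minimal self-contained sketch of the same pattern (the names slave_group and slave_comm mirror the snippet and are illustrative):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Group world_group, slave_group;
        MPI_Comm slave_comm;
        int ranks[1] = { 0 };               /* exclude rank 0 from the new group */
        int world_rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        MPI_Comm_group(MPI_COMM_WORLD, &world_group);
        MPI_Group_excl(world_group, 1, ranks, &slave_group);

        /* Collective over MPI_COMM_WORLD: every process must call it,
           but only group members get a valid communicator back. */
        MPI_Comm_create(MPI_COMM_WORLD, slave_group, &slave_comm);

        if (slave_comm != MPI_COMM_NULL) {  /* rank 0 receives MPI_COMM_NULL */
            int slave_rank;
            MPI_Comm_rank(slave_comm, &slave_rank);
            printf("world rank %d is slave rank %d\n", world_rank, slave_rank);
            MPI_Comm_free(&slave_comm);
        }

        MPI_Group_free(&slave_group);
        MPI_Group_free(&world_group);
        MPI_Finalize();
        return 0;
    }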

7 Communicator Constructors
 MPI_Comm_dup(): duplicate a communicator.
 Same group of processes, but a different communication context.
 Messages sent on the duplicated communicator cannot be received on the original communicator.
 Used for writing parallel libraries; library communication will not interfere with the user's communication.
 Collective routine; all processes of comm must call it.

    int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *newcomm)
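A typical use is sketched below: a library duplicates the caller's communicator once so that its internal messages can never match the caller's messages. The function names lib_init/lib_finalize and the file-scope lib_comm are illustrative assumptions.

    #include <mpi.h>

    static MPI_Comm lib_comm = MPI_COMM_NULL;   /* private communicator for library traffic */

    /* Called once by every process in user_comm (MPI_Comm_dup is collective). */
    void lib_init(MPI_Comm user_comm)
    {
        MPI_Comm_dup(user_comm, &lib_comm);
    }

    void lib_finalize(void)
    {
        if (lib_comm != MPI_COMM_NULL)
            MPI_Comm_free(&lib_comm);
    }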

8 Communicator Constructors
 MPI_Comm_split(): partition comm into disjoint sub-communicators, one for each value of color.
 Processes providing the same color value belong to the same new communicator newcomm.
 Within newcomm, processes are ranked according to the key values they provide.
 color must be non-negative.
 If MPI_UNDEFINED is given as color, MPI_COMM_NULL is returned in newcomm.

    int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)

    int rank, color;
    MPI_Comm newcomm;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    color = rank % 3;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &newcomm);
    ...
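Extending the snippet above into a self-contained sketch: each process lands in one of (up to) three sub-communicators and can query its new rank and size there. The output format is illustrative only.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, color, new_rank, new_size;
        MPI_Comm newcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        color = rank % 3;        /* processes with equal color share a sub-communicator */
        /* key = rank keeps the original relative ordering inside each sub-communicator;
           passing color = MPI_UNDEFINED instead would give this process MPI_COMM_NULL. */
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &newcomm);

        MPI_Comm_rank(newcomm, &new_rank);
        MPI_Comm_size(newcomm, &new_size);
        printf("world rank %d -> color %d, new rank %d of %d\n",
               rank, color, new_rank, new_size);

        MPI_Comm_free(&newcomm);
        MPI_Finalize();
        return 0;
    }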

9 Process Virtual Topology
 Linear ranking (0, 1, …, N-1) often does not reflect the logical communication pattern of the processes.
 It is desirable to arrange processes logically to reflect the topological pattern underlying the problem geometry or the numerical algorithm, e.g. 2D or 3D grids.
 This logical process arrangement is called a virtual topology.
 A virtual topology does not necessarily reflect the machine's physical topology.
 Virtual topologies are built on top of communicators.

10 Virtual Topology
 Communication patterns can in general be described by a graph:
   Nodes stand for processes.
   Edges connect processes that communicate with each other.
 Many applications use process topologies such as rings and 2D or higher-dimensional grids or tori.
 MPI provides two topology constructors:
   Cartesian topology
   General graph topology

11 Cartesian Topology
 An n-dimensional Cartesian topology is an (m1, m2, m3, …, mn) grid; mi is the number of processes in the i-th direction.
 A process can be identified by its coordinates (j1, j2, j3, …, jn), where 0 <= ji <= mi - 1. Ranks are assigned in row-major order; e.g. in a 2 x 3 grid, the process with coordinates (1, 2) has rank 1*3 + 2 = 5.
 Constructor: MPI_Cart_create()
 Translation:
   process rank → coordinates
   coordinates → process rank

12 Cartesian Constructor
 comm_old: existing communicator.
 ndims: number of dimensions of the Cartesian topology.
 dims: vector of length ndims; number of processes in each direction.
 periods: vector of length ndims; true or false, periodic or not in each direction.
 reorder: true, ranks may be reordered; false, no reordering of ranks.
 comm_cart: new communicator with the Cartesian topology.

    int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims, int *periods, int reorder, MPI_Comm *comm_cart)

    MPI_Comm comm_new;
    int dims[2], periods[2], ncpus;
    MPI_Comm_size(MPI_COMM_WORLD, &ncpus);
    dims[0] = 2;
    dims[1] = ncpus / 2;           // assume ncpus is divisible by 2
    periods[0] = periods[1] = 1;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &comm_new);
    ...
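The example above hard-codes a 2 x (ncpus/2) grid. As an alternative sketch, MPI_Dims_create can factor the process count into a balanced grid (entries left as 0 are filled in by MPI); the helper function make_cart_2d is an illustrative assumption.

    #include <mpi.h>

    /* Build a 2-D periodic Cartesian communicator over all processes of comm. */
    MPI_Comm make_cart_2d(MPI_Comm comm)
    {
        MPI_Comm comm_cart;
        int ncpus, dims[2] = {0, 0}, periods[2] = {1, 1};

        MPI_Comm_size(comm, &ncpus);
        MPI_Dims_create(ncpus, 2, dims);   /* balanced factorization, e.g. 12 -> 4 x 3 */
        MPI_Cart_create(comm, 2, dims, periods, 0, &comm_cart);
        return comm_cart;
    }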

13 Topology Inquiry Functions
 MPI_Cartdim_get returns the number of dimensions of the Cartesian topology.
 MPI_Cart_coords returns the coordinates of the process with the given rank.
 MPI_Cart_rank returns the rank of the process with coordinates *coords.

    int MPI_Cartdim_get(MPI_Comm comm, int *ndims)
    int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int *coords)
    int MPI_Cart_rank(MPI_Comm comm, int *coords, int *rank)

    MPI_Comm comm_cart;
    int ndims, rank, *coords;
    ...                                               // create Cartesian topology on comm_cart
    MPI_Comm_rank(comm_cart, &rank);
    MPI_Cartdim_get(comm_cart, &ndims);
    coords = new int[ndims];
    MPI_Cart_coords(comm_cart, rank, ndims, coords);  // coords now holds this process's coordinates
    ...
    for (int i = 0; i < ndims; i++) coords[i] = 0;
    MPI_Cart_rank(comm_cart, coords, &rank);          // rank of the process at the grid origin