CS 838: Pervasive Parallelism. Introduction to MPI. Copyright 2005 Mark D. Hill, University of Wisconsin-Madison. Slides are derived from an online tutorial from Lawrence Livermore National Laboratory. Thanks!

2 (C) 2005 CS 838 Outline Introduction to MPI MPI programming 101 Point-to-Point Communication Collective Communication MPI Environment References

3 (C) 2005 CS 838 Introduction to MPI Message Passing –A collection of co-operating processes –Running on different machines / executing different code –Communicating through a standard interface. Message Passing Interface (MPI) –A library standard established to facilitate portable, efficient programs using message passing. –Vendor independent and supported across a large number of platforms.

4 (C) 2005 CS 838 Introduction to MPI Fairly large set of primitives (129 functions), but only a small set of routines is used regularly. MPI routines –Environment Setup –Point-to-Point Communication –Collective Communication –Virtual Topologies –Data Type definitions –Group and Communicator management

5 (C) 2005 CS 838 Outline Introduction to MPI MPI programming 101 Point-to-Point Communication Collective Communication MPI Environment References

6 (C) 2005 CS 838 MPI programming 101 HelloWorld.c

#include <stdio.h>
#include "mpi.h"

int main( int argc, char **argv )
{
    int myid, num_procs;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &num_procs );
    MPI_Comm_rank( MPI_COMM_WORLD, &myid );

    printf( "Hello world from process %d of %d\n", myid, num_procs );

    MPI_Finalize();
    return 0;
}

Compile and run:
mpicc -o hello HelloWorld.c
mpirun -np 16 hello

7 (C) 2005 CS 838 MPI Programming 101 Generic MPI program structure: MPI include file, init MPI environment, MPI message passing calls, terminate MPI environment.

8 (C) 2005 CS 838 MPI Programming 101
MPI include file: #include "mpi.h"
MPI_Init(&argc, &argv); Initialize the MPI execution environment
MPI_Comm_size(MPI_COMM_WORLD, &num_procs); Determine the number of processes in the group
MPI_Comm_rank(MPI_COMM_WORLD, &myid); Get my rank among the processes
MPI_Finalize(); Terminate the MPI environment
Compile script: mpicc (compile and link MPI C programs)
Execute script: mpirun (run MPI programs)

9 (C) 2005 CS 838 Outline Introduction to MPI MPI programming 101 Point-to-Point Communication Collective Communication MPI Environment References

10 (C) 2005 CS 838 Point-to-Point Communication Message passing between two processes Types of communication –Synchronous send –Blocking send / blocking receive –Non-blocking send / non-blocking receive –Buffered send –Combined send/receive –"Ready" send Any type of send can be paired with any type of receive

11 (C) 2005 CS 838 Point-to-Point Communication

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int numtasks, rank, dest, source, rc, count, tag = 1;
    char inmsg, outmsg = 'x';
    MPI_Status Stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        dest = 1; source = 1;
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    } else if (rank == 1) {
        dest = 0; source = 0;
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }

    printf("Task %d: Received %c from task %d with tag %d \n",
           rank, inmsg, Stat.MPI_SOURCE, Stat.MPI_TAG);

    MPI_Finalize();
    return 0;
}

12 (C) 2005 CS 838 Point-to-Point Communication MPI_Send –Basic blocking send operation. Routine returns only after the application buffer in the sending task is free for reuse. int MPI_Send ( void *send_buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm ) MPI_Recv –Receive a message and block until the requested data is available in the application buffer in the receiving task. int MPI_Recv ( void *recv_buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status )

13 (C) 2005 CS 838 Point-to-Point Communication (Figure: a message {src, dest, tag, data} travels from Task 0's send_buf to Task 1's recv_buf.) Push-based communication. Wildcards are allowed on the receiver side for src and tag. The MPI_Status object can be queried for information about a received message.

14 (C) 2005 CS 838 Point-to-Point Communication Blocking vs. Non-blocking –Blocking: Send routine "returns" only after it is safe to modify the send buffer for reuse. Receive "returns" after the data has arrived and is ready for use by the program. –Non-blocking: Send and receive routines return almost immediately. They do not wait for any communication events to complete, such as message copying from user memory to system buffer space or the actual arrival of the message. Buffering –System buffer space is managed by the library. –Can impact performance. –User-managed buffering is also possible. Order and Fairness –Order: MPI guarantees that messages between a given sender and receiver do not overtake each other; they are received in the order they were sent. –Fairness: MPI does not guarantee fairness.

15 (C) 2005 CS 838 Point-to-Point Communication MPI_Ssend –Synchronous blocking send: send a message and block until the application buffer in the sending task is free for reuse and the destination process has started to receive the message. MPI_Bsend –Permits the programmer to allocate the required amount of buffer space into which data can be copied until it is delivered. MPI_Isend –Non-blocking send. Returns to the user without requiring a matching receive at the destination. Does NOT mean the send buffer can be reused immediately; completion must be checked with MPI_Wait or MPI_Test.
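Below is a minimal, illustrative sketch (not from the original slides) of a non-blocking exchange between exactly two processes: each rank posts MPI_Irecv and MPI_Isend, could overlap useful computation, and then calls MPI_Waitall before touching the buffers.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, other, sendval, recvval;
    MPI_Request reqs[2];
    MPI_Status stats[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;          /* assumes exactly two processes */
    sendval = rank;

    /* post the receive first, then the send; neither call blocks */
    MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

    /* useful computation could overlap the communication here */

    MPI_Waitall(2, reqs, stats);   /* only now are the buffers safe to reuse */
    printf("Rank %d received %d\n", rank, recvval);

    MPI_Finalize();
    return 0;
}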

16 (C) 2005 CS 838 MPI Datatypes Predefined elementary datatypes, e.g. MPI_CHAR, MPI_INT, MPI_LONG, MPI_FLOAT Derived datatypes are also possible –Contiguous –Vector –Indexed –Struct Enables grouping of data for communication
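As a hedged illustration (not part of the original slides), a vector type can describe a strided column of a row-major matrix so the column can be sent with a single call; the name column_t below is made up for the example.

#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, i;
    double a[4][4];
    MPI_Datatype column_t;      /* hypothetical name for the new type */
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 4 blocks of 1 double, each 4 doubles apart: one column of a 4x4 matrix */
    MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column_t);
    MPI_Type_commit(&column_t);

    if (rank == 0) {
        for (i = 0; i < 4; i++) a[i][2] = (double) i;
        MPI_Send(&a[0][2], 1, column_t, 1, 0, MPI_COMM_WORLD);   /* send column 2 */
    } else if (rank == 1) {
        MPI_Recv(&a[0][2], 1, column_t, 0, 0, MPI_COMM_WORLD, &stat);
    }

    MPI_Type_free(&column_t);
    MPI_Finalize();
    return 0;
}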

17 (C) 2005 CS 838 Outline Introduction to MPI MPI programming 101 Point-to-Point Communication Collective Communication MPI Environment References

18 (C) 2005 CS 838 Collective Communication Types of Communication –Synchronization - processes wait until all members of the group have reached the synchronization point. –Data Movement - broadcast, scatter/gather, all to all. –Collective Computation (reductions) - one member of the group collects data from the other members and performs an operation (min, max, add, multiply, etc.) on that data. All collective communication is blocking. It is the user's responsibility to make sure all processes in the group participate. Collectives work only with MPI predefined datatypes.

19 (C) 2005 CS 838 Collective Communication MPI_Barrier To create a barrier synchronization in a group. int MPI_Barrier (MPI_Comm comm) MPI_Bcast Broadcasts a message from the process with rank "root" to all other processes of the group. Caveat: Receiving processes should also call this function to receive the broadcast. int MPI_Bcast ( void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm )
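A small, illustrative sketch (not from the original slides): rank 0 sets a value and broadcasts it; note that every rank, the root included, makes the identical MPI_Bcast call.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, n = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 100;                            /* e.g. read from input on the root */

    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* same call on all ranks */

    printf("Rank %d sees n = %d\n", rank, n);
    MPI_Finalize();
    return 0;
}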

20 (C) 2005 CS 838 Collective Communication MPI_Scatter Distributes distinct messages from a single source task to each task in the group. int MPI_Scatter ( void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm )

21 (C) 2005 CS 838 Collective Communication MPI_Gather Gathers distinct messages from each task in the group to a single destination task int MPI_Gather ( void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm )
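To make MPI_Scatter and MPI_Gather from the two slides above concrete, here is an illustrative sketch (not from the original slides) that assumes exactly 4 processes: root 0 scatters one integer to each rank, each rank doubles its value, and root 0 gathers the results back in rank order.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, mine;
    int sendbuf[4] = {10, 20, 30, 40};
    int recvbuf[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* each of the 4 ranks receives one element of sendbuf from root 0 */
    MPI_Scatter(sendbuf, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    mine *= 2;
    /* root 0 collects one element from every rank, ordered by rank */
    MPI_Gather(&mine, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Gathered: %d %d %d %d\n",
               recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);

    MPI_Finalize();
    return 0;
}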

22 (C) 2005 CS 838 Collective Communication MPI_Allgather Concatenation of data to all tasks in a group. Each task in the group, in effect, performs a one-to-all broadcasting operation within the group int MPI_Allgather ( void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm )

23 (C) 2005 CS 838 Collective Communication MPI_Reduce Applies a reduction operation on all tasks in the group and places the result in one task. int MPI_Reduce ( void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm ) –Ops: MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD etc.
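An illustrative sketch (not from the original slides): every rank contributes its rank number, and MPI_Reduce with MPI_SUM leaves the total on rank 0.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, num_procs, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* combine one int per rank with MPI_SUM; the result lands on root 0 */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", num_procs - 1, sum);

    MPI_Finalize();
    return 0;
}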

24 (C) 2005 CS 838 Outline Introduction to MPI MPI programming 101 Point-to-Point Communication Collective Communication MPI Environment References

25 (C) 2005 CS 838 MPI Environment Communicators and Groups –Used to determine which processes may communicate with each other. –A group is an ordered set of processes. Each process in a group is associated with a unique integer rank. Rank values start at zero and go to N-1, where N is the number of processes in the group. –A communicator encompasses a group of processes that may communicate with each other. MPI_COMM_WORLD is the default communicator that includes all processes. –Groups can be created manually using MPI group-manipulation routines or by using MPI topology-definition routines.
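As an aside not covered on the slide, MPI_Comm_split is one common way to derive new communicators. The sketch below splits MPI_COMM_WORLD into even-rank and odd-rank communicators; the variable names are made up for the example.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int world_rank, sub_rank, color;
    MPI_Comm subcomm;                /* hypothetical name */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    color = world_rank % 2;          /* 0 = even ranks, 1 = odd ranks */
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);
    MPI_Comm_rank(subcomm, &sub_rank);

    printf("World rank %d is rank %d in sub-communicator %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}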

26 (C) 2005 CS 838 MPI Environment MPI_Init Initializes the MPI execution environment. This function must be called in every MPI program and must be called before any other MPI function. MPI_Comm_size Determines the number of processes in the group associated with a communicator. MPI_Comm_rank Determines the rank of the calling process within the communicator. MPI_Wtime Returns the elapsed wall-clock time in seconds on the calling processor. MPI_Finalize Terminates the MPI execution environment. This function should be the last MPI routine called in every MPI program.
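A small, illustrative timing sketch (not from the original slides): a barrier lines the ranks up, MPI_Wtime is read before and after the region of interest, and rank 0 reports the difference.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int myid;
    double t_start, t_end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    MPI_Barrier(MPI_COMM_WORLD);     /* start everyone from roughly the same point */
    t_start = MPI_Wtime();

    /* ... region to be timed ... */

    t_end = MPI_Wtime();
    if (myid == 0)
        printf("Elapsed: %f seconds\n", t_end - t_start);

    MPI_Finalize();
    return 0;
}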

28 (C) 2005 CS 838 Final Comments Debugging MPI programs –Standard debuggers such as gdb can be attached to an MPI process. Profiling MPI programs –Building wrappers –MPI timers –Generating log files –Viewing log files Refer to online tutorials for more information.

29 (C) 2005 CS 838 References MPI web pages at Argonne National Laboratory MPI online reference MPI tutorial at Lawrence Livermore National Laboratory