Message Passing and MPI
Laxmikant Kale
CS 320

2 Message Passing
Program consists of independent processes:
–Each runs in its own address space
–Processors have direct access only to their own memory
–Each processor typically executes the same executable, but may be running a different part of the program at any given time
–Special primitives exchange data: send/receive
Early theoretical systems:
–CSP: Communicating Sequential Processes
–A send and the matching receive on another processor both wait
–OCCAM on Transputers used this model
–Performance problems due to unnecessary(?) waiting
Current systems:
–Send operations don't wait for receipt on the remote processor

3 Message Passing
[Figure: a send on PE0 copies data from the sender's buffer into a receive buffer on PE1.]

4 Basic Message Passing
We will describe a hypothetical message passing system:
–with just a few calls that define the model
–Later, we will look at real message passing models (e.g. MPI), with a more complex set of calls
Basic calls:
–send(int proc, int tag, int size, char *buf);
–recv(int proc, int tag, int size, char *buf);
–recv may return the actual number of bytes received on some systems
–tag and proc may be wildcarded in a recv: recv(ANY, ANY, 1000, &buf);
Other global operations:
–broadcast
–reductions

5 Pi with message passing
int count = 0, c, i;
double x, y;

main() {
  int P = maxProcessors();            /* number of processors */
  Seed s = makeSeed(myProcessorNum());
  /* each processor tests its share of the 100,000 random points */
  for (i = 0; i < 100000/P; i++) {
    x = random(s); y = random(s);
    if (x*x + y*y < 1.0) count++;
  }
  /* every processor (including 0) sends its local count to processor 0, tag 1 */
  send(0, 1, 4, (char *)&count);

6 Pi with message passing (contd.)
  if (myProcessorNum() == 0) {
    count = 0;    /* processor 0's own contribution arrives via its send above */
    for (i = 0; i < maxProcessors(); i++) {
      recv(i, 1, 4, (char *)&c);
      count += c;
    }
    printf("pi=%f\n", 4.0*count/100000);   /* assumes P divides 100000 */
  }
} /* end function main */

7 Collective calls
Message passing is often, but not always, used for the SPMD style of programming:
–SPMD: Single Program, Multiple Data
–All processors execute essentially the same program and the same steps, but not in lockstep
–All communication is almost in lockstep
Collective calls:
–global reductions (such as max or sum)
–syncBroadcast (often just called broadcast):
  syncBroadcast(whoAmI, dataSize, dataBuffer);
  whoAmI: sender or receiver

8 Standardization of message passing
Historically:
–nxlib (on Intel hypercubes)
–nCUBE variants
–PVM
–Everyone had their own variants
MPI standard:
–Vendors, ISVs, and academics got together
–with the intent of standardizing current practice
–Ended up with a large standard
–Popular, due to vendor support
–Support for communicators: avoiding tag conflicts, ...
–Support for data types, ...

9 A Simple subset of MPI
These six functions allow you to write many programs:
–MPI_Init
–MPI_Finalize
–MPI_Comm_size
–MPI_Comm_rank
–MPI_Send
–MPI_Recv

10 MPI Process Creation/Destruction
MPI_Init(int *argc, char ***argv)
–Initiates a computation
MPI_Finalize()
–Terminates a computation

11 MPI Process Identification
MPI_Comm_size(comm, &size)
–Determines the number of processes
MPI_Comm_rank(comm, &pid)
–pid is the process identifier of the caller

12 A simple program
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
  int rank, size;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("Hello world! I'm %d of %d\n", rank, size);
  MPI_Finalize();
  return 0;
}

13 MPI Basic Send
MPI_Send(buf, count, datatype, dest, tag, comm)
–buf: address of send buffer
–count: number of elements to send
–datatype: data type of send buffer elements
–dest: process id of the destination process
–tag: message tag (ignore for now)
–comm: communicator (ignore for now)

14 MPI Basic Receive
MPI_Recv(buf, count, datatype, source, tag, comm, &status)
–buf: address of receive buffer
–count: size of receive buffer, in elements
–datatype: data type of receive buffer elements
–source: source process id, or MPI_ANY_SOURCE
–tag and comm: ignore for now
–status: status object
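
To make the send/receive pairing concrete, here is a minimal sketch (not from the original slides) in which rank 0 sends one integer to rank 1 using the argument lists above; the value 42 and tag 0 are purely illustrative.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
  int rank, value;
  MPI_Status status;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
    value = 42;
    /* send one MPI_INT to rank 1, tag 0, on the default communicator */
    MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
    /* receive one MPI_INT from rank 0 (tag 0) into value */
    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    printf("rank 1 received %d\n", value);
  }

  MPI_Finalize();
  return 0;
}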

15 Running an MPI program
Example: mpirun -np 2 hello
–Interacts with a daemon process on the hosts
–Causes a process to be run on each of the hosts
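
(Before it can be launched this way, the program is typically compiled with the MPI implementation's wrapper compiler, commonly mpicc, e.g. mpicc hello.c -o hello; the exact wrapper name and launcher options depend on the installation.)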

16 Other Operations
Collective operations:
–Broadcast
–Reduction
–Scan
–All-to-all
–Gather/Scatter
Support for topologies
Buffering issues: optimizing message passing
Data-type support
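
As an illustration of a collective, here is a hedged sketch (not from the slides) of the earlier pi program redone in real MPI, with MPI_Reduce summing the per-process counts on rank 0; the sample count SAMPLES and the use of srand/rand for random numbers are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define SAMPLES 100000   /* total points to test: an assumption for illustration */

int main(int argc, char *argv[])
{
  int rank, size, i, count = 0, total = 0;
  double x, y;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  srand(rank + 1);                        /* per-process seed */
  for (i = 0; i < SAMPLES / size; i++) {
    x = (double)rand() / RAND_MAX;
    y = (double)rand() / RAND_MAX;
    if (x * x + y * y < 1.0) count++;
  }

  /* sum all local counts into 'total' on rank 0 */
  MPI_Reduce(&count, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

  if (rank == 0)
    printf("pi = %f\n", 4.0 * total / ((SAMPLES / size) * size));

  MPI_Finalize();
  return 0;
}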

17 Example: Jacobi relaxation
Pseudocode:
  A, Anew: N x N 2D arrays of (FP) numbers
  loop (how many times?)
    for each I = 1, N
      for each J = 1, N
        Anew[I,J] = average of A[I,J] and its 4 neighbors
    swap Anew and A
  end loop
Red and blue boundaries are held at fixed values (say, temperatures)
Discretization: divide the space into a grid of cells. For all cells except those on the boundary, iteratively compute the temperature as the average of the neighboring cells' values.
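
For reference, a sequential sketch of the sweep described by the pseudocode above; the grid size, the iteration count, and the particular boundary values are illustrative assumptions.

#include <stdio.h>

#define N     64     /* interior grid size: an illustrative assumption */
#define ITERS 100    /* fixed number of sweeps, as suggested above */

/* rows/columns 0 and N+1 hold the fixed boundary values */
double A[N + 2][N + 2], Anew[N + 2][N + 2];

int main(void)
{
  int i, j, it;

  /* hold two opposite ("red" and "blue") boundaries at fixed values */
  for (i = 0; i < N + 2; i++) { A[i][0] = 0.0; A[i][N + 1] = 1.0; }

  for (it = 0; it < ITERS; it++) {
    for (i = 1; i <= N; i++)
      for (j = 1; j <= N; j++)
        Anew[i][j] = 0.2 * (A[i][j] + A[i-1][j] + A[i+1][j]
                            + A[i][j-1] + A[i][j+1]);
    for (i = 1; i <= N; i++)          /* "swap": copy back, for simplicity */
      for (j = 1; j <= N; j++)
        A[i][j] = Anew[i][j];
  }
  printf("center value after %d sweeps: %f\n", ITERS, A[N/2][N/2]);
  return 0;
}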

18 How to parallelize?
Decide how to decompose the data:
–What options are there? (e.g. with 16 processors)
  Vertically (column strips)
  Horizontally (row strips)
  In square chunks
–Pros and cons of each
Identify the communication needed:
–Let us assume we will run for a fixed number of iterations
–What data do I need from others?
–From whom, specifically?
–Reverse the question: who needs my data?
–Express this with sends and recvs

19 Ghost cells: a common apparition
The data I need from neighbors
–but that I don't modify (and therefore "don't own")
can be stored in my own data structures
–so that my inner loops don't have to know about communication at all
–they can be written as if they were sequential code (see the sketch below)
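
A possible sketch of that idea for a horizontal (row-strip) decomposition, under the assumption that each process stores its 'rows' owned rows plus two ghost rows in a flat array; it uses MPI_Sendrecv, which combines a send with a matching receive and is not covered on these slides.

#include "mpi.h"

/* Fill the two ghost rows before a sweep.  'local' holds (rows + 2) rows of
 * ncols doubles: row 0 and row rows+1 are ghosts, rows 1..rows are owned. */
void exchange_ghost_rows(double *local, int rows, int ncols,
                         int rank, int size, MPI_Comm comm)
{
  MPI_Status status;
  int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;  /* no neighbor: no-op */
  int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

  /* send my first owned row up; receive my bottom ghost row from below */
  MPI_Sendrecv(&local[1 * ncols],          ncols, MPI_DOUBLE, up,   0,
               &local[(rows + 1) * ncols], ncols, MPI_DOUBLE, down, 0,
               comm, &status);
  /* send my last owned row down; receive my top ghost row from above */
  MPI_Sendrecv(&local[rows * ncols],       ncols, MPI_DOUBLE, down, 1,
               &local[0],                  ncols, MPI_DOUBLE, up,   1,
               comm, &status);
}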

20 Comparing the decomposition options
What issues?
–Communication cost
–Restrictions
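
A rough back-of-the-envelope comparison (not from the slides): on an N x N grid with P processors, a strip decomposition (vertical or horizontal) exchanges about 2N boundary values per processor per iteration, while square blocks exchange about 4N/sqrt(P), so blocks communicate less once P exceeds 4, at the cost of more neighbors and more bookkeeping. Strips also restrict you to at most N processors, whereas square blocks allow up to N*N.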

21 How does OpenMP compare?