Introduction to Message Passing Interface (MPI), Part I. SoCal Annual AAP Workshop. San Diego Supercomputer Center at the University of California, San Diego.


Introduction to Message Passing Interface (MPI), Part I. SoCal Annual AAP Workshop, October 30, 2006. San Diego Supercomputer Center at the University of California, San Diego.

Overview
- MPI: background and key concepts
- Compilers and include files for MPI
- Initialization
- Basic communications in MPI
- Data types
- Summary of the 6 basic MPI calls
- Example using the basic MPI calls

Message Passing Interface: Background
- MPI = Message Passing Interface
- Library standard defined by a committee of vendors, implementers, and parallel programmers
- Used to create parallel SPMD programs based on message passing
- Available on almost all parallel machines, in C and Fortran
- About 125 routines, including advanced routines; 6 basic routines

MPI Implementations
- Most parallel machine vendors have optimized versions
- Some other popular implementations include MPICH and LAM/MPI

MPI: Key Concepts
- Parallel SPMD programs based on message passing
- A universal multiprocessor model that maps well onto separate processors connected by a fast or slow network
- Normally the same program runs on several different nodes; the nodes communicate by passing messages
- Message passing lets the programmer explicitly associate specific data with processes, which allows the compiler and cache-management hardware to function fully

MPI Compilers on SDSC Machines
- DataStar (based on IBM xlf & xlc compilers):
  FORTRAN: mpxlf_r, mpxlf90_r, mpxlf95_r
  C: mpcc_r
  C++: mpCC_r
- Blue Gene (based on IBM xlf & xlc compilers):
  FORTRAN: mpxlf, mpxlf90, mpxlf95
  C: mpcc
  C++: mpCC
- TG IA-64 cluster (based on Intel ifort & icc compilers):
  FORTRAN: mpif77, mpif90
  C: mpicc
  C++: mpiCC

MPI Include Files
- The MPI include file:
  C: mpi.h
  Fortran: mpif.h (an f90 module is a good place for this)
- Defines many constants used within MPI programs
- In C, also defines the interfaces for the functions
- The compilers know where to find the include files

Initialization

!cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc!
! This code illustrates the use of some basic MPI routines     !
!   MPI_INIT      : Initialization of MPI                      !
!   MPI_COMM_RANK : Find the ID of given task                  !
!   MPI_COMM_SIZE : Find the total number of tasks             !
!   MPI_FINALIZE  : Close all MPI tasks                        !
!cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc!
      PROGRAM init
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER :: my_id, ntasks, ierr

      CALL MPI_INIT( ierr )
      CALL MPI_COMM_RANK( MPI_COMM_WORLD, my_id, ierr )
      CALL MPI_COMM_SIZE( MPI_COMM_WORLD, ntasks, ierr )
      WRITE(*,*) "I am task number", my_id,
     +           ". The total number of tasks is", ntasks
      CALL MPI_FINALIZE(ierr)
      END PROGRAM init

Initialization: Compiling, Submitting, and Output

COMPILING CODE
  ds100 % mpxlf -o init initial.f
  ** init   === End of Compilation 1 ===
  Compilation successful for file initial.f.

SUBMIT JOB
  ds100 % llsubmit LL.cmd
  Found valid account 'USE300' for queue 'express'
  Using ACL 'sdsc_datastar:mahidhar:use300:sstrnp' on DataStar NPACI nodes: all queues
  Job passed jobfilter
  llsubmit: Processed command file through Submit Filter: "/users00/loadl/loadl/jobfilter-interactive.pl".
  llsubmit: The job "ds " has been submitted.

CODE OUTPUT (in output file)
  0: I am task number 0 . The total number of tasks is 4
  1: I am task number 1 . The total number of tasks is 4
  2: I am task number 2 . The total number of tasks is 4
  3: I am task number 3 . The total number of tasks is 4

Communicators
- A parameter for most MPI calls
- A collection of processors working on some part of a parallel job
- MPI_COMM_WORLD is defined in the MPI include file as all of the processors in your job
- Subsets of MPI_COMM_WORLD can be created (see the sketch below)
- Processors within a communicator are assigned numbers 0 to n-1
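Subset communicators are not demonstrated in the original slides. The following is a minimal C sketch, assuming the standard MPI_Comm_split routine, that splits MPI_COMM_WORLD into even-rank and odd-rank groups; the names color and sub_comm are illustrative only.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split MPI_COMM_WORLD into two subsets: even ranks and odd ranks */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

    /* Each process gets a new rank, 0 to n-1, within its sub-communicator */
    MPI_Comm_rank(sub_comm, &sub_rank);
    printf("World rank %d has rank %d in sub-communicator %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}

Within each subset the ranks are renumbered 0 to n-1, so a sub-communicator can be passed to any MPI call in place of MPI_COMM_WORLD.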

Minimal MPI Program (C version)

Every MPI program needs these:
  #include <mpi.h>                      /* the mpi include file */
  /* Initialize MPI */
  ierr = MPI_Init(&argc, &argv);
  ...
  ierr = MPI_Finalize();

Commonly used:
  /* How many total PEs are there? */
  ierr = MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
  /* What node am I (what is my rank?) */
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &iam);

In C, MPI routines are functions and return an error value.

Minimal MPI Program (Fortran version)

Every MPI program needs these:
      include 'mpif.h'        ! MPI include file
c     Initialize MPI
      call MPI_Init(ierr)
c     Find total number of PEs
      call MPI_Comm_size(MPI_COMM_WORLD, nPEs, ierr)
c     Find the rank of this node
      call MPI_Comm_rank(MPI_COMM_WORLD, iam, ierr)
      ...
      call MPI_Finalize(ierr)

In Fortran, MPI routines are subroutines, and the last parameter is an error value.

Basic Communications in MPI
- Data values are transferred from one processor to another:
  one process sends the data, another receives it
- Standard, blocking: the call does not return until the message buffer is free to be reused
- Standard, nonblocking: one call starts the send or receive, and another call is made later to determine whether it has finished (see the sketch below)
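The slides do not include a nonblocking example. A minimal C sketch follows, assuming exactly two tasks exchanging one integer each: MPI_Irecv and MPI_Isend only start the transfers, and MPI_Waitall is the second call that waits until they have completed.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, other, sendbuf, recvbuf;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    other = 1 - rank;            /* assumes exactly 2 tasks */
    sendbuf = 100 + rank;

    /* Start the receive and the send; neither call blocks */
    MPI_Irecv(&recvbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... other work could be done here while the messages are in flight ... */

    /* Wait until both operations have completed */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d received %d\n", rank, recvbuf);

    MPI_Finalize();
    return 0;
}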

Standard, Blocking Send
- MPI_Send: sends data to another processor
- Use MPI_Recv to "get" the data
- C:
    MPI_Send(&buffer, count, datatype, destination, tag, communicator);
- Fortran:
    call MPI_Send(buffer, count, datatype, destination, tag, communicator, ierr)
- The call blocks until the message is on its way

MPI_Send
  call MPI_Send(buffer, count, datatype, destination, tag, communicator, ierr)
- buffer: the data
- count: length of the source array (in elements, 1 for scalars)
- datatype: type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
- destination: processor number of the destination processor in the communicator
- tag: message type (arbitrary integer)
- communicator: your set of processors
- ierr: error return (Fortran only)

Standard, Blocking Receive
- The call blocks until the message is in the buffer
- C:
    MPI_Recv(&buffer, count, datatype, source, tag, communicator, &status);
- Fortran:
    call MPI_Recv(buffer, count, datatype, source, tag, communicator, status, ierr)
- status contains information about the incoming message
  - C: MPI_Status status;
  - Fortran: integer status(MPI_STATUS_SIZE)

Status
- In C, status is a structure of type MPI_Status which contains three fields: MPI_SOURCE, MPI_TAG, and MPI_ERROR
  - status.MPI_SOURCE, status.MPI_TAG, and status.MPI_ERROR contain the source, tag, and error code, respectively, of the received message
- In Fortran, status is an array of INTEGERs of length MPI_STATUS_SIZE, and the constants MPI_SOURCE, MPI_TAG, and MPI_ERROR are the indices of the entries that store the source, tag, and error code
  - status(MPI_SOURCE), status(MPI_TAG), and status(MPI_ERROR) contain, respectively, the source, the tag, and the error code of the received message

MPI_Recv
  call MPI_Recv(buffer, count, datatype, source, tag, communicator, status, ierr)
- buffer: the data
- count: maximum number of elements that can be received
- datatype: type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
- source: processor number of the source processor in the communicator
- tag: message type (arbitrary integer)
- communicator: your set of processors
- status: information about the message
- ierr: error return (Fortran only)
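As an illustration (not part of the original slides), the following C sketch shows how the count and status arguments work together: the receiver posts room for up to 100 integers, accepts any source and tag via the MPI_ANY_SOURCE and MPI_ANY_TAG wildcards, and then reads the actual source, tag, and element count from status (using MPI_Get_count). The tag value 42 and the 3-element message are made up for the example.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, nvals, buffer[100];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int data[3] = {10, 20, 30};
        MPI_Send(data, 3, MPI_INT, 1, 42, MPI_COMM_WORLD);   /* 3 ints to rank 1 */
    } else if (rank == 1) {
        /* count (100) is only an upper bound on what may arrive */
        MPI_Recv(buffer, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &nvals);    /* how many actually arrived */
        printf("got %d ints from rank %d with tag %d\n",
               nvals, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}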

Data Types
- When sending a message, it is given a data type
- Predefined types correspond to "normal" types:
  - MPI_REAL, MPI_FLOAT: Fortran real and C float
  - MPI_DOUBLE_PRECISION, MPI_DOUBLE: Fortran double precision and C double
  - MPI_INTEGER, MPI_INT: Fortran and C integer
- User-defined types can also be created (see the sketch below)
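User-defined types are not demonstrated in the slides. A minimal C sketch follows, assuming the standard MPI_Type_contiguous / MPI_Type_commit routines, that builds a type bundling 3 doubles; the type name triple_t is illustrative only.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    double point[3] = {0.0, 0.0, 0.0};
    MPI_Datatype triple_t;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Define a new datatype (3 contiguous doubles) and commit it before use */
    MPI_Type_contiguous(3, MPI_DOUBLE, &triple_t);
    MPI_Type_commit(&triple_t);

    if (rank == 0) {
        double xyz[3] = {1.0, 2.0, 3.0};
        MPI_Send(xyz, 1, triple_t, 1, 0, MPI_COMM_WORLD);   /* one "triple" element */
    } else if (rank == 1) {
        MPI_Recv(point, 1, triple_t, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 got %f %f %f\n", point[0], point[1], point[2]);
    }

    MPI_Type_free(&triple_t);
    MPI_Finalize();
    return 0;
}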

Basic MPI Send and Receive
A parallel program to send and receive data:
- Initialize MPI
- Have processor 0 send an integer to processor 1
- Have processor 1 receive an integer from processor 0
- Both processors print the data
- Quit MPI

Simple Send & Receive Program

#include <stdio.h>
#include "mpi.h"

/************************************************************
 This is a simple send/receive program in MPI
 ************************************************************/
int main(int argc, char *argv[])
{
    int myid;
    int tag, source, destination, count;
    int buffer;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

Simple Send & Receive Program (cont.)

    tag = 1234;
    source = 0;
    destination = 1;
    count = 1;

    if (myid == source) {
        buffer = 5678;
        MPI_Send(&buffer, count, MPI_INT, destination, tag, MPI_COMM_WORLD);
        printf("processor %d sent %d\n", myid, buffer);
    }
    if (myid == destination) {
        MPI_Recv(&buffer, count, MPI_INT, source, tag, MPI_COMM_WORLD, &status);
        printf("processor %d got %d\n", myid, buffer);
    }

    MPI_Finalize();
}

Simple Send & Receive Program (Fortran)

      program send_recv
      include "mpif.h"
!     This is MPI send - recv program
      integer myid, ierr, numprocs
      integer tag, source, destination, count
      integer buffer
      integer status(MPI_STATUS_SIZE)

      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      tag = 1234
      source = 0
      destination = 1
      count = 1

Simple Send & Receive Program (cont.)

      if (myid .eq. source) then
         buffer = 5678
         call MPI_Send(buffer, count, MPI_INTEGER, destination, &
              tag, MPI_COMM_WORLD, ierr)
         write(*,*) "processor ", myid, " sent ", buffer
      endif
      if (myid .eq. destination) then
         call MPI_Recv(buffer, count, MPI_INTEGER, source, &
              tag, MPI_COMM_WORLD, status, ierr)
         write(*,*) "processor ", myid, " got ", buffer
      endif
      call MPI_FINALIZE(ierr)
      stop
      end

Simple Send & Receive Program: Output
Run the simple send & receive code with 2 processors. Output:
  ds100 % more LL_out
  0: processor 0 sent 5678
  1: processor 1 got 5678

The 6 Basic MPI Calls
- MPI is used to create parallel programs based on message passing
- Usually the same program is run on multiple processors
- The 6 basic calls in MPI are:
  - MPI_INIT( ierr )
  - MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
  - MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
  - MPI_Send(buffer, count, MPI_INTEGER, destination, tag, MPI_COMM_WORLD, ierr)
  - MPI_Recv(buffer, count, MPI_INTEGER, source, tag, MPI_COMM_WORLD, status, ierr)
  - MPI_FINALIZE(ierr)

Example MPI Program Using Basic Routines: the 1-D heat equation example on the following slides is built from just these six calls.

Simple Application using MPI: 1-D Heat Equation
- Solve ∂T/∂t = α ∂²T/∂x² with boundary conditions T(0) = 0 and T(1) = 0, for 0 ≤ x ≤ 1; T(x,0) is given as an initial condition.
- Discretizing for the numerical solution gives
    T_i^(n+1) - T_i^(n) = (α Δt / Δx²) (T_(i-1)^(n) - 2 T_i^(n) + T_(i+1)^(n))
  where n is the index in time and i is the index in space. Solving for T_i^(n+1) gives the explicit update applied at each interior point in the code below:
    T_i^(n+1) = T_i^(n) + (α Δt / Δx²) (T_(i-1)^(n) - 2 T_i^(n) + T_(i+1)^(n))
- In this example we solve the problem using 11 points, distributed over 3 processors.
  [Figure: the 11 grid points split across Processor 0, Processor 1, and Processor 2, with overlapping points at the subdomain boundaries]

Simple Application using MPI: 1-D Heat Equation (domain decomposition)

Processor 0:
- Value at point 0 is known; get point 4 from processor 1; solve the equation at points 1, 2, 3; send point 3 to processor 1
- Local data index:  ilocal  = 0, 1, 2, 3, 4
- Global data index: iglobal = 0, 1, 2, 3, 4

Processor 1:
- Get point 3 from processor 0; get point 7 from processor 2; solve the equation at points 4, 5, 6; send point 4 to processor 0 and point 6 to processor 2
- Local data index:  ilocal  = 0, 1, 2, 3, 4
- Global data index: iglobal = 3, 4, 5, 6, 7

Processor 2:
- Get point 6 from processor 1; solve the equation at points 7, 8, 9; value at point 10 is known; send point 7 to processor 1
- Local data index:  ilocal  = 0, 1, 2, 3, 4
- Global data index: iglobal = 6, 7, 8, 9, 10
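The Fortran program on the next slides implements this exchange with separate, carefully ordered MPI_SEND/MPI_RECV calls. Purely as an illustration (this is not the routine used in the original code), the same halo exchange for a rank with neighbors on both sides can be written with MPI_Sendrecv, which pairs a send and a receive in one deadlock-free call; the array t, the ghost-point layout, and NLOC are assumptions of this sketch.

/* Halo exchange for a rank with both a left and a right neighbor.
   t[0] and t[NLOC-1] are ghost points; t[1] .. t[NLOC-2] are owned. */
#include <mpi.h>
#define NLOC 5

void exchange_halo(double t[NLOC], int left, int right) {
    MPI_Status status;

    /* send my rightmost owned point to the right neighbor,
       receive my left ghost point from the left neighbor */
    MPI_Sendrecv(&t[NLOC-2], 1, MPI_DOUBLE, right, 0,
                 &t[0],      1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, &status);

    /* send my leftmost owned point to the left neighbor,
       receive my right ghost point from the right neighbor */
    MPI_Sendrecv(&t[1],      1, MPI_DOUBLE, left,  0,
                 &t[NLOC-1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, &status);
}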

FORTRAN CODE: 1-D Heat Equation

      PROGRAM HEATEQN
      include "mpif.h"
      implicit none
      integer :: iglobal, ilocal, itime
      integer :: ierr, nnodes, my_id
      integer :: IIM, IIK1, IIK2, IIK3, IIK4
      integer :: dest, from, status(MPI_STATUS_SIZE), tag
      integer :: msg_size
      real*8  :: xalp, delx, delt, pi
      real*8  :: T(0:100,0:5), TG(0:10)
      CHARACTER(20) :: FILEN

      delx = 0.1d0
      delt = 1d-4
      xalp = 2.0d0
      tag  = 1    ! message tag (not set in the transcript; added so send/recv tags match)

      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nnodes, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, ierr)
      print *, "Process ", my_id, "of", nnodes, "has started"

************** Initial Conditions *********************************
      pi = 4d0*datan(1d0)
      do ilocal = 0, 4
        iglobal = 3*my_id + ilocal
        T(0,ilocal) = dsin(pi*delx*dfloat(iglobal))
      enddo
      write(*,*) "Processor", my_id,
     +    "has finished setting initial conditions"

************** Iterations ****************************************
      do itime = 1, 3
        if (my_id .eq. 0) then
          write(*,*) "Running Iteration Number ", itime
        endif
        do ilocal = 1, 3
          T(itime,ilocal) = T(itime-1,ilocal) +
     +      xalp*delt/delx/delx*
     +      (T(itime-1,ilocal-1) - 2*T(itime-1,ilocal) +
     +       T(itime-1,ilocal+1))
        enddo
        if (my_id .eq. 0) then
          write(*,*) "Sending and receiving overlap points"
          dest = 1
          msg_size = 1

Fortran Code: 1-D Heat Equation (cont.)

          call MPI_SEND(T(itime,3), msg_size, MPI_DOUBLE_PRECISION,
     +         dest, tag, MPI_COMM_WORLD, ierr)
        endif
        if (my_id .eq. 1) then
          from = 0
          dest = 2
          msg_size = 1
          call MPI_RECV(T(itime,0), msg_size, MPI_DOUBLE_PRECISION,
     +         from, tag, MPI_COMM_WORLD, status, ierr)
          call MPI_SEND(T(itime,3), msg_size, MPI_DOUBLE_PRECISION,
     +         dest, tag, MPI_COMM_WORLD, ierr)
        endif
        if (my_id .eq. 2) then
          from = 1
          dest = 1
          msg_size = 1
          call MPI_RECV(T(itime,0), msg_size, MPI_DOUBLE_PRECISION,
     +         from, tag, MPI_COMM_WORLD, status, ierr)
          call MPI_SEND(T(itime,1), msg_size, MPI_DOUBLE_PRECISION,
     +         dest, tag, MPI_COMM_WORLD, ierr)
        endif
        if (my_id .eq. 1) then
          from = 2
          dest = 0
          msg_size = 1
          call MPI_RECV(T(itime,4), msg_size, MPI_DOUBLE_PRECISION,
     +         from, tag, MPI_COMM_WORLD, status, ierr)
          call MPI_SEND(T(itime,1), msg_size, MPI_DOUBLE_PRECISION,
     +         dest, tag, MPI_COMM_WORLD, ierr)
        endif
        if (my_id .eq. 0) then
          from = 1
          msg_size = 1
          call MPI_RECV(T(itime,4), msg_size, MPI_DOUBLE_PRECISION,
     +         from, tag, MPI_COMM_WORLD, status, ierr)
        endif
      enddo

      if (my_id .eq. 0) then
        write(*,*) "SOLUTION SENT TO FILE AFTER 3 TIMESTEPS:"
      endif
      FILEN = 'data'//char(my_id+48)//'.dat'
      open (5, file=FILEN)
      write(5,*) "Processor ", my_id
      do ilocal = 0, 4
        iglobal = 3*my_id + ilocal
        write(5,*) "ilocal=", ilocal, ";iglobal=", iglobal,
     +             ";T=", T(3,ilocal)
      enddo
      close(5)
      call MPI_FINALIZE(ierr)
      END

Simple Application using MPI: 1-D Heat Equation (compiling and running)
Compilation (Fortran):
  mpxlf -O3 heat-1d.f
Running interactively on DataStar:
  Log on to dspoe.sdsc.edu
  poe a.out -nodes 1 -tasks_per_node 3

Simple Application using MPI: 1-D Heat Equation (output)

OUTPUT FROM SAMPLE PROGRAM
  Process 0 of 3 has started
  Processor 0 has finished setting initial conditions
  Process 1 of 3 has started
  Processor 1 has finished setting initial conditions
  Process 2 of 3 has started
  Processor 2 has finished setting initial conditions
  Running Iteration Number 1
  Sending and receiving overlap points
  Running Iteration Number 2
  Sending and receiving overlap points
  Running Iteration Number 3
  Sending and receiving overlap points
  SOLUTION SENT TO FILE AFTER 3 TIMESTEPS:

Simple Application using MPI: 1-D Heat Equation (data files)

ds100 % more data0.dat
 Processor 0
 ilocal= 0 ;iglobal= 0 ;T= E+00
 ilocal= 1 ;iglobal= 1 ;T=
 ilocal= 2 ;iglobal= 2 ;T=
 ilocal= 3 ;iglobal= 3 ;T=
 ilocal= 4 ;iglobal= 4 ;T=
ds100 % more data1.dat
 Processor 1
 ilocal= 0 ;iglobal= 3 ;T=
 ilocal= 1 ;iglobal= 4 ;T=
 ilocal= 2 ;iglobal= 5 ;T=
 ilocal= 3 ;iglobal= 6 ;T=
 ilocal= 4 ;iglobal= 7 ;T=
ds100 % more data2.dat
 Processor 2
 ilocal= 0 ;iglobal= 6 ;T=
 ilocal= 1 ;iglobal= 7 ;T=
 ilocal= 2 ;iglobal= 8 ;T=
 ilocal= 3 ;iglobal= 9 ;T=
 ilocal= 4 ;iglobal= 10 ;T= E+00

References
- LLNL MPI tutorial
- NERSC MPI tutorial
- LAM MPI tutorials
- Tutorial on MPI: The Message-Passing Interface, by William Gropp