Parallel Programming with Message-Passing Interface (MPI)


1 Parallel Programming with Message-Passing Interface (MPI)
An Introduction
Rajkumar Buyya
Grid Computing and Distributed Systems (GRIDS) Laboratory
The University of Melbourne, Melbourne, Australia

2 Message-Passing Programming Paradigm
Each processor in a message-passing program runs a sub-program written in a conventional sequential language:
- all variables are private
- processes communicate via special subroutine calls
(The slide's diagram shows processors P, each with its own memory M, connected by an interconnection network.)

3 MPI Slides are Derived from
- Dirk van der Knijff, High Performance Parallel Programming, PPT slides
- MPI Notes, Maui HPC Centre
- Melbourne Advanced Research Computing Centre

4 Single Program Multiple Data
- Introduced in data-parallel programming (HPF)
- Same program runs everywhere
- A restriction on the general message-passing model
- Some vendors only support SPMD parallel programs
- Usual way of writing MPI programs
- The general message-passing model can be emulated

5 SPMD examples (pseudocode)

main(int argc, char **argv)
{
    if (process is to become Master) {
        MasterRoutine(/* arguments */);
    } else {   /* it is a worker process */
        WorkerRoutine(/* arguments */);
    }
}
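In a real MPI program the master/worker split above is usually made on the process rank. A minimal sketch, keeping the slide's placeholder names MasterRoutine and WorkerRoutine as empty stubs:

#include <mpi.h>

void MasterRoutine(void) { /* coordinate the workers (stub) */ }
void WorkerRoutine(void) { /* do a share of the work (stub) */ }

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)            /* by convention, rank 0 becomes the master */
        MasterRoutine();
    else
        WorkerRoutine();
    MPI_Finalize();
    return 0;
}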

6 Messages
Messages are packets of data moving between sub-programs.
The message-passing system has to be told the following information:
- sending processor
- source location
- data type
- data length
- receiving processor(s)
- destination location
- destination size

7 Messages
Access: each sub-program needs to be connected to a message-passing system.
Addressing: messages need to have addresses to be sent to.
Reception: it is important that the receiving process is capable of dealing with the messages it is sent.
A message-passing system is similar to a post office, phone line, fax, etc.
Message types: point-to-point, collective, synchronous (telephone) / asynchronous (postal).

8 Point-to-Point Communication
- Simplest form of message passing
- One process sends a message to another
- Several variations on how sending a message can interact with execution of the sub-program

9 Point-to-Point variations
- Synchronous sends provide information about the completion of the message (e.g. fax machines)
- Asynchronous sends only know when the message has left (e.g. post cards)
- Blocking operations only return from the call when the operation has completed
- Non-blocking operations return straight away; the program can test/wait later for completion

10 Collective Communications
- Collective communication routines are higher-level routines involving several processes at a time
- They can be built out of point-to-point communications
- Barriers synchronise processes
- Broadcast: one-to-many communication
- Reduction operations combine data from several processes to produce a single (usually) result
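A small sketch of these collective calls, assuming the standard MPI C bindings: broadcast an integer from rank 0, combine partial values back onto rank 0 with a reduction, and synchronise with a barrier.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, n = 0, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) n = 100;                        /* only the root holds the value initially */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* broadcast: one-to-many */

    int mine = rank * n;                           /* each process computes a partial value */
    MPI_Reduce(&mine, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);  /* reduction onto rank 0 */

    MPI_Barrier(MPI_COMM_WORLD);                   /* barrier: all processes synchronise here */
    if (rank == 0) printf("sum = %d\n", sum);
    MPI_Finalize();
    return 0;
}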

11 Message Passing Systems
- Initially each manufacturer developed their own; wide range of features, often incompatible
- Several groups developed systems for workstations
- PVM (Parallel Virtual Machine): de facto standard before MPI
  - open source (NOT public domain!)
  - user interface to the system (daemons)
  - support for dynamic environments

12 MPI Forum - www.mpi-forum.org
- Sixty people from forty different organisations
- Both users and vendors, from the US and Europe
- Two-year process of proposals, meetings and review
- Produced a document defining a standard Message Passing Interface (MPI):
  - to provide source-code portability
  - to allow efficient implementation
  - it provides a high level of functionality
  - support for heterogeneous parallel architectures
  - parallel I/O (in MPI 2.0)
- MPI 1.0 contains over 115 routines/functions that can be grouped into 8 categories.

13 General MPI Program Structure
- MPI include file
- Initialise MPI environment
- Do work and perform message communication
- Terminate MPI environment

14 MPI programs MPI is a library - there are NO language changes
Header files
  C: #include <mpi.h>
MPI function format
  C: error = MPI_Xxxx(parameter, ...);
     MPI_Xxxx(parameter, ...);

15 MPI helloworld.c

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int numtasks, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello World from process %d of %d\n", rank, numtasks);
    MPI_Finalize();
    return 0;
}

16 Example - C

#include <mpi.h>
/* include other usual header files */

main(int argc, char **argv)
{
    /* initialize MPI */
    MPI_Init(&argc, &argv);
    /* main part of program */
    /* terminate MPI */
    MPI_Finalize();
    exit(0);
}

17 Handles
- MPI controls its own internal data structures
- MPI releases 'handles' to allow programmers to refer to these
- C handles are of distinct typedef'd types and arrays are indexed from 0
- Some arguments can be of any type; in C these are declared as void *

18 Initializing MPI
- The first MPI routine called in any MPI program must be MPI_Init.
- The C version accepts the arguments to main:
  int MPI_Init(int *argc, char ***argv);
- MPI_Init must be called by every MPI program
- Making multiple MPI_Init calls is erroneous
- MPI_INITIALIZED is an exception to the first rule

19 MPI_COMM_WORLD
- MPI_Init defines a communicator called MPI_COMM_WORLD for every process that calls it.
- All MPI communication calls require a communicator argument
- MPI processes can only communicate if they share a communicator
- A communicator contains a group, which is a list of processes
- Each process has its rank within the communicator
- A process can have several communicators

20 Communicators
- MPI uses objects called communicators that define which collection of processes may communicate with each other.
- Every process has a unique integer identifier (rank) assigned by the system when the process initialises. A rank is sometimes called a process ID.
- Processes can request information from a communicator:
  MPI_Comm_rank(MPI_Comm comm, int *rank)
    Returns the rank of the process in comm
  MPI_Comm_size(MPI_Comm comm, int *size)
    Returns the size of the group in comm

21 Finishing up
- An MPI program should call MPI_Finalize when all communications have completed.
- Once called, no other MPI calls can be made.
- Aborting: MPI_Abort(comm, errorcode) attempts to abort all processes listed in comm; if comm = MPI_COMM_WORLD, the whole program terminates.

22 MPI Programs Compilation and Execution
Let us look at the MARC Alpha cluster.

23 Manjra: GRIDS Lab Linux Cluster
Master node: manjra.cs.mu.oz.au
- Dual Xeon 2 GHz
- 512 MB memory
- 250 GB integrated storage
- Gigabit LAN
- CD-ROM and floppy drives
- Red Hat Linux release 7.3 (Valhalla)
Worker nodes (node1..node13) - each of the 13 worker nodes consists of the following:
- Pentium 4 2 GHz
- 40 GB hard disk
(The slide's diagram shows the master manjra.cs.mu.oz.au connected to the internal worker nodes node1, node2, ..., node13 of the Manjra Linux cluster.)

24 How the legion cluster looks
(Photographs of the cluster: front view and back view.)

25 A legion cluster viewed from an angle

26 Compile and Run Commands
Compile:
  manjra> mpicc helloworld.c -o helloworld
Run (the -np flag gives the number of processes):
  manjra> mpirun -np 3 -machinefile machines.list helloworld
The file machines.list contains the node list:
  manjra.cs.mu.oz.au
  node1
  node2
  node3
  node4
  node6
(node5 and node7 are not working today!)

27 Sample Run and Output
A run with 3 processes:
  manjra> mpirun -np 3 -machinefile machines.list helloworld
  Hello World from process 0 of 3
  Hello World from process 1 of 3
  Hello World from process 2 of 3
A run by default:
  manjra> helloworld
  Hello World from process 0 of 1

28 Sample Run and Output
A run with 6 processes:
  manjra> mpirun -np 6 -machinefile machines.list helloworld
  Hello World from process 0 of 6
  Hello World from process 3 of 6
  Hello World from process 1 of 6
  Hello World from process 5 of 6
  Hello World from process 4 of 6
  Hello World from process 2 of 6
Note: process execution need not be in process-number order.

29 Sample Run and Output
Another run with 6 processes:
  manjra> mpirun -np 6 -machinefile machines.list helloworld
  Hello World from process 0 of 6
  Hello World from process 3 of 6
  Hello World from process 1 of 6
  Hello World from process 2 of 6
  Hello World from process 5 of 6
  Hello World from process 4 of 6
Note: the output order changes between runs. For each run the process-to-machine mapping can differ, and the machines may carry different loads; hence the difference.

30 Hello World with Error Check
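The slide's code listing is not reproduced in this transcript. As a minimal sketch of what an error-checked hello world typically looks like, the return code of MPI_Init is compared against MPI_SUCCESS and the program aborts on failure:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int numtasks, rank, rc;

    rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "Error starting MPI program. Terminating.\n");
        MPI_Abort(MPI_COMM_WORLD, rc);   /* abort every process in the communicator */
    }
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello World from process %d of %d\n", rank, numtasks);
    MPI_Finalize();
    return 0;
}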

31 MPI Routines

32 MPI Routines – C and Fortran
- Environment management
- Point-to-point communication
- Collective communication
- Process group management
- Communicators
- Derived types
- Virtual topologies
- Miscellaneous routines

33 Environment Management Routines

34 Point-to-Point Communication

35 Collective Communication Routines

36 Process Group Management Routines

37 Communicators Routines

38 Derived Type Routines

39 Virtual Topologies Routines

40 Miscellaneous Routines

41 MPI Messages
- A message contains a number of elements of some particular datatype
- MPI datatypes:
  - basic types
  - derived types
- Derived types can be built up from basic types
- C types are different from Fortran types

42 MPI Basic Datatypes - C
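The datatype table on this slide is not reproduced in the transcript. The basic C datatypes defined by the MPI standard are the following (the corresponding C type is in parentheses):
- MPI_CHAR (signed char)
- MPI_SHORT (signed short int)
- MPI_INT (signed int)
- MPI_LONG (signed long int)
- MPI_UNSIGNED_CHAR (unsigned char)
- MPI_UNSIGNED_SHORT (unsigned short int)
- MPI_UNSIGNED (unsigned int)
- MPI_UNSIGNED_LONG (unsigned long int)
- MPI_FLOAT (float)
- MPI_DOUBLE (double)
- MPI_LONG_DOUBLE (long double)
- MPI_BYTE
- MPI_PACKED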

43 Point-to-Point Communication
- Communication between two processes
- Source process sends a message to the destination process
- Communication takes place within a communicator
- The destination process is identified by its rank in the communicator
- MPI provides four communication modes for sending messages: standard, synchronous, buffered, and ready
- Only one mode for receiving

44 Standard Send
- Completes once the message has been sent (note: it may or may not have been received)
- Programs should obey the following rules:
  - do not assume the send will complete before the receive begins - this can lead to deadlock
  - do not assume the send will complete after the receive begins - this can lead to non-determinism
  - processes should be eager readers - they should guarantee to receive all messages sent to them - else network overload
- Can be implemented as either a buffered send or a synchronous send

45 Standard Send (cont.)
MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
- buf: the address of the data to be sent
- count: the number of elements of datatype that buf contains
- datatype: the MPI datatype
- dest: rank of the destination in communicator comm
- tag: a marker used to distinguish different message types
- comm: the communicator shared by sender and receiver
- ierror: the Fortran return value of the send

46 Standard Blocking Receive
- Note: all sends so far have been blocking (but this only makes a difference for synchronous sends)
- Completes when the message has been received
  MPI_Recv(buf, count, datatype, source, tag, comm, status)
  - source: rank of the source process in communicator comm
  - status: returns information about the message
- Synchronous blocking message passing:
  - processes synchronise
  - the sender process specifies the synchronous mode
  - blocking: both processes wait until the transaction has completed

47 For a communication to succeed
- Sender must specify a valid destination rank
- Receiver must specify a valid source rank
- The communicator must be the same
- Tags must match
- Message types must match
- Receiver's buffer must be large enough
- Receiver can use wildcards:
  - MPI_ANY_SOURCE
  - MPI_ANY_TAG
  - the actual source and tag are returned in the status parameter

48 Standard/Blocked Send/Receive

49 MPI Send/Receive a Character (cont...)
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, dest, source, rc, tag = 1;
    char inmsg, outmsg = 'X';
    MPI_Status Stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        dest = 1;
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        printf("Rank0 sent: %c\n", outmsg);
        source = 1;
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    }

50 MPI Send/Receive a Character
    else if (rank == 1) {
        source = 0;
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
        printf("Rank1 received: %c\n", inmsg);
        dest = 0;
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }
    MPI_Finalize();
}

51 Synchronous Send
- Completes when the message has been received
- Effect is to synchronise the sender and receiver
- Deadlocks if there is no receiver
- Safer than standard send, but may be slower
  MPI_Ssend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
- All parameters as for standard send
- Fortran equivalent as usual (plus ierror)

52 Buffered Send
- Guarantees to complete immediately
- Copies the message to a buffer if necessary
- To use buffered mode the user must explicitly attach buffer space:
  MPI_Bsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
  MPI_Buffer_attach(void *buf, int size)
- Only one buffer can be attached at any one time
- Buffers can be detached:
  MPI_Buffer_detach(void *buf, int *size)
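A minimal sketch of buffered-mode sending, assuming two processes: rank 0 attaches a small buffer, sends one int with MPI_Bsend, then detaches the buffer (detaching blocks until the buffered message has been delivered).

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, data = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* attach explicit buffer space before using buffered mode */
        int bufsize = MPI_BSEND_OVERHEAD + sizeof(int);
        char *buffer = malloc(bufsize);
        MPI_Buffer_attach(buffer, bufsize);

        MPI_Bsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* completes immediately */

        MPI_Buffer_detach(&buffer, &bufsize);  /* waits until buffered messages are delivered */
        free(buffer);
    } else if (rank == 1) {
        MPI_Status stat;
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &stat);
        printf("rank 1 received %d\n", data);
    }
    MPI_Finalize();
    return 0;
}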

53 Ready Send
- Completes immediately
- Guaranteed to succeed if the receive is already posted
- Outcome is undefined if no receive is posted
- May improve performance
- Requires careful attention to messaging patterns
  MPI_Rsend(buf, count, datatype, dest, tag, comm)
(The slide's diagram shows process 1 posting a non-blocking receive with tag 0 and a blocking receive with tag 1; process 0 then issues a synchronous send with tag 1 followed by a ready send with tag 0, and process 1 finally tests the non-blocking receive.)
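The safe pattern is to make sure the matching receive is posted before the ready send is issued. The slide's diagram does this with a synchronous-send handshake; the sketch below uses a barrier instead, which gives the same guarantee (it assumes exactly two processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data = 7, recvd = 0;
    MPI_Request req;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1)   /* post the receive before reaching the barrier */
        MPI_Irecv(&recvd, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

    MPI_Barrier(MPI_COMM_WORLD);   /* after this, rank 1's receive is known to be posted */

    if (rank == 0) {
        MPI_Rsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* safe: receive already posted */
    } else if (rank == 1) {
        MPI_Wait(&req, &stat);
        printf("rank 1 received %d\n", recvd);
    }
    MPI_Finalize();
    return 0;
}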

54 Communication Envelope Information
- Envelope information is returned from MPI_Recv as status
- Information includes:
  - Source: status.MPI_SOURCE or status(MPI_SOURCE)
  - Tag: status.MPI_TAG or status(MPI_TAG)
  - Count: MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
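A short sketch of reading the envelope: rank 1 receives with the wildcards MPI_ANY_SOURCE and MPI_ANY_TAG, then inspects the status fields and MPI_Get_count (assumes at least two processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value, count;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 99;
        MPI_Send(&value, 1, MPI_INT, 1, 5, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive from anyone, with any tag, then inspect the envelope */
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);   /* elements actually received */
        printf("source=%d tag=%d count=%d\n",
               status.MPI_SOURCE, status.MPI_TAG, count);
    }
    MPI_Finalize();
    return 0;
}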

55 Point-to-Point Rules
Message order preservation:
- messages do not overtake each other
- this is true even for non-synchronous sends
- i.e. if process A posts two sends and process B posts matching receives, then they will complete in the order they were sent
Progress:
- it is not possible for a matching send and receive pair to remain permanently outstanding
- it is possible for a third process to match one of the pair

56 Non Blocking Message Passing
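The slide's listing is not in the transcript. As a hedged sketch, non-blocking message passing is usually expressed with MPI_Isend/MPI_Irecv to start the transfers and MPI_Wait/MPI_Waitall to complete them; this example assumes exactly two processes exchanging one int each:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, sendval, recvval;
    MPI_Request reqs[2];
    MPI_Status  stats[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    sendval = rank;

    int partner = (rank == 0) ? 1 : 0;   /* assumes exactly two processes */

    /* start both transfers without blocking */
    MPI_Irecv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... useful computation could overlap with communication here ... */

    MPI_Waitall(2, reqs, stats);         /* block until both transfers complete */
    printf("rank %d received %d\n", rank, recvval);

    MPI_Finalize();
    return 0;
}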

57 Exercise: Ping Pong
- Write a program in which two processes repeatedly pass a message back and forth.
- Insert timing calls to measure the time taken for one message.
- Investigate how the time taken varies with the size of the message.

58 A simple Ping Pong.c (cont..)
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, dest, source, rc, tag = 1;
    char pingmsg[10];
    char pongmsg[10];
    char buff[100];
    MPI_Status Stat;

    strcpy(pingmsg, "ping");
    strcpy(pongmsg, "pong");
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {   /* Send Ping, Receive Pong */
        dest = 1;
        source = 1;
        /* Why + 1? strlen() excludes the terminating '\0'; +1 sends it as well */
        rc = MPI_Send(pingmsg, strlen(pingmsg)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        rc = MPI_Recv(buff, strlen(pongmsg)+1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
        printf("Rank0 Sent: %s & Received: %s\n", pingmsg, buff);
    }

59 A simple Ping Pong.c

    else if (rank == 1) {   /* Receive Ping, Send Pong */
        dest = 0;
        source = 0;
        rc = MPI_Recv(buff, strlen(pingmsg)+1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
        printf("Rank1 received: %s & Sending: %s\n", buff, pongmsg);
        rc = MPI_Send(pongmsg, strlen(pongmsg)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }
    MPI_Finalize();
}

60 Timers
C: double MPI_Wtime(void);
- Returns an elapsed wall-clock time in seconds (double precision) on the calling processor
- Time is measured in seconds
- Time to perform a task is measured by consulting the time before and after
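A minimal timing fragment to place between MPI_Init and MPI_Finalize, for example around one ping-pong exchange from the exercise (printf assumes stdio.h is included):

double start, elapsed;

start = MPI_Wtime();                    /* wall-clock time before the task */
/* ... the work to be timed goes here ... */
elapsed = MPI_Wtime() - start;          /* elapsed time in seconds */
printf("elapsed time: %f seconds\n", elapsed);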

61 mpich on legion cluster
- Compile with mpicc or mpif90 (don't need -lmpi)
- Run with: qsub -q pque <jobscript>
  where jobscript is:
    #PBS -np=2
    mpirun <progname>

62 mpich
After the MPI standard was announced, a portable implementation, mpich, was produced by ANL (Argonne National Lab, Chicago, US). It consists of:
- Libraries and include files: libmpi, mpi.h
- Compilers: mpicc, mpif90. These know about things like where the relevant include and library files are.
    mpicc helloworld.c -o helloworld
- Runtime loader: mpirun
  - has arguments -np <number of nodes> and -machinefile <file of nodenames>
  - implements the SPMD paradigm by starting a copy of the program on each node. The program must therefore do any differentiation itself (using the MPI_Comm_size() and MPI_Comm_rank() functions).
    mpirun -np 3 -machinefile machines.list helloworld
NOTE: our version gets the number of CPUs and their addresses from PBS (i.e., don't use -np and/or -machinefile).

63 PBS
- PBS is a batch system - jobs get submitted to a queue
- The job is a shell script to execute your program
- The shell script can contain job management instructions (note that these instructions can also be given on the command line)
- PBS will allocate your job to some other computer, log in as you, and execute your script, i.e. your script must contain cd's or absolute references to access files (or globus objects)
- Useful PBS commands:
  - qsub - submits a job
  - qstat - monitors status
  - qdel - deletes a job from a queue

64 PBS directives
Some PBS directives to insert at the start of your shell script:
  #PBS -q <queuename>
  #PBS -e <filename>            (stderr location)
  #PBS -o <filename>            (stdout location)
  #PBS -eo                      (combines stderr and stdout)
  #PBS -t <seconds>             (maximum time)
  #PBS -l <attribute>=<value>   (e.g. -l nodes=2)
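Putting the directives together, a sample job script might look like the following. This is only a sketch: the queue name pque and the nodes=2 request are taken from the earlier slides, and the output filename is made up; adjust them for your site.

#!/bin/sh
#PBS -q pque
#PBS -l nodes=2
#PBS -o helloworld.out
#PBS -eo
# PBS runs this script on the allocated node; change back to the directory
# the job was submitted from before starting the program
cd $PBS_O_WORKDIR
mpirun helloworld

Submit the script with qsub and monitor it with qstat, as listed on the previous slide.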

65 MPI Programs Compilation and Execution
Let us look at the MARC Alpha cluster.

66 Melbourne Advanced Research Computing (MARC) Alpha Cluster
Exclusive nodes (cnet1..cnet16) (parallel jobs only) - each is a Compaq Personal Workstation 600au:
- 600 MHz 21164 AXP CPU - 96 KByte internal cache
- 2 MByte external cache
- 192 MByte memory
- 4.3 GByte Ultrawide SCSI disc
- 100 Mbps Ethernet
(The slide's diagram shows the legion Alpha cluster: legion.hpc.unimelb.edu.au as the front end, with nodes cnet1.hpc.unimelb.edu.au ... cnet16.hpc.unimelb.edu.au behind it.)

67 How the legion cluster looks
(Photographs of the cluster: front view and back view.)

68 A legion cluster viewed from an angle

69 Compile and Run Commands
Compile:
  legion> mpicc helloworld.c -o helloworld
Run (the -np flag gives the number of processes):
  legion> mpirun -np 3 -machinefile machines.list helloworld
The file machines.list contains the node list:
  legion.hpc.unimelb.edu.au
  cnet1.hpc.unimelb.edu.au
  cnet2.hpc.unimelb.edu.au
  cnet3.hpc.unimelb.edu.au
  cnet4.hpc.unimelb.edu.au
  cnet5.hpc.unimelb.edu.au

70 Sample Run and Output
A run with 3 processes:
  legion> mpirun -np 3 -machinefile machines.list helloworld
  Hello World from process 0 of 3
  Hello World from process 1 of 3
  Hello World from process 2 of 3
A run by default:
  legion> helloworld
  Hello World from process 0 of 1

71 Sample Run and Output
A run with 6 processes:
  legion> mpirun -np 6 -machinefile machines.list helloworld
  Hello World from process 0 of 6
  Hello World from process 3 of 6
  Hello World from process 1 of 6
  Hello World from process 5 of 6
  Hello World from process 4 of 6
  Hello World from process 2 of 6
Note: process execution need not be in process-number order.

72 Sample Run and Output
Another run with 6 processes:
  legion> mpirun -np 6 -machinefile machines.list helloworld
  Hello World from process 0 of 6
  Hello World from process 3 of 6
  Hello World from process 1 of 6
  Hello World from process 2 of 6
  Hello World from process 5 of 6
  Hello World from process 4 of 6
Note: the output order changes between runs. For each run the process-to-machine mapping can differ, and the machines may carry different loads; hence the difference.

