
1 PVM and MPI

2 Message Passing Paradigm
Each processor in a message passing program runs a separate process (sub-program, task)
written in a conventional sequential language
all variables are private
processes communicate via special subroutine calls

3 Messages Messages are packets of data moving between processes
The message passing system has to be told the following information:
Sending side: sending process, source location, data type, data length
Receiving side: receiving process(es), destination location, destination size

4 What is Master/Slave principle?
The master has control over the running application: it controls all the data, and it calls the slaves to do their work.

PROGRAM
  IF (process = master) THEN
    master-code
  ELSE
    slave-code
  ENDIF
END

5 PVM and MPI Goals
PVM:
A distributed operating system
Portability
Heterogeneity
Handling communication failures

MPI:
A library for writing application programs, not a distributed operating system
Portability
High performance
Heterogeneity
Well-defined behavior
MPI implementations: LAM, MPICH, …

6 MPI - Message Passing Interface
What is MPI?
MPI - Message Passing Interface
MPI is a specification for the developers and users of message passing libraries. By itself, it is NOT a library, but rather the specification of what such a library should be.
A fixed set of processes is created at program initialization; one process is created per processor.
Each process knows its own number (rank).
Each process knows the number of all processes.
Each process can communicate with the other processes.
A process can't create new processes.

7 General MPI Program Structure

8 Groups and Communicators

9 Environment Management Routines
MPI_Init: Initializes the MPI execution environment.
MPI_Comm_size: Determines the number of processes in the group associated with a communicator.
MPI_Comm_rank: Determines the rank of the calling process within the communicator.
MPI_Send / MPI_Isend: Blocking / non-blocking send.
MPI_Recv / MPI_Irecv: Blocking / non-blocking receive.
MPI_Abort: Terminates all MPI processes associated with the communicator.
MPI_Finalize: Terminates the MPI execution environment.

10 Format in C Language

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int numtasks, rank, rc;
    rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        printf("Error starting MPI program. Terminating.\n");
        MPI_Abort(MPI_COMM_WORLD, rc);
    }
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Number of tasks= %d My rank= %d\n", numtasks, rank);
    /******* do some work *******/
    MPI_Finalize();
    return 0;
}

11 Example

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int numtasks, rank, dest, source, rc, count, tag = 1;
    char inmsg, outmsg = 'x';
    MPI_Status Stat;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        dest = 1; source = 1;
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    } else if (rank == 1) {
        dest = 0; source = 0;
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }
    rc = MPI_Get_count(&Stat, MPI_CHAR, &count);
    printf("Task %d: Received %d char(s) from task %d with tag %d\n",
           rank, count, Stat.MPI_SOURCE, Stat.MPI_TAG);
    MPI_Finalize();
    return 0;
}

12 MPI Broadcast

13 MPI Scatter and Gather

14 MPI All to All

15 MPI Allgather

16 MPI Reduce and All Reduce

17 PVM - Parallel Virtual Machine
What is PVM?
PVM - Parallel Virtual Machine
PVM is a software package that allows a heterogeneous collection of workstations (a host pool) to function as a single high-performance (virtual) parallel machine.
PVM, through its virtual machine, provides a simple yet useful distributed operating system.
It has a daemon running on every computer making up the virtual machine.

18 PVM Daemon (pvmd)
The pvmd serves as a message router and controller.
One pvmd runs on each host of a virtual machine.
The first pvmd (started by hand) is designated the master, while the others (started by the master) are called slaves.
Only the master can start new slaves and add them to the configuration, or delete slave hosts from the machine.

19 Resource Control
PVM is inherently dynamic in nature, and it has a rich set of resource control functions. Hosts can be added or deleted, which enables:
load balancing
task migration
fault tolerance
efficiency
MPI is specifically designed to be static in nature to improve performance.

20 Message Passing operations
MPI : Rich message support PVM: Simple message passing

21 Fault Tolerance: MPI
The MPI standard is based on a static model.
If a member of a group fails for some reason, the specification mandates that, rather than continuing (which would lead to unknown results in a doomed application), the group is invalidated and the application halted in a clean manner.
In short: if something fails, everything does.

22 Fault Tolerance: MPI

23 Fault Tolerance: MPI Failed Node There is a failure and…

24 Fault Tolerance: MPI Failed Node … the application is shut down

25 Fault Tolerance: PVM
PVM supports a basic fault notification scheme: it doesn't automatically recover an application after a crash, but it does provide notification primitives that allow fault-tolerant applications to be built.
The virtual machine is dynamically reconfigurable.
A pvmd can recover from the loss of any foreign pvmd except the master. The master must never crash.

26 Fault Tolerance: PVM Virtual Machine

27 Fault Tolerance: PVM Virtual Machine Failed Node

28 Fault Tolerance: PVM Virtual Machine
Fast host delete or recovery from fault

29 Conclusion
Each API has its unique strengths.

PVM:
Virtual machine concept
Simple message passing
Communication topology unspecified
Interoperates across host architecture boundaries
Portability over performance
Resource and process control
Robust fault tolerance

MPI:
No virtual machine abstraction
Rich message support
Supports logical communication topologies
Some implementations do not interoperate across architectural boundaries
Performance over flexibility
Primarily concerned with messaging
More susceptible to faults

30 Conclusion
Each API has its unique strengths.

PVM is better for:
Heterogeneous clusters; resource and process control
Applications where the cluster size and the program's execution time are large

MPI is better for:
Supercomputers (where PVM is not supported)
Applications for MPPs
Maximum performance
Applications that need rich message support

