PVM and MPI.


Message Passing Paradigm

Each processor in a message-passing program runs a separate process (sub-program, task):
- written in a conventional sequential language
- all variables are private
- processes communicate via special subroutine calls

Messages

Messages are packets of data moving between processes. The message-passing system has to be told the following information:
- the sending process, with the source location, data type, and data length
- the receiving process(es), with the destination location and destination size

What is the Master/Slave principle?

The master has control over the running application: it controls all data, and it calls the slaves to do their work.

PROGRAM
  IF (process = master) THEN
    master-code
  ELSE
    slave-code
  ENDIF
END

PVM and MPI Goals

PVM:
- a distributed operating system
- portability
- heterogeneity
- handling communication failures

MPI:
- a library for writing application programs, not a distributed operating system
- portability
- high performance
- heterogeneity
- well-defined behavior

MPI implementations: LAM, MPICH, …

What is MPI?

MPI (Message Passing Interface) is a specification for the developers and users of message-passing libraries. By itself, it is NOT a library, but rather a specification of what such a library should be.

- A fixed set of processes is created at program initialization; one process is created per processor.
- Each process knows its own number (rank).
- Each process knows the number of all processes.
- Each process can communicate with the other processes.
- A process cannot create new processes.

General MPI Program Structure

Groups and Communicators

Environment Management Routines

MPI_Init: initializes the MPI execution environment.
MPI_Comm_size: determines the number of processes in the group associated with a communicator.
MPI_Comm_rank: determines the rank of the calling process within the communicator.
MPI_Send / MPI_Isend: blocking / non-blocking send.
MPI_Recv / MPI_Irecv: blocking / non-blocking receive.
MPI_Abort: terminates all MPI processes associated with the communicator.
MPI_Finalize: terminates the MPI execution environment.

Format in C Language

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, rc;

    rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        printf("Error starting MPI program. Terminating.\n");
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Number of tasks= %d My rank= %d\n", numtasks, rank);

    /******* do some work *******/

    MPI_Finalize();
    return 0;
}

Example

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, dest, source, rc, count, tag = 1;
    char inmsg, outmsg = 'x';
    MPI_Status Stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        dest = 1; source = 1;
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    } else if (rank == 1) {
        dest = 0; source = 0;
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }

    rc = MPI_Get_count(&Stat, MPI_CHAR, &count);
    printf("Task %d: Received %d char(s) from task %d with tag %d\n",
           rank, count, Stat.MPI_SOURCE, Stat.MPI_TAG);
    MPI_Finalize();
    return 0;
}

MPI Broadcast

MPI Scatter and Gather

MPI All to All

MPI Allgather

MPI Reduce and All Reduce

What is PVM?

PVM (Parallel Virtual Machine) is a software package that allows a heterogeneous collection of workstations (a host pool) to function as a single high-performance parallel machine (the virtual machine). Through its virtual machine, PVM provides a simple yet useful distributed operating system. It has a daemon running on every computer making up the virtual machine.

PVM Daemon (pvmd)

The pvmd serves as a message router and controller. One pvmd runs on each host of a virtual machine. The first pvmd (started by hand) is designated the master, while the others (started by the master) are called slaves. Only the master can start new slaves and add them to the configuration, or delete slave hosts from the machine.

Resource Control

PVM is inherently dynamic in nature, and it has a rich set of resource-control functions. Hosts can be added or deleted, which enables:
- load balancing
- task migration
- fault tolerance
- efficiency

MPI, by contrast, is specifically designed to be static in nature to improve performance.

Message-Passing Operations

MPI: rich message support
PVM: simple message passing

Fault Tolerance: MPI

The MPI standard is based on a static model. If a member of a group fails for some reason, the specification mandates that, rather than continuing (which would lead to unknown results in a doomed application), the group is invalidated and the application is halted in a clean manner. Simply put: if something fails, everything does.

Fault Tolerance: MPI

[Figures: a node fails, and the MPI application is shut down]

Fault Tolerance: PVM

PVM supports a basic fault-notification scheme: it doesn't automatically recover an application after a crash, but it does provide notification primitives that allow fault-tolerant applications to be built. The virtual machine is dynamically reconfigurable. A pvmd can recover from the loss of any foreign pvmd except the master; the master must never crash.

Fault Tolerance: PVM

[Figures: a node of the virtual machine fails; PVM performs a fast host delete or recovery from the fault]

Conclusion

Each API has its unique strengths.

PVM:
- virtual machine concept
- simple message passing
- communication topology unspecified
- interoperates across host architecture boundaries
- portability over performance
- resource and process control
- robust fault tolerance

MPI:
- no virtual-machine abstraction
- rich message support
- supports logical communication topologies
- some realizations do not interoperate across architectural boundaries
- performance over flexibility
- primarily concerned with messaging
- more susceptible to faults

Conclusion

PVM is better for:
- heterogeneous clusters, and resource and process control
- cases where the cluster size and the program's execution time are large

MPI is better for:
- supercomputers (where PVM is not supported)
- applications for MPPs
- maximum performance
- applications that need rich message support