MPI-1 (Computer Engg, IIT(BHU), 3/12/2013)

MESSAGE PASSING INTERFACE
A message passing library specification: an extended message-passing model.
Not a language or compiler specification.
Not a specific implementation: like pthreads, it is a standard with several implementations.
A standard for distributed-memory, message-passing, parallel computing.

GOALS OF MPI SPECIFICATION
Provide source code portability.
Allow efficient implementations.
Be flexible enough to port different algorithms to different hardware environments.
Support heterogeneous architectures, where the processors are not identical.

REASONS FOR USING MPI
Standardization – supported on virtually all HPC platforms.
Portability – the same source code runs on any platform with an MPI implementation.
Performance – vendor implementations can exploit native hardware features.
Functionality – over 115 routines in MPI-1.
Availability – a variety of implementations to choose from.

BASIC MODEL: COMMUNICATORS AND GROUPS
Group: an ordered set of processes.
Each process is associated with a unique integer rank, from 0 to (N-1) for a group of N processes.
A group is an object in system memory, accessed through a handle.
Predefined group handles: MPI_GROUP_EMPTY (a valid group with no members) and MPI_GROUP_NULL (the invalid handle returned when a group is freed).

BASIC MODEL (CONTD.)
Communicator: a group of processes that may communicate with each other.
Every MPI message must specify a communicator.
Like a group, a communicator is an object in memory, accessed through a handle.
There is a default communicator, MPI_COMM_WORLD, defined automatically at startup; it identifies the group of all processes.

COMMUNICATORS
Intra-communicator – all processes belong to the same group.
Inter-communicator – connects processes drawn from two different groups.

COMMUNICATORS AND GROUPS
Allow you to organize tasks into groups based on function.
Enable collective communication operations (covered later) across a subset of related tasks.
Provide safe communication: messages sent in one communicator cannot be received in another.
Many communicators can exist at the same time, and they are dynamic: they can be created and destroyed at run time.
A process may belong to more than one group/communicator and has a unique rank in every group/communicator it belongs to (see the sketch below).
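A minimal sketch of these ideas in C, using MPI_Comm_split (the even/odd split by world rank is an arbitrary illustrative choice): each process keeps its rank in MPI_COMM_WORLD but also receives an independent rank in the new sub-communicator.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split MPI_COMM_WORLD into two sub-communicators: even and odd ranks. */
    MPI_Comm sub_comm;
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

    int sub_rank;
    MPI_Comm_rank(sub_comm, &sub_rank);   /* rank within the sub-communicator */
    printf("world rank %d has rank %d in sub-communicator %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}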

VIRTUAL TOPOLOGIES
Attach Cartesian (grid) or graph topology information to an existing communicator.
Example mapping for a 2 x 2 grid:
coord (0,0): rank 0
coord (0,1): rank 1
coord (1,0): rank 2
coord (1,1): rank 3
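A sketch of how the 2 x 2 mapping above can be built with MPI's Cartesian topology routines (assumes the job is started with exactly 4 processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[2]    = {2, 2};   /* 2 x 2 process grid        */
    int periods[2] = {0, 0};   /* non-periodic in both dims */
    MPI_Comm cart_comm;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart_comm);

    int rank, coords[2];
    MPI_Comm_rank(cart_comm, &rank);
    MPI_Cart_coords(cart_comm, rank, 2, coords);   /* rank -> (row, col) */
    printf("rank %d has coordinates (%d,%d)\n", rank, coords[0], coords[1]);

    MPI_Comm_free(&cart_comm);
    MPI_Finalize();
    return 0;
}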

SEMANTICS
Header file: #include <mpi.h> (C), include 'mpif.h' (Fortran).
Bindings are also available for other languages such as Java and Python (through third-party libraries).

MPI PROGRAM STRUCTURE
A typical MPI program includes the MPI header, calls MPI_Init first, carries out its work using MPI communication calls, and calls MPI_Finalize last. The following slides walk through these routines.

MPI FUNCTIONS – MINIMAL SUBSET
MPI_Init – initialize MPI.
MPI_Comm_size – number of processes in the group associated with a communicator.
MPI_Comm_rank – identify the calling process.
MPI_Send – send a message.
MPI_Recv – receive a message.
MPI_Finalize – shut down MPI.
A minimal example using only these six routines is sketched below.
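A minimal sketch using only these six routines (the message pattern, every non-zero rank sending its own rank to process 0, is an arbitrary illustrative choice):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Process 0 receives one integer from every other process. */
        for (int src = 1; src < size; src++) {
            int value;
            MPI_Status status;
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD, &status);
            printf("received %d from process %d\n", value, src);
        }
    } else {
        /* Every other process sends its rank to process 0 (tag 0). */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}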

CLASSIFICATION OF MPI ROUTINES
Environment management – MPI_Init, MPI_Finalize.
Point-to-point communication – MPI_Send, MPI_Recv.
Collective communication – MPI_Reduce, MPI_Bcast (see the sketch below).
Information on the processes – MPI_Comm_rank, MPI_Get_processor_name.
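A small sketch of the two collective routines named above (the broadcast value and the sum-of-ranks reduction are arbitrary illustrative choices):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank 0 broadcasts a value to every process in the communicator. */
    int n = (rank == 0) ? 42 : 0;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Every process contributes its rank; rank 0 receives the sum. */
    int sum = 0;
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("broadcast value %d, sum of ranks %d\n", n, sum);

    MPI_Finalize();
    return 0;
}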

MPI_INIT
int MPI_Init(int *pargc, char ***pargv);
Must be called in every MPI program, exactly once, and before any other MPI function is called.
Passes the command line arguments to all processes.

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    ...
}

MPI_COMM_SIZE
Returns the number of processes in the group associated with a communicator.
int MPI_Comm_size(MPI_Comm comm, int *psize);
Use it to find out how many processes your application is running on.

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    ...
}

MPI_COMM_RANK
Returns the rank of the calling process within the communicator: a unique rank between 0 and (p-1), sometimes called the task ID.
int MPI_Comm_rank(MPI_Comm comm, int *rank);
A process has a unique rank in each communicator it belongs to.
The rank is typically used to decide which part of the work each process performs.

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    ...
}

MPI_FINALIZE
Terminates the MPI execution environment.
Must be the last MPI routine called in any MPI program.
int MPI_Finalize(void);

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("no. of processes: %d\n rank: %d\n", p, rank);
    MPI_Finalize();
    return 0;
}

How To Compile an MPI Program
We use the Open MPI implementation on our cluster.
mpicc -o test test.c
mpicc is used like gcc; it is not a special compiler, just a wrapper around gcc that adds the required command line parameters. MPI itself is implemented just like any other library. Running mpicc with no arguments shows the underlying compiler:
$ mpicc
gcc: no input files

How To Run a Compiled MPI Program
mpirun -np X test
Runs X copies of the program in your current run-time environment; the -np option specifies the number of copies (processes) to start.

MPIRUN
Only the rank 0 process can receive standard input; mpirun redirects the standard input of all other processes to /dev/null.
Open MPI redirects the standard input of mpirun to the standard input of the rank 0 process.
The node that invoked mpirun need not be the same as the node hosting the MPI_COMM_WORLD rank 0 process.
mpirun directs the standard output and error of remote nodes to the node that invoked mpirun.
SIGTERM and SIGKILL kill all processes in the job; all other signals are ignored.

SOME MORE FUNCTIONS
int MPI_Initialized(int *flag) – checks whether MPI_Init has been called and sets *flag accordingly.
double MPI_Wtime(void) – returns the elapsed wall-clock time in seconds (double precision) on the calling processor.
double MPI_Wtick(void) – returns the resolution, in seconds (double precision), of MPI_Wtime.
Message passing functionality – that is what MPI is meant for!
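A short usage sketch of the query and timing routines above (the timed region is left as a placeholder):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int flag;
    MPI_Initialized(&flag);   /* flag is nonzero once MPI_Init has run */

    double t0 = MPI_Wtime();
    /* ... work to be timed ... */
    double t1 = MPI_Wtime();

    printf("initialized=%d elapsed=%f s (clock resolution %g s)\n",
           flag, t1 - t0, MPI_Wtick());

    MPI_Finalize();
    return 0;
}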