12b.1 Introduction to Message-passing with MPI UNC-Wilmington, C. Ferner, 2008 Nov 4, 2008

12b.2 Basics of MPI

12b.3 MPI Startup and Cleanup
– When an MPI program is launched, one thread of execution (an MPI process) is started on each of the requested processors
– Every one of these threads must call the MPI_Init() function before using any other MPI function
– At the end of the program, all of them should call the MPI_Finalize() function to shut MPI down and clean up

12b.4 MPI Startup and Cleanup (continued)

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    ...
    MPI_Finalize();
}

(Slide diagram: t0 on P0, t1 on P1, ..., tn-1 on Pn-1 running in parallel between MPI_Init and MPI_Finalize, with a single t0 on P0 shown before and after.)

Instructions between the Init and the Finalize are executed by all threads

12b.5 Compiling and Running an MPI program

To compile an MPI program:
    mpicc myprogram.c -o myprogram

To run an MPI program:
    mpirun -nolocal -np 4 myprogram [your program arguments]
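For example, assuming the Hello World program from slides 12b.11 through 12b.13 is saved as hello.c (the file name is only an assumption), the full sequence used to produce the results on slide 12b.14 would be:

    mpicc hello.c -o hello
    mpirun -nolocal -np 8 hello

The -np 8 argument asks for eight threads; -nolocal tells mpirun not to place any of them on the machine where mpirun is invoked (typically the cluster head node).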

12b.6 Who am I? The threads are given ids (ranks) from 0 to P-1, where P is the number of threads requested with the -np argument on the mpirun command line. Two useful functions are:
–MPI_Comm_rank() – gets the calling thread's id
–MPI_Comm_size() – gets the total number of threads, P

12b.7 Who am I? (continued)

#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int mypid, size;
    char name[BUFSIZ];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &mypid);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    gethostname(name, BUFSIZ);
    printf("I am thread %d running on %s\n", mypid, name);
    if (mypid == 0) {
        printf("There are %d threads\n", size);
    }
    MPI_Finalize();
    return 0;
}

12b.8 One-to-one Communication

Sending a message:
–MPI_Send(buffer, count, datatype, destination, tag, communicator)

Receiving a message:
–MPI_Recv(buffer, count, datatype, source, tag, communicator, status)

12b.9 One-to-one Communication (continued) where:
–buffer is the address of the data (put an "&" in front of a scalar variable, but not an array variable)
–count is the number of data items
–destination and source are the thread ids of the destination and source threads
–tag is a user-defined message tag (needed to tell messages apart when multiple messages are sent between the same pair of threads)

12b.10 One-to-one Communication (continued) where:
–communicator is a communication domain; we will only use MPI_COMM_WORLD, which includes all processors
–datatype tells what type of data is in the buffer, e.g. MPI_CHAR, MPI_INT, MPI_FLOAT, MPI_DOUBLE, MPI_PACKED
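As a complementary sketch that is not part of the original slides, the short program below illustrates these parameters by sending one scalar and one small array from thread 0 to thread 1. The file name, variable names, and tag values are illustrative only; run it with at least two threads (e.g., mpirun -nolocal -np 2 sendrecv).

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int my_rank;
    int n = 0;
    double data[4] = {0.0, 0.0, 0.0, 0.0};
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == 0) {
        n = 4;
        data[0] = 1.5; data[1] = 2.5; data[2] = 3.5; data[3] = 4.5;
        /* Scalar: pass its address with "&"; array: pass the name itself */
        MPI_Send(&n, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Send(data, 4, MPI_DOUBLE, 1, 2, MPI_COMM_WORLD);
    } else if (my_rank == 1) {
        /* Tags must match the sends: tag 1 for the scalar, tag 2 for the array */
        MPI_Recv(&n, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(data, 4, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD, &status);
        printf("Thread 1 received n=%d and data[0..3]=%.1f..%.1f\n",
               n, data[0], data[3]);
    }

    MPI_Finalize();
    return 0;
}

Note the "&" on the scalar n but not on the array data, and the two different tags (1 and 2): because both messages travel between the same pair of threads, the tags keep each receive matched to the intended send.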

12b.11 Hello World with Communication

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int my_rank;
    int p;
    int source;
    char message[BUFSIZ];
    char machinename[BUFSIZ];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    ...

12b.12 Hello World with Communication (continued)

    ...
    if (my_rank == 0) {
        // This is done by the master
        gethostname(machinename, BUFSIZ);
        printf("Master thread is ready, running on %s\n", machinename);
        for (source = 1; source < p; source++) {
            MPI_Recv(message, BUFSIZ, MPI_CHAR, source, 0,
                     MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
    }
    ...

12b.13 Hello World with Communication (continued)

    ...
    // All threads except the master will send a "hello"
    else {
        gethostname(machinename, BUFSIZ);
        sprintf(message, "Greetings from thread %d, running on %s!",
                my_rank, machinename);
        MPI_Send(message, strlen(message)+1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

12b.14 Hello World Results

$ mpirun -nolocal -np 8 hello
Master thread is ready, running on compute-0-1.local
Greetings from thread 1, running on compute-0-1.local!
Greetings from thread 2, running on compute-0-1.local!
Greetings from thread 3, running on compute-0-1.local!
Greetings from thread 4, running on compute-0-2.local!
Greetings from thread 5, running on compute-0-2.local!
Greetings from thread 6, running on compute-0-2.local!
Greetings from thread 7, running on compute-0-2.local!