Hardware Environment
VIA cluster – 8 nodes: two 1.0 GHz VIA-C3 processors per node, connected with Gigabit Ethernet, Linux kernel 2.6.8-1-smp
Blade Server – 5 nodes: two 3.0 GHz Intel Xeon processors per node, each Xeon processor with Hyper-Threading

MPI – message passing interface
Basic data types
MPI_CHAR – char
MPI_UNSIGNED_CHAR – unsigned char
MPI_BYTE – raw bytes (like unsigned char)
MPI_SHORT – short
MPI_LONG – long
MPI_INT – int
MPI_FLOAT – float
MPI_DOUBLE – double
……

MPI – message passing interface
6 basic MPI functions
MPI_Init – initialize the MPI environment
MPI_Finalize – shut down the MPI environment
MPI_Comm_size – determine the number of processes
MPI_Comm_rank – determine the process rank
MPI_Send – blocking data send
MPI_Recv – blocking data receive

MPI – message passing interface
Initialize MPI: MPI_Init(&argc, &argv)
The first MPI function called by each process
Allows the system to do any necessary setup
Not necessarily the first executable statement in your code

MPI – message passing interface
Communicators
A communicator is an opaque object that provides the message-passing environment for a group of processes
MPI_COMM_WORLD – the default communicator; includes all processes
New communicators can be created with MPI_Comm_create() and MPI_Group_incl()
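As a rough sketch of how those two calls fit together (this code is not from the original slides; the choice of the lower half of the ranks and the helper call MPI_Comm_group() are illustrative assumptions), a new communicator containing the lower half of MPI_COMM_WORLD could be built like this:

/* comm_split_half.c - sketch: build a new communicator from the lower half of
   MPI_COMM_WORLD using MPI_Group_incl() and MPI_Comm_create(). */
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, half, i, newrank;
    int *ranks;
    MPI_Group world_group, half_group;
    MPI_Comm half_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    half = size / 2;
    ranks = (int *) malloc(half * sizeof(int));
    for (i = 0; i < half; i++)
        ranks[i] = i;                                  /* keep ranks 0 .. half-1 */

    MPI_Comm_group(MPI_COMM_WORLD, &world_group);      /* group behind MPI_COMM_WORLD */
    MPI_Group_incl(world_group, half, ranks, &half_group);
    MPI_Comm_create(MPI_COMM_WORLD, half_group, &half_comm);  /* collective over MPI_COMM_WORLD */

    if (half_comm != MPI_COMM_NULL) {                  /* only group members get a communicator */
        MPI_Comm_rank(half_comm, &newrank);
        printf("world rank %d is rank %d in the new communicator\n", rank, newrank);
        MPI_Comm_free(&half_comm);
    }

    MPI_Group_free(&half_group);
    MPI_Group_free(&world_group);
    free(ranks);
    MPI_Finalize();
    return 0;
}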

Communicators
[Figure: a communicator named MPI_COMM_WORLD, drawn as a set of processes, each labeled with its rank]

MPI – message passing interface
Shutting down the MPI environment: MPI_Finalize()
Call it after all other MPI function calls
Allows the system to free any resources

MPI – message passing interface
Determine the number of processes: MPI_Comm_size(MPI_COMM_WORLD, &size)
The first argument is the communicator
The number of processes is returned through the second argument

MPI – message passing interface
Determine the process rank: MPI_Comm_rank(MPI_COMM_WORLD, &myid)
The first argument is the communicator
The process rank (in the range 0, 1, 2, …, P-1) is returned through the second argument

Example - hello.c
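The listing for hello.c is not reproduced in this transcript. A minimal sketch consistent with the following slides (each process prints its rank and the communicator size) would be:

/* hello.c - minimal sketch; prints each process's rank and the total number of processes */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank: 0 .. P-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("rank = %d size = %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI and free resources */
    return 0;
}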

Example - hello.c (con’t)
Compile MPI programs: mpicc -o foo foo.c
mpicc – wrapper script that compiles the program and links it against the MPI library
example: mpicc -o hello hello.c

Example - hello.c (con’t)
Execute MPI programs: mpirun -np <p> <exec> <arg1> …
-np <p> – number of processes
<exec> – executable filename
<arg1> … – arguments passed to <exec>
example: mpirun -np 4 hello

Example – hello.c (con’t)
Sample output with 4 processes (the line order may differ from run to run):
rank = 0 size = 4
rank = 1 size = 4
rank = 2 size = 4
rank = 3 size = 4

MPI – message passing interface
Specify host processors
A machine file lists the machines on which to run your program
Useful when the number of MPI processes is larger than the number of physical machines
Configure passwordless login so that mpirun can start processes on the remote machines
Usage: mpirun -np <p> -machinefile <filename> <exec>
example machine file (machines.LINUX):
# machines.LINUX
# put machine hostnames below
node01
node02
node03

MPI – message passing interface
Blocking Send and Receive
MPI_Send(&buf, count, datatype, dest, tag, MPI_COMM_WORLD)
MPI_Recv(&buf, count, datatype, src, tag, MPI_COMM_WORLD, &status)
The datatype argument must be an MPI datatype (MPI_CHAR, MPI_INT, …)
For each send–recv pair, the tag must be the same

MPI – message passing interface
Other program notes
Variables and functions other than the MPI_Xxx calls are local to each process
Output printed by the processes is not necessarily in order
example: send_recv.c

MPI – send_recv.c
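The send_recv.c listing is not reproduced in this transcript. A minimal sketch consistent with the preceding slides (rank 0 sends one integer to rank 1 with a matching tag; the tag and payload values are arbitrary) could be:

/* send_recv.c - sketch: rank 0 sends one integer to rank 1; tags must match.
   Run with at least two processes, e.g. mpirun -np 2 send_recv. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value;
    int tag = 99;                            /* arbitrary tag, same on both sides */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);          /* dest = 1 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status); /* src = 0 */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}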

Odd-Even Sort
Operates in two alternating phases, an even phase and an odd phase
Even phase: even-numbered processes exchange numbers with their right neighbor
Odd phase: odd-numbered processes exchange numbers with their right neighbor

How to solve this 8-number sorting?
Sequential program – easy
MPI – one number for one MPI process (a sketch follows below):
start the MPI program
master sends data to the other processes
run odd-even sorting
master collects the results from the other processes
end the MPI program
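A compact sketch of the per-process phase logic (this is not the original course code): here each process generates a stand-in value locally instead of receiving it from the master, and MPI_Sendrecv is used for the pairwise exchange so that neither side blocks waiting for the other.

/* odd_even.c - sketch of odd-even transposition sort with one number per process. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, mine, other, phase, partner;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    mine = (7 * rank + 3) % size;            /* stand-in for data sent by the master */

    for (phase = 0; phase < size; phase++) {
        if (phase % 2 == 0)                  /* even phase: pairs (0,1), (2,3), ... */
            partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
        else                                 /* odd phase: pairs (1,2), (3,4), ... */
            partner = (rank % 2 == 0) ? rank - 1 : rank + 1;

        if (partner < 0 || partner >= size)
            continue;                        /* no partner at the ends in this phase */

        /* exchange values; the lower-ranked process keeps the smaller one */
        MPI_Sendrecv(&mine, 1, MPI_INT, partner, 0,
                     &other, 1, MPI_INT, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if ((rank < partner && other < mine) || (rank > partner && other > mine))
            mine = other;
    }

    printf("rank %d holds %d\n", rank, mine);
    MPI_Finalize();
    return 0;
}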

Other problems?
What if the number of unsorted numbers is not a power of 2?
What if the number of unsorted numbers is large?
What if the number of unsorted numbers cannot be divided evenly by nprocs?

MPI – message passing interface
Advanced MPI functions
MPI_Bcast – broadcast a message from the source (root) to the other processes
MPI_Scatter – scatter values to a group of processes
MPI_Gather – gather values from a group of processes
MPI_Allgather – gather data from all tasks and distribute it to all
MPI_Barrier – block until all processes reach this routine

MPI_Bcast MPI_Bcast (void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
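A small usage sketch (not from the original slides; the value 100 and root rank 0 are arbitrary choices):

/* bcast_example.c - sketch: rank 0 (the root) broadcasts one integer to all. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 100;                         /* only the root has the value at first */

    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);    /* root = 0 */

    printf("rank %d: value = %d\n", rank, value);         /* every rank prints 100 */

    MPI_Finalize();
    return 0;
}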

MPI_Scatter MPI_Scatter (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)
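A small usage sketch (not from the original slides; MAXP and the fill values are arbitrary assumptions):

/* scatter_example.c - sketch: the root scatters one integer to each process.
   MAXP is an arbitrary upper bound on the number of processes. */
#include <stdio.h>
#include "mpi.h"

#define MAXP 64

int main(int argc, char *argv[])
{
    int rank, size, mine, i;
    int sendbuf[MAXP];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)                           /* only the root fills the send buffer */
        for (i = 0; i < size; i++)
            sendbuf[i] = i * 10;

    /* sendcnt and recvcnt are counts per process, not totals */
    MPI_Scatter(sendbuf, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d received %d\n", rank, mine);

    MPI_Finalize();
    return 0;
}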

MPI_Gather MPI_Gather (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)
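A matching sketch for the gather direction (again not from the original slides; MAXP and the contributed values are arbitrary):

/* gather_example.c - sketch: the root gathers one integer from every process.
   MAXP is an arbitrary upper bound on the number of processes. */
#include <stdio.h>
#include "mpi.h"

#define MAXP 64

int main(int argc, char *argv[])
{
    int rank, size, mine, i;
    int recvbuf[MAXP];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    mine = rank * rank;                      /* each process contributes one value */

    /* recvbuf is only meaningful at the root (rank 0) */
    MPI_Gather(&mine, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (i = 0; i < size; i++)
            printf("from rank %d: %d\n", i, recvbuf[i]);

    MPI_Finalize();
    return 0;
}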

MPI_Allgather MPI_Allgather (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, MPI_Comm comm)
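A sketch showing that, unlike MPI_Gather, every process receives the full result (not from the original slides; MAXP is an arbitrary bound):

/* allgather_example.c - sketch: every process ends up with every rank's value. */
#include <stdio.h>
#include "mpi.h"

#define MAXP 64

int main(int argc, char *argv[])
{
    int rank, size, mine, i;
    int all[MAXP];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    mine = rank + 1;

    /* like MPI_Gather, but there is no root: the result is available everywhere */
    MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d sees:", rank);
    for (i = 0; i < size; i++)
        printf(" %d", all[i]);
    printf("\n");

    MPI_Finalize();
    return 0;
}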

MPI_Barrier MPI_Barrier (MPI_Comm comm)
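A trivial sketch (not from the original slides): no process executes the second printf until every process has reached the barrier, although output buffering may still interleave the displayed lines.

/* barrier_example.c - sketch: no process passes the barrier until all reach it. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("rank %d before barrier\n", rank);
    MPI_Barrier(MPI_COMM_WORLD);             /* synchronization point for all processes */
    printf("rank %d after barrier\n", rank);

    MPI_Finalize();
    return 0;
}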

Extension of MPI_Recv
MPI_Recv(void *buffer, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
If the source is don't-care, pass MPI_ANY_SOURCE
If the tag is don't-care, pass MPI_ANY_TAG
To retrieve the sender's information, examine the returned status:
typedef struct { int count; int MPI_SOURCE; int MPI_TAG; int MPI_ERROR; } MPI_Status;   (MPICH layout; only MPI_SOURCE, MPI_TAG and MPI_ERROR are specified by the standard)
Use status->MPI_SOURCE to get the sender's rank
Use status->MPI_TAG to get the message tag
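A sketch of receiving from an arbitrary sender and then reading the status fields (not from the original slides; the tag and payload values are arbitrary, and status is declared as a struct here, so its fields are accessed with '.' rather than '->'):

/* any_source_example.c - sketch: rank 0 receives from whichever worker sends first
   and reads the sender's rank and tag out of the status structure.
   Run with at least two processes. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, value, i;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        for (i = 1; i < size; i++) {
            /* accept a message from any sender, with any tag */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("got %d from rank %d (tag %d)\n",
                   value, status.MPI_SOURCE, status.MPI_TAG);
        }
    } else {
        value = rank * 100;                  /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);   /* tag = own rank */
    }

    MPI_Finalize();
    return 0;
}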