MPI: A Quick-Start Guide
Information Physics, Soon-Hyung Yook


Outline
MPI
MPI functions
–MPI_Init
–MPI_Finalize
–MPI_Comm_size
–MPI_Comm_rank
–MPI_Send
–MPI_Recv
–MPI_Bcast
–MPI_Scatter
–MPI_Reduce
–MPI_Gather
MPI datatypes
–MPI_CHAR
–MPI_INT
–MPI_FLOAT
–MPI_DOUBLE

MPI
–MPI stands for Message Passing Interface.
–Common implementations include LAM/MPI and MPICH2.

MPI Functions
MPI_Init(&argc, &argv)
–Initializes the MPI environment, passing in the program's command-line arguments.
MPI_Finalize()
–Shuts down the MPI environment. Every MPI function must be called between MPI_Init() and MPI_Finalize().
int MPI_Comm_rank(MPI_Comm comm, int *rank)
–Obtains the rank (process id) of the calling process within the communicator comm.
int MPI_Comm_size(MPI_Comm comm, int *size)
–Obtains the number of processes running in the communicator comm.
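
To see how these four calls fit together, here is a minimal sketch of a complete MPI program (not from the original slides; the printed message is illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                   /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* my process id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                           /* shut down MPI */
    return 0;
}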

MPI Functions
int MPI_Send(void *message, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
–Sends a message to dest. The arguments are:
message: the buffer holding the data to be sent
count: the number of data items to send
datatype: the datatype of the items to send
dest: the rank of the destination process
tag: a label attached to the outgoing message
comm: the communicator to which both dest and the calling process belong
int MPI_Recv(void *message, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
–Receives a message from source. The arguments are:
message: the buffer in which the received data are stored
count: the number of data items to receive (an error occurs if this is smaller than the number of items actually sent)
datatype: the datatype of the items to receive
source: the rank of the sending process
tag: a label used to identify the received message (the tag in MPI_Recv must match the tag in MPI_Send)
comm: the communicator to which both source and the calling process belong
status: information about the message actually received (its source and tag)
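
A minimal point-to-point sketch, not from the original slides, in which rank 0 sends one integer to rank 1. It assumes the program is started with at least two processes; the value 42 is illustrative:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                           /* data to send */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d from rank %d\n", value, status.MPI_SOURCE);
    }

    MPI_Finalize();
    return 0;
}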

MPI Functions
Functions for collective communication
–A communicator is defined as a group of processes that take part in a parallel execution. By default, every MPI program has the communicator MPI_COMM_WORLD, which contains all of the concurrently running parallel processes. If needed, the user can also create new communicators consisting of an arbitrary set of processes. The functions below provide communication among the processes of such a communicator.
int MPI_Bcast(void *message, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
–Sends the same message to every process belonging to the communicator comm.
root is the rank of the process that broadcasts the message. Only root's message buffer contains the data to be sent; on every other process, message is the buffer in which the received message is stored.
int MPI_Reduce(void *operand, void *result, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
–When all processes call MPI_Reduce, their operands are combined according to the operation op and the outcome is stored in result on the root process.
operand: the operand to which the operation is applied
result: the buffer in which the result of the operation is stored
op: an op-code specifying which kind of operation is performed
root: the rank of the process on which the result is stored
int MPI_Barrier(MPI_Comm comm)
–Synchronizes the processes belonging to the communicator comm by blocking each of them until every process has called MPI_Barrier.
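
A short sketch, not from the original slides, combining MPI_Bcast and MPI_Reduce: the root broadcasts n to every process, each process computes a local value, and MPI_SUM collects the total on the root (the values are illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, n, local, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) n = 10;                            /* only root knows n initially */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);     /* now every rank has n */

    local = rank * n;                                 /* some per-process value */
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("sum = %d\n", sum);         /* result is valid only on root */

    MPI_Finalize();
    return 0;
}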

MPI Functions
Functions for collective communication
int MPI_Gather(void *send_buf, int send_count, MPI_Datatype send_type, void *recv_buf, int recv_count, MPI_Datatype recv_type, int root, MPI_Comm comm)
–The root process collects the data sent by each process (send_buf) and stacks them, in order of process rank, into root's recv_buf.
int MPI_Scatter(void *send_buf, int send_count, MPI_Datatype send_type, void *recv_buf, int recv_count, MPI_Datatype recv_type, int root, MPI_Comm comm)
–Splits the contents of send_buf into pieces of size send_count and sends them, in rank order, to the recv_buf of every process. The inverse of MPI_Gather.
int MPI_Allgather(void *send_buf, int send_count, MPI_Datatype send_type, void *recv_buf, int recv_count, MPI_Datatype recv_type, MPI_Comm comm)
–Like MPI_Gather, collects the contents of send_buf from every process, but differs in that the result is stored in the recv_buf of every process.
int MPI_Allreduce(void *operand, void *result, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
–Like MPI_Reduce, applies the operation op to the operands of every process, but every process receives the outcome in its result buffer.
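
A short scatter/gather sketch, not from the original slides; it assumes the program is run with exactly NPROCS = 4 processes, and the array contents are illustrative:

#include <stdio.h>
#include <mpi.h>

#define NPROCS 4   /* assumed number of processes */

int main(int argc, char **argv)
{
    int rank, piece, data[NPROCS], gathered[NPROCS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)                                 /* root fills the array to distribute */
        for (int i = 0; i < NPROCS; i++) data[i] = i * i;

    /* each process receives one element of data[] into piece */
    MPI_Scatter(data, 1, MPI_INT, &piece, 1, MPI_INT, 0, MPI_COMM_WORLD);

    piece += rank;                                 /* work on the local piece */

    /* root collects the modified pieces back, in rank order */
    MPI_Gather(&piece, 1, MPI_INT, gathered, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < NPROCS; i++) printf("gathered[%d] = %d\n", i, gathered[i]);

    MPI_Finalize();
    return 0;
}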

MPI Datatypes
Basic datatypes
–MPI_CHAR (char)
–MPI_INT (int)
–MPI_FLOAT (float)
–MPI_DOUBLE (double)
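
The MPI datatype passed to a call must match the C type of the buffer; a minimal sketch, not from the original slides, broadcasting one variable of each basic type from rank 0:

#include <mpi.h>

int main(int argc, char **argv)
{
    char   c = 'a';
    int    i = 1;
    float  f = 2.0f;
    double d = 3.0;

    MPI_Init(&argc, &argv);

    /* the MPI datatype matches the C type of each buffer */
    MPI_Bcast(&c, 1, MPI_CHAR,   0, MPI_COMM_WORLD);
    MPI_Bcast(&i, 1, MPI_INT,    0, MPI_COMM_WORLD);
    MPI_Bcast(&f, 1, MPI_FLOAT,  0, MPI_COMM_WORLD);
    MPI_Bcast(&d, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}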

MPI Programming Examples

#include <stdio.h>
#include <mpi.h>

#define WORKTAG 1
#define DIETAG  2

/* unit_of_work_t and unit_result_t are application-defined types;
   they are left abstract on these slides. */

/* Local functions */
static void master(void);
static void slave(void);
static unit_of_work_t get_next_work_item(void);
static void process_results(unit_result_t result);
static unit_result_t do_work(unit_of_work_t work);

int main(int argc, char **argv)
{
    int myrank;

    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Find out my identity in the default communicator */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    if (myrank == 0) {
        master();
    } else {
        slave();
    }

    /* Shut down MPI */
    MPI_Finalize();
    return 0;
}

MPI Programming Examples

static void master(void)
{
    int ntasks, rank;
    unit_of_work_t work;
    unit_result_t result;
    MPI_Status status;

    /* Find out how many processes there are in the default communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* Seed the slaves; send one unit of work to each slave. */
    for (rank = 1; rank < ntasks; ++rank) {
        /* Find the next item of work to do */
        work = get_next_work_item();

        /* Send it to each rank */
        MPI_Send(&work,             /* message buffer */
                 1,                 /* one data item */
                 MPI_INT,           /* data item is an integer */
                 rank,              /* destination process rank */
                 WORKTAG,           /* user-chosen message tag */
                 MPI_COMM_WORLD);   /* default communicator */
    }

    /* Loop over getting new work requests until there is no more work to be done */
    work = get_next_work_item();
    while (work != NULL) {
        /* Receive results from a slave */
        MPI_Recv(&result,           /* message buffer */
                 1,                 /* one data item */
                 MPI_DOUBLE,        /* of type double real */
                 MPI_ANY_SOURCE,    /* receive from any sender */
                 MPI_ANY_TAG,       /* any type of message */
                 MPI_COMM_WORLD,    /* default communicator */
                 &status);          /* info about the received message */

        /* Send the slave a new work unit */
        MPI_Send(&work,             /* message buffer */
                 1,                 /* one data item */
                 MPI_INT,           /* data item is an integer */
                 status.MPI_SOURCE, /* to whom we just received from */
                 WORKTAG,           /* user-chosen message tag */
                 MPI_COMM_WORLD);   /* default communicator */

        /* Get the next unit of work to be done */
        work = get_next_work_item();
    }

    /* There's no more work to be done, so receive all the outstanding
       results from the slaves. */
    for (rank = 1; rank < ntasks; ++rank) {
        MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
    }

    /* Tell all the slaves to exit by sending an empty message with the DIETAG. */
    for (rank = 1; rank < ntasks; ++rank) {
        MPI_Send(0, 0, MPI_INT, rank, DIETAG, MPI_COMM_WORLD);
    }
}

static void slave(void)
{
    unit_of_work_t work;
    unit_result_t result;
    MPI_Status status;

    while (1) {
        /* Receive a message from the master */
        MPI_Recv(&work, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

        /* Check the tag of the received message. */
        if (status.MPI_TAG == DIETAG) {
            return;
        }

        /* Do the work */
        result = do_work(work);

        /* Send the result back */
        MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
}

MPI Programming Examples

static unit_of_work_t get_next_work_item(void)
{
    /* Fill in with whatever is relevant to obtain a new unit of work
       suitable to be given to a slave. */
}

static void process_results(unit_result_t result)
{
    /* Fill in with whatever is relevant to process the results returned
       by the slave. */
}

static unit_result_t do_work(unit_of_work_t work)
{
    /* Fill in with whatever is necessary to process the work and
       generate a result. */
}