Parallel Systems Programming with MPI, Part 3. Lecture course: Parallel Algorithms. Author: Maksims Kravcevs. Rīga, 2007

Input and output operations

Supplementary materials: “Introduction to Parallel I/O and MPI-IO” (IO.ppt); examples: …/moreio/main.htm

MPI 1 example (Chapter 4)

void Get_data(
        float* a_ptr   /* out */,
        float* b_ptr   /* out */,
        int*   n_ptr   /* out */,
        int    my_rank /* in  */,
        int    p       /* in  */) {
    int source = 0;        /* All local variables used by */
    int dest;              /* MPI_Send and MPI_Recv       */
    int tag;
    MPI_Status status;

    if (my_rank == 0) {
        printf("Enter a, b, and n\n");
        scanf("%f %f %d", a_ptr, b_ptr, n_ptr);
        for (dest = 1; dest < p; dest++) {
            tag = 0;
            MPI_Send(a_ptr, 1, MPI_FLOAT, dest, tag, MPI_COMM_WORLD);
            tag = 1;
            MPI_Send(b_ptr, 1, MPI_FLOAT, dest, tag, MPI_COMM_WORLD);
            tag = 2;
            MPI_Send(n_ptr, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);
        }
    } else {
        tag = 0;
        MPI_Recv(a_ptr, 1, MPI_FLOAT, source, tag, MPI_COMM_WORLD, &status);
        tag = 1;
        MPI_Recv(b_ptr, 1, MPI_FLOAT, source, tag, MPI_COMM_WORLD, &status);
        tag = 2;
        MPI_Recv(n_ptr, 1, MPI_INT, source, tag, MPI_COMM_WORLD, &status);
    }
} /* Get_data */

Data input/output operations

The MPI 1.2 standard does not include any parallel operations for data input and output, nor does it consider the possible topologies of I/O devices. The simplest assumption is that there is a single input/output device and that one process performs the input and output, forwarding the data to the other processes.

MPI-2 includes a logical I/O library that describes parallel work with files. The programmer works with logical files, which may be physically distributed. The work is organized much like message passing: on a logical file, both blocking, independent single-process operations and nonblocking, collective operations (in which all processes logically associated with the file perform some operation) are possible. In all cases three access methods are provided:
a) access at an explicit position in the file – a process can address a specific file position;
b) access through individual pointers – each process has its own file pointer that determines a specific position in the file;
c) access through shared pointers – several processes work with a single common pointer.
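As a brief illustration of method (a), here is a minimal sketch (not taken from the lecture material) in which every process writes its rank into a shared file at an explicit, rank-dependent offset:

/* Each process writes one int at its own offset (explicit positioning). */
#include "mpi.h"

int main(int argc, char *argv[]) {
    MPI_File fh;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* collective open of one logical file shared by all processes */
    MPI_File_open(MPI_COMM_WORLD, "data.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* process i writes at byte offset i * sizeof(int) */
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(int), &rank, 1, MPI_INT,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}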

Data input/output operations

MPI_File_open / MPI_File_close – open or close a file.
MPI_File_read / MPI_File_write – the process performs an individual I/O operation on the file; the program continues after the operation completes.
MPI_File_read_all / MPI_File_write_all – all processes perform the operation on the file; the operation is blocking.
MPI_File_iread / MPI_File_iwrite – the process initiates an individual I/O operation on the file and returns immediately to further execution.
MPI_File_iread_all / MPI_File_iwrite_all – all processes start a collective operation on the file.

MPI IO example

#include "mpi.h"
...
MPI_File   fh;
MPI_Status status;
...
/* write nints integers from buf into the file */
MPI_File_open(MPI_COMM_SELF, filename,
              MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
MPI_File_write(fh, buf, nints, MPI_INT, &status);
MPI_File_close(&fh);

/* reopen the same file and read the data back into buf */
MPI_File_open(MPI_COMM_SELF, filename,
              MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
MPI_File_read(fh, buf, nints, MPI_INT, &status);
MPI_File_close(&fh);

int MPI_File_open(MPI_Comm  comm      /* in  */,
                  char     *filename  /* in  */,
                  int       amode     /* in  */,
                  MPI_Info  info      /* in  */,
                  MPI_File *fh        /* out */)

MPI_File_open opens the file identified by the file name filename on all processes in the comm communicator group. MPI_Info – many routines take a handle of this type, which can be used to pass attributes (hints) to the implementation; usually MPI_INFO_NULL is passed. MPI_File_open is a collective routine: all processes must provide the same value for amode, and all processes must provide filenames that reference the same file.

1. MPI_MODE_RDONLY – read only,
2. MPI_MODE_RDWR – reading and writing,
3. MPI_MODE_WRONLY – write only,
4. MPI_MODE_CREATE – create the file if it does not exist,
5. MPI_MODE_EXCL – error if creating a file that already exists,
6. MPI_MODE_DELETE_ON_CLOSE – delete the file on close,
7. MPI_MODE_UNIQUE_OPEN – the file will not be concurrently opened elsewhere,
8. MPI_MODE_SEQUENTIAL – the file will only be accessed sequentially,
9. MPI_MODE_APPEND – set the initial position of all file pointers to the end of the file.
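Several modes can be combined with the bitwise OR operator; a small sketch (filename and fh as in the example above), also checking the return code, since file routines return error codes by default:

int amode = MPI_MODE_CREATE | MPI_MODE_WRONLY | MPI_MODE_APPEND;
int err   = MPI_File_open(MPI_COMM_WORLD, filename, amode, MPI_INFO_NULL, &fh);
if (err != MPI_SUCCESS) {
    /* report the failure, e.g. with MPI_Error_string(), and abort */
}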

MPI provides three types of positioning for data access routines: explicit offsets, individual file pointers, and shared file pointers. The different positioning methods may be mixed within the same program and do not affect each other.
1. The data access routines that accept explicit offsets contain _AT in their name (e.g., MPI_FILE_WRITE_AT). Explicit-offset operations perform data access at the file position given directly as an argument – no file pointer is used or updated. Note that this is not equivalent to an atomic seek-and-read or seek-and-write operation, as no "seek" is issued. Operations with explicit offsets are described in the section "Data Access with Explicit Offsets".
2. The names of the individual file pointer routines contain no positional qualifier (e.g., MPI_FILE_WRITE).
3. The data access routines that use shared file pointers contain _SHARED or _ORDERED in their name (e.g., MPI_FILE_WRITE_SHARED).
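A side-by-side sketch of the three styles (not from the original slides; fh assumed open, and buf, count, offset, status assumed declared):

/* (1) explicit offset: the position is an argument, no file pointer is touched */
MPI_File_write_at(fh, offset, buf, count, MPI_INT, &status);

/* (2) individual file pointer: each process advances its own pointer */
MPI_File_write(fh, buf, count, MPI_INT, &status);

/* (3) shared file pointer: all processes advance one common pointer */
MPI_File_write_shared(fh, buf, count, MPI_INT, &status);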

MPI_File_read

int MPI_File_read(
    MPI_File     mpi_fh,   /* [in]  file handle                                 */
    void        *buf,      /* [out] initial address of buffer (choice)          */
    int          count,    /* [in]  number of elements in buffer (nonnegative)  */
    MPI_Datatype datatype, /* [in]  datatype of each buffer element (handle)    */
    MPI_Status  *status    /* [out] status object (Status)                      */
);
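A small usage sketch (fh assumed open for reading): the returned status can be queried with MPI_Get_count to find out how many elements were actually read:

int        buf[100];
int        nread;
MPI_Status status;

MPI_File_read(fh, buf, 100, MPI_INT, &status);  /* blocking read of up to 100 ints */
MPI_Get_count(&status, MPI_INT, &nread);        /* number of ints actually read    */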

1. MPI supports blocking and nonblocking I/O routines.
2. A blocking I/O call will not return until the I/O request is completed.
3. A nonblocking I/O call initiates an I/O operation but does not wait for it to complete. Given suitable hardware, this allows the transfer of data out of/into the user's buffer to proceed concurrently with computation. A separate request-completion call (MPI_WAIT, MPI_TEST, or any of their variants) is needed to complete the I/O request, i.e., to confirm that the data has been read or written and that it is safe for the user to reuse the buffer. The nonblocking versions of the routines are named MPI_FILE_IXXX, where the I stands for immediate.
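A minimal sketch of the nonblocking pattern (buf, nints and fh as in the earlier example; do_some_computation() stands for any hypothetical work overlapped with the I/O):

MPI_Request request;

MPI_File_iwrite(fh, buf, nints, MPI_INT, &request);  /* start the write, return immediately */

do_some_computation();                               /* proceeds while the data is written  */

MPI_Wait(&request, MPI_STATUS_IGNORE);               /* only after this may buf be reused   */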

MPI_File_set_view

int MPI_File_set_view(
    MPI_File     mpi_fh    /* in: file handle                          */,
    MPI_Offset   disp      /* in: displacement of the view, in bytes   */,
    MPI_Datatype etype     /* in: elementary datatype                  */,
    MPI_Datatype filetype  /* in: filetype describing the data layout  */,
    char        *datarep   /* in: data representation, e.g. "native"   */,
    MPI_Info     info      /* in: info object                          */
);

Example:
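A minimal sketch (assuming fh is open on MPI_COMM_WORLD and buf holds nints integers): each process sets a view so that its writes land in its own contiguous block of the file.

MPI_Offset disp = (MPI_Offset)rank * nints * sizeof(int);  /* this rank's block starts here */

MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
MPI_File_write(fh, buf, nints, MPI_INT, &status);          /* written into this rank's block */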

MPI debugging and running on several computers

Debugging parallel MPI programs
1. Problems:
   - the order of execution steps can differ from run to run;
   - the system "hangs" on communication operations;
   - switching between processes.
2. Parallel debuggers – under active development, several are available.
3. A simpler approach – use output:
   a. printf/fflush (see the sketch below).
4. Launching the MPI program from a debugger.
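A sketch of approach 3: prefix every line with the process rank and flush immediately, so the output is not left in a buffer if the process later hangs (the variable i is a hypothetical loop counter):

int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

printf("[rank %d] before MPI_Recv, iteration %d\n", rank, i);
fflush(stdout);   /* push the line out now, in case the process blocks afterwards */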

A few words about the internal structure of MPICH
1. Process 0 is the master, which listens on a fixed port; all the others are slaves that register with this master.
2. It is therefore important that MPI_Init is executed on the master process first!
3. See also launching an MPI program directly (mpich2-doc-windev.pdf, section 9 "Runtime Environment"):

if "%1" == "" goto HELP
if "%2" == "" goto HELP
set PMI_ROOT_HOST=%COMPUTERNAME%
set PMI_ROOT_PORT=9222
set PMI_ROOT_LOCAL=1
set PMI_RANK=%1
set PMI_SIZE=%2
set PMI_KVS=mpich2
goto DONE
:HELP
REM usage: setmpi2 rank size
:DONE

Linux – see mpich-doc-user.pdf, Section 7, "Debugging".

Visual Studio – MPI Cluster Debugger
1. Open your project properties and go to Configuration Properties -> Debugging.
2. Select MPI Cluster Debugger.

Configuring the MPI Cluster Debugger
Only three text boxes have to be filled in:
1. MPIRun Command: the path + filename of your mpirun application (usually mpiexec.exe or mpirun.exe). In my case it is "c:\Program Files\MPICH2\bin\mpiexec.exe" (do not forget the quotes).
2. MPIRun Arguments: the arguments we want to pass to the MPIRun command; at the very least we have to tell MPI the number of processes to spawn, so we use -n 2 for 2 processes (change 2 as you wish).
3. MPIShim Location: the path + filename of your mpishim.exe file. Mpishim.exe is different for every release of Visual Studio and platform you have. For example, its location for Visual Studio 2005 on 32-bit platforms should be "C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\Remote Debugger\x86".
And do not forget: Application Command = $(TargetPath)

Additional configuration
1. In addition, you can choose whether to stop the other processes when one of them reaches a breakpoint:

Using the MPI Cluster Debugger
1. Breakpoints
   a. Set them in the code as usual and build the program.
2. Now the program can be run with the Debug command (F5).
   a. All processes start in separate windows! When a breakpoint is reached, execution stops.
   b. To view the processes of the application, press CTRL+ALT+Z to open the Processes window, shown in Figure 3. Note that the ID shown in this window is the Windows process ID and not the MPI rank.
   c. Set a breakpoint filter in Visual Studio to have the breakpoint active on only some of the processes.
   d. The selected process in the Processes window sets the focus in the other windows, making it easy to examine the details of the process.
   e. Note: always use the step buttons in the Processes window, which are circled in red in Figure 3, to step through a breakpoint of a parallel application. Do not use the function key shortcuts.

A few remarks
1. Order of process execution:
   a. If not all processes are stopped, you cannot know for certain when execution will switch to another process (it is claimed that VS 2008 can suspend an individual process?!).
   b. In Visual Studio 2005 you have to juggle breakpoints: while one process is stopped at a given breakpoint, the other one can be run further, and so on.

Running MPICH programs on several computers

Running MPICH on several computers
Linux: …owulf.doc
Windows: …wnloads/windows-mpich2-tips.doc
"Building a cluster and installing MPI" – a good topic for a report?!
Use-a-Beowulf-Cluster

Windows
1. Install MPICH on all nodes.
2. DOMAIN administrative rights are required:
   a. on each node execute "smpd -register_spn";
   b. all jobs must be submitted with the -delegate option.
3. Copy the executable to each machine. It should be in the same directory structure as on the MASTER node, for example "C:\bspaul\helmholtz.mpi".
4. Log on to each machine you want to run on. This is not required, but prevents anyone else from logging on and using those machines.
5. Run the program from a command prompt on the MASTER node by typing: "mpiexec -hosts X hostname.1.com hostname.2.com hostname.X.com", where X is the number of hosts being used and hostname.X.com are the names of the machines being run on (see the example below).
6. Verification can be made by checking the Windows Task Manager of each machine to verify they are at 100%.
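For example (host names and the executable path are placeholders, following the layout from step 3):

mpiexec -delegate -hosts 3 node1.example.com node2.example.com node3.example.com C:\bspaul\helmholtz.mpi\helmholtz.exe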

Troubleshooting on Windows
1. The Windows Firewall must be adjusted to allow MPICH2 to run:
   a. bring up the Windows Firewall from the Control Panel;
   b. from the Exceptions tab, select "Add Program" and make sure "C:\mpich2\bin\smpd.exe" and "C:\mpich2\bin\mpiexec.exe" are on the list;
   c. from "Add Program" add the executable to the exceptions. N.B. If the Windows Security alert appears, make sure the executable has been added.
2. Share the folder on the master node: from Windows Explorer right-click Properties and select "Share this Folder".