
1 Programming Parallel Systems with MPI, Lecture 3. Course: Parallel Algorithms. Author: Maksims Kravcevs, Rīga 2007

2 Input and output operations

3 Additional materials
“Introduction to Parallel I/O and MPI-IO”: www.sdsc.edu/us/training/workshops/.../docs/Thakur-MPI-IO.ppt
http://mpi.deino.net/mpi_functions/MPI_File_write_shared.html
http://www.mcs.anl.gov/research/projects/mpi/usingmpi2/examples/moreio/main.htm – examples

4 MPI-1 example (Chapter 4)

void Get_data(
        float* a_ptr   /* out */,
        float* b_ptr   /* out */,
        int*   n_ptr   /* out */,
        int    my_rank /* in  */,
        int    p       /* in  */) {

    int source = 0;     /* All local variables used by */
    int dest;           /* MPI_Send and MPI_Recv       */
    int tag;
    MPI_Status status;

    if (my_rank == 0) {
        printf("Enter a, b, and n\n");
        scanf("%f %f %d", a_ptr, b_ptr, n_ptr);
        for (dest = 1; dest < p; dest++) {
            tag = 0;
            MPI_Send(a_ptr, 1, MPI_FLOAT, dest, tag, MPI_COMM_WORLD);
            tag = 1;
            MPI_Send(b_ptr, 1, MPI_FLOAT, dest, tag, MPI_COMM_WORLD);
            tag = 2;
            MPI_Send(n_ptr, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);
        }
    } else {
        tag = 0;
        MPI_Recv(a_ptr, 1, MPI_FLOAT, source, tag, MPI_COMM_WORLD, &status);
        tag = 1;
        MPI_Recv(b_ptr, 1, MPI_FLOAT, source, tag, MPI_COMM_WORLD, &status);
        tag = 2;
        MPI_Recv(n_ptr, 1, MPI_INT, source, tag, MPI_COMM_WORLD, &status);
    }
} /* Get_data */
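Below is a minimal sketch (not part of the original slides) of how Get_data might be called; the surrounding trapezoid-rule program of Chapter 4 and the name of the MPI header are assumptions.

#include <stdio.h>
#include "mpi.h"

void Get_data(float* a_ptr, float* b_ptr, int* n_ptr, int my_rank, int p);

int main(int argc, char* argv[]) {
    float a, b;
    int   n, my_rank, p;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    Get_data(&a, &b, &n, my_rank, p);   /* rank 0 reads, the others receive */
    printf("process %d: a=%f b=%f n=%d\n", my_rank, a, b, n);

    MPI_Finalize();
    return 0;
}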

5 Data input and output operations
The MPI 1.2 standard does not include any parallel operations for data input and output, nor does it address possible topologies of I/O devices. The simplest assumption is that there is a single input/output device and that one process performs all input and output, forwarding the data to the other processes.
MPI 2 includes a logical I/O library that describes parallel work with files. The programmer works with logical files that may be physically distributed. Work is organized much like message passing: a logical file supports both blocking, independent single-process operations and nonblocking, collective operations (in which all processes logically associated with the file take part).
In all cases three access methods are provided:
a) Access at an explicit position in the file: a process addresses a specific file position.
b) Access through individual pointers: each process has its own file pointer that marks a specific position inside the file.
c) Access through a shared pointer: several processes work with one common pointer.
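As a hedged illustration of these ideas (not taken from the slides), the sketch below combines a collective open, an explicit offset derived from the rank, and a collective write; the file name data.bin and the block size N are arbitrary choices.

#include "mpi.h"
#define N 100

int main(int argc, char* argv[]) {
    int rank, i;
    int buf[N];
    MPI_File fh;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < N; i++) buf[i] = rank;          /* some data to write */

    /* collective open: all processes of the communicator take part */
    MPI_File_open(MPI_COMM_WORLD, "data.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* explicit offset: each rank writes its own block, all ranks together */
    offset = (MPI_Offset)rank * N * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}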

6 Data input and output operations
MPI_File_open / MPI_File_close – open or close a file.
MPI_File_read / MPI_File_write – a process performs an individual I/O operation on the file; the program continues after the operation completes.
MPI_File_read_all / MPI_File_write_all – all processes perform an operation on the file; the operation is blocking.
MPI_File_iread / MPI_File_iwrite – a process initiates an individual I/O operation on the file and immediately returns to further execution.
MPI_File_iread_all / MPI_File_iwrite_all – all processes start a collective operation on the file.
http://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-2.0/

7 MPI I/O example

#include <mpi.h>
...
MPI_File fh;
MPI_Status status;
...
MPI_File_open(MPI_COMM_SELF, filename, MPI_MODE_CREATE | MPI_MODE_RDWR,
              MPI_INFO_NULL, &fh);
MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
MPI_File_write(fh, buf, nints, MPI_INT, &status);
MPI_File_close(&fh);

MPI_File_open(MPI_COMM_SELF, filename, MPI_MODE_CREATE | MPI_MODE_RDWR,
              MPI_INFO_NULL, &fh);
MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
MPI_File_read(fh, buf, nints, MPI_INT, &status);
MPI_File_close(&fh);
...

http://www.sesp.cse.clrc.ac.uk/Publications/paraio/paraio/node53.html
http://beige.ucs.indiana.edu/I590/node88.html
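Assuming the fragment above is completed into a full program (buf, nints and filename declared and set), it would typically be compiled with mpicc and started with mpiexec. Because the file is opened on MPI_COMM_SELF, each process works on its own file handle independently, so running it with a single process is enough to try it out.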

8 int MPI_File_open(MPI_Comm comm /* in */, char *filename /* in */, int amode /* in */, MPI_Info info /* in */, MPI_File *fh /* out */)

MPI_FILE_OPEN opens the file identified by the file name filename on all processes in the comm communicator group.
MPI_Info – many functions take a handle of this type, through which attributes can be passed (usually MPI_INFO_NULL is given).
MPI_FILE_OPEN is a collective routine: all processes must provide the same value for amode, and all processes must provide filenames that reference the same file.

9 The amode argument is a combination of the following flags:
MPI_MODE_RDONLY --- read only,
MPI_MODE_RDWR --- reading and writing,
MPI_MODE_WRONLY --- write only,
MPI_MODE_CREATE --- create the file if it does not exist,
MPI_MODE_EXCL --- error if creating a file that already exists,
MPI_MODE_DELETE_ON_CLOSE --- delete the file on close,
MPI_MODE_UNIQUE_OPEN --- the file will not be concurrently opened elsewhere,
MPI_MODE_SEQUENTIAL --- the file will only be accessed sequentially,
MPI_MODE_APPEND --- set the initial position of all file pointers to the end of the file.
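A small sketch (the file name results.dat is made up) of how these flags are combined with a bitwise OR, and of detecting the MPI_MODE_EXCL error case; by default the error handler attached to files is MPI_ERRORS_RETURN, so the return code of MPI_File_open can simply be tested.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    MPI_File fh;
    int err;

    MPI_Init(&argc, &argv);

    /* create the file only if it does not exist yet: CREATE + EXCL */
    err = MPI_File_open(MPI_COMM_WORLD, "results.dat",
                        MPI_MODE_CREATE | MPI_MODE_EXCL | MPI_MODE_WRONLY,
                        MPI_INFO_NULL, &fh);
    if (err != MPI_SUCCESS) {
        printf("results.dat already exists (or could not be created)\n");
    } else {
        MPI_File_close(&fh);
    }

    MPI_Finalize();
    return 0;
}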

10 MPI provides three types of positioning for data access routines: explicit offsets, individual file pointers, and shared file pointers. The different positioning methods may be mixed within the same program and do not affect each other.
1. The data access routines that accept explicit offsets contain _AT in their name (e.g., MPI_FILE_WRITE_AT). Explicit offset operations perform data access at the file position given directly as an argument; no file pointer is used or updated. Note that this is not equivalent to an atomic seek-and-read or seek-and-write operation, as no "seek" is issued. Operations with explicit offsets are described in the section Data Access with Explicit Offsets.
2. The names of the individual file pointer routines contain no positional qualifier (e.g., MPI_FILE_WRITE).
3. The data access routines that use shared file pointers contain _SHARED or _ORDERED in their name (e.g., MPI_FILE_WRITE_SHARED).
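The following hedged sketch (arbitrary file name and block size) shows one call of each form, purely to illustrate the three positioning methods; the positions they use are independent of each other, and in a real program you would normally pick a single method.

#include "mpi.h"
#define N 10

int main(int argc, char* argv[]) {
    int rank;
    int buf[N] = {0};
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    /* 1) explicit offset: no file pointer is read or updated */
    MPI_File_write_at(fh, (MPI_Offset)rank * N * sizeof(int),
                      buf, N, MPI_INT, MPI_STATUS_IGNORE);

    /* 2) individual file pointer: seek, then write at the pointer */
    MPI_File_seek(fh, (MPI_Offset)rank * N * sizeof(int), MPI_SEEK_SET);
    MPI_File_write(fh, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    /* 3) shared file pointer: all ranks write through one common pointer */
    MPI_File_write_shared(fh, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}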

11 MPI_FILE_READ

int MPI_File_read(
    MPI_File     mpi_fh,   /* [in]  file handle */
    void        *buf,      /* [out] initial address of buffer (choice) */
    int          count,    /* [in]  number of elements in buffer (nonnegative integer) */
    MPI_Datatype datatype, /* [in]  datatype of each buffer element (handle) */
    MPI_Status  *status    /* [out] status object (Status) */
);
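A small sketch (the file name data.bin is assumed to be the file written earlier) of an individual blocking read, using MPI_Get_count on the returned status to find out how many elements were actually read.

#include <stdio.h>
#include "mpi.h"
#define N 100

int main(int argc, char* argv[]) {
    int buf[N], nread;
    MPI_File fh;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_SELF, "data.bin", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_read(fh, buf, N, MPI_INT, &status);

    /* the status object tells how many elements were actually read */
    MPI_Get_count(&status, MPI_INT, &nread);
    printf("read %d ints\n", nread);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}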

12

13
1. MPI supports blocking and nonblocking I/O routines.
2. A blocking I/O call will not return until the I/O request is completed.
3. A nonblocking I/O call initiates an I/O operation but does not wait for it to complete. Given suitable hardware, this allows the transfer of data out of or into the user's buffer to proceed concurrently with computation. A separate request completion call (MPI_WAIT, MPI_TEST, or any of their variants) is needed to complete the I/O request, i.e., to confirm that the data has been read or written and that it is safe for the user to reuse the buffer. The nonblocking versions of the routines are named MPI_FILE_IXXX, where the I stands for immediate.
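A minimal sketch (the file name async.dat is arbitrary) of the nonblocking pattern just described: MPI_File_iwrite starts the transfer and MPI_Wait completes it before the buffer is reused or the file is closed.

#include "mpi.h"
#define N 100

int main(int argc, char* argv[]) {
    int buf[N] = {0};
    MPI_File fh;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_SELF, "async.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* start the write and return immediately */
    MPI_File_iwrite(fh, buf, N, MPI_INT, &request);

    /* ... computation that does not touch buf could go here ... */

    /* complete the request before reusing buf or closing the file */
    MPI_Wait(&request, &status);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}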

14 MPI_File_set_view

int MPI_File_set_view(
    MPI_File     mpi_fh,   /* in: file handle */
    MPI_Offset   disp,     /* in: displacement in bytes from the start of the file */
    MPI_Datatype etype,    /* in: elementary datatype */
    MPI_Datatype filetype, /* in: filetype describing the layout of the file */
    char        *datarep,  /* in: data representation, e.g. "native" */
    MPI_Info     info      /* in: info object */
);

15 Example (see http://mpi.deino.net/mpi_functions/MPI_File_set_view.html):
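Since the example itself is not reproduced in the transcript, the following is a hedged sketch in the spirit of the linked page: each process sets a view whose displacement depends on its rank, so a plain MPI_File_write lands in that process's own block of the shared file. The file name viewtest.dat and block size N are made up.

#include "mpi.h"
#define N 100

int main(int argc, char* argv[]) {
    int rank, i, buf[N];
    MPI_File fh;
    MPI_Offset disp;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < N; i++) buf[i] = rank * N + i;

    MPI_File_open(MPI_COMM_WORLD, "viewtest.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* each rank's view starts at its own displacement, so writing at
       offset 0 of the view writes into this rank's own block */
    disp = (MPI_Offset)rank * N * sizeof(int);
    MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);

    MPI_File_write(fh, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}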

16

17 MPI debugging and running on multiple computers

18 Debugging MPI parallel programs
1. Problems:
   - the order of execution steps may differ from run to run,
   - the system "hangs" on communication operations,
   - switching between processes.
2. Parallel debuggers – under active development, several are available.
3. A simpler approach – use output: printf/fflush (see the sketch after this list).
4. Invoke the MPI program from a debugger.
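A minimal sketch of the printf/fflush approach from item 3: every message is tagged with the rank and flushed immediately, so the output is not lost in a buffer if the process later hangs.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* tag every message with the rank and flush at once */
    printf("[rank %d] before exchange\n", rank);
    fflush(stdout);

    /* ... the communication being debugged would go here ... */

    printf("[rank %d] after exchange\n", rank);
    fflush(stdout);

    MPI_Finalize();
    return 0;
}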

19 A little about the internal structure of MPICH
1. Process 0 is the Master, which listens on a fixed port; all the others are Slaves that register with this Master.
2. It is therefore important that MPI_Init is executed on the Master process first!
3. See also launching an MPI program directly (mpich2-doc-windev.pdf):

if "%1" == "" goto HELP
if "%2" == "" goto HELP
set PMI_ROOT_HOST=%COMPUTERNAME%
set PMI_ROOT_PORT=9222
set PMI_ROOT_LOCAL=1
set PMI_RANK=%1
set PMI_SIZE=%2
set PMI_KVS=mpich2
goto DONE
:HELP
REM usage: setmpi2 rank size
:DONE

20 Linux – see mpich-doc-user.pdf, Section 7, Debugging.

21 Visual Studio – MPI Cluster Debugger
1. Open your project properties and go to Configuration Properties -> Debugging.
2. Select MPI Cluster Debugger.
http://www.tuncbahcecioglu.com/posts/mpi-debugging-with-visual-studio/
http://go.microsoft.com/fwlink/?LinkId=55932

22 Configuring the MPI Cluster Debugger
All we have to do is fill in 3 textboxes:
1. MPIRun Command: the path + filename of your mpirun application (usually mpiexec.exe or mpirun.exe). In my case it is "c:\Program Files\MPICH2\bin\mpiexec.exe" (do not forget the quotes).
2. MPIRun Arguments: the arguments we want to pass to the MPIRun command; at the very least we have to tell MPI the number of processes to spawn, so we use -n 2 for 2 processes (change 2 as you wish).
3. MPIShim Location: the path + filename of your mpishim.exe file. Mpishim.exe is different for every release of Visual Studio and platform you have. For example, its location for Visual Studio 2005 on 32-bit platforms should be "C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\Remote Debugger\x86".
And do not forget: Application Command = $(TargetPath)

23 Additional configuration
1. You can additionally choose whether to stop the other processes when one of them reaches a breakpoint:

24 Using the MPI Cluster Debugger
1. Breakpoint
   a. Set it in the code as usual and build the program.
2. Now the program can be run with the Debug command (F5).
   a. All processes start in separate windows! When a breakpoint is reached, execution stops.
   b. To view the processes of the application, press CTRL+ALT+Z to open the Processes window, shown in Figure 3. Note that the ID shown in this window is the Windows process ID and not the MPI rank ID (a small helper for printing this mapping is sketched below).
   c. Set a breakpoint filter in Visual Studio to have the breakpoint active on only some of the processes.
   d. The selected process in the Processes window sets the focus in the other windows, making it easy to examine the details of the process.
   e. Note: Always use the step buttons in the Processes window, which are circled in red in Figure 3, to step through a breakpoint of a parallel application. Do not use the function key shortcuts.
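Because the Processes window shows the Windows process ID rather than the MPI rank (item 2b), it can help to print the mapping yourself at startup. The sketch below uses _getpid on Windows and getpid elsewhere; this is an illustrative choice, not something from the slides.

#include <stdio.h>
#include "mpi.h"
#ifdef _WIN32
#include <process.h>
#define GETPID _getpid
#else
#include <unistd.h>
#define GETPID getpid
#endif

int main(int argc, char* argv[]) {
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* print which OS process ID corresponds to which MPI rank */
    printf("rank %d has process id %d\n", rank, (int)GETPID());
    fflush(stdout);

    MPI_Finalize();
    return 0;
}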

25 A few remarks
1. Order of process execution
   a. If not all processes are stopped, you cannot know for certain when execution will switch to another process (it is claimed that VS 2008 can suspend an individual process ?!).
   b. In Visual Studio 2005 you have to juggle breakpoints: while one process is held at a given breakpoint, the other can be run further, and so on.

26 Running MPICH programs on multiple computers

27 Running MPICH on multiple computers
Linux: http://dev.gentoo.org/~dberkholz/proj/cluster/build_a_beowulf.doc
Windows: http://www.mcs.anl.gov/research/projects/mpi/mpich2/downloads/windows-mpich2-tips.doc
"Building a cluster and installing MPI" – a good topic for a report?!
http://www.docstoc.com/docs/7187091/How-to-Build-and-Use-a-Beowulf-Cluster

28 Windows
1. Install MPICH on all nodes.
2. DOMAIN administrative rights are required.
   a. On each node execute: "smpd -register_spn".
   b. All jobs must be submitted with the -delegate option.
3. Copy the executable to each machine. It should be in the same directory structure as on the MASTER node, for example "C:\bspaul\helmholtz.mpi".
4. Log on to each machine you want to run on. This is not required, but it prevents anyone else from logging on and using those machines.
5. Run the program from a command prompt on the MASTER node by typing: "mpiexec –hosts X hostname.1.com hostname.2.com hostname.X.com".
6. X is the number of hosts being used and hostname.X.com are the names of the machines being run on.
7. Verification can be made by checking the Windows Task Manager on each machine to confirm that they are at 100%.
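For example (the host names node1 and node2 are made up), with the executable copied to C:\bspaul\helmholtz.mpi on both machines, step 5 would look like: mpiexec -hosts 2 node1 node2 C:\bspaul\helmholtz.mpi. Note that the program to run is given after the list of hosts.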

29 Troubleshooting Windows
1. The Windows Firewall must be adjusted to allow MPICH2 to run.
   a. Bring up the Windows Firewall from the Control Panel.
   b. On the Exceptions tab, select "Add Program" and make sure "C:\mpich2\bin\smpd.exe" and "C:\mpich2\bin\mpiexec.exe" are on the list.
   c. Using "Add Program", add your own executable to the exceptions as well. N.B. If the Windows Security alert appears, make sure the executable has been added.
2. Share the folder on the master node: in Windows Explorer right-click it, choose Properties, and select "Share this Folder".

