1 Executing Message-Passing Programs Mitesh Meswani

2 Presentation Outline
Introduction to Top Gun (eServer pSeries 690)
MPI on Top Gun (AIX/Linux), the Itanium2 (Linux) Cluster, and the Sun (Solaris) Workstation Cluster:
Environment Set-up
Compilation
Execution
Contacts and Web Sites

3 Introduction to Top Gun
Top Gun is an IBM eServer pSeries 690 (p690) multiprocessor.
System Summary:
Architecture: IBM POWER4
OS: AIX/Linux
CPUs: 16
Peak GFLOPS: 83.2
Memory: 32GB
Disk Storage: 2TB

4 Some features of the p690*
Logical Partitioning
Processor: IBM POWER4 dual-core, 1GHz processor with a fast shared L2 cache
Self Management: autonomic computing tools for self-correction and detection of errors
AIX Operating System: supports large files, system partitioning, and large main memory
*Source: http://www-1.ibm.com/servers/eserver/pseries/hardware/tour/690_text.html

5 POWER4 Processor Layout*
Description:
Two 64-bit processors per chip
64KB I-cache and 32KB D-cache per processor
1.44MB unified L2 cache with a 125GB/s transfer rate per chip
Some POWER4 Features:
Supports speculative execution
Supports out-of-order execution
Each processor is superscalar with 8 functional units: two floating-point units, two integer units, two load/store units, one branch prediction unit, and one logical unit
*Source: http://www-1.ibm.com/servers/eserver/pseries/hardware/whitepapers/power4.html

6 Multi-Chip Module Layout
Each MCM has:
four POWER4 chips, for a total of 8 processors
a shared 128MB unified L3 cache with a maximum transfer rate of 13.8GB/s
A maximum of four MCMs per p690 server/cabinet
Multiple p690 servers can be connected to form a cluster
[Diagram: POWER4 chips with their L2 caches sharing the MCM's L3 cache]

7 32-Processor p690 Configuration
[Diagram: four MCMs (MCM 0 through MCM 3), each with eight processors and shared L2 and L3 caches, interconnected to form a 32-processor configuration]

8 Projects using Top Gun - 1
Dr. Stephen Aley, Department of Biological Sciences: programs for the analysis and data mining of DNA sequences, analysis of microarray data, and proteomic analysis by LC/MS-MS.
Robert Jeffrey Bruntz, Department of Physics: analyzing the output of magnetohydrodynamics (MHD) simulations via visualization enabled by OpenDX.
Dr. Michael Huerta, Department of Mechanical and Industrial Engineering: modeling shaped charges and other shock physics problems via hydrocode calculations.

9 Projects using Top Gun - 2
Dr. Ramon Ravelo, Department of Physics: performing multi-million-atom simulations of material response employing parallel molecular dynamics algorithms.
Dr. Ming-Ying Leung, Department of Mathematical Sciences: developing efficient algorithms for pseudoknot prediction in long RNA sequences.
Dr. Patricia Teller, Diana Villa, and Research Assistants, Department of Computer Science: developing performance-counter-based metrics and techniques to tune application performance, mathematical models to steer HPC applications, memory access models of large, complex applications to explore sources of performance degradation, and techniques to dynamically adapt the Linux operating system.

10 Software on Top Gun
System Software:
OS: AIX 5.3
Compilers: XL, Visual Age, GCC
Parallel Programming Libraries: MPI, OpenMP, SHMEM
Parallel Run-Time Environment: Parallel Operating Environment (POE)
Other Software:
OpenGL 5.2
LoadLeveler for AIX 2.0
PAPI
XL Fortran Compiler
Perl 5
Java 1.4.1
Complete list available at: http://research.utep.edu/Default.aspx?tabid=20686

11 MPI on Top Gun
Sample hello world MPI program:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char name[BUFSIZ];
    int length;

    MPI_Init(&argc, &argv);
    MPI_Get_processor_name(name, &length);
    printf("%s: hello world\n", name);
    MPI_Finalize();
    return 0;
}
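Since the deck is about message passing, a slightly larger sketch (not from the original slides) may help: it assumes at least two MPI tasks and has task 0 send an integer to task 1 with MPI_Send/MPI_Recv. It is compiled and run the same way as the hello world program above.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this task */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of tasks */

    if (rank == 0 && size > 1) {
        value = 42;                          /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("task 0 of %d sent %d\n", size, value);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("task 1 of %d received %d\n", size, value);
    }

    MPI_Finalize();
    return 0;
}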

12 MPI on Top Gun Environment Set-up
1. Create a file named .rhosts containing the line topgun.utep.edu and save it in your home directory.
2. Create a host file named .hf containing eight lines, each with the string topgun.utep.edu, and save it in your home directory. Then set the environment variable MP_HOSTFILE to the absolute path of your .hf file.
Example: %setenv MP_HOSTFILE /home/mitesh/.hf
3. Define an environment variable called MP_PROCS to set a default number of processors.
Example: %setenv MP_PROCS 4
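Putting the three steps together, a csh session might look like this (the home directory path is illustrative; the file contents follow the slide above):
%cat ~/.rhosts
topgun.utep.edu
%cat ~/.hf
topgun.utep.edu
(...eight identical lines in total...)
%setenv MP_HOSTFILE /home/mitesh/.hf
%setenv MP_PROCS 4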

13 MPI on Top Gun Program Compilation
Use mpcc.
Example: %mpcc HelloWorldMPI.c -o hello
This creates an executable called hello.
Other MPI compilers:
Fortran 77: mpxlf
C++: mpCC
Complete list available at: http://research.utep.edu/Default.aspx?tabid=20687
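The other compilers follow the same pattern; the source file names below are hypothetical:
%mpxlf HelloWorldMPI.f -o hello_f
%mpCC HelloWorldMPI.C -o hello_cpp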

14 MPI on Top Gun Program Execution - 1
Use the poe utility to execute parallel programs. The poe command invokes the Parallel Operating Environment (POE) for loading and executing programs on remote nodes.
Flags associated with poe:
-hostfile: specifies a file with a list of hosts; this overrides the .hf file.
-procs: number of MPI tasks to create; this overrides the MP_PROCS environment variable.

15 MPI on Top Gun Program Execution - 2
Example with two tasks:
%poe hello -procs 2
%topgun.utep.edu: hello world
Example with a user-specified host file:
%poe hello -hostfile ./host.txt -procs 3
%topgun.utep.edu: hello world
Link for information on IBM's POE (Parallel Operating Environment): http://www.llnl.gov/computing/tutorials/ibm_sp/#POE
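The file passed to -hostfile is simply a list of host names, one per line. For a run confined to Top Gun it could, for example, repeat the machine's name (illustrative contents, mirroring the .hf file from the set-up slide):
%cat ./host.txt
topgun.utep.edu
topgun.utep.edu
topgun.utep.edu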

16 MPI on Itanium2 Cluster
Vampyre Cluster: 8-processor Intel Itanium2 cluster with dual (SMP) 900MHz Itanium2 processors per node
Network Features: externally accessible by 100Mbps Ethernet; internal network runs at 1Gbps
OS: Linux kernel 2.4.18-e.25
MPI implementation: MPICH

17 MPI on Itanium2 Cluster Environment Set-up
In your home directory create a .rhosts file that contains the following four lines:
it01.vampyre.cs.utep.edu
it04.vampyre.cs.utep.edu
it03.vampyre.cs.utep.edu
it02.vampyre.cs.utep.edu
*Note: it02 is down and cannot be used for MPI at the moment; additionally, only two of the three remaining nodes are active because of routing problems caused by it02 being down.

18 MPI on Itanium2 Cluster Program Compilation
Use mpicc to compile the program.
Example: %mpicc HelloWorldMPI.c -o hello

19 MPI on Itanium2 Cluster Program Execution - 1
Use mpirun to execute your program.
Some mpirun options:
-machinefile <file>: take the list of possible machines on which to execute from <file>
-np <n>: specify the number of processors on which to execute
-nolocal: do not execute MPI tasks on the local host
Example:
%mpirun -np 2 ./hello
%sabina.vampyre.cs.utep.edu: hello world
%clarimonde.cs.utep.edu: hello world

20 MPI on Itanium2 Cluster Program Execution - 2
Example: create one task while logged into host sabina and use host clarimonde to execute the MPI task.
%mpirun -nolocal -np 1 ./hello
%clarimonde.cs.utep.edu: hello world
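The -machinefile option listed on slide 19 can be combined with these flags. A hypothetical machine file (its name and the chosen nodes are illustrative, drawn from the hosts in the set-up slide) and the corresponding run might look like:
%cat ./machines
it01.vampyre.cs.utep.edu
it03.vampyre.cs.utep.edu
it04.vampyre.cs.utep.edu
%mpirun -machinefile ./machines -np 2 ./hello
As in the earlier examples, each task prints the name of the host it ran on.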

21 MPI on Sun Workstation Cluster Environment Set-up
Create a .rhosts file with one line per host, in the format: hostname username
Example:
station10 mitesh
station11 mitesh
...
station20 mitesh
Copy the latest .cshrc file for MPI:
%cp /usr/local/cshrc/.cshrc.mpi .cshrc

22 MPI on Sun Workstation Cluster Program Compilation
Use mpicc to compile the program.
Example: %mpicc HelloWorldMPI.c -o hello

23 MPI on Sun Workstation Cluster Program Execution - 1
Use mpirun to execute a program.
Some mpirun options:
-machinefile <file>: take the list of possible machines on which to execute from <file>; the file lists one node name per line, for example:
station10
station11
-np <n>: specify the number of processors to run on
-nolocal: do not execute MPI tasks on the local host
Example:
%mpirun -np 3 ./hello
%station11.: hello world
%station13.: hello world
%station12.: hello world

24 MPI on Sun Workstation Cluster Program Execution - 2
Example: create two tasks while logged into station11 and use only station12 and station13 to execute the MPI tasks.
%mpirun -nolocal -np 2 ./hello
%station12.: hello world
%station13.: hello world
Refer to the MPICH web site for the complete list of mpirun options: http://www-unix.mcs.anl.gov/mpi/www/www1/mpirun.html

25 Contacts and Web Sites
System Administrators:
Jose Hernandez (jose@cs.utep.edu) for Top Gun and the Sun Workstation Cluster
Leopoldo Hernandez (leo@cs.utep.edu) for the Itanium2 Cluster
System Web Sites:
Top Gun: http://research.utep.edu/topgun
Itanium2 Cluster: http://www.cs.utep.edu/~bdauriol/courses/ParallelAndConcurrentProgramming/vampyre.html
MPI Links:
http://www.llnl.gov/computing/tutorials/workshops/workshop/mpi/MAIN.html#References
http://www-unix.mcs.anl.gov/mpi/

26 Questions?

