Executing Message-Passing Programs
Mitesh Meswani

Presentation Outline
- Introduction to Top Gun (eServer pSeries 690)
- MPI on Top Gun (AIX/Linux), the Itanium2 (Linux) cluster, and the Sun (Solaris) workstation cluster: environment set-up, compilation, and execution
- Contacts and Web Sites

Introduction to Top Gun
System Summary:
- Architecture: IBM POWER4
- OS: AIX/Linux
- CPUs: 16
- Peak GFLOPS: 83.2
- Memory: 32 GB
- Disk Storage: 2 TB
Top Gun is an IBM eServer pSeries 690 (p690) multiprocessor.

Some features of the p690*
- Logical Partitioning
- Processor: IBM POWER4 dual-core, 1 GHz processor with a fast shared L2 cache
- Self Management: autonomic computing tools for self-correction and error detection
- AIX Operating System: supports large files, system partitioning, and large main memory
*Source:

POWER4 Processor Layout*
Description:
- Two 64-bit processors per chip
- 64 KB I-cache and 32 KB D-cache per processor
- 1.44 MB unified L2 cache with a 125 GB/s transfer rate per chip
Some POWER4 features:
- Supports speculative execution
- Supports out-of-order execution
- Each processor is superscalar with 8 functional units: two floating-point units, two integer units, two load/store units, one branch execution unit, and one logical unit
*Source:

Multi-Chip Module Layout
Each MCM has:
- four POWER4 chips, for a total of 8 processors
- a shared 128 MB unified L3 cache with a maximum transfer rate of 13.8 GB/s
A maximum of four MCMs per p690 server/cabinet.
Multiple p690 servers can be connected to form a cluster.
[Diagram: four POWER4 chips (processor pairs with their L2 caches) sharing an L3 cache]

32-Processor p690 Configuration
[Diagram: four MCMs (MCM 0 to MCM 3), each containing four dual-core POWER4 chips with their L2 caches and a shared L3 cache]

Projects using Top Gun - 1
- Dr. Stephen Aley, Department of Biological Sciences: programs for the analysis and data mining of DNA sequences, analysis of microarray data, and proteomic analysis by LC/MS-MS.
- Robert Jeffrey Bruntz, Department of Physics: analyzing the output of magnetohydrodynamics (MHD) simulations via visualization enabled by OpenDX.
- Dr. Michael Huerta, Department of Mechanical and Industrial Engineering: modeling shaped charges and other shock physics problems via hydrocode calculations.

Projects using Top Gun - 2
- Dr. Ramon Ravelo, Department of Physics: performing multi-million-atom simulations of material response employing parallel molecular dynamics algorithms.
- Dr. Ming-Ying Leung, Department of Mathematical Sciences: developing efficient algorithms for pseudoknot prediction in long RNA sequences.
- Dr. Patricia Teller, Diana Villa, and research assistants, Department of Computer Science: developing performance-counter-based metrics and techniques to tune application performance, mathematical models to steer HPC applications, memory-access models of large complex applications to explore sources of performance degradation, and techniques to dynamically adapt the Linux operating system.

Software on Top Gun
System Software:
- OS: AIX 5.3
- Compilers: XL, VisualAge, GCC
- Parallel programming libraries: MPI, OpenMP, SHMEM
- Parallel run-time environment: Parallel Operating Environment (POE)
Other Software:
- OpenGL 5.2
- LoadLeveler for AIX 2.0
- PAPI
- XL Fortran Compiler
- Perl 5
- Java
Complete list available at:

MPI on Top Gun
Sample hello world MPI program (the header names were lost in transcription; stdio.h and mpi.h are needed):

    #include <stdio.h>   /* printf, BUFSIZ */
    #include <mpi.h>     /* MPI library */

    int main(int argc, char *argv[])
    {
        char name[BUFSIZ];   /* buffer for the processor (host) name */
        int  length;         /* length of the returned name */

        MPI_Init(&argc, &argv);                  /* start the MPI run-time      */
        MPI_Get_processor_name(name, &length);   /* host this task is running on */
        printf("%s: hello world\n", name);
        MPI_Finalize();                          /* shut MPI down cleanly       */
        return 0;
    }
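The slides stop at printing the host name; as a minimal sketch that is not part of the original slides, the same skeleton can also report each task's rank and the total number of tasks, which is usually the next thing an MPI program needs:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        char name[MPI_MAX_PROCESSOR_NAME];  /* MPI-provided bound on name length */
        int  length, rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this task's id, 0..size-1 */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of tasks     */
        MPI_Get_processor_name(name, &length);
        printf("%s: hello world from task %d of %d\n", name, rank, size);
        MPI_Finalize();
        return 0;
    }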

MPI on Top Gun
Environment Set-up:
1. Create a file .rhosts containing the line topgun.utep.edu and save it in your home directory.
2. Create a host file .hf containing eight lines, each with the string topgun.utep.edu, and save it in your home directory. Then set the environment variable MP_HOSTFILE to the absolute path of your .hf file. Example: %setenv MP_HOSTFILE /home/mitesh/.hf
3. Define an environment variable MP_PROCS to set the default number of processors. Example: %setenv MP_PROCS 4
(Sample file contents for steps 1 and 2 are sketched below.)
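For concreteness, here is a sketch of what the two files from steps 1 and 2 would contain; the home-directory path above is just an example.

Contents of .rhosts (one line naming the host allowed to start remote tasks):

    topgun.utep.edu

Contents of .hf (eight identical lines, one per MPI task slot, all on Top Gun):

    topgun.utep.edu
    topgun.utep.edu
    topgun.utep.edu
    topgun.utep.edu
    topgun.utep.edu
    topgun.utep.edu
    topgun.utep.edu
    topgun.utep.edu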

MPI on Top Gun
Program Compilation:
Use mpcc. Example: %mpcc HelloWorldMPI.c -o hello
This creates an executable called hello.
Other MPI compilers:
- Fortran 77: mpxlf
- C++: mpCC
Complete list available at:

MPI on Top Gun
Program Execution - 1
Use the poe utility to execute parallel programs. The poe command invokes the Parallel Operating Environment (POE) for loading and executing programs on remote nodes.
Flags associated with poe:
- -hostfile: specifies a file with a list of hosts; this overrides the .hf file.
- -procs: number of MPI tasks to create; this flag overrides the MP_PROCS environment variable.

MPI on Top Gun
Program Execution - 2
Example with two tasks:
%poe hello -procs 2
%topgun.utep.edu: hello world
Example with a user-specified host file:
%poe hello -hostfile ./host.txt -procs 3
%topgun.utep.edu: hello world
Link for information on IBM's POE (Parallel Operating Environment):

MPI on Itanium2 Cluster
Vampyre Cluster: an 8-processor Intel Itanium2 cluster with dual (SMP) 900 MHz Itanium2 processors per node
Network: externally accessible via 100 Mbps Ethernet; the internal network runs at 1 Gbps
OS: Linux kernel e.25
MPI library: MPICH

MPI on Itanium2 Cluster
Environment Set-up:
In your home directory create an .rhosts file that contains the following four lines:
it01.vampyre.cs.utep.edu
it04.vampyre.cs.utep.edu
it03.vampyre.cs.utep.edu
it02.vampyre.cs.utep.edu
*Note: it02 is down and cannot be used for MPI at the moment. Additionally, only two of the three remaining nodes are active because of routing problems caused by it02 being down.

MPI on Itanium2 Cluster
Program Compilation:
Use mpicc with -o to compile the program.
Example: %mpicc HelloWorldMPI.c -o hello

MPI on Itanium2 Cluster
Program Execution - 1
Use mpirun to execute your program.
Some mpirun options:
- -machinefile: take the list of possible machines on which to execute from the given file
- -np: specify the number of processors on which to execute
- -nolocal: avoid executing MPI tasks on the local host
Example:
%mpirun -np 2 ./hello
%sabina.vampyre.cs.utep.edu: hello world
%clarimonde.cs.utep.edu: hello world
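As an illustrative sketch (the file name machines.txt is just an example, and assumes the two active nodes from the environment set-up slide), a machine file lists one node per line and is passed to mpirun with -machinefile:

    it01.vampyre.cs.utep.edu
    it03.vampyre.cs.utep.edu

%mpirun -machinefile ./machines.txt -np 2 ./hello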

MPI on Itanium2 Cluster
Program Execution - 2
Example: create one task while logged into host sabina and use host clarimonde to execute the MPI task.
%mpirun -nolocal -np 1 ./hello
%clarimonde.cs.utep.edu: hello world

MPI on Sun Workstation Cluster
Environment Set-up:
Create an .rhosts file with one "hostname username" pair per line. Example:
station10 mitesh
station11 mitesh
.
station20 mitesh
Copy the latest .cshrc file for MPI:
%cp /usr/local/cshrc/.cshrc.mpi .cshrc

MPI on Sun Workstation Cluster
Program Compilation:
Use mpicc with -o to compile the program.
Example: %mpicc HelloWorldMPI.c -o hello

MPI on Sun Workstation Cluster
Program Execution - 1
Use mpirun to execute a program.
Some mpirun options:
- -machinefile: take the list of possible machines on which to execute from the given file. The file lists one node name per line. Example:
  station10
  station11
- -np: specify the number of processors to run on
- -nolocal: avoid executing MPI tasks on the local host
Example:
%mpirun -np 3 ./hello
%station11.: hello world
%station13.: hello world
%station12.: hello world

MPI on Sun Workstation Cluster
Program Execution - 2
Example: create two tasks while logged into station11 and use only station12 and station13 to execute the MPI tasks.
%mpirun -nolocal -np 2 ./hello
%station12.: hello world
%station13.: hello world
Refer to the MPICH web site for the complete list of mpirun options:

Contacts and Web Sites
System Administrators:
- Jose Hernandez for Top Gun and the Sun workstation cluster
- Leopoldo Hernandez for the Itanium2 cluster
System Web Sites:
- Top Gun:
- Itanium2 Cluster: rentProgramming/vampyre.html
MPI Links: shop/mpi/MAIN.html#References

Questions?