Using the Argo Cluster Paul Sexton CS 566 February 6, 2006

2 Logging in The address is argo-new.cc.uic.edu. Use ssh to log in with your ACCC username, e.g. ssh yourusername@argo-new.cc.uic.edu. Pay attention to the login message; this is how the admins will communicate information to you.

3 Environment Variables Add the following to your .bash_profile:
# For the PGI compilers
export PGI_HOME=/usr/common/pgi-6.0-0/linux86/6.0
export LIB=$PGI_HOME/lib:$PGI_HOME/liblf:/lib:/usr/lib
# For the MPI libraries
export MPICH2_HOME=/usr/common/mpich
export LD_LIBRARY_PATH=$MPICH2_HOME/lib:$LD_LIBRARY_PATH
export PATH=$MPICH2_HOME/bin:$PATH
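A quick way to sanity-check the setup (assuming the paths above exist on argo) is to reload the profile and confirm that the MPICH wrappers are on your PATH:

    source ~/.bash_profile
    echo $MPICH2_HOME        # should print /usr/common/mpich
    which mpicc              # should resolve under $MPICH2_HOME/bin
    mpicc -show              # prints the underlying compiler command MPICH will invoke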

4 Compiling with GCC MPICH provides wrapper scripts for C, C++, and Fortran: use mpicc in place of gcc, mpicxx in place of g++, and mpif77 in place of g77.
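For example, with a hypothetical source file hello.c (or hello.cpp / hello.f), the compile lines would look like:

    mpicc  -O2 -o hello hello.c      # C
    mpicxx -O2 -o hello hello.cpp    # C++
    mpif77 -O2 -o hello hello.f      # Fortran 77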

5 Compiling with PGI Use pgcc in place of cc/gcc, pgCC in place of CC/g++, and pgf77 in place of f77/g77. No wrapper scripts are provided, so you need to add the include and lib paths manually:

6 Compiling with PGI (cont.) For C or C++, add the following to your compiler flags:
  -I$MPICH2_HOME/include
  -L$MPICH2_HOME/lib
  -lmpich
For Fortran, also add:
  -lfmpich
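Putting those flags together (hello.c and hello.f are hypothetical file names), a PGI compile line would look roughly like this; note that the Fortran case links -lfmpich in addition to -lmpich:

    pgcc  -I$MPICH2_HOME/include -L$MPICH2_HOME/lib -o hello hello.c -lmpich
    pgf77 -I$MPICH2_HOME/include -L$MPICH2_HOME/lib -o hello hello.f -lfmpich -lmpich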

7 Running an MPI program Normally, to run an MPI program, you’d use the mpirun/mpiexec command. Ex: mpirun -np 4 myprogram. Don’t do this on argo.

8 Running an MPI program on Argo Edit and compile your code on the master node, and run it on the slave nodes using PBS / Torque. The important commands are qsub, qdel, qstat, and qnodes.

9 qsub - submit a job qsub [options] script_file script_file is a shell script that gives the path to your executable and the command to run it. Do not call qsub on your program directly! qsub returns a status message giving you the job id. Ex: “3208.argo-new.cc.uic.edu”

10 A qsub script For example, imagine your program is called “foo.exe”. The contents of the script file would be:
  mpiexec -n 4 /path/to/foo/foo.exe
where “/path/to/foo/” is the path to the executable’s directory.
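A slightly fuller sketch of such a script, assuming the standard Torque #PBS directives and the $PBS_O_WORKDIR variable behave on argo as they do on other PBS clusters (the slides don’t confirm this):

    #!/bin/bash
    #PBS -N foo_job                    # job name (standard Torque directive, assumed available)
    #PBS -l nodes=4                    # request 4 nodes to match mpiexec -n 4
    cd $PBS_O_WORKDIR                  # change to the directory the job was submitted from
    mpiexec -n 4 /path/to/foo/foo.exe

Submit it with qsub my_script, or with the -l nodes options shown on slide 15.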

11 qdel - remove a job qdel job_id job_id is the number qsub returned. Ex: qdel 3208

12 qstat - queue status
qstat job_id: check on the status of a job when you know the number. Ex: qstat 3208
qstat -u userid: check on the status of all of your jobs. Ex: qstat -u psexto2
qstat: obtain the status of all jobs running on argo.

13 qnodes - node status Shows the availability of all 64 nodes, and how many jobs they’re currently running. No arguments. No man page entry for some reason.

14 Viewing the output Standard output and standard error are both redirected to files named [script name].{e,o}[job id]. Ex: stderr in my_script.e3208, stdout in my_script.o3208. The .e file should usually be empty.
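Once the job has finished, the files can be read with the usual tools (my_script and job id 3208 are the example names from this slide):

    cat my_script.o3208     # program output (stdout)
    cat my_script.e3208     # error messages (stderr); usually empty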

15 More on qsub There are numerous options to qsub, but the most common / useful one is "-l nodes":
To set the number of nodes:
  qsub -l nodes=4 my_script
To pick the nodes manually:
  qsub -l nodes=argo9-1+argo9-2+argo9-3 my_script
To run only on a subset of nodes:
  qsub -l nodes=6:cpu.xeon my_script
  qsub -l nodes=8:cpu.amd my_script
  qsub -l nodes=8:cpu.smp my_script
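As a sketch of a complete session using the commands above (hello.c, my_script, and the job id are hypothetical placeholders):

    mpicc -o hello hello.c                       # compile on the master node
    echo "mpiexec -n 4 $PWD/hello" > my_script   # create the qsub script
    qsub -l nodes=4 my_script                    # returns e.g. 3208.argo-new.cc.uic.edu
    qstat -u yourusername                        # check on the job
    cat my_script.o3208                          # view the output once it completes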

16 And that’s about all there is… Final Notes:
The Intel compilers are also available.
The PGI tools include a debugger, PGDBG, and a profiler, PGPROF. These are highly recommended for debugging and profiling parallel code.
Limit yourself to 3 jobs at a time.
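A minimal sketch of using PGDBG, assuming the PGI tools are on your PATH and debugging is done interactively on the master node (the exact MPI debugging workflow is not covered in these slides):

    pgcc -g -I$MPICH2_HOME/include -L$MPICH2_HOME/lib -o foo.exe foo.c -lmpich
    pgdbg ./foo.exe        # launch the PGI debugger on the executable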