Roadrunner Supercluster
University of New Mexico -- National Computational Science Alliance
Paul Alsing
Cactus Workshop, 23 September 1999

Slide 2: Alliance/UNM Roadrunner SuperCluster

Slide 3: Alliance/UNM Roadrunner SuperCluster
Strategic collaborations with:
- Alta Technologies
- Intel Corp.
Node configuration:
- Dual 450 MHz Intel Pentium II processors
- 512 KB cache, 512 MB ECC SDRAM
- 6.4 GB IDE hard drive
- Fast Ethernet and Myrinet NICs

Slide 4: Alliance/UNM Roadrunner Interconnection Networks
- Control: 72-port Fast Ethernet Foundry switch with 2 Gigabit Ethernet uplinks
- Data: four Myrinet octal 8-port switches
- Diagnostic: chained serial ports

Slide 5: A Peek Inside Roadrunner

Slide 6: Roadrunner System Software
- Red Hat Linux 5.2 (6.0)
- SMP Linux kernel
- MPI (Argonne's MPICH 1.1.2)
- Portland Group Compiler Suite
- Myricom GM drivers (1.086) and MPICH-GM
- Portable Batch System (PBS)

Slide 7: Portland Group Compiler Suite
- HPF: parallel Fortran (HPF) for clusters
- F90: parallel SMP Fortran 90
- F77: parallel SMP Fortran 77
- CC: parallel SMP C/C++
- DBG: symbolic debugger
- PROF: performance profiler

Slide 8: Roadrunner System Libraries
- BLAS
- LAPACK
- ScaLAPACK
- PETSc
- FFTW
- Cactus
- Globus Grid Infrastructure

Slide 9: Parallel Job Scheduling
- Node-based resource allocation
- Job monitoring and auditing
- Resource reservations

Slide 10: Computational Grid
- National Technology Grid
- Globus infrastructure:
  - Authentication
  - Security
  - Heterogeneous environments
  - Distributed applications
  - Resource monitoring

Slide 11: For More Information
- Contact information
- To apply for an account

Slide 12: Easy to Use
  rr% ssh -l username rr.alliance.unm.edu
  rr% mpicc -o prog helloWorld.c
  rr% qsub -I -l nodes=64
  r021% mpirun prog
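
The helloWorld.c compiled on slide 12 is not reproduced in the transcript; a minimal MPI hello-world program along the following lines (only the file name comes from the slide, the rest is an assumed sketch) would build with the mpicc line above and run with mpirun:

  /* helloWorld.c -- minimal MPI example (sketch; the original source is not in the slides) */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, size;

      MPI_Init(&argc, &argv);                  /* start up MPI */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
      MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of MPI processes */

      printf("Hello, world from rank %d of %d\n", rank, size);

      MPI_Finalize();
      return 0;
  }

Launched with mpirun inside an interactive PBS allocation as on the slide, each MPI process prints one line with its rank.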

Slide 13: Job Monitoring with PBS

Slide 14: Roadrunner Performance

Slide 15: Roadrunner Ping-Pong Time

Slide 16: Roadrunner Bandwidth
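
Slides 15 and 16 showed measured ping-pong latency and bandwidth curves that are not reproduced in this transcript. Numbers of that kind are typically collected with a two-process MPI ping-pong loop; the following is a minimal sketch (message sizes, repetition count, and output format are assumptions, not the benchmark actually run on Roadrunner):

  /* pingpong.c -- sketch of a two-process MPI ping-pong timing loop
   * (illustrative only; not the code behind the Roadrunner plots) */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define REPS 1000              /* round trips per message size (assumed) */
  #define MAX_BYTES (1 << 20)    /* largest message: 1 MB (assumed) */

  int main(int argc, char *argv[])
  {
      int rank, size, bytes, i;
      MPI_Status status;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (size < 2) {
          if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
          MPI_Finalize();
          return 1;
      }

      for (bytes = 1; bytes <= MAX_BYTES; bytes *= 2) {
          char *buf = (char *) malloc(bytes);
          double t0, t;

          MPI_Barrier(MPI_COMM_WORLD);
          t0 = MPI_Wtime();
          for (i = 0; i < REPS; i++) {
              if (rank == 0) {                  /* ping */
                  MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                  MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
              } else if (rank == 1) {           /* pong */
                  MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
                  MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
              }
          }
          t = (MPI_Wtime() - t0) / REPS;        /* average round-trip time */

          if (rank == 0)
              printf("%8d bytes  %10.2f us round trip  %8.2f MB/s\n",
                     bytes, t * 1.0e6, 2.0 * bytes / t / 1.0e6);
          free(buf);
      }

      MPI_Finalize();
      return 0;
  }

Running two processes on one node versus two nodes, or over Fast Ethernet versus Myrinet (via the MPIHOME settings on slide 24), is how the interconnects would be compared.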

Slide 17: Applications on RR
- MILC QCD (Bob Sugar, Steve Gottlieb): a body of high-performance research software for SU(3) and SU(2) lattice gauge theory on several different (MIMD) parallel computers in current use
- ARPI3D (Dan Weber): 3-D numerical weather prediction model that simulates the rise of a moist warm bubble in a standard atmosphere
- AS-PCG (Danesh Tafti): 2-D Navier-Stokes solver
- BEAVIS (Marc Ingber, Andrea Mammoli): 1994 Gordon Bell Prize-winning dynamic simulation code for particle-laden viscous suspensions

Slide 18: Applications: CACTUS
3D numerical relativity toolkit for computational astrophysics (courtesy of Gabrielle Allen and Ed Seidel). Roadrunner performance under the Cactus application benchmark shows near-perfect scalability.

Slide 19: CACTUS Performance (graphs courtesy of O. Wehrens)

Slide 20: CACTUS Scaling (graphs courtesy of O. Wehrens)

Slide 21: CACTUS: the evolution of a pure gravitational wave
A subcritical Brill wave (amplitude = 4.5), showing the Newman-Penrose quantity as volume-rendered 'glowing clouds'. The lapse function is shown as a height field in the bottom part of the picture. (Courtesy of Werner Benger.)

Slide 22: TeraScale Computing: "A SuperCluster in every lab"
- Efficient use of SMP nodes
- Scalable interconnection networks
- High-performance I/O
- Advanced programming models for hybrid (SMP and Grid-based) clusters

Slide 23: Exercises
Log in to Roadrunner:
  % ssh roadrunner.alliance.unm.edu -l cactusXX
Request an interactive session:
  % qsub -I -l nodes=n
Create a Myrinet node-configuration file:
  % gmpiconf $PBS_NODEFILE     (to use 1 CPU per node)
  % gmpiconf2 $PBS_NODEFILE    (to use 2 CPUs per node)
Run the job:
  % mpirun cactus_wave wavetoyf90.par             (on 1 CPU per node)
  % mpirun -np 2*n cactus_wave wavetoyf90.par     (on 2 CPUs per node)

Slide 24: Compiling Cactus: WaveToy
Log in to Roadrunner:
  % ssh roadrunner.alliance.unm.edu -l cactusXX
In .cshrc (season to taste):
  #setenv MPIHOME /usr/parallel/mpich-eth.pgi    # ethernet / Portland Group
  #setenv MPIHOME /usr/parallel/mpich-eth.gnu    # ethernet / GNU
  setenv MPIHOME /usr/parallel/mpich-gm.pgi      # myrinet / Portland Group
  #setenv MPIHOME /usr/parallel/mpich-gm.gnu     # myrinet / GNU
If you modify .cshrc, make sure to:
  % source .cshrc; rehash
  % echo $MPIHOME    # should read /usr/parallel/mpich-gm.pgi

Slide 25: Compiling Cactus: WaveToy
Create the WaveToy configuration:
  % gmake wave F90=pgf90 MPI=MPICH MPICH_DIR=$MPIHOME
Compile WaveToy:
  % gmake wave
  % cd ~/Cactus/exe
Copy all .par files into this directory (not necessary):
  % foreach file (`find ~/Cactus -name "*.par" -print`)
  foreach> cp $file .
  foreach> end

Slide 26: Running WaveToy on RoadRunner
Run wave interactively on RoadRunner:
- PBS job scheduler: request interactive nodes
    % qsub -I -l nodes=4    (note: -I = interactive)
  Note: the prompt changes from a front-end node name to a compute-node name.
  Note: you should compile on the front end and run on the compute nodes (open 2 windows).
- PBS job scheduler: set up a node-configuration file
    % gmpiconf $PBS_NODEFILE
  Note: cat ~/.gmpi/conf-xxxx.rr will show the specific node names.
- Run the job from ~/Cactus/exe:
    % mpirun cactus_wave wavetoyf90.par
    % mpirun -np 2 cactus_wave wavetoyf90.par

Slide 27: Running WaveToy on RoadRunner
Run wave in batch on RoadRunner:
- PBS script (call it, e.g., wave.pbs):
    #PBS -l nodes=4
    # pbs script for wavetoy: 1 processor per node
    gmpiconf $PBS_NODEFILE
    mpirun ~/Cactus/exe/cactus_wave wavetoyf90.par    # (use full path)
- Submit the batch PBS job:
    % qsub wave.pbs
    (PBS responds with your job_id #)
    % qstat -a                   (check the status of your job)
    % qstat -n                   (check status and see which nodes you are on)
    % qdel                       (remove a job from the queue)
    % dsh killall cactus_wave    (if things hang, mess up, etc.)