HPC usage and software packages

HPC usage and software packages
Vladimir Slavnić, Research Assistant, Institute of Physics Belgrade
VI-SEEM Life Sciences Regional Training – Belgrade, Serbia, 19-21 Oct 2016
The VI-SEEM project initiative is co-funded by the European Commission under the H2020 Research Infrastructures contract no. 675121.

Agenda
- VI-SEEM HPC resources
- PARADOX Cluster
- Access to PARADOX Cluster
- PARADOX Cluster software stack
- Environment modules
- Job management with examples

VI-SEEM HPC Resources
- High Performance Computing (HPC) vs High Throughput Computing (HTC)
- The VI-SEEM HPC (High Performance Computing) service provides access to clusters with low-latency interconnects and to supercomputers
- 15 HPC systems
- Most of them are based on CPUs with the x86_64 instruction set and equipped with accelerator cards; some are BlueGene/P systems, and one is based on Cell processors
- Delivers 18.8 million CPU hours, 371.6 million GPU hours, 16.0 million Xeon Phi hours, and 5.3 million IBM Cell hours per year
- Gives users the ability to perform complex simulations on state-of-the-art computing hardware

PARADOX Cluster
- Water-cooled racks
- 106 nodes, 1696 CPU cores
- 2 x 8-core Intel Sandy Bridge E5-2670 @ 2.6 GHz and 32 GB of RAM per node
- 106 NVIDIA Tesla M2090 GPUs
- 96 TB of storage space with the Lustre parallel file system
- QDR InfiniBand interconnect (40 Gbps)

Access to PARADOX Cluster
From a local machine within the IPB computer network:
$ ssh username@paradox.ipb.ac.rs
Example:
$ ssh demo001@paradox.ipb.ac.rs
If a graphical environment is needed, use the -X option:
$ ssh username@paradox.ipb.ac.rs -X
The login node paradox.ipb.ac.rs is used for:
- Preparing jobs
- Submitting jobs to the batch system
- Some very lightweight testing
- But not for long-running computations!
File systems:
/home     # Shared between worker nodes; used both for long-term storage and for job submission
/scratch  # Local file system available on each worker node
Secure copy (scp) can be used to transfer data to or from paradox.ipb.ac.rs.
To log out, press Ctrl-d or type exit.
From outside the IPB computer network, first log in to the user interface machine ui.ipb.ac.rs, and then to the login node paradox.ipb.ac.rs.
User guide: http://www.scl.rs/PARADOXClusterUserGuide/
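As a hedged illustration of the transfer and two-hop login steps above (the file names used here are hypothetical):
$ scp input.dat username@paradox.ipb.ac.rs:~/      # copy a local file to the cluster home directory (hypothetical file name)
$ scp username@paradox.ipb.ac.rs:~/results.out .   # copy a result file back to the current local directory (hypothetical file name)
$ ssh username@ui.ipb.ac.rs                        # from outside the IPB network: log in to the user interface machine first
$ ssh username@paradox.ipb.ac.rs                   # then hop to the login node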

PARADOX Cluster software stack [1]
Compilers:
- GNU Compiler Collection
- Intel Compilers
- Portland Group Compilers
Performance tools (profilers):
- Scalasca
- TAU
- Cube
- gprof
- PGI pgprof
- Intel VTune
- Valgrind
Debuggers:
- TotalView Debugger
- GDB
- PGI pgdbg
- Intel Debugger

PARADOX Cluster software stack [2]
Libraries:
- LAPACK
- BLAS
- FFTW
- Intel MKL
- NumPy & SciPy
- Boost
- GSL
- HDF5
- NetCDF
Parallel libraries:
- CUDA
- OpenMPI
Application software:
- CP2K
- NAMD
- Gromacs
- VMD

Environment modules [1]
- A tool that helps users manage their Linux environment
- Allows dynamic modification of a user's environment variables
- Used by HPC centers to provide different versions of software to users
Check the available modules with:
$ module avail
The modules on PARADOX are divided into applications, environment, compilers, libraries, and tools categories.

Environment modules [2]
A module can be loaded by executing:
$ module load module_name
Example:
$ module load intel
$ which icc
$ env
The list of currently loaded modules can be shown by typing:
$ module list
A specific module can be unloaded by calling:
$ module unload module_name
Example:
$ module unload intel
All modules can be unloaded by executing:
$ module purge
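As a minimal sketch of how module commands are typically combined with a batch job on such a system (the intel module follows the example above; the hello.c source file is hypothetical):
#!/bin/bash
#PBS -q standard
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:10:00
cd $PBS_O_WORKDIR
# Start from a clean environment and load only what the job needs
module purge
module load intel
# Compile and run a small test program with the Intel C compiler (hypothetical source file)
icc -O2 hello.c -o hello
./hello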

Job management [1]
The batch system (Torque with the Maui scheduler) on paradox.ipb.ac.rs manages:
- Job submission
- Resource allocation
- Launching of jobs over the cluster
To submit a batch job, the user writes a shell script containing:
- A set of directives describing the resources needed by the job (lines beginning with #PBS)
- The lines necessary to execute the user's code
The job is then launched by submitting the script to the batch system with the qsub command.
The job enters a batch queue; when resources become available, it is launched on the allocated nodes.
The batch system provides monitoring of all submitted jobs.
The standard queue is available for users' job submissions.

Job management [2]
Frequently used PBS commands are summarized below.
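The summary below is a sketch of the Torque/Maui commands most commonly used for these tasks; the exact set shown on the original slide may differ:
$ qsub job.pbs          # submit a job script to the batch system
$ qstat                 # show the status of queued and running jobs
$ qstat -u <user_name>  # show only the jobs of a given user
$ qstat -f <JOB_ID>     # show detailed information about one job
$ qdel <JOB_ID>         # cancel a queued or running job
$ qhold <JOB_ID>        # place a job on hold
$ qrls <JOB_ID>         # release a held job
$ pbsnodes -a           # show the state of the compute nodes
$ showq                 # Maui scheduler view of the queue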

Serial job example [1]
Sample sequential job PBS script (job.pbs):
#!/bin/bash
#PBS -q standard
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:10:00
#PBS -e ${PBS_JOBID}.err
#PBS -o ${PBS_JOBID}.out
cd $PBS_O_WORKDIR
chmod +x job.sh
./job.sh
Simple job bash script (job.sh):
#!/bin/bash
/bin/date
/bin/hostname
pwd
sleep 30
/usr/bin/whoami
This example can be downloaded from: http://wiki.ipb.ac.rs/images/d/db/Serial.tgz
Download the tgz archive with the example files, extract it, and enter the serial folder:
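A hedged sketch of those download and extraction steps (the folder name serial inside the archive is assumed from the slide text):
$ wget http://wiki.ipb.ac.rs/images/d/db/Serial.tgz   # download the example archive
$ tar xzf Serial.tgz                                  # extract it
$ cd serial                                           # enter the example folder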

Serial job example [2]
Submit the job for execution by issuing the following command:
$ qsub job.pbs
The qsub command will return a result of the form:
<JOB_ID>.paradox.ipb.ac.rs
where <JOB_ID> is a unique integer used to identify the job.
To check the status of your job, use the following command:
$ qstat <JOB_ID>
Alternatively, the user can check the status of all submitted jobs using the following syntax of the qstat command:
$ qstat -u <user_name>
To get detailed information about a job, the user can use the following command:
$ qstat -f <JOB_ID>

Serial job example [3]
When the job finishes, the files to which its standard output and standard error were redirected appear in the user's work directory.
To cancel a running job, execute the following command:
$ qdel <JOB_ID>

NAMD job example
Go to: http://wiki.vi-seem.eu/index.php/PARADOX-IV_NAMD_and_VMD_usage
- Download the apoa1 benchmark input
- Download and prepare the PBS job scripts: http://wiki.ipb.ac.rs/images/6/60/Example_namd_job_scripts.tgz
Submit the NAMD job:
CUDA with InfiniBand:
$ qsub example_gpu.pbs
InfiniBand via OpenFabrics OFED:
$ qsub example_non_gpu.pbs
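A hedged sketch of the preparation and submission steps, assuming the apoa1 benchmark input has already been downloaded as described on the wiki page:
$ wget http://wiki.ipb.ac.rs/images/6/60/Example_namd_job_scripts.tgz   # fetch the example PBS job scripts
$ tar xzf Example_namd_job_scripts.tgz                                  # unpack them
$ qsub example_gpu.pbs        # submit the GPU variant (CUDA with InfiniBand)
$ qsub example_non_gpu.pbs    # or submit the non-GPU variant (InfiniBand via OpenFabrics OFED)
$ qstat -u <user_name>        # monitor the job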