Hodor HPC Cluster

Hodor HPC Cluster
[Cluster diagram: LON, MNG, HPN, Head Node, Compute Nodes, Parallel Storage]

Hodor HPC Cluster: Head Node
- 2 CPUs, 8 cores (Sandy Bridge)
- 64 GB RAM
- Module files
- Open MPI, Intel MPI
- GCC, Intel compilers
- SLURM job scheduler

Hodor HPC Cluster: Head Node
- Must request an account
- Authentication: Campus Connection
- File transfer: SCP, SFTP, Globus (see the example below)
- Compiling and very short test runs only
- Do NOT use mpirun / mpiexec on the head node
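As a rough sketch, transferring files to the head node with scp or sftp might look like the following; the hostname hodor.und.edu, the username, and the filename are assumptions, so substitute the login address and account name you were given:

    # Copy a local file to your home directory on the cluster (hostname assumed)
    scp myinput.dat user.name@hodor.und.edu:/home/user.name/

    # Or open an interactive sftp session and upload the file
    sftp user.name@hodor.und.edu
    sftp> put myinput.dat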

Hodor HPC Cluster: Head Node
- SSH keys are allowed
- You must append new keys; replacing the existing keys will break your account
- You must be on campus to connect, or use the campus VPN
- Repeated incorrect password attempts will block user access
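A minimal sketch of adding a key without replacing the ones already installed, assuming a standard OpenSSH setup (the key filename and hostname are hypothetical):

    # Generate a key pair locally if you do not already have one
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_hodor

    # APPEND (>>) the public key to authorized_keys on the cluster;
    # overwriting the file (>) would discard existing keys and break access
    cat ~/.ssh/id_rsa_hodor.pub | ssh user.name@hodor.und.edu 'cat >> ~/.ssh/authorized_keys'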

Hodor HPC Cluster: Compute Nodes (x 32)
- 2 CPUs, 4 cores (Sandy Bridge)
- 16 Nvidia Tesla K20m GPUs
- 16 Xeon Phi 31P1 co-processors
- 64 GB RAM
- Module files
- Use "srun" to get a Bash shell on a compute node

Shared File Space
Shared:
- /home/user.name
- /share/apps
- /cm/shared
Not shared:
- /tmp
Heavy I/O should be done in /tmp (which is not shared), and results must be copied back to shared space before the end of the job; see the sketch below.
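As a sketch of that workflow, a job script might stage its working files in node-local /tmp and copy the results back before it exits; the program name, file names, and SBATCH settings here are placeholders:

    #!/bin/bash
    #SBATCH -N1
    #SBATCH -t 00:30:00

    # Do heavy I/O in node-local /tmp
    WORKDIR=/tmp/$USER.$SLURM_JOB_ID
    mkdir -p $WORKDIR
    cd $WORKDIR

    # Copy input from the shared submit directory, run, then copy results back
    cp $SLURM_SUBMIT_DIR/input.dat .
    $SLURM_SUBMIT_DIR/my_program input.dat > output.dat   # hypothetical program

    # /tmp is not shared, so copy results back before the job ends
    cp output.dat $SLURM_SUBMIT_DIR/
    rm -rf $WORKDIR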

SLURM Queue Information Commands
- squeue
- sinfo
- qstat -a
- qstat -n
- qstat -f
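Typical usage might look like this (the username is a placeholder):

    squeue -u user.name    # list your own jobs in the queue
    squeue                 # list all jobs
    sinfo                  # show partitions and node states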

SLURM Job Submission Commands
- srun - for interactive jobs (use of "screen" is suggested; see the sketch below)
- sbatch - for batch runs
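The "screen" note presumably refers to running interactive jobs inside a terminal multiplexer on the head node so they survive a dropped connection; a minimal sketch:

    screen -S myjob           # start a named screen session (name is arbitrary)
    srun -N1 -n1 --pty bash   # request an interactive shell on a compute node
    # Detach with Ctrl-a d; reattach later with: screen -r myjob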

SLURM Job Submission Examples
- Example scripts: /share/apps/slurm/examples
- sbatch somescript.sh
- srun -N1 -n1 --pty bash
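For instance, you might copy one of the provided example scripts and adapt it; the example filename below is hypothetical:

    ls /share/apps/slurm/examples                              # browse the provided examples
    cp /share/apps/slurm/examples/some_example.sh ~/myjob.sh   # hypothetical filename
    sbatch ~/myjob.sh                                          # submit your adapted copy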

Important sbatch arguments
#SBATCH -N8
#SBATCH --ntasks-per-node=8
#SBATCH -t 00:10:00
#SBATCH -o ./out_test.txt
#SBATCH -e ./err_test.txt
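Putting those arguments together, a minimal batch script might look like the following; the module name and the MPI executable are assumptions, and 8 nodes with 8 tasks per node simply mirrors the values above:

    #!/bin/bash
    #SBATCH -N8                      # 8 nodes
    #SBATCH --ntasks-per-node=8      # 8 tasks per node
    #SBATCH -t 00:10:00              # 10-minute wall-clock limit
    #SBATCH -o ./out_test.txt        # standard output file
    #SBATCH -e ./err_test.txt        # standard error file

    module load openmpi              # module name assumed; check "module avail"
    srun ./my_mpi_program            # placeholder executable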

Important sbatch environment variables
- $SLURM_SUBMIT_DIR - the directory from which sbatch was invoked
- $SLURM_NTASKS - total number of tasks in your job
- $SLURM_JOB_ID - ID number identifying your SLURM job
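A short sketch of how these variables might be used inside a batch script:

    #!/bin/bash
    #SBATCH -N1
    #SBATCH --ntasks-per-node=8

    cd $SLURM_SUBMIT_DIR     # start in the directory where sbatch was invoked
    echo "Job $SLURM_JOB_ID is running $SLURM_NTASKS tasks" > job_info.txt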

SLURM Job Delete Commands
- scancel ####
- qdel ####
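For example, to look up a job's ID and cancel it (the job ID shown is made up):

    squeue -u user.name    # find the job ID in the JOBID column
    scancel 12345          # cancel job 12345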

Modulefile Commands
- module avail
- module load
- module list
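Typical usage, where the module name is an assumption (run "module avail" to see what is actually installed on Hodor):

    module avail           # list all available modules
    module load intel      # load a module (name assumed)
    module list            # show currently loaded modules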

Website: www.crc.und.edu
Hodor help info: Tutorials & Desktop Software > Linux HPC Cluster (Hodor)
Note: Hodor does not have a job submission queue called "test".

Get a Hodor Account
Send email to: Aaron.Bergstrom@und.edu