Helix - HPC/SLURM Tutorial


1 Helix - HPC/SLURM Tutorial
14/09/2017

2 Tutorials
10-15 minutes presentation
20-25 minutes examples
20-30 minutes questions
Open to suggestions on format, time, and topics

3 High Performance Computing
Cluster: a collection of nodes that can run computing jobs in parallel
Node: an individual server with RAM, CPU, and networking interfaces; also known as a compute node
RAM: Random Access Memory
CPU: Central Processing Unit; executes the jobs
Core: a CPU can have multiple cores, so it can run multiple processes
Threads: processes that run at the same time
Storage: hard-drive space to store data
Partition: similar to a queue
Helix: our cluster
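
As a rough sketch of how these concepts map onto a Slurm job request (the option names are standard Slurm; the values are placeholders, not site recommendations):

#!/bin/bash
#SBATCH --partition=main        # partition: the queue the job is submitted to
#SBATCH --nodes=1               # nodes: how many compute nodes to use
#SBATCH --ntasks=4              # processes to run
#SBATCH --cpus-per-task=1       # cores per process
#SBATCH --mem-per-cpu=4096      # RAM per core, in MB
#SBATCH --time=0-01:00:00       # wall time, D-hh:mm:ss

srun ./my_program               # my_program is a hypothetical executable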

4 Helix: Our Cluster
Comprises:
8 x 32-core nodes with 128 GB RAM each
175 TB of disk storage

Groups [Project ID, Name, Project leaders]:
SG0001   Systems Biology Lab at Centre for Systems Genomics                    Edmund Crampin
SG0004   Centre for Systems Genomics Cluster - Leslie Group                    Allan Motyer, Ashley Farlow, Damjan Vukcevic, Stephen Leslie
SG0005   Statistical genomics, University of Melbourne                         David Balding
SG0007   Centre for System Genomics Cluster                                    Kim-Anh Le Cao
SG0009   COGENT                                                                Bobbie Shaban
SGA0001  Systems Genomics Associate Member                                     Dr Sarah Dunstan, Andrew Siebel
SGN0001  Project space for Oxford Nanopore data generated by Systems Genomics
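
A hedged way to check which project accounts your user can submit under (sacctmgr is standard Slurm; the exact columns shown may differ by site):

# List the Slurm accounts (projects) associated with your user
sacctmgr show associations user=$USER format=Account%12,User%12,Partition%12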

5 List of partitions
[bshaban@snowy-sg1 ~]$ sinfo
PARTITION    AVAIL  TIMELIMIT    NODES  STATE  NODELIST
main*        up     30-00:00:00         mix    snowy[ ,005,012]
main*        up     30-00:00:00         alloc  snowy[001,004,006, , ]
main*        up     30-00:00:00         idle   snowy007
sysgen       up     1-00:00:00          mix    snowy035
sysgen       up     1-00:00:00          idle   snowy[ , ]
sysgen-long  up     30-00:00:00         mix    snowy[ ,043]
sysgen-long  up     30-00:00:00         idle   snowy042

6 List of partitions, cont.
The systems genomics nodes are split into two sub-partitions (4 x 128 GB + 2 x 512 GB nodes each). Half of the sysgen nodes are always available for jobs under 24 hours' duration. Any job longer than 24 hours is only eligible for a sub-partition and the main partition. Any user (or project) can run at most 256 cores' worth of jobs at a time (the global default), but the maximum number of cores a single job can use on a systems genomics partition is 192 (6 x 32). Any larger job will only run on the main partition.
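
As an illustrative sketch (partition names come from the sinfo output above; the resource values and the script name job.sh are placeholders), a short job can target sysgen, while a longer one must use sysgen-long or main:

# Job under 24 hours: eligible for the sysgen partition
sbatch --partition=sysgen --time=0-12:00:00 --ntasks=32 job.sh

# Job over 24 hours: must go to sysgen-long or main
sbatch --partition=sysgen-long --time=5-00:00:00 --ntasks=32 job.sh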

7 Head node: snowy-sg1
For job submission only! Can use srun or sbatch.
Only has 34 GB of memory, which can easily be consumed by a job.
If the head node is oversubscribed, no jobs can be submitted (it is not monitored).
Submit via sbatch: the job is scheduled on a compute node and runs irrespective of the head node.
Jobs started with srun or sinteractive will die if the head node crashes.
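
A minimal sketch of a batch submission (the script name, job name, and resource values are hypothetical):

#!/bin/bash
# hello.sh: a hypothetical batch script
#SBATCH --job-name=hello
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=2048
#SBATCH --time=0-00:10:00

echo "Running on $(hostname)"

Submit it and check its state from the head node; the job itself runs on a compute node:

sbatch hello.sh
squeue -u $USER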

8 Resource limitations per node
[bshaban@snowy-sg1 ~]$ sinfo --format "%n %20E %12U %19H %6t %a %b %C %m %e"
HOSTNAMES  REASON  USER     TIMESTAMP  STATE  AVAIL  ACTIVE_FEATURES  CPUS(A/I/O/T)  MEMORY  FREE_MEM
snowy002   none    Unknown  Unknown    mix    up     (null)           31/1/0/32
snowy003   none    Unknown  Unknown    mix    up     (null)           8/24/0/32
snowy005   none    Unknown  Unknown    mix    up     (null)           5/27/0/32
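
A hedged example of narrowing that output to nodes with spare capacity (these are standard sinfo options; the format string is a simplified variant of the one above):

# Show per-node allocated/idle/other/total CPUs (%C) and free memory in MB (%e)
# for nodes that are idle or only partly allocated
sinfo --states=idle,mix --format "%n %C %e"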

9 Resource limitations per user
[bshaban@snowy-sg1 ~]$ mylimits
The following limits apply to your account on snowy:
* Max number of idle jobs in queue (see note)
* Default memory per core in MB
* Max memory per core in MB (use --mem-per-cpu=N)
* Default wall time in mins
* Max wall time in hours (use --time=D-hh:mm:ss)
* Default number of CPUs
* Max number of CPUs per job (use --ntasks=N)
* Max number of jobs running at one time
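
A hedged illustration of requesting resources explicitly within these limits (the values are placeholders, not the actual site limits; my_program is hypothetical):

# Request 8 tasks, 4 GB per core, and a 12-hour wall time
srun --ntasks=8 --mem-per-cpu=4096 --time=0-12:00:00 ./my_program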

10 Resource limitations per user, cont.
[bshaban@snowy-sg1 ~]$ mydisk
The following is how much disk space your current projects have:
Fileset  Size    Used    Avail   Use%
SG       G       125G    20355G  1%
SG       G       10285G  15615G  40%
SG       G       4278G   5962G   42%
SGN      G       132G    9008G   1%
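
A hedged way to find what is consuming space inside a project area (the path /project/SG0001 is hypothetical; substitute your project's fileset):

# Summarise usage per top-level directory in a project area
du -sh /project/SG0001/*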

11 End
Start tutorial: https://slurm.schedmd.com/tutorials.html

Workshop: September 26, 2017, pm, Melbourne Bioinformatics Boardroom, 187 Grattan Street, Carlton, VIC 3053, Australia
Introduction to High Performance Computing
Using High Performance Computing (HPC) resources such as Melbourne Bioinformatics in an effective and efficient manner is key to modern research. This workshop will introduce you to HPC environments and assist you to get on with your research.
Also runs October 3rd.

