
1 Lab System Environment
Paul Kapinos

2 Lab nodes: integrated into the HPC Cluster
OS: Scientific Linux 6.5 (RHEL 6.5 compatible)
Batch system: LSF 9.1 (not used in this lab)
Storage: NetApp filer ($HOME / $WORK); no backup on $WORK
Lustre ($HPCWORK) not available
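You can check where these file systems point on your node; $HOME and $WORK are set by the cluster environment:
$ echo $HOME    # NetApp filer, backed up
$ echo $WORK    # NetApp filer, no backup!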

3 Software Environment
Compilers: Intel 15.0 (and older), GCC 4.9 (and older), Oracle Studio, PGI
MPI: Open MPI, Intel MPI
No InfiniBand! 1 GE only: warnings and 1/20 of usual performance
Defaults: intel/15.0, openmpi/1.6.5
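A quick way to confirm what the default modules actually provide (a sketch; the exact versions on your node may differ):
$ module list      # should show intel/15.0 and openmpi/1.6.5
$ icc --version    # Intel compiler banner
$ gcc --version    # GCC banner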

4 How to login
Frontends: login / SCP. File transfer, then jump to the assigned lab node:
$ ssh [-Y] <user>@<frontend>
$ scp [...]
$ ssh lab5[.rz.rwth-aachen.de]
Frontends:
cluster.rz.RWTH-Aachen.DE
cluster2.rz.RWTH-Aachen.DE
cluster-x.rz.RWTH-Aachen.DE (GUI)
cluster-x2.rz.RWTH-Aachen.DE (GUI)
cluster-linux.rz.RWTH-Aachen.DE
cluster-linux-nehalem.rz.RWTH-Aachen.DE
cluster-linux-xeon.rz.RWTH-Aachen.DE
cluster-linux-tuning.rz.RWTH-Aachen.DE
cluster-copy.rz.RWTH-Aachen.DE ('scp')
cluster-copy2.rz.RWTH-Aachen.DE ('scp')
cluster-x[2] only for GUI-based applications, not for compiling
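Put together, a login for this lab might look like the following (hpclab05 / lab5 are the example assignment from the table on the next slide, and data.tar.gz is a placeholder file name; substitute your own):
$ ssh -Y hpclab05@cluster-x.rz.RWTH-Aachen.DE                # GUI-capable frontend
$ scp data.tar.gz hpclab05@cluster-copy.rz.RWTH-Aachen.DE:   # file transfer via a copy node
$ ssh lab5                                                   # from the frontend, jump to the lab node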

5 Lab Node Assignment
Please use your allocated node only, or agree in advance with the node owner.
Node  Account   Institute
lab1  hpclab01  EONRC
lab2  hpclab02  IGPM
lab3  hpclab03  GHI (AICES)
lab4  hpclab04  ITV
lab5  hpclab05  FZJ, PGI
lab6  hpclab06  CATS
lab7  hpclab07  Physik
lab8  hpclab08  AIA

6 Lab Node Assignment
Please use your allocated node only, or agree in advance with the node owner.

7 Lab nodes
Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Packages (sockets) / cores per package / threads per core: 2 / 18 / 2
Cores / processors (CPUs): 36 / 72
AVX2: 256-bit registers, 2x Fused Multiply-Add (FMA) >> double peak performance cf. previous chips
64 GB RAM
STREAM: >100 GB/s (Triad)
No InfiniBand connection; MPI via 1 GE network still possible, with warnings and 1/20 of usual performance
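The topology and memory figures are easy to verify on the node itself with standard Linux tools:
$ lscpu | grep -E 'Socket|Core|Thread'   # expect 2 sockets, 18 cores/socket, 2 threads/core
$ grep MemTotal /proc/meminfo            # expect about 64 GB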

8 Module System
Many compilers, MPIs and ISV software packages: the module system helps to manage them all.
List loaded / available modules:
$ module list
$ module avail
Load / unload software:
$ module load <modulename>
$ module unload <modulename>
Exchange a module (some modules depend on each other):
$ module switch <oldmodule> <newmodule>
$ module switch intel intel/15.0
Reload all modules (may fix your environment):
$ module reload
Find out in which category a module is:
$ module apropos <modulename>
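For this lab, a typical sequence might be (module names as on these slides; a sketch, your defaults may vary):
$ module list                      # e.g. intel/15.0 and openmpi/1.6.5 loaded by default
$ module switch openmpi intelmpi   # swap Open MPI for Intel MPI
$ module reload                    # rebuild the environment if something looks off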

9 MPI
No InfiniBand connection: MPI runs via the 1 GE network >> warnings and 1/20 of usual performance
Default: Open MPI 1.6.5; e.g. switch to Intel MPI:
$ module switch openmpi intelmpi
The wrapper in $MPIEXEC redirects the processes to 'back end nodes':
by default your processes run on a (random) non-Haswell node
use the '-H' option to start the processes on the favoured node
$ $MPIEXEC -H lab5,lab6 -np 12 MPI_FastTest.exe
Other options of the interactive wrapper:
$ $MPIEXEC -help | less
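A quick placement check before running a real benchmark (hostname stands in for an MPI program here; a sketch using the wrapper as described above):
$ $MPIEXEC -H lab5 -np 4 hostname   # all four lines should read lab5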

10 Documentation
RWTH Compute Cluster Environment
HPC User's Guide (a bit outdated):
Online documentation (including example scripts):
Man pages for all commands available
In case of errors / problems, let us know:

11 Lab
We provide laptops.
Log in to the laptops with the local "hpclab" account (your own PC pool accounts might also work).
Use X-Win32 to log in to the cluster (use "hpclab0Z" or your own account).
Log in to the labZ node (use the "hpclab0Z" account).
Feel free to ask questions.
Source: D. Both, Bull GmbH

