New Compute-Cluster at CMS

1 New Compute-Cluster at CMS
Daniela-Maria Pusinelli, Jahreskolloquium

2 Agenda
Construction and Management
High Availability
How to Use
Software
Batch System

3 Construction and Management/1
Location: Grimm-Zentrum, server room, water-cooled cabinet
Hardware: 8 Supermicro Twin2 servers, 2 U each
- each server holds 4 nodes with 2 Intel Nehalem CPUs
- each CPU has 4 cores; Infiniband QDR and 48 GB memory per node
- in total 32 nodes, 256 cores, 1.5 TB memory

4 Construction and Management/2
- two master nodes are responsible for login to the cluster address
- they manage various services (DHCP, NFS, LDAP, mail)
- they start and stop all nodes when necessary; only the masters communicate with the University network
- they provide development software (compilers, MPI) as well as application software
- they organize the batch service

5 Construction and Management/3
- two servers configured as an HA cluster provide the parallel file system (Lustre FS) for large temporary data: lulu:/work
- all nodes are equipped with a fast Infiniband network (QDR), which carries the node communication during parallel computations
- the master and file server nodes are monitored by the central Nagios of CMS
- the compute servers are monitored locally with Ganglia

6 Construction and Management/4
[Diagram: cluster topology. A 36-port Infiniband switch connects the login server, the failover login server (holding the /home data), and the compute nodes in groups of four (node1 to node4, node5 to node8, ..., node25 to node28, node29 to node32). The /work file servers (1 MDT, 2 OSTs and 3 OSTs) provide /work.]

7 High Availability/1
- all servers are equipped with redundant power supplies
- master servers: the Red Hat Resource Group Manager (rgmanager) configures and monitors the services on both master nodes; if one master fails, the other takes over the services of the failed one
- file servers: the servers for the Lustre FS /work are also configured redundantly; if one fails, the other takes over the MDS (Meta Data Server) as well as the OSTs (Object Storage Targets), possible because all data are stored on a virtual SAN disk

8 How to Use/1
- all colleagues who do not have sufficient resources at their institute are allowed to use the system
- an account at CMS is necessary; the account will be enabled for the service
- login on the master with SSH from the University network; from the master node you may log in to the nodes via ssh without further authentication (see the sketch below)
Data storage:
/home/<institut>/<account> unique user home directory
/afs/<account> OpenAFS home directory
/work/<account> working directory
/perm/<account> permanent directory
/scratch/<account> node-local working directory
- migration of data from the old cluster via the /home directory or with scp
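A minimal sketch of login and data migration, assuming <master> stands for the cluster login address (not named in this transcript) and oldcluster for the old cluster's host name:

ssh <account>@<master>        # login from the University network
ssh node1                     # from the master, no further authentication
scp -r oldcluster:/home/<account>/data /work/<account>/   # migrate data with scp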

9 How to Use/2
Data backup:
- data in /home: daily to TSM, max. 3 older versions
- data in /afs: daily, for the whole semester
- data in /perm: daily to disk, max. 3 older versions
- data in /work: no backup
- data in /scratch: no backup
- /work and /scratch are checked against a high-water mark of 85%; data older than 1 month will be removed
- important data should be copied to /perm or to a home directory (see the sketch below)
- a parallel SSH is installed for calling commands on all nodes:
pssh --help
pssh -P -h /usr/localshare/nodes/all w | grep load

10 Software/1
System software:
- operating system: CentOS 5 (= RedHat EL 5)
Development software:
- GNU compilers
- Intel Compiler Suite with MKL
- Portland Group compilers
- OpenMPI, MPICH
Application software:
- chemistry: Gaussian, Turbomole, ORCA
- Matlab, Maple
- further software is possible

11 Software/2
- all software versions must be loaded with the module command:
module avail -> all available modules (software)
- Development:
module load intel-cluster -> loads the Intel compilers
module load openmpi-14-intel -> loads OpenMPI
...
- Application software:
module load g09-a -> loads Gaussian09 A02
module load orca -> loads ORCA 2.7
module load matlab-10a -> loads Matlab R2010a

12 Batch Service/1
- open-source product: Sun Grid Engine (SGE)
- the cell clous is installed; on one master node the qmaster daemon runs, the other works as a slave and becomes active if the first one fails
- SGE supports parallel environments (PE)
- there is a graphical user interface, QMON; with QMON all configurations can be displayed and actions can be performed
- two parallel environments are installed: ompi (128 slots) and smp (64 slots)
- these are allocated to the parallel queues (a request sketch follows below)
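How a batch script requests one of these parallel environments — a minimal sketch using standard SGE directive syntax; the slot counts are only examples within the limits above:

#$ -pe ompi 32   # MPI job: ompi PE with 32 slots
#$ -pe smp 8     # shared-memory job: smp PE with 8 slots

SGE then schedules the job into a parallel queue that offers the requested PE.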

13 Batch Service/2
- parallel jobs with up to 32 cores are allowed; at most 4 jobs of 32 cores each run at the same time in bigpar
- in queue par up to 8 jobs of 8 cores each are allowed; each job runs on a separate node
- ser and long are serial queues
- inter is for interactive computations (Matlab, GaussView); see the sketch below
- all values and configurations are preliminary and may be changed according to user requirements
- batch scripts from the old cluster cannot be used on the new one because of the new batch system
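A hedged sketch of selecting a queue at submission time; the script names run_ompi_32 (from a later slide) and run_serial (hypothetical) are only examples:

qsub -q bigpar run_ompi_32   # large parallel job, up to 32 cores
qsub -q long run_serial      # serial job, runtime up to 96 h
qlogin -q inter              # interactive session, e.g. for Matlab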

14 Batch Service/3
Queue    Priority  Processors  Memory  Slots  PE    Runtime
bigpar   +5        8-32        40 GB   128    ompi  48 h
par                4-8         20 GB   64     smp   24 h
long     -5        1           4 GB    32     -     96 h
short    +10                                        6 h
inter    +15                   1 GB    8            3 h

15 Batch Service/4
- users are collected into user lists = working groups, e.g.: cms, limberg, haerdle, ...
- submission of jobs: example scripts for all applications are located in /perm/skripte; they include the module calls
- e.g. a Gaussian09 computation (a sketch of the script follows below):
cd /work/$USER/g09
cp /perm/skripte/g09/run_g09_smp_8 . -> edit if necessary
qsub run_g09_smp_8
qstat
qmon & -> graphical user interface
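A minimal sketch of what run_g09_smp_8 might contain, assuming the input file name input.com; the module name g09-a and the smp PE are taken from earlier slides, and the actual script in /perm/skripte is authoritative:

#!/bin/bash
#$ -N g09_smp_8    # job name
#$ -pe smp 8       # smp parallel environment, 8 slots
#$ -q par          # par allows jobs of up to 8 cores on one node
#$ -cwd            # run in the submission directory
#$ -j y            # merge stdout and stderr

module load g09-a                # loads Gaussian09 A02
g09 < input.com > output.log     # run Gaussian on the input file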

16 Batch Service/5
- MPI program development and start (computation of Pi):
cd /work/$USER/ompi
module load intel-cluster-322
module load openmpi-14-intel
cp /perm/skripte/ompi/cpi.c .
mpicc -o cpi cpi.c
cp /perm/skripte/ompi/run_ompi_32 .
qsub run_ompi_32 -> the script contains the call of the MPI program:
mpirun -np 8 -machinefile nodefile cpi
cat nodefile -> contains the nodes, e.g. node8.local slots=8
(a sketch of such a script follows below)
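A hedged sketch of what run_ompi_32 might contain; with an SGE-integrated OpenMPI build, mpirun can take the slot count from $NSLOTS and needs no explicit machine file. The actual script in /perm/skripte is authoritative:

#!/bin/bash
#$ -N cpi          # job name
#$ -pe ompi 32     # ompi PE, 32 slots
#$ -q bigpar       # parallel queue
#$ -cwd            # run in the submission directory
#$ -j y            # merge stdout and stderr

module load intel-cluster-322
module load openmpi-14-intel
mpirun -np $NSLOTS ./cpi   # SGE integration supplies the host list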

17 Batch Service/6
Important commands:
qstat -> status of your own jobs
qstat -u \* -> list of all jobs in the cluster
qdel <jobid> -> removes the job
qconf -sql -> list of all queues
qconf -sq par -> configuration of queue par
qconf -sul -> shows the user lists (groups)
qconf -su cms -> users of the list/group cms
qconf -spl -> list of the parallel environments
qacct -j <jobid> -> accounting info for the job
qstat -q bigpar -f -> shows whether the queue is disabled
