
ARCHER Advanced Research Computing High End Resource


1 ARCHER Advanced Research Computing High End Resource
Nick Brown

2 Website Location

3 Machine overview
About ARCHER: ARCHER (a Cray XC30) is a Massively Parallel Processor (MPP) supercomputer built from many thousands of individual nodes. There are two basic types of node in any Cray XC30:
- Compute nodes (4920): these only do user computation and are always referred to as "compute nodes". 24 cores per node, therefore approximately 120,000 cores in total (see the quick check below).
- Service/Login nodes (72/8): login nodes allow users to log in and perform interactive tasks; the other service nodes provide miscellaneous service functions.
- Serial/Post-Processing nodes (2)
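A quick check of that core count (a minimal shell sketch; the node and core counts are taken from the slide above):

    # 4920 compute nodes x 24 cores per node
    echo $((4920 * 24))    # prints 118080, i.e. roughly 120,000 cores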

4 Interacting with the system
User guide
Users do not log directly into the system. Instead they run commands via an esLogin server. This server relays commands and information via a service node referred to as a "gateway node".
[Diagram: external network → esLogin servers → gateway (service) nodes → compute nodes over the Cray Aries interconnect within the Cray XC30 cabinets; LNET nodes connect over InfiniBand links to the Cray Sonexion Lustre filesystem (OSS); serial nodes and Ethernet links are also shown.]
A minimal login sketch is given below.
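For example (assuming the standard ARCHER login address, login.archer.ac.uk, and an existing account, both taken from the ARCHER documentation rather than this slide):

    # Connect to one of the esLogin servers; replace username with your own
    ssh username@login.archer.ac.uk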

5 Job submission example
Quick start guide
An example PBS batch script, my_job.pbs:

    #!/bin/bash --login
    #PBS -l select=2
    #PBS -N test-job
    #PBS -A budget
    #PBS -l walltime=0:20:0

    # Make sure any symbolic links are resolved to absolute path
    export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)

    aprun -n 48 -N 24 ./hello_world

Submitting and monitoring the job:

    qsub my_job.pbs
    50818.sdb

    qstat -u $USER
    50818.sdb  nbrown23  standard  test-job  ...  0:20  Q  --

    qstat -u $USER
    50818.sdb  nbrown23  standard  test-job  ...  0:20  R  00:00

[Diagram: my_job.pbs enters the PBS queue and is dispatched to the compute nodes; output is returned in test-job.o50818 and test-job.e50818.]
How the resource request relates to the aprun line is worked through below.
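The resource request and the aprun line must agree: select=2 requests 2 nodes, and with 24 cores per node aprun -n 48 -N 24 starts 48 processes, 24 per node. As a hedged sketch (the budget code and executable name are placeholders), the equivalent request for 4 nodes would be:

    #PBS -l select=4                    # 4 compute nodes = 4 x 24 = 96 cores
    aprun -n 96 -N 24 ./hello_world     # 96 processes in total, 24 per node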

6 Compute node architecture and topology
ARCHER Layout

7 Cray XC30 compute node
[Diagram: a Cray XC30 compute node with two NUMA nodes (NUMA node 0 and NUMA node 1), each an Intel® Xeon® 12-core die with 32 GB of DDR3 memory, linked by QPI; an Aries NIC attaches over PCIe 3.0 to the shared Aries router and the Aries network.]
The XC30 compute node features:
- 2 x Intel® Xeon® sockets/dies, each a 12-core Ivy Bridge
- 64 GB of memory in normal nodes, 128 GB in the 376 "high memory" nodes
- 1 x Aries NIC, which connects to a shared Aries router and the wider network
A NUMA-aware placement sketch follows this list.
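Because each node has two NUMA regions of 12 cores each, process placement can matter. A hedged sketch using standard Cray aprun placement options (the executable name is a placeholder):

    # 24 processes on one node, split evenly across the two NUMA nodes
    # -N: processes per node, -S: processes per NUMA node, -cc cpu: bind each process to a core
    aprun -n 24 -N 24 -S 12 -cc cpu ./my_app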

8 XC30 Compute Blade

9 Cray XC30 Rank-1 Network
This is the all-to-all rank-1 topology. The animation on the original slide shows that for a message going from a node on Aries 3 to a node on Aries 11, the shortest route is one hop; it also illustrates how each packet can be routed adaptively on a non-minimal, two-hop route. Packets carry at most 64 bytes of data, and routing decisions are made on a packet-by-packet basis.
- Chassis with 16 compute blades (128 sockets)
- Inter-Aries communication over the backplane
- Per-packet adaptive routing

10 Cray XC30 Rank-2 Copper Network
2-cabinet group, 768 sockets. Six backplanes are connected with copper cables in a 2-cabinet group:
- 16 Aries connected by each backplane
- 4 nodes connect to a single Aries
- Active optical cables interconnect groups
This slide builds up the Dragonfly topology within a group. Each black wire in the diagram actually represents 3 routes between the Aries, because the copper cables carry 3 router-tiles' worth of traffic.

11 Copper & Optical Cabling
[Diagram: the copper connections used within a group and the optical connections used between groups.]

12 ARCHER Filesystems Brief Overview

13 Nodes and filesystems
[Diagram: which of the filesystems (/home, /work, RDF) are visible from the compute nodes and from the login/post-processing nodes.]

14 ARCHER Filesystems
User guide
/home (/home/n02/n02/<username>)
- Small (200 TB) filesystem for critical data (e.g. source code)
- Standard performance (NFS)
- Fully backed up
/work (/work/n02/n02/<username>)
- Large (>4 PB) filesystem for use during computations
- High-performance, parallel (Lustre) filesystem
- No backup
RDF (/nerc/n02/n02/<username>)
- Research Data Facility
- Very large (26 PB) filesystem for persistent data storage (e.g. results)
- High-performance, parallel (GPFS) filesystem
- Backed up via snapshots
A typical staging sketch is given below.
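As a hedged sketch of everyday use (the n02 project code and directory names follow the path pattern above and are purely illustrative):

    # Keep source code on the backed-up /home filesystem,
    # but build and run from the high-performance /work filesystem
    cp -r /home/n02/n02/$USER/my_code /work/n02/n02/$USER/
    cd /work/n02/n02/$USER/my_code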

15 Research Data Facility
RDF guide
The RDF is mounted on machines such as:
- ARCHER (service and PP nodes)
- DiRAC BlueGene/Q (frontend nodes)
- Data Transfer Nodes (DTN)
- JASMIN
Data Analytic Cluster (DAC):
- Run compute-, memory-, or IO-intensive analyses on data hosted on the service
- Nodes are specifically tailored for data-intensive work, with direct connections to the disks
- Separate from ARCHER but with a very similar architecture
A hedged data-transfer sketch is given below.
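For moving result sets from /work onto the RDF (a hedged sketch from an ARCHER service/PP node, where both filesystems are mounted; rsync is a standard tool and the paths are illustrative):

    # rsync preserves timestamps and can resume an interrupted transfer
    rsync -av /work/n02/n02/$USER/results/ /nerc/n02/n02/$USER/results/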

16 ARCHER Software Brief Overview

17 Cray’s Supported Programming Environment
Programming languages: Fortran, C, C++
Programming models:
- Distributed memory (Cray MPT): MPI, SHMEM
- PGAS & global view: UPC (CCE), CAF (CCE), Chapel
- Shared memory: OpenMP 3.0, OpenACC
Compilers: Cray Compiling Environment (CCE), GNU; 3rd-party: Intel Composer, Python
Tools: environment setup (Modules); debugging support (Allinea DDT, lgdb, Abnormal Termination Processing, STAT); performance analysis (CrayPat, Cray Apprentice2); scoping analysis (Reveal)
Optimized scientific libraries: LAPACK, ScaLAPACK, BLAS (libgoto), Iterative Refinement Toolkit, Cray Adaptive FFTs (CRAFFT), FFTW, Cray PETSc (with CASK), Cray Trilinos (with CASK)
I/O libraries: NetCDF, HDF5
This is the PE software that is packaged and shipped today. Third-party compilers, such as the Intel and PGI compilers, also work on the Cray, but users have to get the compiler directly from the provider. Other libraries and tools, such as Valgrind and Vampir, work on the Cray system but are not packaged and distributed with the Cray PE.
A compile-and-run sketch is given below.
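A hedged sketch of using this environment (ftn/cc/CC are the standard Cray compiler wrapper names; the source file is a placeholder):

    # The wrappers invoke whichever compiler the loaded PrgEnv module selects
    # and link Cray MPT (MPI) automatically
    cc -o hello_world hello_world.c     # C; use ftn for Fortran, CC for C++

    # Launch on compute nodes with aprun inside a batch job, as in the earlier example
    aprun -n 48 -N 24 ./hello_world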

18 Module environment
Best practice guide
Software is available via the module environment:
- Allows you to load different packages and different versions of packages
- Deals with potential library conflicts
This is based around the module command:
- List currently loaded modules: module list
- List all available modules: module avail
- Load a module: module load x
- Unload a module: module unload x
An example session is sketched below.
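A hedged example session (the package name and programming-environment modules are illustrative; exact names and versions vary):

    module list                          # show currently loaded modules
    module avail                         # list all modules that can be loaded
    module load cray-netcdf              # load a package (name illustrative)
    module swap PrgEnv-cray PrgEnv-gnu   # switch compiler environment (names illustrative)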

19 ARCHER SAFE (Service Administration)
https://www.archer.ac.uk/safe

20 SAFE
SAFE user guide
SAFE is an online ARCHER management system on which all users have an account. It lets you:
- Request machine accounts
- Reset passwords
- View resource usage
It is also the primary way in which PIs manage their ARCHER projects:
- Management of project users
- Tracking the project usage of individual users

21 Project resources
User guide
Machine usage is charged in kAUs:
- This is time spent running your jobs on the compute nodes, at 0.36 kAUs per node hour
- There is no usage charge for time spent working on the login nodes, post-processing nodes or the RDF DAC
- You can track usage via SAFE or the budgets command (figures are calculated daily); see the sketch below
Disk quotas:
- There is no specific charge for disk usage, but all projects have quotas
- If you need more disk space, contact your PI, or contact us if you manage the project
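A hedged sketch of checking resources from the command line (the budgets command is named on the slide; lfs quota is the standard Lustre quota tool and is assumed to be available on ARCHER):

    budgets                      # remaining kAUs in your budgets (figures updated daily)
    lfs quota -u $USER /work     # your usage and quota on the Lustre /work filesystem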

22 To conclude…
- You will be using ARCHER during this course; if you have any questions then let us know
- The documentation on the ARCHER website is a good reference tool, especially the quick start guide
- In normal use, if you have any questions or cannot find something, contact the helpdesk

