1 Team Quokka: Australia’s First Foray into the Student Cluster Challenge Rebecca Hartman-Baker, Ph.D. Senior Supercomputing Specialist & SCC Team Coach Happy Cheeks, Ph.D. Supercomputing Quokka & Team Mascot

2 Outline I. Student Cluster Competition II. Building & Training SCC Team III. Cluster Design IV. Science Applications V. Competition

3 I. STUDENT CLUSTER COMPETITION iVEC Student Cluster Competition Training Team, 2013

4 Student Cluster Competition: Introduction 48-hour non-stop computing showdown held at the annual supercomputing conference; in 2013, held in Denver, Colorado, USA Teams of undergraduates design, build, and run apps on a cluster Power constraint ~3000 W (standard track) $2500 equipment constraint (commodity track)

5 SCC: Introduction Key cluster architecture rules: Machine must contain only publicly available components All components must be turned on at all times (low-watt idle okay) When running applications, must not exceed 26 A @ 120 V power draw (~3120 W)

6 SCC: Introduction Applications: HPCC + 3 known apps, 1 mystery app 2013 apps: GraphLab, WRF, NEMO5, + Flying Snakes (OpenFOAM) Teams judged on throughput of work provided by judges, plus an interview to determine depth of understanding

7 SCC: History First held at 2007 Supercomputing conference, SC07 Brainchild of Brent Gorda, now General Manager, Intel High- Performance Data Division (formerly Whamcloud) Now three Student Cluster Competitions/year: China (April) ISC (June) SC (November)

8 SCC: iVEC’s Motivation Increase computational science literacy in WA Develop future users/employees Train professional workforce for local industry Exposure for iVEC in international HPC community It sounded like fun!


10 Starting iVEC’s SCC Team Began by raising interest at iVEC partner universities Contacted iVEC directors at universities, got leads for whom to contact First interest came from ECU (3 students) Other unis followed

11 Sponsorship for SCC Team SGI NVIDIA Allinea Rio Tinto

12 Sponsorship for SCC Team Discussed hardware sponsorship with Cray & SGI SGI first to commit, hardware + travel money Solicited financial sponsorship from mining companies in WA Rio Tinto committed to sponsor 3 students Obtained software & hardware sponsorship from Allinea & NVIDIA

13 Team Hardware Sponsorship Most important sponsorship to get SGI very enthusiastic about sponsoring team Put best person in Asia-Pacific region on project Todd helped team: Select machine architecture Determine software stack Set the machine up in Perth & at competition

14 Team Hardware Sponsorship When the team decided to use GPUs in the cluster, NVIDIA loaned us 8 K20X GPUs Received free of charge through an academic program (had to return them after the competition)

15 Team Travel Sponsorship Travel to competition very expensive Budget: $3000/student SGI committed enough for half of team Solicited support from mining companies in WA, successful with Rio Tinto

16 Team Software Sponsorship I “won” license for Allinea software I asked Allinea to sponsor license for team instead Allinea provided license for MAP & DDT products MAP: simple profiling tool, very useful to novice users DDT: parallel debugger, intuitive GUI

17 Team Composition Breakdown: 3 Computer Science/Games majors from ECU 2 Physics/Computer Engineering majors from UWA 1 Geophysics major from Curtin Each student assigned areas of expertise (1 primary, 2 secondary) At beginning of training, I facilitated students’ development of team norms (standards of behavior) that proved very effective No conflicts, no inappropriate behavior

18 III. CLUSTER DESIGN

19 Cluster Design When designing a cluster, generally the following must be considered: Cost Space Utility Performance Power Consumption

21 Cluster Design Architecture choices: All CPU nodes All accelerator nodes Hybrid CPU/accelerator Accelerator choices: NVIDIA Tesla Intel Xeon Phi Combination (?)

22 Cluster Architecture 2 Pyramid nodes: 4 x NVIDIA K20X, 2 x 12-core Intel Ivy Bridge, 64 GB RAM each 8 Hollister nodes: 2 x 12-core Intel Ivy Bridge, 64 GB RAM each InfiniBand interconnect Stay within the power budget by running only the GPUs or only the CPUs
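The "GPUs or CPUs, not both" strategy above can be sketched as simple power bookkeeping. All TDP and overhead figures below are assumptions chosen for illustration, not measurements from the team's machine:

```python
# Back-of-the-envelope power check for a 2 GPU-node + 8 CPU-node cluster,
# illustrating why running everything flat-out can blow the power budget.
LIMIT_W = 26 * 120   # competition rule: 26 A at 120 V ~ 3120 W

GPU_TDP = 235        # assumed NVIDIA K20X board power (W)
CPU_TDP = 115        # assumed 12-core Ivy Bridge Xeon TDP per socket (W)
CPU_IDLE = 30        # assumed idle draw per socket (W)
NODE_OVERHEAD = 50   # assumed fans, disks, memory, NIC per node (W)

def cluster_draw(gpu_nodes, cpu_nodes, gpus_busy, cpus_busy):
    """Rough peak draw for the whole cluster under a given workload mix."""
    nodes = gpu_nodes + cpu_nodes
    draw = nodes * NODE_OVERHEAD
    draw += nodes * 2 * (CPU_TDP if cpus_busy else CPU_IDLE)  # 2 sockets/node
    if gpus_busy:
        draw += gpu_nodes * 4 * GPU_TDP  # 4 GPUs per Pyramid node
    return draw

everything = cluster_draw(2, 8, gpus_busy=True, cpus_busy=True)
gpus_only = cluster_draw(2, 8, gpus_busy=True, cpus_busy=False)
print(everything, gpus_only)  # everything busy exceeds the limit; GPUs-only fits
```

Under these assumed numbers, the all-busy configuration lands well over the ~3120 W cap while the GPUs-plus-idle-CPUs configuration squeezes under it, which is the trade-off the slide describes.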

23 Cluster Architecture Chose CPU/GPU hybrid architecture For good LINPACK performance Potential accelerated mystery app Maximize flops per watt performance

24 Cluster Software OS & filesystem: CentOS 6, Ceph Open-source software: GCC, OpenMPI; numerical libraries: FFTW, PETSc, NetCDF, HDF5 Proprietary software: Intel compiler/MKL, Allinea DDT/MAP, PGI compiler, CUDA

25 Cluster Software: Ceph Each node has > 1 TB of disk, so we needed a parallel filesystem We could have used Lustre; however, it has issues with losing data if one node fails Ceph: a distributed object store and filesystem designed to provide excellent performance, reliability, and scalability

26 Cluster Software: Ceph Ceph is an object-storage system with a traditional file-system interface and POSIX semantics It looks like a regular filesystem and can be directly mounted in recent CentOS kernels Underneath, Ceph keeps several copies of each file, balanced across hosts The metadata server cluster can expand or contract to fit the file system, rebalancing dynamically to distribute data (with weighted distribution if disks differ in size)

27 IV. APPLICATIONS

28 Applications High-Performance LINPACK Graphlab NEMO5 WRF Mystery Application – Flying Snakes!


30 Linpack History Linear algebra library written in Fortran Benchmarking added in the late 1980s to estimate calculation times Initial releases used fixed matrix sizes of 100 and 1000 Arbitrary problem size support added in 1991 LAPACK replaced the Linpack library for linear algebra; however, the Linpack benchmarking tool is still used today

31 HPL Standard Released in 2000; re-written in C and optimized for parallel computing Uses MPI and BLAS The standard benchmark used to measure supercomputer performance Used to determine the Top500 Also used for stress testing and maintenance analysis

32 CUDA HPL CUDA-accelerated Linpack released by NVIDIA, available on the developer zone Uses the GPU instead of the CPU; problem size limited by GPU memory Gaining popularity as GPUs provide better flops/watt Standard for HPL runs in Student Cluster Competitions

33 [Chart: performance (TF) by year]


35 GraphLab Toolkit for graph algorithms Topic Modelling Graph Analytics Clustering Collaborative Filtering Graphical Models Computer Vision

36 GraphLab Applications PageRank (e.g., Google) Image reconstruction Recommendation predictions (e.g., Netflix) Image stitching (e.g., panoramic photos)
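The first workload above, PageRank, is a good illustration of the graph computations GraphLab accelerates; the core idea fits in a few lines of plain Python. This toy sketch uses no GraphLab API at all — it is just the power-iteration idea on a three-node example graph:

```python
# Toy PageRank by power iteration: each node repeatedly shares its rank
# with its outgoing neighbors, damped toward a uniform baseline.
def pagerank(links, damping=0.85, iters=50):
    """links: {node: [outgoing neighbors]}; returns a rank per node."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:  # dangling node: spread its rank evenly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # "c" collects the most rank
```

GraphLab's contribution is running exactly this kind of vertex-update loop in parallel across a cluster, which matters once the graph has billions of edges rather than three.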

37 NEMO5

38 NEMO5 Stands for NanoElectronics MOdeling Tools Free for academic use, not exactly open source Evolved to current form over 15 years Developed by Purdue University

39 NEMO5 NEMO5 is designed to model at the atomic scale Simulation of nanostructure properties: strain relaxation, phonon modes, electronic structure, self-consistent Schrödinger-Poisson calculations, and quantum transport E.g., modelling quantum dots
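NEMO5 solves far richer atomistic models, but the flavor of a Schrödinger eigenvalue calculation can be shown with a 1D finite-difference "particle in a box" (purely illustrative; none of this reflects NEMO5's actual methods or API):

```python
# Lowest energy levels of a particle in a 1D box via finite differences:
# discretize -1/2 d²/dx² on (0, L) with hbar = m = 1 and diagonalize.
import numpy as np

def box_energies(n_grid=400, length=1.0, n_levels=3):
    """Return the lowest n_levels eigenvalues of the boxed Hamiltonian."""
    h = length / (n_grid + 1)                    # grid spacing
    main = np.full(n_grid, 1.0 / h**2)           # -1/2 * (-2/h²) diagonal
    off = np.full(n_grid - 1, -0.5 / h**2)       # -1/2 * (1/h²) off-diagonal
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigh(H)[0][:n_levels]       # sorted eigenvalues

E = box_energies()
print(E[1] / E[0])  # close to the analytic ratio of 4, since E_n ∝ n²
```

Real nanostructure codes replace this toy Hamiltonian with atomistic tight-binding matrices over millions of atoms and couple it self-consistently to a Poisson solve, which is where the parallel computing comes in.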

40 WRF

41 WRF Next-generation mesoscale numerical weather prediction system Used throughout the world for both operational weather prediction and atmospheric research


43 Mystery Application Unknown application, presented at the competition To prepare, compiled and ran one new code each week during 2nd semester Gained experience with different types of builds (e.g., editing makefiles, cmake, autoconf, etc.) Gained familiarity with common errors encountered while compiling, and how to fix them

44 Flying Snakes! Aerodynamics of flying snakes Flying snakes inhabit the rainforest canopy in East Asia and jump between tree branches, gliding to the next branch A fluid dynamics problem: the behavior of air as the snake passes through it, the development of vortices, eddies, etc. Modeled with OpenFOAM, an open-source computational fluid dynamics toolbox

45 V. COMPETITION

46 Competition Arrived in Denver Thursday before competition, to acclimate to 15-hour time difference Visited National Renewable Energy Laboratory to see supercomputers Began setting up on Saturday before competition Competition time: Monday evening – Wednesday evening Wednesday evening: party at Casa Bonita Thursday: Pros vs. amateurs competition Friday: back home

47 Scenes from the Trip

48 Scenes from the Trip

49 Team Booth

50 Casa Bonita

51 Taking Down the Booth

52 Results Official champion: University of Texas (also last year's champions) Other rankings were not given, but we were middle of the pack Entire team (including coach) learned a lot! Students have potential leads for jobs & further study Plans to coach another team for 2014

53 Bibliography CentOS, http://www.centos.org Ceph High-Performance Linpack & HPCC Graphlab, http://graphlab.org NEMO5, e-projects/nemo5/ WRF Krishnan et al., Lift and wakes of flying snakes OpenFOAM, http://www.openfoam.com

54 For More Information iVEC, http://www.ivec.org Student Cluster Competition, Email:
