
1 Advanced High Performance Computing Workshop HPC 201 Charles J Antonelli Mark Champe Seth Meyer LSAIT ARS October, 2014

2 Roadmap
Flux review
Globus Connect
Advanced PBS: array & dependent scheduling, tools
GPUs on Flux
Scientific applications: R, Python, MATLAB
Parallel programming
Debugging & profiling

3 Flux review

4 The Flux cluster …

5 A Flux node
12-40 Intel cores
48 GB - 1 TB RAM
Local disk
8 GPUs (GPU Flux); each GPU contains 2,688 GPU cores

6 Programming Models
Two basic parallel programming models:
Message-passing: the application consists of several processes running on different nodes, communicating with each other over the network. Used when the data are too large to fit on a single node and simple synchronization is adequate. Called "coarse parallelism" or "SPMD"; implemented using MPI (Message Passing Interface) libraries.
Multi-threaded: the application consists of a single process containing several parallel threads that communicate with each other using synchronization primitives. Used when the data fit into a single process and the communication overhead of message-passing is intolerable. Called "fine-grained parallelism" or "shared-memory parallelism"; implemented using OpenMP (Open Multi-Processing) compilers and libraries.
Both models can be combined in a single hybrid application. Compile-and-run sketches for each model follow below.
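As a minimal sketch of how each model is built and launched (the source files hello_mpi.c and hello_omp.c are hypothetical example names; the OpenMP flag spelling varies by compiler version):

# Message-passing: compile with the MPI wrapper, launch one process per core
mpicc hello_mpi.c -o hello_mpi          # hello_mpi.c: hypothetical MPI example
mpirun -np 12 ./hello_mpi

# Multi-threaded: compile with OpenMP enabled, run one process with many threads
icc -openmp hello_omp.c -o hello_omp    # Intel 14 spells the flag -openmp; gcc uses -fopenmp
export OMP_NUM_THREADS=12
./hello_omp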

7 Using Flux
Three basic requirements:
A Flux login account
A Flux allocation
An MToken (or a Software Token)
Logging in to Flux:
ssh flux-login.engin.umich.edu
Works from campus wired or MWireless networks. Otherwise, use the VPN, or ssh login.itd.umich.edu first.

8 Cluster batch workflow
You create a batch script and submit it to PBS.
PBS schedules your job, and it enters the flux queue.
When its turn arrives, your job executes the batch script; your script has access to all Flux applications and data.
When your script completes, anything it wrote to standard output and standard error is saved in files stored in your submission directory.
You can ask that email be sent to you when your job starts, ends, or aborts.
You can check on the status of your job at any time, or delete it if it's not doing what you want.
A short time after your job completes, it disappears from PBS.
A minimal command-line walkthrough of this cycle follows below.
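A minimal sketch of the cycle from a login node (the script name and job id are hypothetical; these are the standard PBS/Torque commands used throughout this workshop):

qsub myjob.pbs      # submit; prints a job id such as 12345 (hypothetical)
qstat 12345         # check the job's status while it waits or runs
qdel 12345          # delete the job if it's not doing what you want
# after completion, output lands in your submission directory in files
# named after the job and job id (e.g., yourjobname.o12345)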

9 Loosely-coupled batch script
#PBS -N yourjobname
#PBS -V
#PBS -A youralloc_flux
#PBS -l qos=flux
#PBS -q flux
#PBS -l procs=12,pmem=1gb,walltime=00:05:00
#PBS -M youremailaddress
#PBS -m abe
#PBS -j oe
#Your Code Goes Below:
cat $PBS_NODEFILE
cd $PBS_O_WORKDIR
mpirun ./c_ex01

10 Tightly-coupled batch script
#PBS -N yourjobname
#PBS -V
#PBS -A youralloc_flux
#PBS -l qos=flux
#PBS -q flux
#PBS -l nodes=1:ppn=12,mem=4gb,walltime=00:05:00
#PBS -M youremailaddress
#PBS -m abe
#PBS -j oe
#Your Code Goes Below:
cd $PBS_O_WORKDIR
matlab -nodisplay -r script

11 GPU batch script
#PBS -N yourjobname
#PBS -V
#PBS -A youralloc_flux
#PBS -l qos=flux
#PBS -q flux
#PBS -l nodes=1:gpus=1,walltime=00:05:00
#PBS -M youremailaddress
#PBS -m abe
#PBS -j oe
#Your Code Goes Below:
cat $PBS_NODEFILE
cd $PBS_O_WORKDIR
matlab -nodisplay -r gpuscript
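If you want to confirm which GPU the scheduler assigned before your code runs, a sketch like the following can go at the top of the script body (treat $PBS_GPUFILE as an assumption; Torque sets it only on some GPU-enabled configurations):

# log the assigned GPU(s); $PBS_GPUFILE may not be set on every Torque build
[ -n "$PBS_GPUFILE" ] && cat "$PBS_GPUFILE"
nvidia-smi          # report device status from the node running the job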

12 Copying data
From Linux or Mac OS X, use scp or sftp.
Non-interactive (scp):
scp localfile uniqname@flux-xfer.engin.umich.edu:remotefile
scp -r localdir uniqname@flux-xfer.engin.umich.edu:remotedir
scp uniqname@flux-login.engin.umich.edu:remotefile localfile
Use "." as the destination to copy to your Flux home directory:
scp localfile uniqname@flux-xfer.engin.umich.edu:.
... or to your Flux scratch directory:
scp localfile uniqname@flux-xfer.engin.umich.edu:/scratch/allocname/uniqname
Interactive (sftp):
sftp uniqname@flux-xfer.engin.umich.edu
From Windows, use WinSCP: U-M Blue Disc, http://www.itcs.umich.edu/bluedisc/

13 Globus Online
Features:
High-speed data transfer, much faster than SCP or SFTP
Reliable & persistent
Minimal client software: Mac OS X, Linux, Windows
GridFTP endpoints:
Gateways through which data flow
Exist for XSEDE, OSG, …
UMich: umich#flux, umich#nyx
Add your own client endpoint!
Add your own server endpoint: contact flux-support@umich.edu
More information: http://cac.engin.umich.edu/resources/login-nodes/globus-gridftp

14 Advanced PBS

15 Job Arrays
Submit copies of identical jobs.
Invoked via qsub -t:
qsub -t array-spec pbsbatch.txt
where array-spec can be:
m-n
a,b,c
m-n%slotlimit
e.g. qsub -t 1-50%10 submits fifty jobs, numbered 1 through 50, of which only ten can run simultaneously.
$PBS_ARRAYID records each member's array identifier; a sketch using it follows below.
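A minimal sketch of a per-member script body (the program name and the input.N file naming scheme are hypothetical):

#PBS -N arrayjob
#PBS -V
#PBS -A youralloc_flux
#PBS -l qos=flux
#PBS -q flux
#PBS -l procs=1,walltime=00:05:00
cd $PBS_O_WORKDIR
# each array member selects its own input file by its array id
./myprogram input.$PBS_ARRAYID > output.$PBS_ARRAYID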

16 Dependent scheduling
Submit a job to become eligible for execution at a given time.
Invoked via qsub -a:
qsub -a [[[[CC]YY]MM]DD]hhmm[.SS] …
qsub -a 201412312359 j1.pbs
j1.pbs becomes eligible one minute before New Year's Day 2015.
qsub -a 1800 j2.pbs
j2.pbs becomes eligible at six PM today (or tomorrow, if submitted after six PM).

17 Dependent scheduling
Submit a job to run after specified job(s).
Invoked via qsub -W:
qsub -W depend=type:jobid[:jobid]…
where type can be:
after       schedule this job after jobids have started
afterany    schedule this job after jobids have finished
afterok     schedule this job after jobids have finished with no errors
afternotok  schedule this job after jobids have finished with errors
qsub first.pbs                             # assume it receives jobid 12345
qsub -W depend=afterany:12345 second.pbs
Schedules second.pbs after first.pbs completes. A sketch of chaining a whole pipeline this way follows below.
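Because qsub prints the new job's id on standard output, you can capture it and chain stages without copying ids by hand (stage1.pbs through stage3.pbs are hypothetical script names; the printed id may carry a hostname suffix, which depend= accepts):

# three-stage pipeline: each stage starts only if the previous one succeeded
JOB1=$(qsub stage1.pbs)
JOB2=$(qsub -W depend=afterok:$JOB1 stage2.pbs)
qsub -W depend=afterok:$JOB2 stage3.pbs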

18 Dependent scheduling
Submit a job to run before specified job(s).
Requires the dependent jobs to be scheduled first.
Invoked via qsub -W:
qsub -W depend=type:jobid[:jobid]…
where type can be:
before      jobids scheduled after this job starts
beforeany   jobids scheduled after this job completes
beforeok    jobids scheduled after this job completes with no errors
beforenotok jobids scheduled after this job completes with errors
on:N        wait for N job completions
qsub -W depend=on:1 second.pbs             # assume it receives jobid 12345
qsub -W depend=beforeany:12345 first.pbs
Schedules second.pbs after first.pbs completes.

19 Troubleshooting
showq [-r][-i][-b][-w user=uniq]   # running/idle/blocked jobs
qstat -f jobno                     # full info incl gpu
qstat -n jobno                     # nodes/cores where job running
diagnose -p                        # job prio and components
pbsnodes                           # nodes, states, properties
pbsnodes -l                        # list nodes marked down
checkjob [-v] jobno                # why job jobno not running
mdiag -a                           # allocs & users (flux)
freenodes                          # aggregate node/core busy/free
mdiag -u uniq                      # allocs for uniq (flux)
mdiag -a alloc_flux                # cores active, alloc (flux)

20 Scientific applications

21 Scientific Applications
R (incl. snow and multicore)
R with GPU (GpuLm, dist)
SAS, Stata
Python, SciPy, NumPy, BioPy
MATLAB with GPU
CUDA Overview
CUDA C (matrix multiply)

22 Python
Python software available on Flux:
EPD: the Enthought Python Distribution provides scientists with a comprehensive set of tools to perform rigorous data analysis and visualization. https://www.enthought.com/products/epd/
biopython: Python tools for computational molecular biology. http://biopython.org/wiki/Main_Page
numpy: fundamental package for scientific computing. http://www.numpy.org/
scipy: Python-based ecosystem of open-source software for mathematics, science, and engineering. http://www.scipy.org/
A sketch of loading and checking this stack follows below.
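A minimal sketch of finding and using the Python stack from a login shell (the exact module names on Flux are assumptions; module avail shows what is actually installed):

module avail python     # list Python-related modules; names vary by site
module load python      # hypothetical module name; load what avail shows
python -c "import numpy; print(numpy.__version__)"   # confirm the stack imports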

23 Debugging & profiling

24 Debugging with GDB
Command-line debugger
Start programs or attach to running programs
Display source program lines
Display and change variables or memory
Plant breakpoints, watchpoints
Examine stack frames
Excellent tutorial documentation: http://www.gnu.org/s/gdb/documentation/

25 Compiling for GDB
Debugging is easier if you ask the compiler to generate extra source-level debugging information: add the -g flag to your compilation.
icc -g serialprogram.c -o serialprogram
or
mpicc -g mpiprogram.c -o mpiprogram
GDB will work without symbols, but then you need to be fluent in machine instructions and hexadecimal.
Be careful using -O with -g. Some compilers won't optimize code when debugging; most will, but you sometimes won't recognize the resulting source code at optimization level -O2 and higher. Use -O0 -g to suppress optimization, as in the example below.
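Putting those flags together on the slide's running example gives a debugging build:

icc -O0 -g serialprogram.c -o serialprogram   # unoptimized build with full debug symbols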

26 Running GDB
Two ways to invoke GDB:
Debugging a serial program:
gdb ./serialprogram
Debugging an MPI program:
mpirun -np N xterm -e gdb ./mpiprogram
This gives you N separate GDB sessions, each debugging one rank of the program.
Remember to use the -X or -Y option to ssh when connecting to Flux, or you can't start xterms there.

27 Useful GDB commands
gdb exec               start gdb on executable exec
gdb exec core          start gdb on executable exec with core file core
l [m,n]                list source
disas                  disassemble function enclosing current instruction
disas func             disassemble function func
b func                 set breakpoint at entry to func
b line#                set breakpoint at source line#
b *0xaddr              set breakpoint at address addr
i b                    show breakpoints
d bp#                  delete breakpoint bp#
r [args]               run program with optional args
bt                     show stack backtrace
c                      continue execution from breakpoint
step                   single-step one source line
next                   single-step, don't step into function
stepi                  single-step one instruction
p var                  display contents of variable var
p *var                 display value pointed to by var
p &var                 display address of var
p arr[idx]             display element idx of array arr
x 0xaddr               display hex word at addr
x *0xaddr              display hex word pointed to by addr
x/20x 0xaddr           display 20 words in hex starting at addr
i r                    display registers
i r ebp                display register ebp
set var = expression   set variable var to expression
q                      quit gdb
A sample session appears below.
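To make the table concrete, a minimal sketch of a session on the slide's serial example (the function name compute and variable i are hypothetical):

gdb ./serialprogram
(gdb) b compute     # stop at entry to a hypothetical function compute
(gdb) r             # run until the breakpoint is hit
(gdb) bt            # show where we stopped on the call stack
(gdb) p i           # inspect a hypothetical local variable i
(gdb) c             # continue to completion
(gdb) q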

28 Debugging with DDT
Allinea's Distributed Debugging Tool is a comprehensive graphical debugger designed for the complex task of debugging parallel code.
Advantages include:
Provides GUI interface to debugging
Similar capabilities as, e.g., Eclipse or Visual Studio
Supports parallel debugging of MPI programs
Scales much better than GDB

29 Running DDT
Compile with -g:
mpicc -g mpiprogram.c -o mpiprogram
Load the DDT module:
module load ddt
Start DDT:
ddt mpiprogram
This starts a DDT session, debugging all ranks concurrently.
Remember to use the -X or -Y option to ssh when connecting to Flux, or you can't start ddt there.
http://arc.research.umich.edu/flux-and-other-hpc-resources/flux/software-library/
http://content.allinea.com/downloads/userguide.pdf

30 Application Profiling with MAP
Allinea's MAP tool is a statistical application profiler designed for the complex task of profiling parallel code.
Advantages include:
Provides GUI interface to profiling
Observe cumulative results, drill down for details
Supports parallel profiling of MPI programs
Handles most of the details under the covers

31 Running MAP
Compile with -g:
mpicc -g mpiprogram.c -o mpiprogram
Load the module (MAP is installed alongside DDT in the same Allinea module):
module load ddt
Start MAP:
map mpiprogram
This starts a MAP session: it runs your program, gathers profile data, and displays summary statistics.
Remember to use the -X or -Y option to ssh when connecting to Flux, or you can't start map there.
http://content.allinea.com/downloads/userguide.pdf

32 Resources
U-M Advanced Research Computing Flux pages: http://arc.research.umich.edu/resources-services/flux/
U-M Advanced Research Computing software library: http://arc.research.umich.edu/flux-and-other-hpc-resources/flux/software-library/
CAEN HPC Flux pages: http://cac.engin.umich.edu/
CAEN HPC YouTube channel: http://www.youtube.com/user/UMCoECAC
For assistance: flux-support@umich.edu
Read by a team of people including unit support staff. They cannot help with programming questions, but can help with operational Flux and basic usage questions.

33 References
1. Supported Flux software, http://arc.research.umich.edu/flux-and-other-hpc-resources/flux/software-library/ (accessed June 2014).
2. Free Software Foundation, Inc., "GDB User Manual," http://www.gnu.org/s/gdb/documentation/ (accessed June 2014).
3. Intel C and C++ Compiler 14 User and Reference Guide, https://software.intel.com/en-us/compiler_14.0_ug_c (accessed June 2014).
4. Intel Fortran Compiler 14 User and Reference Guide, https://software.intel.com/en-us/compiler_14.0_ug_f (accessed June 2014).
5. Torque Administrator's Guide, http://docs.adaptivecomputing.com/torque/4-2-8/torqueAdminGuide-4.2.8.pdf (accessed June 2014).
6. Submitting GPGPU Jobs, https://sites.google.com/a/umich.edu/engin-cac/resources/systems/flux/gpgpus (accessed June 2014).
7. Allinea DDT and MAP User Guide, http://content.allinea.com/downloads/userguide.pdf (accessed October 2014).

