
1 High Performance Computing Workshop (Statistics) HPC 101
Dr. Charles J Antonelli, LSAIT ARS, January 2013

2 Credits
Contributors:
- Brock Palen (CoE-IT CAC)
- Jeremy Hallum (MSIS)
- Tony Markel (MSIS)
- Bennet Fauber (CoE-IT CAC)
- LSAIT ARS
- UM CoE-IT CAC

3 Roadmap
- Flux Mechanics
- High Performance Computing
- Flux Architecture
- Flux Batch Operations
- Introduction to Scheduling

4 Flux Mechanics

5 Using Flux
Three basic requirements to use Flux:
1. A Flux account
2. A Flux allocation
3. An MToken (or a Software Token)

6 Using Flux
1. A Flux account
- Allows login to the Flux login nodes
- Develop, compile, and test code
- Available to members of the U-M community, free
- Get an account by visiting https://www.engin.umich.edu/form/cacaccountapplication

7 Using Flux
2. A Flux allocation
- Allows you to run jobs on the compute nodes
- Current rates:
  - $18 per core-month for Standard Flux
  - $24.35 per core-month for BigMem Flux
  - $8 subsidy per core-month for LSA and Engineering
- Details at http://www.engin.umich.edu/caen/hpc/planning/costing.html
- To inquire about Flux allocations please email flux-support@umich.edu

8 Using Flux
3. An MToken (or a Software Token)
- Required for access to the login nodes
- Improves cluster security by requiring a second means of proving your identity
- You can use either an MToken or an application for your mobile device (called a Software Token) for this
- Information on obtaining and using these tokens at http://cac.engin.umich.edu/resources/loginnodes/twofactor.html

9 Logging in to Flux
ssh flux-login.engin.umich.edu
- MToken (or Software Token) required
- You will be randomly connected to a Flux login node (currently flux-login1 or flux-login2)
- Firewalls restrict access to flux-login. To connect successfully, either:
  - Physically connect your ssh client platform to the U-M campus wired network, or
  - Use VPN software on your client platform, or
  - Use ssh to log in to an ITS login node, and ssh to flux-login from there

10 Modules
The module command allows you to specify what versions of software you want to use:
- module list         -- Show loaded modules
- module load name    -- Load module name for use
- module avail        -- Show all available modules
- module avail name   -- Show versions of module name*
- module unload name  -- Unload module name
- module              -- List all options
Enter these commands at any time during your session.
A configuration file allows default module commands to be executed at login:
- Put module commands in the file ~/privatemodules/default
- Don't put module commands in your .bashrc / .bash_profile

11 Flux environment
The Flux login nodes have the standard GNU/Linux toolkit: make, autoconf, awk, sed, perl, python, java, emacs, vi, nano, ...
Watch out for source code or data files written on non-Linux systems. Use these tools to analyze and convert source files to Linux format: file, dos2unix, mac2unix

12 Lab 1
Task: Invoke R interactively on the login node
  module load R
  module list
  R
  q()    (entered at the R prompt, to exit)
Please run only very small computations on the Flux login nodes, e.g., for testing.
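For instance, a quick sanity check such as the following (an arbitrary toy computation, not part of the lab materials) is the kind of small job that is appropriate at the login node's R prompt:

  # A trivially small test, fine for a login node.
  x <- rnorm(1000)
  mean(x)
  sd(x)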

13 Lab 2
Task: Run R in batch mode
- module load R
- Copy the sample code to your login directory:
  cd
  cp ~cja/stats-sample-code.tar.gz .
  tar -zxvf stats-sample-code.tar.gz
  cd ./stats-sample-code
- Examine lab2.pbs and lab2.R
- Edit lab2.pbs with your favorite Linux editor
  - Change the #PBS -M email address to your own

14 Lab 2
Task: Run R in batch mode
- Submit your job to Flux:
  qsub lab2.pbs
- Watch the progress of your job:
  qstat -u uniqname
  where uniqname is your own uniqname
- When complete, look at the job's output:
  less lab2.out

15 Lab 3
Task: Use the multicore package in R
The multicore package allows you to use multiple cores on a single node.
- module load R
- cd ~/stats-sample-code
- Examine lab3.pbs and lab3.R
- Edit lab3.pbs with your favorite Linux editor
  - Change the #PBS -M email address to your own

16 Lab 3
Task: Use the multicore package in R
- Submit your job to Flux:
  qsub lab3.pbs
- Watch the progress of your job:
  qstat -u uniqname
  where uniqname is your own uniqname
- When complete, look at the job's output:
  less lab3.out
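The lab's lab3.R lives in the sample tarball and is not reproduced in these slides. As a rough sketch of what multicore usage can look like (mclapply() is the package's real interface; the toy workload and core count are invented for illustration):

  # Illustrative sketch only, not the lab3.R from the tarball.
  library(multicore)

  # A toy task: estimate pi by Monte Carlo.
  estimate <- function(n) {
    hits <- sum(runif(n)^2 + runif(n)^2 < 1)
    4 * hits / n
  }

  # mclapply() runs the function on several cores of a single node.
  results <- mclapply(rep(1e6, 4), estimate, mc.cores = 4)
  print(mean(unlist(results)))

The value passed as mc.cores should match the core count requested in the PBS script.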

17 Lab 4
Task: Another multicore example in R
- module load R
- cd ~/stats-sample-code
- Examine lab4.pbs and lab4.R
- Edit lab4.pbs with your favorite Linux editor
  - Change the #PBS -M email address to your own

18 Lab 4
Task: Another multicore example in R
- Submit your job to Flux:
  qsub lab4.pbs
- Watch the progress of your job:
  qstat -u uniqname
  where uniqname is your own uniqname
- When complete, look at the job's output:
  less lab4.out

19 Lab 5
Task: Run snow interactively in R
The snow package allows you to use cores on multiple nodes.
- module load R
- cd ~/stats-sample-code
- Examine lab5.R
- Start an interactive PBS session:
  qsub -I -V -l procs=3 -l walltime=30:00 -A stats_flux -l qos=flux -q flux

20 Lab 5
Task: Run snow interactively in R
- cd $PBS_O_WORKDIR
- Run snow in the interactive PBS session:
  R CMD BATCH --vanilla lab5.R lab5.out
  ... ignore any "Connection to lifeline lost" message
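The snow calls themselves are inside lab5.R. A minimal sketch of the snow interface, assuming a socket-based cluster built from the PBS node file (the actual lab code may construct its cluster differently, e.g. over MPI):

  # Illustrative sketch only, not the lab5.R from the tarball.
  library(snow)

  # Build the worker list from the hosts PBS assigned to this job.
  hosts <- readLines(Sys.getenv("PBS_NODEFILE"))
  cl <- makeCluster(hosts, type = "SOCK")

  # Spread a simple computation across the workers.
  results <- clusterApply(cl, 1:6, function(i) mean(rnorm(1e6)))
  print(unlist(results))

  stopCluster(cl)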

21 Lab 6
Task: Run snowfall in R
The snowfall package is similar to snow, and allows you to change the number of cores used without modifying your R code.
- module load R
- cd ~/stats-sample-code
- Examine lab6.pbs and lab6.R
- Edit lab6.pbs with your favorite Linux editor
  - Change the #PBS -M email address to your own

22 Lab 6
Task: Run snowfall in R
- Submit your job to Flux:
  qsub lab6.pbs
- Watch the progress of your job:
  qstat -u uniqname
  where uniqname is your own uniqname
- When complete, look at the job's output:
  less lab6.out
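Again, lab6.R itself is in the tarball; the following is only a sketch of the snowfall interface (sfInit(), sfLapply(), and sfStop() are the package's real functions; the workload and core count are invented):

  # Illustrative sketch only, not the lab6.R from the tarball.
  library(snowfall)

  # The core count is a runtime argument, so the R code need not change
  # when the PBS request changes.
  sfInit(parallel = TRUE, cpus = 4)

  results <- sfLapply(1:8, function(i) mean(rnorm(1e6)))
  print(unlist(results))

  sfStop()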

23 Lab 7
Task: Run parallel MATLAB
Distribute parfor iterations over multiple cores on multiple nodes.
Do this once:
  mkdir ~/matlab/
  cd ~/matlab
  wget http://cac.engin.umich.edu/resources/software/matlabdct/mpiLibConf.m

24 Lab 7
Task: Run parallel MATLAB
- Start an interactive PBS session:
  module load matlab
  qsub -I -V -l nodes=2:ppn=3 -l walltime=30:00 -A stats_flux -l qos=flux -q flux
- Start MATLAB:
  matlab -nodisplay

25 Lab 7
Task: Run parallel MATLAB
Set up a matlabpool:
  sched = findResource('scheduler', 'type', 'mpiexec')
  set(sched, 'MpiexecFileName', '/home/software/rhel6/mpiexec/bin/mpiexec')
  set(sched, 'EnvironmentSetMethod', 'setenv')
  % use the 'sched' object when calling matlabpool
  % the syntax for matlabpool must use the (sched, N) format
  matlabpool (sched, 6)
... ignore "Found pre-existing parallel job(s)" warnings

26 Lab 7
Task: Run parallel MATLAB
Run a simple parfor:
  tic
  x = 0;
  parfor i = 1:100000000
    x = x + i;
  end
  toc
Close the matlabpool:
  matlabpool close

27 Compiling Code
Assuming default module settings:
- Use mpicc/mpiCC/mpif90 for MPI code
- Use icc/icpc/ifort with -openmp for OpenMP code
- Serial code, Fortran 90:
  ifort -O3 -ipo -no-prec-div -xHost -o prog prog.f90
- Serial code, C:
  icc -O3 -ipo -no-prec-div -xHost -o prog prog.c
- MPI parallel code:
  mpicc -O3 -ipo -no-prec-div -xHost -o prog prog.c
  mpirun -np 2 ./prog

28 Lab
Task: compile and execute simple programs on the Flux login node
- Copy sample code to your login directory:
  cd
  cp ~brockp/cac-intro-code.tar.gz .
  tar -xvzf cac-intro-code.tar.gz
  cd ./cac-intro-code
- Examine, compile & execute helloworld.f90:
  ifort -O3 -ipo -no-prec-div -xHost -o f90hello helloworld.f90
  ./f90hello
- Examine, compile & execute helloworld.c:
  icc -O3 -ipo -no-prec-div -xHost -o chello helloworld.c
  ./chello
- Examine, compile & execute MPI parallel code:
  mpicc -O3 -ipo -no-prec-div -xHost -o c_ex01 c_ex01.c
  ... ignore the "feupdateenv is not implemented and will always fail" warning
  mpirun -np 2 ./c_ex01
  ... ignore runtime complaints about missing NICs

29 Makefiles
The make command automates your code compilation process. It uses a makefile to specify dependencies between source and object files. The sample directory contains a sample makefile.
- To compile c_ex01:
  make c_ex01
- To compile all programs in the directory:
  make
- To remove all compiled programs:
  make clean
- To make all the programs using 8 compiles in parallel:
  make -j8

30 High Performance Computing

31 Advantages of HPC
- Cheaper than the mainframe
- More scalable than your laptop
- Buy or rent only what you need
- COTS hardware
- COTS software
- COTS expertise

32 Disadvantages of HPC
- Serial applications
- Tightly-coupled applications
- Truly massive I/O or memory requirements
- Difficulty/impossibility of porting software
- No COTS expertise

33 Programming Models
Two basic parallel programming models:
- Message-passing
  - The application consists of several processes running on different nodes and communicating with each other over the network
  - Used when the data are too large to fit on a single node, and simple synchronization is adequate
  - "Coarse parallelism"
  - Implemented using MPI (Message Passing Interface) libraries
- Multi-threaded
  - The application consists of a single process containing several parallel threads that communicate with each other using synchronization primitives
  - Used when the data can fit into a single process, and the communications overhead of the message-passing model is intolerable
  - "Fine-grained parallelism" or "shared-memory parallelism"
  - Implemented using OpenMP (Open Multi-Processing) compilers and libraries
- Both models can be combined in a single application

34 Good parallel
- Embarrassingly parallel: Folding@home, RSA Challenges, password cracking, ...
- Regular structures: divide & conquer, e.g. Quicksort
- Pipelined: N-body problems, matrix multiplication O(n^2) -> O(n)

35 Less good parallel
- Serial algorithms: those that don't parallelize easily
- Irregular data & communications structures, e.g., surface/subsurface water hydrology modeling
- Tightly-coupled algorithms
- Unbalanced algorithms: master/worker algorithms, where the worker load is uneven

36 Amdahl's Law
If you enhance a fraction f of a computation by a speedup S, the overall speedup is:
  overall speedup = 1 / ((1 - f) + f/S)
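A quick worked example (numbers chosen only for illustration): if f = 0.95 of a computation can be parallelized with a speedup of S = 8, the overall speedup is 1 / (0.05 + 0.95/8) ≈ 5.9. Even as S grows without bound, the 5% serial fraction caps the overall speedup at 1/0.05 = 20.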

37 Amdahl's Law (figure slide)

38 Flux Architecture

39 The Flux cluster (diagram slide)

40 Behind the curtain (diagram slide: nyx, flux, shared hardware)

41 A Flux node
- 12 Intel cores
- 48 GB RAM
- Local disk
- Ethernet, InfiniBand

42 A Newer Flux node
- 16 Intel cores
- 64 GB RAM
- Local disk
- Ethernet, InfiniBand

43 A Flux BigMem node
- 40 Intel cores
- 1 TB RAM
- Local disk
- Ethernet, InfiniBand

44 Flux hardware (January 2012)
Standard Flux: 8,016 Intel cores; 632 nodes; 48 GB RAM/node; 4 GB RAM/core (average)
BigMem Flux: 200 Intel cores; 5 nodes; 1 TB RAM/node; 25 GB RAM/core
4X InfiniBand network (interconnects all nodes):
- 40 Gbps, <2 us latency
- Latency an order of magnitude less than Ethernet
Lustre filesystem:
- Scalable, high-performance, open
- Supports MPI-IO for MPI jobs
- Mounted on all login and compute nodes

45 Flux software
Default software: Intel Compilers with OpenMPI for Fortran and C
Optional software: PGI Compilers, Unix/GNU tools, gcc/g++/gfortran
Licensed software: Abaqus, ANSYS, Mathematica, Matlab, R, Stata SE, ...
See http://cac.engin.umich.edu/resources/software/index.html
You can choose software using the module command.

46 Flux network
- All Flux nodes are interconnected via InfiniBand and a campus-wide private Ethernet network
- The Flux login nodes are also connected to the campus backbone network
- The Flux data transfer node will soon be connected over a 10 Gbps connection to the campus backbone network
This means:
- The Flux login nodes can access the Internet; the Flux compute nodes cannot
- If InfiniBand is not available for a compute node, code on that node will fall back to Ethernet communications

47 Flux data
- Lustre filesystem mounted on /scratch on all login, compute, and transfer nodes
  - 342 TB of short-term storage for batch jobs
  - Large, fast, short-term
- NFS filesystems mounted on /home and /home2 on all nodes
  - 40 GB of storage per user for development & testing
  - Small, slow, long-term

48 Flux data
Flux does not provide large, long-term storage. Alternatives:
- ITS Value Storage
- Departmental server
CAEN can mount your storage on the login nodes. Issue the df -kh command on a login node to see what other groups have mounted.

49 Globus Online
Features:
- High-speed data transfer, much faster than SCP or SFTP
- Reliable & persistent
- Minimal client software: Mac OS X, Linux, Windows
GridFTP endpoints:
- Gateways through which data flow
- Exist for XSEDE, OSG, ...
- UMich: umich#flux, umich#nyx
- Add your own server endpoint: contact flux-support
- Add your own client endpoint!
More information: http://cac.engin.umich.edu/resources/loginnodes/globus.html

50 Flux Batch Operations

51 Portable Batch System
- All production runs are run on the compute nodes using the Portable Batch System (PBS)
- PBS manages all aspects of cluster job execution except job scheduling
- Flux uses the Torque implementation of PBS
- Flux uses the Moab scheduler for job scheduling
- Torque and Moab work together to control access to the compute nodes
- PBS puts jobs into queues; Flux has a single queue, named flux

52 Cluster workflow
- You create a batch script and submit it to PBS
- PBS schedules your job, and it enters the flux queue
- When its turn arrives, your job will execute the batch script
- Your script has access to any applications or data stored on the Flux cluster
- When your job completes, anything it sent to standard output and error is saved and returned to you
- You can check on the status of your job at any time, or delete it if it's not doing what you want
- A short time after your job completes, it disappears

53 Sample serial script
  #PBS -N yourjobname
  #PBS -V
  #PBS -A youralloc_flux
  #PBS -l qos=flux
  #PBS -q flux
  #PBS -l procs=1,walltime=00:05:00
  #PBS -M youremailaddress
  #PBS -m abe
  #PBS -j oe
  #Your Code Goes Below:
  cd $PBS_O_WORKDIR
  ./f90hello

54 Sample batch script
  #PBS -N yourjobname
  #PBS -V
  #PBS -A youralloc_flux
  #PBS -l qos=flux
  #PBS -q flux
  #PBS -l procs=16,walltime=00:05:00
  #PBS -M youremailaddress
  #PBS -m abe
  #PBS -j oe
  #Your Code Goes Below:
  cat $PBS_NODEFILE     # lists the node(s) your job ran on
  cd $PBS_O_WORKDIR     # change to submission directory
  mpirun ./c_ex01       # no need to specify -np

55 Basic batch commands
Once you have a script, submit it: qsub scriptfile
  $ qsub singlenode.pbs
  6023521.nyx.engin.umich.edu
You can check on the job status: qstat jobid
  $ qstat 6023521
  nyx.engin.umich.edu:
                                                                  Req'd  Req'd   Elap
  Job ID           Username Queue    Jobname          SessID NDS TSK Memory Time  S Time
  ---------------- -------- -------- ---------------- ------ --- --- ------ ----- - -----
  6023521.nyx.engi cja      flux     hpc101i              --   1   1     -- 00:05 Q    --
To delete your job: qdel jobid
  $ qdel 6023521
  $

56 Lab
Task: Run an MPI job on 8 cores
- Compile c_ex05:
  cd ~/cac-intro-code
  make c_ex05
- Edit the file run with your favorite Linux editor
  - Change the #PBS -M address to your own (I don't want Brock to get your email!)
  - Change the #PBS -A allocation to stats_flux, or to your own allocation, if desired
  - Change the #PBS -l qos setting to flux
- Submit your job:
  qsub run

57 PBS attributes
As always, man qsub is your friend:
- -N : sets the job name; can't start with a number
- -V : copy shell environment to compute node
- -A youralloc_flux : sets the allocation you are using
- -l qos=flux : sets the quality of service parameter
- -q flux : sets the queue you are submitting to
- -l : requests resources, like number of cores or nodes
- -M : whom to email; can be multiple addresses
- -m : when to email: a=job abort, b=job begin, e=job end
- -j oe : join STDOUT and STDERR to a common file
- -I : allow interactive use
- -X : allow X GUI use

58 PBS resources (1)
A resource (-l) can specify:
- Request wallclock (that is, running) time:
  -l walltime=HH:MM:SS
- Request C MB of memory per core:
  -l pmem=Cmb
- Request T MB of memory for the entire job:
  -l mem=Tmb
- Request M cores on arbitrary node(s):
  -l procs=M
- Request a token to use licensed software:
  -l gres=stata:1
  -l gres=matlab
  -l gres=matlab%Communication_toolbox

59 PBS resources (2)
A resource (-l) can specify, for multithreaded code:
- Request M nodes with at least N cores per node:
  -l nodes=M:ppn=N
- Request M nodes with exactly N cores per node:
  -l nodes=M:tpn=N
  (you'll only use this for specific algorithms)

60 Interactive jobs
You can submit jobs interactively:
  qsub -I -V -l procs=2 -l walltime=15:00 -A youralloc_flux -l qos=flux -q flux
- This queues a job as usual
- Your terminal session will be blocked until the job runs
- When it runs, you will be connected to one of your nodes
  - Invoked serial commands will run on that node
  - Invoked parallel commands (e.g., via mpirun) will run on all of your nodes
- When you exit the terminal session your job is deleted
Interactive jobs allow you to:
- Test your code on cluster node(s)
- Execute GUI tools on a cluster node with output on your local platform's X server
- Utilize a parallel debugger interactively

61 Lab
Task: Run an interactive job
- Enter this command (all on one line):
  qsub -I -V -l procs=2 -l walltime=15:00 -A FluxTraining_flux -l qos=flux -q flux
- When your job starts, you'll get an interactive shell
- Copy and paste the batch commands from the "run" file, one at a time, into this shell
- Experiment with other commands
- After fifteen minutes, your interactive shell will be killed

62 Introduction to Scheduling

63 The Scheduler (1/3)
Flux scheduling policies:
- The job's queue determines the set of nodes you run on
- The job's account and qos determine the allocation to be charged
  - If you specify an inactive allocation, your job will never run
- The job's resource requirements help determine when the job becomes eligible to run
  - If you ask for unavailable resources, your job will wait until they become free
- There is no pre-emption

64 The Scheduler (2/3)
Flux scheduling policies:
- If there is competition for resources among eligible jobs in the allocation or in the cluster, two things help determine when you run:
  - How long you have waited for the resource
  - How much of the resource you have used so far
  - This is called "fairshare"
- The scheduler will reserve nodes for a job with sufficient priority
  - This is intended to prevent jobs with large resource requirements from starving

65 The Scheduler (3/3)
Flux scheduling policies:
- If there is room for shorter jobs in the gaps of the schedule, the scheduler will fit smaller jobs in those gaps
- This is called "backfill"
(figure: jobs packed into gaps on a cores-versus-time chart)

66 Gaining insight
There are several commands you can run to get some insight into the scheduler's actions:
- freenodes : shows the number of free nodes and cores currently available
- showq : shows the state of the queue (like qstat -a), except shows running jobs in order of finishing
- mdiag -p -t flux : shows the factors used in computing job priority
- checkjob jobid : can show why your job might not be starting
- showstart -e all : gives you a coarse estimate of job start time; use the smallest value returned

67 More advanced scheduling
- Job Arrays
- Dependent Scheduling

68 Job Arrays
- Submit copies of identical jobs
- Invoked via qsub -t:
  qsub -t array-spec pbsbatch.txt
  where array-spec can be:
    m-n
    a,b,c
    m-n%slotlimit
  e.g. qsub -t 1-50%10
  submits fifty jobs, numbered 1 through 50, of which only ten can run simultaneously
- $PBS_ARRAYID records the array identifier of each job
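Inside each array task, your script can read $PBS_ARRAYID to decide which piece of work to do. A small hypothetical R sketch (the input file naming is invented purely for illustration):

  # Illustrative sketch: each array task processes its own input file.
  task <- as.integer(Sys.getenv("PBS_ARRAYID"))

  # Hypothetical layout: input1.csv, input2.csv, ... one file per array task.
  infile <- paste0("input", task, ".csv")
  dat <- read.csv(infile)
  print(summary(dat))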

69 Dependent scheduling
- Submit jobs whose execution scheduling depends on other jobs
- Invoked via qsub -W:
  qsub -W depend=type:jobid[:jobid]...
  where depend can be:
    after       schedule after jobids have started
    afterok     schedule after jobids have finished, only if no errors
    afternotok  schedule after jobids have finished, only if errors
    afterany    schedule after jobids have finished, regardless of status
    before, beforeok, beforenotok, beforeany

70 Dependent scheduling
Where depend can be (cont'd):
    before       when this job has started, jobids will be scheduled
    beforeok     after this job completes without errors, jobids will be scheduled
    beforenotok  after this job completes with errors, jobids will be scheduled
    beforeany    after this job completes, regardless of status, jobids will be scheduled

71 Flux On-Demand Pilot
- Alternative to a static allocation: pay only for the core time you use
- Pros: accommodates "bursty" usage patterns
- Cons: limit of 50 cores total; limit of 25 cores for any user
- The FoD pilot has ended
  - FoD pilot users continue to be supported
  - The FoD service is being defined
- To inquire about FoD allocations please email flux-support@umich.edu

72 Flux Resources
- http://www.youtube.com/user/UMCoECAC : UMCoECAC's YouTube channel
- http://orci.research.umich.edu/resources-services/flux/ : U-M Office of Research Cyberinfrastructure Flux summary page
- http://cac.engin.umich.edu/ : getting an account, basic overview (use menu on left to drill down)
- http://cac.engin.umich.edu/started : how to get started at the CAC, plus cluster news, RSS feed and outages
- http://www.engin.umich.edu/caen/hpc : XSEDE information, Flux in grant applications, startup & retention offers
- http://cac.engin.umich.edu/ , Resources | Systems | Flux | PBS : detailed PBS information for Flux use
For assistance: flux-support@umich.edu
- Read by a team of people
- Cannot help with programming questions, but can help with operational Flux and basic usage questions

73 Summary
- The Flux cluster is just a collection of similar Linux machines connected together to run your code, much faster than your desktop can
- Command-line scripts are queued by a batch system and executed when resources become available
- Some important commands are:
  qsub
  qstat -u username
  qdel jobid
  checkjob
- Develop and test, then submit your jobs in bulk and let the scheduler optimize their execution

74 Any Questions?
Charles J. Antonelli
LSAIT Research Systems Group
cja@umich.edu
http://www.umich.edu/~cja
734 763 0607

75 References
1. http://cac.engin.umich.edu/resources/software/R.html
2. http://cac.engin.umich.edu/resources/software/snow.html
3. http://cac.engin.umich.edu/resources/software/matlab.html
4. CAC supported Flux software, http://cac.engin.umich.edu/resources/software/index.html (accessed August 2011)
5. J. L. Gustafson, "Reevaluating Amdahl's Law," chapter for the book Supercomputers and Artificial Intelligence, edited by Kai Hwang, 1988. http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html (accessed November 2011)
6. Mark D. Hill and Michael R. Marty, "Amdahl's Law in the Multicore Era," IEEE Computer, vol. 41, no. 7, pp. 33-38, July 2008. http://research.cs.wisc.edu/multifacet/papers/ieeecomputer08_amdahl_multicore.pdf (accessed November 2011)
7. InfiniBand, http://en.wikipedia.org/wiki/InfiniBand (accessed August 2011)
8. Intel C and C++ Compiler 11.1 User and Reference Guide, http://software.intel.com/sites/products/documentation/hpc/compilerpro/en-us/cpp/lin/compiler_c/index.htm (accessed August 2011)
9. Intel Fortran Compiler 11.1 User and Reference Guide, http://software.intel.com/sites/products/documentation/hpc/compilerpro/en-us/fortran/lin/compiler_f/index.htm (accessed August 2011)
10. Lustre file system, http://wiki.lustre.org/index.php/Main_Page (accessed August 2011)
11. Torque User's Manual, http://www.clusterresources.com/torquedocs21/usersmanual.shtml (accessed August 2011)
12. Jurg van Vliet & Flavia Paganelli, Programming Amazon EC2, O'Reilly Media, 2011. ISBN 978-1-449-39368-7

