HCC Workshop Department of Earth and Atmospheric Sciences September 23/30, 2014.


1 HCC Workshop Department of Earth and Atmospheric Sciences September 23/30, 2014

2 Introduction to LINUX
● An operating system, like Windows or OS X (but different)
● The OS used by HCC
● A means of communicating with the computer
● Uses a Command Line Interface (instead of a GUI)

3 The Terminal Window
After logging in to your account, commands are entered in a terminal window. Working in LINUX, you are always located somewhere in your directory tree ("You are here"). The most common commands navigate within the directory tree and create, edit, move, and remove files and directories.

4 Getting Started - Logging In
● Quick start guide for Windows
● Quick start guide for Mac/Linux

5 Your Directories on HCC Computers
Work Directory - not backed up
  /work/ / (or just $WORK)
  Use for: temporary scratch space (not long-term storage), I/O for running jobs
Home Directory - smaller space than work (quota limited by group)
  /home/ / (or just $HOME)
  Use for: program binaries, source code, configuration files, etc.
Your directories are not shared/mounted across all HCC clusters (i.e., Tusker and Crane have separate file systems).
pwd ("print working directory") and cd ("change directory") are the commands for seeing and changing where you are.
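The pwd and cd commands mentioned above can be tried on any Linux system. A minimal sketch follows; note that $WORK exists only on the HCC clusters, so this example navigates with $HOME instead:

```shell
# Print the directory you are currently in
pwd

# Change to your home directory (a bare "cd" does the same thing)
cd "$HOME"
pwd

# On an HCC cluster you would move to scratch space the same way:
# cd $WORK
```

After the cd, pwd reports the path stored in $HOME.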

6 Other Commands to Cover
Move/rename files: mv (Example: mv oldname newname)
Copy files: cp (Example: cp myfile myfilecopy)
Change the current directory: cd (Example: cd mydirectory)
See contents of the directory: ls
See directory contents with more details: ls -l
Print working directory: pwd
Print file to screen: more (Example: more myfile)
Merge multiple files: cat (Example: cat file1 file2 > combinedfile)
Create empty file: touch (Example: touch newfile)
Search for a string in a file: grep (Example: grep mystring myfile)
Compress & decompress files: tar, zip, gzip
Remove files: rm (Example: rm filetodelete)
Remove entire directory: rm -rf (Example: rm -rf directorytodelete)
See more at:

Exercise: Using the commands on the left:
1. Create a directory named "mydir".
2. Change into the directory and create a file named "myfile".
3. Delete "myfile" and "mydir".
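The three exercise steps above can be sketched as a shell session (directory and file names taken from the slide):

```shell
mkdir mydir     # 1. create a directory named "mydir"
cd mydir        # 2a. change into it
touch myfile    # 2b. create an empty file named "myfile"
ls -l           #     verify that the file exists

rm myfile       # 3a. delete the file
cd ..           #     step back out of the directory
rmdir mydir     # 3b. remove the now-empty directory (rm -rf mydir also works)
```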

7 Basic exercise - create directory/file

8 One Editor: nano (or any other you prefer)
Start nano by running 'nano'.
Save: Control + o
Exit: Control + x

9 Exercise - Create Submit Script
Using nano, create the text file below. Save it as 'helloworld.submit'. We will submit it to SLURM later.

#!/bin/sh
#SBATCH --time=00:05:00
#SBATCH --job-name=helloworld
#SBATCH --error=helloworld.%J.err
#SBATCH --output=helloworld.%J.out
#SBATCH --qos=short

echo "this is my first slurm job"
sleep 30

10 Cluster Basics
● Every job must be run through a scheduler: Submitting Jobs using SLURM
● Software is managed through the module system: Using Module
● Every user has two directories to use: Where to Put Data

11 SLURM (Simple Linux Utility for Resource Management)
● Define the job name
  #SBATCH --job-name=MY_Test_Job
● Emails for job notification
  #SBATCH --mail-user=<email>
  #SBATCH --mail-type=ALL   # (ALL = BEGIN, END, FAIL, REQUEUE)
● Requested run time
  #SBATCH --time=7-0
  # Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds"
● Requested number of cores
  #SBATCH --ntasks=16
  or
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=16
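Putting the directives above together, a complete submit script might look like the sketch below. The job name, time limit, and core counts are the illustrative values from the slide, not requirements, and the mail-user address is left as a placeholder:

```shell
#!/bin/sh
#SBATCH --job-name=MY_Test_Job
#SBATCH --mail-user=<email>       # replace with your address
#SBATCH --mail-type=ALL           # notify on BEGIN, END, FAIL, REQUEUE
#SBATCH --time=7-0                # 7 days, 0 hours
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16

echo "running on $SLURM_NTASKS cores"
```

A script like this is submitted with sbatch, e.g. `sbatch MY_Test_Job.submit`.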

12
● Request memory for your job
  #SBATCH --mem-per-cpu=128
  # Sets the memory requirement for the job in MB. Your job will be allocated exclusive access to that amount of RAM; if it exceeds that amount, Slurm will kill it.
● Set a pattern for the output file
  #SBATCH --output=
  # The filename pattern may contain one or more replacement symbols: a percent sign "%" followed by a letter (e.g. %J).
  # Supported replacement symbols include:
  #   %J  Job allocation number
  #   %N  Main node name
● Submit an interactive job
  Submitting an interactive job is done with the command srun:
  $ srun --pty $SHELL
  or, to allocate 4 cores per node:
  $ srun --nodes=1 --ntasks-per-node=4 --mem-per-cpu=1024 --pty $SHELL

13 Exercise - Submit hello world job

14 Globus Connect
Use Globus Connect to:
● Transfer files between HCC clusters (Tusker, Crane, & Sandhills)
● Transfer files between your laptop/PC and HCC clusters
● Share files with colleagues

15 Getting Started (HCC-DOCS full instructions)
● Sign up for a Globus account
● Install Globus Connect Personal on your computer
● Activate HCC endpoint(s): hcc#tusker, hcc#crane, hcc#sandhills
● Log in to your Globus account (online) and start making transfers
Important:
/home is read-only via Globus Connect (transfer from /home, but not to /home).
/work is readable/writable via Globus Connect (you can transfer files to and from /work).

16 Exercise: Transfer Files with Globus
Option 1: Transfer the file with Globus
1. Download the example file to your laptop: matlab_demo.tar.gz
2. Transfer the file from your laptop to your Crane work directory using Globus
3. Unpack the tar archive on Crane:
   cd $WORK
   tar -xzf matlab_demo.tar.gz
Option 2: Copy the file into your work directory
1. Log in to Crane
2. Copy the file to work with the following command:
   cp -r /work/demo/shared/matlab_demo $WORK
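The unpack step from Option 1 can be rehearsed anywhere. Since matlab_demo.tar.gz itself lives on the HCC systems, this sketch first builds a small stand-in archive with the same name:

```shell
# Build a stand-in archive (on the real cluster you would already have
# matlab_demo.tar.gz in $WORK after the Globus transfer)
mkdir -p matlab_demo
echo "demo" > matlab_demo/readme.txt
tar -czf matlab_demo.tar.gz matlab_demo
rm -r matlab_demo

# Unpack it, exactly as on the slide:
tar -xzf matlab_demo.tar.gz
ls matlab_demo    # the extracted directory reappears
```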

17 Exercise - Submit MATLAB Job
Invert a large (10^4 x 10^4) random square matrix:
● Using only 1 compute core
● Using 10 cores (open a matlabpool with 10 workers)
● Compare times
Then use a Job Array to invert 5 (or more!) matrices at once.
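A job array like the one mentioned above is requested with SLURM's --array directive. The sketch below is illustrative only; the job name and echoed message are placeholders, not part of the workshop's demo files:

```shell
#!/bin/sh
#SBATCH --job-name=invert_array
#SBATCH --time=00:30:00
#SBATCH --array=1-5                 # run 5 independent copies of this job
#SBATCH --output=invert.%A_%a.out   # %A = array job ID, %a = array task index

# Each array task sees its own index in $SLURM_ARRAY_TASK_ID,
# so the same script can pick a different matrix per task.
echo "inverting matrix number $SLURM_ARRAY_TASK_ID"
```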

18 Exercise - Submit MATLAB Job
1. Examine the contents of the Matlab and Slurm scripts.
2. Copy the demo files to $WORK.
3. Submit the first job.
4. Edit the Slurm submit script (change the number of tasks) and re-submit.
5. Examine the submit script for the job array, then submit it.
6. Monitor the job.
7. Compare output files (you can also use 'cat' or 'tail').
8. Examine the job array output.
