Introduction to HPCC at MSU


1 Introduction to HPCC at MSU
Chun-Min Chang, Research Consultant, Institute for Cyber-Enabled Research. Download this presentation:

2 How this workshop works
We are going to cover the basics with hands-on examples. Exercises are denoted by the following icon in this presentation. Bold italic marks commands that, in most cases, you are expected to type in your terminal.

3 Green and Red Sticky
Use the sticky notes provided to help me help you.
- No sticky = I am working
- Green = I am done and ready to move on (yea!)
- Red = I am stuck and need more time and/or some help
If you have not followed the set-up instructions, please do so now.

4 Agenda
Introduction: iCER & the MSU HPC system
How to Use the HPC: get on, load your supplies, work on board
Available Services
Summary: terms, commands, websites

5 iCER Overview
What does iCER stand for? Institute for Cyber-Enabled Research. Established in 2009 to encourage and support the application of advanced computing resources and techniques by MSU researchers.
Mission: reducing the "mean time to science" (latency and throughput). iCER's mission is to help researchers with the computational components of their research. Note that iCER resources support the "computational XXX" of the research: it could be modeling, simulation, data analysis, data processing, …

6 iCER Overview
iCER is a research unit at MSU. We:
Maintain the university's supercomputer
Organize training/workshops
Provide 1-on-1 consulting
Help with grant proposals
I would like to take this opportunity to advertise iCER. It would be nice if every one of you spread this information to the broader research community. We will have a new machine installed and in operation next year.

7 Funding From…
The Office of the Vice President for Research and Graduate Studies (VPRGS), the College of Engineering, the College of Natural Science, and the College of Social Science. This allows us to provide services and resources for FREE!!! If one day you were told you could ride the train for free thanks to energy efficiency, wouldn't you be happy?

8 Our Fleet
Now let's take a look at what the HPCC offers to support research.

9 Why use the HPCC? (Latency and Throughput)
Your own computer is too slow.
You have large amounts of data to process (hundreds of gigabytes).
Your program requires massive amounts of memory (64 GB to 6 TB).
Your work can be done in parallel.
You need specialized software (in a Linux environment).
You need licensed software (Stata, MATLAB, Gaussian, etc.).
Your work takes a long time (days, weeks).
You want to collaborate using shared programs and data.
Your software can take advantage of accelerator cards (graphics processors).

10 HPC vs PC
Each computing node in a cluster is a single computer.

                      Your Computer       2014 Cluster          2016 Cluster
Processors            1                   2 per node            2 per node
Cores                 4-8 per processor   20 per node           28 per node
Speed                 GHz                 2.5 GHz               2.4 GHz
Connection            Campus Ethernet,    "InfiniBand",         "InfiniBand",
                      1,000 Mbit/sec      50,000 Mbit/sec       50,000 Mbit/sec
Computers (nodes)     1 => 8 cores        223 => 4,676 cores    367 => 10,276 cores
Users                 1                   ~1,200                ~1,200
Schedule              On demand           24/7, queue           24/7, queue

High performance => running work in parallel, for long periods of time.

11 MSU HPC System
Large-memory nodes (up to 6 TB!)
GPU-accelerated cluster (K20, K80)
Phi-accelerated cluster (5110P)
Over 670 nodes, computing cores
2 PB of high-speed parallel scratch file space
50 GB replicated file spaces
Access to a large open-source software stack and specialized bioinformatics VMs
What does "shared" mean? Managed/queued by a scheduler; home space vs. public space; access control; limited power; efficiency; caring for each other. We will talk about how to use the shared resource later in the lecture.
Shared. Free!!! (& nodes coming in 2016!!!)

12 Nodes & Clusters
A "node" is a single computer, usually with a few processors and many cores. Each node has its own host name/address, like 'dev-intel14'.
A "cluster" is composed of dozens (or hundreds) of nodes, tied together with a high-speed network.
Types of nodes:
gateway : the computer available to the outside world for you to log in to; a doorway only, not meant for running programs. From here, one can connect to other nodes inside the HPCC network.
dev-node : a computer you choose and log in to for setup, development, compiling, and testing your programs, using a configuration similar to the cluster nodes for the most accurate testing. Dev-nodes include dev-intel10, dev-intel14, and dev-intel14-k20 (which includes a K20 GPU add-in card).
compute-nodes : run the actual computation; accessible via the scheduler/queue; you can't log in directly.
eval : unique nodes purchased to evaluate a specific kind of hardware prior to a cluster investment.
gpu, phi : nodes within a cluster that have special-purpose hardware. 'Fat nodes' have large RAM.

13 HPCC System

14 Cluster Resources (for Computing Nodes)

Year  Name     Description                  ppn  Memory   Nodes  Total Cores
2016  intel16  Intel Xeon E v4 (2.4 GHz)    28   128GB+   442    12,448
2011  intel11  Intel Xeon E (2.67 GHz)      32   512GB    2      64
                                                 1TB      1
                                                 2TB             128
2014  intel14  Intel Xeon E v2 (2.5 GHz)    20   64GB            2,560
                                                 256GB    23     460
               2 NVIDIA K20 GPUs                 128GB    39     780
               2 Xeon Phi 5110P                           27     540
               Large Memory                      1-6 TB   6      336
                                                 Total:   670    17,348

This table gives some details of the machine architecture. At this point, our users should know that we have clusters, nodes, cores, and memory. How are these resources shared?

15 Available Software
Compilers, debuggers, and profilers: Intel compilers, IMPI, OpenMPI, MVAPICH, TotalView, DDD, DDT…
Libraries: BLAS, FFTW, LAPACK, MKL, PETSc, Trilinos…
Commercial software: Mathematica, FLUENT, Abaqus, MATLAB…
~2,000 titles. NOTE: this does not include software installed privately under users' own space.
Developers may be familiar with some of these names, but don't worry if none of them ring a bell. You can also install software you made or obtained into your home directory.

16 Online Resources
icer.msu.edu: iCER home
hpcc.msu.edu: HPCC home
wiki.hpcc.msu.edu: HPCC user wiki (documentation and user manual)
Contact HPCC for: reporting system problems, HPC program writing/debugging consultation, help with HPC grant writing, system requests, and other general questions.
For training/workshops and registration details, visit:

17 Agenda
Introduction: iCER & the MSU HPC system
How to Use the HPC: get on, load your supplies, work on board
Available Services
Summary: terms, commands, websites

18 Get an Account
PIs can request accounts (for each group member) at
Each account has access to up to:
50 GB of replicated file space (/mnt/home/userid)
520 processing cores
50 TB of high-speed scratch space (/mnt/scratch/userid)
Also available: a shared group folder upon request.
Getting an account is like getting a ticket: resources are reserved for you, but that does not mean you are on board yet — you still need to get on. This is similar to a partitioned disk on a multi-user system. Home is private, the research group space is shared with group members, and scratch space is not backed up.
Three types of space:
Home space: /mnt/home/$USER
Group space: /mnt/research/<group>
Scratch space: /mnt/scratch/$USER or /mnt/ls15/scratch/groups/<group>
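As a minimal sketch of how those three locations look from the shell (the paths come from the slide; "mygroup" stands in for a real research-group name, and the directories themselves only exist on the HPCC):

```shell
# The three storage areas, held in shell variables for illustration.
# $USER expands to your netid once you are logged in to the HPCC.
HOME_DIR="/mnt/home/$USER"          # private, replicated, 50 GB
GROUP_DIR="/mnt/research/mygroup"   # shared with your group ("mygroup" is a placeholder)
SCRATCH_DIR="/mnt/scratch/$USER"    # high-speed, 50 TB, NOT backed up
echo "home:    $HOME_DIR"
echo "group:   $GROUP_DIR"
echo "scratch: $SCRATCH_DIR"
```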

19 Connecting to the HPCC using ssh
Windows: we recommend MobaXterm as the connection client.
Insert the thumb drive, open the appropriate folder, look for MobaXterm, and install it.
Server: hpcc.msu.edu, or Remote Desktop (rdp.hpcc.msu.edu)
Username = your NetID; password = your NetID password
OS X: open Spotlight from the menu-bar icon (or ⌘ SPACE), type "terminal", and in the terminal type: ssh
For those of you who have an account, you can use it, or use the temporary account provided to connect. Mac users: you should see something like the next page.

20 Our supercomputer is a "remote service"
Log in to a gateway.
Gateway nodes: hpcc.msu.edu, rsync.hpcc.msu.edu
Development nodes: dev-intel16, dev-intel16-k80, dev-intel14-k20, dev-intel14-phi, dev-intel14
Compute nodes

21 Successful connection: HPCC gateway
HPCC message; the hostname is "gateway"; the command prompt is at the bottom. (Note: the file list on the left side is not present in the Mac terminal.)

22 Gateway Nodes
Shared: accessible by anyone with an MSU HPCC account — hundreds of users are on the gateway nodes.
The gateway node is ONLY meant for accessing HPCC dev nodes, NOT for running software or connecting to scratch space or compute nodes.
** DO NOT RUN ANYTHING ON THIS NODE! **
The rsync gateway is mainly for file transfer; it can connect to scratch space but cannot access compute nodes. Go directly to a dev node.

23 Our supercomputer is a "remote service"
ssh to a dev node.
Gateway nodes: hpcc.msu.edu, rsync.hpcc.msu.edu
Development nodes: dev-intel16, dev-intel16-k80, dev-intel14-k20, dev-intel14-phi, dev-intel14
Compute nodes

24 All aboard!
If you're not connected to the HPCC, please do so now. From the gateway, connect to a development node:
ssh dev-intel16-k or ssh dev-intel14-phi
ssh == secure shell (allows you to securely connect to other computers)
Take a look at what you have in your account (ls).
Take a look at what is available on the system (ml).
Jump to another dev node and exit. "Go to any dev node" means ssh to it; you can jump from one to another. How do you get out? "exit". It's like going from door to door without needing to re-check your ticket.

25 MobaXterm (ssh to a dev node)
Demo the MobaXterm window.

26 Development Nodes
Shared: accessible by anyone with an MSU HPCC account.
Meant for testing / short jobs: currently, up to 2 hours of CPU time.
Development nodes are "identical" to the compute nodes in the cluster.
Node names are descriptive (name = feature, # = year). (HPCC Quick Reference Sheet)

27 Compute Nodes
Use them by job submission (reservation).
Dedicated: you request # cores, memory, and walltime (advanced users can also request accelerators, temporary file space, and licenses).
Queuing system: when the resources are available for your job, they are assigned to you.
Two modes:
batch mode: write a script that runs a series of commands (a PBS job script)
interactive mode (qsub -I)

28 Basic navigation commands: ls
Use the command with options to look into your home directory:
ls : list files and directories
Some options for the ls command:
-a : list all files and directories, including hidden ones
-F : append an indicator (one of */=>@|) to entries
-h : print sizes in human-readable format (e.g., 1K, 2.5M, 3.1G)
-l : use a long listing format
-t : sort by modification time
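The options above combine freely; a throwaway directory makes this safe to try anywhere:

```shell
# Make a small directory with a normal and a hidden file, then list it
# with all of the options from the table at once.
mkdir -p ls_demo
touch ls_demo/a.txt ls_demo/.hidden
ls -alhtF ls_demo    # all entries, long format, human sizes, newest first, type markers
```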

29 Basic navigation commands: cd
Use the command with options to try:
cd directory_name : change to the named directory
cd (or cd ~) : change to the home directory
cd .. : change to the parent directory
cd - : change to the previous directory
pwd : display the path of the current directory
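A short round trip through those commands (run from any writable directory):

```shell
# Walk down, up, and back using the cd variants from the table.
mkdir -p cd_demo/sub
cd cd_demo/sub
pwd        # ends in cd_demo/sub
cd ..      # up to cd_demo
cd -       # back to sub ('cd -' also prints where it went)
pwd
```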

30 Agenda
Introduction: iCER & the MSU HPC system
How to Use the HPC: get on, load your supplies, work on board
Available Services
Summary: terms, commands, websites
There is not much in the account yet; let's upload some files and start real research.

31 Old-fashioned: scp source destination
scp == secure copy (SFTP)
Use hostname: rsync.hpcc.msu.edu
Let's try to copy files without fancy equipment: use the scp command inside the ssh window, in the same place you ran ssh. This is simple and straightforward; sometimes you have to do it this way.

32 Transfer files (use hostname: rsync.hpcc.msu.edu)
Load to/from your computer: MobaXterm, FileZilla or Cyberduck (on the thumb drive), mapping drives to the HPCC, and the rsync and sftp commands
Load to/from the HPCC: wget, curl
Q: how do you do drive mapping on a Windows machine?

33 MobaXterm (Sessions -> New session -> SFTP)

34 Transferring Files
SFTP program: download and install Cyberduck
Hostname: rsync.hpcc.msu.edu or rdp.hpcc.msu.edu
Username: your NetID
Password: your NetID password
Port: 22

35 Cyberduck

36 Mapping HPC drives to a campus computer
You can connect your laptop to the HPCC home & research directories. This can be very convenient, and it works on Windows 10! For more information, see:
Determine your file system using the command: df -h ~
Filesystem Size Used Avail Use% Mounted on ufs-10-b.i:/b10/u/home/billspat
Server path:
Mac: smb://ufs-10-b.hpcc.msu.edu/billspat
Windows: \\ufs-10-b.hpcc.msu.edu\billspat
Connect:
Mac: Finder | Connect to Server | enter the server path | NetID and password
Windows: My Computer | Map Network Drive | select a letter
Windows fix: user name = hpcc\netid (same password)

37 File Storage Location
https://wiki.hpcc.msu.edu/display/hpccdocs/HPCC+File+Systems

38 Types of File Storage on the HPCC

39 Summary of MSU HPCC File Systems
GLOBAL
Home directories (your personal space) : /mnt/home/$USER : $HOME
Research space (shared with your working group) : /mnt/research/<group>
Scratch (temporary computational workspace) : /mnt/scratch/$USER, /mnt/ls15/scratch/groups/<group> : $SCRATCH
Software (installed software) : /opt/software : $PATH
LOCAL
Operating system (local disk; don't use these) : /etc, /usr/lib, /usr/bin
Temp or local (fast, within each single node only; some programs use /tmp) : /tmp or /mnt/local

40 File Systems and symbolic links
Files can have aliases or shortcuts, so one file or folder can be accessed multiple ways. E.g., /mnt/scratch is actually a symbolic link to /mnt/ls15/scratch/users.
Try this:
ls -l /mnt/scratch
cd /mnt/scratch/$USER
pwd
What do you see? Note that "ls15" stands for "Lustre Scratch 2015", meaning the Lustre scratch file system installed in 2015.
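The same aliasing idea can be reproduced locally with a symbolic link (the directory names here are made up for the demo):

```shell
# One path becomes an alias for another, just like /mnt/scratch on the HPCC.
mkdir -p real_dir
rm -f link_dir
ln -s real_dir link_dir
ls -ld link_dir      # the listing shows: link_dir -> real_dir
```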

41 Private and Shared files
Home directories are private; research folders are shared by the group.
cd ~
ls -al
# -rw billspat staff-np Apr 9 10:35 myfiles.sam
Group example (you may not have a research group):
ls -al /mnt/research/holekamplab   # this won't work for you
-rw-rw-r-- 1 billspat holekamplab Apr 9 10:35 eg1.sam
The difference is the read and write (rw) permission for the group.
# /mnt/scratch/<netid> may be readable by others on the system unless you make it private
To share a file using your scratch directory:
touch ~/testfile
cp ~/testfile /mnt/scratch/<netid>/testfile
chmod g+r /mnt/scratch/<netid>/testfile
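The permission change in that recipe can be tried on a local throwaway file (no scratch space needed):

```shell
# Start from a fully private file, then grant group read, as in the slide.
touch shared_file
chmod u=rw,go= shared_file    # private: only the owner can read/write
chmod g+r shared_file         # now the group may read it too
ls -l shared_file             # permissions show as -rw-r-----
```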

42 Copying files to/from the HPCC
You can copy files from the system to your laptop with secure FTP (sftp) or secure copy (scp). Example:
1. Create a file to copy; please type:
cd ~/class
echo "Hello from $USER" > greeting.txt
echo "Created on $(date)" >> greeting.txt
cat greeting.txt
# make sure you are in the right directory with the pwd command; it's lower case
2. Exit from the HPCC: exit; exit
3. In your terminal or MobaXterm, issue the following command (scp => secure copy):
scp testfile.txt
open testfile.txt
Windows: MobaXterm connects automatically; the files are listed on the side.
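One detail worth calling out in the echo lines above: quoting decides whether $USER and the date actually expand. A quick local demonstration:

```shell
# Single quotes keep the text literal; double quotes allow expansion.
echo 'Hello from $USER'      # prints the literal text: Hello from $USER
echo "Hello from $USER"      # expands $USER to your login name
echo "Created on $(date)"    # $(...) runs the date command inline
```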

43 Great job! Let’s take a 5 min break.

44 Module System
To maximize the different types of software and system configurations available to users, the HPCC uses a module system to set paths and libraries.
Key commands:
module list : list currently loaded modules
module load modulename : load a module
module unload modulename : unload a module
module spider keyword : search modules for a keyword
module show modulename : show what is changed by a module
module purge : unload all modules

45 Using Modules
module list      # show modules currently loaded in your profile
module purge     # clear all modules
module list      # none loaded
# which version of Python do you need?
module spider Python
module spider Python/2.7.2
# after purging (no modules), find the path to Python with the Linux "which" command
which python; python --version
module load python
which python

46 Some software requires other base programs to be loaded
module purge           # remove all modules
module load R/3.1.1    # … error: why? base modules are not loaded
# to find out what is needed, use the spider command for detailed output
module spider R/3.1.1
module load gnu
module load openmpi

47 Default modules re-load every time you connect
module purge              # remove all modules
module list
exit                      # return to gateway
ssh dev-intel10           # … what do you see? the default modules
module load Mathematica; module list
ssh dev-intel14
You need to load all the modules you need in every session or job, or look into auto-loading modules using your profile (covered later).

48 Powertools
Powertools is a collection of software tools and examples that helps researchers better utilize High Performance Computing (HPC) systems.
module load powertools
quota            # list your home-directory disk usage
poweruser        # set up your account to load powertools by default
priority_status  # IF part of a buy-in group: shows the status of the group's buy-in nodes
userinfo <$USER> # get details of a user (from the public MSU directory)

49 Load supplies from the HPCC
Please load software modules; type (in your terminal):
module load powertools
getexample intro2hpcc
See the changes before and after module load. This will copy some example files (for this workshop) to your directory.
Exercise: try to find the file 'youfoundit.txt' in the "hidden" directory.
Let's do an exercise: find a place in your home directory where you won't make a mess if something goes wrong, then see what you get.

50 getexample
You can obtain a lot of examples through getexample; take advantage of it! See something familiar? For example, NAMD? MATLAB? ……

51 Useful Powertools commands
getexample : download user examples
powertools : display a list of current powertools or a long description
licensecheck : check software licenses
avail : show currently available node resources
For more information, refer to this webpage (HPCC Powertools).

52 Short Exercise
Unload all modules, then load these modules: GNU, powertools.
Check which modules are loaded.
Several versions of MATLAB are installed on the HPC. Find which versions are available, then load the latest one.
(What commands did you use to accomplish the above?)

53 getexample
If you have not loaded powertools, please do so now.
Download the helloworld example using getexample.
Check what you downloaded. What is the biggest file?

54 Using the MSU HPCC 'getexample' tool
We will test using a simple program from the HPCC examples. After connecting to the HPCC, please type:
ssh dev-intel16          # connect to a dev node
mkdir class              # create a directory
cd class                 # change directory
module load powertools   # HPCC scripts
getexample               # list all examples
getexample helloworld    # copy one example
cd helloworld
ls                       # list files
cat README               # show the README file
cat hello.c              # show the C program
cat hello.qsub           # show the qsub script

55 Compile, Test, and Queue
The README file has instructions for compiling and running:
gcc hello.c -o hello   # compile and output to a new program, 'hello'
./hello                # test-run this hello program. Does it work?
qsub hello.qsub        # submit to the queue to run on the cluster
This 'job' was assigned a job id and put into the 'queue'. Now we wait for it to be scheduled and run, and for the output files to be created...

56 Agenda
Introduction: iCER & the MSU HPC system
How to Use the HPC: get on, load your supplies, work on board
Available Services
Summary: terms, commands, websites

57 CLI vs. GUI
CLI: Command Line Interface. So far, we have used the CLI.
GUI: Graphical User Interface.

58 Run MATLAB on the HPCC
Run MATLAB in command-line mode.
Can you run MATLAB in GUI mode? If you have problems opening the GUI, you may need some more tools.

59 What is X11, and what is needed for X11?
A method for running a Graphical User Interface (GUI) across a network connection (SSH X11).

60 What is needed for X11?
An X11 server running on your personal computer
An SSH connection with X11 enabled
A fast network connection (preferably on campus)

61 Run X11 on Windows
Install MobaXterm; in MobaXterm: ssh -X
Or run remote desktop on rdp.hpcc.msu.edu

62 Run X11 on Mac (OS X)
Install XQuartz from the iCER thumb drive
Run XQuartz
Quit the terminal, then run the terminal again
ssh -X
Or download remote desktop from the App Store (free) and run it against rdp.hpcc.msu.edu

63 Test the GUI using X11
With X11 running, try one of the following commands:
xeyes
and/or
firefox

64 Batch vs. Interactive
Send a job to the scheduler and run it in the batch queue.

65 Running Jobs on the HPC
The development (dev) nodes are used to compile, test, and debug programs.
Two ways to run jobs:
submission job scripts
interactively
Submission scripts are used to run (heavy/many) jobs on the cluster.

66 Development Nodes

name             Processors                                 Cores  Memory  Accelerators
dev-intel14      Two 2.5 GHz 10-core Intel Xeon E5-2670v2   20     256GB   none
dev-intel14-k20  Two 2.5 GHz 10-core Intel Xeon E5-2670v2   20     128GB   2x Nvidia Tesla K20 GPUs
dev-intel14-phi  Two 2.5 GHz 10-core Intel Xeon E5-2670v2   20     128GB   2x Xeon Phi 5110P accelerators
dev-intel16      Two 2.4 GHz 14-core Intel Xeon E5-2680v4   28             none
dev-intel16-k80  Two 2.4 GHz 14-core Intel Xeon E5-2680v4   28             4x Nvidia Tesla K80 GPUs

67 Advantages of running interactively
You do not need to write a submission script.
You do not need to wait in the queue.
You can provide input to and get feedback from your programs as they are running.

68 Disadvantages of running interactively
All the resources on development nodes are shared between all users.
Any single process is limited to 2 hours of CPU time; a process that runs longer than 2 hours will be killed.
Programs that over-utilize the resources on a development node (preventing others from using the system) can be killed without warning.

69 Programs that can use a GUI
MATLAB
Mathematica
TotalView : a C/C++/Fortran debugger, especially for multiple processors
DDD (Data Display Debugger) : a graphical front-end for command-line debuggers
Etc., etc., and etc.

70 Need to run jobs with more resources?
We've focused so far on working interactively (command line). Need dedicated resources for a computationally intensive task?
List resource requests in a job script
List commands in the job script
Submit the job to the scheduler
Check the results when the job is finished

71 The Queue
The system is busy and used by many people:
you may want to run one program for a long time
you may want to run many programs all at once on multiple nodes or cores
To share, we have a 'resource manager' that takes in requests to run programs and allocates resources to them. You write a special script that tells the resource manager what you need and how to run your program, and 'submit to the queue' (get in line) with the qsub command.
We submitted our job to the queue using the command: qsub hello.qsub
The 'qsub file' had the instructions to the scheduler on how to run it.

72 Submission of a Job to the Queue
A submission script is a mini program for the scheduler that has:
a list of required resources
all command-line instructions needed to run the computation, including loading all modules
the ability to identify script arguments
collection of diagnostic output (qstat -f)
The script tells the scheduler how to run your program on the cluster. Unless you specify otherwise, your program can run on any one of the 7,500 cores available.

73 Submission Script
A list of required resources
All command-line instructions needed to run the computation
We often hear "qsub script".

74 Typical Submission Script
#!/bin/bash -login
### define resources needed:
#PBS -l walltime=00:01:00
#PBS -l nodes=5:ppn=1
#PBS -l mem=2gb
### you can give your job a name for easier identification
#PBS -N name_of_job
### reserve a license for commercial software
#PBS -W x=gres:MATLAB
### load necessary modules, e.g.
module load MATLAB
### change to the working directory where your code is located
cd ${PBS_O_WORKDIR}
### call your executable
matlab -nodisplay -r "test(10)"
### record job execution information
qstat -f ${PBS_JOBID}
Each line is worth an explanation.

75 Submit a job
Go to the helloworld directory: cd ~/helloworld
Create a simple submission script: nano hello.qsub
See the next slide to edit the file…

76 hello.qsub
#!/bin/bash -login
#PBS -l walltime=00:05:00
#PBS -l nodes=1:ppn=1
cd ${PBS_O_WORKDIR}
./hello
qstat -f ${PBS_JOBID}

77 Details about the job script
"#" normally starts a comment, except for the special forms "#!" and "#PBS":
#!/bin/bash : system command (which interpreter to use)
#PBS : instructions to the scheduler
#PBS -l nodes=1:ppn=1
#PBS -l walltime=hh:mm:ss
#PBS -l mem=2gb (!!! not per core but total)
(Scheduling Jobs)
NOTE: any #PBS lines that appear after the first non-PBS command are skipped.

78 Submitting and monitoring
Once the job script is created, submit the file to the queue: qsub hello.qsub
Record the job id number (######) and wait around 30 seconds.
Check jobs in the queue with: qstat -u netid and showq -u netid
What about using the commands qs and sq?
Status of a job: qstat -f jobid
Delete a job in the queue: qdel jobid

79 Common Commands
qsub "submission script" : submit a job to the queue
qdel "job id" : delete a job from the queue
showq -u "user id", qstat -u "user id" : show the current job queue of the user
checkjob -v "job id", qstat -f "job id" : check the status of the current job
showstart -e all "job id" : show the estimated start time of the job

80 Scheduling Priorities
NOT first come, first served!
Jobs that use more resources get higher priority (because these are hard to schedule).
Smaller jobs are backfilled to fit in the holes created by the bigger jobs.
Eligible jobs acquire more priority as they sit in the queue.
Jobs can be in three basic states (check states with showq): blocked, eligible, or running.
Jobs can also be in BatchHold.

81 Scheduling Tips
Requesting more resources does not make a job run faster unless you are running a parallel program.
The more resources you request, the "harder" it is for the job manager to schedule them.
First time: over-estimate how many resources you need, and then adjust appropriately. (qstat -f ${PBS_JOBID} at the bottom of your script will give you resource information when the job is done.)
Job finished time = job waiting time (queue) + job running time (run)

82 Advanced Scheduling Tips
Resources:
A large proportion of the cluster can only run jobs of four hours or less.
Most nodes have at least 24 GB of memory; half have at least 64 GB; few have more than 64 GB.
Maximum running time of a job: 7 days (168 hours).
Maximum memory that can be requested: 6 TB.
Scheduling limits:
520 cores in use on the HPCC at a time
15 eligible jobs at a time
1,000 submitted jobs

83 Job Completion
By default, the job automatically generates two files when it completes:
Standard output, e.g., jobname.o
Standard error, e.g., jobname.e
You can combine these files by adding the join option in your submission script: #PBS -j oe
You can change the output file name: #PBS -o /mnt/home/netid/myoutputfile.txt

84 Other Job Properties
Resources (-l): walltime, memory, nodes, processors, network, etc.
#PBS -l feature=gpgpu,gbe
#PBS -l nodes=2:ppn=8:gpus=2
#PBS -l mem=16gb
Email address (-M): #PBS -M
Email options (-m): #PBS -m abe
Many others; see the wiki:

85 Advanced Environment Variables
The scheduler adds a number of environment variables that you can use in your script:
PBS_JOBID : the job number for the current job
PBS_O_WORKDIR : the original working directory from which the job was submitted
Example: mkdir ${PBS_O_WORKDIR}/${PBS_JOBID}
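That mkdir example can be dry-run off the cluster by faking the two variables (on the HPCC, the scheduler sets them for you; the job id here is invented):

```shell
# Simulate the scheduler's environment, then create a per-job directory.
PBS_JOBID=12345                 # placeholder: a real job id comes from qsub
PBS_O_WORKDIR=$(pwd)            # on the cluster: where you ran qsub
mkdir -p "${PBS_O_WORKDIR}/${PBS_JOBID}"
ls -d "${PBS_O_WORKDIR}/${PBS_JOBID}"
```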

86 Agenda
Introduction: iCER & the MSU HPC system
How to Use the HPC: get on, load your supplies, work on board
Available Services
Summary: terms, commands, websites

87 Getting Help
HPCC user wiki
Contact iCER
Training information
Linux command manual: an exhaustive list
A useful cheat sheet
http://explainshell.com/ : explains a command given to you

88 Agenda
Introduction: iCER & the MSU HPC system
How to Use the HPC: get on, load your supplies, work on board
Available Services
Summary: terms, commands, websites

89 Terms
iCER, HPC
Account: login, directory (folder)
Node: laptop, gateway, dev-nodes, compute-nodes
Cores: nodes=2:ppn=4
Memory: total, per node, mem=2gb
Storage: home, research, scratch, disk
Module: list, load, spider, unload, purge, show, ……
Files: software, data, job script
Path:
Queue: qsub, qstat, qdel
Jobs: JobID

90 Agenda
Introduction: iCER & the MSU HPC system
How to Use the HPC: get on, load your supplies, work on board
Available Services
Summary: terms, commands, websites

91 Commands
ssh
cd, ls, pwd
module sub_command
rm, mv, cp, scp
more, less, cat
nano, vi
qsub, qstat, qdel, showq
echo, export
mkdir, tar, unzip, zip
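A few of the file-handling commands from the list, chained into one safe local exercise:

```shell
# Create, copy, rename, archive, and restore a small project directory.
mkdir -p proj
echo "notes" > proj/notes.txt
cp proj/notes.txt proj/copy.txt       # cp: copy a file
mv proj/copy.txt proj/renamed.txt     # mv: rename (or move) it
tar -czf proj.tar.gz proj             # tar: pack and compress the directory
rm -r proj                            # rm: remove the original
tar -xzf proj.tar.gz                  # tar -x: unpack the archive
cat proj/notes.txt                    # prints: notes
```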

92 Agenda
Introduction: iCER & the MSU HPC system
How to Use the HPC: get on, load your supplies, work on board
Available Services
Summary: terms, commands, websites

93 Websites
Documentation and user manual: http://wiki.hpcc.msu.edu
Contact HPCC and iCER staff for:
reporting system problems
HPC program writing/debugging consultation
help with HPC grant writing
system requests
other general questions
Primary form of contact: the HPCC request-tracking system
Open office hours: 1-2 pm Monday/Thursday (PBS 1440)

94 Q & A

95 Thanks!

