Introduction to Running CFX on U2
 Introduction to the U2 Cluster
 Getting Help
 Hardware Resources
 Software Resources
 Computing Environment
 Data Storage
 Login and File Transfer
 UBVPN
 Login and Logout
 More about X-11 Display
 File Transfer

Introduction to Running CFX on U2
 Unix Commands
 Short List of Basic Unix Commands
 Reference Card
 Paths and Using Modules
 Starting the CFX Solver
 Launching CFX
 Monitoring
 Running CFX on the Cluster
 PBS Batch Scheduler
 Interactive Jobs
 Batch Jobs

Information and Getting Help
 Getting help:
 CCR uses an email problem ticket system. Users send their questions and descriptions of problems to the CCR help email address.
 The technical staff receives the email and responds to the user, usually within one business day.
 This system allows staff to monitor the ticket and contribute their expertise to the problem.
 CCR website:

Cluster Computing  The u2 cluster is the major computational platform of the Center for Computational Research.  Login (front-end) and cluster machines run the Linux operating system.  Requires a CCR account.  Accessible from the UB domain.  The login machine is u2.ccr.buffalo.edu  Compute nodes are not accessible from outside the cluster.  Traditional UNIX style command line interface.  A few basic commands are necessary.

Cluster Computing  The u2 cluster consists of 1056 dual processor DELL SC1425 compute nodes.  The compute nodes have Intel Xeon processors.  Most of the cluster machines are 3.2 GHz with 2 GB of memory.  There 64 compute nodes with 4 GB of memory and 32 with 8 GB.  All nodes are connected to a gigabit ethernet network.  756 nodes are also connected the Myrinet, a high speed fibre network.

Cluster Computing

Data Storage  Home directory:  /san/user/UBITusername/u2  The default user quota for a home directory is 2GB. Users requiring more space should contact the CCR staff.  Data in home directories are backed up. CCR retains data backups for one month.  Projects directories:  /san/projects[1-3]/research-group-name  UB faculty can request additional disk space for the use by the members of the research group.  The default group quota for a project directory is 100GB.  Data in project directories is NOT backed up by default.

Data Storage  Scratch spaces are available for TEMPORARY use by jobs running on the cluster.  /san/scratch provides 2TB of space. Accessible from the front-end and all compute nodes.  /ibrix/scratch provides 25TB of high performance storage. Applications with high IO and that share data files benefit the most from using IBRIX. Accessible from the front-end and all compute nodes.  /scratch provides a minimum of 60GB of storage. The front-end and each computer nodes has local scratch space. This space is accessible from that machine only. Applications with high IO and that do not share data files benefit the most from using local scratch. Jobs must copy files to and from local scratch.

Software  CCR provides a wide variety of scientific and visualization software.  Some examples: BLAST, MrBayes, iNquiry, WebMO, ADF, GAMESS, TurboMole, CFX, Star-CD, Espresso, IDL, TecPlot, and Totalview.  The CCR website provides a complete listing of application software, as well as compilers and numerical libraries.  The GNU, INTEL, and PGI compilers are available on the U2 cluster.  A version of MPI (MPICH) is available for each compiler, and network.  Note: U2 has two networks: gigabit ethernet and Myrinet.  Myrinet performs at twice the speed of gigabit ethernet.

Accessing the U2 Cluster
 The u2 cluster front-end is accessible from the UB domain (.buffalo.edu)
 Use VPN for access from outside the University.
 The UBIT website provides a VPN client for Linux, Mac, and Windows machines.
 The VPN client connects the machine to the UB domain, from which u2 can be accessed.
 Telnet access is not permitted.

Login and X-Display
 LINUX/UNIX workstation:
 ssh u2.ccr.buffalo.edu
 The –X or –Y flags will enable an X-Display from u2 to the workstation:
 ssh –X u2.ccr.buffalo.edu
 Windows workstation:
 Download and install the X-Win32 client from ubit.buffalo.edu/software/win/XWin32
 Use the configuration to set up ssh to u2.
 Set the command to xterm -ls
 Logout: logout or exit in the login window.
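A quick way to confirm that X forwarding works after logging in (UBITusername is a placeholder; xclock is just a convenient test client):

    ssh -X UBITusername@u2.ccr.buffalo.edu
    xclock &    # a clock window should appear on your local display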

File Transfer  FileZilla is a available of Windows, Linux and MAC machines.  Check the UBIT software pages.  This is a drag and drop graphical interface.  Please use port 22 for secure file transfer.  Command line file transfer for Unix.  sftp u2.ccr.buffalo.edu put, get, mput and mget are used to uploaded and download data files. The wildcard “*” can be used with mput and mget.  scp filename u2.ccr.buffalo.edu:filename

Basic Unix Commands  Using the U2 cluster requires knowledge of some basic UNIX commands.  The CCR Reference Card provides a list of the basic commands.  The Reference Card is a pdf linked to  These will get you started, then you can learn more commands as you go.  List files: ls ls –la(long listing that shows all files)

Basic Unix Commands  View files: cat filename(displays file to screen) more filename(displays file with page breaks)  Change directory: cd directory-pathname cd(go to home directory) cd..(go back one level)  Show directory pathname pwd(shows current directory pathname)  Copy files and directories cp old-file new-file cp –R old-directory new-directory

Basic Unix Commands  Move files and directories: mv old-file new-file mv old-directory new-directory NOTE: move is a copy and remove  Create a directory: mkdir new-directory  remove files and directories: rm filename rm –R directory(removes directory and contents) rmdir directory (directory must be empty) Note: be careful when using the wildcard “*”  More about a command: man command

Basic Unix Commands  View files and directory permissions using ls command. ls –l  Permissions have the following format: -rwxrwxrwx … filename –user group other  Change permissions of files and directories using the chmod command. Arguments for chmod are ugo+-rxw –user group other read write execute chmod g+r filename –add read privilege for group chmod –R o-rwx directory-name –Removes read, write and execute privileges from the directory and its contents.

Basic Unix Commands  There are a number of editors available:  emacs, vi, nano, pico Emacs will default to a GUI if logged in with X-DISPLAY enabled.  Files edited on Windows PCs may have embedded characters that can create runtime problems.  Check the type of the file: file filename  Convert DOS file to Unix. This will remove the Windows/DOS characters. dos2unix –n old-file new-file

Modules  Modules are available to set variables and paths for application software, communication protocols, compilers and numerical libraries.  module avail(list all available modules)  module load module-name(loads a module) Updates PATH variable with path of application.  module unload module-name (unloads a module) Removes path of application from the PATH variable.  module list (list loaded modules)  module show module-name Show what the module sets.  Modules can be loaded in the user’s.bashrc file.

Starting the CFX Solver  Create a subdirectory  mkdir bluntbody  Change directory to bluntbody  cd bluntbody  Copy the Blunt Body.def file to the bluntbody directory  cp /util/cfx- ub/CFX110/ansys_inc/v110/CFX/examples/Blu ntBody.def.  ls -l

Starting the CFX Solver  Load the CFX module  module load cfx  Launch CFX: cfx5  The CFX solver GUI will display on the workstation  Launch with detach from command line  cfx5 &  Click on CFX-Solver 11.0

Starting the CFX Solver

 Click on File
 Select Define Run
 Select the BluntBody.def file
 Run mode is serial
 In another window on u2, start top to monitor the memory and CPU usage
 Click Start Run in the CFX Define Run window
 After the solver has completed, click NO for post-processing

Starting the CFX Solver

Running on the U2 Cluster
 The compute machines are assigned to user jobs by the PBS (Portable Batch System) scheduler.
 The qsub command submits jobs to the scheduler.
 Interactive jobs depend on the connection from the workstation to u2.
 If the workstation is shut down or disconnected from the network, then the job will terminate.

PBS Execution Model
 PBS executes a login as the user on the master host, and then proceeds according to one of two modes, depending on how the user requested that the job be run.
 Script - the user executes the command:
 qsub [options] job-script
 where job-script is a standard UNIX shell script containing some PBS directives along with the commands that the user wishes to run (examples later).
 Interactive - the user executes the command:
 qsub [options] –I
 The job is run “interactively,” in the sense that standard output and standard error are connected to the terminal session of the initiating qsub command. Note that the job is still scheduled and run as any other batch job (so you can end up waiting a while for your prompt to come back “inside” your batch job).

Execution Model Schematic
 (Diagram: qsub myscript sends the job to pbs_server; the scheduler decides whether to run it; once scheduled, $PBS_NODEFILE lists the assigned nodes node1 … nodeN, and the job executes as prologue, $USER login, myscript, epilogue.)

PBS Queues  The PBS queues defined for the U2 cluster are CCR and debug.  The CCR queue is the default  The debug queue can be requested by the user.  Used to test applications.  qstat –q  Shows queues defined for the scheduler.  Availability of the queues.  qmgr  Shows details of the queues and scheduler.

PBS Queues
 Do you even need to specify a queue?
 You probably don’t need (and may not even be able) to specify a specific queue destination.
 Most of our PBS servers use a routing queue.
 The exception is the debug queue on u2, which requires direct submission. This queue has a certain number of compute nodes set aside for its use during peak times.
 Usually, this queue has 32 compute nodes.
 The queue is always available; however, it has dedicated nodes Monday through Friday, from 9:00am to 5:00pm.
 Use -q debug to specify the debug queue on the u2 cluster.

Batch Scripts - Resources
 The “-l” options are used to request resources for a job.
 Used in batch scripts and interactive jobs.
 -l walltime=01:00:00 sets the wall-clock limit of the batch job.
 Requests a 1 hour wall-clock time limit.
 If the job does not complete before this time limit, then it will be terminated by the scheduler. All tasks will be removed from the nodes.
 -l nodes=8:ppn=2 sets the number of cluster nodes, with optional processors per node.
 Requests 8 nodes with 2 processors per node.
 All the compute nodes in the u2 cluster have 2 processors per node. If you request 1 processor per node, then you may share that node with another job.
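Putting both options together on the command line (myscript is a placeholder for your job script):

    qsub -l nodes=8:ppn=2 -l walltime=01:00:00 myscript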

Environmental Variables  $PBS_O_WORKDIR - directory from which the job was submitted.  By default, a PBS job starts from the user’s $HOME directory.  Note that you can change this default in your.cshrc or.bashrc file.  add the following to your.cshrc file: if ( $?PBS_ENVIRONMENT ) then cd $PBS_O_WORKDIR endif  or this to your.bashrc file: if [ -n "$PBS_ENVIRONMENT" ]; then cd $PBS_O_WORKDIR Fi  In practice, many users change directory to the $PBS_O_WORKDIR directory in their scripts.

Environmental Variables  $PBSTMPDIR - reserved scratch space, local to each host (this is a CCR definition, not part of the PBS package).  This scratch directory is created in /scratch and is unique to the job.  The $PBSTMPDIR is created on every compute node running a particular job.  $PBS_NODEFILE - name of the file containing a list of nodes assigned to the current batch job.  Used to allocate parallel tasks in a cluster environment.

Sample Interactive Job
 Example:
 qsub -I -X -q debug -l nodes=1:ppn=2 -l walltime=01:00:00

Sample Script – Cluster
 Example of a PBS script for the cluster:
 /util/pbs-scripts/pbsCFXu2-sample
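The sample script itself is not reproduced in these slides. A minimal sketch of what a CFX batch script on u2 typically contains; the #PBS directives follow the resource slides above, but the cfx5solve options are illustrative assumptions, not the exact contents of /util/pbs-scripts/pbsCFXu2-sample:

    #!/bin/bash
    #PBS -l nodes=4:ppn=2
    #PBS -l walltime=01:00:00
    #PBS -N bluntbody

    cd $PBS_O_WORKDIR
    module load cfx

    # hand the PBS-assigned host list to the CFX solver (illustrative flags;
    # paste joins the node file into a comma-separated host list)
    cfx5solve -def BluntBody.def -par-dist $(paste -sd, $PBS_NODEFILE)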

Submitting a Batch Job
 Submit the script: qsub pbsCFXu2-sample
 Check all of your jobs: qstat –an –u username
 Check a specific job: qstat –an jobid