© 2011 Pittsburgh Supercomputing Center
XSEDE 2012: Intro To Our Environment
John Urbanic, Pittsburgh Supercomputing Center
July 2012

Our Environment For Today
Your laptops need some kind of ssh client.
kollman.psc.edu is your HPC platform.
– Modest but sufficient. Successfully used for the world's first hands-on OpenACC workshop.
– Built as a real HPC environment, but stripped down and configured for an easy workshop.
– You will be ready for Keeneland.
We will briefly go through the steps to login, edit, compile and run before we get into the real materials.

Getting Connected
From your workstation you can use any ssh client to login to kollman.psc.edu.
Use the generic account and password sheets we are handing out to login.
For editors, we have several options:
– emacs
– vi
– pico/nano: use this if you aren't familiar with the others
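For example, from any command-line ssh client the login looks roughly like this (the account name below is a placeholder; use whatever is on your password sheet):

    laptop$ ssh trainXX@kollman.psc.edu
    trainXX@kollman.psc.edu's password:      (type the password from your sheet)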

Compiling
We will be using standard Fortran and C compilers for this workshop (with PGI's special extensions). They should look familiar.
– pgcc for C
– pgf90 for Fortran
We will be using some special switches and options as we go.
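As a plain, non-accelerated illustration, a single source file compiles like any other compiler invocation (the file names here are made up):

    pgcc  -o hello hello.c      # C
    pgf90 -o hello hello.f90    # Fortran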

Makefiles
We will be providing very basic makefiles for the examples and exercises. They are simple and don't do much – we could simply let you compile files directly:

    pgcc -acc -ta=nvidia exercise_1.c
or
    pgf90 -acc -ta=nvidia exercise_1.f90

This would pretty much cover it for most of what we are doing. However, the makefiles reflect more of the "porting a real project to a GPU using OpenACC" process. So, we start with a normal serial C or Fortran makefile.
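As a rough idea of what that starting point might contain, here is a hypothetical serial C makefile sketch (not the exact file we hand out; recall that the recipe line under the target must begin with a tab):

    CC     = pgcc
    CFLAGS = -fast

    laplace_omp: laplace2d.c
            $(CC) $(CFLAGS) -o laplace_omp laplace2d.c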

Makefiles
You will have to make some minor edits to these makefiles to enable OpenACC features. If you are familiar with makefiles, this should be trivial. If you are not, you are still fine: you might easily be able to guess where to add the options, or you can simply copy the makefile from the corresponding solutions directory if you really don't care.
After you edit your source code (and possibly the makefile), you use the makefile to compile the final executable.

    C:       make
    Fortran: make -f Makefile_f90
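The edit itself usually amounts to a one-line change along these lines (again a sketch; the variable name in your makefile may differ):

    CFLAGS = -fast -acc -ta=nvidia      # append the OpenACC switches to the existing flags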

PBS (Oh no, a batch system)
No, we don't just do it to be control freaks:
– Parallel machines are big 4D jigsaws that we need to fit together.
– If we didn't annoy you with this, other users would annoy you more.
– This will be in place on every serious GPU resource that you don't own.
We have the world's simplest version for you to use.

PBS Commands
Only 3 simple commands:
– qsub to submit a job
– qstat to see how it and others are doing
– qdel to delete a job you don't care about
If you know about PBS, you can easily understand the basic 3-line jobfiles that we are using here. If you don't, you can probably make out what they are doing anyway, and you can simply use them as-is without any loss.
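For reference, a generic 3-line PBS job file looks roughly like this (the resource request is illustrative, not necessarily what our job files ask for):

    #PBS -l nodes=1:ppn=1
    cd $PBS_O_WORKDIR
    ./laplace_acc

The first line requests one node and one processor, the second moves to the directory the job was submitted from (a variable PBS sets for you), and the third runs the executable.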

Running a PBS Job
Here is a typical session of running a job, checking it, deleting it, and running it again.

    urbanic$ qsub test.job
    3254.kollman0.psc.edu

    urbanic$ qstat
    Job id          Name      User     Time Use  S  Queue
    kollman0        qsub.sh   shirts   111:33:5  R  batch
    3251.kollman0   qsub.sh   shirts   09:49:29  R  batch
    3252.kollman0   qsub.sh   shirts   0         Q  batch
    3254.kollman0   test.job  urbanic  0         Q  batch

    urbanic$ qdel 3254

    urbanic$ qstat
    Job id          Name      User     Time Use  S  Queue
    kollman0        qsub.sh   shirts   111:42:5  R  batch
    3251.kollman0   qsub.sh   shirts   09:58:29  R  batch
    3252.kollman0   qsub.sh   shirts   0         Q  batch

    urbanic$ qsub test.job
    3255.kollman0.psc.edu

PBS Returns Your Results
Your job will return its output in a file named with the job name appended by the job number, something like test.job.o3255.

    urbanic$ more test.job.o3255
    Warning: no access to tty (Bad file descriptor).
    Thus no job control in this shell.
    Hello!

There is an annoying warning at the top that you can ignore, followed by the output of your program (just "Hello!" in the above job).

PBS Recap
We have copied all the files you need for the exercises into your student accounts.
1. Edit the source files, and maybe the makefiles, in our exercises
2. Run the provided makefile ("make")
3. qsub the job file ("qsub laplace_acc.job")
4. Wait for your output file ("more laplace_acc.job.o4323")
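Put together, one complete pass looks roughly like this (the job number in the output file name will differ each time):

    urbanic$ make                          # or: make -f Makefile_f90
    urbanic$ qsub laplace_acc.job
    urbanic$ qstat                         # wait until your job finishes
    urbanic$ more laplace_acc.job.o4323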

Our Setup For Today
    /exercises
        test
        exercise_1
        exercise_2
        exercise_3
    /solutions
        exercise_1
        exercise_2
        exercise_3

Our Setup For Today
The included files look something like:

    /exercise_1
        laplace2d.c        C source file
        Makefile           C makefile
        laplace2d.f90      Fortran source file
        Makefile_f90       Fortran makefile
        laplace_omp.job    Serial job file
        laplace_acc.job    GPU job file
        timer.h            Helper file you can ignore

Foreshadowing: we optionally use OpenMP in the serial versions. We will discuss this when we get there.

Our Setup For Today
As you do the exercises you will generate files like these:

    /exercise_1
        laplace_omp               "Serial" executable
        laplace_acc               GPU accelerated executable
        laplace_acc.job.o4452     Output from the accelerated job

That is about it. It will all make sense as we go along, and we can now focus on the fun stuff. But first, one pass through the procedure…

Preliminary Exercise
Let's get the boring stuff out of the way now.
1. Login from your laptop to kollman.psc.edu. Use your password sheet.
2. Make sure you can edit a file. Try "nano" if you don't have a preference. Create any random file.
3. Test your compiler. We will skip the makefile for this basic exercise and just compile directly: cd into /test and try "pgcc test.c" or "pgf90 test.f90".
4. Test the batch system. Run "qsub test.job" and make sure you get back the output you should (it says "Congratulations!") in the returned file.
(These steps are condensed into a command sketch below.)
At any point today, please raise your hand during the exercises if you are stuck. We have people anxious to help.
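Condensed into commands, the whole warm-up looks roughly like this (account name, prompts, and job number are placeholders):

    laptop$ ssh trainXX@kollman.psc.edu
    $ nano somefile.txt           # any editor, any file
    $ cd test                     # the test directory in your exercises area
    $ pgcc test.c                 # or: pgf90 test.f90
    $ qsub test.job
    $ more test.job.oNNNN         # should say "Congratulations!"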