Gaussian Performance Workshop at Jackson State University
The Mississippi Center for Supercomputing Research
October 14, 2005

Presentation transcript:

Gaussian Performance Workshop at Jackson State University
The Mississippi Center for Supercomputing Research
October 14, 2005

Topics
- Estimating Parallel Efficiency with qstat -f
- Emails to Researchers
- Parallel Efficiency Example
- New g03sub

qstat -f Redwood Example

qstat -f 7508 | grep resources
resources_used.cpupercent = 601
resources_used.cput = 96:32:39
resources_used.mem = kb
resources_used.ncpus = 2
resources_used.vmem = kb
resources_used.walltime = 53:15:14

96 / 53 = 1.81 (speedup)
1.81 / 2 = 0.905 (efficiency = 90.5%)
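The arithmetic above can be scripted. Below is a minimal sketch (not part of the original slides) that pulls the same resources_used fields out of qstat -f and prints speedup and efficiency; the job id 7508 is just the example from this slide, and the parsing assumes the PBS output format shown above.

  #!/bin/sh
  # Estimate parallel efficiency from PBS accounting fields,
  # following the method on this slide:
  #   speedup = cput / walltime, efficiency = speedup / ncpus
  jobid=7508    # example job id from the slide
  qstat -f $jobid | grep resources_used | awk -F' = ' '
    /cput/     { split($2, t, ":"); cput = t[1] + t[2]/60 + t[3]/3600 }
    /walltime/ { split($2, t, ":"); wall = t[1] + t[2]/60 + t[3]/3600 }
    /ncpus/    { ncpus = $2 }
    END {
      speedup = cput / wall
      printf "speedup    = %.2f\n", speedup
      printf "efficiency = %.1f%%\n", 100 * speedup / ncpus
    }'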

qstat -f Sweetgum Example

sweetgum> qstat -f 466 | grep resources
resources_used.cpupercent = 98
resources_used.cput = 133:48:31
resources_used.mem = kb
resources_used.ncpus = 4
resources_used.vmem = kb
resources_used.walltime = 136:04:

133 / 136 = 0.98 (speedup)
0.98 / 4 = 0.245 (efficiency = 24.5%)

Emails to Researchers
- Job id 466
- Job id 7508

Mimosa Parallel Efficiency
- Can't use resources_used.cput
- Can't trust resources_used.cpupercent
- Run the same job on 1, 2, and 4 nodes and compare wallclock times
- qstat -f | grep exec_host
- rsh to the nodes and execute "top"
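The last two checks can be combined into a few shell commands. This is only a hedged sketch, not an MCSR script: it assumes PBS's exec_host format (node/cpu entries joined by '+'), assumes rsh access to the compute nodes is permitted, and uses a made-up job id.

  jobid=12345    # hypothetical job id
  nodes=$(qstat -f $jobid | grep exec_host | sed 's/.*= //' | tr '+' '\n' | cut -d/ -f1 | sort -u)
  for n in $nodes; do
    echo "=== $n ==="
    rsh $n top -b -n 1 | head -15    # one batch-mode snapshot of top per node
  done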

Mimosa Example
- Gaussian job took 8 days on 8 nodes
- Same job took 11 days on 4 nodes
- Same job took 13 days on 1 node
- 8-node speedup: 1.5x
- 8-node parallel efficiency: 19%
- 4-node speedup: 1.3x
- 4-node parallel efficiency: 64%

Mimosa Example
- The user saved 5 days by running on 8 nodes instead of 1.
- But if the user had only 8 nodes...
- ...and had 8 similar jobs to run:
- If he runs them as 1-node jobs, he can finish all 8 in 13 days.
- If he runs them as 8-node jobs, it will take 64 days to finish them all.
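To spell out the arithmetic behind those two totals (this note is not part of the original slide): with only 8 nodes, eight 1-node jobs can all run at the same time, so every job finishes within the 13-day single-node runtime; eight 8-node jobs must run one after another, so the last one finishes after 8 x 8 = 64 days.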

newg03sub
- Validates input file memory against the PBS request
- Validates input file %nprocl and %nprocs against the PBS request
- Adds a prefix to the job name based on calculation type
- Names scratch file directories based on the jobid
- Takes an additional input parameter: maximum disk/scratch file size
- Example: /ptmp/jghale/benchmarking/sweetgum/hartree-fock/medium/first/400mb/2proc
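newg03sub is an MCSR-maintained script whose source is not reproduced here; the fragment below is only a hypothetical sketch of the kind of consistency check described in the first two bullets, comparing the %nprocs directive in a Gaussian input file with the number of CPUs requested from PBS. The file name and both variable values are made up for illustration.

  input=job.com        # hypothetical Gaussian input file
  pbs_ncpus=2          # hypothetical value taken from the PBS resource request
  g03_nprocs=$(grep -i '^%nprocs' $input | head -1 | cut -d= -f2)
  if [ -n "$g03_nprocs" ] && [ "$g03_nprocs" -ne "$pbs_ncpus" ]; then
    echo "Warning: %nprocs=$g03_nprocs in $input does not match ncpus=$pbs_ncpus requested from PBS" >&2
  fi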