PBS Job Management and Taskfarming Joachim Wagner 2008-07-24

Outline Why do we need a cluster? Architecture Machines Software Job Management Running jobs Commands PBS job descriptions Taskfarming Plans

Why do we need a cluster? Resource conflicts: waiting for a colleague's job to finish; trouble, e.g. disk full. Medium-size jobs: too big for a desktop PC, too small for ICHEC. Preparation of ICHEC runs. Learning.

Cluster Architecture (diagram): maia.computing.dcu.ie sits between the school network and the cluster's separate network; it provides logins, home directories, installed software and the job queue; the compute nodes are behind it on the separate network.

Installed Software: OpenMPI; SRILM; MaTrEx, Moses, GIZA++; XLE, SICStus; Johnson & Charniak's reranking parser. In progress: LFG AA, incl. function labeller.

PBS Job Management (diagram): the user submits a job description; the job enters the job queue; the job scheduler starts job execution on the nodes. Nodes are allocated exclusively to a job for the duration of the job.

PBS Job Management Commands: qsub myjob.pbs submits a job; the PBS job description is a shell script with #PBS directives (ignored by the shell, see next slide). qstat and qstat -f jobnumber show job status, qdel jobnumber deletes a job, and pbsnodes -a lists all nodes with status and properties.
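A minimal usage sketch of these commands (the job number and hostname in the comments are illustrative, not taken from the original slides):

    qsub myjob.pbs     # submit the job description; prints a job id such as 1234.maia
    qstat              # list queued and running jobs
    qstat -f 1234      # full status of one job (resources, nodes, times)
    qdel 1234          # remove the job from the queue, or kill it if it is running
    pbsnodes -a        # list all nodes with status and properties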

PBS Job Description: number of nodes; CPUs per node; notification on begin, end and abort; maximum runtime (walltime).

Example: Memory-Intensive Job
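The original example is not preserved in this transcript. Below is a minimal sketch of what a memory-intensive job description could look like, assuming the min8GB node property introduced on the next slide and a hypothetical program name:

    #!/bin/bash
    # Sketch of a memory-intensive job (hypothetical program and file names).
    # Ask for a whole node with at least 8 GB RAM so the job does not have to
    # share the node's memory with other tasks.
    #PBS -l nodes=1:ppn=4:min8GB
    #PBS -l walltime=12:00:00
    #PBS -m abe
    #PBS -N big-memory-job
    cd $PBS_O_WORKDIR
    ./train-big-model large-input.txt model.out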

Node Properties: min4GB, min8GB: at least this much memory. mem4GB, mem8GB: exactly this much. long: please use this property for long jobs; it leaves nodes without this property available for other jobs; no limit is enforced, but 24 h seems a reasonable threshold. Future properties: 16 and 32 GB, CPU, local disk space (/tmp and swap).
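For illustration, requesting these properties on the qsub command line could look like the lines below; the same resource string can also appear on a #PBS -l line inside the job script (file name is a placeholder):

    qsub -l nodes=1:ppn=1:min8GB myjob.pbs    # any node with at least 8 GB RAM
    qsub -l nodes=1:ppn=4:long myjob.pbs      # a whole node intended for long jobs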

CPU-Intensive Jobs: parallelisable, for example sentence-by-sentence processing, cross-validation runs, parameter search. Split the input into parts and run each part on a different CPU (a sketch of the splitting step follows below).
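A sketch of the splitting step, with hypothetical file names; it assumes GNU split for numeric suffixes and that the run-package.sh helper from the later slide takes the part file as its argument:

    split -d -l 500 sentences.txt part.             # part.00, part.01, ... with 500 lines each
    for p in part.* ; do
        echo "./run-package.sh $p" >> tasks.tfm     # one task per line, see the taskfarming slides
    done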

Taskfarming (diagram): the PBS job description starts n instances of the taskfarming executable: 1 master and n-1 workers. The master reads the task file (.tfm, one task per line) and hands tasks to the workers over MPI communication; each worker executes its task as a child process.

Taskfarming Executable: if the instance ID == 0, run the master loop: read the .tfm file (argument 1), send lines to the workers, exit when there are no more tasks and all workers have finished. Otherwise run the worker loop: ask the master for a task, execute the task, exit when the master has no more tasks.

Example: Taskfarming PBS File
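The original PBS file is not preserved in the transcript. A minimal sketch, assuming the taskfarming executable is the taskfarm.py mentioned on the Plans slide, that it takes the .tfm file as its first argument, and that mpirun picks up the allocated nodes from PBS:

    #!/bin/bash
    # Sketch of a taskfarming job: 4 nodes x 4 CPUs = 16 MPI instances,
    # of which one becomes the master and 15 become workers.
    #PBS -l nodes=4:ppn=4
    #PBS -l walltime=24:00:00
    #PBS -m abe
    cd $PBS_O_WORKDIR
    mpirun python taskfarm.py tasks.tfm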

Example: Taskfarming TFM File
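The original .tfm file is not preserved. Following the description above (one task per line, each line a complete command), it could look like this, using the run-package.sh helper from the next slide on the parts produced by the splitting step:

    ./run-package.sh part.00
    ./run-package.sh part.01
    ./run-package.sh part.02
    ./run-package.sh part.03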

Example: Taskfarming Helper Script run-package.sh
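The original script is not preserved. A minimal sketch of what such a helper could do, with a hypothetical tool name: run one part of the split input and record output, errors and exit status next to it, so incomplete results are easy to spot:

    #!/bin/bash
    # run-package.sh (sketch): process one package of the split input.
    set -u
    PART="$1"
    ./process-part "$PART" > "$PART.out" 2> "$PART.log"
    echo $? > "$PART.status"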

Example: Taskfarming in Action (timeline diagram): CPU 0 runs the master, which reads the .tfm file and distributes tasks; the other CPUs run tasks back to back, with some idle time at the end while the last tasks finish.

Example: Non-Terminating Task (timeline diagram): CPU 0 runs the master as before; one task does not terminate and occupies its CPU until the job is killed at the walltime limit, while the other CPUs finish their tasks and then sit idle.

Estimating the PBS Walltime Parameter: collect durations from a test run. Execution time usually has high variance (long sentences, parameter settings). Don't set the walltime to #packages x average time per package: there is a high risk (~50 %) that more time is needed, because the average only predicts the expected total and roughly half of the runs will exceed it. Instead, use random sampling with the observed package durations: /home/jwagner/tools/walltime.py.

Effect of Task Size: the job waits for the last task to finish (or is killed when the walltime limit is reached). What if a task crashes? Results are incomplete, but the next task is executed. What if a task does not terminate? Results are incomplete and fewer CPUs are available for the remaining tasks. Also consider the overhead of starting each task.

Considering Multiple CPUs per Node: 4 CPU cores per node; an 8 GB node gives 2 GB per core. CPUs compete for RAM: swapping caused by one task affects the 3 other tasks. The CPUs in the nodes are relatively slow compared to new desktop PCs. Optimise the throughput of the cluster per EUR, not the throughput of a single node or CPU; what works best depends on the application.

Plans: fix sporadic errors of taskfarm.py; re-implementation in C. XML-RPC-based taskfarming (HTTP-based): run the master on maia, run workers also outside the cluster, set parameters at runtime. Add more nodes (CNGL). Install additional software.

Questions? Contact: Joachim Wagner, CNGL System Administrator, (01)