An Introduction to High-Throughput Computing
2012 Africa Grid School
Rob Quick, OSG Operations Officer, Indiana University
Content contributed by the University of Wisconsin Condor Team and Scot Kronenfeld

Who Am I?
- Manager, High Throughput Computing, Indiana University (IU)
- Chief Operations Officer of the Open Science Grid (8 years)
- IU Institutional OSG PI
- Co-PI, International Science Grid This Week
- External Advisor to the European Grid Infrastructure

Protein Docking with the IU School of Medicine
- SPLINTER - Structural Protein-Ligand Interactome
- Used autodock-vina, an "...open-source program for drug discovery, molecular docking and virtual screening..."
- First run: docked ~3,900 proteins with 5,000 ligands for a total of ~19M docked pairs
- Submitted via command line to Condor using Pegasus on the OSG-XSEDE submission node
- Infrastructure is set and new runs can be easily started

Various rotations of protein CBFA2T1 (Cyclin-D-related protein) (Eight twenty one protein) (Protein ETO) (Protein MTG8) (Zinc finger MYND domain-containing protein 2)

Some Numbers
- Amazon S3 computing: $0.073/hour, $1.025M
- Compute only; data transfer and storage not included
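For scale, and assuming the $1.025M figure is the compute-only cost at that hourly rate, the slide's own numbers imply roughly $1,025,000 ÷ $0.073/hour ≈ 14 million compute hours, which is why this kind of work gets spread across many machines instead of bought by the hour.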

Follow along at: ridSchool14Materials

Overview of the day
- Lectures alternating with exercises
- Emphasis on lots of exercises
- Hopefully overcome PowerPoint fatigue & help you understand HTC better

Some thoughts on the exercises
- It's okay to move ahead on exercises if you have time
- It's okay to take longer on them if you need to
- If you move along quickly, try the "On Your Own" sections and "Challenges"

Most important! Please ask me questions!
- …during the lectures
- …during the exercises
- …during the breaks
- …during the meals
- …over dinner
- …via e-mail after we depart
If I don't know, I'll find the right person to answer your question.

Goals for this session
- Understand the basics of high-throughput computing
- Understand the basics of Condor
- Run a basic Condor job

The setup: you have a problem
- Your science computing is complex!
  - Monte Carlo, image analysis, genetic algorithms, simulation…
- It will take a year to get the results on your laptop, but the conference is in a week.
- What do you do?

Option 1: Use a "supercomputer", aka High Performance Computing (HPC)
- "Clearly, I need the best, fastest computer to help me out"
- Maybe you do…
  - Do you have a highly parallel program? (i.e., individual modules must communicate)
  - Do you require the fastest network/disk/memory?
- Are you willing to:
  - Port your code to a special environment?
  - Request and wait for an allocation?

Option 2: Use lots of commodity computers
- Instead of the fastest computer, lots of individual computers
- May not be the fastest network/disk/memory, but you have a lot of them
- The job can be broken down into separate, independent pieces
  - If I give you more computers, you run more jobs
  - You care more about the total quantity of results than the instantaneous speed of computation
- This is high-throughput computing

What is high-throughput computing (HTC)?
- An approach to distributed computing that focuses on long-term throughput, not instantaneous computing power
  - We don't care about operations per second
  - We care about operations per year
- Implications:
  - Focus on reliability
  - Use all available resources
  - That slow four-year-old cluster down the hall? Use it!

Think about a race
- Assume you can run a four-minute mile
- Does that mean you can run a 104-minute marathon?
- The challenges of sustained computation are different from those of peak computation speed
  - Our focus is sustained computation

HTC is not for all problems
- Do you need real-time results?
- Do you need to minimize latency to results?
- Do you have very parallel code with lots of communication between modules?
- You might need HPC instead

An example problem: BLAST
- A scientist has:
  - Question: does a protein sequence occur in other organisms?
  - Data: lots of protein sequences from various organisms
  - Parameters: how to search the database
- More throughput means:
  - More protein sequences queried
  - Larger/more protein databases examined
  - More parameter variation
- We'll talk more about BLAST tomorrow
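As a preview of how such a sweep maps onto HTC jobs (using HTCondor, which this session introduces shortly), one common pattern is many independent jobs that differ only in their input file. Everything below is a hypothetical sketch with made-up file names, not part of the exercises:

    # hypothetical submit file: one BLAST-style query per job, 100 jobs at once
    executable = run_blast.sh
    # $(Process) counts 0..99, so each job gets a different query file
    arguments  = query_$(Process).fasta
    transfer_input_files = query_$(Process).fasta
    output = blast_$(Process).out
    error  = blast_$(Process).err
    log    = blast.log
    queue 100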

Why is HTC hard?
- The HTC system has to keep track of:
  - Individual tasks (a.k.a. jobs) & their inputs
  - Computers that are available
- The system has to recover from failures
  - There will be failures! Distributed computers mean more chances for failure.
- You have to share computers
  - Sharing can be within an organization, or between organizations
  - So you have to worry about security
  - And you have to worry about policies on how you share
- If you use a lot of computers, you have to handle variety:
  - Different kinds of computers (architecture, OS, speed, etc.)
  - Different kinds of storage (access methodology, size, speed, etc.)
  - Different networks interacting (network problems are hard to debug!)

Let's take one step at a time
- Can you run one job on one computer?
- Can you run one job on another local computer?
- Can you run 10 jobs on a set of local computers?
- Can you run a multiple-job workflow?
- What happens if the resources are distributed and not local?
- How do we put this all together to get an international grid that serves science on the scale of the LHC?
- This is the path we'll take in the school, moving from small to large and from local to distributed.

Discussion
- For 5 minutes, talk to a neighbor. If you want to run one job in a local cluster of computers:
  1) What do you (the user) need to provide so a single job can be run?
  2) What does the system need to provide so your single job can be run?
- Think of this as a set of processes: what needs to happen when the job is given? A "process" could be a computer process, or just an abstract task.

What does the user provide?
- A "headless" job
  - Not interactive/no GUI: how could you interact with 1,000 simultaneous jobs?
- A set of input files
- A set of output files
- A set of parameters (command-line arguments)
- Requirements:
  - Ex: my job requires at least 2 GB of RAM
  - Ex: my job requires Linux
- Control/policy:
  - Ex: send me e-mail when the job is done
  - Ex: Job 2 is more important than Job 1
  - Ex: kill my job if it runs for more than 6 hours
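For a concrete picture of how these items get written down, here is a minimal sketch of an HTCondor submit description file; the executable, file names, and resource numbers are placeholders rather than anything used in the exercises:

    # minimal sketch of a submit description file (all names are placeholders)
    # the headless program, its arguments, and the input file it needs
    executable           = analysis.exe
    arguments            = -in input.dat -out results.dat
    transfer_input_files = input.dat
    # where stdout, stderr, and Condor's own job log end up
    output = job.out
    error  = job.err
    log    = job.log
    # requirements: at least 2 GB of RAM, and a Linux machine
    request_memory = 2 GB
    requirements   = (OpSys == "LINUX")
    # policy: e-mail me when the job completes
    notification = Complete
    queue 1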

What does the system provide?
- Methods to:
  - Submit/cancel a job
  - Check on the state of a job
  - Check on the state of available computers
- Processes to:
  - Reliably track the set of submitted jobs
  - Reliably track the set of available computers
  - Decide which job runs on which computer
  - Manage a single computer
  - Start up a single job

Surprise! Condor does this (and more)
- Methods to:
  - Submit/cancel a job: condor_submit / condor_rm
  - Check on the state of a job: condor_q
  - Check on the state of available computers: condor_status
- Processes to:
  - Reliably track the set of submitted jobs: schedd
  - Reliably track the set of available computers: collector
  - Decide which job runs where: negotiator
  - Manage a single computer: startd
  - Start up a single job: starter
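In day-to-day use the command-line methods look roughly like this; the submit file name and job ID are placeholders:

    $ condor_submit analysis.sub    # hand the job description to the schedd
    $ condor_q                      # check on the state of my jobs in the queue
    $ condor_status                 # check on the state of available computers
    $ condor_rm 1377.0              # cancel job 1377.0 (cluster.process ID)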

But not only Condor
- You can use other systems:
  - PBS/Torque
  - Oracle Grid Engine (né Sun Grid Engine)
  - LSF
  - SLURM
  - …
- But I won't cover them.
  - Our expertise is with Condor
  - Our bias is with Condor
  - Overlays exist (BOSCO)
- What should you learn at the school?
  - How do you think about HTC?
  - How can you do your science with HTC?
- For now, learn it with Condor, but you can apply it to other systems.

A brief introduction to Condor
- Please note, we will only scratch the surface of Condor:
  - We won't cover MPI, master-worker, advanced policies, site administration, security mechanisms, Condor-C, submission to other batch systems, virtual machines, cron, high availability, computing on demand, or anything with the word "cloud" (though the concepts are the same)…

Condor takes computers… (dedicated clusters, desktop computers) …and jobs ("I need a Mac!", "I need a Linux box with 2 GB RAM!") …and matches them: the Matchmaker.

Quick Terminology
- Cluster: a dedicated set of computers not for interactive use
- Pool: a collection of computers used by Condor
  - May be dedicated
  - May be interactive
- Remember:
  - Condor can manage a cluster in a machine room
  - Condor can use desktop computers
  - Condor can access remote computers
  - HTC uses all available resources

Matchmaking
- Matchmaking is fundamental to Condor
- Matchmaking is two-way
  - The job describes what it requires: "I need Linux && 8 GB of RAM"
  - The machine describes what it requires: "I will only run jobs from the Physics department"
- Matchmaking allows preferences
  - "I need Linux, and I prefer machines with more memory, but will run on any machine you provide me"
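A sketch of how each side might write this down (the user names and the 8192 MB figure are illustrative): the job side goes in the submit file, the machine side in that machine's Condor configuration.

    # job side (submit file): a hard requirement plus a preference
    requirements = (OpSys == "LINUX") && (Memory >= 8192)
    # prefer machines with more memory, but accept any machine that matches
    rank = Memory

    # machine side (condor_config): one way to accept jobs only from certain users
    # (user names are illustrative stand-ins for "the Physics department")
    START = (TARGET.Owner == "alice") || (TARGET.Owner == "bob")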

Why Two-way Matching?
- Condor conceptually divides people into three groups:
  - Job submitters
  - Computer owners
  - Pool (cluster) administrators
- All three of these groups have preferences (and they may or may not be the same people)

ClassAds
- ClassAds state facts
  - My job's executable is analysis.exe
  - My machine's load average is 5.6
- ClassAds state preferences
  - I require a computer with Linux
- ClassAds are extensible
  - They say whatever you want them to say

Example ClassAd

    MyType = "Job"
    TargetType = "Machine"
    ClusterId = 1377
    Owner = "roy"
    Cmd = "analysis.exe"
    Requirements = (Arch == "INTEL") && (OpSys == "LINUX") && (Disk >= DiskUsage) && ((Memory * 1024) >= ImageSize)
    …

(Attribute values can be strings, numbers, or expressions.)
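For comparison, the machine side of the match is also a ClassAd (this is what condor_status -long prints); the sketch below uses illustrative names and values:

    MyType = "Machine"
    TargetType = "Job"
    Name = "slot1@node01.example.edu"
    Arch = "INTEL"
    OpSys = "LINUX"
    Memory = 16384
    Disk = 35000000
    LoadAvg = 0.05
    State = "Unclaimed"
    Activity = "Idle"
    …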

Schema-free ClassAds
- Condor imposes some schema
  - Owner is a string, ClusterId is a number…
- But users can extend it however they like, for jobs or machines
  - AnalysisJobType = "simulation"
  - HasJava_1_6 = TRUE
  - ShoeLength = 10
- Matchmaking can use these attributes
  - Requirements = OpSys == "LINUX" && HasJava_1_6 == TRUE

Don't worry
- You won't write ClassAds (usually)
  - You'll create a simple submit file
  - Condor will write the ClassAd
  - You can extend the ClassAd if you want to
- You won't write requirements (usually)
  - Condor writes them for you
  - You can extend them
  - In some environments you provide attributes instead of requirements expressions
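For instance, a submit file can add custom attributes and extra requirements without writing the whole ClassAd by hand; the attribute names below simply echo the examples from the previous slide:

    # add a custom attribute to the job's ClassAd (the leading + marks it as custom)
    +AnalysisJobType = "simulation"
    # extend the requirements; Condor still adds its usual clauses for you
    requirements = (HasJava_1_6 == TRUE)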

Submitting jobs: condor_schedd
- Users submit jobs from a computer
  - Jobs are described as ClassAds
  - Each submission computer has a queue
  - Queues are not centralized
  - The submission computer watches over its queue
  - You can have multiple submission computers
  - Submission is handled by condor_schedd

Advertising computers
- Machine owners describe computers
  - The configuration file extends the ClassAd
  - The ClassAd has dynamic features
    - Load average
    - Free memory
    - …
  - ClassAds (Type = "Machine") are sent to the Matchmaker (Collector) by the condor_startd on the computer
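As a small sketch of "the configuration file extends the ClassAd" (the attribute name echoes an earlier slide; the value is illustrative):

    # in the machine's Condor configuration
    HasJava_1_6 = True
    # ask the startd to advertise this attribute in its machine ClassAd
    STARTD_ATTRS = $(STARTD_ATTRS) HasJava_1_6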

Matchmaking
- The Negotiator collects the list of computers
- The Negotiator contacts each schedd
  - "What jobs do you have to run?"
- The Negotiator compares each job to each computer
  - Evaluate the requirements of job & machine
  - If both are satisfied, there is a match
- Upon a match, the schedd contacts the execution computer to run the job

Matchmaking diagram: the condor_schedd and its queue (the job queue service) talk to the Matchmaker, which provides the information service (Collector) and the matchmaking service (Negotiator).

Running a job (diagram): condor_submit places the job in the condor_schedd's queue; the Matchmaker (condor_collector and condor_negotiator) matches it to a condor_startd, which manages the execute machine; the schedd's condor_shadow and the startd's condor_starter then manage the running job from the submit and execute sides.

Condor processes

    Process      Function
    Master       Takes care of other processes
    Collector    Stores ClassAds
    Negotiator   Performs matchmaking
    Schedd       Manages the job queue
    Shadow       Manages the job (submit side)
    Startd       Manages the computer
    Starter      Manages the job (execution side)

If you forget most of these, remember two (for other lectures):

    Schedd       Manages the job queue
    Startd       Manages the computer

Some notes
- One negotiator/collector per pool (perhaps extras for reliability)
- Can have many schedds (submitters)
- Can have many startds (computers)
- A machine can play multiple roles
  - E.g., can have both a schedd & a startd
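Which roles a machine plays comes down to which daemons its condor_master starts; a configuration sketch for a machine that both submits and executes jobs might look like this:

    # condor_config: run both a schedd (submit role) and a startd (execute role)
    DAEMON_LIST = MASTER, SCHEDD, STARTD
    # a central manager would instead run the collector and negotiator:
    # DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR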

Today's Condor setup
- One submit computer
  - One schedd/queue for everyone
  - Also has a startd
- We will use the local Condor pool
  - We have a dedicated subset
- Tomorrow we will expand

Quick UNIX refresher before we start
- Shell prompts: $, %, #
- ssh
- nano, vi, emacs, cat >, etc.
- cd, mkdir, ls, rm, cp (scp)
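If it helps, a typical sequence for the exercises looks something like this; the host name and file names are placeholders, not the actual school machines:

    $ ssh username@submit.example.edu     # log in to the submit machine
    $ mkdir intro-htc && cd intro-htc     # make and enter a working directory
    $ nano analysis.sub                   # edit a file (vi or emacs work too)
    $ cat analysis.sub                    # look at its contents
    $ cp analysis.sub backup.sub          # copy it (scp copies between machines)
    $ rm backup.sub                       # remove a file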

That was a whirlwind tour!
- Enough with the presentation: let's use Condor!
- Goal: learn about our installation, run some basic jobs.

Questions? Comments?
- Feel free to ask me questions now or later: Rob Quick
- Exercises start here: ricaGridSchool14Materials
- Presentations are also available from this URL.