Condor Project Computer Sciences Department University of Wisconsin-Madison Condor Users Tutorial National e-Science Centre Edinburgh, Scotland October 2003

2 The Condor Project (Established ‘85) Distributed High Throughput Computing research performed by a team of ~35 faculty, full time staff and students.

3 The Condor Project (Established ‘85) Distributed High Throughput Computing research performed by a team of ~35 faculty, full time staff and students who:  face software engineering challenges in a distributed UNIX/Linux/NT environment  are involved in national and international grid collaborations,  actively interact with academic and commercial users,  maintain and support large distributed production environments,  and educate and train students. Funding – US Govt. (DoD, DoE, NASA, NSF, NIH), AT&T, IBM, INTEL, Microsoft, UW-Madison, …

4 A Multifaceted Project › Harnessing the power of clusters - opportunistic and/or dedicated (Condor) › Job management services for Grid applications (Condor-G, Stork) › Fabric management services for Grid resources (Condor, GlideIns, NeST) › Distributed I/O technology (Parrot, Kangaroo, NeST) › Job-flow management (DAGMan, Condor, Hawk) › Distributed monitoring and management (HawkEye) › Technology for Distributed Systems (ClassAD, MW) › Packaging and Integration (NMI, VDT)

5 Some software produced by the Condor Project › Condor System › ClassAd Library › DAGMan › Fault Tolerant Shell (FTSH) › Hawkeye › MW › NeST › Stork › Parrot › Condor-G › And others… all as open source

6 Fault Tolerant Shell (FTSH) › The Grid is a hard environment. › FTSH  The ease of scripting with very precise error semantics.  Exception-like structure allows scripts to be both succinct and safe.  A focus on timed repetition simplifies the most common form of recovery in a distributed system.  A carefully-vetted set of language features limits the "surprises" that haunt system programmers.

7 Simple Bourne script…
#!/bin/sh
cd /work/foo
rm -rf data
cp -r /fresh/data .

What if ‘/work/foo’ is unavailable??

8 Getting Grid Ready…
#!/bin/sh
for attempt in 1 2 3 4 5
do
    cd /work/foo
    if [ ! $? ]
    then
        echo "cd failed, trying again..."
        sleep 5
    else
        break
    fi
done
if [ ! $? ]
then
    echo "couldn't cd, giving up..."
    return 1
fi

9 Or with FTSH
#!/usr/bin/ftsh
try 5 times
    cd /work/foo
    rm -rf bar
    cp -r /fresh/data .
end

10 Or with FTSH
#!/usr/bin/ftsh
try for 3 days or 100 times
    cd /work/foo
    rm -rf bar
    cp -r /fresh/data .
end

11 Or with FTSH
#!/usr/bin/ftsh
try for 3 days every 1 hour
    cd /work/foo
    rm -rf bar
    cp -r /fresh/data .
end

12 Another quick example…
hosts="mirror1.wisc.edu mirror2.wisc.edu mirror3.wisc.edu"
forany h in ${hosts}
    echo "Attempting host ${h}"
    wget
end
echo "Got file from ${h}"

13 FTSH › All the usual constructs  Redirection, loops, conditionals, functions, expressions, nesting, … › And more  Logging  Timeouts  Process Cancellation  Complete parsing at startup  File cleanup › Used on Linux, Solaris, Irix, Cygwin, … › Simplify your life!

14 › HawkEye  A monitoring tool › MW  Framework to create a master-worker style application in an opportunistic environment › NeST  Flexible Network Storage appliance  “Lots” : reserved space › Stork  A scheduler for grid data placement activities  Treat data movement as a “first class citizen” More Software…

15 More Software, cont. › Parrot  Useful in distributed batch systems where one has access to many CPUs, but no consistent distributed filesystem (BYOFS!).  Works with any program % gv /gsiftp/ % grep Yahoo /http/

16 What is Condor? › Condor converts collections of distributively owned workstations and dedicated clusters into a distributed high-throughput computing (HTC) facility. › Condor manages both resources (machines) and resource requests (jobs) › Condor has several unique mechanisms such as:  ClassAd Matchmaking  Process checkpoint / restart / migration  Remote System Calls  Grid Awareness

17 Condor can manage a large number of jobs › Managing a large number of jobs  You specify the jobs in a file and submit them to Condor, which runs them all and keeps you notified on their progress  Mechanisms to help you manage huge numbers of jobs (1000’s), all the data, etc.  Condor can handle inter-job dependencies (DAGMan)  Condor users can set job priorities  Condor administrators can set user priorities

18 Condor can manage Dedicated Resources… › Dedicated Resources  Compute Clusters › Manage  Node monitoring, scheduling  Job launch, monitor & cleanup

19 …and Condor can manage non-dedicated resources › Non-dedicated resources examples:  Desktop workstations in offices  Workstations in student labs › Non-dedicated resources are often idle --- ~70% of the time! › Condor can effectively harness the otherwise wasted compute cycles from non-dedicated resources

20 Mechanisms in Condor used to harness non-dedicated workstations › Transparent Process Checkpoint / Restart › Transparent Process Migration › Transparent Redirection of I/O (Condor’s Remote System Calls)

21 What else is Condor Good For? › Robustness  Checkpointing allows guaranteed forward progress of your jobs, even jobs that run for weeks before completion  If an execute machine crashes, you only lose work done since the last checkpoint  Condor maintains a persistent job queue - if the submit machine crashes, Condor will recover

22 What else is Condor Good For? (cont’d) › Giving you access to more computing resources  Dedicated compute cluster workstations  Non-dedicated workstations  Resources at other institutions Remote Condor Pools via Condor Flocking Remote resources via Globus Grid protocols

23 What is ClassAd Matchmaking? › Condor uses ClassAd Matchmaking to make sure that work gets done within the constraints of both users and owners. › Users (jobs) have constraints:  “I need an Alpha with 256 MB RAM” › Owners (machines) have constraints:  “Only run jobs when I am away from my desk and never run jobs owned by Bob.” › Semi-structured data --- no fixed schema

24 Some HTC Challenges › Condor does whatever it takes to run your jobs, even if some machines…  Crash (or are disconnected)  Run out of disk space  Don’t have your software installed  Are frequently needed by others  Are far away & managed by someone else

25 The Condor System › Unix and NT › Operational since 1986 › More than 400 pools installed, managing more than CPUs worldwide. › More than 1800 CPUs in 10 pools on our campus › Software available free on the web  Open license › Adopted by the “real world” (Galileo, Maxtor, Micron, Oracle, Tigr, CORE… )

28 Globus Toolkit › The Globus Toolkit is an open source implementation of Grid-related protocols & middleware services designed by the Globus Project and collaborators  Remote job execution, security infrastructure, directory services, data transfer, …

29 The Condor Project and the Grid … › Close collaboration and coordination with the Globus Project – joint development, adoption of common protocols, technology exchange, … › Partner in major national Grid R&D 2 (Research, Development and Deployment) efforts (GriPhyN, iVDGL, IPG, TeraGrid) › Close collaboration with Grid projects in Europe (EDG, GridLab, e-Science)

30 Remote Resource Access: Globus “globusrun myjob …” Globus GRAM Protocol Globus JobManager fork() Organization A Organization B

31 Remote Resource Access: Globus Globus GRAM Protocol Globus JobManager fork() Organization A Organization B “globusrun myjob …”

32 Remote Resource Access: Globus + Condor Globus GRAM Protocol Globus JobManager Submit to Condor Condor Pool Organization A Organization B “globusrun myjob …”

33 Remote Resource Access: Globus + Condor “globusrun …” Globus GRAM Protocol Globus JobManager Submit to Condor Condor Pool Organization A Organization B

34 Condor-G A Grid-enabled version of Condor that provides robust job management for Globus clients.  Robust replacement for globusrun  Provides extensive fault-tolerance  Can provide scheduling across multiple Globus sites  Brings Condor’s job management features to Globus jobs

35 Remote Resource Access: Condor-G + Globus + Condor Globus GRAM Protocol Globus JobManager Submit to Condor Condor Pool Organization A Organization B Condor-G myjob1 myjob2 myjob3 myjob4 myjob5 …

36 User/Application Fabric ( processing, storage, communication ) Grid

37 User/Application Fabric ( processing, storage, communication ) Grid Condor Globus Toolkit Condor

38 User/Application Fabric ( processing, storage, communication ) Grid Condor Pool Globus Toolkit Condor-G

39 The Idea Computing power is everywhere, we try to make it usable by anyone.

40 Meet Frieda. She is a scientist. But she has a big problem.

41 Frieda’s Application … Simulate the behavior of F(x,y,z) for 20 values of x, 10 values of y and 3 values of z (20*10*3 = 600 combinations)  F takes, on average, 3 hours to compute on a “typical” workstation (total = 1800 hours)  F requires a “moderate” (128MB) amount of memory  F performs “moderate” I/O - (x,y,z) is 5 MB and F(x,y,z) is 50 MB

42 I have 600 simulations to run. Where can I get help?

43 Install a Personal Condor!

44 Installing Condor › Download Condor for your operating system › Available as a free download from the Condor web site › Stable vs. Developer Releases  Naming scheme similar to the Linux Kernel… › Available for most Unix platforms and Windows NT

45 So Frieda Installs Personal Condor on her machine… › What do we mean by a “Personal” Condor?  Condor on your own workstation, no root access required, no system administrator intervention needed › So after installation, Frieda submits her jobs to her Personal Condor…

46 your workstation personal Condor 600 Condor jobs

47 Personal Condor?! What’s the benefit of a Condor “Pool” with just one user and one machine?

48 Your Personal Condor will... › … keep an eye on your jobs and will keep you posted on their progress › … implement your policy on the execution order of the jobs › … keep a log of your job activities › … add fault tolerance to your jobs › … implement your policy on when the jobs can run on your workstation

49 Getting Started: Submitting Jobs to Condor › Choosing a “Universe” for your job  Just use VANILLA for now › Make your job “batch-ready” › Creating a submit description file › Run condor_submit on your submit description file

50 Making your job ready › Must be able to run in the background: no interactive input, windows, GUI, etc. › Can still use STDIN, STDOUT, and STDERR (the keyboard and the screen), but files are used for these instead of the actual devices › Organize data files

51 Creating a Submit Description File › A plain ASCII text file › Tells Condor about your job:  Which executable, universe, input, output and error files to use, command-line arguments, environment variables, any special requirements or preferences (more on this later) › Can describe many jobs at once (a “cluster”) each with different input, arguments, output, etc.

52 Simple Submit Description File
# Simple condor_submit input file
# (Lines beginning with # are comments)
# NOTE: the words on the left side are not
#       case sensitive, but filenames are!
Universe   = vanilla
Executable = my_job
Queue

53 Running condor_submit › You give condor_submit the name of the submit file you have created › condor_submit parses the file, checks for errors, and creates a “ClassAd” that describes your job(s) › Sends your job’s ClassAd(s) and executable to the condor_schedd, which stores the job in its queue  Atomic operation, two-phase commit › View the queue with condor_q

54 Running condor_submit
% condor_submit my_job.submit-file
Submitting job(s).
1 job(s) submitted to cluster 1.

% condor_q
-- Submitter: perdita.cs.wisc.edu : <…> : …
 ID      OWNER        SUBMITTED     RUN_TIME ST PRI SIZE CMD
  1.0    frieda       6/16 06:…   0+00:00:00 I  …   …   my_job

1 jobs; 1 idle, 0 running, 0 held
%

55 Another Submit Description File
# Example condor_submit input file
# (Lines beginning with # are comments)
# NOTE: the words on the left side are not
#       case sensitive, but filenames are!
Universe   = vanilla
Executable = /home/wright/condor/my_job.condor
Input      = my_job.stdin
Output     = my_job.stdout
Error      = my_job.stderr
Arguments  = -arg1 -arg2
InitialDir = /home/wright/condor/run_1
Queue

56 “Clusters” and “Processes” › If your submit file describes multiple jobs, we call this a “cluster” › Each job within a cluster is called a “process” or “proc” › If you only specify one job, you still get a cluster, but it has only one process › A Condor “Job ID” is the cluster number, a period, and the process number (“23.5”) › Process numbers always start at 0

57 Example Submit Description File for a Cluster
# Example condor_submit input file that defines
# a cluster of two jobs with different iwd
Universe   = vanilla
Executable = my_job
Arguments  = -arg1 -arg2
InitialDir = run_0
Queue      ← Becomes job 2.0
InitialDir = run_1
Queue      ← Becomes job 2.1

58
% condor_submit my_job.submit-file
Submitting job(s).
2 job(s) submitted to cluster 2.

% condor_q
-- Submitter: perdita.cs.wisc.edu : <…> : …
 ID      OWNER        SUBMITTED     RUN_TIME ST PRI SIZE CMD
  1.0    frieda       6/16 06:…   0+00:02:11 R  …   …   my_job
  2.0    frieda       6/16 06:…   0+00:00:00 I  …   …   my_job
  2.1    frieda       6/16 06:…   0+00:00:00 I  …   …   my_job

3 jobs; 2 idle, 1 running, 0 held
%

59 Submit Description File for a BIG Cluster of Jobs › The initial directory for each job is specified with the $(Process) macro, and instead of submitting a single job, we use “Queue 600” to submit 600 jobs at once › $(Process) will be expanded to the process number for each job in the cluster (from 0 up to 599 in this case), so we’ll have “run_0”, “run_1”, … “run_599” directories › All the input/output files will be in different directories!

60 Submit Description File for a BIG Cluster of Jobs
# Example condor_submit input file that defines
# a cluster of 600 jobs with different iwd
Universe   = vanilla
Executable = my_job
Arguments  = -arg1 -arg2
InitialDir = run_$(Process)
Queue 600

61 Using condor_rm › If you want to remove a job from the Condor queue, you use condor_rm › You can only remove jobs that you own (you can’t run condor_rm on someone else’s jobs unless you are root) › You can give specific job ID’s (cluster or cluster.proc), or you can remove all of your jobs with the “-a” option.
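For example, typical condor_rm usage might look like this (a sketch; the job IDs are hypothetical):
% condor_rm 23        (removes every process in cluster 23)
% condor_rm 23.5      (removes only job 23.5)
% condor_rm -a        (removes all of your jobs)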

62 Temporarily halt a Job › Use condor_hold to place a job on hold  Kills job if currently running  Will not attempt to restart job until released › Use condor_release to remove a hold and permit job to be scheduled again
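A minimal sketch of holding and then releasing a job (the job ID is hypothetical):
% condor_hold 23.0
% condor_release 23.0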

63 Using condor_history › Once your job completes, it will no longer show up in condor_q › You can use condor_history to view information about a completed job › The status field (“ST”) will have either a “C” for “completed”, or an “X” if the job was removed with condor_rm
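For example (a sketch; the job ID and owner are hypothetical):
% condor_history 1.0       (show the completed job 1.0)
% condor_history frieda    (show all of frieda's completed jobs)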

64 Getting Email from Condor › By default, Condor will send you email when your job completes  With lots of information about the run › If you don’t want this email, put this in your submit file: notification = never › If you want email every time something happens to your job (preempt, exit, etc), use this: notification = always

65 Getting Email from Condor (cont’d) › If you only want email in case of errors, use this: notification = error › By default, the email is sent to your account on the host you submitted from. If you want the email to go to a different address, use this: notify_user =
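Put together in a submit file, the notification settings might look like this (a sketch; the address is hypothetical):
notification = error
notify_user  = frieda@cs.wisc.edu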

66 A Job’s life story: The “User Log” file › A UserLog must be specified in your submit file:  Log = filename › You get a log entry for everything that happens to your job:  When it was submitted, when it starts executing, preempted, restarted, completes, if there are any problems, etc. › Very useful! Highly recommended!

67 Sample Condor User Log
000 (…) 05/25 19:10:03 Job submitted from host: …
...
001 (…) 05/25 19:12:17 Job executing on host: …
...
005 (…) 05/25 19:13:06 Job terminated.
    (1) Normal termination (return value 0)
        Usr 0 00:00:37, Sys 0 00:00:00  -  Run Remote Usage
        Usr 0 00:00:00, Sys 0 00:00:05  -  Run Local Usage
        Usr 0 00:00:37, Sys 0 00:00:00  -  Total Remote Usage
        Usr 0 00:00:00, Sys 0 00:00:05  -  Total Local Usage
    …  -  Run Bytes Sent By Job
    …  -  Run Bytes Received By Job
    …  -  Total Bytes Sent By Job
    …  -  Total Bytes Received By Job
...

68 Uses for the User Log › Easily read by human or machine  C++ library and Perl Module for parsing UserLogs is available  log_xml=True – XML formatted › Event triggers for meta-schedulers  Like DagMan… › Visualizations of job progress  Condor JobMonitor Viewer

Condor JobMonitor Screenshot

70 Job Priorities w/ condor_prio
› condor_prio allows you to specify the order in which your jobs are started
› The higher the prio #, the earlier the job will start
% condor_q
-- Submitter: perdita.cs.wisc.edu : <…> : …
 ID      OWNER        SUBMITTED     RUN_TIME ST PRI SIZE CMD
  1.0    frieda       6/16 06:…   0+00:02:11 R  …   …   my_job

% condor_prio …

% condor_q
-- Submitter: perdita.cs.wisc.edu : <…> : …
 ID      OWNER        SUBMITTED     RUN_TIME ST PRI SIZE CMD
  1.0    frieda       6/16 06:…   0+00:02:13 R  …   …   my_job
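A typical condor_prio invocation might look like this (a sketch; the priority value and job ID are hypothetical):
% condor_prio -p 5 1.0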

71 Want other Scheduling possibilities? Use the Scheduler Universe › In addition to VANILLA, another job universe is the Scheduler Universe. › Scheduler Universe jobs run on the submitting machine and serve as a meta-scheduler. › DAGMan meta-scheduler included

72 DAGMan › Directed Acyclic Graph Manager › DAGMan allows you to specify the dependencies between your Condor jobs, so it can manage them automatically for you. › (e.g., “Don’t run job “B” until job “A” has completed successfully.”)

73 What is a DAG? › A DAG is the data structure used by DAGMan to represent these dependencies. › Each job is a “node” in the DAG. › Each node can have any number of “parent” or “children” nodes – as long as there are no loops! Job A Job BJob C Job D

74 Defining a DAG › A DAG is defined by a .dag file, listing each of its nodes and their dependencies:
# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D
› each node will run the Condor job specified by its accompanying Condor submit file
Job A  Job B  Job C  Job D

75 Submitting a DAG › To start your DAG, just run condor_submit_dag with your .dag file, and Condor will start a personal DAGMan daemon which will begin running your jobs:
% condor_submit_dag diamond.dag
› condor_submit_dag submits a Scheduler Universe Job with DAGMan as the executable. › Thus the DAGMan daemon itself runs as a Condor job, so you don’t have to baby-sit it.

76 DAGMan Running a DAG › DAGMan acts as a “meta-scheduler”, managing the submission of your jobs to Condor based on the DAG dependencies. Condor Job Queue C D A A B.dag File

77 DAGMan Running a DAG (cont’d) › DAGMan holds & submits jobs to the Condor queue at the appropriate times. Condor Job Queue C D B C B A

78 DAGMan Running a DAG (cont’d) › In case of a job failure, DAGMan continues until it can no longer make progress, and then creates a “rescue” file with the current state of the DAG. Condor Job Queue X D A B Rescue File

79 DAGMan Recovering a DAG › Once the failed job is ready to be re-run, the rescue file can be used to restore the prior state of the DAG. Condor Job Queue C D A B Rescue File C

80 DAGMan Recovering a DAG (cont’d) › Once that job completes, DAGMan will continue the DAG as if the failure never happened. Condor Job Queue C D A B D

81 DAGMan Finishing a DAG › Once the DAG is complete, the DAGMan job itself is finished, and exits. Condor Job Queue C D A B

82 Additional DAGMan Features › Provides other handy features for job management…  nodes can have PRE & POST scripts  failed nodes can be automatically re-tried a configurable number of times  job submission can be “throttled”
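Throttling is controlled from the condor_submit_dag command line; a minimal sketch (the limit of 10 is hypothetical):
% condor_submit_dag -maxjobs 10 diamond.dag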

83 Another sample DAGMan submit file
# Filename: diamond.dag
Job A A.condor
Job B B.condor
Job C C.condor
Job D D.condor
Script PRE  A top_pre.csh
Script PRE  B mid_pre.perl $JOB
Script POST B mid_post.perl $JOB $RETURN
Script PRE  C mid_pre.perl $JOB
Script POST C mid_post.perl $JOB $RETURN
Script PRE  D bot_pre.csh
PARENT A CHILD B C
PARENT B C CHILD D
Retry C 3
Job A  Job B  Job C  Job D

84 DAGMan, cont. › DAGMan can help w/ visualization of the DAG  Can create input files for AT&T’s graphviz package (dot input). › Why not just use make? › In the works: dynamic DAGs.

85 We’ve seen how Condor will … keep an eye on your jobs and will keep you posted on their progress … implement your policy on the execution order of the jobs … keep a log of your job activities … add fault tolerance to your jobs ?

86 What if each job needed to run for 20 days? What if I wanted to interrupt a job with a higher priority job?

87 Condor’s Standard Universe to the rescue! › Condor can support various combinations of features/environments in different “Universes” › Different Universes provide different functionality for your job:  Vanilla – Run any Serial Job  Scheduler – Plug in a meta-scheduler  Standard – Support for transparent process checkpoint and restart

88 Process Checkpointing › Condor’s Process Checkpointing mechanism saves all the state of a process into a checkpoint file  Memory, CPU, I/O, etc. › The process can then be restarted from right where it left off › Typically no changes to your job’s source code needed – however, your job must be relinked with Condor’s Standard Universe support library

89 Relinking Your Job for submission to the Standard Universe To do this, just place “condor_compile” in front of the command you normally use to link your job:
condor_compile gcc -o myjob myjob.c
    OR
condor_compile f77 -o myjob filea.f fileb.f
    OR
condor_compile make -f MyMakefile

90 Limitations in the Standard Universe › Condor’s checkpointing is not at the kernel level. Thus in the Standard Universe the job may not  Fork()  Use kernel threads  Use some forms of IPC, such as pipes and shared memory › Many typical scientific jobs are OK

91 When will Condor checkpoint your job? › Periodically, if desired  For fault tolerance › To free the machine to do a higher priority task (higher priority job, or a job from a user with higher priority)  Preemptive-resume scheduling › When you explicitly run condor_checkpoint, condor_vacate, condor_off or condor_restart command
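These commands can also be run by hand against a particular machine; a minimal sketch (the hostname is hypothetical):
% condor_checkpoint c2.cs.wisc.edu     (checkpoint all jobs running on that machine)
% condor_vacate c2.cs.wisc.edu         (checkpoint and evict the jobs from that machine)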

92 “Standalone” Checkpointing › Can use Condor Project’s checkpoint technology outside of Condor…  SIGTSTP = checkpoint and exit  SIGUSR2 = periodic checkpoint
condor_compile cc myapp.c -o myapp
myapp -_condor_ckpt foo-image.ckpt
…
myapp -_condor_restart foo-image.ckpt

93 Checkpoint Library Interface
› void init_image_with_file_name( char *ckpt_file_name )
› void init_image_with_file_descriptor( int fd )
› void ckpt()
› void ckpt_and_exit()
› void restart()
› void condor_ckpt_disable()
› void condor_ckpt_enable()
› int condor_warning_config( const char *kind, const char *mode )
› extern int condor_compress_ckpt

94 What Condor Daemons are running on my machine, and what do they do?

95 Condor Daemon Layout Personal Condor / Central Manager master collector negotiator scheddstartd = Process Spawned

96 condor_master › Starts up all other Condor daemons › If there are any problems and a daemon exits, it restarts the daemon and sends email to the administrator › Checks the time stamps on the binaries of the other Condor daemons, and if new binaries appear, the master will gracefully shut down the currently running version and start the new version master

97 condor_master (cont’d) › Acts as the server for many Condor remote administration commands:  condor_reconfig, condor_restart, condor_off, condor_on, condor_config_val, etc.

98 condor_startd › Represents a machine to the Condor system › Responsible for starting, suspending, and stopping jobs › Enforces the wishes of the machine owner (the owner’s “policy”… more on this soon) master startd

99 condor_schedd › Represents users to the Condor system › Maintains the persistent queue of jobs › Responsible for contacting available machines and sending them jobs › Services user commands which manipulate the job queue:  condor_submit,condor_rm, condor_q, condor_hold, condor_release, condor_prio, … master scheddstartd

100 condor_collector › Collects information from all other Condor daemons in the pool  “Directory Service” / Database for a Condor pool › Each daemon sends a periodic update called a “ClassAd” to the collector › Services queries for information:  Queries from other Condor daemons  Queries from users (condor_status) schedd collector master startd

101 condor_negotiator › Performs “matchmaking” in Condor › Gets information from the collector about all available machines and all idle jobs › Tries to match jobs with machines that will serve them › Both the job and the machine must satisfy each other’s requirements master collector negotiator scheddstartd

102 Happy Day! Frieda’s organization purchased a Beowulf Cluster! › Frieda Installs Condor on all the dedicated Cluster nodes, and configures them with her machine as the central manager… › Now her Condor Pool can run multiple jobs at once

103 your workstation personal Condor 600 Condor jobs Condor Pool

104 Layout of the Condor Pool Central Manager (Frieda’s) master collector negotiator schedd startd = ClassAd Communication Pathway = Process Spawned Cluster Node master startd Cluster Node master startd

105 condor_status
% condor_status
Name          OpSys    Arch   State      Activity   LoadAv  Mem  ActvtyTime
haha.cs.wisc. IRIX65   SGI    Unclaimed  Idle       …       …    …:00:04
antipholus.cs LINUX    INTEL  Unclaimed  Idle       …       …    …:28:42
coral.cs.wisc LINUX    INTEL  Claimed    Busy       …       …    …:27:21
doc.cs.wisc.e LINUX    INTEL  Unclaimed  Idle       …       …    …:20:04
dsonokwa.cs.w LINUX    INTEL  Claimed    Busy       …       …    …:01:45
ferdinand.cs. LINUX    INTEL  Claimed    Suspended  …       …    …:00:55
…             LINUX    INTEL  Unclaimed  Idle       …       …    …:03:28
…             LINUX    INTEL  Unclaimed  Idle       …       …    …:03:29

106 Frieda tries out ‘static’ parallel jobs: MPI Universe › Schedule and start an MPICH job on dedicated resources
## MPI example submit description file
universe      = MPI
executable    = simplempi
log           = logfile
input         = infile.$(NODE)
output        = outfile.$(NODE)
error         = errfile.$(NODE)
machine_count = 4
queue

107 The Boss says Frieda can add her co-workers’ desktop machines into her Condor pool as well… but only if they can also submit jobs. (Boss Fat Cat)

108 Layout of the Condor Pool Central Manager (Frieda’s) master collector negotiator schedd startd = ClassAd Communication Pathway = Process Spawned Desktop schedd startd master Desktop schedd startd master Cluster Node master startd Cluster Node master startd

109 Some of the machines in the Pool do not have enough memory or scratch disk space to run my job!

110 Specify Requirements! › An expression (syntax similar to C or Java) › Must evaluate to True for a match to be made
Universe     = vanilla
Executable   = my_job
InitialDir   = run_$(Process)
Requirements = Memory >= 256 && Disk >
Queue 600

111 Specify Rank! › All matches which meet the requirements can be sorted by preference with a Rank expression. › The higher the Rank, the better the match
Universe     = vanilla
Executable   = my_job
Arguments    = -arg1 -arg2
InitialDir   = run_$(Process)
Requirements = Memory >= 256 && Disk >
Rank         = (KFLOPS*10000) + Memory
Queue 600

112 What attributes can I reference in Requirements/Rank ? › Answer: Any attributes that appear in the machine or job classad › Out of the box, Condor has ~70 attributes per machine classad and ~70 attributes per job classad › Sites can add their own custom machine or job classads › To see all ad attributes:  condor_status –long (for machine classads)  condor_q –long (for job classads)
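For example, machine attributes can be inspected or filtered directly (a sketch; the constraint is hypothetical):
% condor_status -long
% condor_status -constraint 'Memory >= 256'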

113 How can my jobs access their data files?

114 Access to Data in Condor › Use Shared Filesystem if available › No shared filesystem?  Remote System Calls (in the Standard Universe)  Condor File Transfer Service Can automatically send back changed files Atomic transfer of multiple files  Remote I/O Proxy Socket

115 Standard Universe Remote System Calls › I/O System calls trapped and sent back to submit machine › Allows Transparent Migration Across Administrative Domains  Checkpoint on machine A, restart on B › No Source Code changes required › Language Independent › Opportunities  For Application Steering Example: Condor tells customer process “how” to open files  For compression on the fly  More…

116 Customer Job Job Startup Submit Schedd Shadow Startd Starter Condor Syscall Lib

117 condor_q -io
c01(69)% condor_q -io
-- Submitter: c01.cs.wisc.edu : … : c01.cs.wisc.edu
 ID    OWNER     READ    WRITE  SEEK  XPUT   BUFSIZE  BLKSIZE
 72.3  edayton   [ no i/o data collected yet ]
 72.5  edayton   6.8 MB  0.0 B        KB/s   KB       32.0 KB
 73.0  edayton   6.4 MB  0.0 B        KB/s   KB       32.0 KB
 73.2  edayton   6.8 MB  0.0 B        KB/s   KB       32.0 KB
 73.4  edayton   6.8 MB  0.0 B        KB/s   KB       32.0 KB
 73.5  edayton   6.8 MB  0.0 B        KB/s   KB       32.0 KB
 73.7  edayton   [ no i/o data collected yet ]
0 jobs; 0 idle, 0 running, 0 held

118 Condor File Transfer › Set Should_Transfer_Files  YES : Always transfer files to execution site  NO : Rely on a shared filesystem  IF_NEEDED : will automatically transfer the files if the submit and execute machine are not in the same FileSystemDomain › Set When_To_Transfer_Output  ON_EXIT or ON_EXIT_OR_VACATE
Universe                = vanilla
Executable              = my_job
Requirements            = Memory >= 256 && Disk >
Should_Transfer_Files   = IF_NEEDED
When_To_Transfer_Output = ON_EXIT
Transfer_input_files    = dataset$(Process), common.data
Transfer_output_files   = TheAnswer.dat
Queue 600

119 Remote I/O Socket › Job can request that the condor_starter process on the execute machine create a Remote I/O Proxy Socket › Used for online access of file on submit machine – without Standard Universe.  Use in Vanilla, Java, … › Libraries provided for Java and for C, e.g. : Java: FileInputStream -> ChirpInputStream C : open() -> chirp_open() › Or use Parrot!

120 Job Fork startershadow Home File System I/O Library I/O ServerI/O Proxy Secure Remote I/O Local System Calls Local I/O (Chirp) Execution Site Submission Site

121 I am adding nodes to the Cluster… but the Engineering Department has priority on these nodes. (Boss Fat Cat) Policy Configuration

122 The Machine (Startd) Policy Expressions
START    - When is this machine willing to start a job
RANK     - Job Preferences
SUSPEND  - When to suspend a job
CONTINUE - When to continue a suspended job
PREEMPT  - When to nicely stop running a job
KILL     - When to immediately kill a preempting job

123 Frieda’s Current Settings
START    = True
RANK     =
SUSPEND  = False
CONTINUE =
PREEMPT  = False
KILL     = False

124 Frieda’s New Settings for the Chemistry nodes
START    = True
RANK     = Department == “Chemistry”
SUSPEND  = False
CONTINUE =
PREEMPT  = False
KILL     = False

125 Submit file with Custom Attribute
Executable  = charm-run
Universe    = standard
+Department = Chemistry
queue

126 What if “Department” not specified?
START    = True
RANK     = Department =!= UNDEFINED && Department == “Chemistry”
SUSPEND  = False
CONTINUE =
PREEMPT  = False
KILL     = False

127 Another example
START    = True
RANK     = Department =!= UNDEFINED && ((Department == “Chemistry”)*2 + Department == “Physics”)
SUSPEND  = False
CONTINUE =
PREEMPT  = False
KILL     = False

128 The Cluster is fine. But not the desktop machines. Condor can only use the desktops when they would otherwise be idle. (Boss Fat Cat) Policy Configuration, cont

129 So Frieda decides she wants the desktops to: › START jobs when there has been no activity on the keyboard/mouse for 5 minutes and the load average is low › SUSPEND jobs as soon as activity is detected › PREEMPT jobs if the activity continues for 5 minutes or more › KILL jobs if they take more than 5 minutes to preempt

130 Macros in the Config File
NonCondorLoadAvg = (LoadAvg - CondorLoadAvg)
BackgroundLoad   = 0.3
HighLoad         = 0.5
KeyboardBusy     = (KeyboardIdle < 10)
CPU_Busy         = ($(NonCondorLoadAvg) >= $(HighLoad))
MachineBusy      = ($(CPU_Busy) || $(KeyboardBusy))
ActivityTimer    = (CurrentTime - EnteredCurrentActivity)

131 Desktop Machine Policy
START    = $(CPU_Idle) && KeyboardIdle > 300
SUSPEND  = $(MachineBusy)
CONTINUE = $(CPU_Idle) && KeyboardIdle > 120
PREEMPT  = (Activity == "Suspended") && $(ActivityTimer) > 300
KILL     = $(ActivityTimer) > 300

132 Policy Review › Users submitting jobs can specify Requirements and Rank expressions › Administrators can specify Startd Policy expressions individually for each machine (Start,Suspend,etc) › Expressions can use any job or machine ClassAd attribute › Custom attributes easily added › Bottom Line: Enforce almost any policy!

133 I want to use Java. Is there any easy way to run Java programs via Condor?

134 Java Universe Job
universe   = java
executable = Main.class
jar_files  = MyLibrary.jar
input      = infile
output     = outfile
arguments  = Main
queue
condor_submit

135 Why not use Vanilla Universe for Java jobs? › Java Universe provides more than just inserting “java” at the start of the execute line  Knows which machines have a JVM installed  Knows the location, version, and performance of JVM on each machine  Provides more information about Java job completion than just JVM exit code Program runs in a Java wrapper, allowing Condor to report Java exceptions, etc.

136 Java support, cont.
condor_status -java
Name          JavaVendor   Ver  State    Activity  LoadAv  Mem
aish.cs.wisc. Sun Microsy  …    Owner    Idle      …       …
anfrom.cs.wis Sun Microsy  …    Owner    Idle      …       …
babe.cs.wisc. Sun Microsy  …    Claimed  Busy      …       …

137 My MPI programs are running on the dedicated nodes. Can I run parallel jobs on the non-dedicated nodes?

138 PVM Universe › Allows dynamic, “opportunistic” PVM  Number of nodes can change dynamically › Specify a minimum and maximum number of nodes › Works well for Master/Worker paradigm › Differences from regular PVM  pvm_addhost() is non-blocking  pvm_notify enhanced w/ suspend state  PVM “arch string” enhanced › Can also use “MW” … does all the work for you.

139 Non-dedicated Parallel Job: PVM Universe, Cont.
# The job is a PVM universe job.
universe = PVM
# The executable of the master PVM program is ``master.exe''.
executable = master.exe
input  = "in.dat"
output = "out.dat"
error  = "err.dat"
################### Machine class 0 ##################
Requirements = (Arch == "INTEL") && (OpSys == "LINUX")
# We want at least 2 machines in class 0 before starting the
# program. We can use up to 4 machines.
machine_count = 2..4
queue
################### Machine class 1 ##################
Requirements = (Arch == "SUN4x") && (OpSys == "SOLARIS26")
# We can use up to 50 more….
machine_count =
queue

140 General User Commands
› condor_status        View Pool Status
› condor_q             View Job Queue
› condor_submit        Submit new Jobs
› condor_rm            Remove Jobs
› condor_prio          Intra-User Prios
› condor_history       Completed Job Info
› condor_submit_dag    Specify Dependencies
› condor_checkpoint    Force a checkpoint
› condor_compile       Link Condor library

141 Administrator Commands
› condor_vacate        Leave a machine now
› condor_on            Start Condor
› condor_off           Stop Condor
› condor_reconfig      Reconfig on-the-fly
› condor_config_val    View/set config
› condor_userprio      User Priorities
› condor_stats         View detailed usage accounting stats

142 CondorView Usage Graph

143 Back to the Story… Frieda Needs Remote Resources…

144 Frieda Builds a Grid! › First Frieda takes advantage of her Condor friends! › She knows people with their own Condor pools, and gets permission to access their resources › She then configures her Condor pool to “flock” to these pools

145 your workstation Friendly Condor Pool personal Condor 600 Condor jobs Condor Pool

146 How Flocking Works › Add a line to your condor_config:
FLOCK_HOSTS = Pool-Foo, Pool-Bar
Schedd  Collector Negotiator Central Manager (CONDOR_HOST)  Collector Negotiator Pool-Foo Central Manager  Collector Negotiator Pool-Bar Central Manager  Submit Machine

147 Condor Flocking › Remote pools are contacted in the order specified until jobs are satisfied › The list of remote pools is a property of the Schedd, not the Central Manager  So different users can Flock to different pools  And remote pools can allow specific users › User-priority system is “flocking-aware”  A pool’s local users can have priority over remote users “flocking” in.

148 Condor Flocking, cont. › Flocking is “Condor” specific technology… › Frieda also has access to Globus resources she wants to use  She has certificates and access to Globus gatekeepers at remote institutions › But Frieda wants Condor’s queue management features for her Globus jobs! › She installs Condor-G so she can submit “Globus Universe” jobs to Condor

149 Condor-G: Access non-Condor Grid resources Globus › middleware deployed across entire Grid › remote access to computational resources › dependable, robust data transfer Condor › job scheduling across multiple resources › strong fault tolerance with checkpointing and migration › layered over Globus as “personal batch system” for the Grid

150 Condor-G
Condor-G  Job Description (Job ClassAd)  GT2 [.1|2|4]  HTTPS  Condor  NorduGrid  Oracle  GT3 OGSI  Unicore?

151 Frieda Submits a Globus Universe Job › In her submit description file, she specifies:  Universe = Globus  Which Globus Gatekeeper to use  Optional: Location of file containing your Globus certificate
universe        = globus
globusscheduler = beak.cs.wisc.edu/jobmanager
executable      = progname
queue

152 How It Works Schedd LSF Personal CondorGlobus Resource

153 How It Works Schedd LSF Personal CondorGlobus Resource 600 Globus jobs

154 How It Works Schedd LSF Personal CondorGlobus Resource GridManager 600 Globus jobs

155 How It Works Schedd JobManager LSF Personal CondorGlobus Resource GridManager 600 Globus jobs

156 How It Works Schedd JobManager LSF User Job Personal CondorGlobus Resource GridManager 600 Globus jobs

Condor Globus Universe

158 Globus Universe Concerns › What about Fault Tolerance?  Local Crashes What if the submit machine goes down?  Network Outages What if the connection to the remote Globus jobmanager is lost?  Remote Crashes What if the remote Globus jobmanager crashes? What if the remote machine goes down?

159 Changes to the Globus JobManager for Fault Tolerance › Ability to restart a JobManager › Enhanced two-phase commit submit protocol

160 Globus Universe Fault-Tolerance: Submit-side Failures › All relevant state for each submitted job is stored persistently in the Condor job queue. › This persistent information allows the Condor GridManager upon restart to read the state information and reconnect to JobManagers that were running at the time of the crash. › If a JobManager fails to respond…

161 Globus Universe Fault-Tolerance: Lost Contact with Remote Jobmanager Can we contact gatekeeper? Yes – network was down No – machine crashed or job completed Yes - jobmanager crashedNo – retry until we can talk to gatekeeper again… Can we reconnect to jobmanager? Has job completed? No – is job still running? Yes – update queue Restart jobmanager

162 Globus Universe Fault-Tolerance: Credential Management › Authentication in Globus is done with limited-lifetime X509 proxies › Proxy may expire before jobs finish executing › Condor can put jobs on hold and email the user to refresh the proxy › Todo: Interface with MyProxy…

163 Can Condor-G decide where to run my jobs?

164 Condor-G Matchmaking › Alternative to Glidein: Use Condor-G matchmaking with globus universe jobs › Allows Condor-G to dynamically assign computing jobs to grid sites › An example of lazy planning

165 Condor-G Matchmaking, cont. › Normally a globus universe job must specify the site in the submit description file via the “globusscheduler” attribute like so:
Executable      = foo
Universe        = globus
Globusscheduler = beak.cs.wisc.edu/jobmanager-pbs
queue

166 Condor-G Matchmaking, cont. › With matchmaking, globus universe jobs can use requirements and rank:
Executable      = foo
Universe        = globus
Globusscheduler = $$(GatekeeperUrl)
Requirements    = arch == LINUX
Rank            = NumberOfNodes
Queue
› The $$(x) syntax inserts information from the target ClassAd when a match is made.

167 Condor-G Matchmaking, cont. › Where do these target ClassAds representing Globus gatekeepers come from? Several options:  Simple script on gatekeeper publishes an ad via condor_advertise command-line utility (method used by D0 JIM, USCMS)  Program to query Globus MDS and convert information into ClassAd (method used by EDG)  Run HawkEye with appropriate plugins on the gatekeeper › For explanation of Condor-G matchmaking setup for USCMS, see
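A minimal sketch of the first option, publishing a hand-written gatekeeper ad to the collector (the ad file name and its contents are hypothetical):
% condor_advertise UPDATE_STARTD_AD gatekeeper.ad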

168 DAGMan Callouts › Another mechanism to achieve lazy planning: DAGMan callouts › Define DAGMAN_HELPER_COMMAND in condor_config (usually a script) › The helper command is passed a copy of the job submit file when DAGMan is about to submit that node in the graph › This allows changes to be made to the submit file (such as changing GlobusScheduler) at the last minute

169 But Frieda Wants More… › She wants to run standard universe jobs on Globus-managed resources  For matchmaking and dynamic scheduling of jobs  For job checkpointing and migration  For remote system calls

170 One Solution: Condor-G GlideIn › Frieda can use the Globus Universe to run Condor daemons on Globus resources › When the resources run these GlideIn jobs, they will temporarily join her Condor Pool › She can then submit Standard, Vanilla, PVM, or MPI Universe jobs and they will be matched and run on the Globus resources

171 your workstation Friendly Condor Pool personal Condor 600 Condor jobs Globus Grid PBS LSF Condor Condor Pool glide-in jobs

172 How It Works Schedd LSF Collector Personal CondorGlobus Resource 600 Condor jobs

173 How It Works Schedd LSF Collector Personal CondorGlobus Resource 600 Condor jobs GlideIn jobs

174 How It Works Schedd LSF Collector Personal CondorGlobus Resource GridManager 600 Condor jobs GlideIn jobs

175 How It Works Schedd JobManager LSF Collector Personal CondorGlobus Resource GridManager 600 Condor jobs GlideIn jobs

176 How It Works Schedd JobManager LSF Startd Collector Personal CondorGlobus Resource GridManager 600 Condor jobs GlideIn jobs

177 How It Works Schedd JobManager LSF Startd Collector Personal CondorGlobus Resource GridManager 600 Condor jobs GlideIn jobs

178 How It Works Schedd JobManager LSF User Job Startd Collector Personal CondorGlobus Resource GridManager 600 Condor jobs GlideIn jobs

180 GlideIn Concerns › What if a Globus resource kills my GlideIn job?  That resource will disappear from your pool and your jobs will be rescheduled on other machines  Standard universe jobs will resume from their last checkpoint like usual › What if all my jobs are completed before a GlideIn job runs?  If a GlideIn Condor daemon is not matched with a job in 10 minutes, it terminates, freeing the resource

181 Common Questions, cont. My Personal Condor is flocking with a bunch of Solaris machines, and also doing a GlideIn to a Silicon Graphics O2K. I do not want to statically partition my jobs. Solution: In your submit file, say: Executable = myjob.$$(OpSys).$$(Arch) The “$$(xxx)” notation is replaced with attributes from the machine ClassAd which was matched with your job.

182 In Review With Condor Frieda can…  … manage her compute job workload  … access local machines  … build a grid to access remote Condor Pools via flocking  … access remote compute resources on grids via Globus Universe jobs  … carve out her own personal Condor Pool from a grid with GlideIn technology

183 I want to create a portal to Condor. Is there a developer API to Condor?

184 Developer API › Do not underestimate the flexibility of the command line tools! › If not possible, consider SOAP

185 And now HTTP
HTTP Stack Added: Condor Service over Cedar, and now HTTP
Todo: HTTPS and/or HTTPG

186 Current SOAP status › Clients can now use CEDAR or HTTP protocol to communicate to Condor daemons. › If HTTP command is  GET : use a built-in “mini” web server Useful for retrieving WSDL from the service itself  POST : assumed to be a SOAP RPC

187 Current SOAP status, cont. › Created first pass XML Schema representation of a list of ClassAds, and first pass WSDL files. › CEDAR is more of a message-passing model instead of a true RPC model  many back-and-forth messages.  we are working on the considerable task of re-arranging the implementation in the Condor daemons from a message-passing model to a true RPC model.

188 Current SOAP status, cont. › Started with the Collector  modified the implementation of the collector so all of the query operations are ignorant of the underlying transport (CEDAR or SOAP, it no longer knows or cares)  created SOAP stubs for all collector query operations  Proof of concept: simple “condor_status” was written in Perl. It works!

189 Current Activity › Currently adding soap stubs in the schedd for our queue management API.  This will give the equivalent of condor_q, condor_prio, condor_qedit, condor_rm, … › Adding DIME support (binary attachments to SOAP messages) in preparation for job sandbox delivery for the submit interface.

190 Thank you! Check us out on the Web: