Condor DAGMan: Managing Job Dependencies with Condor


1 Condor DAGMan: Managing Job Dependencies with Condor

2 Condor DAGMan
What is DAGMan?
What is it good for?
How does it work?
What's next?
Condor DAGMan

3 DAGMan
Directed Acyclic Graph Manager
DAGMan allows you to specify the dependencies between your Condor jobs, so it can manage them automatically for you. (e.g., "Don't run job B until job A has completed successfully.") In the simplest case…

4 Typical Scenarios
Jobs whose output needs to be summarized or post-processed once they complete.
Jobs that need data to be generated or pre-processed before they can use it.
Jobs that require data to be staged to/from remote repositories before they start or after they finish.

5 What is a DAG?
A DAG is the data structure used by DAGMan to represent these dependencies.
Each job is a "node" in the DAG.
Each node can have any number of "parents" or "children" (or neither) – as long as there are no loops!
(diagram: the "diamond" DAG – Job A at the top, Jobs B and C in the middle, Job D at the bottom)

A DAG is the natural data structure to represent a workflow of jobs with dependencies. Children may not run until their parents have finished – this is why the graph is a directed graph: there is a direction to the flow of work. In this example, called a "diamond" DAG, job A must run first; when it finishes, jobs B and C can run together; when they are both finished, D can run; and when D is finished, the DAG is finished. Loops, where two jobs are both descended from one another, are prohibited because they would lead to deadlock – in a loop, neither node could run until the other finished, so neither would start. This restriction is what makes the graph acyclic.

6 An Example DAG
Jobs whose output needs to be summarized or post-processed once they complete:
(diagram: Jobs A, B, C, D)

7 Another Example DAG
Jobs that need data to be generated or pre-processed before they can use it:
(diagram: Jobs A, B, C, D)

8 Defining a DAG
A DAG is defined by a .dag file, listing all its nodes and any dependencies:

# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D

This is all it takes to specify the example "diamond" DAG.

9 Defining a DAG (cont'd)
Each node in the DAG will run a Condor job, specified by a Condor submit file:

# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D

These are normal Condor submit files, the same ones you would use to submit the jobs by hand.
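As a sketch of what one of those node submit files might contain (the executable and file names here are hypothetical, not from the talk), a minimal a.sub could look like:

```
# a.sub (hypothetical example)
universe   = vanilla
executable = a.out
output     = a.output
error      = a.error
log        = diamond.log
queue
```

DAGMan simply hands each such file to condor_submit when the node becomes runnable, so anything that works as a stand-alone submit file works as a DAG node.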

10 Submitting a DAG
To start your DAG, just run condor_submit_dag with your .dag file, and Condor will start a personal DAGMan daemon & begin running your jobs:

% condor_submit_dag diamond.dag

The DAGMan daemon itself runs as a Condor job, so you don't have to baby-sit it. Just like any other Condor job, you get fault tolerance in case the machine crashes or reboots, or if there's a network outage. And you're notified when it's done, and whether it succeeded or failed.

% condor_q
-- Submitter: foo.bar.edu : < :1027> : foo.bar.edu
 ID   OWNER   SUBMITTED   RUN_TIME   ST  PRI  SIZE  CMD
      user    /8 19:      :00:02     R              condor_dagman -f -

11 Running a DAG
DAGMan acts as a "meta-scheduler", managing the submission of your jobs to Condor based on the DAG dependencies.
(diagram: DAGMan reads the .dag file and feeds the Condor job queue)
First, job A will be submitted alone…

12 Running a DAG (cont'd)
DAGMan holds & submits jobs to the Condor queue at the appropriate times.
(diagram: jobs B and C enter the Condor job queue)
Once job A completes successfully, jobs B and C will be submitted at the same time…

13 Running a DAG (cont'd)
In case of a job failure, DAGMan continues until it can no longer make progress, and then creates a "rescue" file with the current state of the DAG.
(diagram: job C fails; DAGMan writes a rescue file)
If job C fails, DAGMan will wait until job B completes, and then will exit, creating a rescue file. Job D will not run. In its log, DAGMan will provide additional details of which node failed and why.
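As a sketch of what such a rescue file contains (the exact format may vary between DAGMan versions, so treat this as an assumption): it is essentially the original DAG with already-completed nodes marked DONE, e.g.:

```
# diamond.dag.rescue (hypothetical contents)
Job A a.sub DONE
Job B b.sub DONE
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D
```

When resubmitted, DAGMan skips the DONE nodes and picks up where it left off.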

14 Recovering a DAG
Once the failed job is ready to be re-run, the rescue file can be used to restore the prior state of the DAG.
(diagram: DAGMan restarts from the rescue file)
Since jobs A and B have already completed, DAGMan will start by re-submitting job C.

15 Recovering a DAG (cont'd)
Once that job completes, DAGMan will continue the DAG as if the failure never happened.
(diagram: job D is submitted and the DAG completes)

16 Finishing a DAG
Once the DAG is complete, the DAGMan job itself is finished, and exits.

17 Additional Features
DAGMan provides some other handy features for job management:
Nodes can have PRE & POST scripts.
Job submission can be "throttled".

18 PRE & POST Scripts
Each node can have a PRE or POST script, executed as part of the node:

# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
PARENT A CHILD B C
PARENT B C CHILD D
Script PRE B stage-in.sh
Script POST B stage-out.sh

PRE and POST scripts will execute locally on the submitting machine, before the job is submitted or after it completes. The PRE & POST scripts are part of the node – in other words, if any part of the node fails, the node was not successful and any children will not start.
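As a sketch of what a PRE script like stage-in.sh might do (the staging location, file names, and data here are hypothetical, not from the talk), the script below fakes a staging area with a temporary directory and copies the node's input into the working directory before the job is submitted:

```shell
#!/bin/sh
# stage-in.sh (hypothetical sketch): a PRE script that makes a node's
# input file available locally before DAGMan submits the job.
set -e

# Stand-in for a real remote staging area; a real script might fetch
# the data with scp or a grid transfer tool instead.
STAGE_DIR=$(mktemp -d)
echo "input data for job B" > "$STAGE_DIR/b.input"

# The actual PRE step: copy the input into the job's working directory.
cp "$STAGE_DIR/b.input" ./b.input
```

Because the PRE script is part of the node, a non-zero exit here (guaranteed by set -e on any failed command) marks the whole node as failed and the job is never submitted.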

19 Submit Throttling
DAGMan can limit the maximum number of jobs it will submit to Condor at once:

condor_submit_dag -maxjobs N

Useful for managing resource limitations (e.g., storage). Ex: you have 1000 jobs, each of which requires 1 GB of disk space, and you have 100 GB of disk – submitting with -maxjobs 100 ensures that at most 100 jobs (100 GB) are in the queue at once.

20 Summary
DAGMan:
manages dependencies, holding & running jobs only at the appropriate times
monitors job progress
is fault-tolerant
is recoverable in case of job failure
provides additional features to Condor

21 Future Work
More sophisticated management of remote data transfer & staging to maximize CPU throughput. Keep the pipeline full! I.e., always try to have data ready when a CPU becomes available, while adhering to disk & network limitations. Integration with Kangaroo, etc.
Better integration with Condor tools: condor_q, etc. displaying DAG information.

22 Conclusion
Interested in seeing more? Come to the DAGMan demo.
Wednesday 9am – noon
Room 3393, Computer Sciences (1210 W. Dayton St.)

