
CONDOR DAGMan and Pegasus Selim Kalayci Florida International University 07/28/2009 Note: Slides are compiled from various TeraGrid Documentations.


1 CONDOR DAGMan and Pegasus Selim Kalayci, Florida International University, 07/28/2009. Note: Slides are compiled from various TeraGrid documentation.

2 DAGMan Directed Acyclic Graph Manager. DAGMan allows you to specify the dependencies between your Condor jobs, so it can manage them automatically for you (e.g., “Don’t run job B until job A has completed successfully.”)

3 What is a DAG? A DAG is the data structure used by DAGMan to represent these dependencies. Each job is a “node” in the DAG. Each node can have any number of “parent” or “child” nodes, as long as there are no loops! [Diagram: a diamond-shaped DAG in which Job A is the parent of Jobs B and C, and both are parents of Job D]
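The no-loops requirement means the dependencies must form a directed acyclic graph, a property that can be checked with a topological sort. A minimal sketch using Kahn's algorithm (the helper name is ours, not part of DAGMan):

```python
from collections import deque

def is_acyclic(edges, nodes):
    """Return True if the parent->child edges over the given nodes
    contain no cycle (checked with Kahn's topological-sort algorithm)."""
    indegree = {n: 0 for n in nodes}
    children = {n: [] for n in nodes}
    for parent, child in edges:
        children[parent].append(child)
        indegree[child] += 1
    # Start from nodes with no parents; peel them off layer by layer.
    ready = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while ready:
        n = ready.popleft()
        visited += 1
        for c in children[n]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    # If a cycle exists, the nodes on it never reach indegree 0.
    return visited == len(nodes)

# The diamond DAG from the slide: A -> B, A -> C, B -> D, C -> D
diamond = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
print(is_acyclic(diamond, ["A", "B", "C", "D"]))  # True
```

DAGMan performs an equivalent validation when it reads the .dag file; a cyclic input is rejected.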

4 Defining a DAG A DAG is defined by a .dag file, listing each of its nodes and their dependencies:

# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D

Each node will run the Condor job specified by its accompanying Condor submit file. [Diagram: the diamond DAG of Jobs A, B, C, and D]
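Each node's submit file is an ordinary Condor submit description. A minimal sketch of what a.sub might contain (the executable and file names are illustrative, not from the slides):

```
# a.sub -- illustrative submit file for node A
universe   = vanilla
executable = a.out
output     = a.output
error      = a.error
log        = diamond.log
queue
```

For the Condor-G scenario shown later in these slides, the universe would be grid rather than vanilla, with the remote resource named in a grid_resource line.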

5 Submitting a DAG To start your DAG, just run condor_submit_dag with your .dag file, and Condor will start a personal DAGMan daemon to begin running your jobs: % condor_submit_dag diamond.dag condor_submit_dag submits a Scheduler Universe job with DAGMan as the executable. Thus the DAGMan daemon itself runs as a Condor job, so you don’t have to baby-sit it.

6 Running a DAG DAGMan acts as a “meta-scheduler”, managing the submission of your jobs to Condor-G based on the DAG dependencies. [Diagram: DAGMan reads the .dag file and submits Job A to the Condor-G job queue while holding B, C, and D]

7 Running a DAG (cont’d) DAGMan holds & submits jobs to the Condor-G queue at the appropriate times. [Diagram: with A complete, Jobs B and C are now in the Condor-G job queue; D is still held]

8 Running a DAG (cont’d) In case of a job failure, DAGMan continues until it can no longer make progress, and then creates a “rescue” file with the current state of the DAG. [Diagram: one job is marked failed (X); DAGMan writes a rescue file recording the DAG state; D is never submitted]

9 Recovering a DAG: fault tolerance Once the failed job is ready to be re-run, the rescue file can be used to restore the prior state of the DAG. [Diagram: from the rescue file, DAGMan resubmits the failed Job C to the Condor-G job queue]
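The recovery behavior can be pictured as replaying the DAG with the already-completed nodes marked done, so only the failed job and its descendants remain. A hypothetical sketch of that idea (DAGMan's actual rescue file records DONE nodes in .dag syntax; this just models the semantics):

```python
def runnable(parents, done):
    """Nodes whose parents have all completed and which are not yet done.
    parents maps each node to the list of its parent nodes."""
    return {
        node for node, deps in parents.items()
        if node not in done and all(p in done for p in deps)
    }

# Diamond DAG: C failed earlier, so the rescue state has A and B done.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(runnable(parents, done={"A", "B"}))  # {'C'}: only C must be retried; D waits
```

Once C completes, the same function reports D as runnable, which is exactly the "continue as if the failure never happened" behavior on the next slide.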

10 Recovering a DAG (cont’d) Once that job completes, DAGMan will continue the DAG as if the failure never happened. [Diagram: Job D is submitted to the Condor-G job queue once C completes]

11 Finishing a DAG Once the DAG is complete, the DAGMan job itself is finished, and exits. [Diagram: the Condor-G job queue is empty; all of A–D are complete]

12 Additional DAGMan Features DAGMan provides other handy features for job management: – nodes can have PRE & POST scripts – failed nodes can be automatically re-tried a configurable number of times – job submission can be “throttled”
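In the .dag file these features are expressed as extra lines; an illustrative sketch for the diamond example (the script names are made up):

```
# Illustrative additions to diamond.dag
Script PRE  A stage_in.sh
Script POST D stage_out.sh
Retry C 3
```

Throttling is typically requested at submit time, e.g. condor_submit_dag -maxjobs 10 diamond.dag limits how many jobs DAGMan keeps in the queue at once.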

13 HANDS-ON http://users.cs.fiu.edu/~skala001/DAGMan_Lab.htm

14 Ewa Deelman, deelman@isi.edu, www.isi.edu/~deelman, pegasus.isi.edu

15 [image-only slide]

16 Pegasus: Planning for Execution in Grids Abstract workflows, the Pegasus input workflow description – a workflow “high-level language” – only identifies the computations that a user wants to do – devoid of resource descriptions – devoid of data locations Pegasus (http://pegasus.isi.edu) – a workflow “compiler” – target language: DAGMan’s DAG and Condor submit files – transforms the workflow for performance and reliability – automatically locates physical locations for both workflow components and data – finds appropriate resources to execute the components – provides runtime provenance DAGMan – a workflow executor – scalable and reliable execution of an executable workflow

17 Pegasus Workflow Management System [Diagram of the stack: abstract workflow → Pegasus mapper → DAGMan → Condor schedd] Pegasus mapper: a decision system that develops strategies for reliable and efficient execution in a variety of environments. DAGMan: reliable and scalable execution of dependent tasks. Condor schedd: reliable, scalable execution of independent tasks (locally, across the network), priorities, scheduling. Together they form a reliable, scalable workflow management system that an application or workflow composition service can depend on to get the job done: a client tool with no special requirements on the infrastructure. Cyberinfrastructure: local machine, cluster, Condor pool, OSG, TeraGrid.

18 Generating a Concrete Workflow Gather information: – location of files and component instances – state of the Grid resources Select specific: – resources – files Add the jobs required to form a concrete workflow that can be executed in the Grid environment: – data movement – data registration Each component in the abstract workflow is turned into an executable job.
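The expansion of one abstract component can be pictured as wrapping it with the data-movement and registration jobs listed above. A hypothetical sketch of the idea (not Pegasus's actual API; all names here are ours):

```python
def concretize(task, site):
    """Expand one abstract task into the concrete jobs that planning
    would add around it (illustrative only)."""
    return [
        f"stage-in inputs of {task} to {site}",      # data movement (in)
        f"run {task} at {site}",                     # the executable job
        f"stage-out outputs of {task} from {site}",  # data movement (out)
        f"register outputs of {task} in RLS",        # data registration
    ]

for job in concretize("findrange", "cluster-1"):
    print(job)
```

The real planner also merges redundant transfers across tasks and skips stage-in jobs for data already present at the chosen site.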

19 Information Components used by Pegasus Globus Monitoring and Discovery Service (MDS) – locates available resources – finds resource properties (dynamic: load, queue length; static: location of GridFTP server, RLS, etc.) Globus Replica Location Service (RLS) – locates data that may be replicated – registers new data products Transformation Catalog – locates installed executables

20 Example Workflow Reduction Starting from the original abstract workflow: if file “b” already exists (as determined by a query to the RLS), the workflow can be reduced.
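The reduction step can be modeled as pruning any job whose outputs are already registered. A hypothetical sketch of the idea (not Pegasus's actual planner code; job and file names are illustrative):

```python
def reduce_workflow(jobs, existing):
    """jobs maps job name -> list of output files; existing is the set of
    files already registered (e.g. returned by an RLS query). A job whose
    outputs all exist need not run again and is dropped."""
    return {
        name: outs
        for name, outs in jobs.items()
        if not all(f in existing for f in outs)
    }

# "preprocess" produced file b in an earlier run, so it is pruned.
jobs = {"preprocess": ["b"], "findrange": ["c"], "analyze": ["d"]}
print(sorted(reduce_workflow(jobs, existing={"b"})))  # ['analyze', 'findrange']
```

This is also how Pegasus "can take advantage of pre-existing intermediate data products" (slide 24): the RLS query happens at planning time, before anything is submitted to DAGMan.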

21 Mapping from abstract to concrete Query the RLS, MDS, and Transformation Catalog; schedule computation and data movement.

22 Pegasus Research resource discovery and assessment resource selection resource provisioning workflow restructuring – tasks merged together or reordered to improve overall performance adaptive computing – workflow refinement adapts to the changing execution environment

23 Benefits of the workflow & Pegasus approach The workflow exposes – the structure of the application – the maximum parallelism of the application Pegasus can take advantage of the structure to – set a planning horizon (how far into the workflow to plan) – cluster a set of workflow nodes to be executed as one (for performance) Pegasus shields the user from the Grid details.

24 Benefits of the workflow & Pegasus approach Pegasus can run the workflow on a variety of resources Pegasus can run a single workflow across multiple resources Pegasus can opportunistically take advantage of available resources (through dynamic workflow mapping) Pegasus can take advantage of pre-existing intermediate data products Pegasus can improve the performance of the application.

