
6a.1 Globus Toolkit Execution Management

6a.2 Globus Toolkit components (diagram after I. Foster), grouped into Web Services and non-WS components spanning GT2 through GT4: Security (Pre-WS and WS Authentication/Authorization, Credential Management, Community Authorization Service, Delegation Service); Data Management (GridFTP, Reliable File Transfer, OGSA-DAI [Tech Preview], Replica Location Service); Execution Management (Pre-WS GRAM, WS GRAM, Community Scheduler Framework [contribution]); Information Services (MDS2, MDS4); Common Runtime (C Common Libraries, XIO, Java WS Core, C WS Core, Python WS Core [contribution]).

6a.3 Grid Resource Allocation Manager (GRAM) Job submission

6a.4 Resource Management: job submission, job status, basic resource allocation.

6a.5 Outline: GT2 job submission using the RSL version 1 language. GT 3.2 job submission using the RSL version 2 language. GT 4 job submission (also uses the RSL version 2 language).

6a.6 Resource Allocation Globus (2, 3.2, or 4.0) does not have its own job scheduler to find resources and automatically send jobs to suitable machines. For that, use a separate scheduler, e.g. Condor, Sun Grid Engine, LSF, PBS, ….

6a.7 Globus Version 2 (pre-2004) Pre-WS GRAM

6a.8 Globus version 2 architecture. From: "Introduction to Grid Computing with Globus," IBM Redbooks.

6a.9 GT 2 GRAM. Job startup is done using the GRAM service, which consists of: the Gatekeeper; the Job Manager, which can connect to a local resource manager (scheduler); and the GASS service, which provides access to remote files and redirects the standard output streams.

6a.10 GRAM Commands:
globusrun -- runs a single executable on a remote site.
globus-job-run -- allows you to run a job at one or several remote resources; uses globusrun to submit the job.
globus-job-submit -- for batch job submission (e.g. using a local scheduling job manager). Not recommended; use globus-job-run or globusrun instead, with the job manager specified.
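As a rough sketch of how these commands are invoked (the host name grid.host.edu, the jobmanager suffix and the file myjob.rsl are illustrative placeholders, not taken from the slides):
$ globus-job-run grid.host.edu /bin/hostname                 # run one command interactively
$ globus-job-submit grid.host.edu/jobmanager-pbs /bin/date   # batch submission via the local PBS job manager
$ globusrun -r grid.host.edu/jobmanager-pbs -f myjob.rsl     # submit a job described in an RSL file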

6a.11 Scheduler. A job scheduler can be specified with globusrun by appending the job manager name to the hostname, e.g. hostname/jobmanager-lsf

6a.12 Specifying the job. The command uses a file that describes the job in a language called the Resource Specification Language (RSL). RSL version 1 is a metalanguage describing the job and its execution requirements.

6a.13 Resource Specification Language RSL Provides a specification for: Resource requirements - machine type, number of nodes, memory, etc. Job description - directory, executable, arguments, environment

6a.14 RSL Version 1 Constraints Example. Conjunction (AND): & To create 3 to 5 instances of myProg, each on a machine with at least 64 Mbytes of memory available to me, for 1 hour: &(executable=myProg)(count>=3)(count<=5)(memory>=64)(max_time=60)

6a.15 RSL Version 1 Constraints Example. Disjunction (OR): | To create 5 instances of myProg, each on a machine with at least 64 Mbytes memory, or 7 instances of myProg, each on a machine with at least 32 Mbytes memory: &(executable=myProg)(|(&(count=5)(memory>=64))(&(count=7)(memory>=32)))

6a.16 RSL Version 1. Requesting multiple resources (multirequest): + To execute 5 instances of myProg1 on a machine with at least 64 Mbytes memory and execute 2 instances of myProg2: +(&(count=5)(memory>=64)(executable=myProg1))(&(count=2)(executable=myProg2))

6a.17 Different resource managers on different machines can be specified using the resourceManagerContact attribute.
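As an illustration, a multirequest can direct each sub-request to a different resource manager (a sketch; the host names and job manager types are hypothetical):
+(&(resourceManagerContact="host1.domain/jobmanager-pbs")(count=5)(executable=myProg1))
 (&(resourceManagerContact="host2.domain/jobmanager-lsf")(count=2)(executable=myProg2))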

6a.18 RSL creation with Globus version 2 (GT2). globus-job-run can be used to generate RSL from command-line arguments with the -dumprsl flag; -help gives the options.
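For example, to see the RSL that would be generated rather than running the job (a sketch; the host name is a placeholder and the exact RSL printed may differ):
$ globus-job-run -dumprsl grid.host.edu /bin/echo hello
&(executable=/bin/echo)(arguments="hello")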

6a.19 Globus 3.2

6a.20 GT 3.2 GRAM ("Globus Resource Allocation Manager"). A set of "OGSI"-compliant services provided to start remote jobs, notably the Master Managed Job Factory Service (MMJFS). Also a set of non-OGSI-compliant services (Gatekeeper, Job Manager) carried over from pre-GT3.

6a.21 Globus GT 3.x

6a.22 Resource Specification Language, RSL, version 2 GT3 and GT 4 use RSL version 2. (Some differences in RSL language specification in GT4, so not completely interchangeable.) RSL Version 2 is an XML language.

6a.23 Resource Specification Language Version 2 (RSL-2). Can specify everything from executable, paths, arguments, input/output and error files, number of processes, max/min execution time, max/min memory, job type, etc.

6a.24 RSL-2 Much more elegant and flexible, and in keeping with systems using XML. Can use XML parsers. Allows more powerful mechanisms with job schedulers. Resource scheduler/broker applies specification to local resources.

6a.25 RSL-2 Example: Specifying Executable. (executable=/bin/echo) In the GT 4 version of RSL-2, can simply write: <executable>/bin/echo</executable>

6a.26 RSL-2 Example: Specifying Directory. (directory="/bin") In the GT 4 version of RSL-2, can simply write: <directory>/bin/</directory>

6a.27 RSL-2 Example: Specifying Number. (count=1) In the GT 4 version of RSL-2, can simply write: <count>1</count>

6a.28 RSL-2 Example: Specifying Arguments. (arguments="Hello") In the GT 4 version of RSL-2, can simply write: <argument>hello</argument> <argument>world</argument>

6a.29 RSL and (GT 3.2) RSL-2 comparison for the echo program.
RSL-1:
&(executable=/bin/echo)(directory="/bin")(arguments="Hello World")(stdin=/dev/null)(stdout="stdout")(stderr="stderr")(count=1)
RSL-2 (GT 3.2), opening element:
<rsl:rsl xmlns:rsl=" xmlns:gram=" xmlns:xsi=" xsi:schemaLocation=" c:/ogsa-3.0/schema/base/gram/rsl.xsd c:/ogsa-3.0/schema/base/gram/gram_rsl.xsd">
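For comparison, a GT4-style RSL-2 description of the same echo job might look roughly like the following (a sketch assembled from the GT4 elements on the preceding slides; the <job> wrapper, the <stdin>/<stdout>/<stderr> elements and the exact layout are assumptions, not taken from the slides):
<job>
  <executable>/bin/echo</executable>
  <directory>/bin</directory>
  <argument>Hello World</argument>
  <stdin>/dev/null</stdin>
  <stdout>stdout</stdout>
  <stderr>stderr</stderr>
  <count>1</count>
</job>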

6a.30 Running a GT 3 Job. Command: managed-job-globusrun; its arguments name the master managed job factory service that will process the job and an XML file specifying the job. The command is equivalent to the GT 2 globusrun command.

6a.31 Globus 4.0

6a.32 GT 4 WS-GRAM

6a.33 In WS GRAM, jobs are started by the ManagedExecutionJobService, a Java service implementation running within the Globus service container.

6a.34 Running a GT 4 Job. Command: globusrun-ws, with arguments to specify the job. Equivalent to the GT 3.2 managed-job-globusrun command and the GT 2 globusrun command.

6a.35 GT4 job submission command globusrun-ws: submits and monitors GRAM jobs; replaces the (Java) managed-job-globusrun; written in C, with faster startup and execution; supports multiple and single job submission; handles credential management; streams job stdout/err during execution.

6a.36 Simple job submission. Step 1: Create a proxy with the grid-proxy-init command. Step 2: Run globusrun-ws with parameters to specify the job.
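A minimal sketch of the two steps, using the echo example from the later slides:
$ grid-proxy-init
$ globusrun-ws -submit -c /bin/echo hello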

6a.37 Some globusrun-ws flags for job submission

6a.38 Job submission. -submit Submits (or resubmits) a job to a job host in one of three output modes: batch, interactive, or interactive-streaming. This flag is required.

6a.39 Specifying where the job is submitted (ManagedJobFactory). -F Specifies the "contact" for the job submission; the default contact is the local ManagedJobFactoryService. In Assignment 3, simply the localhost and container port number are used, i.e. -F localhost:8443
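As an illustration with an explicit factory contact (the host name is a placeholder, and the service path shown assumes the default GT4 container layout):
$ globusrun-ws -submit -F https://grid.host.edu:8443/wsrf/services/ManagedJobFactoryService -c /bin/date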

6a.40 Submitting a single job. -c Causes globusrun-ws to generate a simple job description with the named program and arguments. This flag, if used, must be the last flag.

6a.41 Example: submit the program echo with argument hello to the default local host.
% globusrun-ws -submit -c /bin/echo hello
Submitting job...Done.
Job ID: uuid:d23a7be0-f87c-11d9-a53b aae1f
Termination time: 07/20/ :44 GMT
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.

6a.42 A successful submission creates a new ManagedJob resource with its own unique EPR for messaging. globusrun-ws will output this EPR to a file when requested, or as the sole standard output when running in batch mode.

6a.43 Selecting a different host. Example:
$ globusrun-ws -submit -F services/managedJobFactoryService -c /bin/echo hello

6a.44 Using an RSL file. -f Similar to -c except the job description is held in a file. Example: globusrun-ws -submit -f echo.xml where echo.xml is an RSL-2 file describing the job.

6a.45 Contents of echo.xml:
<job>
  <executable>/bin/echo</executable>
  <argument>hello</argument>
</job>

6a.46 Batch Submission. -batch Results in the ManagedJob EPR as the sole standard output (unless in quiet mode) and then exits. -o filename The created ManagedJob EPR is written to the file (if submission is successful).

6a.47 Batch Job Submission
$ globusrun-ws -submit -batch -o job_epr -c /bin/sleep 50
Submitting job...Done.
Job ID: uuid:f c5-11d9-97e a5ad41e5
Termination time: 01/08/ :05 GMT

6a.48 Monitoring Batch Submission. -monitor Attaches to an existing job in interactive or interactive-streaming output modes. -j filename The EPR for the ManagedJob is read from the file.

6a.49 Monitoring Batch Job
$ globusrun-ws -monitor -j job_epr
Current job state: Active
Current job state: CleanUp
Current job state: Done
Requesting original job description...Done.
Destroying job...Done.

6a.50 Batch Submission. -status Reports the current state of the job and exits. -kill Requests immediate cancellation of the job and exits.
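For example, given the EPR file job_epr written during batch submission, the job can be checked and then cancelled as follows (a sketch):
$ globusrun-ws -status -j job_epr
$ globusrun-ws -kill -j job_epr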

6a.51

6a.52 Input/Output. The RSL file can specify where stdout/err goes. Example:
<executable>/bin/echo</executable>
<directory>/tmp</directory>
<argument>hello</argument>
<stdout>job.out</stdout>
<stderr>job.err</stderr>
…

6a.53 Stream Input/Output. -s The standard output and standard error files of the job are monitored and the data is written to the corresponding outputs of globusrun-ws. Allows streaming of stdout/err to the terminal during execution.
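For example, an interactive-streaming submission might look like this (a sketch; /bin/date is just an illustrative program):
$ globusrun-ws -submit -s -c /bin/date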

6a.54 Stream output data files Can also “stream” output data files. Specify in RSL file where to.

6a.55 Example:
…
<sourceUrl>file:///tmp/job.out</sourceUrl>
<destinationUrl>gsiftp://host.domain:2811/tmp/stage.out</destinationUrl>
…

6a.56 Reliable File Transfer (RFT) Example: … … /O=NCSU/OU=HPC/OU=unity.ncsu.edu/CN=Barry Wilkinson 4 …

6a.57 Sources of GT 4 information. "GRAM, RFT & Job Submission, Execution Management for GT4 Developers," S. Martin and P. Plaszczak, GlobusWorld, 2005. /execution/wsgram/user/globusrun-ws.html