www.see-grid-sci.eu SEE-GRID-SCI Hands-On Session: Workload Management System (WMS) Installation and Configuration. Dusan Vudragovic, Institute of Physics.


SEE-GRID-SCI Hands-On Session: Workload Management System (WMS) Installation and Configuration. Dusan Vudragovic, Institute of Physics Belgrade, Serbia. The SEE-GRID-SCI initiative is co-funded by the European Commission under the FP7 Research Infrastructures contract no. Regional SEE-GRID-SCI Training for Site Administrators, Institute of Physics Belgrade, March 5-6, 2009.

Overview of WMS [1/5]

Overview of WMS [2/5]

Workload Manager Proxy (WMProxy)
- Provides access to WMS functionality through a Web Services based interface.
- Each job submitted to a WMProxy service is given the delegated credentials of the user who submitted it.
- These credentials can then be used to perform operations requiring interaction with other services.
- WMProxy advantages: web service interface (SOAP); support for job collections, DAG jobs, and shared and compressed sandboxes.
- WMProxy caveat: it needs delegated credentials, but you delegate once and can then submit many jobs.
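As a sketch of what reaches WMProxy at submission time, a minimal JDL job description might look like the following (the executable and file names are illustrative, not part of this training's exercises):

```
[
  Type = "Job";
  Executable = "/bin/hostname";
  StdOutput = "std.out";
  StdError = "std.err";
  OutputSandbox = {"std.out", "std.err"};
]
```

The "delegate once, submit many" pattern then typically looks like `glite-wms-job-delegate-proxy -d mydelegid` followed by any number of `glite-wms-job-submit -d mydelegid job.jdl` calls, where `mydelegid` is an arbitrary user-chosen delegation identifier.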

Overview of WMS [3/5]

Workload Manager (WM)
- Is responsible for:
  - Calling the Matchmaker to find the resource that best matches the job requirements.
  - Interacting with the Information System and the File Catalog.
  - Calculating the ranking of all the matched resources.

Information Supermarket (ISM)
- Basically consists of a repository of resource information that is available in read-only mode to the matchmaking engine.

Job Adapter
- Is responsible for:
  - Making the final touches to the JDL expression for a job before it is passed to CondorC for the actual submission.
  - Creating the job wrapper script that sets up the appropriate execution environment on the CE worker node, including the transfer of the input and output sandboxes.
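The inputs to matchmaking and ranking come from the job's JDL: `Requirements` is a boolean expression a CE must satisfy, and `Rank` orders the surviving candidates. A hedged sketch, using GLUE-schema attribute names commonly seen in gLite tutorials (verify against the attributes your information system actually publishes):

```
Requirements = other.GlueCEStateStatus == "Production" &&
               other.GlueCEInfoTotalCPUs >= 2;
Rank = -other.GlueCEStateEstimatedResponseTime;
```

A negative estimated response time as the rank simply means the WM prefers the CE expected to start the job soonest.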

Overview of WMS [4/5]

Job Controller (JC)
- Is responsible for:
  - Converting the job's ClassAd into a Condor submit file.
  - Handing the job over to CondorC.

CondorC
- Is responsible for performing the actual job management operations: job submission, removal, etc.

Log Monitor (LM)
- Is responsible for:
  - Watching the Condor log file.
  - Intercepting interesting events concerning active jobs, i.e. events affecting the job state machine.
  - Triggering the appropriate actions.
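For a feel of what the JC hands to CondorC, a Condor submit description is conceptually similar to the sketch below. This is purely schematic: the real files are generated internally by the WMS and carry many more grid-specific attributes.

```
universe   = grid
executable = job_wrapper.sh
output     = condor.out
error      = condor.err
log        = condor.log
queue
```

The `log` file named here is the same Condor log that the Log Monitor watches for state-changing events.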

Overview of WMS [5/5]

Task Queue
- Makes it possible to keep track of requests when no resources are immediately available.
- Non-matching requests are either retried periodically (eager scheduling) or wait for a notification that resources have become available (lazy scheduling).
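The eager/lazy distinction can be sketched in a few lines of Python. This is a toy model, not WMS code: `match`, `fits`, and the job/resource values are hypothetical stand-ins for matchmaking against the ISM.

```python
from collections import deque

class TaskQueue:
    """Toy model of the WMS Task Queue: holds requests for which no
    matching resource was found, and retries them eagerly or lazily."""

    def __init__(self):
        self.pending = deque()

    def submit(self, job, match):
        # Try to match the job right away; queue it if nothing fits.
        resource = match(job)
        if resource is None:
            self.pending.append(job)
        return resource

    def retry_pending(self, match):
        # Eager scheduling: periodically re-run matchmaking on queued jobs.
        started, waiting = [], deque()
        while self.pending:
            job = self.pending.popleft()
            resource = match(job)
            if resource is None:
                waiting.append(job)
            else:
                started.append((job, resource))
        self.pending = waiting
        return started

    def notify(self, resource, fits):
        # Lazy scheduling: a resource became available; wake matching jobs.
        started, waiting = [], deque()
        while self.pending:
            job = self.pending.popleft()
            if fits(job, resource):
                started.append((job, resource))
            else:
                waiting.append(job)
        self.pending = waiting
        return started
```

In eager mode the queue keeps polling the matchmaker; in lazy mode it sits idle until the information system signals that a suitable resource appeared.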