EGEE / ARC Interoperability Status
Michael Grønager, PhD (UNI-C / NBI / EGEE)
for the NorduGrid collaboration
Joint OSG and EGEE Operations Workshop
September 28, 2005, Culham

Overview
 Interoperability background
 NorduGrid/ARC in 2 minutes
 Interoperability status:
 – Multi-middleware clusters
 – Common interfaces
 – Mediated/adapted interoperability: gateways
 Roadmap

Interoperability background
 Many sites have deployed ARC (~50 sites, ~5000 CPUs)
 These sites wish to provide compute power via ARC
 Experiment support is keyed towards LCG
 Interoperability meeting in Rome, February 2005
 Interoperability meeting at CERN, August 2005

Interoperability background
 Proposed work plan from the CERN meeting:
 – Short term: multiple middlewares at large sites
 – Medium term: gateways between grids
 – Long term: common interfaces
 The short term is already being addressed
 A work plan has been made for the medium-term tasks
 Long term: the CRM initiative, GLUE2, GGF, GT4

NorduGrid/ARC in 2 minutes
 History
 The present ARC grid
 Architecture and components
 Distribution

NorduGrid history
 2001–2002: part of the NORDUNet2 program, aimed to enable Grid middleware and applications in the Nordic countries
 – Middleware: EDG
 – Applications: HEP (ATLAS), theoretical physics
 – Participants: academic groups from 4 Nordic countries
   Denmark: Research Center COM, DIKU, NBI
   Finland: HIP
   Norway: U. of Bergen, U. of Oslo
   Sweden: KTH, Stockholm U., Lund U., Uppsala U. (ATLAS groups)
 Since end-2002: a research collaboration between Nordic academic institutes
 – Open to anybody, non-binding
 Hardware: mostly rental resources and those belonging to users
 Since end-2003: focuses only on middleware
 – Develops its own Grid middleware: the Advanced Resource Connector (ARC)
 – 6 core developers, many contributing student projects
 – Provides middleware to research groups and national Grid projects
 ARC is now installed on ~50 sites (~5000 CPUs) in 13 countries all over the world

ARC history
 NorduGrid had strong links with EDG
 – WP6: active work with the ITeam; Nordic CA
 – WP8: active work with ATLAS DC1
 – WP2: contribution to GDMP
 – Attempts to contribute to RC, Infosystem
 Had to diverge from EDG in 2002
 – January 2002: became increasingly aware that EDG
   is not suitable for non-dedicated resources with a non-CERN OS
   won't deliver production-level middleware in time
 – February 2002: developed own lightweight Grid architecture
 – March 2002: prototypes of the core services in place
 – April 2002: first live demos ran
 – May 2002: entered continuous production mode
 Since 2004, used by more and more national Grid projects, not necessarily related to NorduGrid or HEP/CERN

ARC Grid
 A Grid based on the ARC middleware
 – Driven (so far) mostly by the needs of the LHC experiments
 – One of the world's largest production-level Grids
 Close cooperation with other Grid projects:
 – EU DataGrid (2001–2004)
 – SWEGRID, DCGC, …
 – NDGF
 – LCG
 – EGEE
 Assistance in Grid deployment outside the Nordic area
 Recently introduced: the ARC Community VO, to join those who share their resources

Architecture
 Each resource has a front-end
 – Authenticates users, interprets tasks, interacts with the LRMS, publishes info, moves data, supports non-Linux WNs
 Each user can have an independent lightweight brokering client (or many)
 – Resource discovery, matchmaking, job submission and manipulation, monitoring
 Grid topology is achieved by a hierarchical, multi-rooted set of indexing services
 Monitoring relies entirely on the information system
 Ad-hoc data management, for the time being

Components (architecture diagram)

Components
 Computing resources: Linux clusters/pools or workstations
 – Addition of non-Linux resources is possible via Linux front-ends
 Front-end:
 – Runs a custom pluggable GridFTP server for all communications
   Accepts job requests and formulates jobs for the LRMS/fork
   Performs most data movement (stage-in and stage-out) and cache management; interacts with data indexing systems
   Manages the user work area
 – Performs all kinds of job management upon client request
 – Publishes system and job information

Components
 Client: a lightweight User Interface with a built-in Resource Broker
 – A set of command line utilities (a usage sketch follows below)
 – Minimal and simple
 – Under the hood: resource discovery, matchmaking, optimization, job submission
 – Complete support for single-job management
 – Basic functionality for multiple-job management
 – Support for single-file manipulations
 Portals and GUI clients are being developed
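To make the client workflow concrete, here is a minimal sketch of a job description in xRSL (ARC's RSL dialect) and a matching command-line session with the ARC 0.x tools. The attribute set, option names and cluster name are illustrative assumptions; consult the NorduGrid documentation for the exact syntax of a given release.

   # hello.xrsl: a minimal ARC job description (attribute names per ARC 0.x xRSL)
   &(executable="/bin/echo")
    (arguments="hello grid")
    (stdout="hello.out")
    (jobname="hello-test")
    (cputime="10 minutes")

   # Submit, monitor and fetch the output; the cluster name is hypothetical.
   # ngsub prints a job ID of the form gsiftp://<front-end>:2811/jobs/<id>.
   ngsub -f hello.xrsl -c grid.example.org
   ngstat <jobid>
   ngget <jobid>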

Components
 Information System: based on Globus-patched OpenLDAP; uses GRIS and GIIS back-ends
 – Keeps a strict registration hierarchy
 – Multi-rooted
 – Effectively provides a pseudo-mesh architecture, similar to file-sharing networks
 – Information is kept only on the resource, and is never older than 30 seconds
 – Own schema and providers (example query below)
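Since the information system is plain (patched) OpenLDAP, it can be inspected with standard LDAP tools. A hedged sketch of a direct query against a front-end's local GRIS, assuming the conventional ARC port (2135), base DN and nordugrid schema attribute names; the host name is hypothetical:

   # List clusters published by a (hypothetical) ARC front-end's local GRIS
   ldapsearch -x -h grid.example.org -p 2135 \
       -b 'mds-vo-name=local,o=grid' \
       '(objectClass=nordugrid-cluster)' \
       nordugrid-cluster-name nordugrid-cluster-totalcpus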

Components
 Storage: any kind of storage system with a disk front-end
 – Own GridFTP server implementation with pluggable back-ends
   Ordinary file system access
   Grid Access Control List (GACL) based access (sketch below)
 – "Smart" Storage Element: a WS-based data service with direct support for indexing services (Globus RC, RLS)
 – Tape storage systems are being acquired
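GACL policies are small XML files attached to the storage area. A hedged sketch of what one could look like, following the structure of the GACL library; the DN and the exact element set are illustrative assumptions:

   <?xml version="1.0"?>
   <gacl version="0.0.1">
     <entry>
       <person>
         <dn>/O=Grid/O=NorduGrid/OU=example.org/CN=Jane Doe</dn>
       </person>
       <!-- grant read, directory listing and write to this identity -->
       <allow><read/><list/><write/></allow>
     </entry>
   </gacl>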

Distribution, availability
 At ftp.nordugrid.org:
 – Stable releases, including:
   Binary RPMs and tar-balls for most Linux platforms
   Source RPMs
   Standalone client tar-ball for installation by a non-privileged user
   – Only 13 MB when unpacked
   – Contains all the EU Grid PMA approved CA keys
   – Includes all the basic Globus client tools
 – Weekly development builds
 – Nightly builds
 CVS at cvs.nordugrid.org
 License: GPL
 More info, complete documentation and contacts at www.nordugrid.org

Multi-middleware clusters
 The simplest form of interoperability is co-existence
 Clusters with ARC and LCG co-existing:
 – Germany: FZK (1200 CPUs)
 – Switzerland: PHOENIX (35 CPUs)
 – Sweden: Ingrid, Bluesmoke (200 CPUs)
 – Denmark: Morpheus (almost there), Steno (scheduled) (400 CPUs)
 Not nice for site administrators, though: they need to install and maintain two middleware stacks…

Common Interfaces

  Service/component   LCG-2, gLite                      ARC
  Basis               GT2 from VDT                      GT2 with own patches, GT3 pre-WS
  Data transfer       GridFTP, SRM v? (DPM)             GridFTP, SRM v1.1 client
  Data management     EDG RLS, Fireman & Co, LFC        RC, RLS, Fireman
  Information         LDAP, GLUE 1.1, MDS+BDII, R-GMA   LDAP, ARC schema, MDS-GIIS
  Job description     JDL (based on ClassAds)           RSL
  Job submission      Condor-G to GRAM                  GridFTP
  VO management       VOMS, gLite VOMS, CAS (?)         VOMS

 The stacks diverge mainly on:
 – Job submission and description (work on JSDL is in progress)
 – Information system
 Note on job description:
 – The "Rome" Common Resource Management initiative (including Globus, UNICORE, LCG, EGEE, NorduGrid, NAREGI) converged on usage of GGF JSDL for job description
   JSDL v1.0 is still rudimentary, but it is the least common denominator
 – ARC now supports JSDL at the Grid Manager level (a sketch follows below)
 – xRSL-to-JSDL translation in the client is in alpha stage
 Note on the information system:
 – The GLUE2 schema is expected to be developed soon, with participation of NorduGrid, OSG and others. All chances to get a common resource representation
 Note on OGSA, from Mark Linesch (GGF Chairman), June 2005:
 – "OGSA is in the early stages of development and standardization"
 – "GGF distinguishes between the OGSA architectural process, OGSA profiles and specifications, and OGSA software. All of these are important to maintain coherence around OGSA and Grid standards"
 – "At the time of writing, we have OGSA Use Case documents, OGSA Architectural documents and drafts of an OGSA Basic Profile document and OGSA Roadmap document. We do not yet have any OGSA-compliant software implementations or OGSA compliance tests"
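Since JSDL is the agreed convergence point, here is a brief sketch of the job from the client example above expressed as JSDL. It follows the structure of the JSDL 1.0 specification; the namespaces shown are those of the final GFD.56 document and may differ from the 2005 drafts that the Rome initiative was tracking at the time:

   <jsdl:JobDefinition
       xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
       xmlns:posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
     <jsdl:JobDescription>
       <jsdl:Application>
         <posix:POSIXApplication>
           <posix:Executable>/bin/echo</posix:Executable>
           <posix:Argument>hello grid</posix:Argument>
           <posix:Output>hello.out</posix:Output>
         </posix:POSIXApplication>
       </jsdl:Application>
     </jsdl:JobDescription>
   </jsdl:JobDefinition>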

Common Interfaces: higher level
 Condor / Condor-G
 – LCG supports submission via Condor-G natively
 – LCG supports Condor as a queuing system
 – ARC supports Condor as a queuing system
 – Cooperation between ARC and Condor led, in October 2004, to a Condor-G version that can submit jobs to the ARC GridFTP interface (the translation from the ARC infosystem schema to GLUE was developed by Rod Walker). It was meant to be used by LCG, but nobody has configured an RB this way yet
 Perhaps the most important common interface? (see the sketch below)
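For illustration, a hedged sketch of a Condor-G submit file targeting an ARC resource with the "nordugrid" grid type that came out of this cooperation. The keyword syntax changed across Condor releases (the 2004-era form differed from the later grid-universe form shown here), and the host name is hypothetical:

   # Condor-G submit description (sketch); arc-ce.example.org is hypothetical
   universe      = grid
   grid_resource = nordugrid arc-ce.example.org
   executable    = analysis.sh
   output        = job.out
   error         = job.err
   log           = job.log
   should_transfer_files   = YES
   when_to_transfer_output = ON_EXIT
   queue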

Gateways
 Work plan proposed at the CERN meeting:
 – WP1: Document the LCG CE (David Smith)
 – WP2: LCG-to-ARC job submission (Laurence Field)
 – WP3: ARC-to-LCG job submission (Balazs Konya)
 – WP4: Service discovery (Laurence Field)
 – WP5: GLUE2 (long term…)
 Work plan about to start; next status meeting: CERN, October 31st

Gateways
 Possible submission scheme from LCG to ARC:
 – Set up an LCG CE using Condor as the LRMS
 – Set up a Condor-G queue to submit to ARC
 Possible submission scheme from ARC to LCG:
 – Set up an ARC CE using Condor as the LRMS
 – Set up a Condor-G queue to submit to LCG

Conclusions
 Initiate the gateway work plan
 – (rather sooner than later)
 Tight schedule: LHC is closing in…
 ARC interoperability is part of EGEE-II SA3
 – Enables easy use of several thousand extra CPUs in LCG
 – Ensures gateway support

End of Slideshow
Extra Slides…

Major Architectural Differences
 Problem: Globus GRAM submission does not work for many jobs
 – Each new queued job spawns a process on the gatekeeper, which regularly executes a Perl script
 – Does not perform for more than ~400 jobs
 Solutions:
 – LCG: the Condor-G Grid Monitor kills the listening processes and replaces them with one per-user process forked on the gatekeeper
 – ARC: own Grid Manager, which interacts with the local queuing system; submission is via GridFTP

Major Architectural Differences
 Problem: Globus MDS is badly implemented
 – Problems with caching
 – Schema not complete
 Solutions:
 – LCG: Berkeley DB based MDS implementation, the BDII; introduces the GLUE schema (example query below)
 – ARC: no caching in the information system; MDS is used in a file-sharing-like network with super nodes; additions to the MDS schema
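The divergence is visible from the client side: both information systems answer standard LDAP queries, but with different schemas, ports and base DNs. A hedged counterpart to the earlier GRIS query, this time against an LCG BDII publishing GLUE 1.x; the conventional BDII port is 2170, and the host name and attribute selection are illustrative assumptions:

   # List computing elements published by a (hypothetical) LCG BDII
   ldapsearch -x -h bdii.example.org -p 2170 \
       -b 'o=grid' \
       '(objectClass=GlueCE)' \
       GlueCEUniqueID GlueCEInfoTotalCPUs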

Major Architectural Differences
 Problem: Globus has no broker
 Solutions:
 – LCG: a central broker service (the Resource Broker)
 – ARC: brokering is done on the fly by the client