1 OSG-LCG Interoperability Activity
Author: Laurence Field (CERN)
Enabling Grids for E-sciencE (EGEE) — INFSO-RI-508833 — www.eu-egee.org

2 Overview
Introduction
– LCG/EGEE
– OSG
– VDT
Why interoperability?
Abstraction of the grid middleware layer
Architecture overview of LCG and OSG
The interoperability matrix
Timeline of the activity
Important focus areas

3 WLCG
Large Hadron Collider (LHC) @ CERN
– ~5000 scientists
– ~500 institutes
– ~100,000 CPUs of 2004-spec computing power
Distributed solution
– The Grid model
– For data storage and analysis
Worldwide LHC Computing Grid (WLCG)
– Will build and maintain this infrastructure
Enabling Grids for E-sciencE (EGEE)
– Aims to establish a Grid infrastructure for science
– WLCG is part of the production environment for EGEE
– Re-engineers existing Grid middleware
  • Including that developed by the European DataGrid (EDG) project
  • To ensure that it is robust enough for production environments

4 OSG
The Open Science Grid (OSG)
– US grid computing infrastructure
– Supports scientific computing
– Schedule driven by the U.S. participants in the LHC
– Other projects contribute to and benefit from advances in grid technology
OSG has grown out of the Grid3 activity
– Grid3 started in 2003 as a joint project of:
  • U.S. LHC software
  • The National Science Foundation's GriPhyN and iVDGL projects
  • The Department of Energy's PPDG project
OSG Consortium
– Builds and operates the OSG

5 Why Interoperability?
OSG and LCG built upon previous Grid projects
– Grid3 and EDG respectively
Grid3 and EDG were developed independently
– However, both use middleware from the Globus Alliance
Initially, two separate universes… but:
– One user community
VOs using adaptors
– Keyhole approach
  • Using minimal functionality from both Grids
– Reduced status information
– Each VO duplicating work
Interoperability should improve the situation.

6 A Grid
[Diagram: Sites A and B each run a local storage system, a local batch system, and a local security mechanism, exposed to the grid through an abstract interface.]

7 Hourglass Model
[Diagram: site-specific systems below and VO/Grid-specific middleware above meet at a narrow waist of common interfaces: security, file transfer, job submission, service discovery, and monitoring.]
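To make the hourglass idea concrete, here is a minimal, illustrative Python sketch (not from the original slides): the grid layer programs against one abstract compute interface, and each site hides its local batch system behind an adapter. All class and method names are invented for illustration.

from abc import ABC, abstractmethod

class ComputeInterface(ABC):
    """Abstract 'waist' of the hourglass: what grid middleware programs against."""

    @abstractmethod
    def submit(self, executable: str, arguments: list[str]) -> str:
        """Submit a job and return a site-local job identifier."""

    @abstractmethod
    def status(self, job_id: str) -> str:
        """Return a coarse job state, e.g. 'queued', 'running', 'done'."""

class PbsSite(ComputeInterface):
    """Site A: would translate the abstract calls into PBS commands (stubbed here)."""

    def submit(self, executable, arguments):
        # A real adapter would build and run a 'qsub' script; this is a stub.
        return "pbs-0001"

    def status(self, job_id):
        return "queued"

class CondorSite(ComputeInterface):
    """Site B: would translate the abstract calls into Condor submissions (stubbed here)."""

    def submit(self, executable, arguments):
        # A real adapter would write a Condor submit file and call 'condor_submit'.
        return "condor-42"

    def status(self, job_id):
        return "running"

# The middleware above the waist only ever sees ComputeInterface.
for site in (PbsSite(), CondorSite()):
    job = site.submit("/bin/hostname", [])
    print(type(site).__name__, job, site.status(job))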

8 The Common Core
The Globus Alliance
– Involves several universities and research laboratories
– Conducts research and development to create fundamental Grid technologies
The Virtual Data Toolkit (VDT)
– Delivery channel for grid technologies
– Takes grid software from multiple sources
  • Including the Globus Alliance
– Conducts verification testing and packages the software
– Grid3 and EDG both contributed to and used the VDT

9 SRM
[Slide reproduced from CHEP 2004.]

10 LCG Architecture
The four principal service components:
– Computing Element (CE)
  • Globus gatekeeper
  • Globus Grid Resource Information Service (GRIS) publishing Glue schema information
– Storage Element (SE)
  • Globus GridFTP server
  • Storage Resource Manager (SRM)
  • Globus Grid Resource Information Service (GRIS) publishing Glue schema information
– Resource Broker (RB)
  • Job management service
  • Submits the job to a CE
– Berkeley Database Information Index (BDII)
  • Information aggregator
  • Similar to a Globus Grid Index Information Service (GIIS)
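As an illustration of how a client can discover resources through this information system, the small Python sketch below (not part of the original slides) queries a BDII over LDAP for GLUE CE entries. The hostname is a placeholder; port 2170 and the "o=grid" base are the conventional BDII settings, the attribute names follow GLUE 1.x, and the ldap3 package is assumed to be installed.

from ldap3 import ALL, Connection, Server

# Hypothetical BDII endpoint; replace with a real top-level BDII host.
BDII_HOST = "bdii.example.org"

server = Server(BDII_HOST, port=2170, get_info=ALL)
conn = Connection(server, auto_bind=True)  # anonymous bind; BDIIs allow read-only access

# Ask for all Computing Elements published in the GLUE schema.
conn.search(
    search_base="o=grid",
    search_filter="(objectClass=GlueCE)",
    attributes=["GlueCEUniqueID", "GlueCEStateStatus", "GlueCEInfoTotalCPUs"],
)

for entry in conn.entries:
    print(entry.GlueCEUniqueID, entry.GlueCEStateStatus)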

11 OSG Architecture
One basic node type
– Equivalent to the LCG Computing Element
  • But publishes Grid3 schema information
  • Also contains a GridFTP server, so data can be copied to and from a shared area
OSG also uses SRM
– Front end to the storage systems
Job submission
– Managed by each VO's own job submission framework
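Because an OSG gatekeeper speaks standard GRAM and GridFTP, a user (or a VO framework) can drive it with the ordinary Globus client tools. The sketch below (illustrative, not from the slides) shells out to globus-job-run and globus-url-copy from Python; the hostname, jobmanager name, and paths are placeholders, and a valid grid proxy is assumed.

import subprocess

# Hypothetical OSG Computing Element; replace with a real gatekeeper contact string.
GATEKEEPER = "ce.example.org/jobmanager-condor"

# Run a trivial job through GRAM on the OSG CE (requires a valid grid proxy).
subprocess.run(["globus-job-run", GATEKEEPER, "/bin/hostname"], check=True)

# Copy a local file into the CE's shared GridFTP area.
subprocess.run(
    [
        "globus-url-copy",
        "file:///tmp/input.dat",
        "gsiftp://ce.example.org/grid/shared/input.dat",
    ],
    check=True,
)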

12 Interoperability Matrix

                            OSG          LCG
Job Submission              GRAM         GRAM
Service Discovery           LDAP/GIIS    LDAP/BDII
Schema                      MDS/Grid3    GLUE
Storage Transfer Protocol   GridFTP      GridFTP
Storage Control Protocol    SRM          SRM

13 Timeline
November and December 2004
– Initial meeting with OSG to discuss interoperability
  • A common schema is the key
– Proposal for version 1.2 of the Glue Schema was discussed
  • Includes new attributes required by OSG (Marco Mambelli)
January 2005
– Proof of concept was tried (Leigh Grundhoefer, Indiana)
  • Installed the Generic Information Provider (GIP) on an OSG CE
  • Configured the OSG CE to support the dteam VO
  • A "Hello world" job was submitted through the LCG RB and ran on an OSG CE
  • Installed the LCG clients, available on OSG from a tarball (Oliver Keeble, CERN)
  • Submitted a test job that performed basic data management operations
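For context, a "Hello world" submission through the LCG RB is normally driven by a small JDL file and the edg-job-* command-line tools. The sketch below (illustrative, not taken from the slides) writes such a JDL and submits it from Python; it assumes an LCG User Interface with the EDG/LCG client tools installed and a valid proxy for the dteam VO.

import subprocess
from pathlib import Path

# Minimal JDL describing a "Hello world" job; attribute names follow the EDG/LCG JDL.
jdl = """\
Executable    = "/bin/echo";
Arguments     = "Hello world";
StdOutput     = "hello.out";
StdError      = "hello.err";
OutputSandbox = {"hello.out", "hello.err"};
"""

jdl_path = Path("hello.jdl")
jdl_path.write_text(jdl)

# Submit through the LCG Resource Broker (requires a valid proxy for the chosen VO).
subprocess.run(["edg-job-submit", "--vo", "dteam", str(jdl_path)], check=True)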

14 Timeline
Modifications to the OSG and LCG software releases
– Updated the GIP to publish version 1.2 of the Glue Schema
  • The GridFTP server on the OSG CE is advertised as an LCG SE
– Automatically configure the GIP in the OSG release
  • Information scavenger script (Shaowen Wang, Iowa)
August 2005 (month of focussed activity)
– Included the first OSG sites into the LCG operational framework
– Set up a BDII that represented these OSG sites
– Added this BDII to the LCG information system
– All OSG sites found in this BDII were automatically tested
  • Using the Site Functional Tests (SFT) framework
– Created a script to install the LCG clients on OSG CEs
November 2005
– First user jobs from GEANT4 arrived on OSG
– Started discussions on interoperations
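To give a feel for what the GIP actually publishes so that an OSG CE shows up in an LCG BDII, here is a small illustrative Python sketch (not from the slides) that renders a GLUE-style CE record as LDIF. The attribute names follow GLUE 1.x, but the host, queue, DIT placement, and values are placeholders, and real providers emit many more attributes.

ce_entry = {
    "dn": "GlueCEUniqueID=ce.example.org:2119/jobmanager-condor-default,mds-vo-name=local,o=grid",
    "objectClass": ["GlueCETop", "GlueCE", "GlueInformationService"],
    "GlueCEUniqueID": "ce.example.org:2119/jobmanager-condor-default",
    "GlueCEName": "default",
    "GlueCEInfoHostName": "ce.example.org",
    "GlueCEStateStatus": "Production",
}

def to_ldif(entry: dict) -> str:
    """Render a flat attribute dictionary as a simple LDIF record."""
    lines = [f"dn: {entry['dn']}"]
    for key, value in entry.items():
        if key == "dn":
            continue
        values = value if isinstance(value, list) else [value]
        lines.extend(f"{key}: {v}" for v in values)
    return "\n".join(lines)

print(to_ldif(ce_entry))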

15 Interface
Security
– Require a common security mechanism for all services
– Currently use the Grid Security Infrastructure (GSI)
  • Based on the use of X.509 certificates
– The Grid PMA manages the certification authorities
– Authorization
  • VOMS proposed as a solution
  • "VOMSification" of services needs to be done
  • How to translate capabilities between grids?
Service Discovery
– Currently done via the information system
– LDAP data model, a legacy from Globus MDS
– The information schema is an important aspect
– An attempt has been made to create a common schema, in the form of the Glue schema
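To make the GSI/VOMS point concrete: in practice a user obtains a short-lived X.509 proxy carrying VO membership attributes before contacting any service on either grid. The minimal sketch below (illustrative, not from the slides) drives the standard voms-proxy-* tools from Python; it assumes the VOMS clients are installed and the user's certificate is registered in the dteam VO.

import subprocess

# Create a proxy certificate carrying VOMS attributes for the dteam VO
# (assumes voms-proxy-init is on the PATH and the VO is configured locally).
subprocess.run(["voms-proxy-init", "--voms", "dteam"], check=True)

# Inspect the proxy: subject, VO, attribute FQANs, and remaining lifetime.
subprocess.run(["voms-proxy-info", "--all"], check=True)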

16 Interface
Computing interface
– OSG and LCG are currently using GRAM
  • The version used is dated and will be replaced
– Many different types of job submission interface exist, and there is no common agreement on a standard
– This interface, along with the Job Description Language, needs to be standardised
Storage interface
– SRM is generally accepted as the interface to storage
  • However, there have been problems with different interpretations of the specification and incompatible implementations
Monitoring interface
– An area that requires a great deal of attention!
– Many different monitoring systems, even within a single grid
– Need a schema that defines the essential metrics (Glue2?)
– An interface to retrieve them at the site level
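As a concrete illustration of SRM acting as the common storage control interface, the sketch below (not from the slides) copies a file out of an SRM-fronted Storage Element using the lcg-utils command-line client, driven from Python. The SE hostname and paths are placeholders, and a valid VOMS proxy plus the lcg-utils tools are assumed.

import subprocess

# Hypothetical SURL on an SRM-fronted Storage Element; replace with a real one.
SOURCE = "srm://se.example.org/dpm/example.org/home/dteam/input.dat"
DEST = "file:///tmp/input.dat"

# lcg-cp negotiates the transfer via SRM and performs it over GridFTP.
subprocess.run(["lcg-cp", "--vo", "dteam", SOURCE, DEST], check=True)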

17 Summary
Basic interoperability between OSG and LCG has been achieved!
– Moving to sustained production operations
The whole concept of a Grid is about interoperability
– The grid middleware is the abstraction layer between different systems
This activity reinforced the need for the grid abstraction layer
– Security, service discovery, computing, storage, monitoring
Monitoring needs specific attention
– Requires metrics and a monitoring interface
Common interfaces will enable different implementations while remaining interoperable
Interoperability introduces dependencies between grids
– Non-backwards-compatible upgrades become even harder
Don't forget: interoperating infrastructures also need interoperations!

