gLite, the next generation middleware for Grid computing Oxana Smirnova (Lund/CERN) Nordic Grid Neighborhood Meeting Linköping, October 20, 2004 Uses material from E.Laure and F.Hemmer

2 gLite
What is gLite:
 "the next generation middleware for grid computing"
 "collaborative efforts of more than 80 people in 10 different academic and industrial research centers"
 "part of the EGEE project"
 "bleeding-edge, best-of-breed framework for building grid applications tapping into the power of distributed computing and storage resources across the Internet"
(quotes from the EGEE Activity Areas)
Nordic contributors: HIP, PDC, UiB

3 Architecture guiding principles
Service-oriented approach
Lightweight services
 Easily and quickly deployable
 Use existing services where possible as a basis for re-engineering
 "Lightweight" does not mean fewer services or non-intrusiveness – it means modularity
Interoperability
 Allow for multiple implementations
Performance/scalability & resilience/fault tolerance
 Large-scale deployment and continuous usage
Portability
 Being built on Scientific Linux and Windows
Co-existence with deployed infrastructure
 Reduce requirements on participating sites
 Flexible service deployment
 Multiple services running on the same physical machine (if possible)
 Co-existence with LCG-2 and OSG (US) is essential for the EGEE Grid service
60+ external dependencies …

4 Service-oriented approach
By adopting the Open Grid Services Architecture, with components that are:
 Loosely coupled (by messages)
 Accessible across the network; modular and self-contained; with clean modes of failure
 Able to change implementation without changing interfaces
 Able to be developed in anticipation of new use cases
Follow WSRF standardization
 No mature WSRF implementations exist to date, so start with plain WS
 WSRF compliance is not an immediate goal, but the WSRF evolution is followed
 WS-I compliance is important
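The "change implementation without changing interfaces" point can be sketched as a stable service contract with interchangeable backends. This is an illustrative model only; the class names below are hypothetical and do not correspond to an actual gLite interface or WSDL.

```python
# Illustrative sketch: clients code against a stable interface, so the
# implementation behind it can be swapped without touching callers.

class JobSubmissionService:
    """The stable, network-facing contract (hypothetical)."""
    def submit(self, job_description):
        raise NotImplementedError

class BatchBackend(JobSubmissionService):
    """One implementation: hand the job to a batch system."""
    def submit(self, job_description):
        return {"status": "queued", "via": "batch"}

class PullBackend(JobSubmissionService):
    """Another implementation: park the job in a task queue to be pulled."""
    def submit(self, job_description):
        return {"status": "queued", "via": "task-queue"}

def client(service):
    # The client sees only the interface; either backend works unchanged.
    return service.submit({"executable": "/bin/hostname"})
```

Either backend can be deployed behind the same interface, which is the loose-coupling property the slide is after.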

5 gLite vs LCG-2
[Diagram: LCG-1 → LCG-2 (Globus 2 based); gLite-1 → gLite-2 (Web services based)]
Intended to replace LCG-2
Starts with existing components
Aims to address LCG-2 shortcomings and advanced needs from applications (in particular feedback from the data challenges)
Prototyping: short development cycles for fast user feedback
Initial web-services based prototypes being tested with representatives from the application groups

6 Approach
Exploit experience and components from existing projects
 AliEn, VDT, EDG, LCG, and others
Design team works out the architecture and design
 Architecture and design documents
 Feedback and guidance from EGEE PTF, EGEE NA4, LCG GAG, LCG Operations, LCG ARDA
Components are initially deployed on a prototype infrastructure
 Small scale (CERN & Univ. of Wisconsin)
 Get user feedback on service semantics and interfaces
After internal integration and testing, components are deployed on the pre-production service

7 Subsystems/components
[Table mapping LCG-2 components to gLite services, with AliEn heritage noted]
Services include: User Interface, Computing Element, Worker Node, Workload Management System, Package Management, Job Provenance, Logging and Bookkeeping, Data Management, Information & Monitoring, Job Monitoring, Accounting, Site Proxy, Security, Fabric management

8 Workload Management System

9 Computing Element
Works in push or pull mode
Site policy enforcement
Exploits the new Globus GK and Condor-C (close interaction with the Globus and Condor teams)
Legend:
 CEA … Computing Element Acceptance
 JC … Job Controller
 MON … Monitoring
 LRMS … Local Resource Management System
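The push/pull distinction can be illustrated with a toy dispatch model: in push mode the workload manager offers a job to a chosen CE, while in pull mode an idle CE fetches work from a task queue. All names below are hypothetical; this is not the real gLite/Condor-C interface.

```python
from collections import deque

class ComputingElement:
    """Toy CE with a fixed number of slots; accept() is the site-policy hook."""
    def __init__(self, name, max_slots):
        self.name, self.free_slots, self.running = name, max_slots, []

    def accept(self, job):
        # A real CE would enforce site policy here; we only check capacity.
        if self.free_slots == 0:
            return False
        self.free_slots -= 1
        self.running.append(job)
        return True

def push_dispatch(job, ces):
    """Push mode: the WMS tries CEs in order until one accepts the job."""
    for ce in ces:
        if ce.accept(job):
            return ce.name
    return None

def pull_dispatch(ce, task_queue):
    """Pull mode: an idle CE drains jobs from the queue while it has slots."""
    while ce.free_slots and task_queue:
        ce.accept(task_queue.popleft())

ce1, ce2 = ComputingElement("ce1", 1), ComputingElement("ce2", 2)
first = push_dispatch("jobA", [ce1, ce2])   # lands on ce1
second = push_dispatch("jobB", [ce1, ce2])  # ce1 full, lands on ce2
queue = deque(["jobC", "jobD"])
pull_dispatch(ce2, queue)                   # ce2 pulls until its slots fill
```

The same `accept` policy hook serves both modes, which is the point of having the CE own admission decisions.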

10 Data Management
Scheduled data transfers (like jobs)
Reliable file transfer
Site self-consistency
SRM-based storage
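"Reliable file transfer" here means transfers are queued like jobs and retried on failure. A minimal sketch of that idea, assuming a hypothetical `do_transfer` callback (this is not the real gLite File Transfer Service API):

```python
def run_transfers(requests, do_transfer, max_attempts=3):
    """Attempt each queued (src, dst) transfer; retry failures up to max_attempts."""
    done, failed = [], []
    for src, dst in requests:
        for _attempt in range(max_attempts):
            if do_transfer(src, dst):
                done.append((src, dst))
                break
        else:
            failed.append((src, dst))  # exhausted retries
    return done, failed

# Fake transfer that fails once and then succeeds, to exercise the retry path.
calls = {"n": 0}
def flaky(src, dst):
    calls["n"] += 1
    return calls["n"] > 1

done, failed = run_transfers([("srm://a/f", "srm://b/f")], flaky)
```

A production scheduler would add persistence and per-channel policy, but the retry loop is the core of the reliability guarantee.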

11 Storage Element Interfaces
SRM interface
 Management and control
 SRM 1.1 (with possible evolution)
POSIX-like File I/O
 File access: open, read, write
 Not real POSIX (similar to rfio)
[Diagram: user File I/O and Control pass through a POSIX API and the SRM interface to backends (rfio, dcap, chirp, aio) over Castor, dCache, NeST, and disk]
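The "POSIX-like but not real POSIX" point can be modelled as a thin file facade over a storage backend: clients get open/read/write/close, but the handle is not a real file descriptor. The classes below are hypothetical stand-ins, not the actual gLite-I/O or SRM interfaces.

```python
import io

class DictBackend:
    """Hypothetical stand-in for an SRM-managed storage element."""
    def __init__(self):
        self._store = {}
    def get(self, surl):
        return self._store[surl]
    def put(self, surl, data):
        self._store[surl] = data

class GridFile:
    """POSIX-like handle offering only a subset of file semantics."""
    def __init__(self, backend, surl, mode):
        initial = backend.get(surl) if "r" in mode else b""
        self._buf = io.BytesIO(initial)
        self._backend, self._surl, self._mode = backend, surl, mode

    def read(self, n=-1):
        return self._buf.read(n)

    def write(self, data):
        return self._buf.write(data)

    def close(self):
        # Writes become visible on the SE only at close, unlike real POSIX.
        if "w" in self._mode:
            self._backend.put(self._surl, self._buf.getvalue())

se = DictBackend()
f = GridFile(se, "srm://se.example/data/a.dat", "w")
f.write(b"hello")
f.close()
g = GridFile(se, "srm://se.example/data/a.dat", "r")
```

Deferring visibility to `close()` is one concrete way such an interface departs from real POSIX semantics.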

12 Catalogs
File Catalog
 Filesystem-like view on logical file names
 Keeps track of the sites where data is stored
 Conflict resolution
Replica Catalog
 Keeps replica information at a site
Metadata Catalog
 Attributes of files on the logical level
 Boundary between generic middleware and application layer
[Diagram: an LFN maps to a GUID in the File Catalog; per-site Replica Catalogs map the GUID to SURLs at Site A and Site B]
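The catalog layering described above (LFN resolved to a GUID centrally, GUID resolved to physical SURLs per site) can be sketched as follows. This is an illustrative in-memory model with hypothetical names, not the real gLite catalog API.

```python
class FileCatalog:
    """Filesystem-like view: LFN -> GUID, plus the sites holding replicas."""
    def __init__(self):
        self._lfn_to_guid = {}
        self._guid_to_sites = {}

    def register(self, lfn, guid, site):
        self._lfn_to_guid[lfn] = guid
        self._guid_to_sites.setdefault(guid, []).append(site)

    def lookup(self, lfn):
        guid = self._lfn_to_guid[lfn]
        return guid, self._guid_to_sites[guid]

class ReplicaCatalog:
    """Per-site catalog: GUID -> physical SURLs stored at that site."""
    def __init__(self, site):
        self.site = site
        self._replicas = {}

    def add_replica(self, guid, surl):
        self._replicas.setdefault(guid, []).append(surl)

    def surls(self, guid):
        return self._replicas.get(guid, [])

def resolve(lfn, file_catalog, site_catalogs):
    """Resolve a logical file name to every physical replica, site by site."""
    guid, sites = file_catalog.lookup(lfn)
    return [surl for site in sites for surl in site_catalogs[site].surls(guid)]

fc = FileCatalog()
rc_a = ReplicaCatalog("SiteA")
fc.register("/grid/atlas/run1.root", "guid-0001", "SiteA")
rc_a.add_replica("guid-0001", "srm://se.sitea.example/data/run1.root")
replicas = resolve("/grid/atlas/run1.root", fc, {"SiteA": rc_a})
```

Keeping the GUID→SURL step inside each site is what lets a site rename or move storage without touching the global File Catalog.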

13 Information and Monitoring
R-GMA for
 Information system and system monitoring
 Application monitoring
No major changes in architecture
 But re-engineer and harden the system
Co-existence and interoperability with other systems is a goal
 E.g. MonALISA
[Diagram: D0 application monitoring — job wrappers feed Memory Primary Producers (MPP) into a Database Secondary Producer (DbSP)]
 MPP … Memory Primary Producer
 DbSP … Database Secondary Producer
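R-GMA presents monitoring information as a virtual relational database: producers publish tuples into a table, and consumers run queries over it. The sketch below is an illustrative in-memory model of that idea, not the R-GMA API.

```python
class VirtualTable:
    """Toy model of one R-GMA-style virtual table."""
    def __init__(self, columns):
        self.columns, self.rows = columns, []

    def publish(self, **tuple_):
        # Role of a primary producer: insert a tuple into the virtual table.
        self.rows.append({c: tuple_.get(c) for c in self.columns})

    def select(self, **where):
        # Role of a consumer: a simple equality query over the table.
        return [r for r in self.rows
                if all(r[k] == v for k, v in where.items())]

jobs = VirtualTable(["job_id", "site", "status"])
jobs.publish(job_id="42", site="CERN", status="RUNNING")
jobs.publish(job_id="43", site="Wisc", status="DONE")
cern_jobs = jobs.select(site="CERN")
```

A secondary producer in this model would simply be a consumer that republishes its query results into a database-backed table.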

14 Security
[Diagram: user Joe obtains Grid (X.509) credentials; an optional Pseudonymity Service maps "Joe → Zyx" and issues Joe's privileges to the pseudonymous identity ("User=Zyx, Issuer=Pseudo CA"). Building blocks: credential storage (myProxy), attribute authority (VOMS), GSI, LCAS/LCMAPS; some components still to be determined]
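The site-side flow above (authorize the credential, then map it to a local account) can be caricatured in a few lines. This is a toy model of the LCAS/LCMAPS roles with hypothetical names; no real certificates, VOMS calls, or cryptography are involved.

```python
def authorize(subject, attributes, banned, vo_map):
    """LCAS-like decision followed by an LCMAPS-like account mapping."""
    if subject in banned:
        return None                    # authorization denied outright
    for attr in attributes:            # e.g. "/atlas/Role=production"
        vo = attr.split("/")[1]        # the VO is the first path component
        if vo in vo_map:
            return vo_map[vo]          # local account for that VO
    return None                        # no recognized VO attribute

local = authorize(
    "/C=CH/O=CERN/CN=Joe",
    ["/atlas/Role=production"],
    banned=set(),
    vo_map={"atlas": "atlasprd001"},
)
```

Separating the yes/no decision from the identity mapping is the design point: sites can change banning policy and account pools independently.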

15 GAS & Package Manager
Grid Access Service (GAS)
 Discovers and manages services on behalf of the user
 File and metadata catalogs already integrated
Package Manager
 Provides application software at the execution site
 Based upon existing solutions
 Details being worked out together with the experiments and operations

16 Current Prototype
WMS
 AliEn TaskQueue, EDG WMS, EDG L&B (CNAF)
CE (CERN, Wisconsin)
 Globus Gatekeeper, Condor-C, PBS/LSF, "pull component" (AliEn CE)
WN
 23 at CERN + 1 at Wisconsin
SE (CERN, Wisconsin)
 External SRM implementations (dCache, Castor), gLite-I/O
Catalogs (CERN)
 AliEn FileCatalog, RLS (EDG), gLite Replica Catalog
Data Scheduling (CERN)
 File Transfer Service (Stork)
Data Transfer (CERN, Wisconsin)
 GridFTP
Metadata Catalog (CERN)
 Simple interface defined
Information & Monitoring (CERN, Wisconsin)
 R-GMA
Security
 VOMS (CERN), myProxy, grid-mapfile and GSI security
User Interface (CERN & Wisconsin)
 AliEn shell, CLIs and APIs, GAS
Package Manager
 Prototype based on AliEn PM

17 Summary, plans
Most Grid systems (including LCG-2) are batch-job production oriented; gLite addresses distributed analysis
 They will most likely co-exist, at least for a while
A prototype exists, and new services are being added:
 Dynamic accounts, gLite CEMon, Globus RLS, File Placement Service, Data Scheduler, fine-grained authorization, accounting…
A pre-production testbed is being set up
 More sites, tested/stable services
First release due end of March 2005
 Functionality freeze at Christmas
 Intense integration and testing period from January to March
2nd release candidate: November 2005
May: revised architecture document; June: revised design document