Slide 1 - OSG Baseline Services (R. Gardner, 6/23/2005)
In my talk I'd like to discuss two questions:
 What capabilities are we aiming for in 2005/6?
 How do we introduce new services into the OSG?

Slide 2 - Guidance for Capabilities: take a wish list of the present
Principles and paths to deployment are guided by the essential needs of the participating VOs.
Example list:
 Ability to store, serve, catalog, manage and discover collaboration-wide datasets on a very large scale
 Ability to access non-dedicated resources opportunistically
 Ability to host VO-managed services and agents on gatekeeper hosts

Slide 3 - …with hard lessons from the past
Protect grid services  they are vulnerable in a multi-VO environment
Managed data transfer services are required  data staging is the most likely point of failure
Policy-based authorization infrastructure  to distinguish user roles within a VO
Delay binding jobs to resources until the last possible moment  to optimize utilization
Robustness and reliability are required  to keep operating costs low

Slide 4 - …and performance targets for the (near!) future
Submission of collections of O(1000) jobs should happen within a few seconds; even a typical data analysis task is expected to translate into the submission of O(1000) jobs.
The WMS needs to be able to keep all available resources busy.
Overall reliability must be such that task completion is generally achieved within fewer than 3 retries, even for large tasks. For late 2007 this implies a >95% success rate per job (a rough sanity check of this figure is sketched below).
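As an illustration of how the per-job success rate relates to whole-task completion, here is a minimal sketch, not from the talk, assuming that "task completion within 3 retries" means every job succeeds within at most three attempts and that attempts are independent:

```python
# Rough sanity check of the slide's reliability target (illustrative only).
# Assumption: "task completion within 3 retries" is read as "every job in the
# task succeeds within at most three attempts", with attempts independent.

def task_completion_probability(per_job_success, n_jobs=1000, max_attempts=3):
    """Probability that all n_jobs succeed within max_attempts tries each."""
    p_job_fails_every_attempt = (1.0 - per_job_success) ** max_attempts
    p_job_ok = 1.0 - p_job_fails_every_attempt
    return p_job_ok ** n_jobs

for p in (0.90, 0.95, 0.99):
    print(f"per-job success {p:.2f} -> 1000-job task completes within "
          f"3 attempts/job {task_completion_probability(p):.1%} of the time")
```

Under these assumptions, a 95% per-job success rate gives roughly an 88% chance that a 1000-job task finishes with no job needing more than three attempts, versus only about 37% at 90%, which is why the per-job target is set this high.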

Slide 5 - Example: ATLAS Production System Requirements for VO and Core Services
(Diagram showing the DDMS and WMS components of the ATLAS production system.)

Slide 6 - …introduces challenging distributed data management issues
A number of catalogs are in play - standardize on an interface
Traffic shaping and load balancing are required for I/O (SRM-dCache deployed on US Tier2 centers)
I/O and space resources are utilized based on VO policies
Site-level catalogs for managed Tier2 storage elements
Reliable transfer agents, also used by the WMS (a sketch of the retry pattern follows below)
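To make the "reliable transfer agent" idea concrete, here is a minimal sketch, not from the talk, of the retry-and-register pattern such an agent implements; transfer_file and register_replica are hypothetical stand-ins rather than real OSG middleware APIs:

```python
# Sketch of a reliable transfer agent: retry a transfer with backoff, then
# record the new replica in a site-level catalog.  The transfer_file and
# register_replica callables are hypothetical stand-ins for an SRM/GridFTP
# client wrapper and a catalog API.
import time

def reliable_transfer(source_url, dest_url, transfer_file, register_replica,
                      max_attempts=3, backoff_s=30):
    """Attempt a transfer up to max_attempts times; return True on success."""
    for attempt in range(1, max_attempts + 1):
        try:
            transfer_file(source_url, dest_url)   # e.g. wraps an SRM/GridFTP copy
            register_replica(dest_url)            # record the replica in the site catalog
            return True
        except Exception as err:
            print(f"transfer attempt {attempt} failed: {err}")
            time.sleep(backoff_s * attempt)       # simple linear backoff
    return False
```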

Slide 7 - Managing Persistent Services
VO-owned and dedicated sites will allow running of "persistent" VO-specific services and agents for cataloging, space management, replication, web caching, and WMS-related services
Non-dedicated sites can be managed with VO services running at another site…
 …or by dynamically deploying VO services & agents

Slide 8 - Portal Partitions & Edge Services
Strategy for multiple VOs: partition resources into VO-managed and shared
Provision for hosting persistent "Guest" VO services & agents
(Diagram: OSG gatekeepers expose GRAM, GridFTP, GIP and SRM interfaces; VO-managed resources host VO services, agents, proxy caches and catalogs, while shared resources host guest VO services, agents and proxy caches.)

Slide 9 - Edge Services
Need to support VO-specific agents and services on "leased" gateways
Typical requirements (an illustrative example follows below):
 A local MySQL database
 Access to port 80 (or, more generally, a port range)
 I/O requirements will be declared in advance
 Needed for a time >> job execution time
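Purely as an illustration, and not a format defined by OSG, such a lease request could be captured as structured data along these lines; all field names and values here are hypothetical:

```python
# Hypothetical edge-service lease request mirroring the "typical" needs above.
edge_service_request = {
    "vo": "atlas",                  # requesting VO
    "services": ["mysql"],          # needs a local MySQL database
    "inbound_ports": [80],          # or, more generally, a port range
    "io_estimate_gb_per_day": 50,   # I/O requirements declared in advance
    "lease_duration_days": 90,      # lifetime >> single job execution time
}
```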

Slide 10 - Example Edge Service: Scalable Remote Data Access
ATLAS reconstruction and analysis jobs require access to remote database servers at CERN and elsewhere.
This presents additional traffic on the network, long latencies for remote sites, and a bottleneck at the central DB.
Suggest the use of local mechanisms, such as web proxy caches, to minimize this impact (a sketch of the pattern follows below).
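A minimal sketch of this pattern, assuming the database payloads are served over HTTP and the site runs a local caching proxy; the proxy host and payload URL below are hypothetical:

```python
# Route database payload reads through a site-local web proxy cache instead of
# hitting the central server directly.  Host names and URLs are hypothetical.
import urllib.request

SITE_PROXY = "http://proxy.example-tier2.edu:3128"   # assumed local caching proxy

def fetch_conditions(url):
    """Fetch a payload via the site proxy; repeated requests from jobs at the
    same site can then be served from the local cache."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": SITE_PROXY})
    )
    with opener.open(url, timeout=60) as resp:
        return resp.read()

# Example (hypothetical URL):
# payload = fetch_conditions("http://conditions.example.cern.ch/payload/123")
```

Because every job at a site resolves the same local proxy, repeated reads of the same payload are served from the cache rather than crossing the WAN to the central server.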

Slide 11 - Summary of OSG 0.4 targets (end 2005)
GT4 GridFTP already deployed; move to GT4 GRAM
Managed computing elements for multiple VOs
Edge services framework providing capabilities; VO-managed late binding of jobs to resources
Job sandbox inspection, globally
Policy and trust infrastructure
Data location services and transfer agents dynamically deployed via edge services
Site catalogs providing VO space management

Slide 12 - New Services in OSG: how?
Requirements and schedule are determined with the OSG deployment activity
Architectural coherence is maintained through participation with the blueprint group
Integrate middleware services from technology providers targeted for the OSG
Provide a testbed for evaluation and testing of new services and applications
Test and exercise installation and distribution methods
Provide feedback to service providers and VO application developers
Prepare release candidates for provisioning

Slide 13 - Service Readiness and Integration Plans
Service proponents come to the integration testbed with an appropriately scoped functionality and integration plan:
 Purpose and scope
 Service description
 Packaging description
 Dependencies: resources and services needed
 Test use cases identified
 Testing tools (clients, harness); metrics for success clearly defined
 Effort to contribute to the OSG-IVC, and schedule
 Links to appropriate documentation, WSDL, etc.

Slide 14 - Path for New Services in OSG
(Flow diagram: a readiness plan, with associated effort and resources, enters the OSG Integration Activity (release description, middleware interoperability, software & packaging, functionality & scalability tests, readiness plan adopted); a release candidate then passes to the OSG Deployment Activity (metrics & certification, application validation, VO application software installation) and on to service deployment through the OSG Operations-Provisioning Activity, with feedback flowing back.)

Slide 15 - OSG Integration Testbed Layout
(Diagram: the OSG Integration Testbed sits between the integration release and the stable production release; resources enter and leave as necessary, and the service platform hosts VO-contributed applications, test harnesses, and clients.)

Slide 16 - Deployed ITB (figure)

Slide 17 - Validation: CMS-MOP and ATLAS-Capone (figure)

Slide 18 - Validation Jobs on ITB 0.1.5 (figure)

Slide 19 - Validation: GT4 GridFTP (figure)

Slide 20 - Conclusions
OSG is driven by VO requirements, core capabilities, and guiding principles
Capabilities for the next major release introduce flexibility for maturing middleware, and VO boxes for delegated responsibility, via Edge Services
The paths for new services and the validation and release process within a contributed, consortium model are working reasonably well so far
The process is leading to a reliable core computing substrate with VO flexibility