Storage Interfaces Introduction Wahid Bhimji University of Edinburgh Based on previous discussions with Working Group: (Brian Bockelman, Simone Campana, Philippe Charpentier, Dirk Duellmann, Oliver Keeble, Paul Millar, Markus Schulz)

Short reminder of motivation and mandate
For more details see the previous GDB presentation and the TEG report.
Little of the current management interface (SRM) is actually used:
- Performance overheads for experiments.
- Maintenance burden on developers.
- Restricts sites' technology choices.
The WG was proposed by the TEG and presented to the MB on 19/6; the 1st meeting was 21 Sep. This meeting is a "working" one.

Slightly Refined Mandate
- Building on the Storage/Data TEG, clarify for disk-only systems the minimal functionality needed of a WLCG Storage Management Interface.
- Evaluate alternative interfaces as they emerge, calling for tests where interesting, and recommend those shown to be interoperable, scalable and supportable.
- Help ensure that these alternatives can be supported by FTS and lcg_utils, to allow interoperability.
- Meetings to coincide with GDBs, plus extras on demand: presentations from developers / sites / experiments covering activity.
We are not designing a replacement interface to SRM, but there are already activities in this direction, so we are bringing these together and coordinating them.

Brief functionality table (see also the LHCb talk and backup slides):

Function                         | ATLAS | CMS | LHCb | Existing alternative (to SRM) or issue
Transfer: 3rd party (FTS)        | YES   | YES | YES  | Plain gridFTP used in EOS (ATLAS) and Nebraska (CMS); what about other SEs?
Transfer: job in/out (LAN)       | YES   | YES | YES  | ATLAS and CMS use LAN protocols directly
Negotiate a transport protocol   | NO    | NO  | YES  | LHCb use lcg-getturls
Transfer: direct download        | YES   | NO  |      | ATLAS use SRM via lcg-cp; alternative plugins in rucio
Namespace: manipulation/deletion | YES   | YES | YES  | ATLAS: deletion would need a plugin for an alternative
Space query                      | YES   | NO  | YES? | Development required
Space upload                     | YES   | NO  | YES? | Minor development required
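The first row notes that third-party transfers can already run over plain gridFTP, bypassing SRM, at some SEs. A minimal sketch of what that implies: rewriting an SRM SURL into a gridFTP URL before handing it to the transfer machinery. The hostname, port and SURL forms below are illustrative assumptions, not any site's real configuration.

```python
# Hypothetical sketch: rewrite an SRM SURL into a plain gridFTP URL so a
# third-party copy (e.g. via FTS) can skip the SRM layer. Hosts/paths are
# invented; real sites may use different ports or SURL conventions.
import re

def surl_to_gsiftp(surl: str) -> str:
    """Map srm://host[:port][/srm/managerv2?SFN=]/path to gsiftp://host:2811/path."""
    m = re.match(r"srm://([^:/]+)(?::\d+)?(/.*)$", surl)
    if not m:
        raise ValueError(f"not an SRM SURL: {surl}")
    host, rest = m.groups()
    # Strip the SRM web-service prefix if the SURL uses the full form.
    sfn = re.match(r".*\?SFN=(/.*)$", rest)
    path = sfn.group(1) if sfn else rest
    return f"gsiftp://{host}:2811{path}"

print(surl_to_gsiftp("srm://srm.example.org:8446/srm/managerv2?SFN=/atlas/data/file1"))
# gsiftp://srm.example.org:2811/atlas/data/file1
```

The open question in the table ("what about other SEs?") is exactly whether such a direct mapping exists and is stable on every storage implementation.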

Areas requiring possible development

Needed by | Development for | Issue
ATLAS     | Middleware      | Reporting of space used in space tokens: WebDAV quotas?
ATLAS     | Middleware      | Targeting upload to a space token: could just use the namespace, but certain SEs would need to change the way they report space to reflect this
ATLAS     | ATLAS?          | Deletion
LHCb      | Middleware      | SURL -> TURL (see Philippe's talk / discussion)
Any?      | Middleware      | Checksum check: confirm not needed?
All?      | Middleware      | Transfers using pure gridFTP on different storage types
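The "WebDAV quotas?" idea in the first row could build on the standard quota properties from RFC 4331 (quota-used-bytes / quota-available-bytes), queried per space-token directory. A sketch under that assumption: the PROPFIND body a client would send, and parsing of a hand-written example 207 Multi-Status response. The path and numbers are made up; whether a given SE implements these properties is exactly the open development question.

```python
# Illustrative only: RFC 4331 WebDAV quota properties as a possible way to
# report space used in a space token. Endpoint, path and values are invented.
import xml.etree.ElementTree as ET

PROPFIND_BODY = """<?xml version="1.0"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:quota-used-bytes/>
    <D:quota-available-bytes/>
  </D:prop>
</D:propfind>"""

def parse_quota(multistatus_xml: str):
    """Extract (used, available) bytes from a 207 Multi-Status response."""
    ns = {"D": "DAV:"}
    root = ET.fromstring(multistatus_xml)
    used = int(root.findtext(".//D:quota-used-bytes", namespaces=ns))
    avail = int(root.findtext(".//D:quota-available-bytes", namespaces=ns))
    return used, avail

# A hand-written example response, as a server might return it:
sample = """<?xml version="1.0"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/atlas/atlasdatadisk/</D:href>
    <D:propstat>
      <D:prop>
        <D:quota-used-bytes>1099511627776</D:quota-used-bytes>
        <D:quota-available-bytes>549755813888</D:quota-available-bytes>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>"""

print(parse_quota(sample))
```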

Summary and goals for this session
Experiments: current and future data models.
- CMS: no "blockers" for non-SRM usage.
- ATLAS: some, which may be resolved in the next generation of data management (rucio).
- LHCb have some more.
Sites: those looking at moving to possible technologies without SRM, e.g. CERN, RAL.
Middleware and tool development: e.g. DPM and FTS.
Finalize the functionality map; identify blocking issues and needed development.
Links to other activity:
- Accounting (StAR) and publishing (GLUE/BDII): the latter is only minimally used (and the former doesn't exist yet). Is this within the scope of the WG?
- Federation WG: in the medium term there will still be another interface; the longer-term use of federation is not yet clear.

Extra Slides

Table of used functions from the TEG
Somewhat simplified, with those only relevant for Archive/T1 removed. Still probably can't be read (!), but a couple of observations: not that much is needed; e.g. space management is only querying, and not even that for CMS.

The "getturls" issue (my short summary)
LHCb use SRM to get a TURL:
- For WAN: load balancing of gridFTP servers.
- For WAN: sites don't want to expose the names of "doors".
- For LAN: it tells the job which protocol to use.
CMS use a "rule-based" lookup method and are happy with it.
ATLAS have a zoo of site-specific regexps etc., but see an advantage in simplification.
Ways forward:
- WAN: gridFTP redirection or DNS balancing.
- LAN: force a single protocol (e.g. xroot or http) and/or path.
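The "zoo of site-specific regexps" alternative to asking SRM for a TURL can be sketched as a per-site rewrite table. The site names, hosts and paths below are invented for illustration; the point is only the shape of the mechanism, and why it is fragile enough that a single forced protocol/path convention looks attractive.

```python
# Toy version of site-specific SURL -> TURL rules (the ATLAS "regexp zoo").
# Each site carries its own rewrite producing a local xroot TURL, instead of
# negotiating via SRM. All names here are hypothetical.
import re

SITE_RULES = {
    "SITE_A": (r"^srm://se-a\.example\.org(?::\d+)?(?:/srm/managerv2\?SFN=)?(/.+)$",
               r"root://xrd-a.example.org/\1"),
    "SITE_B": (r"^srm://se-b\.example\.org(?::\d+)?(/.+)$",
               r"root://xrd-b.example.org//data\1"),
}

def surl_to_turl(site: str, surl: str) -> str:
    """Apply the site's rewrite rule; fail loudly if it does not match."""
    pattern, repl = SITE_RULES[site]
    turl, n = re.subn(pattern, repl, surl)
    if n == 0:
        raise ValueError(f"no rule matched {surl} at {site}")
    return turl

print(surl_to_turl("SITE_A",
    "srm://se-a.example.org:8446/srm/managerv2?SFN=/atlas/file1"))
# root://xrd-a.example.org//atlas/file1
```

Every site needs its own entry, and any change to a site's storage layout silently breaks the rule — which is the argument for the "force a single protocol and/or path" way forward above.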