CASTOR SRM v1.1 experience Presentation at HEPiX MSS Forum 28/05/2004 Olof Bärring, CERN-IT.


Outline
– Brief overview of SRM v1.1
– CASTOR implementation
– Interoperability tests
– Problems found
  – SRM specification
  – GSI
– GGF: GSM WG
  – Input to the definition of SRM-Basic
– Conclusions and outlook

Brief overview of SRM v1.1
SRM = Storage Resource Manager
First (v1.0) interface definition
– October 22, 2001
– JLAB, FNAL and LBNL
– Some key features: transfer protocol negotiation, multi-file requests, asynchronous operations
SRM is a management interface
– Make files available for access (e.g. recall to disk)
– Prepare resources for receiving files (e.g. allocate disk space)
– Query status of requests or files managed by the SRM
– Not a WAN file transfer protocol
URLs
– SURL: site-specific URL, protocol neutral
  – srm://castorgrid.cern.ch/castor/home/me/test
– TURL: transfer URL, protocol specific
  – gsiftp://gridftp03.cern.ch/tmp/home/me/test
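The SURL/TURL distinction above can be illustrated with a small parsing sketch (plain Python `urllib.parse`; the example hostnames are the ones from the slide):

```python
from urllib.parse import urlparse

# A SURL is protocol-neutral: it names a file at a site,
# not how to transfer it.
surl = "srm://castorgrid.cern.ch/castor/home/me/test"

# A TURL is protocol-specific: the SRM returns it after
# transfer-protocol negotiation (here gsiftp).
turl = "gsiftp://gridftp03.cern.ch/tmp/home/me/test"

def scheme_and_path(url):
    """Split a URL into its scheme, host and path components."""
    p = urlparse(url)
    return p.scheme, p.hostname, p.path

print(scheme_and_path(surl))  # ('srm', 'castorgrid.cern.ch', '/castor/home/me/test')
print(scheme_and_path(turl))  # ('gsiftp', 'gridftp03.cern.ch', '/tmp/home/me/test')
```

Only the scheme and host change between the two; the management interface deals in SURLs and hands back TURLs for the actual transfer.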

SRM v1.0 operations

Asynchronous:
– get: recall from tape and pin on disk
– put: reserve disk space, pin and maybe make permanent

Synchronous/stateless:
– getRequestStatus: get the status of a running get/put
– setFileStatus: set the status of a file
– pin: pin file on disk
– unPin: cancel a previous pin operation
– mkPermanent: make existing file permanent
– getProtocols: get list of supported transfer/access protocols
– getFileMetadata: get file metadata
– advisoryDelete: recommend SRM to delete a file
– getEstGetTime: fake get for time estimation
– getEstPutTime: fake put for time estimation
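The asynchronous/synchronous split shapes client code: get and put return a request id immediately, and the client polls getRequestStatus until the files are ready, then acknowledges with setFileStatus. A minimal sketch against a hypothetical in-memory SRM (class, method and state names are illustrative, not the real WSDL bindings):

```python
import itertools

class ToySRM:
    """Hypothetical in-memory SRM, used only to illustrate the
    asynchronous get / getRequestStatus / setFileStatus cycle."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._requests = {}

    def get(self, surls):
        # Asynchronous: return a request id immediately; the
        # (pretend) tape recall completes in the background.
        rid = next(self._ids)
        self._requests[rid] = {s: "Pending" for s in surls}
        return rid

    def getRequestStatus(self, rid):
        # Synchronous status query; in this toy, recalls
        # "finish" on the first poll.
        files = self._requests[rid]
        for surl in files:
            if files[surl] == "Pending":
                files[surl] = "Ready"
        return dict(files)

    def setFileStatus(self, rid, surl, state):
        # The client signals e.g. "Done" once it has read the file,
        # releasing the pin.
        self._requests[rid][surl] = state

srm = ToySRM()
rid = srm.get(["srm://castorgrid.cern.ch/castor/home/me/test"])
status = srm.getRequestStatus(rid)            # a real client polls in a loop
for surl, state in status.items():
    if state == "Ready":
        srm.setFileStatus(rid, surl, "Done")  # release the pin
```

A real client would poll with a back-off rather than assume the first getRequestStatus returns Ready.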

The copy operation
SRM v1.1 == SRM v1.0 + copy
copy is quite different from the other SRM operations:
– Copy file(s) from/to the local SRM to/from another (optionally remote) SRM
– The target SRM performs the necessary put and get operations and executes the file transfers using the negotiated protocol (e.g. gsiftp)
The copy operation allows a batch job running on a worker node without in- or out-bound WAN access to copy files to a remote storage element
The copy operation was documented only 4 days ago(!)
The copy operation could potentially provide the framework for planning transfers of large data volumes (e.g. LHC T0 → T1 data broadcasting)??

CASTOR SRM v1.1
Implements the vital operations
– get, put, getRequestStatus, setFileStatus, getProtocols
No-ops:
– pin, unPin, getEstGetTime, getEstPutTime
Implemented but optionally disabled (requested by LCG):
– advisoryDelete
CASTOR GSI (CGSI) plug-in for gSOAP
– Also used in GFAL
CERN:
– First prototype in summer 2003
– First production version deployed in December 2003
Other sites that have deployed the CASTOR SRM:
– CNAF (INFN/Bologna)
– PIC (Barcelona)

CASTOR SRM v1.1
[Architecture diagram: Grid services (SRM, gridftp, GSI) with the SRM request repository, in front of the CASTOR disk cache (stager, RFIO); behind it the CASTOR tape archive (tape mover, tape queue), the CASTOR name space and the Volume Manager; local clients access the disk cache directly.]

Interoperability tests
The CASTOR SRM has been running interoperability tests with various clients, notably:
– GFAL (Jean-Philippe)
– EDG replica manager (Peter)
– FNAL/dCache SRM (Timur)

Problems found
The interoperability problems can be classified as:
– Due to problems with the SRM specification
– Due to assumptions in SRM or SOAP implementations
– Due to GSI incompatibilities
Debugging GSI incompatibilities is by far the most difficult and time-consuming.

Problems with the SRM spec (1)
Lack of enumeration
– All enumeration-like types are strings
– The client needs to find a common denominator (e.g. cast all strings to capital letters)
Request and file state lifecycles
– Concise for put or get
– Undefined for copy (a proposal was circulated 4 days ago); this turned out to be an important interoperability issue between the CERN/CASTOR and FNAL/dCache SRMs
– Undefined for mkPermanent, pin, unPin (probably irrelevant for the latter two)?
Request history
– What an SRM should do with requests that have reached the Done or Failed status is unspecified
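The "common denominator" workaround for the missing enumerations looks roughly like this on the client side; the state names below are illustrative, not taken from the spec:

```python
def norm_state(s):
    """Normalize an enumeration-like string, as the slide suggests:
    different SRM implementations could return e.g. 'ready', 'Ready'
    or 'READY' for the same state."""
    return s.strip().upper()

# All three spellings compare equal after normalization.
assert norm_state("Ready") == norm_state("READY") == norm_state(" ready ")

def is_terminal(state):
    # Hypothetical terminal states; the spec left the lifecycle vague.
    return norm_state(state) in {"DONE", "FAILED"}

print(is_terminal("done"))   # True
print(is_terminal("Pending"))  # False
```

This recovers interoperability at the cost of every client re-implementing the same guesswork, which is exactly what a proper enumeration in the WSDL would avoid.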

Problems with the SRM spec (2)
Immutability of request identifier
– The request id is a 32-bit word
– Unspecified whether an SRM can reuse request ids of finished (Done or Failed) requests
SURL (Site URL) semantics
– Is it a URL or a URI?
– If a URL, does it support relative and absolute paths?
– If a URI, the name space is virtually flat for an arbitrary client
Pin lifetime
– The pin lifetime is defined to be subject to site policy
– There is no way to query the remaining pin lifetime of a particular file
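The request-id concern is concrete: a 32-bit counter eventually wraps, so a "new" id can collide with one a client still holds for a finished request. A toy illustration of the wraparound (the counter logic is hypothetical, not from any SRM implementation):

```python
UINT32_MAX = 2**32 - 1

def next_request_id(current):
    """A naive 32-bit request-id counter: after 2^32 requests it
    wraps to 0 and may hand out an id a client still remembers
    from an earlier, finished request."""
    return (current + 1) & UINT32_MAX

assert next_request_id(41) == 42
assert next_request_id(UINT32_MAX) == 0   # wraparound: potential id reuse
```

Whether reuse is allowed, and after what grace period, is exactly what the v1.1 spec leaves unspecified.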

Problems with the SRM spec (3)
Exception handling and error propagation
– Unspecified whether a multi-file request should fail when a subset of the files got an error
– Unspecified if and when an SRM can do retries
– Only one error message, global for all files in a multi-file request, is available for reporting
– Format and contents of the error message are undefined
advisoryDelete != delete
– It may be vital to know what the effect is:
  – No effect at all (if so, what happens if the SURL is reused for a new file?)
  – Only remove the disk-resident copy (if so, when?)
  – Remove the HSM file (if so, when?)
Directory creation on the fly for put requests
– If a put request specifies a SURL corresponding to a path for which one or several sub-directory levels do not exist, should the SRM create the missing directories on the fly (provided the client has the appropriate permissions)?
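The error-propagation gap is easiest to see by contrasting the two reply shapes; both structures below are illustrative sketches (field names are not from the spec):

```python
# Shape of a v1.1-style reply as the slide describes it: one global
# error string for the whole multi-file request -- which file failed
# is not reported.
reply_v11 = {
    "requestState": "Failed",
    "errorMessage": "error during processing",   # which file? unspecified
    "files": ["srm://site/a", "srm://site/b", "srm://site/c"],
}

# A per-file shape (what clients actually needed to act sensibly
# on partial failures).
reply_per_file = {
    "requestState": "Failed",
    "files": {
        "srm://site/a": {"state": "Done"},
        "srm://site/b": {"state": "Failed", "error": "no such file"},
        "srm://site/c": {"state": "Done"},
    },
}

# With per-file states the client can retry just the failed subset.
failed = [s for s, f in reply_per_file["files"].items()
          if f["state"] == "Failed"]
print(failed)   # ['srm://site/b']
```

With only the global message, a client cannot tell a wholly failed request from a partially failed one, let alone retry selectively.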

Problems due to SRM or SOAP implementation details
SRM WSDL discovery
– The FNAL client assumed the WSDL and the service are hosted by the same web server
Bug in the gSOAP v2.3 WSDL importer
Various bugs found in the CASTOR SRM, not reported here

GSI problems (1)
CASTOR (GSI) – EDG RC (Java TrustManager)
– TrustManager does not use the GSI default of SSL handshake + credential delegation, but just an SSL handshake
– The TrustManager client would not work with SSL 3.0, which is forced by GSI
– Solution: EDG RC uses CoG (the Globus Java security implementation) instead
CASTOR (GSI) – FNAL dCache (Java CoG)
– The FNAL client only used a limited set of encryption algorithms, which did not match those provided by standard GSI
– Limited proxy certificate
GSI error reporting not working properly

GSI problems (2)
Administration and deployment issues
– The EDG Globus patch for supporting dynamic pool accounts requires the GRIDMAPDIR environment variable to be declared, even if the default location is used for the security files
– Configuration problems (the right root CA not trusted)
– The CERN CA changed the certificate naming scheme (a number added at the end of the DN); new certificates were not automatically propagated (to, for instance, FNAL)
The effort for debugging GSI problems will scale with the number of SRM implementations
– Establishing an SRM reference implementation for certifying new servers and clients would help

GGF: GSM WG
GGF GSM (Grid Storage Management) WG
– The SRM interface specification for GGF will proceed in two steps:
  – SRM-Basic
  – SRM-Advanced
– The current proposal is to have:
  – SRM-Basic relatively close to SRM v1.1
  – SRM-Advanced close to SRM v2.1, plus vaguely defined features like authorization, access control and monitoring
Suggestion to the HEPiX MSS forum on how we could use the GSM WG
– SRM-Basic is hopefully sufficient for LHC Tier-0 → Tier-1 data distribution. With that objective it is essential that:
  – all existing interoperability problems with the SRM v1.1 definition are addressed as appropriate
  – the addition of new features is kept to the minimum necessary
– Hopefully we have already come up with some input during these two days

Conclusions and outlook
The CASTOR SRM v1.1 has been in production for a couple of months at CERN and some other CASTOR Tier-1 sites
SRM interoperability does not come for free
– The definition is not concise enough, leaving room for too much site-specific interpretation
– Is GSI interoperability an illusion and, if so, will it continue to be so?
We currently have no plans for a CASTOR SRM v2.1 implementation; we would rather tighten up SRM v1.1 in the context of the GGF GSM WG and the SRM-Basic definition