SRM & SE Jens G Jensen WP5 ATF, December 2002

Collaborators
Rutherford Appleton (ATLAS datastore)
CERN (CASTOR)
Fermilab
Jefferson Lab
Lawrence Berkeley
–Americans have: ENSTORE, SAM, …
Also present: WP2
For an overview of storage systems see D5.1

File pinning & replication
[Diagram: Local SRM, Remote SRM, RC, file (replica)]
Client fetches a file using srmGet(LFN, SURL) (sketched below)
Local SRM gets a replica identified by LFN. Issue: LFN is optional
Other issues:
–What are default ACLs on replicas?
–Does the local SRM have to check ACLs from the remote SRM?
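
The pull flow on this slide can be pictured with a minimal Python sketch. Everything here is invented for illustration (the ReplicaCatalogue and LocalSRM classes, the srm_get method, the SURL-to-TURL rewrite); it is not the real SRM interface, it only shows the LFN-optional lookup and the pin.

    # Illustration only: hypothetical classes, not the real SRM v1/v2 API.
    class ReplicaCatalogue:
        """Maps logical file names (LFNs) to site URLs (SURLs)."""
        def __init__(self, entries):
            self.entries = entries              # {lfn: [surl, ...]}

        def lookup(self, lfn):
            return self.entries.get(lfn, [])

    class LocalSRM:
        def __init__(self, catalogue):
            self.catalogue = catalogue
            self.pinned = {}                    # surl -> transfer URL (TURL)

        def srm_get(self, lfn=None, surl=None):
            """Pin a replica and return a TURL; the LFN is optional, as noted above."""
            if surl is None:
                if lfn is None:
                    raise ValueError("need an LFN or a SURL")
                replicas = self.catalogue.lookup(lfn)
                if not replicas:
                    raise LookupError("no replica registered for %s" % lfn)
                surl = replicas[0]              # naive replica selection
            # A real SRM in PULL mode would now stage the file from the remote
            # SRM holding this SURL; here we only record the pin and derive a TURL.
            turl = surl.replace("srm://", "gsiftp://")
            self.pinned[surl] = turl
            return turl

    rc = ReplicaCatalogue({"lfn:higgs.dat": ["srm://remote.site/data/higgs.dat"]})
    print(LocalSRM(rc).srm_get(lfn="lfn:higgs.dat"))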

RCs
SRM must be able to interoperate with several different RCs
Getting SURLs (RC → SRM) is easy, but:
–Does SRM need reverse lookup (SURL → LFN)?
–How to notify RC that a file has been deleted? Add hooks to SRM? (see the sketch below)
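
A minimal sketch, with invented names (CatalogueWithHooks, on_file_deleted), of the two hooks questioned above: a reverse SURL-to-LFN map kept alongside the forward map, and a delete notification the SRM could send so the RC drops stale replicas. This is not an existing RC API.

    # Illustration only: an RC with a reverse map and a delete hook, invented names.
    class CatalogueWithHooks:
        def __init__(self):
            self.lfn_to_surls = {}      # forward map: LFN -> list of SURLs
            self.surl_to_lfn = {}       # reverse map, if the SRM needs SURL -> LFN

        def register(self, lfn, surl):
            self.lfn_to_surls.setdefault(lfn, []).append(surl)
            self.surl_to_lfn[surl] = lfn

        def reverse_lookup(self, surl):
            return self.surl_to_lfn.get(surl)

        def on_file_deleted(self, surl):
            """Hook the SRM could call so the RC drops a stale replica entry."""
            lfn = self.surl_to_lfn.pop(surl, None)
            if lfn is not None:
                self.lfn_to_surls[lfn].remove(surl)

    rc = CatalogueWithHooks()
    rc.register("lfn:run42.dat", "srm://se.example.org/data/run42.dat")
    rc.on_file_deleted("srm://se.example.org/data/run42.dat")
    print(rc.lfn_to_surls)              # {'lfn:run42.dat': []}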

SRM file lifetime overview (sketched in code below)
Files are:
Volatile (e.g. pinned files)
–Files get deleted by SE when they time out
–Files created as volatile are never written to tape
Durable (e.g. permanent but not in final location)
–Files can be deleted only by administrator or owner
–Files still have a lifetime – owner and/or admin notified when lifetime expires
Permanent (e.g. your important experiment)
–Files can be deleted only by administrator or owner
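
The three lifetime classes can be summarised in a short, assumption-laden Python sketch: FileEntry, sweep and notify_owner are invented names; the rules encoded in sweep are the ones listed on the slide.

    # Illustration only: invented FileEntry/sweep; rules taken from the slide.
    import time

    VOLATILE, DURABLE, PERMANENT = "volatile", "durable", "permanent"

    class FileEntry:
        def __init__(self, name, kind, lifetime=None):
            self.name, self.kind = name, kind
            self.expires = (time.time() + lifetime) if lifetime is not None else None

    def notify_owner(entry):
        print("lifetime expired for %s, owner/admin notified" % entry.name)

    def sweep(files):
        """Apply the slide's rules when a lifetime runs out."""
        kept, now = [], time.time()
        for f in files:
            if f.expires is not None and now > f.expires:
                if f.kind == VOLATILE:
                    continue            # volatile: the SE deletes the file
                notify_owner(f)         # durable: kept, but owner/admin notified
            kept.append(f)              # permanent files carry no expiry here
        return kept

    files = [FileEntry("pin.tmp", VOLATILE, lifetime=-1),
             FileEntry("staged.dat", DURABLE, lifetime=-1),
             FileEntry("raw.dat", PERMANENT)]
    print([f.name for f in sweep(files)])   # ['staged.dat', 'raw.dat']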

New lifetime stuff: space
Each user has space with a per-user (probably) quota for each type of space
Users must reserve space before they can use it for writing or reading (sketched below)
Users are charged for the type and amount of space they reserve
Space can be
–Volatile – reservations not guaranteed beyond a minimum
–Durable – notify user if file didn’t move, space wasn’t released, etc – if user doesn’t react, notify admin
–Permanent
SRM proposal: space lifetime is encoded in the filepath!!! (JLAB has this)
Proposal: space has a lifetime as well
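
A sketch of the per-user quota and charging idea, with invented numbers and names (SpaceManager, QUOTA, RATE); nothing here is taken from an actual SRM implementation.

    # Illustration only: invented quotas, rates and SpaceManager class.
    QUOTA = {"volatile": 100, "durable": 50, "permanent": 20}    # GB per user, assumed
    RATE  = {"volatile": 1,   "durable": 2,  "permanent": 5}     # cost per GB, assumed

    class SpaceManager:
        def __init__(self):
            self.used = {}       # (user, space type) -> GB reserved
            self.charges = {}    # user -> accumulated charge

        def reserve(self, user, space_type, gigabytes):
            """Users must reserve space before writing into or reading from it."""
            key = (user, space_type)
            if self.used.get(key, 0) + gigabytes > QUOTA[space_type]:
                raise RuntimeError("quota exceeded for %s space" % space_type)
            self.used[key] = self.used.get(key, 0) + gigabytes
            # Charged by type and amount of space reserved, as on the slide.
            self.charges[user] = self.charges.get(user, 0) + gigabytes * RATE[space_type]
            return key

    mgr = SpaceManager()
    mgr.reserve("alice", "durable", 10)
    print(mgr.charges["alice"])          # 20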

Storage Semantics
[Table comparing Volatile, Durable and Permanent storage semantics; the table body was not captured in the transcript]

copyToSpace
Users can “give away” files in their own space to another user (sketched below)
–First step: the user who owns the files gives them away
–Second step: the other user takes over the files
–Done without copying in the SRM, but quotas are adjusted accordingly
Alternatively, let users “give” files to other users without deleting them from their own space
–copy-on-write
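
The two-step give-away can be sketched as below. The Space class, offer and accept are hypothetical names; the point is that only ownership and quota accounting change, no data is copied.

    # Illustration only: invented Space class; no data is copied, only ownership.
    class Space:
        def __init__(self, owner):
            self.owner = owner
            self.files = {}              # file name -> size (GB), counts against quota
            self.offered = {}            # file name -> prospective new owner

        def offer(self, name, to_user):
            """Step 1: the current owner gives the file away."""
            self.offered[name] = to_user

        def accept(self, name, user, target_space):
            """Step 2: the other user takes the file over; quotas are adjusted."""
            if self.offered.get(name) != user:
                raise PermissionError("file was not offered to %s" % user)
            size = self.files.pop(name)          # leaves the giver's quota
            target_space.files[name] = size      # now charged to the taker's quota
            del self.offered[name]

    alice_space, bob_space = Space("alice"), Space("bob")
    alice_space.files["results.dat"] = 5
    alice_space.offer("results.dat", "bob")
    alice_space.accept("results.dat", "bob", bob_space)
    print(bob_space.files)               # {'results.dat': 5}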

SRM (possibly wildly inaccurate) status
[Table of implementation status with columns Get (PULL), Put (PUSH), Copy, File Lifetime, Space Lifetime and GSIFTP, and rows for SE, Fermi and JLAB; cell entries such as “SE 1.0 / TB 2.0”, “OK”, “only vol” and “--” could not be reliably realigned from the transcript]

Comparing SRM v2.{0,1} to SE 1.0
Get, Put: supported
–Get in PULL mode (same as SRM)
–Put in PUSH mode (same as SRM)
File lifetime: will be easy to support
–Currently (pre-1.0) all files are permanent…
Space management: currently supported as volatile, best-effort only (i.e. very little support)
Information providers: supported soon
Directory stuff: mostly supported (not rmdir, no recursion)
Support for arbitrarily long LFNs: OK
Need to have API for more detailed comparison…

Need for POSIX-style access
RFIO (CERN, and other sites with HPSS)
DCAP (FermiLab)
Globus XIO?
file:
Pick one, and call it a standard?

Security
Americans expect to use CAS
–Possibly need for interim solution until CAS is sufficiently mature?
Need for “weak” ftp?
–Users “delegate” username/password to SRM
Europeans expect to use VOMS
–SE solution until VOMS is sufficiently mature: GACL (SE will always support GACL)
–SE can use CAS as well but support not planned
Use GSI-HTTPS for now – common ground essential for interoperability
Question: are CAS and VOMS interchangeable?

ACLs
How to manage ACLs on replicas?
–Replicas are read-only; or
–Replicas are writable by whoever created the replica
–SRM may have to contact the SRM containing the master to pick up changes to the ACL
SRM proposal: default is world readable for volatile space, owner access only for permanent
SE proposal (sketched below)
–Have default ACL per user and/or per directory
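
The SE proposal can be pictured as a default-ACL lookup that prefers a per-directory default and falls back to a per-user default. The dictionaries and the default_acl function below are illustrative only, not GACL or any SE interface.

    # Illustration only: invented dictionaries and lookup, not GACL or SRM.
    import posixpath

    user_defaults = {"alice": ["alice:rw"]}                       # per-user default ACL
    dir_defaults  = {"/grid/atlas": ["atlas-vo:r", "alice:rw"]}   # per-directory default

    def default_acl(owner, path):
        """Prefer the nearest per-directory default; fall back to the owner's default."""
        d = posixpath.dirname(path)
        while d not in ("", "/"):
            if d in dir_defaults:
                return dir_defaults[d]
            d = posixpath.dirname(d)
        return user_defaults.get(owner, [])

    print(default_acl("alice", "/grid/atlas/run1/file.dat"))   # ['atlas-vo:r', 'alice:rw']
    print(default_acl("alice", "/grid/cms/file.dat"))          # ['alice:rw']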

Web services
Fermi uses GLUE but plans to switch to Apache/Tomcat/Axis
“Secret” SRM v1.0 WSDL available…
New WSDL for the v2.1 specification in preparation (Fermi)
SOAP interoperability issues, particularly with parameters more complicated than strings and ints – e.g. (WSDL) structs
Need to be OGSI compliant…

Schema
Agree that GLUE is important
More effort needed?

How to ensure that users behave?
Good behaviour == close file && release pin
Make them pay…
Charge per space reservation and/or size of files in user’s reservation? (rough sketch below)
–CASTOR charges per MB on tape
Count number of expired pins (in durable and permanent only!?)
What to do with files that are opened and never closed?
–Can be detected sometimes (local files, NFS?, /grid?)
–CASTOR currently does nothing
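
A rough sketch of the accounting ideas above, charging per MB reserved on tape and counting expired pins in durable or permanent space only; the rate and data structures are assumed, not taken from CASTOR or any SRM.

    # Illustration only: assumed rate and shapes, not CASTOR's actual accounting.
    from collections import Counter

    TAPE_RATE_PER_MB = 0.01     # assumed charge per MB on tape

    def bill(reservations, expired_pins):
        """reservations: {user: MB on tape}; expired_pins: list of (user, space type)."""
        charges = {user: mb * TAPE_RATE_PER_MB for user, mb in reservations.items()}
        # Only expired pins in durable or permanent space count against the user.
        strikes = Counter(user for user, kind in expired_pins
                          if kind in ("durable", "permanent"))
        return charges, strikes

    charges, strikes = bill({"alice": 2000},
                            [("alice", "durable"), ("alice", "volatile")])
    print(charges, strikes)     # {'alice': 20.0} Counter({'alice': 1})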

Collaborations…
Agree standard protocols, accept each others’ certificates, etc. → Testing Interoperability
Sharing code is seen as less important
–Occasional licence conflict
–People would rather have more independent implementations