The Grid Storage System Deployment Working Group, 6th February 2007, Flavia Donno, IT/GD, CERN

Grid Storage System Deployment (GSSD)
- Working group launched by the GDB to coordinate SRM 2.2 deployment for Tier-1s and Tier-2s
- Mailing list:
- Mandate:
  - Establishing a migration plan from SRM v1 to SRM v2 so that the experiments can access the same data through the two endpoints transparently (see the example after this slide).
  - Coordinating with sites, experiments, and developers the deployment of the various SRM 2.2 implementations and the corresponding storage classes.
  - Coordinating the Glue Schema v1.3 deployment for the Storage Element, making sure that the information published is correct.
  - Ensuring transparency of data access and the functionality required by the experiments (see the Baseline Services report).
- People involved: developers (SRM and clients), storage providers (CASTOR, dCache, DPM, StoRM), site admins, experiments
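As a minimal illustration of the transparency goal in the mandate, the hypothetical SURLs below show one file addressed through an SRM v1.1 and an SRM v2.2 endpoint on the same host and with the same file path, differing only in the port number (the mixed-mode restriction described on the client-perspective slide below). The host name, ports and path are invented for the example.

```python
# Hypothetical SURLs for one file visible through both SRM versions on a
# single storage host; host, ports and path are invented for illustration.
SURL_V1 = "srm://se.example.org:8443/data/lhcb/run1234/file.dst"  # SRM v1.1 endpoint
SURL_V2 = "srm://se.example.org:8444/data/lhcb/run1234/file.dst"  # SRM v2.2 endpoint

# Mixed-mode requirement: same host, same file path; only the port
# (and the SRM protocol version behind it) differs.
path_v1 = SURL_V1.split(":8443", 1)[1]
path_v2 = SURL_V2.split(":8444", 1)[1]
assert path_v1 == path_v2
```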

GSSD Rough Plan
- (A) Collecting requirements from the experiments: more detail than what is described in the TDRs. The idea is to understand how to configure a specific Tier-1 or Tier-2: storage classes (quality of storage), disk caches (how many, how big, and with which access), storage transition patterns, etc. [Now]
- (B) Understanding the current Tier-1 setups: requirements? [Now]
- (C) Getting hints from the developers: manuals/guidelines? [Now]
- (D) Selecting production sites as guinea pigs and starting tests with the experiments. [Beginning of March - July 2007]
- (E) Assisting experiments and sites during the tests (monitoring tools, guidelines in case of failures, clean-up, etc.); defining mini Service Challenge milestones. [March - July 2007]
- (F) Accommodating new needs, not initially foreseen, if necessary.
- (G) Having production SRM 2.2 fully functional (MoU) by September 2007.

Deployment plan in practice
- Detailed requirements have already been collected from LHCb. Some iteration is still needed for CMS and ATLAS; ALICE is in progress (?).
- Focus on dCache deployment first. Target sites: FZK, Lyon, SARA. Experiments: LHCb and ATLAS.
  - Variety of implementations; many issues are covered by this exercise.
- Compile specific guidelines targeting an experiment: LHCb and ATLAS.
  - Guidelines reviewed by developers and sites, possibly covering some Tier-2 sites as well.
- Then work with some of the Tier-2s deploying DPM, repeating the exercise done for dCache. Possibly a parallel activity.
- Start working with CASTOR if ready: CNAF, RAL.
- Define mini Service Challenge milestones with the experiments and test in coordination with the FTS and lcg-utils developers.

Deployment plan in practice: the client perspective
- Site admins have to run both SRM v1.1 and SRM v2.2.
  - Restrictions: the SRM v1 and v2 endpoints must run on the same host and use the same file path; the endpoints may run on different ports.
- Experiments will be able to access both old and new data through both the new SRM v2.2 endpoints and the old SRM v1.1 endpoints.
  - This is guaranteed when using higher-level middleware such as GFAL, lcg-utils, FTS and the LFC client tools; the endpoint conversion is performed automatically via the information system:
    - The SRM type is retrieved from the information system.
    - If two versions are found for the same endpoint, SRM v2.2 is chosen only if a space token (storage quality) is specified; otherwise SRM v1.1 is the default (see the sketch after this slide).
    - FTS can be configured per channel as to which version to use; policies can also be specified ("always use SRM 2.2", "use SRM 2.2 if a space token is specified", ...).
- It is the task of the GSSD working group to define and coordinate the configuration details for mixed-mode operations.
- ===>>> It is possible and foreseen to run in mixed mode with SRM v1.1 and SRM v2.2 until SRM v2.2 is proven stable for all implementations.
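The default version-selection rule above can be summarised in a few lines of client-side pseudo-logic. This is only a sketch of the policy described on the slide, not the actual GFAL/lcg-utils/FTS code; the function names, the hard-coded information-system lookup and the example space token are assumptions made for the example.

```python
# Sketch of the endpoint version-selection policy described above.
# lookup_srm_versions() is a stand-in for a real information-system (BDII/Glue)
# query; all names and values here are illustrative.

def lookup_srm_versions(host):
    """Return the SRM versions published for a storage host (hard-coded example)."""
    published = {"se.example.org": ["1.1", "2.2"]}
    return published.get(host, ["1.1"])

def choose_srm_version(host, space_token=None):
    """Choose SRM v2.2 only when a space token (storage quality) is requested
    and the endpoint publishes v2.2; otherwise default to SRM v1.1."""
    versions = lookup_srm_versions(host)
    if "2.2" in versions and space_token is not None:
        return "2.2"
    return "1.1"

# A plain transfer falls back to v1.1; one requesting a storage class uses v2.2.
print(choose_srm_version("se.example.org"))                          # -> 1.1
print(choose_srm_version("se.example.org", space_token="LHCb_RAW"))  # -> 2.2
```

An FTS channel policy such as "use SRM 2.2 if a space token is specified" corresponds to this same rule applied per transfer channel.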

Coordination
- Use pre-GDB meetings if possible; otherwise monthly face-to-face meetings?
- Sub-groups to work on specific issues: bi-weekly phone conferences?
- Use the mailing list to report updates and problems.
- Use the wiki web site for documents.
- Report progress at the WLCG MB.
- Some subgroups can be started today: volunteers for a Tier-2 setup for LHCb and ATLAS with DPM?
- Enjoy! ;-)