Enabling Grids for E-sciencE (INFSO-RI-508833), www.eu-egee.org

Pre-GDB Storage Classes: summary of discussions
Flavia Donno
Pre-GDB, CERN, 9 January 2007

Summary
– Status of SRM 2.2
– Summary of previous discussions and conclusions
– Very interesting reports from NIKHEF and T2 sites
– Tentative plan for the next WLCG Workshop in January
– The new working group

Status of SRM v2.2
Tests:
– S2 test suite just ported to the newly published WSDL (minor changes); results published. The cron job will be restarted as of today.
– LBNL SRM-Tester also run as a cron job.
– List of open issues kept up to date by CERN. Many will be deferred to v2.3 or v3.0.
Servers:
– Current endpoints are sometimes hard to test since they use a production instance behind them (hard to exploit and stress the systems).
– Only one endpoint available per implementation.
Clients:
– SRM client code unit-tested and mostly integrated in FTS.
– FTS in the development testbed by end of February.
– GFAL/lcg-utils under certification.
GLUE schema:
– v1.3 ready for deployment by end of January; static information providers in YAIM ready by end of February.
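Purely as an illustration of the testing setup sketched above, and not the actual S2 suite or LBNL SRM-Tester (the endpoint URLs, the test-case names as invoked here and the stubbed run_test function are all assumptions), a cron-driven test matrix could look roughly like this in Python:

#!/usr/bin/env python
# Hypothetical sketch of a cron-driven SRM test matrix. This is NOT the S2
# suite or the LBNL SRM-Tester; endpoint URLs and test names are made up
# and run_test() is only a stub standing in for real SRM calls.

from datetime import datetime, timezone

ENDPOINTS = {
    "CASTOR": "srm://castor.example.org:8443/srm/managerv2",
    "dCache": "srm://dcache.example.org:8443/srm/managerv2",
    "DPM":    "srm://dpm.example.org:8446/srm/managerv2",
}

TEST_CASES = ["srmPing", "srmPrepareToPut", "srmPrepareToGet", "srmLs"]

def run_test(endpoint_url, test_case):
    """Stub: a real harness would issue the SRM request and check the reply."""
    return True

def run_matrix():
    """Run every test case against every endpoint and collect pass/fail."""
    return {impl: {t: run_test(url, t) for t in TEST_CASES}
            for impl, url in ENDPOINTS.items()}

def publish(results):
    """Print a plain-text summary, e.g. redirected by cron to a web page."""
    print("SRM v2.2 test summary,", datetime.now(timezone.utc).isoformat())
    for impl in sorted(results):
        row = " ".join("%s=%s" % (t, "OK" if results[impl][t] else "FAIL")
                       for t in TEST_CASES)
        print("  %-8s %s" % (impl, row))

if __name__ == "__main__":
    publish(run_matrix())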

Summary of Storage Class discussions
– Supported Storage Classes in WLCG are T1D1, T1D0 and T0D1. T0D0 might be required by other VOs.
– Clients should not use dynamic reservation initially. Storage Areas are created statically by site administrators; VO administrators can use srmReserveSpace to create Storage Areas.
– Tools will be available to publish VO-specific Space Descriptions (ATLAS_RAW, CMS_TAG, etc.).
– The namespace should be used to organize the data.
– Brief review of the current T1 implementations and plans (GridKa, SARA, Lyon). The main issues concern the configuration of WAN/LAN buffers, the organization of storage areas and storage classes, management tools, and clear procedures in case of failures, from both developers and experiments.
  - Input needed from the experiments to understand requirements and data flow.
  - Input needed from the developers for the configuration of the specific implementations.
– A tentative plan for the WG with future targets has been presented (see later).
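The Tape-n/Disk-m class names above correspond to fixed pairs of SRM v2.2 retention policy and access latency. The snippet below is a minimal sketch of that correspondence; the describe() helper and the example space token description are hypothetical illustrations of what a statically created Storage Area might record, not a real SRM client call.

# Minimal sketch: the WLCG storage-class names and the SRM v2.2
# (RetentionPolicy, AccessLatency) pairs they correspond to.
STORAGE_CLASSES = {
    "T1D0": ("CUSTODIAL", "NEARLINE"),  # tape copy, disk acts only as a buffer
    "T1D1": ("CUSTODIAL", "ONLINE"),    # tape copy plus a permanent disk copy
    "T0D1": ("REPLICA",   "ONLINE"),    # disk only, no tape copy
    # T0D0 is not among the WLCG-supported classes; other VOs might need it.
}

def describe(space_token_desc, storage_class, size_tb):
    """Hypothetical record of a statically created Storage Area (no real SRM call)."""
    retention, latency = STORAGE_CLASSES[storage_class]
    return {
        "space_token_description": space_token_desc,  # e.g. ATLAS_RAW, CMS_TAG
        "storage_class": storage_class,
        "retention_policy": retention,
        "access_latency": latency,
        "total_size_tb": size_tb,
    }

if __name__ == "__main__":
    # Example: a static T1D0 area a site might publish for ATLAS raw data.
    print(describe("ATLAS_RAW", "T1D0", size_tb=200))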

Sites
NIKHEF:
– Very good experience with DPM.
– Discovered a few limitations while supporting ATLAS:
  - Missing management tools.
  - No way to force the system not to use the volatile pool for ATLAS durable files.
  - Some minor problems with ACLs (a production manager cannot write into a user pool).
USCMS T2s:
– SRM-dCache.
– No tape backend.
– WNs used as storage elements.
– Not using groups/roles or storage classes at the moment.
– Different configuration of pools for WAN or LAN.
GridPP: (continued on the next slide)

Sites
GridPP T2s:
– No tape backend; small RAID arrays, very often spread across WNs (~500!).
– Resources shared with non-WLCG VOs.
– Limited manpower (=> management must be easy).
– Management tools are essential: pool draining, namespace management, disk quotas, resizing disk pools (increase or decrease).
– Is data loss a real issue? What about analysis? Procedures are needed to manage this situation.
– Hints on hardware and software configuration and tuning (RAID, power supplies, communication channels, databases, backups, firewall problems, etc.).
– Storage system limitations (file opens/sec, read/write rates, etc.).
– System monitoring.
– Tests to check site availability (SAM tests depend on other subsystems: BDII, catalogues, etc.), also at pool level per VO.
– Is it important to publish the real size of a pool (available and used)? (see the sketch below)
– Storage accounting.
– 32- vs. 64-bit installations.
– Which storage classes?
– Dynamic or static reservation?
– Interoperability tests?
– What to do with unused/corrupted data sets? What about full disk pools? Empty pools?
– What are the procedures (clear and written) in case of SRM failures?
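One of the questions above, publishing the real (available and used) size of a pool, is straightforward to prototype. The sketch below is a hypothetical illustration not tied to any particular SE implementation; the pool names and mount points are made up. It simply reports total, used and free space for a set of pool directories.

# Hypothetical sketch: report total/used/free space for a set of disk-pool
# directories, e.g. as raw input for an information provider or a monitoring
# page. Pool names and mount points are made up for illustration.
import shutil

POOLS = {
    "atlas-durable": "/storage/pool01",
    "cms-scratch":   "/storage/pool02",
}

def pool_report(pools):
    report = {}
    for name, path in pools.items():
        usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
        report[name] = {"total_gb": usage.total / 1e9,
                        "used_gb":  usage.used / 1e9,
                        "free_gb":  usage.free / 1e9}
    return report

if __name__ == "__main__":
    for name, stats in sorted(pool_report(POOLS).items()):
        print("%-15s total=%.1f GB  used=%.1f GB  free=%.1f GB"
              % (name, stats["total_gb"], stats["used_gb"], stats["free_gb"]))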

Plans
– Collect requirements per VO in terms of:
  - Storage Classes needed at the various sites: how much disk for each?
  - Data flow between Tier-0, Tier-1s and Tier-2s.
  - Space reservation requirements: static and/or dynamic.
  - Space Token Descriptions per VO: which? How many? Transition patterns?
  - Special requirements: xrootd, etc.
  - Data access patterns.
  - Plans from 1 April 2007 until the end of the year.
– Understand the requirements and existing setup of T1s and T2s (ongoing effort); assist T1s and T2s.
– Discuss with the developers how best to implement the requirements.
– Produce manuals and guidelines for deployment.
– Select sites as guinea pigs.
– Test the setup at sites with the experiments.
– Assist the experiments and the site administrators during the tests; support possible further requirements/needs.
– Review the status of the plan at the WLCG workshop in January.