EGEE is a project funded by the European Union under contract IST-2003-508833. Using SRM: DPM and dCache. G. Donvito, V. Spinoso, INFN Bari (pre-GDB meeting, 9/7). www.eu-egee.org



Outlook
- Installation
- Test set-up
- Tests
- Using SRM: considerations
- Future tests
- Conclusions

DPM Installation
Manual installation:
- The DPM Admin Guide is very useful and almost complete. Expanding the troubleshooting section with more known problems would help.
YAIM installation:
- Almost automatic: a little manual configuration is still needed (a hedged sketch of the YAIM route follows this slide).
- But YAIM does not natively support more than one SE per site (a workaround is needed).
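
For readers who have not used it, a minimal sketch of the YAIM route on an LCG-2 style node, assuming the SE_dpm_mysql and SE_dpm_disk node types; the script paths, node-type names and site-info.def variables shown here are indicative and changed between releases, so check them against the release documentation.

    # Sketch of a YAIM-driven DPM installation (assumed LCG-2 era layout).
    # Script paths, node types and variable names are indicative only.

    # site-info.def fragment (placeholder hosts and pool definition):
    #   DPM_HOST=dpm01.ba.infn.it                 # head node: DPM, DPNS, MySQL, SRM
    #   DPMPOOL=Permanent                         # name of the default pool
    #   DPM_FILESYSTEMS="dpm01.ba.infn.it:/data01 disk01.ba.infn.it:/data01"

    /opt/lcg/yaim/scripts/install_node   site-info.def lcg-SE_dpm_mysql
    /opt/lcg/yaim/scripts/configure_node site-info.def SE_dpm_mysql

    # An additional disk server only needs the gridftp/rfio part:
    /opt/lcg/yaim/scripts/configure_node site-info.def SE_dpm_disk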

dCache Installation
Manual installation:
- The installation guide is sufficient for a standard installation.
- The integration into the LCG infrastructure is not so easy (a hedged sketch of the manual route follows this slide).
- Expanding the documentation with examples of more advanced configurations would help.
YAIM installation:
- The integration into the LCG infrastructure is simpler.
- Some steps still need to be done by hand.
- And, again, YAIM does not natively support more than one SE per site (a workaround is needed).
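
As a reminder of what the manual route looked like at the time, a minimal sketch assuming the classic d-cache RPM plus node_config layout; package names, node_config keys and init scripts are given from memory and should be treated as indicative.

    # Sketch of a manual dCache installation (assumed d-cache RPM layout);
    # package names, node_config keys and init scripts are indicative only.

    rpm -ivh pnfs-*.rpm d-cache-core-*.rpm d-cache-opt-*.rpm   # package versions are placeholders

    # /opt/d-cache/etc/node_config (fragment, admin node):
    #   NODE_TYPE=admin
    #   ADMIN_NODE=dcache01.ba.infn.it

    /opt/d-cache/install/install.sh          # generate the configuration
    /opt/d-cache/bin/dcache-core start       # start the core services
    /opt/d-cache/bin/dcache-pool start       # start the local pool(s)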

Test set-up: DPM
Server configuration:
- Two machines:
  - One running the DPM server, DPNS server, MySQL server, gridftp and rfio
  - One running gridftp and rfio only
Client configuration:
- Two machines with the srmcp client (from dCache) and the DPM rfio client (a hedged access example follows this slide)
- Other SRM servers (other DPM, dCache and CASTOR instances)
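
For concreteness, a hypothetical client-side check of this DPM set-up via RFIO and the DPM name server; the host names, domain, VO and paths are placeholders, not the actual test configuration.

    # Hypothetical RFIO-level check of the DPM test instance;
    # hosts, domain, VO and paths are placeholders.
    export DPM_HOST=dpm01.ba.infn.it
    export DPNS_HOST=dpm01.ba.infn.it

    dpns-ls -l /dpm/ba.infn.it/home/dteam                                  # browse the name space
    rfcp /etc/hosts dpm01.ba.infn.it:/dpm/ba.infn.it/home/dteam/test.txt   # write over RFIO
    rfcp dpm01.ba.infn.it:/dpm/ba.infn.it/home/dteam/test.txt /tmp/t.copy  # read it back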

Test set-up: dCache
Server configuration:
- Two machines:
  - One admin node running the PostgreSQL server, dCache core, dCache opt (dcap and gridftp doors) and a dCache pool
  - One running dCache opt (dcap and gridftp doors) and a dCache pool
Client configuration:
- Two machines with the srmcp client (from dCache) and the dccp client (a hedged access example follows this slide)
- Other SRM servers (other DPM, dCache and CASTOR instances)
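
And the corresponding hypothetical check of the dCache set-up through the dcap door; the door host, PNFS domain and paths are placeholders, 22125 being the usual default dcap port.

    # Hypothetical dcap-level check of the dCache test instance;
    # door host, PNFS paths and file names are placeholders.
    dccp /etc/hosts dcap://dcache01.ba.infn.it:22125/pnfs/ba.infn.it/data/dteam/test.txt
    dccp dcap://dcache01.ba.infn.it:22125/pnfs/ba.infn.it/data/dteam/test.txt /tmp/t.copy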

Tests
Functionality tests:
- SRM (put, get, copy); a hedged srmcp example follows this slide
- Local protocols (dcap, rfio)
- Using some deep configurations
Stress tests:
- Performance
- Load balancing
Fault tolerance:
- Daemon crash
- Hardware crash
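
The SRM put and get tests were driven with the dCache srmcp client; a hypothetical invocation against the DPM endpoint is sketched below. Host, port and the SURL form (including the managerv1?SFN= prefix) are placeholders and varied between implementations and client versions.

    # Hypothetical SRM put and get with the dCache srmcp client;
    # host, port and SURL syntax are placeholders.
    srmcp file:////tmp/test.txt \
          "srm://dpm01.ba.infn.it:8443/srm/managerv1?SFN=/dpm/ba.infn.it/home/dteam/test.txt"
    srmcp "srm://dpm01.ba.infn.it:8443/srm/managerv1?SFN=/dpm/ba.infn.it/home/dteam/test.txt" \
          file:////tmp/test.copy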

Tests with DPM: Functionality tests
- Directory listing on the DPM server via SRM: SUCCESSFUL
- Copy local fs -> DPM server via SRM: SUCCESSFUL
- Directory listing on the DPM server via SRM, verifying the presence of the copied file: SUCCESSFUL
- Multiple access to the same file by many applications (deleting a file while it is being transferred): SUCCESSFUL
- Copy local fs -> DPM server via SRM, into a non-existent directory: SUCCESSFUL
- Copy DPM server -> (same) DPM server via SRM: FAILED
- dpm-addfs, then putting, listing and deleting a file: SUCCESSFUL (a hedged dpm-addfs/dpm-modifyfs sketch follows this slide)
- Setting a file system read-only (dpm-modifyfs): PARTLY SUCCESSFUL
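
The dpm-addfs and dpm-modifyfs tests correspond to admin operations like the following hypothetical sequence; pool, server and file-system names are placeholders and the option spellings should be checked against the DPM Admin Guide.

    # Hypothetical DPM admin sequence for the dpm-addfs / dpm-modifyfs tests;
    # pool, server and file-system names are placeholders.
    dpm-addfs    --poolname Permanent --server disk01.ba.infn.it --fs /data02
    dpm-qryconf                                   # check that the new file system is seen
    dpm-modifyfs --server disk01.ba.infn.it --fs /data02 --st RDONLY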

Tests with DPM: Functionality tests (2)
- dpm daemon sync: FAILED
- Getting files stored on a removed DPM file system (dpm-rmfs): FAILED
- Interrupting the transfer of a big file (9 GB): FAILED
- Writing a file bigger than the free space: FAILED
- File access through many protocols: SUCCESSFUL
- Manipulating file systems and permissions: SUCCESSFUL
- Remote copy, interaction among many SRMs: PARTLY SUCCESSFUL

Tests with DPM: Stress tests
- Balancing multiple requests across different servers: SUCCESSFUL (a hedged sketch of the stress driver follows this slide)
- Performance of a single transfer: SUCCESSFUL
- Load on the machines during transfers: SUCCESSFUL
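
A stress run of this kind can be driven with a simple shell loop that launches many concurrent transfers; the sketch below is hypothetical, with placeholder hosts, paths and client options.

    # Hypothetical stress driver: N concurrent SRM transfers against the SE,
    # letting the head node spread them over the disk servers.
    N=20
    for i in $(seq 1 "$N"); do
      srmcp file:////tmp/1GB.bin \
            "srm://dpm01.ba.infn.it:8443/srm/managerv1?SFN=/dpm/ba.infn.it/home/dteam/stress_$i" &
    done
    wait    # then inspect throughput and the load on the head node and disk servers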

Tests with DPM: Fault tolerance
- Crashing MySQL during a PrepareToPut: SUCCESSFUL (a hedged fault-injection sketch follows this slide)
- Crashing a file server: SUCCESSFUL
- Crashing one machine: SUCCESSFUL
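
One possible way to reproduce the first test is to stop the database while a request is in flight and check that the SRM request fails or completes cleanly rather than hanging; the hosts, paths, service name (mysql vs mysqld) and timings below are assumptions.

    # Hypothetical fault injection for the "crash MySQL during PrepareToPut" test;
    # hosts, paths, service name and timings are assumptions.
    srmcp file:////tmp/4GB.bin \
          "srm://dpm01.ba.infn.it:8443/srm/managerv1?SFN=/dpm/ba.infn.it/home/dteam/faulttest" &
    sleep 2
    ssh root@dpm01.ba.infn.it '/sbin/service mysql stop; sleep 30; /sbin/service mysql start'
    wait    # the client should report a clean success or failure, not hang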

Tests with dCache: Functionality tests
- Directory listing on the dCache server via SRM: SUCCESSFUL
- Copy local fs -> dCache server via SRM: SUCCESSFUL
- Directory listing on the dCache server via SRM, verifying the presence of the copied file: SUCCESSFUL
- Multiple access to the same file by many applications (deleting a file while it is being transferred): SUCCESSFUL
- Copy local fs -> dCache server via SRM, into a non-existent directory: SUCCESSFUL
- Copy dCache server -> same dCache server via SRM: SUCCESSFUL
- Getting files stored on a removed pool (killing dcache-pool): SUCCESSFUL

Tests with dCache: Functionality tests (2)
- Interrupting the transfer of a big file: SUCCESSFUL
- File access through many protocols: SUCCESSFUL
- Manipulating file systems and permissions: SUCCESSFUL
- Remote copy, interaction among many SRMs: SUCCESSFUL
- Multiple requests: SUCCESSFUL

Tests with dCache: Stress tests
- Balancing multiple requests across different servers: SUCCESSFUL
- Performance of a single transfer: PARTLY SUCCESSFUL
- Load on the machines during transfers: PARTLY SUCCESSFUL

Tests with dCache: Fault tolerance
- Crashing PostgreSQL during a PrepareToPut: SUCCESSFUL
- Crashing a file server: SUCCESSFUL
- Crashing one machine: SUCCESSFUL

Using DPM
PRO:
- Very simple to install, configure and manage
- LCG integration quite automatic and up to date
- Very good performance (also on single transfers)
- Quite small overhead on the servers
- Supports SRM v2
CONS:
- Sometimes strange behaviour: daemon crashes (dpm-gridftp); the DPM does not pick up configuration changes and needs to be restarted after some commands
- srmcp not yet supported
- Some advanced features not yet implemented (replication of "important" files, load balancing between accepting gridftp connections and writing files, ...)
- Scalability not yet proven
- Problems when the servers are not in the same domain

Using dCache
PRO:
- Many advanced features (replication of files, load balancing between accepting gridftp connections and writing files, ...)
- More robust behaviour in a production environment
- Support for srmcp (between all known SRM servers); a hedged third-party copy example follows this slide
- Larger scalability (proven at many Tier-1/Tier-2 sites)
CONS:
- Less "brilliant" performance on a single transfer (but using more processes the bandwidth can easily be filled)
- More CPU load for the copy processes
- Integration in LCG less automatic and up to date
- Supports only SRM v1.1
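
The srmcp support mentioned above includes copies driven directly between two SRM endpoints. A hypothetical invocation is sketched below; both endpoints, ports and paths are placeholders and the SURL syntax varied between implementations.

    # Hypothetical third-party copy between two SRM endpoints with srmcp;
    # hosts, ports and paths are placeholders.
    srmcp "srm://dcache01.ba.infn.it:8443/pnfs/ba.infn.it/data/dteam/test.txt" \
          "srm://dcache02.example.org:8443/pnfs/example.org/data/dteam/test.txt"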

Future tests with DPM
- Test the new releases as they come out
- Test the capability of using DPM for ROOT file access
- Test XROOT access to DPM
- Test and use the DPM installation as a back-end for the gLite IO server
- Test the file access APIs in depth, comparing the GFAL and RFIO protocols in some user applications (a hedged comparison sketch follows this slide)
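
Such a comparison could start from something as simple as timing the same file through the two routes, direct RFIO versus GFAL (here through the lcg-utils wrapper built on top of it); the hosts, VO and paths are placeholders and the SURL form may need adjusting.

    # Hypothetical timing comparison of the two access routes on the same file;
    # hosts, VO and paths are placeholders.
    time rfcp dpm01.ba.infn.it:/dpm/ba.infn.it/home/dteam/test.txt /tmp/via_rfio

    time lcg-cp --vo dteam \
         "srm://dpm01.ba.infn.it/dpm/ba.infn.it/home/dteam/test.txt" \
         file:///tmp/via_gfal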

Future tests with dCache
- Test the new releases
- Test the possibility of keeping more than one replica of (some) files to guarantee availability and performance
- Test the use of the WNs' file systems as scratch area for "volatile" files, or as space in which to replicate "frequently accessed" files
- Test and use the "production" installation as a back-end for the gLite IO server
- Test the file access APIs in depth, comparing the GFAL and DCAP protocols in some user applications

Conclusions
DPM is the simplest SRM solution for small Tier-2 sites with no need for advanced features:
- It is possible to migrate an "old" classic SE to SRM very easily and with no loss of data
- But in a real production environment it seems to need some attention to fix the problems that can arise
dCache seems a good solution for medium-to-big Tier-2 sites:
- Many advanced features are available
- The installation is simpler than in the past
- But some work is still needed to simplify the integration with LCG