
CernVM-FS Infrastructure for EGI VOs
Catalin Condurache - STFC RAL Tier1
EGI Webinar, 5 September 2013

CVMFS Infrastructure for EGI VOs – Outline
What is CVMFS?
CVMFS for User Communities
CVMFS for NGIs
CVMFS for System Administrators
Conclusion
References
Q&A

What is CVMFS?
Introduction
History
Benefits
WLCG deployment
Non-LHC use

What is CVMFS – Introduction
CernVM-FS is a read-only file system widely used to access HEP experiment software and conditions data.
Files and directories are hosted on standard web servers and mounted in the universal namespace /cvmfs.
File data and metadata are downloaded on demand and cached locally.
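To make the on-demand behaviour concrete, here is a minimal shell sketch; the repository name example.egi.eu is purely illustrative and a configured CernVM-FS client is assumed:

```bash
ls /cvmfs/example.egi.eu             # first access triggers the autofs mount of the repository
cat /cvmfs/example.egi.eu/README     # file data is fetched over HTTP on demand and cached locally
cvmfs_config probe example.egi.eu    # check that the repository can be mounted and reached
```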

What is CVMFS – History
Over the last 3 years CernVM-FS has transformed the distribution of HEP experiment software and conditions data in WLCG:
– Removes local install jobs
– Removes heavily loaded file servers
– Removes the local software tagging process
– Install once and you know the software is available at any site with CernVM-FS
A robust, decentralized network of repository replicas

What is CVMFS - History By April 2014 main experiments will no longer run any local installation Number of cvmfs servers around world for other communities End 2012 RAL Tier 1 offered non-LHC Stratum 0 –About to be fully replicated at CERN –Also discussions with NIKHEF CVMFS Infrastructure for EGI VOs - EGI Webinar, 5 September

What is CVMFS – WLCG Deployment
Repositories hosted at CERN are replicated to Stratum-1 replica servers in Europe, the U.S. and Asia
A distributed hierarchy of proxy servers fetches content from the closest public mirror server

What is CVMFS – WLCG Deployment
CVMFS clients connect to one of the Stratum-1 services (via local squid caches)
Transparent fail-over to another Stratum-1 service in case of connection problems
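As an illustration of this setup (hostnames are hypothetical, not the real WLCG endpoints), a client typically lists several Stratum-1 URLs for fail-over and a local squid proxy in its configuration:

```bash
# /etc/cvmfs/domain.d/example.org.local  (hypothetical domain)
# URLs separated by ';' are fail-over alternatives, tried in order.
CVMFS_SERVER_URL="http://stratum1-a.example.org/cvmfs/@fqrn@;http://stratum1-b.example.org/cvmfs/@fqrn@"

# /etc/cvmfs/default.local
# All requests go through the local site squid(s).
CVMFS_HTTP_PROXY="http://squid1.example.org:3128|http://squid2.example.org:3128"
```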

What is CVMFS – Benefits
Install once, run anywhere (with the CernVM-FS client installed)
Easily scalable with standard tools (squid caches)
Often better performance
– Reduced internal network traffic
Can now re-export via NFS where it is impossible to install the client on nodes (or where local disk space is limited)

What is CVMFS – Non-LHC Use
Firmly established for WLCG
Other communities?
Easy to set up and run a local CernVM-FS server
– We know there are several – we do not know where they all are
The real benefit may come with a network analogous to the WLCG one

CVMFS for Non-LHC Use – Next Steps
Who is interested in trying CernVM-FS?
Do non-LHC, non-HEP VOs have different requirements?
How best to build a resilient, scalable network to support non-LHC (and non-HEP) VOs?
Who is interested in developing an international CernVM-FS infrastructure for other communities?

CVMFS for Non-LHC Use – What Will Be Needed
Multiple Stratum-0s
– VOs need to be able to install software (or upload tarballs)
– The site needs to manage publishing the repository with cvmfs-server (sketched below)
– 100% uptime not required
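For orientation, a hedged sketch of the publishing cycle such a site would drive with the cvmfs-server tools (repository name and paths are hypothetical):

```bash
cvmfs_server mkfs example.egi.eu          # one-off: create the repository on the installation box
cvmfs_server transaction example.egi.eu   # open the read/write staging area under /cvmfs/example.egi.eu
cp -r ~/vo-software/release-1.0 /cvmfs/example.egi.eu/software/
cvmfs_server publish example.egi.eu       # sign the changes and publish a new revision
```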

CVMFS for Non-LHC Use – What Will Be Needed
A network (mesh) of replicas (Stratum-1s) – which provides the resilience (sketched below)
– Pull data from the Stratum-0s
– Need to be resilient
The server itself – with some TB of storage
– Ideally a pair of reverse proxy accelerators in front
– SLA similar to the Tier 1s
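A hedged sketch of what operating such a replica involves with the cvmfs-server tools (URLs, key path and repository name are hypothetical):

```bash
# Register the replica once, pointing at the Stratum-0 and its public key.
cvmfs_server add-replica -o root \
    http://stratum0.example.org/cvmfs/example.egi.eu \
    /etc/cvmfs/keys/example.egi.eu.pub

# Pull new revisions periodically, e.g. from cron.
cvmfs_server snapshot example.egi.eu
```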

CVMFS for Non-LHC Use – What Will Be Needed
(Regional) CVMFS proxy servers that fetch content from the closest public mirror server and serve the CVMFS clients
– Trivial configuration (minimal squid sketch below)
– Minimal effort to maintain
The CVMFS client suite installed and configured on batch farms
– The expertise is already available at many sites
– Experts ready to help new participating sites
Enthusiasm!
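To show how small that proxy configuration is, here is a minimal squid sketch; the client network range and cache sizes are placeholder assumptions:

```bash
# Append a minimal CernVM-FS section to /etc/squid/squid.conf
cat >> /etc/squid/squid.conf <<'EOF'
acl cvmfs_nodes src 192.168.0.0/16     # worker nodes allowed to use this proxy
http_access allow cvmfs_nodes
maximum_object_size 1024 MB            # CernVM-FS can serve large file chunks
cache_dir ufs /var/spool/squid 50000 16 256
EOF
```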

CVMFS for User Communities
Advantages
Disadvantages
Actions
Ultimate Goal

CVMFS for User Communities – Advantages
A single point at which to upload and maintain the master copy of the software
The rest is done by the replication mechanism – down to the Stratum-1 mesh and beyond to the WN caches via the squids
No longer any need for multiple NFS mounts across the grid
– If a VO has more than one point of presence (i.e. more than one supporting site) there are multiple NFS areas to maintain (already double the work needed)

CVMFS for User Communities – Advantages
A unique definition for the VO_SW_DIR variable everywhere
– i.e. /cvmfs/VO_repository_name
Usually a single installation job to run, at the site hosting the Stratum-0 repository for that VO (for upload and maintenance)

CVMFS for User Communities – Advantages
Possible alternative(s):
– Web interface allowing tarball upload (and unpacking), plus basic filesystem operations on /cvmfs/VO_repository_name
– 'Power users' given read-write access to /cvmfs/VO_repository_name
– Other scenarios to be discussed if need be

CVMFS for User Communities – Disadvantages
Repositories are world-readable (a CVMFS limitation)
– Not a problem for the LHC VOs though

CVMFS for User Communities – Actions
VO Managers express interest and contact:
– the CVMFS Task Force – please ask to be added to the mailing list
– the appropriate NGI or Stratum-0 sites
Along with the RAL Tier1, other sites will soon act as (regional) CVMFS Stratum-0s (South Africa, Finland, Netherlands)

CVMFS for User Communities – Actions
VO Managers identify cases where VOs are supported by both EGI and OSG
– Make software distribution more homogeneous

CVMFS for User Communities – Ultimate Goal
"We are making use of it already. Very nice and simple [...]"
– excerpt from the CVMFS Task Force wiki page
What about having many VO managers from various EGI VOs saying the same in 6-12 months' time?

CVMFS for NGIs
How can NGIs help?
Ultimate goal

CVMFS for NGIs – How Can NGIs Help?
Need to support CVMFS
– understand sites' availability to install clients
– collect expressions of interest from VOs
The CVMFS client suite needs to be installed and configured on WNs at sites
Give support to emerging Stratum-0 sites if needed
Assistance required to help the Stratum-1 network take shape

CVMFS for NGIs – How Can NGIs Help?
Much more help required to deploy and maintain a hierarchy of proxy servers
– Helpful to have access to a squid farm service
– It could be 1-2 slots on the farm dedicated to a squid farm service (if not already available)
– And/or dedicated CVMFS squid nodes close to the batch farms
At least one such service per NGI would be optimal; ideally a CVMFS squid service at each site

CVMFS for NGIs – How Can NGIs Help?
The CVMFS client suite needs to be installed and configured on WNs at sites
Minimal changes otherwise
– on existing squids, just add the necessary CVMFS Stratum-1 endpoints

CVMFS for NGIs – Ultimate Goal
Have the CVMFS client suite installed by default on site batch farms!
Have a CVMFS squid service in each NGI

CVMFS for Sys Admins
How to convince them
The real work needed
Ultimate goal

CVMFS for Sys Admins – How to Convince Them
No longer any need to maintain NFS areas storing the software at each batch farm
A simple procedure to install and configure the CVMFS client at worker node level

CVMFS for Sys Admins – How to Convince Them
Solutions available for diskless WNs or where the CVMFS client cannot be installed
– Use the re-export feature of the latest client release (NFS sketch below)
Easy access to an active CVMFS community via mailing lists
– Developers and sys admins involved in CVMFS for WLCG
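A hedged sketch of that re-export option, assuming the client release provides the NFS export mode (CVMFS_NFS_SOURCE) and using a hypothetical repository name: one node mounts the repository in NFS-compatible mode and exports it read-only to the diskless WNs.

```bash
# On the NFS server node: enable the client's NFS export mode and mount the repository.
echo 'CVMFS_NFS_SOURCE=yes' >> /etc/cvmfs/default.local
mount -t cvmfs example.egi.eu /cvmfs/example.egi.eu

# Export it read-only to the worker nodes.
echo '/cvmfs/example.egi.eu  *(ro,no_root_squash)' >> /etc/exports
exportfs -ra
```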

CVMFS for Sys Admins – The Real Work Needed
Install the latest CVMFS client production release across the entire farm
– client v2.1.X (documentation available)
Modify VO_SW_DIR on the WNs to point to the CVMFS mount point (/cvmfs/VO_repository_name)
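A hedged sketch of those steps on an RPM-based worker node; the package source, repository name and VO_SW_DIR variable name are assumptions:

```bash
yum install -y cvmfs                          # assumes the CernVM-FS yum repository is configured
cvmfs_config setup                            # sets up autofs and the cvmfs service account
cat >> /etc/cvmfs/default.local <<'EOF'
CVMFS_REPOSITORIES=example.egi.eu
CVMFS_HTTP_PROXY="http://squid1.example.org:3128"
EOF
cvmfs_config probe example.egi.eu             # verify the repository mounts

# Point the VO software area at the CernVM-FS mount (hypothetical VO name).
export VO_EXAMPLE_ORG_SW_DIR=/cvmfs/example.egi.eu
```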

CVMFS for Sys Admins – The Real Work Needed
Once configured for one VO, it is easy to replicate for further ones
On existing squids, just add the necessary CVMFS Stratum-1 endpoints
Not complicated, but people might be reluctant

CVMFS for Sys Admins – Ultimate Goal
Make the CVMFS client suite part of the standard WN installation

Conclusion
If you are
– a VO willing to make use of CVMFS
– a site willing to deploy the CVMFS client
– a site willing to run a Stratum-0 repository
– a site willing to run a Stratum-1 replica
then contact the CVMFS Task Force!!

References
– CVMFS Task Force wiki page
– CVMFS home page
– RAL Tier1 CVMFS

Thank You!
Time for questions and answers now…