Recommended deployment: the CernVM-FS fuse module runs on the worker nodes, which fetch data over HTTP through the grid site's web proxies from the repository mirrors (Stratum 1). Alternative deployment: CernVM-FS is mounted on a dedicated server and exported to the worker nodes by NFS, while the server itself fetches over HTTP through the web proxies; this variant requires CernVM-FS 2.1 on SL6. (The slide shows the two deployment diagrams.)

① Squid Setup
If Frontier Squids (http://frontier.cern.ch) are already installed, this step can be skipped.
a) Install Squid from the Scientific Linux repository on 2 (virtual) machines:
    $ yum install squid
b) Edit /etc/squid/squid.conf so that it matches the following snippet:
    max_filedesc 8192
    maximum_object_size 1024 MB
    # 4 GB memory cache
    cache_mem 128 MB
    maximum_object_size_in_memory 128 KB
    # 50 GB disk cache
    cache_dir ufs /var/spool/squid
    acl cvmfs dst cvmfs-stratum-one.cern.ch
    acl cvmfs dst cernvmfs.gridpp.rl.ac.uk
    acl cvmfs dst cvmfs.racf.bnl.gov
    acl cvmfs dst cvmfs02.grid.sinica.edu.tw
    acl cvmfs dst cvmfs.fnal.gov
    acl cvmfs dst cvmfs-atlas-nightlies.cern.ch
    http_access allow cvmfs
c) Use squid -k parse to verify the configuration and squid -z to create the cache.
Note: a 50 GB hard disk cache and a 4 GB memory cache are the recommended minimum.
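To confirm that a freshly configured squid actually forwards CernVM-FS traffic, a quick check along these lines can be run from a worker node or from the proxy host itself. This is only a sketch: it assumes squid listens on its default port 3128 and uses the ATLAS repository on the CERN Stratum 1 listed in the ACLs above.
    # Fetch a repository's signed manifest (.cvmfspublished) through the local squid.
    # Assumptions: squid on localhost:3128, ATLAS repository on cvmfs-stratum-one.cern.ch.
    curl -f -s -I -x http://localhost:3128 \
        http://cvmfs-stratum-one.cern.ch/cvmfs/atlas.cern.ch/.cvmfspublished
    # Repeating the request should be answered from the squid cache
    # (look for an "X-Cache: HIT" header in the response).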

② Add CernVM-FS yum repository, install cvmfs packages
c) Install the cvmfs-release RPM.
d) (Optional) If you want to participate in pre-release testing, enable the cernvm-testing repository in /etc/yum.repos.d/cernvm.repo.
e) (Optional) For the CernVM-FS 2.1.X client, enable the cernvm-ng repository in /etc/yum.repos.d/cernvm.repo.
Note: the cvmfs 2.1 RPMs will be part of the production repository as soon as there is full deployment at a Tier 1 site; RAL is close to this point.
f) Install the cvmfs packages:
    yum install cvmfs cvmfs-keys cvmfs-init-scripts
Note: do not use auto-update on the cvmfs packages.
g) Run cvmfs_config setup to configure fuse and autofs for use with CernVM-FS.
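As a compact illustration of steps f) and g), the sequence below sketches the client installation on a Scientific Linux node once the cvmfs-release RPM has added the yum repository. The yum exclude line is one possible way to honour the "no auto-update" note and is an assumption, not part of the original instructions.
    # Install the client packages from the CernVM-FS yum repository.
    yum install -y cvmfs cvmfs-keys cvmfs-init-scripts
    # One way to keep automatic updates away from the cvmfs packages (assumption):
    echo "exclude=cvmfs*" >> /etc/yum.conf
    # Configure fuse and autofs for use with CernVM-FS.
    cvmfs_config setup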

③ Configure /etc/cvmfs/default.local
a) CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,lhcb.cern.ch,cms.cern.ch,alice.cern.ch,grid.cern.ch,sft.cern.ch, resp. the subset of supported VOs; see the CernVM-FS documentation for repository dependencies.
b) CVMFS_HTTP_PROXY="..." (the squid servers from step ①; note the quotes).
c) CVMFS_QUOTA_LIMIT=20000
This is the limit for the CernVM-FS hard disk cache in megabytes.
Note: it should be larger than 12 GB and not more than 100 GB.
Note: for the 2.0 client, the quota applies to each repository independently (the overall space is the sum of all quotas); the 2.1 client uses a shared cache.
Note: the partition hosting the cache should have at least 10% more space, since the CernVM-FS quota is a soft quota that can occasionally be overspent.
d) (Optional) CVMFS_CACHE_BASE=/my/scratch/directory
By default, the CernVM-FS cache is in /var/cache/cvmfs2 (2.0 client) resp. /var/lib/cvmfs (2.1 client). Ensure that tmpwatch is not active on the cache directory.
Note: changing the cache directory can make SELinux block CernVM-FS.
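Putting the settings of this step together, a minimal /etc/cvmfs/default.local could look like the sketch below. The proxy host names are placeholders for the two squids from step ①, and the repository list is only an example subset.
    # /etc/cvmfs/default.local -- example only; adjust repositories and proxies to the site
    CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
    # Two site squids in one load-balanced proxy group (placeholder host names)
    CVMFS_HTTP_PROXY="http://squid1.example.org:3128|http://squid2.example.org:3128"
    # 20 GB hard disk cache limit (value in megabytes)
    CVMFS_QUOTA_LIMIT=20000
    # Optional: move the cache to a scratch area (make sure tmpwatch ignores it)
    # CVMFS_CACHE_BASE=/my/scratch/directory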

④ (Optional) Additional configuration for the NFS mode
a) CVMFS_NFS_SOURCE=yes
Necessary to activate NFS-compliant meta-data handling.
Note: this implies the loss of quota enforcement for meta-data. Ensure that at least 50 GB of additional hard disk space is available and monitor hard disk consumption.
b) Turn off the autofs service; autofs-mounted volumes cannot be exported by NFS. Mount the CernVM-FS volumes via /etc/fstab on the NFS server. Example entry:
    atlas.cern.ch /cvmfs/atlas.cern.ch cvmfs defaults 0 0
c) CVMFS_MEMCACHE_SIZE=256
Assigns 256 MB to the CernVM-FS meta-data memory caches. This value works well at DESY with 4k job slots on 350 nodes.
d) Increase the number of NFS daemons: set RPCNFSDCOUNT=128 in /etc/sysconfig/nfs.
e) Example entry in /etc/exports:
    /cvmfs/atlas.cern.ch /24(ro,sync,no_root_squash,\
        no_subtree_check,fsid=101)
Note: the fsid has to be different for every exported CernVM-FS mountpoint.
f) Example entry in the worker node's /etc/fstab:
    :/cvmfs/atlas.cern.ch /cvmfs/atlas.cern.ch nfs \
        ro,nfsvers=3,noatime,nodiratime,ac,actimeo=60,lookupcache=all 0 0
Note: NFS performance benefits from 16 GB or more of memory and from hosting the CernVM-FS cache directory on SSDs.
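The commands below sketch the order of operations on the NFS server once the entries above are in place. Service and init-script names follow the SL5/SL6 conventions and are assumptions about the local setup.
    # Stop autofs so the CernVM-FS volume can be mounted statically and exported.
    /sbin/service autofs stop
    /sbin/chkconfig autofs off
    # Mount the repository from the /etc/fstab entry shown above.
    mount /cvmfs/atlas.cern.ch
    # Re-read /etc/exports and restart NFS so RPCNFSDCOUNT=128 takes effect.
    exportfs -ra
    /sbin/service nfs restart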

⑤ Verify the CernVM-FS configuration
a) cvmfs_config chksetup should report "OK".
b) To check whether the repositories get mounted, run:
    /sbin/service cvmfs probe   (2.0 client)
    cvmfs_config probe          (2.1 client)
c) On errors, check syslog (/var/log/messages) for records from cvmfs.
d) Check the SELinux audit log /var/log/audit/audit.log for violations by the cvmfs2 process.
e) For mounting problems, try to mount manually:
    mkdir -p /mnt/test
    mount -t cvmfs atlas.cern.ch /mnt/test
f) Retry with clean caches:
    /sbin/service cvmfs restartclean   (2.0 client)
    cvmfs_config wipecache             (2.1 client)
g) Once a problem has been resolved, reload the autofs maps with /sbin/service autofs reload to avoid seeing errors from the autofs cache.
h) If the problem persists, send an email describing the problem, together with a bugreport tarball created by cvmfs_config, to the mailing list (see the links on the last slide).
Note: a Nagios check is available at http://cernvm.cern.ch/portal/filesystem/downloads. Statistics can be gathered with cvmfs_config stat -v.
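For routine monitoring after these initial checks, something along the following lines can be scripted on a worker node. This is only a sketch using the 2.1-client commands, not the Nagios probe mentioned above.
    # Probe all configured repositories; on failure, show the most recent cvmfs syslog lines.
    cvmfs_config probe || grep cvmfs /var/log/messages | tail -n 20
    # Per-repository cache and network statistics.
    cvmfs_config stat -v atlas.cern.ch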

Mailing list:
Technical report, known issues, configuration examples:
Bug tracker:
Source code:
RPMs:
Yum repositories:
Nightly builds:
Cvmfs module for Puppet:
Cvmfs and