SRM at Clemson Michael Fenn

What is a Storage Element? Provides grid-accessible storage space. Is accessible to applications running on OSG through a GridFTP and/or SRM interface. Has GIP set up and configured properly so that its information is published. Has a well-defined policy for cleanup and usage. Is registered with OSG.

Architecture
Firebox: Dell PowerEdge, head node and VPN server
Oiltank1-9: Dell PowerEdge, storage nodes
Birdnest: Xen VM hosting the CE and SE, VPN client
Why bother with a VPN? We have a unique situation where the grid head node is not co-located with the rest of the cluster.
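The slides do not say which VPN software is used. As an illustration only, assuming OpenVPN, a client configuration on birdnest pointing at the head node might look roughly like this (hostname, port, and file names are hypothetical):

    # /etc/openvpn/birdnest.conf -- illustrative sketch only
    client
    dev tun
    proto udp
    remote firebox.cs.clemson.edu 1194    # firebox acts as the VPN server
    ca   /etc/openvpn/ca.crt
    cert /etc/openvpn/birdnest.crt
    key  /etc/openvpn/birdnest.key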

Distributed Filesystem After researching multiple distributed filesystems, we settled on PVFS. Simple configuration and easy to add extra clients. The server runs in userspace and the client requires only a kernel module.

PVFS Server Setup
./configure
make
make install
PVFS then provides a command to help create your server config file:
/usr/bin/pvfs2-genconfig /etc/pvfs2-fs.conf
The script will prompt for your desired protocol, servers, and other configuration options.
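For reference, the /etc/pvfs2-fs.conf produced by pvfs2-genconfig names every server in an alias list, among other sections. A rough, abbreviated sketch for the oiltank storage nodes might look like this (exact directives depend on the PVFS2 version and on the answers given to the script; port 3334 is the default used later in /etc/pvfs2tab):

    <Aliases>
        Alias oiltank1 tcp://oiltank1:3334
        Alias oiltank2 tcp://oiltank2:3334
        # ... one Alias line per storage node, through oiltank9
    </Aliases>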

PVFS Server Setup
This same config file can be distributed to each of the desired storage nodes at /etc/pvfs2-fs.conf.
To initialize the server and allow it to allocate space, run:
/usr/sbin/pvfs2-server /etc/pvfs2-fs.conf -f
To start the server normally, simply omit '-f'.
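Since the same config file is used everywhere, distribution and startup can be scripted from the head node. A minimal sketch, assuming passwordless ssh/scp to the storage nodes and the oiltank1-9 hostnames from the architecture slide:

    for n in $(seq 1 9); do
        scp /etc/pvfs2-fs.conf oiltank$n:/etc/pvfs2-fs.conf
        # one-time initialization of the storage space
        ssh oiltank$n '/usr/sbin/pvfs2-server /etc/pvfs2-fs.conf -f'
        # normal start
        ssh oiltank$n '/usr/sbin/pvfs2-server /etc/pvfs2-fs.conf'
    done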

PVFS Client Setup
./configure --with-kernel-source=/path/to/kernelsrc
make just_kmod
insmod /usr/src/pvfs2/src/kernel/linux-2.6/pvfs2.ko
mkdir /mnt/pvfs2/
You will need to create the file /etc/pvfs2tab with the form:
tcp://testhost:3334/pvfs2-fs /mnt/pvfs2 pvfs2 defaults,noauto 0 0
pvfs2-client -p ./pvfs2-client-core
mount -t pvfs2 tcp://testhost:3334/pvfs2-fs /mnt/pvfs2
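The client-side steps can be collected into one small script to run at boot. This is a sketch based on the commands above; the server name 'testhost' and port 3334 are the placeholders from the slide and should be replaced with the real metadata server:

    #!/bin/sh
    # load the PVFS2 kernel module, start the userspace client, then mount
    insmod /usr/src/pvfs2/src/kernel/linux-2.6/pvfs2.ko
    mkdir -p /mnt/pvfs2
    grep -q pvfs2-fs /etc/pvfs2tab 2>/dev/null || \
        echo 'tcp://testhost:3334/pvfs2-fs /mnt/pvfs2 pvfs2 defaults,noauto 0 0' >> /etc/pvfs2tab
    pvfs2-client -p ./pvfs2-client-core
    mount -t pvfs2 tcp://testhost:3334/pvfs2-fs /mnt/pvfs2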

Grid Interfaces A GridFTP-based SE is simply a file system directory accessible via GSI-authenticated FTP. It provides no space management functions and only limited permission management functions. SRM/dCache is a full implementation of a Storage Element: it enforces stronger constraints and can manage a space spanning multiple volumes.
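For the GridFTP-only case, access is just a GSI-authenticated transfer. A minimal sketch, assuming a GridFTP server running on the SE host, a valid grid proxy, and illustrative paths:

    globus-url-copy file:///home/user/test.txt \
        gsiftp://birdnest.cs.clemson.edu/mnt/pvfs2/sedata/engage/test.txt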

Grid Interfaces -- BeStMan SRM/BeStMan: BeStMan is a full implementation of SRM v2.2, developed by Lawrence Berkeley National Laboratory, for small disk-based storage and mass storage systems. It works on top of an existing disk-based UNIX file system. It works with any existing file transfer service, such as gsiftp, http, https, bbftp and ftp. It requires minimal administrative effort for deployment and updates.

SRM/BeStMan at Clemson We had an existing filesystem, so BeStMan best fit our situation. It is available for easy installation from the VDT:
pacman -get ITB:Bestman
Configure BeStMan depending on your site setup and needs. Our config is on the next slide.
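After the pacman install, the VDT service tools can be used to enable and start the new service. A sketch, assuming the install lives in /opt/osg-1.0 as in the configure example on the next slide and that the service is registered under the name 'bestman':

    cd /opt/osg-1.0
    source setup.sh       # set up the VDT environment
    vdt-control --list    # confirm the service name
    vdt-control --on bestman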

SRM/BeStMan at Clemson
$ ./configure \
  --with-java-home=/opt/osg-1.0/jdk1.6 \
  --with-srm-home=/opt/osg-1.0/bestman \
  --with-srm-owner=daemon \
  --enable-sudofsmng \
  --with-cacert-path=/opt/osg-1.0/globus/TRUSTED_CA \
  --with-certfile-path=/etc/grid-security/http/httpcert.pem \
  --with-keyfile-path=/etc/grid-security/http/httpkey.pem \
  --with-eventlog-path=/opt/osg-1.0/vdt-app-data/bestman/logs \
  --with-cachelog-path=/opt/osg-1.0/vdt-app-data/bestman/logs \
  --with-http-port= \
  --with-https-port= \
  --with-replica-storage-path=/mnt/pvfs2/sereplica \
  --with-replica-storage-size= \
  --enable-gums \
  --with-gums-url=...horizationServicePort \
  --with-gums-dn=/DC=org/DC=doegrids/OU=Services/CN=http/birdnest.cs.clemson.edu
Red options are required for GUMS authentication. Explanations of the options are available in the Administration guide.

Testing the SRM Interface SRM commands were used to test our install. First you should create a proxy from a submit host on which you are registered so that you can run commands against the site.
– voms-proxy-init -voms Engage -valid 72:00
This will create a proxy cert for the Virtual Organization Engage which will be valid for 72 hours.
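Before running any srm-* commands it is worth confirming that the proxy exists and carries the Engage VO attribute:

    voms-proxy-info -all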

Testing the SRM Interface Pinging the server:
– srm-ping srm://birdnest.cs.clemson.edu:10443/srm/v2/server
Using ls:
– srm-ls srm://birdnest.cs.clemson.edu:10443/srm/v2/server\?SFN=/mnt/pvfs2/sedata/engage/test.txt
SFN is the path to the file or directory you are interested in. Remember to escape the ? in Bash!
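Because the SURLs get long, a small convenience (plain bash, not part of the BeStMan tools) is to keep the service endpoint in a variable and quote the URL so the ? does not need escaping:

    SRM_EP=srm://birdnest.cs.clemson.edu:10443/srm/v2/server
    srm-ping "$SRM_EP"
    srm-ls "$SRM_EP?SFN=/mnt/pvfs2/sedata/engage/test.txt"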

Testing the SRM Interface Make a directory:
– srm-mkdir srm://birdnest.cs.clemson.edu:10443/srm/v2/server\?SFN=/mnt/pvfs2/sedata/engage/testdir/
Delete a directory:
– srm-rmdir srm://birdnest.cs.clemson.edu:10443/srm/v2/server\?SFN=/mnt/pvfs2/sedata/engage/testdir/

Testing the SRM Interface SRM commands expect full URLs, including filenames; they will not infer that the source and destination filenames are the same. Transferring a file to the storage node:
– srm-copy file:///home/user/test.txt srm://birdnest.cs.clemson.edu:10443/srm/v2/server\?SFN=/mnt/pvfs2/sedata/engage/test.txt
Transferring a file from the storage node:
– srm-copy srm://birdnest.cs.clemson.edu:10443/srm/v2/server\?SFN=/mnt/pvfs2/sedata/engage/test.txt file:///home/user/test.txt
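A simple round-trip check ties these commands together (file names are illustrative, and $SRM_EP is the endpoint variable sketched earlier):

    srm-copy file:///home/user/test.txt "$SRM_EP?SFN=/mnt/pvfs2/sedata/engage/test.txt"
    srm-ls   "$SRM_EP?SFN=/mnt/pvfs2/sedata/engage/test.txt"
    srm-copy "$SRM_EP?SFN=/mnt/pvfs2/sedata/engage/test.txt" file:///tmp/test.txt.back
    diff /home/user/test.txt /tmp/test.txt.back && echo "round trip OK"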

Registering for SRM Daily Tests Go to the registration page, select storage site registration, and fill out the form. You will need to know the service endpoint on your server.
– Ex: srm://birdnest.cs.clemson.edu:10443/srm/v2/server/
You will also need to know the path which will be write accessible to those using your SE.
– Ex: /mnt/pvfs2/sedata/
Within a few days you will be contacted for verification and to make sure your site is set up correctly. If the test run is successful, you will be registered for daily storage test reporting.