Duke and ANL ASC Tier 3 (stand-alone Tier 3s)
Doug Benjamin, Duke University

ANL ATLAS Analysis Center "prototype" Tier 3
- Used to develop all of the scripts needed to set up a Tier 3 site
  – Used to test tools and software, and to train others
- Expected to grow as the number of users at the ASC increases
- Puppet configurations are being developed at this site
- Supported by DPB (very part time)
- Uses the Xrootd.org yum repository for most libraries and the OSG yum repository for the gridftp bits (see the sketch below)
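
As a rough illustration of that repository setup (a minimal sketch; the repo URLs below are placeholders, not taken from the slides):

    # Xrootd.org yum repository for the xrootd pieces (URL is illustrative;
    # use the current .repo file published at xrootd.org):
    wget -O /etc/yum.repos.d/xrootd.repo http://xrootd.org/binaries/xrootd-stable-slc5.repo
    yum install xrootd xrootd-client xrootd-fuse

    # OSG yum repository for the gridftp bits (release RPM URL is illustrative):
    rpm -Uvh http://repo.grid.iu.edu/osg-release-latest.rpm
    yum install globus-gridftp-server-progs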

"Baseline" Tier 3g: public/private networks
- Head node = redirector
- Standalone xrootd data server
- Xrootd data servers all on the private network only
- Clustered xrootd storage
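
In xrootd configuration terms, the redirector/data-server split above can be expressed with a role switch; a minimal sketch, where the hostname and export path are placeholders:

    # One config file shared by the head node and the data servers.
    all.export /atlas/data
    set headnode = head.mytier3.example.edu
    all.manager $(headnode):3121

    if $(headnode)
      all.role manager        # head node acts as the redirector
    else
      all.role server         # worker nodes and the standalone box serve data
    fi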

[Diagram: Baseline Tier 3 (public/private network). The head node/redirector, a standalone xrootd data server, and worker-node xrootd data servers form the xrootd clustered storage on the private network; all data access within the cluster (LAN) goes over the private network. Use case 2 – storage on the worker-node data servers is exported as part of the federation: a head-node xrootd proxy on the public network fronts a read-only xrootd redirector and read-only data servers (with N2N name translation) on the private network, carrying the outgoing read-only data flow. The figure distinguishes physical machines with local xrootd functionality from overlaid xrootd functionality.]
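
The proxy layer in the diagram maps onto xrootd's proxy storage system plugin; a sketch of what the head-node proxy config might contain, with placeholder hostnames:

    # Head-node proxy (public network), forwarding to the private redirector.
    all.export /atlas r/o                 # outgoing data is read-only
    ofs.osslib libXrdPss.so               # run this instance as a proxy
    pss.origin rdr.private.example.edu:1094
    # The N2N translation shown in the diagram would be loaded on the data
    # servers with oss.namelib; the library is site-specific and omitted here.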

Xrootd configuration
- Wiki page with example configurations
- Known issues (see the fragment below):
  – "sss" (shared secret) authentication with xrootdfs
  – FRM checksumming turned off, due to the configuration of the data servers (oss.localroot)
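
A data-server fragment illustrating the two items above (paths are placeholders; this is a sketch, not the site's actual config):

    oss.localroot /local/xrootd     # physical files live under /local/xrootd/<path>
    xrootd.seclib libXrdSec.so
    sec.protocol sss                # shared-secret auth, as used by xrootdfs;
                                    # the keytab is created with xrdsssadmin and
                                    # located via the XrdSecsssKT environment variable
    # FRM checksumming is left disabled here: with oss.localroot in play, the
    # frm daemons would need the same local-root mapping to locate the files.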

Duke Tier 3
- Support: department system admins plus DPB (very part time); Puppet helps reduce the load (sketch below)
- Disk-heavy worker nodes and one big file server (SL 6)
- Standalone SL 5 gridftp server (used as a proxy server)
- Uses the Xrootd.org yum repository for most libraries and the OSG yum repository for the gridftp bits
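
As a taste of the Puppet angle, a minimal hypothetical class for keeping an xrootd data server converged; every name in it is an assumption, not the site's actual module:

    # Hypothetical module: install, configure, and run xrootd on a data server.
    class tier3::xrootd {
      package { ['xrootd', 'xrootd-client']:
        ensure => installed,                 # from the xrootd.org yum repo
      }

      file { '/etc/xrootd/xrootd-clustered.cfg':
        source => 'puppet:///modules/tier3/xrootd-clustered.cfg',
        notify => Service['xrootd'],         # restart on config change
      }

      service { 'xrootd':
        ensure  => running,
        enable  => true,
        require => Package['xrootd'],
      }
    }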

[Diagram: Public-network version of the baseline Tier 3. The head node/redirector, a standalone xrootd data server, and worker-node xrootd data servers form the xrootd clustered storage, all on the public IP network. Use case C – data is served from all data servers (worker nodes and the standalone box): the head node acts as a read-only xrootd redirector over read-only data servers, with outgoing read-only data flow and information flow up to the global redirector.]
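
Relative to the private-network variant, the new element is the subscription to the global redirector; in xrootd config terms, roughly (hostnames are placeholders):

    # On the local head node: also report to the global (meta) redirector.
    all.role manager
    all.manager meta glrd.usatlas.example.org:1094
    # Data servers keep pointing at the local head node as before, e.g.:
    # all.manager head.mytier3.example.edu:3121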

Xrootd usage at Duke
- Users are using Xrootd, begrudgingly
- More senior folks prefer NFS-like access, and so use xrootdfs to reach the storage (example below)
- Graduate students are much more adaptable
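
For concreteness, the two access styles side by side (hostname and paths are placeholders; the xrootdfs options follow the pattern in the SLAC xrootdfs documentation):

    # NFS-like access: mount the cluster with xrootdfs
    xrootdfs /mnt/xrd -o rdr=root://head.mytier3.example.edu:1094//atlas,uid=daemon
    ls /mnt/xrd                      # browse the storage like a local filesystem

    # Native access: no mount needed
    xrdcp root://head.mytier3.example.edu:1094//atlas/user/somefile.root /tmp/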