Brookhaven National Laboratory Storage service Group Hironori Ito


Use Of XROOTd at BNL
Brookhaven National Laboratory Storage Service Group
Hironori Ito

XROOTd is used in several places at BNL:
- dCache and XROOTd
- Forwarding Proxy
- Reverse Proxy
- FAX Regional Redirector
- Stash Cache
- Cloud Storage

dCache and XROOTd
- dCache is the main storage for the BNL ATLAS Tier 1.
- dCache has many interfaces to access data: SRM, GridFTP, NFS 3/4.1, dCap, HTTP(S)/WebDAV, and XROOTd.
- The XROOTd interface is very stable, and it is the main access protocol used for streaming data from storage, internally over the LAN and externally over the WAN.
- For direct data access within the LAN, XROOTd has replaced the dCap protocol. Externally, it is currently the only protocol used for streaming.

Accessing Data by XROOTd Protocol within the LAN
The XROOTd service understands two namespaces:
- Normal: //pnfs/usatlas.bnl.gov/BNLT0D1/rucio/data15_13TeV/08/ad/DAOD_EXOT4.08608816._000045.pool.root.1
- Global: //atlas/rucio/data15_13TeV:DAOD_EXOT4.08608816._000045.pool.root.1
(Diagram: a LAN client/job exchanges control messages with the dCache XROOTd service and reads data directly from the data server.)
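The two names above point at the same file; only the layout differs. A minimal sketch of translating a global (Rucio-style) name into the corresponding normal dCache path, assuming the BNL prefix shown on the slide and Rucio's standard deterministic layout (the two-level hash directories such as "08/ad" come from the first two byte pairs of md5("scope:filename")); the exact BNL mapping may differ:

```python
import hashlib

def global_to_local(global_name,
                    local_prefix="//pnfs/usatlas.bnl.gov/BNLT0D1/rucio"):
    """Translate a global name like //atlas/rucio/<scope>:<file>
    into a local dCache path under the normal namespace.

    The prefix and the md5-based hash directories are illustrative
    assumptions (Rucio's deterministic layout), not the exact BNL
    server configuration.
    """
    # Last path component carries "<scope>:<filename>".
    scope, fname = global_name.split("/")[-1].split(":", 1)
    # First two byte pairs of md5("<scope>:<filename>") select the
    # two-level hash directories.
    h = hashlib.md5(f"{scope}:{fname}".encode()).hexdigest()
    return f"{local_prefix}/{scope}/{h[0:2]}/{h[2:4]}/{fname}"
```

In a real deployment this translation (often called N2N) is done inside the XROOTd service, not by the client.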

Need for DTNs and a Proxy: Accessing Remote Data through a Firewall
- A site with strong security requirements may place a site-wide firewall in front of its internal servers.
- A site-wide firewall may not be able to handle high-bandwidth scientific data. An overwhelming amount of data could bring the firewall down, cutting off all WAN access; in effect, a self-inflicted denial of service.
- A site may instead designate a number of data transfer nodes (DTNs) with dual interfaces to the WAN and LAN that bypass the firewall. All data flowing in and out of the site must then go through the DTNs.
- DTNs can be sized to provide exactly the required WAN throughput. A site might use DTNs even without a site-wide firewall.

Forwarding Proxy via FAX
- Forwarding proxy services run on the DTNs. They direct all WAN data requests from internal clients to a remote site; all such data flows through the forwarding proxy.
- Using FAX with the global namespace, a client can access data located at remote sites seamlessly. A local user does not need to know whether the files exist locally, nor at which remote sites they exist.

Forwarding Proxy
The BNL dCache XROOTd service is configured so that clients are redirected to the forwarding proxy if files do not exist locally. The forwarding XROOTd proxy is not part of dCache.
(Diagram: when a file does not exist locally, the dCache XROOTd service redirects the client/job to the forwarding proxy, which fetches the data from a remote data server over the WAN; local files are served directly by the local data server over the LAN.)
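The redirect decision can be sketched as follows. The hostnames are hypothetical placeholders (the slide does not name the actual endpoints), and `exists_locally` stands in for dCache's namespace lookup; in the real deployment the dCache XROOTd door performs this check and issues the redirect itself, so the client never chooses:

```python
# Hypothetical endpoints; the real door and proxy hosts are
# site-specific and not named on the slide.
LOCAL_DOOR = "root://dcache-door.example.bnl.gov:1094"
FORWARDING_PROXY = "root://fwd-proxy.example.bnl.gov:1094"

def choose_endpoint(global_name, exists_locally):
    """Return the URL a client ends up reading from."""
    if exists_locally(global_name):
        # Served directly by a local data server over the LAN.
        return f"{LOCAL_DOOR}/{global_name}"
    # Not local: redirected to the forwarding proxy, which locates
    # the file at a remote site through the FAX federation.
    return f"{FORWARDING_PROXY}/{global_name}"
```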

Reverse Proxy
- Reverse proxy services run on the DTNs. They direct incoming XROOTd requests for BNL storage from the WAN to XROOTd data services running on the DTNs.
- The XROOTd data services on the DTNs use the dCache XROOTd services as their source, allowing access to all data within dCache.
- Reverse proxy services understand both the normal and the global namespaces.

Reverse Proxy (continued)
The reverse proxy still understands both namespaces:
- Normal: //pnfs/usatlas.bnl.gov/BNLT0D1/rucio/data15_13TeV/08/ad/DAOD_EXOT4.08608816._000045.pool.root.1
- Global: //atlas/rucio/data15_13TeV:DAOD_EXOT4.08608816._000045.pool.root.1
(Diagram: a WAN client/job contacts the reverse proxy redirector, which redirects it to one of the reverse proxy data servers; these pull the data from the dCache XROOTd service and local data servers over the LAN.)
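A toy sketch of the redirector's job, under the assumption (not stated on the slide) that it simply spreads incoming WAN requests over the proxy data servers round-robin; the server names are hypothetical:

```python
import itertools

# Hypothetical DTN data servers behind the reverse-proxy redirector.
PROXY_DATA_SERVERS = ["dtn01.example.bnl.gov", "dtn02.example.bnl.gov"]
_next_server = itertools.cycle(PROXY_DATA_SERVERS)

def redirect_wan_request(path):
    """Hand an incoming WAN request to the next proxy data server.

    The path is passed through unchanged, whether it uses the normal
    //pnfs/... form or the global //atlas/rucio/... form; the proxy
    data server resolves it against dCache.
    """
    return f"root://{next(_next_server)}:1094/{path}"
```

A real XROOTd redirector selects servers via cmsd load queries rather than a fixed rotation; the rotation here only illustrates that the client is sent to a DTN, not to dCache directly.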

FAX
- FAX allows access to files by global file name.
- Automatic file discovery: there is no need to catalog the physical location of data.
- FAX requires the global file name to allow universal access to data; there is no need to know the different physical paths at different sites.
- The FAX network is configured in a tiered scheme.

FAX Redirectors
All redirectors run on a RHEV cluster on the LHCONE network. Each redirector is paired under a round-robin alias, with each virtual host located on a different physical host of the RHEV cluster, providing redundancy and serviceability without the need for many scheduled and unscheduled downtimes. The redirectors are tiered:
- One pair of top-level North American redirectors. The BNL reverse proxy redirector is attached to this redirector.
- Three distinct pairs of regional redirectors (EAST, CENTRAL, and WEST), each serving a geographically distinct group of subordinate redirectors. All Tier 2s and production Tier 3s are attached to one of the regional redirectors.
- One Tier 3 redirector, which can search for files anywhere in the FAX system but is excluded from FAX file discovery by the other redirectors due to instabilities of Tier 3 systems.
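The tiered lookup can be sketched as a two-stage search. The topology below is illustrative, built only from the sites the slides name per region; `has_file` stands in for the cmsd query a real redirector would issue to its subscribed servers:

```python
# Illustrative topology from the sites named on the slides; the real
# membership lists live in the redirector configuration.
REGIONS = {
    "EAST":    ["BNL", "NET2"],
    "CENTRAL": ["MWT2", "AGLT2", "SWT2"],
    "WEST":    ["SLAC", "TRIUMF"],
}

def locate(filename, home_region, has_file):
    """Mimic the tiered lookup: the regional redirector queries its
    own sites first; only if none has the file does the top-level
    North American redirector query the other regions."""
    for site in REGIONS[home_region]:          # regional redirector
        if has_file(site, filename):
            return site
    for region, sites in REGIONS.items():      # top-level redirector
        if region == home_region:
            continue
        for site in sites:
            if has_file(site, filename):
                return site
    return None  # not found anywhere in the federation
```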

FAX System
A client can access a file from any location in the FAX system.
(Diagram: the top-level redirector sits above the WEST, CENTRAL, and EAST regional redirectors and the BNL reverse proxy; WEST serves SLAC, TRIUMF, etc., CENTRAL serves MWT2, AGLT2, SWT2, etc., and EAST serves NET2, etc.)

Stash Cache
- OSG data cache; see the following presentation for details: https://indico.cern.ch/event/330212/contributions/1718788/attachments/642386/883836/StashCache_at_UCSD_Xrootd_workshop.pdf
- BNL hosts one of its caches: 5 TB, 10 Gbps limit, no management of data.

Cloud Storage
Possible use of XROOTd for popular cloud storage such as ownCloud/Nextcloud. See the presentation on Thursday.

Summary
XROOTd is being used in various places at BNL.
(Diagram: local LAN data access via dCache XROOTd at BNL, and WAN access via the North American redirector over the WEST, CENTRAL, and EAST regions: SLAC, TRIUMF, etc.; MWT2, AGLT2, SWT2, etc.; NET2, etc.)