Global Data Access – View from the Tier 2

Rob Gardner, Charles Waldman

project
- We have long recognized the need to provide efficient user access to datasets at Tier 2 sites
  - Talk: http://www.mwt2.org/~cgw/t2t/talk.html
  - Demo: http://repo.mwt2.org/t2t
- The reality now is that we have > 5 PB in our T2 cloud
- Typical sites hold > 50K datasets, O(10M) files
- We have since gained experience with local and wide-area access using both dCache and xrootd services
- Assuming a namespace convention (see the sketch below), we could start a prototype T2 access project using the WLCG demonstrator findings
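
For concreteness, here is a minimal C++ sketch of the translation such a namespace convention enables: one global logical file name (LFN), identical at every federated site, mapped to that site's local physical path (PFN). In XRootD this logic lives in a Name2Name ("N2N") plugin; the convention and local root shown here are hypothetical placeholders, not an agreed standard.

    #include <string>

    // Map a global LFN (the same at every site) to this site's local PFN.
    // Hypothetical convention:  LFN = /atlas/dq2/<dataset>/<file>
    // Hypothetical local root:  "/pnfs/example.edu" at a dCache site
    std::string lfn2pfn(const std::string& lfn, const std::string& localRoot)
    {
        // Under the assumed convention the PFN is just the LFN grafted onto
        // the site storage root; a real N2N may need site-specific rules.
        return localRoot + lfn;
    }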

diagram (courtesy Brian Bockelman)

diagram (courtesy Brian Bockelman)
- Extend "User" to be a T3 user plus a T3 xrootd storage system acting as a cache (sketched below):
  - no pre-fetching of files
  - no "data management"
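
A minimal sketch of what "acting as a cache" with no pre-fetching and no data management amounts to (plain C++ illustration, not xrootd source; the cache root and redirector host are hypothetical): the T3 serves a file locally if an earlier read already brought it in, and otherwise reads through the federation on demand, so the cache fills purely as a side effect of user access.

    #include <string>
    #include <sys/stat.h>

    // Has an earlier access already left this file on T3 disk?
    bool inLocalCache(const std::string& pfn)
    {
        struct stat st;
        return ::stat(pfn.c_str(), &st) == 0;
    }

    // Decide what the T3 user's job should actually open.
    std::string resolve(const std::string& lfn)
    {
        std::string pfn = "/t3data" + lfn;   // hypothetical T3 cache root
        if (inLocalCache(pfn))
            return pfn;                      // cache hit: purely local read
        // Cache miss: read via the global redirector (hypothetical host).
        // Bytes land on T3 disk as they are read; nothing is pre-fetched
        // and no catalogue is updated.
        return "root://global-redirector.example.org/" + lfn;
    }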

(some) questions
- On a T2 with many data servers, what additional services may be required? An xrootd on each dCache pool, e.g.?
- What local caching strategy is best on the client side – block or file? And which associated additional services (e.g. frm, squid)?

First steps
- Try out the namespace convention & dq2 client
- Set up the needed T2 site-level xrootd federation services
- Register with the SLAC global redirector; run functional tests
- Create a namespace-to-site path translation method; test with xrdcp
- Test local direct read access (see the sketch below):
  - ROOT client and xrootd storage
  - ROOT client and dCache and GPFS storage
- Same, with remote client access; study latency issues
- Next: investigate caching on the client (user) side
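
A minimal sketch of the read tests above, using ROOT's built-in xrootd support; the URL, tree name, and branch are hypothetical placeholders. The same path can first be exercised with a plain copy (xrdcp root://<redirector>//<lfn> /tmp/test.root) before timing direct reads against the local door and, from a remote client, against the global redirector.

    #include "TFile.h"
    #include "TTree.h"

    // Open a file through an xrootd (or dCache) door and force real I/O.
    // Point it first at the local redirector, then at the SLAC global
    // redirector from a remote client, and compare timings (latency study).
    void readTest(const char* url)
    {
        // e.g. "root://redirector.example.org//atlas/dq2/<dataset>/<file>"
        TFile* f = TFile::Open(url);   // the redirector hands us a data server
        if (!f || f->IsZombie()) return;

        TTree* t = 0;
        f->GetObject("physics", t);    // hypothetical tree name
        if (t) t->Draw("pt");          // hypothetical branch; triggers LAN/WAN reads
        f->Close();
        delete f;
    }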