

Export Xrootd storage via Xrootd Proxy cluster

[Diagram: Client → Global Redirector → xrootd proxy redirector (mgr) → Xrd proxy data servers (with N2N) → backend Xrootd storage]

Flow:
0. The proxy cluster joins the federation at the Global Redirector.
1. Client: Open (GFN) at the Global Redirector.
2. Open (GFN) at the proxy redirector.
3. Open/Read (GFN) at a proxy data server.
4. Proxy data server: Read (PFN) from the backend Xrootd storage; the fetched data is returned to the client.

Legend:
GFN: global file name
PFN: physical (storage) file name
N2N: translation from GFN to PFN
cms.dfs directive: all data servers are equal
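A proxy data server in this layout might be configured along the following lines. This is only a sketch: the hostnames, ports, and N2N library path are illustrative placeholders, not taken from the slides.

```
# Sketch of an xrootd config for one proxy data server.
# Hostnames, ports, and plug-in paths are placeholders.
all.role server
all.manager proxy-redirector.example.org:1213

# Act as a forwarding proxy to the backend Xrootd storage
ofs.osslib libXrdPss.so
pss.origin backend-xrd.example.org:1094

# N2N plug-in: translate the incoming GFN to the backend PFN
oss.namelib /usr/lib64/libMyN2N.so

# All data servers see the same files, so any of them may serve a request
cms.dfs
```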

Export Posix storage via regular Xrootd cluster

[Diagram: Client → Global Redirector → Xrootd redirector (mgr) → regular Xrd data servers (with N2N) → backend Posix storage]

Flow:
0. The cluster joins the federation at the Global Redirector.
1. Client: Open (GFN) at the Global Redirector.
2. Open (GFN) at the Xrootd redirector.
3. Open/Read (GFN) at a data server.
4. Data server: Read (PFN) from the backend Posix storage; the fetched data is returned to the client.

Posix file systems: GPFS, Lustre, XrootdFS.
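For a Posix backend no proxy layer is needed. A minimal sketch, assuming the GFN space /atlas is mapped onto the mount point /gpfs by a simple prefix; all names are placeholders:

```
# Sketch: regular xrootd data server on top of a Posix file system.
all.role server
all.manager local-redirector.example.org:1213
all.export /atlas

# Simplest N2N: prefix mapping, so GFN /atlas/foo is read as PFN /gpfs/atlas/foo.
# A site with a more complex layout would load an N2N plug-in via oss.namelib.
oss.localroot /gpfs
cms.dfs
```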

Export dCache dCap doors via regular Xrootd cluster

[Diagram: Client → Global Redirector → Xrootd redirector (mgr) → regular Xrd data servers (with N2N and the dCap plug-in) → backend dCache dCap doors]

Flow:
0. The cluster joins the federation at the Global Redirector.
1. Client: Open (GFN) at the Global Redirector.
2. Open (GFN) at the Xrootd redirector.
3. Open/Read (GFN) at a data server.
4. Data server: Read (PFN) from a dCache dCap door; the fetched data is returned to the client.

dCap plug-in: an Xrootd OFS plug-in module that uses the dCap protocol.
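The dCap plug-in slot can be sketched as below. The library name is a placeholder standing in for the OFS/OSS plug-in module the slide describes, not a released component:

```
# Sketch: regular xrootd data server whose storage calls go over dCap.
all.role server
all.manager local-redirector.example.org:1213

# Hypothetical OSS plug-in that speaks the dCap protocol to the dCache doors
ofs.osslib /usr/lib64/libXrdOssDcap.so

# N2N plug-in: translate GFN to the dCache PFN
oss.namelib /usr/lib64/libMyN2N.so
cms.dfs
```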

Export dCache Xrootd doors via Xrootd Proxy cluster, part 1

[Diagram: Client → Global Redirector → xrootd proxy redirector (mgr) → Xrd proxy data servers (with N2N) → backend dCache Xrootd doors]

Flow:
0. The proxy cluster joins the federation at the Global Redirector.
1. Client: Open (GFN) at the Global Redirector.
2. Open (GFN) at the proxy redirector.
3. Open/Read (GFN) at a proxy data server.
4. Proxy data server: Read (PFN) from a dCache Xrootd door; the fetched data is returned to the client.
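This variant differs from the first proxy setup only in where the proxy's origin points. A sketch with placeholder names:

```
# Sketch: proxy data server whose origin is a dCache Xrootd door.
all.role server
all.manager proxy-redirector.example.org:1213

# Forward storage requests to the dCache Xrootd door (placeholder host)
ofs.osslib libXrdPss.so
pss.origin dcache-xrootd-door.example.org:1094

# N2N plug-in: translate GFN to the dCache PFN
oss.namelib /usr/lib64/libMyN2N.so
cms.dfs
```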

Export dCache Xrootd doors via Xrootd Proxy cluster, part 2

[Diagram: Client → Global Redirector → regular xrootd data server (with xrootd.redirect) → dCache xrootd door (Auth plug-in for N2N) → dCache pool nodes (Auth plug-in for N2N)]

Flow:
0. The regular xrootd data server joins the federation at the Global Redirector.
1. Client: Open (GFN) at the Global Redirector.
2. Open (GFN) at the regular xrootd data server.
3. Open (GFN) redirected, via the xrootd.redirect directive, to the dCache xrootd door.
4. Open/Read (GFN) at a dCache pool node; the Auth plug-in performs the N2N translation, and the data is returned to the client.
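The redirect step could be expressed with the xrootd.redirect directive roughly as follows. Host, port, and path are placeholders, and the exact option syntax should be checked against the xrootd reference:

```
# Sketch: regular xrootd data server that bounces opens to a dCache door.
all.role server
all.manager global-redirector.example.org:1213

# Redirect requests under /atlas to the dCache xrootd door (placeholder host)
xrootd.redirect dcache-door.example.org:1094 /atlas
```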

Overlapping Xrootd cluster on top of dCache

[Diagram: Client → Global Redirector → Xrootd redirector (mgr) → dCache pool nodes, each also running a native Xrootd data server with a GFN name space (symlinks) or N2N]

Flow:
0. The overlay cluster joins the federation at the Global Redirector.
1. Client: Open (GFN) at the Global Redirector.
2. Open (GFN) at the Xrootd redirector.
3. Open/Read (GFN) at a native Xrootd data server on a dCache pool node; the data is returned to the client.

GFN name space: a native xrootd name-space layout that follows the GFN, with symlinks pointing to the actual files on the same node (some CMS sites do this).
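The overlay server simply exports a symlink tree shaped like the GFN space. A sketch, assuming the name-space root /xrd-ns; all names are placeholders:

```
# Sketch: native xrootd data server running on each dCache pool node.
all.role server
all.manager overlay-redirector.example.org:1213
all.export /atlas

# /xrd-ns holds a GFN-shaped tree of symlinks to the local pool files, e.g.
#   /xrd-ns/atlas/.../file.root -> /pool/data/0000ABCD...
# so a GFN open resolves locally without a separate N2N plug-in.
oss.localroot /xrd-ns
```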