Scalla/xrootd Introduction — Andrew Hanushevsky, SLAC National Accelerator Laboratory, Stanford University — 6-April-09, ATLAS Western Tier 2 User's Forum

ATLAS WT2 UF 6-Apr-09

Outline
  File servers: NFS & xrootd
  How xrootd manages file data
  Multiple file servers (i.e., clustering)
    Considerations and pitfalls
  Getting to xrootd hosted file data
    Available programs and interfaces

File Server Types
[Diagram: an application on a client machine goes through the Linux NFS client to a Linux NFS server holding the data files. Alternatively, xrootd is nothing more than an application-level file server and client using another protocol: the application goes through the xroot client to an xroot server holding the data files.]

Why Not Just Use NFS?
  NFS V2 & V3 are inadequate
    Scaling problems with large batch farms
    Unwieldy when more than one server is needed
  NFS V4?
    Relatively new
    Multiple-server support still being vetted
    Still has single-point-of-failure problems

NFS & Multiple File Servers
[Diagram: an application (e.g., cp /foo /tmp) issues open("/foo") through the Linux NFS client, but the data file could live on server machine A or server machine B. Which server?]
NFS cannot naturally deal with this problem. Typical ad hoc solutions are cumbersome, restrictive, and error prone!

xrootd & Multiple File Servers I
[Diagram: the client runs "xrdcp root://R//foo /tmp". (1) The xroot client sends open("/foo") to redirector R. (2) R asks data servers A and B, "Who has /foo?". (3) Server B answers, "I do!". (4) R replies, "Try B", and the client opens /foo directly on server B.]
The xroot client does all of these steps automatically, without application (user) intervention!
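The redirect flow above can be sketched in a few lines. This is a minimal illustration only: the server names, the in-memory "catalog" dicts, and the function name are assumptions, and a real redirector broadcasts the query over the cluster protocol rather than iterating a dict.

```python
# Minimal sketch of the redirect protocol: ask who has the file,
# then open on the server that answers "I do!".
def redirected_open(path, servers):
    """servers maps server name -> {path: data}. Returns the
    (server_name, file_data) pair the client ends up with."""
    for name, files in servers.items():      # steps 1-2: "Who has /foo?"
        if path in files:                    # step 3: "I do!"
            return name, files[path]         # step 4: open on that server
    raise FileNotFoundError(path)

servers = {"A": {}, "B": {"/foo": b"data"}}
redirected_open("/foo", servers)             # found on server B
```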

File Discovery Considerations I
  The redirector does not have a catalog of files
    It always asks each server, and
    Caches the answers in memory for a "while"
      So it won't ask again about a past lookup
  Allows real-time configuration changes
    Clients never see the disruption
  Does have some side effects
    The lookup takes less than a microsecond when files exist
    Much longer when a requested file does not exist!
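The caching behavior described above can be sketched as a TTL cache. The class and method names and the TTL value are assumptions for illustration; the real cmsd manages positive and negative answer lifetimes separately.

```python
import time

# Sketch of the redirector's in-memory lookup cache: answers (including
# "no server has it") are remembered for a while, so the cluster is not
# re-asked about a past lookup until the entry expires.
class LookupCache:
    def __init__(self, ttl=600.0):
        self.ttl = ttl
        self._cache = {}                      # path -> (server_or_None, expiry)

    def lookup(self, path, servers, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(path)
        if entry and entry[1] > now:          # cached: don't ask again
            return entry[0]
        # Ask every server; None means no server answered (the slow path).
        server = next((s for s, files in servers.items() if path in files), None)
        self._cache[path] = (server, now + self.ttl)
        return server
```

Because answers are cached, configuration changes (a server leaving or joining) are invisible to clients until the cached entry expires.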

xrootd & Multiple File Servers II
[Diagram: the client runs "xrdcp root://R//foo /tmp" and sends open("/foo") to redirector R. (1) R asks data servers A and B, "Who has /foo?". (2) Both answer, "Nope!". The file is deemed not to exist if there is no positive response after 5 seconds!]

File Discovery Considerations II
  The system is optimized for the "file exists" case!
    There is a penalty for going after missing files
  Aren't new files, by definition, missing?
    Yes, but that involves writing data!
    The system is optimized for reading data
    So creating a new file will suffer a 5-second delay
  The delay can be minimized by using the xprep command
    Primes the redirector's file memory cache ahead of time
  Can files appear to be missing any other way?

Missing File vs. Missing Server
  In xroot, files exist to the extent that servers exist
    The redirector cushions this effect for 10 minutes
    Afterwards, the redirector cannot tell the difference
  This allows partially dead server clusters to continue
    Jobs hunting for "missing" files will eventually die
    But jobs cannot rely on files actually being missing
      xroot cannot give a definitive answer to "no server holds file x"
  This requires manual safety measures for file creation

Safe File Creation
  Avoiding the basic problem....
    Today's new file may be on yesterday's dead server
  Generally, do not re-use output file names
    Otherwise, serialize file creation
  Use temporary file names when creating new files
    E.g., path/....root.temp
  Remove the temporary to clean up any previous failures
    E.g., the -f xrdcp option or the truncate option on open
  Upon success, rename the temporary to its permanent name
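The recipe above can be sketched as follows, with the local filesystem standing in for xrootd storage. The ".temp" suffix follows the slide's example; the function name is an assumption.

```python
import os

# Sketch of safe file creation: write under a temporary name, clean up
# leftovers from previous failures, then promote on success.
def safe_create(path, data):
    tmp = path + ".temp"
    # Remove any leftover temporary from a previous failed attempt
    # (the analogue of the -f xrdcp option or truncate-on-open).
    try:
        os.remove(tmp)
    except FileNotFoundError:
        pass
    with open(tmp, "wb") as f:        # write under the temporary name
        f.write(data)
    os.replace(tmp, path)             # success: rename to the permanent name
```

Renaming only after a complete write means a half-written file from a crashed job can never be mistaken for a finished output.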

Getting to xrootd hosted data
  Use the root framework
    Automatically, when files are named root://....
    Manually, use the TXNetFile() object
      Note: an identical TFile() object will not work with xrootd!
  xrdcp
    The copy command
  xprep
    The redirector seeder command
  Via fuse on atlint01.slac.stanford.edu
  POSIX preload library

Copying xrootd hosted data
  xrdcp [options] source dest
    Copies data to/from xrootd servers
  Some handy options:
    -f      erase dest before copying source
    -s      stealth mode (i.e., produce no status messages)
    -S n    use n parallel streams (use only across a WAN)

Preparing xrootd hosted data
  xprep [options] host[:port] [path [...]]
    Prepares xrootd access via redirector host:port
    Minimizes wait time if you are creating many files
  Some handy options:
    -w      file will be created or written
    -f fn   file fn holds a list of paths, one per line
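A small script can drive both commands with the options listed on these two slides. A minimal sketch of building the command lines (the helper names are assumptions; the lists would be handed to something like subprocess.run):

```python
# Build xrdcp/xprep argument lists using the options described above.
def xrdcp_cmd(source, dest, force=False, quiet=False, streams=None):
    cmd = ["xrdcp"]
    if force:
        cmd.append("-f")               # erase dest before copying source
    if quiet:
        cmd.append("-s")               # produce no status messages
    if streams is not None:
        cmd += ["-S", str(streams)]    # n parallel streams (WAN only)
    return cmd + [source, dest]

def xprep_cmd(host, paths, will_write=False):
    cmd = ["xprep"]
    if will_write:
        cmd.append("-w")               # files will be created or written
    return cmd + [host] + list(paths)
```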

Interactive xrootd hosted data
  The Atlas xroot redirector is mounted as a file system
    "/xrootd" on atlint01.slac.stanford.edu
  Use this for typical operations
    dq2-get
    dq2-put
    dq2-ls
    rm

For Everything Else
  POSIX preload library (libXrdPosixPreload.so)
    Works with any POSIX I/O compliant program
    Provides direct access to xrootd hosted data
  Does not need any changes to the application
    Just run the binary as is
  Talk to Wei or Andy if you want to use it

Conclusion
  We hope that this is an effective environment
    Production
    Analysis
  But we need your feedback
    What is unclear
    What is missing
    What is not working
    What can work even better

Future Directions
  More simplicity!
    Integrating the cnsd into the cmsd
      Reduces configuration issues
    Pre-linking the extended open file system (ofs)
      Fewer configuration options
  Tutorial-like guides!
    An apparent need as we deploy at smaller sites

Acknowledgements
  Software Contributors
    Alice: Derek Feichtinger
    CERN: Fabrizio Furano, Andreas Peters
    Fermi: Tony Johnson (Java)
    Root: Gerri Ganis, Bertrand Bellenet, Fons Rademakers
    STAR/BNL: Pavel Jakl
    SLAC: Jacek Becla, Tofigh Azemoon, Wilko Kroeger
    BeStMan (LBNL): Alex Sim, Junmin Gu, Vijaya Natarajan (BeStMan team)
  Operational Collaborators
    BNL, FZK, IN2P3, RAL, UVIC, UTA
  Partial Funding
    US Department of Energy, Contract DE-AC02-76SF00515 with Stanford University