Xrootd Demonstrator Infrastructure
OSG All Hands Meeting, Harvard University, March 7-11, 2011
Andrew Hanushevsky, SLAC

Presentation transcript:

1 xrootd Demonstrator Infrastructure
OSG All Hands Meeting, Harvard University, March 7-11, 2011
Andrew Hanushevsky, SLAC
http://xrootd.org

2 Goals
- Describe xrootd architecture configurations
- Show how these can be used by the demos: Alice (in production), Atlas, and CMS
- Overview of the File Residency Manager and how it addresses file placement
- Cover recent and future developments
- Conclusion

3 The Motivation
- Can we access HEP data as a single repository? Treat it like a Virtual Mass Storage System.
- Is cache-driven grid data distribution feasible?
  - The last missing file issue (Alice production)
  - Adaptive file placement at Tier 3's (Atlas demo)
  - Analysis at storage-starved sites (CMS demo)
- Does xrootd provide the needed infrastructure?

4 A Simple xrootd Cluster
Each data server (A, B, C) runs a cmsd/xrootd pair; so does the manager (a.k.a. redirector).
1: Client sends open("/my/file") to the manager
2: Manager's cmsd asks the data servers: who has "/my/file"?
3: Server A answers: I do!
4: Manager redirects the client: try open() at A
5: Client sends open("/my/file") to server A
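The redirection flow above can be sketched as a toy simulation. This is illustrative only (hypothetical class and host names, not the real cmsd wire protocol): the point is that the manager never serves data itself; it locates the file and hands the client off to a data server.

```python
# Toy sketch of the xrootd redirector flow (slide 4). Names are
# illustrative; the real protocol runs between cmsd/xrootd daemons.

class DataServer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)

    def has(self, path):           # steps 2/3: "Who has /my/file?" -> "I do!"
        return path in self.files

    def open(self, path):          # step 5: client opens at the chosen server
        if path not in self.files:
            raise FileNotFoundError(path)
        return f"{self.name}:{path}"

class Redirector:
    """The manager: locates a file, then redirects; it serves no data."""
    def __init__(self, servers):
        self.servers = servers

    def open(self, path):          # steps 1-4: poll servers, redirect client
        for server in self.servers:
            if server.has(path):
                return server      # "Try open() at A"
        raise FileNotFoundError(path)

cluster = Redirector([DataServer("A", {"/my/file"}),
                      DataServer("B", set()),
                      DataServer("C", set())])
server = cluster.open("/my/file")  # manager redirects to A
handle = server.open("/my/file")   # client reads directly from A
print(handle)
```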

5 The Fundamentals
- An xrootd-cmsd pair is the building block
  - xrootd provides the client interface; handles data and redirections
  - cmsd manages xrootd's (i.e., forms clusters); monitors activity and handles file discovery
- The building block is uniformly stackable
  - Can build a wide variety of configurations, much like you would do with Lego blocks
- Extensive plug-ins provide adaptability

6 Federating xrootd Clusters
Three distinct sites (ANL, SLAC, UTA), each with its own manager (a.k.a. local redirector) and data servers (A, B, C), federate under a meta-manager (a.k.a. global redirector).
1: Client sends open("/my/file") to the meta-manager
2: Meta-manager asks the site managers: who has "/my/file"?
3: Site managers ask their data servers: who has "/my/file"?
4: ANL answers: I do!
5: Meta-manager redirects the client: try open() at ANL
6: Client sends open("/my/file") to ANL's manager
7: ANL's manager redirects the client: try open() at A
8: Client sends open("/my/file") to server A
Data is uniformly available from three distinct sites. The search is exponentially parallel (i.e., O(2^n)).
But I'm behind a firewall! Can I still play?
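A federation like the one above is wired together through role directives in each daemon's single configuration file. A minimal sketch follows, with hypothetical hostnames; the exact directive spellings should be checked against the xrootd configuration reference for your release.

```
# Sketch only: hostnames are placeholders, not a verified production config.

# --- meta-manager (global redirector) ---
all.role meta manager
all.manager meta metamgr.example.org:1213

# --- site manager (local redirector), e.g. at one site ---
all.role manager
all.manager meta metamgr.example.org:1213   # subscribe to the meta-manager
all.manager sitemgr.example.org:1213

# --- data server at that site ---
all.role server
all.manager sitemgr.example.org:1213

# Exported namespace, common across the federation
all.export /atlas
```

The same building block appears at every level; only the `all.role` line and the manager it subscribes to change.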

7 Firewalls & xrootd
- xrootd is a very versatile system
  - It can be a server, manager, or supervisor
  - All desired roles are specified in a single configuration file
- The libXrdPss.so plug-in creates an xrootd chameleon
  - Allows xrootd to be a client to another xrootd
  - So all the basic roles can run as proxies
  - Transparently getting around firewalls, assuming you run the proxy role on a border machine
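A proxy role is, in this sketch, an ordinary server whose storage layer is replaced by the libXrdPss.so client plug-in pointing at the real cluster behind the firewall. Hostnames are hypothetical and the directive set is a sketch, to be checked against the xrootd proxy documentation:

```
# Sketch of a proxy data server on a border machine (hypothetical hosts).
all.role server

# Replace the storage back-end with the proxy plug-in: requests are
# forwarded to the inner redirector instead of being served from disk.
ofs.osslib libXrdPss.so
pss.origin inner-redirector.example.org:1094

all.export /atlas
```

Because the proxy is just another xrootd role, proxy servers can themselves be clustered behind a proxy manager, as the next slide shows.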

8 A Simple xrootd Proxy Cluster
Proxy servers (X, Y) and a proxy manager (a.k.a. proxy redirector) run on border machines, in front of the firewall that shields the data servers (A, B, C) and their manager (a.k.a. redirector).
1: Client sends open("/my/file") to the proxy manager
2: Proxy manager redirects the client: try open() at X
3: Client sends open("/my/file") to proxy server X
4: Proxy X sends open("/my/file") through the firewall to the manager
5: Manager's cmsd asks the data servers: who has "/my/file"?
6: Server A answers: I do!
7: Manager redirects proxy X: try open() at A
8: Proxy X sends open("/my/file") to server A
Proxy managers can federate with a meta-manager. How does this help in a federated cluster?

9 Demonstrator Specific Features
- A uniform file access infrastructure, usable even in the presence of firewalls
- Access to files across administrative domains; each site can enforce its own rules
- Site participation proportional to scalability (essentially the bit-torrent social model)
- Increased opportunities for HEP analysis
- A foundation for novel approaches to efficiency

10 Alice & Atlas Approach
- Real-time placing of files at a site, built on top of the File Residency Manager (FRM)
  - FRM: the xrootd service that controls file residency
- Locally configured to handle events such as:
  - A requested file is missing
  - A file is created or an existing file is modified
  - Disk space is getting full
- Alice uses an "only when necessary" model
- Atlas will use a "when analysis demands" model

11 Using FRM For File Placement
The xrootd data server and the frm_xfrd transfer daemon share a transfer queue and a configuration file, e.g.:

all.export /atlas/atlasproddisk stage
frm.xfr.copycmd in /opt/xrootd/bin/xrdcp \
  -f -np root://globalredirector/$SRC $DST

1: Client sends open(missing_file)
2: xrootd inserts a transfer request into the queue
3: xrootd tells the client to wait
4: frm_xfrd reads the transfer request
5: frm_xfrd launches the transfer agent
6: The agent copies in the file from remote storage (xrdcp, dq2get, globus-url-copy, gridFTP, scp, wget, etc.)
7: The agent notifies xrootd that the copy is OK
8: xrootd wakes up the client

12 FRM Even Works With Firewalls
- The FRM needs one or more border machines
- The server's transfer agent simply launches the real agent across the border via ssh (you need to set up ssh identity keys)
- How it's done:

frm.xfr.copycmd in noalloc ssh bordermachine /opt/xrootd/bin/xrdcp -f \
  root://globalredirector/$LFN root://mynode/$LFN?ofs.posc=1

1: xrootd on the data server writes the transfer request
2: frm_xfrd reads the transfer request
3: frm_xfrd launches the transfer agent on the border machine via ssh
4: The agent copies in the file from the big bad Internet
5: The agent notifies xrootd to run the client

13 Storage-Starved Sites (CMS)
- Provide direct access to missing files
  - This is basically a freebie of the system
- However, latency issues exist
  - Naively, as much as a 3x increase in wall-clock time
  - Can be as low as 5%, depending on the job's CPU/IO ratio
  - The root team is aggressively working to reduce it
- On the other hand, it may be better than not doing analysis at such sites; no analysis is essentially infinite latency
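The dependence on the CPU/IO ratio can be made concrete with a toy model: wall time is CPU time plus I/O time, and remote access inflates only the I/O part. The 3x factor below is the slide's naive worst case, used here purely for illustration.

```python
# Toy model of slide 13's latency claim: wall time = CPU time + I/O time.
# Remote (WAN) reads inflate only the I/O component, so the job-level
# slowdown depends on the job's CPU/IO ratio. The 3x penalty is the
# slide's naive worst case, assumed here for illustration.

def slowdown(cpu_fraction, io_penalty=3.0):
    """Wall-clock inflation factor for a job spending `cpu_fraction`
    of its local wall time on CPU and the rest on I/O."""
    io_fraction = 1.0 - cpu_fraction
    return cpu_fraction + io_fraction * io_penalty

# A purely I/O-bound job sees the full naive 3x hit...
print(f"{slowdown(0.0):.2f}x")    # 3.00x
# ...while a CPU-dominated job (97.5% CPU) sees only ~5% extra wall time.
print(f"{slowdown(0.975):.2f}x")  # 1.05x
```

This is consistent with the slide's range: the more CPU-bound the analysis, the smaller the penalty for reading data over the WAN.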

14 Security
- xrootd supports the needed security models, most notably grid certificates (GSI)
- Human cost needs to be considered
  - Does read-only access require this level of security, considering that the data is unusable without a framework?
- Each deployment faces different issues
  - Alice uses light-weight internal security
  - Atlas will use server-to-server certificates
  - CMS will need to deploy the full grid infrastructure

15 Recent Developments
- FS-independent extended attribute framework
  - Used to save file-specific information: migration time, residency requirements, checksums
- Shared-everything file system support
  - Optimizes file discovery in distributed file systems: dCache, DPM, GPFS, HDFS, Lustre, proxy xrootd
- Meta-manager throttling
  - Configurable per-site query limits

16 Future Major Developments
- Integrated checksums
  - Inboard computation, storage, and reporting (outboard computation already supported)
- Specialized meta-manager
  - Allows many more subscriptions than today
- Internal DNS caching and full IPv6 support
- Automatic alerts
  - Part of the message and logging restructuring

17 Conclusion
- xrootd mates well with the demo requirements
  - Can federate almost any file system
  - Gives a uniform view of massive amounts of data, assuming a per-experiment common logical namespace
  - Secure and firewall friendly
- An ideal platform for adaptive caching systems
- Completely open source under a BSD license
- See more at http://xrootd.org/

18 Acknowledgements
Current Software Contributors
- ATLAS: Doug Benjamin
- CERN: Fabrizio Furano, Lukasz Janyst, Andreas Peters, David Smith
- Fermi/GLAST: Tony Johnson
- FZK: Artem Trunov
- LBNL: Alex Sim, Junmin Gu, Vijaya Natarajan (BeStMan team)
- Root: Gerri Ganis, Bertrand Bellenet, Fons Rademakers
- OSG: Tim Cartwright, Tanya Levshina
- SLAC: Andrew Hanushevsky, Wilko Kroeger, Daniel Wang, Wei Yang
- UNL: Brian Bockelman
- UoC: Charles Waldman

Operational Collaborators: ANL, BNL, CERN, FZK, IN2P3, SLAC, UTA, UoC, UNL, UVIC, UWisc

US Department of Energy Contract DE-AC02-76SF00515 with Stanford University

