Any Data, Anytime, Anywhere


Any Data, Anytime, Anywhere
Dan Bradley <dan@hep.wisc.edu>, representing the AAA Team
OSG All Hands Meeting, March 2013, Indianapolis

AAA Project Goal
Use resources more effectively through remote data access in CMS.
Sub-goals:
- Low-ceremony, low-latency access to any single event
- Reduce the data access error rate
- Overflow jobs from busy sites to less busy ones
- Use opportunistic resources
- Make life at T3s easier

xrootd: Federating Storage Systems
Step 1: deploy a seamless global storage interface while preserving site autonomy:
- An xrootd plugin maps the global logical filename to the physical filename at the site. The mapping is typically trivial in CMS: /store/* → /store/* (see the sketch below).
- An xrootd plugin reads from the site storage system (example: HDFS).
- User authentication is also pluggable, but we use standard GSI + lcmaps + GUMS.
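To make the mapping concrete, here is a minimal sketch of the trivial logical-to-physical name translation, written in plain Python rather than the actual xrootd plugin API; the /hdfs mount prefix and the function name are assumptions.

```python
def lfn_to_pfn(lfn, local_prefix="/hdfs"):
    """Map a CMS logical filename to a site-local physical path.

    For most CMS sites the mapping is trivial: /store/... stays /store/...,
    prefixed by wherever the site storage (e.g. HDFS) is mounted.
    """
    if not lfn.startswith("/store/"):
        raise ValueError("not a CMS logical filename: %s" % lfn)
    return local_prefix + lfn

# /store/data/... -> /hdfs/store/data/...
print(lfn_to_pfn("/store/data/Run2012A/MinimumBias/RAW/v1/file.root"))
```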

Status of CMS Federation
T1 (disk) + 7/7 T2s federated:
- Covers 100% of the data for analysis
- Does not cover files only on tape
World: 2 T1s + 1/3 T2s accessible
- Monitored, but not yet a "turns your site red" service

WAN xrootd traffic

Opening Files

Microscopic View

Problem
Access via xrootd can overload a site's storage system. Florida, to the federation: "We are seceding!"
Terms of the Feb 2013 treaty:
- Local xrootd I/O load monitoring was added.
- A site can configure automatic throttles: when the load is too high, new transfer requests are rejected (a sketch follows below).
- The end user only sees an error if the file is unavailable elsewhere in the federation.
But these policies are intended for the exception, not the norm, because ...
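A minimal sketch of the kind of load-based admission throttle described above; the load metric, the threshold, and the function name are assumptions, not the actual site plugin.

```python
import os

MAX_LOAD_PER_CORE = 2.0  # assumed threshold; a real site would tune this

def accept_new_transfer():
    """Reject new xrootd transfer requests while local I/O load is too high."""
    load1, _, _ = os.getloadavg()   # 1-minute load average
    cores = os.cpu_count() or 1
    return (load1 / cores) < MAX_LOAD_PER_CORE

if not accept_new_transfer():
    # The client retries elsewhere in the federation; the user only sees
    # an error if no other site can serve the file.
    print("transfer rejected: local storage load above threshold")
```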

Regulation of Requests
To first order, jobs still run at the sites holding the data:
- ~0.25 GB/s average remote read rate
- O(10) GB/s average local read rate
- ~1.5 GB/s PhEDEx transfer rate
Cases where data is read remotely:
- Interactive: limited by the number of humans
- Fallback: limited by the rate of errors opening files
- Overflow: limited by scheduling policy
- Opportunistic: limited by scheduling policy
- T3: watching this

More on Fallback
On a file open error, CMS software can retry via an alternate location/protocol, configured by the site admin:
- We fall back to the regional xrootd federation (US, EU).
- Inter-region fallback is also possible, but we have not configured it ... yet.
- Fallback can recover from a missing-file error, but not from a missing block within a file (more on this later).
- It has more uses than just error recovery (a sketch of the basic idea follows) ...
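A sketch of the fallback idea only, not the CMSSW mechanism: try the local replica first, then retry through a regional xrootd redirector. The /hdfs prefix, the redirector hostname, and the use of uproot as the reader are illustrative assumptions.

```python
import uproot  # assumed reader; any ROOT-capable I/O layer would do

LOCAL_PREFIX = "/hdfs"                      # assumed site storage mount
REGIONAL_REDIRECTOR = "xrootd.example.org"  # placeholder, not a real redirector

def open_with_fallback(lfn):
    try:
        return uproot.open(LOCAL_PREFIX + lfn)   # local access first
    except OSError:
        # Fall back to remote access through the regional federation.
        return uproot.open("root://%s/%s" % (REGIONAL_REDIRECTOR, lfn))
```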

More about Overflow
GlideinWMS scheduling policy. Candidates for overflow:
- Idle jobs with wait time above a threshold (6 h)
- Desired data available in a region supporting overflow
Regulation of overflow:
- A limited number of overflow glideins is submitted per source site
Data access:
- No reconfiguration of the job is required
- Uses the fallback mechanism: try local access, fall back to remote access on failure
(A toy version of the candidacy check is sketched below.)
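A toy version of the overflow candidacy check described above; the 6-hour threshold comes from the slide, while the job representation, field names, and region sets are assumptions.

```python
import time

OVERFLOW_WAIT_THRESHOLD = 6 * 3600  # seconds a job must sit idle (from the slide)

def overflow_candidate(job, regions_with_data, overflow_regions):
    """A job may overflow if it has waited long enough and its data is
    readable, via the federation, in a region that supports overflow."""
    waited = time.time() - job["queue_time"]
    if waited < OVERFLOW_WAIT_THRESHOLD:
        return False
    return bool(set(regions_with_data) & set(overflow_regions))

job = {"queue_time": time.time() - 7 * 3600}    # idle for 7 hours
print(overflow_candidate(job, {"US"}, {"US"}))  # True
```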

Overflow
Small but steady overflow in the US region.

Running Opportunistically
To run CMS jobs at non-CMS sites, we need:
- Outbound network access
- Access to CMS data files: xrootd remote access
- Access to conditions data: an HTTP proxy
- Access to CMS software: CVMFS (which also needs the HTTP proxy)
(A pre-flight check along these lines is sketched below.)
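A rough pre-flight check for the requirements above, as one might run it at an opportunistic site; the probe host and the proxy environment variable are placeholders/assumptions.

```python
import os
import socket

def _can_connect(host, port, timeout=3.0):
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def check_opportunistic_site():
    return {
        "outbound_network": _can_connect("example.org", 80),     # placeholder probe host
        "http_proxy_set":   bool(os.environ.get("http_proxy")),  # conditions data + CVMFS
        "cvmfs_mounted":    os.path.isdir("/cvmfs/cms.cern.ch"),  # CMS software repository
    }

print(check_opportunistic_site())
```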

CVMFS Anywhere
Non-CMS sites might not mount the CMS CVMFS repository, so we run the job under Parrot (from cctools):
- Gives access to CVMFS without a FUSE mount
- Also gives us identity boxing: privilege separation between the glidein and the user job
- Has worked well for guinea-pig analysis users; we are working on extending it to more users
What about in the cloud? If you control the VM image, just mount CVMFS.
(A minimal wrapper along these lines is sketched below.)
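A minimal sketch of such a wrapper, assuming parrot_run is on the PATH and an HTTP proxy is already configured as the previous slide requires; the job command and the decision logic are illustrative, not the actual glidein wrapper.

```python
import os
import shutil
import subprocess
import sys

def run_job(cmd):
    if os.path.isdir("/cvmfs/cms.cern.ch"):
        return subprocess.call(cmd)                    # site already mounts CVMFS
    if shutil.which("parrot_run"):
        return subprocess.call(["parrot_run"] + cmd)   # user-space CVMFS access via Parrot
    print("no CVMFS mount and no parrot_run available", file=sys.stderr)
    return 1

# Example: run_job(["bash", "cms_job.sh"])
```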

Fallback++
Today we can recover when a file is missing from the local storage system, but a missing block within a file still causes the job to fail:
- The job may come back and fail again ...
- An admin may need to intervene to recover the data
- The user may need to resubmit the job
Can we do better?

Yes, We Hope
The concept:
- Fall back on a read error
- Cache the remotely read data
- Insert the downloaded data back into the storage system
(A whole-file version of this is sketched below.)
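A sketch of the whole-file form of this idea using the standard xrdcp and hdfs command-line tools; the redirector host, scratch path, and HDFS prefix are assumptions, and the real implementation lives inside the site's storage client rather than a script like this.

```python
import subprocess

REDIRECTOR = "xrootd.example.org"   # placeholder regional redirector

def heal_file(lfn, scratch="/tmp/healed.root", hdfs_prefix=""):
    """Fetch a file that failed to open locally and put it back into HDFS."""
    remote = "root://%s/%s" % (REDIRECTOR, lfn)
    subprocess.check_call(["xrdcp", "-f", remote, scratch])   # cache the whole file
    subprocess.check_call(
        ["hdfs", "dfs", "-put", "-f", scratch, hdfs_prefix + lfn]
    )                                                         # re-insert into site storage

# Example: heal_file("/store/data/Run2012A/some/file.root")
```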

File Healing

File Healing Status
- Currently working via whole-file caching, still only triggered by a file open error
- Plans to support partial-file healing: this will require falling back to a local xrootd proxy on all read failures
- The current implementation is HDFS-specific (it modifies the HDFS client to do the fallback to xrootd), but it is not CMS-specific

Cross-site Replication
Once we have partial-file healing ...
- We could reduce the HDFS replication level from 2 to 1 and rely on cross-site redundancy instead (see the sketch below)
- We would need to enforce the replication policy at a higher level
- This may not be a good idea for hot data; we need to consider the impact on performance
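For reference, lowering the local replication factor is a one-line operation with the stock HDFS CLI; the path here is a placeholder, and whether to do this at all depends on the caveats above.

```python
import subprocess

def set_single_replica(path):
    # "hdfs dfs -setrep" changes the replication factor; -w waits until the
    # target factor is reached. This drops the local copies from 2 to 1.
    subprocess.check_call(["hdfs", "dfs", "-setrep", "-w", "1", path])

# Example: set_single_replica("/store/mc/SomeDataset")
```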

Performance
Mostly CMS application-specific work:
- Improved remote read performance by combining multiple reads into vector reads, eliminating many round trips (illustrated below)
- Working on BitTorrent-like capabilities in the CMS application: read from multiple xrootd sources, balance load away from a slower source, and react on an O(1) minute time scale
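To illustrate why this helps, here is a toy read coalescer: nearby (offset, length) requests are merged so that one vector read replaces many small round trips. The gap heuristic and the interface are assumptions, not the CMS or xrootd client code.

```python
def coalesce(requests, max_gap=64 * 1024):
    """Merge (offset, length) read requests whose gaps are small enough that
    fetching the bytes in between is cheaper than another round trip."""
    merged = []
    for off, length in sorted(requests):
        if merged and off - (merged[-1][0] + merged[-1][1]) <= max_gap:
            last_off, last_len = merged[-1]
            new_end = max(last_off + last_len, off + length)
            merged[-1] = (last_off, new_end - last_off)
        else:
            merged.append((off, length))
    return merged

# Three small reads become two chunks, sent together as a single vector read.
print(coalesce([(0, 100), (200, 100), (1_000_000, 100)]))
# -> [(0, 300), (1000000, 100)]
```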

HTCondor Integration
Improved vanilla-universe file transfer scheduling and monitoring in 7.9:
- There used to be one file transfer queue, so one misconfigured workflow could starve everything else, and this was difficult to diagnose
- Now there is one queue per user, or per arbitrary attribute (e.g. target site), with equal sharing between transfer queues in case of contention
- Transfer status, bandwidth usage, disk load, and network load are reported
- And now you can condor_rm those malformed jobs that are transferring GBs of files :) (see the sketch below)
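For example, using the real condor_rm -constraint option to clear out one user's offending jobs; the constraint expression itself is an assumption, and in practice an admin would pick attributes that identify the misbehaving workflow more precisely.

```python
import subprocess

def remove_jobs_for_owner(owner):
    # Remove all queued jobs belonging to this owner. A real constraint would
    # be narrower, e.g. also matching the cluster of the malformed workflow.
    constraint = 'Owner == "%s"' % owner
    subprocess.check_call(["condor_rm", "-constraint", constraint])

# Example: remove_jobs_for_owner("someuser")
```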

Summary
The xrootd storage federation is rapidly expanding and proving useful within CMS. We hope to do more:
- Automatic error recovery
- Opportunistic usage
- Improving performance