
Caching FAX accesses
Ilija Vukotic
ADC TIM - Chicago, October 28, 2014

2 Caching – why and where?
Straight Tier3
– Most of the disk space is devoted to input data, and the input data is almost always downloaded from the grid
  o A lot of stale data
  o Tedious cleanups (mails asking people to clean up)
  o Different file paths
  o Have to worry about the data sizes
Tier3 with a nearby Tier2 or Tier1
– Users are advised to DaTRI data to a localgroupdisk and use it from there. That solves all the problems above, but it bloats the localgroupdisk.

3 Caching – why and where?
Tier2
– Most of the jobs accessing FAX data are overflow jobs. These already read from the optimal place (thanks to the cost matrix).
– Any cache would have a very small hit rate.
DDM endpoint-less Tier2 – but with a cache disk
– Like UCL, or a cloud-based Tier2
– Would have a very small hit rate – unless …
– We specialize it:
  o Only certain physics group jobs?
  o Only certain types of jobs?
  o Only high-priority stuff?

4 XRootD caching proxy
Alja & Matevz's caching plugin
– Presented at the Federated Storage workshop*
– Tested to hundreds of concurrent reads/writes – good enough to saturate a 100 Gb/s link
Basics:
– File-level caching (pre-fetching)
– Sub-file level caching – caches blocks of configurable size
– Supports vector reads
– Purging based on high/low watermarks
But never tried in a production environment
* Indico contribution, confId=7207
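As a rough illustration of how such a caching proxy is set up, a minimal configuration might look like the sketch below. The origin host, cache path, and tuning values are placeholders, and directive names may differ between XRootD releases; the XRootD proxy file cache documentation has the authoritative syntax.

```
# xrootd-cache.cfg -- illustrative sketch, not the MWT2 production configuration
all.export /atlas
ofs.osslib   libXrdPss.so                       # run this server as a proxy
pss.cachelib libXrdFileCache.so                 # load the file/block caching plugin
pss.origin   fax-redirector.example.org:1094    # hypothetical origin redirector

oss.localroot /data/xrd-cache                   # where cached blocks live on disk
pfc.blocksize 1M                                # block size for sub-file caching
pfc.ram 4g                                      # RAM for in-flight blocks
pfc.diskusage 0.90 0.95                         # low/high watermarks for purging
```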

5 Configuration
FAX (MWT2 endpoint)
Original servers:
– 113 TB HDDs in 5 Dell shelves, RAID 6
– Native xrootd
– Also used as interactive nodes
Caching proxy:
– 28 TB HDD in 2 shelves, RAID 6
– 2 x 160 GB SSD in RAID 0, fronting the HDDs
– Custom kernel + rebuilt tools from SL7 (bcache)
– Straightforward configuration (xrootd 4.0.4)
– Zero downtime
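For the SSD-fronting-HDD piece, a bcache setup on such a node could look roughly like this. Device names and the mount point are hypothetical; the slide does not give the actual commands used at MWT2.

```sh
# Hypothetical devices: /dev/sdb = RAID 0 of the two 160 GB SSDs, /dev/sdc = HDD RAID 6 volume
make-bcache -C /dev/sdb        # format the SSD array as a cache device
make-bcache -B /dev/sdc        # format the HDD volume as the backing device

# Attach the backing device to the cache set (UUID reported by make-bcache)
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

mkfs.xfs /dev/bcache0          # filesystem holding the xrootd cache space
mount /dev/bcache0 /data/xrd-cache
```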

6 Performance
One of the xAOD analysis tutorial lessons*
– 200 input files (all available at MWT2)
– Simple cut-and-plot example
– rcSetup Base, ROOT, no TTC
Timings:
– Empty cache: 1:25
– Full file cached: 1:07
– Sub-file caching: 0:29
Now open for end-users. Waiting for their feedback.
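To give a feel for how an end-user exercises the cache, a read through the proxy might look like the sketch below. Hostnames and the file path are placeholders, not the actual MWT2 endpoints.

```sh
# Placeholders throughout -- illustrative only.
# Direct FAX read, served by the origin storage behind the redirector:
xrdcp -f root://fax-redirector.example.org:1094//atlas/rucio/<scope>/<file> /dev/null

# Same file read via the caching proxy: the first access fetches and caches the
# blocks actually read; repeated accesses are served from the local SSD/HDD cache.
xrdcp -f root://cache-proxy.example.org:1094//atlas/rucio/<scope>/<file> /dev/null
```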

7 Conclusion
For Tier3 storage, the XRootD cache solves most of the issues
Admin friendly
– Simple deployment
– High performance
– A 30/70 storage/cache split and sub-file level caching are recommended
User friendly
– Provides more effective use of storage (no space lost to stale files, files of long-gone users, …)
– Sub-file level caching can be seen as one free skim/slim stage for everybody
Tier2 usage still to be investigated, pending longer Tier3 testing