FAX splinter session

Rob Gardner
Computation and Enrico Fermi Institutes, University of Chicago
ATLAS Tier 1 / Tier 2 / Tier 3 Jamboree, December 10-11, 2012

Agenda

– Informal discussions today
– Main focus: planning for dress rehearsal activities
– Plans post-rehearsal

FAX 'Dress Rehearsal'

Steps towards usability:
– Define an FDR sufficient to cover most anticipated user workloads, including user docs
– Define a rehearsal period (~1 week) and metrics (and the needed monitoring)
– Poll for site volunteers and an ad-hoc FAX OPS team
– Execute the FDR; gather monitoring statistics and accounting data

Proposal: spend December preparing
– Identify five exemplar use cases that can be run by the FAX OPS team
– Prepare a clean set of tutorial-like documents
– Pre-place example datasets
– Load test redirectors and sites with the examples
– Solve the main problem of federated access to datasets

Week of January 21: go live with real users

Organizing the rehearsal

Identify the capabilities to be probed and assessed, with associated metrics
Prepare specific test cases:
– Synthetic tests that can be run by "us" from the facilities side
– Tutorial tests
  o Specific test jobs & datasets, highly supported
– Early-adopting users
– Load tests
Coordinate operations with ADC
Metrics collection, post-mortem analysis, and reporting

Use cases (1)

Start with validation of basic functionality (a sketch follows this list):
– Define a set of blessed sites that pass basic status tests
– Direct xrdcp of site-specific test files
– Copy from the parent redirector
– Failover checks:
  o Redirection for files off-site within the cloud
  o Redirection for files off-site outside the cloud
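A minimal sketch of what such a validation script could look like, using the standard xrdcp client. The hostnames and the test-file path below are placeholders, not the real FAX endpoints:

#!/usr/bin/env python
# Sketch of a basic-functionality check for one site: fetch the same
# test file via the site door, the cloud redirector, and the global
# redirector in turn, so both failover paths get exercised.
# All hostnames and the test-file path are placeholders.
import os
import subprocess

TEST_FILE = "/atlas/test/site-testfile.root"          # hypothetical path
ENDPOINTS = [
    ("site door",         "root://xrootd.site.example:1094/"),
    ("cloud redirector",  "root://cloud-rdr.example:1094/"),
    ("global redirector", "root://glrd.example:1094/"),
]

def try_copy(name, prefix):
    """xrdcp the test file from one endpoint; return True on success."""
    dest = "/tmp/faxtest-%d.root" % os.getpid()
    rc = subprocess.call(["xrdcp", "-f", prefix + TEST_FILE, dest])
    ok = (rc == 0)
    if os.path.exists(dest):
        os.remove(dest)
    print("%-18s %s" % (name, "OK" if ok else "FAILED (rc=%d)" % rc))
    return ok

if __name__ == "__main__":
    results = [try_copy(n, p) for n, p in ENDPOINTS]
    # A site is "blessed" only if direct access and both failover paths work.
    print("site blessed" if all(results) else "site NOT blessed")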

Use cases (2)

Simple read tests (a sketch follows this list):
– Simple script which reads the test file used for WAN testing
– Cloud contacts self-verify that all sites are "readable"
Extend for the FAX tutorial datasets
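One way the read script could look — a PyROOT sketch with a placeholder URL and a hypothetical tree name; the actual WAN test files and their contents may differ:

#!/usr/bin/env python
# Sketch of a simple FAX read test: open a remote test file through a
# redirector with ROOT and read a few entries, forcing real WAN reads.
# The URL and tree name are placeholders.
import ROOT

URL = "root://redirector.example:1094//atlas/test/wan-testfile.root"

f = ROOT.TFile.Open(URL)
if not f or f.IsZombie():
    raise SystemExit("cannot open %s" % URL)

tree = f.Get("CollectionTree")        # hypothetical tree name
if not tree:
    raise SystemExit("no test tree in %s" % URL)

n = int(min(100, tree.GetEntries()))
for i in range(n):
    tree.GetEntry(i)                  # each call reads data over the WAN
print("read %d entries OK from %s" % (n, URL))
f.Close()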

Use cases (3)

FAX-specific tutorials:
– Identify a few common analysis prototypes
– prun + ANALY queues
– Off-grid
– Pre-place datasets widely; replicate on stable sites
– Document instructions for test users
– Test the instructions
– Validate sites against the tutorial
– Usage of tools (isDSinFAX.py) — see the sketch after this list
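A hedged sketch of the tutorial workflow: check dataset availability with isDSinFAX.py, then submit with prun to an ANALY queue. The isDSinFAX.py command-line interface is assumed here (dataset name as the only argument, non-zero exit if not federated); the dataset, queue, and analysis-script names are likewise placeholders for illustration:

#!/usr/bin/env python
# Sketch of a tutorial-style workflow: verify the dataset is reachable
# through FAX, then submit an analysis job with prun to an ANALY queue.
# isDSinFAX.py's real interface may differ from what is assumed here.
import subprocess
import sys

dataset = "data12_8TeV.periodA.physics_Muons.NTUP.x"   # placeholder
queue   = "ANALY_MWT2"                                 # placeholder

if subprocess.call(["isDSinFAX.py", dataset]) != 0:
    sys.exit("dataset not available via FAX: %s" % dataset)

subprocess.check_call([
    "prun",
    "--exec", "python runAnalysis.py %IN",   # hypothetical user script
    "--inDS", dataset,
    "--outDS", "user.someone.faxtutorial.v1",
    "--site", queue,
])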

FAX usage modes

– Analysis within a site (just using the FAX door)
– Analysis within a cloud or region
– Extreme wide-area runs
– Access from opportunistic resources

(Controlled) Load testing

Define specific tests:
– Leverage HammerCloud (HC) tests where possible

A simple targeted test (sketched after this list):
– Choose one or more reference client sites
– Choose participating server sites
– 10, 100, 500, 1000 remote clients reading random files from a dataset
– Collect read times, efficiency
– Capture monitoring plots
  o From site monitors, for I/O and load
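A rough sketch of the targeted test as run from one reference client site, assuming a placeholder redirector and file list; the client count is scaled through 10, 100, 500, 1000 by changing N_CLIENTS:

#!/usr/bin/env python
# Sketch of the simple controlled load test: N parallel clients each
# xrdcp a randomly chosen file from a dataset and report read times.
# The redirector URL and file list are placeholders.
import random
import subprocess
import time
from multiprocessing import Pool

REDIRECTOR = "root://redirector.example:1094/"              # placeholder
FILES = ["/atlas/testds/file_%04d.root" % i for i in range(200)]
N_CLIENTS = 100   # scale through 10, 100, 500, 1000

def read_one(client_id):
    """Copy one random file to /dev/null and time the transfer."""
    path = random.choice(FILES)
    t0 = time.time()
    rc = subprocess.call(["xrdcp", "-f", REDIRECTOR + path, "/dev/null"])
    return (rc == 0, time.time() - t0)

if __name__ == "__main__":
    pool = Pool(N_CLIENTS)
    results = pool.map(read_one, range(N_CLIENTS))
    times = [dt for ok, dt in results if ok]
    print("success rate: %d/%d" % (len(times), len(results)))
    if times:
        print("mean read time: %.1f s" % (sum(times) / len(times)))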

Coordinated load testing

Simulate a coordinated, simultaneous activity across multiple sites, varying:
– # users
– # sites
– # jobs

Measure (see the sketch after this list):
– Job efficiency
– Throughput
– Plots of the distribution of FAX bandwidth
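The per-job metrics could be reduced as in this small sketch; the record fields are assumptions about what the job monitoring would report, not an actual PanDA or monitoring schema, and the numbers are purely illustrative:

# Sketch of reducing per-job records into efficiency and throughput.
jobs = [
    # (cpu_time_s, wall_time_s, bytes_read)  -- illustrative numbers
    (3200.0, 4000.0, 12.0e9),
    (2900.0, 4400.0, 11.5e9),
]

for cpu, wall, nbytes in jobs:
    efficiency = cpu / wall              # CPU/wall-time job efficiency
    throughput = nbytes / wall / 1e6     # MB/s read over FAX
    print("eff=%.2f  throughput=%.1f MB/s" % (efficiency, throughput))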

Pilot FAX site mover testing

Choose validated sites
Dedicated tests:
– "Offline" datasets
Metrics:
– Measure baseline processing times as before (local access)
– Measure processing times using remote data

Site validation