FDR readiness & testing plan


FDR readiness & testing plan
R. Gardner, 1/21/13

Site and redirection status
All monitoring links are collected at https://twiki.cern.ch/twiki/bin/view/Atlas/MonitoringFax

Redirectors

Site reporting to monitoring
- Display from the new UDP collector
- Most, but not all, sites are reporting
- Other small issues remain to check
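The slides do not show the collector itself; below is a minimal sketch of what such a UDP collector can look like, assuming sites push xrootd summary reports (the XML <statistics> documents produced by the xrd.report directive) to a collector host. The port number and the fields printed are illustrative assumptions, not the FDR production values.

```python
# Minimal sketch of a UDP collector for xrootd summary monitoring.
# Assumes each site server is configured with something like
#   xrd.report collector.example.org:9931 every 60s
# so it pushes an XML <statistics> report over UDP.
import socket
import xml.etree.ElementTree as ET

PORT = 9931  # hypothetical collector port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

while True:
    data, (host, _) = sock.recvfrom(65535)
    try:
        root = ET.fromstring(data.decode("utf-8", errors="replace"))
    except ET.ParseError:
        continue  # malformed packet; skip rather than crash the collector
    # The <statistics> element carries the reporting server's identity (src)
    # and a timestamp (tod); printing them is enough to see who reports.
    print(host, root.get("src"), root.get("tod"))
```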

WLCG transfer dashboard
Data flows from the UDP collector into ActiveMQ. Some sites are missing and some site labels are inconsistent; the data will need to be scrubbed starting from the UDP collector.
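The forwarding step is not shown in the slides; the following is a hedged sketch of the UDP-to-ActiveMQ hop using STOMP (stomp.py). Broker host, credentials, and topic name are placeholders rather than the production FDR endpoints, and the label-normalization line only gestures at the "inconsistent labels" scrubbing mentioned above.

```python
# Sketch of the UDP -> ActiveMQ hop: publish each parsed report to the
# broker as JSON over STOMP. All endpoint names here are hypothetical.
import json
import stomp  # pip install stomp.py

conn = stomp.Connection([("mq.example.org", 61613)])  # hypothetical broker
conn.connect("user", "pass", wait=True)

def publish(report: dict) -> None:
    # Normalize the site label before publishing, so downstream dashboards
    # see one consistent name per site (the scrubbing issue noted above).
    report["site"] = report.get("site", "").strip().upper()
    conn.send(destination="/topic/xrootd.fax.summary",  # hypothetical topic
              body=json.dumps(report),
              headers={"content-type": "application/json"})
```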

Testing elements, ordered by increasing complexity:
- Basic dashboard functionality (continuous)
- Cost matrix (continuous)
- HammerCloud & WAN-FDR jobs (programmatic)
- At-large users

Site Metrics
- "Connectivity": copy and read test matrices; snapshots per site as server (a sketch of one copy probe follows below)
- HC runs with modest job numbers: stage-in & direct read; local, nearby, and far-away sources
- HC metrics: simple job efficiency; wallclock, # files, CPU %, event rate
- Load tests: for well-functioning sites only; graduated tests of 50, 100, 200 jobs vs. various # files; the site and/or the list will be notified when these are launched
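As referenced in the "Connectivity" item, here is a sketch of a single cell of the copy-test matrix: timing an xrdcp stage-in of one test file from a local door, a regional redirector, and the global redirector. The hostnames and the file path are illustrative assumptions, not the actual FDR endpoints.

```python
# Time an xrdcp stage-in of a test file from local, regional, and global
# endpoints; one run of this fills one row fragment of the copy matrix.
import subprocess
import time

TEST_FILE = "/atlas/dq2/user/test/fdr_testfile.root"  # hypothetical test file
ENDPOINTS = {
    "local":    "root://xrootd.mwt2.org/",    # assumed local door
    "regional": "root://xrdus.usatlas.org/",  # assumed regional redirector
    "global":   "root://glrd.usatlas.org/",   # assumed global redirector
}

for scope, prefix in ENDPOINTS.items():
    t0 = time.time()
    # Copy to /dev/null: measures transfer time without keeping the data.
    rc = subprocess.call(["xrdcp", "-f", prefix + TEST_FILE, "/dev/null"])
    elapsed = time.time() - t0
    print(f"{scope:8s} rc={rc} copy_time={elapsed:.1f}s")
```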

Per-site metrics table (cells to be filled as tests run). Columns: Site; # client sites; local copy (sec); regional copy (sec, %LOC); global copy (sec, %LOC); HC job efficiency; HC CPU%; HC wallclock; HC event rate (Hz).
Sites: AGLT2, BNL, BU, CERN, DESY, INFN-FRASCATI, INFN-NAPOLI, INFN-ROMA, JINR, LRZ-LMU, MPPMU, MWT2, OU, PRAGUE, RAL, PROTVINO, SWT2_CPB, UKI-LT2-QMUL, UKI-LIV-HEP, UKI-ECDF, UKI-GLASGOW, UKI-OX, WT2.
Only the MWT2 entry for # client sites (13) is filled in so far.

Site copy connectivity tests
With MWT2 as the server, reads were attempted from 18 ANALY queues; MWT2 was accessible from 13 of the 18 queues tested.
Records for all sites: http://ivukotic.web.cern.ch/ivukotic/WAN/index.asp

Metrics (global)
- Number of sites and clouds participating
- Number of ANALY queues tested as FAX-capable
- Average and peak aggregate rate (MB/s; a small worked example follows below)
- Number of jobs from HC tests
- Number of jobs from WAN-FDR tests
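As a worked example of the aggregate-rate metric, the snippet below turns per-interval transfer volumes (e.g. bytes moved per minute, summed over all sites, as one might scrape from the dashboard) into average and peak rates in MB/s. The sample numbers are invented for illustration, not FDR measurements.

```python
# Derive average and peak aggregate rate (MB/s) from per-minute byte counts.
bytes_per_minute = [9.1e9, 14.3e9, 11.7e9, 20.2e9]  # hypothetical samples

rates_mbps = [b / 60 / 1e6 for b in bytes_per_minute]  # MB/s per interval
avg_rate = sum(rates_mbps) / len(rates_mbps)
peak_rate = max(rates_mbps)
print(f"average {avg_rate:.0f} MB/s, peak {peak_rate:.0f} MB/s")
```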

Still to do:
- Resolve pilot and wrapper issues; verify that the correct pilots are being sent
- Finish HammerCloud testing for direct access
- Place test datasets (files are at LRZ and MWT2 only)
- Finalize examples for at-large users
- Set up metrics tables and start gathering statistics