FAX meeting intro and news
Rob Gardner
Computation and Enrico Fermi Institutes, University of Chicago
ATLAS Federated Xrootd Integration, Deployment and Operations Meeting
November 19, 2012

2 WLCG groups
We have an ongoing set of meetings with two WLCG working groups:
– WLCG Federated Data Storage (F. Furano, chair)
  o Explores issues generally and assesses the approach taken by each of the LHC experiments
– WLCG Xrootd task force (D. Giordano)
  o New group forming to coordinate deployment and operations
  o Will seek to engage the Grid infrastructure provider groups (EGI/EMI, OSG) for support
Both of these will help bring FAX into normal production operations.

3 UK cloud update from Wahid
Testing federation at Glasgow: Paul and Peter are trying to switch to the latest DDM client from cvmfs in test pilots; I'm sure they will report elsewhere how that is going.
Many UK DPM sites are now on a newer DPM release, which will make it easier for them to deploy.
Glasgow is observing many orphaned ping processes from FAX tests. They block ping, and the build-up of processes causes problems.
The FAX door checker today noted problems at Oxford; we haven't yet checked whether that is related to other issues or is transient.
RAL: Shaun is working on enabling N2N and hitting some issues – see his message on the topic.

4 From Wei
A new xrootd release is available; testing at SLAC. No hurry to push it to sites, except SL6 sites (for GSI). Will work with the SL6 sites (LRZ and MPPMU) to enable GSI.
Wuppertal: Guenter and I are both concerned that their configuration (xrootd4j) is unique and requires extra support. If they still wish to use NFS v4.1 (instead of the dCache xroot door), we may want them to use the standard proxy for POSIX, which is at least similar to LRZ and MPPMU, if not identical.
Will test f-stream at SLAC before deployment (after the collector is able to handle it).
Update to N2N: Ilija and Hiro have an update to N2N. I want to add it along with a few other minor changes (unique to the C++ N2N, and too small to warrant a dedicated update).

5 Recapping the use cases
Initial use cases:
– Failover from stage-in problems with the local SE
  o Now implemented, in production at several sites
– Let's discuss this a bit more today for production queues and more sites (Simone, Paul)
– Gain access to more CPUs using WAN direct read access
  o Allow brokering to Tier 2s with partial datasets
  o Opportunistic resources without local ATLAS storage
– Use as a caching mechanism at sites to reduce local data management tasks
  o Eliminate cataloging, consistency checking, and deletion services

6 At-large use of FAX
Slides from Ilija

7 How to use it? Part I
Datasets should be registered
– All grid-produced datasets are registered automatically, whether they are part of official production or simply the result of a user's job.
– If files are not registered, it is trivial to do so; a very detailed description of how to do this is available.
Have your ATLAS grid certificate
– Make a proxy
– Set up DQ2
Make sure your code uses TTreeCache! (See the sketch after this slide.)
  source /afs/cern.ch/project/gd/LCG-share/current_3.2/etc/profile.d/grid_env.sh
  voms-proxy-init -voms atlas
  source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.zsh
The CVMFS version of this setup is on the next slide.
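Since the slide stresses TTreeCache, here is a minimal ROOT (C++) sketch of reading a single file over FAX with the cache enabled. It is only an illustration: the redirector host, file path, tree name "physics", and cache size are assumptions, not values from the talk.

  // Minimal sketch, not from the talk: open one file through a FAX redirector
  // and read its tree with TTreeCache enabled. Host, path, and the tree name
  // "physics" are hypothetical placeholders.
  #include "TFile.h"
  #include "TTree.h"

  void read_over_fax()
  {
     // 1094 is the default xrootd port; replace host and path with your own gLFN.
     TFile *f = TFile::Open("root://myRedirector:1094//atlas/dq2/mySample/myFile.root");
     if (!f || f->IsZombie()) return;

     TTree *t = 0;
     f->GetObject("physics", t);            // assumed tree name
     if (!t) return;

     t->SetCacheSize(30 * 1024 * 1024);     // 30 MB TTreeCache
     t->AddBranchToCache("*", kTRUE);       // cache all branches (or only those you read)

     Long64_t nEntries = t->GetEntries();
     for (Long64_t i = 0; i < nEntries; ++i)
        t->GetEntry(i);                     // baskets now arrive via large cached reads
  }

With the cache enabled, ROOT groups branch baskets into a few large vectored reads instead of many small round trips, which is what makes WAN direct access through the federation tolerable.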

8 CVMFS environment setup
Set up the environment:
  export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
  alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
  export ALRB_localConfigDir=$HOME/localConfig
  setupATLAS
Make a proxy:
  localSetupGLite
  voms-proxy-init -voms atlas
Set up DQ2:
  localSetupDQ2Client

9 How to use it? Part II
Check that the datasets exist at one of the federated sites:
  dq2-ls -r myDataSetName
Find the gLFNs of the input datasets
– Find the closest redirector to the compute site. The list is here:
– Make a file with the list of all the gLFNs:
  export STORAGEPREFIX=root://closestRedirector:port/
  dq2-list-files -p data12_8TeV physics_Muons.recon.DESD_ZMUMU.f437_m716_f437 > my_list_of_gLFNS.txt

10 How to use it? Part III
From ROOT:
  TFile *f = TFile::Open("root://myRedirector:port//atlas/dq2/user/ilijav/HCtest/user.ilijav.HCtest.1/group.test.hc.NTUP_SMWZ.root");
From prun: instead of giving the --inDS myDataset option, provide --pfnList my_list_of_gLFNS.txt
Copy files locally:
  xrdcp root://xrddc.mwt2.org:1096//atlas/dq2/user/ilijav/HCtest/user.ilijav.HCtest.1/group.test.hc.NTUP_SMWZ.root /tmp/myLocalCopy.root
(A sketch that loops over the whole gLFN list from ROOT follows below.)
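For ROOT users who want to run over the whole gLFN list from Part II rather than a single file, here is a minimal sketch. It assumes the my_list_of_gLFNS.txt file produced earlier and a hypothetical tree name "physics" inside the ntuples.

  // Minimal sketch, not from the talk: build a TChain from the gLFN list file
  // and read it over FAX. The tree name "physics" is an assumed placeholder.
  #include <fstream>
  #include <string>
  #include "TChain.h"

  void chain_glfns()
  {
     TChain chain("physics");                 // assumed tree name in the ntuples

     std::ifstream list("my_list_of_gLFNS.txt");
     std::string glfn;
     while (std::getline(list, glfn))
        if (!glfn.empty())
           chain.Add(glfn.c_str());           // each line is a full root://redirector//atlas/dq2/... URL

     chain.SetCacheSize(30 * 1024 * 1024);    // TTreeCache works on a TChain as well

     Long64_t nEntries = chain.GetEntries();  // opens the files through the federation
     for (Long64_t i = 0; i < nEntries; ++i)
        chain.GetEntry(i);
  }

The same list file can therefore serve both prun submission (--pfnList) and local interactive tests.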

11 FAX dress rehearsal
How this actually works in practice, with active users and real workloads, needs to be tested.
We would like to make a broad announcement
– Yet we risk losing users if problems block progress, create frustration, or waste time.
We need to recruit friendly early adopters willing to tolerate hiccups (there will be hiccups).
Recall the long-ago exercises to validate ANALY queues – we need something similar; IMHO it is the only way to know where we really are.
Proposal:
– Define an FDR sufficient to cover most anticipated user workloads, including user docs
– Define a rehearsal period (about a week) and metrics (and the needed monitoring)
– Poll for site volunteers and an ad-hoc FAX OPS team
– Execute the FDR; gather monitoring statistics and accounting data

12 Tier 1/2/3 Jamboree
Monday–Tuesday, December
Agenda:
FAX deployment splinter group
– Tentative: Tuesday 9 am, E01
– Sign up if you'll attend