Post-DC2/Rome Production
Kaushik De, Mark Sosebee
University of Texas at Arlington
U.S. Grid Phone Meeting, July 13, 2005

Overview
- Why restart managed production?
- ~450 people attended the Rome meeting, with ~100 talks, many based on DC2/Rome data (though some still used DC1 or ATLFAST data).
- A growing number of physicists are looking at data every day.
- Since Rome, many physics groups need data urgently and are starting private production because "the grid is not available".
- SUSY needs a background sample, top needs a large dataset, Higgs...
- Private production of common data samples is wasteful: many samples are repeated, never used, or contain mistakes.
- Private production is not the correct model for 2008; we do not have infinite computing resources, and we will need quality control.
- The grid needs to be available to ATLAS, just like the components of the detector or the core software, as a service to the collaboration.

Production Proposal
- The plan is being developed jointly with physics coordination, software releases, and prodsys development.
- Ian Hinchliffe & Davide Costanzo presented the physics plan on June 21st.
- KD is organizing distributed production for all of ATLAS; this talk was presented during the phone meeting on July 11th.
- General assumptions about the grid:
  - Production needs to help with testing of the new prodsys.
  - Production must allow for the shutdowns required to upgrade middleware (e.g. OSG, ARC, LCG) and services (e.g. RLS, DQ, DB).
- First proposal:
  - Restart low-level production in mid-July.
  - July/August: validate the release (first on OSG, later NG/LCG).
  - September: test the new prodsys.
  - October: 1M-event production to provide data for the Physics Workshop.

July 2005 Proposal
- Finish up the Rome pile-up sample on LCG.
- Archive/clean up files on all grids, after checking with the physics groups and making a general announcement with 2 weeks' notice:
  - Archive and delete all DC1 and DC2 files?
  - Archive and delete Rome simul files?
- Upgrade middleware/fabric:
  - OSG: started end of June, ready by mid-July (Yuri Smirnov is already testing U.S. sites with the Rome top sample).
  - ARC? LCG?
- Prodsys/grid testing has started:
  - Do validation.
  - Xin Zhao has started installations on OSG.
  - Pavel Nevski is defining jobs.

August 2005 Proposal
- Start production of some urgent physics samples:
  - Use DC2 software.
  - Get the list from the physics groups.
- Validation of 10.0.x.
- Stage in the input files needed for September.
- Prodsys integration and scaling tests.
- Grid infrastructure tests of the new fabric (top sample).
- RTT set-up and testing (for nightly validation).
- DDM testing.
- Complete deployment of 10.0.x on all grids.

Production Samples
- The physics groups will define 3 major samples:
  - Sample A: for quick validation or debugging of software; scale 10^5 events, 15 datasets.
  - Sample B: validation of major releases; scale 10^6 events, 25 datasets.
  - Sample C: production sample; scale 10^7 events (same as DC2 or Rome), 50 datasets.
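
The slide only characterizes the three samples by rough event and dataset counts. As a back-of-envelope illustration (mine, not part of the original proposal; the dict layout and names are invented, not an actual prodsys configuration), the average events per dataset implied by those numbers can be tabulated in a few lines of Python:

```python
# Rough sample parameters taken from the slide above; purely illustrative.
samples = {
    "A": {"events": 10**5, "datasets": 15},  # quick validation / debugging
    "B": {"events": 10**6, "datasets": 25},  # validation of major releases
    "C": {"events": 10**7, "datasets": 50},  # production sample (DC2/Rome scale)
}

for name, s in samples.items():
    avg = s["events"] / s["datasets"]
    print(f"Sample {name}: {s['events']:>10,} events in {s['datasets']} datasets "
          f"(~{avg:,.0f} events/dataset)")
```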

July 11, 2005 Kaushik De 7 September - Validation  Sep , 2005  Goal: test production readiness (validate prodsys)  Use sample A (throw-away sample) with Rome release  Start RTT nightly tests with 10^4 events  Start grid & prodsys tests with 10^5 events  Steps: evgen, simul, digi and reco  Scale: 2000 jobs (50 events each) in 2 weeks (<1% of Rome rate)  Sep. 26-Oct. 3, 2005  Goal: test production readiness (validate new release)  Use sample A with release , same steps  Grid, prodsys & DDM (pre-production) tests with 10^5 events  Scale: 2000 jobs in 1 week (~2% of Rome rate)
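
As a quick arithmetic check of the scale quoted above (my own illustration, not from the slides): 2000 jobs at 50 events each is exactly the 10^5-event Sample A, and spreading those jobs over the stated periods gives the implied daily throughput:

```python
# Back-of-envelope throughput for the September validation periods,
# using only the job counts and events-per-job quoted on the slide.
def throughput(jobs, events_per_job, days):
    events = jobs * events_per_job
    return events, jobs / days, events / days

# Events/job for the second period is not restated on the slide;
# 50 is assumed, as in the first period.
for label, jobs, epj, days in [
    ("First period (2 weeks)", 2000, 50, 14),
    ("Second period (1 week)", 2000, 50, 7),
]:
    events, jobs_per_day, ev_per_day = throughput(jobs, epj, days)
    print(f"{label}: {events:,} events total, "
          f"~{jobs_per_day:.0f} jobs/day, ~{ev_per_day:,.0f} events/day")
```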

Computing Commissioning Proposal
- Oct. 3-17, 2005:
  - Goal: production for the Physics Workshop scheduled for the end of October.
  - Prodsys: ready? Rod?
  - DDM: ready? Miguel? Contingency plan?
  - Use Sample B with release 10.0.x.
  - Steps: evgen, simul, digi, reco, tag (concatenation).
  - Scale: 10,000 jobs with 100 events each in 2 weeks (~10-20% of the peak Rome rate; the sample size is ~15% of the total Rome sample).
  - Note: this is the end of the line for release 10.
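
Again as an illustrative calculation of mine (not from the original slide), the October commissioning scale works out as follows:

```python
# October commissioning scale from the slide: 10,000 jobs x 100 events in 2 weeks.
jobs, events_per_job, days = 10_000, 100, 14
events = jobs * events_per_job            # 1,000,000 events = the 10^6-event Sample B
print(f"{events:,} events over {days} days "
      f"-> ~{jobs/days:.0f} jobs/day, ~{events/days:,.0f} events/day")
# Compared with the September test (2,000 jobs, 10^5 events in 2 weeks),
# this is a 5x increase in jobs and a 10x increase in event volume.
```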

Release 11 CSC Proposal
- October/November:
  - RTT run every night to discover problems (Davide?).
  - 1 week after any bug-fix release, run Sample A on the grid: scale 1000 jobs (100 events each), all steps if possible, typically in 2-5 days.
  - 1 week after any stable release, run Sample B on the grid: scale 10k jobs, all steps, typically in 1-2 weeks (this is still <10% of the peak Rome production rate).
- November/December:
  - Goal: generate the background sample for the blind challenge and test prodsys.
  - Use Sample C with a stable/tested release 11.0.x.
  - Steps: evgen, simul, digi, reco, tag (concatenation).
  - Scale: 100k jobs with 100 events each (should exceed the Rome rate; the sample size is approximately the same as Rome).
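
To put the November/December target in perspective (again my own arithmetic, not the slide's): 100k jobs at 100 events each reproduces the full 10^7-event Sample C, and fitting that into a roughly two-month window implies a sustained rate well above the October commissioning level:

```python
# Sample C production scale from the slide: 100k jobs x 100 events each.
jobs, events_per_job = 100_000, 100
print(f"Total: {jobs * events_per_job:,} events")  # 10,000,000 = the 10^7-event Sample C

# Assuming the full ~2-month (Nov/Dec) window is available:
window_days = 61
print(f"Required rate: ~{jobs / window_days:,.0f} jobs/day, "
      f"~{jobs * events_per_job / window_days:,.0f} events/day")
# For comparison, the October commissioning plan works out to ~714 jobs/day,
# so this run needs more than double that sustained rate.
```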

CSC Proposal
- Early 2006:
  - Goal: blind challenge (physics algorithm tune-up, test of the analysis model).
  - Mix background data with blind samples (not done on the grid, to protect the content; possibly run at a few Tier 2 sites).
  - Continue to run Samples A and B on the grid for every release.
  - The calibration/alignment test with Release 12 requires Sample C-scale production (equivalent to Rome).

Appendix: Definition of Sample A
- ~100k events, 10k per sample:
  1. Min Bias
  2. Z->ee
  3. Z->mumu
  4. Z->tautau, forced to large pT so that the missing ET is large
  5. H->gamgam (130 GeV)
  6. b-tagging sample 1
  7. b-tagging sample 2
  8. top
  9. J1
  10. J4
  11. J8

Appendix: Definition of Samples B, C
- Sample B:
  - 1M events, at least 25k per sample.
  - Includes all the sets from Sample A, plus additional physics samples.
- Sample C:
  - 10M events, at least 100k per sample.
  - Includes all the sets from Sample A, at 500k events each.
  - Additional physics samples.
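
A rough consistency check of these appendix numbers (my own arithmetic, not from the slides): the 11 Sample A datasets at 10k events each give ~110k events, consistent with the "~100k events" headline, and in Sample C the same 11 sets at 500k events each account for ~5.5M of the 10M events, leaving roughly 4.5M events for the additional physics samples:

```python
# Composition arithmetic for Samples A and C, using the counts on these slides.
n_sample_a_sets = 11            # the 11 datasets listed in the Sample A appendix
events_a = n_sample_a_sets * 10_000
print(f"Sample A: {events_a:,} events (headline figure: ~100k)")

events_c_total = 10_000_000
events_c_from_a = n_sample_a_sets * 500_000   # Sample A sets repeated at 500k each
print(f"Sample C: {events_c_from_a:,} events from Sample A sets, "
      f"{events_c_total - events_c_from_a:,} left for additional physics samples")
```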