ATLAS Computing Requirements for 2007 and beyond
LHCC, 19 March 2007

History

- ATLAS computing requirements were first worked out in …, in preparation for the Computing TDR and the Computing Addendum to the M&O MoU.
- Given the shift in the LHC start-up schedule, we revised all inputs to the Computing Model in Summer-Autumn 2006.
  - The current numbers are based on the results of that revision.
- On the same timescale, we contributed to building the "megatable" of necessary bandwidths between all Tier-1s and Tier-2s.
  - Some of the megatable inputs are constrained by unbalanced disk/CPU pledges from a few Tier-2s and therefore cannot, by definition, match our top-down requirements.
  - For the other resources, the megatable illustrates what happens if the imbalanced Tier-2s are used as pledged: how they would load the storage at the Tier-1s, and what the resulting Tier-1 to Tier-2 traffic should then be.

ATLAS Requirements for 2007

                    CPU (MSI2k)   Disk (PB)   Tape (PB)
  Tier-0                 …            …           …
  CAF                    …            …           …
  Sum of Tier-1s         …            …           …
  Sum of Tier-2s         …            …           …

ATLAS Requirements for 2008

                    CPU (MSI2k)   Disk (PB)   Tape (PB)
  Tier-0                 …            …           …
  CAF                    …            …           …
  Sum of Tier-1s         …            …           …
  Sum of Tier-2s         …            …           …

Evolution

[Charts of the evolution of the resource requirements; not reproduced in this transcript.]

Ratio to Computing TDR (June 2005)

[Ratios not reproduced in this transcript.]

NB: there was a mistake in the Tier-0/CAF CPU requirements for 2007 in the Computing TDR (already corrected in 2005): the CPU needed for calibrations does not scale with the length of the data-taking period (see the sketch below).
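To make the scaling argument explicit, here is a minimal sketch in our own notation (the symbols below are ours, not the TDR's): the Tier-0/CAF CPU requirement splits into a rate-driven reconstruction term and a calibration term.

```latex
% Sketch only; symbols are ours, not from the Computing TDR.
% R       = trigger rate (Hz)
% t_rec   = per-event first-pass reconstruction cost (kSI2k * s / event)
% C_calib = CPU capacity reserved for calibration work
\[
  C_{\mathrm{T0/CAF}} \;\simeq\; R\,t_{\mathrm{rec}} + C_{\mathrm{calib}}
\]
```

Neither term shrinks when the run is shortened: the first is fixed by the rate at which events must be reconstructed, the second by the calibration programme. The TDR error was, in effect, to scale the second term with the length of the 2007 run.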

Observations on computing resources

- Data storage requirements generally fall with reduced live-time (obviously).
- CPU does not fall as much:
  - CERN CPU is determined by rate and calibration requirements.
  - More calibration and optimisation work is needed for 2007 data.
- Simulation time per event is higher than hoped.
- Tier-1s see significant reductions with respect to the earlier (C-TDR) estimates:
  - the cumulative effect of less data on reprocessing.
- Tier-2s see a small initial fall but are bigger after 2009.
- There is an argument for spreading the gain and the pain with Tier-1s by introducing more flexibility into the model:
  - Tier-1s can now produce simulated data when not fully busy with reprocessing.
- Resources to be provided in 2007 are based on the original agreement on the ramp-up slope (30% in 2006 and 70% in 2007, relative to 2008; see the illustrative sketch below):
  - the resources we actually had available in 2006 were <<30% of the nominal 2008 capacities…
  - …and the known acquisition plans of many Tier-1 centres fall far short of 70% of the 2008 capacities.
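A hedged illustration of that ramp-up arithmetic (the 2008 capacity used here is a placeholder, not a real pledge):

```python
# Illustrative arithmetic for the agreed ramp-up slope quoted above:
# 30% in 2006 and 70% in 2007, relative to the 2008 requirement.
# The nominal capacity below is hypothetical, for illustration only.

RAMP = {2006: 0.30, 2007: 0.70, 2008: 1.00}

def required(nominal_2008_msi2k: float, year: int) -> float:
    """CPU a site should have installed in `year` under the agreed slope."""
    return RAMP[year] * nominal_2008_msi2k

nominal = 2.0  # hypothetical Tier-1 2008 CPU requirement, in MSI2k
for year in sorted(RAMP):
    print(f"{year}: {required(nominal, year):.2f} MSI2k "
          f"({RAMP[year]:.0%} of 2008)")
# The slide's complaint: actual 2006 installations sat well below the 30% line.
```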

Megatable rate inputs

- Disk space is the resource that limits our total computing capacity right now.
  - This is true for both Tier-1s and Tier-2s.
- We distribute RAW and first-pass processed ESD data to Tier-1s approximately in proportion to their pledged disk capacities (see the sketch after this list):
  - after subtracting the disk space needed for AODs (100 kB/event);
  - BNL in addition receives a full set of ESD data.
- Our data distribution model requires a coupling between Tier-1s:
  - we keep 2 copies of the most recent version of the ESD data on disk (and one on tape at the production site);
  - reprocessed ESDs are also exchanged between paired Tier-1s of similar capacity: BNL ↔ IN2P3CC+FZK, NIKHEF/SARA ↔ ASGC+TRIUMF+RAL, CNAF ↔ RAL, PIC ↔ NDGF.
- The Tier-2s receive from "their" Tier-1 a fraction of the AOD (and smaller fractions of ESD and RAW) compatible with their disk size:
  - the Tier-2s in European countries with a Tier-1 collectively hold a full set of AOD;
  - some "large" Tier-2s also hold a full set of AOD.
- CPU capacity at Tier-2s is split between analysis (proportional to the data they hold on disk) and simulation:
  - simulation output is transferred to Tier-1s that have storage capacity for it.
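As promised above, a minimal sketch (not ATLAS production code) of the disk-proportional share computation: RAW plus first-pass ESD is shared among Tier-1s according to pledged disk, after reserving space for a full AOD set at 100 kB/event as on the slide. The event count and the pledge numbers are invented for illustration.

```python
# Hypothetical inputs: only the 100 kB/event AOD size comes from the slide.
EVENTS = 2.0e9                   # assumed events per year (illustration)
AOD_PB = EVENTS * 100e3 / 1e15   # 100 kB/event -> 0.2 PB for a full AOD set

# Hypothetical pledged disk per Tier-1, in PB
pledged = {"BNL": 4.0, "IN2P3CC": 2.0, "FZK": 2.0,
           "RAL": 1.5, "CNAF": 1.2, "PIC": 0.8, "NDGF": 0.8}

def raw_esd_shares(pledges, reserved_pb):
    """Fraction of the RAW + first-pass ESD volume each Tier-1 hosts,
    proportional to the disk left over after the AOD reservation."""
    usable = {t1: max(disk - reserved_pb, 0.0) for t1, disk in pledges.items()}
    total = sum(usable.values())
    return {t1: space / total for t1, space in usable.items()}

for t1, share in sorted(raw_esd_shares(pledged, AOD_PB).items()):
    print(f"{t1:8s} {share:6.1%}")
```

Note that BNL's additional full ESD copy and the paired-Tier-1 ESD exchange described above are deliberately not modelled here; the sketch covers only the proportional sharing step.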

Reality check

- We always compare our requirements with the pledges in the RRB tables.
- For …, the sum of the pledges approximately matches our requirements.
  - Later on there may be a problem, especially with storage space.
- The pledges for 2007 were supposed to be made available on 1st July:
  - but many computing centres have capacities several factors lower right now, with not enough purchasing actions in the pipeline;
  - about half of the 2007 capacity should have been available in 2006, but this did not happen.
- The result is that disks are full at most Tier-1s, and we have had to reduce the production rates.
- We need people to take the MoU pledges as the basis of their investment plans, as we rely on these pledges for our own planning.

Summary: 2007 timeline

- Running continuously throughout the year (with increasing rates):
  - simulation production;
  - cosmic-ray data-taking (detector commissioning).
- January to June: data streaming tests.
- March through May: intensive Tier-0 tests.
- From February onwards: data distribution tests.
- From March onwards: distributed analysis (intensive tests).
- May to July: Calibration Data Challenge.
- June to October: Full Dress Rehearsal.
- November: GO!