NA62 computing resources update
Paolo Valente – INFN Roma
NA62 collaboration meeting, Liverpool, 25-30 Aug. 2013

Resources for the running
Define requirements for each data center in terms of:
- Connection speed (in-bound, out-bound)
- CPU power
- Tape space and reading/writing speed
- Disk space (and speed)
[Diagram: a generic data center with its tape, disk and CPU resources]
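A minimal sketch, not part of the original slides, of how such per-site requirements could be tabulated; the field names are illustrative, and the example values are taken from the estimates on the later slides (no out-bound rate is quoted anywhere, so it is left unset).

```python
from dataclasses import dataclass

@dataclass
class SiteRequirements:
    """One record per data center; the field names are illustrative, not an agreed schema."""
    name: str
    inbound_MB_s: float    # sustained in-bound connection speed
    outbound_MB_s: float   # sustained out-bound connection speed
    cpu_kHS06: float       # CPU power
    tape_PB: float         # tape space (read/write speed would be tracked separately)
    disk_PB: float         # disk space

# Example filled with the CERN-PROD numbers quoted on the later slides;
# the out-bound figure is left unset, since only the 150 MB/s input is quoted.
cern_prod = SiteRequirements("CERN-PROD", inbound_MB_s=150.0, outbound_MB_s=float("nan"),
                             cpu_kHS06=10.0, tape_PB=2.0, disk_PB=1.0)
```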

NA62-FARM resources
[Diagram: data flows from ECN3 into the L1/L2 farm and farm storage at 150 MB/s, then from NA62 to CERN-IT]
Disk:
- 48 h cache for RAW [30 TB]
- Calibration streams
- Services for the online
- Databases
CPU: at present, limited by:
- Network ports
- Rack power limit: 10 kW
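As a cross-check not shown on the slide, the 30 TB sizing of the 48 h RAW cache is consistent with the quoted 150 MB/s input rate:

```python
# Back-of-envelope check (not from the slide): size of a 48 h RAW buffer
# filled at the quoted 150 MB/s, ignoring duty-cycle and compression effects.
rate_MB_s = 150.0
hours = 48
cache_TB = rate_MB_s * 3600 * hours / 1e6   # MB -> TB, decimal units
print(f"48 h at {rate_MB_s:.0f} MB/s ≈ {cache_TB:.0f} TB")  # ≈ 26 TB, i.e. ~30 TB with some headroom
```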

NA62-FARM networking
[Diagram: NA62-FARM network layout]

CERN-PROD resources
- Requirements depend essentially on a single set of parameters: the share between CERN and the other Tier-1's in terms of RAW and RECO:
  - LHC original model: CERN:outside = 1:2, i.e. CERN:ΣT1:ΣT2 = 1:1:1
- Proposal is to keep in any case 100% of RAW at CERN (custodial)
- Assume 2 Tier-1 centers
Tapes:
- 100% of RAW + ⅓×(RECO+THIN) = 2 PB
Disk pool:
- Disk cache for file distribution
- Reprocessing on 33% of RAW
- Physics monitoring / data quality / fast analysis
- Calibrations
- Total = 1 PB
CPU = 10 kHS06
[Diagram: 150 MB/s from NA62 into the CERN-IT disk pool and tapes, with distribution to the institutes]
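For orientation only (this derivation is not on the slide): combining the 2 PB CERN tape figure with the 1 PB per-Tier-1 figure on the next slide lets one back out the yearly RAW and RECO+THIN volumes these estimates imply.

```python
# The two tape figures (2 PB at CERN, 1 PB per T1) imply, per year of data:
#   RAW + (RECO+THIN)/3 = 2 PB     (CERN:   100% of RAW + 1/3 of RECO+THIN)
#   (RAW + RECO+THIN)/3 = 1 PB     (per T1: 1/3 of everything)
# Solving the 2x2 system by hand:
total_PB = 3.0                           # second equation: RAW + RECO+THIN = 3 PB
raw_PB = (2.0 - total_PB / 3) * 3 / 2    # = 1.5 PB of RAW per year
reco_thin_PB = total_PB - raw_PB         # = 1.5 PB of RECO+THIN per year
print(raw_PB, reco_thin_PB)              # 1.5 1.5
```

The resulting ~1.5 PB/year of RAW is also roughly consistent with the tape-cost slide further down: 0.04 Euro/GB and 50 kEuro/year for RAW alone correspond to about 1.25 PB.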

Tier-1 resources
- Requirements depend essentially on a single set of parameters: the share between CERN and the other Tier-1's in terms of RAW and RECO:
  - LHC original model: CERN:outside = 1:2, i.e. CERN:ΣT1:ΣT2 = 1:1:1
- Proposal is to keep in any case 100% of RAW at CERN (custodial)
- Assume 2 Tier-1 centers:
  - RAW: CERN:T1A:T1B = 100%:33%:33%
  - RECO/THIN: CERN:T1A:T1B = 33%:33%:33%
Per T1 center:
Tapes:
- 33%×(RAW+RECO+THIN) = 1 PB
Disk:
- Reprocessing on 33% of RAW = 500 TB
CPU = 5 kHS06 (slightly more to speed up reprocessing)
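A short sketch applying the per-T1 shares above to the inferred yearly volumes; the 1.5 PB inputs are an assumption carried over from the previous sketch, not numbers stated on this slide.

```python
# Per-T1 figures from the shares on this slide, using the ~1.5 PB RAW and
# ~1.5 PB RECO+THIN yearly volumes inferred previously (an assumption).
raw_PB, reco_thin_PB = 1.5, 1.5
t1_share = 0.33

t1_tape_PB = t1_share * (raw_PB + reco_thin_PB)   # 33% of RAW+RECO+THIN kept on tape
t1_disk_PB = t1_share * raw_PB                    # staging disk to reprocess 33% of RAW
print(f"per T1: tape ≈ {t1_tape_PB:.1f} PB, disk ≈ {t1_disk_PB * 1000:.0f} TB")
# -> tape ≈ 1.0 PB, disk ≈ 495 TB, matching the 1 PB / 500 TB quoted on the slide
```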

Tier-2 resources
- Used for analysis and Monte Carlo production
- Analysis:
  - Requirements vary according to the analysis to be performed
  - Total size of THIN files for one dataset (one year of data taking)
  - Disk: of order TB, plus ntuples, output, etc.
  - CPU: assume at least a factor 1:50 with respect to reconstruction, but assuming 50 jobs for each file
- Monte Carlo:
  - Take 10⁹ events/year, scaling the last production:
    - 112M events (mixed)
    - 30 TB
    - kHS06
ΣT2 centers (totals):
Disk:
- Analysis = 250 TB
- Monte Carlo = 250 TB
CPU:
- Analysis = 10 kHS06
- Monte Carlo = 2 kHS06
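The 250 TB Monte Carlo disk line follows from a linear scaling of the last production; the sketch below reproduces that arithmetic. The CPU number would be scaled the same way, but the reference kHS06 figure is missing from the transcript, so it is not reproduced here.

```python
# Linear scaling of the Monte Carlo disk estimate from the last production
# (112M mixed events -> 30 TB) to the assumed 1e9 events/year.
ref_events, ref_disk_TB = 112e6, 30.0
target_events = 1e9

mc_disk_TB = ref_disk_TB * target_events / ref_events
print(f"MC disk for 1e9 events ≈ {mc_disk_TB:.0f} TB")  # ≈ 268 TB, quoted as 250 TB above
```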

Full cost estimates
Costs for 2013, to be scaled to the day of purchase and to be negotiated. Excluding NA62-FARM resources.
- Tapes
  - 0.04 Euro/GB: price down by a factor 2.5 in 3 years…
  - 50 kEuro/year for RAW data only + approximately the same amount for RECO files
  - Total 100 kEuro/year for the custodial copy of RAW
  - +100% of this amount for T1's
- Disks
  - 1 PB ≈ 300 kEuro
  - EOS option to be considered for CERN-PROD
  - +100% for T1's
  - + T2's resources
- CPU
  - Many choices: Intel multi-core platforms, GPUs, "micro-servers", integrated CPU+GPU, …
  - Assume 10 kEuro/kHS06, 100 kEuro for processing
  - +50% for T1's reprocessing
  - Add 100% for T2's analysis and Monte Carlo
- Coarse estimate of computing costs/year during running: 300 kEuro/year × 3 years for T0 only
  - Add 50% for T1 resources
  - Add the cost of T2 resources: dominated by CPU [analysis and Monte Carlo] + some disk
- Possibilities to reduce this cost:
  - Tapes can be reduced only with L3 filtering (f-factor) and/or deletion of RECO versions
  - Disk can be reduced with slower reprocessing / harder data access
  - CPU power seems hardly reducible, and the CPU power estimate is the least solid
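For readability, the T0 line items recomputed from the quoted unit prices; the RAW volume used is back-solved from the 50 kEuro/year figure and is therefore an inference, not a number stated on the slide.

```python
# Line items of the T0 estimate recomputed from the unit prices quoted above.
tape_eur_per_GB    = 0.04
disk_keur_per_PB   = 300.0
cpu_keur_per_kHS06 = 10.0

raw_PB_per_year = 50e3 / tape_eur_per_GB / 1e6                           # ≈ 1.25 PB of RAW per year
tape_keur_per_year = 2 * tape_eur_per_GB * raw_PB_per_year * 1e6 / 1e3   # RAW + ~same for RECO ≈ 100
disk_keur = disk_keur_per_PB * 1.0                                       # 1 PB disk pool ≈ 300 kEuro
cpu_keur  = cpu_keur_per_kHS06 * 10.0                                    # 10 kHS06 ≈ 100 kEuro

print(f"{tape_keur_per_year:.0f} {disk_keur:.0f} {cpu_keur:.0f} kEuro")  # 100 300 100 kEuro
```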

Is it reasonable? [Compare to ATLAS] / 1
ATLAS 2012:
- ×5 wrt what we expect
- ESD getting larger than RAW
- AOD, DPD ≈ 1/10
ATLAS 2012:
- ≈8000 jobs
- If we have one job per burst, expect ≈3000 jobs/day for crunching 1/5 of the data [OK, different data structure, different reconstruction…]
- Very roughly, need ½ of the CPU

Is it reasonable? [Compare to ATLAS] / 2
ATLAS 2012:
- More or less the same data to T1's by storage type:
  - 4 PB to disk / 13 PB to tape