The LHCb Italian Tier-2 Domenico Galli, Bologna INFN CSN1 Roma, 10.10.2005.



The LHCb Italian Tier-2. 2 Domenico Galli Aim of LHCb Tier-2s In the LHCb Computing Model, Tier-2s are used (at least in countries that also host a Tier-1) purely as centrally scheduled Monte Carlo simulation facilities; analysis is not performed at Tier-2s. An LHCb Tier-2 consists of a PC farm with a small disk buffer (3 TB in steady state) used as a temporary cache until the data are transferred to a Tier-1. LHCb-Italy proposes to build one Tier-2 hosted at Bologna-CNAF, co-located with the Italian Tier-1.

Why LHCb Analysis at Tier-1? LHCb analysis jobs select events from the stripped DSTs stored at the Tier-1 to focus on one particular analysis channel. A typical analysis job runs on a sample of ~10^6 events; some analysis jobs run on larger samples of ~10^7 events. The average event reduction is a factor of 5. The analysis input is completely stored at each Tier-1, while the analysis output (20–200 GB) can be processed by a small Tier-3 facility. [Slide diagram: Physics Analysis → Local Analysis → Paper. Available at each Tier-1: selected DST+RAW (119 TB) and Event Tag Collection (20 TB); output: n-tuple / User DST + User TAG, typically 20 GB, up to 200 GB.]
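The event-reduction arithmetic above can be sketched in a few lines; the event counts and the factor-5 reduction come from the slide, while the helper name is of course just illustrative:

```python
# Back-of-the-envelope estimate of how many events survive the selection
# in an LHCb analysis job. The sample sizes (~10^6 typical, ~10^7 large)
# and the average factor-5 reduction are the figures quoted on the slide.

REDUCTION_FACTOR = 5  # average event reduction from the slide

def selected_events(input_events: int, reduction: int = REDUCTION_FACTOR) -> int:
    """Number of events surviving the selection."""
    return input_events // reduction

typical = selected_events(10**6)  # typical analysis job
large = selected_events(10**7)    # large analysis job

print(typical)  # 200000
print(large)    # 2000000
```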

Why LHCb Analysis at Tier-1? (II) Comparison between two models: running the analysis job at a Tier-2 versus running it at the Tier-1. The data accessed at the Tier-1 per analysis job are the same in both cases, but the second model is faster and less expensive in terms of hardware, infrastructure, staff resources and WAN load. [Slide diagram: in the Tier-2 model, the 139 TB of DST+RAW+TAG are shipped from the Tier-1 over the WAN into a Tier-2 buffer before processing; in the Tier-1 model, the same 139 TB are read over the LAN and the output (tens of GB) is produced locally.]
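The WAN-load argument can be made concrete with a small sketch. The 139 TB sample size is from the slide; the WAN and LAN bandwidths below are assumed illustrative values, not figures from the talk:

```python
# Rough comparison of the two analysis models: moving the 139 TB
# DST+RAW+TAG sample over the WAN (Tier-2 model) vs reading it over
# the LAN (Tier-1 model). Bandwidths are assumed, illustrative values.

SAMPLE_TB = 139    # DST+RAW+TAG sample size from the slide
WAN_GBPS = 1.0     # assumed WAN bandwidth, Gbit/s
LAN_GBPS = 10.0    # assumed LAN bandwidth, Gbit/s

def transfer_days(size_tb: float, gbps: float) -> float:
    """Days needed to move size_tb terabytes at gbps Gbit/s (decimal TB)."""
    bits = size_tb * 1e12 * 8          # TB -> bits
    seconds = bits / (gbps * 1e9)      # Gbit/s -> bit/s
    return seconds / 86400

wan_days = transfer_days(SAMPLE_TB, WAN_GBPS)
lan_days = transfer_days(SAMPLE_TB, LAN_GBPS)
print(f"WAN: {wan_days:.1f} days, LAN: {lan_days:.1f} days")
```

Even with generous assumptions, the WAN path is an order of magnitude slower than serving the same data over the Tier-1 LAN, which is the point the slide is making.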

Why Tier-2 at CNAF? This model allows maximum flexibility in moving resources back and forth between the Tier-2 and the Tier-1 to optimize resource exploitation (e.g. to satisfy peak MC or analysis demand), simply by means of a software operation. There is no competition among Italian sites for Tier-2 resources, since MC production is centrally scheduled. Moreover, the Italian LHCb people involved in computing are mainly located in Bologna.

Tier-2 at CNAF Given the growing resource profile, the LHCb Tier-2 is at most a +10% perturbation of the CNAF Tier-1 resources: same building, same cooling system, same fire alarm and surveillance, same network, same electric power and UPS, etc., and the same trained and skilled management and technical staff. The LHCb experiment staff from Bologna are already heavily involved in LHCb computing and have established a close collaboration with the CNAF staff.

LHCb Tier-2 Exploitation in the Next Years From now on, an almost continuous MC production is foreseen for LHCb, mainly for physics studies and HLT studies, of the order of hundreds of Mevents/year.

LHCb Tier-2 Size and Cost We are sizing the Italian Tier-2 to be 15% of the total Tier-2 resources of the whole LHCb collaboration: 15% is the Italian fraction of CORE funds, and 15% is the fraction of Italian physicists with respect to the total involved in the LHCb experiment.

LHCb Tier-2 Additional Size and Cost (according to LHCb Computing Model) Strictly according to the current LHCb Computing Model. [Slide table, per-year breakdown; rows: CPU unit cost [€/Si2k], Disk unit cost [€/GiB], CPU running [MSi2k], CPU boxes running, Disk running [TiB], CPU replacement [MSi2k] (0.34), Disk replacement [TiB] (1), CPU to be acquired [MSi2k], Disk to be acquired [TiB], CPU cost [k€], Disk cost [k€], Total cost [k€]. Most numeric values did not survive transcription.]

The LHCb Italian Tier-2. 10 Domenico Galli LHCb Tier-2 Additional Infrastructures [Slide table, per-year breakdown; rows: CPU [MSi2k], Disk [TiB], Electric power [kW], number of PC boxes, number of racks, power+cooling [kW]. Most numeric values did not survive transcription.] Conversion factors: 1 kSi2k → 110 W; 1 TiB → 70 W.
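The conversion factors on the slide (1 kSi2k → 110 W, 1 TiB → 70 W) translate directly into a small power-budget helper. The example figures passed in below are illustrative only, since the slide's own table values were lost:

```python
# Electric-power budget from the slide's conversion factors:
#   1 kSi2k of CPU -> 110 W,  1 TiB of disk -> 70 W.

W_PER_KSI2K = 110.0
W_PER_TIB = 70.0

def power_kw(cpu_msi2k: float, disk_tib: float) -> float:
    """Electric power in kW for a farm of cpu_msi2k MSi2k and disk_tib TiB."""
    cpu_w = cpu_msi2k * 1000.0 * W_PER_KSI2K  # MSi2k -> kSi2k -> W
    disk_w = disk_tib * W_PER_TIB
    return (cpu_w + disk_w) / 1000.0

# Illustrative figures (not the slide's table values):
print(power_kw(0.34, 3))  # 0.34 MSi2k of CPU + 3 TiB of disk -> 37.61 kW
```

Note that the CPU term dominates: at these factors the disk contributes well under 1 kW per 10 TiB, so the cooling requirement scales essentially with installed CPU capacity.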

The LHCb Italian Tier-2. 11 Domenico Galli LHCb Requests for k€: Tier-2 resources (95 dual-processor boxes + 1 TiB of disk). Since the resources are allocated at CNAF, resource management can be flexible: CPUs can be moved from Tier-1 queues to Tier-2 queues and back by software operations alone. The Tier-2, however, has to be logically separated from the Tier-1 (e.g. by different batch queues).