The LHCb Computing Data Challenge DC06

DC06 Aims
- Produce data for the LHCb physics book
- Realistically test the LHCb Computing Model
- Challenge DIRAC and the LCG production service

Jobs
- Monte Carlo simulation on all available sites
- Reconstruction and stripping at the Tier-1s
- User analysis, mostly at the Tier-1s
- All jobs submitted to the DIRAC WMS, using the WLCG infrastructure and middleware

Data Storage
- Real RAW data on tape at CERN + 1 Tier-1
- Reduced DST on tape at a Tier-1
- Full DST on disk at all Tier-1s

DC06 Resources and data distribution

Resources required
- Bandwidth of 10 MB/s from CERN to the Tier-1s, with FTS used to distribute the data
- 1 MB/s from each Tier-2 to its associated Tier-1 Storage Element
- Disk space of 80 TB at CERN and at every Tier-1 (by September 2007); user analysis needs another "few" TB, primarily for ROOT files produced by physicists
- Processing power of 2.5 MSI2K CPU-days per day
(a rough consistency check of these figures is given in the first sketch at the end of this transcript)

Data distribution patterns implemented
- MC raw data distributed from CERN to the Tier-1s
- Reduced DSTs stored at the Tier-1 where they were produced
- DSTs distributed to CERN + 2 Tier-1s
(the second sketch at the end summarises this policy)

The LHCb Tier-1 sites (site: storage system)
- CERN: Castor 2
- CNAF, PIC: Castor 1
- GridKa, IN2P3, NIKHEF, RAL: dCache

Monte Carlo simulation
- 700 million events simulated since May 2006, about 1.5 million events per day throughout the DC06 period
- 1.5 million jobs successfully executed, with up to 10,000 jobs running simultaneously (figure below)
- Over 120 sites used worldwide
- Most of the requested simulations have been completed
- More plots at http://lhcb.pic.es/DIRAC/Monitoring/Production/

Reconstruction and Stripping
- 100 million events reconstructed; 200,000 files recalled from tape
- 10 million events stripped
- 10,000 jobs submitted, with up to 1,200 jobs running simultaneously
- All the Tier-1 sites used
- Work still continuing

Production Management

Data distribution
- MC raw data distributed to the Tier-1s for reconstruction, implemented by a central DIRAC DMS using FTS
- Raw data distribution was part of WLCG SC4 (autumn 2006): good throughput was seen, but SE instabilities prevented transferring to all six sites simultaneously

Data-driven job creation and submission

Data upload from jobs on worker nodes
- LCG utilities used (lcg-cp)
- If a destination is not available, a request is set on a VO-box and retried later (fail-over mechanism); multiple destinations are required (see the upload sketch at the end)

Sustained total transfer rates of more than 50 MB/s (figure below)

Issues faced
- Data access on worker nodes: ROOT/SE incompatibilities (SE configuration); files needing to be pre-staged
- SRM endpoint problems: bad SRM configuration, disk space availability, server overload, etc.; seen at all sites
- Problems with computing elements: CEs acting as "black holes" for the DIRAC pilot agents; bad CE configuration, queue-manager problems, etc.; wrong job environments on the worker nodes
- Problems with grid middleware and services: BDII server overload; VOMS problems (access-permission errors, role changes, etc.); proxy lifetime issues (expiry while jobs wait in the batch queue, ...); srmGet not staging files on dCache; firewall problems when accessing software or data (SE behind a firewall); gLite WMS not yet in production
- Unforeseen issues (a fire in a computer facility!, cooling breakdowns, ...)

Summary
- The LHCb computing model has been successfully exercised
- All requested Monte Carlo events have been simulated
- Reconstruction and stripping are currently ongoing
- Satisfactory data transfer rates have been observed
- Many issues remain with SEs, services and middleware

R. Nandakumar (RAL), on behalf of the LHCb production and DIRAC teams
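
Sketch 1: a consistency check of the resource figures
The resource figures quoted above can be sanity-checked with simple arithmetic. The Python snippet below is only a back-of-the-envelope calculation using the numbers on the poster (10 MB/s CERN-to-Tier-1 bandwidth, 700 million events at roughly 1.5 million events per day); the rounding is ours.

# Back-of-the-envelope check of the DC06 resource figures (illustrative only).

SECONDS_PER_DAY = 86_400

# 10 MB/s sustained from CERN to the Tier-1s
cern_to_t1_rate_mb_per_s = 10
daily_export_tb = cern_to_t1_rate_mb_per_s * SECONDS_PER_DAY / 1e6
print(f"CERN -> Tier-1 export at 10 MB/s: ~{daily_export_tb:.2f} TB/day")        # ~0.86 TB/day

# 700 million events simulated at about 1.5 million events per day
total_events = 700e6
events_per_day = 1.5e6
print(f"Implied simulation period: ~{total_events / events_per_day:.0f} days")   # ~467 days

About 0.86 TB of export per day and roughly 15 months of simulation, consistent with a challenge that started in May 2006 and targeted its disk requirements by September 2007.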
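
Sketch 2: the distribution policy as code
The distribution patterns listed above (RAW to CERN plus one Tier-1, reduced DST kept at the producing Tier-1, DST to CERN plus two further Tier-1s) amount to a simple per-data-type destination policy. The sketch below is illustrative only: the function name and the random choice of partner Tier-1s are assumptions, not DIRAC code; in production the choice of sites would follow the computing-model shares rather than a random draw.

import random

# The six LHCb Tier-1 sites listed on the poster
TIER1S = ["CNAF", "GridKa", "IN2P3", "NIKHEF", "PIC", "RAL"]

def destinations(data_type, producing_t1=None):
    """Return the set of sites a file of the given type should be replicated to
    (hypothetical helper encoding the DC06 distribution patterns)."""
    if data_type == "RAW":
        # Real RAW data: tape at CERN plus one Tier-1
        return {"CERN", random.choice(TIER1S)}
    if data_type == "rDST":
        # Reduced DST: stored at the Tier-1 where it was produced
        return {producing_t1}
    if data_type == "DST":
        # DST: CERN plus two further Tier-1s
        return {"CERN", *random.sample(TIER1S, 2)}
    raise ValueError(f"unknown data type: {data_type}")

print(destinations("DST"))            # e.g. {'CERN', 'PIC', 'RAL'}
print(destinations("rDST", "IN2P3"))  # {'IN2P3'}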
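
Sketch 3: fail-over upload from a worker node
The upload fail-over described under Production Management can be pictured as follows. This is a minimal sketch, assuming the usual lcg-cp <source> <destination SURL> command line; the helper name and the request-file format are invented for illustration. In DC06 the pending transfer was registered as a request on a VO-box and retried by an agent there, which is reduced here to writing a local request file.

import json
import subprocess

def upload_with_failover(local_path, dest_surls, request_file="failover_requests.json"):
    """Copy a job's output to each destination SE with lcg-cp and record a
    fail-over request for every destination that could not be reached."""
    pending = []
    for surl in dest_surls:
        # Assumed invocation: lcg-cp --vo lhcb file:<local path> <destination SURL>
        result = subprocess.run(
            ["lcg-cp", "--vo", "lhcb", f"file:{local_path}", surl],
            capture_output=True,
        )
        if result.returncode != 0:
            # Destination unavailable: leave a request for the VO-box agent to retry
            # (the request format here is purely illustrative).
            pending.append({"source": local_path, "destination": surl})
    if pending:
        with open(request_file, "w") as f:
            json.dump(pending, f, indent=2)
    return pending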