Gridifying the LHCb Monte Carlo production system


Eric van Herwijnen, CERN (eric.van.herwijnen@cern.ch)
Tuesday, 19 February 2002
Talk given at GGF4, Toronto

Contents
- LHCb
- LHCb distributed computing environment
- Current GRID involvement
- Functionality of current Monte Carlo system
- Integration of DataGrid middleware
- Monitoring and control
- Requirements of DataGrid middleware

LHCb
- LHC collider experiment: 10^9 events * 1 MB = 1 PB
- Problems of data storage, access and computation
- Monte Carlo simulation very important for detector design
- Need a distributed model
- Create, distribute and keep track of data automatically
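The data-volume arithmetic quoted above works out as follows; a back-of-the-envelope sketch using the round numbers from the slide:

```python
# Check of the slide's data volume estimate:
# 10^9 events at ~1 MB per event comes to ~1 PB.
n_events = 10**9        # expected number of events (from the slide)
event_size_mb = 1       # ~1 MB per event (from the slide)

total_mb = n_events * event_size_mb
total_pb = total_mb / 10**9          # 1 PB = 10^9 MB in decimal units
print(f"total: {total_pb:.0f} PB")   # -> total: 1 PB
```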

LHCb distributed computing environment
- 15 countries (13 European + Brazil and China), ~50 institutes
- Tier-0: CERN
- Tier-1: RAL, IN2P3 (Lyon), INFN (Bologna), Nikhef, CERN + ?
- Tier-2: Liverpool, Edinburgh/Glasgow, Switzerland + ? (grow to ~10)
- Tier-3: ~50 throughout the collaboration
- Ongoing negotiations for Tier-1/2/3 centres: Germany, Russia, Poland, Spain, Brazil

Current GRID involvement
- EU DataGrid project (involves HEP, Biology, Medicine and Earth Observation sciences)
- Active in WP8 (HEP applications) of DataGrid
- Use "middleware" (WP1-5) + Testbed (WP6) + Network (WP7)
- The current distributed system has been working for some time; LHCb is Grid-enabled, but not Grid-dependent

MC production facilities (summer 2001)

Centre      Max. (av.) CPUs      Batch     Typical weekly   % submitted
            available at once    system    production       through GRID
CERN        315 (60)             LSF       85k              10%
RAL         100 (60)             PBS       35k              100%
IN2P3       225 (60)             BQS       -                -
Liverpool   300 (250)            Custom    150k             0%
Bologna     20 (20)              -         -                -
Nikhef      40 (40)              -         -                -
Bristol     10 (10)              -         15k              -

Functionality of the current Monte Carlo system
- Submit jobs remotely via Web
- Update bookkeeping database
- Transfer data to mass store
- Execute on farm
- Monitor performance of farm via Web
- Data quality check
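As a rough illustration of how these steps might chain together (in one plausible order), here is a minimal Python sketch; every function name below is a hypothetical stand-in, not part of the real production tools:

```python
# Hypothetical sketch of the production workflow listed above.
# Each step is a stub standing in for the real Web/farm/mass-store tools.

def submit_via_web(job):
    print(f"[web] submitting job {job}")

def execute_on_farm(job):
    print(f"[farm] running job {job}")
    return f"logfile-{job}.txt"     # pretend log produced by the job

def transfer_to_mass_store(job):
    print(f"[store] transferring output of job {job}")

def update_bookkeeping(job, log):
    print(f"[bkk] recording job {job} with log {log}")

def data_quality_check(job):
    print(f"[dq] checking data quality for job {job}")

def run_production(job):
    submit_via_web(job)
    log = execute_on_farm(job)      # farm performance monitored via Web
    transfer_to_mass_store(job)
    update_bookkeeping(job, log)
    data_quality_check(job)

run_production(42)
```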

GRID-enabling production
- Construct job script and submit via Web (dg-authentication, dg-job-submit)
- Run MC executable, write log to Web
- Copy data to local mass store (dg-data-copy)
- Call CERN servlet to copy data from the local mass store to the CERN mass store (mass store FTP servlet, dg-data-replication)
- Update bookkeeping DB (LDAP?, now Oracle)
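A minimal sketch of what "replacing commands by DataGrid middleware" could look like in a script. The dg- command names are the ones on the slide; the argument layouts, file names and paths shown are assumptions for illustration, not documented signatures:

```python
# Sketch of wrapping the EDG command-line tools named on this slide.
# Command names come from the slide; the exact arguments are assumptions.
import subprocess

def authenticate():
    # obtain Grid credentials before submitting (dg-authentication)
    subprocess.run(["dg-authentication"], check=True)

def submit_job(jdl_file):
    # dg-job-submit takes a JDL file describing the job
    subprocess.run(["dg-job-submit", jdl_file], check=True)

def copy_to_mass_store(local_file, destination):
    # hypothetical argument layout for dg-data-copy
    subprocess.run(["dg-data-copy", local_file, destination], check=True)

authenticate()
submit_job("mc_production.jdl")   # hypothetical JDL file name
copy_to_mass_store("events.dst", "castor:/lhcb/mc/events.dst")  # made-up paths
```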

Gridifying the MC production system
- Provide a convenient tool for DataGrid Testbed validation tests
- Feed improvements back into the MC system currently in production
- Clone the current system, replacing its commands with DataGrid middleware
- Report back to WP8 and other workpackages as required

Monitoring and control of running jobs
- Control system to monitor distributed production (based on PVSS; author: Clara Gaspar)
- Initially for MC production, later all Grid computing
- Automatic quality checks on final data samples
- Online histograms and comparisons between histograms
- Use DataGrid monitoring tools
- Feed improvements back into the production MC system
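As a concrete illustration of an automatic histogram comparison (not the PVSS-based tool the slide refers to), a simple chi-square test between a freshly produced histogram and a reference could look like this; the bin contents are invented:

```python
# Minimal sketch of an automatic data-quality check: a chi-square
# comparison between a new histogram and a reference histogram.

def chi2_per_bin(reference, sample):
    """Two-histogram chi-square per bin (assumes comparable normalisation)."""
    assert len(reference) == len(sample)
    chi2, nbins = 0.0, 0
    for r, s in zip(reference, sample):
        if r + s > 0:
            chi2 += (r - s) ** 2 / (r + s)
            nbins += 1
    return chi2 / nbins if nbins else 0.0

reference = [100, 250, 400, 250, 100]   # reference bin contents (made up)
sample    = [ 95, 260, 405, 245, 105]   # new production sample (made up)

score = chi2_per_bin(reference, sample)
print(f"chi2/bin = {score:.3f}")        # flag the sample if this is large
```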

Requirements on DataGrid middleware
- Security: single user logon
- Job submission: use "sandboxes" to package the environment so that AFS is unnecessary
- Monitoring: integrate with WP3 tools where possible for farm monitoring; use our own tools for data quality monitoring
- Data moving: use a single API to move data
We are in a cycle of requirements, design, implementation and testing.
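The "single API to move data" requirement can be read as a thin wrapper hiding the transport behind one call. A sketch under that assumption follows; the backend names and dispatch logic are illustrative, not the DataGrid design:

```python
# Sketch of a single data-moving API hiding the underlying transport.
import shutil
import subprocess

def move_data(source, destination, transport="local"):
    """Move a file with one call, whatever the underlying transport."""
    if transport == "local":
        shutil.copy(source, destination)
    elif transport == "gridftp":
        # globus-url-copy is the standard GridFTP client; URLs assumed valid
        subprocess.run(["globus-url-copy", source, destination], check=True)
    else:
        raise ValueError(f"unknown transport: {transport}")

# hypothetical usage:
# move_data("events.dst", "gsiftp://castor.cern.ch/lhcb/events.dst",
#           transport="gridftp")
```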