LHCb computing model and the planned exploitation of the GRID
Eric van Herwijnen, Frank Harris
Monday, 17 July 2000

Overview
- LHCb computing model and existing resources
- EU GRID project participation
- GRID: a unique opportunity to unify CPU resources in the collaborating institutes
- Mini project
- EU GRID project submitted; paper by F. Harris at:

LHCb computing model
- RICH and Vertex Detector require detailed optimisation studies
- High-level triggers and physics studies require large amounts of simulated background and channel data
- Simulation largely to be done outside CERN (roughly 2/3 outside, 1/3 at CERN)
- GRID to be used with the existing Fortran-based (FTN) simulation program

Current resources for MC production
- CERN: PCSF, 60 queues running WNT (Windows NT)
- RAL: 30 queues running WNT
- Liverpool: 300-PC farm running Linux
- Short-term objective: move away from WNT to Linux and use the GRID

LHCb WP8 application (F. Harris)
- MAP farm (300 CPUs) at Liverpool to generate 10^7 events over 4 months
- Data volumes transferred between facilities (the implied sustained rates are estimated below):
    Liverpool to RAL:    3 TB    (RAW, ESD, AOD, TAG)
    RAL to Lyon/CERN:    0.3 TB  (AOD and TAG)
    Lyon to LHCb inst.:  0.3 TB  (AOD and TAG)
    RAL to LHCb inst.:   100 GB  (ESD for systematic studies)
- Physicists run jobs at a regional centre, or move AOD & TAG data to their local institute and run jobs there
- Also, copy ESD for 10% of events for systematic studies
- Formal production scheduled for early 2002 to mid-2002 (EU schedule)
- BUT we are pushing ahead to get experience so we can define project requirements
- Aim for a 'production' run by end 2000
- On the basis of this experience we will give input on HEP application needs to the middleware groups
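A rough back-of-the-envelope check of what these volumes imply for the network, assuming each sample is moved continuously over the four-month production window; the totals are taken from the list above, everything else is simple arithmetic.

```python
# Average sustained rates implied by the quoted data volumes, assuming
# the data is moved continuously over a ~120-day production window.
TRANSFERS_TB = {
    "Liverpool -> RAL (RAW, ESD, AOD, TAG)": 3.0,
    "RAL -> Lyon/CERN (AOD, TAG)": 0.3,
    "Lyon -> LHCb institutes (AOD, TAG)": 0.3,
    "RAL -> LHCb institutes (ESD for systematics)": 0.1,
}
WINDOW_S = 120 * 24 * 3600  # ~4 months in seconds

for route, volume_tb in TRANSFERS_TB.items():
    mbit = volume_tb * 1e12 * 8 / 1e6  # TB -> Mbit
    print(f"{route}: {mbit / WINDOW_S:.2f} Mbit/s sustained")
```

Even the largest flow works out to only a few Mbit/s when averaged over the window, which puts the "benchmark sustained data transfer rates" goal of the mini project into perspective.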

GRID starting working group
- Glenn Patrick
- Chris Brew
- Frank Harris
- Ian McArthur
- Nick Brook
- Girish Patel
- Themis Bowcock
- Eric van Herwijnen
- Others to join from France, Italy, the Netherlands, etc.

Mini project (1)
- Install Globus at CERN, RAL and Liverpool:
    CERN: installation completed on a single Linux node (pcrd25.cern.ch) running Redhat 6.1, not yet available to the public
    RAL: installed but not yet tested
    MAP: being installed
- Members to get access to the respective sites (an access-check sketch follows below)
- Timescale: 1st week in August
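A minimal sketch of such an access check, assuming the standard Globus command-line tools (grid-proxy-init, globus-job-run) are installed and user certificates are already in place; only pcrd25.cern.ch comes from the slides, the other contact strings are placeholders.

```python
# Minimal Grid access check: create a proxy, then run /bin/hostname on each
# gatekeeper.  Only pcrd25.cern.ch is taken from the slides; the other host
# names are placeholders for the RAL and MAP gatekeepers.
import subprocess

GATEKEEPERS = [
    "pcrd25.cern.ch",       # CERN Globus test node (from the slide above)
    "gatekeeper.rl.ac.uk",  # placeholder RAL contact string
    "map.ph.liv.ac.uk",     # placeholder MAP/Liverpool contact string
]

subprocess.run(["grid-proxy-init"], check=True)  # prompts for the pass phrase
for host in GATEKEEPERS:
    result = subprocess.run(["globus-job-run", host, "/bin/hostname"],
                            capture_output=True, text=True)
    status = "ok" if result.returncode == 0 else "FAILED"
    print(f"{host}: {status} {result.stdout.strip()}")
```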

Mini project (2)
- Run SICBMC at CERN, RAL and Liverpool:
    Prepare a script using Globus commands to run sicbmc and copy the data back to the host from which the job was submitted (a sketch of such a script follows below)
    Other partners to test the script from their sites
- Timescale: middle of August
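A minimal sketch of such a wrapper, assuming the standard globus-job-run and globus-url-copy tools are available on the submitting host; the sicbmc options, paths and output file names are illustrative guesses, not the real production configuration.

```python
# Run sicbmc on a remote gatekeeper and bring the output home.  The sicbmc
# options, paths and file names are illustrative assumptions.
import subprocess

REMOTE_HOST = "pcrd25.cern.ch"   # CERN Globus test node (from the slides)
REMOTE_WORKDIR = "/tmp/lhcb_mc"  # assumed scratch area on the remote node
LOCAL_DIR = "/data/lhcb_mc"      # assumed local landing area

def run_remote(*args: str) -> None:
    """Run a command on the remote gatekeeper via globus-job-run."""
    subprocess.run(["globus-job-run", REMOTE_HOST, *args], check=True)

def fetch(remote_path: str, local_path: str) -> None:
    """Copy a remote file back with globus-url-copy."""
    subprocess.run(["globus-url-copy",
                    f"gsiftp://{REMOTE_HOST}{remote_path}",
                    f"file://{local_path}"], check=True)

run_remote("/bin/mkdir", "-p", REMOTE_WORKDIR)
# Hypothetical invocation: generate 500 events into a fixed output file.
run_remote("/lhcb/bin/sicbmc", "-n", "500", "-o", f"{REMOTE_WORKDIR}/mc.dst")
fetch(f"{REMOTE_WORKDIR}/mc.dst", f"{LOCAL_DIR}/mc.dst")
```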

Mini project (3)
- Verify that data can be shipped back to CERN and written onto tape:
    Use Globus commands
    Some scripting required to use SHIFT and to update the bookkeeping database
- Timescale: end of August
- Benchmark sustained data transfer rates (a timing sketch follows below)
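One way to get a first number for the sustained rate, sketched under the same assumptions as above (placeholder host and paths): time a single globus-url-copy transfer of a file of known size.

```python
# Time a globus-url-copy transfer of a known file and report the average
# rate.  Source and target paths are placeholders.
import os
import subprocess
import time

SOURCE = "gsiftp://pcrd25.cern.ch/tmp/lhcb_mc/mc.dst"  # placeholder remote file
TARGET = "/data/lhcb_mc/mc_copy.dst"                   # placeholder local path

start = time.time()
subprocess.run(["globus-url-copy", SOURCE, f"file://{TARGET}"], check=True)
elapsed = time.time() - start

size_mb = os.path.getsize(TARGET) / 1e6
print(f"{size_mb:.1f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.2f} MB/s")
```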

Integration of the GRID between CERN, RAL and MAP (1)
- Globus:
    The toolkit has a C API (easy to integrate with Gaudi)
    Commands for remotely running scripts (or executables), recovering data from standard output, and saving/consulting metadata through LDAP (a query sketch follows below)
- The gains:
    Everyone uses the same executable
    Everyone uses the same scripts
    Data is handled in a uniform way
- Batch system (LSF, PBS, Condor) to be discussed
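The slide only says that metadata is saved and consulted through LDAP; a minimal sketch of what such a query could look like, using the python-ldap bindings rather than the C API named above, with a made-up server URL, base DN, filter and attribute names.

```python
# Illustrative LDAP query for production metadata.  Server URL, base DN,
# filter and attribute names are invented for this sketch.
import ldap  # python-ldap bindings

conn = ldap.initialize("ldap://lhcb-meta.cern.ch")  # hypothetical server
conn.simple_bind_s()                                # anonymous bind

results = conn.search_s(
    "ou=bookkeeping,o=lhcb",            # hypothetical base DN
    ldap.SCOPE_SUBTREE,
    "(jobType=sicbmc)",                 # hypothetical filter
    ["jobId", "nEvents", "outputFile"],
)

for dn, attrs in results:
    print(dn, attrs)
```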

Integration of the GRID between CERN, RAL and MAP (2)
- Explore the use of LDAP for the bookkeeping database (an update sketch follows below):
    An API in C would solve the current Oracle -> Gaudi interface problem
    Simplification of DB updating by MC production
    Everybody is heading in this direction
    Oracle have an LDAP server product; someone should investigate
- Java job submission tools should be modified to create Globus jobs
- Timescale: October (to be done in parallel with the current NT production)
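For the DB-updating side, a matching sketch of how MC production might register a finished job over LDAP; the DN layout, object class, attributes and credentials are invented for illustration.

```python
# Illustrative bookkeeping update over LDAP after an MC job finishes.
# DN layout, object class, attributes and credentials are invented.
import ldap
import ldap.modlist as modlist

conn = ldap.initialize("ldap://lhcb-meta.cern.ch")    # hypothetical server
conn.simple_bind_s("cn=production,o=lhcb", "secret")  # placeholder credentials

entry = {
    "objectClass": [b"top", b"lhcbJob"],  # hypothetical schema
    "jobId": [b"run01234"],
    "site": [b"MAP-Liverpool"],
    "nEvents": [b"500"],
    "outputFile": [b"mc.dst"],
}

conn.add_s("jobId=run01234,ou=bookkeeping,o=lhcb", modlist.addModlist(entry))
```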

Integration of the GRID between CERN, RAL and MAP (3)
- RAL NT farm to be converted to Linux this autumn
- MAP already uses Linux
- CERN have been asked to install Globus on the Linux batch service (LSF)
- A full MC production run using Globus is aimed for in December

Extension to other institutes
- Establish a "GRID" architecture (a possible site description is sketched below):
    Intel PCs running Linux Redhat 6.1
    LSF, PBS or Condor for job scheduling
    Globus for managing the GRID
    LDAP for our bookkeeping database
    Java tools for connecting production, bookkeeping & the GRID
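To make the architecture concrete, a hypothetical per-site description of the kind the production and submission tools would need when a new institute joins; apart from pcrd25.cern.ch, all field names and values are invented for illustration.

```python
# Hypothetical per-site description for the production/submission tools.
# Field names and values are invented; only pcrd25.cern.ch appears in the slides.
SITES = {
    "CERN": {
        "os": "Redhat 6.1",
        "scheduler": "LSF",
        "gatekeeper": "pcrd25.cern.ch",
        "bookkeeping": "ldap://lhcb-meta.cern.ch",  # hypothetical
    },
    "MAP-Liverpool": {
        "os": "Redhat 6.1",
        "scheduler": "PBS",                # assumed
        "gatekeeper": "map.ph.liv.ac.uk",  # placeholder
        "bookkeeping": "ldap://lhcb-meta.cern.ch",
    },
}

def gatekeeper(site: str) -> str:
    """Return the Globus contact string used to submit jobs to a site."""
    return SITES[site]["gatekeeper"]

print(gatekeeper("CERN"))
```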