5  LHCb: December 2006
[Detector layout diagram: VELO, RICH1, Magnet, Trackers, RICH2, Calorimeters, Muon system]
Getting Pretty!
6  Summer 2008: Beauty at Last?
1000 million B mesons/year in 2008.
[Diagram: B0-B0bar mixing via b and d quarks]
Suddenly Beautiful!
7  ...and so it is with the Grid?
[Diagram, from "Data Origins of Grid for LHCb", GridPP at NeSC Opening, 25 April 2002:
a Job on a Compute Element (CERN TESTBED, NIKHEF - Amsterdam, rest-of-Grid) stages its
input with replica-get and globus-url-copy to local disk, runs, writes output to a
Storage Element (mss), and publishes it to the Replica Catalogue with register-local-file]
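The job flow in the 2002 diagram can be sketched in a few lines. This is an illustrative mock-up, not the real Globus or EDG API: the class and method names (`ReplicaCatalogue`, `run_grid_job`, the simulated stage-in/stage-out) are all assumptions standing in for the replica-get, globus-url-copy and register-local-file steps named on the slide.

```python
class ReplicaCatalogue:
    """Maps logical file names (LFNs) to physical replica URLs."""
    def __init__(self):
        self.replicas = {}                    # lfn -> list of physical URLs

    def register(self, lfn, url):             # cf. register-local-file / publish
        self.replicas.setdefault(lfn, []).append(url)

    def get_replica(self, lfn):               # cf. replica-get
        return self.replicas[lfn][0]

def run_grid_job(input_lfn, output_lfn, catalogue, site):
    """One job: locate input, process it, publish the output replica."""
    replica_url = catalogue.get_replica(input_lfn)     # locate the input
    # stage in (cf. globus-url-copy), run, stage out -- all simulated here
    result = f"processed({replica_url})"
    output_url = f"gsiftp://{site}/data/{output_lfn}"  # on the local Storage Element
    catalogue.register(output_lfn, output_url)         # make output visible grid-wide
    return result, output_url
```

The key point the slide makes is the last step: the output is not just written locally but registered back into the catalogue, so any other site can find it.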
8  DIRAC WMS Evolution (2006)
[Architecture diagram. DIRAC services: the Job Receiver takes the job JDL and input
sandbox into the JobDB; the Data Optimizer checks data via LFC getReplicas; jobs wait
in the Task Queue served by the Matcher; the Agent Director and Agent Monitor submit
and track pilots (checkJob, checkPilot); Job Monitor (getSandbox) and WMS Admin
(getProxy) complete the service layer. LCG services: the Resource Broker sends a
Pilot Job to a CE; on the worker node the Pilot Agent fetches a matched job, forks a
Job Wrapper, and executes the User Application under glexec; output is uploaded to an
SE, with putRequest to a VO-box on failure]
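The diagram's central idea is the pilot "pull" model: jobs sit in the Task Queue, and a Pilot Agent running on a worker node asks the Matcher for work that fits the resources it actually found there. A minimal sketch, with class and field names that are illustrative rather than the real DIRAC API:

```python
class TaskQueue:
    """Holds submitted jobs with their resource requirements."""
    def __init__(self):
        self.jobs = []                         # list of (job_id, requirements)

    def add(self, job_id, requirements):
        self.jobs.append((job_id, requirements))

class Matcher:
    """Hands out queued jobs to pilots that can satisfy their requirements."""
    def __init__(self, queue):
        self.queue = queue

    def request_job(self, resources):
        # Called by a Pilot Agent from the worker node: return the first
        # queued job whose every requirement the node's resources meet.
        for i, (job_id, req) in enumerate(self.queue.jobs):
            if all(resources.get(k, 0) >= v for k, v in req.items()):
                return self.queue.jobs.pop(i)[0]
        return None                            # nothing matches: pilot exits idle
```

The pull direction is the point: the pilot describes what the node really offers, so a job is only matched to a slot where it can run.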
9  DIRAC Production & Analysis
[Architecture diagram. User interfaces: GANGA, user CLI, production manager, job
monitor, BK query webpage, FileCatalog browser. DIRAC services: Job Management
Service, JobMonitorSvc, JobAccountingSvc (AccountingDB), InformationSvc,
FileCatalogSvc, MonitoringSvc, BookkeepingSvc. DIRAC resources: DIRAC CE, DIRAC
sites with Agents (CE 1-3), LCG Resource Broker, DIRAC Storage (DiskFile, gridftp,
bbftp, rfio)]
GridPP: Gennady Kuznetsov (RAL), DIRAC Production Tools.
DIRAC1 started 19.12.2002; DIRAC3 (data ready) due 2007.
10  GANGA: Gaudi ANd Grid Alliance (2001)
First ideas: Pere Mato, LHCb Workshop, Bologna, 15 June 2001.
[Diagram: the GANGA GUI drives the GAUDI program (JobOptions, Algorithms) through
collective and resource Grid services, returning histograms, monitoring and results]
GridPP: Alexander Soroko (Oxford), Karl Harrison (Cambridge), Ulrik Egede
(Imperial), Alvin Tan (Birmingham).
12  Ganga 2007: Elegant Beauty?
CERN, September 2005; Cambridge, January 2006; Edinburgh, January 2007.
[Screenshot of the Ganga GUI: job builder, job details, logical folders, job
monitoring, scriptor, log window]
13  Ganga Users (2007)
806 unique users since 1 Jan 2007; LHCb = 162 unique users.
[Pie chart: ATLAS, LHCb, other]
17  Monte Carlo Simulation (2007)
Record of 9715 simultaneous jobs over 70+ sites on 28 Feb 2007.
700M events simulated since May 2006; 1.5M jobs submitted.
Raja Nandakumar (RAL).
18  Reconstruction & Stripping (2007)
...but it is not so often that we get all Tier 1 centres working together.
Peak of 439 jobs.
[Plot legend: CNAF, NIKHEF, RAL, IN2P3, CERN]
19  Data Management (2007)
Production jobs upload output to the associated Tier 1 SE (i.e. RAL in the UK).
Multiple failover SEs and multiple VO boxes are used in case of failure.
Replication is done via FTS and a centralised Transfer DB.
eScience PhD: Andrew Smith (Edinburgh).
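The failover path described here can be sketched as a simple fallback chain. This is an illustrative mock-up, not DIRAC's actual data-management code: the `StorageElement`, `VOBox` and `upload_with_failover` names, and the `srm://` URL form, are assumptions for the sketch.

```python
class StorageElement:
    """A toy SE: put() succeeds or raises, like a real upload attempt."""
    def __init__(self, name, available=True):
        self.name = name
        self.available = available

    def put(self, local_file):
        if not self.available:
            raise IOError(f"{self.name} unavailable")
        return f"srm://{self.name}/{local_file}"

class VOBox:
    """Records pending transfer requests for later replication."""
    def __init__(self):
        self.requests = []

    def put_request(self, url, destination):
        self.requests.append((url, destination))

def upload_with_failover(local_file, tier1_se, failover_ses, vo_box):
    try:
        return tier1_se.put(local_file)      # normal case: direct upload
    except IOError:
        pass
    for se in failover_ses:                  # try each failover SE in turn
        try:
            url = se.put(local_file)
            # leave a request on the VO box so the file is later moved
            # from the failover SE to its intended Tier 1 home
            vo_box.put_request(url, tier1_se.name)
            return url
        except IOError:
            continue
    raise IOError(f"all SEs failed for {local_file}")
```

The design choice is that a failed upload never loses data: the file lands somewhere, and the VO box's request queue drives the eventual FTS replication to the proper Tier 1 SE.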
20  Data Transfer (2007)
RAW data is replicated from Tier 0 to one of six Tier 1 sites; gLite FTS is used
for T0-T1 replication. Transfers trigger automated job submission for
reconstruction. A sustained total rate of 40 MB/s is required (and achieved).
Further DAQ-T0-T1 throughput tests at a 42 MB/s aggregate rate are scheduled for
later in 2007.
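A quick back-of-envelope check of what the quoted 40 MB/s sustained rate amounts to. The even split across the six Tier 1 sites is an illustrative assumption; real shares were set by experiment policy.

```python
RATE_MB_S = 40                 # required sustained T0 -> T1 aggregate rate
SECONDS_PER_DAY = 86_400
N_TIER1 = 6                    # number of Tier 1 destination sites

total_mb_per_day = RATE_MB_S * SECONDS_PER_DAY    # 3,456,000 MB
total_tb_per_day = total_mb_per_day / 1_000_000   # roughly 3.5 TB every day
per_site_mb_s = RATE_MB_S / N_TIER1               # ~6.7 MB/s per site if even
```

So sustaining the milestone means moving about 3.5 TB of RAW data out of CERN every day, continuously.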
21  Bookkeeping (2007)
GridPP: Carmine Cioffi (Oxford).
[Architecture diagram: a web browser queries a BookkeepingQuery servlet (Tomcat on
volhcb01); the BookkeepingSvc and BK Service read the Oracle DB through AMGA
clients, with AMGA read/write access]
22  LHCb CPU Use 2005-2007

Country       CPU use (%)
UK                34.0
Italy             16.1
Switzerland       13.7
France             9.8
Spain              7.1
Germany            4.8
Greece             4.0
Netherlands        4.0
Russia             2.0
Poland             1.8
Hungary            0.6

Many thanks to: Birmingham, Bristol, Brunel, Cambridge, Durham, Edinburgh,
Glasgow, Imperial, Lancaster, Liverpool, Manchester, Oxford, QMUL, RAL, RHUL,
Sheffield and all others.
23  UKI Evolution for LHCb
[Pie charts comparing 2004 and 2007 shares: Tier 1, NorthGrid, London, ScotGrid,
SouthGrid]
25  Some 2007-2008 Milestones
- Sustain DAQ-T0-T1 throughput tests at 40+ MB/s.
- Reprocessing (second pass) of data at Tier 1 centres.
- Prioritisation of analysis, reconstruction and stripping jobs (all at Tier 1
  for LHCb). CASTOR has to work reliably for all service classes!
- Ramp-up of hardware resources in the UK.
- Alignment: Monte Carlo done with perfectly positioned detectors... reality
  will be different!
- Calibration: Monte Carlo done with well-understood detectors... reality will
  be different! The distributed Conditions Database plays a vital role.
- Analysis: increasing load from individual users.
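The prioritisation milestone implies a shared Tier 1 queue that orders jobs by class. A minimal sketch using a heap; the specific priority ordering (analysis ahead of reconstruction ahead of stripping) and the class names are illustrative assumptions, not LHCb's actual policy.

```python
import heapq

# Lower value = higher priority; the ordering here is an assumption.
PRIORITY = {"analysis": 0, "reconstruction": 1, "stripping": 2}

class PrioritisedQueue:
    """Orders jobs by class, FIFO within a class."""
    def __init__(self):
        self._heap = []
        self._counter = 0          # tie-breaker keeps FIFO order per class

    def submit(self, job_id, job_class):
        heapq.heappush(self._heap, (PRIORITY[job_class], self._counter, job_id))
        self._counter += 1

    def next_job(self):
        return heapq.heappop(self._heap)[2]
```

Whatever the real ordering, the mechanism is the same: all three job classes share the Tier 1 resources, so the scheduler, not arrival order, decides what runs next.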
26  The End (and the Start): GridPP3
EPS Conference on High Energy Physics, Manchester, 23 July 2007 (Lyn Evans).