LHCb is Beautiful? Glenn Patrick, GridPP19, 29 August 2007


Slide 1: LHCb is Beautiful? Glenn Patrick, GridPP19, 29 August 2007

Slide 2: In the beginning…

Slide 3: LHCb – GridPP1 Era (May 2002). Empty!

Slide 4: LHCb – GridPP2 Era (Mar 2005). Not Beautiful!

Slide 5: LHCb, December 2006. (Detector diagram, labelled: Muon system, Calorimeters, RICH2, Trackers, Magnet, RICH1, VELO, with the p–p interaction point.) Getting Pretty!

Slide 6: Summer 2008 – Beauty at Last? 1000 million B mesons/year in 2008. (Diagram: B0–B0bar oscillation via b and d quark lines.) Suddenly Beautiful!

Slide 7: …and so it is with the Grid? Origins of Grid for LHCb, shown at the GridPP demo for the NeSC opening, 25 April 2002. (Diagram: a job on a Compute Element in the CERN testbed writes data to local disk, copies it to a Storage Element with globus-url-copy, and registers it in the Replica Catalogue via publish and register-local-file; jobs elsewhere on the Grid, e.g. at NIKHEF Amsterdam or the rest of the Grid, fetch copies with replica-get from Storage Elements backed by mass storage (mss).)

Slide 8: DIRAC WMS Evolution (2006). (Diagram: the Job Receiver accepts a user job, JDL plus input sandbox, into the JobDB; the Data Optimizer checks input data against the LFC (checkData, getReplicas) and places the job in the Task Queue; the Agent Director (checkJob) submits Pilot Jobs through the LCG Resource Broker to a Computing Element; on the worker node the Pilot Agent requests work from the Matcher, then a Job Wrapper forks and executes the user application under glexec; output is uploaded to an SE, with transfer requests placed on the VO-box (putRequest); the Agent Monitor (checkPilot), Job Monitor (getSandbox) and WMS Admin (getProxy) provide supporting services. DIRAC services and LCG services are shown separately; the workload runs on the WN.)
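The heart of the slide 8 diagram is the pilot "pull" model: jobs wait in a central task queue, and pilot agents on worker nodes ask a matcher for work that fits their resources. A minimal toy sketch in Python (class and method names such as TaskQueue and Matcher are illustrative only, not DIRAC's actual API):

```python
# Toy sketch of a pilot-agent pull model: a central matcher hands
# queued jobs to pilots that land on worker nodes. Illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Job:
    job_id: int
    required_site: Optional[str] = None  # None = runs at any site

@dataclass
class TaskQueue:
    jobs: list = field(default_factory=list)

    def add(self, job: Job) -> None:
        self.jobs.append(job)

class Matcher:
    """Central service: gives a pilot the first queued job its site can run."""
    def __init__(self, queue: TaskQueue):
        self.queue = queue

    def match(self, pilot_site: str) -> Optional[Job]:
        for job in self.queue.jobs:
            if job.required_site in (None, pilot_site):
                self.queue.jobs.remove(job)
                return job
        return None  # no suitable work for this pilot

# A pilot lands on a worker node at RAL and pulls a job.
tq = TaskQueue()
tq.add(Job(1, required_site="CERN"))
tq.add(Job(2))                      # site-neutral job
matcher = Matcher(tq)
job = matcher.match("RAL")
print(job.job_id)                   # → 2: the site-neutral job is matched
```

Note the inversion relative to push-style brokering: the pilot only asks for work after it has successfully started on a worker node, which is what makes the scheme robust against flaky sites.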

Slide 9: DIRAC Production & Analysis. (Diagram: the production manager's CLI and the user's GANGA UI submit to the DIRAC Job Management Service, which dispatches work through Agents at DIRAC sites to DIRAC CEs and, via the LCG Resource Broker, to LCG CEs 1–3; DIRAC services include the JobMonitorSvc, JobAccountingSvc with its AccountingDB, InformationSvc, FileCatalogSvc, MonitoringSvc and BookkeepingSvc, with a BK query webpage and FileCatalog browser as user interfaces; DIRAC Storage covers DiskFile access via gridftp, bbftp and rfio.) GridPP: Gennady Kuznetsov (RAL), DIRAC production tools. DIRAC1: started; DIRAC3 (data-ready): due 2007.

Slide 10: GANGA: Gaudi ANd Grid Alliance. First ideas: Pere Mato, LHCb Workshop, Bologna, 15 June 2001. (Diagram: the GANGA GUI sits between the GAUDI program, with its JobOptions and Algorithms, and the collective & resource Grid services, returning histograms, monitoring and results.) GridPP: Alexander Soroko (Oxford), Karl Harrison (Cambridge), Ulrik Egede (Imperial), Alvin Tan (Birmingham).

Slide 11: Ganga Evolution. (Table of applications and backends by experiment. Experiment-neutral: Executable application; Local, LSF and PBS backends. ATLAS: Athena (simulation/digitisation/reconstruction/analysis) and AthenaMC (production); OSG, NorduGrid and PANDA (US-ATLAS WMS) backends. LHCb: Gauss/Boole/Brunel/DaVinci (simulation/digitisation/reconstruction/analysis); LHCb WMS backend. Ganga replaces experiment-specific submission tools.)

Slide 12: Ganga 2007: Elegant Beauty? Screenshot of the Ganga GUI (Edinburgh, January 2007) showing the job builder, scriptor, logical folders, job details, job monitoring and log window. Earlier milestones: CERN, September 2005; Cambridge, January 2006.

Slide 13: Ganga Users. Unique users since 1 Jan 2007: LHCb = 162. (Chart of unique users by experiment: ATLAS, LHCb, Other.)

Slide 14: Ganga by Domain. (Chart: CERN, Other.)

Slide 15: LHCb Grid, circa 2001. Initial LHCb-UK testbed institutes (marked as existing or planned): RAL CSF (120 Linux CPUs, IBM 3494 tape robot); Liverpool MAP (300 Linux CPUs); CERN; RAL (PPD); Bristol; Imperial College; Oxford; Cambridge; Glasgow/Edinburgh proto-Tier 2; RAL DataGrid testbed.

Slide 16: LHCb Computing Model. Trigger chain: 40 MHz → Level-0 (hardware) → 1 MHz → Level-1 (software) → HLT (software) → 40 kHz; event size 2 kB/event, storage rate 60 MB/s.
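As a back-of-envelope check on the slide's figures (illustrative arithmetic only, assuming decimal units, i.e. 1 kB = 10^3 B and 1 MB = 10^6 B), a 60 MB/s stream of 2 kB events corresponds to writing about 30,000 events per second:

```python
# Sanity-check the slide's storage figures (decimal units assumed).
event_size_bytes = 2e3      # 2 kB/event
stream_rate_bps = 60e6      # 60 MB/s to storage
events_per_second = stream_rate_bps / event_size_bytes
print(events_per_second)    # → 30000.0 events/s
```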

Slide 17: Monte Carlo Simulation 2007. Record of 9,715 simultaneous jobs over 70+ sites on 28 Feb 2007. Raja Nandakumar (RAL). 700M events simulated since May; …M jobs submitted.

Slide 18: Reconstruction & Stripping. …but it is not so often that we get all Tier 1 centres working together. Peak of 439 jobs (plot shows CNAF, NIKHEF, RAL, IN2P3, CERN).

Slide 19: Data Management. Production jobs upload output to the associated Tier 1 SE (i.e. RAL in the UK). Multiple failover SEs and multiple VO-boxes are used in case of failure. Replication is done via FTS and a centralised Transfer DB. eScience PhD: Andrew Smith (Edinburgh).
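The failover scheme on slide 19 (try the associated Tier 1 SE, fall back to failover SEs, otherwise park a transfer request on a VO-box for later retry) can be sketched in a few lines; every function and SE name here is hypothetical, not LHCb's actual tooling:

```python
# Toy sketch of failover data upload: try the primary SE, then each
# failover SE; if every transfer fails, queue a retry request on a
# VO-box. All names are illustrative.
def upload(filename: str, se: str, broken: set) -> bool:
    """Pretend transfer: succeeds unless the SE is in the 'broken' set."""
    return se not in broken

def upload_with_failover(filename, primary_se, failover_ses,
                         vobox_queue, broken=frozenset()):
    for se in [primary_se] + failover_ses:
        if upload(filename, se, broken):
            return se                         # stored here; FTS replicates later
    vobox_queue.append((filename, primary_se))  # park a transfer request
    return None

queue = []
# Primary SE (RAL) is down; the first failover SE accepts the file.
dest = upload_with_failover("run1234.raw", "RAL-SE",
                            ["CERN-FAILOVER", "CNAF-FAILOVER"],
                            queue, broken={"RAL-SE"})
print(dest)   # → CERN-FAILOVER
```

The point of the VO-box queue is that a job never blocks on storage: if all SEs are down, the request is retried asynchronously by an agent on the VO-box.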

Slide 20: Data Transfer. RAW data is replicated from Tier 0 to one of six Tier 1 sites; gLite FTS is used for the T0–T1 replication. Transfers trigger automated job submission for reconstruction. A sustained total rate of 40 MB/s is required (and has been achieved). Further DAQ–T0–T1 throughput tests at a 42 MB/s aggregate rate are scheduled for later in the year.

Slide 21: Bookkeeping (2007). GridPP: Carmine Cioffi (Oxford). (Diagram: a web browser queries a BookkeepingQuery servlet running under Tomcat on volhcb01; the BK Service (BookkeepingSvc) and the servlet read the Oracle DB through AMGA clients, with AMGA holding read/write access to the database.)

Slide 22: LHCb CPU Use.

Country       CPU use (%)
UK            34.0
Italy         16.1
Switzerland   13.7
France         9.8
Spain          7.1
Germany        4.8
Greece         4.0
Netherlands    4.0
Russia         2.0
Poland         1.8
Hungary        0.6

(Pie chart: CERN, UK, Italy, Switzerland, France, Spain, Germany.) Many thanks to: Birmingham, Bristol, Brunel, Cambridge, Durham, Edinburgh, Glasgow, Imperial, Lancaster, Liverpool, Manchester, Oxford, QMUL, RAL, RHUL, Sheffield and all others.

Slide 23: UKI Evolution for LHCb. (Map: Tier 1, NorthGrid, London, ScotGrid, SouthGrid.)

Slide 24: GridPP3: Final Crucial Step(s). GridPP Beauty!

Slide 25: Some Milestones
- Sustain DAQ–T0–T1 throughput tests at 40+ MB/s.
- Reprocessing (second pass) of data at Tier 1 centres.
- Prioritisation of analysis, reconstruction and stripping jobs (all at Tier 1 for LHCb). CASTOR has to work reliably for all service classes!
- Ramp-up of hardware resources in the UK.
- Alignment. Monte Carlo was done with perfectly positioned detectors… reality will be different!
- Calibration. Monte Carlo was done with well-understood detectors… reality will be different! The distributed Conditions Database plays a vital role.
- Analysis. Increasing load from individual users.

Slide 26: The End (and the Start). Lyn Evans, EPS Conference on High Energy Physics, Manchester, 23 July 2007. GridPP3.
