Computing for the LHCb Upgrade

LHCb Upgrade: goal and timescale

- The LHCb upgrade will be operational after LS2 (~2020)
- Significantly increase the statistics collected by the experiment while keeping the present excellent performance
  - Raise the operational luminosity to a levelled 2 x 10^33 cm^-2 s^-1
    - Necessitates a redesign of several sub-detectors
    - Does not require the HL-LHC
  - Full software trigger at the 40 MHz bunch crossing rate
    - Allows effective operation at higher luminosity
    - Improved efficiency in hadronic modes
    - Necessitates an upgrade of the DAQ and HLT
- The gain is a huge increase in precision, in many cases down to the theoretical limit, and the ability to perform studies beyond the reach of the current experiment
  - The flexible trigger and unique acceptance also open up opportunities for topics beyond flavour physics

LHCb Upgrade Computing Timeline

- 2020 is tomorrow
  - No time (or effort) for major changes in technology
  - Focused R&D based on existing experience
  - Possibility to use Run 2 as a test bed for new ideas
- Roadmap document for the TDR by Q…
  - Specify the R&D required for informed decisions in the TDR
- All R&D reports ready by Q…
- TDR scheduled for Q…
  - Baseline technology choices made
- Computing model finalized by Q…

Trigger at the upgrade

- Trigger-less readout at the full LHC crossing rate
  - No hardware (L0) trigger
- First and second level software triggers (HLT1/HLT2) running on the Event Filter Farm
  - Full deferral as in Run 2
    - Offline-quality detector calibration
- Event size:
  - 100 kB maximum (constraint from the readout system)
    - … times smaller for channels going to the TURBO stream

Trigger now and at the upgrade

HLT1 reconstruction strategy

- With the upgrade tracking detectors:
  - Total HLT1 tracking time ~6 ms per event (a rough farm-sizing estimate follows below)
- Offline-quality tracking at 30 MHz is possible in software!
  - The concept is already in use (at 1 MHz) in Run 2
- LHCb will be the first collider experiment to operate an all-software trigger at the full event rate
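As a rough illustration (not from the slides), assuming the ~6 ms quoted above is the per-event HLT1 tracking time, Little's law gives the number of events that must be processed concurrently at a 30 MHz input rate, and hence a crude lower bound on the size of the Event Filter Farm:

```python
# Back-of-the-envelope HLT1 farm sizing. Only the 30 MHz rate and the
# ~6 ms/event tracking time come from the slides; the rest is arithmetic.
input_rate_hz = 30e6        # event rate into HLT1
tracking_time_s = 6e-3      # approximate HLT1 tracking time per event

# Little's law: events in flight = arrival rate x time spent per event
events_in_flight = input_rate_hz * tracking_time_s
print(f"Concurrent events in HLT1: {events_in_flight:,.0f}")   # ~180,000

# If each logical core processes one event at a time, this is also a rough
# lower bound on the number of core-equivalents the farm would need.
print(f"Rough core-equivalent estimate: {events_in_flight:,.0f}")
```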

Computing Model Brainstorming: Reconstruction

- After HLT1, data is buffered on local disk
  - Deferred processing once offline-quality calibration has been obtained
    - Already planned for Run 2
- Aim to run the full offline reconstruction in HLT2
  - If the reconstruction is identical to offline, is there a need to run it again offline?
  - And if the reconstruction is not run offline, can we redefine RAW data to be ONLY the reconstruction output?
- The CPU budget for reconstruction is crucial (a toy cost comparison is sketched below)
  - Optimise for the HLT farm hardware
  - R&D on alternative architectures
    - Since the reconstruction runs in only one place (the HLT farm), hardware and software can be optimised together
      - x86, but also GPU, Xeon Phi, ARM, …
    - The metric is events reconstructed per CHF
      - But remember the code must also run on a standard CPU (e.g. for offline simulation)
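A minimal sketch of the "events reconstructed per CHF" metric applied to candidate architectures. All throughput and cost figures below are placeholders for illustration, not measurements, and a real evaluation would also fold in power and operations costs:

```python
# Toy comparison of candidate farm architectures using an
# events-reconstructed-per-CHF figure of merit. Numbers are placeholders.
candidates = {
    # name: (events reconstructed per second per node, cost per node in CHF)
    "x86 dual-socket": (400.0, 8000.0),
    "x86 + GPU":       (900.0, 12000.0),
    "ARM server":      (250.0, 4000.0),
}

lifetime_s = 4 * 365 * 24 * 3600   # assume a 4-year farm lifetime

for name, (evt_per_s, cost_chf) in candidates.items():
    events_per_chf = evt_per_s * lifetime_s / cost_chf
    print(f"{name:16s}: {events_per_chf:.3e} events/CHF")
```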

Event selection: The Game Has Changed (© Tim Head)

- In the upgrade era there are no "boring" events; the HLT is about classifying signal events!
- 800 kHz of reconstructible charm hadrons, 270 kHz of reconstructible beauty hadrons
  - c.f. the 2-5 GB/s allowed to offline storage (illustrated numerically below)
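A quick numerical illustration of why the signal alone cannot be written out in full. The rates above and the 100 kB Full-event size from the trigger slide are combined; everything else is arithmetic:

```python
# Signal alone would saturate the offline bandwidth many times over
# if every event were written out in full.
signal_rate_hz = 800e3 + 270e3     # reconstructible charm + beauty hadrons
full_event_size_bytes = 100e3      # maximum Full event size

needed_bw = signal_rate_hz * full_event_size_bytes   # bytes/s
allowed_bw = 5e9                                      # upper end of the 2-5 GB/s range

print(f"Needed if all signal kept in full: {needed_bw/1e9:.0f} GB/s")   # ~107 GB/s
print(f"Allowed to offline storage:        {allowed_bw/1e9:.0f} GB/s")
print(f"Excess factor: {needed_bw/allowed_bw:.0f}x")                    # ~21x
```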

The Game Has Changed (© Vava Gligorov)

Brainstorming: The Game Has Changed

- The trigger is no longer selecting events, but classifying them
  - Write out what the bandwidth and offline resources allow; everything written out will be analysed
- Many exclusive analyses are interested only in the decay tree of the triggering signal
  - So write out only the interesting part of the event, not the whole event
  - Turbo stream idea, being tested in Run 2
    - 2.5 kHz of signal, ~5 kB/event, going directly to analysis
    - x10 reduction in event size
- If all events are interesting offline, the current model of stripping no longer applies
  - Streaming becomes more relevant, but how many streams?
  - Is direct access to individual events more relevant?
    - Needs an event index, and R&D on efficient access to single events (see the sketch below)
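A minimal sketch of the event-index idea: map an event identifier to the file and byte offset where the event is stored, so a single event can be retrieved without scanning a whole stream. This is illustrative only, not an existing LHCb service or file format; the table layout and file name are assumptions:

```python
# Toy event index: (run, event) -> (file, offset, size) for sparse access.
import sqlite3

def build_index(db_path):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS event_index (
                       run INTEGER, event INTEGER,
                       file TEXT, offset INTEGER, size INTEGER,
                       PRIMARY KEY (run, event))""")
    return con

def register(con, run, event, filename, offset, size):
    # Record where an event landed while the stream is being written.
    con.execute("INSERT OR REPLACE INTO event_index VALUES (?,?,?,?,?)",
                (run, event, filename, offset, size))

def locate(con, run, event):
    # Look up a single event without touching the data files.
    return con.execute("SELECT file, offset, size FROM event_index "
                       "WHERE run=? AND event=?", (run, event)).fetchone()

# Usage: register events during writing, fetch one event later.
con = build_index(":memory:")
register(con, run=180000, event=42, filename="turbo_000.mdf",
         offset=1_048_576, size=5_000)
print(locate(con, 180000, 42))
```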

Brainstorming: output event rate

- The design range for offline storage is an HLT output rate of 2-5 GB/s
  - This represents a huge range of event rates, depending on the mix of Full events and Turbo events:
    - 2 GB/s of Full events (100 kB) -> 20 kHz
    - 5 GB/s of Turbo events (5 kB) -> 1 MHz
  - Reality will be somewhere in between; the physics optimisation of the bandwidth remains to be done (the arithmetic is spelled out below)
- The main implication for offline resources is the CPU needed for simulation
  - There is a factor 50 in events to be simulated between the two extremes above
    - This will be an important ingredient of the physics optimisation
  - Clearly a major development effort in fast MC techniques is needed
    - Already started for Run 2 physics, focusing on reducing the need for full simulation
      - Parametric approaches, partial event simulation
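The two extremes quoted above, worked through explicitly; only the bandwidths and event sizes come from the slide, the rest is arithmetic:

```python
# Convert the HLT output bandwidth extremes into event rates.
full_event_size = 100e3    # bytes per Full event
turbo_event_size = 5e3     # bytes per Turbo event

rate_full_only  = 2e9 / full_event_size    # 2 GB/s of Full events
rate_turbo_only = 5e9 / turbo_event_size   # 5 GB/s of Turbo events

print(f"All Full:  {rate_full_only/1e3:.0f} kHz")     # 20 kHz
print(f"All Turbo: {rate_turbo_only/1e6:.1f} MHz")    # 1.0 MHz
print(f"Ratio of events to simulate: {rate_turbo_only/rate_full_only:.0f}x")  # factor 50
```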

Brainstorming: storage model

- Three classes of storage (a toy migration policy is sketched below)
  - Disk, for active data analysis
    - Real data, frequently accessed simulation
  - Active tape
    - Less frequently accessed simulation
    - Migration between disk and tape based on popularity predictions
  - Archive tape
    - Only for data, simulation and analysis preservation
      - No need for a large disk cache
      - No I/O latency constraint; could it be outsourced?
- A few sites for disk, even fewer for tape
  - ~3 sites with active tape would be sufficient
  - Enough disk sites to provide low latency and high availability for analysis jobs
    - No technical need for many small disk pools
      - (but this is recognised as an important funding/sociological issue)
- CPU can continue to be anywhere
  - Current WLCG distributed model
  - Leverage opportunistic resources for simulation
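A toy sketch of a popularity-based placement policy for the disk / active-tape / archive split. The access-count popularity proxy and the thresholds are assumptions made up for illustration, not part of the LHCb computing model:

```python
# Toy popularity-based placement policy for datasets.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    accesses_last_90d: int   # simple popularity proxy (assumption)
    is_real_data: bool

def placement(ds: Dataset, hot_threshold: int = 10) -> str:
    """Decide which storage class a dataset should live on."""
    if ds.is_real_data or ds.accesses_last_90d >= hot_threshold:
        return "disk"          # actively analysed data stays on disk
    if ds.accesses_last_90d > 0:
        return "active_tape"   # rarely used, but may be recalled
    return "archive_tape"      # preservation only

datasets = [
    Dataset("RealData_2021_B2HH", 120, True),
    Dataset("MC_MinBias_old_campaign", 2, False),
    Dataset("MC_superseded", 0, False),
]
for ds in datasets:
    print(f"{ds.name:28s} -> {placement(ds)}")
```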

Conclusions

- Three areas of research:
  - Software optimisation
    - Use cases: reconstruction on the HLT farm, fast simulation
  - I/O optimisation
    - Use case: sparse event access
  - Storage optimisation
    - Use cases: data popularity, archiving