LHCb DAQ System. LHCb SFC Review, Nov. 26th 2004. Niko Neufeld, CERN.


Acronyms
MEP (Multi Event Packet): packet containing n event fragments from n consecutive triggers; n is called the packing factor.
PF (Packing Factor): there are two independent PFs, one for L1 and one for HLT.
SFC (Subfarm Controller): the event-builder.
L1 (Level-1 Trigger): data stream at 1 MHz containing data from the VeLo, the Trigger Tracker and summary information from the L0 trigger.
HLT (High Level Trigger): data stream at 40 kHz containing all data.
TFC (Timing and Fast Control): the LHCb variant of the LHC-wide TTC system; its central unit is the RS.
RS (Readout Supervisor): centrally monitors and protects buffers by disabling the trigger where buffer occupancy is deterministic. Throttle signals from modules in trouble are used where buffer occupancy cannot be predicted.
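A minimal sketch (Python, purely illustrative) of the MEP idea: one packet carries the fragments of several consecutive triggers from one source, and the number of fragments is the packing factor. All names, fields and the example PF values are assumptions made for this sketch, not the actual LHCb data format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventFragment:
    trigger_number: int   # event number of the trigger this fragment belongs to
    payload: bytes        # detector data from one readout board

@dataclass
class MultiEventPacket:
    source_id: int                                   # which readout board sent it
    fragments: List[EventFragment] = field(default_factory=list)

    @property
    def packing_factor(self) -> int:                 # the "PF" of the acronym list
        return len(self.fragments)

# Two independent packing factors, one per data stream (values made up here):
L1_PACKING_FACTOR = 32
HLT_PACKING_FACTOR = 16
```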

LHCb DAQ & Trigger
LHCb has a three-level trigger system:
- Level-0: 10 MHz of "visible" interactions; Multiplicity/Pile-Up cut: 7 MHz; E_T of (µ1, µ2, h, e, γ, π0) candidates: 1 MHz
- Level-1: Silicon Vertex Detector (VeLo): impact parameter; VELO + Trigger Tracker: momentum; VELO + L0 muon: dimuon mass; accept rate: 40 kHz
- HLT: confirmation of L1 (VELO+TT) + T stations: 20 kHz; VELO+TT+T: dp/p < 1%; 2-5 kHz to tape, of which 200 Hz are fully reconstructed for prompt analysis
The LHCb DAQ is the hardware for the LHCb software triggers.
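As a quick sanity check, a short and purely illustrative calculation of the rejection factor implied at each stage by the rates quoted above (the 2 kHz value is the lower end of the quoted HLT output range):

```python
# Back-of-the-envelope rejection factors between the quoted trigger stages.
rates_hz = {
    "visible interactions": 10e6,
    "after Level-0":         1e6,
    "after Level-1":        40e3,
    "to tape (HLT)":         2e3,   # lower end of the quoted 2-5 kHz range
}
stages = list(rates_hz.items())
for (stage_in, r_in), (stage_out, r_out) in zip(stages, stages[1:]):
    print(f"{stage_in} -> {stage_out}: reduction factor {r_in / r_out:.0f}")
```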

LHCb Experiment
Detector overview (figure), highlighting the data used for the Level-1 trigger.

DAQ Features
- (Almost) all detectors use the same module (TELL1) to send data on up to 4 Gigabit Ethernet links.
- Synchronous information (triggers) is distributed by the TFC/TTC system.
- All data traffic uses Gigabit Ethernet.
- Data is pushed: a connectionless protocol for data transfer, like UDP (a minimal sketch follows below).
- Two levels of software trigger and two data streams (L1 and HLT) run on the same network.
- L1 uses only part of the detector (VeLo, TT, L0 summary); the HLT reads out the whole detector.
- For both L1 and HLT, fragments from several consecutive triggers are packed together and sent as a Multi Event Packet (MEP).
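The following sketch only illustrates the connectionless "push" idea: a readout board fires each MEP at its assigned destination as a single datagram, with no handshake, acknowledgement or retransmission. Plain UDP and the wire layout used here are stand-ins chosen for the example, not the actual LHCb protocol; the function name and field sizes are invented.

```python
import socket
import struct

def push_mep(source_id: int, fragments, destination) -> None:
    """Send one MEP; fragments is a list of (trigger_number, payload_bytes)
    for consecutive triggers, destination is a (host, port) tuple."""
    header = struct.pack("!HH", source_id, len(fragments))   # source id + packing factor
    body = b"".join(struct.pack("!IH", trg, len(data)) + data
                    for trg, data in fragments)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # connectionless
    sock.sendto(header + body, destination)                  # fire and forget: no ACK, no retry

# Example: three consecutive fragments pushed to one sub-farm controller.
push_mep(42, [(1000, b"\x01\x02"), (1001, b"\x03"), (1002, b"\x04\x05\x06")],
         ("127.0.0.1", 45000))
```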

DAQ Features (2)
- L1 has a latency restriction (data are buffered for the full HLT readout): the L1 decision from the farm node must reach the TFC system no later than 58 ms after the initial trigger.
- Static load balancing among sub-farms: destination assignment via TTC.
- Dynamic load balancing among the nodes within a sub-farm, done by the SFC -> fewer sub-farms are better (a sketch of the two levels follows below).
- Central flow control via the TFC system (throttle):
  - dedicated signals from front-end boards (fast)
  - via the control system from the SFCs (slow)
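A toy sketch of the two levels of load balancing described above. The round-robin assignment, the queue-depth heuristic and all names are assumptions used for illustration; the real TFC destination assignment and SFC scheduling policy are more involved.

```python
from collections import defaultdict
from itertools import cycle

# Static level: the TFC/TTC system assigns each trigger a destination sub-farm
# in a fixed, predictable way (modelled here as plain round-robin), so every
# readout board sends its fragment to the same SFC without any negotiation.
subfarms = [f"sfc{i:02d}" for i in range(94)]
static_assignment = cycle(subfarms)

def destination_for_next_trigger() -> str:
    return next(static_assignment)

# Dynamic level: inside a sub-farm the SFC forwards each complete event to the
# node that currently has the fewest events queued.
queue_depth = defaultdict(int)

def pick_node(nodes) -> str:
    node = min(nodes, key=lambda n: queue_depth[n])
    queue_depth[node] += 1
    return node

print(destination_for_next_trigger())             # e.g. 'sfc00'
print(pick_node(["node01", "node02", "node03"]))  # least-loaded node in the sub-farm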

DAQ Architecture (diagram)
Front-end electronics send Level-1 traffic (1000 kHz, 5.5 GB/s) and HLT traffic (40 kHz, 1.6 GB/s) over Gigabit Ethernet into a multiplexing and aggregation layer of FE switches; the mixed traffic (7.1 GB/s) crosses the readout network to 94 SFCs, each with a switch in front of its part of the CPU farm (~1800 CPUs in total). The TFC system, the TRM and the L1-decision sorter steer the flow, and accepted events go to the storage system (~200 MB/s total, towards Tier storage).
DAQ architecture after the Level-1 readout upgrade: ~9.5 GB/s of L1 traffic, ~11.1 GB/s in total, ~150 SFCs, CPU count still to be determined.

DAQ in Numbers
- 276 detector boards (+ 1 Readout Supervisor + 1 L0 Decision Unit)
- Boards currently read out for Level-1: 135
- Estimated combined data rate for HLT and L1: 7.1 GB/s
- Estimated combined data rate after the upgrade of the L1 readout: ~11 GB/s
- Estimated required peak bandwidth to storage: ~200 MB/s
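A quick consistency check of these numbers (illustrative arithmetic only, using decimal units), relating the stream bandwidths to the trigger rates quoted on the architecture slide and to the average event sizes they imply:

```python
# Quoted stream rates and bandwidths.
l1_rate_hz,  l1_bw_gb_s  = 1.0e6, 5.5   # Level-1 stream
hlt_rate_hz, hlt_bw_gb_s = 40e3,  1.6   # HLT stream

total_gb_s   = l1_bw_gb_s + hlt_bw_gb_s            # 7.1 GB/s, as quoted
l1_event_kb  = l1_bw_gb_s  * 1e6 / l1_rate_hz      # ~5.5 kB per L1 event
hlt_event_kb = hlt_bw_gb_s * 1e6 / hlt_rate_hz     # ~40 kB per full HLT event

print(f"total: {total_gb_s:.1f} GB/s, "
      f"L1 event ~{l1_event_kb:.1f} kB, HLT event ~{hlt_event_kb:.0f} kB")
```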

Dataflow in LHCb (diagram)
Same architecture diagram as before (front-end electronics, FE switches, readout network, 94 links at 7.1 GB/s, 94 SFCs, ~1800-CPU farm, TFC system, TRM, L1-decision sorter, storage system), annotated with the dataflow of a B -> ΦKs candidate: L0-yes, L1 trigger, L1 decision, L1-yes, HLT-yes, with indicative times of 0.5 ms, 3 ms, 56 ms and 58 ms along the chain.

Performance Requirements on the SFC
- Handle the L1 and HLT MEP streams, in and out: 2 × ~O(50 kHz / N_SFC) + 2 × ~O(10 kHz / N_SFC)
- Forward pre-sorted decisions to the L1-Sorter: ~O(50 kHz / N_SFC)
- Forward accepted events to storage: ~O(5 kHz / N_SFC)
- Control and monitoring traffic: ~O(1 Hz)
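For a feel of the magnitudes, plugging the baseline of 94 SFCs into the orders of magnitude above gives per-SFC rates of a few hundred hertz. This is illustrative arithmetic only, not a specification:

```python
n_sfc = 94   # baseline number of sub-farm controllers

l1_mep_in_out_hz  = 50e3 / n_sfc  # ~530 Hz of L1 MEPs in, and the same out
hlt_mep_in_out_hz = 10e3 / n_sfc  # ~105 Hz of HLT MEPs in, and the same out
l1_decisions_hz   = 50e3 / n_sfc  # ~530 Hz of pre-sorted decisions to the L1-Sorter
to_storage_hz     =  5e3 / n_sfc  # ~53 Hz of accepted events to storage

print(round(l1_mep_in_out_hz), round(hlt_mep_in_out_hz),
      round(l1_decisions_hz), round(to_storage_hz))
```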

Technical Requirements
- Rack-mount, 1U, not deeper than 70 cm if at all possible
- At least 2 × 1000 Mbit/s and 2 × 100 Mbit/s network interfaces (1 data receive, 1 data send, 1 control, 1 storage), copper
- Network-bootable, diskless operation possible
- Remotely manageable (IPMI)

Nice to Have (and Why)
- 4 × 1000BASE-T interfaces (plus a minimum of 2 × 100 Mbit/s for control and storage): allows running at more than 1 Gigabit/s (resources permitting)
- A price such that N(required SFCs) × Price(1 SFC) stays below the limit that would require a market survey / tender
- Other goodies if not too expensive: redundant, hot-pluggable power supplies, etc.

Latency Due to Queuing
- Ptolemy simulation, with the processing-time distribution derived from the number of clusters
- Assumes 9 processors per sub-farm, sharing an L1 trigger rate of 9 kHz (one of 120 sub-farms); 10^6 L0-accepted events simulated
- Result: 0.1% of events exceed the 30 ms latency cut-off

Beating the Statistics of Small Numbers
- Sub-farm now with 18 nodes, sharing ~18 kHz of L1 trigger rate (one of 60 sub-farms); the total number of CPUs in the system is kept constant
- Only 0.05% of events now exceed the 30 ms cut-off -> minimise the number of sub-farms
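The effect can be reproduced with a very small toy queuing model: with the total CPU count fixed, pooling nodes into fewer, larger sub-farms smooths out fluctuations and reduces the fraction of events exceeding the latency cut. The exponential service time (0.8 ms mean), the FCFS dispatch and the event counts below are assumptions for illustration only; the original study used a Ptolemy simulation driven by the measured processing-time distribution.

```python
import heapq
import random

def timeout_fraction(n_nodes, trigger_rate_hz, n_events=200_000,
                     mean_service_s=0.8e-3, cutoff_s=30e-3, seed=1):
    """Fraction of events whose waiting + processing time exceeds the cut-off,
    for one sub-farm modelled as a multi-server FCFS queue with Poisson arrivals."""
    rng = random.Random(seed)
    free_at = [0.0] * n_nodes            # time at which each node becomes free
    heapq.heapify(free_at)
    t, late = 0.0, 0
    for _ in range(n_events):
        t += rng.expovariate(trigger_rate_hz)          # next trigger arrival
        start = max(t, heapq.heappop(free_at))         # wait for the earliest free node
        finish = start + rng.expovariate(1.0 / mean_service_s)
        heapq.heappush(free_at, finish)
        if finish - t > cutoff_s:
            late += 1
    return late / n_events

print(timeout_fraction( 9,  9_000))   # small sub-farm: one of 120
print(timeout_fraction(18, 18_000))   # larger sub-farm: one of 60, same total CPU count
```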

Summary
- The LHCb DAQ is based entirely on commercial (mostly high-end commodity) components.
- We have to handle two levels of software trigger, one of which has a latency restriction.
- We use a connectionless push protocol from top to bottom.
- The SFC is one of the three key components in the dataflow (the sources, the network, the SFCs).
- It has to do event building, event distribution and buffering, decision pre-sorting and forwarding, and forwarding of accepted events to storage.