
1 LHCb DAQ system. LHCb SFC review, Nov. 26th 2004. Niko Neufeld, CERN

2 Niko Neufeld CERN, PH Acronyms
MEP (Multi Event Packet): a packet containing n event fragments from n consecutive triggers; n is called the PF.
PF (Packing Factor): there are two independent PFs, one for L1 and one for HLT.
SFC (Subfarm Controller): the event-builder.
L1 (Level-1 Trigger): data stream at 1 MHz containing data from the VeLo, the Trigger Tracker and summary info from the L0 trigger.
HLT (High Level Trigger): data stream at 40 kHz containing all data.
TFC (Timing and Fast Control): the LHCb variant of the LHC-wide TTC system. Its central unit is the RS.
RS (Readout Supervisor): centrally monitors and protects buffers, where their occupancy is deterministic, by disabling the trigger. Throttle signals from modules in trouble are used where buffer occupancy cannot be predicted.

3 Niko Neufeld CERN, PH LHCb DAQ & Trigger
LHCb has a three-level trigger system:
–Level-0: 10 MHz of “visible” interactions @ 2·10^32. Multiplicity/Pile-Up: 7 MHz. E_T(µ1, µ2, h, e, γ, π0): 1 MHz
–Level-1: Silicon Vertex Detector (VeLo): impact parameter. VELO + Trigger Tracker: momentum. VELO + L0-µ: m(µµ). Accept rate: 40 kHz
–HLT: confirmation of L1(VELO+TT)+T: 20 kHz. VELO+TT+T: dp/p < 1%. 2-5 kHz to tape, of which 200 Hz are fully reconstructed for prompt analysis
The LHCb DAQ is the hardware for the LHCb software triggers.
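To make the rate funnel concrete, here is a small sketch of the rejection factor each level must deliver; the rates are the ones quoted on the slide, while the 3.5 kHz mid-point of the 2-5 kHz HLT band is an illustrative choice:

```python
# Trigger rate funnel from the slide; each stage's rejection factor is
# the ratio of its input rate to its output rate.
stages = [
    ("visible interactions", 10_000_000),  # 10 MHz @ 2e32
    ("Level-0 accept",        1_000_000),  # 1 MHz
    ("Level-1 accept",           40_000),  # 40 kHz
    ("HLT accept (to tape)",      3_500),  # illustrative mid-point of 2-5 kHz
    ("prompt reconstruction",       200),  # 200 Hz
]
for (name_in, r_in), (name_out, r_out) in zip(stages, stages[1:]):
    print(f"{name_in:24s} -> {name_out:24s}: reject by {r_in / r_out:6.0f}x")
```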

4 Niko Neufeld CERN, PH LHCb Experiment [Figure: the LHCb detector layout, highlighting the data used for the Level-1 trigger]

5 Niko Neufeld CERN, PH DAQ Features
(Almost) all detectors use the same module (TELL1) to send data on up to 4 Gigabit Ethernet links.
Synchronous information (triggers) is distributed by the TFC/TTC system.
All data traffic uses Gigabit Ethernet.
Data is pushed (connectionless protocol for data transfer, like UDP).
Two levels of software trigger and two data streams (L1 and HLT) run on the same network.
L1 uses only part of the detector (VeLo, TT, L0 summary); the HLT reads out the whole detector.
For both L1 and HLT, fragments from several consecutive triggers are packed together and sent as a Multi Event Packet (MEP), as sketched below.
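A minimal sketch of the push model. The MEP header layout below (first event id, packing factor, per-fragment length) is illustrative, not the real TELL1 format, and a plain UDP datagram to a stand-in address takes the place of the actual Gigabit Ethernet transfer:

```python
import socket
import struct

def build_mep(first_event_id: int, fragments: list) -> bytes:
    """Pack n consecutive event fragments into one MEP.
    Header fields are hypothetical: event id + packing factor,
    then each fragment prefixed with its length."""
    header = struct.pack("!IH", first_event_id, len(fragments))
    body = b"".join(struct.pack("!H", len(f)) + f for f in fragments)
    return header + body

# Push (connectionless, fire-and-forget): the source never waits
# for an acknowledgement. Address and port are stand-ins.
SFC_ADDR = ("127.0.0.1", 19532)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mep = build_mep(42_000, [b"frag-%d" % i for i in range(16)])  # PF = 16
sock.sendto(mep, SFC_ADDR)
```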

6 Niko Neufeld CERN, PH DAQ features (2)
L1 has a latency restriction (data are buffered for the full HLT readout): the L1 decision from a farm node must reach the TFC system no later than 58 ms after the initial trigger.
Static load-balancing among sub-farms: destination assignment via the TTC.
Dynamic load-balancing among the nodes in a sub-farm, done by the SFC: fewer sub-farms are better (see the sketch below).
Central flow-control via the TFC system (throttle):
–Dedicated signals from the front-end boards (fast)
–Via the control system from the SFCs (slow)
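A toy sketch of the two balancing layers. The slide names the layers but not the exact algorithms, so both policies below are assumptions: round-robin for the static TTC destination assignment, least-loaded node for the SFC's dynamic dispatch:

```python
import itertools

# Static layer: the TFC assigns each MEP a destination sub-farm.
# Round-robin is an assumed policy; the slide only says the
# assignment is distributed via the TTC.
sub_farms = [f"sfc{i:02d}" for i in range(94)]
static_destination = itertools.cycle(sub_farms)

# Dynamic layer: inside one sub-farm, the SFC forwards each MEP to
# the node with the shortest queue (again an assumed policy).
node_queue_depth = {f"node{i}": 0 for i in range(18)}

def dispatch_mep(mep: bytes) -> str:
    node = min(node_queue_depth, key=node_queue_depth.get)
    node_queue_depth[node] += 1   # decremented when the node finishes
    return node
```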

7 Niko Neufeld CERN, PH DAQ Architecture [Diagram: front-end electronics feed a multiplexing and aggregation layer of FE switches; Level-1 traffic (1000 kHz, 5.5 GB/s) and HLT traffic (40 kHz, 1.6 GB/s) mix on one Gigabit Ethernet readout network (7.1 GB/s) into 94 SFCs, each fronting a switch and the CPUs of a ~1800-CPU farm; the TFC system, L1-decision sorter and TRM are attached; the storage system forwards ~200 MB/s total to TIER0. DAQ architecture after the upgrade: Level-1 at 1000 kHz and 9.5 GB/s, 11.1 GB/s mixed traffic, ~150 SFCs, ~???? CPUs.]

8 Niko Neufeld CERN, PH DAQ in numbers
276 detector boards (+ 1 Readout Supervisor + 1 L0 Decision Unit)
Boards currently read out for Level-1: 135
Estimated data rate for HLT and L1 combined: 7.1 GB/s
Estimated data rate for HLT and L1 combined after the L1 upgrade: ~11 GB/s
Estimated required peak bandwidth to storage: ~200 MB/s
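A quick sanity check on these numbers (a sketch; the per-SFC share assumes the traffic spreads evenly over the 94 SFCs of slide 7):

```python
# Average per-SFC load implied by the aggregate figures.
total_gbs, n_sfc = 7.1, 94
per_sfc_mbs = total_gbs * 1000 / n_sfc   # ~76 MB/s per SFC
per_sfc_gbit = per_sfc_mbs * 8 / 1000    # ~0.6 Gbit/s
print(f"{per_sfc_mbs:.0f} MB/s = {per_sfc_gbit:.2f} Gbit/s per SFC")
# ~0.6 Gbit/s on average is close enough to line rate that the extra
# Gigabit interfaces wished for on slide 12 make sense.
```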

9 Niko Neufeld CERN, PH Dataflow in LHCb [Diagram: the slide-7 architecture (FE switches, 94 links at 7.1 GB/s, readout network, 94 SFCs, ~1800-CPU farm, TFC system, L1-decision sorter, TRM, storage system) annotated with the life of one event: (1) an L0-Yes sends the Level-1 fragments into the network, (2) the farm computes the L1 trigger, the L1-Yes decision returns via the sorter and TFC, and the full HLT readout and HLT-Yes (e.g. B → φK_s) follow; the latency milestones along the chain read 0.5 ms, 3 ms, 56 ms and 58 ms.]

10 Niko Neufeld CERN, PH Performance Requirements on the SFC
Handle the L1 and HLT MEP streams, in and out: 2 × ~O(50 kHz / N_SFC) + 2 × ~O(10 kHz / N_SFC)
Forward pre-sorted decisions to the L1-Sorter: ~O(50 kHz / N_SFC)
Forward accepted events to storage: ~O(5 kHz / N_SFC)
Control and monitoring traffic: ~O(1 Hz)
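Plugging in N_SFC = 94 gives the per-SFC message rates (a sketch; the implied packing factors of roughly 20 for L1 and 4 for HLT are back-of-envelope inferences from the 1 MHz and 40 kHz trigger rates, not numbers stated on the slide):

```python
# Per-SFC message rates implied by the slide's O(...) expressions.
N_SFC = 94
rates_hz = {
    "L1 MEPs in":        50_000 / N_SFC,  # ~1 MHz triggers / PF ~ 20
    "L1 MEPs out":       50_000 / N_SFC,
    "HLT MEPs in":       10_000 / N_SFC,  # ~40 kHz triggers / PF ~ 4
    "HLT MEPs out":      10_000 / N_SFC,
    "L1 decisions":      50_000 / N_SFC,
    "events to storage":  5_000 / N_SFC,
}
for name, r in rates_hz.items():
    print(f"{name:18s} ~{r:6.0f} Hz")
```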

11 Niko Neufeld CERN, PH Technical Requirements
Rack-mount 1U, not deeper than 70 cm if at all possible
Minimum 2 × 1000 MBit and 2 × 100 MBit network interfaces (1 data receive, 1 data send, 1 control, 1 storage), copper
Network-bootable, diskless operation possible
Remotely manageable (IPMI)

12 Niko Neufeld CERN, PH Nice to have (and why)
4 × 1000BaseT interfaces (+ min. 2 × 100 MBit for control and storage): allows running at more than 1 Gigabit (resources permitting)
Have a price such that N(required SFCs) × Price(1 SFC) / 200000 < 1, i.e. the total purchase stays below the 200000 threshold, so no market survey / tender is needed
Other goodies if not too expensive: redundant, hot-pluggable power supplies, etc.
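The constraint pins down the affordable unit price (a sketch; the currency of the 200000 threshold is not stated on the slide, and 150 is the upgraded SFC count from slide 7):

```python
# Maximum unit price that keeps the purchase under the tender threshold.
TENDER_LIMIT = 200_000      # currency not stated on the slide
for n_sfc in (94, 150):     # current and upgraded SFC counts (slide 7)
    print(f"{n_sfc:3d} SFCs -> unit price < {TENDER_LIMIT / n_sfc:7.0f}")
```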

13 Niko Neufeld CERN, PH Latency due to queuing
Ptolemy simulation: processing-time distribution derived from the number of clusters; 10^6 L0-accepted events in one of 120 sub-farms, assuming 9 processors and a shared L1 trigger rate of 9 kHz per sub-farm.
Result: 0.1% of events have a latency larger than the 30 ms cut-off and time out.

14 Niko Neufeld CERN, PH Beating the statistics of small numbers
Sub-farm now with 18 nodes sharing ~18 kHz of L1 triggers, i.e. one of 60 sub-farms; the total number of CPUs in the system stays constant.
Now only 0.05% of events exceed the 30 ms cut-off, so: minimise the number of sub-farms. A toy simulation of this effect follows below.
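A toy queueing sketch of the effect, under assumptions: Poisson MEP arrivals, a lognormal service-time stand-in for the measured cluster-based distribution, and FIFO dispatch to the earliest-free node. The absolute tail fractions will not match the Ptolemy numbers, but the trend is the slide's point: a larger sub-farm at the same per-node load has a smaller late tail.

```python
import heapq
import math
import random

def late_fraction(n_nodes, rate_khz, n_events=200_000,
                  mean_ms=0.9, sigma=1.2, cutoff_ms=30.0):
    """Fraction of events whose queueing + processing latency
    exceeds cutoff_ms in a FIFO sub-farm of n_nodes nodes."""
    mu = math.log(mean_ms) - sigma**2 / 2      # lognormal with given mean
    free_at = [0.0] * n_nodes                  # when each node frees up
    t, late = 0.0, 0
    for _ in range(n_events):
        t += random.expovariate(rate_khz)      # inter-arrival time in ms
        start = max(t, heapq.heappop(free_at)) # earliest-free node, FIFO
        finish = start + random.lognormvariate(mu, sigma)
        heapq.heappush(free_at, finish)
        late += finish - t > cutoff_ms
    return late / n_events

# Same per-node load (1 kHz per node), different sub-farm sizes:
print("9 nodes @ 9 kHz  :", late_fraction(9, 9.0))
print("18 nodes @ 18 kHz:", late_fraction(18, 18.0))
```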

15 Niko Neufeld CERN, PH Summary
The LHCb DAQ is completely based on commercial (mostly high-end commodity) components.
We have to handle two levels of software trigger, one with a latency restriction.
We use a connectionless push protocol from top to bottom.
The SFC is one of the three key components in the data flow (the sources, the network, the SFCs).
It has to do event-building, event distribution and buffering, decision pre-sorting and forwarding, and forwarding of accepted events to storage.

