
1 SuperB DAQ U. Marconi Padova 23/01/09

2 Parameters
Bunch crossing: 450 MHz
L1 output rate: 150 kHz
L1 triggering detectors: EC, DC
The Level 1 trigger has the task of determining the time of the collision of interest to within 1 µs.
L1 variable latency: 12 µs, jitter 1 µs (@ BaBar)
L1 input rates:
- EC: 3.7 MHz
- DC: 7.4 MHz (?)
Resolution in defining the event time:
- EC: 0.5 µs
- DC: 0.2 µs
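As a quick sanity check on these parameters, the 1 µs jitter window spans hundreds of bunch crossings, which is why the triggering detectors must pin down the event time themselves. A back-of-the-envelope sketch (variable names are mine, not from the slide):

```python
# Derived numbers from the slide's parameters (simple arithmetic only).
BUNCH_CROSSING_HZ = 450e6   # bunch-crossing rate: 450 MHz
L1_OUTPUT_HZ = 150e3        # L1 output rate: 150 kHz
L1_JITTER_S = 1e-6          # L1 latency jitter: 1 µs (BaBar figure)

# How many bunch crossings fall inside the 1 µs jitter window:
crossings_in_jitter = BUNCH_CROSSING_HZ * L1_JITTER_S

# Mean spacing between L1 accepts at 150 kHz:
mean_l1_spacing_us = 1e6 / L1_OUTPUT_HZ

print(f"{crossings_in_jitter:.0f} crossings per jitter window")
print(f"{mean_l1_spacing_us:.1f} µs mean spacing between L1 accepts")
```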

3 L1 Trigger

4 L1 DC Trigger (DCT)
A complete hit map of the Drift Chamber, including drift time information, is provided to the trigger via dedicated data paths every 269 ns. The Drift Chamber is a small-hex-cell design and contains 40 layers, with 96 to 256 cells per layer, for a total of 7104 drift cells. Depending on where a track passed through a particular cell, the resulting ions take 1 to 4 periods of the 3.7 MHz clock (clock ticks) to drift to the signal wire. The simulated event time jitter window for 99% of events, even under worst-case benchmarks, is less than 200 ns, much less than the required 1 µs.
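The timing numbers above are consistent: one period of the (rounded) 3.7 MHz clock comes out at about 270 ns, matching the quoted 269 ns hit-map period, and four ticks of drift stay close to 1 µs. A one-line arithmetic sketch (names are mine):

```python
# Drift-time arithmetic for the DCT figures quoted above.
DCT_CLOCK_HZ = 3.7e6          # DC trigger clock (rounded figure from the slide)
TICK_NS = 1e9 / DCT_CLOCK_HZ  # ~270 ns, matching the quoted 269 ns period

# Drift to the signal wire takes 1 to 4 clock ticks, depending on
# where the track crossed the cell:
drift_min_ns = 1 * TICK_NS
drift_max_ns = 4 * TICK_NS    # worst case, about 1.1 µs

print(f"tick = {TICK_NS:.0f} ns, max drift = {drift_max_ns:.0f} ns")
```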

5 L1 EM Trigger (EMT)
Time resolution

6 Moving to SuperB (2)
Dominique Breton – Rome SuperB meeting – December 17th 2008
The main new problem in SuperB compared to BABAR is the channel occupancy. This leads to two different types of problem:
- For detectors with slow signals (like the EMC barrel), physics events may sit in the queue behind large background events (pile-up).
- Two consecutive physics events may reside within the trigger time window (overlapping).
The question is: can we afford to keep the fixed trigger time window in such a situation, especially if the minimum inter-trigger delay can go down to ~100 ns? Should the window be systematically larger for the EMC barrel? This increases the mean amount of data per event, which makes the second problem worse.
The window length could instead be defined on a per-event basis, thanks to a few bits sent with the L1 accept command (Model1). For overlapping events, the FEE should be able to deal with close triggers and send data accordingly (e.g. by reducing the size of posterior events).
The idea of directly addressing data in the ring buffer could be raised (Model2) if:
- the transmission links or the FEE cannot deal with the shortest time interval between successive L1 accepts, or
- the amount of data per event is too large because of the "fixed" latency.

7 Pros & cons of both FEE models
In terms of buffering size in the FEE:
- Dealing with overlapping events is free.
- Choosing the window length is cheap: the buffer size has to take the extra required window length into account.
- Using the Read_event command in addition to L1 accept is costly, especially if the requested extra buffering is large (linked to the trigger rate).
- Having an addressable ring buffer is expensive: it may almost double the buffer length requirement.
In terms of complexity in the FEE:
- Dealing with overlapping events is cheap: one just has to measure the distance between L1 accepts and send the corresponding time slices. This could also be done by the FCTS, sending a shortened time window depending on the previous L1 accept.
- Choosing the window length is cheap: it is just a matter of waiting for the right position in the ring buffer.
- Having an addressable ring buffer is more expensive.
In terms of length of the L1 accept command (and thus of transmission time):
- Dealing with overlapping events is free.
- Choosing the window length is cheap (2-4 bits should be sufficient).
- Using Read_event has a high cost in terms of link occupancy (one per L1 accept).
- Having an addressable ring buffer is expensive.
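The "2-4 bits" remark can be made concrete with a toy command encoding in which the window-length code rides along with the trigger tag inside the L1 accept word. This is purely illustrative: the field widths and names are assumptions, not the BABAR or SuperB command format.

```python
WINDOW_BITS = 3  # assumption: 3 bits -> 8 selectable window lengths

def pack_l1_accept(tag: int, window_code: int) -> int:
    """Append a small window-length field to the trigger tag."""
    assert 0 <= window_code < (1 << WINDOW_BITS)
    return (tag << WINDOW_BITS) | window_code

def unpack_l1_accept(word: int) -> tuple:
    """Recover (tag, window_code) from a packed L1 accept word."""
    return word >> WINDOW_BITS, word & ((1 << WINDOW_BITS) - 1)
```

A few extra bits per L1 accept are negligible next to the cost of a separate Read_event command occupying the same link.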

8 Temporary conclusion about FEE models
So, concerning FEE buffer size, FEE complexity, and L1 occupancy on the ROM-to-FEE link, addressing the ring buffers ("Model 2") is more expensive:
- this should lead us to study the fixed trigger latency model ("Model 1") extensively, in order to check whether or not it copes with the requirements, before choosing any model as a baseline;
- if the Read_Event command does not really prove to be necessary, it should be removed.

9 Simulation of Model1
Sequence: the DAQ sends an L1 trigger command associated with a value corresponding to a time window. The FEE sends to the DAQ the data contained inside this window, embedded in a frame including status, trigger tag and time.
The trigger is defined by three parameters:
- the latency: L (fixed)
- the window width: W (per event)
- the time distance between triggers: D
Constraints:
- triggers with variable-width windows
- triggers with overlapping windows
- no dead time …
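The sequence above can be sketched as a pair of records plus the FEE's response. The ring buffer is modelled as a plain Python list indexed by clock tick, and all names are illustrative, not the real data format:

```python
from dataclasses import dataclass, field

@dataclass
class L1Command:
    """L1 trigger command from the DAQ (Model1)."""
    tag: int      # trigger tag
    window: int   # W: requested window width, in ticks (per event)

@dataclass
class FEEFrame:
    """Frame returned by the FEE: the windowed data plus status, tag, time."""
    status: int
    tag: int
    time: int                    # tick at which the window starts
    samples: list = field(default_factory=list)

def fee_respond(ring: list, now: int, latency: int, cmd: L1Command) -> FEEFrame:
    """Return the data inside the window, embedded in a frame.
    The fixed latency L places the window start at (now - L)."""
    start = now - latency
    return FEEFrame(status=0, tag=cmd.tag, time=start,
                    samples=ring[start:start + cmd.window])
```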

10 Simulation of Model1 (2)
[Timing diagram of a single trigger #0 at time t0, with the fixed latency L:]
- L: fixed latency
- W0: window containing the relevant data for trigger #0
- N0: data to dump
- M0: data finally kept for trigger #0
Baseline: the latency pipeline always provides the oldest relevant data.

11 [Timing diagrams comparing two consecutive triggers #0 and #1, a distance D apart:]
- Non-overlapping latencies (D ≥ L) with 2 different windows: no problem, M1 = W1.
- Overlapping latencies (D < L) with non-overlapping windows: still straightforward, M1 = W1.
- Overlapping latencies with overlapping windows: trickier; the window W1 is then shortened: M1 = W1 – (W0 – D) = W1 – W0 + D.
Two different cases can be identified:
- Case 1: D ≥ W0, so M1 = W1
- Case 2: D < W0, so M1 = W1 – W0 + D
Legend: L = latency, W = window, D = distance between triggers, N = data to be dumped, M = data to be kept.
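The two cases reduce to a one-line rule; a minimal sketch (the function name is mine):

```python
def kept_window(w_prev: int, w: int, d: int) -> int:
    """Amount of data kept (M) for a trigger with window w, when the
    previous trigger had window w_prev and the two L1 accepts are
    d ticks apart."""
    if d >= w_prev:
        return w              # Case 1: no window overlap, keep all of W
    return w - w_prev + d     # Case 2: overlap, window shortened by W0 - D
```

For example, with W0 = 10, W1 = 8 and D = 6, the W0 – D = 4 overlapping ticks were already shipped with trigger #0, so only M1 = 4 ticks are sent for trigger #1.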

12 Simple hardware implementation
[Block diagram: the trigger input feeds an event FIFO and a counter measuring Dn; the data input goes through the latency pipeline; Wn and Wn-1 feed a subtractor, and a multiplexer selects between Wn and Dn + Wn – Wn-1 to produce Mn, which is queued in an "M" FIFO; an FSM, enabled while the FIFO is not empty, drives the event buffer and the serializer.]
Mn: amount of data to be kept for trigger #n
- Case 1: Dn ≥ Wn-1: Mn = Wn
- Case 2: Dn < Wn-1: Mn = Wn – Wn-1 + Dn
All operations are synchronous and pipelined.
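The datapath can be modelled behaviourally: a counter measures Dn between consecutive L1 accepts, one register holds the previous window Wn-1, and a subtract-and-mux step selects Mn. A sketch under those assumptions, not the firmware:

```python
def m_values(triggers):
    """triggers: list of (tick, w) pairs with strictly increasing ticks,
    one per L1 accept. Returns the list of Mn, the data kept per trigger:
    Mn = Wn if Dn >= Wn-1, else Wn - Wn-1 + Dn (the two cases above)."""
    out, prev_tick, prev_w = [], None, 0
    for tick, w in triggers:
        if prev_tick is None or (tick - prev_tick) >= prev_w:
            out.append(w)                                # no overlap
        else:
            out.append(w - prev_w + (tick - prev_tick))  # shortened window
        prev_tick, prev_w = tick, w                      # register update
    return out
```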

13 Conclusion
- DAQ and trigger are based on BABAR experience but must evolve to cope with the new level of requirements.
- The ROM and the L1 trigger processors have to be redesigned => there is a real need for new collaborators to work on DAQ and L1 trigger hardware.
- Control distribution has to be rethought => CERN has started a generic R&D effort for detector control and readout, and proposes that we join it. The key point is the production schedule if Super-LHC is delayed => do we have another solution in hand or in mind?
- We started working on the fixed-latency DAQ model and will soon present high-level simulations of this simple solution.
- A Verilog/VHDL model will also be built.
- We hope to get feedback from you on whether or not it is adequate for the physics requirements.
- Any questions?

14 Summary of BABAR implementation
[Block diagram: the FCTS sends the 60 MHz clock, setup and control, and the L1 accept and Read event commands to the FECs; the FECs are read out by the ROMs over 1 Gbit/s optical links; subsystem control comes from the DAQ.]
- 16 serialized commands are sent in parallel every clock cycle.
- The same command words are serialized at 60 MHz on geographically dedicated lines: 16 links at 60 Mbit/s.

15 [Dataflow diagram: the L1 Trigger sends the L1 accept to the Fast Control System, which distributes the L1 accept and the data destination address; on an L1 accept, data flow from the FEE to the HLT Buffer, then through the HLT Switch to the HLT Farm.]

16 HLT Buffer Board
[Diagram: 16 front-end cards (FEE/FEC) feed the HLT Buffer Board over optical links; the board connects to a switch port.]
The HLT Buffer Board would allow for data aggregation and the packing of event fragments into multi-event fragments. The HLT Buffer is also used to perform data format conversion, from the detector format to a commercial data format (Ethernet IP packet).
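The aggregation step can be sketched as grouping per-event fragments into fixed-size multi-event packets before format conversion and shipping to the switch. The packet size and names are assumptions for illustration:

```python
EVENTS_PER_PACKET = 4  # assumption: pack 4 event fragments per packet

def aggregate(fragments):
    """Group per-event fragments into multi-event packets, as the HLT
    Buffer Board would before conversion to Ethernet IP packets.
    A partial last packet is flushed at the end."""
    packets, current = [], []
    for frag in fragments:
        current.append(frag)
        if len(current) == EVENTS_PER_PACKET:
            packets.append(current)
            current = []
    if current:            # flush the incomplete final packet
        packets.append(current)
    return packets
```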

