TELL1 A common data acquisition board for LHCb

Presentation transcript:

TELL1 A common data acquisition board for LHCb Guido Haefeli, University of Lausanne

Outline
LHCb readout scheme
LHCb data acquisition: optical links, event building network
Common readout requirements: trigger rates, buffers, bandwidth
Data flow on the board: synchronization, Level 1 trigger pre-processing and zero-suppression, higher level trigger processing, Gigabit Ethernet interface
Board implementation: FPGAs, Level 1 buffer, higher level trigger multi event packet buffer
Summary

LHCb trigger system
L0: fully synchronous and pipelined, fixed latency (Pile-Up, Calorimeter, Muon)
L1: software trigger with maximal latency (VELO, TT, Outer Tracker)
HLT: software trigger with access to all sub-detectors

LHCb data acquisition
The front-end electronics of the detectors sit in the cavern, 60-100 m from the TELL1 boards in the counting room.

Optical link implementation

Event building network
(Flattened block diagram of the event-building network; the recoverable figures are:)
Level-1 traffic: 1.11 MHz, MEP packing /32, multiplexing x2.
HLT traffic: 40 kHz, MEP packing /16, multiplexing x8.
The front-end electronics (FE) feed a multiplexing layer built from commercial network equipment (Gb Ethernet switches).
Readout network: 94 links at 7.1 GB/s total into 94 SFCs, each an SFC switch plus CPUs of the farm (~1800 CPUs).
An L1-Decision Sorter handles the Level-1 decisions; the network carries Level-1, HLT and mixed traffic.

Trigger rates and buffering
Max. L0 accept rate = 1.11 MHz
Max. L1 accept rate = 40 kHz
The L0 buffer is implemented on the front-end and is fixed to 160 clock cycles.
The L1 buffer holds 58254 events, which equals 52.4 ms of latency at the L0 accept rate.
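The L1 buffer depth and latency quoted above can be cross-checked with a short sketch; all numbers come from this slide, and the variable names are illustrative only.

```python
# Hedged sketch: cross-check the L1 buffer figures quoted on the slide.
L0_ACCEPT_RATE_HZ = 1.11e6   # max. L0 accept rate
L1_BUFFER_EVENTS = 58254     # L1 buffer depth in events

# Buffer depth divided by the input rate gives the maximum L1 trigger
# latency the buffer can absorb before overflowing.
max_latency_s = L1_BUFFER_EVENTS / L0_ACCEPT_RATE_HZ
print(f"max L1 latency: {max_latency_s * 1e3:.1f} ms")  # ~52.5 ms
```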

Bandwidth requirements
Input data bandwidth for a 24-optical-link motherboard:
Optical receiver: 24 fibres x 1.28 Gbit/s → 30.7 Gbit/s
Analog receiver: 64 channels x 10 bit @ 40 MHz → 25.6 Gbit/s
L1 buffer: write data bandwidth 30.7 Gbit/s, read data bandwidth 4 Gbit/s
DAQ links: 4 Gigabit Ethernet links
ECS: 10/100 Ethernet for remote control
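The two input-bandwidth figures follow directly from the link counts and rates on this slide; a minimal sketch re-deriving them:

```python
# Hedged sketch: re-derive the input bandwidth figures quoted on the
# slide. All numbers come from the slide itself.
optical_gbps = 24 * 1.28            # 24 fibres x 1.28 Gbit/s each
analog_gbps = 64 * 10 * 40e6 / 1e9  # 64 channels x 10 bit @ 40 MHz
print(f"optical: {optical_gbps:.1f} Gbit/s")  # 30.7 Gbit/s
print(f"analog:  {analog_gbps:.1f} Gbit/s")   # 25.6 Gbit/s
```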

A bit of history
The L1 trigger scheme changed: during the last year the maximal L1T latency increased from 1.8 ms to 52 ms (x32). This forced the change from an SRAM FIFO to SDRAM.
Detectors were added to the L1T (TT, OT), and potentially others will follow.
The decreasing cost of optical links means data processing is done in the counting room.
More and more functionality sits on the readout board, because there is no Readout Unit and no Network Processor: the event fragments are packed into so-called "Multi Event Packets" (MEPs) to optimize Ethernet packet size and packet rate, and the acquisition board adds the IP destination, does the Ethernet framing and buffers the transmit data.
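The point of the MEPs can be cross-checked with a short sketch: packing several event fragments into one Ethernet frame divides the packet rate the network must sustain. The packing factors below (/32 for Level-1 traffic, /16 for HLT traffic) are the ones quoted on the event-building slide; the function name is illustrative.

```python
# Hedged sketch of why Multi Event Packets (MEPs) help: packing N event
# fragments into one frame divides the packet rate by N.

def mep_packet_rate(event_rate_hz: float, events_per_mep: int) -> float:
    """Packet rate on the wire after MEP packing."""
    return event_rate_hz / events_per_mep

l1_rate = mep_packet_rate(1.11e6, 32)  # Level-1 traffic, 32 events/MEP
hlt_rate = mep_packet_rate(40e3, 16)   # HLT traffic, 16 events/MEP
print(f"L1: {l1_rate/1e3:.1f} kHz, HLT: {hlt_rate/1e3:.1f} kHz")
```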

How can we make a common, useful readout board?
(Flattened block diagram of the board; the recoverable points are:)
Adaptation to two link systems is possible with receiver mezzanine cards (A-RxCard and O-RxCard).
The FPGAs (four PP-FPGAs plus a SyncLink-FPGA) allow adaptation to different data processing; each PP-FPGA has its own L1 buffer (L1B).
There is sufficient bandwidth for the entire acquisition path (TTCrx, ECS, RO-Tx, FEM; TTC, L1T, HLT, L0 and L1 throttle).
A mezzanine card covers detector-specific needs.

Advantages of being common
Solution and consensus finding for new system requirements is much easier.
Cost reduction due to larger-quantity serial production (300 boards for LHCb).
Reduced maintenance cost with a single system.
Common software interfaces.

L1T dataflow
(Flattened block diagram; the recoverable elements are:)
O-RxCard (mezzanine card): 12 input links, DDR @ 80 MHz, into the PP-FPGAs.
PP-FPGA: data link FIFOs, ID check, cluster encapsulation, L1B readout, sync FIFOs, L1T link to the SyncLink-FPGA.
SyncLink-FPGA: L1T DEST FIFO and L1T IP RAM feed the L1T framer; the L1T MEP buffer is 64 KByte of internal SRAM @ 100 MHz; TTC broadcast and ECS are connected.
RO-Interface (POS-Level 3): shared data path for 2 channels to the RO-TxCard @ 100 MHz.

HLT dataflow
(Flattened block diagram; the recoverable elements are:)
Stage timings: 0.9 us/event, 20 us/event, 320 us/MEP.
FPGAs: the PP-FPGA is an Altera Stratix 1S20 (18K LE); the SyncLink-FPGA is a Stratix 1S25 (25K LE).
PP-FPGA: sync FIFOs, ID check, L1B data link (64-KEvent DDR SDRAM @ 120 MHz), HLT zero-suppression and event encapsulation.
SyncLink-FPGA: HLT DEST FIFO and HLT IP RAM feed the HLT framer; the HLT MEP buffer is 1 Mbyte of external QDR SRAM @ 100 MHz; TTC broadcast and ECS are connected.
RO-Interface (POS-Level 3): shared data path for 2 channels to the RO-TxCard @ 100 MHz.
O-RxCard (mezzanine card): 12 input links, DDR @ 80 MHz.
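A minimal sketch of the timing budget these numbers imply, assuming the 16-events-per-MEP HLT packing from the event-building slide: at the 40 kHz maximum HLT accept rate, events arrive every 25 us on average, so a 20 us/event stage keeps up, and 16 x 20 us reproduces the quoted 320 us/MEP.

```python
# Hedged sketch: timing budget for the HLT dataflow, using the rates
# quoted on these slides (40 kHz HLT accept rate, 16 events per MEP).
HLT_RATE_HZ = 40e3     # max. HLT input rate
EVENTS_PER_MEP = 16    # HLT multiplexing factor
PER_EVENT_US = 20      # quoted per-event processing time

event_spacing_us = 1e6 / HLT_RATE_HZ        # mean time between events
mep_time_us = EVENTS_PER_MEP * PER_EVENT_US

# 20 us/event fits inside the 25 us mean event spacing, and
# 16 events x 20 us reproduces the quoted 320 us/MEP.
print(event_spacing_us, mep_time_us)  # 25.0 320
```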

Prototyping
Motherboard specification, schematics and layout are finished.
Daughtercard designs:
Pattern generator card (available)
12-way optical receiver card (design finished)
RO-TxCard, implemented as a dual Gigabit Ethernet card (see the talk by Hans in this session)
CCPC and GlueCard for ECS
Test system:
PCI-based data generator card
Gigabit Ethernet connection to a PC

Technology used
FPGAs: Stratix 1S20 (780-pin FBGA) and Stratix 1S25 (1020-pin FBGA).
Main features used:
Memory blocks of 512 Kbit, 4 Kbit and 512 bit
DDR SDRAM interface (dedicated circuit)
DDR I/Os
Terminator technology for serial and parallel on-chip termination
DSP blocks for L1T pre-processing
DDR SDRAM running at a 240 MHz data transfer rate (120 MHz clock) for the L1B.
QDR SRAM running at 100 MHz for the HLT MEP buffer.
12-layer PCB (50 Ohm).

DDR bank data signal layout
Equal-length signal traces are required for the DDR SDRAM implementation: a 4 x 48-bit wide bus @ 240 MHz data transfer rate (46 Gbit/s).

(Photograph of the board; dimension label: 14 cm.)

Summary
After evaluating different concepts for data processing and acquisition, a common readout board for LHCb has been specified and designed. It serves 24 optical links with a data transfer rate of 1.28 Gbit/s each. The board implements data identification, L1 buffering and zero-suppression. It is made for use with standard Gigabit Ethernet equipment.