DAQ and Trigger upgrade
U. Marconi, INFN Bologna
Firenze, March 2014


DAQ and Trigger TDR
TDR planned for June 2014. Sub-systems involved:
– Long-distance optical fibres
– Readout boards (PCIe40) and the Event Builder farm
– The Event Filter Farm for the HLT
– Firmware, LLT, TFC

Readout System Review
Held on Feb 25th, 9:00–16:00. Four reviewers:
– Christoph Schwick (PH/CMD, CMS DAQ)
– Stefan Haas (PH/ESE)
– Guido Haefeli (EPFL)
– Jan Troska (PH/ESE)
All documents can be found in EDMS.

Trigger
LHCb Technical Board Meeting, 18/03/2014: full software trigger, software LLT (R. Legac, Trigger Coordinator).

Long-distance fibres
The distance to cover with 850 nm multimode (OM) optical cables, from underground to the surface, is 300 m. Locate the Event Builder farm and the Event Filter Farm for the HLT outside the cavern.
Minimum required bandwidth: 32 Tbit/s
– # of 100 Gbit/s links: > 320
– # of 40 Gbit/s links: > 800
– # of 10 Gbit/s links: > 3200
– # of 5 Gbit/s GBT links (4.5 Gbit/s effective): current estimate O(15000), covering DAQ, Controls and Timing, spares included
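The link counts above are simply the 32 Tbit/s requirement divided by the per-link speed; a minimal Python sketch of the arithmetic (ours, not from the slides):

```python
# Minimal sketch (our own): derive the link counts quoted above
# from the 32 Tbit/s aggregate bandwidth requirement.
REQUIRED_GBPS = 32_000  # 32 Tbit/s expressed in Gbit/s

for link_speed in (100, 40, 10):  # Gbit/s per link
    print(f"{link_speed:>3} Gbit/s links needed: > {REQUIRED_GBPS / link_speed:.0f}")
# prints: 100 -> 320, 40 -> 800, 10 -> 3200
```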

Long-distance fibres (II)
144 fibres per cable, for a total of 120 such cables. Three patch panels (breakpoints) are foreseen, with an expected attenuation of ~3 dB.
The 4.8 Gbit/s signal is produced on the detector by Versatile Link transmitters: VTTx to MiniPod for data acquisition, MiniPod to VTRx for control and configuration.
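As a rough illustration of where the ~3 dB budget comes from, a sketch using typical values we assume (the OM3 attenuation at 850 nm and the per-connector loss are not quoted on the slides):

```python
# Rough loss-budget sketch with assumed typical values.
FIBRE_ATTEN_DB_PER_KM = 3.5   # typical OM3 attenuation at 850 nm (assumed)
CONNECTOR_LOSS_DB = 0.5       # typical mated-connector insertion loss (assumed)

length_km = 0.300
patch_panels = 3

loss_db = length_km * FIBRE_ATTEN_DB_PER_KM + patch_panels * CONNECTOR_LOSS_DB
print(f"Estimated link loss: {loss_db:.1f} dB")   # ~2.5 dB, consistent with the quoted ~3 dB
print(f"Total fibres: {144 * 120}")               # 17280, matching the O(15000) estimate plus spares
```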

Optical fibre studies
Measurement of BER vs. received OMA(*) on different OM3 and OM4 fibres; transmission test at 4.8 Gb/s. The target value of 3 dB on OM3, using the Versatile Link and MiniPod, is reachable.
(*) OMA: optical modulation amplitude
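For scale, a hedged sketch of how long one BER point takes to measure at 4.8 Gb/s; the 10⁻¹² target is our assumption, not a number from the slides:

```python
import math

# Time needed at 4.8 Gb/s to set a 95% CL upper limit on the BER with zero
# observed errors (n >= -ln(0.05)/BER); the BER target is assumed, not quoted.
TARGET_BER = 1e-12
LINE_RATE_BPS = 4.8e9

bits_needed = -math.log(0.05) / TARGET_BER        # ~3e12 bits
test_minutes = bits_needed / LINE_RATE_BPS / 60
print(f"{bits_needed:.2e} bits, i.e. about {test_minutes:.0f} minutes per measurement point")
```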

Optical links
OM3 vs. OM4: a shortest path seems possible.

LHCb DAQ today
Push protocol with centralized flow control; 2 × F10 E1200i core routers, 56 sub-farms, 313 TELL1 readout boards.
– # links (UTP Cat 6): ~3000
– Event size (total, zero-suppressed): 65 kB
– Read-out rate: 1 MHz
– # read-out boards: 313
– Output bandwidth / read-out board: up to 4 Gbit/s (4 Ethernet links)
– # farm nodes: 1500 (up to 2000)
– Max. input bandwidth / farm node: 1 Gbit/s
– # core routers: 2
– # edge routers: 56
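A quick cross-check of the figures in this table (our arithmetic):

```python
# Cross-check of the current-DAQ figures listed above.
event_size_bits = 65e3 * 8        # 65 kB zero-suppressed event size
readout_rate_hz = 1e6             # 1 MHz read-out rate

print("Aggregate throughput:", event_size_bits * readout_rate_hz / 1e9, "Gbit/s")  # 520 Gbit/s
print("Readout-board output capacity:", 313 * 4, "Gbit/s")                          # 1252 Gbit/s
print("Farm input capacity:", 1500 * 1, "Gbit/s")                                   # 1500 Gbit/s
```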

DAQ
[Diagram: PCIe40 readout boards on 16-lane PCIe-3 feed the event builder (EBF); a high-speed network connects it to the HLT event filter (EFF).]

Readout board review
Basic question to the reviewers: “We propose a change of baseline from ATCA/AMC to PCIe. Do the reviewers support the choice of baseline (PCIe) and back-up (ATCA)?”
Answer: “Given the listed advantages the review committee endorses the choice of the PCIe based design as a baseline.”

PCIe Gen3 based readout
A main FPGA manages the input streams and transmits data to the event-builder server using PCIe Gen3.
PCIe Gen3 throughput: 16 lanes × 8 Gb/s per lane = 128 Gb/s.
The readout version of the board uses two de-serializers, receives 24 optical links from the FEE, performs DMA over 8-lane PCIe-3 hard IP blocks, and connects through a 16-lane PCIe-3 edge connector to the motherboard of the host PC.
U. Marconi et al., “The PCIe Gen3 readout for the LHCb upgrade”, CHEP2013.
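A sketch of the throughput arithmetic, including the 128b/130b line-encoding overhead; the encoding factor and the comparison with the measured DMA rate are our additions:

```python
# Raw-rate arithmetic from the slide, plus the 128b/130b encoding overhead
# (TLP headers, flow control and DMA descriptor overheads are not included).
LANES = 16
GBPS_PER_LANE = 8.0        # PCIe Gen3: 8 GT/s per lane
ENCODING = 128 / 130       # 128b/130b line encoding

raw = LANES * GBPS_PER_LANE           # 128 Gb/s, as quoted on the slide
usable = raw * ENCODING               # ~126 Gb/s before protocol overheads
print(raw, round(usable, 1))
# The ~110 Gb/s measured with DMA on the following slides is what remains
# after the additional protocol and DMA overheads.
```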

PCIe-3 test board
ALTERA development board equipped with a Stratix V GX FPGA, model 5SGXEA7K2F40C2N, with an 8-lane edge connector.
“The Stratix V GX FPGA development board is designed to fit entirely into a PC motherboard with a ×8 PCI Express slot that can accommodate a full height short form factor add-in card. This interface uses the Stratix V GX FPGA device's PCI Express hard IP block, saving logic resources for the user logic application.”

The PCIe-3 DMA test setup
A GPU is used to test 16-lane PCIe-3 data transfer between the device and the host memory; the ALTERA development board is connected over 8-lane PCIe-3.

The DMA PCIe-3 effective bandwidth
[Plot: measured DMA throughput over the 8-lane PCIe-3 hard IP blocks of the ALTERA Stratix V.]

Test of the PLX bridge
Long-term test using a GTX690 card with a PLX 8747 bridge: zero impact from using a bridge and two independent PCIe targets pushing data into a PC. Consistently around 110 Gb/s over the long term; no load-balancing issues between the two competing links observed.
Details: /wiki/index.php/I/O_performance_of_PC_servers#Upgrade_to_GTX690
[Diagram: GPU 1 and GPU 2, 8 lanes each, behind a PLX 16-lane PCIe switch.]

Event builder node
[Diagram of one event-builder node:]
– PCIe40: data from the detector, ~100 Gb/s
– Event-building network interface: dual-port IB FDR, ~110 Gb/s to/from the event builder
– Events being built on this machine; opportunity for doing pre-processing of the full event here, before sending to the HLT
– Memory traffic: DDR, … GB/s half duplex; 2×50 GB/s full duplex
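The ~110 Gb/s figure for the dual-port IB FDR interface is consistent with the standard FDR link rate; a small sketch of that arithmetic (ours):

```python
# Why dual-port InfiniBand FDR matches the ~100 Gb/s per-node input
# (standard FDR figures, our arithmetic).
LANE_GBPS = 14.0625        # FDR signalling rate per lane
LANES = 4                  # 4x port
ENCODING = 64 / 66         # 64b/66b encoding

per_port = LANE_GBPS * LANES * ENCODING           # ~54.5 Gb/s
print(round(per_port, 1), round(2 * per_port, 1)) # ~54.5 and ~109.1 Gb/s: the "110 Gb/s" above
```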

EVB performance
Event Builder today: Intel Ivy Bridge, dual CPU, 12 cores. Event-Builder execution requires around 6 logical cores, running alongside 18 instances of the HLT software; CPU consumption, memory and I/O bandwidth were measured. The Event Builder performs stably at 400 Gb/s, and each PC sustains 100 Gb/s. We currently see 50% free resources for opportunistic triggering on EB nodes.
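One possible reading of the core budget, assuming a hyper-threaded 12-core node (our assumption, not stated on the slide):

```python
# Core-budget sketch: 12 physical cores with hyper-threading (assumed).
logical_cores = 12 * 2
evb_cores = 6              # event-builder processes
hlt_instances = 18         # HLT software instances, one per remaining logical core

print(logical_cores - evb_cores - hlt_instances)
# 0: every logical core is allocated, yet ~50% of the CPU resources are
# reported free for opportunistic triggering on the EB nodes.
```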

Fast Control

Costing (preliminary)
Long-distance fibres: ~1.6 MCHF (OM3 fibres, 300 m)
– Excluding patch-cords to detector and TELL40 and cable-ducts, but including patch-panels, installation and testing
– No contingency, but several quotes
PCIe40: DAQ version 5.8 kCHF, ECS/TRG version 7.9 kCHF
– Includes 15% contingency
Readout network, event building (PCIe40): ~3.6 MCHF
– Including event-builder PCs and event-filter network, excluding farm servers
– Model based on InfiniBand FDR (2013 quotes)
Event building (AMC40): ~9 MCHF
– Including event-filter network
– Model based on 40G and 10G Ethernet (2013 quotes)

Running conditions
Visible rate and mean visible interactions per crossing: at a luminosity of 2 × 10^33 cm^-2 s^-1 the visible crossing rate is 27 MHz, with 5.2 visible interactions per crossing on average.
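A hedged pile-up sketch, assuming Poisson statistics and reading the 27 MHz as the colliding-bunch crossing rate (our interpretation):

```python
import math

# Poisson pile-up illustration (our own, standard assumption).
mu = 5.2                       # mean visible interactions per crossing
crossing_rate_mhz = 27.0       # colliding-bunch crossing rate (our reading of the slide)

non_empty = 1 - math.exp(-mu)
print(f"Non-empty fraction: {non_empty:.3f}")                           # ~0.994
print(f"Visible crossing rate: {crossing_rate_mhz * non_empty:.1f} MHz")
# At 2e33 essentially every colliding crossing contains a visible interaction.
```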

LHCb upgrade: HLT farm
Trigger-less system at 40 MHz: a selective, efficient and adaptable software trigger.
– Average event size: 100 kB
– Expected data flux: 4 TB/s
– Total HLT trigger process latency: 14 ms
– Tracking time budget (VELO + tracking + PV searches): 50%
– Tracking finds 99% of offline tracks with pT > 500 MeV/c
– Number of running trigger processes required: 4 × 10^5
– Number of cores per CPU available in 2018: ~200 (Intel tick-tock plan: once 7 nm technology is available, the number of cores scales as 12 × (32 nm / 7 nm)^2 ≈ 250)
– Number of computing nodes required: ~…
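The quoted 4 × 10^5 concurrent processes follow from the processed rate times the latency; a sizing sketch under our assumptions (~30 MHz processed rate, dual-socket servers):

```python
# Farm-sizing sketch (our arithmetic; processed rate taken as the ~30 MHz of
# visible crossings rather than the full 40 MHz, dual-socket servers assumed).
event_size_B = 100e3
trigger_rate_hz = 30e6
latency_s = 14e-3
cores_per_cpu = 200            # ~2018 estimate from the slide
cpus_per_node = 2              # assumption

print("Data flux at 40 MHz:", event_size_B * 40e6 / 1e12, "TB/s")      # 4.0 TB/s
processes = trigger_rate_hz * latency_s
print("Concurrent trigger processes:", f"{processes:.1e}")              # ~4e5
print("Computing nodes (illustrative):", processes / (cores_per_cpu * cpus_per_node))  # ~1000
```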

LLT
Readout reviewers’ opinion: “… only the software option should be pursued, if any.”

LLT performance

Person-power
Fibres (CERN):
– Tests and preparation of installation: 0.5 py (person-years)
– Installation and testing by an external company; 0.5 py for supervision and follow-up
PCIe40 (Marseille, Bologna): design, production, commissioning, ~18 py
LLT (Marseille, Annecy, Clermont-Ferrand, LAL): ~17 py
Firmware: global framework, TFC, DAQ (Marseille, Bologna, CERN, Annecy): ~18 py (estimated)
DAQ (CERN, Bologna): ~15 py (excluding ECS and high-level-trigger infrastructure)
Overall person-power in the involved institutes is sufficient, but does not allow for any “luxuries”, as many people are also involved in the operation of LHCb during Run 2.
N. Neufeld

Schedule: readout board (TELL40 / PCIe40)
PCIe40 board plan: as INFN we should develop the PCB and the prototypes, to qualify our production and be ready to produce the boards. Release (“sblocco”) of 25 k€, s.j.

Schedule: DAQ
Assume start of data-taking in 2020. A system for SD (sub-detector) tests will be ready whenever needed, with minimal investment.
– 2013–16: technology following (PC and network)
– 2015–16: large-scale network tests with InfiniBand and Ethernet
– 2017: tender preparations
– 2018: acquisition of a minimal system able to read out every GBT; acquisition of the modular data centre
– 2019: acquisition and commissioning of the full system, starting with the network; farm as needed
N. Neufeld

Schedule: firmware, LLT, TFC
All of these are FPGA firmware projects. First versions of the global firmware and of the TFC are ready now (for MiniDAQ test systems), with ongoing development thereafter.
LLT: software and hardware, 2016–2017 (?)

Schedule: long-distance fibres
– Test installation in 2014: validate the installation procedure and pre-select a provider
– Long-term fibre test with AMC40/PCIe40 over the long distance: 2014/2015
– Full installation in LS1.5 or during winter shutdowns, to be finished before LS2
Assumptions:
– Installation can be done without termination in UX (cables terminated on at least one end); if blown, fibres can be blown from bottom to top