Links from experiments to DAQ systems
Jorgen Christiansen, PH-ESE

Getting data into DAQ
[Diagram: front-end DAQ interfaces 1..N (VME, S-link, PCI-express) feed a switching network and CPU farm; timing/triggers/sync and control/monitoring are distributed back to the front-ends. On-detector constraints: ~zero mass, up to 4 T magnetic field, 10 krad to 100 Mrad radiation.]

Data and Link types
- Readout - DAQ: unidirectional, event frames, high rate, point to point.
- Trigger data: unidirectional, high constant data rate, short and constant latency, point to point.
- Detector Control System (DCS): bidirectional, low/moderate rate ("slow control"), bus/network or point to point.
- Timing: clock, triggers, resets; precise timing (low jitter and constant latency), low latency, fan-out network (with partitioning).

We often keep these data types physically separate, and each link type has its own specific implementation. There are multiple exceptions:
- Merge timing and control/monitoring: CMS CCU, LHCb SPECS, ...
- Combine readout and control: ALICE DDL (used for DCS?), ...
- Use the same link implementation for readout and trigger data, but never carry readout and trigger data on the same physical link.
- Down: control data with timing; up: monitoring data with readout: ATLAS pixel/SCT, ...
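To make the classification above concrete, here is a minimal sketch of the four logical data streams and the properties that distinguish them. The names and fields are purely illustrative (they do not correspond to any real DAQ framework API); the values summarize the bullets on this slide.

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    DOWN = "to front-end"    # timing, control
    UP = "from front-end"    # readout, trigger data
    BIDIR = "both"           # DCS

@dataclass
class LinkType:
    name: str
    direction: Direction
    fixed_latency: bool      # must latency be constant (timing, trigger path)?
    topology: str            # point-to-point, bus/network, fan-out
    typical_rate: str        # order of magnitude only

# The four logical streams discussed on this slide (values are indicative only).
LINK_TYPES = [
    LinkType("Readout (DAQ)",   Direction.UP,    False, "point-to-point", "high"),
    LinkType("Trigger data",    Direction.UP,    True,  "point-to-point", "high, constant"),
    LinkType("DCS (slow ctrl)", Direction.BIDIR, False, "bus/network",    "low/moderate"),
    LinkType("Timing (TTC)",    Direction.DOWN,  True,  "fan-out",        "low"),
]
```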

Links in ATLAS
Large confined/enclosed experiment with high radiation. ~100 kHz trigger.

Detector | Purpose | Media | Dir. | Rate (Mbit/s) | Quantity | Comment
Pixel | TTC/DCS/DAQ | Optical | Down/up | 40 (80) | 250 | Custom
SCT | TTC/DCS/DAQ | Optical | Down/up | 40 | 8200 | Custom
TRT | TTC/DCS | LVDS-Cu | Down | 40 | 400 | Custom
TRT | DAQ/DCS | LVDS-Optical | Up | 40 – | | GOL
Ecal | TTC | Optical | Down | 80 | | TTC link
Ecal | DCS | LVDS-Cu | Down/Up | | | SPAC
Ecal | DAQ | Optical | Up | 1600 | | Glink
Ecal | Trigger | Copper | Up | | Sub-det. | Analog
Hcal | TTC | Optical | Down | 80 | | TTC link
Hcal | DCS | Copper | Down/up | | | CAN
Hcal | DAQ | Optical | Up | | | Glink
Hcal | Trigger | Copper | Up | | | Analog
CSC, RPC, TGC | DAQ | Optical | Up | | | Glink
MDT | DAQ | Optical | Up | | | GOL
CSC | TTC | Optical | Down | | | Glink
CSC, RPC, TGC, MDT | DCS | Copper | Down/up | | | CAN
RPC | Trigger | Optical | Up | 1600 | | Glink

Links in CMS
Large confined/enclosed experiment with high radiation. ~100 kHz trigger.

Detector | Purpose | Media | Dir. | Rate (Mbit/s) | Quantity | Comment
Pixel - strip | TTC/DCS | Optical | Down/up | 80 | | CCU, Custom
Pixel - strip | DAQ | Optical | Up | 40 Msample/s | ~ | Custom analog
Ecal | TTC/DCS | Optical | Down/up | 80 | | CCU, Custom
Ecal | DAQ | Optical | Up | | | GOL
Ecal | Trigger | Optical | Up | | | GOL
Hcal | TTC | | Down | | | ?
Hcal | DCS | | Down/up | | | ?
Hcal | DAQ | Optical | Up | 800 | | GOL
Hcal | Trigger | | Up | 800 | | GOL
Muons | TTC | | Down | | |
Muons | DCS | | Down/up | | |
Muons | DAQ | | Up | 800 | | GOL
Muons | Trigger | | Up | 800 | | GOL

Example: CMS tracker links

Links in ALICE
- Low radiation levels (but they cannot be ignored), so specialized/qualified COTS solutions can be used.
- Low trigger rate (10 kHz) but very large events.
- Standardized readout link: DDL
  - Some front-ends use the DDL interface card directly (it can stand limited radiation: COTS + SEU tolerance).
  - Others (inner trackers, TRD) use optical links (e.g. GOL) to crates that then interface to the DDL.
  - ~400 DDL links of 2125 Mbit/s.
  - The link card plugs into PCI-X slots (must evolve with PC backplane technology).
  - The link is bidirectional, so it can also be used for DCS (used in practice?).
- DCS: single-board computers with Ethernet (very low radiation levels).
- Trigger data: ?
- TTC

Links in LHCb
- Open detector, so it is easier to get data out.
- 1 MHz trigger rate.
- Came later than the other experiments and profited from link developments already made (GOL).
- Standardized links (with one exception):
  - DAQ and trigger data: GOL at 1800 Mbit/s, ~5,000 links.
  - (Vertex: multiplexed analog links to the counting house.)
  - DCS: SPECS (custom COTS) and CAN.
  - TTC: TTC.
- Standardized readout module: TELL1 (with one exception).
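As a rough sense of scale, the link counts and per-link rates quoted on the ALICE and LHCb slides translate into the aggregate readout bandwidths below. This is a back-of-envelope sketch using the raw line rates from the slides, not the useful payload after protocol overhead.

```python
# Back-of-envelope aggregate readout bandwidth from the per-link rates and
# link counts quoted on the ALICE and LHCb slides (line rates, not payload).

def aggregate_tbps(n_links: int, rate_mbps: float) -> float:
    """Total raw bandwidth of n_links identical links, in Tbit/s."""
    return n_links * rate_mbps / 1e6

alice_ddl = aggregate_tbps(n_links=400, rate_mbps=2125)   # ~0.85 Tbit/s
lhcb_gol  = aggregate_tbps(n_links=5000, rate_mbps=1800)  # ~9 Tbit/s

print(f"ALICE DDL : ~{alice_ddl:.2f} Tbit/s aggregate")
print(f"LHCb GOL  : ~{lhcb_gol:.1f} Tbit/s aggregate")
```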

What to do for the future
One standardized optical link doing it all. A dream? We MUST try!
- It must be available early to be used/adopted by our community (but takes looong to develop).
- Support for its use.
Requirements:
- General:
  - Optical
  - Bi-directional
  - High data rate: ~5 Gbit/s (mainly from FE to DAQ); a 10 Gbit/s version for phase 2?
  - Reliable
  - Low and constant latency (for TTC and trigger data paths)
- Front-end: radiation-hard ASICs (130 nm) and radiation-qualified opto-electronics
  - Radiation hard (>100 Mrad), SEU immunity
  - "Low" power
  - Flexible front-end chip interface
- Back-end: COTS
  - Direct connection to high-end FPGAs with multiple/many serial link interfaces
Project: GBT ASICs, Versatile opto, FPGA firmware, GBT Link Interface Board (GLIB).
This is one of our major projects in the PH-ESE group, in collaboration with external groups, to have our "dream" link ready for the upgrade community within ~1 year. It has been a large investment (money and manpower over the last 5 years).
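The ~5 Gbit/s figure above matches the GBT design point. As a sketch of where that number comes from, the snippet below works through the GBT bandwidth budget assuming the published GBT frame format (one 120-bit frame per 25 ns bunch crossing, with header, slow-control bits, user payload and forward error correction). The figures are indicative; the GBT project documentation is the reference.

```python
# Sketch of the GBT bandwidth budget, assuming the published GBT frame format:
# a 120-bit frame every LHC bunch crossing (25 ns), containing a 4-bit header,
# 4 bits of slow control (EC/IC), 80 bits of user payload and 32 bits of FEC.

BX_RATE_HZ = 40e6            # LHC bunch-crossing rate, one frame per crossing
FRAME_BITS = 120

HEADER_BITS = 4
SLOW_CTRL_BITS = 4           # EC + IC fields
FEC_BITS = 32
PAYLOAD_BITS = FRAME_BITS - HEADER_BITS - SLOW_CTRL_BITS - FEC_BITS  # 80

line_rate = FRAME_BITS * BX_RATE_HZ / 1e9     # 4.8 Gbit/s on the fibre
user_rate = PAYLOAD_BITS * BX_RATE_HZ / 1e9   # 3.2 Gbit/s for detector data

print(f"GBT line rate : {line_rate:.1f} Gbit/s")
print(f"GBT user rate : {user_rate:.1f} Gbit/s (FEC-protected mode)")
```

A wide-bus mode trades the FEC bits for extra payload when error correction is not needed; the fixed frame-per-bunch-crossing structure is also what gives the link its constant latency for TTC and trigger use.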

How can this look
[Diagram: front-ends connect over bidirectional GBT links (plus unidirectional GBT links) to a common front-end DAQ interface; the DCS network, TTC distribution network and DAQ switching network with its CPU farm sit fully in the counting house (COTS). On-detector constraints as before: ~zero mass, up to 4 T field, 10 krad to 100 Mrad.]

DAQ interface

GBT, Versatile, GLIB

Example: LHCb upgrade
- No trigger (40 MHz event rate); change all front-end electronics.
- Massive use of GBT links (~10k) and use of ONE flexible FPGA-based ATCA board as the common FE-DAQ interface, rate control and TTC system.
(J.P. Cachemiche, CPPM)
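To illustrate the scale this implies, the sketch below combines the ~10k GBT links with the GBT user bandwidth from the earlier slide. It optimistically assumes every link carries event data at full payload rate, so it is an upper bound for orientation, not a design figure.

```python
# Rough scale of the upgraded LHCb readout, assuming (optimistically) that all
# ~10k GBT links carry event data at the full 3.2 Gbit/s user bandwidth.
# Real links also carry TTC/DCS traffic and are not all fully loaded.

N_LINKS = 10_000
GBT_USER_GBPS = 3.2          # FEC-protected GBT payload bandwidth
EVENT_RATE_HZ = 40e6         # trigger-less readout: every bunch crossing

total_tbps = N_LINKS * GBT_USER_GBPS / 1e3           # ~32 Tbit/s into the DAQ
bytes_per_event = total_tbps * 1e12 / 8 / EVENT_RATE_HZ

print(f"Aggregate into DAQ : ~{total_tbps:.0f} Tbit/s")
print(f"Per-event budget   : ~{bytes_per_event/1e3:.0f} kB at 40 MHz")
```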

Summary & Conclusions
Current LHC DAQ systems:
- Front-end links are not directly part of what we term(ed) the (common/central) DAQ systems, but they have had a major impact on the data-collection (DAQ system) architecture, detector interfaces and required system debugging.
- A large number of different links implies a large set of different front-end/DAQ interface modules.
- Each link and associated link/DAQ interface has been optimized for its specific use in each sub-detector system.
- We tend to physically separate TTC, slow control/monitoring, DAQ data and trigger data, with many exceptions in different sub-systems using different mixtures.
- No satisfactory rad-hard link existed to cover all the different needs.
Future front-end links and interfaces:
- One link can carry all types of information: GBT + Versatile.
  - If one link fails we completely lose everything from that detector part (but in practice this is also the case if any of the "4" distinct links to a detector part fails). Redundancy is another (long) discussion subject.
- Different links/networks/fan-outs etc. for the different information in the counting house (use of COTS)?
- A "standardized" rad-hard high-rate optical link must be available early to be adopted by our community, and it takes significant time and resources to develop: start development as early as possible!
- One type of front-end - DAQ module for all sub-detectors?
  - FPGAs are extremely versatile and powerful (and continuously become even more powerful); FPGAs now use internal standardized data buses and networks (what we previously did at the crate level).
  - Implement the final module as late as possible to profit from the latest-generation technology.
The front-end links and interface modules are vital and critical parts of DAQ/data-collection systems!

Why not put the DAQ network interface directly in the front-ends?
This is obviously what we would ideally like, but:
- Complicated DAQ network interfaces (network protocol, packet routing, data retransmission, buffering, etc.) would be very hard to build to work reliably in a hostile radiation environment.
- We cannot use flexible high-performance COTS components (FPGAs, DSPs, CPUs, etc.).
- Power consumption.
- Mass.
- We could not upgrade the DAQ network, as it would be hardwired into "old" front-end electronics.