“L1 farm: some naïve consideration” Gianluca Lamanna (CERN) & Riccardo Fantechi (CERN/Pisa)

The L1 farm can’t be avoided:
– The NZS LKr must be read at 100 kHz
– We are looking for a rare decay: most of the events passing L0 are junk; avoid building a huge and complex L2 farm to fully reconstruct events that have to be rejected
– Some detectors (e.g. the GTK) need additional information to reduce the readout window (and hence the bandwidth)
It’s better to avoid mixing L1 and L2 PCs:
– The characteristics of the data links (protocol, rate, etc.) differ
– The L1 PCs are specialized with respect to the sub-detector

How to calculate the number of PCs in the L1 farm? Two different types of detector:
– Detectors participating in the L1 trigger decision → number of PCs driven by the computing power needed to calculate the primitives
– Detectors not participating in the L1 trigger decision → number of PCs driven by data rate and bandwidth

Not participating in L1
Two relevant parameters:
– Bandwidth: with 10 Gb links only the GTK needs 2 ports (2 PCs?)
– Maximum packet rate: a 1 MHz packet rate isn’t sustainable by an Ethernet card (max ~200 kHz?) → put more events in one packet
The number of events per packet depends on the event size: for instance, assuming a standard Ethernet packet (1500 B) and a RICH event size of 300 B, a factor of 5 can be applied to the maximum rate (1 MHz/5 = 200 kHz)
– Gain by using jumbo frames (9000 B): buy switches able to handle jumbo frames
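The packing arithmetic above can be sketched as a back-of-envelope calculation (the 300 B RICH event size and 1 MHz L0 rate are the slide’s figures; the function name is illustrative):

```python
# Events per Ethernet frame and the resulting packet rate at the NIC,
# for a given event size and maximum frame payload.
L0_RATE_HZ = 1_000_000   # 1 MHz event rate after L0 (slide's figure)

def packet_rate(event_size_b, frame_payload_b):
    """Return (events per packet, packet rate in Hz) when events
    are packed into frames of the given payload size."""
    events_per_packet = frame_payload_b // event_size_b
    return events_per_packet, L0_RATE_HZ / events_per_packet

print(packet_rate(300, 1500))   # standard frame: 5 events/packet -> 200 kHz
print(packet_rate(300, 9000))   # jumbo frame: 30 events/packet -> ~33 kHz
```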

Not participating in L1
But the maximum packet frequency on the cable (1 Gb/s) depends on the packet length: 1500 B → 12 µs → ~80 kHz
– Increase the number of PCs to cope with the high rate
– RICH: 5 events per packet → 80 kHz → 10 ports (2–3 PCs?), multicore (100 kHz of interrupts per core?)
To evaluate the correct number of PCs for each sub-detector we need to know:
– The event size
– The maximum PPS of the NIC
– The maximum interrupt rate per core
– The number of events per packet
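The per-link limit quoted above follows from the time a frame occupies a 1 Gb/s wire; a minimal sketch, ignoring inter-frame gap and preamble overhead:

```python
LINK_BPS = 1e9   # 1 Gb/s link

def wire_time_us(frame_bytes):
    """Time one frame occupies the link, in microseconds."""
    return frame_bytes * 8 / LINK_BPS * 1e6

def max_packet_rate_hz(frame_bytes):
    """Upper bound on packets per second on one port."""
    return LINK_BPS / (frame_bytes * 8)

print(wire_time_us(1500))        # 12 us per 1500 B frame
print(max_packet_rate_hz(1500))  # ~83 kHz (the slide rounds to 80 kHz)
```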

Participating in L1
Some detectors should produce primitives for the L1TP.
In this case the number of PCs must be sized for the online processing: at 1 MHz the inter-event time is 1 µs, so N (number of cores) = M (actual time of the algorithm in µs)
– For instance: if the RICH ring-reconstruction algorithm takes 100 µs → 100 cores (10–12 PCs)
– Using GPUs this number could be greatly reduced: RICH in 2–3 µs → 1 PC with 2 video cards (probably less)
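The core-count rule above is a simple throughput balance: at a 1 MHz input rate each core gets 1 µs per event, so an algorithm taking M µs needs M cores in parallel. A sketch (the 10 cores-per-PC figure is an assumption chosen to match the slide’s “100 cores → 10–12 PCs”):

```python
import math

EVENT_RATE_HZ = 1_000_000   # 1 MHz L0 output rate (slide's figure)
CORES_PER_PC = 10           # assumed; matches "100 cores -> 10-12 PCs"

def cores_needed(algo_time_us):
    """Cores required to keep up with the input rate."""
    return math.ceil(EVENT_RATE_HZ * algo_time_us / 1_000_000)

def pcs_needed(algo_time_us):
    return math.ceil(cores_needed(algo_time_us) / CORES_PER_PC)

print(cores_needed(100), pcs_needed(100))   # CPU RICH: 100 cores, 10 PCs
print(cores_needed(3))                      # 3 us GPU kernel: 3 cores
```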

Final message
The dimension of the L1 farm should be evaluated with care.
The real dimension depends on the actual rate and the computing needed: it’s very important to be able to increase and decrease the number of PCs in each sub-L1 farm → include small switches for each sub-L1 farm (or make sure the L0→L1 farm links are large enough for future upgrades) → it could be convenient to use 10 Gb to 1 Gb switches instead of 10 Gb to 10 Gb ???

Time measurement
We are trying to measure the transit time of data sent from one PC to another, using an oscilloscope:
– Signals on the LPT (parallel) port
– Jitter of the method measured (under control… a few µs)
– Study of the time performance as a function of the interrupt frequency and data rate
Future step: replace the “gun” with a TELL1/TEL62 (with a signal produced by the firmware on the test connector)
Setup: PC1 (“gun”) → PC2 (target), interface to LPT