The Evaluation Tool for the LHCb Event Builder Network Upgrade
Guoming Liu, Niko Neufeld
CERN, Switzerland
18th Real-Time Conference, June 13, 2012

Outline
- Introduction to the LHCb upgrade
- Potential solutions for the DAQ upgrade and the event builder network (also called the DAQ network)
- The evaluation tool for the DAQ network and tests performed in the lab with this tool
- Summary

LHCb Upgrade
- Overall schedule: installation during the second long shutdown of the LHC in 2018, ready for data taking in 2019
- Design luminosity: cm⁻²s⁻¹
- Trigger: LHCb opts for a fully flexible software trigger
  - the hardware trigger will be removed
  - the whole detector will be read out at the LHC collision rate of 40 MHz
- Low Level Trigger (LLT): reuses the current hardware trigger
  - to tune the input rate to the computing farm between 1 and 40 MHz while the system is not yet ready for the full 40 MHz
  - to cope with a staged DAQ system

LHCb Upgrade: Trigger
(Diagram of the upgraded trigger chain: 40 MHz detector readout, Low Level Trigger (LLT) tuning the input rate, software trigger, 20 kHz to tape; event size ~100 kB)

Current LHCb DAQ
- Readout board (ROB): custom FPGA board
- UDP-like transport protocol MEP (Multi-Event Packet): several events are packed into one message
- Push scheme: the ROBs assemble the data fragments and send out the packets immediately
- Deep buffers are required in the routers and switches
(Diagram: farm node n issues a data request; the assignment "event m, destination n" steers the ROBs, which push the fragments of event m to node n)
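As an illustration of the multi-event packet idea, a minimal C++ sketch of packing several event fragments into one MEP-style message (the field names and widths are assumptions for illustration, not the actual LHCb MEP layout):

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical multi-event packet (MEP): one header followed by several
// event fragments, each preceded by a small fragment header.
struct MepHeader {
    uint32_t first_event_id;   // id of the first event packed in this MEP
    uint16_t n_events;         // packing factor: number of events in the MEP
    uint16_t total_length;     // total payload length in bytes
};

struct FragmentHeader {
    uint32_t event_id;         // event this fragment belongs to
    uint16_t source_id;        // readout board that produced it
    uint16_t length;           // fragment payload length in bytes
};

// Pack consecutive fragments from one source into a single message buffer.
std::vector<uint8_t> packMep(uint32_t first_id,
                             const std::vector<std::vector<uint8_t>>& fragments,
                             uint16_t source_id) {
    std::vector<uint8_t> buf(sizeof(MepHeader));   // reserve space for the MEP header
    uint32_t id = first_id;
    for (const auto& frag : fragments) {
        FragmentHeader fh{id++, source_id, static_cast<uint16_t>(frag.size())};
        const auto* p = reinterpret_cast<const uint8_t*>(&fh);
        buf.insert(buf.end(), p, p + sizeof(fh));
        buf.insert(buf.end(), frag.begin(), frag.end());
    }
    MepHeader mh{first_id, static_cast<uint16_t>(fragments.size()),
                 static_cast<uint16_t>(buf.size() - sizeof(MepHeader))};
    std::memcpy(buf.data(), &mh, sizeof(mh));
    return buf;
}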

LHCb DAQ upgrade
- Requirements
  - Event size ~100 kB at a readout rate of 40 MHz: the uni-directional throughput of the DAQ network must be at least 32 Tbit/s
- Two high-speed interconnect technologies
  - Ethernet (10G/40G/100G)
  - InfiniBand (FDR/...)
- Two topologies
  - Large core router (current solution in LHCb)
    pros: "simple" architecture, good performance
    cons: expensive, not many product choices
  - Fat-tree topology with many small top-of-rack (TOR) switches
    pros: cost efficiency, scalability, flexibility
    cons: complexity
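The 32 Tbit/s figure follows directly from the numbers above, assuming the full 40 MHz collision rate reaches the event builder:

100 kB/event × 40 MHz × 8 bit/byte = 3.2 × 10^13 bit/s = 32 Tbit/s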

DAQ Schemes
- Two DAQ schemes: push or pull
  - Push scheme with no traffic shaping
  - Push scheme with barrel-shifter traffic shaping
  - Pull scheme: the destination sends data requests to the sources one by one
(Diagram of barrel-shifter traffic shaping)
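A minimal C++ sketch of the barrel-shifter scheduling idea (an assumption of how the assignment could look, not the actual LHCb implementation): in each time slot every source sends to a different destination, and the assignment rotates so that no destination receives from more than one source at a time.

#include <cstddef>

// Barrel-shifter destination assignment: with N sources and N destinations,
// in time slot t source s sends to destination (s + t) mod N.  Every
// destination then receives from exactly one source per slot, avoiding
// transient overload of any single output port.
std::size_t barrelShiftDestination(std::size_t source,
                                   std::size_t timeSlot,
                                   std::size_t nNodes) {
    return (source + timeSlot) % nNodes;
}

// Example usage: 7 sources sending to 7 destinations over 7 time slots.
// for (std::size_t t = 0; t < 7; ++t)
//     for (std::size_t s = 0; s < 7; ++s)
//         sendFragment(s, barrelShiftDestination(s, t, 7));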

Evaluation tool for the network upgrade
- Motivation
  - Demonstrate and verify the DAQ protocol
  - Measure the network performance and verify the solutions based on different technologies and architectures
- Basic components
  - Core unit: dispatches jobs, handles exceptions, ...
  - Data flow manager: controls the data flow
  - Network transport layer: an abstraction of the underlying network
  - Performance measurement: collects information from the different modules
(Block diagram: the core unit coordinates the data flow manager, data source generator, event builder and performance measurement modules, all sitting on the network transport layer)
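A minimal sketch of what such a network transport abstraction could look like in C++ (class and method names are assumptions for illustration; the tool's real interface is not shown on the slide). Concrete back-ends for 10G Ethernet or InfiniBand would implement the same interface, so the data source generator and event builder stay technology independent.

#include <sys/types.h>   // ssize_t
#include <cstddef>
#include <string>

// Abstract transport layer hiding the technology-specific details
// (TCP, UDP, InfiniBand verbs, ...) from the rest of the tool.
class NetworkTransport {
public:
    virtual ~NetworkTransport() = default;

    // Establish a connection to a peer identified by a generic address string.
    virtual bool connect(const std::string& peer) = 0;

    // Blocking send/receive of one message; return bytes transferred or -1 on error.
    virtual ssize_t send(const void* buf, std::size_t len) = 0;
    virtual ssize_t receive(void* buf, std::size_t maxLen) = 0;
};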

Setup in the lab
- Network: 10G Ethernet and QDR InfiniBand
- Server: dual-port 10G Ethernet card and 4x QDR InfiniBand card

  Nodes:               8
  Type:                Dell R710
  CPU:                 Xeon E5520 at 2.27 GHz
  Memory:              3 GB
  Ethernet adapter:    Chelsio T420-CR, dual-port 10GBASE-SFP, PCIe 2 x8
  Ethernet switch:     Voltaire Vantage 6048, 48 ports, 10 GbE
  InfiniBand adapter:  QLogic QLE7340 HCA, 4x QDR, PCIe 2 x8
  InfiniBand switch:   QLogic BS01, 36 ports, 4x QDR

Some Preliminary Tests
- Tests in the lab so far
  - Push scheme
  - Push with barrel-shifter traffic shaping
  - Pull scheme
  - InfiniBand network throughput measurement
- So far preliminary results only; the tool is still to be optimized

Case 1: Push
Study of the current LHCb DAQ scheme on 10G Ethernet
- The data flow manager sends a broadcast message to all servers using one 10G interface
  - emulates the trigger packet
  - synchronizes all sources
- All servers are data generators and receivers, using one interface to receive the broadcast message and the other to send and receive the event-building traffic
(Diagram: trigger broadcast distributed to all servers)
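A minimal sketch of the push logic on the server side, assuming plain UDP sockets (interface addresses, port numbers and the trigger message layout are illustrative assumptions, not the tool's actual code): the server blocks on the broadcast trigger and then immediately pushes its fragment for that event to the destination derived from the event number.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical trigger message broadcast by the data flow manager.
struct Trigger { uint32_t event_id; };

int main() {
    // Socket on the first interface: receives the trigger broadcast.
    int trig = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in trigAddr{};
    trigAddr.sin_family = AF_INET;
    trigAddr.sin_port = htons(5000);                  // assumed trigger port
    trigAddr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(trig, reinterpret_cast<sockaddr*>(&trigAddr), sizeof(trigAddr));

    // Socket on the second interface: pushes fragments to the destinations.
    int data = socket(AF_INET, SOCK_DGRAM, 0);

    const uint32_t nDest = 7;                         // assumed number of receivers
    std::vector<uint8_t> fragment(13 * 1024, 0);      // dummy 13 kB fragment

    for (;;) {
        Trigger t{};
        if (recv(trig, &t, sizeof(t), 0) != static_cast<ssize_t>(sizeof(t)))
            continue;                                 // ignore malformed triggers

        // Push immediately: destination chosen round-robin from the event id.
        sockaddr_in dest{};
        dest.sin_family = AF_INET;
        dest.sin_port = htons(6000);                  // assumed data port
        std::string ip = "10.0.0." + std::to_string(1 + t.event_id % nDest);
        inet_pton(AF_INET, ip.c_str(), &dest.sin_addr);

        sendto(data, fragment.data(), fragment.size(), 0,
               reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    }
    close(data);
    close(trig);
}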

Push: packet drop rate test
- Main concern: the buffer size in the network devices
- Measure the packet drop rate for different burst sizes, at less than 50% of the line rate
  - Simple push
    - 7 x 1 push: 7 sources to 1 destination
    - 7 x 7 push: 7 sources to 7 destinations
  - 7 x 7 push with barrel-shifter traffic shaping

Push: results
- Message size 13 kByte: packet drop rate 4.3e-7
- Message size 14 kByte: packet drop rate 1.4e-7
- Message size 50 kByte: packet drop rate 1.3e-7

Case 2: Pull
- 7 x 7 pull: 7 sources to 7 destinations using TCP/IP
- Performance degrades for message sizes above 20 kB
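A minimal sketch of the pull idea on the destination side, assuming already-connected TCP sockets to each source (function names and the request format are illustrative assumptions): for every event the destination asks each source in turn for its fragment and reads the reply before moving on.

#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical request sent by the destination to one source.
struct DataRequest { uint32_t event_id; };

// Read exactly `len` bytes from a blocking TCP socket.
static bool readFully(int fd, void* buf, std::size_t len) {
    auto* p = static_cast<uint8_t*>(buf);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;
        p += n;
        len -= static_cast<std::size_t>(n);
    }
    return true;
}

// Pull one complete event: request the fragment from every source, one by one.
std::vector<uint8_t> pullEvent(uint32_t eventId,
                               const std::vector<int>& sourceSockets,
                               std::size_t fragmentSize) {
    std::vector<uint8_t> event;
    std::vector<uint8_t> frag(fragmentSize);
    for (int fd : sourceSockets) {
        DataRequest req{eventId};
        send(fd, &req, sizeof(req), 0);               // ask this source for its fragment
        if (!readFully(fd, frag.data(), frag.size()))
            break;                                    // source disconnected: stop early
        event.insert(event.end(), frag.begin(), frag.end());
    }
    return event;
}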

Case 3: InfiniBand
- Network throughput for 1 to 1, 4 to 1, and 4 to 4 using RDMA (Remote Direct Memory Access) for data transfer

Summary
- More studies of the LHCb event builder network upgrade are needed, so a generic evaluation tool will be very useful
- A few basic modules have been used for the preliminary studies and the results have been shown:
  - push DAQ scheme on 10 GbE
  - pull scheme on 10 GbE
  - InfiniBand network throughput using RDMA
