Network Architecture for the LHCb DAQ Upgrade Guoming Liu CERN, Switzerland Upgrade DAQ Miniworkshop May 27, 2013

Outline
- Introduction to the LHCb DAQ upgrade: numbers
- Potential network technologies for the DAQ upgrade
- DAQ network architecture
- DAQ schemes
- Summary

LHCb DAQ Upgrade
- Timeframe: installation during the second long shutdown of the LHC in 2018, ready for data taking in 2019
- Trigger: a fully flexible software solution
  - Low Level Trigger (LLT): tunes the input rate to the computing farm between 1 and 40 MHz while the system is not yet ready for the full 40 MHz
- The DAQ system should be capable of reading out the whole detector at the LHC collision rate of 40 MHz
- Numbers for the DAQ network (checked in the sketch below):
  - Event size: ~100 kB
  - Max. event input rate: 40 MHz
  - Unidirectional bandwidth: ~38.4 Tbit/s (may scale up)
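The aggregate bandwidth follows directly from the event size and rate. A minimal sanity check, under the assumption that the quoted ~38.4 Tbit/s corresponds to the ~100 kB payload plus transport overhead (roughly 120 kB effective per event):

```python
# Back-of-the-envelope check of the DAQ network bandwidth requirement.
# Assumption: the ~38.4 Tbit/s figure corresponds to ~120 kB per event,
# i.e. the ~100 kB payload plus protocol/transport overhead.

event_rate_hz = 40e6        # LHC collision rate: 40 MHz
payload_bytes = 100e3       # ~100 kB event payload

payload_bw = event_rate_hz * payload_bytes * 8      # bits per second
print(f"payload only:  {payload_bw / 1e12:.1f} Tbit/s")   # 32.0 Tbit/s

effective_bytes = 120e3     # assumed payload + overhead
total_bw = event_rate_hz * effective_bytes * 8
print(f"with overhead: {total_bw / 1e12:.1f} Tbit/s")     # 38.4 Tbit/s
```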

Network Technologies
- High-speed interconnect technologies:
  - Ethernet (10G/40G/100G)
  - InfiniBand (FDR, with EDR coming)
  - Some other similar technologies
- Ethernet
  - Very popular for desktops/workstations/servers
  - Familiar to users and developers
- InfiniBand
  - Mainly used in high-performance computing and large enterprise data centers
  - High speed: 56 Gb/s FDR
  - Great performance/price ratio

Ethernet vs InfiniBand

                Ethernet                              InfiniBand
Reliability     Best effort; relies on upper-layer    Hardware-based retransmission
                protocols (TCP/IP)
Flow control    Pause frame, temporarily blocking     Credit-based
                the transmission
Switch method   Store-and-forward or cut-through      Cut-through
Buffer size     Large (store-and-forward) or          Small
                small (cut-through)

Review: Current LHCb DAQ
- Readout board (TELL1): custom FPGA board
- UDP-like transport protocol: MEP (Multi-Event Packet), sketched below
- Push DAQ scheme
- Deep buffers are required in the routers and switches
[Diagram: readout boards push event fragments (Evt m) towards destination CPU n, which issues data requests (DataReq)]
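For illustration, a minimal sketch of MEP-style packing, bundling several event fragments into one UDP datagram to amortise per-packet overhead. The header layout, address, and port below are hypothetical, not the actual MEP format:

```python
import socket
import struct

def pack_mep(first_event_id: int, fragments: list) -> bytes:
    """Hypothetical MEP-like layout: a small header (first event id,
    fragment count), then each fragment preceded by its 16-bit length."""
    header = struct.pack("!IH", first_event_id, len(fragments))
    body = b"".join(struct.pack("!H", len(f)) + f for f in fragments)
    return header + body

# Push scheme: the source decides when and where to send, without waiting
# for a request from the destination (example address and port).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mep = pack_mep(1000, [b"frag-a", b"frag-b", b"frag-c"])
sock.sendto(mep, ("192.0.2.1", 45000))
```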

Network Architecture for DAQ Upgrade
- Unidirectional solution: dataflow in the core network is unidirectional
- Bidirectional mixed solution: Readout Unit (RU) and Builder Unit (BU) connected to the same Top-Of-Rack (TOR) switch; dataflow in the core network is bidirectional
- Bidirectional uniform solution: RU and BU combined in the same server; dataflow in the core network is bidirectional

Unidirectional Solution
- All the readout units are connected to the core network
- The builder unit and the filter unit are implemented in the same server
- The dataflow in the core network is unidirectional

DAQ: Core Network
Monolithic core router vs fabric with fat-tree topology

Monolithic Core Router vs Fabric
- Monolithic core router (current solution in LHCb)
  - Pros: "simple" architecture, good performance
  - Cons: expensive, not many choices
- Fabric with fat-tree topology: many small Top-of-Rack (TOR) switches (sized in the sketch below)
  - Pros: cost efficiency, scalability, flexibility
  - Cons: complexity
- Fabrics are quite popular in data centers: Cisco FabricPath, Juniper QFabric, and other large chassis...
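To make the fat-tree option concrete, a small sizing sketch for a strictly non-blocking two-layer leaf-spine fabric. The 36-port switch and 600-port fabric are assumed example figures, not a chosen product or the actual LHCb port count:

```python
import math

# A strictly non-blocking two-layer leaf-spine (folded-Clos) fabric built
# from identical k-port switches: each leaf dedicates k/2 ports to end
# nodes and k/2 uplinks (one per spine), so it scales to k*k/2 end nodes
# with k/2 spines and at most k leaves.

def leaf_spine(k_ports: int, n_nodes: int):
    down = k_ports // 2                    # node-facing ports per leaf
    assert n_nodes <= down * k_ports, "needs a third switching layer"
    n_leaves = math.ceil(n_nodes / down)
    n_spines = down                        # one uplink per leaf to each spine
    return n_leaves, n_spines

# Example with assumed figures: 600 event-builder ports, 36-port switches
print(leaf_spine(36, 600))   # -> (34, 18): 34 leaves, 18 spines
```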

Bidirectional Mixed Solution (1)
- The builder unit and the filter unit are implemented in the same server
- All the readout units are connected to the TOR switches instead of the core network

Bidirectional Mixed Solution (2)
- The dataflow in the core network is bidirectional
- Requires RUs and BU/FUs to be close enough to connect to the same TOR switch
- This can save up to 50% of the bandwidth and ports in the core network (illustrated below)
- The price per port in the core network is usually 3 to 4 times higher than in a TOR switch
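A rough illustration of where the saving comes from: with unidirectional flow every fragment occupies one core port inbound and another outbound, while bidirectional TOR uplinks carry readout traffic up and built events down on the same full-duplex port. Port speed and counts below are assumed figures:

```python
# Core port count, unidirectional vs bidirectional use (assumed figures).
aggregate_tbps = 38.4    # required event-building bandwidth
port_gbps = 100          # assumed core port speed

# Unidirectional: traffic enters on one set of ports, leaves on another,
# so each full-duplex port is used in only one direction.
unidirectional_ports = 2 * aggregate_tbps * 1e3 / port_gbps

# Bidirectional mixed: each TOR uplink is used in both directions.
bidirectional_ports = aggregate_tbps * 1e3 / port_gbps

print(unidirectional_ports, bidirectional_ports)   # 768.0 384.0
```

With core ports 3 to 4 times the price of TOR ports, the saved core ports dominate the cost of the extra TOR ports.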

Bidirectional Uniform Solution (1)
- The readout unit and the builder unit are implemented in the same server (RU/BU server)
- The RU/BU server connects to both the core network (for event building) and the TOR switch (for event filtering)

Bidirectional Uniform Solution (2)
- The dataflow in the core network is bidirectional
- Saves up to 50% of the ports in the core network
- Possible to choose different network technologies for the core layer (event-builder network) and the edge layer (event-filter network)
  - e.g. cost-effective InfiniBand FDR for the core, low-cost 10GBase-T for the event-filter network
- Increases flexibility: deep buffers, easy to implement different DAQ schemes in software
  - Not tied to any technology
- Reduces the complexity of the FPGA receiver card
  - No deep buffer is needed
  - Simple protocol (e.g. PCIe) to the PC

Bidirectional Uniform Solution (3)
- Key to the success of the uniform solution: the RU/BU module
- RU/BU modules serve five purposes (sketched in the code below):
  1. Receive data fragments from the front-end electronics
  2. Send data fragments to the other modules
  3. Build complete events
  4. Perform event filtering on a fraction of the complete events
  5. Distribute the remaining events to a sub-farm of filter units
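A schematic event loop covering the five roles. All names, the in-memory queues, and the round-robin choice of builder are illustrative assumptions; a real implementation would overlap these steps and use a zero-copy transport such as InfiniBand verbs:

```python
import random

class RUBU:
    """Toy RU/BU module; one instance per server."""
    def __init__(self, my_id, n_modules, filter_fraction=0.1):
        self.my_id = my_id
        self.n = n_modules
        self.filter_fraction = filter_fraction
        self.inbox = {}                     # event_id -> received fragments

    def read_frontend(self, event_id):
        # 1. Receive a data fragment from the front-end electronics (stub).
        return f"frag({event_id},{self.my_id})"

    def send_fragment(self, dest, event_id, frag):
        # 2. Send the fragment to the module building this event.
        dest.inbox.setdefault(event_id, []).append(frag)

    def try_build(self, event_id):
        # 3. Build the complete event once all fragments have arrived.
        frags = self.inbox.get(event_id, [])
        return frags if len(frags) == self.n else None

    def dispose(self, event_id, event):
        # 4. Filter a fraction of the complete events locally ...
        if random.random() < self.filter_fraction:
            print(f"module {self.my_id}: filtering event {event_id} locally")
        # 5. ... and distribute the rest to a sub-farm of filter units.
        else:
            print(f"module {self.my_id}: event {event_id} -> sub-farm")

# One synchronous event-building round across 4 modules:
modules = [RUBU(i, 4) for i in range(4)]
for event_id in range(8):
    builder = modules[event_id % 4]         # round-robin builder choice
    for m in modules:
        m.send_fragment(builder, event_id, m.read_frontend(event_id))
    event = builder.try_build(event_id)
    if event:
        builder.dispose(event_id, event)
```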

Bidirectional Uniform Solution (4)
- I/O bandwidth requirements of RU/BU modules (checked below):
  - Full 24x GBT links
  - ~154 Gb/s input and output, or ~215 Gb/s in wide user mode
- Preliminary tests on a Sandy Bridge server:
  - Intel E5-2650: 2x16x2.0G
  - 2x Mellanox Connect-IB dual-port InfiniBand FDR cards
  - OS: SLC 6.2
  - Software: MLNX-OFED 2.0
  - Connect-IB cards send and receive data simultaneously
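The quoted figures are consistent with the GBT user bandwidth, assuming ~3.2 Gb/s per link in standard mode and ~4.48 Gb/s in wide mode, and counting both the front-end input and the matching event-building output:

```python
# RU/BU I/O requirement from the GBT link budget (assumed per-link user
# bandwidth: 3.2 Gb/s standard mode, 4.48 Gb/s wide mode). The server
# receives 24 links of front-end data and sends the same volume into
# event building, hence the factor 2.

n_links = 24
for mode, gbps_per_link in [("standard", 3.2), ("wide", 4.48)]:
    total = 2 * n_links * gbps_per_link
    print(f"{mode}: {total:.0f} Gb/s")   # standard: 154 Gb/s, wide: 215 Gb/s
```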

Bidirectional Uniform Solution (5)
- Preliminary test results: input and output throughput
- MLNX-OFED 2.0 is a beta version, but it is needed for the new dual-port cards
- With MLNX-OFED 1.5.3, the throughput of the single-port card is close to the limit
- More tuning of the OS and software is needed to improve the performance

DAQ Schemes
- Several different DAQ schemes in terms of the data flow (a barrel-shift sketch follows below):
  - Push data without traffic shaping
  - Push data with barrel-shift traffic shaping
  - Pull data from the destinations
- Different schemes fit different network technologies and topologies
- More details in Daniel's talk later
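To illustrate the barrel-shift idea: sources stagger their destination sequence so that in any time slot each destination receives from exactly one source, which avoids output contention without deep switch buffers. A minimal scheduling-only sketch:

```python
# Barrel-shift traffic shaping: in time slot t, source s sends to
# destination (s + t) mod N, so no two sources target the same
# destination in the same slot.

n_sources = n_dests = 4      # assumed equal for simplicity

for t in range(n_dests):     # one full barrel rotation
    slot = {s: (s + t) % n_dests for s in range(n_sources)}
    assert len(set(slot.values())) == n_dests   # one sender per destination
    print(f"slot {t}: " + ", ".join(f"src{s}->dst{d}" for s, d in slot.items()))
```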

Summary
- Both Ethernet and InfiniBand, or a mix of the two, are candidates for the DAQ network upgrade
- Several architectures have been discussed; the uniform solution is the most flexible and cost-effective
- Preliminary tests show that the uniform solution can work
- More studies for the LHCb DAQ network upgrade are needed; stay tuned for developments in industry
