Event Building With Smart NICs

Event Building With Smart NICs Jean-Pierre Dufey, Beat Jost, Niko Neufeld & Marianna Zuin DAQ 2000 Lyon, October 20, 2000

Recap: LHCb DAQ System Niko NEUFELD CERN, EP

Event Building Components
- Readout units (RU): multiplexing of front-end links, destination assignment
- Switching read-out network
- Sub-farm controllers (SFC): event building and event dispatching

Event Building Properties
- Static load balancing among the SFCs: RUs send round robin to the destinations, with destination = f(event_number) and f the same for all RUs
- Pure push protocol: congestion is handled via flow control and, more importantly, by throttling
- Distributes the event data flow of 6 GB/s from m sources to n destinations, each of which has to handle O(1 kB) fragments at 80 kHz

Why Use Smart NICs?
- Modern smart NICs are powerful embedded computers
- Off-load the general-purpose CPU
- Take advantage of cheap CPU power on the NIC
- Facilitate the hardware design of the RU
But:
- (Yet) limited CPU power compared to a commodity PC
- No guarantee that high-end NIC development will continue in this direction (firmware/CPU vs. ASIC/FPGA)

Alteon Tigon 2 Features
- Dual R4000-class processor running at 88 MHz
- Up to 2 MB memory
- GigE MAC + link-level interface
- PCI interface
Development environment:
- GNU C cross-compiler with a few special features to support the hardware
- Source-level remote debugger

Test Setup
(Diagram: a Linux PC — CPU and memory on the PCI bus — equipped with a GbE NIC, connected to the CERN network.)

NIC-to-NIC Performance

Performance of the Alteon NIC
- Can fill the wire at any frame size (from 64 to 9000 bytes)
- Can send out frames at frequencies of up to 1.4 MHz
- For frames bigger than 512 bytes, more than 95% of the nominal bandwidth is available for data (practically 100% for >8000-byte jumbo frames)

Event Building Algorithm
- Assembles events out of fragments from a known number of sources
- Handles an adjustable number of events concurrently (limited only by buffer space)
- Implements "implicit + time-out completion"
- Uses the "scatter/gather" capability of the NIC's DMA engine to concatenate the fragments into the host's memory

Algorithm (flowchart)
- Start: poll for a new fragment.
- New event? If its event is not yet in the table, add a new event descriptor.
- If its event is still in the table, collect the fragment and decrement the count of outstanding sources.
- If the fragment arrives out of time (its event is no longer in the table), it is not collected.
- Check for missing fragments in previous events (time-out completion).

PC Test Implementation
400 MHz Pentium III, Visual C++ 5.0

NIC-to-NIC Performance
Average time per fragment: 11.65 µs

Summary
- Event building on a smart NIC at an incoming-fragment frequency of almost 100 kHz has been demonstrated
- Event building runs at Gigabit speed for fragments bigger than ~1100 bytes
- Code optimisation is ongoing (9 µs/fragment has already been achieved)

Program of Work
- Evaluate the impact of interrupt coalescence on SFC performance
- Study the possibility of handling some TCP/IP traffic on the outgoing link of the SFC (events to storage)
- "Real world" tests on a Gigabit Ethernet switching network
- Use the measured parameters in a detailed simulation of the readout network