Online upgrade status
Niko Neufeld, CERN PH/LBC

[Architecture diagram: detector front-end electronics in UX85B, connected by 8800 Versatile Links to eventbuilder PCs with PCIe40 cards (software LLT) on the Point 8 surface; eventbuilder network; subfarm switches fed by 6 x 100 Gbit/s links; eventfilter farm of up to 4000 servers; TFC (clock and fast commands, throttle from the PCIe40); ECS; online storage.]

Full event-building of every bunch-crossing (40 MHz) → no bottleneck for event-building in the system
DAQ, ECS and TFC rely on the same universal hardware module, the PCIe40
New DAQ with very challenging I/O in the event-builders and the network
Cost-effectiveness dictates a very compact system, concentrated in a new data-centre and connected to the detector front-ends via long-distance links
ECS and TFC continue smoothly from the current system (modulo the significant changes in the TFC due to the trigger-less nature of the read-out)
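As a rough check of what full event-building at 40 MHz implies, the sketch below works out the aggregate throughput; the ~100 kB average event size is an assumption (a commonly quoted LHCb upgrade estimate), not a figure from these slides:

```python
# Back-of-the-envelope sizing of the trigger-free event builder.
# ASSUMPTION: ~100 kB average event size (typical LHCb upgrade estimate).
bunch_crossing_rate = 40e6          # events per second, every bunch crossing
event_size_bytes = 100e3            # assumed average event size

aggregate_bits_per_s = bunch_crossing_rate * event_size_bytes * 8
print(f"Aggregate throughput: {aggregate_bits_per_s / 1e12:.0f} Tbit/s")
# -> ~32 Tbit/s that the event-building stage must sustain without bottleneck

link_speed = 100e9                  # 100 Gbit/s event-builder links
print(f"100 Gbit/s links needed: {aggregate_bits_per_s / link_speed:.0f}")
# -> ~320 links, before protocol overhead or safety margin
```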

Most compact system achieved by locating all Online components in a single location
Power, space and cooling constraints allow such an arrangement only on the surface: containerized data-centre
Versatile links connecting the detector to the readout-boards need to cover 300 m

CPPM, Bologna, CERN (supporting)

PCIe40: the universal hardware module, a PCIe Gen3 x16 card with up to 48 optical transceivers
TELL40: a PCIe40 with the DAQ firmware, with up to 48 GBT receivers
SOL40: a PCIe40 with the ECS/TFC firmware (replacing SPECS and various TFC modules), with 48 GBT transceivers
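A quick sanity check on why one PCIe Gen3 x16 card can carry 48 input links; the PCIe and GBT rates below are the standard published characteristics of those technologies, used here only for illustration:

```python
# Rough I/O budget of a PCIe40-style card (public PCIe / GBT figures,
# not numbers taken from these slides).
pcie_gen3_lane = 8e9 * (128 / 130)   # 8 GT/s per lane, 128b/130b encoding
pcie_x16 = 16 * pcie_gen3_lane       # ~126 Gbit/s raw towards the host
print(f"PCIe Gen3 x16 raw: {pcie_x16 / 1e9:.0f} Gbit/s")

gbt_user_payload = 3.2e9             # user data per GBT link, standard frame
inputs_48 = 48 * gbt_user_payload    # ~154 Gbit/s if all 48 links ran full
print(f"48 GBT links at full payload: {inputs_48 / 1e9:.0f} Gbit/s")
# The worst case exceeds the PCIe budget, so the design relies on the
# per-link occupancy staying below the maximum in realistic conditions.
```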

Slides by J.-P. Cachemiche


Slide by F. Pisani


INFN and University of Bologna, CNAF, CERN

Performance tests carried out at CNAF with a test bed similar to the one at CERN
Extracting the best performance required some tuning: bind processes according to the NUMA topology and switch off power-saving modes
Very close to saturation: ~52.5 Gbit/s!
A. Falabella et al.
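The NUMA-binding tuning mentioned above can be illustrated with a minimal sketch; on a real node one would typically use numactl or libnuma, and the core set below is a placeholder for whichever cores sit closest to the network adapter:

```python
import os

# Minimal sketch of NUMA-aware process pinning (Linux only).
# ASSUMPTION: cores 0-7 belong to the NUMA node local to the NIC;
# on a real machine, read the topology from `lscpu` or /sys first.
LOCAL_CORES = {0, 1, 2, 3, 4, 5, 6, 7}

os.sched_setaffinity(0, LOCAL_CORES)   # pin this process (pid 0 = self)
print("Running on cores:", sorted(os.sched_getaffinity(0)))

# The power-saving part of the tuning (C-states, frequency scaling) is
# done at the OS level, e.g. by selecting the "performance" cpufreq
# governor; it cannot be shown portably from Python.
```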

Extensive tests need to be done on a bigger cluster
We aim at the new CINECA Galileo Tier-1 cluster: possible to test on a scale similar to the LHCb upgraded DAQ network
The cluster is in production since the last week of January 2015; first tests in a few weeks, managed by the CNAF team

Model: IBM NeXtScale Cluster
Nodes: 516
Processor: 2 x 8-core Intel Haswell 2.40 GHz per node
RAM: 128 GB/node (8 GB/core)
Network: InfiniBand with 4x QDR switches
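To see in what sense Galileo is "a scale similar" to the upgraded DAQ, compare it against the event-builder arithmetic sketched earlier; the 4x QDR data rate is the standard InfiniBand figure, and the comparison is purely illustrative:

```python
# Compare the Galileo testbed with the upgraded DAQ requirements.
galileo_nodes = 516
qdr_4x_data_rate = 32e9    # 4x QDR: 40 Gbit/s signalling, 32 Gbit/s data (8b/10b)

daq_aggregate = 32e12      # ~32 Tbit/s event-building traffic (see sketch above)
eb_ports_100g = daq_aggregate / 100e9

print(f"DAQ needs ~{eb_ports_100g:.0f} x 100 Gbit/s ports")
print(f"Galileo offers {galileo_nodes} nodes at {qdr_4x_data_rate/1e9:.0f} Gbit/s each")
# Similar node count, lower per-node bandwidth: well suited to protocol
# and topology studies, not to full-rate throughput validation.
```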

CERN (thanks to the technical coordination team for their help!)

In real conditions
First test: loop-back of the AMC40
Soon: with a front-end prototype
Watch out for bit-errors / verify the optical margin
Use a MiniDAQ setup

[Fibre-installation diagram (slide by L. Roy) comparing two options for the 300 m long-distance cables between the PCIe40 and an underground patch-panel rack. Config 1: short MPO patch cord (2-5 m), MPO-MPO adapter, 300 m long-distance cable, MPO-to-12x LC or SC cassette at the patch panel. Config 2 (load sharing): short fan-out (2-5 m) at the PCIe40, MPO-MPO adapter, 300 m long-distance cable, x12 at the patch panel.]


No errors (12 links for 4 weeks) over 700 m
More links and different tests to follow
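An error-free run of that length translates into an upper limit on the bit error rate; the sketch below works it out with the rule of three, assuming the links run at the 4.8 Gbit/s GBT line rate (the rate is not stated on the slide):

```python
# Upper limit on the bit error rate from an error-free test run,
# using the "rule of three" (95% CL when zero errors are observed).
# ASSUMPTION: 4.8 Gbit/s GBT line rate per link.
links = 12
line_rate = 4.8e9                 # bits per second per link
seconds = 4 * 7 * 24 * 3600       # 4 weeks

bits_tested = links * line_rate * seconds
ber_limit_95cl = 3 / bits_tested

print(f"Bits tested: {bits_tested:.2e}")
print(f"BER < {ber_limit_95cl:.1e} at 95% CL")
# ~1.4e17 bits -> BER below ~2e-17, far beyond the ~1e-12
# typically required of optical links.
```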

Annecy, CPPM, CERN + sub-detector experts

TFC & ECS firmware (SOL40)
DAQ firmware (for MiniDAQ and PCIe40)
Prototype ECS for MiniDAQ
Global firmware framework