PCIe based readout U. Marconi, INFN Bologna CERN, May 2013.

Electronics: basic assumptions. Data transfer from the front-end boards to the read-out boards at 40 MHz: ~40 Tb/s over ~12000 optical links, using 3.2 Gb/s GBT serializers. – Zero suppression is performed on the front-end boards. Readout boards provide buffering and data-format conversion, from the custom link protocol to an industry standard (LAN protocol). The LHCb readout board: an ATCA carrier board hosting AMC40 mezzanines, each with 24 × 3.2 Gb/s inputs and 12 × 10 Gb/s outputs, for a data throughput of ~100 Gb/s per AMC40; ~600 AMC40 boards in total.
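As a back-of-envelope check of the numbers quoted above, the sketch below (constants taken from the slide) recomputes the aggregate front-end throughput and the minimum AMC40 count; the slide's ~600 boards includes margin over this minimum.

```python
# Back-of-envelope check of the readout numbers quoted above.
GBT_LINK_GBPS = 3.2      # GBT serializer line rate, from the slide
N_LINKS = 12000          # optical links from the front-end
INPUTS_PER_AMC40 = 24    # 3.2 Gb/s inputs per AMC40

total_tbps = GBT_LINK_GBPS * N_LINKS / 1000   # aggregate throughput in Tb/s
min_boards = -(-N_LINKS // INPUTS_PER_AMC40)  # ceiling division

print(f"Aggregate front-end throughput: {total_tbps:.1f} Tb/s")  # 38.4, i.e. the quoted ~40 Tb/s
print(f"Minimum number of AMC40 boards: {min_boards}")           # 500; the slide quotes ~600 with margin
```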

DAQ Network. Implementing the InfiniBand protocol on the AMC40 FPGAs is not an easy task. Why not try PCIe Gen3 instead? One would only need a suitable receiver card in the server, changing the architecture of course.

PCIe Gen3 extension: Avago-PLX test setup. Not yet available for testing.

PCIe-IB-Ethernet uniform cluster: event builder, high-speed network, event filter.

PCIe Receiver Card (custom design). Slide diagram: event fragments leave the AMC40 (Stratix V) through its PCIe Gen3 hard IP blocks (x8 + x4 PCIe3), pass through a PLX PEX 8733 PCIe switch, and travel over the PCIe Gen3 extension on 12 optical fibres to a custom receiver card in the RU/BU event-builder server, which transfers them by DMA. PCIe Gen3 bandwidth: 12 × 8 = 96 Gb/s.
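The 96 Gb/s figure above is the raw line rate of 12 Gen3 lanes (x8 + x4); a minimal sketch of that arithmetic, including the 128b/130b encoding overhead of PCIe Gen3, which the slide does not subtract:

```python
# Raw vs. effective PCIe Gen3 bandwidth for 12 lanes (x8 + x4).
LANE_RATE_GTPS = 8.0     # PCIe Gen3 line rate per lane (8 GT/s)
ENCODING = 128 / 130     # Gen3 uses 128b/130b encoding
lanes = 8 + 4            # one x8 plus one x4 link

raw_gbps = lanes * LANE_RATE_GTPS      # 96 Gb/s, as quoted on the slide
effective_gbps = raw_gbps * ENCODING   # ~94.5 Gb/s after encoding overhead
print(raw_gbps, round(effective_gbps, 1))
```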

CPU-to-CPU connection through PCIe. The PXF51002 is a low-profile dual-QSFP+ PCIe adapter that connects to an x16 PCIe slot on the motherboard.

PCIe x16 Gen3 Switch-based Cable Adapter: under test at LAL.

PXF51002-based solution. Slide diagram: event fragments from the FEE enter the AMC40 Stratix V FPGAs (24 inputs); the PCIe Gen3 hard IP blocks (x8 + x4) feed a PCIe switch, and 3 × 4 optical fibres carry the link through PXF51002 QSFP+ adapters to the RU/BU event-builder server. PCIe Gen3 bandwidth: 12 × 8 = 96 Gb/s.

One Stop Systems based solution. Slide diagram: event fragments from the FEE enter the AMC40 Stratix V FPGAs (24 inputs); an x16 PCIe Gen3 hard IP link (12 lanes used) runs through PEX 8733 switches and a PCIe x16 Gen3 switch-based cable adapter, over optical fibres, to the RU/BU event-builder server. PCIe Gen3 bandwidth: 12 × 8 = 96 Gb/s.

Stratix V: number of PCIe hard IP blocks.

I/O performance of PC servers. Dual-socket server main-boards offer four x16-lane and two x8-lane PCIe slots: the total theoretical I/O of a dual-socket system is 1280 Gb/s. Test setup: – GTX 680 GPU, PCIe Gen3 x16 – 2 × Mellanox InfiniBand FDR adapters (PCIe Gen3 x8) Results: – More than 100 Gb/s can be transferred to/from the GPU. – Using InfiniBand, the PC can simultaneously transfer 2 × 56 Gb/s to/from the network over the two InfiniBand cards.
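The quoted 1280 Gb/s follows from counting all lanes in both directions, since PCIe links are full duplex; a minimal sketch of that arithmetic (raw Gen3 rates, no encoding overhead):

```python
# Sanity check of the quoted 1280 Gb/s theoretical I/O of a dual-socket
# server with four x16 and two x8 PCIe Gen3 slots.
lanes = 4 * 16 + 2 * 8           # 80 lanes in total
per_lane_gbps = 8.0              # Gen3 raw rate per lane, per direction
one_way = lanes * per_lane_gbps  # 640 Gb/s in each direction
full_duplex = 2 * one_way        # 1280 Gb/s counting both directions
print(one_way, full_duplex)
```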

Clock isolation. Typically, when employing optical fibre, the two ends of the link do not reside in the same enclosure, and therefore share neither the same reset nor the same system clock. Because the interface is optical, there is a reduced need for EMI suppression on the link: the optical link can be kept in constant-frequency mode. In a system that uses spread-spectrum clocking (SSC), the SSC must be completely disabled at the host. If disabling the SSC is not possible, a clock-isolation adapter card is required to isolate the SSC clock: an appropriate PLX switch can provide SSC isolation. The PLX integrated SSC isolation provides the capability to isolate the clock domains of two systems. SSC isolation gives designers the flexibility to develop products with asynchronous clock sources, removing the need for a single clock source for all PCIe components in a system. – When SSC isolation is enabled, the switch's Port 0 operates in the spread-spectrum-clocking domain, while the other ports operate in the constant-frequency-clock domain.

Summary. PCIe Gen3 appears to be a viable solution to inject data from the AMC40 into the EFF servers. We are ready to start testing PCIe Gen3 CPU-to-CPU connections, relying on commercial PCIe cable adapters linked with optical fibres. The next step is to replace one of the CPUs with a Stratix V FPGA.