DEPFET Backend DAQ, Giessen Group 1
ATCA based Compute Node as Backend DAQ for the sBelle DEPFET Pixel Detector
Andreas Kopp, Wolfgang Kühn, Johannes Lang, Jens Sören Lange, Ming Liu, David Münchow, Johannes Roskoss, Qiang Wang (Tiago Perez, Daniel Kirschner)
II. Physikalisches Institut, Justus-Liebig-Universität Giessen
Colleagues involved in the project, but not (s)Belle members: Dapeng Jin, Lu Li, Zhen'An Liu, Yunpeng Lu, Shujun Wei, Hao Xu, Dixin Zhao (IHEP, Beijing)

DEPFET Backend DAQ, Giessen Group 2
Compute Node (CN) Concept
- 5 x Virtex-4 FX60 FPGAs
  - each FPGA has 2 x 300 MHz PowerPC cores
  - Linux (open source version), stored in FLASH memory
  - algorithm programming in VHDL (Xilinx ISE 10.1)
- ATCA (Advanced Telecommunications Computing Architecture) with full mesh backplane (point-to-point connections on the backplane from each CN to each other CN, i.e. no bus arbitration)
- optical links (connected to RocketIO at the FPGA)
- Gigabit Ethernet
- ATCA management (IPMI) by add-on card

DEPFET Backend DAQ, Giessen Group 3

DEPFET Backend DAQ, Giessen Group 4
Compute Node (CN)

DEPFET Backend DAQ, Giessen Group 5
Compute Node Data Transfer
- total integrated bandwidth ≤ 32 Gbit/s (all channels summed, theoretical limit)
- all 5 FPGAs are connected pairwise (on the board) by
  - one 32-bit general purpose bus (GPIO)
  - one full duplex RocketIO link
- 4 of the 5 FPGAs have two RocketIO links routed to the front panel using Multi-Gigabit Transceivers (MGT) → optical links
- 1 of the 5 FPGAs serves as a switch → has 13 RocketIO links to all the other compute nodes in the same ATCA shelf
- all 5 FPGAs have a Gigabit Ethernet link routed to the front panel; bandwidth tested with a Virtex-4 FX12 test board: ≤ 0.3 Gbit/s TCP, ≤ 0.4 Gbit/s UDP

DEPFET Backend DAQ, Giessen Group 6
ATCA Shelf, 1 kW

DEPFET Backend DAQ, Giessen Group 7
Size of the DAQ System
- Assuming a requirement of 100 Gbit/s for the whole pixel detector (estimate by H. Moser). ATTENTION: the estimate was changed at the Valencia meeting, see remarks later (slides 25, 26).
- → 1 ATCA shelf with 14 compute nodes (+2 spares)
- DATA IN: 8 optical links at 1.6 Gbit/s per compute node x 14 compute nodes per ATCA shelf = 180 Gbit/s, i.e. a factor 1.8 safety margin
- DATA OUT: 5 x Gigabit Ethernet links per compute node, ≤ 0.4 Gbit/s each
- 150k Euro investment in the BMBF application
- This is identical to the system size for the HADES upgrade (test beamtime at GSI, parallel to the existing DAQ system, planned for end of 2009)
- Note: the compute node is the DAQ prototype system for PANDA (>2016); the PANDA bandwidth requirement is ~10-20% higher than ATLAS, <3 x 10^7 interactions/s
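The input figure is simple link arithmetic; a minimal C check of the numbers above (a sketch, not part of the original slides):

    #include <stdio.h>

    int main(void)
    {
        const double link_rate       = 1.6;   /* Gbit/s per optical link (tested) */
        const int    links_per_cn    = 8;     /* optical inputs per compute node  */
        const int    nodes_per_shelf = 14;
        const double required        = 100.0; /* Gbit/s, H. Moser estimate        */

        double total = link_rate * links_per_cn * nodes_per_shelf;
        printf("input bandwidth: %.1f Gbit/s, safety margin: %.1f\n",
               total, total / required);      /* 179.2 Gbit/s, factor ~1.8 */
        return 0;
    }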

DEPFET Backend DAQ, Giessen Group 8
Status of the DAQ System
- 1 CN (1st generation) under test in Giessen for about half a year
- 2nd iteration (schematics, layout, fabrication) of the CN in spring/summer 2009; note: the PCB has 14 layers
- 1 full ATCA shelf (identical in size to the sBelle DEPFET system) planned by end of 2009 for the HADES experiment at GSI (test beam maybe spring 2010)

DEPFET Backend DAQ, Giessen Group 9
Compute Node Boot Sequence
- 5 x Virtex-4 FX60 FPGAs
- booting by a slave sequential configuration chain (not shown in the block schematics):
  1. power up; the CPLD powers up
  2. bitstream data (1 file, but containing the data for all 5 FPGAs) are copied from FLASH #0 to all 5 FPGAs
  3. the bitstream contains a small boot loader for each FPGA, which loads Linux (open source version) from local FLASH memory
  4. Linux is then copied to local DDR2 memory (volatile)

DEPFET Backend DAQ, Giessen Group 10
Test System at Giessen
Ming Liu, Qiang Wang, Johannes Lang

DEPFET Backend DAQ, Giessen Group 11
Current Status of the Test System
The 1st version of the CN PCB has been tested:
- optical 1.6 Gbps link to the TRB2 (HADES trigger and readout board, with CERN HPTDC and ETRAX one-chip PC), 0 bit errors in a 150-hour test
- Gigabit Ethernet
- JTAG chain
- CPLD+Flash system start-up mechanism and remote reconfigurability
- DDR2 SDRAM
- other peripherals

DEPFET Backend DAQ, Giessen Group 12
IPMI
- Intelligent Platform Management Interface
- I²C two-line serial interface (clock, data)
- ATCA power management: ~180 W per compute node needed, but only 10 W of management power; power-up → request to the shelf manager
- CN piggy-back add-on card, 75 x 35 mm, design/layout in Giessen; AVR ATmega microcontroller; 60-pin connector to the CN
- additional tasks: read temperature, read voltages (0.9/1.2/1.8/3.3/5.0 V, ADC via I²C), allows remote reset/reboot and hot swap (i.e. communicate to the shelf manager "I will be disconnected from the backplane now")
Johannes Lang
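How a voltage readout over I²C might look on the add-on card's microcontroller; a minimal sketch using the AVR TWI peripheral, with a hypothetical monitoring-ADC address and register layout (the actual Giessen firmware may differ):

    #include <avr/io.h>
    #include <stdint.h>

    #define ADC_ADDR 0x48  /* hypothetical I2C address of the monitoring ADC */

    static void twi_start(void) {
        TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);
        while (!(TWCR & (1 << TWINT)));      /* wait for START condition sent */
    }
    static void twi_write(uint8_t data) {
        TWDR = data;
        TWCR = (1 << TWINT) | (1 << TWEN);
        while (!(TWCR & (1 << TWINT)));      /* wait for byte transmitted */
    }
    static uint8_t twi_read(uint8_t ack) {
        TWCR = (1 << TWINT) | (1 << TWEN) | (ack ? (1 << TWEA) : 0);
        while (!(TWCR & (1 << TWINT)));      /* wait for byte received */
        return TWDR;
    }
    static void twi_stop(void) {
        TWCR = (1 << TWINT) | (1 << TWEN) | (1 << TWSTO);
    }

    /* read one raw ADC value, e.g. for the 3.3 V rail (channel numbering
       is hypothetical; conversion to volts depends on the actual ADC) */
    uint16_t read_voltage_raw(uint8_t channel) {
        twi_start();
        twi_write(ADC_ADDR << 1);            /* address + write bit */
        twi_write(channel);                  /* select input channel */
        twi_start();                         /* repeated START */
        twi_write((ADC_ADDR << 1) | 1);      /* address + read bit */
        uint16_t hi = twi_read(1);           /* MSB, send ACK */
        uint16_t lo = twi_read(0);           /* LSB, send NACK */
        twi_stop();
        return (uint16_t)((hi << 8) | lo);
    }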

DEPFET Backend DAQ, Giessen Group 13
IPMI Add-On Board
Johannes Lang

DEPFET Backend DAQ, Giessen Group 14
IPMI Add-On Board
Johannes Lang

DEPFET Backend DAQ, Giessen Group 15
Algorithms to run on the CN?
1. pixel subevent building
2. data reduction (?): if the rate estimate is correct, we must achieve a data reduction of a factor ~20 on the CN. Preliminary idea (sketched in code below):
   1. receive CDC data (from COPPER)
   2. receive SVD data (from COPPER)
   3. track finding and track fitting
   4. extrapolation to the pixel detector
   5. matching to pixel hits
   6. identify synchrotron radiation hits (i.e. no track match) and discard them
3. data compression (?)
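The core of the reduction (steps 5 and 6) is a hit-to-track match; a minimal C sketch under assumed types and a hypothetical match radius (the real CN implementation would be in VHDL):

    #include <stddef.h>

    typedef struct { float u, v; } PixelHit;    /* hit position on the sensor  */
    typedef struct { float u, v; } TrackPoint;  /* track extrapolated to sensor */

    /* keep a pixel hit only if some extrapolated track passes within r_cut */
    size_t reduce_hits(PixelHit *hits, size_t n_hits,
                       const TrackPoint *trk, size_t n_trk, float r_cut)
    {
        size_t kept = 0;
        for (size_t i = 0; i < n_hits; i++) {
            for (size_t j = 0; j < n_trk; j++) {
                float du = hits[i].u - trk[j].u;
                float dv = hits[i].v - trk[j].v;
                if (du * du + dv * dv < r_cut * r_cut) {
                    hits[kept++] = hits[i];  /* matched: keep the hit */
                    break;
                }
            }
            /* no track match: treated as synchrotron-radiation background,
               i.e. the hit is simply not copied to the output */
        }
        return kept;   /* number of hits surviving the reduction */
    }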

DEPFET Backend DAQ, Giessen Group 16
Example Algorithm #1: HADES Event Selector
1. reads HADES raw data from DDR2 memory (HADES binary data format)
2. uses the PLB bus for memory access (the LocalLink multiport memory controller is only supported by the newest core from Xilinx)
3. copies data from DDR2 to Block RAM (on the FPGA)
4. then reads/writes FIFOs from/to Block RAM
5. event buffer
6. small event decoder
7. issues an event yes/no decision: discard the event or write it back
Shuo Yang
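Functionally, the selector is a filter loop over an event stream. A minimal software model in C with a hypothetical event layout and a placeholder decision function (the actual selector is FPGA logic; measurements follow two slides below):

    #include <stdint.h>
    #include <stddef.h>

    /* hypothetical raw-event layout: first word = event length in words */
    typedef struct { uint32_t size_words; uint32_t payload[]; } Event;

    /* placeholder for the small event decoder issuing the yes/no decision */
    static int keep_event(const Event *ev) {
        return ev->payload[0] != 0;   /* hypothetical selection criterion */
    }

    /* walk the input buffer; write back accepted events, discard the rest */
    size_t select_events(const uint32_t *in, size_t n_words, uint32_t *out)
    {
        size_t r = 0, w = 0;
        while (r < n_words) {
            const Event *ev = (const Event *)&in[r];
            size_t len = ev->size_words;          /* length incl. header */
            if (len == 0) break;                  /* guard against corrupt data */
            if (keep_event(ev)) {
                for (size_t k = 0; k < len; k++)  /* copy accepted event */
                    out[w + k] = in[r + k];
                w += len;
            }
            r += len;
        }
        return w;                                 /* words written back */
    }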

DEPFET Backend DAQ, Giessen Group 17
Example Algorithm #1: HADES Event Selector
Shuo Yang

DEPFET Backend DAQ, Giessen Group 18
Example Algorithm #1: HADES Event Selector
- event selection rates of 100% and 25%
- different FIFO sizes (DMA sizes)
- processing throughput of <150 and <100 MB/s (expected to be higher if the DMA size is increased)
Shuo Yang

DEPFET Backend DAQ, Giessen Group 19
Example Algorithm #2: HADES Track Finder
- here max. 12 drift chamber wires fired per track
- nucleus+nucleus collisions, large track density expected
Ming Liu

DEPFET Backend DAQ, Giessen Group 20
Example Algorithm #2: HADES Track Finder
- PLB slave interface (PLB IPIF) for system control
- LocalLink master interface for data reads/writes from/to DDR2 memory
- algorithm processor (track finding)
Ming Liu

DEPFET Backend DAQ, Giessen Group 21
Example Algorithm #2: HADES Track Finder
- Results: FPGA resource utilization of the Virtex-4 FX60 (<1/5) – acceptable!
- Timing limitation: 125 MHz without optimization; clock frequency fixed at 100 MHz to match the PLB speed
- Processing:
  - C program running on a Xeon 2.4 GHz computer as software reference
  - different wire multiplicities (10, 30, 50, 200, 400 fired wires out of 2110)
  - speedup of 10.8-24.3 times per module (compared to the reference)
  - tried integration of multiple cores on 1 FPGA for parallel processing (even higher performance, speedup of ~2 orders of magnitude)
Ming Liu

DEPFET Backend DAQ, Giessen Group 22
A remark on a data compression algorithm
- In 2003, for the DAQ workshop in Nara, we tried MPEG-2 compression on SVD1.5 data (test runs taken by Itoh-san)
- DST (not MDST), i.e. incl. the SVD raw data panther banks
- L4 switched off, i.e. incl. some background (but L3 on)
- compression factor of ~1.83 achieved
- MPEG encoding works on frames (data chunks) → might be easily parallelized on an FPGA (see the sketch below)
- C source code for MPEG-2 is freely available
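Why frame-based encoding parallelizes well: each chunk is compressed independently, so chunks can be distributed over parallel engines. A minimal C sketch with a trivial placeholder standing in for the MPEG-2 encoder core (chunk size and interfaces are hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    #define FRAME_WORDS 4096   /* hypothetical chunk size */

    /* placeholder for one MPEG-2 encoder core: here it just copies bytes
       (compression factor 1); returns the compressed size in bytes */
    static size_t encode_frame(const uint32_t *in, size_t n, uint8_t *out) {
        const uint8_t *src = (const uint8_t *)in;
        for (size_t i = 0; i < n * 4; i++) out[i] = src[i];
        return n * 4;
    }

    /* split the stream into independent frames and encode each; the loop
       iterations have no data dependence -> one engine per FPGA (or per
       core instance) could process frames in parallel */
    size_t compress_stream(const uint32_t *in, size_t n_words, uint8_t *out)
    {
        size_t written = 0;
        for (size_t off = 0; off < n_words; off += FRAME_WORDS) {
            size_t n = (n_words - off < FRAME_WORDS) ? n_words - off
                                                     : FRAME_WORDS;
            written += encode_frame(in + off, n, out + written);
        }
        return written;
    }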

DEPFET Backend DAQ, Giessen Group 23

DEPFET Backend DAQ, Giessen Group 24
Additional Algorithm Development (all incl. FPGA implementation, work ongoing)
- Johannes Roskoss – HADES
  - RICH ring finder
  - match ring to drift chamber track (straight line, <12 wires per track)
- David Münchow – PANDA
  - track finder for the PANDA straw tube tracker: conformal mapping and Hough transform (see the sketch below)
  - helix, 30 hits per track, ~10 tracks per event
- Andreas Kopp – HADES
  - drift chamber track incl. momentum kick in the dipole field
  - match to TOF (2-sided read out scintillator paddles)
  - match to the EM shower detector
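Background on the conformal mapping + Hough transform approach: a circle through the origin (the bending-plane projection of a helix from the vertex) maps to a straight line under u = x/(x²+y²), v = y/(x²+y²), so helix finding reduces to a straight-line Hough transform. A minimal C sketch with hypothetical binning parameters (the actual implementation is VHDL on the CN):

    #include <math.h>

    #define N_THETA 64
    #define N_R     64
    #define R_MAX   1.0f          /* hypothetical accumulator range */

    /* Hough accumulator for lines r = u*cos(theta) + v*sin(theta) */
    static unsigned short acc[N_THETA][N_R];

    /* conformal mapping: a circle through the origin becomes a
       straight line in the (u,v) plane */
    static void conformal(float x, float y, float *u, float *v)
    {
        float r2 = x * x + y * y;  /* hits at the origin must be excluded */
        *u = x / r2;
        *v = y / r2;
    }

    /* vote for all line hypotheses compatible with one mapped hit;
       track candidates appear as maxima in the accumulator */
    static void hough_vote(float u, float v)
    {
        const float PI = 3.14159265f;
        for (int it = 0; it < N_THETA; it++) {
            float t = PI * it / N_THETA;
            float r = u * cosf(t) + v * sinf(t);
            int ir = (int)((r / R_MAX + 1.0f) * 0.5f * N_R);
            if (ir >= 0 && ir < N_R)
                acc[it][ir]++;
        }
    }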

DEPFET Backend DAQ, Giessen Group 25
Open Questions on Data Reduction
- Can the compute nodes get the CDC and SVD data from COPPER, and then run a track finder algorithm?
  - by Gigabit Ethernet?
  - if yes: what is the latency? data size? rate? protocol?
- At the input of the event builder, or at the output of the event builder (i.e. the input of L3)?
- Is it acceptable for the DAQ group? etc.

DEPFET Backend DAQ, Giessen Group 26
Plan: Increase of Input Bandwidth
- At the Valencia meeting, it was decided to
  a) increase the number of pixel rows, to increase the resolution in the z direction
  b) increase the readout time (accordingly) to 20 µs
  → factor 2 higher data volume
- Modification of the CN required
- If we increase the number of links from 44 to 88: no problem (in 1 ATCA shelf there are 112 links)
- If we keep the number of optical links at 44: the links are tested for 1.6 Gbit/s, but 200 Gbit/s over 44 links is ~4.5 Gbit/s per link, so we need <5 Gbit/s per link, i.e.
  - change the FPGA: Virtex-4 FX60-10 ($904) → Virtex-4 FX60-11 ($1131)
  - change the optical link transceiver: FTLF8519P2BNL ($45) → FTLF8528P2BCK ($140)

DEPFET Backend DAQ, Giessen Group 27
Plan: Increase of Input Bandwidth
- The price per compute node increases by ~20% (from $8100 to $9995) → we must reduce the number of CNs from 14 to 11 to keep the budget
- Note: a bandwidth of >1.6 Gbit/s per link was never tested
- Plan: in the next CN iteration (expected May/June 2009), 1 FPGA on 1 of the new CNs, plus the transceivers, will be replaced (in any case)
- Testing will be done soon afterwards

DEPFET Backend DAQ, Giessen Group 28
BMBF Application, Details
Manpower (applied for):
- 1 postdoc
- 2 Ph.D. students
Manpower (other funding sources):
- Wolfgang Kühn 20%
- Sören Lange 35%
- 1 Ph.D. student (funded by EU FP7) 50%
Travel budget:
- 2 trips to KEK per year (2 persons)
- 2 trips inside Germany per year (3 persons)
- … months at KEK for one person
- … months at KEK for one person
Our share of the workshops (electronics and fine mechanics) is 1:1:1 for PANDA:HADES:Super-Belle, but the electronics workshop is not involved in the compute node.