CHEP06, Mumbai. Hans G. Essel, GSI / FAIR, CBM collaboration, www.gsi.de. About FAIR, About CBM, About FutureDAQ, About Demonstrator.

Presentation transcript:

CHEP06, Mumbai - Hans G. Essel, GSI / FAIR, CBM collaboration
About FAIR, About CBM, About FutureDAQ, About Demonstrator

The international FAIR project, 2014
Accelerators and rings: UNILAC, SIS, FRS, ESR; SIS 100/300, Super FRS, HESR, NESR, CR, RESR, FLAIR
Primary beams: 238U28+ at a large factor over the present intensity; 2(4)x10^13/s protons at 30 GeV; 238U92+ up to 35 GeV/u; protons up to 90 GeV
Secondary beams: broad range of radioactive beams, with a large gain in intensity over the present facility; antiprotons; cooled beams
Key technical features: rapidly cycling superconducting magnets; storage and cooler rings for radioactive beams; e-A (or antiproton-A) collider; stored and cooled antiprotons; polarized antiprotons(?)

FAIR in 2014

Scientific programs
- Nuclear structure physics and nuclear astrophysics with RIBs
- Hadron physics with antiproton beams
- Physics of nuclear matter with relativistic nuclear collisions: Compressed Baryonic Matter
- Plasma physics with highly bunched beams
- Atomic physics and applied science with highly charged ions and low-energy antiprotons
- plus accelerator physics

Compressed Baryonic Matter: U+U at 23 AGeV

CBM physics topics and observables
1. In-medium modifications of hadrons (p-A, A-A): onset of chiral symmetry restoration at high baryon densities; measure ρ, ω, φ → e+e- (µ+µ-), J/Ψ, open charm (D0, D±)
2. Indications for deconfinement at high baryon densities (heavy A-A): anomalous charmonium suppression? measure the D excitation function and J/Ψ → e+e- (µ+µ-); softening of the EOS: measure the flow excitation function
3. Strangeness in matter: enhanced strangeness production? multi-quark states? measure kaons and (multi-)strange hyperons
4. Critical endpoint of the deconfinement phase transition: event-by-event fluctuations; measure π, K
Problems for a LVL1 trigger:
- high data rates
- short latency (µs)
- complex signatures (displaced vertices)
- most of the data is needed: displaced vertices, e+e- pairs at high pt, low cross sections

The data acquisition
1. A conventional LVL1 trigger would imply full displaced-vertex reconstruction within a fixed (short) latency.
2. Strongly varying, complex event-filter decisions are needed on almost the full event data.
Therefore: no common trigger; self-triggered channels with time stamps; event filters.
New paradigm: switch the full data stream into event-selector farms.
- 10 MHz interaction rate (expected)
- 1 ns time stamps in all data channels, ~10 ps jitter (required)
- 1 TByte/s primary data rate (expected); for comparison, PANDA < 100 GByte/s
- maximum archive rate of order 1 GByte/s (required); PANDA < 100 MByte/s
- event definition by time correlation: multiplicity-over-time histograms (required)
- event filter down to 20 kHz, i.e. a 1 GByte/s archive rate with compression (required)
- on-line track and (displaced-)vertex reconstruction (required)
- data-flow driven, so latency is not a problem (expected)
- less complex communication, but a high data rate to sort
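The "event definition by time correlation" item above can be pictured with a small sketch: hits carry only time stamps, the hit multiplicity is histogrammed over time, and bins above a threshold become event candidates. This is an illustration only, not CBM code; the bin width, threshold and Hit layout are assumptions.

```cpp
// Minimal sketch of event definition by time correlation
// (multiplicity-over-time histogramming). All parameters are illustrative.
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

struct Hit { uint64_t timestamp_ns; uint32_t channel; };

// Histogram hit multiplicity in fixed time bins and return the start times of
// the bins whose multiplicity exceeds a threshold; each is an event candidate.
std::vector<uint64_t> findEventCandidates(const std::vector<Hit>& hits,
                                          uint64_t bin_ns, std::size_t threshold) {
    std::map<uint64_t, std::size_t> histogram;      // bin index -> multiplicity
    for (const Hit& h : hits)
        ++histogram[h.timestamp_ns / bin_ns];
    std::vector<uint64_t> candidates;
    for (const auto& [bin, mult] : histogram)
        if (mult >= threshold)
            candidates.push_back(bin * bin_ns);     // start time of the candidate
    return candidates;
}
```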

FutureDAQ
European project 2004 (FP6 → I3HP → JRA1); spokesperson: Walter F.J. Müller, GSI
- FP6: 6th Framework Programme on research, technological development and demonstration
- I3HP: Integrated Infrastructure Initiative in Hadron Physics
- JRA: Joint Research Activity
Participants: GSI; Kirchhoff Institute for Physics, Univ. Heidelberg; University of Mannheim; Technical University Munich; University of Silesia, Katowice; Krakow University; Warsaw University; Giessen University; RMKI Budapest; INFN Torino

Studying A+A collisions: HADES detector at beam energies of 1-8 AGeV; CBM detector at the higher FAIR beam energies (AGeV range)

The CBM detectors, at 10^7 interactions per second!
- Radiation-hard silicon (pixel/strip) tracker in a magnetic dipole field
- Electron detectors: RICH & TRD & ECAL, pion suppression up to 10^5
- Hadron identification: RICH, RPC
- Measurement of photons, π0, η and muons: electromagnetic calorimeter (ECAL)
Multiplicities per central event: 160 p, 400 π-, 400 π+, 44 K+, 13 K-, 800 γ; 1817 in total, at 10 MHz
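As a quick consistency check, the individual multiplicities quoted for a central event add up to the stated total:

$$160\,(p) + 400\,(\pi^-) + 400\,(\pi^+) + 44\,(K^+) + 13\,(K^-) + 800\,(\gamma) = 1817.$$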

The CBM detectors: central Au+Au collision at 25 AGeV, simulated with UrQMD + GEANT4 (event display); detector systems and multiplicities as on the previous slide, at 10^7 interactions per second

The CBM detectors: central Au+Au collision at 25 AGeV (UrQMD + GEANT4); performance plots of track-finding efficiency and momentum resolution

DAQ hierarchy
- Detector front-ends: ~50000 FEE chips deliver time-stamped data
- TNet: time distribution
- CNet: collects data into buffers; ~1000 links at 1 GB/s each; ~1000 collectors; ~1000 active buffers with data dispatchers
- BNet: 1000x1000 switching network, sorts the time-stamped data; event dispatchers
- PNet: processes events, level 1 & 2 selection; ~100 subfarms with ~100 nodes per subfarm; ~10 dispatchers feed one subfarm
- HNet: high-level selection and output processing, ~1 GB/s to high-level computing and archiving

TNet: clock/time distribution
Challenge of time distribution:
- TNet must generate a GHz time clock with ~10 ps jitter
- it must provide global state transitions with clock-cycle-precise latency
- hierarchical splitting into 1000 CNet channels
Consequences for the serial FEE links and CNet switches:
- bit- and clock-cycle-precise transmission of time messages
- low-jitter clock recovery required
- FEE link and CNet will likely use a custom SERDES (e.g. OASE)

CNet: data concentrator
- Collects hits from the readout boards (FEE) into active buffers (data dispatchers) at 1 GByte/s
- Captures hit clusters, communicates between geographically neighbouring channels
- Distributes time stamps and clock (from TNet) to the FEE
- Low-latency bi-directional optical links
- Possibly also carries detector control & status messages

BNet: building network
The BNet has to sort parallel data into sequential event data. Two mechanisms, both with traffic shaping:
- switch by time intervals: (-) all raw data goes through the BNet; (+) event definition is done behind the BNet in the PNet compute resources
- switch by event intervals: event definition is done in the BNet by multiplicity histogramming; (-) some bandwidth is required for histogramming; (+) suppression of incoherent background and peripheral events; (+) potentially significant reduction of BNet traffic
The data dispatcher and event dispatcher functionality is implemented on one active buffer board using bi-directional links. Simulations use a mesh-like topology.
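A minimal sketch of the "switch by time intervals" idea, assuming a fixed epoch length and a simple round-robin assignment of epochs to event-selector nodes; the real schedule is produced by the BNet controller, and all numbers here are illustrative.

```cpp
// Every data dispatcher ships the data of one time slice (epoch) to the same
// event-selector node, chosen by a deterministic rule. Epoch length and node
// count are assumptions, not CBM parameters.
#include <cstdint>

constexpr uint64_t kEpochNs  = 100000;  // assumed epoch length: 100 µs
constexpr uint32_t kBuilders = 100;     // assumed number of event-selector nodes

// All dispatchers apply the same mapping, so hits belonging to the same epoch
// converge on the same builder regardless of which channel produced them.
uint32_t builderForTimestamp(uint64_t timestamp_ns) {
    uint64_t epoch = timestamp_ns / kEpochNs;
    return static_cast<uint32_t>(epoch % kBuilders);
}
```

Because every dispatcher uses the same epoch-to-node mapping, all data of one time slice ends up on one node, where event definition can then be done.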

BNet: factorization of the 1000x1000 switch
- A large switch is factorized into n switches of n x n ports each, connected as a mesh: n-1 ports of every switch go to the other switches, giving n*(n-1)/2 bi-directional connections.
- The figure shows n = 4 with 16 double-nodes; the full system needs 32 switches of 32x32 with 496 bi-directional connections.
- Data transfer in time slices (determined by epochs or event streams)
- Bandwidth reserved for histogramming and scheduling (traffic shaping)
- 1 GByte/s point-to-point links
- Components: H = histogrammer, TG = event tagger, HC = histogram collector, BC = scheduler, DD = data dispatcher, ED = event dispatcher; active buffers (DD/ED and DD/HC) sit between CNet and PNet, TG/BC form the BNet controller
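Worked numbers behind the factorization formula above (an illustrative check, not from the slide): with 32-port switches,

$$\frac{n(n-1)}{2}\Big|_{n=32} = \frac{32 \cdot 31}{2} = 496 \ \text{mesh connections}, \qquad 32 \times 32 = 1024 \approx 1000 \ \text{end nodes}.$$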

Simulation of the BNet with SystemC
Modules:
- event generator
- data dispatcher (sender)
- histogram collector
- tag generator
- BNet controller (schedule)
- event dispatcher (receiver)
- transmitter (data rate, latency)
- switches (buffer capacity, maximum number of queued packets, 4K)
Running with 10 switches and 100 end nodes; the simulation takes 1.5 x 10^5 times longer than the simulated time. Various statistics are produced (traffic, network load, etc.).
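To make the module list above concrete, here is what one of these SystemC modules could look like: a transmitter parameterized by data rate and latency. This is an illustrative sketch, not the actual CBM simulation code; module, port and parameter names are assumptions.

```cpp
// Illustrative SystemC transmitter module: models a link with a fixed latency
// and a data-rate-dependent serialization delay.
#include <systemc.h>

SC_MODULE(Transmitter) {
    sc_fifo_in<int>  in;          // packets (here just sizes in bytes) from a sender
    sc_fifo_out<int> out;         // packets towards a switch / receiver

    double  rate_bytes_per_ns;    // link data rate
    sc_time latency;              // fixed link latency

    void run() {
        for (;;) {
            int bytes = in.read();                             // wait for a packet
            sc_time serialization(bytes / rate_bytes_per_ns, SC_NS);
            wait(latency + serialization);                     // model transfer time
            out.write(bytes);                                  // deliver downstream
        }
    }

    SC_CTOR(Transmitter) : rate_bytes_per_ns(1.0), latency(100, SC_NS) {
        SC_THREAD(run);
    }
};
```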

BNet: SystemC simulations, 100x100 (single buffers excluded!)
Plots: data and meta-data traffic, without and with traffic shaping

PNet: structure of a sub-farm
- A sub-farm is a collection of compute resources connected by a PNet
- Compute resources are programmable logic (FPGAs) and processors
- Likely choice for the processors: high-performance SoC components (CPUs, memory and high-speed interconnect on one chip, optimized for low W/GFlop and high packing density; see QCDOC, Blue Gene, STI Cell, ...)
- The PNet uses built-in serial links connected through switches; PCIe-AS is a candidate for a commonly used serial interconnect
- A plausible scenario for the low-level compute farm: O(100) sub-farms with O(100) compute resources each; one sub-farm on O(10) boards in one crate
- Consequences: only chip-to-chip and board-to-board links in the PNet, hence only short-distance (< 1 m) communication

PNet: first & second level computing
Sub-farms, each with 32 FPGAs and 32 CPUs, fed at 1 GByte/s
- Event selection level 1 (FPGA): 1%
- Event selection level 2 (CPU): 10%
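One way to read these fractions (an assumption, the slide does not spell it out) is as the share of events kept at each level. The rate left after both stages would then be

$$10\ \mathrm{MHz} \times 0.01 \times 0.1 = 10\ \mathrm{kHz},$$

of the same order as the ~20 kHz event-filter target quoted earlier.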

Summary
Five different networks with very different characteristics:
- CNet (custom): captures hit clusters, communicates between geographically neighbouring channels; distributes time stamps and clock (from TNet) to the FEE; low-latency bi-directional optical links (OASE); possibly also carries detector control & status messages; connects custom components (FEE ASICs, FPGAs)
- TNet (custom): generates a GHz time clock with ~10 ps jitter; provides global state transitions with clock-cycle-precise latency
- BNet (standard technology, e.g. Ethernet or InfiniBand): switch by time intervals (event definition done behind the BNet in the PNet compute resources) or switch by event intervals (event definition done in the BNet by multiplicity histogramming)
- PNet (custom): short distance, making the most efficient use of already built-in links (e.g. PCIe-AS); connects standardized components (FPGAs, SoCs)
- HNet: general purpose, to the archive

DAQ Demonstrator
The goal: investigate critical technology, CBM detector tests, replace the existing DAQ
- FE boards: sampling ADCs (4*4), clock distribution
- Data collector (DC) boards, clock distribution to the FE; 2.5 GBit/s bi-directional (optical) links carrying data and clock
- Data dispatcher (DD) board: PCI Express card in the PC (active buffer board, ABB); 2.5 GBit/s data links
- 8-20 dual/quad PCs, Gigabit Ethernet and an InfiniBand switch; bidirectional event building
- Scales up to 10k channels and 160 CPUs

Prototype for DC and DD
- Bi-directional (optical) link: data, trigger, RoI, control, clock
- FPGA: logic for data (and protocol) processing; embedded processor (PPC) for control, with external memory (DDR) and Ethernet as the main control interface
- External memory for data storage
- PC interface (PCI Express)
- Interface to the BNet
Block diagram: data collector board (DC) with an FPGA, four OASE or MGT links, memory, Ethernet and PPC; data dispatcher (DD) with PCIe (MGT) and memory

Prototype for DC and DD
V4FX test board for DC and DD (Joachim Gläß, Univ. Mannheim):
- PPC with external DRAM and Ethernet (Linux): software for control
- Test and classification of MGTs: optical, copper, backplane, ...
- Test of OASE
- PCI Express interface to the PC
- Develop and test RAM controllers
- Develop and test (shrunk-down) algorithms
Board features: Virtex-4 MGTs, 4 x OASE on a mezzanine, 2 x RAM on a mezzanine, probe on a mezzanine, SFP (miniGBIC), PCI Express
Serves as the DC prototype and the DD prototype for the demonstrator

Activities
- Final FE, DC and DD boards (to do)
- FPGA code (to do)
- Link management (to do)
- Data formats (to do)
- Interfaces (to do)
- Framework: xDaq (CMS) under investigation
- Controls: EPICS (+ xDaq) under investigation
- InfiniBand cluster with 4 dual-Opteron PCs installed; MPI and uDAPL tested (messages, RDMA), 900 KB/s with 20 KB buffers
- In production by the end of 2007 (hopefully)
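The MPI message test mentioned above could look roughly like the following ping-pong bandwidth probe between two cluster nodes. This is only a sketch of the kind of measurement, not the benchmark actually used; the iteration count is an assumption, while the 20 KB buffer matches the figure quoted on the slide.

```cpp
// Minimal MPI ping-pong bandwidth test between rank 0 and rank 1.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int kBytes = 20 * 1024;   // 20 KB buffers, as quoted on the slide
    const int kIters = 1000;        // assumed iteration count
    std::vector<char> buf(kBytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < kIters; ++i) {
        if (rank == 0) {
            MPI_Send(buf.data(), kBytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), kBytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), kBytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), kBytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;
    if (rank == 0)
        std::printf("round-trip bandwidth: %.1f MB/s\n",
                    2.0 * kIters * kBytes / dt / 1e6);
    MPI_Finalize();
    return 0;
}
```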