Helmholtz International Center for CBM – Online Reconstruction and Event Selection
Open Charm Event Selection – Driving Force for FEE and DAQ

Open charm:
  D± (cτ = 312 μm):  D+ → K−π+π+ (9.5%)
  D0 (cτ = 123 μm):  D0 → K−π+ (3.8%),  D0 → K−π+π+π− (7.5%)
  Ds± (cτ = 150 μm):  Ds+ → K+K−π+ (5.3%)
  Λc+ (cτ = 60 μm):  Λc+ → pK−π+ (5.0%)

No simple trigger primitive at the single-track level, such as high pt, is available to tag events of interest. The only selective signature is the detection of the displaced decay vertex (a minimal sketch of such a vertex cut follows below).
→ Track reconstruction in STS/MVD and a displaced-vertex search are required in the first trigger level.
→ Such a complex trigger is not feasible within the latency limits of conventional front-end electronics, typically 4 μs at the LHC.
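A minimal C++ sketch of such a displaced-vertex cut. The data layout, the decay-length significance along the beam axis, and the ~4σ threshold are illustrative assumptions, not the actual CBM selection code:

#include <cmath>

// Vertex with position and packed covariance (xx, xy, yy, xz, yz, zz).
struct Vertex {
    double x, y, z;    // position [cm]
    double cov[6];     // covariance, lower triangle
};

// Decay-length significance of a secondary-vertex candidate with respect
// to the primary vertex, here simplified to the beam (z) direction.
inline double decayLengthSignificance(const Vertex& primary,
                                      const Vertex& secondary)
{
    const double dz  = secondary.z - primary.z;
    const double var = primary.cov[5] + secondary.cov[5];  // var(z) terms
    return dz / std::sqrt(var);
}

// Keep e.g. a D0 -> K-pi+ candidate only if its vertex is significantly
// detached from the primary vertex; ~4 sigma is an illustrative cut value.
inline bool isOpenCharmCandidate(const Vertex& pv, const Vertex& sv)
{
    return decayLengthSignificance(pv, sv) > 4.0;
}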
→ Work without an L1 trigger (the timestamp-driven data organization is sketched below):
  - use self-triggered front-end electronics
  - use timestamps to organize and correlate data
  - ship all hits to the subsequent data buffer and processing stages
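A minimal C++ sketch of how self-triggered, timestamped hits could be grouped into time slices before event building; the hit layout and the 10 μs slice length are illustrative assumptions:

#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

struct Hit {
    std::uint64_t timestamp;  // [ns] from the self-triggered front-end
    std::uint32_t channel;    // detector channel that fired
    std::uint16_t adc;        // measured amplitude
};

// Sort the free-streaming hits by time stamp and group them into
// fixed-length time slices, which are then shipped to the processing farm.
std::map<std::uint64_t, std::vector<Hit>>
buildTimeSlices(std::vector<Hit> hits, std::uint64_t sliceNs = 10000)
{
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.timestamp < b.timestamp; });
    std::map<std::uint64_t, std::vector<Hit>> slices;
    for (const Hit& h : hits)
        slices[h.timestamp / sliceNs].push_back(h);  // slice index -> hits
    return slices;
}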
High-Speed DAQ and Event Building
→ Typical parameters (for 10^7 interactions/sec and 1% occupancy): 100 kHz hit rate per channel, 600 kByte/sec data flow per channel.
→ The data flow out of the front-end electronics at 10^7 interactions/sec will be about 1 TByte/sec.
→ First-level event selection, which replaces the L1 trigger of a conventional system, is done in a processor farm fed with data from the event-building network.
[Architecture diagram: FPGA front-end stage feeding PC sub-farms; candidate processing platforms: STI Cell (gaming), Nvidia Tesla (GP GPU), Intel Larrabee (GP CPU), AMD Fusion (CPU/GPU), Cell as heterogeneous multi-core.]
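A rough consistency check of these rates. The per-hit size (~6 bytes) and the channel count (~2×10^6) are assumptions made here purely for illustration, not source figures:

#include <cstdio>

int main()
{
    const double hitRate     = 1.0e5;  // 100 kHz hits per channel
    const double bytesPerHit = 6.0;    // assumed raw hit size on the link
    const double perChannel  = hitRate * bytesPerHit;   // = 600 kByte/sec
    const double nChannels   = 2.0e6;                   // assumed channel count
    const double totalFlow   = perChannel * nChannels;  // ~1.2 TByte/sec
    std::printf("per channel: %.0f kB/s, total: %.1f TB/s\n",
                perChannel / 1e3, totalFlow / 1e12);
    return 0;
}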
Very efficient tracking algorithms are essential for the feasibility of the open charm event selection:
→ up to 10^9 tracks/sec in the silicon tracker
→ co-develop the silicon tracker layout and the tracking algorithm for the best overall performance
→ develop algorithms which exploit the full potential of modern processors. First step: use 'Single Instruction, Multiple Data' (SIMD) instructions. They are essential for the high performance of many multimedia applications (e.g. video codecs), but are rarely used in data analysis.
Concept of SIMD instructions: process a short vector per cycle (illustrated below).
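The SIMD concept in a minimal C++ sketch using SSE intrinsics: an operator-overloaded vector type lets the fitter be written once in scalar-looking form while each instruction processes four single-precision values, i.e. four track candidates at a time. This mimics the style of such SIMD wrapper classes but is not the actual tracker code:

#include <xmmintrin.h>

// Four floats processed per instruction (one SSE register).
struct Vec4f {
    __m128 v;
    Vec4f(float x) : v(_mm_set1_ps(x)) {}
    Vec4f(__m128 x) : v(x) {}
    friend Vec4f operator+(Vec4f a, Vec4f b) { return _mm_add_ps(a.v, b.v); }
    friend Vec4f operator*(Vec4f a, Vec4f b) { return _mm_mul_ps(a.v, b.v); }
};

// One Kalman-filter-style extrapolation step, x' = x + tx * dz,
// executed for four tracks in parallel.
inline Vec4f extrapolateX(Vec4f x, Vec4f tx, Vec4f dz)
{
    return x + tx * dz;
}

Written this way, the same source code advances four track candidates with one add and one multiply instruction instead of four scalar iterations.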
Best results were obtained with a Cellular Automaton based track finder with an integrated Kalman filter track fit:
→ allows the use of double-sided strip detectors even at high track densities
→ highly optimized code:
  - field approximated by polynomials (sketched below)
  - compact, cache-efficient data structures
  - most calculations SIMDized
  - fast on standard PCs
  - well adapted to next-generation many-core and wide-SIMD processors
  - already ported to the IBM Cell processor
→ very fast when only hard quasi-primary tracks are reconstructed, as needed in the online first-level selection of open charm candidates
→ supports reconstruction of soft tracks down to 100 MeV/c, as needed in the offline analysis

High-Speed Tracking Algorithms. Source: I. Kisel, KIP, Heidelberg and GSI, Darmstadt.
[Benchmark plots: optimization steps for the track fit routine; performance on different platforms (Intel P4 vs. Cell; nodes lxg1411, eh102, blade11, bc4); CPU time for track reconstruction and fit for a typical Au+Au collision.]
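A minimal sketch of the polynomial field approximation: instead of interpolating a large field map inside the fitting loop, each detector plane stores a few polynomial coefficients, and the field at (x, y) is evaluated with a handful of multiplications. The second-order form used here is an illustrative assumption; the real code may use a different order and layout:

// Magnetic field in one detector plane, approximated per component by
// B(x, y) = c0 + c1*x + c2*y + c3*x*x + c4*x*y + c5*y*y.
struct FieldSlice {
    double cx[6], cy[6], cz[6];  // coefficients for Bx, By, Bz

    static double eval(const double c[6], double x, double y)
    {
        return c[0] + c[1]*x + c[2]*y + c[3]*x*x + c[4]*x*y + c[5]*y*y;
    }

    void field(double x, double y, double B[3]) const
    {
        B[0] = eval(cx, x, y);
        B[1] = eval(cy, x, y);
        B[2] = eval(cz, x, y);
    }
};

This keeps the field evaluation branch-free and cache-friendly, which is also what makes it straightforward to SIMDize.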
R&D Roadmap
→ Detailed simulation and co-optimization of the tracking system and the analysis algorithms:
  - alternative sensor types (single-sided sensors)
  - alternative module layouts
→ Detailed studies of event selection algorithms:
  - an open charm selector covering all relevant channels (D0, D±, Ds, Λc)
  - design of a multi-level event selection
→ Mathematical and computational optimization of all algorithms
→ Determine the best platform (programmable logic vs. processor) for the different processing steps:
  - hit/cluster finding
  - tracklet finding
  - tracking/vertexing
→ Go beyond SIMDization (from scalars to vectors): address MIMDization (multi-threading on multi-core and many-core systems), as sketched after this list
→ Exploit the numerical throughput of dedicated-purpose processors such as GPUs (graphics processors)
→ Be ready for the emerging heterogeneous many-core systems
→ Re-design the algorithms to run efficiently on all CPU/GPU architectures
→ Investigate new languages for the performance-critical core of the algorithms, such as Ct or CUDA
[Figure: GPU = controller plus many ALUs; CPU = SIMD units, multiple cores.]
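A minimal sketch of such MIMDization with standard C++ threads: independent events (or track groups) are distributed over the cores, while each thread internally still runs the SIMDized fitter. The event structure, function names, and the static round-robin scheduling are placeholders:

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Event { /* hits, track candidates, ... */ };

// Stand-in for the SIMDized CA track finder + Kalman filter fit.
void reconstructEvent(Event&) { /* per-event reconstruction runs here */ }

void reconstructAll(std::vector<Event>& events)
{
    const unsigned nThreads =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nThreads; ++t)
        pool.emplace_back([&events, nThreads, t] {
            // Static round-robin partition of events over threads.
            for (std::size_t i = t; i < events.size(); i += nThreads)
                reconstructEvent(events[i]);
        });
    for (std::thread& th : pool)
        th.join();
}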