Summary: Computing and DAQ. Walter F.J. Müller, GSI, Darmstadt. 5th CBM Collaboration Meeting, GSI, March 9-12, 2005.

Computing and DAQ Session
Thursday 14:00 – 17:00, Theory Seminar Room
– Handle 20 PB a year
– CBM Grid: first steps
– Controls: not an afterthought this time
– Network & processing: p-p, p-A at >10^8 interactions/sec

Data rates (slide from D. Rohrich)
Data rates into HLPS:
– Open charm: 10 kHz * 168 kbyte = 1.7 Gbyte/sec
– Low-mass di-lepton pairs: 25 kHz * 84 kbyte = 2.1 Gbyte/sec
Data volume per year (no HLPS action): 10 Pbyte/year
(ALICE = 10 Pbyte/year: 25% raw, 25% reconstructed, 50% simulated)
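
As a quick cross-check of the bandwidth figures above, the following minimal sketch simply multiplies the quoted event rates by the quoted event sizes (nothing beyond the numbers on the slide is assumed; the yearly volume is not computed because it also depends on the effective beam time per year, which the slide does not state):

// Sanity check of the HLPS input bandwidth quoted above.
#include <cstdio>

int main() {
    const double open_charm_rate_hz = 10e3;    // 10 kHz
    const double open_charm_kbyte   = 168.0;   // kbyte per event
    const double dilepton_rate_hz   = 25e3;    // 25 kHz
    const double dilepton_kbyte     = 84.0;    // kbyte per event

    double oc = open_charm_rate_hz * open_charm_kbyte * 1e3 / 1e9;  // Gbyte/sec
    double dl = dilepton_rate_hz   * dilepton_kbyte   * 1e3 / 1e9;  // Gbyte/sec

    std::printf("open charm:          %.2f Gbyte/sec\n", oc);  // ~1.7
    std::printf("low-mass di-leptons: %.2f Gbyte/sec\n", dl);  // ~2.1
    return 0;
}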

Processing concept (slide from D. Rohrich)
HLPS tasks:
– Event reconstruction with offline quality
– Sharpen open charm selection criteria, reduce the event rate further
– Create compressed ESDs
– Create AODs
No offline re-processing:
– The same amount of CPU time is needed for unpacking and dissemination of the data as for reconstruction
– RAW -> ESD: never
– ESD -> ESD': only exceptionally

Data compression scenarios (slide from D. Rohrich)
Loss-less data compression:
– Run-length encoding (standard technique)
– Entropy coder (Huffman)
– Lempel-Ziv
Lossy data compression:
– Compress 10-bit ADC into 8-bit ADC using a logarithmic transfer function (standard technique)
– Vector quantization
– Data modeling
Perform all of the above wherever possible.
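
A minimal sketch of the lossy 10-bit-to-8-bit ADC compression mentioned above, assuming a simple logarithmic transfer function (the exact transfer function used in the experiment is not specified on the slide, so the mapping below is only illustrative):

// Lossy ADC compression: map 10-bit samples (0..1023) onto 8-bit codes (0..255)
// with a logarithmic transfer function, so small amplitudes keep fine
// granularity while large amplitudes are coarsened.
#include <cmath>
#include <cstdint>

// Encode: 10-bit value -> 8-bit code (assumed log mapping).
uint8_t encode10to8(uint16_t adc10) {
    const double k = 255.0 / std::log1p(1023.0);   // scale so that 1023 -> 255
    return static_cast<uint8_t>(std::lround(k * std::log1p(adc10)));
}

// Decode: 8-bit code -> approximate 10-bit value (inverse mapping).
uint16_t decode8to10(uint8_t code) {
    const double k = 255.0 / std::log1p(1023.0);
    return static_cast<uint16_t>(std::lround(std::expm1(code / k)));
}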

Offline and online issues (slide from D. Rohrich)
Requirements on the software:
– offline code = online code
Emphasis on:
– Run-time performance
– Clear interfaces
– Fault tolerance and error recovery
– Alignment
– Calibration
– "Prussian" programming

Storage concept (slide from D. Rohrich)
Main challenge of processing heavy-ion data: logistics
– No archival of raw data
– Storage of ESDs: advanced compression techniques (10-20%), only one pass
– Multiple versions of AODs

Dubna educational and scientific network: Dubna-Grid Project (2004) (slide from V. Ivanov)
More than 1000 CPUs
– Laboratory of Information Technologies, JINR
– University "Dubna"
– Directorate of the programme for development of the science city Dubna
– University of Chicago, USA
– University of Lund, Sweden
Creation of a Grid testbed on the basis of the resources of Dubna scientific and educational establishments, in particular JINR laboratories, International University "Dubna", secondary schools and other organizations.

Summary: middlewares (slide from K. Schwarz)
● LCG-2: GSI and Dubna
  - pro: large distribution, support
  - contra: difficult to set up, no distributed analysis
● AliEn: GSI, Dubna, Bergen
  - pro: in production since …
  - contra: uncertain future, no support
● Globus 2: GSI, Dubna, Bergen?
  - pro/contra: simple but functioning (no RB, no FC, no support)
● gLite/GT4: new on the market
  - pro/contra: nobody has production experience yet (gLite)

CBM Grid – Status
– CBM VO server setup
– First certificate in work
– Use for MC transport production this summer
– Initial participants: Bergen, Dubna, GSI, ITEP
– Initial middleware: AliEn (available on all 4 sites, a good workhorse)

ECS (Experiment Control System)
– Definition of the functionality of ECS and DCS
– Draft of the URD (user requirements document)
– Constitute an ECS working group

FEE – DAQ Interface
Three logical interfaces to the FEE:
– Hit data (out only)
– Clock and time (in only)
– Control (bidirectional)
Three specs: Time, DAQ, DCS
– First drafts ready for the fall 2005 CBM TB meeting
Diversity is inevitable, common interfaces are indispensable.
(block diagram on slide: FEE and concentrator or read-out controller, cave and shack)

DAQ BNet: currently investigated structure (slide from H. Essel; H.G. Essel, S. Linev: CBM DAQ BNet)
– Built from n × n switches, with n * (n - 1) / 2 bidirectional connections between the switches and n - 1 ports towards the CNet and PNet sides
– Components: H: histogrammer, TG: event tagger, HC: histogram collector, BC: scheduler, DD: data dispatcher, ED: event dispatcher, active buffers, BNet controller
– n = 4 corresponds to a 16 x 16 network
(topology diagram on slide)
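
A trivial numerical illustration of the interconnect scaling quoted above; it only evaluates the n * (n - 1) / 2 formula for a full mesh of switches and takes nothing else from the BNet design:

// Number of bidirectional links needed to fully interconnect n switches:
// each of the n switches links to the other n - 1, and every link is counted once.
#include <cstdio>

int main() {
    for (int n = 2; n <= 8; ++n) {
        int links = n * (n - 1) / 2;
        std::printf("n = %d switches -> %d bidirectional connections\n", n, links);
    }
    return 0;  // e.g. n = 4 -> 6 connections
}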

DAQ BNet: simulation with SystemC (slide from H. Essel)
Modules:
– event generator
– data dispatcher (sender)
– histogram collector
– tag generator
– BNet controller (scheduler)
– event dispatcher (receiver)
– transmitter (data rate, latency)
– switches (buffer capacity, max. number of packets in queue, 4K)
Running with 10 switches and 100 end nodes.
The simulation takes 1.5 * 10^5 times longer than the simulated time.
Various statistics (traffic, network load, etc.)
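
For readers unfamiliar with SystemC, the sketch below gives the flavour of such a simulation: a transmitter module characterised by a link data rate feeds a receiver through a bounded FIFO that stands in for a switch buffer. This is an illustration only; the module names, the 2.5 Gbit/s rate and the 4K packet size are assumptions for the example and not the actual BNet simulation code.

// Minimal SystemC sketch: rate-limited transmitter -> bounded FIFO -> receiver.
#include <systemc.h>
#include <iostream>

struct Packet { int id; unsigned bytes; };

// sc_fifo of a user type needs a stream operator for its print/dump methods.
std::ostream& operator<<(std::ostream& os, const Packet& p) {
    return os << "packet(" << p.id << ", " << p.bytes << " B)";
}

SC_MODULE(Transmitter) {
    sc_fifo_out<Packet> out;
    double gbit_per_s;                              // modelled link data rate
    SC_CTOR(Transmitter) : gbit_per_s(2.5) { SC_THREAD(run); }
    void run() {
        for (int i = 0; ; ++i) {
            Packet p{i, 4096};                      // 4K packets
            wait(p.bytes * 8 / gbit_per_s, SC_NS);  // serialisation time on the link
            out.write(p);                           // blocks if the buffer is full
        }
    }
};

SC_MODULE(Receiver) {
    sc_fifo_in<Packet> in;
    SC_CTOR(Receiver) { SC_THREAD(run); }
    void run() {
        for (;;) {
            Packet p = in.read();
            if (p.id % 100 == 0)
                std::cout << sc_time_stamp() << "  " << p << std::endl;
        }
    }
};

int sc_main(int, char*[]) {
    sc_fifo<Packet> link(64);                       // switch buffer: 64 packets
    Transmitter tx("tx");
    Receiver rx("rx");
    tx.out(link);
    rx.in(link);
    sc_start(10, SC_MS);                            // simulate 10 ms of traffic
    return 0;
}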

DAQ BNet: some statistics examples, single buffers excluded (plots on slide) (slide from H. Essel)

DAQ BNet: topics for investigation (slide from H. Essel)
– Event shaping
– Separate meta-data transfer system
– Addressing/routing schemes
– Broadcast
– Synchronization
– Determinism
– Fault tolerance
– Real test bed

Overview of processing architecture (slide from J. Gläß, Univ. Mannheim, Institute of Computer Engineering)
Processing resources:
– Hardware processors: L1/FPGA
– Software processors: L1/CPU
– Active buffers
– Sub-farm network: PNet

Architecture of the R&D prototype (slide from J. Gläß, Univ. Mannheim, Institute of Computer Engineering)
Communication via backplane:
– 4 boards, all-to-all
– different trace lengths
– up to 10 Gbit/s serial => FR4 or Rogers
FPGA with MGTs:
– up to 10 Gbit/s serial
– => XC2VPX20 (8 x MGT)
– => XC2VPX70 (20 x MGT)
Externals:
– 2 x ZBT SRAM
– 2 x DDR SDRAM
– for the PPC: Flash, Ethernet, …
Initialization and control:
– standalone board/system
– microcontroller running Linux
(board block diagram on slide: XC2VPX20 FPGA, DDR, ZBT, SFPs, Flash, RS232, Ethernet, PPC, 10GB connector, Linux µC)

Conclusion (slide from J. Gläß, Univ. Mannheim, Institute of Computer Engineering)
R&D prototype to learn:
– physical layer of communication: 2.5 Gbit/s up to 10 Gbit/s, chip-to-chip and board-to-board (-> connectors, backplane), PCB layout, impedances, PCB material (FR4, Rogers, …)
– next step: communication protocols
– more resources needed => XC2VPX70? Virtex-4? (availability?)
– external memories: fast controllers for ZBT and DDR RAM, PCB layout, termination, …

DAQ Challenge
Incredibly small (unknown) cross-section: pp -> X at 90 GeV beam energy
Q = 13.4 - 9.5 - 2.0 = 1.9 GeV (near threshold)
What is the theoretical limit for the hardware and DAQ?
How can one improve the sensitivity by clever algorithms?
More questions than answers.
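
For orientation, the Q value above follows from standard fixed-target kinematics: sqrt(s) = sqrt(m_beam^2 + m_target^2 + 2 m_target E_beam), minus the sum of the final-state masses. The sketch below only evaluates that formula; the 9.5 GeV mass is taken from the slide, the final state itself is not reconstructed here, and a 90 GeV proton beam on a proton target gives sqrt(s) of about 13.1 GeV, close to the 13.4 GeV quoted above.

// Fixed-target kinematics: centre-of-mass energy and Q value (illustrative).
#include <cmath>
#include <cstdio>

double sqrt_s_fixed_target(double m_beam, double m_target, double e_beam_lab) {
    return std::sqrt(m_beam * m_beam + m_target * m_target +
                     2.0 * m_target * e_beam_lab);
}

int main() {
    const double m_p = 0.938;                           // proton mass [GeV]
    double rts = sqrt_s_fixed_target(m_p, m_p, 90.0);   // 90 GeV proton beam
    // Q for producing a 9.5 GeV mass together with two protons
    // (the 9.5 GeV is the number quoted on the slide, not derived here).
    double q = rts - 9.5 - 2.0 * m_p;
    std::printf("sqrt(s) = %.1f GeV, Q = %.1f GeV\n", rts, q);
    return 0;
}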

Algorithms
Performance of the L1 feature extraction algorithms is essential; critical in CBM:
– STS tracking + vertex reconstruction
– TRD tracking and PID
Look for algorithms which allow a massively parallel implementation:
– Hough transform tracker: needs lots of bit-level operations, well suited for FPGAs
– Cellular automaton tracker
Co-develop tracking detectors and tracking algorithms:
– L1 tracking is necessarily speed-optimized (>10^9 tracks/sec) → possibly more detector granularity and redundancy needed
Aim for CBM: validate the final hardware design with at least 2 trackers suitable for L1
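
To illustrate why a Hough transform tracker maps well onto massively parallel FPGA logic, here is a minimal software sketch of a straight-line Hough transform over hit coordinates (illustrative only; the actual CBM L1 trackers use detector-specific track parameterisations that are not shown here):

// Minimal straight-line Hough transform: every hit (x, y) votes for all
// (theta, rho) cells with rho = x*cos(theta) + y*sin(theta); track candidates
// show up as accumulator maxima. Each vote is independent, which is why the
// method parallelises well in hardware (one accumulator bin per logic block).
#include <cmath>
#include <cstdio>
#include <vector>
#include <array>

constexpr int N_THETA = 64;        // angular bins
constexpr int N_RHO   = 64;        // distance bins
constexpr double RHO_MAX = 10.0;   // covers the toy coordinate range below
constexpr double PI = 3.14159265358979;

int main() {
    // A few hits lying roughly on the line y = 0.5 * x + 1 (toy input).
    std::vector<std::array<double, 2>> hits = {
        {0.0, 1.0}, {2.0, 2.1}, {4.0, 2.9}, {6.0, 4.0}, {8.0, 5.1}};

    int acc[N_THETA][N_RHO] = {};
    for (const auto& h : hits) {
        for (int t = 0; t < N_THETA; ++t) {
            double theta = PI * t / N_THETA;
            double rho = h[0] * std::cos(theta) + h[1] * std::sin(theta);
            int r = static_cast<int>((rho + RHO_MAX) / (2 * RHO_MAX) * N_RHO);
            if (r >= 0 && r < N_RHO) ++acc[t][r];    // independent vote
        }
    }

    // The best cell indicates how many hits are compatible with one straight line.
    int best_t = 0, best_r = 0;
    for (int t = 0; t < N_THETA; ++t)
        for (int r = 0; r < N_RHO; ++r)
            if (acc[t][r] > acc[best_t][best_r]) { best_t = t; best_r = r; }
    std::printf("best cell: theta bin %d, rho bin %d, votes %d\n",
                best_t, best_r, acc[best_t][best_r]);
    return 0;
}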