New Experiences with the ALICE High Level Trigger Data Transport Framework

Timm M. Steinbeck (for the ALICE Collaboration)
Computer Science/Computer Engineering
Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg, Germany

CHEP04, Interlaken, Sep. 27th – Oct. 1st 2004

Outline

– Overview ALICE / High Level Trigger (HLT)
– Overview Framework / Components
– Benchmarks & Tests
  ● Framework Interface Benchmark
  ● HLT Data Challenge
  ● TPC Beamtest Experience
– Summary & Conclusion

Overview – ALICE

– ALICE: A Large Ion Collider Experiment
– Heavy-Ion Mode:
  ● Up to … particles in an event
  ● Max. event size: >70 MB
  ● Max. input rate from TPC: 200 Hz
  ● Input data stream: ~25 GB/s
  ● Output data stream: ~1.2 GB/s
– Proton-Proton Mode:
  ● Max. input rate from TPC: 1 kHz
  ● Event size: ~3 MB

See also the talk by M. Richter.

Overview – HLT

– Processing in several steps, from raw detector data to full event reconstruction
– Large PC cluster
  ● Initially … nodes
  ● Arranged in hierarchy levels (HL) that match the detector layout and analysis steps
– Exact processing sequence and hierarchy not known in advance
– Flexible and efficient software needed to transport data through the cluster
– Framework consisting of independent, communicating components

Overview of the Framework

– Framework consists of independent components
– Components communicate via a publisher-subscriber interface
  ● Data-driven architecture
  ● Components receive data from other components
  ● Process the received data
  ● Forward their own output data to the next component
– Named pipes are used to exchange small messages and descriptors
– The data itself is exchanged via shared memory

Major design criteria:
  ● Efficiency: CPU cycles should be spent on analysis
  ● Flexibility: different configurations are needed
  ● Fault tolerance: PCs are unreliable

[Diagram: two chained components (Subscriber → Processing → Publisher), exchanging event data via shared memory while descriptors are passed over a named pipe]
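To make the mechanism above concrete, here is a minimal, self-contained POSIX C++ sketch of the idea: the event payload lives in shared memory and only a small descriptor travels through a named pipe. All names below (EventDescriptor, the shared-memory and pipe paths) are invented for this illustration and are not the framework's actual API.

// Minimal POSIX sketch (not the framework's actual API) of the mechanism
// described above: event data is placed in shared memory, and only a small
// descriptor is passed through a named pipe to announce it.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

struct EventDescriptor {   // the small message sent over the named pipe
    uint64_t eventId;
    size_t   offset;       // where this event's data starts in shared memory
    size_t   size;         // how many bytes belong to it
};

int main() {
    const char*  kShmName  = "/hlt_demo_shm";     // hypothetical names
    const char*  kPipeName = "/tmp/hlt_demo_pipe";
    const size_t kShmSize  = 1 << 20;

    // "Publisher" side: create the shared memory region and copy the event in.
    int shmFd = shm_open(kShmName, O_CREAT | O_RDWR, 0600);
    if (shmFd < 0 || ftruncate(shmFd, kShmSize) != 0) return 1;
    char* shm = static_cast<char*>(
        mmap(nullptr, kShmSize, PROT_READ | PROT_WRITE, MAP_SHARED, shmFd, 0));

    const char payload[] = "raw detector data";
    std::memcpy(shm, payload, sizeof(payload));

    // Announce the event: only the descriptor travels through the pipe.
    mkfifo(kPipeName, 0600);
    int pipeFd = open(kPipeName, O_RDWR);  // O_RDWR so this single-process demo does not block
    EventDescriptor desc{42, 0, sizeof(payload)};
    write(pipeFd, &desc, sizeof(desc));

    // "Subscriber" side: read the descriptor, then access the data in place,
    // without ever copying the payload itself through the pipe.
    EventDescriptor received{};
    read(pipeFd, &received, sizeof(received));
    std::printf("event %llu (%zu bytes): %s\n",
                static_cast<unsigned long long>(received.eventId),
                received.size, shm + received.offset);

    // Clean up the demo resources.
    munmap(shm, kShmSize);
    close(shmFd);
    close(pipeFd);
    shm_unlink(kShmName);
    unlink(kPipeName);
    return 0;
}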

Application Components

Templates for application-specific components:
  ● Read data from a source and publish it (Data Source Template)
  ● Accept data, process it, publish the results (Analysis Template)
  ● Accept data and process it, e.g. for storage (Data Sink Template)

[Diagram: internal stages of the three templates – data source addressing & handling, buffer management, data announcing (source); accepting input data, input data addressing, data analysis & processing, output buffer management, output data announcing (analysis); accepting input data, input data addressing, data sink addressing & writing (sink)]
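The template idea – analysis code plugged into a base class that takes care of accepting input and announcing output – might be sketched as follows. The class and method names (ProcessingComponent, ProcessEvent, DummyClusterFinder) are hypothetical and only mimic the structure described above; they are not the framework's real interfaces.

// Illustrative sketch (hypothetical names, not the framework's real API) of
// the analysis template: the base class handles input and output announcing,
// the user overrides only the processing step.
#include <cstdint>
#include <cstdio>
#include <vector>

struct DataBlock {                 // simplified stand-in for a published data block
    uint64_t eventId;
    std::vector<uint8_t> payload;
};

class ProcessingComponent {
public:
    virtual ~ProcessingComponent() = default;
    // Called by the subscriber side whenever new input data arrives.
    void HandleInput(const DataBlock& in) {
        DataBlock out = ProcessEvent(in);   // user-supplied analysis
        Publish(out);                       // output announcing (stubbed here)
    }
protected:
    virtual DataBlock ProcessEvent(const DataBlock& in) = 0;
private:
    void Publish(const DataBlock& out) {
        std::printf("publishing event %llu, %zu bytes\n",
                    static_cast<unsigned long long>(out.eventId),
                    out.payload.size());
    }
};

// Example "analysis": a dummy cluster finder that simply halves the data size.
class DummyClusterFinder : public ProcessingComponent {
protected:
    DataBlock ProcessEvent(const DataBlock& in) override {
        DataBlock out{in.eventId, {}};
        out.payload.assign(in.payload.begin(),
                           in.payload.begin() + in.payload.size() / 2);
        return out;
    }
};

int main() {
    DummyClusterFinder cf;
    cf.HandleInput({1, std::vector<uint8_t>(1024, 0xFF)});
    return 0;
}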

Data Flow Components

– The framework contains components to shape the flow of data in a cluster
– All components use the publisher-subscriber interface
  ● Merge parts of events
  ● Transparently connect components on different nodes (bridges)
  ● Split and rejoin a data stream, e.g. for load balancing (scatter/gather)

[Diagram: an event merger combining event parts 1.1/1.2/1.3 and 2.1/2.2/2.3 into events 1 and 2; a bridge connecting a subscriber on node A to a publisher on node B over the network; a scatter/gather pair splitting an event stream and rejoining it]
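As an illustration of the scatter/gather pattern mentioned above, the following C++ fragment hands events out round-robin to several consumers and rejoins them in event-ID order afterwards. It is a sketch of the pattern only, not framework code.

// Sketch of scatter/gather for load balancing: events go round-robin to
// several downstream consumers and are rejoined in event order afterwards.
// All names are illustrative.
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

struct EventPart {
    uint64_t eventId;
    int consumer;      // which downstream component processed this event
};

int main() {
    const int kConsumers = 3;
    std::vector<std::vector<EventPart>> queues(kConsumers);

    // Scatter: simple round-robin distribution over the consumers.
    for (uint64_t ev = 0; ev < 9; ++ev) {
        int target = static_cast<int>(ev % kConsumers);
        queues[target].push_back({ev, target});
    }

    // Gather: collect the processed events again, keyed by event ID, so the
    // stream leaves the gatherer in the original event order.
    std::map<uint64_t, EventPart> merged;
    for (const auto& q : queues)
        for (const auto& part : q)
            merged[part.eventId] = part;

    for (const auto& [id, part] : merged)
        std::printf("event %llu handled by consumer %d\n",
                    static_cast<unsigned long long>(id), part.consumer);
    return 0;
}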

Interface Benchmarks

– Benchmark of the publisher-subscriber interface only
  ● No shared memory access or any other processing
– Benchmarks are sensitive to the L1 cache size (P4 family: only 8 kB)
– Maximum rate achieved: >110 kHz (dual Opteron)
– Time overhead for an event round-trip: <18 µs
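For a feel of how such a figure can be obtained, the following C++ sketch times a tight announce loop and derives an average per-event overhead. It is a stand-in written for this text, not the original benchmark; the rate it reports depends entirely on the host machine and the (trivial) work done per event.

// Minimal timing sketch in the spirit of the interface benchmark: push a
// dummy event announcement through one indirect call per hop, many times,
// and derive the average per-event overhead. Not the original benchmark.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <functional>

int main() {
    constexpr uint64_t kEvents = 1000000;
    uint64_t sink = 0;

    // Stand-in for the publisher -> subscriber announcement: descriptor only,
    // no payload is copied.
    std::function<void(uint64_t)> subscriber = [&sink](uint64_t id) { sink += id; };

    auto t0 = std::chrono::steady_clock::now();
    for (uint64_t ev = 0; ev < kEvents; ++ev)
        subscriber(ev);
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    std::printf("rate: %.1f kHz, per-event overhead: %.3f us (checksum %llu)\n",
                kEvents / seconds / 1e3,
                seconds / kEvents * 1e6,
                static_cast<unsigned long long>(sink));
    return 0;
}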

HLT Data Challenge

– Long-term test of the framework running on 7 nodes
  ● 800 MHz dual Pentium 3 nodes
– No real analysis done; dummy data and dummy processing components used
– All data flow components utilised
– Simulated reduced TPC sector setup (only 4 "readouts" instead of 6, 2 trackers, and 1 global merger)
  ● 13 dummy processing components active
– Setup running for one month
  ● More than 2×10⁹ events
  ● Achieved rate: >780 Hz

[Diagram: per-patch chains of ADC Unpackers (AU 0–3) and Clusterfinders (CF 0–3) for patches 0–3, feeding the sector tracking stage (ST, "4*SectorTracker") and a Global Merger (GM)]

HLT Data Challenge

[Plot: event rate on the final merger node, measured during the runtime of the HLT data challenge]

TPC Sector Beamtest

– HLT used during the beamtest of a TPC sector prototype
– Main goals:
  ● Test the interfaces between DAQ and HLT
  ● Obtain some real-world operation experience for the HLT

[Diagram: the detector prototype is read out via optical links into DAQ PC 1 and DAQ PC 2 (each with a PCI card), connected via Fast Ethernet to DAQ PC 3 with 3× 250 GB disks (10 MB/s); an optical link carries data to and from the HLT (3 PCs), and 1 MB/s goes to mass storage]

TPC Sector Beamtest

– 3-node mini HLT cluster used
  ● 933 MHz dual Pentium 3 PCs

[Diagram: HLT PC 1 and HLT PC 2 each receive an optical link from the DAQ via a PCI card; HLT PC 3 sends data back to the DAQ via an optical link on its PCI card; all three PCs are connected through a Gigabit Ethernet switch]

TPC Sector Beamtest

HLT Software Setup
  ● EventRateSubscriber used to measure the event rate
  ● StorageWriter used to save data locally on disk

[Diagram: on HLT PC 1 and HLT PC 2, a RORC Publisher reads the optical link and feeds a local StorageWriter (HDD) and a Subscriber BridgeHead; on HLT PC 3, Publisher BridgeHeads feed an Event Merger, followed by a StorageWriter, an EventRateSubscriber, and the HLT-Out Subscriber towards the DAQ. One optical link input was switched off due to technical issues with that link.]
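The bridge heads connecting the HLT PCs can be pictured as a pair that forwards event records over TCP: the sending side serializes a small record plus payload onto the socket, the receiving side re-publishes it locally. The sketch below is a hypothetical, loopback-only illustration of that idea; the record layout (WireEvent) and all names are invented, and the framework's real bridge protocol is not shown.

// Loopback-only C++ sketch (invented names and wire format) of the bridge
// idea: a "SubscriberBridgeHead" serializes an event record onto a TCP
// connection, a "PublisherBridgeHead" reads it and re-publishes it locally.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

struct WireEvent {          // hypothetical on-the-wire record
    uint64_t eventId;
    uint32_t payloadSize;
};

int main() {
    // Set up a listening socket on the loopback interface so the sketch is
    // self-contained; in the beamtest this path is Gigabit Ethernet.
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(0);                       // let the OS pick a port
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, 1);
    socklen_t len = sizeof(addr);
    getsockname(listener, (sockaddr*)&addr, &len);  // learn the chosen port

    // Sending side ("SubscriberBridgeHead"): connect and ship one event.
    int sender = socket(AF_INET, SOCK_STREAM, 0);
    connect(sender, (sockaddr*)&addr, sizeof(addr));
    char payload[16] = "tpc sector data";
    WireEvent ev{7, static_cast<uint32_t>(sizeof(payload))};
    send(sender, &ev, sizeof(ev), 0);
    send(sender, payload, sizeof(payload), 0);

    // Receiving side ("PublisherBridgeHead"): accept, read and re-publish.
    int receiver = accept(listener, nullptr, nullptr);
    WireEvent in{};
    char buf[64] = {};
    recv(receiver, &in, sizeof(in), MSG_WAITALL);
    recv(receiver, buf, in.payloadSize, MSG_WAITALL);
    std::printf("re-publishing event %llu: %s\n",
                static_cast<unsigned long long>(in.eventId), buf);

    close(sender);
    close(receiver);
    close(listener);
    return 0;
}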

TPC Sector Beamtest

Beamtest HLT Summary
  ● Software contained only "data flow" components, in order to test the interfaces from/to the DAQ
  ● No online analysis done
  ● Interfaces to the DAQ successfully tested and working

Other Issues (outside HLT scope)
  ● Event rate <5 Hz due to the trigger system
  ● Event merging not possible due to technical issues with the optical link during the HLT's active time

HLT Summary

– The HLT framework is maturing
  ● Can already be used in projects/testbeams/...
  ● Has been used in a real beamtest
– Stable
  ● Has been running continuously for a month
– Performance has improved significantly
  ● Already sufficient for the ALICE HLT
– The component approach has already proven useful
  ● Easy to create configurations for short tests
  ● Flexible configuration changes during the testbeam