
A Software Data Transport Framework for Trigger Applications on Clusters
T. M. Steinbeck, V. Lindenstruth, H. Tilsner, for the ALICE Collaboration
Timm Morten Steinbeck, Computer Science / Computer Engineering Group, Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg
CHEP03 - UCSD - March 24th-28th 2003

Requirements
- ALICE: A Large Ion Collider Experiment
- Very large multiplicity per event
- Full event size > 70 MB
- Data rate into the last trigger stage (High Level Trigger, HLT) up to 25 GB/s
- The HLT has the full event data available
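The numbers above imply the event rate the HLT must sustain. A back-of-the-envelope check (assuming decimal GB and MB, and taking 70 MB as the event size even though the slide says "> 70 MB"):

```cpp
#include <cassert>

// Implied HLT input event rate from the figures on the slide:
// 25 GB/s aggregate input rate divided by ~70 MB per full event
// gives an upper bound of a few hundred events per second.
constexpr double kInputRateBytes = 25e9;   // 25 GB/s into the HLT
constexpr double kEventSizeBytes = 70e6;   // full event size (lower bound)
constexpr double kImpliedEventRateHz = kInputRateBytes / kEventSizeBytes;
```

This works out to roughly 360 Hz at most, which sets the scale for the per-node processing rates quoted later in the talk.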

Requirements
From raw ADC values (e.g. 1, 2, 123, 255, 100, 30, 5, 1, 4, 3, 2, 3, 4, 5, 3, 4, 60, 130, 30, 5, ...) to particle tracks: the HLT's primary task is event reconstruction for triggering and storage.

High Level Trigger Dataflow
- The HLT consists of a Linux PC farm with roughly 1000 nodes
- Analysis is performed hierarchically
- Several stages for data processing and merging
- Natural mapping of dataflow, cluster topology, and detector geometry

High Level Trigger Framework
Software is required to transport data in the HLT.
- Flexible
  - Components communicating via a standardized interface
  - Pluggable components to support different configurations
  - Support for runtime reconfiguration
- Efficient
  - Minimize CPU usage to retain cycles for data processing (primary)
  - Transport data as quickly as possible (secondary)
- Implemented in C++

Component Interface
- Works only locally on a node
- Uses shared memory for data and named pipes for descriptors
- Multiple consumers can attach to one producer (publisher-subscriber paradigm)
- Buffer management has to be done in the data producer
(Diagram: a publisher places the data blocks of event M in shared memory and sends the event descriptor, referencing blocks 0..n, to its subscribers over a named pipe.)
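The key point of this interface is that only a small descriptor travels between components, while the payload stays in a producer-managed shared buffer. The following in-process sketch illustrates that idea; the struct and class names are illustrative, not the framework's actual API, and a plain callback stands in for the named-pipe transport:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Descriptor announcing where an event's data lives inside the
// producer-managed buffer. In the real framework only this small
// record crosses the named pipe; the data stays in shared memory.
struct BlockDescriptor {
    uint64_t eventId;
    size_t   offset;  // offset into the producer's shared buffer
    size_t   size;    // size of the data block in bytes
};

// In-process analogue of the publisher side: the producer owns the
// buffer and fans each descriptor out to every attached subscriber
// (one producer, multiple consumers).
class Publisher {
public:
    using Subscriber = std::function<void(const BlockDescriptor&)>;
    void Attach(Subscriber s) { subscribers_.push_back(std::move(s)); }
    void Announce(const BlockDescriptor& d) {
        for (auto& s : subscribers_) s(d);  // notify all consumers
    }
private:
    std::vector<Subscriber> subscribers_;
};
```

Because the producer owns the buffer, it can only release or reuse a block once every attached subscriber has signalled that it is done with the corresponding descriptor.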

Network Communication
- Uses a class library with an abstract call interface
- Classes optimized for
  - small message transfers
  - large data blocks
- Implementations for multiple network technologies/protocols are possible
- Currently supported: TCP and SCI
(Class hierarchy: Message and Block base classes, each with TCP and SCI implementations.)
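The abstract call interface means calling code is written against a base class and the concrete network technology is chosen at configuration time. A minimal sketch of that pattern, with invented class names and an in-memory loopback standing in for the real TCP and SCI implementations:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <vector>

// Abstract call interface for message transfers. In the framework,
// TCP and SCI message classes would derive from a base class like
// this, so components work with either technology unchanged.
class MessageConnection {
public:
    virtual ~MessageConnection() = default;
    virtual bool Send(const std::vector<uint8_t>& msg) = 0;
    virtual bool Receive(std::vector<uint8_t>& msg) = 0;
};

// Stand-in implementation using an in-memory queue, so code written
// against the abstract interface can be exercised without a network.
class LoopbackConnection : public MessageConnection {
public:
    bool Send(const std::vector<uint8_t>& msg) override {
        queue_.push_back(msg);
        return true;
    }
    bool Receive(std::vector<uint8_t>& msg) override {
        if (queue_.empty()) return false;  // nothing pending
        msg = queue_.front();
        queue_.pop_front();
        return true;
    }
private:
    std::deque<std::vector<uint8_t>> queue_;
};
```

A separate block interface, optimized for large transfers rather than small messages, would follow the same base-class pattern.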

Components
The framework contains components to configure the dataflow:
- an EventMerger to merge data streams belonging to one event
- an EventScatterer/EventGatherer pair to split and rejoin a data stream (e.g. for load balancing)
(Diagram: the scatterer distributes events 1-3 over parallel processing branches and the gatherer rejoins them into one stream; the merger combines parts 1-3 of event N into the complete event N.)
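The core logic of these two dataflow patterns is small. The sketch below shows a round-robin scatter policy and a part-counting merger; the class shapes are illustrative, not the framework's actual interfaces:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>

// Round-robin event scatterer: assigns consecutive events to N
// parallel processing branches for load balancing; an event
// gatherer on the far side rejoins the branches into one stream.
class EventScatterer {
public:
    explicit EventScatterer(size_t branches) : branches_(branches) {}
    size_t NextBranch() { return next_++ % branches_; }
private:
    size_t branches_;
    size_t next_ = 0;
};

// Event merger: counts the arrived parts of each event (e.g. from
// several detector patches) and reports when an event is complete.
class EventMerger {
public:
    explicit EventMerger(size_t partsPerEvent) : parts_(partsPerEvent) {}
    // Returns true when the last missing part of the event arrives.
    bool AddPart(uint64_t eventId) { return ++counts_[eventId] == parts_; }
private:
    size_t parts_;
    std::map<uint64_t, size_t> counts_;
};
```

A real scatterer would also track per-branch backlog rather than blindly rotating, which is what makes the fail-over behaviour described later possible.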

Components
- The Bridge transparently transports data over the network to other computers
- The SubscriberBridgeHead has a subscriber class for incoming data; the PublisherBridgeHead uses a publisher class to announce data
- Both use the network classes for communication
(Diagram: a SubscriberBridgeHead on node 1 connects over the network to a PublisherBridgeHead on node 2.)

Components
Templates for user-specific components:
- Data Source Template: read data from a source and publish it (source addressing & handling, buffer management, data announcing)
- Analysis Template: accept data, process it, publish the results (accepting input data, input data addressing, data analysis & processing, output buffer management, output data announcing)
- Data Sink Template: accept data and process it, e.g. storing (accepting input data, input data addressing, sink addressing & writing)
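The template idea is that the framework supplies the input handling and output announcing, and a user component only fills in the processing step. A minimal sketch of the analysis-template shape, with hypothetical names and plain byte vectors standing in for shared-memory blocks:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Skeleton of the analysis-template idea: accepting input,
// output buffer management, and announcing results live in the
// base class; the user overrides only the processing step.
class AnalysisComponentSkeleton {
public:
    virtual ~AnalysisComponentSkeleton() = default;
    // Framework side: accept an input block, run the user code,
    // and return the produced block (which would be published).
    std::vector<uint8_t> HandleEvent(const std::vector<uint8_t>& input) {
        std::vector<uint8_t> output;
        ProcessEvent(input, output);  // user-supplied analysis step
        return output;
    }
protected:
    virtual void ProcessEvent(const std::vector<uint8_t>& in,
                              std::vector<uint8_t>& out) = 0;
};

// Example user component: a trivial "analysis" that counts nonzero
// ADC values, standing in for e.g. a cluster finder.
class CountingComponent : public AnalysisComponentSkeleton {
protected:
    void ProcessEvent(const std::vector<uint8_t>& in,
                      std::vector<uint8_t>& out) override {
        uint8_t n = 0;
        for (uint8_t v : in)
            if (v != 0) ++n;
        out.assign(1, n);  // publish a one-byte result block
    }
};
```

The data source and data sink templates follow the same inversion: the user writes only the reading or writing step, and the publish/subscribe plumbing is inherited.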

Fault Tolerance
Components to handle software or hardware faults:
- Processing is distributed for load balancing and redundancy
- Upon failure, events are rescheduled and a spare node is activated
(Diagram: a data source feeds workers 0-2 plus a spare; after a worker fails, its events are redirected to the activated spare, and the data sink continues to receive all events.)
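The fail-over step described above can be sketched as a small scheduler: events are tracked per worker, and on a failure the pending events are moved to an activated spare. This is an illustrative reconstruction, not the framework's actual fault tolerance code:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <set>
#include <vector>

// Sketch of spare-node activation: track pending events per worker;
// when a worker fails, activate a spare and reschedule the failed
// worker's pending events onto it.
class FailoverScheduler {
public:
    FailoverScheduler(std::vector<int> workers, std::vector<int> spares)
        : workers_(workers.begin(), workers.end()),
          spares_(spares.begin(), spares.end()) {}

    void Assign(uint64_t eventId, int worker) {
        pending_[worker].push_back(eventId);
    }

    // Remove the failed worker; returns the events rescheduled onto
    // the activated spare (empty if nothing was pending).
    std::vector<uint64_t> OnWorkerFailure(int worker) {
        workers_.erase(worker);
        std::vector<uint64_t> moved = std::move(pending_[worker]);
        pending_.erase(worker);
        if (!spares_.empty() && !moved.empty()) {
            int spare = *spares_.begin();   // activate one spare node
            spares_.erase(spares_.begin());
            workers_.insert(spare);
            pending_[spare] = moved;        // reschedule pending events
        }
        return moved;
    }

private:
    std::set<int> workers_, spares_;
    std::map<int, std::vector<uint64_t>> pending_;
};
```

In the real system the decision of when a node has failed, and who makes that decision, is the open "control instance" issue listed in the conclusion.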

"Real-World Test"
(Diagram: six TPC patches (Patch 0-5), each read out by an ADC Unpacker (AU) and Clusterfinder (CF), feeding 2 Trackers (T) per patch, 2 Patch Mergers (PM), and one Event Stream Merger (ESM), arranged over hierarchy levels HL0-HL3.)

"Real-World Test"
Task:
- Tracking of simulated ALICE pp events
- Pile-up of 25 events
- Simulate one sector of the ALICE TPC (1/36 of the detector)
Performance on a mix of 800 MHz and 733 MHz systems: processing rate of more than 420 Hz
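For reference, the per-patch chain from the test diagram can be written out as plain configuration data. This is just a restatement of the diagram, with the component list and patch count taken from the slide:

```cpp
#include <cassert>
#include <string>
#include <vector>

// The per-patch processing chain of the real-world test: each of
// the six TPC patches runs this pipeline, and the six patch
// streams are combined by the event stream merger at the end.
std::vector<std::string> PatchChain() {
    return {"ADC Unpacker", "Clusterfinder",
            "Tracker", "Tracker",  // 2 trackers per patch
            "Patch Merger"};
}

constexpr int kPatches = 6;  // Patch 0-5, one TPC sector
```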

Fault Tolerance Test
(Plot: curves are scaled independently and arbitrarily.)

Interface Performance

Interface Performance
(Plot: reference PC memory benchmark scaling.)

Conclusion
- Working framework
- Flexible configuration with fault tolerance capabilities
- Can already be used in real applications
To Do:
- Tool for easier configuration and setup
- Fault tolerance control instance/decision unit
- More fine-grained fault tolerance and recovery
- More tuning