Slide 1: Supervision of the ATLAS High Level Triggers
Sarah Wheeler, on behalf of the ATLAS Trigger/DAQ High Level Trigger group
CHEP, 24-28 March 2003

Slide 2: ATLAS Trigger and Data Acquisition – High Level Triggers
[Diagram: ATLAS Trigger/DAQ dataflow. Detector front-end pipelines (Calo, MuTrCh, other detectors) feed the Read-Out Drivers (ROD) and Read-Out Buffers/Sub-systems (ROB/ROS). The RoI Builder (ROIB), LVL2 Supervisor (L2SV) and LVL2 Processing Units handle the Level-2 selection over the LVL2 network; the Dataflow Manager (DFM) and Event Building network feed Sub-Farm Inputs (SFI), Event Filter Processors (EFP) and Sub-Farm Outputs (SFO).]
Key rates, latencies and bandwidths shown in the diagram:
- LVL1: 40 MHz bunch-crossing input, ~75 kHz accept rate, 2.5 µs latency, ~120 GB/s read-out bandwidth
- LVL2: works on Regions of Interest (RoI data = ~2% of the event), ~10 ms latency, ~2 kHz accept rate, ~3+3 GB/s for RoI requests and event building
- Event Filter: ~1 s latency, ~0.2 kHz (~200 Hz) accept rate, ~300 MB/s to mass storage
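The bandwidth figures are consistent with the quoted rates once an average event size of roughly 1.6 MB is assumed; that size is not stated on the slide and is inferred here from 120 GB/s at 75 kHz. A minimal sketch of the arithmetic:

```python
# Rough consistency check of the trigger rates and bandwidths shown on the slide.
# The average event size is an assumption of this sketch, inferred from 120 GB/s / 75 kHz.

lvl1_rate_hz = 75e3                        # LVL1 accept rate
lvl2_rate_hz = 2e3                         # LVL2 accept rate
ef_rate_hz = 200                           # Event Filter accept rate
event_size_bytes = 120e9 / lvl1_rate_hz    # ~1.6 MB per event (assumed)

roi_fraction = 0.02                        # LVL2 requests ~2% of the event data (RoIs)
lvl2_roi_bw = lvl1_rate_hz * event_size_bytes * roi_fraction   # ~2.4 GB/s of RoI traffic
event_building_bw = lvl2_rate_hz * event_size_bytes            # ~3.2 GB/s event building
ef_output_bw = ef_rate_hz * event_size_bytes                   # ~320 MB/s to mass storage

print(f"event size       ~ {event_size_bytes/1e6:.1f} MB")
print(f"LVL2 RoI traffic ~ {lvl2_roi_bw/1e9:.1f} GB/s")
print(f"event building   ~ {event_building_bw/1e9:.1f} GB/s")
print(f"EF output        ~ {ef_output_bw/1e6:.0f} MB/s")
```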

Slide 3: Supervision of the HLT
- Supervision is one of the areas where commonality between Level-2 and the Event Filter can be effectively exploited.
- The HLT is implemented as hundreds of software tasks running on large processor farms.
- For reasons of practicality, the farms are split into sub-farms.
- Supervision is responsible for all aspects of software task management and control: configuring, controlling and monitoring.
[Diagram: Event Filter farm of sub-farms, each fed by a Sub-Farm Input (SFI) from event building and running Event Filter Processors (EFP).]
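As a rough illustration of what the supervision system has to deal with, the sketch below models a farm split into sub-farms of nodes, each node running several trigger processes. The class and attribute names are invented for this sketch and are not the ATLAS configuration schema.

```python
# Illustrative model of an HLT farm: farm -> sub-farms -> nodes -> trigger processes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    hostname: str
    trigger_processes: int = 4   # e.g. one Event Filter process per CPU (assumed)

@dataclass
class SubFarm:
    name: str
    nodes: List[Node] = field(default_factory=list)

@dataclass
class Farm:
    name: str
    sub_farms: List[SubFarm] = field(default_factory=list)

    def total_processes(self) -> int:
        return sum(n.trigger_processes for sf in self.sub_farms for n in sf.nodes)

# Example: 230 nodes split into 10 sub-farms of 23 nodes each.
farm = Farm("EventFilter",
            [SubFarm(f"SubFarm-{i}", [Node(f"node{i:02d}-{j:02d}") for j in range(23)])
             for i in range(10)])
print(farm.total_processes())   # 920 trigger processes to supervise
```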

Slide 4: Prototype HLT supervision system
- A prototype HLT supervision system has been implemented using tools from the ATLAS Online Software system (OnlineSW), a system of the ATLAS Trigger/DAQ project.
- This was a major integration exercise: OnlineSW provides generic services for TDAQ-wide configuration, control and monitoring, and these were successfully adapted for use in the HLT.
- The following OnlineSW services are used for HLT control activities: Configuration Databases, Run Control, Supervisor (Process Control).
- Controllers based on a finite-state machine are arranged in a hierarchical tree, with one software controller per sub-farm and one top-level farm controller.
- The controllers were successfully customised for use in the HLT.
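A minimal sketch of the hierarchical, finite-state-machine based control described above: one controller per sub-farm plus a top-level farm controller that forwards each transition to its children. The state and command names follow the run-control sequence used later in the talk; the code itself is illustrative and is not the OnlineSW implementation.

```python
# Run-control transitions: command -> (required state, resulting state).
TRANSITIONS = {
    "setup":       ("initial",    "set up"),
    "boot":        ("set up",     "booted"),
    "load":        ("booted",     "loaded"),
    "configure":   ("loaded",     "configured"),
    "start":       ("configured", "running"),
    "stop":        ("running",    "configured"),
    "unconfigure": ("configured", "loaded"),
    "unload":      ("loaded",     "booted"),
    "shutdown":    ("booted",     "initial"),
}

class Controller:
    """One finite-state-machine controller; parents forward commands to children."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.state = "initial"

    def command(self, cmd):
        before, after = TRANSITIONS[cmd]
        if self.state != before:
            raise RuntimeError(f"{self.name}: cannot '{cmd}' from state '{self.state}'")
        # A parent only completes the transition once all of its children have.
        for child in self.children:
            child.command(cmd)
        self.state = after

# Top-level farm controller with one controller per sub-farm.
sub_farm_controllers = [Controller(f"SubFarm-{i}") for i in range(10)]
farm_controller = Controller("Farm", sub_farm_controllers)
for cmd in ("setup", "boot", "load", "configure", "start"):
    farm_controller.command(cmd)
print(farm_controller.state)   # "running"
```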

Slide 5: Controlling a Farm
[Diagram: the Supervisor performs the setup and boot steps to start the control infrastructure on each sub-farm (SubFarm 1, SubFarm 2, ...); the controllers then propagate the run-control commands (load, configure, start) down the tree.]
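For the setup/boot phase, the Supervisor (process control) has to start the controllers and the trigger processes on the farm nodes. The sketch below is purely hypothetical: the hlt_controller and hlt_trigger_process executables are invented names, and a real farm would go through the OnlineSW process management rather than bare ssh calls.

```python
# Hypothetical sketch of booting one sub-farm: start its controller and its trigger processes.
import subprocess

def boot_sub_farm(sub_farm_name, hostnames, processes_per_node=4):
    """Launch the sub-farm controller and the trigger processes on each node."""
    handles = []
    # One software controller per sub-farm (started locally here for simplicity).
    handles.append(subprocess.Popen(["hlt_controller", "--name", sub_farm_name]))
    for host in hostnames:
        for i in range(processes_per_node):
            # In a real farm this would use a remote process manager, not ssh.
            handles.append(subprocess.Popen(
                ["ssh", host, "hlt_trigger_process", f"--id={sub_farm_name}-{i}"]))
    return handles
```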

Slide 6: Monitoring Aspects
Monitoring has been implemented using tools from OnlineSW:
- Information Service: statistical information is written by HLT processes to information service servers and retrieved by others, e.g. for display.
- Error Reporting system: HLT processes use this service to issue error messages to any other TDAQ component, e.g. the central control console, where they can be displayed.
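The two monitoring paths can be pictured with a toy publish/retrieve model, sketched below. The real Information Service and Error Reporting services are OnlineSW components with their own APIs; nothing here reproduces those interfaces.

```python
# Toy stand-ins for the two OnlineSW monitoring services used by the HLT.
import time

class InformationService:
    """Trigger processes publish named statistics; displays read them back."""
    def __init__(self):
        self._store = {}

    def publish(self, name, value):
        self._store[name] = (value, time.time())

    def retrieve(self, name):
        return self._store.get(name)

class ErrorReporter:
    """Processes issue error messages; subscribers (e.g. the control console) display them."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def report(self, severity, source, text):
        for callback in self._subscribers:
            callback(severity, source, text)

# Usage: an Event Filter process publishes a counter and issues a warning.
info = InformationService()
errors = ErrorReporter()
errors.subscribe(lambda sev, src, txt: print(f"[{sev}] {src}: {txt}"))

info.publish("EF/SubFarm-3/events_processed", 12345)
errors.report("WARNING", "EF/SubFarm-3/node07", "event processing timeout")
print(info.retrieve("EF/SubFarm-3/events_processed"))
```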

Slide 7: Monitoring a Farm
[Screenshot: example of an Event Filter monitoring panel.]

Slide 8: Scalability Tests (January 2003)
- A series of tests was carried out to determine the scalability of the control architecture, using a 230-node IT LXPLUS cluster at CERN.
- Configurations studied: (a) a constant total number of nodes split into a varying number of sub-farms; (b) a constant number of sub-farms with a varying number of nodes per sub-farm.
- The tests focused on the times to start up, prepare for data-taking and shut down the configurations.
[Diagrams: control tree and Event Filter farm layout.]
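The measurements amount to timing each command of the infrastructure and run-control sequences for a given farm layout. A sketch of such a timing harness is shown below; it reuses the illustrative Controller class and TRANSITIONS table from the sketch after slide 4 and is not the actual test suite that ran on LXPLUS.

```python
# Illustrative timing harness for one scalability point (assumes Controller and
# TRANSITIONS from the sketch after slide 4 are in scope).
import time

def time_commands(controller, commands):
    timings = {}
    for cmd in commands:
        t0 = time.perf_counter()
        controller.command(cmd)
        timings[cmd] = time.perf_counter() - t0
    return timings

def run_scalability_point(n_sub_farms, nodes_per_sub_farm):
    # e.g. a constant total farm size of 230 nodes split into n_sub_farms sub-farms.
    subs = [Controller(f"SubFarm-{i}",
                       [Controller(f"node-{i}-{j}") for j in range(nodes_per_sub_farm)])
            for i in range(n_sub_farms)]
    farm = Controller("Farm", subs)
    infrastructure = time_commands(farm, ["setup", "boot"])
    run_cycle = time_commands(farm, ["load", "configure", "start",
                                     "stop", "unconfigure", "unload"])
    infrastructure.update(time_commands(farm, ["shutdown"]))
    return infrastructure, run_cycle

# 230 nodes as 10 sub-farms of 23 nodes each.
print(run_scalability_point(10, 23))
```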

Slide 9: Generation of Configuration Database
A custom GUI was written to create the configuration database files.
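In the same spirit, a configuration file for a chosen farm layout can be generated programmatically, as sketched below. The XML layout and element names are invented for illustration; the real files are OnlineSW configuration database files, not this format.

```python
# Illustrative generator for a farm configuration file (invented schema).
import xml.etree.ElementTree as ET

def generate_farm_config(n_sub_farms, nodes_per_sub_farm, path="ef_farm.data.xml"):
    farm = ET.Element("farm", name="EventFilter")
    for i in range(n_sub_farms):
        sub = ET.SubElement(farm, "sub-farm", name=f"SubFarm-{i}")
        ET.SubElement(sub, "controller", name=f"SubFarm-{i}-Controller")
        for j in range(nodes_per_sub_farm):
            node = ET.SubElement(sub, "node", hostname=f"lxplus{i:02d}{j:02d}")
            ET.SubElement(node, "process", type="EFProcessingTask", instances="4")
    ET.ElementTree(farm).write(path, xml_declaration=True, encoding="utf-8")
    return path

# e.g. 10 sub-farms of 23 nodes each.
generate_farm_config(10, 23)
```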

Slide 10: Results – Constant number of Nodes
- The graph shows the times to start and stop the control infrastructure (setup, boot, shutdown) for a constant farm size of 230 nodes, as a function of the number of sub-farms.
- The times increase with the number of sub-farms: more sub-farms mean more controller and supervisor processes.
[Graph: time in seconds versus number of sub-farms, for the setup, boot and shutdown commands.]

Slide 11: Results – Constant number of Nodes
- The graph shows the times to cycle through the run-control sequence (load, configure, start, stop, unconfigure, unload) for a constant farm size of 230 nodes, as a function of the number of sub-farms.
- The times decrease with the number of sub-farms: more sub-farms imply fewer nodes, and therefore fewer trigger processes to control, per sub-farm.
[Graph: time in seconds versus number of sub-farms, for the load, configure, start, stop, unconfigure and unload commands.]

Slide 12: Results – Constant number of Sub-Farms
- With the number of sub-farms fixed at 10, the run-control times (load, configure, start, stop, unconfigure, unload) increase with the number of nodes per sub-farm, and hence with the number of processes to control, as expected.
[Graph: time in seconds versus number of nodes per sub-farm.]

Slide 13: Conclusions and future
- The results are very promising for the implementation of the HLT supervision system for the first ATLAS run.
- All operations required to start up, prepare for data-taking and shut down the configurations take of the order of a few seconds to complete.
- The largest tested configurations represent 10-20% of the final system.
- Future enhancements of the supervision system include: a combined Run Control/Process Control component, parallelised communication between control and trigger processes, and a distributed configuration database.
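As an illustration of the "parallelised communication" item, a farm controller could fan a command out to its sub-farm controllers concurrently instead of serially, as sketched below using the illustrative Controller class and TRANSITIONS table from the sketch after slide 4.

```python
# Sketch of parallel command fan-out from a farm controller to its sub-farm controllers.
from concurrent.futures import ThreadPoolExecutor

def command_in_parallel(farm_controller, cmd):
    with ThreadPoolExecutor(max_workers=len(farm_controller.children)) as pool:
        futures = [pool.submit(child.command, cmd) for child in farm_controller.children]
        for future in futures:
            future.result()   # re-raise any failure reported by a sub-farm
    # Only advance the parent's state once every sub-farm has completed the transition.
    farm_controller.state = TRANSITIONS[cmd][1]
```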