Fermilab Scientific Computing Division & Computing Enabling Technologies Department, Fermi National Accelerator Laboratory, Batavia, Illinois, USA.

DAQ Topology
The NOvA DAQ system is designed to support continuous readout of over 368,000 detector channels. It accomplishes this with a hierarchical readout structure that transmits raw data to aggregation nodes, which perform multiple stages of event building and organization before retransmitting the data in real time to the next stage of the DAQ chain. In this manner data move from Front End digitizer Boards (FEBs) to Data Concentrator Modules (DCMs) to event building & buffering nodes and finally into a data logging station. The flow of data out of the buffer nodes is controlled by a Global Trigger process that issues extraction directives based on time windows of interest in the data. These windows can correspond to beam spills, calibration pulsers, or data driven trigger decisions.

The buffer nodes used for DAQ processing are a farm of high performance commodity computers. Each node has a minimum of 16 compute cores and 32 GB of memory available for data processing and buffering. This permits each node to run the event builder process and the full multi-threaded data driven triggering suite while buffering a minimum of 20 seconds of full, continuous, zero bias data readout (a rough sizing of this requirement is sketched at the end of this section).

The NOvA Experiment & DAQ Readout
The NOvA experiment is designed to probe the oscillations of neutrinos and anti-neutrinos to determine the ν_μ → ν_e transition probabilities. These transition rates will allow NOvA to determine the neutrino mass hierarchy and tightly constrain the CP violating phase δ. The NOvA measurements will also improve our understanding of the mixing angles θ₁₃ and θ₂₃. To perform these measurements, NOvA has been designed with a 15 kton far detector located 810 km from Fermilab and a smaller near detector at Fermilab. This enormous detector, with over 360,000 channels of synchronized readout electronics, operates in a free-running, "trigger-less" readout mode. All of the channels are continuously digitized, and the data are accumulated through a multi-tiered readout chain into a large farm of high speed computers, where they wait.

The Need for Data Driven Triggering
Beam spill information is generated independently at FNAL and transmitted to the Ash River detector site. When the beam information is received, the data blocks overlapping with the beam spill are extracted and recorded to permanent storage. The rest of the data are allowed to expire, unless there is a way to perform real-time analysis and filtering of over 4.3 GB of data per second. The NOvA detector uses an elegant XY range-stack geometry, which allows powerful global tracking algorithms to be applied to identify event topologies of interest to the physics analyses.
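As a rough illustration of the buffering requirement quoted above, the following sketch combines the 4.3 GB/s zero-bias readout rate and the 20 s buffer depth with the buffer node count quoted elsewhere on this poster (200 nodes). It is a back-of-the-envelope check, not a measured figure from the DAQ.

```cpp
// Back-of-the-envelope sizing of the per-node buffering requirement, using
// only numbers quoted on this poster (4.3 GB/s readout, 200 buffer nodes,
// 20 s minimum buffer depth, 5 ms time slices). Illustrative only.
#include <cstdio>

int main() {
    const double detector_rate_gb_s = 4.3;    // full zero-bias readout rate
    const int    n_buffer_nodes     = 200;    // buffer farm size (from topology diagram)
    const double buffer_depth_s     = 20.0;   // minimum required buffering
    const double slice_length_s     = 0.005;  // 5 ms time slices

    const double per_node_rate_gb_s = detector_rate_gb_s / n_buffer_nodes;  // ~0.02 GB/s
    const double per_node_buffer_gb = per_node_rate_gb_s * buffer_depth_s;  // ~0.43 GB
    const double slices_buffered    = buffer_depth_s / slice_length_s;      // 4000 slices

    std::printf("per-node input rate : %.3f GB/s\n", per_node_rate_gb_s);
    std::printf("per-node buffer need: %.2f GB for %.0f s (%.0f slices)\n",
                per_node_buffer_gb, buffer_depth_s, slices_buffered);
    return 0;
}
```

Under these assumptions each node needs well under 1 GB of buffer space for 20 s of data, which is comfortably within the 32 GB available per node alongside the event builder and trigger processes.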
At the most basic level, the NOvA online physics program needs the ability to identify four broad classes of topologies for producing data driven triggers:
- Linear tracks
- Multi-track vertices
- EM shower objects
- Correlated hit clusters
Being able to generate DDTs based on these topologies allows NOvA to:
- Improve detector calibrations using "horizontal" muons
- Verify detector/beam timing with contained neutrino events
- Search for exotic phenomena in non-beam data (magnetic monopoles, WIMP annihilation signatures)
- Search for directional cosmic neutrino sources
- Detect a nearby supernova
[Event displays: neutrino event topologies of interest observed in the NOvA prototype near detector during the run, exhibiting the core features for identification with the NOvA-DDT: a ν_μ CC candidate and a multi-prong neutrino candidate (cosmic ray).]

Realtime Event Filtering
ARTDAQ is a software product, under active development at Fermilab, that provides tools to build programs combining event building and event filtering as part of an integrated DAQ system. The NOvA Data Driven Trigger (NOvA-DDT) demonstrator module uses the high speed event feeding and event filtering components of ARTDAQ to provide a real-world proof-of-concept application that meets the NOvA trigger requirements. NOvA-DDT takes already-built 5 ms wide raw data windows from shared memory and injects them into the art framework, where trigger decision modules are run and a decision message is broadcast out to the NOvA Global Trigger system (a sketch of such a decision module follows this section). The art framework allows standard NOvA reconstruction concepts to be applied to the online trigger decision process. The common framework additionally allows the online reconstruction and filtering algorithms to be run in the offline environment, so the identical code used in the DAQ can be exercised in offline simulation and testing to determine selection efficiencies, reconstruction biases, and overall trigger performance.

Data Driven Trigger Buffering
As timeslices of detector data are collected and built, they are written into two distinct pools:
- The Global Trigger pool contains timeslices waiting (for up to 20 s) to be extracted by a global trigger and sent to the data logger.
- The DDT pool is a separate set of IPC shared memory buffers that serve as the input queues for the data driven trigger analysis processes.
The DDT shared memory interface is optimized as a one-writer, many-readers design:
- The shared memory buffer has N "large" (fixed size) segments which the writer rotates through to prevent collisions with reader processes.
- The writing process writes guard indicators before and after writing the detector data into a segment. These indicators allow reading processes to tell whether the data they read were potentially overwritten during the read (see the guard-indicator sketch after this section).

DAQ Event Building & Buffering
Because the DAQ system performs a 200-to-1 synchronized burst transmission of data between the data concentrator modules and the event builder buffer nodes, a large amount of network buffering is required. This buffering is split between the high speed Cisco 4948 network switch hardware and the sending socket buffers local to the data concentrator modules, and has been tuned to match the TCP window size to the average data rate. For the NOvA far detector, the amount of buffering depends mainly on the output data rate of the DCMs and the size of the "timeslice" they send. The data rate is anticipated to be 8 MB/s per DCM and the time slice 5 ms (a burst-size estimate follows this section).

NOvA-DDT Performance
NEED PERFORMANCE DATA
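The NOvA-DDT trigger decision algorithms run as filter modules inside the art framework. The skeleton below is a minimal sketch of what such a module might look like; the namespace, class name, FHiCL parameter, and data product label are hypothetical, not the actual NOvA-DDT code, and the exact base-class constructor conventions vary between art releases.

```cpp
// Hypothetical skeleton of a data driven trigger decision module for art.
// Names (novaddt::HorizontalMuonTrigger, "MinHits", the "daq" input label)
// are illustrative assumptions, not the real NOvA-DDT classes.
#include "art/Framework/Core/EDFilter.h"
#include "art/Framework/Core/ModuleMacros.h"
#include "art/Framework/Principal/Event.h"
#include "fhiclcpp/ParameterSet.h"

namespace novaddt {

  class HorizontalMuonTrigger : public art::EDFilter {
  public:
    explicit HorizontalMuonTrigger(fhicl::ParameterSet const& p)
      : EDFilter{p}  // base initialization required in recent art releases
      , minHits_{p.get<unsigned>("MinHits", 20)}
    {}

    // filter() is called once per 5 ms raw data window injected from shared
    // memory; returning true marks the window for extraction by the
    // Global Trigger.
    bool filter(art::Event& evt) override
    {
      // In a real module one would fetch the raw hit collection here, e.g.
      // via evt.getValidHandle<...>("daq"), run a fast tracking algorithm,
      // and base the decision on its output. This sketch just stands in
      // for that algorithm.
      bool foundLongTrack = true;  // placeholder for the real algorithm
      return foundLongTrack;       // becomes a DDT decision message
    }

  private:
    unsigned minHits_;  // example FHiCL-configurable cut
  };

}  // namespace novaddt

DEFINE_ART_MODULE(novaddt::HorizontalMuonTrigger)
```

Because the module is an ordinary art filter, the same source can be compiled into an offline job to measure its efficiency on simulated events, which is the reuse the poster describes.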
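The guard-indicator scheme described above is essentially a sequence lock: the writer bumps a counter before and after filling a segment, and a reader accepts its copy of the data only if the counter was even and unchanged across the read. The following is a minimal, self-contained sketch of that pattern; the segment layout and field names are assumptions for illustration and do not reproduce the actual NOvA DDT shared memory interface.

```cpp
// One-writer / many-readers guard-indicator (sequence-lock style) sketch.
#include <atomic>
#include <cstdint>
#include <cstring>
#include <vector>

struct Segment {
    std::atomic<std::uint64_t> guard{0};        // even = stable, odd = write in progress
    std::size_t                size{0};
    char                       data[5 * 1024 * 1024];  // one "large" fixed-size block
};

// Writer: mark the segment busy, copy the 5 ms window in, mark it stable again.
void write_segment(Segment& s, const char* block, std::size_t len) {
    s.guard.fetch_add(1, std::memory_order_release);  // now odd: being written
    std::memcpy(s.data, block, len);
    s.size = len;
    s.guard.fetch_add(1, std::memory_order_release);  // even again: stable
}

// Reader: copy the data out, then verify it was not overwritten meanwhile.
bool read_segment(const Segment& s, std::vector<char>& out) {
    const std::uint64_t before = s.guard.load(std::memory_order_acquire);
    if (before & 1) return false;                     // writer active, skip this segment
    out.assign(s.data, s.data + s.size);
    const std::uint64_t after = s.guard.load(std::memory_order_acquire);
    return before == after;                           // false => potentially torn read
}
```

Rotating the writer through N such segments keeps it from stalling on slow readers, matching the one-writer-many-readers goal stated above.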
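To give a feel for why the network buffering matters, the sketch below estimates the per-slice volumes implied by the quoted 8 MB/s DCM output rate and 5 ms timeslices; the DCM count (180) is taken from the topology diagram on this poster. This is an illustration, not a measurement.

```cpp
// Rough sizing of the synchronized DCM -> buffer node burst.
#include <cstdio>

int main() {
    const double dcm_rate_mb_s = 8.0;    // anticipated DCM output data rate
    const double slice_s       = 0.005;  // 5 ms time slice
    const int    n_dcms        = 180;    // from the topology diagram

    const double slice_per_dcm_kb = dcm_rate_mb_s * slice_s * 1024.0;  // ~41 kB per DCM
    const double burst_mb         = dcm_rate_mb_s * slice_s * n_dcms;  // ~7.2 MB per node

    std::printf("per-DCM slice : %.0f kB every 5 ms\n", slice_per_dcm_kb);
    std::printf("burst per node: %.1f MB arriving nearly simultaneously\n", burst_mb);
    return 0;
}
```

A burst of several megabytes converging on a single receiving node in a few milliseconds is what drives the split between switch buffering and tuned sender-side socket buffers described above.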
NOvA Far Detector Stats:
Weight: 15 kT
Height: 5 stories (53 ft × 53 ft cross section)
Detection cells: 368,640
"Largest plastic structure built by man."
Prototype near detector in operation at FNAL, 2010-present.

[DAQ topology diagram: 11,160 Front End Boards (368,640 detector channels, zero suppressed at 6-8 MeV/cell) feed 180 Data Concentrator Modules, which send 5 ms data blocks over COTS 1 Gb/s Ethernet to 200 buffer node computers (3,200 compute cores). Each buffer node runs the event builder and data time window search over its data buffer, data slice pointer table, and shared memory DDT event stack. The Global Trigger Processor forms a grand trigger OR of the beam spill indicator (asynchronous from FNAL), the calibration pulser (50-91 Hz), and data driven trigger decisions, and broadcasts trigger windows to the buffer nodes; triggered data, output data, and a minimum bias 0.75 GB/s stream flow to the Data Logger.]