The NOνA Data Acquisition System with the Full 14 kt Far Detector
P.F. Ding, D. Kalra, A. Norman and J. Tripathi, for the NOνA DAQ group, Fermi National Accelerator Laboratory

21st International Conference on Computing in High Energy and Nuclear Physics (CHEP 2015), April 2015, Okinawa, Japan

The NOνA experiment
The NOνA experiment is a long-baseline neutrino oscillation experiment designed to measure both νe appearance and νμ disappearance in order to determine the neutrino mass hierarchy, improve knowledge of the neutrino mixing structure and investigate the CP-violating phase δCP.

The experiment uses a multi-detector design in which each of the NOνA detectors uses a functionally identical design, differing only in size and location. The experiment features: a 300 t underground near detector at Fermilab, located approximately 1 km from the primary target station and 14 mrad off the beam axis; and a massive 14 kt far detector in Ash River, Minnesota, located 810 km from the primary target station and 14 mrad off-axis. Both detectors are exposed to the neutrino beam from the 120 GeV NuMI beamline.

NOνA uses a novel data acquisition system based on continuous, deadtime-less readout of the front-end electronics, extended buffering of the data stream and asynchronous software triggering.

Neutrino interactions in the detectors are detected as light pulses in the scintillator. The NOνA detectors, both far and near, consist of liquid scintillator contained in PVC cells. A wavelength-shifting optical fiber routed inside each cell collects the light, and an Avalanche Photodiode (APD) attached to the fiber converts the light pulse into an electrical signal.

The DAQ system architecture
[Figure: a schematic overview of the data acquisition system. In the detector acquisition domain, 168 DCMs, each connected to 64 FEBs, build 50 μs – 5 ms events locally; a 10 Gb fiber network through the networking switch and data center layers carries the data to buffer nodes 1–200, which build 5 ms events globally. The trigger domain, physical storage, and a WAN connection to central and archival storage and to the computing grids (analysis and mining) complete the data flow.]

The Front End Board (FEB) discriminates and time-tags the signals from 32 readout channels. A timing system (described in more detail in a separate poster) synchronizes and provides the time base to all the DAQ electronics modules; it synchronizes the more than 12,000 custom FEBs, both internally and site-to-site, to within 16 ns of each other. The FEBs transmit the over-threshold signals to the Data Concentrator Modules (DCMs). Each DCM collects data from up to 64 FEBs and transmits packets of data onto the DAQ network. A farm of ~200 computers (buffer nodes) buffers the data, builds requested events based on the time tags, performs real-time analysis with the "data driven trigger" (DDT) system and archives the data. The selected data are transmitted to the data processing center at Fermilab for storage and offline analysis.

Readout Module
DCMs are detector-mounted devices with an embedded CPU running Linux. The CPU pulls data from a buffer implemented in an FPGA. This buffer is continuously filled with serial data collected from up to 64 FEB modules via a custom serial and timing interface protocol over the CAT5e front-end link cable.
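The readout chain described above (FEBs time-tagging 32 channels each, DCMs concatenating the hits from up to 64 FEBs into 5 ms blocks) can be illustrated with a short C++ sketch. The structures and names below (Hit, MillisliceBuilder) are illustrative assumptions, not the actual NOνA DAQ code.

```cpp
// A minimal sketch, not the actual NOνA DAQ code: a time-tagged hit as an FEB
// might report it, and a DCM-style accumulator that concatenates hits from
// many FEBs into fixed-length 5 ms "millislice" blocks keyed by time window.
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// A single over-threshold hit: which board and channel fired, when (in ns
// relative to the synchronized time base), and the digitized pulse height.
struct Hit {
    uint16_t feb_id;       // one of up to 64 FEBs served by this DCM
    uint8_t  channel;      // one of 32 channels on the FEB
    uint64_t timestamp_ns; // time tag from the 16 ns-synchronized timing system
    uint16_t adc;          // digitized pulse height
};

// Groups incoming hits into consecutive 5 ms windows, mirroring the way a DCM
// builds its local time slices before shipping them to a buffer node.
class MillisliceBuilder {
public:
    static constexpr uint64_t kSliceLengthNs = 5'000'000; // 5 ms

    void addHit(const Hit& hit) {
        const uint64_t slice_index = hit.timestamp_ns / kSliceLengthNs;
        slices_[slice_index].push_back(hit);
    }

    // Hand a completed slice back (in the real system it would be packed and
    // sent over TCP/IP) and drop it from local storage.
    std::vector<Hit> extractSlice(uint64_t slice_index) {
        std::vector<Hit> out;
        auto it = slices_.find(slice_index);
        if (it != slices_.end()) {
            out = std::move(it->second);
            slices_.erase(it);
        }
        return out;
    }

private:
    std::map<uint64_t, std::vector<Hit>> slices_; // slice index -> hits
};

int main() {
    MillisliceBuilder builder;
    // Two hits from different FEBs that fall into the same 5 ms window.
    builder.addHit({3, 12, 1'200'000, 412});
    builder.addHit({41, 7, 4'900'000, 388});
    const auto slice = builder.extractSlice(0); // both land in slice index 0
    return slice.size() == 2 ? 0 : 1;
}
```

In the real system the completed slices are routed over TCP/IP to the buffer farm, with all DCMs sending data for the same time period to the same buffer node.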
The link deserializers, first-stage buffers, and data concatenation logic are also implemented in the FPGA. DCMs receive timing and control information from the NOνA timing system over a dedicated link; this information is passed on to the FEBs over the remaining three pairs of wires in the front-end cable. DCMs are connected to the gigabit Ethernet DAQ network, and are positioned on the detector as necessary to keep the FEB data cable lengths manageable.

The DAQ system contains a large collection of over 200 compute nodes running Scientific Linux Fermi, which serves as a memory buffer for the data. The farm is designed to buffer data for a minimum of 20 seconds and is operated as one large circular buffer. Data on each DCM are accumulated for 5 milliseconds and then routed over TCP/IP to a buffer node; data from all DCMs collected during the same time period are sent to the same node. The buffer farm also allows monitoring and triggering algorithms to run over the buffered data.

The Global Trigger receives trigger messages from the DDT trigger processes and from remotely generated beam spill triggers, and issues trigger messages in the form of time windows to the buffer nodes. Upon receiving a trigger window, the buffer farm nodes search their data buffers for matching data and route it to a data logger process on a separate node. The data buffers inside each buffer farm node are treated as circular memory; a minimal sketch of this window matching is given after the Data Driven Trigger section below.

The DAQ system performance
The system has been performing with an average of over 90% DAQ live time. A total of ~332 TB of data, consisting of 14,308,325 NuMI beam spill events and 231,210,960 cosmic ray minimum bias events, has been recorded by the DAQ system in the last 12 months. This represents 1.9 x POT equivalent exposure for the full 14 kt detector.

The DAQ network is based on commercially available gigabit Ethernet hardware. The switches interconnect over 10 Gb uplinks to fabric routers that connect the front-end readout and aggregation (detector hall) with the trigger, buffer farm and final event builders (data center). The system is capable of bursting the full readout data from the detector into individual buffer nodes for further processing and triggering, and of writing 2.2 GB/s to disk.

[Figures: daily Protons-On-Target (POT); DAQ live time fraction; accumulated exposure to NuMI beam.]

Data Driven Trigger
The DDT analyzes all the data collected by the NOνA detectors in real time, using hundreds of parallel instances of highly optimised analysis software running within the ARTDAQ framework. It has been operating continuously as part of both the prototype near detector and the far detector for several months and has exhibited well understood, stable rates, as well as algorithm execution times that fit within the stringent criteria needed to operate in real time. The current selection of triggers includes algorithms designed to identify muon tracks useful for calibration, contained tracks collinear with the beam line, and high-energy deposits, as well as both fast and slow monopoles. It is trivially extendable to support many more signatures. The modular software design means common components are executed only once, making optimum use of the computing resources; a sketch of this pattern is given at the end of this transcript.

[Figures: rates of all triggers; execution time for various DDT algorithms; a neutrino interaction at the vertex of two showers coincident with cosmic rays in the near detector; an event with high-energy showers in the far detector; a Data Concentrator Module; DCMs and FEBs mounted on the detector.]
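The trigger-window matching performed by the buffer nodes, as described in the Global Trigger paragraph above, can be sketched as follows. The class and member names are hypothetical; the sketch only illustrates the idea of holding roughly 20 seconds of 5 ms data blocks in circular memory and extracting those that overlap a requested time window.

```cpp
// A minimal sketch with hypothetical names, not the NOνA buffer node code: a
// buffer node keeps recent 5 ms data blocks in a bounded, circular-style store
// and, when the Global Trigger issues a time window, extracts the overlapping
// blocks for routing to a data logger process.
#include <cstddef>
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

struct DataBlock {
    uint64_t start_ns;            // start of the 5 ms window this block covers
    uint64_t end_ns;              // end of that window
    std::vector<uint8_t> payload; // raw hits from all DCMs for this window
};

struct TriggerWindow {
    uint64_t start_ns; // time window issued by the Global Trigger
    uint64_t end_ns;
};

class BufferNode {
public:
    // Hold roughly 20 s of 5 ms blocks: 20 s / 5 ms = 4000 blocks.
    explicit BufferNode(std::size_t max_blocks = 4000) : max_blocks_(max_blocks) {}

    // New data displaces the oldest data, as in a circular buffer.
    void store(DataBlock block) {
        if (buffer_.size() >= max_blocks_) buffer_.pop_front();
        buffer_.push_back(std::move(block));
    }

    // Return copies of all blocks that overlap the trigger window.
    std::vector<DataBlock> extract(const TriggerWindow& w) const {
        std::vector<DataBlock> matched;
        for (const auto& b : buffer_) {
            const bool overlaps = b.start_ns < w.end_ns && b.end_ns > w.start_ns;
            if (overlaps) matched.push_back(b);
        }
        return matched;
    }

private:
    std::size_t max_blocks_;
    std::deque<DataBlock> buffer_;
};

int main() {
    BufferNode node;
    node.store({0, 5'000'000, {}});
    node.store({5'000'000, 10'000'000, {}});
    // A beam-spill trigger window that overlaps only the second block.
    const auto matched = node.extract({6'000'000, 6'500'000});
    return matched.size() == 1 ? 0 : 1;
}
```

In the real system the matched data from all buffer nodes are routed to the data logger, which builds the final 50 μs – 2 s events.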
[Figure labels from the DAQ schematic: FEBs/APDs, each connecting to 32 channels (344,064 channels in total); DCMs; data logger nodes building 50 μs – 2 s events finally; the Global Trigger, taking DDT trigger and beam spill information from Fermilab. Event display: an event of 550 μs readout in the far detector.]
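Finally, the modular "common components run once" design of the Data Driven Trigger described above can be illustrated with a short sketch. This is not the actual art/ARTDAQ module code; the names (DDTPipeline, TrackList, TriggerDecision) and the track-length cut are hypothetical stand-ins for the shared reconstruction step and the independent trigger algorithms that reuse its output.

```cpp
// A minimal sketch (hypothetical names, not the actual art/ARTDAQ modules):
// a shared reconstruction step runs once per 5 ms block and its output is fed
// to several independent trigger algorithms, each of which can request a
// readout window from the Global Trigger.
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

struct TrackList {                 // stand-in for the shared reconstruction output
    std::vector<double> track_lengths_cm;
};

struct TriggerDecision {
    std::string name;              // e.g. "calibration_muon", "slow_monopole"
    bool        fire;              // should a trigger window be issued?
};

// A trigger algorithm consumes the common reconstruction product and decides.
using TriggerAlgorithm = std::function<TriggerDecision(const TrackList&)>;

class DDTPipeline {
public:
    void addAlgorithm(TriggerAlgorithm algo) { algorithms_.push_back(std::move(algo)); }

    // The common component (track finding here) runs once per data block;
    // every registered trigger algorithm then reuses its output.
    std::vector<TriggerDecision> process(const std::vector<uint8_t>& raw_block) const {
        const TrackList tracks = findTracks(raw_block);   // executed once
        std::vector<TriggerDecision> decisions;
        for (const auto& algo : algorithms_) decisions.push_back(algo(tracks));
        return decisions;
    }

private:
    static TrackList findTracks(const std::vector<uint8_t>&) {
        return {};                                        // placeholder reconstruction
    }
    std::vector<TriggerAlgorithm> algorithms_;
};

int main() {
    DDTPipeline ddt;
    // Example: a calibration-muon trigger that fires on any sufficiently long track
    // (the 400 cm threshold is purely illustrative).
    ddt.addAlgorithm([](const TrackList& t) {
        for (double len : t.track_lengths_cm)
            if (len > 400.0) return TriggerDecision{"calibration_muon", true};
        return TriggerDecision{"calibration_muon", false};
    });
    ddt.process({});                                      // one 5 ms block
    return 0;
}
```

Because the shared reconstruction runs only once per block, adding a new trigger signature costs only the time of its own decision logic, which is why the selection is trivially extendable.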