Data Acquisition Development at JLAB
David Abbott - Jefferson Lab DAQ Group

Data Acquisition Status

- The DAQ Group now stands at 5 members.
- Recent experiments have begun to test the limits of the current CODA distribution (v2.5).
- Aging technologies (software and hardware) are being retired and replaced, both to support the 6 GeV program and to develop the 12 GeV upgrade.
- Continue to use open standards and minimize the use of commercial software while maximizing the use of commercial hardware.
- We continue to focus on a "migration" plan from CODA 2 to CODA 3 (via CODA 2.6).

General DAQ Issues

- Front-end hardware is evolving: real-time intelligence is moving from the CPU to FPGAs, and old hardware technologies (FASTBUS) are no longer commercially supported.
- CPU-based real-time readout on a per-event basis limits the maximum accepted L1 trigger rate (~10 kHz); see the estimate after this list.
- The 32-crate limit of the trigger distribution system is nearly reached in Hall B.
- Event transport limitations in the current CODA architecture are being seen for moderately complex systems.
- Computing platform and OS changes (multi-core, more memory, 64-bit systems, etc.) are not taken advantage of.
- Aging software technologies and reliance on third-party packages make code portability and upkeep difficult.
- Monitoring and control of large numbers of distributed objects are not handled in a consistent way (too many protocols).
- "Slow" controls are only minimally supported.
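A rough estimate illustrates the per-event readout limit above. Assuming a fixed per-event interrupt and transfer overhead on the order of 100 µs (an illustrative figure assumed here, not one quoted in these slides), the maximum sustainable trigger rate is f_max ≈ 1 / 100 µs = 10 kHz, independent of how quickly the modules themselves digitize. Blocking N events per readout amortizes that fixed cost, f_max ≈ N / (t_fixed + N·t_event), which is the motivation for the pipelined, block-readout front ends described in the following slides.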

CODA 3 - Requirements/Goals

- Pipelined electronics (FADC, TDC)
  – Dead-timeless system
  – Replacement for obsolete electronics
  – Eliminate large numbers of delay cables
- Integrated L1/L2 trigger and trigger distribution system
  – Support up to 200 kHz L1 trigger rate
  – Use FADC data for the L1 trigger input
  – Support 100+ crates
- Parallel/staged event building (see the estimate after this list)
  – Handle ~100 input data streams
  – Scalable (>1 GByte/s) aggregate data throughput
- L3 online farm
  – Online reduction (up to x10) in data written to permanent storage
- Integrated experiment control
  – DAQ run control + "slow" control/monitoring
  – Distributed, scalable, and "intelligent"
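A back-of-the-envelope number shows why parallel event building and an L3 farm go together with the 200 kHz trigger goal. Assuming an average event size of roughly 5 kB (an illustrative figure assumed here, not one quoted in these slides), 200 kHz × 5 kB/event ≈ 1 GB/s into the event builder, which matches the >1 GByte/s aggregate-throughput requirement; a x10 online L3 reduction would then bring the rate to permanent storage down to about 100 MB/s.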

Proposed GlueX DAQ System

Current DAQ Projects

Components:
- CODA Objects
- CODA ROC
- CODA EMU (EB/ER/ANA)
- Run Control

Software Tools:
- cMsg
- ET
- EVIO
- Config and display GUIs

Hardware:
- FADC / F1TDC
- Trigger Interface (VME/PCI)
- Trigger/clock distribution
- Commercial module support

R&D:
- Embedded Linux
- Experiment control
- Staged/parallel event building
- 200 kHz trigger/readout
- Clock distribution
- L3 farm

Front-End Systems [crate diagram]

- VME CPU running the CODA ROC: MV6100 (PPC, GigE, vxWorks) or GE V7865 (Intel, GigE, Linux); readout ~ MB/s.
- Trigger Interface (V3): pipelined trigger, event blocking, clock distribution, event ID and bank info.
- Payload modules: F1 TDC, Flash ADC.
- R&D goal: fully pipelined crates capable of 200 kHz trigger rates.

VXS - L1 Trigger [crate diagram]

- VME CPU (Intel, GigE, Linux; board not yet chosen) running the CODA ROC handles VME readout of event data.
- Switch-slot sum and trigger distribution modules (VXS) collect sums/hits, pass data to the master L1, and handle clock and trigger distribution.
- Flash ADCs use the VXS high-speed serial backplane (P0) to deliver energy-sum and hit data.
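As an illustration only, the sketch below is a software model of the crate-level energy-sum decision that the FADCs and VXS sum/trigger modules implement in firmware; the class name, threshold, and example values are hypothetical and not taken from these slides.

    // Illustrative software model of the VXS crate-level energy sum.
    // In the real system this logic runs in FADC and crate trigger-processor
    // firmware; names and numbers here are hypothetical.
    public class CrateSumModel {
        private static final int ENERGY_THRESHOLD = 2000; // hypothetical threshold (ADC counts)

        /**
         * Each FADC reports a per-board energy sum over the P0 backplane;
         * the crate trigger processor adds them and compares to a threshold.
         */
        public static boolean l1Accept(int[] perBoardEnergySums) {
            long crateSum = 0;
            for (int boardSum : perBoardEnergySums) {
                crateSum += boardSum;
            }
            return crateSum > ENERGY_THRESHOLD;
        }

        public static void main(String[] args) {
            int[] sums = {350, 780, 1200, 40};   // example per-FADC sums
            System.out.println("L1 accept: " + l1Accept(sums));
        }
    }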

Building a DAQ System [data-flow diagram]: data flows from the ROC to the EMU, and from there to a file, to ET, and on to user processes.

Event Distribution

- ET provides efficient transport of data for event building and flexible user access.
- EMU provides easy configuration and user-specific processing options.

Staged/Parallel Event Building

- Divide the total throughput into N streams (1 GB/s -> N x ~MB/s per stream); see the sketch after this list.
- Two stages: data concentration -> event building.
- Each EMU is a software component running on a separate host.
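The sketch below illustrates, under assumed and hypothetical class names (this is not the actual EMU code), how a data-concentration stage might assign event blocks to N parallel building streams so that each stream carries roughly 1/N of the aggregate bandwidth, e.g. ~100 MB/s per stream for 1 GB/s total and N = 10.

    // Illustrative sketch only: how a data-concentration stage might fan event
    // blocks out to N parallel event-building streams. This is not the actual
    // CODA EMU code; class and method names are hypothetical.
    import java.util.List;

    public class StreamFanOut {
        private final List<BuilderStream> builders; // one per event-building EMU host
        private long blockCounter = 0;

        public StreamFanOut(List<BuilderStream> builders) {
            this.builders = builders;
        }

        /**
         * All fragments belonging to one block of events must go to the same
         * builder, so whole blocks (not individual fragments) are assigned
         * round-robin. With N builders, each stream sees roughly 1/N of the rate.
         */
        public void route(EventBlock block) {
            int target = (int) (blockCounter++ % builders.size());
            builders.get(target).submit(block);
        }

        public interface BuilderStream {
            void submit(EventBlock block);
        }

        public static class EventBlock {
            public final long firstEventNumber;
            public final byte[] payload;
            public EventBlock(long firstEventNumber, byte[] payload) {
                this.firstEventNumber = firstEventNumber;
                this.payload = payload;
            }
        }
    }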

AFECS - Integrated Experiment Control

- FIPA-compliant, Java-based (Java 1.5) "intelligent" agents; see the sketch after this list.
- Extensions provide runtime "distributed" containers (JVMs).
- Agents provide customizable intelligence (a state machine) and communication (cMsg, CA, SNMP, etc.) with external processes.
- Many independent "logical" control systems can operate within the platform.
- The system is scalable: agents can migrate to JVM containers on different nodes at runtime.
- System tested: 3 containers on different hosts with 1000 agents controlling 1000 physical components distributed over 20 other nodes.
- ~40% CPU and 200 MB of memory used by each JVM.
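The sketch below shows the agent-as-state-machine idea in minimal form. It is not the AFECS API; the class, interface, and transition handling are hypothetical, loosely following the standard CODA run-control transitions (configure, download, prestart, go, end).

    // Illustrative sketch of the agent-as-state-machine idea behind AFECS.
    // This is NOT the AFECS API: the class, interface, and transition handling
    // are hypothetical.
    public class ComponentAgent {

        public enum State { BOOTED, CONFIGURED, DOWNLOADED, PRESTARTED, ACTIVE, ENDED }

        /** Abstraction of the real component (ROC, EMU, EPICS IOC, ...) the agent manages. */
        public interface ControlledComponent {
            void execute(String transition) throws Exception; // e.g. delivered via cMsg, CA, SNMP...
        }

        private final ControlledComponent component;
        private State state = State.BOOTED;

        public ComponentAgent(ControlledComponent component) {
            this.component = component;
        }

        /** Drive the component through one transition and record the resulting state. */
        public synchronized State transition(String command) throws Exception {
            State next;
            if      ("configure".equals(command)) next = State.CONFIGURED;
            else if ("download".equals(command))  next = State.DOWNLOADED;
            else if ("prestart".equals(command))  next = State.PRESTARTED;
            else if ("go".equals(command))        next = State.ACTIVE;
            else if ("end".equals(command))       next = State.ENDED;
            else throw new IllegalArgumentException("unknown transition: " + command);

            component.execute(command); // forward the command to the physical component
            state = next;               // update the agent's view only if execute() succeeded
            return state;
        }

        public State getState() {
            return state;
        }
    }

A supervisor agent would then aggregate the states of many such component agents and expose a single state for the whole "logical" control system, with a grand supervisor doing the same across supervisors, as in the hierarchy slide below.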

Hierarchy of Control [diagram]

AFECS platform (Java 1.5+): physical components (CODA ROC, CODA EMU, EPICS IOC, EPICS CAG, trigger software and hardware, online analysis) are each represented by a normative agent; normative agents report to supervisor agents, which report to a grand supervisor. The layers communicate via IPC, and the platform is accessible through web and front-end interfaces.

CODA Evolves [diagram showing cMsg and AFECS feeding into CODA 3]

CODA 2.6 Features

- Integrates the new AFECS run control system.
- cMsg IPC replaces the RC (ROC, EB, ER) communication as well as CMLOG message logging; see the sketch after this list.
- Support for newer operating systems and compilers
  – vxWorks 5.5, 6+
  – RHEL 4, Solaris 10, OS X
- New and updated tools
  – ET upgraded, 64-bit compliant
  – Db2cool
  – EVIO package
- Support for new CODA 3 objects and components
- Integration of long-standing bug fixes, new driver libraries, and feature enhancements
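As a hedged illustration of the publish/subscribe style of IPC that cMsg provides, the sketch below publishes a simple status message. The package, class, and method names are assumptions about the cMsg Java client and should be verified against the cMsg documentation shipped with the CODA release in use; the UDL, component name, and payload are made-up examples.

    // Minimal sketch only: publishing a status message over cMsg.
    // ASSUMPTION: the class/method names (org.jlab.coda.cMsg.cMsg, cMsgMessage,
    // connect/send/disconnect) follow the cMsg Java client; verify against the
    // cMsg documentation before use.
    import org.jlab.coda.cMsg.cMsg;
    import org.jlab.coda.cMsg.cMsgMessage;

    public class StatusPublisher {
        public static void main(String[] args) throws Exception {
            // UDL, component name, and description are hypothetical examples.
            cMsg conn = new cMsg("cMsg://daqserver:45000/cMsg/expt", "ROC1", "example status publisher");
            conn.connect();

            cMsgMessage msg = new cMsgMessage();
            msg.setSubject("ROC1");          // who is reporting
            msg.setType("status");           // what kind of message
            msg.setText("eventRate=9500");   // payload (text form for simplicity)

            conn.send(msg);
            conn.disconnect();
        }
    }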

Summary

- The DAQ Group must support ALL experimental programs at JLAB. The current group must grow by at least 2 FTEs soon to meet current timelines.
- CODA 2.6 is available now and will provide an integration path for CODA 3 technologies.
- Much of the DAQ software development depends on custom hardware development in order to satisfy many 12 GeV requirements.
- Current DAQ projects reflect the philosophy that we can progress to support the physics of the 12 GeV program through an evolution of the existing, proven system.