Multistep Runs with ROD Crate DAQ Murrough Landon, QMUL Outline: Overview Implementation Comparison with existing setup Readout Status ModuleServices API change Next Steps Demo?



CERN, 30 Jan 2006 Murrough Landon, QMUL

ROD Crate DAQ Controller

Derived from ROS software:
– Application framework implementing the run state transition skeleton
– Implemented as a set of plugin libraries, configured via the database
– Set of ReadoutModules with DataChannels responsible for reading data (analogous to ROBins with input links)

L1Calo implementation:
– configL1Calo package: schema and data access library providing the plugin for our L1CaloReadoutModule subclass
– readoutModule package: implementation of the module and data channel classes, plus a "crate" class for common actions at the start of each state transition: L1CaloReadoutModule, L1CaloDataChannel, CrateSetup
– dbFiles package: script to generate RCD DB objects
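The plugin structure described above might be sketched as follows. The base-class shapes here are assumed stand-ins, not the real ROS headers: only the class names L1CaloReadoutModule and DataChannel come from the slide, and the slot-based channel creation is purely illustrative.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Minimal stand-in for the framework's data channel (assumed shape).
class DataChannel {
public:
    explicit DataChannel(int id) : m_id(id) {}
    virtual ~DataChannel() = default;
    int id() const { return m_id; }
private:
    int m_id;
};

// Minimal stand-in for the ROS ReadoutModule plugin interface: the RCD
// controller drives subclasses through run-state transition hooks.
class ReadoutModule {
public:
    virtual ~ReadoutModule() = default;
    virtual void configure() = 0;
    virtual void unconfigure() = 0;
    const std::vector<std::unique_ptr<DataChannel>>& channels() const {
        return m_channels;
    }
protected:
    std::vector<std::unique_ptr<DataChannel>> m_channels;
};

// Hypothetical L1Calo subclass: create one DataChannel per occupied
// crate slot when the configure transition arrives.
class L1CaloReadoutModule : public ReadoutModule {
public:
    explicit L1CaloReadoutModule(int nSlots) : m_nSlots(nSlots) {}
    void configure() override {
        for (int slot = 0; slot < m_nSlots; ++slot)
            m_channels.push_back(std::make_unique<DataChannel>(slot));
    }
    void unconfigure() override { m_channels.clear(); }
private:
    int m_nSlots;
};
```

In the real framework the subclass would be loaded as a plugin library and configured from the database objects generated by the dbFiles package.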

RCD Implementation

RCD Controller: Pros and Cons

Pros:
– Standard, supported solution for detector ROD (and other) crates
– Hopefully less maintenance and easier use of new features
– Implements readout/monitoring/histogram publishing within the framework, also interrupts and database access (not COOL yet?)
– Allows multiple readouts during the RUNNING phase of multistep runs (compared with one readout per step at present)

Cons:
– The database requires lots of objects: we generate them all from the hardware description (hw/*_crates.data.xml), but it is then harder to customise individual crates, e.g. to add applications such as "kickers"
– Less flexible: no direct access to the RC skeleton
– Represents a change from what we are used to

Readout Data

Software:
– The separate crate_readout program is no longer required
– Start/stop of readout should be better synchronised with state transitions

Format:
– Data from one crate looks like a ROS fragment, containing one ROB (and ROD) fragment per module
– Using the "TDAQ beam crate" source ID at the moment (0x700000)
– Should we add the crate ID to this at ROS level, and crate and module IDs at ROB/ROD level?
– No need for the existing one-word buffer header (or the extended crate header?), but keep the rest of the formatted buffer contents

Monitoring:
– Use -k RCD -v rcd-
– Existing standalone monitoring programs need to use eformat to locate ROD fragments within the ROS fragment
– But GNAM can easily provide a vector of ROD fragments
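One possible packing of crate and module IDs into the source ID could look like the sketch below. Only the base value 0x700000 comes from the slide; the bit split (crate in bits 8–15, module in bits 0–7) is an assumption, since the slide explicitly leaves the final layout as an open question.

```cpp
#include <cassert>
#include <cstdint>

// "TDAQ beam crate" base source ID from the slide.
constexpr uint32_t kBeamCrateBase = 0x700000;

// Hypothetical encoding: crate number in bits 8-15, module number in
// bits 0-7, below the subdetector byte. Not an agreed format.
constexpr uint32_t makeSourceId(uint32_t crate, uint32_t module) {
    return kBeamCrateBase | ((crate & 0xffu) << 8) | (module & 0xffu);
}
constexpr uint32_t crateOf(uint32_t id)  { return (id >> 8) & 0xffu; }
constexpr uint32_t moduleOf(uint32_t id) { return id & 0xffu; }
```

With this split, crate 3 / module 5 would yield source ID 0x700305, and the fields decode back unambiguously.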

Present Status

Implemented and tested (OK):
– Actions at run state transitions (USA15 and RAL)
– Publishing status to IS
– One readout at state transitions and many readouts when RUNNING

Problems:
– See a few "ROS" errors building events (high rate?)
– When running, we need a way to know if a new event is available (at present the DataChannel just sleeps a bit)

To be done:
– Test elsewhere than USA15 (e.g. with more modules at RAL)
– Modify monitoring programs for the slightly different packaging of formatted readout buffers
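The "just sleeps a bit" workaround mentioned above amounts to a poll-and-sleep loop, roughly as sketched here. tryReadEvent and pollForEvent are hypothetical names; the real DataChannel code accesses the module hardware rather than a callback.

```cpp
#include <chrono>
#include <thread>

// Illustration of the current workaround: with no way to ask the module
// whether a new event exists, the channel retries a read attempt,
// sleeping briefly between misses. Returns the attempt index on which
// an event appeared, or -1 if none arrived within maxAttempts.
template <typename TryRead>
int pollForEvent(TryRead tryReadEvent, int maxAttempts) {
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        if (tryReadEvent())
            return attempt;  // event found
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return -1;  // gave up
}
```

The obvious cost is added latency and wasted wakeups, which is what motivates the API change proposed on the next slide.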

ModuleServices API Change?

Issue:
– DataChannel has a poll() method to see if data is available (alternatively it can use interrupts)
– If true, RCD will call DataChannel::getNextEvent()
– But at the moment the DataChannel has no way of knowing if the PPM or ROD (or DSS?) has a new event available (for other modules it's not a problem, as status information is always available)

Suggestion:
– Add a generic isDataAvailable() method to the moduleServices API
– It only needs a real implementation in the PPM, ROD (and DSS?)
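The suggested change might look like the following sketch. Only the method name isDataAvailable() comes from the slide; the surrounding interface shape, the default-true behaviour, and the PPM example are assumptions chosen to match the stated reasoning (other modules' status is always readable, so only PPM, ROD, and possibly DSS need a real implementation).

```cpp
#include <cassert>

// Assumed shape of the moduleServices interface with the proposed
// generic query added. Defaulting to "true" means modules whose status
// is always available need no override.
class ModuleServices {
public:
    virtual ~ModuleServices() = default;
    virtual bool isDataAvailable() { return true; }
};

// Hypothetical PPM implementation: consult a new-event flag that stands
// in for reading a hardware status register.
class PpmServices : public ModuleServices {
public:
    bool isDataAvailable() override { return m_newEvent; }
    void setNewEvent(bool flag) { m_newEvent = flag; }  // fake register write
private:
    bool m_newEvent = false;
};
```

DataChannel::poll() could then delegate to isDataAvailable() instead of sleeping blindly, giving the RCD controller an honest answer before it calls getNextEvent().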

Next Steps

Soon:
– More tests, adapt PPM monitoring and try Tile tests
– Try out at other sites, after a few dbFiles modifications:
  partitions: Partition → PartitionRcd
  segments: partition_segments → cut-down partitionRcd_segments
  Include partition_genrcd.data.xml instead of the existing crate segments

Fairly soon:
– Implement the moduleServices API change, along with a connect() method, in preparation for the forthcoming run control changes

Not long after:
– Would like to move to RCD as our standard
– NB: the simulation controller remains in the old style for now