IEEE - Nuclear Science Symposium San Diego, Oct. 31st 2006


Control and Operation of the LHCb Readout Boards using Embedded Microcontrollers and the PVSSII SCADA System
Stefan Koestner on behalf of the LHCb Online Group (email: Stefan.Koestner@cern.ch)

The LHCb experiment
- Dedicated to B-physics: a single-arm forward spectrometer (15–300 mrad), L = 2 × 10^32 cm⁻² s⁻¹ (~1 evt/bx)
- 10^12 bb̄ pairs (~100 kHz), but the fraction of interest is just 10^-5 → an efficient trigger (implemented in 2 levels) is necessary to enrich the data sample!
- Ten sub-detectors with approx. 1.1 million readout channels in total; the average total event size (at average occupancy) is 35 kB (after zero suppression)

Trigger and DAQ system
- L0 trigger: 4 μs fixed latency, time-synchronized to the FE chips (Calos, Muon & Pile-up Veto); reduces the 40 MHz frontend rate to ~1 MHz
- L1 electronics: collection and preprocessing into Multi-Event Packages
- Push protocol, assuming the following stage has enough buffer; centralized throttle via the TFC system (synchronous)
- 1 MHz readout (~35 GB/s) via Gigabit Ethernet to the PC farm
- HLT (triggering, reconstruction, storage): ~2 kHz to tape, raw data at 70 MB/s
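The quoted bandwidths follow directly from the trigger rates and the average event size; a quick consistency check (decimal kB/GB assumed):

```python
# Back-of-envelope check of the LHCb DAQ bandwidth figures quoted above.
EVENT_SIZE_KB = 35          # average zero-suppressed event size
L0_ACCEPT_HZ = 1_000_000    # readout rate after the L0 trigger
HLT_ACCEPT_HZ = 2_000       # rate written to storage after the HLT

readout_gb_s = L0_ACCEPT_HZ * EVENT_SIZE_KB * 1e3 / 1e9   # bytes/s -> GB/s
storage_mb_s = HLT_ACCEPT_HZ * EVENT_SIZE_KB * 1e3 / 1e6  # bytes/s -> MB/s

print(readout_gb_s)  # 35.0 GB/s into the event-building network
print(storage_mb_s)  # 70.0 MB/s to tape
```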

TELL1 board
- Trigger ELectronics Level 1 board: FPGA-based, adopted by almost all subdetectors
- In total ~300 boards with ~400 registers & memory blocks

TELL1 board
- Processing chain: ADC conversion → synchronisation, reordering, pedestal subtraction → common-mode suppression → zero suppression → multi-event packing → IP framing → Gigabit Ethernet link
- ECS interface via a credit-card-sized PC connected to 10/100 Ethernet
- Controls network separated from the data network: an isolated access path → robustness
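Two of the steps in the chain can be illustrated with a toy sketch; the pedestal values and threshold below are invented, and the real processing runs in FPGA firmware, not in software:

```python
# Toy illustration of two TELL1 preprocessing steps: pedestal subtraction
# and zero suppression. Numbers are invented for demonstration.

def subtract_pedestals(samples, pedestals):
    """Remove each channel's known baseline offset."""
    return [s - p for s, p in zip(samples, pedestals)]

def zero_suppress(samples, threshold):
    """Keep only (channel, value) pairs above threshold to shrink the event."""
    return [(ch, v) for ch, v in enumerate(samples) if v > threshold]

raw       = [102, 99, 140, 101, 98, 125]   # ADC counts per channel
pedestals = [100, 100, 100, 100, 100, 100]
hits = zero_suppress(subtract_pedestals(raw, pedestals), threshold=5)
print(hits)  # [(2, 40), (5, 25)] -- only channels with real signal survive
```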

Credit Card PC
- CCPC: 4'' SM520PCX (Digital-Logic), an i486-compatible microcontroller (AMD Elan SC520) running a Linux kernel at 133 MHz
- Reads from the local bus at 20 MB/s; filesystem shared over a server
- Access to the board via a glue card over a PLX PCI9030 bridge: parallel bus (8/16/32 bit), 3 JTAG chains (2 MHz), 4 I2C lines and 9 GPIO lines
- Low-level libraries maintained under SLC4

Experiment Control System
- A uniform and homogeneous control system based on the 'Joint Controls Project' (JCOP) framework, tailored for LHCb needs: eases development as well as operation; a distributed and scalable system with coherent interfaces
- Configuration, monitoring and operation of the 'whole' experiment: data acquisition & trigger, detector operations (DCS), experimental infrastructure, interaction with the accelerator & the CERN Safety System

Industrial SCADA (PVSSII)
- The common JCOP framework at CERN is built upon an industrial SCADA system: PVSSII
- Data (registers or probes) is stored in 'data points' (complex structures) connected to the PVSS internal memory DB; a change of these entries triggers a callback function, and alerts can be launched upon certain thresholds
- Extended to allow for finite state modelling & hierarchical control: an intuitive model of each subsystem, several levels of abstraction, subsystems can run alone, auto-recovery from errors
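The data-point-with-callback mechanism can be sketched as follows; the class, the data-point name, and the threshold are illustrative only (real PVSSII uses its own CTRL scripting language, not Python):

```python
# Minimal sketch of the PVSS-style "data point with callback" idea:
# writing a value into a data point invokes the registered callbacks,
# which can raise an alert when a threshold is crossed.

class DataPoint:
    def __init__(self, name):
        self.name = name
        self.value = None
        self._callbacks = []

    def connect(self, callback):
        """Register a function invoked on every value change."""
        self._callbacks.append(callback)

    def set(self, value):
        self.value = value
        for cb in self._callbacks:
            cb(self.name, value)

alerts = []
temperature = DataPoint("tell1_042.temperature")  # hypothetical data point
temperature.connect(lambda dp, v: alerts.append((dp, v)) if v > 60 else None)

temperature.set(45)   # below threshold: no alert
temperature.set(72)   # above threshold: alert fires
print(alerts)         # [('tell1_042.temperature', 72)]
```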

Communication Layer - DIM
- The interface from the ECS to the hardware is realized with the 'Distributed Information Management' system (DIM), a portable, lightweight communication layer
- A DIM server running on the CCPC publishes its services to a DIM Name Server (DNS), from where the client (ECS) can subscribe to them; data is exchanged peer to peer from server to client
- The client sends commands (write/read); the server updates services (data/status)
- Portability: clients can be installed on any machine by just specifying the DNS node (no need to take care of connectivity)
- Robustness: if the server crashes, it can easily republish on the DNS node
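The publish/locate/command flow described above can be mimicked in a few lines; this is a toy model of the pattern only, not the real DIM API (which is a C/C++ network package), and all names are invented:

```python
# Toy model of the DIM pattern: a server registers its services with a
# name server (DNS); a client looks the service up there and then talks
# to the server directly (peer to peer).

class NameServer:
    def __init__(self):
        self._services = {}
    def publish(self, name, server):
        self._services[name] = server
    def locate(self, name):
        return self._services[name]

class DimServer:
    def __init__(self, dns, service, value):
        self.value = value
        dns.publish(service, self)        # register service with the DNS
    def read(self):
        return self.value                 # service update sent to clients
    def command(self, new_value):
        self.value = new_value            # client command executed on board

dns = NameServer()
DimServer(dns, "TELL1_042/Status", "READY")   # server side (on the CCPC)

server = dns.locate("TELL1_042/Status")   # client only needs the DNS node
server.command("RUNNING")                 # write command from the ECS
print(server.read())  # RUNNING
```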

Abstraction Layer - FwHw
- A tool introduced to model hardware as PVSS data points: it hides the diversity and complexity of the various hardware/bus types, takes care of the communication with the TELL1 (using DIM), and offers framework function calls by register name
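The core idea of access-by-register-name can be sketched as a lookup table that dispatches to the right bus underneath; the bus types, register names and addresses below are invented for illustration:

```python
# Sketch of the hardware-abstraction idea: registers are addressed by
# name, and the tool resolves the bus and address details internally.

class RegisterMap:
    def __init__(self):
        self._registers = {}   # name -> (bus, address)
        self._memory = {}      # stands in for the real hardware access

    def define(self, name, bus, address):
        self._registers[name] = (bus, address)

    def write(self, name, value):
        bus, address = self._registers[name]   # lookup hides bus details
        self._memory[(bus, address)] = value

    def read(self, name):
        bus, address = self._registers[name]
        return self._memory[(bus, address)]

hw = RegisterMap()
hw.define("ZeroSuppressionThreshold", bus="i2c", address=0x40)
hw.define("TriggerCounter", bus="local", address=0x1000)

hw.write("ZeroSuppressionThreshold", 5)
print(hw.read("ZeroSuppressionThreshold"))  # 5
```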

Finite State Machines
- Further abstraction as finite state machines (SMI++): defined transitions from one state of the board into another, with the possibility to auto-recover
- Device Units: act on the hardware (data points); a state change can be triggered by the hardware
- Control Units: send commands to their children (DUs); state transitions occur when children change
- Firmware can be downloaded to the board in state NOT_READY
- Configuration of the board (mainly writing registers) via 'recipes' downloaded from the Configuration Database (different settings for different run conditions)
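A device unit's state machine can be sketched as a transition table; the state names and transitions below are an illustrative simplification of the SMI++ modelling, not the actual LHCb state diagram:

```python
# Minimal finite-state-machine sketch for a board device unit.
TRANSITIONS = {
    ("NOT_READY", "Configure"): "READY",     # write recipe registers
    ("READY",     "Start"):     "RUNNING",
    ("RUNNING",   "Stop"):      "READY",
    ("ERROR",     "Recover"):   "NOT_READY", # auto-recovery path
}

class DeviceUnit:
    def __init__(self):
        self.state = "NOT_READY"
    def command(self, action):
        key = (self.state, action)
        if key in TRANSITIONS:               # ignore undefined transitions
            self.state = TRANSITIONS[key]
        return self.state

du = DeviceUnit()
du.command("Configure")
print(du.command("Start"))    # RUNNING
du.state = "ERROR"            # e.g. the hardware reports a fault
print(du.command("Recover"))  # NOT_READY: board must be reconfigured
```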

Tell1 Control Unit
- An example of a Tell1 CU panel (under construction)
- The states of the children (DUs) define the state of the CU; commands are propagated downwards
- Partitioning of the system is a key feature (partitions can have their own trigger)
- Clicking on a device unit opens the operator panels for the device (boards)
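The rule "the states of the children define the state of the CU" amounts to an aggregation function; the specific rules below (ERROR dominates, homogeneous children are adopted, anything else is a mixed transition state) are invented for illustration:

```python
# Illustrative aggregation of child device-unit states into a
# control-unit state.

def control_unit_state(children):
    if any(s == "ERROR" for s in children):
        return "ERROR"            # any faulty board flags the whole CU
    if all(s == children[0] for s in children):
        return children[0]        # homogeneous children: adopt their state
    return "MIXED"                # partition in transition

print(control_unit_state(["READY", "READY", "READY"]))    # READY
print(control_unit_state(["READY", "ERROR", "READY"]))    # ERROR
print(control_unit_state(["READY", "RUNNING", "READY"]))  # MIXED
```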

Tell1 Device Unit
- Screenshot of a Tell1 DU panel, the operator interface: identifies the board type, evaluates malfunctioning parts of the board, gives a quick overview of important counters and registers, and allows reconfiguration of the refresh rate

Tell1 Device Unit
- Registers (data points) can be connected to callback functions which allow the functionality of subcomponents to be evaluated (inside a Ctrl script); error recovery or alerts can be implemented

Tell1 Device Unit
- Many panels to monitor registers, grouped in sub-panels
- Panels for interaction and reconfiguration of registers

Conclusions
- Presented the basic concepts for integrating the LHCb readout boards into the Experiment Control System, with a focus on the various layers and frameworks used
- The system is currently under test for commissioning of the sub-detectors (refinements still ongoing)
- A generic design was of major importance to allow usage by different subdetectors and board types
- Further wizards will facilitate board implementation by non-experts