Control and operation of detector systems and their readout electronics in a complex experiment control system. Stefan Koestner (CERN) on behalf of the LHCb Online Group.

Presentation transcript:

The LHCb experiment
- Single-arm forward spectrometer dedicated to B physics at the LHC (√s = 14 TeV, luminosity 2·10^32 cm^-2 s^-1).
- Angular acceptance up to 300 (250) mrad; about 10^12 bb pairs per year, covering the full spectrum of B hadrons.

Radiation environment
- Cavern side: the dose outside the detector reaches up to 10 Gy in 10 years, so radiation-tolerant electronics are needed close to the detector.
- Counting-house side: no radiation-hard electronics required.

Trigger and data flow
- L0 trigger: ~1 MHz accept rate with a fixed latency of 4 µs, time-synchronised to the front-end chips.
- L1 electronics: readout at 1 MHz (~35 GB/s), data collection and preprocessing; the push protocol assumes sufficient buffering in the next stage.
- Multi-Event Packets (MEPs) are sent to the PC farm over Gigabit Ethernet.
- PC farm: high-level triggers, reconstruction, storage.
- Trigger & Fast Control (Readout Supervisor): distributes the clock, sends triggers and broadcasts, assigns the farm-node IPs, sends throttles and resets.
(Figure: readout data flow with the TELL1 boards, the Readout Supervisor and the throttle OR.)

Readout boards and embedded controllers
- 15 different types of DAQ and trigger boards, some 400 boards in total, each with a few hundred registers (to monitor) and memory blocks (to upload).
- Common readout board (TELL1) based on FPGA technology, adaptable to subdetector-specific needs (e.g. the Inner Tracker).
- The readout boards sit in the radiation-safe area and host embedded microcontrollers: credit-card sized PCs (CCPC) with access to I2C, JTAG and a general-purpose bus.
- The CCPC gives an isolated access path (robustness); the controls network (10/100 Ethernet) is separated from the data network.
- A generic server on each CCPC communicates with the ECS (DIM) client; see the DIM section and the sketch below.

Experiment Control System (ECS)
- Based on the industrial SCADA system PVSS II and the common LHC JCOP framework.
- Distributed and scalable system with a coherent interface; platform independent (Control script interpreter).

Configuration database
- Stores the register settings uploaded at start-up (Oracle/SQL).

Front-end electronics in the radiation area
- Service boxes inside a concrete bunker close to the detectors require radiation-tolerant electronics for front-end configuration (SPECS mezzanine) and analogue-to-digital conversion.
- The SPECS mezzanine card is hosted on controller cards in the radiation area; it contains the SPECS slave, a radiation-tolerant ACTEL ProASIC FPGA which translates the SPECS protocol into the required I2C, JTAG or local-bus accesses.
- SPECS is a custom protocol made for LHCb to work in a radiation and magnetic-field environment; it is used for the control and readout of ASICs (e.g. preamplifiers) and sensors (e.g. temperature).
- For the infrastructure the ELMB (CAN bus) is used.
- Long-distance I2C and analogue data run over twisted pairs of ~5 m; optical fibres and the SPECS link cover ~100 m (4 unidirectional lines).

SPECS – Serial Protocol for the Experiment Control System
- 10 Mbit/s serial link for the configuration of remote electronics.
- Single-master, multi-slave (up to 32) bus: simple, fast and cheap.
- Serial bus with two signals in each direction (clock and data both ways); BLVDS on shielded twisted pair (8-pin RJ45, Cat 5 Ethernet cable).
- I2C-like protocol without acknowledgements; the slave can send an interrupt and the master repeats the transfer.
- The 10 MHz clock is only active during transfers; frames carry a variable number of words (<256) of 10 bits each.

SPECS master
- Standard 32-bit/33 MHz PCI card which can talk to up to 32 SPECS slaves; the multi-master board drives 4 different buses.
- Inserted in the same PC on which the SPECS server runs; a generic server on that PC communicates with the ECS (DIM) client.

DIM – Distributed Information Management System
- Portable, lightweight communication layer used to interface the hardware from the ECS.
- Robustness: if a server crashes it can simply republish its services on the DNS node after restarting.
- Portability: clients can be installed on any machine by just specifying the DNS node; there is no need to take care of connectivity.
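As an illustration of this scheme, below is a minimal sketch of a generic DIM server such as the one running on a CCPC, written against the standard DIM C++ classes (dis.hxx). The service and command names, the board-access stubs and the refresh interval are illustrative assumptions, not the actual LHCb server code.

```cpp
// Minimal sketch of a generic DIM server as it might run on a CCPC.
// Service/command names and the register-access stubs are hypothetical.
#include <dis.hxx>    // DIM server classes: DimService, DimCommand, DimServer
#include <unistd.h>   // sleep()

// Stand-ins for the real low-level board access (I2C/JTAG/local bus).
static int  readBoardRegister()           { return 0x1234; }
static void writeBoardRegister(int /*v*/) { /* bus access would go here */ }

// A DIM command: the ECS client sends the new register value as an integer.
class RegWriteCommand : public DimCommand {
public:
  RegWriteCommand() : DimCommand("CCPC_TELL1_01/REG_X/Write", "I") {}
  void commandHandler() override { writeBoardRegister(getInt()); }
};

int main() {
  int regValue = readBoardRegister();
  DimService readback("CCPC_TELL1_01/REG_X/Read", regValue); // published service
  RegWriteCommand writeCmd;                                  // accepted command
  DimServer::start("CCPC_TELL1_01");  // registers with the DNS node (DIM_DNS_NODE)

  for (;;) {          // periodically refresh the published readback value
    sleep(5);
    regValue = readBoardRegister();
    readback.updateService();
  }
}
```

A DIM client on the ECS side (the PVSS00dim manager described below) would subscribe to CCPC_TELL1_01/REG_X/Read and send write requests to CCPC_TELL1_01/REG_X/Write, locating both through the DNS node.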
SPECS slave
- Designed as portable Verilog code, implemented in an ACTEL APA150 FPGA (tested up to 40 krad).
- Immune to SEU and SEL thanks to triple voting and one-hot state encoding.

SPECS mezzanine board
- Houses the SPECS slave.
- Provides 16 long-distance JTAG chains (4 signals each) and 15 long-distance I2C buses with selectable frequency.
- Local I2C bus and 16-bit parallel bus (8-bit address range).
- DCU chip with 6 ADC channels (12 bit) and 1 temperature sensor; 32 configurable I/O lines.

TELL1 – FPGA-based readout board
- ADC conversion or optical receivers.
- Synchronisation, reordering, pedestal subtraction, common-mode subtraction and zero suppression in the preprocessing FPGAs (PP).
- MEP packing and IP framing in the SyncLink FPGA (SL); Gigabit Ethernet link to the DAQ.
- Separate ECS interface through the CCPC.

CCPC – credit-card sized PC
- 4" SM520PCX module from Digital-Logic with an i486-compatible microcontroller (AMD Elan SC520) running a Linux kernel at 133 MHz.
- Reads from the local bus at 20 MB/s; the file system is shared over the network from a server.
- Access to the board via a glue card through a PLX PCI9030 bridge: parallel bus (8/16/32 bit), 3 fast JTAG chains (2 MHz) to load firmware, 4 I2C lines and 9 GPIO lines.
- Low-level libraries available under SLC4.

DIM servers and clients
- A DIM server running on the CCPC or on the SPECS master PC publishes its services to the DIM Name Server (DNS), from where the client (the ECS) can subscribe to them.
- Data exchange is then peer-to-peer from server to client: the client sends commands (write/read) and the server updates the services (data/status).

PVSS architecture
- PVSS comprises a number of managers (event manager, database manager with the PVSS database, control managers, API managers, user interfaces/GUIs, distribution manager and driver managers), organised in a user-interface, processing, communication-and-memory and driver layer.
- The managers may run on independent machines and communicate with the event manager via a PVSS-internal protocol on top of TCP/IP (distributed system).
- A DIM driver (PVSS00dim) is implemented which allows PVSS to communicate with the hardware over Ethernet.
(Figure: deployment with a support PC running the DNS node, the SPECS master PC running the SPECS server and embedded controllers running the CCPC server, all connected over Ethernet.)

Representation of the hardware inside the Experiment Control System

Finite state machine modelling
- The entire experiment is modelled with finite state machines; to model it at an expert-system level, states and actions have to be defined.
- From the top node (run control) commands are propagated downwards through a hierarchical tree; status and alarms are propagated upwards to the run control.
- The final branch of the tree is always a device unit acting on the hardware (connected to a datapoint).
- Additional control units can be introduced to give a better logical structure (domains).

PVSS datapoints
- Each type of hardware is represented as a PVSS datapoint type, from which the various instances (datapoints) are derived.
- A datapoint is a complex structure connected to the internal PVSS database; if the data inside a datapoint changes, a callback function or an alert can be launched.
- Registers can be organised as substructures to mimic the organisation of the hardware; each register consists of several datapoint elements holding its settings or used to launch DIM commands and receive DIM services.

FwHw tool
- A GUI tool was developed which eases the creation of hardware types; scripts using the framework functions can be generated automatically.
- The tool takes care of connecting the registers to the DIM driver; the abstraction hides the diversity and complexity of the various hardware and bus types.
- Registers can be subscribed: their settings (address, etc.) are sent to the server, which stores them in a list, and DIM services and commands are created with unique register names.
- Write and read functions are called with just the register name; the server takes care of treating the access in the appropriate way according to the register type (I2C, local bus, JTAG, etc.), as sketched below.
(Figure: DIM client issuing write/read(regname) to the DIM server through command/service pairs, e.g. Ccpc/reg/cmnd and Ccpc/reg/srvc.)
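The register handling described above can be pictured with a short sketch: the server keeps a list of the subscribed registers with their settings and dispatches every named write or read to the matching bus. This is a simplified illustration with hypothetical low-level accessors; the real servers use the CCPC and SPECS libraries and expose each register through DIM command/service pairs as in the figure above.

```cpp
// Simplified bookkeeping of subscribed registers, as done by the generic
// server: settings are stored under a unique register name and every write
// is dispatched to the matching bus. The bus accessors are stubs.
#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>

enum class BusType { I2C, LocalBus, JTAG };

struct RegisterInfo {
  BusType  bus;      // interface on which the register is reachable
  uint32_t address;  // address on that bus
  uint32_t width;    // width in bytes
};

// Hypothetical stand-ins for the real CCPC/SPECS library calls.
static void i2cWrite(uint32_t, uint32_t, uint32_t)  {}
static void lbusWrite(uint32_t, uint32_t, uint32_t) {}
static void jtagWrite(uint32_t, uint32_t, uint32_t) {}

class RegisterServer {
public:
  // Subscription from the ECS: the register settings are stored in a list;
  // the real server would also create the DIM command/service pair here.
  void subscribe(const std::string& name, const RegisterInfo& info) {
    registers_[name] = info;
  }

  // Write by register name only; the stored bus type decides the access path.
  void write(const std::string& name, uint32_t value) {
    auto it = registers_.find(name);
    if (it == registers_.end()) throw std::runtime_error("unknown register: " + name);
    const RegisterInfo& r = it->second;
    switch (r.bus) {
      case BusType::I2C:      i2cWrite(r.address, r.width, value);  break;
      case BusType::LocalBus: lbusWrite(r.address, r.width, value); break;
      case BusType::JTAG:     jtagWrite(r.address, r.width, value); break;
    }
  }

private:
  std::map<std::string, RegisterInfo> registers_;
};

int main() {
  RegisterServer server;
  server.subscribe("PedestalThreshold", {BusType::I2C, 0x40, 1});  // illustrative register
  server.write("PedestalThreshold", 0x12);
}
```

Keeping the bus-specific details inside the server is what allows the ECS side to work with register names only.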
Recipes
- Sets of predefined register settings (recipes) can be stored in a local cache or in the database.
- At start-up these recipes are loaded to configure the experiment; the recipe type can vary according to the run type.

(Figure: LHCb control units, example of the hierarchical tree for the Inner Tracker: DCS, HV, DAQI and DAQ domains split per station (IT_..._ST1..ST3), further into sub-units such as IT_HV_ST2_BOXA..BOXU or IT_DCS_ST2_LV, _COOLING, _TEMP, _GAS, _HUM, ending in device units like IT_DAQ_ST2_TELL1 and IT_DAQ_ST2_FE.)

Implementation
- For each domain the transitions from one state to another are defined, with the possibility to auto-recover in case of error.
- A change in the hardware (datapoint) can trigger a state transition of a device unit.
- For a control unit, rules define its reaction when its children make a state transition.
- Commands can cause state transitions and act on the datapoint of a device unit, which in turn triggers a callback function (a simplified device-unit state machine is sketched after this part).

User interface
- Each device unit offers a set of panels to configure and interact with the hardware (datapoints).
- The panel of a control unit, from which commands can be launched, shows its own state and the states of its children.

Partitioning
- To allow for more flexibility, sub-parts of the tree can be excluded: either their state is ignored or commands to them are not allowed.
- This helps to commission sub-detectors or to ignore faulty hardware.
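To make the device-unit behaviour concrete, here is a much simplified sketch of such a state machine. In the real system this logic lives in the FSM toolkit of the JCOP framework on top of PVSS; the states, commands and the writeRecipeToHardware() helper below are illustrative only.

```cpp
// Toy illustration of a device unit: a command triggers a state transition
// and acts on the hardware; an error state can be recovered automatically.
#include <iostream>

enum class State { NOT_READY, READY, RUNNING, ERROR };

class DeviceUnit {
public:
  State state() const { return state_; }

  // Commands propagated down from the parent control unit.
  void configure() {                       // e.g. load a recipe into the registers
    state_ = writeRecipeToHardware() ? State::READY : State::ERROR;
    if (state_ == State::ERROR) autoRecover();
  }
  void start() { if (state_ == State::READY)   state_ = State::RUNNING; }
  void stop()  { if (state_ == State::RUNNING) state_ = State::READY; }

private:
  // Hypothetical hardware action behind the transition.
  bool writeRecipeToHardware() { return true; }

  // Automatic recovery attempt on error, as foreseen for each domain.
  void autoRecover() {
    if (writeRecipeToHardware()) state_ = State::READY;
  }

  State state_ = State::NOT_READY;
};

int main() {
  DeviceUnit tell1;
  tell1.configure();   // NOT_READY -> READY (recipe loaded)
  tell1.start();       // READY -> RUNNING
  std::cout << "running: " << (tell1.state() == State::RUNNING) << "\n";
}
```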