The LHCb Run Control System

The LHCb Run Control System: An Integrated and Homogeneous Control System
Clara Gaspar, May 2010

The Experiment Control System

The ECS is in charge of the Control and Monitoring of all parts of the experiment:
- Data Acquisition System: FE Electronics, Readout, Event Building, etc.
- Timing and Trigger Systems: Timing & Fast Control (TFC), L0 Trigger, High Level Trigger Farm
- Detector Control System: Gas, High Voltages, Low Voltages, Temperatures, ...
- Experimental Infrastructure: Cooling, Ventilation, Electricity distribution, ...
- Interaction with the outside world: Magnet, Accelerator system, Safety system, etc.

[Diagram: the ECS overseeing the full data path (Detector Channels, L0, Front End Electronics, Readout Network, HLT Farm, Storage, Monitoring Farm), the DCS devices (HV, LV, GAS, Cooling, etc.) and the External Systems (LHC, Technical Services, Safety, etc.)]

Some Requirements

- Large number of devices/IO channels
  -> Need for Distributed Hierarchical Control:
     - De-composition in systems, sub-systems, ..., devices
     - Local decision capabilities in sub-systems
- Large number of independent teams and very different operation modes
  -> Need for Partitioning Capabilities (concurrent usage)
- High complexity & non-expert operators
  -> Need for Full Automation of standard procedures and of error recovery procedures
  -> Need for Intuitive User Interfaces

Design Steps

In order to achieve an integrated system, LHCb:
- Promoted HW Standardization, so that common components could be re-used. E.g. mainly two control interfaces to all LHCb electronics:
  - Credit Card sized PCs (CCPC) for non-radiation zones
  - A serial protocol (SPECS) for electronics in radiation areas
- Defined an Architecture that could fit all areas and all aspects of the monitoring and control of the full experiment
- Provided a Framework: an integrated collection of guidelines, tools and components that allowed each sub-system to be developed coherently, in view of its integration in the complete system

Generic SW Architecture

[Diagram: a hierarchy of Control Units rooted at ECS, with children such as INFR., TFC, LHC, HLT and the per-sub-detector DCS and DAQ trees (e.g. SubDet1 DCS -> SubDet1 LV, SubDet1 TEMP, SubDet1 GAS, ...; SubDet1 DAQ -> SubDet1 FEE, SubDet1 RO, ...), terminating in Device Units (LV Dev1..DevN, FEE Dev1..DevN). Legend: Control Unit, Device Unit. Commands flow down the tree; Status & Alarms flow up. A minimal code sketch of this pattern follows.]
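
The pattern in the diagram (Control Units that "expand" commands downwards and summarize status upwards, Device Units at the leaves) can be illustrated with a minimal C++ sketch. This is an illustration of the pattern only, not LHCb code; the class and state names merely mirror the legend above.

    #include <algorithm>
    #include <memory>
    #include <string>
    #include <vector>

    // Common interface of every node: commands flow down, states flow up.
    struct Node {
        virtual ~Node() = default;
        virtual void command(const std::string& cmd) = 0;
        virtual std::string state() const = 0;
    };

    // Leaf of the tree: drives one real device.
    struct DeviceUnit : Node {
        std::string fsmState = "NOT_READY";
        void command(const std::string& cmd) override {
            if (cmd == "Configure") fsmState = "READY";  // hardware access would go here
        }
        std::string state() const override { return fsmState; }
    };

    // Internal node: owns sub-systems and summarizes them.
    struct ControlUnit : Node {
        std::vector<std::unique_ptr<Node>> children;
        void command(const std::string& cmd) override {   // "expand" to lower levels
            for (auto& c : children) c->command(cmd);
        }
        std::string state() const override {              // summarize for upper levels
            for (auto& c : children)
                if (c->state() == "ERROR") return "ERROR";
            bool allReady = std::all_of(children.begin(), children.end(),
                [](const std::unique_ptr<Node>& c) { return c->state() == "READY"; });
            return allReady ? "READY" : "NOT_READY";
        }
    };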

The Control Framework

The JCOP(*) Framework is based on:
- A SCADA system, PVSS II, used for:
  - Device Description (run-time database)
  - Device Access (OPC, Profibus, drivers)
  - Alarm Handling (generation, filtering, masking, etc.)
  - Archiving, Logging, Scripting, Trending
  - User Interface Builder
  - Alarm Display, Access Control, etc.
- SMI++, providing:
  - Abstract behaviour modelling (Finite State Machines)
  - Automation & Error Recovery (rule-based system)

In this scheme, Device Units are handled by PVSS and Control Units by SMI++.

(*) The Joint COntrols Project, between the 4 LHC experiments and the CERN Controls Group.

Device Units

Device Units provide access to "real" devices. The Framework provides (among others):
- "Plug and play" modules for commonly used equipment, for example:
  - CAEN or Wiener power supplies (via OPC)
  - LHCb CCPC and SPECS based electronics (via DIM)
- A protocol (DIM) for interfacing "home made" devices (see the sketch below), for example:
  - Hardware devices, like a calibration source
  - Software devices, like the Trigger processes (based on LHCb's offline framework, GAUDI)

Each device is modelled as a Finite State Machine.
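
As a concrete illustration of the DIM route for "home made" devices, the sketch below publishes a device's FSM state as a DIM service and accepts text commands, using the public DIM C++ server classes (dis.hxx). The service/command names and the two-state behaviour are invented for the example and are not LHCb's actual interfaces.

    #include <dis.hxx>     // DIM server API: DimService, DimCommand, DimServer
    #include <cstring>
    #include <unistd.h>

    static char currentState[32] = "NOT_READY";
    static DimService* stateService = nullptr;

    // Receives text commands (e.g. "Configure") sent by the control tree.
    class DeviceCommand : public DimCommand {
    public:
        DeviceCommand() : DimCommand("CALIBSRC/Command", "C") {}
        void commandHandler() override {
            const char* cmd = getString();
            if (std::strcmp(cmd, "Configure") == 0)
                std::strcpy(currentState, "READY");
            else if (std::strcmp(cmd, "Reset") == 0)
                std::strcpy(currentState, "NOT_READY");
            stateService->updateService(currentState);  // push the new state to clients
        }
    };

    int main() {
        DimService state("CALIBSRC/State", currentState);  // publish the FSM state
        stateService = &state;
        DeviceCommand handler;
        DimServer::start("CALIBSRC");   // register with the DIM name server
        for (;;) pause();               // serve forever
    }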

Hierarchical Control

Each Control Unit:
- Is defined as one or more Finite State Machines
- Can implement rules based on its children's states

In general it is able to:
- Summarize information (for the levels above)
- "Expand" actions (to the levels below)
- Implement specific behaviour & take local decisions:
  - Sequence & automate operations
  - Recover errors
- Include/Exclude children (i.e. partitioning); excluded nodes can run in stand-alone mode (see the sketch below)
- Interface users: present information and receive commands

(Example branch: DCS -> Tracker DCS, ..., Muon DCS; Muon DCS -> Muon LV, Muon GAS.)
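
To make the rule-based behaviour and Include/Exclude concrete, this extends the ControlUnit sketch from the architecture slide: an excluded child stops receiving commands and stops contributing to the summarized state, which is exactly what lets it run stand-alone. Again a sketch of the idea, not the SMI++ implementation.

    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    // Builds on the Node/ControlUnit sketch shown earlier.
    struct PartitionableCU : ControlUnit {
        std::map<const Node*, bool> included;   // partitioning flag per child

        void include(const Node* c) { included[c] = true;  }
        void exclude(const Node* c) { included[c] = false; }  // child runs stand-alone

        void command(const std::string& cmd) override {   // only included children
            for (auto& c : children) {
                auto it = included.find(c.get());
                if (it == included.end() || it->second) c->command(cmd);
            }
        }
        std::string state() const override {
            // Rule in the spirit of the slide: any included child in ERROR => ERROR;
            // all included children READY => READY; otherwise NOT_READY.
            bool allReady = true;
            for (auto& c : children) {
                auto it = included.find(c.get());
                if (it != included.end() && !it->second) continue;  // skip excluded
                const std::string s = c->state();
                if (s == "ERROR") return "ERROR";
                if (s != "READY") allReady = false;
            }
            return allReady ? "READY" : "NOT_READY";
        }
    };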

Control Unit Run-Time

- Dynamically generated operation panels (uniform look and feel)
- Configurable user panels and logos
- "Embedded" standard partitioning rules: Take, Include, Exclude, etc.

Operation Domains

Three domains have been defined: DCS, HV and DAQ.
- DCS: for equipment whose operation and stability are normally related to a complete running period. Example: GAS, Cooling, Low Voltages, etc.
- HV: for equipment whose operation is normally related to the machine (LHC) state. Example: High Voltages.
- DAQ: for equipment whose operation is related to a RUN. Example: readout electronics, High Level Trigger processes, etc.

FSM Templates

All devices and sub-systems have been implemented using one of three standard templates:
- DCS domain: states OFF, NOT_READY, READY and ERROR; actions Switch_ON, Switch_OFF and Recover.
- HV domain: states OFF, NOT_READY, STANDBY1, STANDBY2, READY and ERROR, with transitional states RAMPING_STANDBY1, RAMPING_STANDBY2 and RAMPING_READY; actions Go_STANDBY1, Go_STANDBY2, Go_READY and Recover.
- DAQ domain: states NOT_READY, CONFIGURING, READY, RUNNING, ERROR and UNKNOWN; actions Configure, Start, Stop, Reset and Recover.
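
For instance, the DAQ-domain template can be written down as a small transition table. The states and actions are exactly those on the slide; the table-driven encoding itself is just one possible sketch (the real templates are SMI++ classes), and the "configured" label for the automatic CONFIGURING -> READY transition is invented for illustration.

    #include <map>
    #include <string>
    #include <utility>

    enum class DaqState { NOT_READY, CONFIGURING, READY, RUNNING, ERROR, UNKNOWN };

    // (state, action) -> next state, per the DAQ-domain template.
    const std::map<std::pair<DaqState, std::string>, DaqState> daqFsm = {
        {{DaqState::NOT_READY,   "Configure"},  DaqState::CONFIGURING},
        {{DaqState::CONFIGURING, "configured"}, DaqState::READY},   // internal completion
        {{DaqState::READY,       "Start"},      DaqState::RUNNING},
        {{DaqState::RUNNING,     "Stop"},       DaqState::READY},
        {{DaqState::READY,       "Reset"},      DaqState::NOT_READY},
        {{DaqState::ERROR,       "Recover"},    DaqState::NOT_READY},
    };

    DaqState apply(DaqState s, const std::string& action) {
        auto it = daqFsm.find({s, action});
        return it != daqFsm.end() ? it->second : s;   // ignore actions that don't apply
    }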

ECS: Run Control

Size of the control tree:
- Distributed over ~150 PCs: ~100 Linux (50 for the HLT) and ~50 Windows
- >2000 Control Units
- >50000 Device Units

The Run Control can be seen as the root node of the tree (ECS, with children such as HV, DCS, TFC, DAQ, HLT and LHC, and the per-sub-detector DCS and DAQ branches). If the tree is partitioned, there can be several Run Controls.

Partitioning

Creating a partition (ECS domain): Allocate = get a "slice" of:
- Timing & Fast Control (TFC)
- High Level Trigger Farm (HLT)
- Storage System
- Monitoring Farm

ECS-domain FSM: NOT_ALLOCATED -(Allocate)-> ALLOCATING -> NOT_READY -(Configure)-> CONFIGURING -> READY -(StartRun)-> ACTIVE -(StartTrigger)-> RUNNING, with StopTrigger, StopRun and Reset stepping back down, Deallocate returning to NOT_ALLOCATED, and Recover handling the ERROR state.
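
A sketch of what Allocate means in practice: take a free slice of each shared resource, or fail and stay NOT_ALLOCATED. The resource names and pool sizes below are invented for illustration; the real allocation is done by the ECS over the actual TFC, HLT, storage and monitoring inventories.

    #include <optional>
    #include <set>
    #include <string>
    #include <vector>

    struct Slice {                       // what a partition gets when allocated
        std::string tfc;                 // one Timing & Fast Control master
        std::vector<std::string> hltSubfarms;
    };

    class Allocator {
        std::set<std::string> freeTfc{"TFC1", "TFC2"};                 // illustrative
        std::set<std::string> freeHlt{"HLTA01", "HLTA02", "HLTA03"};   // illustrative
    public:
        std::optional<Slice> allocate(std::size_t nSubfarms) {
            if (freeTfc.empty() || freeHlt.size() < nSubfarms)
                return std::nullopt;     // allocation fails: stay NOT_ALLOCATED
            Slice s;
            s.tfc = *freeTfc.begin();
            freeTfc.erase(freeTfc.begin());
            while (s.hltSubfarms.size() < nSubfarms) {
                s.hltSubfarms.push_back(*freeHlt.begin());
                freeHlt.erase(freeHlt.begin());
            }
            return s;
        }
        void deallocate(const Slice& s) {            // return the slice to the pools
            freeTfc.insert(s.tfc);
            freeHlt.insert(s.hltSubfarms.begin(), s.hltSubfarms.end());
        }
    };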

Run Control Matrix

An Activity is defined as a matrix of Domain x Sub-Detector settings; it is used for configuring all sub-systems.
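
The matrix can be thought of as a lookup from an Activity to, per sub-detector, whether it participates and which configuration it should load. The activity and recipe names below are hypothetical, for illustration only.

    #include <map>
    #include <set>
    #include <string>

    struct ActivityConfig {
        std::set<std::string> participants;   // sub-detectors included in this activity
        std::string recipe;                   // configuration recipe to load
    };

    // Activity -> configuration: one row of the Run Control matrix per activity.
    const std::map<std::string, ActivityConfig> activities = {
        {"PHYSICS",   {{"VELO", "RICH", "CALO", "MUON"}, "PHYSICS_v1"}},
        {"CALO_SCAN", {{"CALO"},                         "LED_SCAN_v2"}},
    };

    bool participates(const std::string& activity, const std::string& subDetector) {
        auto it = activities.find(activity);
        return it != activities.end() && it->second.participants.count(subDetector) > 0;
    }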

Sub-Detector Run Control

[Screenshot: a sub-detector Run Control panel, here operating a stand-alone "Scan" run.]

LHCb Operations

Two operators on shift: a Data Manager and a Shift Leader.

The Shift Leader has two views of the system:
- The Run Control
- "Big Brother", which manages the LHC <-> LHCb dependencies via a Sub-Detector Voltage x LHC State table
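
A sketch of the Sub-Detector Voltage x LHC State table that Big Brother evaluates: for each LHC machine state, a target state in the HV domain. The HV states are those of the FSM template above; the particular mapping shown is an assumption for illustration, not the real table.

    #include <map>
    #include <string>

    // LHC machine state -> target HV-domain state (illustrative mapping).
    const std::map<std::string, std::string> hvTargetForLhc = {
        {"INJECTION", "STANDBY1"},   // reduced voltage while beams are injected
        {"RAMP",      "STANDBY2"},
        {"PHYSICS",   "READY"},      // stable beams: full voltage
        {"BEAM DUMP", "STANDBY1"},
    };

    std::string hvGoal(const std::string& lhcState) {
        auto it = hvTargetForLhc.find(lhcState);
        return it != hvTargetForLhc.end() ? it->second : "STANDBY1";  // default to safe
    }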

Automation

Automation exists at several levels: Big Brother, the HLT control, and the LHCb Autopilot. It is always done by the FSM (not by the panels).
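
In sketch form, an autopilot amounts to driving the top-level FSM through the ECS-domain sequence from the partitioning slide. sendCommand is a hypothetical stand-in for dispatching an FSM action; the point of the slide is that the FSM itself does the work, the automation only feeds it actions.

    #include <string>

    // Drives the ECS-domain FSM: NOT_ALLOCATED -> ... -> RUNNING.
    void autopilot(void (*sendCommand)(const std::string&)) {
        for (const char* action : {"Allocate", "Configure", "StartRun", "StartTrigger"})
            sendCommand(action);   // the FSM sequences, waits and recovers; we only nudge
    }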

Conclusions

LHCb has designed and implemented a coherent and homogeneous control system. The Run Control makes it possible to:
- Configure, monitor and operate the full experiment
- Run any combination of sub-detectors in parallel, each in stand-alone mode
- Automate operations completely (when we understand the machine)

Some of its main features (sequencing, automation, error recovery, partitioning) come from the use of SMI++, integrated with PVSS. The system is used daily for physics data taking and for other global or sub-detector activities.