Tools for the Automation of large distributed control systems


PVSS & SMI++: Tools for the Automation of Large Distributed Control Systems

Outline
- Some requirements of large control systems (for physics experiments)
- Control system architecture
- The control framework
  - SCADA: PVSS II
  - FSM toolkit: SMI++
- Some important features

Some Requirements…
- Large number of devices / IO channels
- Need for:
  - Parallel and distributed data acquisition & monitoring
  - Hierarchical control: summarize information up, distribute commands down
  - Decentralized decision making
[Diagram: control hierarchy with the DCS at the top, sub-detector nodes DetDcs1...DetDcsN, subsystems SubSys1...SubSysn, and devices Dev1...Devn at the bottom]

Some Requirements…
- Large number of independent teams
- Very different operation modes
- Need for:
  - Partitioning: the capability of operating parts of the system independently and concurrently
[Diagram: the DCS hierarchy again (DetDcs1...DetDcsN, SubSys1, SubSys2, Dev1...Dev3, down to the devices, HW or SW), with a sub-tree operated as an independent partition]

Some Requirements…
- High complexity
- Non-expert operators
- Need for:
  - Full automation of standard procedures and of error-recovery procedures
  - Intuitive user interfaces

Control System Architecture
[Diagram: a tree of Control Units and Device Units. The ECS sits at the top; below it are control units such as LHC, T.S., DCS, DAQ, DSS and GAS, then sub-detector nodes (DetDcs1...DetDcsN, DetDaq1) and subsystems (SubSys1...SubSysN). Device Units (Dev1...DevN) sit at the bottom and interface to the devices (HW or SW). Status & alarms flow up the tree; commands flow down.]

Control Units
Each node is able to:
- Summarize information (for the levels above)
- "Expand" actions (to the levels below)
- Implement specific behaviour & take local decisions
  - Sequence & automate operations
  - Recover errors
- Include/exclude children (i.e. partitioning)
  - Excluded nodes can run in stand-alone mode
- User interfacing: present information and receive commands
[Diagram: example tree: DCS above Tracker and Muon; Tracker controls HV and Temp; Muon controls HV and GAS]

The Control Framework
The JCOP Framework* is based on:
- The SCADA system PVSS II, for:
  - Device description (run-time database)
  - Device access (OPC, Profibus, drivers)
  - Alarm handling (generation, filtering, masking, etc.)
  - Archiving, logging, scripting, trending
  - User interface builder
  - Alarm display, access control, etc.
- SMI++, providing:
  - Abstract behaviour modeling (finite state machines)
  - Automation & error recovery (rule-based system)
*Please see talk S11-1
[Diagram: Control Units and Device Units in the hierarchy]

SMI++ Method
- Classes and objects: allow the decomposition of a complex system into smaller, manageable entities
- Finite state machines: allow the modeling of the behaviour of each entity, and of the interactions between entities, in terms of STATES and ACTIONS
- Rule-based reasoning: allows automation and error recovery

SMI++ Method (Cont.)
SMI++ objects can be:
- Abstract (e.g. a Run or the DCS)
- Concrete (e.g. a power supply or a temperature sensor)
Concrete objects are implemented externally, either in C, in C++, or in PVSS (ctrl scripts).
Logically related objects can be grouped inside "SMI domains", each representing a given sub-system.
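As a small illustration, a concrete class is declared in SML with the /associated qualifier, which tells SMI++ that the states and actions of its objects are implemented by an external proxy. This is only a sketch with hypothetical names, following the syntax of published SMI++/SML examples:

   class: PowerSupplyClass /associated
      state: OFF
         action: SWITCH_ON
      state: ON
         action: SWITCH_OFF

   object: PS_1 is_of_class PowerSupplyClass

An abstract class looks the same but omits /associated and carries its full logic (action bodies, rules) in SML, as in the examples that follow.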

SMI++ Run-time Environment
- Device level: proxies
  - C, C++, or PVSS ctrl scripts drive the hardware: deduce the state, handle the commands
- Abstract levels: domains
  - Internal objects implement the logical model
  - Dedicated language (SML)
- User interfaces, for user interaction
[Diagram: SMI domains containing objects, sitting above the proxies that talk to the hardware devices]

SMI++: The Language
SML: the State Management Language
- Finite state logic: objects are described as FSMs; their main attribute is a STATE
- Parallelism: actions can be sent in parallel to several objects; tests on the state of objects can block if the objects are still "transiting"
- Asynchronous rules: actions can be triggered by logical conditions on the state of other objects

SML example
A Device and its controlling Sub-System (sketched below).
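The code of this slide is not in the transcript; the following minimal sketch, reusing the PowerSupplyClass declared earlier and adding a hypothetical sub-system object, shows the typical shape of such an example:

   class: SubSystemClass
      state: NOT_READY
         action: CONFIGURE
            do SWITCH_ON PS_1
            if ( PS_1 in_state ON ) then
               move_to READY
            endif
            move_to ERROR
      state: READY
         action: RESET
            do SWITCH_OFF PS_1
            move_to NOT_READY
      state: ERROR

   object: SUBSYS is_of_class SubSystemClass

The if test implicitly synchronizes with the device: it blocks while PS_1 is still transiting, and move_to terminates the action, so the final move_to ERROR is reached only if the condition fails.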

SML example (many objs)
Devices and Sub-System (sketched below). Objects can be dynamically included in, or excluded from, a Set.
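A sketch of the set-based variant; the objectset declaration and the all_in construct follow the SMI++ documentation, while the names and the inline member list are assumptions:

   objectset: DEVICES {PS_1, PS_2, PS_3}

   class: SubSystemClass
      state: NOT_READY
         action: CONFIGURE
            do SWITCH_ON all_in DEVICES
            if ( all_in DEVICES in_state ON ) then
               move_to READY
            endif
            move_to ERROR
      state: READY
      state: ERROR

Since membership is dynamic (an excluded device can simply be removed from DEVICES at run time), the action and the test always apply to the current members, which is what makes partitioning cheap.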

SML example (automation)
An external Device and a Sub-System reacting to it (sketched below).
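A sketch of the automation pattern: an external object (here the LHC, whose PHYSICS state also appears on the later slides) triggers actions in the sub-system through when rules. The not_in_state test and all names are assumptions:

   class: LHCClass /associated
      state: PHYSICS
      state: NO_BEAM

   object: LHC is_of_class LHCClass

   class: SubSystemClass
      state: NOT_READY
         when ( LHC in_state PHYSICS ) do GET_READY
         action: GET_READY
            do SWITCH_ON all_in DEVICES
            move_to READY
      state: READY
         when ( LHC not_in_state PHYSICS ) do SWITCH_OFF
         action: SWITCH_OFF
            do SWITCH_OFF all_in DEVICES
            move_to NOT_READY

The when clauses are the asynchronous rules of the previous slide: nothing polls the LHC; the sub-system reacts whenever the LHC object changes state.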

SMI++ Run-time Tools
- Device level: proxies (C, C++, PVSS ctrl scripts)
  - Use a run-time library, smirtl, to communicate with their domain
- Abstract levels: domains
  - A C++ engine, smiSM, reads the translated SML code and instantiates the objects
- User interfaces
  - Use a run-time library, smiuirtl, to communicate with the domains
- All tools are available on Windows and Unix (Linux)
- All communications are dynamically (re)established
[Diagram: SMI domains, proxies and hardware devices, as before]

SMI++ History
- 1989: First implemented for DELPHI, in ADA
  - DELPHI used it throughout the experiment
  - A top-level domain, Big-Brother, automatically piloted the experiment
- 1997: Rewritten in C++
- 1999: Used by BaBar for the run control and high-level automation
- 2002: Integration in PVSS for use at the LHC

PVSS/SMI++ Integration
Graphical configuration of SMI++ using PVSS.

Building Hierarchies
- Hierarchy of CUs, distributed over several machines
  - "&" means a reference to a CU in another system
- Editor mode: add / remove / change settings
- Navigator mode: start / stop / view

Control Unit Run-Time
- Dynamically generated operation panels (uniform look and feel)
- Configurable user panels

Full Experiment Control
ECS rule: when the LHC is in PHYSICS:
  -> GET_READY DCS
  -> GET_READY DAQ
  -> START_RUN DAQ
[Diagram: the full tree: ECS above LHC, DCS and DAQ; under DCS the Vertex, Tracker and Muon nodes with their HV, Temp and GAS subsystems; under DAQ the Vertex, Tracker and Muon nodes with their FE and RU units]
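In SML, the rule on the slide could look roughly like the sketch below, assuming DCS and DAQ are the top objects of their domains; the compound if condition and the intermediate READY states are assumptions:

   class: ECSClass
      state: NOT_READY
         when ( LHC in_state PHYSICS ) do START_PHYSICS
         action: START_PHYSICS
            do GET_READY DCS
            do GET_READY DAQ
            if ( DCS in_state READY ) and
               ( DAQ in_state READY ) then
               do START_RUN DAQ
               move_to RUNNING
            endif
            move_to ERROR
      state: RUNNING
      state: ERROR

The two GET_READY commands are dispatched in parallel, and the if blocks until both sub-trees have settled, illustrating the parallelism described earlier.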

Parallel Hierarchies
Safety rule: when GAS is in ERROR -> SWITCH_OFF the HVs
[Diagram: the same tree with an additional Safety hierarchy in parallel with the ECS, referencing the GAS and HV nodes of the sub-detectors]
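Such a safety rule can live in a separate SMI domain that merely references objects of the main tree; a sketch with hypothetical names:

   class: MuonSafetyClass
      state: OK
         when ( MUON_GAS in_state ERROR ) do PROTECT
         action: PROTECT
            do SWITCH_OFF MUON_HV
            move_to PROTECTED
      state: PROTECTED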

Features of PVSS/SMI++
Task separation:
- SMI proxies / PVSS scripts execute only basic actions; no intelligence
- SMI objects implement the logical behaviour
Advantages:
- Change the HW -> change only PVSS
- Change the logical behaviour (sequencing and dependency of actions, etc.) -> change only the SMI rules

Features of PVSS/SMI++
Error recovery mechanism:
- Bottom-up: SMI objects react to changes of their children, in an event-driven, asynchronous fashion
- Distributed: each sub-system recovers its own errors (each team knows how to recover local errors)
- Hierarchical / parallel recovery
- Can provide complete automation, even for very large systems
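As a final sketch, a bottom-up recovery rule of the kind described here, assuming a CHILDREN objectset and the any_in test (both names are hypothetical; any_in/all_in follow the SMI++ documentation):

   class: RecoveringNodeClass
      state: READY
         when ( any_in CHILDREN in_state ERROR ) do RECOVER
         action: RECOVER
            do RESET all_in CHILDREN
            if ( all_in CHILDREN in_state READY ) then
               move_to READY
            endif
            move_to ERROR
      state: ERROR

Every node in the hierarchy carries such rules for its own children, so recovery happens locally, in parallel and event-driven, with no central coordinator.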