ALICE, ATLAS, CMS & LHCb joint workshop on 12.03.2012.

Presentation transcript:

ALICE, ATLAS, CMS & LHCb joint workshop on 12.03.2012

 A lot of subsystems need to be configured for DAQ
  ◦ Front-end electronics
  ◦ Readout Units
  ◦ Trigger
  ◦ Event Builder
  ◦ HLT, …
 DAQ has to run with different configurations for several types of run
  ◦ Calibration runs, technical runs, Cosmics, …
  ◦ Sub-detector stand-alone runs
  ◦ Physics
 There needs to be an easy, reliable and fast way to configure the different subsystems according to the different run types

 Each experiment implemented its DAQ system differently
  ◦ Different technologies
     Web based
     PVSS based, …
 Each experiment implemented DAQ configuration differently
  ◦ XML descriptions
  ◦ Named datasets in a DB
  ◦ Object persistent DB

 Architecture summary:
  ◦ Global DAQ configurations are stored under a Name
  ◦ Machine-specific configurations are stored under a Role Name
  ◦ Configurations are stored in a MySQL database
  ◦ Configurations are retrieved via a C SQL API
 Databases
  ◦ MySQL DB
     More static DAQ configuration settings, defined by Name
       Streams
       Timeouts, …
     Configurations for each machine, defined by Role Name (e.g. “LDC-TPC-1”)
 The MySQL DB is populated by the system experts
  ◦ via the Run Control Human Interface
  ◦ via editDb, a tool developed to insert configurations
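As an illustration of this lookup pattern (not the actual ALICE C SQL API), the following Python sketch fetches the parameter set stored for a given Role Name; the table and column names are hypothetical.

# Hypothetical sketch of the ALICE-style lookup: fetch the parameters stored
# for a machine's Role Name within a named DAQ configuration. Table and column
# names are invented; the real system uses a C SQL API against MySQL.
import mysql.connector

def load_role_config(role_name, config_name="DEFAULT"):
    conn = mysql.connector.connect(host="daqdb", user="reader",
                                   password="***", database="daq_config")
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT param_name, param_value FROM machine_config "
            "WHERE config_name = %s AND role_name = %s",
            (config_name, role_name))
        return dict(cur.fetchall())   # e.g. {"timeout_s": "30", ...}
    finally:
        conn.close()

# e.g. the settings for the TPC local data concentrator number 1
print(load_role_config("LDC-TPC-1"))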

 The ALICE Run Control is based on a central Run Control and distributed Run Control Servers (RCServers)
  ◦ The RCServers are responsible for starting and configuring the DAQ processes
  ◦ An RCServer runs on each machine and has a defined Role Name
 Upon start of the Run Control, the DAQ configuration stored under the name “DEFAULT” is loaded from the database
 Another configuration can be selected and loaded from the database by configuration Name
 Before starting a run, other parameter settings can be changed; the new parameters are not saved automatically in the DB
 The configuration is retrieved from the database by the RCServers
  ◦ The configuration is loaded into shared memory, which is accessed by the processes started by the RCServers
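A minimal sketch of the RCServer idea, assuming a JSON-serialisable parameter set: the selected configuration is published in a shared-memory segment from which the locally started DAQ processes can read it. This only illustrates the mechanism; the real ALICE implementation is not Python.

# Sketch only: publish a loaded configuration in shared memory so that the
# DAQ processes started on the same machine can attach to it and read it.
import json
from multiprocessing import shared_memory

def publish_config(config, segment_name="daq_config"):
    payload = json.dumps(config).encode()
    shm = shared_memory.SharedMemory(name=segment_name, create=True,
                                     size=len(payload))
    shm.buf[:len(payload)] = payload      # DAQ processes attach and decode this
    return shm

# A freshly started RCServer would begin with the configuration named "DEFAULT"
default_cfg = {"name": "DEFAULT", "role": "LDC-TPC-1", "timeout_s": 30}
segment = publish_config(default_cfg)
segment.close(); segment.unlink()         # cleanup for this example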

 The RCServer processes communicate with the central Run Control via SMI++
  ◦ Certain configuration parameters are transmitted as SMI++ parameters
 The DAQ processes are also configured via other mechanisms:
  ◦ Local configuration files
  ◦ Access to the MySQL DB
     Configurations in the DB are accessed by Role Name
[Diagram: Run Control, RCServer, shared memory, DAQ process, environment, local files and the MySQL DB]

 Trigger configurations are stored in a separate MySQL DB (the ACT DB)
  ◦ A tool, the ALICE Configuration Tool (ACT), was developed to manage the configurations in this DB
  ◦ At the start of a run, the Trigger system downloads the active configuration
 HLT configuration:
  ◦ From the ECS (Experiment Control System) some parameters are sent to the HLT system
     HLT mode
     List of active links
     List of trigger classes
  ◦ The configuration is then handled by the HLT system itself
 Front-end configuration is done via the DCS
  ◦ Using PVSS

 Architecture summary
  ◦ Based on an object approach
     A configuration is a graph of linked objects
  ◦ Classes and objects are stored as XML files
     XML schema files define the object classes
     XML data files store the database objects
  ◦ The XML information is kept in a protected file repository and archived in a relational database
  ◦ A home-made object database, OKS, is used to access the XML information
 Databases
  ◦ The OKS DB provides the overall DAQ system description for Control, Monitoring and Data flow
  ◦ Trigger, HLT and detector configurations are partially covered by OKS

 The structure of the configuration objects is defined by the database schema
  ◦ A common database schema was agreed with the trigger and detector groups
  ◦ Groups can extend the schema to introduce properties for their own configuration objects
     OKS classes support inheritance with polymorphism
 The OKS database is populated by the relevant experts
  ◦ GUIs are available to create XML schemas and data
  ◦ A tool is available to generate configurations automatically with minimal user input
     Partition Maker (Python based, with a Python interface)

 Access to the OKS database is provided via Remote Database (RDB) servers
  ◦ They provide access to the OKS DB from different clients without a common file system
  ◦ They cache the results of OKS queries
 RDB is built on top of CORBA
 To address scalability requirements the RDB servers are set up as a balanced tree
  [Diagram: balanced tree of RDB servers and per-node RDB proxies]
 Current deployment of the RDB servers:
  ◦ XML repository -> RDB Master -> Pool of RDBs -> RDBs running on the racks -> RDB proxies running on the nodes
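The caching behaviour of such a tree can be sketched as follows (a conceptual Python illustration, not the CORBA-based ATLAS implementation): each node answers from its local cache and forwards misses to its parent, so frequent queries never reach the master repository.

# Conceptual sketch of the caching tree: a proxy answers from its cache and
# forwards misses upwards; only the master reads the OKS XML repository.
class RdbNode:
    def __init__(self, name, parent=None, backing_store=None):
        self.name = name
        self.parent = parent          # next level of the balanced tree
        self.backing = backing_store  # only the master has the XML repository
        self.cache = {}

    def query(self, key):
        if key in self.cache:
            return self.cache[key]
        if self.parent is not None:
            value = self.parent.query(key)
        else:
            value = self.backing[key]  # master resolves against the repository
        self.cache[key] = value
        return value

master = RdbNode("rdb-master", backing_store={"TDAQ/partition": "<oks data>"})
rack_rdb = RdbNode("rdb-rack-07", parent=master)
node_proxy = RdbNode("rdb-proxy-pc-042", parent=rack_rdb)
print(node_proxy.query("TDAQ/partition"))   # fills the caches along the path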

 ATLAS applications do not use the OKS or RDB APIs directly
  ◦ This avoids dependencies on the DB implementations
  ◦ Two layers are used instead
 Configs – an abstract interface to work with the DBs and the configuration objects
  ◦ Available as plugins:
     OKS XML files
     OKS relational archives
     RDB server
 Data Access Libraries (DALs) – use configs to map the database schema onto classes in the relevant language and to instantiate their configuration objects from the DB data
  ◦ Data Access Libraries are automatically generated from the OKS schema for C++, Python and Java
[Diagram: OKS XML file repository, RDB and OKS archive (RDBMS) accessed through the config plugins (oksconfig, rdbconfig, roksconfig), the DAL/genconfig layer, the editors (OKS data editor, text editor, Web GUI), the Partition Maker, and the online processes]
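A hedged Python sketch of this two-layer idea: applications see an abstract back-end interface (the role of the config plugins) plus generated data-access classes (the role of the DALs). None of the class names below are the real ATLAS APIs.

# Sketch only: an abstract back end with interchangeable plugins, and a
# DAL-like class that turns raw configuration data into typed objects.
from abc import ABC, abstractmethod

class ConfigBackend(ABC):              # role of the "config" plugin interface
    @abstractmethod
    def get_object(self, class_name, object_id) -> dict: ...

class OksXmlBackend(ConfigBackend):    # stands in for an XML-file plugin
    def __init__(self, data):
        self.data = data
    def get_object(self, class_name, object_id):
        return self.data[(class_name, object_id)]

class Application:                     # a class a DAL would generate from the schema
    def __init__(self, raw):
        self.name = raw["name"]
        self.host = raw["host"]
        self.parameters = raw.get("parameters", [])

backend = OksXmlBackend({("Application", "rc-controller"):
                         {"name": "rc-controller", "host": "pc-tdaq-01"}})
app = Application(backend.get_object("Application", "rc-controller"))
print(app.name, app.host)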

 Only one configuration is loaded at a given time in the OKS DB
 On start, the Run Control reads information from OKS
  ◦ Which applications to start
  ◦ Where they should be started
  ◦ Application parameters
  ◦ Each application loads the appropriate config plugin
 On the configure transition
  ◦ Applications instantiate their configuration objects using the corresponding DAL

 Architecture summary
  ◦ Configurations are stored in a relational schema
     As XML descriptions for the DAQ software processes
  ◦ Configurations are stored in an ORACLE DB
  ◦ The Run Control queries the DB via JDBC
  ◦ XML configurations are passed to the online software processes via SOAP
 Databases
  ◦ Resource Service DB
     Stores all the information for the dynamic configuration of the hosts
     DAQ configurations are based on the templates from the other databases
     Stores all the parameterized XML descriptions used to configure the XDAQ executives
  ◦ Configuration Template DB
     Stores slowly changing information templates (e.g. the composition of Functional Units)
  ◦ DAQ Hardware and Configuration DB
     Stores frequently changed information
     High level configuration layout (e.g. location, multiplicity and connectivity of Functional Units)
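The real Run Control is a Java application using JDBC; purely to illustrate the query pattern, here is a Python (cx_Oracle) sketch that fetches the registered XML document for one executive. The schema, table and DSN names are invented.

# Illustration only: fetch the parameterized XML description registered for a
# host from an Oracle-based Resource Service DB. Schema/DSN names are invented.
import cx_Oracle

def fetch_executive_xml(config_version, host):
    conn = cx_Oracle.connect(user="rs_reader", password="***", dsn="cmsonr_rs")
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT xml_document FROM resource_service_config "
            "WHERE version = :v AND host = :h",
            v=config_version, h=host)
        (doc,) = cur.fetchone()
        return doc.read() if hasattr(doc, "read") else doc   # CLOB or str
    finally:
        conn.close()

# The document would then be handed to the executive on that host via SOAP.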

 How the Resource Service DB is populated
  ◦ DAQ Configurator tool
     Reads the configuration templates and high level layout from the Configuration Template DB and the DAQ Hardware and Configuration DB
     Computes and sets the parameters from the templates
     Allows ad-hoc user input
     Creates the XML configuration documents for the XDAQ executives
  ◦ Other GUIs
 Configurations in the Resource Service DB
  ◦ Are versioned
  ◦ Have a tree-like structure
  ◦ Are used by the Run Control and all sub-systems

 The DAQ online software is based on the XDAQ framework
  ◦ XDAQ Executives (processes)
     e.g. Builder Units, Filter Units, Event Managers, …
  ◦ XDAQ Applications
     One or more per executive
     Perform actions
       Setting front-end parameters
       …
  ◦ XDAQ executives are highly configurable through XML documents
     The document determines the role of the executive
     Sets up the software environment
     Contains the applications and parameters to be loaded by the executive
     Determines the collaborating applications
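To make the idea concrete, here is a deliberately simplified stand-in for such a document, parsed with Python to see which applications an executive should instantiate; the element names are illustrative, not the real xdaq schema.

# Schematic only: a much-simplified stand-in for an XDAQ executive
# configuration document and the applications/parameters it declares.
import xml.etree.ElementTree as ET

EXECUTIVE_XML = """
<Executive host="bu-01" port="1972">
  <Application class="BuilderUnit" instance="0">
    <param name="maxEvents">1000</param>
  </Application>
  <Application class="EventManager" instance="0"/>
</Executive>
"""

root = ET.fromstring(EXECUTIVE_XML)
for app in root.findall("Application"):
    params = {p.get("name"): p.text for p in app.findall("param")}
    print(app.get("class"), app.get("instance"), params)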

 At the start of a session, the Run Control configures the DAQ cluster dynamically
  ◦ The currently registered configuration is loaded from the Resource Service DB
     It holds the information about the Function Managers to be loaded
     It holds the hierarchical information for the FMs
 The started Function Managers (FMs) load all the XDAQ executives according to the configuration currently registered for each sub-system
  ◦ A Job Control Service (an XDAQ executive) runs on all hosts in the cluster
  ◦ The Job Control Service reads the XML document from the configuration
  ◦ The XML documents contain the information about the XDAQ executives and applications to be started on the hosts
 The XDAQ configuration determines:
  ◦ The configuration of the custom hardware
  ◦ The configuration of the Super-Fragment Builder
  ◦ The event data flow topology in the Event Builders

 Trigger and HLT configuration
  ◦ The Trigger and HLT configuration can be selected from the Run Control
  ◦ Default configurations are registered in the FMs
  ◦ The Function Managers load the different configuration settings for the Trigger and HLT accordingly
  ◦ The configurations for the Trigger and HLT are stored in different databases
  ◦ Configurations can be changed at runtime, which triggers FM reloading requests
  ◦ Configuration changes in the DBs are detected by the Run Control immediately

 Architecture summary
  ◦ Configurations are stored as named Recipes
     Recipe types define, for each configurable device type, which settings are to be stored in a Recipe
     Recipes implement the Recipe types with the actual parameter values
  ◦ Configurations are stored in an ORACLE DB and in a PVSS cache
  ◦ Configurations are accessed by name from PVSS, via an RDB Manager
 The Recipe names follow a convention
  ◦ A hierarchy of activity types (e.g. “PHYSICS|pA|VdM”)

 Databases
  ◦ All the configurations are stored in an Oracle DB – the Configuration DB
 Configuration DB
  ◦ Is populated mainly outside running periods by the relevant experts
  ◦ Is populated using PVSS tools
 Cache
  ◦ Each PVSS project where a configuration will be used can have a cache with the relevant Recipes

 The Run Control is based on the PVSS SCADA software
 The PVSS Run Control integrates all the systems:
  ◦ Trigger
  ◦ Front-end hardware control
  ◦ Readout Supervisors
  ◦ HLT
  ◦ Event Builder control
 The configuration of all the sub-systems is treated in a similar manner
 Recipes are applied according to the activity currently set in the Run Control
  ◦ The Recipes to be loaded are matched by name to the set activity
  ◦ The loaded Recipes follow the activity type hierarchy
     e.g. if the set activity name is “PHYSICS|pA|VdM”, the devices to be configured first check for a Recipe matching that name and apply it; if there is none, they check for a “PHYSICS|pA” Recipe; if there is none, they check for one named “PHYSICS”
  ◦ Recipes named “Default” hold the non-activity-dependent configurations
     If no match is found for the given activity, the default settings are applied
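The fallback logic of this naming convention can be summarised in a few lines of Python (a sketch of the convention only, not the PVSS implementation):

# Sketch of the LHCb-style Recipe lookup: walk the activity hierarchy from the
# most specific name to the most general, then fall back to "Default".
def select_recipe(activity, available):
    parts = activity.split("|")
    while parts:
        name = "|".join(parts)
        if name in available:
            return available[name]
        parts.pop()                 # e.g. PHYSICS|pA|VdM -> PHYSICS|pA -> PHYSICS
    return available["Default"]     # non-activity-dependent settings

recipes = {"PHYSICS": "std-physics", "PHYSICS|pA": "pA-settings",
           "Default": "defaults"}
print(select_recipe("PHYSICS|pA|VdM", recipes))   # -> "pA-settings"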

 Dataflow process configuration
  ◦ Based on the offline Gaudi framework
  ◦ The processes are configured via job options files (Python)
  ◦ There are static job options (changed only with new releases of the Online SW)
  ◦ There are job options files dynamically created by the run control
     According to the set Activity
     According to the partitioning mode
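A sketch, under the assumption that the dynamic options are a small Python fragment written by the run control and picked up by the static, release-managed job options; the file and variable names are invented for illustration.

# Illustration only (not the real LHCb online options): the run control writes
# a small python fragment with the run-dependent values, which the static job
# options import. Module and variable names are invented for this sketch.
activity  = "PHYSICS|pA|VdM"
partition = "LHCb"

dynamic_options = (
    f'PartitionName = "{partition}"\n'
    f'Activity      = "{activity}"\n'
)

with open("DynamicRunOptions.py", "w") as f:   # read by the Gaudi dataflow jobs
    f.write(dynamic_options)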

 Trigger configuration
  ◦ Uses an additional Trigger Configuration Key (TCK)
  ◦ The Recipe loaded for the current Activity contains the TCK of the HLT Recipe to be loaded
  ◦ The TCK can be changed at runtime without a major reconfiguration

 Different philosophies and technologies in the DAQ implementations dictated different approaches to system configuration
 Some similarities
  ◦ The experiments implemented a two-step approach
     Define the schema of the configurations
     Define the parameterized configurations
  ◦ ATLAS and CMS use XML files to store configuration data
  ◦ LHCb and ALICE store the configurations as named sets of parameters
  ◦ The usage of databases is pervasive
     Direct storage of configuration values
     Storage of XML files
     Storage of configuration objects
  ◦ The Trigger is handled slightly differently
     Trigger settings can be changed without reconfiguring the whole DAQ system

 A thank you to Clara Gaspar, Hannes Sakulin, Igor Soloviev and Vasco Barroso for the information provided and their patience in explaining it.