LHC BLM Software review June 2008

BLM Software components

Handled by BI Software section
– Expert GUIs
  • Not discussed today
– Real-Time software
  • Topic for this presentation

Handled by OP / CO
– Fixed displays
– Settings management
– Operational GUIs

Real-Time software architecture

Based on FESA (the AB standard framework for real-time software)
– Real-time action scheduling
  • In the case of the BLMs, most actions are triggered by events from the central timing system
  • Plus some special triggers from the collimation system
– Communication mechanism (server actions) with clients
  • GET, SET and SUBSCRIBE
– Basic configuration using XML files on NFS
– Internal memory organisation
  • All variables and constants shared between real-time actions and server actions are formally defined in the design. The BLM design has more than 450 such variables!
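As a rough illustration of that shared-memory split between real-time and server actions (a hand-written sketch only, with invented names; FESA generates the real equipment code from the design):

    #include <mutex>
    #include <vector>

    // Illustrative only: data shared between real-time and server actions,
    // as declared in the FESA design (names here are invented).
    struct DeviceData {
        std::mutex guard;                  // serialises access from both sides
        std::vector<double> runningSums;   // filled by the 1 Hz acquisition action
        std::vector<double> thresholds;    // thresholds read back from the hardware
    };

    // Real-time action: woken by a timing event, refreshes the shared data.
    void acquisitionAction(DeviceData& d, const std::vector<double>& fresh) {
        std::lock_guard<std::mutex> lock(d.guard);
        d.runningSums = fresh;
    }

    // Server action: a client GET returns a consistent snapshot of the data.
    std::vector<double> getAcquisition(DeviceData& d) {
        std::lock_guard<std::mutex> lock(d.guard);
        return d.runningSums;
    }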

Real-time action scheduling - Acquisition

Triggered by the 1 Hz tick
– Acquires the 12 running-sum data from the 16 DABs (Digital Acquisition Boards), and also reads the currently active thresholds for each monitor
– Reads all the thresholds and monitor names from the non-volatile memory
  • This may be removed as it is quite heavy at 1 Hz!
– Gets status from the 16 DABs plus status from the combiner card
– Copies some data / status information from the 16 DABs to the combiner card when the combiner sets a test flag
– Keeps a history of the last 512 running sums (used later in the post-mortem data)

Presently one Acquisition action handles everything, but this may be changed to individual actions to allow accurate diagnostics and benchmarking
– No change in functionality though!
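The 512-deep running-sum history could look like the following fixed-size ring buffer (a minimal sketch; only the depth of 512 comes from the slides, everything else is illustrative):

    #include <array>
    #include <cstddef>
    #include <vector>

    // Ring buffer keeping the last N acquisitions of a running sum.
    template <typename T, std::size_t N = 512>
    class History {
        std::array<T, N> buf{};
        std::size_t next = 0;    // slot that will be overwritten next
        std::size_t count = 0;   // number of valid entries (<= N)
    public:
        void push(const T& sample) {
            buf[next] = sample;
            next = (next + 1) % N;
            if (count < N) ++count;
        }
        // Oldest-to-newest readout, e.g. when the post-mortem data is frozen.
        std::vector<T> snapshot() const {
            std::vector<T> out;
            out.reserve(count);
            const std::size_t start = (next + N - count) % N;
            for (std::size_t i = 0; i < count; ++i)
                out.push_back(buf[(start + i) % N]);
            return out;
        }
    };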

Real-time action scheduling - Capture

Started by operators sending the capture event
– The buffers are actually started using BI's BST timing, which is triggered by the capture event

2048 packets, with two different capture modes: SLOW (2.54 ms) and FAST (40 µs)
– The control room can SET this mode to SLOW or FAST before sending the capture event

On the CPU, two local events are created from the capture event
– SLOW: triggered at capture event + 6 seconds
– FAST: triggered at capture event + 90 ms

One real-time action is woken up by these two local events (+90 ms and +6 s)
– Uses the 'data-ready' flag on the acquisition cards to determine whether the data capture is complete
– If the capture mode is set to FAST, this flag will be set at capture event + ~82 ms
– Otherwise, the flag will be set at capture event + ~5.2 seconds
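A sketch of the completion check (illustrative only: in the real system the action is woken by the two local timing events rather than a sleep, and the flag is read from the acquisition cards; waitForCapture and its grace period are invented):

    #include <chrono>
    #include <functional>
    #include <thread>

    // Wait for the expected capture-completion time, then confirm via the
    // data-ready flag before the data is served to clients.
    bool waitForCapture(bool fastMode, const std::function<bool()>& dataReady) {
        const auto wakeUp = fastMode ? std::chrono::milliseconds(90)
                                     : std::chrono::milliseconds(6000);
        std::this_thread::sleep_for(wakeUp);      // stand-in for the local event
        for (int i = 0; i < 10; ++i) {            // small grace period
            if (dataReady()) return true;         // flag set at ~+82 ms / ~+5.2 s
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
        return false;                             // capture not complete
    }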

Real-time action scheduling - Beam Dump (XPOC) & Post Mortem

Unlike the Capture buffers, which are started on demand, the Beam Dump & Post Mortem buffers are frozen on demand

Beam Dump buffers contain 200 × 40 µs samples
– Using the BST, an additional 4 ms delay is applied to the Beam Dump trigger
– Therefore we measure 4 ms before the beam dump and 4 ms after it

Post Mortem buffers contain 2048 × 40 µs samples
– The BST delay is 4 ms
– Therefore we measure ~78 ms before the beam dump and 4 ms after it

The two corresponding real-time actions are triggered at least 4 ms after the respective event from the central timing
– The Beam Dump action notifies clients that data is ready
– The Post Mortem action pushes the data to a post-mortem server
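The window arithmetic behind those numbers, written out as a small stand-alone check (values taken straight from the slide):

    #include <cstdio>

    int main() {
        const double sample_us   = 40.0;    // one sample every 40 microseconds
        const double bstDelay_ms = 4.0;     // extra delay applied via the BST
        const int dumpSamples = 200, pmSamples = 2048;

        const double dumpTotal_ms = dumpSamples * sample_us / 1000.0;  //  8.0 ms
        const double pmTotal_ms   = pmSamples   * sample_us / 1000.0;  // ~81.9 ms

        std::printf("Beam Dump  : %.1f ms before, %.1f ms after the dump\n",
                    dumpTotal_ms - bstDelay_ms, bstDelay_ms);          //  4.0 / 4.0
        std::printf("Post Mortem: %.1f ms before, %.1f ms after the dump\n",
                    pmTotal_ms - bstDelay_ms, bstDelay_ms);            // ~77.9 / 4.0
        return 0;
    }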

Real-time action scheduling - Parallelization of the actions

The four real-time actions are potentially fighting for CPU time
– The 1 Hz acquisition is synchronous
– The Capture, Beam Dump and Post Mortem acquisitions are asynchronous
– The post-mortem data transfer is quite time consuming (>1 second), so it causes holes in the Acquisition (i.e. we miss an acquisition)
  • Not acceptable!
– The latest version of FESA allows us to prioritise actions
  • The Capture / Beam Dump / Post Mortem data will stay there until we have read it, so it takes the lowest priority!
  • So in our case, the priority order is:
    1. Acquisition
    2. Collimation data (not discussed today)
    3. Capture data
    4. Beam Dump data
    5. Post Mortem data
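One way to picture that ordering (an illustrative priority queue, not the FESA scheduling mechanism itself; the enum and type names are invented):

    #include <queue>
    #include <vector>

    // Lower number = served first; the buffered Capture/Beam Dump/Post Mortem
    // data waits for us, so the synchronous 1 Hz Acquisition must never starve.
    enum class Action { Acquisition = 1, Collimation = 2, Capture = 3,
                        BeamDump = 4, PostMortem = 5 };

    struct ByPriority {
        bool operator()(Action a, Action b) const {
            return static_cast<int>(a) > static_cast<int>(b);   // min-heap
        }
    };

    using PendingActions =
        std::priority_queue<Action, std::vector<Action>, ByPriority>;

    // pending.push(Action::PostMortem); pending.push(Action::Acquisition);
    // pending.top() == Action::Acquisition -> the acquisition is served first.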

Server (client-side) actions

Implemented actions (a.k.a. Properties) include
– 4 GET actions to complement the 4 main real-time actions
  • Acquisition
  • Capture
  • Beam Dump
  • Post Mortem
– 2 additional actions to GET expert data (not intended for OP)
  • BLETCExpertAcquisition
  • BLECSExpertAcquisition
– An action to GET and SET the threshold tables and monitor names / details
  • A special action that inhibits all real-time actions to avoid corruption of data
– An action to GET and SET the configuration of the combiner card
  • May be integrated with the action for the thresholds in the future?

As with the real-time actions, we also have the possibility to prioritise these server actions
– Basically, make sure that the post-mortem action has lower priority than all the others!
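The idea behind the special SET action can be sketched like this (illustrative only; the inhibit mechanism and names are invented, the real one is provided by FESA):

    #include <mutex>
    #include <vector>

    // While a new threshold table is being written, the real-time actions are
    // held off so they never see a half-updated table.
    struct ThresholdStore {
        std::mutex rtInhibit;              // also taken by the real-time actions
        std::vector<double> table;

        void set(const std::vector<double>& newTable) {
            std::lock_guard<std::mutex> hold(rtInhibit);
            table = newTable;              // then forwarded to non-volatile memory
        }
    };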

Configuration

Depending on its configuration, the front-end controls what is sent to the logging, fixed displays, operational GUIs, etc. – it is very important!
– FESA provides persistency using NFS files
  • Deemed not reliable enough, so the configuration data is stored in non-volatile memory on the acquisition cards
  • A server action accepts the new threshold data and monitor names, and sends them to the non-volatile memory
  • For a trace of what was sent, a copy of every new configuration is stored in a directory on NFS
  • The same system is used for the combiner card configuration

The source of this configuration is handled by the LSA settings management
– Plus MCS (critical settings)
– And some dedicated BLM tables for the data
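The NFS trace could be as simple as dropping a timestamped copy of each configuration that is sent to the cards (a sketch; the directory path and function name are invented):

    #include <chrono>
    #include <filesystem>
    #include <fstream>
    #include <string>

    // Keep a trace of every configuration sent to the non-volatile memory by
    // writing a timestamped copy to a directory on NFS (path is hypothetical).
    void traceConfiguration(const std::string& xmlBlob) {
        namespace fs = std::filesystem;
        const fs::path dir = "/nfs/blm/config-history";
        fs::create_directories(dir);
        const auto stamp =
            std::chrono::system_clock::now().time_since_epoch().count();
        std::ofstream out(dir / ("thresholds_" + std::to_string(stamp) + ".xml"));
        out << xmlBlob;
    }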

Finally, some statistics (less is better)

25 instances of the software, each accessing up to 16 acquisition cards and a combiner card
– Plus 2 extra instances deployed in CMS for the BCM

Lines of code
– FESA framework
  • C++: 17'908 lines
  • XML: 18'553 lines
  • Perl: 4'842 lines
  • Java: 39'941 lines
– BLM-specific code
  • C++: 6'210 lines
  • Java: 8'860 lines

We rely heavily on FESA!

More details

On the LIDS page
– Link found on the BI SW homepage
  • Details of the BLMLHC design
  • Details of the BLMLHC deployment
  • … and much more

Also watch the LHC Technical Board Wiki
– Link also found on our homepage