The LHCb Online Framework for Global Operational Control and Experiment Protection
F. Alessio, R. Jacobsson, CERN, Switzerland; S. Schleich, TU Dortmund, Germany; on behalf of the LHCb collaboration
Conference on Computing in High Energy and Nuclear Physics (CHEP 2010), Taipei, Taiwan, 19 October 2010

Motivations and Implications

LHC experiments have never before been so tightly connected to the accelerator. Why?

Enormous stored energy – up to 2 x 360 MJ at nominal intensity – and fragile experiments:
• Protection of the experiments
• Monitoring, understanding, analysing and optimizing the experimental conditions
  • Luminosity determination
  • Automatic calibrations, global timing and beam/background monitoring
  • Long-term detector stability and ageing monitoring due to radiation
• High density of operational communications and procedures
• High level of reliability, requiring direct communication interfaces at both HW and SW level
• High level of interconnectivity, with a lot of redundancy

High interaction rate and large event size:
• Fast and reliable centralized readout, storage and transfer to offline processing
• Fast feedback from Data Quality checking

Many years of 24h operation with few people and non-experts:
• Operating the whole detector from one console
• Understandable high-level tools for diagnostics, alarms and data monitoring
• Homogeneity and scalability of the system
• Shifter training and gathering of information "at a glance"

LHCb Particularities

LHCb sits on Beam 2, injected from the SPS via transfer line TI8:

1. Line of direct sight of the injection line
   • High level of protection and reliability required
   • Beam stoppers which can produce high-density showers
   • Very complex background structure → many parameters
   • A full batch of up to 288 × 10^11 protons over ~10 μs corresponds to an energy of about 2.4 MJ per single injection (a rough numerical cross-check is sketched below)
2. Fragile detector equipped with silicon instrumentation (Vertex Locator, Silicon Trackers)
3. Readout electronics already ON at INJECTION, high voltages OFF at INJECTION; switched ON/OFF coherently with the LHC mode
4. Movable detector (VELO), moved IN only during PHYSICS
5. Global timing and readout control/event management centrally managed and relying on LHC parameters
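As a hedged illustration of the numbers quoted above, the energy of one full injected batch can be cross-checked by multiplying the number of protons by the proton energy at injection. The 450 GeV injection energy is an assumption taken from public LHC parameters, not from the slide itself.

```python
# Rough cross-check of the "~2.4 MJ per single injection" figure quoted on the slide.
# Assumption (not from the slide): protons are injected into the LHC at 450 GeV.

E_PROTON_GEV = 450.0            # assumed injection energy per proton [GeV]
GEV_TO_JOULE = 1.602176634e-10  # 1 GeV expressed in joules
N_PROTONS = 288e11              # "up to 288 x 10^11 protons" per full batch (from the slide)

energy_joules = N_PROTONS * E_PROTON_GEV * GEV_TO_JOULE
print(f"Energy of one full injected batch: {energy_joules / 1e6:.1f} MJ")
# -> about 2.1 MJ, consistent with the order of magnitude quoted on the slide
```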

Large Interconnectivity

1. Protection through fast beam extraction
2. Automating and securing operational procedures
3. Readout control and event management
4. Diagnostics, direct feedback and real-time control for fast improvements
5. Analysis and feedback for optimization

LHCb Radiation Sources and Implications

[Diagram: radiation sources ranging from halo, beam-gas and scraping up to beam incidents, mapped against effects on widely different timescales – instantaneous damage (beam interlock), background and trigger rates, poor data quality, single event upsets, accelerated ageing and long-term damage – and against the corresponding handles: online monitoring, accumulated dose and luminosity, background, beam characteristics and machine settings.]

• Complex background structure requiring complete understanding and widely different levels of reaction time
• Cover all ranges of losses and protect the experiment
• Study the fine time structure of losses
• Measure beam characteristics online and improve machine settings → reduce background and improve the luminosity/background ratio

LHCb Defense & Safety

Inhibit injection or dump the beam if any of the following holds (a boolean sketch of this logic follows the slide):
1. The Vertex Locator is out of its garage position during "non-safe" operations
2. The LHCb spectrometer magnet is NOT OK
3. The diamond-based Beam Conditions Monitor (BCM) has dumped the beam (human analysis required)

[Diagram: interlock chain connecting the Master of Injection Inhibit, the CIBU (Beam Permit), CIBF (Injection Inhibit 1) and CIBU (Injection Inhibit 2) interfaces, the "LHCb Detector Beam Control", the BCM Read-Out & Injection Inhibit Interface (BCM TELL1), the BCM, the Experiment Control System, the Beam Interlock Controller (BIC), the VELO beam interlock and the magnet status interlock.]

• "Safe Beam Flags" for mode dependence: distributed via the General Machine Timing when the beam is declared "safe"
• Handshakes between machine and experiment to allow moving from a "safe" mode to an "unsafe" machine mode (INJECTION, ADJUST during physics and controlled DUMP)
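The protection conditions above lend themselves to a simple boolean combination. Below is a minimal sketch, assuming hypothetical signal names (velo_in_garage, magnet_ok, bcm_beam_permit, safe_beam_flag); none of these names, nor the code itself, are taken from the actual LHCb implementation, which lives in hardware (CIBU/CIBF interfaces) and in the Experiment Control System.

```python
# Minimal sketch of the injection-inhibit / beam-dump condition logic described on the slide.
# All names and the boolean structure are illustrative assumptions, not the real LHCb interlock code.

def beam_permit(velo_in_garage: bool,
                magnet_ok: bool,
                bcm_beam_permit: bool,
                safe_beam_flag: bool) -> bool:
    """Return True if LHCb raises no injection inhibit / beam dump request."""
    # 1. The VELO must be in its garage position unless the machine declares the beam "safe"
    velo_ok = velo_in_garage or safe_beam_flag
    # 2. The spectrometer magnet status must be OK
    # 3. The BCM must not have withdrawn its beam permit
    return velo_ok and magnet_ok and bcm_beam_permit


# Example: during injection (beam not "safe") with the VELO still out of its garage position,
# the permit is withdrawn and injection is inhibited.
assert beam_permit(velo_in_garage=False, magnet_ok=True,
                   bcm_beam_permit=True, safe_beam_flag=False) is False
```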

LHCb Global Centralized Slow Operations

Automation as a function of the LHC mode:
• The 19 LHC states are reduced to 8 LHCb states by regrouping similar states (a toy mapping is sketched below)
• The HV and LV of each sub-detector and the data taking are controlled via the LHCb state machine
• A new LHCb state is proposed and simply acknowledged by the shifter (cross-check)
• Movable devices (VELO allowed IN) may only move during the collision phase
• Reliability and completeness ensure that the experiment is in the right state at the right moment
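As an illustration of the state regrouping described above, here is a minimal sketch of such a mapping. The LHC mode names follow the standard machine modes, but the grouping and the LHCb state names are assumptions chosen for illustration, not the actual LHCb state machine definition.

```python
# Toy sketch of regrouping LHC machine modes into a smaller set of experiment states,
# as described on the slide ("19 LHC states reduced to 8 LHCb states").
# The grouping and the LHCb state names below are illustrative assumptions only.

LHC_MODE_TO_LHCB_STATE = {
    "SETUP": "NOT_READY",
    "INJECTION_PROBE_BEAM": "INJECTION",
    "INJECTION_PHYSICS_BEAM": "INJECTION",
    "PREPARE_RAMP": "RAMP",
    "RAMP": "RAMP",
    "FLAT_TOP": "ADJUST",
    "SQUEEZE": "ADJUST",
    "ADJUST": "ADJUST",
    "STABLE_BEAMS": "PHYSICS",   # HV on, VELO allowed IN
    "BEAM_DUMP": "DUMP",
    "RAMP_DOWN": "NOT_READY",
    "NO_BEAM": "NO_BEAM",
}

def propose_lhcb_state(lhc_mode: str) -> str:
    """Propose an LHCb state for the shifter to acknowledge, given the published LHC mode."""
    return LHC_MODE_TO_LHCB_STATE.get(lhc_mode, "UNKNOWN")

print(propose_lhcb_state("STABLE_BEAMS"))  # -> PHYSICS
```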

LHCb State Control Panel

[Screenshot of the LHCb State Control panel, courtesy of Clara Gaspar, providing the underlying framework for controlling HV, LV, VELO, DAQ and trigger in PVSS.]

LHCb Global Real-Time Operations

[Diagram: real-time connections between the Readout Supervisor, the LHC accelerator, the Beam Phase and Intensity Monitor, the sub-detectors, the Event Filter Farm, the L0 trigger and the front-end/readout electronics, exchanging the clock/orbit, UTC and LHC parameters, bunch currents, hardware and run parameters, run statistics, luminosity, detector status, the L0 decision, fast readout control, the trigger throttle, the RS event bank, events and event requests.]

Interconnectivity in real-time information exchange for readout control and event management (an illustrative sketch of such a per-event summary record follows below).
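The Readout Supervisor injects a summary of this real-time information into the data stream as an "RS event bank". Below is a minimal sketch of what such a record could contain; the field names, example values and dataclass layout are chosen purely for illustration and do not reproduce the real LHCb bank format.

```python
# Illustrative sketch of the kind of per-event information the Readout Supervisor combines,
# based on the quantities named on the slide. Names, layout and values are assumptions only.

from dataclasses import dataclass

@dataclass
class RSEventSummary:
    run_number: int          # from the run parameters set by the Experiment Control System
    orbit_id: int            # from the LHC clock/orbit distribution
    bunch_crossing_id: int   # bunch crossing within the orbit
    l0_decision: bool        # L0 trigger decision that caused the readout
    lhc_mode: str            # LHC machine mode published with the LHC parameters
    beam1_current: float     # bunch currents from the Beam Phase and Intensity Monitor
    beam2_current: float

# Hypothetical example values, for illustration only:
summary = RSEventSummary(run_number=71234, orbit_id=123456, bunch_crossing_id=101,
                         l0_decision=True, lhc_mode="STABLE_BEAMS",
                         beam1_current=1.1e11, beam2_current=1.1e11)
print(summary.lhc_mode)
```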

Experimental Conditions – Online and Offline

[Diagram: information exchange between LHCb and the LHC via dedicated servers and the LHC logging databases.
From LHCb: beam and background monitoring, timing monitoring and experimental conditions (trigger rates, online luminosity, bunch-by-bunch measurements, beam-gas, >10000 parameters), plus safety conditions, SW interlocks, magnet status and radiation doses.
From the LHC: machine states and machine conditions (bunch structure, beam characteristics, and further parameters).
The data feed the LHCb databases and processed data; the LHCb control room (alarms, global centralized operations of HV/LV and LHCb states, timing corrections, readout control); the LHC control room and web pages (LHC operations, tuning of the machine); and the LHCb offline interactive analysis tools (run summaries, run "at a glance", the Experimental Analysis Tool with trends, plots and tables, the global operations web page and shifter organization).]

A sketch of a generic conditions-publishing loop in this spirit is given below.
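To make the exchange concrete, here is a minimal sketch of a publish/read loop in which the experiment publishes its conditions for the machine side and reads back the published machine conditions. The in-memory "exchange" dictionary, the key names and the example values are all invented for illustration; the real exchange goes through dedicated experiment-machine data-interchange servers and the LHC logging databases, which are not reproduced here.

```python
# Purely illustrative sketch of a conditions-exchange loop in the spirit of the slide:
# LHCb publishes its experimental conditions, and reads back conditions published by the LHC.
# The dict, key names and values are assumptions, not the real exchange mechanism.

exchange: dict[str, object] = {}  # stand-in for the information-exchange servers

def publish_lhcb_conditions(trigger_rate_hz: float, online_lumi: float, background: float) -> None:
    """Publish a few LHCb quantities for the machine side and the logging databases."""
    exchange["LHCb/TriggerRate"] = trigger_rate_hz
    exchange["LHCb/OnlineLuminosity"] = online_lumi
    exchange["LHCb/Background"] = background

def read_lhc_conditions() -> dict:
    """Read back whatever the machine side has published."""
    return {k: v for k, v in exchange.items() if k.startswith("LHC/")}

# One iteration of the loop (in reality this runs continuously in the online system):
exchange["LHC/MachineMode"] = "STABLE_BEAMS"   # as if published by the machine side
publish_lhcb_conditions(trigger_rate_hz=1.0e6, online_lumi=1.6e32, background=0.02)
print(read_lhc_conditions())
```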

Conclusion

A complete HW and SW framework has been implemented.

HW, for a high level of safety, high speed and ~100% reliability, implying a physical link:
• Beam Interlock System (BIS): pure hardware between the user systems and the LHC injection and beam dump
• General Machine Timing (GMT): slow machine timing and distribution of the Safe Machine Parameters, including the beam flags
• Beam Synchronous Timing (BST): synchronicity with the LHC beam instrumentation
• Timing reception and monitoring (TTC): LHC clock and orbit distribution
• Experiment-machine communication interfaces: gateways, technical network, services/commands

SW, as a project within the LHCb Experiment Control System, for online monitoring, offline analysis, alarms, human-PC interaction, experiment-machine communication and slow controls:
• Post Mortem Telegrams (PMT): post-mortem analysis of beam dumps/accidents
• Software interlocks derived from the LHCb particularities
• Control of fill procedures, status of the experiment and handshakes with the machine
• Beam and background monitoring
• Online luminosity and online measurement of parameters/experimental conditions
• Visual graphs, alarms, Run Summary, Run Plan
• Tools for quick offline analysis and web pages
• LHCb control room organization and shifter tools

Conclusion (continued)

A complete HW and SW framework has been implemented:
• Constantly used, stable, with a high success rate
• Heavy contribution to machine commissioning throughout the running period, thanks to the archiving and interactive tools
• Continuously used for LHCb optimization
• Ready in time: the experiments are starting to be sensitive not only to the luminosity, but also to the beam currents!

Backup