LHC RT feedback(s) CO Viewpoint Kris Kostro, AB/CO/FC.


Slide 1: LHC RT feedback(s), CO Viewpoint – Kris Kostro, AB/CO/FC, Sept. 21, 2005

Slide 2: Outline
– Recapitulation of what was already done by CO
– Where can we go from here, and what remains to be done

Slide 3: CO efforts for RT orbit feedback
– Before 2003: studies by Thijs Wijnands and Pedro Ribeiro
– Jens Andersson (fellow):
  – SPS orbit feedback server "version 1", made specifically for this task and successfully used during an MD
  – Feedback server framework; SPS orbit server "version 2", using this framework, successfully tested during an MD
  – Network/server performance tests using the server framework
– End 2004 it was decided to freeze the work for at least one year
  – Infrastructure was not yet ready for more tests (FEC, BPM, PO gateways)

Slide 4: Feedback Server Framework
– "Triggers" activate a sequence of "modules"
– Modules perform operations on the state:
  – multiply vector v with matrix M
  – receive or send network data
  – log data
– CMW management interface
– Triggers and modules are set up by a configuration file
– Written in C++, runs on LynxOS and Linux
– Main motivation: flexibility, decoupling physics calculations from the feedback infrastructure, and doing some work on the LHC orbit feedback in the absence of infrastructure and final interfaces
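To make the trigger/module idea more concrete, here is a minimal C++ sketch of it: a trigger fires a configured chain of modules that operate on a shared state. All names (Trigger, Module, FeedbackState, fire, execute) are illustrative assumptions, not the actual framework API.

```cpp
// Minimal sketch of the trigger/module idea described on this slide.
// Class and member names are illustrative only, not the real framework API.
#include <memory>
#include <vector>

struct FeedbackState {                 // shared state the modules operate on
    std::vector<double> bpmReadings;   // latest orbit measurement
    std::vector<double> corrections;   // corrector settings to send out
};

class Module {                         // one step in the processing chain
public:
    virtual ~Module() = default;
    virtual void execute(FeedbackState& state) = 0;
};

class MatrixMultiplyModule : public Module {   // e.g. "multiply vector v with matrix M"
public:
    void execute(FeedbackState& state) override {
        (void)state;                   // the actual y = M v computation is omitted here
    }
};

class Trigger {                        // activates a configured sequence of modules
public:
    void addModule(std::unique_ptr<Module> m) { chain_.push_back(std::move(m)); }
    void fire(FeedbackState& state) {  // called e.g. on every 10 Hz timing tick
        for (auto& m : chain_) m->execute(state);
    }
private:
    std::vector<std::unique_ptr<Module>> chain_;
};
```

In such a design, the configuration file would only decide which modules are attached to which trigger, which is what keeps the physics computation separate from the feedback infrastructure.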

Slide 5: Server framework (diagram)
– A 10 Hz trigger drives a chain of modules: data import, correction computation (y = Ax), correction output, beam out; a separate trigger resets the corrections
– Triggers and modules are wired together by the config file

Slide 6: Feedback server conclusion
– Good prototyping tool, used for:
  – SPS orbit feedback
  – orbit feedback performance tests
  – Ralph's simulation of orbit response

Slide 7: Performance tests to assess the feasibility of orbit feedback
– Used the feedback server framework on standard PC gateway hardware
– Realistic processing complexity: 1000x1000 matrix multiplication
– Extrapolate network behaviour from tests of smaller scale
  – split over fewer sources: 10 or 20 sources (vs. ~68)
  – but realistic data volume: 1000 inputs (BPMs), 1000 outputs (correctors)
– PCR/Point 8: upgraded technical network
– No correction data sinks
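As a rough illustration of the "realistic processing complexity", the self-contained C++ snippet below times a naive 1000x1000 matrix-vector multiplication. Only the matrix size comes from the slide; the data values and timing method are illustrative.

```cpp
// Time one naive 1000x1000 matrix-vector product (y = A x), the dominant
// per-cycle computation assumed in the tests. Values are arbitrary.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000;
    std::vector<double> A(static_cast<std::size_t>(n) * n, 0.001), x(n, 1.0), y(n, 0.0);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        for (int j = 0; j < n; ++j) sum += A[static_cast<std::size_t>(i) * n + j] * x[j];
        y[i] = sum;
    }
    auto t1 = std::chrono::steady_clock::now();

    std::printf("1000x1000 multiply took %.3f ms\n",
                std::chrono::duration<double, std::milli>(t1 - t0).count());
    return 0;
}
```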

Slide 8: Test configuration, PCR and Point 8
– Provisional 10/25/50 Hz time tick (CTRP & CTRI)
– 10 test sources, test data collection
– (Diagram: orbit server in the PCR on the Prevessin site; 5 front-ends at the surface at Point 8, Ferney; common timing)

Slide 9: Performance tests conclusion
– Server performance
  – 1000 inputs, 1000 outputs
  – 10 Hz OK, 50 Hz possible, >50 Hz more complicated
– Network/system performance
  – 10 sources, 99.9% of input sets complete within 6 ms
  – no signs of scaling problems so far
– Cycle timeline: collect data < 6 ms, process data < 12 ms, send data < 16 ms

Slide 10: Where can we go from here
– Thanks to the feedback server framework we can start immediately where we stopped in 2004, with the new interfaces and infrastructure
– The feedback server framework is at least a good prototyping tool for the orbit and other future LHC feedbacks
– At least partial infrastructure must be in place, and the interfaces with equipment defined, before serious continuation of the feedback tests is possible
– Possible scenario: one full-time SW engineer from CO, common project with OP
  – CO responsible for the server and communication
  – OP responsible for the physics computation and GUI
– Start at least 1.5 years before the LHC pilot beam

Slide 11: What remains to be done for orbit
– Finalize interfaces
  – BPMs: FESA? Standard CMW or special UDP?
  – power converters: standard CMW or special UDP?
  – external interfaces to acquire orbit data?
  – matrix changes
– Synchronization
  – timing infrastructure and triggering of the feedback loop
– Server functionality and robustness
  – hot-swap on matrix change
  – handling of BPM and corrector errors
  – post mortem

Slide 12: What remains to be done for orbit (cont.)
– Continue to assess network performance
  – include (more) data sinks
  – study perturbations caused by simultaneous network load
  – faster network interfaces (Gigabit Ethernet) for the server?
– Miscellaneous
  – study, together with BDI, the impact of other simultaneous BPM operation (bunch-per-bunch orbit acquisition) on the feedback
  – hardware for the server(s)

Slide 13: Basic principle
– 68 Beam Position Monitor front-ends: up to ~1200 BPMs
– 80 power converter gateways: up to ~750 correctors
– One central server
– (Diagram: the server continuously repeats the cycle acquire → calculate → correct over time)

Slide 14: Measurement
– Illustration of what is being measured:
  – a) time to (trigger and) collect BPM data
  – b) time to get a correction result ready
– Not included: time to send the corrections out, and for them to be applied
– (Diagram: a timing event fans out to 10 sources, #1–#5 at Point 8 and #6–#10 in the PCR; interval a ends when the input data set is complete at the server, interval b when the calculation is complete)
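The hypothetical C++ sketch below shows how such per-cycle latencies could be recorded and summarized, e.g. as the 99.9th percentile quoted in the surrounding slides. It is only an illustration of the measurement idea, not the tool actually used in the tests.

```cpp
// Per-cycle latencies relative to the timing event, and a simple percentile
// summary. Struct and function names are hypothetical.
#include <algorithm>
#include <cstddef>
#include <vector>

struct CycleTiming {
    double inputCompleteMs;  // (a) timing event -> last source's data received
    double calcCompleteMs;   // (b) timing event -> correction result ready
};

// 99.9th percentile of a non-empty set of latencies, in milliseconds
double percentile999(std::vector<double> values) {
    std::sort(values.begin(), values.end());
    std::size_t idx = static_cast<std::size_t>(0.999 * (values.size() - 1));
    return values[idx];
}
```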

Slide 15: Distribution at server
– a) time to receive all data
– b) …and also compute the corrections

Slide 16: Distribution, log scale
– After 6 ms, 99.9+% of input sets are ready at the server
– After 12 ms, 99.9+% of output sets are ready at the server

Slide 17: Number of sources
– Investigate scaling: compare 3, 5, 10 and 20 sources of UDP data
– Kept constant: 1000 data entries in total, 25 Hz

Slide 18: Networking
– Technical network: 1000 entries (value, sigma, ID; 20 bytes each)
– At 25 Hz, sent as a ~4 ms burst every 40 ms: ~5 MB/s = ~40 Mbit/s bursts
  – that is, 40% of the server's 100 Mbit/s bandwidth
  – …but only 4% on average, and 0.4% of the gigabit backbone
– Will the future background load change the picture?
– (Diagram: transfer rate vs. time — 20 kB bursts at 5 MB/s for 4 ms every 40 ms, 0.5 MB/s on average)
– Figures are approximately per plane; the total would be twice the amount. However, the 20 bytes per entry can be reduced to 10 or even less.
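A quick back-of-the-envelope check of these figures, assuming the values stated on the slide (1000 entries of 20 bytes per cycle, 25 Hz, data arriving within a ~4 ms burst):

```cpp
// Reproduce the bandwidth estimates from the slide; all inputs are the
// slide's stated assumptions.
#include <cstdio>

int main() {
    const double entries   = 1000;
    const double entrySize = 20;      // bytes: value, sigma, ID
    const double rateHz    = 25;
    const double burstSec  = 0.004;   // data arrives within ~4 ms of each tick

    double perCycleBytes = entries * entrySize;             // 20 kB per cycle
    double burstMBps     = perCycleBytes / burstSec / 1e6;  // ~5 MB/s during the burst
    double averageMBps   = perCycleBytes * rateHz / 1e6;    // ~0.5 MB/s averaged

    std::printf("per cycle: %.0f kB, burst: %.1f MB/s (~%.0f Mbit/s), average: %.1f MB/s\n",
                perCycleBytes / 1e3, burstMBps, burstMBps * 8, averageMBps);
    return 0;
}
```

This reproduces the ~40 Mbit/s burst figure (40% of a 100 Mbit/s link) and the ~4 Mbit/s average (4% of 100 Mbit/s, 0.4% of a gigabit backbone).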

Slide 19: Distribution, CMW
– CMW also performs well
– Not a strict CMW/UDP comparison:
  – UDP: floats, CMW: doubles
  – UDP: including names, CMW: no names
– Preparation work was done for both CMW and UDP