
1 “Post Mortem” during Beam Operation
Why me?
- I have no general responsibility for post-mortem
- Within CO, I did only little work on it
- The questions asked by Karl-Hubert are addressed to CO, but are also questions to other groups
- I was involved at an early stage in the initial discussions on post-mortem
- For machine protection, post-mortem is vital
- I am pushing inside (and outside) CO to take it up ...
Rüdiger Schmidt, Review on Controls, September 2005
SCADA Days 1999

2 What is “Post Mortem”?
POST MORTEM: from Latin, “after death”. An examination made of a dead body to ascertain the cause of death; an inquisition post mortem is one made by the coroner.
[Image: Post Mortem Set, Dr. Bill Shepard Dudley (1880)]

3 Who is dead? The LHC? The magnets? The beam?

4 Dumping an LHC beam
[Figure (L. Bruno): beam absorber (graphite) with concrete shielding, about 8 m long, heated up to 800 C]

5 First priority: “Post Mortem” is required to ensure correct operation of the LHC protection systems after every beam dump, so that operation can rely on the correct functioning of all protection systems. This includes analysis of data from the transient recording, logging and alarm systems.
Second priority: “Post Mortem” is required to understand the causes of any kind of accident (that should not happen).
Accident: takes LHC out for some hours (quench) ... or some years.

6 Report from the LHC Machine Protection Review, May 10th, 2005
A comprehensive post mortem data acquisition capability after a beam dump is crucial in ensuring efficient operations. The Committee suggests that the various sub-system post mortem requirements should be defined centrally rather than determined ad hoc as seems to be happening at present.
The basic operation concept of the machine protection system requires comprehensive post mortem data acquisition and analysis as well as automatic mandatory self-tests to ensure and re-qualify the anticipated safety level of the system. Those functions require tight coordination between the machine protection and the overall accelerator control system. Technical post-mortem procedures have to be designed, implemented and tested, and practicable operational sequences have to be specified and executed. Post mortem requirements on the sub-systems need to be centrally determined rather than defined ad hoc.
Based on the presentations, the committee was not able to review the software interfaces and software methods involved in any detail. The Committee is concerned that the remaining time until the first beam tests might be insufficient to implement all relevant applications and services.

7 Data recording: what types of recording?
Transient recording
- Stores transient data for certain variables after an event or after an internal fault of a system
Logging system
- Stores logging data for certain variables
- Collects data, typically at 1 Hz or slower, at regular intervals or on change
Alarm system
- Handles alarms in case of fault conditions and stores alarm data
- Defines and processes alarm data for certain variables
Shot-by-shot logging system (designed to monitor LHC filling)
- Stores logging data for certain variables for each extraction from SPS to TT40 / TI8 (after an event ... the shot)
Variables can be stored as one data point, or as transient data with many data points. Name and time stamp are required for all entities. Post-mortem analysis uses data from all these systems.

8 What has been done ... controls
- Logging system: operational
- Alarm system: operational
- Shot-by-shot logging system (designed to monitor LHC filling): operational, including recording of transients
Time stamping:
- Timing system connected to the instrumentation for all systems
- Time stamping for beam-related measurements by the timing system down to ns accuracy, if required
- Time stamping for slower processes either with PLCs (~1 ms accuracy) or via WorldFIP (better than 1 ms)

9 We have these systems, but they must be fed with data: this is the only way to validate them. Transient data is not yet well covered for LHC.

10 What has been done ... CO and other groups
Some documentation:
- The LHC Post Mortem System, LHC-Project Note 303, 10/2002 (E. Ciapala, F. Rodriguez-Mateos, R. Schmidt, J. Wenninger)
- What do we see in the control room? (Chamonix 12, 2003) (R. Lauckner)
- http://lhc-postmortem.web.cern.ch/lhc-postmortem/
- DRAFT: Post Mortem Asynchronous Transient Capture Client Interface, LHC-CP-ES-0001 rev 0.3, R. Lauckner
Functional specifications for beam instrumentation, including requirements for data recording (e.g. post mortem):
- Measurement of the Beam Losses in the LHC Rings (LHC-BLM-ES-0001-20-00)
- On the Measurements of the Beam Current, Lifetime and Decay Rate in the LHC Rings (LHC-BCT-ES-0001-10-00)
- Measurement of the Beam Transverse Distribution in the LHC Rings (LHC-B-ES-0006-10-00)
- Instrumentation for the LHC Beam Dumping System (LHC-B-ES-0008-20-00)
- High Sensitivity Measurement of the Beam Longitudinal Distribution of the LHC Beams (LHC-B-ES-0005.00-rev2.0)
- Measurement of the Beam Position in the LHC Main Rings (lhc-bpm-es-0004v2)

11 What has been done ... beam interlocks
When the beams are dumped (either programmed or after a failure), the Beam Interlock System will:
- generate an event to trigger transient recording for all systems that are connected to the timing system and know this event
- time stamp all beam abort request signals from connected user systems, to establish the exact time sequence of which user system requests a beam dump at what time
This allows some analysis:
- which system originally requested the beam dump
- which other systems would have triggered a beam dump shortly later
This would also work even if the PM trigger via the timing system should not get out. This was working during last year's TT40 / TI8 tests ... however, this is only a very small part of what is required.
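The time-sequence analysis sketched above can be illustrated in a few lines. This is a minimal sketch only; the data model, timestamps and all names are hypothetical, not the actual Beam Interlock System software:

```python
# Sketch: reconstruct the sequence of beam-abort requests from
# time-stamped interlock records (hypothetical data model).

def analyse_dump(requests):
    """requests: list of (timestamp_ns, system_name) tuples, one per
    beam-abort request signal recorded by the interlock system."""
    ordered = sorted(requests)                # earliest request first
    origin_time, origin = ordered[0]          # system that originally requested the dump
    # Systems that would also have triggered a dump shortly later,
    # with their delay relative to the originating request:
    followers = [(t - origin_time, s) for t, s in ordered[1:]]
    return origin, followers

# Example with invented timestamps (nanoseconds):
requests = [(1_000_250, "BLM"), (1_000_020, "QPS"), (1_000_400, "BPM")]
origin, followers = analyse_dump(requests)
# origin is "QPS"; followers lists BLM and BPM with their delays in ns
```

The same ordering works regardless of how many user systems are connected, as long as all abort signals share a common time base, which is exactly what the central time stamping provides.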

12 The tools we had during the TT40 / TI8 run 2005 allowed us to reconstruct the cause of the beam accident: CERN-AB-Note-2005-014, TT40 Damage during 2004 High Intensity SPS Extraction (Goddard, B; Kain, V; Mertens, V; Uythoven, J; Wenninger, J).

13 What is missing
It is very useful to write functional specifications with requirements, but the work does not end there:
- What can we expect from the different systems, and when?
- How to get the data into a central repository?
- How to store and manage the data?
- How to correlate the data?
- How to analyse the data?
Issues:
- Details on event triggering
- Data formats
- Data volumes
- Data storage and management
- Data archiving
- Naming conventions

14 How to go on ...
- The objective is to arrive at a coherent system across the LHC
- ‘Post Mortem’ = data recording and analysis for LHC accelerator commissioning and operation; it cannot be handled by the controls group alone
- In general, the data is provided by the equipment groups; most transient recording is done by hardware developed in the equipment groups
- However, in my view the controls group has the responsibility to put things together (tbd)
- Many building blocks to make a coherent system are available
- Post mortem for commissioning of electrical circuits is very valuable; we should build on the experience and the competence

15 Role of the different teams
Operation and accelerator physics:
- formulating requirements
- helping with software, mainly to analyse the data
Equipment groups:
- formulating requirements
- providing the front-end systems necessary for recording the data (HW)
- pushing their data up from their front-end systems (SW)
Controls group:
- providing and transporting triggers for transient recording (timing)
- for a few systems, providing front-end acquisition
- transmitting the data from the front-end systems to the servers
- storing and managing the data
- providing tools to visualise and partially analyse the data (pattern recognition), and to allow easy access to the data (for others to analyse it)

16 Proposal
Many systems and people need to work together; this is an issue that cannot be covered in 30 minutes. I suggest organising a mini-workshop (~1 day) to discuss:
- What is the status of work in the equipment groups?
- What is the status of work in CO?
- What do others do?
- How to go on?
After such a mini-workshop, we should decide how to coordinate the activities (working group, project, responsibilities, ...): who is the coroner?
Advice and help from other people (from HERA, RHIC, TEVATRON) might be welcome.

17 Conclusion
- LHC does not have a general system for recording transient data; this task is with the equipment groups
- Misconception: the controls group is responsible for all ‘Post Mortem’ issues
- As in other areas, collaboration between groups / teams is required, but this involves many teams: a progressive effort, starting with some main players
- Data analysis is an endless effort ... more sophisticated analysis is an excellent task for PhD students, possibly fellows, ...
- For the moment, it is not the lack of manpower that stops progress, rather the lack of a coordinated effort
- Via SACEC we got PM going for hardware commissioning; the same is required for beam commissioning

18 Acknowledgements
R. Lauckner, A. Rijllart, J. Wenninger, K. M. Mess, F. Rodriguez-Mateos, E. Ciapala. Many others were involved in the discussions.

19 Functional Specification: MEASUREMENT OF THE BEAM POSITION IN THE LHC MAIN RINGS
5.12 TRANSIENT RECORDING AND POST-MORTEM
The BPM system shall be able to recognize two external events, total beam loss and partial beam loss, and take appropriate action, using the BPM memories as transient recorders. The memories corresponding to these two kinds of events should be separate, to avoid any loss of information in case of a total beam loss. The actions to be carried out when these events are received are under definition by the Post-Mortem Working Group, whose documentation should be consulted [post]. Provisionally, it is foreseen, in case of a total beam loss event, to:
- freeze the BPM memory where trajectories are accumulated 124 turns after the trigger and retain the last 1024 values (900 before the trigger, 124 after)
- freeze the closed-orbit buffer to record the last 1000 orbits before the trigger and 24 orbits after the trigger
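The freeze behaviour described above (retain a fixed number of pre- and post-trigger values) can be sketched with a circular buffer. This is an illustration of the buffering scheme only, not the BPM implementation; the class and parameter names are invented:

```python
from collections import deque

class TransientBuffer:
    """Circular buffer frozen a fixed number of samples after a trigger,
    as in the provisional BPM spec: 900 pre- and 124 post-trigger values."""
    def __init__(self, pre=900, post=124):
        self.buf = deque(maxlen=pre + post)   # oldest samples drop out automatically
        self.post = post
        self.remaining = None                 # counts down after the trigger

    def record(self, value):
        if self.remaining == 0:
            return False                      # frozen: ignore further samples
        self.buf.append(value)
        if self.remaining is not None:
            self.remaining -= 1
        return True

    def trigger(self):                        # total-beam-loss event received
        self.remaining = self.post

# Usage: record turn-by-turn data, fire the trigger, keep recording briefly.
buf = TransientBuffer(pre=9, post=3)          # tiny numbers for illustration
for turn in range(20):
    if turn == 10:
        buf.trigger()
    buf.record(turn)
# Buffer now holds the last 9 pre-trigger and 3 post-trigger samples
```

Separate instances of such a buffer for the total-loss and partial-loss events would keep the two memories independent, as the specification requires.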

20 Functional Specification: MEASUREMENT OF THE TRANSVERSE BEAM DISTRIBUTION IN THE LHC RINGS
4.8 POST MORTEM
During normal running, the circulating-beam monitoring devices shall be able to recognize total beam losses and take appropriate action. Provisionally, it is foreseen that:
- a first circular buffer should store the rms beam sizes, beam position and tilt, whenever possible measured every 20 ms, over the last 20 s of beam
- a second circular buffer should store the last measured individual bunch sizes, positions and tilt, recorded over the last ten minutes (i.e. 10 sets of values per bunch)

21 Functional Specification: ON THE MEASUREMENT OF THE BEAM LOSSES IN THE LHC RINGS
9.8 POST-MORTEM ANALYSIS
The signals of all monitors should be buffered for the last 100 - 1000 turns, such that they can be read out and analysed after a beam dump. In addition, the average rates of all monitors should be easily available for time scales of a few seconds and of 10 minutes before a beam dump.
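Keeping average rates available over two time scales, as requested above, amounts to maintaining windowed averages over the buffered readings. The sketch below is illustrative only; the window lengths, 1 Hz sampling and all names are assumptions, not the BLM specification:

```python
from collections import deque

class RateAverager:
    """Windowed average loss rates over two time scales, fed with
    periodic (assumed 1 Hz) monitor readings."""
    def __init__(self, short_s=5, long_s=600):
        self.short = deque(maxlen=short_s)    # last few seconds of readings
        self.long = deque(maxlen=long_s)      # last ten minutes of readings

    def add(self, rate):
        self.short.append(rate)
        self.long.append(rate)

    def averages(self):
        return (sum(self.short) / len(self.short),
                sum(self.long) / len(self.long))

avg = RateAverager(short_s=3, long_s=10)      # small windows for illustration
for r in [1.0, 1.0, 4.0, 4.0]:
    avg.add(r)
short_avg, long_avg = avg.averages()
# short window averages only the last three readings; long averages all four
```

Freezing both windows at the beam-dump trigger then gives exactly the "few seconds" and "10 minutes before a beam dump" views the specification asks for.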

22 Functional Specification: HIGH SENSITIVITY MEASUREMENT OF THE LONGITUDINAL DISTRIBUTION OF THE LHC BEAMS
5.6 LOGGING, POST MORTEM
During normal running, it is felt that a logging periodicity of about one minute for the nominal bunch distributions is adequate. Other low-density distributions can be logged at a lower rate, apart from the abort gap population, which needs to be checked at least every second. The goal of the post-mortem is to save the beam pattern prior to a beam dump. Of relevance are the beam intensities in each bucket; the details of the tail distributions are less important. To fulfil this goal, the standard-sensitivity mode data should be frozen in a circular buffer of depth 1 second. More data could be made available on request. The exact requirements in this domain need to be finalized by the Machine Protection Working Group.

23 Functional Specification: ON THE MEASUREMENTS OF THE BEAM CURRENT, LIFETIME AND DECAY RATE IN THE LHC RINGS
6.7 POST MORTEM RECORDING
For post-mortem analysis, data will be stored in different buffers, to be frozen by external events signalling a partial or total current loss. The exact requirements in this domain are not finalized yet. Provisionally, it can be foreseen that:
- a first circular buffer will store the beam current measured every 20 ms over the last 20 s of beam
- a second circular buffer will store the turn-by-turn data measured by the bunch-to-bunch monitor over 1000 turns. During the initial running-in period, storing the sum of all bunch currents will be sufficient. Later, when getting close to the nominal currents, it will be useful to store the individual bunch data; in order to limit the necessary memory, a proper sampling or storage strategy can then be foreseen
- a third circular buffer will store the last measured individual bunch currents, recorded every second, over the last minute

