
1 Debriefing of controls re-commissioning for injectors after LS1
Marine @ TC, 09 October 2014

2 Outline
– Debriefing of the start-up & early beam operation
– Snapshot of current situation
– Experience with the best effort support model
– Conclusion

3 An unexpectedly smooth start-up
Re-commissioning went much better than we all foresaw
– No show-stopper service
– No errors in the design & deployment of new HW installations (failure-free from the beginning!)
– No reliability, scalability or performance issues
No nights spent in the CCC, no shift work, few calls outside WH
CO performed much better than most EQP GPs
– We had many issues, but they stayed in the background of more serious problems from EQP GPs
Performance recognized officially at the Chamonix workshop, and again at the last FOM meeting
– Klaus: 'God bless the Controls Group'!

4 Why not a bumpy start-up?
Overall good quality of SW and HW developments
Early and in-depth validation
– Thanks to good QA, the test bed and extensive dry-runs
Dry-runs: one key to success
– Essential for early debugging under nominal conditions and for setting milestones in CO and in EQP groups
– Leading role of CO: without the dry-runs, the start-up would have been messy
– Crucial role of all MCCs
– More than 50 dry-runs
– Each atomic test prepared, documented and debriefed
– Systematic issue follow-up with a dedicated JIRA project (320 issues)
Machine schedule was mostly respected
– CO services were available on time, but with partial functionality and still buggy
– Again we ranked better than EQP GPs

5 For each dry-run, complete documentation to keep track of outstanding issues

6 Debriefing of the start-up
The start-up was actually not smooth: we faced an amazing number of issues (configuration, integration, ...), but we were able to demonstrate our strengths
Excellent reactive support to issues & commitment from all teams
– Much appreciated by OP: 'Issues were tackled as soon as posted'
Efficient organization for issue follow-up & coordination of new deployments
– A full-time job for the MCCs + EXM
– ±30 issues per day during the early beam operation period (see the sketch below)
– JIRA issue management was an essential tool (for centralization & follow-up)
– Weekly DRY-RUN, then EXPL, meetings
– Representation in OP meetings to understand OP needs and priorities
Very close communication between CO and OP
– Weekly renovation progress reports during LS1
– Presence of CO experts with the OP crews in the control rooms (CCC, local)
Good technical collaboration with EQP groups
– Expertise, help with advanced debugging
Training on diagnostics (3 sessions) + on-site INCA training (8 sessions)
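The '±30 issues per day' figure referenced above is the kind of number that can be pulled directly from the issue tracker. Below is a minimal sketch (illustration only, not BE-CO's actual tooling) that counts the issues created in a commissioning project on a given day via JIRA's standard REST search endpoint; the server URL, project key and credentials are hypothetical placeholders.

```python
# Minimal sketch (illustration only, not BE-CO's actual tooling): counting the
# issues created in a hypothetical commissioning JIRA project on a given day
# using JIRA's standard REST search endpoint.
import datetime as dt

import requests

JIRA_URL = "https://jira.example.cern.ch"  # hypothetical JIRA server
PROJECT = "STARTUP"                        # hypothetical project key


def issues_created_on(day: dt.date, session: requests.Session) -> int:
    """Return the number of issues created in PROJECT on the given day."""
    next_day = day + dt.timedelta(days=1)
    jql = (f'project = {PROJECT} '
           f'AND created >= "{day:%Y-%m-%d}" AND created < "{next_day:%Y-%m-%d}"')
    resp = session.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # maxResults=0: only the total count
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["total"]


if __name__ == "__main__":
    with requests.Session() as s:
        s.auth = ("user", "password")  # placeholder credentials
        print(issues_created_on(dt.date(2014, 7, 1), s))
```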

7 Commissioning experience with EQP GPs
Globally an efficient collaboration: good spirit, exchange of technical expertise
But difficulties were experienced
– FESA classes from EQP GPs were made available at the last moment
  Because EQP GPs were late, and partially because of CO: CO could have provided the framework and tools earlier, with more automated, user-friendly procedures
  -> Overload on LSA + CCDB for final integration
– Some tricky cases: renovations cancelled at the last moment, leading CO to resurrect (& maintain) old controls

8 Snapshot of current situation
The number & criticality of issues has dropped
– < 10 per day
All services are running smoothly
This is reflected by the fault statistics
– Very little beam downtime recorded in the e-logbook; CO has the lowest beam downtime of all EQP groups
– The few faults are related to HW failures (unavoidable)
  Ex: old CPU failure on a non-renovated ABT FE

9 BE-CO Support Model
2014 = the first year without a Piquet service for the PS accelerators
New model (best effort specialists) generally understood
– Calling procedure on front-end failure usually applied correctly (see the sketch below):
  the CCC first calls the OPERATIONAL SUPPORT (usually the EQP GP),
  then calls the CO specialist if a CO problem is identified;
  escalation to the SL and GL in case of unavailability of the whole team
Good performance so far
– No availability issue experienced
– But some teams are very short of resources for a best effort model
– Special care to be given to holiday periods
The best effort specialists model is generally 'accepted'
– No more complaints about the missing Piquet, although it had been expressed by OP as a major concern before the beam start-up
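The calling procedure above is essentially an ordered escalation chain. A minimal sketch (illustration only, not an actual BE-CO tool) encoding that order and walking it until somebody can be reached; the tier labels and the reach() callback are hypothetical.

```python
# Minimal sketch (illustration only, not an actual BE-CO tool): the call
# escalation order described on this slide, walked tier by tier until somebody
# can be reached. Tier labels and the reach() callback are hypothetical.
from typing import Callable, Optional

ESCALATION_CHAIN = [
    "operational support (EQP GP)",  # the CCC calls this first
    "CO best-effort specialist",     # called if a CO problem is identified
    "section leader (SL)",           # escalation if the whole team is unavailable
    "group leader (GL)",
]


def escalate(reach: Callable[[str], bool]) -> Optional[str]:
    """Walk the chain in order and return the first contact that answers."""
    for contact in ESCALATION_CHAIN:
        if reach(contact):
            return contact
    return None  # nobody reachable: the fault stays open for follow-up


if __name__ == "__main__":
    # Toy example: only the section leader picks up the phone.
    print(escalate(lambda contact: contact.startswith("section")))
```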

10 BE-CO Support: required improvements
Diagnostics
– Limited work was done during LS1, typically with low priority
– Diagnostics are still obscure (unknown state of FEs in DIAMON, unreadable error messages, ...)
– Diagnostics are not sufficient for OP and EQP GPs to make a first diagnosis and pinpoint the right specialist
– I will analyse the user requirements and present a proposal
Preventive maintenance on HW
– Campaign underway by the HW installation team
Self-service access by EQP GPs to our stock of spare modules
– Policy to be set case by case
Operational issue management
– Up to now, the assignment of all JIRA issues has been done by me, for efficiency and supervision of the workflow
– OP will become responsible for assigning issues from the e-logbook; this is in line with the policy that OP makes the first diagnosis and identifies the expert
Recording of all interventions to evaluate our support model (see the sketch below)
– The current system is sub-optimal
– All interventions must be recorded & qualified
– Project underway, in collaboration with the e-logbook, for issue and fault registration
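For the intervention-recording point above, a minimal sketch (hypothetical; not the project mentioned on the slide) of the fields such a record would need so that every intervention can be recorded & qualified, and out-of-working-hours ones flagged for declaration.

```python
# Minimal sketch (hypothetical; not the project mentioned on the slide): the
# kind of fields an intervention record would need so that every intervention
# is recorded & qualified, and out-of-working-hours ones are flagged for EDH.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Intervention:
    system: str             # e.g. "front-end", "timing", "CCDB"
    specialist: str         # who intervened
    start: datetime
    end: datetime
    on_site: bool           # on-site or remote intervention
    outside_wh: bool        # outside working hours -> to be declared in EDH
    qualification: str      # e.g. "HW failure", "configuration", "SW bug"
    elogbook_ref: str = ""  # reference to the e-logbook fault entry

    def duration_hours(self) -> float:
        """Intervention duration in hours, for support-model evaluation."""
        return (self.end - self.start).total_seconds() / 3600.0
```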

11 BE-CO Support Model: outlook
We have been able to make our users happy
But we know that the CO support model is under close surveillance and that we may have been 'lucky' to have had few HW failures outside WH
The previous Piquet work has to be absorbed by the best effort teams
– This means increased availability of some best effort teams
– It often requires on-site interventions for diagnostics and repair
– The person has to intervene on a system he/she is not 'responsible' for (HW module, cabling, ...)
– The impact should be mitigated by all the upgrades done during LS1
All outside-WH interventions, recorded in the e-logbook, should be declared in EDH for compensation
At the end of the run, as planned, an evaluation of our support model will be performed

12 Conclusion
We can be proud of our start-up
– Excellent planning, efficient dry-runs, commitment & reactiveness of support from all
New support model put in place & accepted
– But users are still a little sceptical
Past the rush, it is now the right moment to perform a debriefing of the LS1 work organization
– To evaluate what could have been done better & extract some lessons for LS2
Please maintain this excellent quality of support and commitment over the coming months to prepare the LHC re-commissioning.

13 Spare slides

