Presentation on theme: "LS1 Review P.Charrue. Audio/Video infrastructure LS1 saw the replacement of BI and RF analog to digital video transport Was organised in close collaboration."— Presentation transcript:

1 LS1 Review P. Charrue

2 Audio/Video infrastructure
- LS1 saw the replacement of the BI and RF analog video transport by digital video transport
- The work was organised in close collaboration with the groups involved and with OP
- Both the old and the new distribution ran in parallel until OP and the equipment groups agreed to stop the old one
- Today the deployed solution provides a correct service to OP, BI and RF, with a very short delay between the analog and digital signals (around 10 ms)
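As a rough illustration of the quoted figure, the analog-to-digital transport delay can be estimated by pairing source timestamps taken at the analog tap with the arrival times of the same frames at the digital receiver. This is a minimal sketch, not the actual measurement setup used during LS1; the function names and the 10 ms budget check are illustrative assumptions.

```python
from statistics import mean

# Hypothetical sketch: per-frame transport delay from paired timestamps.
# source_ts / arrival_ts are in seconds, one entry per frame.
def transport_delay_ms(source_ts, arrival_ts):
    """Return per-frame analog-to-digital delays in milliseconds."""
    return [(a - s) * 1000.0 for s, a in zip(source_ts, arrival_ts)]

def within_budget(delays_ms, budget_ms=10.0):
    """True if the mean transport delay stays within the budget."""
    return mean(delays_ms) <= budget_ms
```

For example, frames captured at 0.000 s and 0.040 s that arrive at 0.009 s and 0.0495 s give delays of about 9.0 ms and 9.5 ms, comfortably inside a 10 ms budget.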

3 CCC/CCR: What does a long shutdown mean for your services in general?
- As the CCR provides services to all equipment groups, including access and safety, CV, Cryo and PVSS, there is NO such thing as a shutdown for the CCR team
- Of the 340 servers in the CCR, about 300 were kept operational during LS1
- On the contrary, some services become more critical, such as the LHC CV, which has to 'handle' human presence in the tunnel
- Our shutdown work therefore demands a lot of coordination and the provision of redundant services

4 CCC/CCR: How did you prepare for LS1?
There were four main activities in the CCC/CCR:
- Complete re-engineering of the EL and CV infrastructure of the CCR
- Network backbone upgrade from 1 Gb/s to 10 Gb/s
- BE/CO 'classic' upgrades and maintenance of the installed servers
- Replacement of the CCC consoles
Preparation:
- Mainly via meetings with our 'clients' to explain what we had to do and to find the best possible window to deploy our changes
- And via meetings with EN/EL, EN/CV and IT/CS to organise the big changes in the CCR with as little disturbance as possible for our users
- Plus presentations in ad-hoc meetings (LMC, IEFC, TIOC, ...)

5 CCC/CCR: What worked well in LS1? What didn't work well in LS1?
- All the above-mentioned activities went extremely smoothly, with almost no disturbance to our users
- The CCC console change, however, was not smooth: the standard CERN-store PCs did not work well due to motherboard issues (PCs blocking, frozen video output, ...)
- A crash programme to replace the 110 PCs with new ones therefore had to take place later in 2015, with the associated disturbance to OP
- What could be improved is the communication FROM our users regarding their activities:
  - We had to chase our users to understand their needs in terms of availability of the services from the CCR
  - The operational programme of the machines (CTF, LINAC, ISOLDE, ...) was not always clear and/or communicated early enough for us to plan our work

6 CCC/CCR: Planning and organisation
- As said before, planning was done mainly through ad-hoc meetings with clients or service providers (EN/EL, EN/CV and IT/CS), or presentations at official meetings (LMC, IEFC, TIOC)
- No specific tools or processes were used
- We steered the CCR EL and CV work so as to disturb our users as little as possible

7 CCC/CCR: Technical impact
- The work in the CCR significantly improved the local infrastructure in terms of network bandwidth, electrical power and cooling power
- It also increased the redundancy of the CV and EL services
- These changes were largely transparent to our users, who may notice a communication speed increase; overall operation will be much more stable (no more impact from EL or CV issues)
- In close collaboration with EL, CV and IT, several acceptance tests were organised before the system was put into operation
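The kind of acceptance test mentioned above can be pictured as a simple pass/fail sweep over the redundant endpoints of each critical service: the new infrastructure is accepted only when every service answers on all of its endpoints. This is a hypothetical sketch, assuming a caller-supplied probe function; the service names in the usage example are invented, not the real CCR inventory.

```python
# Hypothetical sketch of an acceptance check: every critical service must
# have all of its redundant endpoints responding before the renovated
# EL/CV/network infrastructure is accepted for operation.
def acceptance_report(services, probe):
    """Map each service name to True if all its endpoints pass `probe`."""
    return {name: all(probe(ep) for ep in endpoints)
            for name, endpoints in services.items()}

def accepted(report):
    """The infrastructure is accepted only if every service passes."""
    return all(report.values())

# Usage with invented services and a fake probe standing in for a real
# reachability test (ping, service health endpoint, ...):
services = {"timing": ["srv-a", "srv-b"], "oasis": ["srv-c"]}
report = acceptance_report(services, lambda endpoint: endpoint != "srv-b")
```

Here `report` flags "timing" as failing because one of its two redundant endpoints does not respond, so the sweep as a whole is not accepted.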

8 Audio/Video: Outlook for LS2
- In close collaboration with BE/ICS (formerly GS/ASE), and with input from OP, a replacement of the Public Address system (and possibly the Intercom) will be deployed during LS2
- The complete video infrastructure will be renovated with modern digital technology

9 CCC/CCR: Outlook for LS2
- LS2 will not differ from LS1: our services from the CCR will continue to be operational
- We would appreciate receiving as soon as possible the list of services/machines that should be kept operational, e.g.:
  - Will the LHC stay cold or warm? Will the vacuum be maintained?
  - What are the access and safety needs?
  - Which machines/experiments will have beam or be commissioned during LS2, and at what exact dates?
- In terms of high-level plans for LS2, we anticipate a review of the BackEnd, as the market is evolving fast in this domain; the server infrastructure in 2020 might differ from today's

10 (New) IN section: outlook for LS2
- The outcome of the three CO3 projects (Platform, Fieldbus, I/O) will lead to pilot installations, and the IN section is closely involved in anticipating the needs:
  - Post-ACCOR actions planned by CO3 and followed up by the installation team
  - Follow-up of the new HT initiatives (Pulse repeater, RS485 GMT, new WFIP Master, WR for timing, OASIS, Btrain), and readiness to plan these new installations
- In parallel, the KONTRON platform is secured until the end of 2018 (before LS2), but its future will have to be addressed in the coming years, with the planning and installation inherent to a possible change
- The same applies to the distributed consoles outside the CCC, around the accelerators: an initiative will be launched to study the best solution covering the needs of our users (e.g. Windows access for TE/ABT)
- Not to forget the next phases of the GIS Rack Portal

