The Status of the LHC Controls System Shortly Before Injection of Beam


1 The Status of the LHC Controls System Shortly Before Injection of Beam
Pierre Charrue, on behalf of the CERN Accelerator and Beams Controls Group

2 Motivation of this talk
Present the controls challenges of the LHC machine
Give an overview of the LHC controls infrastructure
Propose links to further ICALEPCS'07 presentations (See MOAB01.)

3 Relativity
“When you sit with a nice girl for two hours, you think it's only a minute. But when you sit on a hot stove for a minute, you think it's two hours. That's relativity.” (Albert Einstein)
Hence the title “LHC controls ‘shortly’ before beam injection”: for us this means a few months before LHC beam commissioning, and nine months after the start of LHC hardware commissioning.

4 Outline
LHC controls infrastructure
The major LHC challenges
Technical solutions to the LHC challenges
Results
Summary

5 The LHC Controls Infrastructure
Based on the classical 3-tier model, the LHC controls infrastructure is a new software and hardware architecture built on the experience of controlling the CERN injector chain. Level of effort: 300 man-years, 21 million Swiss francs (MSFr).

6 The 3-tier LHC Controls Infrastructure
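The slide itself is an architecture diagram. To make the split concrete, here is a minimal sketch in Java of how the tiers layer on each other; all class and method names are hypothetical, not the actual CERN controls API:

```java
// Hypothetical sketch of the 3-tier split (not the real CERN API).

// Resource tier: a front-end computer exposing a device property.
class MagnetFrontEnd {
    private double current;                  // last value acquired from hardware
    void acquire() { current = 11850.0; }    // stand-in for a fieldbus read
    double getCurrent() { return current; }  // served to the middle tier
}

// Middle tier: an application server mediating between consoles and devices.
class EquipmentServer {
    private final MagnetFrontEnd frontEnd = new MagnetFrontEnd();
    double readDipoleCurrent() {
        frontEnd.acquire();
        return frontEnd.getCurrent();
    }
}

// Presentation tier: a console client that only talks to the middle tier.
public class ConsoleClient {
    public static void main(String[] args) {
        EquipmentServer server = new EquipmentServer();
        System.out.printf("Main dipole current: %.1f A%n", server.readDipoleCurrent());
    }
}
```

The point of the layering is that consoles never touch hardware directly: every access passes through the middle tier, where it can be cached, checked, and logged.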

7 The Resource Tier
VME crates dealing with high-performance acquisition and real-time processing; e.g. the LHC beam instrumentation and the LHC beam interlock systems use VME front-ends.
PC-based gateways interfacing systems where a large quantity of identical equipment is controlled through fieldbuses; e.g. the LHC power converters and the LHC Quench Protection System.
Programmable Logic Controllers (PLCs) driving various kinds of industrial actuators and sensors; e.g. the LHC cryogenics and vacuum systems.
Supported fieldbuses for local connections; e.g. MIL-1553, WorldFIP, Profibus.

8 The Middle Tier
Application servers hosting the software required to operate the LHC beams and running the Supervisory Control and Data Acquisition (SCADA) systems: HP ProLiant dual-CPU machines with 3 GB RAM running Linux, with dual hot-pluggable RAID-1 system disks and redundant dual hot-pluggable power supplies.
Data servers containing the LHC layout and the controls configuration, as well as all the machine settings needed to operate the machine or to diagnose machine behaviour.
File servers containing the operational applications.
Central timing, which provides the cycling information for the whole complex of machines involved in producing the LHC beam, as well as the timestamp reference.

9 The Presentation Tier
At the control room level, consoles running Graphical User Interfaces (GUIs) will allow machine operators to control and optimize the LHC beams and to supervise the state of key industrial systems. Dedicated fixed displays will also provide real-time summaries of key machine parameters.
Operational console specifications:
PCs with 2 GB RAM running either Linux or Windows
One PC, one keyboard, one mouse, and up to 3 screens
Consoles capable of running any type of GUI software for the LHC (and the PS and SPS), such as Java, web, SCADA or X-Motif.

10 Inside the CCC
General-purpose fixed displays and operator consoles in the CERN Control Centre (CCC). (See RPPB01.)

11 A typical Operator Console
Local fixed displays and interactive consoles.

12 Communications
General Purpose Network (GPN): for office work, mail, web and development; no formal connection restrictions.
Technical Network (TN): for operational equipment; formal connection and access restrictions; limited services available (e.g. no mail server, no external web browsing); authorization based on MAC addresses; network monitored by the CERN IT Department.
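To illustrate what “authorization based on MAC addresses” means in practice, here is a hedged sketch; the real CNIC enforcement is done by the network infrastructure itself, and the whitelist values below are invented:

```java
import java.util.Set;

// Hypothetical illustration of MAC-address-based admission to the TN.
// In reality the check is enforced by the network equipment, not by Java code.
public class TnAdmissionSketch {
    // Formally authorized MAC addresses (invented values).
    private static final Set<String> AUTHORIZED = Set.of(
            "00:1A:2B:3C:4D:5E",   // e.g. a registered front-end computer
            "00:1A:2B:3C:4D:5F");  // e.g. a registered operator console

    static boolean mayConnect(String mac) {
        return AUTHORIZED.contains(mac.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(mayConnect("00:1a:2b:3c:4d:5e")); // true: registered device
        System.out.println(mayConnect("de:ad:be:ef:00:01")); // false: stays off the TN
    }
}
```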

13 Outline
LHC controls infrastructure
The major LHC challenges
Technical solutions to the LHC challenges
Results
Summary

14 The LHC Challenges (1/3)
The LHC is the largest accelerator in the world, both in number of components and in diversity of systems. The complexity of operation will be extreme: very critical technical subsystems, a large parameter space, the need for online magnetic and beam measurements, and real-time feedback loops.
“Some 500 objects are capable of moving into the aperture of either the LHC ring or the transfer lines, ranging from passive valves up to very complex experimental detectors.” (P. Collier, AB/CO, LHC Workshop, January 2005)
“The complexity of the accelerator is unprecedented, and repair of damaged equipment would take long; for example, the exchange of a superconducting magnet takes about 30 days.” (R. Schmidt)

15 Regular arc: Magnets
1232 main dipoles + 3700 multipole corrector magnets
392 main quadrupoles + 2500 corrector magnets

16 Regular arc: Cryogenics
Connection via service module and jumper
Static bath of superfluid helium at 1.9 K in cooling loops of 110 m length
Supply and recovery of helium via a 26 km long cryogenic distribution line

17 Regular arc: Vacuum
Beam vacuum for Beam 1 and Beam 2
Insulation vacuum for the magnet cryostats
Insulation vacuum for the cryogenic distribution line

18 Regular arc: Electronics
Along the arc, several thousand radiation-tolerant electronic crates for quench protection, power converters for orbit correctors, and instrumentation (beam, vacuum and cryogenics).

19 The LHC Challenges (2/3)
The energy stored in the beams and the energy stored in the magnets exceed those of present machines by more than two orders of magnitude. The LHC machine must be protected at all costs. In the case of an operational incident, operations must be able to analyze what happened and trace the cause; moreover, no operation can be resumed until the machine is back in a good state.

20 Energy Stored in the Beam
Energy stored in one beam at 7 TeV: 362 MJ. The energy of one 5 kg shot at 800 km/h corresponds to the energy stored in one bunch at 7 TeV, and there are 2808 bunches. That is a factor of about 200 compared to HERA, the TEVATRON and the SPS. (Rüdiger Schmidt)
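The shot/bunch comparison is easy to verify from the figures on the slide (a back-of-envelope check):

```latex
E_{\text{shot}} = \tfrac{1}{2} m v^2
  = \tfrac{1}{2}\,(5\ \mathrm{kg})\left(\tfrac{800}{3.6}\ \mathrm{m/s}\right)^{2}
  \approx 1.2\times 10^{5}\ \mathrm{J},
\qquad
\frac{E_{\text{beam}}}{N_{\text{bunches}}}
  = \frac{362\times 10^{6}\ \mathrm{J}}{2808}
  \approx 1.3\times 10^{5}\ \mathrm{J}.
```

The two agree to within a few percent, as the slide claims.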

21 Energy stored in the magnets
The energy of an Airbus A380 at 700 km/h corresponds to the energy stored in the LHC magnet system.*
Kinetic-energy equivalents of one beam: one bunch out of 2808 carries the equivalent of a 5 kg shot travelling at 800 km/h;* the full beam matches a small aircraft carrier of 10^4 tons going 30 km/h, or 450 automobiles of 2 tons going 100 km/h.**
Thermal-energy equivalents:** enough to melt 500 kg of copper, or to raise 1 cubic metre of water by 85 °C (“one ton of tea”).
Chemical-energy equivalent:** 80 kg of TNT.
*Rüdiger Schmidt, MAC, 9 December 2005. **Mike Harrison, HCP2006.
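The A380 comparison can be checked the same way. Assuming a takeoff mass of about 560 t and roughly 10 GJ stored in the magnet system (neither figure is given on the slide):

```latex
E_{\text{A380}} = \tfrac{1}{2}\,(5.6\times 10^{5}\ \mathrm{kg})
  \left(\tfrac{700}{3.6}\ \mathrm{m/s}\right)^{2}
  \approx 1.1\times 10^{10}\ \mathrm{J} \approx 11\ \mathrm{GJ}.
```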

22 The LHC Challenges (3/3)
The LHC is the first superconducting accelerator built at CERN, with 4 large-scale cryoplants providing 1.8 K refrigeration capability.

23 Outline
LHC controls infrastructure
The major LHC challenges
Technical solutions to the LHC challenges
Results
Summary

24 Technical solutions (1/3)
Network security (CNIC) and equipment access control (RBAC, Machine Critical Settings) (See WPPB38.)
Specific tools for the monitoring of the controls infrastructure:
LASER: general alarm system
TIM: monitoring of the technical infrastructure
DIAMON (in development): monitoring and diagnostics of the controls infrastructure
Post-mortem data storage and analysis
Logging system
(See also RPPB03, RPPA35, RPPB13.)

25 Security: Computing and Network Infrastructure for Controls (CNIC)
CERN-wide working group set up in 2004 for the definition of: (See WPPB38.)
CERN-wide security policy
CERN-wide networking aspects
Operating system configuration (Windows and Linux)
Services and support
Main outcomes:
Network Security Policy document
Formal isolation between the General Purpose Network (GPN) and the Technical Network (TN)
Connection to the TN requires formal authorization, with MAC-address authentication
Windows and Linux OS and patch deployment centrally managed

26 Security: RBAC
Implement role-based access to equipment in the communication infrastructure. Depending on WHICH action is requested, WHO is making the call, from WHERE the call is issued and WHEN it is executed, access is granted or denied. This allows filtering, control and traceability of settings modifications to the equipment. (See TPPA04, TPPA12, WPPB08.)
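A minimal sketch of such a which/who/where/when decision is shown below; all names and the example policy are hypothetical, not the RBAC implementation described in TPPA04 and TPPA12:

```java
import java.time.LocalTime;

// Hypothetical sketch of a role-based access decision on an equipment setting.
public class RbacSketch {

    record AccessRequest(String action,     // WHICH action, e.g. "SET current"
                         String role,       // WHO, the role carried by the caller
                         String location,   // WHERE the call is issued from
                         LocalTime when) {} // WHEN it is executed

    // Invented policy: only operators in the CCC may change settings,
    // and only after 06:00. Real rules would be configured per device class.
    static boolean isGranted(AccessRequest r) {
        return r.role().equals("LHC-Operator")
            && r.location().equals("CCC")
            && !r.when().isBefore(LocalTime.of(6, 0));
    }

    public static void main(String[] args) {
        var fromCcc = new AccessRequest("SET current", "LHC-Operator", "CCC", LocalTime.NOON);
        var fromOffice = new AccessRequest("SET current", "LHC-Operator", "office", LocalTime.NOON);
        System.out.println(isGranted(fromCcc));    // true: granted and logged for traceability
        System.out.println(isGranted(fromOffice)); // false: denied, wrong location
    }
}
```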

27 Technical solutions (2/3)
Integrated suite of control room applications: LSA (LHC Software Application)
FESA framework for front-end computers
UNICOS framework for industrial controls
(See WOPA04.)

28 The LSA framework
(Architecture diagram.) (See WOPA03, RPPB05.)

29 UNICOS Framework
Architecture, from bottom to top: (See WOAA03.)
FIELD LAYER: instrumentation in radiation areas (TT, PT, LT, DI, EH); signal conditioning (radiation areas); SPLIT intelligent positioners; WorldFIP and Profibus PA/DP.
CONTROL LAYER: FECs (industrial PCs running Linux SLC4, with a FESA application interfacing the WorldFIP data); PLCs (S7-400, interfacing Profibus); FECs and PLCs exchange information.
SUPERVISION LAYER: dedicated technical Ethernet; SCADA data servers (HP ProLiant machines, Linux SLC4); Windows clients in the local and central control rooms.
Example of closing a control loop, as sketched below: TT -> WorldFIP -> FEC -> PLC -> Profibus -> valve. (Enrique Blanco, AB/CO-IS)
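The closing-the-loop example can be written out as sequential steps. This is illustrative Java with invented names and gains; the real chain is fieldbus traffic between a sensor, a FEC, an S7-400 PLC and a valve positioner:

```java
// Hypothetical sketch of the UNICOS loop TT -> WorldFIP -> FEC -> PLC -> Profibus -> valve.
public class CryoLoopSketch {

    static double readTemperature()    { return 1.95; }           // TT sensor, via WorldFIP
    static double fecForward(double t) { return t; }              // FEC publishes the value
    static double plcControl(double t) {                          // PLC control law (invented)
        double setpoint = 1.9;                                    // target bath temperature, K
        double gain = 0.5;                                        // illustrative proportional gain
        double opening = gain * (t - setpoint);                   // open more if too warm
        return Math.max(0.0, Math.min(1.0, opening));             // clamp to [0, 1]
    }
    static void driveValve(double opening) {                      // command sent over Profibus
        System.out.printf("valve opening: %.3f%n", opening);
    }

    public static void main(String[] args) {
        double t = readTemperature();             // field layer
        double cmd = plcControl(fecForward(t));   // control layer
        driveValve(cmd);                          // back to the field layer
    }
}
```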

30 Technical solutions (3/3)
Specific hardware developments (high SIL levels) for machine protection (PIC, BIC, …) (See TPPB23.)
LHC Software Interlock System (SIS)
Fully integrated asset, layout and configuration database system
Extension to the injector timing system
(See also WPPB03, RPPA03, WPPB02, RPPB31, FOAA03.)

31 Outline
LHC controls infrastructure
The major LHC challenges
Technical solutions to the LHC challenges
Results
Summary

32 LHC sector 7-8: Successful cooldown
The cryogenics controls system was a great aid to operations in achieving this first cooldown.

33 Primary cause of the 2004 TI8 accident identified
Reconstruction of the beam incident from logged data, thanks to a well-working logging system. SPS at 450 GeV (some 10^13 protons). Conclusion: the MSE current appears to be ~2.5% low at extraction.

34 Tracking between the three main circuits of sector 7-8
Tracking to within 2 ppm. First, the power converters have to regulate without any overshoot: in case of an overshoot, hysteresis effects would kill the beam. Second, many power converters have to track each other to within a few ppm, for example the three main-circuit PCs. The post-mortem plots (here the post-mortem system is used as a transient recorder) nicely show the expected behaviour. (Courtesy F. Bordry; post-mortem view of PC tracking.)
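To put 2 ppm in perspective, assuming the nominal main-dipole current of roughly 11.85 kA (a figure not given on the slide):

```latex
\Delta I = 2\times 10^{-6}\times 11\,850\ \mathrm{A} \approx 24\ \mathrm{mA}.
```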

35 LHC Hardware Commissioning

36 Outlook
LHC transfer lines commissioning in the coming weeks
LHC hardware commissioning in the coming months, with massive parallelism early in 2008
First LHC beam commissioning for summer 2008

37 Outline
LHC controls infrastructure
The major LHC challenges
Controls answers to the challenges
Results
Summary

38 Summary
We have addressed the LHC challenges, and appropriate solutions have been deployed. The LHC controls infrastructure has been tested almost completely on previous milestones (LEIR, the TI2/TI8 transfer lines, CNGS, the SPS, LHC hardware commissioning), so we are confident that we can meet the LHC challenges. Part of the enormous human-resource effort invested in the LHC comes from international collaborations; their contribution is highly appreciated, and we look forward to more fruitful collaborations. We have scheduled our efforts for the 450 GeV engineering run in November 2007, and we are now ready and eager to see beam in the LHC.

