December 2005 Scaling Up PVSS Phase II Test Results Paul Burkimsher IT-CO.


1 December 2005 Scaling Up PVSS Phase II Test Results Paul Burkimsher IT-CO

2 Aim of the Scaling Up Project
Investigate the functionality and performance of large PVSS systems.
In Phase 1 we reassured ourselves that PVSS scales to support large systems.
Provided detail rather than bland reassurances.

3 Phase 2: WYSIWYAF
Began with a questionnaire to you to establish your concerns.
An eclectic list of “hot topics of the moment”:
–Oracle Archiving
–Alerts
–Regular reconfiguration of channels (alerts and setpoints)
–Backup and restore
–Configuring all channels at startup

4 Your requests (cont.)
–OPC performance
–Local DB cache
–Central Panel Repository
–Windows/Linux lurking limits
–System startup time (DPT distribution)
–Task Allocation

5 Menu
From these requests, we initially picked out four for investigation:
–Task Allocation
–Backup of a running system
–Alerts
–Panel Repository

6 Task Allocation
Recall that PVSS is manager based and any manager can be scattered to another machine (not just UIs).
[Architecture diagram: Event Manager (EV), Database Manager (DB), Control Manager (CTRL), API Manager, Drivers (D) and User Interface managers (Runtime and Editor), all communicating via the Event Manager.]

7 Task Allocation
More than 20 different tests were conducted to investigate the effect of moving managers around.
Results have been available on the web for some time (URLs at the end).
The results were surprising and went against our (& ETM’s!) assumptions of what would be “better”…

8 What we measured…
A task allocation was deemed “better” if it supported a higher number of datapoint changes per second (“throughput”) than a system running entirely on a single processor.
We observed the number of changes per second that the system could support before one of the following became overloaded:
–CPU usage
–Memory usage
–Network traffic
–Disk traffic

9 What we saw…
As throughput increases on a typical PVSS system, the machine first becomes CPU bound.
The Event Manager (EM) is the task most in need of CPU.
We expected that scattering the EM away from the Data Manager (DM) would cause slow-down because of the high traffic between these tasks. WRONG!

10 Scattering the EM
Despite the overhead of sending the EM ↔ DM traffic over the external network, scattering the EM significantly increased throughput (+75%).

11 AES
The Alert-Event Screen (AES) is CPU-hungry. It runs in a UI task, which can be scattered.
Beware: each additional AES not only increases the load on its own machine, but also increases the load on the EM to which it is connected.

12 Recommendation
Execute as few AESs as possible outside the main control room.
When you are not actually looking at the AES, leave it in “stopped” mode. (The screen is then not updated.)

13 Scattering other managers
Scattering other managers can improve throughput, but not as spectacularly as scattering the EM.
Moving the DM is useful, but more delicate (e.g. it may hold many Value Archive (VA) connections).

14 Absolute Performance
The average number of “changes per second” that can be supported depends on the nature of the traffic.
A steady data flow is easier to cope with; irregular bursts of rapid traffic tend to overflow the queues between the managers. (Queue lengths are configurable.)
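The burst-overflow effect can be sketched with a toy bounded queue. This is only an illustration: the capacity and drain rate below are invented numbers, not PVSS's actual queue parameters.

```python
from collections import deque

def simulate(arrivals, capacity=100, service_rate=10):
    """Feed per-tick arrival counts into a bounded queue that is
    drained at a fixed rate; return how many messages overflow."""
    queue, dropped = deque(), 0
    for n in arrivals:
        for _ in range(n):
            if len(queue) < capacity:
                queue.append(1)
            else:
                dropped += 1          # queue full: message lost
        for _ in range(min(service_rate, len(queue))):
            queue.popleft()           # consumer drains at a fixed rate
    return dropped

# Same average load (1000 messages over 100 ticks), different shapes:
steady = [10] * 100                   # smooth flow
bursty = [200] * 5 + [0] * 95         # short avalanche, then silence

print(simulate(steady), simulate(bursty))  # → 0 860
```

The steady flow never exceeds the queue capacity, while the bursts at the same average rate lose most of their messages, which is the behaviour the slide describes.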

15 Load Management
PVSS implements several load-management schemes, e.g.:
–The alert screen pauses its updates during a brief avalanche.
–The alert screen switches into Stopped mode if the sustained rate of arriving alerts is excessive.

16 Load Management - II
Load Shedding: the EM will cut the umbilical to rogue managers rather than be brought down itself.
I recommend that shift operators be taught to recognise the symptoms when they occur.
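The pause/stop behaviour of the alert screen can be modelled as a small state machine. A minimal sketch: the thresholds here are assumptions for illustration, not ETM's real configuration values.

```python
class AlertScreenModel:
    """Toy model of the AES load-management behaviour: pause during a
    brief avalanche, switch to Stopped mode under sustained overload."""
    PAUSE_RATE = 100   # alerts/s: pause updates during a burst (assumed)
    STOP_RATE = 500    # alerts/s sustained: enter Stopped mode (assumed)

    def __init__(self):
        self.mode = "live"

    def on_rate(self, alerts_per_second, sustained=False):
        if sustained and alerts_per_second > self.STOP_RATE:
            self.mode = "stopped"     # operator must restart updates
        elif alerts_per_second > self.PAUSE_RATE:
            self.mode = "paused"      # display frozen until burst passes
        elif self.mode == "paused":
            self.mode = "live"        # resume after the avalanche

aes = AlertScreenModel()
aes.on_rate(300)                      # brief avalanche
print(aes.mode)                       # → paused
aes.on_rate(50)                       # rate back to normal
print(aes.mode)                       # → live
aes.on_rate(1000, sustained=True)     # sustained overload
print(aes.mode)                       # → stopped
```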

17 Multiple CPUs
An alternative to scattering: buy a dual processor!
2 CPUs are generally enough to satisfy even the hungry Event Manager.
Our dual-CPU machines became disk bound when we pushed them. A tribute to the well-balanced design of modern PCs!

18 RAM
Check how much memory you are using, and buy enough of it.
If you are worried about performance, paging is wasted effort!

19 Task summary
Give plenty of CPU capacity to the EM by:
–Buying a fast machine
–Scattering the EM
–Buying a dual-CPU machine

20 Menu
–Task Allocation
–Backup of a running system
–Alerts
–Panel Repository

21 Backup
In the development systems nobody did backups.
PVSS backup is somewhat intricate, so there is a need for a set of backup recipes.

22 18-page Report
What needs backing up
What this means in PVSS
How to back it up
How to restore (rather important!)
(Handout)

23 Four Parts
1) Executive Summary
2) Recipes
3) Detailed Background Description
4) Frequently Asked Questions about Backup
(I’m not going to go through them, just let you know that they exist.)

24 Menu
–Task Allocation
–Backup of a running system
–Alerts
–Panel Repository

25 Alerts
PVSS 3.5 (due in 200x) will contain new functionality for summary alerts and for alert provocation during ramping.
I did not do in-depth performance measurements on the existing system, beyond those I described to you in Phase 1 of S.U.P.

26 At the request of one experiment though, we did investigate “What is the load of an alert definition on a PVSS system?” Results on the web (Test 38). 

27 Loads of Alert Definitions
We showed that it is safe to declare any number of alerts, and even to activate them, provided that the data values stay in range.
It is the provocation of warnings and alerts that incurs a significant CPU load.

28 Memory load
Test 39 looked at the memory usage of alerts. Each DPE alert requires about 2.5 KB.
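A back-of-envelope sizing based on that 2.5 KB-per-alert figure (the 100,000-alert example is illustrative, not a measured configuration):

```python
# Figure from Test 39: roughly 2.5 KB of memory per DPE alert definition.
KB_PER_DPE_ALERT = 2.5

def alert_memory_mb(n_alerts):
    """Estimated memory footprint in MB for n DPE alert definitions."""
    return n_alerts * KB_PER_DPE_ALERT / 1024

# e.g. 100,000 alert definitions would need roughly a quarter gigabyte:
print(round(alert_memory_mb(100_000)))  # → 244
```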

29 Menu
–Task Allocation
–Backup of a running system
–Alerts
–Panel Repository

30 Panel Repository
Owing to staffing changes in the section, it was not possible to address this topic.

31 On the subject of panels…
During the tests I would have found it helpful to have a ready display of the interconnection status of the distributed systems.
I recommend that something showing this appear on the top-level display panel, even just a grid of red/green pixels showing connection status. Lost connections should raise an alert.
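Such a grid could be as simple as a periodic reachability probe per peer system. A hedged sketch: the host names and port below are placeholders, and a real panel would of course use PVSS's own distribution status rather than raw TCP probes.

```python
import socket

# Hypothetical peer list; in practice these would be the endpoints of
# the distributed PVSS systems (hosts and port are placeholders).
PEERS = [("sys1.example.org", 4998), ("sys2.example.org", 4998)]

def peer_up(host, port, timeout=1.0):
    """True if a TCP connection to the peer succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def status_grid(peers=PEERS):
    """One filled (up) or hollow (down) cell per peer: the 'grid of
    red/green pixels' suggested above, rendered as text."""
    return "".join("■" if peer_up(h, p) else "□" for h, p in peers)
```

Calling `status_grid()` on a timer and raising an alert when a cell goes hollow would give the recommended top-level overview.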

32 Other questions During the tests, I was approached by different experiments with other issues! We agreed to investigate the following…

33 PVSS Disturbance
With Alice we looked together at the effect of heavy external (unrelated) network traffic on PVSS. Results are written up as Tests 28 & 29.
Use 100 Mbit with switches, not hubs.
The conclusion was that external traffic is not a problem.

34 Traffic Pattern
For Atlas we compared the CPU load demanded by:
–Changing 1 item N times vs
–Changing N items once each
The load was the same.
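That comparison can be imitated with a toy benchmark, using a Python dict as a stand-in for the datapoint store. This only illustrates the shape of the measurement, not PVSS internals, and the timings it prints will vary by machine.

```python
import timeit

N = 100_000
store = {}

def one_item_n_times():
    for i in range(N):
        store["dp0"] = i            # N changes to a single datapoint

def n_items_once():
    store.clear()
    for i in range(N):
        store[f"dp{i}"] = 0         # one change to each of N datapoints

t1 = timeit.timeit(one_item_n_times, number=5)
t2 = timeit.timeit(n_items_once, number=5)
print(f"1 item x {N}: {t1:.3f}s   {N} items x 1: {t2:.3f}s")
```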

35 Long Term Test (LTT)
With CMS’ machines (for the use of which we are very grateful!) we ran a long-term test:
–Generated random data
–Recorded it and displayed it continuously on a trend
–Distributed system
Results…

36 LTT Results
The electricity supply at CERN is unreliable. You really do need a UPS.
The CERN campus AFS servers are relatively unreliable and should never be used in a production system!
The CERN network infrastructure is very reliable, but can break.

37 Network Problem
One network break revealed that the default CERN Linux O/S settings actually prevent PVSS’s automatic recovery feature from accomplishing its goal: a caching problem.
It is written up in two pages covering the background, symptoms, explanation, how to fix it if it does happen to you, and how to avoid it happening in the first place.

38 “Side Effects” of the SUP Project
We accumulated a large body of practical experience wrestling with PVSS, systematically recorded for your benefit. Where?

39 FAQs
FAQ pages on http://cern.ch/itcobe
Not restricted to today’s frequent questions, but also ones that we foresee will become frequent in the near future, e.g.:
–My disk is nearly full! What can I do?
–My archive file is corrupt. What can I do?
Please spread the word, tell your friends…

40 FAQ Categories
Framework
PVSS - Installation
PVSS - Project Creation
PVSS - Alerts (Alarms)
PVSS - Import/Export
PVSS - Archiving
PVSS - Access Control
PVSS - Backup-Restore
PVSS - Cross Platform
PVSS - Distributed Systems
PVSS - Drivers
PVSS - Excel Report
PVSS - Folklore
PVSS - Graphics
PVSS - Linux specific
PVSS - Messages
PVSS - Miscellaneous
PVSS - Printing
PVSS - Programming
PVSS - Production Systems
PVSS - Run-time problems
PVSS - Scattered Systems
General Support Issues

41 Folklore What the FAQs don’t really address is the folklore that is built up in a close-knit team. Often this information is unknown (or inaccessible) to outsiders.

42 Folklore
Enter the Wiki…
–Web pages editable from inside a browser.
–The Controls Wiki.
–Only CERN users can add (or change existing) content.
–Readable worldwide. (It is already used as a reference by non-HEP organisations!)
Folklore often embodies recommended ways of doing things.
Do read it, and keep reading it… and edit it. It belongs to you!

43 Example Recommendations in the Folklore
Assume one PVSS system per machine (service restriction in Windows).
Place the EM/DM on a different CPU to OPC clients/servers (protects the EM against CPU overload from OPC; freedom to move the EM to Linux).
In a Summary (Group) alert, use a CHAR type (not a STRING type) DPE upon which to hang the summary alert. It’s more efficient.

44 Support Issues
Final remark: SUP has generated a fair number of support issues that have been followed up with ETM. “Bugs you didn’t know you nearly had.”
A significant contribution to the robustness of PVSS systems.

45 Summary
I do not claim to have answered all questions about building large systems. (New questions come up frequently anyway.)
We have shown that PVSS will scale to build large systems.
We have investigated the “hot topics of the moment” as defined by you.

46 To read a summary of the salient points of the most recent tests, including a discussion of the observed “Emergent Behaviour” in large systems, see my ICALEPCS paper, “Scaling Up PVSS”.
We are now bringing this project to a close. Thank you! Any (more) questions?

47

48 Reference Links
Scaling Up Home Page: http://cern.ch/itcobe/Projects/ScalingUpPVSS/welcome.html
IT-CO-BE FAQs: http://itcobe.web.cern.ch/itcobe/Services/Pvss/FAQ/
(T)Wiki: https://uimon.cern.ch/twiki/bin/view/Controls/PVSSFolkLore#PVSS_Folklore
ICALEPCS paper “Scaling Up PVSS”: http://elise.epfl.ch/pdf/P1_056.pdf

49

