
1 RPC Readiness for Data-taking (RPC Collaboration)

2 RPC Hardware
Forward:
- 18/432 chambers are in single-gap mode due to HV problems
- 1 chamber is disconnected due to an HV problem
- 1 chamber is disconnected due to an LV problem
- 7 chambers have no I2C communication (no threshold control)
Barrel:
- 5/1020 double-gap modules are in single-gap mode due to HV problems
- 15/480 chambers have no I2C communication (no threshold control). Of these, 11 can be controlled through the DT redundancy line; the remaining 4 are set to the default threshold but are currently masked.

3 RPC Firmware
CAEN firmware for the HV and LB boards is stable.
Control Board: new firmware recently installed to avoid loss of communication after CCU errors.
Trigger Board: LHC patterns uploaded; they also accommodate a cosmic pattern with minimal pT code and quality. A bug related to a wrong forward endcap geometry is still present in the patterns. The geometry modification will first be inserted in CMSSW, then new patterns will be computed and uploaded. The new firmware is expected to be ready in January. Some loss of efficiency (a few %, under investigation) could arise in RE1/2.

4 RPC Firmware
PAC:
- Bug in the logic cone definition (one strip of each chamber was not connected to any logic strip): corrected.
- PAC latency: +1 BX.
- New feature: the PACs can look for hit coincidences in more than one BX (up to 3), the "BX OR / extended coincidence" mode (see the sketch after this list).
- Known bugs (wrong sign-bit definition; incorrect shape of some patterns): new firmware loaded.
New firmware for Final Sort:
- Bug (the latency of endcap muons was 2 BX larger than in the barrel): corrected.
New firmware for Half Sort:
- Bug (phi ghost-busting between Trigger Crates not working, almost at all): new firmware applied and preliminarily tested.
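
A minimal sketch of the "BX OR / extended coincidence" idea, not the actual firmware logic: a pattern fires if the required planes have hits anywhere inside a small window of consecutive bunch crossings. The function and parameter names are illustrative assumptions.

```python
# Sketch: accept a pattern if enough planes have hits within a window
# of consecutive bunch crossings (the "BX OR / extended coincidence").
# Names, the window size and the plane count are illustrative.

def pattern_fires(hits_by_plane, bx, window=3, min_planes=4):
    """hits_by_plane: dict plane -> set of BX numbers with a hit.

    Accept the pattern at bunch crossing `bx` if at least `min_planes`
    planes have a hit in [bx, bx + window - 1].
    """
    in_window = sum(
        1 for bxs in hits_by_plane.values()
        if any(b in bxs for b in range(bx, bx + window))
    )
    return in_window >= min_planes

# Example: hits spread over two neighbouring BXs still give a coincidence.
hits = {1: {100}, 2: {100}, 3: {101}, 4: {101}}
print(pattern_fires(hits, bx=100, window=3))  # True
print(pattern_fires(hits, bx=100, window=1))  # False: not well synchronized
```

This is how an imperfectly synchronized trigger can stay close to fully efficient: the window absorbs chamber-to-chamber timing offsets until corrections are computed.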

5 RPC Online Software
Using XDAQ version 7; we would prefer not to rush the upgrade to version 9 unless CMS requires it.
Automatic procedures to recover the CCU rings are being implemented in the online software.
Threshold, RBC and TTU configuration from the database: ongoing.
TTU and RBC control through the Trigger Supervisor: to be tested.
Review of the LB setup procedure to speed it up: ready next week.
Implement a warm setup procedure, i.e. do not reconfigure hardware that is already in the "ready" state (see the sketch below): a few weeks needed.
Extensive work to have the Trigger Supervisor control the full configuration phase is in progress: ready next year.
Improvement of the LB monitoring efficiency and of the start/stop time, necessary for the TS-controlled configuration regime; a serious software change: ready next year.
Task force at work on the configuration: Karol + Filip + Nicolay + Andres + Mikolaj + Krzysztof
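
A sketch of the "warm setup" idea mentioned above: skip boards that a previous configuration left in a good state instead of reconfiguring everything. The board names, states and `configure` callback are illustrative assumptions, not the real RPC online-software API.

```python
# Warm setup sketch: only touch hardware that is not already READY.
from enum import Enum

class State(Enum):
    UNKNOWN = 0
    READY = 1
    ERROR = 2

def warm_setup(boards, configure):
    """Configure only the boards that are not already READY."""
    for board, state in boards.items():
        if state is State.READY:
            continue  # warm path: leave already-configured hardware alone
        configure(board)
        boards[board] = State.READY
    return boards

boards = {"LB_001": State.READY, "LB_002": State.UNKNOWN, "LB_003": State.ERROR}
warm_setup(boards, configure=lambda b: print(f"reconfiguring {b}"))
```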

6 RPC Operation Crew
Control room at P5: shifters (operation and monitoring) and experts. CMS center: shifter (offline DQM, prompt analysis). Shift leader on top.
- 3 shifters/day at P5
- 1 shifter/day at the CMS center
- 1 shift leader, 24h/24h
- 5 experts on call, 24h/24h
Until the end of the year, all shifts are covered.

7 RPC Operation at P5
Procedure for configuration (sketched below):
1. LV ON, HV ON
2. LB configuration
3. TH setting
4. LB synchronization
5. HV STAND-BY
For the November run we plan a partially manual configuration that will take some time (about 40 minutes). Once configuration through the Trigger Supervisor is ready, this will go down to a few minutes.
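
The slide's five-step sequence, written out as a script for illustration. The step functions are placeholders (assumptions), not the real XDAQ/Trigger Supervisor calls; the point is the fixed order and that a failure should stop the sequence.

```python
# Configuration-sequence sketch: run the steps in order, abort on failure.
def run_configuration(steps):
    for name, step in steps:
        print(f"[config] {name} ...")
        if not step():
            raise RuntimeError(f"configuration failed at step: {name}")

steps = [
    ("LV ON, HV ON",       lambda: True),  # power on low and high voltage
    ("LB configuration",   lambda: True),  # configure Link Boards
    ("TH setting",         lambda: True),  # load front-end thresholds
    ("LB synchronization", lambda: True),  # synchronize Link Boards
    ("HV STAND-BY",        lambda: True),  # park HV at stand-by voltage
]
run_configuration(steps)
```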

8 RPC Operation at P5
1. DCS monitoring via the PVSS panel (HV, I, T, …)
2. XDAQ monitoring (noise, trigger rates)
3. Monitoring of the chamber and trigger rates
4. Online DQM
5. Fill in a run report
Documentation is available at: https://twiki.cern.ch/twiki/pub/CMS/RPComm/manualpdf

9 RPC DCS: General Detector View
- Global performance, barrel + endcap
- Global FSM states for barrel, endcap, hardware, and gas system
- Percentage of hardware OK, with a trend to spot changes over the shift
- In case of an error condition, the information about the problem, the time stamp, the details and the number to call are displayed
- Panic buttons
- List of known problems to be cross-checked in case of an alarm
- 8 error conditions flagged by red LEDs on critical subsystems

10 RPC DCS
Summarizes all the information in one interface for the central shifter: not too technical, yet all possible problems can be spotted easily by non-RPC experts.
The DCS system was reviewed at the beginning of October. The DSS action matrix has been implemented and its validation is ongoing. Documentation…?
Working conditions can be monitored via histograms for all the sensitive parameters; global trends can also be monitored with history plots.

11 RPC Problems and Actions
LV channels:
- If some channels are off, try the ON command to restore them (only once).
- If one or more channels are in error, call the RPC shift leader.
HV channels in trip (see the sketch below):
- If fewer than 10 channels (1%) in total, try to restore them (only once). If the trip persists, leave the channels OFF and disable them in the DCS.
- If more than 10 channels in total, call the RPC shift leader. Discussion ongoing on how (and by whom) this is dealt with.
Gas alarm:
1. Check the gas flow
2. Kill acknowledge (if allowed)
3. Put HV to STAND-BY
4. Put HV to ON
Under investigation: the possibility that the gas alarm puts the detector in STAND-BY (OFF after 1 hour).
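
The HV-trip rule above maps directly onto a small decision routine: one automatic restore attempt for a small number of tripped channels, escalation to the shift leader above the threshold. Channel names and the callback functions are illustrative assumptions.

```python
# HV-trip handling sketch, following the slide's rule.
TRIP_LIMIT = 10  # about 1% of the HV channels

def handle_hv_trips(tripped, restore, disable, call_shift_leader):
    if len(tripped) > TRIP_LIMIT:
        call_shift_leader(tripped)     # too many trips: escalate, do not retry
        return
    for ch in tripped:
        if not restore(ch):            # one restore attempt only
            disable(ch)                # persistent trip: leave OFF, disable in DCS

handle_hv_trips(
    tripped=["W+1_S04", "W-2_S10"],
    restore=lambda ch: False,          # pretend the trip persists
    disable=lambda ch: print(f"disable {ch} in DCS"),
    call_shift_leader=lambda chs: print("call RPC shift leader:", chs),
)
```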

12 RPC DQM: Reference Histogram Updates
The decision is taken by the RPC shift leader, who evaluates whether the appearance of a problem requires an update of a reference histogram, also considering how long the problem is expected to take to solve. The RPC shift leader should contact the RPC DQM expert, who will promptly load the new reference into the DQM database. The update is then discussed and validated at the weekly RPC Run meeting.
Online DQM documentation is available at: https://twiki.cern.ch/twiki/bin/view/CMS/DQMShiftRPC
Offline DQM documentation is available at: https://twiki.cern.ch/twiki/bin/view/CMS/DQMShiftOfflineRPC
In case of DQM warnings, the central shifter should refer the issue to the RPC shift leader.

13 RPC DQM

14 RPC Operation at CMS Center
Prompt feedback on performance:
- Interface to analysis submission
- AlCaRPC Express Stream used
- RPCMon Primary Dataset still to be fully validated
- A dedicated TTU dataset needs to be defined
Several established analyses are ready to run on demand:
- Offline noise rates
- Chamber performance
- Trigger performance
Condition data analysis (dark current, temperature, gas flow, etc.) is under major review.

15 RPC Operation at CMS Center
Performance monitoring vs. time:
- Graphical interface to easily spot problematic chambers
- All DQM histograms available from the synoptic view
- Historical DQM
- DQM/CAF, which would allow an even more flexible use of DQM
Bari Tier2 will store all relevant data:
- Skim definition under construction
- Move to CRAB bulk submission, subscribing the RPC skim (beta functionality)

16 RPC Data Analysis Model
Our data analysis model does not rely on the concept of a run; technically it can be geared to work on a lumi-section basis at high luminosity. The issue is the statistics.
Early analysis: although the 2009 statistics will be low, we aim to give a green light on results "just the day after". A dedicated analysis team will provide:
1. Standard DQM/prompt analysis results with different granularity
2. Summary plots à la the CRAFT08 paper
A procedure to endorse the results within the RPC community is needed. Early results: under construction.

17 RPC Beam Splash
Barrel: OFF.
For the endcaps the splashes could be useful: time synchronization chamber by chamber (internal) and preliminary LHC synchronization.
At the beginning, only two endcap towers (downstream) in readout and trigger, at HV = 8.8 kV (low efficiency).
Monitoring of:
- Current from DCS
- LV current from DCS
- Occupancy from DQM
- Noise (from LBB histograms)
- Trigger rate
If no problems are seen, turn on the full endcap system for the next splash (a go/no-go sketch follows below).
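
A go/no-go sketch for the splash procedure above: ramp the full endcap only if every monitored quantity from the first splash looks fine. The quantity names, readings and limits are purely illustrative placeholders; the slide does not give numerical thresholds.

```python
# Splash go/no-go sketch: all monitored quantities must be within limits.
def splash_ok(readings, limits):
    """readings/limits: dict quantity -> measured value / maximum allowed."""
    return all(readings[q] <= limits[q] for q in limits)

readings = {"dcs_current_uA": 2.1, "lv_current_A": 4.0,
            "occupancy": 0.02, "noise_hz_cm2": 5.0, "trigger_rate_hz": 80.0}
limits   = {"dcs_current_uA": 10.0, "lv_current_A": 6.0,
            "occupancy": 0.10, "noise_hz_cm2": 20.0, "trigger_rate_hz": 500.0}

if splash_ok(readings, limits):
    print("turn on the full endcap system for the next splash")
```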

18 RPC Operation Cycle: Interfill
During the interfill, with HV at STAND-BY, check:
- CCU rings
- Optical link synchronization
- LB monitoring (to see whether the readout is effective)
- Load new masks, if needed (a new LB configuration is then necessary)
- Load new thresholds (if needed)
Then HV to ON. Time: about 3-4 minutes to go from STAND-BY to ON.
Definition of the RPC STAND-BY state: HV ON at 7 kV, LV FEB ON, LV LB ON. (See the sketch below.)
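
The interfill cycle described above, as a simple sequence: STAND-BY (HV at 7 kV, LV on), run the checks, then ramp to ON. The check functions are placeholders, and the operating voltage used here is an assumption, since the slide only specifies the 7 kV stand-by point.

```python
# Interfill-cycle sketch: stand-by -> checks -> on.
STANDBY_HV_KV = 7.0   # stand-by voltage from the slide
ON_HV_KV = 9.2        # assumption: nominal operating voltage not given here

def interfill_cycle(set_hv, checks):
    set_hv(STANDBY_HV_KV)            # park at stand-by during the interfill
    for name, check in checks:
        if not check():
            raise RuntimeError(f"interfill check failed: {name}")
    set_hv(ON_HV_KV)                 # ramp to ON; ~3-4 minutes in practice

interfill_cycle(
    set_hv=lambda kv: print(f"HV -> {kv} kV"),
    checks=[
        ("CCU rings",         lambda: True),
        ("optical link sync", lambda: True),
        ("LB monitoring",     lambda: True),
        ("masks/thresholds",  lambda: True),
    ],
)
```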

19 RPC Synchronization at Start-up
Two options are possible:
- Start with the current cosmic synchronization (Link Board settings)
- Calculate the settings from the muon time of flight from the vertex (we will try this next week)
In both cases we can configure the PACs to look for hit coincidences in a few consecutive BXs (2 or 3), the "BX OR / extended coincidence" feature. The PAC trigger is then ~100% efficient even if not well synchronized, and by analyzing the recorded (DAQ) data we can calculate the corrections. The procedure has been tested on cosmic muons.
We can compute the corrections from about 10 events per Link Board; with 1500 LBs and 5 (3) hits per muon, and assuming a flat distribution in eta (which is not the case, due to the different penetration lengths), we need about 10^4 muons (see the arithmetic sketch below).
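
A back-of-the-envelope check of the muon-count estimate, using only the numbers quoted on the slide:

```python
# ~10 entries per Link Board are needed for a timing correction, there
# are ~1500 LBs, and each muon leaves about 5 (barrel) or 3 (endcap)
# hits, i.e. populates that many LBs per event.
events_per_lb = 10
n_lbs = 1500
hits_per_muon = 5          # 3 in the endcap

needed_entries = events_per_lb * n_lbs          # 15,000 LB entries
ideal_muons = needed_entries / hits_per_muon    # 3,000 muons if flat in eta

print(f"ideal (flat-eta) muons: {ideal_muons:.0f}")
# The eta distribution is not flat, hence the slide's ~10^4 muons.
```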

20 RPC Operation Model
A model with two full shifts dedicated to "playtime" every week, and possibly 2-3 days every month for a longer maintenance period, is fine.
The two full "playtime" shifts every week could be dedicated to:
- Noise and trigger studies (also with cosmics and the TTU trigger)
- Revising the noisy channels
- Software/firmware bug fixes and tests
- Establishing the detector performance at different HV/threshold settings
The 2-3 day maintenance period could be dedicated to:
- Software bug fixing
- Major configuration software updates, if needed
- Off-detector electronics maintenance

21 RPC Conclusion
- RPC hardware fairly ready.
- For this year some configuration remains manual; an important effort is under way to have it fully controlled by the Trigger Supervisor.
- DCS/DQM in good shape. Some procedures still need to be refined, and expert presence is really needed throughout the entire shift. Documentation and procedures need to be validated.
- The interface between P5 and the CMS Center still needs to be improved.
- Prompt analysis tools are ready, validated during CRAFT09.
- Experts and shifters are ensured for 2009.

