ID Week, 13th of October 2014
Per Johansson, Sheffield University

• Will touch briefly upon past experiences, but will concentrate on the future and Run 2
• Cover both offline and online
• Bring up some issues we might need to look at, in no specific order

• Main losses:
  ◦ Pixel: Standby, HVscan
  ◦ SCT: ROD_OUT_2, Global_standby, Crate_out
  ◦ TRT: Nodata_06, Desync
  ◦ ID Global: 97.5%; tracking coverage (SCT ROD plus ...), beamspot
• Data Quality paper:
  ◦ 2013 workshop
  ◦ Summary presentation of workshop
  ◦ slides/0.pdf

• M4: 7–11th of July
  ◦ Managed to run the monitoring, and some of the DQM was also included and running stably in rel. 19
  ◦ However, Pixel/SCT were not powered, so only integration work
• M5: September
  ◦ For SCT/Pixel the priority was integration with powered detectors
  ◦ Monitoring not really exercised
  ◦ Some basic standalone monitoring tools working
• M6: started today
  ◦ Work on ID Global and detector monitoring, DQM and standalone monitoring tools
• Should aim to be in a good state for M7, starting 24th of November and ending on 7th of December

• Histogram review to be done when packages are working as intended
• In Run 1 there were probably too many unneeded plots
  ◦ Consumes CPU and memory
• Too many meaningless checks (wrong algorithms, bad thresholds, etc.) and perhaps not enough good checks
  ◦ Can the histogram be designed better?
  ◦ Much lumi information is available: can the histogram be made robust to changes in trigger or lumi/conditions?
  ◦ Luminosity-dependent rebooking, etc.
• Can the offline code be made easier to maintain?
  ◦ Can its performance be improved?
• Are similar plots, e.g. coverage/timing plots, made across related sub-systems in a unified way?
  ◦ Which of them *need* monitoring online?
  ◦ Are the checks reliable in all conditions, to avoid spurious red flags?
• What should the ID ACR shifter watch, and what should the central DQ desk watch?
• Is the documentation clear and up-to-date?
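As an illustration of the kind of check logic being questioned above, here is a minimal Python sketch of a threshold-based bin check whose limits scale with the instantaneous luminosity, so the same check does not throw spurious flags when trigger or lumi conditions change. All names and thresholds (check_occupancy, ref_lumi, the 10% fractions) are assumptions for illustration, not part of any existing ATLAS DQ package.

```python
# Minimal sketch of a luminosity-aware DQ check (hypothetical names, not an
# existing ATLAS DQ algorithm). Instead of fixed bin thresholds, the expected
# occupancy is scaled with the instantaneous luminosity, so the check stays
# meaningful when conditions change.

def check_occupancy(bin_contents, lumi, ref_lumi=1.0, ref_occupancy=100.0,
                    yellow_frac=0.5, red_frac=0.2):
    """Return a DQ colour flag ('green', 'yellow' or 'red') for one histogram.

    bin_contents  : list of per-module (or per-bin) hit counts
    lumi          : instantaneous luminosity of the monitored interval
    ref_lumi      : luminosity at which ref_occupancy was measured
    ref_occupancy : expected counts per bin at ref_lumi
    """
    expected = ref_occupancy * lumi / ref_lumi   # lumi-scaled expectation
    if expected <= 0 or not bin_contents:
        return "undefined"                       # no statistics: do not flag

    low_bins = sum(1 for c in bin_contents if c < yellow_frac * expected)
    dead_bins = sum(1 for c in bin_contents if c < red_frac * expected)

    frac_low = low_bins / len(bin_contents)
    frac_dead = dead_bins / len(bin_contents)

    if frac_dead > 0.10:     # more than 10% of bins essentially empty
        return "red"
    if frac_low > 0.10:      # more than 10% of bins well below expectation
        return "yellow"
    return "green"


# Example: healthy occupancies at half the reference luminosity stay green.
print(check_occupancy([55, 48, 52, 40, 51] * 20, lumi=0.5))   # -> 'green'
```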

• In Run 1 there were 3 detector shifters and one for the ID Global part
  ◦ Covering tracking, vertexing, beamspot and alignment
• Some issues that were raised:
  ◦ Communication between the Global and subdetector shifters
  ◦ The ID Global DQ checks should occur after the subdetector checks
  ◦ Lack of understanding from the detectors about ID decisions made
  ◦ Duplication of effort between Global and the subdetectors
  ◦ Lack of ID Global experts
• Merging these shifts into one was under way during Run 1, and there were several successful trial periods with one shifter doing all parts (in addition to the normal shifters as a cross-check)
• New twiki DQ pages were used to provide the shifter with all the information needed
  ◦ Not working at the moment?!
  ◦ Needs some attention: content, histograms, instructions
  ◦ The web display used previously contains all histograms (shifter, expert, etc.) and is of course still there

• Defects need to be reviewed
• How to decide whether a tolerable or an intolerable defect should be set?
  ◦ Obviously it should depend on whether the data is good for physics
  ◦ Not obvious how to do this; would need feedback from the physics groups
• An example: for the SCT in Run 1, >100 modules / 2 RODs out = intolerable defect
• However, this does not necessarily mean the data is bad: one ROD out on each endcap would translate into one hit lost on a track through that quadrant, while 2 RODs out on the barrel would be worse due to the eta/phi module layout
• Should aim to decide in the ID Global part whether there is a loss of tracking in the affected region, and set the appropriate defect
• Of course, there is still the question of how big a loss leads to an intolerable defect
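The location-aware decision described above could in principle be expressed as a small helper; the sketch below is purely illustrative (the defect names and per-region thresholds are assumptions, not the Run 1 rules), showing how the number of RODs out per region, rather than a single global count, could drive the tolerable/intolerable choice.

```python
# Illustrative sketch only: the defect names and thresholds are assumptions,
# not the actual Run 1 (or Run 2) SCT defect definitions. The point is that
# the tolerable/intolerable decision can depend on where the disabled RODs
# sit (barrel vs endcap), not just on how many there are.

def sct_defect_for_rods_out(rods_out_barrel, rods_out_endcap_a, rods_out_endcap_c):
    """Return (defect_name, is_intolerable) for a given set of disabled RODs."""
    total = rods_out_barrel + rods_out_endcap_a + rods_out_endcap_c
    if total == 0:
        return None, False

    # Two or more RODs out in the barrel overlap in eta/phi coverage,
    # so tracking in that region degrades quickly -> intolerable.
    if rods_out_barrel >= 2:
        return "SCT_RODS_OUT_BARREL", True

    # One ROD out per endcap costs roughly one hit on tracks through that
    # quadrant, which tracking can usually absorb -> tolerable.
    if rods_out_barrel == 0 and rods_out_endcap_a <= 1 and rods_out_endcap_c <= 1:
        return "SCT_RODS_OUT_ENDCAP", False

    # Anything else: fall back to a conservative intolerable flag until the
    # effect on tracking in the affected region has been checked.
    return "SCT_RODS_OUT", True


# Example: one ROD out on each endcap is flagged, but as tolerable.
print(sct_defect_for_rods_out(0, 1, 1))   # -> ('SCT_RODS_OUT_ENDCAP', False)
```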

• The prompt calibration loop
  ◦ Calibrations derived in a 36-hour loop which starts after the run has finished
  ◦ Status?!
• PIX/SCT/TRT calibrations to be moved into the Tier-0 system if the stand-alone processing is robust enough
  ◦ Question raised during a recent DPC meeting
• DCS Calculator
  ◦ PVSS -> COOL issues in CONDBR2
  ◦ Question whether names or channel numbers are used to write the data, and whether names or channel numbers are used to access it offline
  ◦ Writing by name -> channel numbers get assigned at random
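To illustrate the name-versus-channel-number point (this is not the real PVSS/COOL or DCS Calculator code, just a toy model with made-up channel names), the sketch below shows why writing by name without a fixed name-to-channel mapping breaks any offline code that reads by channel number: the numbers end up depending on the order in which the names arrive.

```python
# Toy model only: not the PVSS -> COOL DCS Calculator code. If channel numbers
# are assigned on the fly when writing by name, the number attached to a given
# hardware channel depends on insertion order, so offline access by channel
# number is not stable. A frozen name -> number map (or access by name) avoids this.

def write_by_name(payloads):
    """Assign channel numbers in arrival order (the 'random assignment' above)."""
    folder = {}
    for channel_id, (name, value) in enumerate(payloads):
        folder[channel_id] = {"name": name, "value": value}
    return folder

# The same DCS data written twice, with the channels arriving in a different order:
run1 = write_by_name([("SCT_Barrel3_HV", 150.0), ("SCT_EndcapA_HV", 149.5)])
run2 = write_by_name([("SCT_EndcapA_HV", 149.8), ("SCT_Barrel3_HV", 150.1)])

# Offline access by channel number silently picks up a different hardware channel:
print(run1[0]["name"], run2[0]["name"])   # SCT_Barrel3_HV  SCT_EndcapA_HV

# Access by name (or via a frozen name -> channel map) stays consistent:
def read_by_name(folder, name):
    return next(v["value"] for v in folder.values() if v["name"] == name)

print(read_by_name(run1, "SCT_Barrel3_HV"), read_by_name(run2, "SCT_Barrel3_HV"))
```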

• Lots of things to do!
• First make sure the monitoring, online and offline, is running OK, as well as the DQM and the various other tools we use
• Then start reviewing histograms, defects, shifter tools online and offline, etc., and we should not forget documentation!