Commissioning: Preliminary thoughts from the Offline side
ATLAS Computing Workshop, December 2004
Rob McPherson, Hans von der Schmitt

Presentation transcript:

1 Commissioning: Preliminary thoughts from the Offline side (ATLAS Computing Workshop, December 2004; Rob McPherson, Hans von der Schmitt)

2 ATLAS Commissioning (2004/12/06, Rob McPherson / Hans von der Schmitt)
- Commissioning = "Just Installed" → "Operational"
- Referring primarily to activities at Point 1 (plus integration)
- Broken into 4 phases:
  - Phase 1: subsystem standalone commissioning
    - DCS: LV, HV, cooling, gas, safety systems; record in DB, retrieve from DB
    - DAQ: pedestal runs, electronic calibration, write data, analyze
  - Phase 2: integrate systems into the full detector
  - Phase 3: cosmic rays
    - Take data, record, analyze/understand them, distribute to remote sites
  - Phase 4: single beam, 1st collisions
    - Same, with increasingly higher rates
- Phases will overlap: some systems may take cosmics while others are still installing
- Starts very soon: barrel calorimeters start "phase 1" electronics commissioning in March 2005

3 ATLAS Commissioning Structure
The names are the current commissioning contact people:
- Cryogenics: G. Passardi
- Detector cooling, gas: J. Godlewski
- Cooling, ventilation: B. Pirollet
- Safety: G. Benincasa
- Magnets: H. ten Kate
- Databases: R. Hawkings, T. Wenaus
- Offline: R. McPherson, H. von der Schmitt
- Pixels: L. Rossi
- SCT: S. McMahon
- TRT: H. Danielsson, P. Lichard
- ID: P. Wells
- LAr: L. Hervas
- Tiles: B. Stanek
- Muon barrel: L. Pontecorvo
- Muon endcap: S. Palestini
- Luminosity, beam-pipe, shieldings, etc.
- DAQ: G. Mornacchi
- Central DCS: H. Burckhart
- LVL1: T. Wengler
- HLT: F. Wickens
- TDAQ: G. Mornacchi
- Overall ATLAS: G. Mornacchi, P. Perrodo

4 Offline commissioning
- Work program will start with detector debugging/monitoring in the early stages, then move to cosmics, beam halo, beam gas, and then 1st collisions
- Many issues for detector online+offline software, databases, simulation, data distribution, remote reconstruction, ...
- Will have meetings as needed
- Requested contact people from detectors and related groups; some responses so far:
  - ID: Maria Costa
  - LAr: ?
  - Tiles: Sasha Solodkov
  - Muons: ?
  - DB: Richard Hawkings and Torre Wenaus
  - Simulation: ?
  - Physics: ?

5 Detector commissioning: offline view
[Flattened data-flow diagram; recoverable elements:]
- Readout chain: Detector → Front-End → ROD → ROS → SFI → EF → SFO, with LVL1 and LVL2 triggers; an RCC/VME path writes ByteStream files directly
- DCS / Controls: HV, LV, temperature sensors, alignment, cryogenics, ...
- Configuration database(s) and conditions database(s)
- TDAQ/system workstation (GNAM in the CTB), Online System, presenter, Online Histogram Service
- ATHENA runs on the ByteStream files

6 Non-event data access
- DCS and other controls data
  - If needed offline, natural access is via the conditions DB interface
  - Can we assume evolution of the current PVSS manager is sufficient? Probably yes. Assume it will move to Oracle at some point.
    - But must watch custom DB use in case additional central support tools are required
    - And must also watch data volume into the relational DB...
  - Note that most "DCS" monitoring is done in PVSS et al. (not called "offline" here)
- Configuration information
  - Again, assume the necessary information is written either to the event stream or to the conditions DB
- Calibration / alignment constants
  - Obviously need the conditions DB for these
  - Will we have a common system in time for commissioning? (POOL et al.?)
  - Can fall back to CTB systems... not very nice...

7 Event Data Access (1)
1) Running ATHENA on ByteStream / EventStorage files
- Easiest way for offline code to access data
- Would we want to maintain a "commissioning branch" like the CTB?
- Would we want this branch built on non-AFS like the CTB?
- Can re-use a lot of monitoring tools developed for the CTB
- Ideally the ROD → ROB → ROS chain is working, but can also do RCC → BS
- Limited number of channels: ROB/ROS → PC with FILAR card (needed for subdetectors without full VME readout)
2) In the Event Filter
- Requires more of the DAQ system running
- Experience from the CTB: not always possible to keep code up to date
  - Uncouple detector monitoring from online software releases etc. as much as possible?
- Need to review handling of "incidents" (asynchronous interrupts) passed into the ATHENA job
  - Histogram reset under certain conditions...
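Option 1 above amounts to iterating over framed binary events in a file. As a sketch of the kind of reader an offline job needs, here is a toy length-prefixed framing; this is not the real ATLAS EventStorage/ByteStream format, whose headers and fragment structure are far richer.

```python
import io
import struct

# Toy framing: 4-byte little-endian payload size, then the raw event bytes.
# Illustrative only; the real EventStorage format differs.
def write_events(stream, events):
    for payload in events:
        stream.write(struct.pack("<I", len(payload)))
        stream.write(payload)

def read_events(stream):
    """Yield successive event payloads until end of file."""
    while True:
        header = stream.read(4)
        if len(header) < 4:
            break  # clean end of file (or truncated file)
        (size,) = struct.unpack("<I", header)
        yield stream.read(size)

buf = io.BytesIO()
write_events(buf, [b"event-1", b"event-22"])
buf.seek(0)
sizes = [len(evt) for evt in read_events(buf)]
print(sizes)
```

Even in this toy form, the truncation check matters: files cut short by a crashed run should end the loop rather than raise, which is exactly the corrupt-event robustness discussed on slide 9.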

8 Event Data Access (2)
3) "Online" workstation (a la GNAM in the CTB)
- Can take the (Ethernet) data stream via ROS (or RCC)?
- Need to review this Ethernet data stream and how to read it from an ATHENA job
- Also need to review running ATHENA on lower-level (ROD?) fragments (is all needed information available?)
- If we run ATHENA here, require:
  - Possibility of a "light-weight" ATHENA with only converters and histogramming
- If we don't run ATHENA here:
  - Requires duplication of converters and parallel maintenance
  - Limited monitoring possible at this level unless we also want to duplicate cabling/mapping services, database interaction, etc.
- But we will want to match the histogram ROOT tree to ATHENA in any case, to use the same plots / macros / etc.

9 ATHENA "online"
- Direct access to the TDAQ Information Service (IS) is essential
  - Had limited use in CTB04 monitoring (e.g. beam energy for histograms)
  - Found this a weak point that could use review
- Need a structured monitoring/histogramming environment that matches online use
  - Dynamic booking / rebooking of histograms
  - Zero histograms based on some external input
    - E.g. the shift crew presses a "reset" button... or a change in some condition is picked up via the Information Service
  - Can work features into the AthenaMonitoring package, once we understand what features are wanted
  - Need a "state model" for the online system, mapped/implemented in ATHENA
- Need a "smaller" build
  - Strong feeling on the subdetector/TDAQ side that ATHENA is too hard to use for the "GNAM" environment
  - Hard to use, crashes in obscure places (say, in ByteStreamSvc somewhere due to corrupt events? How to debug this? It will happen a lot during commissioning!)
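The "state model" plus histogram-reset idea above can be sketched as a small state machine whose transitions drive the zeroing of histograms. The state names, transition table, and `MonitoringTask` class are invented for illustration; a real implementation would live in AthenaMonitoring and react to TDAQ Information Service updates rather than direct method calls.

```python
# Allowed online-state transitions (illustrative, not the real TDAQ states).
ALLOWED = {"configured": {"running"},
           "running": {"stopped"},
           "stopped": {"running", "configured"}}

class MonitoringTask:
    """Toy monitoring component: fills histograms only while running,
    and zeroes them on every transition into the running state."""

    def __init__(self):
        self.state = "configured"
        self.hists = {"hits_per_event": []}

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        if new_state == "running":
            # the "reset button": zero all histograms at run start
            for name in self.hists:
                self.hists[name] = []
        self.state = new_state

    def fill(self, nhits):
        if self.state == "running":
            self.hists["hits_per_event"].append(nhits)

task = MonitoringTask()
task.transition("running")
task.fill(12)
task.fill(7)
task.transition("stopped")
task.transition("running")  # histograms are zeroed here
print(task.hists["hits_per_event"])  # []
```

Rejecting illegal transitions is the point of having an explicit state model: an asynchronous "incident" arriving in the wrong state is refused instead of silently corrupting the histograms.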

10 Summary thoughts on tools
- Databases
  - DCS databases and offline access seem OK for early commissioning
  - Calibration/alignment databases need rationalization
    - It would be very nice to have one recommended/supported solution before these are seriously required and used
    - Some CTB04 solutions (NOVA, writing significant data into the conditions DB) won't scale
  - Want to archive histograms etc. from the commissioning phase in a DB??
- If we use ATHENA-based event stream monitoring
  - Many of the CTB tools can be migrated to commissioning: monitoring Algorithms/AlgTools, ROOT macros, etc.
  - Need to think about detailed plots etc. for full ATLAS
    - There have been ridiculous monitoring-histogram extrapolations from CTB → ATLAS... must review these
  - There is a "histogram checker" framework in place (Monitoring/MonHighLevel from Manuel Diaz), but it needs clients
- If we also use non-ATHENA-based event stream monitoring
  - Surely would still want a common framework
  - Can still recycle many ROOT macros etc. from the CTB

11 Phase 3 and beyond...
- Have fully simulated cosmics, beam halo and beam gas samples available for detector studies
  - Some use so far:
    - Tiles: commissioning trigger rate studies
    - Muons: tracking package for non-pointing, out-of-time events
    - LAr and ID: some rate and reconstruction studies
  - Do we want dedicated samples with special detector configurations? Or more statistics for the samples we have?
  - So far: only G3 simulation of the overburden/cavern. Want G4?
  - Need to review the readiness of the subdetector reconstruction software for these non-standard events
- Once we're taking data with the full TDAQ chain in place
  - Data distribution to "Tier0" and remote computing centres is planned for cosmics and single-beam data
  - Considering this is not currently the highest priority, but must keep in mind that it will run in parallel with DC3

12 Summary
- ATLAS commissioning at Point 1 starts in a few months
- Initially may need fallback DB solutions, but need to work to avoid these if possible
  - Must watch the data rate and volume written into the relational DB
- Will use ATHENA for detailed data analysis
  - Must think about the "AthenaMonitoring" environment for non-developers: smaller, faster, simpler, robust...
  - Maybe need to define an incident path to react inside ATHENA to changing external conditions, matching the TDAQ states
  - Also must verify that all subdetectors implement BS fragment versioning; formats will evolve significantly during detector commissioning
- Must consider whether ATHENA is also OK for "in the pit while plugging in a board" monitoring and then subsequent standard online monitoring
  - TDAQ event stream → ATHENA?
  - Regardless, it would still be good to maintain code in only one place
- Also need to review detector reconstruction for cosmics etc.
- And eventually also the best timescale for distribution of commissioning events to external computing centres

