Test Of Distributed Data Quality Monitoring Of CMS Tracker

M.S. Mennea, G. Zito, N. De Filippis (University & INFN Bari, Italy)

ABSTRACT
The complexity of the HEP detectors for the LHC (the CMS Tracker has more than 50 million electronic channels) and the availability of the Grid computing environment require novel ways to monitor online detector performance and data quality. In the control room all raw data are accessible, but computing resources are scarce. For this reason we would like to send a sizeable amount of tracker raw data to a specialized Tier-2 centre through a high-bandwidth connection. This centre can analyze the data as they arrive, performing quasi-online monitoring and sending the results of this analysis back to the control room. We report on a feasibility test done using the grid farm of the INFN Tier-2 centre in Bari.

REFERENCE ARCHITECTURE AND DATA FLOW
1. The raw data are processed by a local cluster at CERN with a few thousand CPUs connected in a hierarchical way. The final stage of this processing is the Filter Unit Farm, where the raw data are made available to local monitoring programs. The same data are also sent to the Tier-0 and then to the Tier-1 centres. The Tier-2 gets a sizeable amount of raw data from the nearest Tier-1 centre through a high-bandwidth connection.
2. The worker nodes (WN) available at the Tier-2 start processing the data as they arrive, as synchronously as possible with the data transfer. Each CPU processes all or part of the tracker, and only an optimal number of events per job. The result of each job is a ROOT file saved on a disk local to the Tier-2. In parallel with these jobs, a program runs continuously on the Master Machine, waiting for new ROOT files to be ready (a sketch of this loop follows this list).
3. Each ROOT file is analyzed and its result is added to a running summary of the monitoring analysis (the tracker map), which is saved on a disk area served as an SVG image by the web server. This image has an entry for each detector module, linked through a URL to a detailed report that also contains the module's histograms.
4. In case of problems, the operator at CERN, who views the image in a web browser, can click on the module at the origin of the problem and get the web page containing the module's report. If a more detailed analysis of the data is required, the operator can also access the ROOT file online.
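As an illustration of steps 2 and 3, the following is a minimal sketch, not the actual Bari implementation, of the Master Machine loop: it watches a drop directory for the ROOT files produced by the grid jobs, folds each new file into running per-module hit totals, and periodically rewrites the SVG tracker map. The directory paths, the one-minute update period (taken from the test results below) and the two helper functions are assumptions for illustration.

```python
import os
import time

ROOT_DIR = "/data/monitoring/incoming"      # assumed drop area for job output
MAP_PATH = "/var/www/html/tracker_map.svg"  # assumed web-served map location
UPDATE_PERIOD = 60  # the test quotes a minimum map-update delay of 1 minute

def read_module_hits(path):
    """Hypothetical helper: open one job's ROOT file and return a
    {module_id: hit_count} dict, e.g. with PyROOT from the per-module
    histograms that the job produced."""
    raise NotImplementedError

def write_tracker_map(totals, path):
    """Hypothetical helper: redraw the SVG tracker map from the running
    per-module totals (see the SVG sketch after the references)."""
    raise NotImplementedError

hits_per_module = {}  # running totals, keyed by detector module id
merged = set()        # ROOT files already folded into the totals

while True:
    for name in sorted(os.listdir(ROOT_DIR)):
        if not name.endswith(".root") or name in merged:
            continue
        # No special check in this test: just count the hits on each
        # module and add them to that module's previous total.
        for module_id, nhits in read_module_hits(
                os.path.join(ROOT_DIR, name)).items():
            hits_per_module[module_id] = hits_per_module.get(module_id, 0) + nhits
        merged.add(name)
    write_tracker_map(hits_per_module, MAP_PATH)  # map grows more complete
    time.sleep(UPDATE_PERIOD)
```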
SUMMARIZING THE RESULTS OF THE MONITORING ANALYSIS WITH A TRACKER MAP
The tracker map is a specialized 2D representation of the tracker, a kind of scatter plot in which all modules are shown on a single screen: imagine disassembling the whole tracker and reassembling it on a flat surface, placing each module in a position related to its spatial position in the detector. Using this representation in SVG format, with interactive features implemented in JavaScript [1, 2], we obtain a kind of high-level user interface for visualizing tracker monitoring data. The image is not static but interactive: the user can zoom in for more detail, down to the level of a single microstrip shown as a normal histogram in a window next to the main display, pick a zone to get more information about it, and so on. (A sketch of how such a map could be generated is given after the references.)

PRELIMINARY RESULTS OF THE TEST AT THE TIER-2 IN BARI
Dataset: H->ZZ->2e2mu with pile-up; 10,000 events (~50,000 hits per event).
Monitoring task: produce the integrated signal for all strips/pixels, which requires building 14,248 one-dimensional plots with 512/768 channels and 1,392 scatter plots with up to 420x160 channels.
Jobs: 250; for each of the 41 detector layers, 5 jobs were generated, each processing 2,000 events.
Real time of test: 95 minutes, with at most 35 jobs running in parallel; 0.5 seconds/event on average.
CPU used: 60% of real time.
WN and Master Machine: Pentium IV 2.4-3.0 GHz, 2 GB RAM.
Memory: ~300 MB used by each job.

A separate program waits for the completion of the jobs in order to process the histograms. For this test we perform no special check: we only count the hits on each module and add the result to the number of hits previously recorded for the same module. The result is saved periodically in the tracker map, which becomes progressively more complete until all the data are processed. The minimum delay in updating the tracker map is 1 minute.

LIMITS (TO BE OPTIMIZED)
The limits observed in the test are connected to:
- Data access: only 1/3 of the total CPU per job was used because of data access; rates are 100 MB/s on the server and 10 MB/s per node. This can be optimized by using different data-access protocols (rfio, dCache); rfio is currently used.
- Availability of the Resource Broker (RB) and the grid overhead for job submission in a period of intensive use.
- Retrieving the output from the grid job: to be replaced by retrieval from the Storage Element.
- The number of jobs running in parallel.
- Saturation of the local area network bandwidth during the data transfer: currently it is at most 100 GB.
- The overhead introduced by the agents for synchronous analysis after the data transfer.

PROBLEMS
Failures of local (Tier-2 specific) and grid services (RB, CE, SE); to be improved with redundancy of services.

CONCLUSIONS
- The test proved the feasibility of the approach.
- The preliminary test was successful.
- Many parameters remain to be optimized: a huge improvement is expected.

REFERENCES
[1] M.S. Mennea, A. Regano, G. Zito, "CMS tracker visualization tools", Nuclear Instruments and Methods in Physics Research A, Vol. 548/3, pp. 391-400, 2005.
[2] G. Zito, M.S. Mennea, "Use of interactive SVG map and web services to monitor CMS tracker", Proceedings of the IEEE-NPSS Real Time Conference, Stockholm, Sweden, June 4-10, 2005.
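To make the tracker map concrete, here is a minimal sketch, again not the authors' code, of how such a clickable SVG could be produced: each module becomes a rectangle, coloured by its accumulated hit count and hyperlinked to a per-module report page, matching the drill-down described in step 4 of the data flow. The flattened-layout coordinates, the report URL scheme and the colour scale are illustrative assumptions.

```python
from xml.sax.saxutils import escape

def colour(nhits, max_hits):
    """Map a hit count to a simple green-to-red shade (assumed scale)."""
    level = int(255 * min(nhits / max_hits, 1.0)) if max_hits else 0
    return f"rgb({level},{255 - level},0)"

def write_tracker_map(modules, path):
    """modules: {module_id: (x, y, nhits)}, where (x, y) is the module's
    slot on the flattened 2D layout (hypothetical coordinates)."""
    max_hits = max((n for _, _, n in modules.values()), default=0)
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" '
             'xmlns:xlink="http://www.w3.org/1999/xlink" '
             'width="1400" height="600">']
    for module_id, (x, y, nhits) in modules.items():
        url = escape(f"reports/{module_id}.html")  # assumed report URL scheme
        parts.append(
            f'<a xlink:href="{url}">'
            f'<rect x="{x}" y="{y}" width="10" height="6" '
            f'fill="{colour(nhits, max_hits)}">'
            f'<title>module {module_id}: {nhits} hits</title>'
            f'</rect></a>')
    parts.append("</svg>")
    with open(path, "w") as f:
        f.write("\n".join(parts))

# Example with two dummy modules at made-up layout positions.
write_tracker_map({30000001: (20, 40, 350), 30000002: (32, 40, 12)},
                  "tracker_map.svg")
```

Because every rectangle is wrapped in an SVG <a> element, clicking a module in a web browser opens its report page directly, which is the interaction described for the CERN operator in step 4.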

