
1 Analysis experience at GSIAF Marian Ivanov

2 HEP data analysis ● Typical HEP data analysis (physics analysis, calibration, alignment), like any statistical algorithm, needs continuous algorithm refinement cycles ● The refinement cycles should optimally take seconds to minutes ● Parallelism is the only way to analyze HEP data in a reasonable time

3 Data analysis (calibration, alignment) ● Where do we run? ● DAQ farm (calibration) ● HLT farm (calibration, alignment) ● Prompt data processing (calibration, alignment, reconstruction, analysis) with PROOF ● Batch analysis on the Grid infrastructure

4 Tuning of statistical algorithms ● For our analysis it is often not enough to study histograms alone. – A deeper understanding of the correlations between variables is needed. ROOT TTrees can be used to study correlations on large statistics. – Non-trivial processing algorithms may require intermediate steps. – It should be possible to analyze (debug) the intermediate results of such non-trivial algorithms ● Process function: 1) Preprocess the data (can be CPU expensive, e.g. track refitting) ● optionally store the preprocessed data in TTrees 2) Histogram and/or fit and/or fill matrices with the preprocessed data
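The two-phase Process function described above can be sketched in plain C++. This is an illustrative stand-in, not the AliRoot code: the Track, RefitResult, and Histogram types below are invented for the example, and the "refit" is a toy transformation standing in for the CPU-expensive preprocessing step.

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Track { double pt = 0; };              // raw input object (stand-in for an ESD track)
struct RefitResult { double residual = 0; };  // preprocessed data (could be stored in a TTree)

// Phase 1: the CPU-expensive preprocessing (e.g. track refitting); here a toy function.
RefitResult Preprocess(const Track& t) {
    return RefitResult{ std::fabs(std::sin(t.pt)) };
}

// Phase 2: accumulate the preprocessed values, here into a coarse 10-bin histogram on [0, 1).
struct Histogram {
    std::array<int, 10> bins{};
    void Fill(double x) {
        int b = static_cast<int>(x * 10.0);
        if (b >= 0 && b < 10) ++bins[b];
    }
};

// The Process driver does per-entry work only, so the same code can run
// locally, on PROOF slaves, or in a batch job.
void Process(const std::vector<Track>& tracks, Histogram& h) {
    for (const auto& t : tracks) {
        RefitResult r = Preprocess(t);  // optionally also write r to a TTree here
        h.Fill(r.residual);
    }
}
```

Keeping the expensive phase 1 separate means its output can be stored once and the cheap phase 2 (histogramming, fitting) can be rerun many times during algorithm refinement.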

5 Component model ● The algorithmic part of our analysis and calibration software should be independent of the running environment ● Example: the TPC calibration classes (components) are run and tuned Offline and used in HLT, DAQ and Offline ● Analysis and calibration code should be written following a component-based model ● TSelector (for PROOF) and AliAnalysisTask (see the presentation of Andreas Morsch) are then just simple wrappers

6 Components ● Basic functionality: ● Process(...) ● process your input data, e.g. a track ● Merge(...) ● merge components ● Analyze() ● analyze the preprocessed data (e.g. histograms, matrices, TLinearFitters) ● To enable merging of the information from the slaves, the component has to be fully streamable
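The Process/Merge/Analyze contract can be illustrated with a minimal stand-alone component. This sketch replaces histograms and TLinearFitters with a simple running sum; in the real ROOT setup the class would additionally need a streamer (ClassDef) so that PROOF can ship partial results from the slaves to the master. The class and member names below are ours, not AliRoot's.

```cpp
// Minimal sketch of the component contract: Process() per input object,
// Merge() to combine partial results from slaves, Analyze() for the final pass.
class CalibComponent {
public:
    void Process(double measurement) {         // called once per input object
        sum_ += measurement;
        ++n_;
    }
    void Merge(const CalibComponent& other) {  // fold in another slave's partial result
        sum_ += other.sum_;
        n_   += other.n_;
    }
    double Analyze() const {                   // runs on the merged data (here: the mean)
        return n_ ? sum_ / n_ : 0.0;
    }
private:
    double sum_ = 0;
    long   n_   = 0;
};
```

Because Merge() only needs the component's own state, each slave can process its share of the data independently and the master recovers the global result afterwards.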

7 Example component ● class AliTPCcalibTracks : public TNamed { ..... – virtual void Process(AliTPCseed *seed); – void Merge(TCollection *); – void Analyze(); – // histograms, fitters, arrays of histograms, matrices .... – TObjArray *fArrayQDY; // q binned delta Y histograms – TObjArray *fArrayQRMSZ; // q binned delta Z histograms – TLinearFitter *fFitterXXX; – };

8 Example selector wrapper ● A user-defined light selector derives from the base selector AliTPCSelectorESD ● class AliTPCSelectorTracks : public AliTPCSelectorESD { .... ● AliTPCcalibTracks *fCalibTracks; ● AliTPCcalibTracksGain *fCalibTracksGain; – }; – Bool_t AliTPCSelectorTracks::ProcessIn(Long64_t entry) { ...... ● fCalibTracksGain->Process(seed, esd); ● fCalibTracks->Process(seed); ● .......

9 AliTPCSelectorESD ● Additional functionality implemented on top of TSelector – Takes care of the data input – Stores system information about the user process (memory, CPU usage versus time, user stamps) in syswatch.log files; simple visualization using TTree::Draw ● Optionally a memory checker can be enabled – Stores the intermediate results in a common space for further algorithm refinement (if requested) ● The TProofFile/TFileMerger mechanism will handle file-resident trees in the future – Is it possible to use it also for local analysis? ● Another possible solution – can we use the schema of AliEn? – OutputArchive={"log_archive:stdout,stderr,*.log@Alice::CERN::castor2", "calib_archive.zip:TPCcalibTracks.root,TPCcalibTracksGain.root@Alice::CERN::castor2"};

10 Assumptions – data volume to process and accessibility ● ALICE pp event ● ESD size ~ 0.03 MB/event ● ESDfriend ~ 0.45 MB/event ● (0.5 TB to 5 TB) per 10^6 events (no overlaps – 10 overlapped events) ● ESD (friends) / raw data (zero suppressed): – Local ~ 10^5-10^6 pp / 10^4 pp – Batch ~ 10^6-10^7 pp / 10^5 pp – PROOF ~ 10^6-10^7 pp / – – Grid > 10^7 pp / 10^6 pp
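A quick back-of-the-envelope check of the slide's numbers: with ~0.03 MB/event of ESD plus ~0.45 MB/event of ESD friends, 10^6 pp events give roughly half a terabyte, matching the quoted 0.5 TB lower bound (the ~5 TB upper bound then corresponds to ~10x event overlap). The helper below is ours, using decimal units (1 TB = 10^6 MB).

```cpp
// Dataset size in TB for a given per-event size (MB) and event count.
// Decimal units: 1 TB = 1e6 MB.
double datasetTB(double mbPerEvent, double nEvents) {
    return mbPerEvent * nEvents / 1e6;
}
// datasetTB(0.03 + 0.45, 1e6) is ~0.48 TB, i.e. ~0.5 TB per 10^6 events.
```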

11 Software development – Write the component – Software validation sequence: ● Local environment (first filter) ● Stability – debugger ● Memory consumption – valgrind, memstat (ROOT) ● CPU profiling – callgrind, vtune ● Output – rough, quantitative if possible ● Batch system (second filter) ● Memory consumption – valgrind, memstat ● CPU profiling ● Output – same as local, but on bigger statistics ● PROOF ● For rapid development – fast user feedback ● Iterative improvement of algorithms, selection criteria, ... ● Processing on bigger statistics ● Be ready for GRID/AliEn ● Processing on bigger statistics

12 PROOF experience (0) ● It is impossible to write code without bugs ● Only privileged users can use a debugger directly on the PROOF slaves and/or master ● Normal users have to debug their code locally ● ==> It would be nice if the code running locally and on PROOF could be the same – It is (almost) the case now – Input lists? TProofFile?

13 PROOF experience (1) ● Debugging on PROOF is not trivial ● We tried dumping important system information (memory, CPU, ...) into a special log file. Analyzing these TTrees really helps to understand processing problems.
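The monitoring idea above, recording (time, memory, CPU) samples during processing and querying them afterwards, can be sketched without ROOT. In the real setup the samples land in syswatch.log and are browsed with TTree::Draw; the record layout and function below are invented for illustration.

```cpp
#include <vector>

// One monitoring sample, as would be appended to a syswatch-style log.
struct SysWatchEntry {
    double time;   // seconds since the start of processing
    double memMB;  // resident memory at that moment
};

// A typical query on the collected log: the peak memory of the job,
// useful for spotting leaks or oversized events on a slave.
double PeakMemoryMB(const std::vector<SysWatchEntry>& log) {
    double peak = 0;
    for (const auto& e : log)
        if (e.memMB > peak) peak = e.memMB;
    return peak;
}
```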

14 PROOF experience (2) ● In our approach the users can generate files with preprocessed information ● To allow users to manage those files we had to create a way to interact with XRD in a "file system manner" – AliXRDProofToolkit: ● Generates a list of files – similar to the find command ● Checks the consistency of the data in the list and rejects corrupted files. Corrupted files are one of our biggest problems, as the network at GSI is less stable than at CERN. ● Processes log files
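The consistency check on a file list reduces to a filter over a validity predicate. This sketch uses a stub predicate (flagging paths containing ".bad") in place of the real check, which would open each file over XRD and verify that its trees are readable; none of the names below come from AliXRDProofToolkit.

```cpp
#include <string>
#include <vector>

// Stub validity probe; the real check opens the file and reads its trees.
bool LooksValid(const std::string& path) {
    return path.find(".bad") == std::string::npos;
}

// Keep only the entries of a generated file list that pass the probe,
// so corrupted files never reach the PROOF workers.
std::vector<std::string> FilterFileList(const std::vector<std::string>& files) {
    std::vector<std::string> good;
    for (const auto& f : files)
        if (LooksValid(f))
            good.push_back(f);
    return good;
}
```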

15 Conclusion ● PROOF is easy to use and well suited to our needs ● We have observed some problems, but they are usually fixed quickly ● We use it successfully for the development of calibration components ● Further development of tools to simplify debugging on PROOF is very welcome.

