Computing for Alice at GSI (Proposal) (Marian Ivanov)


1 Computing for Alice at GSI (Proposal) (Marian Ivanov)

2 Outline
● Priorities
● Assumptions
● Proposal
  – GSI Tier2: special role
  – Focus on calibration and alignment (TPC and TRD)

3 Priorities (2007-2008)
● Detector calibration and alignment (TPC-ITS-TRD)
  – First test: cosmic and laser runs – October 2007
  – To be ready for the first pp collisions
● First paper
  – Time scale depends on the success of the October tests
  – Goal: ~1 week of data (statistics of about 10^4-10^5 events)
● ==> Calibration and alignment have the TOP priority (2007-2008)

4 Assumptions
● CPU requirements (relative)
  – Simulation ~ 400 a.u.
  – Reconstruction ~ 100 a.u.
  – Alignment ~ 1 a.u.
  – Calibration ~ 1 a.u.
● Several passes through the data are necessary to verify and improve the calibration and alignment
● The time scale for one iteration is ~ minutes to hours ==>
● The calibration and alignment algorithms should be decoupled from simulation and reconstruction
● The reconstruction should be repeated after retuning the calibration

5 Assumptions – Data volume to process and accessibility
● pp event sizes:
  – ESD size ~ 0.03 MB/event
  – ESDfriend ~ 0.45 MB/event
  – (0.5 TB - 5 TB) per 10^6 events (no overlaps vs. 10 overlapped events)
● Accessibility (ESD + friends / raw data, zero suppressed):
  – Local: ~10^5-10^6 pp / 10^4 pp
  – Batch: ~10^6-10^7 pp / 10^5 pp
  – PROOF: ~10^6-10^7 pp / -
  – Grid:  >10^7 pp / 10^6 pp

6 Assumptions
● Type of analysis (requirements), first priority:
  – Calibration of the TPC: 10^4-10^5 pp events
  – Validation of the reconstruction: 10^4-10^5 pp events
  – Alignment TPC, TPC-ITS: 10^5 pp + 10^4-10^5 cosmic events

7 Assumptions
● AliEn and PROOF are in the test phase
  – Improving over time, but still fragile
  – It is difficult to distinguish software bugs in the analysis and calibration code from AliEn and PROOF internal problems
● Chaotic (user) analysis can lead to chaos
  – No quotas or priorities exist yet
  – Some restrictions are already implemented, e.g. on AliEn jobs can be submitted only for staged files
  – The requirements for staging files are officially defined by the PWGs
  – What are the requirements from the detectors?

8 Assumptions
● ALICE test in October (in one month)
  – Full stress test of the system
  – Significant data volume: ~20 TB of raw data from the test of 2 sectors (2006)
● Bottleneck (2006): the processing time was dominated by data access; the CPU time was negligible
● We should be prepared for different scenarios
  – We would like to start with the data copied to GSI and reconstruct/calibrate/align locally, later switching to the GRID (as we did in 2006)
  – This approach enables several fast iterations over the data

9 Proposal
● The algorithmic part of our analysis and calibration software should be independent of the running environment
  – The TPC calibration classes (components) are an example: run and tuned OFFLINE, used in HLT, DAQ and Offline
● Analysis and calibration code should be written following a component-based model
  – TSelector (for PROOF) and AliAnalysisTask (on GRID/AliEn) become just simple wrappers

10 Example - Component
● Component:

    class AliTPCcalibTracks : public TNamed {
      ...
    public:
      ...
      virtual void ProofSlaveBegin(TList *output);
      virtual void ProcessTrack(AliTPCseed *seed);
      void Merge(TCollection *list);
      // histograms, fitters ...
    };

● Selector - wrapper:

    class AliTPCSelectorTracks : public TSelector {
      ...
      AliTPCcalibTracks *fCalibTracks; //! calib tracks object
    };

    Bool_t AliTPCSelectorTracks::Process(Long64_t entry)
    {
      ...
      fCalibTracks->ProcessTrack(seed);
      ...
    }

11 Software development
● Write the component
● Software validation sequence:
  1) Local environment (first filter)
     1) Stability – debugger
     2) Memory consumption – valgrind, memstat (ROOT)
     3) CPU profiling – callgrind, VTune
     4) Output – rough, quantitative if possible
  2) Batch system (second filter)
     1) Memory consumption – valgrind, memstat
     2) CPU profiling
     3) Output – better statistics
  3) PROOF
     1) For rapid development – fast user feedback
     2) Iterative improvement of algorithms, selection criteria ...
     3) Improved statistics
  4) Be ready for GRID/AliEn
     1) Improved statistics

12 What should be done
● Test data transfer to GSI (~TBytes)
  – Using AliEn – Kilian
  – Using xrdcp (a file-system toolkit on top of xrootd still has to be developed)
● Write the components
● Test and use PROOF at GSI

13 Conclusion
● An analysis/calibration schema has been proposed
● It is no harder to implement than a regular TSelector/AliAnalysisTask
● The TPC calibration has already been partly implemented using the proposed schema
● The schema allows much better testing of the code before proceeding to the next step (Local -> Batch/PROOF -> GRID)
● Since the same components are used at each step, we are not blocked if one of the distributed systems does not work properly

