
1 Phase 2 of the Physics Data Challenge ‘04
Peter Hristov, for the ALICE DC team
Russia-CERN Joint Group on Computing
CERN, September 20, 2004

2 Outline
- Purpose and conditions of Phase 2
- Job structure
- Experiences and improvements: AliEn and LCG
- Statistics (up to today)
- Toward Phase 3
- Conclusions

3 Phase 2 purpose and tasks
- Merging of signal events with different physics content into the underlying Pb+Pb events (the underlying events are reused several times)
- Tests of:
  - standard production of signal events
  - stress test of the network and the file-transfer tools
  - storage at remote SEs and its stability (crucial for Phase 3)
- Conditions and jobs (a quick consistency check follows this slide):
  - 62 different conditions
  - 340K jobs, 15.2M events
  - 10 TB of produced data
  - 200 TB of data transferred from CERN
  - 500 MSI2K hours of CPU
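Taken together, these figures imply modest per-job loads. A back-of-the-envelope check (the inputs are the slide's figures; the per-job numbers are derived here, not quoted in the talk):

    # Rough consistency check of the Phase 2 volumes listed above.
    # Inputs are from the slide; per-job numbers are derived estimates.
    jobs = 340_000
    events = 15_200_000
    data_tb = 10.0
    cpu_msi2k_hours = 500.0

    print(f"events per job      : {events / jobs:.1f}")         # ~44.7
    print(f"data per job (MB)   : {data_tb * 1e6 / jobs:.1f}")  # ~29.4
    print(f"kSI2K hours per job : {cpu_msi2k_hours * 1e3 / jobs:.2f}")  # ~1.47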

4 Repartition of tasks (physics signals)
(table of signal types shown on the slide; not reproduced in the transcript)

5 Structure of event production in Phase 2 (schematic, sketched in pseudocode after this slide):
- Central servers: master job submission, Job Optimizer (splits into N sub-jobs), RB, file catalogue, process monitoring and control, SE, …
- Sub-jobs are dispatched to CEs for processing, both directly and through the AliEn-LCG interface (LCG RB)
- Input: underlying-event files read from CERN CASTOR
- Output: files zipped into an archive; primary copy stored at the local SE, backup copy at CERN CASTOR
- Registration in the AliEn file catalogue; for an LCG SE, the LCG LFN equals the AliEn PFN (edg/lcg copy&register)
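The flow above, as a hedged pseudocode sketch; every name in it (optimizer_split, run_master_job, the CASTOR paths) is an illustrative stand-in, not the real AliEn interface:

    # Hedged pseudocode of the Phase 2 production flow described above.

    def optimizer_split(condition: str, n: int) -> list:
        """Job Optimizer: split one master job into n sub-jobs."""
        return [{"condition": condition, "index": i} for i in range(n)]

    def run_master_job(condition: str, n_subjobs: int) -> None:
        for job in optimizer_split(condition, n_subjobs):
            # 1. The sub-job reads underlying Pb+Pb events from CERN CASTOR.
            job["input"] = f"castor://cern/underlying/{job['index']:05d}"
            # 2. It runs on a CE (an AliEn site, or LCG through the
            #    AliEn-LCG interface) and zips its outputs into one archive.
            archive = f"{condition}-{job['index']:05d}.zip"
            # 3. Primary copy to the local SE, backup copy to CERN CASTOR.
            for destination in ("local-se", "castor://cern/backup"):
                print(f"store {archive} -> {destination}")
            # 4. Register in the AliEn file catalogue; for LCG storage the
            #    LCG LFN is set equal to the AliEn PFN (edg/lcg copy&register).
            print(f"register {archive} in AliEn FC")

    run_master_job("jet-quenched-cent1", n_subjobs=3)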

6 Experience: AliEn
- AliEn system improvements:
  - the process tables were split into "running" (lightweight) and "done" (archive), which allows for faster process tracking
  - symbolic links and event groups were implemented (through sophisticated search algorithms): underlying events are grouped, via symbolic links, into a directory per signal event type. For example, 1660 underlying events are used for each jet signal condition, another 1660 for the next, and so on, up to 20000 in total (12 conditions); a sketch follows this slide
  - zip archiving was implemented, mainly to overcome the limitations of the taping systems (fewer, larger files)
  - fast resubmission of failed jobs; in this phase all jobs must finish
  - new job monitoring tools, including single-job trace logs from start to finish, with logical steps and timing
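A minimal sketch of the symbolic-link grouping, assuming a plain filesystem layout; the real AliEn catalogue paths and naming conventions differ:

    # Group underlying events into one directory per signal condition.
    # Paths below are hypothetical illustrations, not the AliEn layout.
    import os

    UNDERLYING = "/alice/pdc04/underlying"   # hypothetical event pool
    GROUPS = "/alice/pdc04/jet-groups"       # hypothetical per-condition dirs
    EVENTS_PER_CONDITION = 1660
    N_CONDITIONS = 12                        # 12 x 1660 ~ 20000 events in total

    for cond in range(N_CONDITIONS):
        group_dir = os.path.join(GROUPS, f"condition-{cond:02d}")
        os.makedirs(group_dir, exist_ok=True)
        start = cond * EVENTS_PER_CONDITION
        for ev in range(start, start + EVENTS_PER_CONDITION):
            name = f"event-{ev:05d}.root"
            # A symlink, not a copy: the same underlying event can be reused
            # by several signal conditions without duplicating the data.
            os.symlink(os.path.join(UNDERLYING, name),
                       os.path.join(group_dir, name))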

7 AliEn problems
- Proxy server: ran out of memory due to a spiralling number of proxy connections. An attempt to introduce a schema with a pre-forked, limited number of proxies (sketched after this slide) was not successful, and the problem has to be studied further.
  - Not a show-stopper: we know what to monitor and how to avoid it.
- JobOptimizer: because of the very complex structure of the jobs (many files in the input box), the time needed to prepare one job for submission is large, and the service sometimes cannot supply enough jobs to fill the available resources.
  - Not a show-stopper now: we are mixing jobs of different execution lengths, thus load-balancing the system.
  - Has to be fixed for Phase 3, where the input boxes will be even larger and the processing time very short; ideas on how to speed up the system already exist.
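The pre-forked proxy schema mentioned above, as the generic textbook pattern only (this is not AliEn's proxy code): a fixed pool of workers shares one listening socket, capping memory use instead of forking one proxy per connection.

    # Generic pre-fork pattern; handle_client is a placeholder stand-in.
    import os
    import socket

    MAX_PROXIES = 8  # hard cap on concurrent proxy workers

    def handle_client(conn: socket.socket) -> None:
        # Placeholder for the real proxy work (relaying the client's request).
        conn.sendall(b"ok\n")

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", 9000))
    listener.listen(64)

    in_child = False
    for _ in range(MAX_PROXIES):
        if os.fork() == 0:       # child: become one long-lived proxy worker
            in_child = True
            break

    if in_child:
        while True:              # workers share the listening socket; the
            conn, _ = listener.accept()  # kernel spreads connections among them
            handle_client(conn)
            conn.close()
    else:
        for _ in range(MAX_PROXIES):
            os.wait()            # parent only supervises the fixed pool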

8 Experience: LCG
- A document on Data Challenges on LCG is being finalised by the GAG, with contributions from ALICE, ATLAS and LHCb
- LCG problems and solutions:
  - General:
    - problem -> report -> fix -> green light, but no feedback; then the same problem appears somewhere else
    - direct contact with the site managers can be useful
  - Job management (a worked example follows this slide):
    - on average, it works fairly well
    - the maximum number of CPUs served by one RB is roughly the average job duration divided by the submission time per job
    - maximum submission rate to LCG: 720 jobs/hour; for us it is less, since we do more than just submission
    - one entry point does not scale to the size of the whole system
    - no tools for managing multiple jobs at once
    - ranking sites by [1 - (jobs waiting)/(total CPUs)] works well, but it is not the default
    - jobs reported as "Running" by LCG fail to report to AliEn that they started, so they stay "queued" forever
    - jobs stay "Running" forever, even when the site managers report their completion
    - jobs reported as "cancelled by user" even when they were not
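A worked example of the two expressions above, using the quoted 720 jobs/hour and the cent1 job duration (2h 30min) reported later in the talk:

    # Worked example of the RB capacity rule of thumb and the ranking formula.
    SUBMISSION_RATE = 720                       # jobs/hour (slide figure)
    submission_time = 3600 / SUBMISSION_RATE    # = 5 s per job

    job_duration = 2.5 * 3600                   # cent1: ~2h30, in seconds
    max_cpus = job_duration / submission_time
    print(f"one RB can keep ~{max_cpus:.0f} CPUs busy")   # ~1800

    def rank(jobs_waiting: int, total_cpus: int) -> float:
        """Site ranking from the slide: prefer sites with short queues."""
        return 1.0 - jobs_waiting / total_cpus

    print(rank(jobs_waiting=50, total_cpus=200))    # 0.75: lightly loaded
    print(rank(jobs_waiting=300, total_cpus=200))   # -0.5: saturated site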

9 LCG problems and solutions (cont'd)
- Data management:
  - "default SE" vs. "close SE"
  - edg-rm commands
  - lcg-cr: lack of diagnostic information
  - a possible fix for temporarily unavailable SEs (a hedged sketch follows this slide)
- Sites/configuration:
  - "black-hole" effect: jobs fail at a site, and more and more jobs are attracted to it
  - "alicesgm" not allowed to write in the software installation area
  - environment variables: VO_ALICE_SW_DIR not set
  - misconfiguration: FZK with INFN certificates; Cambridge: bash not supported
  - "default SE" vs. "close SE" (see above)
  - library configuration: CNAF (solved, how?), CERN (?)
  - NFS not working: multiple job failures (see the "black-hole" effect)
- Stability: behaviour is anything but uniform in time, but the general picture is improving
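One plausible shape for the "temporarily unavailable SE" fix is to retry the copy-and-register and fall back to another SE. A sketch only; the lcg-cr options below are assumptions, to be checked against the LCG data-management documentation:

    # Hedged retry-with-fallback wrapper around lcg-cr for flaky SEs.
    # The exact lcg-cr invocation is an assumption, not verified syntax.
    import subprocess
    import time

    def copy_and_register(local_file, lfn, storage_elements, retries=3):
        for se in storage_elements:          # try the close SE first, then fall back
            for attempt in range(retries):
                cmd = ["lcg-cr", "--vo", "alice", "-d", se, "-l", lfn,
                       f"file:{local_file}"]
                result = subprocess.run(cmd, capture_output=True, text=True)
                if result.returncode == 0:
                    return se                # registered: remember where it landed
                time.sleep(30 * (attempt + 1))  # back off before retrying this SE
        raise RuntimeError(f"all SEs unavailable for {lfn}")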

10 Some results (19/09/04)
- Phase 2 statistics (start July 2004, end September 2004):
  - jet signals, unquenched and quenched, cent1: complete
  - jet signals, unquenched, per1: complete
  - jet signals, quenched, per1: 30% complete
  - special TRD production at CNAF: phase 1 running
- Number of jobs: 85K (the number of jobs done per day is accelerating)
- Number of output files: 422K data, 390K log
- Data volume: 3.4 TB at local SEs, 3.4 TB at CERN (backup)
- Job duration: 2h 30min (cent1), 1h 20min (per1); careful profiling of AliRoot and clean-up of the code has reduced the processing time by a factor of 2

11 LCG contribution to Phase 2 (15/09)
- Mixing + reconstruction: the "more difficult" case, with a large input to be transferred to the CE and the output going to an SE local to the CE that executes the job
- Jobs over the last month (15K jobs sent; tallied after this slide):
  - DONE: 5990
  - ERROR_IB: 1411 (error staging the input)
  - ERROR_V: 3090 (insufficient memory on the WN, or AliRoot failure)
  - ERROR_SV: 2195 (Data Management or Storage Element failure)
  - ERROR_E: 1277 (typically NFS failures, so the executable is not found)
  - KILLED: 219 (jobs that fail to contact the AliEn server when started and stay QUEUED forever, while remaining "Running", also forever, in LCG)
  - RESUB: 851
  - FAILED: 330
- Tests of: Data Management services, Storage Element
- Remarks:
  - up to 400 jobs running on LCG through a single interface
  - no more use of Grid.it (to avoid managing too many sites for Phase 3)
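The same counts as a quick tally (the counts are from the slide; the total and the percentages are derived here):

    # Job outcomes from the slide; total and percentages are derived.
    counts = {
        "DONE": 5990, "ERROR_IB": 1411, "ERROR_V": 3090, "ERROR_SV": 2195,
        "ERROR_E": 1277, "KILLED": 219, "RESUB": 851, "FAILED": 330,
    }
    total = sum(counts.values())   # 15363, consistent with "15K jobs sent"
    for state, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{state:9s} {n:5d}  {100 * n / total:5.1f}%")
    # DONE is ~39%: mixing+reconstruction was the "more difficult" workload.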

12 Individual sites: CPU contribution
(chart shown on the slide; not reproduced in the transcript)
- Under direct AliEn control: 17 CEs, each with an SE; CERN-LCG encompasses the LCG resources worldwide (also with local/close SEs)

13 Individual sites: jobs successfully done
(chart shown on the slide; not reproduced in the transcript)

14 Toward Phase 3
- Purpose: distributed analysis of the data processed in Phase 2
- An AliEn analysis prototype already exists: designated experts are trying to work with it, but that is difficult while the production is running
- We want to use gLite as much as possible during this phase (and provide feedback)
- Service requirements:
  - in both Phase 1 and Phase 2, the service quality of the computing centres has been excellent, with very short response times in case of problems
  - Phase 3 will continue until the end of the year, so the remote computing centres will have to keep providing a high level of service
  - since the data are stored locally, interruptions of service will fail the analysis jobs or make them very slow; the backup copy at CERN is on tape only and would take a considerable amount of time to stage back if the local copy were not accessible
  - the above holds both for the centres directly controlled through AliEn and for the LCG sites

15 Conclusions
- Phase 2 of the PDC'04 is about 50% finished and is progressing well, despite its complexity
- There is keen competition for resources at all sites (LHCb and ATLAS are also running massive DCs)
- We have not encountered any show-stoppers; all production problems that arise are fixed very quickly by the AliEn and LCG crews
- The response of the experts at the computing centres is very efficient
- We are also running a considerable number of jobs on LCG sites, and LCG is performing very well, with more and more resources being made available for ALICE thanks to the hard work of the LCG team
- In about three weeks we will seamlessly enter the last phase of the PDC'04
- It's not over yet, but we are getting close!

16 Acknowledgements
Special thanks to the site experts for the computing and storage resources and for the excellent support:
Francesco Minafra – Bari
Haavard Helstrup – Bergen
Roberto Barbera – Catania
Giuseppe Lo Re – CNAF Bologna
Kilian Schwarz – FZK Karlsruhe
Jason Holland – TLC² Houston
Galina Shabratova – IHEP, ITEP, JINR
Eygene Ryabinkin – KIAE Moscow
Doug Olson – LBL
Yves Schutz – CC-IN2P3 Lyon
Doug Johnson – OSC Ohio
Jiri Chudoba – Golias Prague
Andrey Zarochencev – SPBsU St. Petersburg
Jean-Michel Barbet – SUBATECH Nantes
Mario Sitta – Torino
And to Patricia Lorenzo – LCG contact person for ALICE

