
Slide 1: LHCb Data Challenge 2004
A. Tsaregorodtsev, Marseille; N. Brook, Bristol/CERN
GAG Meeting, 5 July 2004, CERN

Slide 2: Goals of DC'04
- Main goal: gather information to be used for writing the LHCb computing TDR/TP
- Robustness test of the LHCb software and production system
  - using software as realistic as possible in terms of performance
- Test of the LHCb distributed computing model
  - including distributed analysis: a realistic test of the analysis environment needs realistic analyses
- Incorporation of the LCG application area software into the LHCb production environment
- Use of LCG resources as a substantial fraction of the production capacity

Slide 3: DC 2004 phases
- Phase 1 – MC data production
  - 180M events of different signals, background and minimum bias
  - simulation + reconstruction
  - DSTs are copied to Tier1 centres
- Phase 2 – Data reprocessing
  - selection of various physics streams from the DSTs
  - copy the selections to all Tier1 centres
- Phase 3 – User analysis
  - user analysis jobs on DST data distributed over all the Tier1 centres

Slide 4: Phase 1 – MC production

Slide 5: DIRAC Services and Resources
[Architecture diagram: user interfaces (Production manager, GANGA UI, user CLI, Job monitor, BK query web page, FileCatalog browser) talk to the DIRAC services (Job Management Service, JobMonitorSvc, JobAccountingSvc with its AccountingDB, InformationSvc, FileCatalogSvc, MonitoringSvc, BookkeepingSvc); the DIRAC resources are DIRAC CEs, DIRAC site agents (CE 1, CE 2, CE 3), the LCG Resource Broker, and DIRAC Storage (disk files accessed via gridftp, bbftp, rfio).]

Slide 6: Software to be installed
- Before an LHCb application can run on a Worker Node, the following software components must be installed:
  - the application software itself;
  - the software packages on which the application depends;
  - the necessary (file-based) databases;
  - the DIRAC software.
- A single untar command installs everything in place (see the sketch below)
- All the necessary libraries are included – no assumption is made about what software is available on the destination site (except a recent Python interpreter):
  - external libraries;
  - compiler libraries;
  - ld-linux.so
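The "untar in place" installation can be pictured with a short sketch. This is not the actual DIRAC installation utility; the repository URL and the tarball naming scheme below are assumptions for illustration only.

    # Minimal sketch of a relocatable "untar in place" install; everything
    # lands under a user-owned directory, nothing is installed system-wide.
    import os
    import tarfile
    import urllib.request

    REPO_URL = "http://lhcb-sw-repo.example.org"   # hypothetical repository

    def install_package(name, version, install_root):
        """Fetch <name>-<version>.tar.gz from the repository and unpack it
        under install_root."""
        tarball = f"{name}-{version}.tar.gz"
        local = os.path.join(install_root, tarball)
        os.makedirs(install_root, exist_ok=True)
        urllib.request.urlretrieve(f"{REPO_URL}/{tarball}", local)
        with tarfile.open(local) as tf:
            tf.extractall(install_root)            # everything stays in user space
        return os.path.join(install_root, f"{name}-{version}")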

Slide 7: Software installation
- Software repository:
  - web server (http protocol)
  - LCG Storage Element
- Installation in place, the DIRAC way:
  - by the Agent upon reception of a job with particular software requirements; OR
  - by the running job itself.
- Installation in place, the LCG2 way:
  - a special kind of job running the standard DIRAC software installation utility

Slide 8: Software installation in the job
- A job may need extra software packages that are not in place on the CE:
  - a special version of the geometry;
  - user analysis algorithms.
- Any number of packages can be installed by the job itself (up to all of them)
- Packages are installed in the job's user space
- The structure of the standard LHCb software directory tree is imitated with symbolic links (see the sketch below)
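A minimal sketch of the symbolic-link trick described above; the shared-area and local-area directory layout here is an assumption, not the actual DIRAC layout.

    # Link every package already present in the shared software area into a
    # local tree; anything missing would then be installed into local_area
    # (e.g. with install_package from the earlier sketch).
    import os

    def build_local_sw_tree(shared_area, local_area, extra_packages):
        os.makedirs(local_area, exist_ok=True)
        if os.path.isdir(shared_area):
            for pkg in os.listdir(shared_area):
                link = os.path.join(local_area, pkg)
                if not os.path.exists(link):
                    os.symlink(os.path.join(shared_area, pkg), link)
        # Return the packages the job still has to install in its own space.
        return [p for p in extra_packages
                if not os.path.exists(os.path.join(local_area, p))]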

Slide 9: 3rd party components
- Originally DIRAC aimed at producing the following components itself:
  - production database;
  - metadata and job provenance database;
  - workload management.
- Expected 3rd party components:
  - data management (FileCatalogue, replica management)
  - security services
  - information and monitoring services
- Expectations of an early delivery of the ARDA prototype components were not met

Slide 10: File catalog service
- The LHCb Bookkeeping was not meant to be used as a File (Replica) Catalog:
  - its main use is as a metadata and job provenance database;
  - its replica catalog is based on specially built views.
- The AliEn File Catalog was chosen to get a (full) set of the necessary functionality:
  - hierarchical structure: logical organization of data with optimized queries; ACLs by directory; metadata by directory; file-system paradigm;
  - robust, proven implementation;
  - easy to wrap as an independent service, inspired by the ARDA RTAG work.

Slide 11: AliEn FileCatalog in DIRAC
- The AliEn FC SOAP interface was not ready at the beginning of 2004
- We had to provide our own XML-RPC wrapper, compatible with the XML-RPC BK File Catalog:
  - using the AliEn command line ("alien -exec");
  - ugly, but it works (see the sketch below).
- The service built on top of AliEn is run by the lhcbprod AliEn user:
  - not really using the AliEn security mechanisms;
  - using AliEn version 1.32.
- So far in DC2004:
  - >100,000 files with >250,000 replicas;
  - very stable performance.
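The wrapper approach can be pictured as follows. This is not the actual DIRAC code: the exposed method names and the exact catalog commands passed through "alien -exec" are illustrative assumptions.

    # Minimal sketch of an XML-RPC service that shells out to the AliEn CLI.
    import subprocess
    from xmlrpc.server import SimpleXMLRPCServer

    def _alien_exec(command):
        """Run a catalog command through the AliEn command line and return its output."""
        result = subprocess.run(["alien", "-exec", command],
                                capture_output=True, text=True, check=True)
        return result.stdout

    class AliEnFileCatalog:
        def addFile(self, lfn, pfn, size):          # illustrative method name
            return _alien_exec(f"register {lfn} {pfn} {size}")

        def getReplicas(self, lfn):                 # illustrative method name
            return _alien_exec(f"whereis {lfn}").splitlines()

    if __name__ == "__main__":
        server = SimpleXMLRPCServer(("0.0.0.0", 8080), allow_none=True)
        server.register_instance(AliEnFileCatalog())
        server.serve_forever()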

Slide 12: File catalogs
[Diagram: a DIRAC application or service uses a common FileCatalog client, which dispatches over XML-RPC to two backends: the AliEn FileCatalog service (AliEn FC client -> XML-RPC server -> AliEn UI -> AliEn FC on MySQL) and the BK FileCatalog service (BK FC client -> XML-RPC server -> LHCb BK DB on ORACLE).]
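A rough sketch of the common FileCatalog client shown in the diagram, fanning a registration out to several backend catalogs over XML-RPC. The endpoint URLs and the registerFile method name are assumptions for illustration.

    import xmlrpc.client

    class FileCatalogClient:
        """One interface; every configured catalog is kept up to date."""
        def __init__(self, endpoints):
            self.catalogs = [xmlrpc.client.ServerProxy(url) for url in endpoints]

        def registerFile(self, lfn, pfn, size):
            results = {}
            for catalog in self.catalogs:
                try:
                    results[str(catalog)] = catalog.registerFile(lfn, pfn, size)
                except Exception as exc:   # one failing catalog must not block the others
                    results[str(catalog)] = exc
            return results

    # Usage, e.g. registering a DST in both the AliEn and Bookkeeping catalogs:
    # fc = FileCatalogClient(["http://alien-fc.example.org:8080",
    #                         "http://lhcb-bk.example.org:8080"])
    # fc.registerFile("/lhcb/dc2004/dst/00001.dst", "some-physical-file-name", 200_000_000)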

Slide 13: Data Production – 2004
- Currently distributed data sets:
  - CERN: complete DST (copied directly from the production centres)
  - Tier1: master copy of the DSTs produced at the associated sites
- DIRAC sites: Bologna, Karlsruhe, Spain (PIC), Lyon, UK sites (RAL); all others go to CERN
- LCG sites: currently only 3 grid (MSS) SE sites – CASTOR at Bologna, PIC and CERN
  - Bologna: ru, pl, hu, cz, gr, it
  - PIC: us, ca, es, pt, tw
  - CERN: elsewhere
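The country-to-SE routing above amounts to a simple lookup table; a minimal sketch, with the site names standing in for the actual SE names:

    # Route output to a mass-storage SE by the originating country code,
    # following the mapping on this slide; anything not listed goes to CERN.
    DESTINATION_BY_COUNTRY = {
        "ru": "Bologna", "pl": "Bologna", "hu": "Bologna",
        "cz": "Bologna", "gr": "Bologna", "it": "Bologna",
        "us": "PIC", "ca": "PIC", "es": "PIC", "pt": "PIC", "tw": "PIC",
    }

    def destination_se(country_code):
        return DESTINATION_BY_COUNTRY.get(country_code.lower(), "CERN")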

Slide 14: DIRAC DataManagement tools
- DIRAC Storage Element:
  - IS description + server (bbftpd, sftpd, httpd, gridftpd, xmlrpcd, file, rfio, etc.)
  - no special service installation is needed on the site
  - description in the Information Service: host, protocol, local path
- ReplicaManager API for common operations (see also the sketch below):
  - copy(), copyDir(), get(), exists(), size(), mkdir(), etc.
  - examples of usage:
    - dirac-rm-copyAndRegister
    - dirac-rm-copy dc2004.dst CERN_Castor_BBFTP
- The Tier0 SE and Tier1 SEs are defined in the central IS
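To make the ReplicaManager idea concrete, here is a minimal, self-contained sketch in which an SE is nothing but a description (host, protocol, local path) and only the trivial local-file protocol is implemented. The class layout and the SE definition are assumptions, not the actual DIRAC API.

    import os
    import shutil

    STORAGE_ELEMENTS = {            # normally taken from the central Information Service
        "Local_Disk_SE": {"host": "localhost", "protocol": "file", "path": "/tmp/dirac-se"},
    }

    class ReplicaManager:
        def _root(self, se_name):
            se = STORAGE_ELEMENTS[se_name]
            assert se["protocol"] == "file", "only the local-file protocol is sketched here"
            return se["path"]

        def mkdir(self, se_name, directory):
            os.makedirs(os.path.join(self._root(se_name), directory.lstrip("/")), exist_ok=True)

        def copy(self, local_file, se_name, directory="/"):
            self.mkdir(se_name, directory)
            shutil.copy(local_file, os.path.join(self._root(se_name), directory.lstrip("/")))

        def exists(self, se_name, path):
            return os.path.exists(os.path.join(self._root(se_name), path.lstrip("/")))

        def size(self, se_name, path):
            return os.path.getsize(os.path.join(self._root(se_name), path.lstrip("/")))

    # Usage, roughly mirroring "dirac-rm-copy dc2004.dst <SE>":
    # rm = ReplicaManager(); rm.copy("dc2004.dst", "Local_Disk_SE", "/lhcb/dc2004/dst")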

Slide 15: Reliable Data Transfer
- Any data transfer should eventually be accomplished despite temporary failures of services or networks:
  - multiple retries of failed transfers, with whatever delay is necessary, until the services are up and running (not applicable to LCG jobs);
  - multiple retries of the registration in the catalog.
- Transfer Agent:
  - maintains a database of transfer requests;
  - transfers datasets or whole directories together with log files;
  - retries transfers until success (see the sketch below).
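The retry behaviour can be sketched as a simple loop over a queue of transfer requests; transfer_file and register_replica below are hypothetical hooks standing in for the real transfer and catalog calls.

    import time

    def process_requests(requests, transfer_file, register_replica, delay=300):
        """requests: list of (source, destination_se, lfn) tuples."""
        pending = list(requests)
        while pending:
            still_pending = []
            for source, destination_se, lfn in pending:
                try:
                    transfer_file(source, destination_se)    # may raise on network/SE failure
                    register_replica(lfn, destination_se)    # may raise if the catalog is down
                except Exception:
                    still_pending.append((source, destination_se, lfn))   # retry later
            pending = still_pending
            if pending:
                time.sleep(delay)    # wait for the services to come back before retrying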

Slide 16: DIRAC DataManagement tools
[Diagram: the Job submits transfer requests; the Data Manager, Data Optimizer and Transfer Agent work off the Transfer DB and a local cache to move the data to SE 1 and SE 2.]

Slide 17: DIRAC DC2004 performance
- In May–June:
  - simulation + reconstruction
  - >60,000 jobs
  - ~70M events
  - ~20 TB of data at CERN, PIC, Lyon, CNAF, RAL
  - >150,000 files in the catalogs
  - ~2,000 jobs running continuously, up to 3,000 at peak

Slide 18: LHCb DC'04 Accounting

Slide 19: Next Phases – Reprocessing and Analysis

Slide 20: Data reprocessing and analysis
- Preparing the data reprocessing phase:
  - stripping – selecting events from the DST files into several output streams, one per physics group;
  - jobs are scheduled to the sites where the needed data are: the Tier1s (CERN, Lyon, PIC, CNAF, RAL, Karlsruhe);
  - the workload management system is capable of automatically scheduling a job to a site holding its data (see the sketch below);
  - tools are being prepared to formulate the reprocessing tasks.
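Data-driven scheduling of this kind boils down to intersecting the replica locations of all input files with the list of eligible sites; a minimal sketch with an illustrative in-memory catalog:

    def replica_sites(lfn, catalog):
        """catalog: dict mapping LFN -> set of sites holding a replica."""
        return catalog.get(lfn, set())

    def candidate_sites(input_lfns, catalog, tier1_sites):
        """Sites eligible to run the job: Tier1s that host every input file."""
        sites = set(tier1_sites)
        for lfn in input_lfns:
            sites &= replica_sites(lfn, catalog)
        return sites

    # Example with an illustrative catalog:
    catalog = {"/lhcb/dc2004/dst/00001.dst": {"CERN", "PIC"},
               "/lhcb/dc2004/dst/00002.dst": {"CERN", "Lyon"}}
    print(candidate_sites(catalog.keys(), catalog,
                          ["CERN", "Lyon", "PIC", "CNAF", "RAL", "Karlsruhe"]))   # {'CERN'}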

Slide 21: Data reprocessing and analysis (2)
- User analysis:
  - GANGA is being interfaced to submit jobs to DIRAC
  - Submitting user jobs to DIRAC sites:
    - security concerns – jobs are executed by the agent account on behalf of the user
  - Submitting user jobs to LCG sites:
    - through DIRAC, to have common job Monitoring and Accounting;
    - using the user's certificate to submit to LCG;
    - no agent submission: a high failure rate is expected.

Slide 22: LCG experience

Slide 23: Production jobs
- Long jobs – 23 hours on average on a 2 GHz Pentium IV
- Simulation + digitization + reconstruction steps:
  - 5 to 10 steps in one job
- No event input data
- Output data – 1-2 output files of ~200 MB each:
  - stored to Tier1 and Tier0 SEs;
  - log files are copied to an SE at CERN.
- The AliEn and Bookkeeping catalogues are updated
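The overall shape of such a job can be sketched as follows; the step commands and the storage/registration hooks are illustrative stand-ins, not the actual production scripts.

    # Run the application steps in sequence, then store the outputs and
    # update the catalogues.
    import subprocess

    def run_production_job(steps, outputs, store_output, register_file):
        """steps: list of argument lists, e.g. [["gauss", "opts"], ["boole", "opts"], ...]"""
        for step in steps:
            subprocess.run(step, check=True)        # abort the job if any step fails
        for output in outputs:                      # typically 1-2 DST files of ~200 MB
            store_output(output, "Tier1_SE")
            store_output(output, "Tier0_SE")
            register_file(output)                   # AliEn FC and Bookkeeping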

Slide 24: Using LCG resources
- Different ways of scheduling jobs to LCG:
  - standard: jobs go via the Resource Broker;
  - direct: jobs go directly to a CE;
  - reservation.
- The reservation mode is used for the DC2004 production (see the sketch below):
  - agents are deployed to the WNs as LCG jobs;
  - DIRAC jobs are fetched by an agent only if the environment is OK;
  - the agent steers the job execution, including data transfers and the updates of the catalogs and bookkeeping.
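A minimal sketch of the reservation (pilot-agent) idea: the agent arrives on the WN as an LCG job, checks its environment, and only then pulls a real DIRAC job. The Job Management Service URL and its method names are assumptions.

    import shutil
    import subprocess
    import xmlrpc.client

    JOB_SVC_URL = "http://dirac-jms.example.org:8080"   # hypothetical endpoint

    def environment_ok():
        """Fetch a job only if the WN looks usable."""
        enough_disk = shutil.disk_usage(".").free > 2 * 1024**3   # ~2 GB of scratch space
        has_python = shutil.which("python3") is not None
        return enough_disk and has_python

    def run_agent():
        if not environment_ok():
            return                                  # let the LCG job finish without a payload
        jms = xmlrpc.client.ServerProxy(JOB_SVC_URL)
        job = jms.requestJob()                      # pull-style scheduling
        if job:
            subprocess.run(job["command"], shell=True, check=False)
            jms.setJobStatus(job["id"], "Done")     # agent reports back, updates monitoring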

Slide 25: Using LCG resources (2)
- Using the DIRAC DataManagement tools:
  - DIRAC SE + gridftp + sftp
- Starting to populate the RLS from the DIRAC catalogues:
  - for evaluation;
  - for use with the LCG ReplicaManager.

Slide 26: Resource Broker I
- No trivial way to use the tools for a large number of jobs, i.e. for production:
  - the command is re-authenticated for every job;
  - errors are produced when working with lists of jobs (e.g. retrieving non-terminated jobs);
  - slow to respond when a few hundred jobs are in the RB, e.g. 15 seconds to schedule a job.
- The ranking mechanism is meant to provide an even distribution of jobs:
  - the number of CPUs published is per site, not per user/VO, so requesting free CPUs in the JDL does not help.

Slide 27: Resource Broker II
- LCG, in general, does not advertise normalised time units:
  - Solution: request CPU resources for the slowest CPU (500 MHz)
  - Problem: only very few sites have long enough queues
  - Solution: the DIRAC agent scales the CPU requirement for the particular WN before requesting a job from DIRAC (see the sketch below)
  - Problem: some sites have normalised their units!
- Jobs with infinite loops:
  - a 3-day job in a week-long queue was killed by proxy expiry rather than by the CPU requirement.
- Jobs aborted with "proxy expired":
  - the RB was re-using old proxies!
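The agent-side workaround amounts to rescaling the job's CPU requirement by the ratio of the reference clock speed to the worker node's actual clock speed; in this sketch the reference value and the /proc/cpuinfo lookup are illustrative assumptions.

    REFERENCE_MHZ = 500.0          # requirement expressed for the slowest assumed CPU

    def local_cpu_mhz(cpuinfo_path="/proc/cpuinfo"):
        """Read the clock speed of this worker node (Linux only)."""
        with open(cpuinfo_path) as f:
            for line in f:
                if line.lower().startswith("cpu mhz"):
                    return float(line.split(":")[1])
        return REFERENCE_MHZ       # fall back to the pessimistic assumption

    def scaled_cpu_requirement(required_seconds_at_reference):
        """Convert the job's requirement into wall time on this particular WN."""
        return required_seconds_at_reference * REFERENCE_MHZ / local_cpu_mhz()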

Slide 28: Resource Broker III
- Jobs cancelled by the RB, but with the message "cancelled by user":
  - due to a loss of communication between the RB and the CE, the job is rescheduled and killed on the original CE;
  - some jobs are not killed until they fail because they cannot transfer their data;
  - DIRAC also re-schedules!
- The RB lost control of the status of all jobs:
  - the RB got "stuck", not responding to any request; solved without loss of jobs.

Slide 29: Disk Storage
- Jobs run in directories without enough space:
  - a running job needs ~2 GB – a problem where the site has jobs sharing the same disk server rather than using local WN space.

Slide 30: Reliable Data Transfer
- If a data transfer fails, the data produced on LCG is lost: there is no retry mechanism when the destination SE is temporarily unavailable.

Slide 31: Odds & Sods
- The LDAP of the globus-mds server stops:
  - OK – no jobs can be submitted to the site;
  - BUT there are also problems with authenticating GridFTP transfers.
- Empty output sandboxes:
  - tricky to debug!
- Jobs cancelled by the retry count:
  - occurs at sites with many jobs running;
  - DIRAC just submits more agents.

Slide 32: Conclusions

