CRAB: a user-friendly tool to perform CMS analysis in a grid environment

F. Fanzago – INFN Padova; S. Lacaprara – LNL; D. Spiga – Università di Perugia; M. Corvo – CERN; N. De Filippis – Università di Bari; A. Fanfani – Università di Bologna; F. Farina – INFN Milano; O. Gutsche – Fermilab

CRAB and the CMS distributed analysis chain

The CMS collaboration is developing a set of tools, interfaced with the grid services, to allow data analysis in a distributed environment. They include:
● installation of the CMS software on remote resources via the grid
● a "data transfer service" to move and manage the large flow of data among the Tiers
● a "data validation system" to ensure data consistency and readiness
● a "data location system" to keep track of the data available at each remote site, composed of different kinds of catalogues:
   – Dataset Bookkeeping System (DBS): knows which data exist and contains the CMS-specific description of the event data
   – Data Location Service (DLS): knows where data are stored (mapping between file-blocks and SEs)
   – local file catalogue: physical location of the local data on the remote SE
● a job monitoring and logging/bookkeeping system

On top of this chain sits a friendly interface that simplifies the creation and submission of analysis jobs to the grid: CRAB (CMS Remote Analysis Builder). The purpose of CRAB is to let users with no knowledge of the grid infrastructure run their analysis code on data available at remote sites as easily as in a local environment, hiding the grid details. Users only have to develop their analysis code in an interactive environment and decide which data to analyze; data discovery on remote resources, resource availability, status monitoring and output retrieval of the submitted jobs are fully handled by CRAB. CRAB can submit analysis jobs to different grid flavours (gLite, LCG and OSG) and can create jobs for different CMS software packages (job types): ORCA and CMSSW for analysis, FAMOS for fast simulation.

CRAB input — the user has to provide (an illustrative configuration sketch is shown below):
● the data parameters in the crab.cfg file: dataset name and number of events
● the analysis code and parameter cards
● the output file names and how to handle them (return the files to the UI or store them on an SE)

Main CRAB functionalities:
● input data discovery: the list of sites (SE names) where the data are stored, obtained by querying the data location service
● packaging of the user code: creation of a tgz archive with the user code and parameters
● job creation: a wrapper of the user executable to run on the WN; a JDL file in which the site location of the data (SE name) is passed to the RB as a requirement to drive the resource matchmaking; job splitting according to the user request
● job submission to the grid
● monitoring of the job status and output retrieval
● handling of the user output: copy to the UI or to a generic Storage Element

[Workflow diagram: CRAB on the UI queries the Data Bookkeeping System and the Data Location Service with the dataset name and number of events and obtains the list of SEs hosting the data (resolved via the local file catalogue); it then passes the JDL and job to the WMS, which dispatches them to a CE/WN close to the data; the job output is finally retrieved or stored on an SE.]
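To make the CRAB input above concrete, a minimal crab.cfg could look like the sketch below. This is only an illustration of the parameters described in the text (dataset, number of events, job splitting, output handling); the actual section and parameter names depend on the CRAB release and on the job type (ORCA, CMSSW, FAMOS), so everything here should be checked against the CRAB documentation in use.

```ini
; Illustrative crab.cfg sketch (not taken from the poster): section and
; parameter names are indicative and must be checked against the CRAB release.
[CRAB]
jobtype   = cmssw          ; job type: ORCA, CMSSW or FAMOS
scheduler = glite          ; grid flavour: gLite, LCG or OSG scheduler

[CMSSW]
datasetpath            = /MyPrimaryDataset/MyProcessedDataset/TIER  ; data to analyze
pset                   = my_analysis_cfg.py                         ; analysis parameter cards
total_number_of_events = 10000                                      ; events to analyze
events_per_job         = 1000                                       ; drives the job splitting
output_file            = my_output.root

[USER]
return_data = 1            ; return the output files to the UI ...
copy_data   = 0            ; ... or copy them to a Storage Element instead
; storage_element = <SE name>   ; only needed when copy_data = 1
```

Starting from such a configuration, the usual cycle is to create, submit, monitor and retrieve the jobs from the CRAB command line (for instance crab -create, crab -submit, crab -status, crab -getoutput in the CRAB versions of that period; again, the exact options should be checked against the release in use).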
CMS computing model and grid infrastructure

CMS (Compact Muon Solenoid) is one of the four particle-physics experiments that will collect data at the LHC (Large Hadron Collider) starting in 2007, aiming among other goals at the discovery of the Higgs boson. CMS will produce a large amount of data that will be stored in many computing centres in the countries participating in the CMS collaboration and made available for analysis to physicists distributed world-wide:
– a large amount of data to be analyzed
– a large community of physicists who want to access the data
– many distributed sites where the data will be stored

CMS will therefore use a distributed architecture based on the grid infrastructure, to ensure the availability of remote resources and remote data access to authorized users (belonging to the CMS Virtual Organization). The tools for accessing distributed data and resources are provided by the Worldwide LHC Computing Grid (WLCG), which covers the different grid flavours: LCG/gLite in Europe and OSG in the US. Remote data are thus accessible via the grid.

Amount of data (events):
– ~2 PB/year (assuming a start-up luminosity of 2x10^33 cm^-2 s^-1)
– all events will be stored in files: O(10^6) files/year
– files will be grouped into file-blocks, the data location unit: O(10^3) file-blocks/year
– file-blocks will be grouped into datasets: O(10^3) datasets after 10 years of CMS

The CMS offline computing system is arranged in four geographically distributed Tiers. [Diagram: data recorded by the online system/farm flow to the Tier-0 at the CERN computer centre, then to the Tier-1 regional centres (e.g. Fermilab, France, Italy), on to the Tier-2 centres and Tier-3 institutes, and down to the users' workstations/UIs.] During data acquisition, the data from the detector that pass the different trigger levels will be sent to the Tier-0, stored and first-pass reconstructed there, and then spread over the other Tiers depending on the kind of physics data. Until real data are available, the CMS community needs simulated data to study the detector response and the foreseen physics interactions and to gain experience with data management and analysis, so a large amount of simulated data is produced and distributed among the computing centres. The grid infrastructure also guarantees enough computing power for the simulation, processing and analysis of the data.

Main LCG middleware components: Virtual Organizations (CMS, ...), Resource Broker (RB), Replica Catalogue (LFC), Computing Elements (CEs), Storage Elements (SEs), Worker Nodes (WNs), User Interfaces (UIs). [Diagram: the job submission tools on the UI query the data location system for the data and the Information Service collector for the matchmaking; the Resource Broker / Workload Management System then dispatches the job to a CE close to an SE hosting the data.]

CRAB usage

[Plots: number of jobs submitted to the different grid flavours; top 20 used CEs and datasets; number of jobs submitted each month; number of submitted jobs sorted by job type; number of jobs submitted with CRAB and with CRAB + JobRobot.] In the per-flavour plot each bar represents the total number of jobs, divided into three categories: jobs whose user executable returned exit code 0, jobs whose user executable returned a non-zero exit status, and jobs that could not run because of grid problems. The job success rate is about 75%, where success means that the jobs arrived at the remote sites and produced their outputs; the remaining 25% abort because of site set-up problems or grid service failures (a minimal sketch of this accounting is shown below). CMSSW and ORCA are the CMS analysis software packages; ORCA belongs to the old CMS framework and is no longer supported.

More than 1,000,000 jobs have been submitted to the grid using the CRAB tool, and tens of physicists are using CRAB to analyze remote data stored at LCG and OSG sites. Roughly 1,500 jobs are submitted each day; the peaks of daily activity were in March-April 2006 for the preparation of the Physics Technical Design Report, in October 2005 for Service Challenge 3 (SC3) and in August-September 2006 for Service Challenge 4 (SC4). When real data are available, the expected daily rate is ~100,000 submitted jobs.
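The job accounting described above can be summarized with a short, self-contained Python sketch. This is not CRAB or monitoring code: the job-record fields (exe_exit_code, grid_aborted) and the numbers are hypothetical, chosen only to reproduce the three categories and the ~75% success rate quoted in the text.

```python
# Minimal sketch (not CRAB code) of the job accounting described above: every
# submitted job falls into one of three categories, and "success" means that
# the job reached a remote site and produced its output.
from collections import Counter

def classify(job):
    """Return the category of a single job record (hypothetical fields)."""
    if job.get("grid_aborted"):           # never ran because of grid problems
        return "grid_failure"
    if job.get("exe_exit_code") == 0:     # user executable finished cleanly
        return "exit_code_0"
    return "exe_failed"                   # user executable returned a non-zero status

def job_accounting(jobs):
    """Count the jobs per category and compute the success rate."""
    counts = Counter(classify(j) for j in jobs)
    total = sum(counts.values())
    arrived = counts["exit_code_0"] + counts["exe_failed"]   # reached a site and ran
    return counts, (arrived / total if total else 0.0)

# Made-up numbers reproducing the ~75% / ~25% split reported above.
jobs = ([{"exe_exit_code": 0}] * 70
        + [{"exe_exit_code": 1}] * 5
        + [{"grid_aborted": True}] * 25)
counts, rate = job_accounting(jobs)
print(counts)
print(f"success rate ~ {rate:.0%}")   # -> success rate ~ 75%
```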
During SC3 and SC4, CRAB has also been used through an automatic tool called JobRobot to spread jobs continuously over all the published data. This usage helps to expose weaknesses of the computing infrastructure, site installation problems and bottlenecks of the analysis chain, and to test the Workload Management components.

The CRAB tool is used to analyze remote data and to test the distributed analysis chain. CRAB proves that CMS users are able to use the available grid services and that the full analysis chain works in a distributed environment.

