1 RDMS CMS Computing
V.Gavrilov 1, I.Golutvin 2, V.Ilyin 3, O.Kodolova 3, V.Korenkov 2, E.Tikhonenko 2, S.Shmatov 2
1 – Institute of Theoretical and Experimental Physics, Moscow, Russia
2 – Joint Institute for Nuclear Research, Dubna, Russia
3 – Skobeltsyn Institute of Nuclear Physics, Moscow, Russia
NEC'2007, Varna, Bulgaria, September 10-17, 2007

2 Outline
- RDMS CMS Collaboration
- RDMS CMS Computing Model
- Current Status of RDMS CMS Computing
  - participation in CMS Service, Data and Software Challenges
  - participation in MTCC 2006
  - CMS DBs
  - participation in ARDA
- Summary

3 Composition of the RDMS CMS Collaboration
RDMS – Russia and Dubna Member States CMS Collaboration; founded in Dubna in September 1994.

Russia (Russian Federation):
- Institute for High Energy Physics, Protvino
- Institute for Theoretical and Experimental Physics, Moscow
- Institute for Nuclear Research, RAS, Moscow
- Moscow State University, Institute for Nuclear Physics, Moscow
- Petersburg Nuclear Physics Institute, RAS, St. Petersburg
- P.N. Lebedev Physical Institute, Moscow
Associated members:
- High Temperature Technology Center of Research & Development Institute of Power Engineering, Moscow
- Russian Federal Nuclear Centre – Scientific Research Institute for Technical Physics, Snezhinsk
- Myasishchev Design Bureau, Zhukovsky
- Electron, National Research Institute, St. Petersburg

Dubna Member States:
Armenia:
- Yerevan Physics Institute, Yerevan
Belarus:
- Byelorussian State University, Minsk
- Research Institute for Nuclear Problems, Minsk
- National Centre for Particle and High Energy Physics, Minsk
- Research Institute for Applied Physical Problems, Minsk
Bulgaria:
- Institute for Nuclear Research and Nuclear Energy, BAS, Sofia
- University of Sofia, Sofia
Georgia:
- High Energy Physics Institute, Tbilisi State University, Tbilisi
- Institute of Physics, Academy of Science, Tbilisi
Ukraine:
- Institute of Single Crystals of National Academy of Science, Kharkov
- National Scientific Center, Kharkov Institute of Physics and Technology, Kharkov
- Kharkov State University, Kharkov
Uzbekistan:
- Institute for Nuclear Physics, UAS, Tashkent
JINR:
- Joint Institute for Nuclear Research, Dubna

4 RDMS Participation in CMS Construction
[Detector schematic: RDMS full responsibility – ME1/1, HE; RDMS participation – SE, ME, EE, FS, HF]

5 RDMS Participation in CMS Project
Full responsibility, including management, design, construction, installation, commissioning, maintenance and operation, for:
- Endcap Hadron Calorimeter, HE
- 1st Forward Muon Station, ME1/1
Participation in:
- Forward Hadron Calorimeter, HF
- Endcap ECAL, EE
- Endcap Preshower, SE
- Endcap Muon System, ME
- Forward Shielding, FS

6 RDMS Activities in CMS
- Design, production and installation
- Calibration and alignment
- Reconstruction algorithms
- Data processing and analysis
- Monte Carlo simulation (e.g. H (150 GeV) → Z⁰Z⁰ → 4μ)

7 LHC Computing Model
[Diagram: Tier-1 centres (RAL, IN2P3, BNL, FZK, CNAF, PIC, ICEPP, FNAL, TRIUMF) and Tier-2 centres (PNPI, NIKHEF, Minsk, Kharkov, Rome, IHEP, CSCS, Legnaro, ITEP, JINR, IC, MSU, Prague, Budapest, Cambridge, Santiago, Weizmann), feeding small centres, desktops and portables]
Tier-0 (CERN):
- Filter raw data
- Reconstruction → summary data (ESD)
- Record raw data and ESD
- Distribute raw data and ESD to Tier-1s
Tier-1:
- Permanent storage and management of raw, ESD, calibration data, metadata, analysis data and databases
- Grid-enabled data service
- Data-heavy analysis
- Re-processing raw → ESD
- ESD → AOD selection
- National and regional support
Tier-2:
- Simulation, digitization, calibration of simulated data
- End-user analysis
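The tiered data flow described above (Tier-0 reconstructs and records, Tier-1s hold custodial copies) can be summarized in a small illustrative sketch. This is not CMS software; class names, the ESD/RAW size ratio and the example sites are assumptions for illustration only.

```python
# Illustrative sketch of the Tier-0 -> Tier-1 data flow described above.
# Names, sizes and the 0.3 ESD/RAW ratio are assumptions, not CMS figures.

from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    kind: str          # "RAW", "ESD" or "AOD"
    size_tb: float

@dataclass
class Tier1:
    name: str
    storage: list = field(default_factory=list)   # permanent custodial storage

    def receive(self, ds: Dataset):
        self.storage.append(ds)

class Tier0:
    """CERN Tier-0: filter raw data, reconstruct ESD, distribute both to Tier-1s."""
    def process_and_distribute(self, raw: Dataset, tier1s: list) -> Dataset:
        esd = Dataset(raw.name + "_ESD", "ESD", raw.size_tb * 0.3)  # assumed ratio
        for t1 in tier1s:                 # raw + ESD recorded and shipped to Tier-1s
            t1.receive(raw)
            t1.receive(esd)
        return esd

tier1s = [Tier1("FZK"), Tier1("CNAF")]
Tier0().process_and_distribute(Dataset("run2007A", "RAW", 10.0), tier1s)
```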

8 RuTier2 Cluster
Conception: a cluster of institutional computing centres with Tier-2 functionality, operating for all four experiments – ALICE, ATLAS, CMS and LHCb.
Basic functions: analysis, simulation, user data support, plus some Tier-1 functions.
Participating institutes:
- Moscow: ITEP, SINP MSU, RRC KI, LPI, MEPhI…
- Moscow region: JINR, IHEP, INR RAS
- St. Petersburg: PNPI, SPbSU
- Novosibirsk: BINP
- …

9 RDMS CMS Advanced Tier-2 Cluster as a Part of RuTier2
Conception:
- A Tier-2 cluster of institutional computing centres with partial T1 functionality
- Summary resources at ~1.5 times the level of the canonical Tier-2 centre for the CMS experiment, plus a Mass Storage System
- ~5-10% of RAW data and ESD/DST for AOD selection design and validation, and AOD for analysis (depending on the concrete task)
Basic functions: analysis, simulation, user data support, calibration, reconstruction algorithm development, …
Host Tier-1 in the LCG infrastructure: CERN (basically for storage of MC data); access to real data through the T1-T2 "mesh" model for Europe.
Participating institutes:
- Moscow: ITEP, SINP MSU, LPI RAS
- Moscow region: JINR, IHEP, INR RAS
- St. Petersburg: PNPI RAS
- Minsk (Belarus): NCPHEP
- Erevan (Armenia): ErPhI
- Kharkov (Ukraine): KIPT
- Tbilisi (Georgia): HEPI
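A rough feel for the Mass Storage System volume implied by "~5-10% of RAW data plus ESD/DST and AOD" can be obtained with a back-of-the-envelope sketch. All input numbers below are illustrative assumptions, not official CMS planning figures.

```python
# Back-of-the-envelope estimate of the MSS volume implied by keeping ~5-10% of
# RAW data plus the corresponding ESD/DST and the full AOD. Every number below
# is an illustrative assumption.

raw_per_year_pb   = 2.0    # assumed total CMS RAW volume per year, PB
esd_fraction      = 0.25   # assumed ESD/DST size relative to RAW
aod_fraction      = 0.05   # assumed AOD size relative to RAW
kept_raw_fraction = 0.075  # midpoint of the 5-10% quoted above

mss_pb = raw_per_year_pb * kept_raw_fraction * (1 + esd_fraction) \
         + raw_per_year_pb * aod_fraction      # full AOD kept for analysis

print(f"Estimated MSS volume: ~{mss_pb * 1000:.0f} TB/year")
```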

10 RDMS CMS Advanced Tier-2 Cluster (cont.)
(a Tier-2-type centre with some features of a Tier-1 centre)
- Provides facilities for simulation tasks, a number of analysis tasks, detector calibration, and the development of HLT and offline reconstruction algorithms and analysis tools.
- Connectivity between the RDMS institutes: >1 Gbit/s Ethernet.
- Computing farms at the institutes are combined into a Grid infrastructure.
- Data for each particular task are located at the farm closest to the final user of the data.
- A part of the raw data is processed in addition to the ordinary Tier-2 functions (maintaining and analysing AODs, data simulation and user support). This is needed both for the calibration/alignment of the CMS detector systems for which RDMS is responsible and for the creation and testing of reconstruction software for particular physics channels.
- Some part of the RAW and ESD/DST data and the proper set of AOD data will be transferred to and kept at the RDMS CMS Tier-2 cluster in a mass storage system (MSS) located at one or two institutes.

11 RDMS CMS Advanced Tier-2 Cluster (cont.) – resources, data sets, databases
- Some fraction of the RDMS T2 resources (~30%) will be scheduled for MC simulation of the standard CMS MC samples, including detector simulation and first-pass reconstruction. MC simulation will be distributed between the RDMS CMS institutes in proportion to the resources located at each institute (the more resources, the higher the load), as sketched below. The MC data will be moved to CERN and/or stored locally.
- Since RDMS T2 has some T1 functionality, it will receive RAW and RECO data (the amount needed for software/trigger/calibration development) as well as AOD data (for physics analysis of the dedicated channels).
- RDMS T2 will publish all locally hosted data so that they are available to all CMS users; the datasets available on disk/tape will, however, reflect local user tasks. All CMS users will have access to these data and will be able to submit jobs through the Grid. The non-local community will not be able to initiate large-scale transfers to the site.
- RDMS T2 will need replication of the conditions databases used in reconstruction.
- RDMS takes part in CMS DB development for ME1/1, HE and HF (calibration/equipment/construction DBs); facilities for DB replication must therefore be provided.
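The proportional sharing of MC production mentioned above ("the more resources, the higher the load") amounts to a simple largest-remainder split; a minimal sketch follows. The CPU-slot numbers and the request size are assumptions for illustration, not actual RDMS capacities.

```python
# Minimal sketch of distributing an MC production request across RDMS sites in
# proportion to their resources. The CPU-slot numbers and the 500k-event request
# are illustrative assumptions.

def distribute_events(total_events, resources):
    """Split total_events proportionally to each site's resources (largest remainder)."""
    total = sum(resources.values())
    shares = {s: total_events * r // total for s, r in resources.items()}
    leftover = total_events - sum(shares.values())
    # hand out any remainder to the largest sites first
    for site in sorted(resources, key=resources.get, reverse=True)[:leftover]:
        shares[site] += 1
    return shares

sites = {"JINR": 400, "SINP": 250, "ITEP": 200, "IHEP": 150}   # assumed CPU slots
print(distribute_events(500_000, sites))
# -> {'JINR': 200000, 'SINP': 125000, 'ITEP': 100000, 'IHEP': 75000}
```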

12 The proposal for CERN to serve as T1 for RDMS is currently under active discussion.

13 CMS Computing model

14 RDMS CMS Participation in CMS Computing Activities during 2006-2007
- LCG installation of CMSSW; participation in PhEDEx data transfers (load-test transfers and heartbeat transfers).
- Participation in CMS event simulation in autumn 2006: 2 RDMS CMS sites (JINR and SINP) were ready and registered for CMS production; 500 000 events were produced, about 1% of the total (~50 million events).
- Job Robot (user analysis jobs): 8.6% of the total number of CMS jobs were submitted to three RDMS sites (ITEP, JINR, SINP).
- PhEDEx FTS T1-T2 transfers during October-November 2006 (IHEP, ITEP, JINR and SINP): some transfer problems due to unstable bandwidth and not quite proper settings at the T1 FTS servers assigned to the RDMS institutes (CNAF T1 served as FTS server for JINR, FZK T1 for SINP, ASGC T1 for IHEP and IN2P3 T1 for ITEP).
- Since the CERN T1 became the FTS server for RDMS, the data transfer quality satisfies the CMS requirements.

15 CMS Magnet Test and Cosmic Challenge 2006
The magnet test held at CERN in 2006 was a milestone in the CMS construction: it completed the commissioning of the magnet system (coil & yoke) before its eventual lowering into the cavern.
MTCC06 had four components: installation validation, magnet commissioning, cosmic challenge, field mapping.
The cosmic challenge concentrated on the core software functionality required for raw data storage, quality monitoring and further processing (a must for all three sub-systems at the MTCC): raw data, databases, event data files. It included DAQ and network tests on cosmic data (850 GB/day):
- Build an intelligible event from several subdetectors
- Validate the hardware chain
- Validate integration of subdetectors into the DAQ framework, Run Control and DCS
- Use databases and network scaled down from the final architectures
- Test the Offline Condition Service and Online-to-Offline (O2O) transfer
This was a very good opportunity to test data transfer from CERN to the CMS RuTier-2.
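For reference, the 850 GB/day cosmic data rate quoted above corresponds to a fairly modest sustained network load; a quick arithmetic check (no CMS-specific assumptions beyond the quoted figure):

```python
# Sustained bandwidth implied by the MTCC cosmic-data rate of 850 GB/day.
gb_per_day = 850
mb_per_s = gb_per_day * 1000 / 86400          # ~9.8 MB/s (decimal units)
mbit_per_s = mb_per_s * 8                      # ~79 Mbit/s
print(f"{mb_per_s:.1f} MB/s ≈ {mbit_per_s:.0f} Mbit/s sustained")
```

This is comfortably below the >1 Gbit/s inter-institute links quoted for the RDMS cluster on slide 10.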

16 CERN-JINR data management for the MTCC: common implementation scheme

17 CERN-PROD as T1 for RDMS
CERN T1 has been the FTS server for the Russian sites since February – the first step towards CERN-PROD serving as T1 for RDMS.
March 19: transfer rates of 14-32 MB/s; 99% of transfers successful and 990 GB transferred.
October-November 2006 (CNAF as FTS T1 for JINR): transfer rates below 2 MB/s; fraction of successful transfers 14-42%.
RDMS CMS PhEDEx file transfer statistics web page (design by S. Belov, JINR): http://rocmon.jinr.ru/scripts/phedex
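The transfer-quality numbers above (success fraction and average rate) are the kind of summary the RDMS PhEDEx statistics page reports. A minimal sketch of computing them from per-file transfer records follows; the record layout and sample values are made up for illustration, real figures come from the PhEDEx/FTS logs.

```python
# Minimal sketch: success fraction and average rate from per-file transfer
# records. The record layout and the sample numbers are hypothetical.

transfers = [
    {"size_gb": 2.0, "seconds": 70, "ok": True},
    {"size_gb": 2.0, "seconds": 0,  "ok": False},   # failed attempt
    {"size_gb": 1.5, "seconds": 55, "ok": True},
]

ok = [t for t in transfers if t["ok"]]
success_fraction = len(ok) / len(transfers)
avg_rate_mb_s = sum(t["size_gb"] * 1000 for t in ok) / sum(t["seconds"] for t in ok)

print(f"successful: {success_fraction:.0%}, average rate: {avg_rate_mb_s:.1f} MB/s")
```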

18 Example of Current CERN-JINR Transfer Rates (08.09.2007)

19 Participation in CMS Database Design and Support
See Irina Filozova's talk today: "Development of the databases and interfaces for CMS experiment: current status and plans"

20 Participation in ARDA
1. Monitoring of T1 → T2 transfers through the Dashboard (observation of load-test transfers).
2. Monitoring of production jobs with MonALISA:
   a) monitoring of errors (a unified system for LCG and local farms)
   b) monitoring of WNs, SEs, the network and installed software
   c) participation in the design of the monitoring tables in the Dashboard
3. Use of PROOF for CMS users:
   a) understanding of use cases
   b) integration into the CMS computing model
4. Portability of the monitoring system
5. Participation in the CRAB-Task Manager integration

21 Dashboard: CMS Job Monitoring
[Architecture diagram: submission tools, RBs, CEs and WNs report job information to R-GMA and MonALISA; dedicated collectors (R-GMA, MonALISA, possibly others) constantly retrieve job information and feed the Dashboard/ASAP database (PostgreSQL/SQLite), which is exposed through a Web Service interface, the R-GMA client API and a PHP web UI showing snapshots, statistics, job info and plots]
RDMS CMS staff participate in the ARDA monitoring development for CMS: monitoring of errors caused by the Grid environment and automatic job restart in such cases; monitoring of the number of events processed; further extension to private simulation.
http://www-asap.cern.ch/dashboard/
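A minimal sketch of the collector idea shown in the diagram: a process that periodically pulls job-status records from a monitoring source and stores them in the Dashboard's relational backend. SQLite stands in for the PostgreSQL/SQLite backend here, and the table layout, field names and the fetch_job_updates() stub are assumptions for illustration only.

```python
# Illustrative collector sketch: pull job-status updates and store them in a
# relational backend (SQLite here; the real Dashboard used PostgreSQL/SQLite).
# The table layout and the fetch_job_updates() stub are assumptions.

import sqlite3
import time

def fetch_job_updates():
    """Stand-in for querying R-GMA/MonALISA; yields (job_id, site, status, events_done)."""
    return [("job_0001", "JINR-LCG2", "Done", 25000),
            ("job_0002", "ITEP", "Aborted", 0)]

conn = sqlite3.connect("dashboard.db")
conn.execute("""CREATE TABLE IF NOT EXISTS job_status (
                  job_id TEXT, site TEXT, status TEXT,
                  events_done INTEGER, updated INTEGER)""")

for job_id, site, status, events in fetch_job_updates():
    conn.execute("INSERT INTO job_status VALUES (?, ?, ?, ?, ?)",
                 (job_id, site, status, events, int(time.time())))
conn.commit()

# e.g. abort rate per site, as the Dashboard error monitoring would show
for site, aborted, total in conn.execute(
        "SELECT site, SUM(status = 'Aborted'), COUNT(*) FROM job_status GROUP BY site"):
    print(site, f"{aborted}/{total} aborted")
```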

22 Usage of CPU Resources for CMS Tasks at the Russian Tier-2 during 2006
[Chart of CPU usage by site: IHEP, INR, ITEP, JINR, LPI, PNPI, RRC KI, SINP MSU]
This is ~2% of the CPU resources used for all CMS jobs (MC production, Job Robot, analysis and private user jobs) in the LCG-2 infrastructure.
32 members of the CMS VO are at RDMS – 5.3% of the total number of CMS VO members: JINR – 15, IHEP – 6, Kharkov – 4, SINP – 3, INR – 2, ITEP – 2.
CMS software is installed at the RuTier2 LCG-2 sites; ~15% of the RuTier2 CPU resources have been used for CMS jobs.

23 SUMMARY
- The RDMS CMS Computing Model has, in principle, been defined as a Tier-2 cluster of institutional computing centres with partial T1 functionality.
- A data processing and analysis scenario has been developed in the context of estimating resources on the basis of the selected physics channels in which RDMS CMS plans to participate.
- The decision that CERN will serve as the T1 centre for RDMS would provide a strong basis for meeting the RDMS CMS computing model requirements. CERN T1 has served as the FTS server for the Russian sites since February 2007 – the first step towards CERN-PROD becoming the T1 centre for RDMS.
- The proper RDMS CMS Grid infrastructure has been constructed at RuTier2 and successfully tested, in particular for MTCC test data transfer.
- The RDMS RuTier2 CPU and storage resources are sufficient for the analysis of first data after the LHC start and for simulation in 2008.

