Preparation of KIPT (Kharkov) computing facilities for CMS data analysis
L. Levchuk, Kharkov Institute of Physics and Technology (KIPT), Kharkov, Ukraine
NEC'09, Varna, Bulgaria, 11 September 2009

Outline
- WLCG infrastructure of the CMS experiment (CMS computing model)
- KIPT T2 center (T2_UA_KIPT): status and prospects
- Summary

WLCG infrastructure (diagram)
CERN: data acquisition, first-pass reconstruction, storage & distribution; 1.25 GB/s (heavy ions)

CMS Physics Goals
CMS is expected to shed light on:
- the origin of particle masses (search for the Higgs boson for all possible M_H: does it really exist?)
- the origin of “dark matter” in the Universe (search for SUSY signals)
- the origin of the matter/antimatter asymmetry in the Universe (study of CP violation in B decays)
- whether new states of hadronic matter exist (study of heavy-ion collisions)
- whether any “exotica” can emerge at the TeV (10^12 eV) scale

CMS Computing Model (diagram)
CMS trigger → RAW data (~1-3 MB/evt at ~100 evt/s) → T0_CH_CERN
Tier-1: 8 sites (T1_CH_CERN, T1_DE_FZK, T1_ES_PIC, T1_FR_CCIN2P3, T1_IT_CNAF, T1_TW_ASGC, T1_UK_RAL, T1_US_FNAL) — event reconstruction; storage of data of all types (RAW, RECO, AOD, skim)
Tier-2 (T2_*_*): ~50 sites, including T2_UA_KIPT and T2_RU_JINR — storage of RECO and AOD data (~ MB/evt); data analysis & MC production; per site at LHC startup: connectivity ~1 Gbit/s, SE ~200 TB, WN ~4 kHEPSPEC06
Tier-3 (T3_*_*): currently ~50 sites — WLCG User Interface (UI); no special resource requirements, but reasonable connectivity with a T2
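As a rough consistency check of the quoted rates (simple arithmetic from the figures above, not from the slide):

\[
\sim 100\ \mathrm{evt/s} \times (1\text{--}3)\ \mathrm{MB/evt} \approx 0.1\text{--}0.3\ \mathrm{GB/s}
\]

of RAW data flowing from the CMS trigger into T0_CH_CERN, to be compared with the 1.25 GB/s heavy-ion figure quoted for CERN on the previous slide.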

CMS Dashboard

KIPT CMS Computing Facility
HEPSPEC06 benchmark results (system / OS / kernel / memory / gcc / benchmark):
- Dual Intel Xeon EM64T 3.4 GHz: SLC, el5 kernel, 64-bit, 4 GB, gcc 4.1.2, S2k6 all_cpp 64bit
- Dual-CPU, quad-core Intel Xeon E-series: SLC, el5 kernel, 64-bit, 16 GB, gcc 4.1.2, S2k6 all_cpp 64bit
Storage Element (SE, SRM): 12 nodes (24 Intel Xeon EM64T CPUs, 2.8 GHz); 11 SATA RAID5 arrays (400 GB disks) + 1 SATA RAID6 array (1.0 TB disks)
SRM space: ~50 TB currently, upgrade to ~120 TB by LHC startup (end of 2009); type: DPM (SLC4)
Worker Nodes (WN): 68 x86-64 CPU cores (Intel Xeon EM64T 3.4 GHz & quad-core E-series), 0.5 kHEPSPEC06
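A quick cross-check of the quoted WN capacity (simple arithmetic, not from the slide):

\[
\frac{0.5\ \mathrm{kHEPSPEC06}}{68\ \mathrm{cores}} \approx 7.4\ \mathrm{HEPSPEC06\ per\ core\ on\ average}
\]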

KIPT CMS Computing Facility
2001 – RH Linux cluster of a few dual-PIII nodes
from 2002 on – participation in CMS Monte Carlo production
2003 – Grid middleware (LCG2) deployed for the first time
from 2005 on – site registered in WLCG (Kharkov-KIPT-LCG2)
from 2007 on – SLC cluster built on the x86-64 CPU platform
2008 – CMS software (CMSSW & PhEDEx) deployed; site registered in CMS SiteDB (T2_UA_KIPT)
2009 (June) – successful commissioning of T2_UA_KIPT
2009 (July) – T2_UA_KIPT is in “Production” (“Ready” > 80% of the time)
To date, the KIPT computing facility remains
- the only active WLCG/EGEE site in Ukraine;
- the only CMS RDMS T2 site that is not a “_RU_” site

Conditions for site ‘Readiness’
CMS site readiness takes into account several sources of information to decide whether a site is in good shape:
- site availability according to the CMS Site Availability Monitor (SAM) tests: 16 CMS-specific critical tests of the CE, SRM (SE) and CMS software, in addition to the “routine” WLCG SAM tests; daily SAM availability must be ≥ 80%
- fraction of successful jobs submitted by the Job Robot: automatic submission and management of “fake” CMS analysis jobs via WLCG (using CRAB), typically 6×100 jobs/day; daily Job Robot efficiency must be ≥ 80%
- number of operational links connecting the site with other sites: commissioned links TO Tier-1 sites ≥ 2 (to commission such a link, the site has to sustain ≥ 5 MB/s of LoadTest traffic for 24 hours); commissioned links FROM Tier-1 sites ≥ 4 (to commission such a link, the site has to sustain ≥ 20 MB/s of LoadTest traffic for 24 hours)
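A minimal sketch of how these conditions combine for a single day, assuming the daily metrics are already available as plain numbers; this only illustrates the criteria listed above and is not the actual CMS Site Readiness code:

# Hypothetical helper illustrating the readiness criteria on the previous slide.
# The thresholds (80% SAM availability, 80% Job Robot efficiency, >= 2 uplinks,
# >= 4 downlinks) come from the slide; everything else is illustrative.
def site_is_ready(sam_availability, jobrobot_efficiency,
                  links_to_t1, links_from_t1):
    """Return True if one day's metrics satisfy the 'Ready' conditions."""
    return (sam_availability >= 0.80 and       # daily SAM availability
            jobrobot_efficiency >= 0.80 and    # daily Job Robot efficiency
            links_to_t1 >= 2 and               # commissioned links TO Tier-1 sites
            links_from_t1 >= 4)                # commissioned links FROM Tier-1 sites

# Example: 92% SAM availability, 85% Job Robot efficiency,
# 2 commissioned uplinks and 6 commissioned downlinks -> 'Ready'.
print(site_is_ready(0.92, 0.85, 2, 6))         # True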

CMS dashboard site view (screenshot): http://dashb-ssb.cern.ch/dashboard/request.py/siteview?

Commissioning of the first 6 T2_UA_KIPT uplinks (plot)
LoadTest transfers from: CH_CERN, ES_PIC, DE_FZK, IT_CNAF, US_FNAL, FR_CCIN2P3
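For scale (simple arithmetic using the thresholds from the readiness slide): sustaining the required LoadTest rate over 24 hours corresponds to

\[
5\ \mathrm{MB/s} \times 86\,400\ \mathrm{s} \approx 0.43\ \mathrm{TB}
\qquad \text{and} \qquad
20\ \mathrm{MB/s} \times 86\,400\ \mathrm{s} \approx 1.7\ \mathrm{TB}
\]

per link per day for the TO-Tier-1 and FROM-Tier-1 cases, respectively.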

Commissioning of the first 6 T2_UA_KIPT uplinks (II) (plot; cumulative transferred volume in TB)

Internet Connectivity
Network backbone for the academic institutes of Ukraine; the project was completed in 2007–2008 (UARNET).
(Map of the backbone: 10 Gbit/s; nodes at Kharkov (KIPT), Kiev and L’vov, with external links to Frankfurt (DE) and Moscow (RU).)
Connectivity of NSC KIPT – 2 Gbit/s
Connectivity of Kharkov-KIPT-LCG2 (T2_UA_KIPT) – up to 1 Gbit/s
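Relative to the link-commissioning thresholds quoted earlier, a 1 Gbit/s site connection (~125 MB/s) leaves ample headroom (rough estimate, ignoring protocol overhead):

\[
\frac{20\ \mathrm{MB/s}}{125\ \mathrm{MB/s}} \approx 16\%
\]

of the link occupied by a single 20 MB/s LoadTest transfer.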

T2 site quality ranking in August 2009 (plot, 01 September 2009), with T2_UA_KIPT, T2_RU_ITEP, T2_RU_SINP and T2_RU_JINR marked

Activity of CMS RDMS T2 sites in August 2009 (plots for ITEP, JINR, SINP, KIPT, IHEP, RRC KI, INR; MC production vs. data analysis)
Units: average number of CMS jobs at a given moment of the day
Thanks to Peter Kreuzer

Resources: available vs. needed
(T2_UA_KIPT pledge for the 1st half of 2009 vs. a CMS T2 site in 2010, per the CMS Computing TDR)
- SRM storage (TB): 50 vs. 200
- WN (kHEPSPEC06): 0.5 vs. 4
- Connectivity (Gbit/s): 1 vs. ~1
For the 2nd half of 2009: storage upgrade up to 120 TB
A major upgrade of CPU capacity is planned for 2010
No problem to provide connectivity of 1 Gbit/s right now
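Put differently (simple ratios from the table above), reaching the 2010 TDR figures requires roughly a fourfold increase in storage and an eightfold increase in CPU capacity:

\[
\frac{200\ \mathrm{TB}}{50\ \mathrm{TB}} = 4, \qquad
\frac{4\ \mathrm{kHEPSPEC06}}{0.5\ \mathrm{kHEPSPEC06}} = 8
\]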

SUMMARY
- Preparation of the KIPT WLCG infrastructure for CMS data analysis is making good progress
- Site T2_UA_KIPT has been commissioned (‘Ready’)
- Site T2_UA_KIPT is stable (‘Ready’ > 80% of the time)
- A hardware upgrade to meet the CMS Computing TDR requirements is still needed