Prague Site Report
Jiří Chudoba, Institute of Physics, Prague
HEPiX meeting, Prague, 23.4.2012

Local Organization
Institute of Physics:
 o 2 locations in Prague, 1 in Olomouc
 o 786 employees (281 researchers + 78 doctoral students)
Department of Networking and Computing Techniques (SAVT)
 o networking up to offices, mail and web servers, central services
Computing centre (CC)
 o large scale calculations
 o part of SAVT (except the leader, Jiri Chudoba)
Division of Elementary Particle Physics
 o Department of detector development and data processing (head: Milos Lokajicek)
   - started large scale calculations, later transferred to CC
   - the biggest hardware contributor (LHC computing)
   - participates in the CC operation

Server room I (Na Slovance)
 o 62 m2, ~20 racks
 o 350 kVA motor generator, x 100 kVA UPS, 108 kW air cooling, 176 kW water cooling
 o continuous changes
 o hosts computing servers and central services

Other server rooms
New server room for SAVT
 o located next to server room I
 o independent UPS (24 kW now, max 64 kW n+1), motor generator (96 kW), cooling 25 kW (n+1)
 o dedicated to central services
 o 16 m2, now 4 racks (room for 6)
 o very high reliability required
 o first servers moved in last week
Server room Cukrovarnicka
 o another building in Prague
 o 14 m2, 3 racks (max 5), 20 kW central UPS, 2x8 kW cooling
 o backup servers and services
Server room UTIA
 o 3 racks, 7 kW cooling, 3 + 5x1.5 kW UPS
 o dedicated to the Department of Condensed Matter Theory


Clusters in CC - Dorje
Dorje: Altix ICE8200, 1.5 racks
 o 512 cores on 64 diskless worker nodes, InfiniBand, 2 disk arrays (6+14 TB)
 o only local users: solid state physics, condensed matter theory
 o 1 admin for administration and user support
 o relatively small number of jobs, MPI jobs up to 256 processes (see the submission sketch below)
 o Torque + Maui, SLES10 SP2, SGI Tempo, MKL, OpenMPI, ifort
 o users run mostly: Wien2k, vasp, fireball, apls
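
For orientation, a minimal sketch of how a 256-process OpenMPI job could be submitted to a Torque/Maui setup laid out like Dorje (64 worker nodes x 8 cores). The queue name, job name, walltime and the executable are hypothetical placeholders; only the Torque/OpenMPI stack and the 256-process scale come from the slide.

#!/usr/bin/env python
"""Minimal sketch: submit a 256-process OpenMPI job to a Torque/Maui cluster
laid out like Dorje (64 worker nodes x 8 cores). Queue name, job name,
walltime and the executable are hypothetical placeholders."""
import subprocess

PBS_SCRIPT = """#!/bin/bash
#PBS -N mpi_example            # hypothetical job name
#PBS -q batch                  # hypothetical queue name
#PBS -l nodes=32:ppn=8         # 32 nodes x 8 cores = 256 MPI processes
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR
# OpenMPI built with Torque (TM) support picks up the allocated nodes itself
mpirun -np 256 ./my_mpi_application
"""

def submit(script_text):
    """Feed the job script to qsub on stdin and return the printed job id."""
    proc = subprocess.Popen(["qsub"], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(script_text.encode())
    return out.decode().strip()

if __name__ == "__main__":
    print("submitted job: " + submit(PBS_SCRIPT))
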

Cluster LUNA
2 servers SunFire X4600
 o 8 CPUs / 32 cores, 256 GB RAM
4 servers SunFire V20z, V40z
Operated by CESNET Metacentrum – the distributed computing activity of NGI_CZ
Metacentrum
 o 9 locations
 o 3500 cores
 o 300 TB

Cluster Thsun, small group servers
Thsun
 o "private" cluster: small number of users, power users with root privileges
 o 12 servers of variable hardware
Servers for groups
 o managed by the groups in collaboration with CC

Cluster Golias
Upgraded every year – several subclusters, each of identical hardware
3812 cores, HS06, almost 2 PB of disk space
The newest (March 2012) subcluster rubus:
 o 23 nodes SGI Rackable C1001-G13
 o 2x Opteron, 64 GB RAM, 2x SAS 300 GB
 o 374 W (full load)
 o 232 HS06 per node, 5343 HS06 total

Golias shares 2011
[Charts: HS06 share per project (Alice+Star, Atlas, D0, Solid, Calice, Auger); subclusters' contribution to the total performance; planned vs. real usage (walltime)]

WLCG Tier2
Cluster + xrootd servers at Rez
2012 pledges:
 o ATLAS: HS06, 1030 TiB pledged; HS06, 1300 TB available
 o ALICE: 5000 HS06, 420 TiB pledged; 7564 HS06, 540 TB available
 o delivery of almost 600 TB delayed due to floods
66% efficiency is assumed for WLCG accounting
 o sometimes under 100% of pledges
Low cputime/walltime ratio for ALICE (see the accounting sketch below)
 o not only on our site
 o tests with limits on the number of concurrent jobs (last week):
   - "no limit" (about 900 jobs) – 45%
   - limit of 600 jobs – 54%
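
The 45%/54% figures are cputime/walltime ratios aggregated from the batch system. Below is a minimal sketch of deriving such a ratio per group from Torque accounting records; it assumes the usual Torque record layout (one 'E' line per finished job with key=value fields) and effectively single-core jobs, and the file paths are illustrative.

#!/usr/bin/env python
"""Minimal sketch: per-group cputime/walltime ratio from Torque accounting
records (assumed layout: 'date;E;jobid;key=value key=value ...')."""
import sys
from collections import defaultdict

def hms_to_seconds(hms):
    """Convert a HH:MM:SS string (hours may exceed 24) to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return 3600 * h + 60 * m + s

def efficiency(accounting_files):
    """Return {group: total cputime / total walltime} over all finished jobs."""
    cput = defaultdict(int)
    wall = defaultdict(int)
    for path in accounting_files:
        for line in open(path):
            parts = line.rstrip("\n").split(";")
            if len(parts) < 4 or parts[1] != "E":      # keep only job-end records
                continue
            fields = dict(f.split("=", 1) for f in parts[3].split() if "=" in f)
            group = fields.get("group", "unknown")
            if "resources_used.cput" in fields and "resources_used.walltime" in fields:
                cput[group] += hms_to_seconds(fields["resources_used.cput"])
                wall[group] += hms_to_seconds(fields["resources_used.walltime"])
    return {g: cput[g] / float(wall[g]) for g in wall if wall[g] > 0}

if __name__ == "__main__":
    # e.g. python torque_eff.py /var/spool/torque/server_priv/accounting/201204*
    for group, ratio in sorted(efficiency(sys.argv[1:]).items()):
        print("%-10s %.0f%%" % (group, 100 * ratio))
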

Utilization
Very high average utilization
 o several different projects, different tools for production
 o D0 – production submitted locally by 1 user
 o ATLAS – PanDA, Ganga, local users; DPM
 o ALICE – VO box; xrootd
[Utilization plots: D0, ALICE, ATLAS]

Networking
CESNET upgraded our main Cisco router
 o -> 6509
 o supervisor SUP720 -> SUP2T
 o new 8x 10G X2 card
 o planned upgrade of power supplies 2x 3 kW -> 2x 6 kW
 o (2 cards 48x 1 Gbps, 1 card 4x 10 Gbps, FW service module)

External connection
Exclusive: 1 Gbps (to FZK) + 10 Gbps (CESNET)
Shared: 10 Gbps (PASNET – GEANT)
Not enough for the ATLAS T2D limit (5 MB/s to/from T1s); see the check sketched below
perfSONAR installed
[Plots: FZK -> FZU, FZU -> FZK, PASNET link]
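
For orientation, the T2D criterion quoted above requires average transfer rates of at least 5 MB/s both to and from the individual Tier-1s. A tiny sketch of the corresponding check follows; the per-Tier-1 rates are made-up placeholders, not measurements from the Prague perfSONAR instances.

#!/usr/bin/env python
"""Sketch of the ATLAS T2D check: average transfer rate to/from each Tier-1
must stay above 5 MB/s. The rates below are made-up placeholders."""

T2D_LIMIT_MBPS = 5.0  # MB/s, threshold quoted on the slide

# hypothetical measured averages in MB/s, as (to_T1, from_T1)
measured = {
    "FZK":      (9.2, 7.5),
    "CC-IN2P3": (4.1, 6.3),
    "RAL":      (3.8, 4.4),
}

for t1, (to_t1, from_t1) in sorted(measured.items()):
    ok = to_t1 >= T2D_LIMIT_MBPS and from_t1 >= T2D_LIMIT_MBPS
    print("%-9s to %5.1f MB/s, from %5.1f MB/s -> %s"
          % (t1, to_t1, from_t1, "passes" if ok else "below T2D limit"))
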

Miscellaneous items
Torque server performance
 o W jobs, sometimes long response time
 o divide Golias into 2 clusters with 2 Torque instances?
 o memory limits for ATLAS and ALICE queues
CVMFS
 o used by ATLAS, works well
 o some older nodes have too small disks -> excluded for ATLAS (see the disk-space check sketched below)
Management
 o Cfengine v2 used for production
 o Puppet used for the IPv6 testbed
2 new 64-core nodes
 o SGI Rackable H2106-G7, 128 GB RAM, 4x Opteron, 446 HS06
 o frequent crashes when loaded with jobs
Another 2 servers with Intel Sandy Bridge expected
 o small subclusters with different hardware
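
A minimal sketch of the kind of check behind the "too small disks" exclusion: is there enough free space on the partition holding the CVMFS cache? The cache path (the CVMFS default) and the 20 GB threshold are illustrative assumptions, not the site's actual values.

#!/usr/bin/env python
"""Minimal sketch: flag a worker node whose partition holding the CVMFS cache
is too small. Cache path and 20 GB threshold are illustrative assumptions."""
import os

CACHE_BASE = "/var/lib/cvmfs"   # default CVMFS_CACHE_BASE location
MIN_FREE_GB = 20.0              # hypothetical site-specific threshold

def free_gb(path):
    """Free space in GB on the filesystem containing 'path'."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize / float(1024 ** 3)

if __name__ == "__main__":
    path = CACHE_BASE if os.path.isdir(CACHE_BASE) else "/"
    free = free_gb(path)
    verdict = "ok for ATLAS" if free >= MIN_FREE_GB else "exclude from ATLAS"
    print("%.1f GB free on %s -> %s" % (free, path, verdict))
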

Water cooling
Active vs. passive cooling doors
 o 1 new rack with cooling doors
 o 2 new cooling doors on APC racks

Water cooling
Good sealing is crucial
[Figures: diskservers – cooling on/off (divider added); diskservers; rubus01]

Distributed Tier2, Tier3s
Networking infrastructure (provided by CESNET) connects all Prague institutions involved:
 o Academy of Sciences of the Czech Republic
   - Institute of Physics (FZU, Tier-2)
   - Nuclear Physics Institute
 o Charles University in Prague
   - Faculty of Mathematics and Physics
 o Czech Technical University in Prague
   - Faculty of Nuclear Sciences and Physical Engineering
   - Institute of Experimental and Applied Physics
Now only NPI hosts resources visible in the Grid
 o many reasons why the others do not: manpower, suitable rooms, lack of IPv4 addresses
Data Storage group at CESNET
 o deployment for LHC projects discussed

Thanks to my colleagues for help with the preparation of these slides:
 o Marek Eliáš
 o Lukáš Fiala
 o Jiří Horký
 o Tomáš Hrubý
 o Tomáš Kouba
 o Jan Kundrát
 o Miloš Lokajíček
 o Petr Roupec
 o Jana Uhlířová
 o Ota Velínský