1 PRAGUE site report

2 Overview
– Supported HEP experiments and staff
– Hardware and software on Prague farms
– Brief statistics about running LHC experiments

3 Experiments and projects
Three institutions in Prague:
– Academy of Sciences of the Czech Republic
– Charles University in Prague
– Czech Technical University in Prague
Collaborate on experiments:
– CERN – ATLAS, ALICE, TOTEM, AUGER
– FNAL – D0
– BNL – STAR
– DESY – H1
Involved in the LCG and EGEE (future EGEE2) projects:
– LCG production site
– EGEE VOCE (regional computing)
– Certification testbed for ROC CE
– PPS

4 People
Collaborating community: 125 persons
– 60 researchers
– 43 students and PhD students
– 22 engineers and 21 technicians
LCG/EGEE computing staff:
– Jiri Kosina – LCG, experiment software support, networking
– Jiri Chudoba – ATLAS and ALICE SW and running
– Jan Svec – HW, operating system, PBSPro, networking, D0 SW support (SAM, JIM)
– Lukas Fiala – HW, networking, web
– Tomáš Kouba – LCG, SW support

5 Available HW in Prague
Two independent farms in Prague:
– GOLIAS – Institute of Physics AS CR
  – LCG2 (ATLAS, ALICE, H1 production), D0 (SAM and JIM installation)
  – STAR, AUGER – locally
– SKURUT – CESNET, z.s.p.o.
  – LCG2 production (ATLAS, ALICE)
  – EGEE production (VOCE)
  – EGEE certification testbed for ROC CE
  – EGEE preproduction site
Sharing of resources on GOLIAS: D0:ATLAS:rest = 50:40:10 (dynamically changed when needed)
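A ratio like D0:ATLAS:rest = 50:40:10 is the kind of policy a batch scheduler's fair-share configuration expresses. A minimal sketch of how such shares could be written for the PBSPro scheduler mentioned on the software slide; the group IDs and file names below are hypothetical, not taken from the slides:

    # sched_priv/resource_group – hypothetical group IDs; shares mirror D0:ATLAS:rest = 50:40:10
    d0      10   root   50
    atlas   20   root   40
    rest    30   root   10

    # sched_priv/sched_config – enable fair-share scheduling
    fair_share: true ALL

Editing the share numbers and having the scheduler re-read its configuration is what would make the "dynamically changed when needed" adjustment cheap.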

6 Available HW in Prague
GOLIAS (IOP AS CR)
– Server room: 18 racks, 200 kVA UPS (Newave Maxi) + 380 kVA F.G. Wilson diesel, 2 Liebert-Hiross air-conditioning units (120 kW) + reserved space for a third
– 110 nodes:
  – 32 HP LP1000r dual-CPU nodes, PIII 1.13 GHz, 1 GB RAM
  – 53 HP DL140, dual Xeon 3.06 GHz, 2 GB RAM
  – 14 HP DL140, dual Xeon 3.06 GHz, 4 GB RAM
  – 4 HP DL360, dual Xeon 2.8 GHz, 2 GB RAM
  – 3 HP DL145, dual Opteron 244, 2 GB RAM
  – 10 HP BL35p, dual-CPU dual-core Opteron 275, 4 GB RAM

7 GOLIAS data storage
3 disk arrays:
– 1 TB – 15x73 GB SCSI RPM, RAID5, ext3
– 10 TB, EasySTOR 1400RP, 3 boxes x 14 x 250 GB ATA, 7200 RPM/16 MB, connected via UltraSCSI160 to a fileserver with an Adaptec aic7899 SCSI controller, RAID5, ext3
– 30 TB, EasySTOR 1600RP, 6 boxes x 16 x 300 GB SATA, 7200 RPM/16 MB, connected via UltraSCSI160 to a fileserver with an Adaptec aic7899 SCSI controller, RAID6, xfs
All disk capacities are raw. Arrays are exported to the SE via NFS (sketched below).
Performance problems under heavy load -> looking for alternative solutions (Infortrend, ...)
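For reference, exporting the arrays to the Storage Element over NFS amounts to one /etc/exports entry per array on the fileserver. A minimal sketch; the mount points and SE hostname are hypothetical:

    # /etc/exports on the fileserver – paths and hostname are hypothetical
    /mnt/array10tb   se1.example.org(rw,sync,no_root_squash)
    /mnt/array30tb   se1.example.org(rw,sync,no_root_squash)

    # re-export the list without restarting the NFS server
    exportfs -ra

The sync option trades some write throughput for safety, one of the knobs worth revisiting when NFS struggles under heavy load.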

8 GOLIAS software
– Scientific Linux CERN 3.0.6
– LCG 2_7_0
– PBSPro version 5.2 (migrating to 7.1), with our own info providers for LCG and scp tweaking
– SAM + JIM
– Current problems with DPM pools over NFS (different kernels on the NFS client and server)

9 SKURUT farm
HW – located at CESNET:
– 32 dual-CPU nodes, PIII 700 MHz, 1 GB RAM
– SE mounts disks via NFS from GOLIAS
SW:
– SLC3.0.6 OS, Torque+Maui batch system
– LCG2 installation: 1x CE+UI, 1x SE, 19x WNs
– GILDA installation: 1x CE+UI; WNs are manually moved from LCG2 to GILDA if needed (see the sketch below)
– Certification testbed: 1x CE
– PPS installation: 1x CE, 1x SE, 1x WN (with gLite 1.5, 3.0 planned)
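The batch-system half of moving a WN between installations can be expressed with Torque node properties plus per-queue node requirements. A sketch under the assumption that queues and property labels named lcg2/gilda exist; the node names are hypothetical, and the middleware reconfiguration of the node itself is a separate manual step:

    # server_priv/nodes – tag each worker node with the installation it serves
    wn01 np=2 lcg2
    wn02 np=2 gilda

    # route each queue's jobs only to nodes carrying the matching property
    qmgr -c "set queue lcg2 resources_default.neednodes = lcg2"
    qmgr -c "set queue gilda resources_default.neednodes = gilda"

Retagging a node in the nodes file (and restarting pbs_server) then flips it from one pool to the other.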

10 Network connection
General – GEANT connection:
– 2 Gbps (etherchannel) backbone at GOLIAS, over the 10 Gbps Metropolitan Prague backbone
– CZ – GEANT: 2.5 Gbps (over 10 Gbps HW)
Dedicated (fiber or lambda) connections – provided by CESNET within the CzechLight project:
– 1 Gbps optical connection to FNAL, USA
– 1 Gbps optical connection to ASGC, Taipei
– 4x1 Gbps optical connections to Prague Tier3 centers
– Interconnected with a Cisco Catalyst 6503 switch (located at CESNET), using BGP for routing (see the sketch below)
– Major traffic flows between IOP and the other institutions => the 1 Gbps link between IOP and CESNET is a bottleneck => an upgrade is planned (2x1 Gbps next week)
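As an illustration of the routing setup described above, a minimal Cisco IOS sketch of one BGP peering and a 2x1 Gbps etherchannel bundle; the AS numbers, prefixes, and interface names are hypothetical, not the production configuration:

    ! hypothetical AS numbers and prefixes – a sketch of the peering only
    router bgp 64512
     neighbor 192.0.2.1 remote-as 64513
     network 198.51.100.0 mask 255.255.255.0
    !
    ! a 2 Gbps etherchannel: two GigE links bundled into one logical port-channel
    interface GigabitEthernet1/1
     channel-group 1 mode on
    interface GigabitEthernet1/2
     channel-group 1 mode on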

11 Computation results

12 Computation results