Materials for Report about Computing
Jiří Chudoba, x.y.2006, Institute of Physics, Prague

Golias farm
- 250 CPUs as worker nodes (published):
  - mostly HP ProLiant DL140: 2x Xeon 3.06 GHz, 2–4 GB RAM, 80 GB ATA HDD
  - new: 10 HP blade servers with dual-core Opterons, 4 GB RAM (40 CPU cores)
  - older: HP LP1000r, 2x PIII 1.13 GHz, 1 GB RAM, 18 GB SCSI HDD
- 40 TB (raw) disk space
- 1 Gbps dedicated optical link to CESNET, plus a shared 1 Gbps link
- middleware: CE, PBSPro, SE (classical + DPM), VO boxes (ALICE, ATLAS), LFC (ALICE), BDII; gLite installed (plus 1 CE with LCG 2.7)
- supported experiments: LHC (ATLAS, ALICE): 100 CPUs, D0: 150 CPUs, AUGER, ... (shares tallied in the sketch below)
- manpower: 5 administrators (with many other duties)
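The quoted per-experiment shares account for the full published worker-node capacity. A minimal Python sketch of that tally, using only numbers from this slide (AUGER and the remaining experiments are listed without a dedicated share and run on leftover or idle capacity):

```python
# Minimal sketch: the per-experiment CPU shares quoted on the slide add up
# to the published worker-node capacity of the Golias farm.

published_wn_cpus = 250                            # "250 CPUs as WN (published)"
shares = {"LHC (ATLAS, ALICE)": 100, "D0": 150}    # AUGER etc.: no dedicated share

assert sum(shares.values()) == published_wn_cpus
for experiment, cpus in shares.items():
    print(f"{experiment}: {cpus} CPUs ({100 * cpus / published_wn_cpus:.0f} % of the farm)")
```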

ALICE
- running services: VO box, LFC catalogue (J. Chudoba)
- production operation (D. Adamova)
- production started in June 2006
- ALICE was able to use idle resources otherwise reserved for other experiments

ALICE – charts

ALICE – more charts
- more than 10 TB transferred to CERN
- files written

ATLAS
- production jobs (from ATLAS central DB): jobs (36 % efficiency), 2857 days (75 % efficiency)
- all ATLAS jobs (from local logs): jobs, 8931 days
- statistics from 1.1. –
(the efficiency arithmetic is sketched below)
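The two percentages are presumably a job success rate and a CPU-time to walltime ratio; a minimal sketch under that assumption. The absolute job counts are missing from the transcript, so the inputs marked hypothetical are placeholders, not the real Prague numbers:

```python
# Hedged sketch of how the two efficiency figures are typically computed.
# Inputs marked "hypothetical" are placeholders; the real counts were lost.

successful_jobs = 3_600                   # hypothetical
submitted_jobs = 10_000                   # hypothetical
job_efficiency = successful_jobs / submitted_jobs     # the "36 %"-style figure

cpu_days = 2_143.0                        # hypothetical CPU time actually used
wall_days = 2_857.0                       # walltime quoted on the slide
cpu_efficiency = cpu_days / wall_days                 # the "75 %"-style figure

print(f"job success rate: {job_efficiency:.0%}, CPU/walltime: {cpu_efficiency:.0%}")
```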

ATLAS Data Transfers
- participation in Service Challenge 4
- data transfers between FZU (Tier2) and FZK (the associated Tier1 centre), and between FZU and CERN
- up to 50 MB/s from CERN to FZU (see the sketch below)
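At that peak rate, a back-of-envelope estimate of bulk-transfer time is straightforward. A minimal sketch; the 10 TB dataset size is purely illustrative, not a figure from SC4:

```python
# Back-of-envelope: bulk-transfer time at the peak rate quoted on the slide
# (50 MB/s from CERN to FZU). The dataset size is illustrative only.

rate_mb_s = 50.0
dataset_tb = 10.0                          # hypothetical dataset size

seconds = dataset_tb * 1e6 / rate_mb_s     # 1 TB = 1e6 MB (decimal units)
print(f"{dataset_tb:.0f} TB at {rate_mb_s:.0f} MB/s ~ {seconds / 86400:.1f} days")
```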

ATLAS Data Transfers
- ATLAS Tier0 test (June 2006): export of data to Tier1 sites and then on to Tier2 sites
- ATLAS Tier0 test 2 (October 2006)

ATLAS – other computing activities
- contribution to the central Distributed Data Operations Team (J. Chudoba)
- data management within the FZK cloud (J. Chudoba)
- contribution to Production System operations (emerging CZ team)

D0
- participation in MC production and data reprocessing
- MC production within the last year: 12 million events generated, 800 GB copied to FNAL, 5 % of the total production (implied event size sketched below)
- reprocessing: 26 million events, 2 TB
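These numbers are mutually consistent; a minimal sketch of the implied average event size and the implied size of the full MC production (Prague contributed 5 %):

```python
# Consistency sketch from the D0 slide numbers: implied average event size
# and implied size of the complete MC production.

events = 12e6            # 12 million MC events generated in Prague
volume_gb = 800.0        # data copied to FNAL
share = 0.05             # Prague's 5 % of the total production

event_size_kb = volume_gb * 1e6 / events   # ~67 kB per event
total_events = events / share              # ~240 million events overall
print(f"~{event_size_kb:.0f} kB/event; total production ~{total_events / 1e6:.0f} M events")
```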

Statistics from PBS (day-to-year conversions checked in the sketch below)
- alice: jobs, 7975 days = 21 years
- atlas: jobs, 8931 days = 24 years
- auger: 282 jobs, 146 days
- d0: jobs, days = 76 years
- h1: 515 jobs, 140 days
- star: 4806 jobs, 41 days
- long queue (local users): 676 jobs, 613 days
- total: jobs, days = 126 years
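The "days = years" figures follow from truncating days/365; a minimal sketch over the entries whose day counts survived in the transcript (the d0 and total day counts are missing, so they are omitted):

```python
# Reproduce the "days = years" conversions quoted on the slide.
# Floor division by 365 matches the rounded figures (alice -> 21, atlas -> 24).

cpu_days = {"alice": 7975, "atlas": 8931, "auger": 146,
            "h1": 140, "star": 41, "long queue (local users)": 613}

for queue, days in cpu_days.items():
    print(f"{queue}: {days} days = {days // 365} years")
```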