
1 HEP GRID computing in Poland. Henryk Palka, Institute of Nuclear Physics, PAN, Krakow, Poland. NEC'07, Varna, September 2007.

2 Topics: ● LHC data rates and computing model ● LCG: the LHC Computing Grid project ● Polish Grid infrastructure ● Sharing of Central European Grid resources ● ATLAS MC production and Data Challenges at ACC Cyfronet ● The BalticGrid project

3 LHC experiments data rates

Experiment | Rate [Hz] | RAW [MB] | ESD/Reco [MB] | AOD [kB] | MC [MB/evt] | MC % of real
ALICE HI   | 100       | 12.5     | 2.5           | 250      | 300         | 100
ALICE pp   | 100       | 1        | 0.04          | 4        | 0.4         | 100
ATLAS      | 200       | 1.6      | 0.5           | 100      | 2           | 20
CMS        | 150       | 1.5      | 0.25          | 50       | 2           | 100
LHCb       | 2000      | 0.025    | 0.5           | 20       | -           | -

10^7 seconds/year of pp running from 2008 on (?) gives ~10^9 events per experiment; 10^6 seconds/year of heavy-ion running.
For LHC computing, ~100 MSpecInt2000, i.e. ~100k cores at ~3 GHz, is needed! For data storage, ~20 petabytes, i.e. ~100k disks/tapes, per year is needed!
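A minimal arithmetic sketch of what these rates mean per year; it uses only the trigger rate, RAW event size and the 10^7 s/year pp live time from the table above, everything else is straightforward multiplication:

```python
# Back-of-the-envelope check of the annual RAW data volumes implied by the
# table above: rate [Hz] x RAW event size [MB] x live seconds per year.
# The 1e7 s/year pp live time is the figure quoted on the slide.

LIVE_SECONDS_PP = 1e7

experiments = {
    # name: (trigger rate [Hz], RAW event size [MB])
    "ATLAS":    (200,  1.6),
    "CMS":      (150,  1.5),
    "LHCb":     (2000, 0.025),
    "ALICE pp": (100,  1.0),
}

for name, (rate_hz, raw_mb) in experiments.items():
    events_per_year = rate_hz * LIVE_SECONDS_PP
    raw_pb_per_year = events_per_year * raw_mb / 1e9  # MB -> PB
    print(f"{name:8s}: {events_per_year:.1e} events/year, ~{raw_pb_per_year:.2f} PB RAW/year")
```

Adding ESD, AOD and Monte Carlo output, plus the heavy-ion programme, brings the total into the same ballpark as the ~20 petabytes per year quoted above.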

4 LHC Computing Model: the organisation of WLCG

5 Computing model („cloud”)
Tier-0 at CERN: records the RAW data (1.25 GB/s for ALICE); distributes a second copy to the Tier-1s; calibration and first-pass reconstruction.
Tier-1 centres (11 defined): manage permanent storage of RAW, simulated and processed data; capacity for reprocessing and bulk analysis.
Tier-2 centres (>100 identified): Monte Carlo event simulation; end-user analysis.
Tier-3: facilities at universities and laboratories; access to data and processing at Tier-2s and Tier-1s.
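For reference, the division of labour on this slide can be written down as a simple lookup structure; this is purely illustrative, not an official WLCG definition, and the site counts are just the ones quoted above:

```python
# Illustrative summary of the WLCG tier roles described on this slide.
WLCG_TIERS = {
    "Tier-0": {
        "sites": 1,  # CERN
        "roles": ["record RAW data (1.25 GB/s for ALICE)",
                  "distribute a second copy to the Tier-1s",
                  "calibration and first-pass reconstruction"],
    },
    "Tier-1": {
        "sites": 11,  # "11 defined"
        "roles": ["permanent storage of RAW, simulated and processed data",
                  "reprocessing", "bulk analysis"],
    },
    "Tier-2": {
        "sites": 100,  # ">100 identified"
        "roles": ["Monte Carlo event simulation", "end-user analysis"],
    },
    "Tier-3": {
        "sites": None,  # university/laboratory facilities, not centrally counted
        "roles": ["access to data and processing at Tier-1s and Tier-2s"],
    },
}

for tier, info in WLCG_TIERS.items():
    print(tier, "-", "; ".join(info["roles"]))
```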

6 LHC Computing Grid project (LCG)
Objectives: design, prototyping and implementation of the computing environment for the LHC experiments (Monte Carlo simulation, reconstruction and data analysis):
- infrastructure (PC farms, networking)
- middleware (based on EDG, VDT, gLite, ...)
- operations (experiment VOs, operation and support centres)
Schedule:
- phase 1 (2002–2005, ~50 MCHF): R&D and prototyping (up to 30% of the final size)
- phase 2 (2006–2008): preparation of a Technical Design Report and Memoranda of Understanding, deployment (2007)
Timeline 2005–2008: cosmics, first beams, first physics, full physics run.

7 Planned sharing of capacity between CERN and the Regional Centres in 2008 (chart, including tape). Requirements from the December 2004 computing model papers, reviewed by the LHCC in January 2005; preliminary planning data.

8 LHC Computing: WLCG is built on a few scientific computing grids.

9 Polish Grid infrastructure: networking (the PIONIER project).
Map of PIONIER's own fibres linking the metropolitan networks (Gdańsk, Poznań, Warszawa, Kraków, Wrocław, Łódź, Katowice, Gorzów and other cities); link types in the legend: 10 Gb/s (1 lambda), 2×10 Gb/s, 1 Gb/s, CBDF 10 Gb/s.
External links: GÉANT 10+10 Gb/s, GÉANT/TELIA 2×2.5 Gb/s, DFN 10 Gb/s, CESNET and SANET, GTS 1.6 Gb/s, BASNET 34 Mb/s.
Dedicated 1 Gb/s HEP VLANs link the Tier2 sites (ACK Cyfronet Kraków, ICM Warszawa, PCSS Poznań) towards the Tier1 at FZK Karlsruhe.

10 Polish Grid infrastructure: Tier2 = ACC Cyfronet + ICM + PSNC
Three computing centres contribute to the Polish Tier2 (as part of the EGEE/LCG ROC; see the tally sketch below):
- ACC Cyfronet Krakow: ~300 (450) Pentium 32-bit processors, connected to PSNC via a 1 Gb/s HEP VLAN
- ICM Warsaw: ~180 (340) AMD Opteron 64-bit processors, connected to PSNC via a 1 Gb/s HEP VLAN
- PSNC Poznan: ~240 Itanium IA-64 processors, connected to GÉANT and DFN at 10 Gb/s
In the WLCG hierarchy the Polish Tier2 is attached to the Tier1 at FZK Karlsruhe. Tier3 centres are being built at IFJ PAN Krakow and IPJ/FP TU Warsaw.
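A quick tally of the processor counts quoted above (a minimal sketch; the parenthesised figures are assumed here to be planned/upgraded counts, which is an interpretation, not something the slide states explicitly):

```python
# Processor counts for the three Polish Tier2 centres as quoted on this slide.
# The second number in each pair is the parenthesised (assumed planned) figure.
centres = {
    "ACC Cyfronet Krakow": (300, 450),
    "ICM Warsaw":          (180, 340),
    "PSNC Poznan":         (240, 240),  # no second figure quoted for PSNC
}

current = sum(now for now, _ in centres.values())
planned = sum(later for _, later in centres.values())
print(f"Polish Tier2 total: ~{current} processors now, ~{planned} after planned upgrades")
# -> ~720 now, ~1030 after upgrades
```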

11 Polish Grid infrastructure: disk storage at ACC Cyfronet. HP EVA 8000: 8 GB cache; 8 FC shortwave ports; 240 FATA 500 GB 7200 rpm HDDs (120 TB). A second HP EVA with 90 TB is underway.

12 Polish Grid infrastructure: towards a Polish National Grid. Poland has been, and is, involved in a number of EU Grid projects. FP5: EUROGRID, GridLab, CrossGrid, GridStart, GRIP, ... FP6: EGEE, EGEE2, K-WF Grid, BalticGrid, CoreGRID, ViroLab, Gredia, int.eu.grid, UNIGRIDS, EUChinaGrid, ... In 2007 five major Polish computing centres (Krakow, Gdansk, Poznan, Warsaw and Wroclaw) signed an agreement to form the Polish National Grid, PL-Grid. The yearly Cracow Grid Workshops gather about 150 participants (in 2007: W. Boch, F. Gagliardi, W. Gensch, K. Kasselman, D. Kranzmueller, T. Priol, P. Sloot, and others); this year's workshop, the 7th in a row, will take place on 15-18 October 2007.

13 Sharing of CEGC resources

14 Sharing of CEGC resources (continued)

15 EGEE/CEGC computing resource usage by the LHC experiments and other VOs: charts for ICM Warszawa, Cyfronet Kraków, WCSS64 Wrocław and PCSS Poznań.

16 ATLAS MC production at ACC Cyfronet. ATLAS production at Cyfronet has been running very well and with high efficiency. ATLAS regularly gets its fair share, recently running constantly on more than 100 CPUs.

17 ATLAS Data Challenges status (from L. Robertson at C-RRB 2004).
DC2 Phase I started in July and finished in October 2004. ATLAS totals: ~1350 kSI2k.months of CPU, ~120,000 jobs, ~10 million events fully simulated (Geant4), ~27 TB of data.
Three Grids were used: LCG (~70 sites, up to 7600 CPUs) 41%, NorduGrid (22 sites, ~3280 CPUs (800), ~14 TB) 30%, Grid3 (28 sites, ~2000 CPUs) 29%.
All three Grids have been proven usable for real production. About 1% of the events were generated in Cracow.
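The DC2 totals above imply a few useful per-event averages (a rough sketch using only the figures quoted on this slide; the 30-day month is an approximation):

```python
# Averages derived from the ATLAS DC2 figures quoted above.
ksi2k_months = 1350        # ~1350 kSI2k.months of CPU
jobs = 120_000             # ~120,000 jobs
events = 10_000_000        # ~10 million fully simulated (Geant4) events
output_tb = 27             # ~27 TB of output

seconds_per_month = 30 * 24 * 3600
cpu_ksi2k_s = ksi2k_months * seconds_per_month

print(f"~{events / jobs:.0f} events per job")                     # ~83
print(f"~{output_tb * 1e6 / events:.1f} MB of output per event")  # ~2.7 MB
print(f"~{cpu_ksi2k_s / events:.0f} kSI2k.s per event "
      f"(~{cpu_ksi2k_s / events / 60:.0f} min on a 1 kSI2k CPU)")  # ~350 / ~6
print(f"~{0.01 * events:,.0f} events (~1%) simulated in Cracow")
```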

18 Data transfer T0 → T1 → T2.
T0 → T1 tests started in May, mostly with the FZK Tier1 involved. End of May: proposal to include the Tier2s from the FZK cloud; delayed due to a high rate of errors at FZK (even though nominal transfer rates had been achieved).
Mid June: T1 (FZK) → T2 (cloud) functional tests started. The DQ2 tool at FZK worked well; CYFRONET and 4 other sites, out of the 7 tested, had ~100% file transfer efficiency. Transfer rates FZK → CYF reached up to 60 MB/s (FZK dCache transfer rate plots).
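To put the quoted 60 MB/s FZK → CYFRONET peak rate in perspective, here is a small sketch of the implied replication time; the 1 TB dataset size is an arbitrary example, not a figure from the talk:

```python
# Time to replicate a dataset from the Tier1 (FZK) to the Tier2 (CYFRONET)
# at the ~60 MB/s peak rate quoted above. The 1 TB size is a made-up example.
rate_mb_per_s = 60.0
dataset_tb = 1.0

seconds = dataset_tb * 1e6 / rate_mb_per_s   # 1 TB = 1e6 MB
print(f"{dataset_tb:.0f} TB at {rate_mb_per_s:.0f} MB/s -> ~{seconds / 3600:.1f} hours")
# -> ~4.6 hours per terabyte, sustained
```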

19 Where is the digital divide in Europe? (Map courtesy of D. Foster.)

20 BalticGrid in one slide
■ Started 1 Nov 2005 (duration 30 months)
■ Partners: 10 leading institutions in six countries in the Baltic region and Switzerland (CERN)
■ Budget: 3.0 M€
■ Coordinator: KTH PDC, Stockholm
■ Compute resources: 17 resource centres
■ Pilot applications: HEP, material science, biology, linguistics
■ Activities: SA (Specific Service Activities), NA (Networking Activities), JRA (Joint Research Activities)
■ Partner institutions: Estonian Educational and Research Network (EENet); Keemilise ja Bioloogilise Füüsika Instituut (NICPB); Institute of Mathematics and Computer Science (IMCS UL); Riga Technical University (RTU); Vilnius University (VU); Institute of Theoretical Physics and Astronomy (ITPA); Poznan Supercomputing and Networking Center (PSNC); Instytut Fizyki Jadrowej im. H. Niewodniczanskiego Polskiej Akademii Nauk (IFJ PAN); Parallelldatorcentrum, Kungl. Tekniska Högskolan (KTH PDC); CERN

21 Grid Operations activity. Leader: Lauri Anton; Krakow coordinator: Marcin Radecki. (Status at the end of 2006.)

22 BalticGrid resources at IFJ PAN: the seed of a Tier3
■ Development of local GRID installations: access the GRID from a local UI
■ Support for HEP users: installation of experiment applications; development and tests of user algorithms; submission of jobs to the GRID for distributed analysis (a minimal job-description sketch follows below)
■ Mini cluster (blade technology): financed from an associated national project; 32 cores, 2 GB RAM per core, 2 TB of disk; to be extended in the future (local Tier3)
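For the job-submission step mentioned above, a user on the local UI would typically describe the job in a small JDL file and hand it to the gLite workload management system. The sketch below only illustrates the idea: the script name, sandbox files, output file and the "atlas" VO are placeholders, not details from the talk.

```python
# Write a minimal gLite JDL job description from the local User Interface (UI).
# All file names and the VO below are illustrative placeholders.
jdl = """\
Executable          = "run_analysis.sh";
Arguments           = "dataset_list.txt";
StdOutput           = "stdout.log";
StdError            = "stderr.log";
InputSandbox        = {"run_analysis.sh", "dataset_list.txt"};
OutputSandbox       = {"stdout.log", "stderr.log", "histos.root"};
VirtualOrganisation = "atlas";
"""

with open("analysis.jdl", "w") as f:
    f.write(jdl)

# From the UI the job would then be handled with the gLite WMS command line,
# roughly:
#   glite-wms-job-submit -a analysis.jdl
#   glite-wms-job-status <job_id>
#   glite-wms-job-output <job_id>
```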

23 Summary and conclusions. The seemingly insurmountable problem of LHC computing appears to be solvable, thanks to rapid progress in IT technologies. The Polish Tier-2 LCG infrastructure and organisation are sound, and they are being developed further to meet the commitments for 2008. The HEP GRID community also plays an essential role in removing 'digital divides' and in bringing GRID technology to other branches of science.

24 Acknowledgements. The material used in this presentation comes from many sources: the LHC collider and LCG projects, the LHC experimental teams, ... Special thanks go to Michal Turala, the spiritus movens of Polish GRID computing. I also thank my other Krakow colleagues: P. Lason, A. Olszewski, M. Radecki and M. Witek.

25 Thank you for your attention

26 Backup

27 LHC Computing

28 LHC experiments and data rates: data preselection in real time.
Many different physics processes; several levels of filtering; high efficiency for events of interest; total reduction factor of about 10^7.
Collision rate: 40 MHz (~1000 TB/s equivalent).
Level 1 (special hardware): 75 kHz (75 GB/s), fully digitised.
Level 2 (embedded processors/farm): 5 kHz (5 GB/s).
Level 3 (farm of commodity CPUs): 100 Hz (100 MB/s) to data recording and offline analysis.
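A minimal sketch of the arithmetic behind the quoted ~10^7 reduction factor, using only the rates and data volumes listed on this slide:

```python
# Data-rate reduction through the trigger chain, using the figures above.
stages = [
    ("Collision rate",              40e6, 1000e12),  # 40 MHz, ~1000 TB/s equivalent
    ("After Level 1 (hardware)",    75e3,   75e9),   # 75 kHz, 75 GB/s, fully digitised
    ("After Level 2 (processors)",   5e3,    5e9),   # 5 kHz, 5 GB/s
    ("After Level 3 (CPU farm)",     100,   100e6),  # 100 Hz, 100 MB/s to storage
]

initial_bytes = stages[0][2]
for name, rate_hz, bytes_per_s in stages:
    print(f"{name:28s}: {rate_hz:12,.0f} Hz, data volume reduced x{initial_bytes / bytes_per_s:,.0f}")
# Overall: 1000 TB/s -> 100 MB/s, a reduction factor of ~1e7, as quoted.
```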

29 ICFA Network Task Force (1998): required network bandwidth (Mbps). A 100–1000× bandwidth increase was foreseen for 1998–2005. See the ICFA-NTF Requirements Report: http://l3www.cern.ch/~newman/icfareq98.html

30 Progress in IT technology (from R. Mount): performance per unit cost as a function of time.

31 Where is the digital divide in Europe?

32 The BalticGrid initiative. Partners: Estonian Educational and Research Network (EENet); Keemilise ja Bioloogilise Füüsika Instituut (NICPB); Institute of Mathematics and Computer Science (IMCS UL); Riga Technical University (RTU); Vilnius University (VU); Institute of Theoretical Physics and Astronomy (ITPA); Poznan Supercomputing and Networking Center (PSNC); Instytut Fizyki Jadrowej im. Henryka Niewodniczanskiego Polskiej Akademii Nauk (IFJ PAN); Parallelldatorcentrum at Kungl Tekniska Högskolan (KTH PDC); CERN. A proposal to the recent EU FP6 call (research infrastructures) has been submitted.

33 BalticGrid partners. Estonia: Tallinn, Tartu. Lithuania: Vilnius. Latvia: Riga. Poland: Kraków, Poznan. Switzerland: Geneva. Sweden: Stockholm. Details on www.balticgrid.org

