
1 Russian GRID for operations with large arrays of scientific data. V.A. Ilyin (SINP MSU). The LCG (LHC Computing GRID) and EGEE (Enabling Grids for E-science in Europe) projects. Tarusa, 6 February 2004.

2 CERN

3

4 Online system: a multi-level trigger filters out background and reduces the data volume (online reduction ~10^7). Trigger menus select interesting events and filter out the less interesting ones.
Level 1 (special hardware): 40 MHz (40 TB/sec) in, 75 kHz (75 GB/sec) out
Level 2 (embedded processors): 5 kHz (5 GB/sec) out
Level 3 (PCs): 100 Hz (100 MB/sec) out, to data recording & offline analysis
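A minimal Python sketch of the rate arithmetic behind this slide; the ~1 MB event size is an assumption inferred from "40 MHz (40 TB/sec)", while the stage rates are the slide's own numbers.

EVENT_SIZE_MB = 1.0  # assumed: 40 TB/sec / 40 MHz ~= 1 MB per event

# (stage, output rate in Hz), as quoted on the slide
stages = [
    ("collisions",                    40_000_000),
    ("level 1 - special hardware",        75_000),
    ("level 2 - embedded processors",      5_000),
    ("level 3 - PCs",                        100),
]

for (_, rate_in), (name, rate_out) in zip(stages, stages[1:]):
    print(f"{name}: rejects {1 - rate_out/rate_in:.2%} of input, "
          f"writes {rate_out * EVENT_SIZE_MB:,.0f} MB/sec")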

5 Large Hadron Collider (LHC): data flows, stages of processing and analysis. [Flow diagram: detector → raw data (~100 MB/sec) → event selection and primary reconstruction → event reconstruction → event summary data (ESD, 0.5-1 PB/year) → preparation of data for analysis → analysis data selected by physics channel (200 TB/year, 200 MB/sec) → interactive physics analysis by thousands of scientists (1-100 GB/sec). Event simulation feeds the same chain. Archival storage at 0.1-1 GB/sec, 1-6 PB/year. RIVK BAK, the Russian regional computing complex for the LHC, at the 5-10% level, with Tier1 and Tier2 functions.]
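A quick consistency check of these figures (my own arithmetic, not the slide's; the ~1e7 seconds of live time per year is an assumed canonical accelerator figure):

LIVE_SECONDS_PER_YEAR = 1e7   # assumption: typical collider live time per year

raw_rate_mb_s = 100           # raw-data rate quoted on the slide, MB/sec
raw_pb_year = raw_rate_mb_s * LIVE_SECONDS_PER_YEAR / 1e9   # 1 PB = 1e9 MB
print(f"raw data: ~{raw_pb_year:.0f} PB/year")
# ~1 PB/year, at the low end of the slide's 1-6 PB/year range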

6 LHC Challenges: Scale (in 2006)
Data written to tape: ~40 Petabytes/year and up (1 PB = 10^9 MB)
Processing capacity: 100 TIPS and up (1 TIPS = 10^6 MIPS)
Typical networks: a few Gbps per link
Lifetime of experiment: 2-3 decades (start in 2007)
Users: ~5000 physicists
Software developers: ~300 (four experiments)

7 LHC Computing Model 2001 (MONARC project) – evolving. [Tier diagram: CERN at the centre; Tier1 centres at CERN, in Germany, the UK, France, Italy, the USA and Russia (a regional group); Tier2 centres (Uni a, Uni b, ..., Lab b, Lab c, ...); Tier3 physics-department resources; desktops. The opportunity of Grid technology.]

8 Russian Tier2-Cluster: the Russian regional centre for LHC computing. A cluster of institutional computing centres with Tier2 functionality and summary resources at the 50-70% level of the canonical Tier1 centre for each experiment (ALICE, ATLAS, CMS, LHCb): analysis; simulations; user data support.
Participating institutes: Moscow – ITEP, KI, MSU, LPI, MEPhI, ...; Moscow region – JINR, IHEP, INR RAS; St. Petersburg – PNPI RAS, ...; Novosibirsk – BINP SB RAS.
Coherent use of distributed resources by means of LCG (EDG, VDT, ...) technologies. Active participation in the LCG Phase 1 prototyping and Data Challenges (at the 5% level).
Planned resources:   2002    2003      2004 Q4   2007
CPU (kSI95)          5       10-(15)   25-35     410
Disk (TB)            7       12-(16)   50-70     850
Tape (TB)            (10)    20-(50)   100       1250
Network (Mbps)       20      50        155/...   Gbps/...
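The 2002 → 2007 ramp-up implied by that table, computed explicitly (my transcription of the table's endpoint values, lower bounds taken where a range is given):

resources = {          # 2002 and 2007 planned values from the table above
    "CPU (kSI95)": (5, 410),
    "Disk (TB)":   (7, 850),
    "Tape (TB)":   (10, 1250),
}
for name, (y2002, y2007) in resources.items():
    print(f"{name}: {y2002} -> {y2007}  (factor ~{y2007 / y2002:.0f})")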

9 Russia in LCG http://www.cern.ch/lcg We started activity in LCG (the LHC Computing GRID project) in autumn 2002. Russia is now joining the LCG-1 infrastructure: first SINP MSU, then JINR, ITEP and IHEP. Goal – to have an operational segment of the worldwide LCG infrastructure in Russia in Q4 and to be ready for the Data Challenges in 2004. Manpower contribution to LCG (started in May 2003): the Agreement is being signed by CERN, Russian and JINR officials; 3 tasks under our responsibility: 1) testing new GRID middleware to be used in LCG; 2) evaluation of new-on-the-market GRID middleware (first task – evaluation of GT3 and WebSphere); 3) common solutions for event generators (event databases).

10 LHC Data Challenges: campaigns generating simulated (Monte Carlo) data for the future LHC experiments. Typical load on the communication channels now: transferring 100 GB of data from Moscow to CERN within a working day → ~50 Mbit/sec! And this is not an average load but a peak one:
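The arithmetic behind that figure (my own check, not the slide's; the length of the effective transfer window is an assumption):

volume_mbit = 100 * 8 * 1000          # 100 GB expressed in megabits
for hours in (4.5, 8):                # assumed effective "working day" windows
    rate = volume_mbit / (hours * 3600)
    print(f"{hours} h window -> {rate:.0f} Mbit/sec sustained")
# 4.5 h -> ~49 Mbit/sec (close to the slide's ~50); a full 8 h day -> ~28 Mbit/sec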

11 LCG – Goals. The goal of the LCG project is to prototype and deploy the computing environment for the LHC experiments. LCG is not a development project: it relies on other grid projects for grid middleware development and support. Two phases:
Phase 1 (2002-2005): build a service prototype based on existing grid middleware; gain experience in running a production grid service; produce the TDR for the final system.
Phase 2 (2006-2008): build and commission the initial LHC computing environment.

12 Building a Grid: collaborating computer centres form the virtual LHC computing organizations, e.g. the ALICE VO and the CMS VO.

13 Example of using EDG (LCG) middleware (CMS VO). SINP MSU: CE lhc01.sinp.msu.ru, WN lhc02.sinp.msu.ru, SE lhc03.sinp.msu.ru; RB + Information Index lhc20.sinp.msu.ru; user machine lhc04.sinp.msu.ru. CERN: lxshare0220.cern.ch. Padua: grid011.pd.infn.it.
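A hedged sketch of how a user on lhc04.sinp.msu.ru might have exercised this setup. edg-job-submit and the JDL attributes shown are standard EDG middleware; the script name and JDL contents are illustrative assumptions, not taken from the slide:

import subprocess

# Illustrative JDL (EDG Job Description Language); the executable and
# sandbox file names are hypothetical.
JDL = """
Executable    = "cms_analysis.sh";
StdOutput     = "std.out";
StdError      = "std.err";
InputSandbox  = {"cms_analysis.sh"};
OutputSandbox = {"std.out", "std.err"};
"""

with open("job.jdl", "w") as f:
    f.write(JDL)

# The Resource Broker (lhc20.sinp.msu.ru above) consults the Information
# Index and matches the job to a CE, e.g. lhc01.sinp.msu.ru.
subprocess.run(["edg-job-submit", "job.jdl"], check=True)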

14

15 EGEE Timeline http://www.cern.ch/egee

16 Distribution of GRID Service Activities over Europe: Operations Management at CERN; Core Infrastructure Centres (CICs) in the UK, France, Italy, Russia and at CERN, responsible for managing the overall Grid infrastructure; Regional Operations Centres (ROCs), responsible for coordinating regional resources, regional deployment and support of services. Russia: CIC – MSU, RRC KI, JINR, ITEP, KIAM RAS; ROC – IHEP, JINR, PNPI RAS, IMPB RAS.

17 Pan-European multi-gigabit backbone (33 countries), January 2004. Planning underway for the "GEANT2" (GN2) multi-lambda backbone, to start in 2005.

18 International Committee for Future Accelerators (ICFA), Standing Committee on Inter-Regional Connectivity (SCIC), http://icfa-scic.web.cern.ch/ICFA-SCIC/. ICFA SCIC reports:
Networking for High Energy and Nuclear Physics – Feb 2004
Report on the Digital Divide in Russia – Feb 2004
Network Monitoring Report – Feb 2004
Advanced Technologies Interim Report – Feb 2003
Digital Divide Executive Report – Feb 2003

19 GLORIAD: Global Optical Ring (US-Russia-China), also important for intra-Russia connectivity. "Little GLORIAD" (OC3) launched January 12; to OC192 in 2005.

20 SCIC Monitoring WG – Throughput Improvements 1995-2004. TCP bandwidth < MSS / (RTT · sqrt(loss)) [1]. 60% annual improvement, a factor of ~100 per 10 years. Progress, but the Digital Divide is mostly maintained: some regions are ~5-10 years behind; SE Europe and parts of Asia may be catching up (slowly). [1] Mathis et al., Computer Communication Review 27(3), July 1997.
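The Mathis et al. bound quoted above, transcribed directly into code; the MSS, RTT and loss values below are illustrative assumptions for a long-haul path, not figures from the slide:

from math import sqrt

def tcp_throughput_bound(mss_bytes, rtt_s, loss):
    """Mathis et al. upper bound on single-stream TCP throughput, in bits/sec."""
    return mss_bytes * 8 / (rtt_s * sqrt(loss))

# Assumed example path: 1460-byte MSS, 60 ms RTT, 0.1% packet loss.
bound = tcp_throughput_bound(1460, 0.060, 0.001)
print(f"~{bound / 1e6:.1f} Mbit/sec")   # ~6.2 Mbit/sec for a single stream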

21 Regional and international connectivity for Russia (High Energy Physics):
GLORIAD: 10 Gbps
Moscow: 1 Gbps
IHEP: 8 Mbps (microwave); 100 Mbps fiber-optic under construction (Q1-Q2 2004?)
JINR: 45 Mbps; 100-155 Mbps (Q1-Q2 2004); Gbps (2004-2005)
INR RAS: 2 Mbps + 2x4 Mbps (microwave)
BINP: 1 Mbps; 45 Mbps (2004?), ...
PNPI: 512 Kbps (commodity), and 34 Mbps fiber-optic, but the budget covers only 2 Mbps (!)
USA: NaukaNET, 155 Mbps
GEANT: 155 Mbps basic link, plus a 155 Mbps additional link for GRID projects
Japan: through the USA by FastNET, 512 Kbps, Novosibirsk (BINP) – KEK
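To make the spread concrete, rough single-stream transfer times for a 100 GB Data Challenge dataset over some of the links above (my arithmetic, ignoring protocol overhead and link sharing):

links_mbit_s = {              # selected figures from the slide
    "PNPI (512 Kbps commodity)": 0.512,
    "INR RAS (2 Mbps)":          2,
    "JINR (45 Mbps)":            45,
    "NaukaNET (155 Mbps)":       155,
}
volume_mbit = 100 * 8 * 1000  # 100 GB in megabits
for name, rate in links_mbit_s.items():
    hours = volume_mbit / rate / 3600
    print(f"{name:28s} ~{hours:7.1f} hours")
# From ~1.4 hours on NaukaNET to ~430 hours (18 days) at PNPI.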

