Presentation on theme: "Development of Russian Grid Segment in the frames of EU DataGRID, LCG and EGEE projects V.A.Ilyin (SINP MSU), V.V.Korenkov (JINR, Dubna) NEC’2003, Varna."— Presentation transcript:

1 Development of Russian Grid Segment in the frames of EU DataGRID, LCG and EGEE projects V.A.Ilyin (SINP MSU), V.V.Korenkov (JINR, Dubna) NEC’2003, Varna 19 September, 2003

2 Creation of a regional centre in Russia for processing experimental data from the Large Hadron Collider (LHC). The project covers the years 1999-2007; its first stage (1999-2001) was the creation of a prototype of the centre. By the year 2005 the computing resources of the regional centre and the throughput of the links to CERN will provide:
- total processor performance: 2×10^6 MIPS
- disk space for data storage: 50 TB
- robotic mass storage: 5×10^4 TB
- communication channel CERN - regional centre: 622 Mbps
The distributed regional centre is expected to be created on the basis of the infrastructure of four centres: SINP MSU, ITEP, IHEP, and JINR. A unified computer network will be constructed for all Russian institutes participating in the LHC project.

3 LHC Computing Model 2001 (evolving), MONARC project regional group
[Diagram: tiered computing model. CERN at the centre; Tier1 regional centres in Germany, UK, France, Italy, USA and Russia; Tier2 centres at universities (Uni a, b, n, x, y) and labs (Lab b, c, m); Tier3 physics-department resources and desktops. Grid technology offers the opportunity to connect the tiers.]

4 DataGrid Architecture
[Diagram: layered architecture, Grid and Fabric]
- Local Computing: Local Application, Local Database
- Grid Application Layer: Job Management, Data Management, Metadata Management, Object to File Mapping
- Collective Services: Grid Scheduler, Replica Manager, Information & Monitoring
- Underlying Grid Services: Computing Element Services, Storage Element Services, Authorization, Authentication & Accounting, Replica Catalog, Database Services, Logging & Book-keeping
- Fabric services: Configuration Management, Node Installation & Management, Monitoring and Fault Tolerance, Resource Management, Fabric Storage Management

5 EDG overview: structure, work packages
The EDG collaboration is structured in 12 Work Packages:
- WP1: Work Load Management System
- WP2: Data Management
- WP3: Grid Monitoring / Grid Information Systems
- WP4: Fabric Management
- WP5: Storage Element
- WP6: Testbed and Demonstrators
- WP7: Network Monitoring
- WP8: High Energy Physics Applications (applications)
- WP9: Earth Observation (applications)
- WP10: Biology (applications)
- WP11: Dissemination
- WP12: Management

6 The Russian HEP institutes IHEP (Protvino), ITEP (Moscow), JINR (Dubna), SINP MSU, TC "Science and Society" (Moscow), Keldysh IAM (Moscow), RCC MSU, and PNPI (St. Petersburg) participated in the first European Grid project, EU DataGRID (WP6, WP8, WP10), with successful deployment of EDG middleware and participation in EDG testbeds. These activities built up experience with a modern Grid environment and led to the integration of the Russian Grid segment into the European Grid infrastructure.

7 Activities of Russian institutes in the EDG project:
- information service (GIIS)
- certification service (Certification Authority)
- data management (GDMP; OmniBack & OmniStorage)
- monitoring
- Metadispetcher
- mass event production for the CMS & ATLAS experiments
- DOLLY, a solution proposed to integrate CMS mass event production into the Grid infrastructure

8 The technology of GIIS information servers has been put into practice: these servers collect information on local computing and data-storage resources (published by the GRIS Globus service at each node of the distributed system) and dynamically forward it to a higher-level GIIS server. In this way a hierarchical GRIS-GIIS information service has been built and tested. A common GIIS information server (ldap://lhc-fs.sinp.msu.ru:2137) has been set up; it forwards information on the local resources of the Russian centres to the information server of the EU DataGrid project (ldap://testbed1.cern.ch:2137).
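The GRIS-GIIS aggregation just described can be sketched in a few lines: each site-level provider reports its local resources upward, and each index server merges what its children report. The class names, site names, and CPU counts below are purely illustrative assumptions, not part of the Globus MDS implementation.

```python
# Illustrative sketch of the hierarchical GRIS -> GIIS information flow.
# All names and resource records here are hypothetical.

class GRIS:
    """Site-level information provider for one resource."""
    def __init__(self, site, cpus):
        self.site = site
        self.cpus = cpus

    def report(self):
        return {self.site: self.cpus}

class GIIS:
    """Aggregating index server: merges the reports of its children."""
    def __init__(self, name, children):
        self.name = name
        self.children = children

    def report(self):
        merged = {}
        for child in self.children:
            merged.update(child.report())
        return merged

# Country-level GIIS collecting from three site-level GRIS servers
ru_giis = GIIS("dc=ru,o=grid", [
    GRIS("sinp", 32), GRIS("jinr", 48), GRIS("itep", 16),
])
print(ru_giis.report())  # {'sinp': 32, 'jinr': 48, 'itep': 16}
```

A higher-level GIIS (e.g. the CERN top-level one) would simply hold country-level GIIS objects as its children, giving the same merge rule at every level of the tree.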

9 Russian National GIIS
SRCC MSU, KIAM and TCSS participate only in the Russian DataGrid project and are not involved in the CERN projects.
[Diagram: GIIS hierarchy]
- CERN top-level WP6 GIIS: testbed001.cern.ch:2137
- Country-level GIIS (dc=ru, o=grid): lhc-fs.sinp.msu.ru:2137
  - dc=sinp, dc=ru, o=grid: SINP MSU, Moscow
  - dc=srcc, dc=ru, o=grid: SRCC MSU, Moscow
  - dc=itep, dc=ru, o=grid: ITEP, Moscow
  - dc=jinr, dc=ru, o=grid: JINR, Dubna
  - dc=kiam, dc=ru, o=grid: KIAM, Moscow
  - dc=ihep, dc=ru, o=grid: IHEP, Protvino
  - dc=tcss, dc=ru, o=grid: TCSS, Moscow
  - dc=?, dc=ru, o=grid: St. Petersburg

10 A Certification Authority (CA) centre for the Russian Grid segment has been created at SINP MSU. Its certificates are accepted by all participants of the EU DataGRID project. A scheme for confirming certificate requests with an electronic signature has been created with the assistance of Registration Authority (RA) centres located at the other institutes. Programs for installing and checking an electronic signature, and a package automating the operation of the certification centre, have been developed. The proposed CA+RA scheme and the program package have been accepted at CERN and by the other participants of the EU DataGrid project.
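The CA+RA confirmation flow can be illustrated schematically. A real Grid CA uses X.509 certificates and OpenSSL-style signatures; in this hedged sketch an HMAC stands in for the RA's electronic signature purely to show the workflow, and the key, subject names, and function names are all invented for the example.

```python
# Sketch of the RA -> CA confirmation scheme: the CA issues certificates
# only for requests endorsed by a Registration Authority. The HMAC here
# is a stand-in for a real electronic signature.
import hashlib
import hmac

RA_KEY = b"ra-secret-key"  # hypothetical secret held by one RA centre

def ra_sign(request: bytes) -> str:
    """Registration Authority endorses a certificate request."""
    return hmac.new(RA_KEY, request, hashlib.sha256).hexdigest()

def ca_accepts(request: bytes, signature: str) -> bool:
    """CA checks the RA endorsement before issuing a certificate."""
    expected = hmac.new(RA_KEY, request, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

req = b"CN=user,dc=sinp,dc=ru,o=grid"
sig = ra_sign(req)
print(ca_accepts(req, sig))          # True: endorsed request is accepted
print(ca_accepts(b"CN=other", sig))  # False: signature does not match
```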

11 GDMP (Grid Data Mirroring Package), a program for replication of files and databases, has been installed and tested. GDMP was created for remote operations with distributed databases. It uses Grid certificates and works according to a client-server scheme, so that replication of changes in a database is accomplished dynamically: the server periodically notifies the clients of changes in a database, and the clients fetch the updated files using the GSI-ftp command. GDMP is actively used for replication and is expected to become a Grid standard for replicating changes in distributed databases.
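The notify-and-pull pattern behind this replication scheme can be modelled in memory. GDMP itself transfers files over GSI-ftp with Grid certificates; the classes and file names below are illustrative assumptions that only show the pattern of the server tracking versions and clients pulling what changed.

```python
# In-memory sketch of GDMP-style replication: the server tracks file
# versions, clients pull only files newer than their local copies.

class ReplicaServer:
    def __init__(self):
        self.files = {}  # file name -> version number

    def publish(self, name):
        """Register a new or changed file on the server."""
        self.files[name] = self.files.get(name, 0) + 1

    def changes_since(self, seen):
        """Tell a client which files are newer than its copies."""
        return [n for n, v in self.files.items() if v > seen.get(n, 0)]

class ReplicaClient:
    def __init__(self):
        self.seen = {}  # file name -> last replicated version

    def sync(self, server):
        updated = server.changes_since(self.seen)
        for name in updated:  # stand-in for a GSI-ftp fetch
            self.seen[name] = server.files[name]
        return updated

server = ReplicaServer()
client = ReplicaClient()
server.publish("run42.db")
print(client.sync(server))  # ['run42.db']
print(client.sync(server))  # [] (nothing new since the last sync)
```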

12 OmniBack usage
Tests were run on the transfer of data from Protvino (sirius-b.ihep.su; OS Digital UNIX Alpha Systems 4.0) to the ATL-2640 mass storage system in Dubna (dtmain.jinr.ru; OS HP-UX 11.0) to measure the transmission capacity and stability of a system comprising the communication channels and the mass storage (OmniBack disk agent in Protvino, OmniBack tape agent in Dubna). No abnormal terminations were observed. The average transfer speed over all attempts was 480 Kb/s, or 1.68 Gb/h; the maximum was 623 Kb/s and the minimum 301 Kb/s. (The distance between Dubna and Protvino is about 250 km; the link between Protvino and Moscow is 8 Mbps.)
OmniStorage usage
Storage of the data obtained during CMS Monte-Carlo mass production runs is provided with OmniStorage: about 1 TB of data from SINP MSU has been transferred to the ATL-2640 in Dubna, with access to the data via scp. This gave some first experience with shared usage of the mass storage system (ATL-2640) in Dubna.
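A quick arithmetic check shows the quoted per-second and per-hour rates are mutually consistent; whether decimal or binary unit prefixes are meant is an assumption, since the slide does not specify, so both are computed.

```python
# Sanity-check the slide's figures: 480 K-units/s sustained for an hour.
avg_k_per_s = 480
per_hour_g_decimal = avg_k_per_s * 3600 / 1e6         # K -> G, decimal prefixes
per_hour_g_binary = avg_k_per_s * 3600 / (1024 ** 2)  # K -> G, binary prefixes

print(round(per_hour_g_decimal, 2))  # 1.73
print(round(per_hour_g_binary, 2))   # 1.65
# The slide's 1.68 Gb/h lies between the two, i.e. the figures agree to
# within rounding and prefix convention.
```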

13 Monitoring
A complex of work on monitoring of network resources, computing nodes, services, and applications has been carried out. JINR staff members take part in the development of monitoring facilities for computing clusters with a large number of nodes (10,000 and more) used in the EU DataGrid infrastructure. Within the Monitoring and Fault Tolerance task, they take part in the creation of a Correlation Engine system, which serves for prompt detection of abnormal states at cluster nodes and for taking measures to prevent them. A Correlation Engine prototype is installed at CERN and at JINR to record abnormal node states.
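The kind of rule such a system evaluates can be sketched as a threshold scan over per-node metrics. The metric names and limits below are hypothetical, chosen only to illustrate the idea of flagging abnormal node states; they are not taken from the actual EDG WP4 Correlation Engine.

```python
# Illustrative correlation rule: flag a node as abnormal when one or
# more raw metrics exceed their limits. Metrics/limits are invented.

THRESHOLDS = {"load": 8.0, "temp_c": 70.0, "disk_used": 0.95}

def abnormal_states(nodes):
    """Return {node: [violated metrics]} for nodes needing attention."""
    report = {}
    for name, metrics in nodes.items():
        violated = [m for m, limit in THRESHOLDS.items()
                    if metrics.get(m, 0) > limit]
        if violated:
            report[name] = violated
    return report

cluster = {
    "wn001": {"load": 2.5, "temp_c": 55, "disk_used": 0.40},
    "wn002": {"load": 9.1, "temp_c": 75, "disk_used": 0.50},
}
print(abnormal_states(cluster))  # {'wn002': ['load', 'temp_c']}
```

At the scale the slide mentions (10,000+ nodes), the real engine would additionally correlate events across nodes and over time rather than test each sample in isolation.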

14 Metadispetcher
The Metadispetcher program has been installed in the Russian EU DataGrid segment in cooperation with the Keldysh Institute of Applied Mathematics. Metadispetcher plans the start of jobs in a distributed Grid computing environment. The program has been tested and subsequently modified to provide effective data transfer by means of the Globus toolkit.
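The planning task of such a metascheduler can be sketched with a simple greedy rule: place each queued job on the resource with the most free slots. This rule and the job/site names are illustrative assumptions; the slide does not describe Metadispetcher's actual algorithm.

```python
# Toy metascheduler: greedily assign each job to the resource that
# currently has the most free slots. Purely illustrative.

def schedule(jobs, free_slots):
    """Return {job: resource} placing jobs on least-loaded resources."""
    plan = {}
    for job in jobs:
        best = max(free_slots, key=free_slots.get)  # most free slots
        if free_slots[best] == 0:
            break  # no capacity left anywhere
        plan[job] = best
        free_slots[best] -= 1
    return plan

plan = schedule(["j1", "j2", "j3"], {"sinp": 1, "jinr": 2})
print(plan)  # {'j1': 'jinr', 'j2': 'sinp', 'j3': 'jinr'}
```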

15 A task of mass event generation for the CMS experiment at LHC (the proposed solution)
[Diagram: DOLLY workflow in the Grid environment. Jobs defined in RefDB at CERN are prepared by IMPALA and submitted from a UI through the EDG Resource Broker (EDG-RB) to a Computing Element (CE); the batch manager and job executer run CMKIN jobs on worker nodes (WN1, WN2, ..., WNn) sharing data over NFS, with BOSS recording job information in a MySQL DB.]

16 Fundamental goal of the LCG: to help the experiments' computing projects get the best, most reliable and accurate physics results from the data coming from the detectors.
- Phase 1 (2002-05): prepare and deploy the environment for LHC computing
- Phase 2 (2006-08): acquire, build and operate the LHC computing service

17 The protocol between CERN, Russia, and JINR on participation in the LCG project was approved in 2003. The tasks of the Russian institutes in the LCG:
- LCG software testing;
- evaluation of new Grid technologies (e.g. Globus Toolkit 3) in the context of their use in the LCG;
- support and development of the event generators repository and the database of physical events;
- creation of the LCG infrastructure in Russia.
Since April 2003, groups for the directions listed above have been created and have begun their work.

18 Collaborating computer centres
[Diagram: building a Grid, the virtual LHC Computing Centre, with collaborating computer centres grouped into virtual organisations such as the ALICE VO and CMS VO.]

19 Russian LCG Portal

22 Monitoring Facilities

25 EGEE
The EGEE (Enabling Grids for E-science in Europe) project has been accepted by the European Commission (6th Framework Programme). The aim of the project is to create a global pan-European computing infrastructure of the Grid type. Our main goal is the integration of the Russian Grid segments, created during the past two years, into the European Grid infrastructure to be developed in the framework of the EGEE project.

26 Russian Data Intensive GRID (RDIG) Consortium: EGEE Federation
Eight Russian institutes made up the RDIG (Russian Data Intensive GRID) consortium as a national federation in the EGEE project:
- IHEP, Institute for High Energy Physics (Protvino);
- IMPB RAS, Institute of Mathematical Problems in Biology (Russian Academy of Sciences, Pushchino);
- ITEP, Institute of Theoretical and Experimental Physics (Moscow);
- JINR, Joint Institute for Nuclear Research (Dubna);
- KIAM RAS, Keldysh Institute of Applied Mathematics (Russian Academy of Sciences, Moscow);
- PNPI, Petersburg Nuclear Physics Institute (Russian Academy of Sciences, Gatchina);
- RRC KI, Russian Research Centre "Kurchatov Institute" (Moscow);
- SINP MSU, Skobeltsyn Institute of Nuclear Physics (Moscow State University, Moscow).
The Russian memorandum on the creation of a Grid-type computing infrastructure for distributed processing of huge data volumes was signed in September 2003 by the directors of the eight institutes.

27 Russian contribution to EGEE
RDIG as an operational and functional part of the EGEE infrastructure (CIC, ROC, RC; integration with EGEE). Specific service activities:
- SA1: Creation of Infrastructure
- SA2: Network Activities
- NA2: Dissemination and Outreach
- NA3: User Training and Induction
- NA4: Application Identification and Support

