
1 PIONIER - Polish Optical Internet: The eScience Enabler Jarek Nabrzyski Poznan Supercomputing and Networking Center naber@man.poznan.pl Digital Divide and HEPGRID Workshop

2 Poland Population: 38 mln Area: 312,000 km² EU member since 01.05.2004 Temperature today: 0 °C

3 R&D Center PSNC was established in 1993 and is an R&D center in:
–New Generation Networks: POZMAN and PIONIER networks; 6NET, SEQUIN, ATRIUM projects
–HPC and Grids: GRIDLAB, CROSSGRID, VLAB, PROGRESS, CLUSTERIX projects
–Portals and Content Management Tools: Polish Educational Portal "Interkl@sa", Multimedia City Guide, Digital Library Framework, Interactive TV

4

5 PIONIER - an idea of an "All Optical Network"; the facts:
4Q1999 – proposal of the program submitted to KBN
2Q2000 – PIONIER testbed (DWDM, TNC 2001)
3Q2000 – project accepted (tender for co-operation, negotiations with telcos)
4Q2001 – Phase I: ~10 mln Euro
–contracts with Telbank and Szeptel (1434 km)
4Q2002 – Phase II: ~18.5 mln Euro
–contracts with Telbank and regional power grid companies (1214 km)
–contract for equipment: 10GE & DWDM and an IP router
2H2003 – installation of 10GE with DWDM repeaters/amplifiers
–16 MANs connected and 2648 km of fiber installed
–contracts with partners (Telbank and HAVE) (1426 km): Phase I ~5 mln Euro
2004/2005 – 21 MANs connected with 5200 km of fiber

6 PIONIER - fiber deployment, 1Q2004. [Map legend: installed fiber; PIONIER nodes; fibers started in 2003; fibers planned in 2004/2005; PIONIER nodes planned in 2004/2005]

7 How we build fibers
Co-investment with telco operators or self-investment (with right of way: power distribution, railways and public roads)
Average of 16 fibers available: 4 x G.652 for the national backbone, 8 x G.652 for regional use, 4 x G.655 for long-haul transmission (2001-2002)
2 pipes and one cable with 24 fibers available (2003)
Average span length 60 km for the national backbone (regeneration possible)
Local loop construction is sometimes difficult (urban areas - average 6 months waiting time for permissions)
Found on time...

8 Link optimization - a side effect of an urgent demand for a DCM module ;-)
Replacement of G.652 fiber with G.655 (NZDS) fiber
Similar cost of G.652 and G.655 fiber
Cost reduction via: lower number of amplifiers/regenerators, lower number of DCMs
But: optimization is valid for selected cases only, and is wavelength/waveset/link dependent.

9 Original link cost: approximately 140 kEuro. Optimized link cost: approximately 90 kEuro - cost savings (equipment only): ~35%.
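A quick sanity check of the quoted saving, as a minimal sketch; the two totals come from the slide, while the variable names are ours:

```python
# Equipment-cost comparison for the two link designs on slides 8-9.
g652_cost = 140_000  # Euro: link on G.652 fiber (more amplifiers and DCMs)
g655_cost = 90_000   # Euro: same link on G.655 (NZDS) fiber

savings = (g652_cost - g655_cost) / g652_cost
print(f"equipment savings: {savings:.0%}")  # -> 36%, matching the ~35% quoted
```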

10 Community demands as a driving force
Academic Internet
–international connections: GEANT 10 Gb/s, TELIA 2 Gb/s, GTS/SPRINT 2 Gb/s
–national connections between MANs (10 Gb/s, 622 Mb/s leased lambda)
–near future: n x 10 Gb/s
High Performance Computing Centers (FC, GE, 10GE)
–Project PROGRESS "Access environment to computational services performed by cluster of SUNs": SUN cluster (3 sites x 1 Gb/s), results presented at SC 2002 and SC 2003
–Project SGI "HPC/HPV in Virtual Laboratory on SGI clusters": SGI cluster (6 sites x 1 Gb/s)
–Project CLUSTERIX "National CLUSTER of LInuX Systems" (12 sites x 1 Gb/s)
–Project in preparation: National Data Storage system (5 sites x 1 Gb/s)

11 Community demands as a driving force
Dedicated capacity for European projects:
–ATRIUM (622 Mb/s)
–6NET (155-622 Mb/s)
–VLBI (2 x 1 Gb/s dedicated)
–CERN-ATLAS (>1 Gb/s dedicated per site)
–near future: 6th FP IST projects

12 Intermediate stage - 10GE over fiber. [Map: MAN nodes at Gdańsk, Koszalin, Szczecin, Poznań, Wrocław, Zielona Góra, Łódź, Katowice, Kraków, Lublin, Warszawa, Bydgoszcz, Toruń, Częstochowa, Białystok, Olsztyn, Kielce, Puławy, Rzeszów, Opole, Bielsko-Biała, Radom; legend: own fibers 10 Gb/s, GÉANT 10 Gb/s, leased channels 622 Mb/s and 155 Mb/s to Metropolitan Area Networks]

13 PIONIER - the economy behind
Cost reduction via:
–simplified network architecture: IP / ATM / SDH / DWDM → IP / GE / DWDM
–lower investment, lower depreciation: ATM / SDH → GE
–simplified management

14 PIONIER - the economy behind...
Cost relation (connections between 21 MANs, per year):
–622 Mb/s channels from telco (real cost): 4.8 MEuro
–2.5 Gb/s channels from telco (estimate): 9.6 MEuro
–10 Gb/s channels from telco (estimate): 19.2 MEuro
–PIONIER costs (5200 km of fibers, 10GE): 55.0 MEuro (one-off)
–annual PIONIER maintenance costs: 2.1 MEuro
Return on investment in 3 years! (calculation made for 1 lambda used only)
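The three-year claim can be re-derived from these figures; a minimal sketch below compares cumulative leasing cost with the one-off build plus maintenance (straight-line, no discounting - a simplifying assumption, not from the deck):

```python
# Break-even: building PIONIER vs leasing 10 Gb/s telco channels (slide 14).
build_cost = 55.0   # MEuro, one-off (5200 km of fiber + 10GE equipment)
maintenance = 2.1   # MEuro per year
lease_10g = 19.2    # MEuro per year for equivalent leased channels

years = 0
while build_cost + maintenance * years > lease_10g * years:
    years += 1
print(f"break-even within {years} years")
# -> 4 whole years; fractionally 55 / (19.2 - 2.1) ~ 3.2 years,
#    consistent with the "return on investment in 3 years" claim
```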

15 PIONIER – e-Region
Two e-Regions already defined:
–Cottbus – Zielona Góra (D-PL)
–Ostrava – Bielsko-Biała (CZ-PL)
e-Region objectives:
1. Creation of a rational base and the possibility of integrated work between institutions across the border, as defined by e-Europe: (...) education, medicine, natural disasters, information bases, protection of the environment.
2. Enhancing the abilities of co-operation by developing a new generation of services and applications.
3. Promoting the region in Europe (as a micro-scale of the e-Europe concept).

16 PIONIER – "Porta Optica"
"PORTA OPTICA" - a distributed optical gateway to the eastern neighbours of Poland (project proposal)
A chance for close cooperation in scientific projects, by means of providing multichannel/multilambda Internet connections to the neighbouring countries
An easy way to extend GEANT to Eastern European countries

17 PIONIER – cooperation with neighbours. [Map: e-Region links to Germany and the Czech Republic; Porta Optica links to Slovakia, Ukraine, Belarus, Lithuania and Russia]

18 HPC network (5+3). [Diagram: PROGRESS (3 sites), HPC and IST projects, VLBI, ATLAS, other projects?]

19 CLUSTERIX - National CLUSTER of LInuX Systems
launched in November 2003; duration 30 months
realization divided into two stages:
–research and development: the first 18 months
–deployment: starting after the R&D stage and lasting 12 months
more than 50% funded by the consortium members
consortium: 12 universities and the Polish Academy of Sciences

20 CLUSTERIX goals
to develop mechanisms and tools that allow the deployment of a production Grid environment, whose basic infrastructure comprises local PC clusters based on 64-bit Linux machines, located in geographically distant independent centres and connected by the fast backbone network provided by the Polish Optical Network PIONIER
existing PC clusters, as well as new clusters with both 32- and 64-bit architecture, will be dynamically connected to this basic infrastructure
as a result, a distributed PC cluster of a new generation will be obtained, with a dynamically changing size, fully operational and integrated with the existing services offered by other projects related to the PIONIER program
results in the software infrastructure area will increase the portability and stability of the software, and the performance of services and computations in the Grid-type structure

21 CLUSTERIX - objectives
–development of software capable of managing clusters with dynamically changing configuration (changing number of nodes, users and available services); one of the most important factors is reducing the management overhead
–new quality of services and applications based on the IPv6 protocols
–production Grid infrastructure available to the Polish research community
–integration and use of the existing services delivered as the outcome of other projects (data warehouse, remote visualization, computational resources of KKO)
–taking into account local policies of infrastructure administration and management within independent domains
–integrated end-user/administrator interface
–providing the required security in a heterogeneous distributed system

22 CLUSTERIX: Pilot installation

23 Architecture

24 CLUSTERIX: Technologies
the software developed will be based on the Globus Toolkit v3, using the OGSA (Open Grid Services Architecture) concept
–this technology ensures software compatibility with other environments used for creating Grid systems, and makes the created services easier to reuse
–accepting OGSA as a standard will allow the services to co-operate with other meta-clusters and Grid systems
Open Source technology
–allows anybody to access the project source code, modify it and publish the changes
–makes the software more reliable and secure
–open software is easier to integrate with existing solutions, and helps other technologies built on Open Source software to develop
integration with existing software will be used extensively, e.g. the GridLab broker and the Virtual Users Account System

25 CLUSTERIX: R&D
–architecture design according to users' specific requirements
–data management
–procedures for attaching a local PC cluster of any architecture (see the sketch below)
–design and implementation of the task/resource management system
–user account and virtual organization management
–security mechanisms in a PC cluster
–network resources management
–utilization of the IPv6 protocol family
–monitoring of cluster nodes and distributed applications
–design of a user/administrator interface
–design of tools for automated installation/reconfiguration of all nodes within the entire cluster
–dynamic load balancing and checkpointing mechanisms
–end-user applications
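To make the "dynamically changing configuration" idea concrete, here is a toy sketch of the node registry such a management system might keep; all names and the placement policy are hypothetical illustrations, not CLUSTERIX code:

```python
# Toy registry for a Grid whose set of attached PC clusters changes at run time.
from dataclasses import dataclass, field

@dataclass
class ClusterRegistry:
    nodes: dict[str, int] = field(default_factory=dict)  # cluster name -> free CPU slots

    def attach(self, name: str, slots: int) -> None:
        """A local PC cluster joins the basic infrastructure."""
        self.nodes[name] = slots

    def detach(self, name: str) -> None:
        """A cluster leaves (local policy, maintenance, failure)."""
        self.nodes.pop(name, None)

    def pick(self, needed: int) -> str | None:
        """Least-loaded placement for a task needing `needed` slots."""
        fitting = [n for n, s in self.nodes.items() if s >= needed]
        return max(fitting, key=self.nodes.get, default=None)

reg = ClusterRegistry()
reg.attach("site-a", 16)
reg.attach("site-b", 64)
print(reg.pick(8))   # -> 'site-b'
reg.detach("site-b") # configuration changes; the registry simply shrinks
```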

26 High Performance Computing and Visualisation with the SGI Grid for Virtual Laboratory Applications Project No. 6 T11 0052 2002 C/05836

27 Project duration:
–R&D: December 2002 - November 2004
–deployment: 1 year
Partners:
–HPC centers: ACK CYFRONET AGH (Kraków), PSNC (Poznań), TASK (Gdańsk), WCSS (Wrocław), University of Łódź
–end user: IMWM (Warsaw), Institute of Bioorganic Chemistry PAS
–industry: SGI, ATM S.A.
Funds: KBN, SGI
[Map: PSNC, TASK, IMWM, WCSS, PŁ, CYFRONET]

28 Structure

29 Added value
Real remote access to the national cluster (... GRID):
–ASPs
–HPC/HPV
–laboratory instruments
–better usage of licences
–dedicated application servers
–better usage of HPC resources
–HTC
–Emergency Computing Site (IMWM)
–production Grid environment
–middleware we will work out

30 Virtual Laboratory - added value
virtual, remote (?) - VERY limited access
main reason: COSTS
main GOAL: to make these facilities accessible in a common way

31 The Goal
Remote usage of expensive and unique facilities; better utilisation
Joint venture and on-line co-operation of scientific teams
Shorter deadlines, faster work
eScience - closer
Equal chances
Tele-work, tele-science

32 Testbed infrastructure - more than remote access. [Diagram: pilot installation of NMR spectroscopy, optical network, HPC/HPV systems, data mining]

33 Remaining R&D activities
Building a nation-wide HPC/HPV infrastructure: connecting the existing infrastructure with the new testbed
–Dedicated Application Servers
–resource management
–data access optimisation (tape subsystems)
–access to scientific libraries
–checkpoint restart (kernel level, IA64 architecture)
–advanced visualization (distributed, remote visualization)
–programming environment supporting the end user: how to simplify the process of making parallel applications

34 PROGRESS (1)
Duration: December 2001 - May 2003 (R&D)
Budget: ~4.0 MEuro
Project partners:
–SUN Microsystems Poland
–PSNC IBCh Poznań
–Cyfronet AMM, Kraków
–Technical University Łódź
Co-funded by the State Committee for Scientific Research (KBN) and SUN Microsystems Poland

35 PROGRESS (2)
Deployment: June 2003 - ...
–Grid constructors
–computational application developers
–computing portal operators
Enabling access to the global grid through deployment of PROGRESS open source packages

36 PROGRESS (3)
Cluster of 80 processors
Networked storage of 1.3 TB
Software: ORACLE, HPC Cluster Tools, Sun ONE, Sun Grid Engine, Globus
[Map labels: Wrocław, Gdańsk]

37 PROGRESS GPE

38 http://progress.psnc.pl/ http://progress.psnc.pl/portal/ progress@psnc.pl

39 EU Projects: Progress and GridLab

40 What is CrossGrid?
5th FP, funded by the EU
Time frame: March 2002 - February 2005
Structure of the project:
–WP1 - CrossGrid Applications Development
–WP2 - Grid Application Programming Environment
–WP3 - New Grid Services and Tools
–WP4 - International Testbed Organisation
–WP5 - Project Management (including the Architecture Team and central dissemination/exploitation)

41 Partners
21 partners (2 industry partners) from 11 countries
The biggest testbed in Europe

42 Project structure
–WP1 - CrossGrid Applications Development
–WP2 - Grid Application Programming Environment
–WP3 - New Grid Services and Tools
–WP4 - International Testbed Organisation
–WP5 - Project Management

43 Middleware. [Diagram: Grid middleware connecting visualization, supercomputers and PC clusters, data storage, sensors and experiments, the Internet and networks, and desktop/mobile access; credit: Hoffmann, Reinefeld, Putzer]

44 Applications. [Diagram: surgery planning & visualisation, flooding control, MIS, HEP data analysis, weather & pollution modelling. HEP trigger chain: level 1 - special hardware, 40 MHz (40 TB/sec); level 2 - embedded processors, 75 kHz (75 GB/sec); level 3 - PCs, 5 kHz (5 GB/sec); 100 Hz (100 MB/sec) to data recording & offline analysis]
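The trigger figures above are mutually consistent: each level's data rate divided by its event rate implies the same event size. A quick check (the 1 MB/event value is derived here, not stated on the slide):

```python
# Implied event size at each HEP trigger level (rates from slide 44).
stages = {
    "level 1":   (40e12, 40e6),  # 40 TB/s at 40 MHz
    "level 2":   (75e9, 75e3),   # 75 GB/s at 75 kHz
    "level 3":   (5e9, 5e3),     # 5 GB/s at 5 kHz
    "recording": (100e6, 100),   # 100 MB/s at 100 Hz
}
for name, (byte_rate, event_rate) in stages.items():
    print(f"{name}: {byte_rate / event_rate / 1e6:.0f} MB/event")  # -> 1 MB each
```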

45 Migrating Desktop - what's the best way to 'travel'?
Roaming Access to Grids
Runs on Microsoft Windows and Linux
Take it anywhere, access anyhow
http://ras.man.poznan.pl

46 GridLab Enabling Applications on the Grid www.gridlab.org Jarek Nabrzyski, Project Coordinator naber@man.poznan.pl office@gridlab.org Poznan Supercomputing and Networking Center

47 GridLab Project
Funded by the EU (5+ M€), January 2002 - December 2004
Application and testbed oriented: Cactus Code, Triana Workflow, and all the other applications that want to be Grid-enabled
Main goal: to develop a Grid Application Toolkit (GAT) and a set of grid services and tools:
–resource management (GRMS)
–data management
–monitoring
–adaptive components
–mobile user support
–security services
–portals
... and test them on a real testbed with real applications

48 GridLab Members
–PSNC (Poznan) - coordination
–AEI (Potsdam)
–ZIB (Berlin)
–Univ. of Lecce
–Cardiff University
–Vrije Univ. (Amsterdam)
–SZTAKI (Budapest)
–Masaryk Univ. (Brno)
–NTUA (Athens)
–Sun Microsystems
–HP
–ANL (Chicago, I. Foster)
–ISI (LA, C. Kesselman)
–UoWisconsin (M. Livny)
Collaborating with:
–users! EU Astrophysics Network, DFN TiKSL/GriKSL, NSF ASC Project
–other Grid projects: Globus, Condor, GrADS, PROGRESS, GriPhyn/iVDGL, CrossGrid and all the other European Grid Projects (GRIDSTART), GWEN, HPC-Europa

49 GridLab Aims
Get computational scientists using the "Grid" and Grid services for real, everyday, production work (AEI relativists, EU Network, gravitational-wave data analysis, the Cactus user community, and all the other potential grid apps)
Make it easier for applications to make flexible, efficient, robust use of the resources available to their virtual organizations
Dream up, prototype, and test new application scenarios which make adaptive, dynamic, wild, and futuristic uses of resources

50 What GridLab isn't
We are not developing low-level Grid infrastructure
We do not want to repeat work which has already been done (we want to incorporate and assimilate it...):
–Globus APIs
–OGSA
–ASC Portal (GridSphere/Orbiter)
–GPDK
–GridPort
–DataGrid
–GriPhyn
–...

51 ...need to make it easier to use. [Diagram: the application asks "Is there a better resource I could be using?" and calls GAT_FindResource( ); the GAT mediates between the application and the Grid]
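The slide's point is that the application sees one small call while the toolkit hides the Grid behind it. A toy mock of that pattern follows; only the GAT_FindResource name comes from the slide - the resource list and selection rule are invented for illustration:

```python
# Toy illustration of the GAT call pattern from slide 51 (not the real GAT API).
RESOURCES = [  # stand-in for whatever the toolkit would actually discover
    {"name": "local-laptop", "free_cpus": 2},
    {"name": "remote-cluster", "free_cpus": 128},
]

def GAT_FindResource(min_cpus: int) -> str | None:
    """Return a better resource if one exists; the app never talks to the Grid."""
    fitting = [r for r in RESOURCES if r["free_cpus"] >= min_cpus]
    best = max(fitting, key=lambda r: r["free_cpus"], default=None)
    return best["name"] if best else None

# Application code stays one line, whatever sits behind it:
print(GAT_FindResource(min_cpus=32))  # -> 'remote-cluster'
```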

52 GridLab Architecture

53 The same application... [Diagram: the identical Application + GAT stack running unchanged on a laptop (no network!), on the Grid (firewall issues!), and on a supercomputer]

54 More info / summary
www.GridLab.org
gridlab@gridlab.org, news@gridlab.org, office@gridlab.org
Bring your application and test it with the GAT and our services.

