
1 HENP Networks and Grids for Global Science
Harvey B. Newman, California Institute of Technology
3rd International Data Grid Workshop, Daegu, Korea, August 26, 2004

2 Challenges for Global HENP Experiments
LHC Example (2007): physicists from 250+ institutes in 60+ countries
BaBar/D0 Example: 500+ physicists, 100+ institutes, 35+ countries
Major Challenges (Shared with Other Fields):
Worldwide communication and collaboration
Managing globally distributed computing & data resources
Cooperative software development and data analysis

3 Large Hadron Collider (LHC) CERN, Geneva: 2007 Start
pp s =14 TeV L=1034 cm-2 s-1 27 km Tunnel in Switzerland & France CMS TOTEM pp, general purpose; HI First Beams: Summer 2007 Physics Runs: from Fall 2007 ALICE : HI LHCb: B-physics Atlas Higgs, SUSY, QG Plasma, CP Violation, … the Unexpected

4 Challenges of Next Generation Science in the Information Age
Petabytes of complex data explored and analyzed by 1000s of globally dispersed scientists, in hundreds of teams.
Flagship Applications:
High Energy & Nuclear Physics, AstroPhysics Sky Surveys: TByte to PByte “block” transfers at Gbps
Fusion Energy: time-critical burst-data distribution; distributed plasma simulations, visualization and analysis; preparations for the Fusion Energy Experiment
eVLBI: many real-time data streams at Gbps
BioInformatics, Clinical Imaging: GByte images on demand
Provide results with rapid turnaround, coordinating large but limited computing and data handling resources, over networks of varying capability in different world regions.
Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, quantifiable high performance.

5 LHC Data Grid Hierarchy: Developed at Caltech
CERN/Outside Resource Ratio ~1:2; Tier0 : (Σ Tier1) : (Σ Tier2) ~ 1:1:1
Tier 0+1: the experiment online system feeds the CERN Center (PBs of disk; tape robot); data rates of ~PByte/sec at the online system and ~MBytes/sec into the Tier 0+1 center.
Tier 1: national centers (IN2P3, RAL, INFN, FNAL) linked at ~10 Gbps.
Tier 2: regional centers, also at ~10 Gbps.
Tier 3: institutes, with physics data caches at 1 to 10 Gbps.
Tier 4: workstations.
Tens of Petabytes at the start; an Exabyte ~5-7 years later.
Emerging Vision: A Richly Structured, Global Dynamic System

6 ICFA and Global Networks for Collaborative Science
National and International Networks, with sufficient (rapidly increasing) capacity and seamless end-to-end capability, are essential for:
The daily conduct of collaborative work in both experiment and theory
Experiment development & construction on a global scale
Grid systems supporting analysis involving physicists in all world regions
The conception, design and implementation of next generation facilities as “global networks”
“Collaborations on this scale would never have been attempted, if they could not rely on excellent networks”

7 History of Bandwidth Usage – One Large Network; One Large Research Site
ESnet Accepted Traffic 1/90 – 1/04: exponential growth since ’92; the annual growth rate increased from 1.7X to 2.0X per year in the last 5 years.
SLAC traffic is ~300 Mbps, approaching the ESnet site limit; growth comes in steps of ~10X every 4 years; projected ~2 Terabits/s by ~2014.

8 Int’l Networks BW on Major Links for HENP: US-CERN Example
Rate of Progress >> Moore’s Law (US-CERN Example)
9.6 kbps analog (1985)
kbps digital (…) [X 7 – 27]
1.5 Mbps shared (1990-3; IBM) [X 160]
2-4 Mbps (…) [X …]
12-20 Mbps (…) [X 1.2k-2k]
Mbps (2001-2) [X 16k – 32k]
622 Mbps (2002-3) [X 65k]
2.5 Gbps (2003-4) [X 250k]
10 Gbps (2005) [X 1M]
4x10 Gbps or 40 Gbps (2007-8) [X 4M]
A factor of ~1M bandwidth improvement over 1985-2005 (a factor of ~5k during …).
A prime enabler of major HENP programs. HENP has become a leading applications driver, and also a co-developer of global networks.
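The headline factors can be checked directly from the link speeds listed above; a small back-of-the-envelope sketch in Python, using only the speeds quoted on the slide:

```python
# Back-of-the-envelope check of the US-CERN bandwidth growth factors
# quoted above (link speeds in bits per second).
milestones = {
    "1985 analog":       9.6e3,
    "1990-93 shared":    1.5e6,
    "2002-03":           622e6,
    "2003-04":           2.5e9,
    "2005":              10e9,
    "2007-08 (planned)": 40e9,
}
baseline = milestones["1985 analog"]
for label, bps in milestones.items():
    print(f"{label:>18}: x{bps / baseline:>12,.0f} over 1985")
# 10 Gbps / 9.6 kbps is ~1.04 million -- the "factor of ~1M" above;
# 40 Gbps gives the projected factor of ~4M for 2007-08.
```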

9 Internet Growth in the World At Large
Amsterdam Internet Exchange Point example: 5-minute maximum ~30 Gbps, ~20 Gbps average.
Some annual growth spurts, typically in summer-fall.
The rate of HENP network usage growth (~100% per year) is similar to the world at large.

10 Internet 2 Land Speed Record (LSR)
Judged on the product of transfer speed and distance end-to-end, using standard Internet (TCP/IP) protocols.
IPv6 record: 4.0 Gbps between Geneva and Phoenix (SC2003).
IPv4 multi-stream record with Windows & Linux: 6.6 Gbps between Caltech and CERN (16 kkm; “Grand Tour d’Abilene”), June 2004; exceeded 100 Petabit-m/sec.
Single stream: 7.5 Gbps X 16 kkm with Linux, achieved in July.
Concentrate now on reliable Terabyte-scale file transfers.
Note system issues: CPU, PCI-X bus, NIC, I/O controllers, drivers.
(Figures: LSR history, IPv4 single stream, in Petabit-meters (10^15 bit*meter); monitoring of the Abilene traffic in LA during the June 2004 record.)
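Since the record is judged on speed times distance, the quoted marks convert to petabit-meters per second with one line of arithmetic; a small sketch using the 16,000 km distance quoted above:

```python
# Internet2 Land Speed Record metric: throughput x end-to-end distance.
def lsr_petabit_meters_per_s(gbps: float, km: float) -> float:
    return (gbps * 1e9) * (km * 1e3) / 1e15

print(lsr_petabit_meters_per_s(6.6, 16_000))  # multi-stream, June 2004: ~105.6 Pb·m/s
print(lsr_petabit_meters_per_s(7.5, 16_000))  # single-stream run:       ~120.0 Pb·m/s
# Both exceed the 100 Petabit-m/sec mark mentioned above.
```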

11 HENP Bandwidth Roadmap for Major Links (in Gbps)
Continuing Trend: ~1000 Times Bandwidth Growth Per Decade; Keeping Pace with Network BW Usage (ESNet, SURFNet etc.)

12 Evolving Quantitative Science Requirements for Networks (DOE High Perf. Network Workshop)
Science area: end-to-end throughput today / in 5 years / in 5-10 years / remarks:
High Energy Physics: 0.5 Gb/s / 100 Gb/s / 1000 Gb/s / high bulk throughput
Climate (Data & Computation): … Gb/s / … / N x 1000 Gb/s / …
SNS NanoScience: not yet started / 1 Gb/s / 1000 Gb/s + QoS for control channel / remote control and time-critical throughput
Fusion Energy: 0.066 Gb/s (500 MB/s burst) / 0.198 Gb/s (500 MB per 20 sec. burst) / … / time-critical throughput
Astrophysics: 0.013 Gb/s (1 TByte/week) / N*N multicast / … / computational steering and collaborations
Genomics Data & Computation: 0.091 Gb/s (1 TByte/day) / 100s of users / … / high throughput and steering

13 HENP Lambda Grids: Fibers for Physics
Problem: extract “small” data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores.
Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time.
Example: take 800 seconds to complete the transaction. A 1 TB subset then needs ~10 Gbps of net throughput, and larger subsets scale proportionally, up to the capacity of a fiber today (see the sketch below).
Summary: providing switching of 10 Gbps wavelengths within ~2-4 years, and Terabit switching within 5-8 years, would enable “Petascale Grids with Terabyte transactions”, to fully realize the discovery potential of major HENP programs, as well as other data-intensive research.
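The 800-second example fixes the required rate as a simple function of transaction size; a minimal sketch of that arithmetic (1 TB taken as 10^12 bytes):

```python
# Net throughput needed to move one "small" data subset in 800 seconds.
def required_gbps(terabytes: float, seconds: float = 800.0) -> float:
    return terabytes * 1e12 * 8 / seconds / 1e9

for tb in (1, 10, 100):
    print(f"{tb:>4} TB in 800 s -> {required_gbps(tb):6.0f} Gbps")
# 1 TB -> 10 Gbps, 10 TB -> 100 Gbps, 100 TB -> 1000 Gbps
# (the last being roughly the capacity of a fiber today, as noted above).
```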

14 ICFA Standing Committee on Interregional Connectivity (SCIC)
Created by ICFA in July 1998 in Vancouver.
CHARGE: make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe.
As part of the process of developing these recommendations, the committee should:
Monitor traffic
Keep track of technology developments
Periodically review forecasts of future bandwidth needs, and
Provide early warning of potential problems
Representatives: major labs, ECFA, ACFA, North and South American physics community.

15 SCIC in 2003-2004 http://cern.ch/icfa-scic
Three 2004 reports, presented to ICFA in February:
Main Report: “Networking for HENP” [H. Newman et al.]; includes brief updates on Monitoring, the Digital Divide and Advanced Technologies [*], and a World Network Overview (with 27 Appendices): status and plans for the next few years of national & regional networks, and optical network initiatives
Monitoring Working Group Report [L. Cottrell]
Digital Divide in Russia [V. Ilyin]
August 2004 update reports at the SCIC Web site: see Asia Pacific, Latin America, GLORIAD (US-Ru-Ko-China), Brazil, Korea, etc.

16 SCIC Main Conclusion for 2003 Setting the Tone for 2004
The disparity among regions in HENP could increase even more sharply, as we learn to use advanced networks effectively and develop dynamic Grid systems in the “most favored” regions.
We must take action, and work to close the Digital Divide, to make physicists from all world regions full partners in their experiments and in the process of discovery.
This is essential for the health of our global experimental collaborations, our plans for future projects, and our field.

17 ICFA Report: Networks for HENP General Conclusions (2)
Reliable high end-to-end performance of networked applications such as large file transfers and Data Grids is required. Achieving this requires:
End-to-end monitoring extending to all regions serving our community. A coherent approach to monitoring that allows physicists throughout our community to extract clear information is required.
Upgrading campus infrastructures. These are still not designed to support Gbps data transfers in most HEP centers. One reason for under-utilization of national and int’l backbones is the lack of bandwidth to end-user groups on the campus.
Removing local, last mile, and nat’l and int’l bottlenecks end-to-end, whether technical or political in origin. While national and international backbones have reached 2.5 to 10 Gbps speeds in many countries, the bandwidths across borders, the countryside or the city may be much less. This problem is very widespread in our community, with examples stretching from the Asia Pacific to Latin America to the Northeastern U.S. Root causes vary, from lack of local infrastructure to unfavorable pricing policies.

18 ICFA Report (2/2004) Update: Main Trends Continue, Some Accelerate
The current generation of Gbps network backbones and major int’l links arrived in the last 2-3 years [US + Europe + Japan; now Korea and China]. Capability grew by 4 to hundreds of times: much faster than Moore’s Law.
Proliferation of 10G links across the Atlantic now; use of multiple 10G links (e.g. US-CERN) along major paths will begin by Fall 2005. This is a direct result of falling network prices: $0.5 – 1M per year for 10G.
The ability to fully use long 10G paths with TCP continues to advance: 7.5 Gbps X 16 kkm (August 2004).
Technological progress is driving equipment costs in end-systems lower: “commoditization” of Gbit Ethernet (GbE) is ~complete ($20-50 per port); 10 GbE commoditization (e.g. < $2K per NIC with TOE) is underway.
Some regions (US, Europe) are moving to owned or leased dark fiber.
Emergence of the “Hybrid” network model: GNEW2004; UltraLight, GLIF.
Grid-based analysis demands end-to-end high performance & management.
The rapid rate of progress is confined mostly to the US, Europe, Japan and Korea, as well as the major transatlantic routes; this threatens to cause the Digital Divide to become a Chasm.

19 Work on the Digital Divide: Several Perspectives
Work on policies and/or pricing: pk, in, br, cn, SE Europe, …; find ways to work with vendors, NRENs, and/or governments; exploit model cases, e.g. Poland, Slovakia, Czech Republic.
Inter-regional projects: GLORIAD, the Russia-China-US optical ring; South America: CHEPREO (US-Brazil) and the EU CLARA project; Virtual SILK Highway Project (DESY): FSU satellite links.
Workshops and tutorials/training sessions: for example, the Digital Divide and HEPGrid Workshop, UERJ Rio, February 2004; next in Daegu, May 2005.
Help with modernizing the infrastructure: design, commissioning, development.
Tools for effective use: monitoring, collaboration.
Participate in standards development; open tools; advanced TCP stacks; Grid systems.

20 Grid and Network Workshop at CERN March 15-16, 2004
CONCLUDING STATEMENT: "Following the 1st International Grid Networking Workshop (GNEW2004) that was held at CERN and co-organized by CERN/DataTAG, DANTE, ESnet, Internet2 & TERENA, there is a wide consensus that hybrid network services capable of offering both packet- and circuit/lambda-switching, as well as highly advanced performance measurements and a new generation of distributed system software, will be required in order to support emerging data-intensive Grid applications, such as High Energy Physics, Astrophysics, Climate and Supernova modeling, Genomics and Proteomics, requiring Gbps and up over wide areas."
WORKSHOP GOALS:
Share and challenge the lessons learned by nat’l and international projects in the past three years;
Share the current state of network engineering and infrastructure and its likely evolution in the near future;
Examine our understanding of the networking needs of Grid applications (e.g., see the ICFA-SCIC reports);
Develop a vision of how network engineering and infrastructure will (or should) support Grid computing needs in the next three years.

21 National Lambda Rail (NLR)
Transition beginning now to optical, multi-wavelength, community-owned or leased “dark fiber” networks for R&E.
National Lambda Rail (NLR) is coming up now, spanning 18 US states (map shows terminal, regen or OADM sites and fiber routes, with links toward nl, de, pl, cz, jp).
Initially 4 10G wavelengths; northern route operation by 4Q04; to 40 10G waves in the future.
Internet2 HOPI Initiative (w/HEP).

22 JGN2: Japan Gigabit Network (4/04 – 3/08) 20 Gbps Backbone, 6 Optical Cross-Connects
Kanazawa Sendai Sapporo Nagano Kochi Nagoya Fukuoka Okinawa Okayama <1G>  ・Teleport Okayama (Okayama)  ・Hiroshima University (Higashi Hiroshima) <100M>  ・Tottori University of Environmental Studies (Tottori)  ・Techno Ark Shimane (Matsue)  ・New Media Plaza Yamaguchi (Yamaguchi) <10G>  ・Kyoto University (Kyoto)  ・Osaka University (Ibaraki)  ・NICT Kansai Advanced Research Center (Kobe)  ・Lake Biwa Data Highway AP * (Ohtsu)  ・Nara Prefectural Institute of Industrial Technology (Nara)  ・Wakayama University (Wakayama)  ・Hyogo Prefecture Nishiharima Technopolis (Kamigori-cho, Hyogo Prefecture)  ・Kyushu University (Fukuoka)  ・NetCom Saga (Saga)  ・Nagasaki University (Nagasaki)  ・Kumamoto Prefectural Office (Kumamoto)  ・Toyonokuni Hyper Network AP *(Oita)  ・Miyazaki University (Miyazaki)  ・Kagoshima University (Kagoshima)  ・Kagawa Prefecture Industry Promotion Center (Takamatsu)  ・Tokushima University (Tokushima)  ・Ehime University (Matsuyama)  ・Kochi University of Technology (Tosayamada-cho, Kochi Prefecture)  ・Nagoya University (Nagoya)  ・University of Shizuoka (Shizuoka)  ・Softopia Japan (Ogaki, Gifu Prefecture)  ・Mie Prefectural College of Nursing (Tsu)  ・Ishikawa Hi-tech Exchange Center (Tatsunokuchi-machi, Ishikawa Prefecture)  ・Toyama Institute of Information Systems (Toyama)  ・Fukui Prefecture Data Super Highway AP * (Fukui)    ・Niigata University (Niigata)  ・Matsumoto Information Creation Center (Matsumoto, Nagano Prefecture)  ・Tokyo University (Bunkyo Ward, Tokyo)  ・NICT Kashima Space Research Center (Kashima, Ibaraki Prefecture) <1G>  ・Yokosuka Telecom Research Park (Yokosuka, Kanagawa Prefecture) <100M>  ・Utsunomiya University (Utsunomiya)  ・Gunma Industrial Technology Center (Maebashi)  ・Reitaku University (Kashiwa, Chiba Prefecture)  ・NICT Honjo Information and Communications Open Laboratory (Honjo, Saitama Prefecture)  ・Yamanashi Prefecture Open R&D Center (Nakakoma-gun, Yamanashi Prefecture)  ・Tohoku University (Sendai)  ・NICT Iwate IT Open Laboratory (Takizawa-mura, Iwate Prefecture)  ・Hachinohe Institute of Technology (Hachinohe, Aomori Prefecture)  ・Akita Regional IX * (Akita)  ・Keio University Tsuruoka Campus (Tsuruoka, Yamagata Prefecture)  ・Aizu University (Aizu Wakamatsu)  ・Hokkaido Regional Network Association AP * (Sapporo) NICT Tsukuba Research Center 20Gbps 10Gbps 1Gbps Optical testbeds Core network nodes Access points Osaka Otemachi USA NICT Keihannna Human Info-Communications Research Center NICT Kita Kyushu IT Open Laboratory NICT Koganei Headquarters [Legends ] *IX:Internet eXchange AP:Access Point

23 APAN-KR : KREONET/KREONet2 II

24 UltraLight Collaboration: http://ultralight.caltech.edu
Caltech, UF, FIU, UMich, SLAC, FNAL, MIT/Haystack, CERN, UERJ (Rio), NLR, CENIC, UCAID, TransLight, UKLight, NetherLight, UvA, UCLondon, KEK, Taiwan, Cisco, Level(3).
Integrated hybrid experimental network, leveraging transatlantic R&D network partnerships; packet-switched + dynamic optical paths.
10 GbE across the US and the Atlantic: NLR, DataTAG, TransLight, NetherLight, UKLight, etc.; extensions to Japan, Taiwan, Brazil.
End-to-end monitoring; real-time tracking and optimization; dynamic bandwidth provisioning.
Agent-based services spanning all layers of the system, from the optical cross-connects to the applications.

25 GLIF: Global Lambda Integrated Facility
“GLIF is a World Scale Lambda-based Lab for Application and Middleware development, where Grid applications ride on dynamically configured networks based on optical wavelengths ... coexisting with more traditional packet-switched network traffic.”
4th GLIF Workshop: Nottingham, UK, September 2004.
10 Gbps wavelengths for R&E network development are proliferating, across continents and oceans.

26 PROGRESS in SE Europe (Sk, Pl, Cz, Hu, …)
1660 km of dark fiber; CWDM links up to 112 km, at 1 to 4 Gbps (GbE).
August 2002: first NREN in Europe to establish an int’l GbE dark fiber link, to Austria; April 2003: to the Czech Republic.
Planning a 10 Gbps backbone; dark fiber link to Poland this year.

27

28 Dark Fiber in Eastern Europe Poland: PIONIER Network
2650 km of fiber connecting 16 MANs; 5200 km and 21 MANs by 2005.
Supports computational Grids, domain-specific Grids, digital libraries, interactive TV.
Additional fibers for e-Regional initiatives.

29 The Advantage of Dark Fiber CESNET Case Study (Czech Republic)
2513 km of leased fibers (since 1999).
Case study result, wavelength service vs. fiber lease: cost savings of …% over 4 years for long 2.5G or 10G links.

30 ICFA/SCIC Network Monitoring
Prepared by Les Cottrell, SLAC, for ICFA

31 PingER: World View from SLAC
Now monitoring 650 sites in 115 countries.
In the last 9 months: several sites in Russia (GLORIAD); many hosts in Africa (27 of 54 countries); monitoring sites in Pakistan and Brazil.
C. Asia, Russia, SE Europe, L. America, M. East, China: … years behind; India, Africa: 7 years behind. The view from CERN confirms this (CERN data only goes back to Aug-01): S.E. Europe & Russia are catching up, while India & Africa are falling behind.
PingER is arguably the most extensive set of measurements of the end-to-end performance of the Internet, going back almost ten years. Measurements are available from over 30 sites in 13 countries to sites in over 100 countries. We will use the PingER results to: demonstrate how Internet performance to the regions of the world has evolved over the last 9 years; identify regions that have poor connectivity, how far they are behind the developed world, and whether they are catching up or falling further behind; and illustrate the correlation between the UN Technology Achievement Index and Internet performance.
Ghana, Nigeria and Uganda are all satellite links with … ms RTTs. The losses to Ghana & Nigeria are 8-12%, while to Uganda they are 1-3%. The routes are different: SLAC to Ghana goes via ESnet-Worldcom-UUNET, Nigeria via CalREN-Qwest-Telianet-New Skies satellite, Uganda via ESnet-Level3-Intelsat. For both Ghana and Nigeria there are no losses (for 100 pings) until the last hop, where over 40 of 100 packets were lost. For Uganda the losses (3 in 100 packets) also occur at the last hop.
Important for policy makers.
(Worksheets: for trends \\Zwinsan2\c\cottrell\iepm\esnet-to-all-longterm.xls; for Africa \\Zwinsan2\c\cottrell\iepm\africa.xls)
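For context, the kind of measurement PingER automates is a periodic ICMP echo probe from a monitoring site to each remote host, recording packet loss and round-trip time. A minimal sketch of one such probe, assuming a Unix-like `ping` on the PATH; the hostnames are just placeholders:

```python
import re
import subprocess

def probe(host: str, count: int = 10) -> dict:
    """Send `count` ICMP echo requests and parse loss and average RTT
    from the ping summary (a toy PingER-style measurement)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    loss = re.search(r"([\d.]+)% packet loss", out)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", out)   # min/avg/max summary line
    return {"host": host,
            "loss_pct": float(loss.group(1)) if loss else None,
            "avg_rtt_ms": float(rtt.group(1)) if rtt else None}

# Placeholder hosts; a real deployment loops over the full monitored-site list.
for h in ("www.cern.ch", "www.slac.stanford.edu"):
    print(probe(h))
```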

32 Research Networking in Latin America: August 2004
The only countries with research network connectivity now in Latin America: Argentina, Brazil, Chile, Mexico, Venezuela. AmPath provided connectivity for some South American countries; a new Sao Paolo-Miami link at 622 Mbps starts this month.
New: CLARA (funded by the EU), a regional network connecting 19 countries: Argentina, Bolivia, Brasil, Chile, Colombia, Costa Rica, Cuba, Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Uruguay, Venezuela.
155 Mbps backbone with 10-45 Mbps spurs; 4 Mbps satellite to Cuba; 622 Mbps to Europe.
Also NSF proposals to connect at 2.5G to the US.

33 HEPGRID (CMS) in Brazil
HEPGRID-CMS/BRAZIL is a project to build a Grid that, at the regional level, will include CBPF, UFRJ, UFRGS, UFBA, UERJ & UNESP, and at the international level will be integrated with the CMS Grid based at CERN; focal points include iVDGL/Grid3 and bilateral projects with the Caltech group.
UERJ Regional Tier2 Center: T2 growing to T1, 100 growing to 500 nodes; plus T2s to 100 nodes.
(Diagram of the Brazilian HEPGRID hierarchy: online systems and T0+T1 at CERN; T1s in France, Germany, Italy, the USA and Brazil; a 622 Mbps link; gigabit links to T2/T3 sites UNESP/USP (SPRACE, working), UFRGS, CBPF, UERJ, UFBA, UFRJ; individual machines as T4.)

34 Latin America Science Areas Interested in Improving Connectivity ( by Country)
Subjects (interest surveyed by country, among Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador and Mexico): Astrophysics, e-VLBI, High Energy Physics, Geosciences, Marine sciences, Health and Biomedical applications, Environmental studies.
Networks and Grids: the potential to spark a new era of science in the region.

35 Asia Pacific Academic Network Connectivity
APAN Status, July 2004.
Connectivity to the US from JP, KR, AU is advancing rapidly; progress within the region, and to Europe, is much slower.
(The map labels each link with its current and planned 2004 capacity; values range from about 1.5 Mbps within South and Southeast Asia to 20.9 Gbps on JP-US links.)
Better North/South linkages within Asia are needed: a JP-SG link at 155 Mbps in 2005 is proposed to NSF by CIREN; a JP-TH upgrade from 2 Mbps to 45 Mbps in 2004 is being studied; CIREN is studying an extension to India.

36 APAN Link Information
Countries / Network / Bandwidth (Mbps) / AUP or remark:
AU-US: AARNet, 310, to 2 x 10 Gbps soon; R&E + Commodity
AU-US (PAIX): 622
CN-HK: CERNET, CSTNET, 155; R&E
CN-JP: 45; native IPv6
CN-US: …
HK-US: HARNET, …
HK-TW: HARNET/TANET/ASNET, 100
IN-US/UK: ERNET, 16
JP-ASIA: UDL, 9
JP-ID: AI3 (ITB), 0.5/1.5
JP-KR: APII, 2 Gbps
JP-LA: AI3 (NUOL), 0.128/0.128
JP-MY: AI3 (USM), 1.5/0.5
JP-PH: AI3 (ASTI), MAFFIN, 6; Research
JP-SG: AI3 (TP), …
JP-TH: AI3 (AIT) (service interrupted); SINET (ThaiSarn), 2
JP-US: TransPac, 5 Gbps, to 2x10 Gbps soon

37 APAN Link Information (continued)
Countries / Network / Bandwidth (Mbps) / AUP or remark:
(JP)-US-EU: SINET, 155; R&E / no transit
JP-US: 5 Gbps; IEEAF 10 Gbps R&E wave service; 622 R&E Japan-Hawaii
JP-VN: AI3 (IOIT), 1.5/0.5
KR-FR: KOREN/RENATER, 34; Research (TEIN)
KR-SG: APII, 8
KR-US: KOREN/KREONet2, 1.2 Gbps
LK-JP: LEARN, 2.5
MY-SG: NRG/SICU, 2; Experiment (down)
SG-US: SingaREN, 90
TH-US: Uninet, …
TW-HK: ASNET/TANET/TWAREN, …
TW-JP: ASNET/TANET, …
TW-SG: ASNET/SingAREN, …
TW-US: 6.6 Gbps
(TW)-US-NL: 2.5 Gbps

38 APAN Recommendations (at July 2004 Meeting in CAIRNS, Au)
Central issues for APAN this decade:
Stronger linkages between applications and infrastructure: neither can exist independently.
Stronger application and infrastructure linkages among APAN members.
Continuing focus on APAN as an organization that represents infrastructure interests in Asia.
Closer connection between APAN, the infrastructure & applications organization, and regional political organizations (e.g. APEC, ASEAN).
New issues demand attention:
Application measurement, particularly end-to-end network performance measurement, is increasingly critical (deterministic networking).
Security must now be a consideration for every application and every network.

39 KR-US/CA Transpacific connection
Participation in global-scale lambda networking: two STM-4 circuits (1.2G), KR-CA-US.
Global lambda networking: North America, Europe, Asia Pacific, etc.
(Diagram: KREONET/SuperSIReN to CA*Net4 to StarLight Chicago via STM-4 x 2; APII-testbed/KREONet2 to PacWave Seattle.)

40 New Record!!! 916 Mbps from CHEP to Caltech (22/06/’04)
Subject: UDP test on KOREN-TransPAC-Caltech; Date: Tue, 22 Jun 2004; From: "Kihwan Kwon".
Command: ./iperf -c socrates.cacr.caltech.edu -u -b 1000m
Client connecting to socrates.cacr.caltech.edu, UDP port 5001; sending 1470-byte datagrams; UDP buffer size: 64.0 KByte (default).
[ ID] Interval / Transfer / Bandwidth: [ 5] … sec, … GBytes, … Mbits/sec.
Route: KNU/Korea, KOREN, TransPAC (G/H-Japan), USA; max 916 Mbps.

41 Aug. 8, 2004: P.K. Young, Korean IST Advisor to the President, Announces Korea Joining GLORIAD
Global Ring Network for Advanced Applications Development.
OC3 circuits Moscow-Chicago-Beijing since January 2004; OC3 circuit Moscow-Beijing July 2004 (completes the ring).
Korea (KISTI) joining the US, Russia and China as a full partner in GLORIAD.
Plans developing for a Central Asian extension (with the Kyrgyz government).
Rapid traffic growth, with the heaviest US use from DOE (FermiLab), NASA, NOAA, NIH and universities (UMD, IU, UCB, UNC, UMN, PSU, Harvard, Stanford, Washington, Oregon, 250+ others).
TEIN gradually to 10G, connected to GLORIAD; the Asia Pacific Information Infrastructure (1G) will be a backup net to GLORIAD.
> 5 TBytes now transferred monthly via GLORIAD to the US, Russia, China.
GLORIAD 5-year proposal pending (with US NSF) for expansion: 2.5G Moscow-Amsterdam-Chicago-Seattle-Hong Kong-Pusan-Beijing circuits in early 2005; a 10G ring around the northern hemisphere in 2007; multiple-wavelength service in 2009, providing hybrid circuit-switched (primarily Ethernet) and routed services.

42 Internet in China (J.P.Wu APAN July 2004)
Internet users in China: from 68 million to 78 million within 6 months.
IP addresses: 32M (1 A + 233 B + 146 C).
Backbone: 2.5-10G DWDM + router. International links: 20G. Exchange points: >30G (BJ, SH, GZ).
Last miles: Ethernet, WLAN, ADSL, CTV, CDMA, ISDN, GPRS, dial-up. IPv6 is needed.
Users by access type: total 68M; wireline 23.4M; dial-up 45.0M; ISDN 4.9M; broadband 9.8M.

43 China: CERNET Update
1995: 64K nationwide backbone connecting 8 cities, 100 universities.
1998: 2M nationwide backbone connecting 20 cities, 300 universities.
2000: own dark fiber crossing 30+ major cities and 30,000 kilometers.
2001: CERNET DWDM/SDH network finished.
2001: 2.5G/155M backbone connecting 36 cities, 800 universities.
2003: … universities and institutes, over 15 million users.

44 CERNET2 and Key Technologies
CERNET2: the next-generation education and research network in China.
CERNET2 backbone connecting GigaPOPs at 2.5G-10 Gbps (an Internet2-like model); connecting 200 universities and 100+ research institutes at 1 Gbps-10 Gbps.
Native IPv6 and lambda networking.
Support/deployment of the following technologies: end-to-end performance monitoring; middleware and advanced applications; multicast.

45 M. Jensen and P. Hamilton Infrastructure Report, March 2004
AFRICA: Key Trends (M. Jensen and P. Hamilton Infrastructure Report, March 2004)
Growth in traffic and lack of infrastructure; predominance of satellite, but these satellites are heavily subscribed.
Int’l links: only ~1% of traffic on links is for Internet connections; most Internet traffic (for ~80% of countries) goes via satellite. A flourishing grey market for Internet & VOIP traffic uses VSAT dishes.
Many regional fiber projects are in the “planning phase” (some languished in the past); only links from South Africa to Namibia and Botswana are done so far.
Int’l fiber project: the SAT-3/WASC/SAFE cable from South Africa to Portugal along the west coast of Africa, supplied by Alcatel to a worldwide consortium of 35 carriers; 40 Gbps by mid-2003, heavily subscribed; ultimate capacity 120 Gbps. Extension to the interior is mostly by satellite: <1 Mbps to ~100 Mbps typical.
Note: World Conference on Physics and Sustainable Development, 10/31 – 11/2/05 in Durban, South Africa, part of the World Year of Physics. Sponsors: UNESCO, ICTP, IUPAP, APS, SAIP.

46 AFRICA: Nectar Net Initiative
Growing need to connect academic researchers, medical researchers & practitioners to many sites in Africa. Examples:
CDC & NIH: Global AIDS Project, Dept. of Parasitic Diseases, Nat’l Library of Medicine (Ghana, Nigeria)
Gates $50M HIV/AIDS Center in Botswana; project coordination at Harvard
Africa Monsoon (AMMA) Project, Dakar site [cf. East US hurricanes]
US Geological Survey: Global Spatial Data Infrastructure
Distance learning: Emory-Ibadan (Nigeria); Research Channel content
But Africa is hard: 11M sq. miles, 600M people, 54 countries, little telecommunications infrastructure.
Approach: use the SAT-3/WASC cable (to Portugal), GEANT across Europe, the Amsterdam-NY link across the Atlantic, then peer with R&E networks such as Abilene in NYC. Cable landings in 8 West African countries and South Africa. Pragmatic approach to reach end points: VSAT satellite, ADSL, microwave, etc.
W. Matthews, Georgia Tech

47 Sample Bandwidth Costs for African Universities
Bandwidth prices in Africa vary dramatically, and are in general many times what they could be if universities purchased in volume.
Anyone working in ICT in Africa will say cost is the main factor.
Sample size of 26 universities; the average cost shown is for VSAT service (quality, CIR, Rx, Tx not distinguished).
Roy Steiner, Internet2 Workshop

48 Grid2003: An Operational Production Grid, Since October 2003
27 sites (U.S., Korea); … CPUs; … concurrent jobs.
Trillium: PPDG, GriPhyN, iVDGL + Korea.
Prelude to the Open Science Grid.

49 HENP Data Grids, and Now Services-Oriented Grids
The original Computational and Data Grid concepts are largely stateless, open systems: analogous to the Web.
The classical Grid architecture had a number of implicit assumptions: the ability to locate and schedule suitable resources within a tolerably short time (i.e. resource richness); short transactions with relatively simple failure modes.
HENP Grids are data-intensive & resource-constrained: resource usage is governed by local and global policies; long transactions, some long queues; analysis means 1000s of users competing for resources at dozens of sites, with complex scheduling and management.
The HENP stateful, end-to-end monitored and tracked paradigm has been adopted in OGSA [now the WS-Resource Framework].

50 The Move to OGSA and then Managed Integrated Systems
Evolution over time, with increased functionality and standardization:
Custom solutions →
Globus Toolkit: X.509, LDAP, FTP, …; de facto standards; GGF: GridFTP, GSI →
Open Grid Services Architecture: stateful, managed; GGF: OGSI, … (+ OASIS, W3C); multiple implementations, including the Globus Toolkit →
Web Services Resource Framework: Web services + … →
App-specific services, ~integrated systems.

51 Managing Global Systems: Dynamic Scalable Services Architecture
MonALISA: 24 X 7 operations for multiple organizations: Grid2003, US CMS, CMS-DC04, ALICE, STAR, VRVS, Abilene; soon GEANT + GLORIAD.
“Station Server” services-engines at sites host many “Dynamic Services”; the system scales to thousands of service instances.
Servers autodiscover and interconnect dynamically to form a robust fabric; autonomous agents.
+ CLARENS: Web Services fabric and portal architecture.
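To make the "station server / dynamic services" idea concrete, here is a deliberately simplified toy in Python: site-local engines register the services they host in a shared registry, discover their peers, and answer queries locally or by delegation. All class and method names are invented for illustration; this is not the MonALISA API.

```python
from dataclasses import dataclass, field

@dataclass
class StationServer:
    """Toy site-local engine hosting 'dynamic services' (illustration only)."""
    site: str
    services: dict = field(default_factory=dict)   # service name -> callable
    peers: list = field(default_factory=list)      # other StationServer objects

    def deploy(self, name, handler):
        self.services[name] = handler

    def discover(self, registry):
        """Autodiscover peers via a shared registry and interconnect."""
        registry.setdefault("stations", []).append(self)
        self.peers = [s for s in registry["stations"] if s is not self]

    def query(self, name, *args):
        """Answer locally if the service is hosted here, else ask a peer."""
        if name in self.services:
            return self.services[name](*args)
        for peer in self.peers:
            if name in peer.services:
                return peer.services[name](*args)
        raise LookupError(f"no station hosts service {name!r}")

registry = {}
caltech, cern = StationServer("Caltech"), StationServer("CERN")
caltech.deploy("cpu_load", lambda: 0.42)          # pretend monitoring value
for station in (caltech, cern):
    station.discover(registry)
print(cern.query("cpu_load"))                     # resolved via the Caltech peer
```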

52 Grid Analysis Environment
CLARENS: Web Services Architecture.
Analysis clients talk standard protocols (HTTP, SOAP, XML-RPC) to the CLARENS “Grid Services Web Server”, with a simple Web service API. The secure Clarens portal hides the complexity of the Grid Services from the client.
Key features: global scheduler, catalogs, monitoring, and a Grid-wide execution service; Clarens servers form a global peer-to-peer network.
(Diagram components: Analysis Client; Grid Services Web Server; Grid Scheduler; Metadata, Virtual Data and Replica Catalogs; Fully-Concrete and Partially-Abstract Planners; Execution Priority Manager; Grid-Wide Service; Data Management; Monitoring; Applications; HTTP, SOAP, XML-RPC.)
Notes: the diagram shows how clients access the Grid services through a Grid service host, and roughly how the types of services interact. Clients can be as smart or dumb as the application desires: they can let the Grid service host do all of the work and ignore the details of the various services, or be more involved in the service flow. The motivation for hiding the Grid services behind a Grid service host is that it allows simpler, easier-to-develop client applications; the services remain available to smarter clients, so application developers can have more control over the service flow if necessary. Clarens is used as the Grid service host, providing XML-RPC, SOAP and HTTP access to the Grid services, plus authentication and authorization for secure access. HTTP, SOAP and XML-RPC were chosen as transport/communication protocols because of their wide acceptance as standards: the heterogeneous and decentrally administered GAE must use open protocols for communication in order to be widely accepted, which lowers the barriers to entry when introducing new services and GAE Grid sites.
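Because Clarens exposes its services over XML-RPC, SOAP and HTTP, a client session can be imagined along the lines below. This is only a sketch: the endpoint URL and the method names (catalog.query, scheduler.submit, monitor.status) are placeholders invented for illustration, not the actual Clarens service API.

```python
import xmlrpc.client

# Placeholder endpoint; a real deployment would use the site's Clarens server URL
# and grid-certificate authentication rather than an open connection.
server = xmlrpc.client.ServerProxy("https://clarens.example.edu:8443/clarens")

# Hypothetical method names, shown only to illustrate the calling pattern.
datasets = server.catalog.query("dataset LIKE 'CMS/DC04/%'")
job_id = server.scheduler.submit({"dataset": datasets[0], "executable": "analyze.C"})
print(server.monitor.status(job_id))
```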

53 World Summit on the Information Society (WSIS): Geneva 12/2003 and Tunis in 2005
The UN General Assembly adopted in 2001 a resolution endorsing the organization of the World Summit on the Information Society (WSIS), under UN Secretary-General, Kofi Annan, with the ITU and host governments taking the lead role in its preparation. GOAL: To Create an Information Society: A Common Definition was adopted in the “Tokyo Declaration” of January 2003: “… One in which highly developed ICT networks, equitable and ubiquitous access to information, appropriate content in accessible formats and effective communication can help people achieve their potential” Kofi Annan Challenged the Scientific Community to Help (3/03) CERN and ICFA SCIC have been quite active in the WSIS in Geneva (12/2003)

54 Role of Science in the Information Society; WSIS 2003-2005
HENP active in WSIS: CERN RSIS event; SIS Forum & CERN/Caltech online stand at WSIS I (Geneva, 12/03).
Visitors at WSIS I: Kofi Annan, UN Sec’y General; John H. Marburger, Science Adviser to the US President; Ion Iliescu, President of Romania, and Dan Nica, Minister of ICT; Jean-Paul Hubert, Ambassador of Canada in Switzerland.
Planning underway for WSIS II: Tunis 2005.

55 HEPGRID and Digital Divide Workshop, UERJ, Rio de Janeiro, February 2004
Theme: Global Collaborations, Grids and Their Relationship to the Digital Divide.
ICFA, understanding the vital role of these issues for our field’s future, commissioned the Standing Committee on Inter-regional Connectivity (SCIC) in 1998 to survey and monitor the state of the networks used by our field, and identify problems. For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide, and proposed to ICFA that these issues, as they affect our field of High Energy Physics, be brought to our community for discussion. This led to ICFA’s approval, in July 2003, of the Digital Divide and HEP Grid Workshop.
Tutorials: C++, Grid Technologies, Grid-Enabled Analysis, Networks, Collaborative Systems. Sessions & tutorials available (with video) on the Web.
Sponsors: CLAF, CNPQ, FAPERJ, UERJ.

56 International ICFA Workshop on HEP Networking, Grids and Digital Divide Issues for Global e-Science
Proposed workshop dates: May 23-27, 2005. Venue: Daegu, Korea.
Dongchul Son, Center for High Energy Physics, Kyungpook National University.
ICFA, Beijing, China, Aug. 2004: ICFA approval requested today.

57 International ICFA Workshop on HEP Networking, Grids and Digital Divide Issues for Global e-Science
Themes: Networking, Grids, and Their Relationship to the Digital Divide for HEP as Global e-Science; focus on key issues of inter-regional connectivity.
Mission statement: ICFA, understanding the vital role of these issues for our field’s future, commissioned the Standing Committee on Inter-regional Connectivity (SCIC) in 1998 to survey and monitor the state of the networks used by our field, and identify problems. For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide, and proposed to ICFA that these issues, as they affect our field of High Energy Physics, be brought to our community for discussion. This workshop, the second in the series begun with the 2004 Digital Divide and HEP Grid Workshop in Rio de Janeiro (approved by ICFA in July 2003), will carry forward this work while strengthening the involvement of scientists, technologists and governments in the Asia Pacific region.

58 International ICFA Workshop on HEP Networking, Grids and Digital Divide Issues for Global e-Science
Workshop goals:
Review the current status, progress and barriers to the effective use of the major national, continental and transoceanic networks used by HEP.
Review progress, strengthen opportunities for collaboration, and explore the means to deal with key issues in Grid computing and Grid-enabled data analysis, for high energy physics and other fields of data-intensive science, now and in the future.
Exchange information and ideas, and formulate plans to develop solutions to specific problems related to the Digital Divide in various regions, with a focus on Asia Pacific, Latin America, Russia and Africa.
Continue to advance a broad program of work on reducing or eliminating the Digital Divide, and ensuring global collaboration, as related to all of the above aspects.

59 Networks and Grids, GLORIAD, ITER and HENP
Network backbones and major links used by major experiments in HENP and other fields are advancing rapidly: to the 10G range in < 2 years, much faster than Moore’s Law. New HENP and DOE roadmaps foresee a factor of ~1000 improvement per decade.
We are learning to use long-distance 10 Gbps networks effectively: developments up to 7.5 Gbps flows over 16 kkm, with important advances in Asia-Pacific, notably Korea.
A transition to community-owned and operated R&E networks is beginning (us, ca, nl, pl, cz, sk …) or being considered (de, ro, …).
We must work to close the Digital Divide, allowing scientists and students from all world regions to take part in discoveries at the frontiers of science. Removing regional, last mile and local bottlenecks, and compromises in network quality, are now on the critical path.
GLORIAD is a key project to achieve these goals: synergies between the data-intensive missions of HENP & ITER, and enhanced partnership and community among the US, Russia and China, both in science and education.

60 LHC Global Collaborations
ATLAS and CMS. CMS: 1980 physicists and engineers, from 161 institutions in 36 countries.

61 SC2004: HEP Network Layout Preview of Future Grid systems
Joint Caltech, CERN, SLAC, FNAL, UKLight, HP, Cisco… demo.
6 to 8 10 Gbps waves to the HEP setup on the show floor.
Bandwidth challenge: aggregate throughput goal of 40 to 60 Gbps.
(Network diagram elements: Brazil, UK, Japan, Australia; SLAC 3*10 Gbps; TeraGrid; 10 Gbps Abilene; FNAL via StarLight 10 Gbps; NLR 2*10 Gbps; NLR to LA 10 Gbps; LHCNet; 2 metro 10 Gbps waves LA-Caltech; Caltech CACR; CERN Geneva.)

62 The Move to Dark Fiber is Spreading FiberCO
18 state dark fiber initiatives in the U.S. (as of 3/04): California (CALREN), Colorado (FRGP/BRAN), Connecticut Education Network, Florida Lambda Rail, Indiana (I-LIGHT), Illinois (I-WIRE), Md./DC/No. Virginia (MAX), Michigan, Minnesota, NY + New England (NEREN), N. Carolina (NC LambdaRail), Ohio (Third Frontier Net), Oregon, Rhode Island (OSHEAN), SURA Crossroads (SE U.S.), Texas, Utah, Wisconsin.

63

64 SCIC in 2003-2004 http://cern.ch/icfa-scic
Strong focus on the Digital Divide continues. A striking picture continues to emerge: remarkable progress in some regions, and a deepening Digital Divide among nations.
Intensive work in the field: > 60 meetings and workshops, e.g. Internet2, TERENA, AMPATH, APAN, CHEP2003, SC2003, Trieste, Telecom World 2003, WSIS/RSIS, the GLORIAD launch, the Digital Divide and HEPGrid Workshop (Feb. 2004 in Rio), GNEW2004, GridNets2004, the NASA ONT Workshop, etc.
3rd Int’l Grid Workshop in Daegu (August 26-28, 2004); plan for the 2nd ICFA Digital Divide and Grid Workshop in Daegu (May 2005).
HENP is increasingly visible to governments and heads of state: through network advances (records), Grid developments, work on the Digital Divide and issues of global collaboration, and also through the World Summit on the Information Society process. The next step is WSIS II in Tunis, November 2005.

65 Coverage Now monitoring 650 sites in 115 countries
In the last 9 months added: several sites in Russia (thanks to GLORIAD); many hosts in Africa (5 → 36 now; in 27 out of 54 countries); monitoring sites in Pakistan and Brazil (Sao Paolo and Rio); working to install a monitoring host in Bangalore, India.
(Map legend: monitoring site / remote site; “looks like chicken pox”.)
For Brazil, thanks to Alberto Santoro for UERJ and Sergio Novaes for UNESP; for Pakistan, thanks to Arshad Ali for NIIT. Pakistan monitors 4 .pk hosts plus CERN & SLAC; routes go via London (even between .pk sites). Sao Paolo monitors about 25 sites in 16 countries; within Brazil there are direct connections (5-30 msec), otherwise traffic goes via AMPATH/FIU (even to Chile and Uruguay).

66 Latin America: CLARA Network (2004-2006 EU Project)
Significant contribution from the European Commission and DANTE through the ALICE project.
NRENs in 18 LA countries forming a regional network for collaboration traffic.
Initial backbone ring bandwidth of 155 Mbps; spur links at 10 to 45 Mbps (Cuba at 4 Mbps by satellite).
Initial connection to Europe at 622 Mbps from Brazil.
Tijuana (Mexico) PoP soon to be connected to the US through a dark fibre link (CUDI-CENIC), giving access to the US, Canada and the Asia-Pacific rim.

67 NSF IRNC 2004: Two Proposals to Connect CLARA to the US (and Europe)
1st proposal: FIU and CENIC; 2nd proposal: Indiana and Internet2. (Diagram shows CLARA links to the US East and West Coasts and to Europe.)
Note: CHEPREO (FIU, UF, FSU, Caltech, UERJ, USP, RNP): the 622 Mbps Sao Paolo – Miami link started in August.

68 GIGA Project: Experimental Gbps Network: Sites in Rio and Sao Paolo
Universities: IME, PUC-Rio, UERJ, UFF, UFRJ, Unesp, Unicamp, USP.
R&D centres: CBPF (physics), CPqD (telecom), CPTEC (meteorology), CTA (aerospace), Fiocruz (health), IMPA (mathematics), INPE (space sciences), LNCC (HPC), LNLS (physics).
(Map: about 600 km extension, not to scale, linking these sites plus IMPA-RNP, Fapesp, USP-Incor and USP-C.Univ. over telco fiber.)
Slide from M. Stanton.

69 GIGA Project in Rio and Sao Paolo
Extension of the GIGA Project using 3000 km of dark fiber (the map shows the extension reaching, e.g., Maceió and João Pessoa).
“A good and real advancement for science in Brazil” – A. Santoro. “This is wonderful news! Our colleagues from Salvador-Bahia will be able to start to work with us on CMS.”

70 Trans-Eurasia Information Network TEIN (2004-2007)
Circuit between KOREN (Korea) and RENATER (France).
AP beneficiaries: China, Indonesia, Malaysia, Philippines, Thailand, Vietnam (non-beneficiaries: Brunei, Japan, Korea, Singapore). EU partners: NRENs of France, the Netherlands, the UK.
The scope recently expanded to South-East Asia and China. Upgraded to 34 Mbps in 11/2003; an upgrade to 155 Mbps is planned.
12M Euro in EU funds; coordinating partner DANTE.
A direct EU-AP link; other links go across the US.

71 APAN China Consortium CERNet NSFCnet
Established in 1999. The China Education and Research Network (CERNET), the Natural Science Foundation of China Network (NSFCNET) and the China Science and Technology Network (CSTNET) are the three main advanced networks.
(Diagram: CERNET / NSFCNET, 2.5 Gbps. Legend: Tsinghua – Tsinghua University; PKU – Peking University; NSFC – Natural Science Foundation of China; CAS – China Academy of Sciences; BUPT – Beijing Univ. of Posts and Telecom.; BUAA – Beijing Univ. of Aero- and Astronautics.)

72 GLORIAD: Global Optical Ring (US-Ru-Cn; Korea Now Full Partner )
DOE: ITER distributed operations; Fusion-HEP cooperation. NSF: collaboration of three major R&E communities.
Aug. 8, 2004: P.K. Young, Korean IST Advisor to the President, announces Korea joining GLORIAD.
TEIN gradually to 10G, connected to GLORIAD; the Asia Pacific Information Infrastructure (1G) will be a backup net to GLORIAD.
Also important for intra-Russia connectivity, and for education and outreach.

73 GLORIAD and HENP Example: Network Needs of IHEP Beijing
ICFA SCIC Report, Appendix 18, on network needs for HEP in China: “IHEP is working with the Computer Network Information Center (CNIC) and other universities and institutes to build Grid applications for the experiments. The computing resources and storage management systems are being built or upgraded in the Institute. IHEP has a 100 Mbps link to CNIC, so it is quite easy to connect to GLORIAD and the link could be upgraded as needed.”
Prospective network needs for IHEP Beijing, by experiment (earlier years vs. 2006 and on):
LHC/LCG: 622 Mbps, then 2.5 Gbps
BES: 100 Mbps, then 155 Mbps
YICRO, AMS, others: …
Total (sharing): 1 Gbps

74 The Open Science Grid http://www.opensciencegrid.org
The Open Science Grid will:
Build on the experience of Grid2003, as a persistent, production-quality Grid of national and international scope.
Ensure that the U.S. plays a leading role in defining and operating the global grid infrastructure needed for large-scale collaborative and international scientific research.
Combine computing resources at several DOE labs and at dozens of universities to effectively become a single national computing infrastructure for science, the Open Science Grid.
Provide opportunities for educators and students to participate in building and exploiting this grid infrastructure, and opportunities for developing and training a scientific and technical workforce. This has the potential to transform the integration of education and research at all levels.

75 Role of Sciences in Information Society. Palexpo, Geneva 12/2003
Demos at the CERN/Caltech RSIS online stand:
Advanced network and Grid-enabled analysis
Monitoring very large scale Grid farms with MonALISA
World-scale multisite, multi-protocol videoconference with VRVS (Europe-US-Asia-South America)
Distance diagnosis and surgery using robots with “haptic” feedback (Geneva-Canada)
Music Grid: live performances with bands at St. John’s, Canada and the Music Conservatory of Geneva on stage
VRVS: 37k hosts in 106 countries; 2-3X growth per year.

76 Achieving throughput: users can’t achieve the throughput available to them (the “Wizard Gap”), due to TCP stack, end-system, and/or local, regional and nat’l network issues. A big step is just to know what is achievable (e.g. 7.5 Gbps over 16 kkm, Caltech-CERN). Most users are unaware of the bottleneck bandwidth on the path. (Spreadsheet: \cottrell\iepm\wizard.xls)
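One concrete, checkable piece of the wizard gap is TCP buffer sizing: keeping a long path full requires a window at least as large as the bandwidth-delay product, which untuned hosts rarely have. A rough illustration; the RTT values below are assumed for the sake of the example, not measurements from the talk:

```python
# Bandwidth-delay product: the TCP window needed to keep a long path full.
def window_megabytes(gbps: float, rtt_ms: float) -> float:
    return gbps * 1e9 / 8 * (rtt_ms / 1e3) / 1e6

# Illustrative paths and assumed round-trip times.
for gbps, rtt_ms, path in [(1.0, 170, "1 Gbps over a ~170 ms trans-Atlantic path"),
                           (7.5, 250, "7.5 Gbps over a ~250 ms Caltech-CERN-class path")]:
    print(f"{path}: needs >= {window_megabytes(gbps, rtt_ms):.0f} MB of TCP window")
# Default buffers of a few hundred kB are why untuned hosts see only a small
# fraction of the available bandwidth on such paths.
```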

