
1 Networks, Grids and the Digital Divide in HEP and Global e-Science
Harvey B. Newman
ICFA Workshop on HEP Networking, Grids, and Digital Divide Issues for Global e-Science
Daegu, May 23, 2005

2 Large Hadron Collider, CERN, Geneva: 2007 Start
- 27 km tunnel in Switzerland & France; pp collisions at √s = 14 TeV, L = 10^34 cm^-2 s^-1
- Experiments: ATLAS and CMS (pp, general purpose), LHCb (B-physics), ALICE (heavy ions), TOTEM
- Physics: Higgs, SUSY, Extra Dimensions, CP Violation, Quark-Gluon Plasma, ... the Unexpected
- 5000+ physicists, 250+ institutes, 60+ countries

3 LHC Data Grid Hierarchy: Developed at Caltech
- Tier 0+1 (CERN Center): Online System at ~150-1500 MBytes/sec into the CERN Center (~PByte/sec raw into the physics data cache); PBs of disk, tape robot
- Tier 1 centers (FNAL, IN2P3, INFN, RAL, ...): 10-40 Gbps links from CERN
- Tier 2 centers: ~1-10 Gbps links
- Tier 3 (institute workstations) and Tier 4, at 1 to 10 Gbps
- CERN/outside resource ratio ~1:2; Tier0 : (Σ Tier1) : (Σ Tier2) ~ 1:1:1
- Tens of Petabytes by 2007-8, at ~100 sites; an Exabyte ~5-7 years later
Emerging vision: a richly structured, global dynamic system
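To get a feel for the scale of the flows in this hierarchy, here is a quick illustrative calculation (not from the slide) of how long a tier link takes to replicate a petabyte, assuming a fully dedicated link and ignoring protocol overhead:

```python
# Illustrative only: time to move a dataset over a tier link at the rates
# quoted in the hierarchy (e.g. 10 Gbps CERN -> Tier1), assuming the link
# is fully dedicated and ignoring protocol overhead.

def transfer_days(petabytes: float, gbps: float) -> float:
    """Days to move `petabytes` of data over a `gbps` link."""
    bits = petabytes * 8e15          # 1 PB = 8e15 bits (decimal units)
    seconds = bits / (gbps * 1e9)
    return seconds / 86_400          # seconds per day

print(f"1 PB at 10 Gbps: {transfer_days(1, 10):.1f} days")   # ~9.3 days
print(f"1 PB at 40 Gbps: {transfer_days(1, 40):.1f} days")   # ~2.3 days
```

At these rates, distributing tens of petabytes per year across ~100 sites is only feasible with the multi-10 Gbps links the hierarchy calls for.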

4 ICFA and Global Networks for Collaborative Science
National and international networks, with rapidly increasing capacity and end-to-end capability, are essential for:
- The daily conduct of collaborative work in both experiment and theory
- Experiment development & construction on a global scale
- Grid systems supporting analysis involving physicists in all world regions
- The conception, design and implementation of next-generation facilities as "global networks"
"Collaborations on this scale would never have been attempted if they could not rely on excellent networks"

5 Challenges of Next Generation Science in the Information Age
Petabytes of complex data explored and analyzed by 100s-1000s of globally dispersed scientists, in 10s-100s of teams
Flagship applications:
- High Energy & Nuclear Physics, astrophysics sky surveys: TByte to PByte "block" transfers at 1-10+ Gbps
- Fusion energy: time-critical burst-data distribution; distributed plasma simulations, visualization, analysis
- eVLBI: many (quasi-)real-time data streams at 1-10 Gbps
- Bioinformatics, clinical imaging: GByte images on demand
Advanced integrated Grid applications rely on reliable, high-performance operation of our LANs and WANs
Analysis challenge: provide results with rapid turnaround, over networks of varying capability in different world regions

6 Huygens Space Probe Lands on Titan, Monitored by 17 Telescopes in Australia, Japan, China and the US
- In October 1997, the Cassini spacecraft left Earth to travel to Saturn
- On Christmas Day 2004, the Huygens probe separated from Cassini
- On 14 January 2005 it began its descent through the dense (methane, nitrogen) atmosphere of Titan (speculated to be similar to that of Earth billions of years ago)
- The signals sent back from Huygens to Cassini were monitored by 17 telescopes in Australia, China, Japan and the US, to position the probe to within a kilometre (Titan is ~1.5 billion kilometres from Earth)
Courtesy G. McLaughlin


8 Australian eVLBI Data Sent Over High Speed Links to the Netherlands
- The data from two of the Australian telescopes were transferred to the Netherlands over the SXTransport and IEEAF links, and CA*net4 using UCLP, and were the first to be received by JIVE (the Joint Institute for VLBI in Europe), the correlator site
- The data were transferred at an average rate of 400 Mbps (1 Gbps was available)
- The data from these two telescopes were reformatted and correlated within hours of the end of the landing
- This early correlation allowed calibration of the data processor at JIVE, ready for the data from the other telescopes to be added
- Significant int'l collaborative effort: 9 organizations
G. McLaughlin, D. Riley

9 ICFA Standing Committee on Interregional Connectivity (SCIC)
Created in July 1998 in Vancouver, following the ICFA-NTF
CHARGE: Make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe
As part of the process of developing these recommendations, the committee should:
- Monitor traffic on the world's networks
- Keep track of technology developments
- Periodically review forecasts of future bandwidth needs, and
- Provide early warning of potential problems
- Create subcommittees as needed to meet the charge
Representatives: major labs, ECFA, ACFA, North and South American users
The Chair of the committee reports to ICFA twice per year

10 SCIC in 2004-2005: Three 2005 Reports, Presented to ICFA Today
- Main Report: "Networking for HENP" [H. Newman et al.]
  - Includes updates on the Digital Divide and world network status; brief updates on monitoring and advanced technologies
  - 18 appendices: a world network overview; status and plans for the next few years of nat'l & regional networks and optical network initiatives
- Monitoring Working Group Report [L. Cottrell]
Also see:
- SCIC Digital Divide Report [A. Santoro et al.]
- SCIC 2004 Digital Divide in Russia Report [V. Ilyin]
- TERENA 2004 Compendium

11 SCIC Main Conclusion for 2002-5
- The disparity among regions in HENP could increase even more sharply, as we learn to use advanced networks effectively and develop dynamic Grid systems in the "most favored" regions
- We must therefore take action, and work to close the Digital Divide
- To make scientists in all world regions full partners in the process of frontier discoveries
- This is essential for the health of our global experimental collaborations, for our field, and for international collaboration in many fields of science

12 HEPGRID and Digital Divide Workshop, UERJ, Rio de Janeiro, Feb. 16-20, 2004
Theme: Global Collaborations, Grids and Their Relationship to the Digital Divide
For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide. It proposed to ICFA that these issues, as they affect our field, be brought to our community for discussion. This led to ICFA's approval, in July 2003, of the Digital Divide and HEP Grid Workshop.
- Review of R&E networks; major Grid projects
- Perspectives on Digital Divide issues by major HEP experiments and regional representatives
- Focus on Digital Divide issues in Latin America; relate to problems in other regions
Tutorials: C++; Grid technologies; Grid-enabled analysis; networks; collaborative systems
See http://www.lishep.uerj.br
A. Santoro

13 International ICFA Workshop on HEP Networking, Grids, and Digital Divide Issues for Global e-Science, May 23-27, 2005, Daegu, Korea
Dongchul Son, Center for High Energy Physics; Harvey Newman, California Institute of Technology
- Focus on Asia-Pacific
- Also Latin America, Middle East, Africa
Approved by ICFA August 2004

14 고에너지물리연구센터 (Center for High Energy Physics)
International ICFA Workshop on HEP Networking, Grids and Digital Divide Issues for Global e-Science
Workshop goals:
- Review the current status, progress and barriers to effective use of major national, continental and transoceanic networks
- Review progress, strengthen opportunities for collaboration, and explore the means to deal with key issues in Grid computing and Grid-enabled data analysis, for high energy physics and other fields of data-intensive science
- Exchange information and ideas, and formulate plans to develop solutions to specific problems related to the Digital Divide, with a focus on the Asia Pacific region, as well as Latin America, Russia and Africa
- Continue to advance a broad program of work on reducing or eliminating the Digital Divide, and ensuring global collaboration, as related to all of the above aspects

15 PingER: World View from SLAC, CERN
- S.E. Europe, Russia: catching up
- Latin America, China: keeping up
- India, Mid-East, Africa: falling behind
- C. Asia, Russia, SE Europe, L. America, M. East, China: 4-7 yrs behind
- India, Africa: 7-8 yrs behind
R. Cottrell

16 Connectivity to Africa
- Internet access: penetration more than an order of magnitude lower than the corresponding percentages in Europe (33%) and N. America (70%)
- Digital Divide: lack of infrastructure, especially in the interior; high prices (e.g. $4-10/kbps/mo.); "gray" satellite bandwidth market
- Initiatives: EUMEDCONNECT (EU-North Africa); GEANT: 155 Mbps to S. Africa; Nectarnet (Ga. Tech); IEEAF/I2 NSF-sponsored initiative

Internet users and population statistics for Africa (2004 est.):

                    Population   Pct. of World Pop.   Internet Users   Usage Growth (2000-4)   Penetration   Pct. of World Users
TOTAL for AFRICA    893 M        14.0%                12.9 M           186.6%                  1.4%          1.6%
REST of WORLD       5,497 M      86.0%                800 M            124.4%                  14.6%         98.4%

17 Sample Bandwidth Costs for African Universities
- Sample size: 26 universities
- Average cost for VSAT service; quality, CIR, Rx, Tx not distinguished
- Bandwidth prices in Africa vary dramatically, and are in general many times what they could be if universities purchased in volume
- Average unit cost is 40X the US average, and several hundred times the cost in leading countries
Roy Steiner, Internet2 2004 Workshop

18 Asia Pacific Academic Network Connectivity
- Better North/South linkages within Asia are needed
- JP-TH link: 2 Mbps → 45 Mbps in 2004
- Connectivity to the US from JP, KO, AU is advancing rapidly; progress within the region, and to Europe, is much slower
APAN Status 7/2004; D. Y. Kim

19 Some APAN Links

Countries   Network             Bandwidth (Mbps)       AUP/Remark
ID-ID       AI3 (ITB-UNIBRAW)   0.128/0.128            R&E + Commodity
IN-US/UK    ERNET               70                     R&E
JP-CN       NICT-CERNET         1 Gbps                 R&E
JP-CN       NICT-CSTNET                                R&E
JP-ID       AI3 (ITB)           0.5/1.5 (From/To JP)   R&E + Commodity
JP-KR       APII                2 Gbps                 R&E
JP-LA       AI3 (NUOL)          0.128                  R&E + Commodity
JP-LK       AI3 (IOIT)          0.5/0.5 (From/To JP)   R&E + Commodity
JP-MY       AI3 (USM)           0.5/0.5 (From/To JP)   R&E + Commodity
JP-PH       AI3 (ASTI)          0.5/0.5 (From/To JP)   R&E + Commodity
JP-PH       MAFFIN (ASTI)       6                      R&E
JP-SG       AI3 (TP)            0.5/0.5 (From/To JP)   R&E + Commodity
JP-TH       AI3 (AIT)           0.5/1.5 (From/To JP)   R&E + Commodity
JP-TH       SINET (ThaiSarn)    45                     R&E
JP-US       IEEAF               10 Gbps + 622          R&E
JP-US       JGN2                10 Gbps                R&E
JP-US       SINET                                      R&E

G. McLaughlin

20 Digital Divide Illustrated by Network Infrastructures: TERENA NREN Core Capacity
- Core capacity (current, and in two years) goes up in large steps: 10 to 20 Gbps; 2.5 to 10 Gbps; 0.6-1 to 2.5 Gbps
- SE Europe, Mediterranean, FSU, Middle East: less progress, based on older technologies (below 0.15 and 1.0 Gbps); the Digital Divide will not be closed
Source: TERENA

21 Long Term Trends in Network Traffic Volumes: 300-1000X/10 Yrs
- SLAC traffic ~400 Mbps; growth in steps (ESnet limit): ~10X/4 years
- Projected: ~2 Terabits/s by ~2014
- July 2005: 2 x 10 Gbps links, one for production and one for research
- FNAL: 10 to 20 (+40) Gbps by Fall 2005
ESnet accepted traffic 1990-2004: exponential growth since '92; the annual growth rate increased from 1.7X to 2.0X per year in the last 5 years
W. Johnston, L. Cottrell
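As a sanity check, the annual growth factors quoted for ESnet compound to roughly the decade-scale multiplier in the slide title; a minimal sketch, using only the figures from the slide:

```python
# Quick consistency check (illustrative): annual traffic growth factors of
# 1.7X-2.0X per year compound to the "300-1000X per decade" trend cited
# in the slide title. Numbers here come from the slide, not new data.

def decade_multiplier(annual_factor: float, years: int = 10) -> float:
    """Total growth after `years` of compounding at `annual_factor` per year."""
    return annual_factor ** years

low = decade_multiplier(1.7)    # ~202X over a decade
high = decade_multiplier(2.0)   # 1024X over a decade

print(f"1.7X/yr over 10 yrs -> {low:.0f}X")
print(f"2.0X/yr over 10 yrs -> {high:.0f}X")
```

The compounded range (~200X to ~1000X) brackets the "300-1000X per decade" trend the slide cites.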

22 CANARIE (Canada) Utilization Trends and UCLPv2
(Gbps network capacity limit reached Jan. '05)
- "Demand for Customer Empowered Nets (CENs) is exceeding our wildest expectations
- New version of UCLP will allow easier integration of CENs into E2E nets for specific communities and/or disciplines
- UCLPv2 will be based on SOA, web services and workflow, to allow easy integration into cyber-infrastructure projects
- The network is no longer a static fixed facility, but can be 'orchestrated' with different topologies, routing etc. to meet specific needs of end users"
W. St. Arnaud

23 National Lambda Rail (NLR)
- Initially 4-8 10G wavelengths; to 40 10G waves in the future
- UltraLight, Internet2 HOPI, Cisco Research & UltraScience Net initiatives with HEP
- Atlantic & Pacific Wave
Transition beginning now to optical, multi-wavelength, community-owned or leased "dark fiber" networks for R&E
Initiatives in: nl, ca, jp, uk, kr; pl, cz, sk, pt, ei, gr, sb/mn ... + 30 US states (CA, IL, FL, IN, ...)

24 GEANT2 Hybrid Architecture
- Cooperation of 26 NRENs
- Implementation on dark fiber, IRU assets, transmission & switching equipment
- Layer 1 & 2 switching: "the Light Path"
- Point-to-point (E2E) wavelength services
- LHC in Europe: N x 10G T0-T1 overlay net
Global connectivity (to be improved in GEANT2):
- 10 Gbps + 3 x 2.5 Gbps to North America
- 2.5 Gbps to Japan
- 622 Mbps to South America
- 45 Mbps to Mediterranean countries
- 155 Mbps to South Africa
H. Doebbling

25 SXTransport: AU-US 2 x 10G
AARNet has dual 10 Gbps circuits to the US via Hawaii, plus dual 622 Mbps commodity links
G. McLaughlin

26 JGN2: Japan Gigabit Network (4/04 - 3/08)
- 20 Gbps backbone, 6 optical cross-connects
- Connection services at the optical level: 1 GbE and 10 GbE
- Optical testbeds: e.g. GMPLS interoperability tests
[Map: core network nodes at 20 Gbps/10 Gbps/1 Gbps, optical testbeds, and ~50 university, NICT and prefectural access points across Japan, from Sapporo to Okinawa; NICT Tsukuba Research Center, NICT Koganei Headquarters; US link from Otemachi]
Y. Karita

27 APAN-KR: KREONET/KREONet2
KREONET
- 11 regions, 12 POP centers
- Optical 2.5-10G backbone; SONET/SDH, POS, ATM
- National IX connection
KREONET2
- Support for next-generation apps: IPv6, QoS, multicast; bandwidth allocation services
- StarLight/Abilene connection
International links
- GLORIAD link at 10G to Seattle, Aug. 1 (MOST)
- US: 2 x 622 Mbps via CA*net; GbE via TransPAC
- Japan: 2 Gbps
- TEIN to GEANT: 155 Mbps
SuperSIREN (7 research institutes)
- Optical 10-40G backbone
- Collaborative environment support
- High-speed wireless: 1.25 G
D. Son

28 The Global Lambda Integrated Facility for Research and Education (GLIF)
- Architecting an international LambdaGrid infrastructure
- Virtual organization supporting persistent data-intensive scientific research and middleware development on "LambdaGrids"
- Many 2.5-10G links across the Atlantic and Pacific
- Peerings: Pacific & Atlantic Wave; Seattle, LA, Chicago, NYC, HK

29 Internet2 Land Speed Record (LSR)
Metric: the product of transfer speed and distance, using standard Internet (TCP/IP) protocols
- Single-stream 7.5 Gbps x 16 kkm with Linux: July 2004
- IPv4 multi-stream record with FAST TCP: 6.86 Gbps x 27 kkm: Nov. 2004
- IPv6 record: 5.11 Gbps between Geneva and Starlight: Jan. 2005
- Nov. 2004 record: 7.2 Gbps x 20.7 kkm throughput (petabit-m/sec); in the Internet2 LSR chart, blue = HEP
- Concentrate now on reliable Terabyte-scale file transfers
  - Disk-to-disk marks: 536 MBytes/sec (Windows); 500 MBytes/sec (Linux)
  - Note system issues: PCI-X bus, network interface, disk I/O controllers, CPU, drivers
NB: computing manufacturers' roadmaps for 2006: one server pair to one 10G link
S. Ravot
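The LSR figure of merit is simply throughput times path length; an illustrative computation with the Nov. 2004 numbers from the slide:

```python
# Illustrative computation of the Internet2 LSR metric (bandwidth x distance),
# using the Nov. 2004 figures from the slide: ~7.2 Gbps over ~20.7 thousand km.

def lsr_metric_petabit_meters_per_sec(gbps: float, kkm: float) -> float:
    """LSR figure of merit: throughput (bit/s) times path length (m)."""
    bits_per_sec = gbps * 1e9
    meters = kkm * 1e6                     # 1 kkm = 1000 km = 1e6 m
    return bits_per_sec * meters / 1e15    # express in petabit-meters/sec

print(f"{lsr_metric_petabit_meters_per_sec(7.2, 20.7):.0f} Pb-m/s")  # -> 149 Pb-m/s
```

Normalizing by distance is what lets records set over transatlantic and transpacific paths be compared on an equal footing.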

30 SC2004 Bandwidth Record by HEP: High Speed TeraByte Transfers for Physics
- Partners: Caltech, CERN, SLAC, FNAL, UFl, FIU, ESnet, UK, Brazil, Korea; NLR, Abilene, LHCNet, TeraGrid; DOE, NSF, EU, ...; Cisco, Neterion, HP, Newisys, ...
- Ten 10G waves, 80 10GbE ports, 50 10GbE NICs
- Aggregate rate of 101 Gbps
- 1.6 Gbps to/from Korea
- 2.93 Gbps to/from Brazil (UERJ, USP)
- Monitoring NLR, Abilene, LHCNet, SCinet, UERJ, USP, int'l R&E nets and 9000+ Grid nodes simultaneously
I. Legrand

31 SC2004 KNU Traffic: 1.6 Gbps to/from Pittsburgh, via TransPAC (LA) and NLR; monitoring in Daegu
Courtesy K. Kwon

32 SC2004: 2.93 (1.95 + 0.98) Gbps Sao Paulo - Miami - Pittsburgh (via Abilene)
- Brazilian T2+T3 HEPGrid: Rio + Sao Paulo
- Also 500 Mbps via RedCLARA to Madrid & GEANT, then GEANT (SURFnet)
J. Ibarra

33 HENP Bandwidth Roadmap for Major Links (in Gbps) Continuing Trend: ~1000 Times Bandwidth Growth Per Decade; HEP: Co-Developer as well as Application Driver of Global Nets

34 Evolving Quantitative Science Requirements for Networks (DOE High Perf. Network Workshop)

Science Area                   Today (E2E)                   5 Years (E2E)                          5-10 Years (E2E)                      Remarks
High Energy Physics            0.5 Gb/s                      100 Gb/s                               1000 Gb/s                             High bulk throughput
Climate (data & computation)   0.5 Gb/s                      160-200 Gb/s                           N x 1000 Gb/s                         High bulk throughput
SNS NanoScience                Not yet started               1 Gb/s                                 1000 Gb/s + QoS for control channel   Remote control, time-critical throughput
Fusion Energy                  0.066 Gb/s (500 MB/s burst)   0.198 Gb/s (500 MB / 20 sec. burst)    N x 1000 Gb/s                         Time-critical throughput
Astrophysics                   0.013 Gb/s (1 TByte/week)     N*N multicast                          1000 Gb/s                             Computational steering, collaborations
Genomics (data & computation)  0.091 Gb/s (1 TBy/day)        100s of users                          1000 Gb/s + QoS for control channel   High throughput and steering

W. Johnston

35 LHCNet, ESnet Plan 2007/2008: 40 Gbps US-CERN, ESnet MANs, IRNC
[Map: ESnet production IP core (≥10 Gbps) plus a planned 2nd core, the Science Data Network (30-50 Gbps circuit transport), linking major DOE Office of Science sites (FNAL, BNL, ...) through metropolitan area rings and hubs (SEA, SNV, SDG, ALB, ELP, DEN, ATL, CHI, NYC, DC); high-speed cross-connects with Internet2/Abilene; international links to Europe (GEANT2, SURFnet, IN2P3), Japan, Australia and Asia-Pacific; NSF/IRNC circuit: GVA-AMS connection via SURFnet or GEANT2]
LHCNet Data Network, US-CERN (4 x 10 Gbps):
- 9/05: 10G CHI + 10G NYC
- 2007: 20G + 20G
- 2009: ~40G + 40G
S. Ravot

36 We Need to Work on the Digital Divide from Several Perspectives
- Workshops and tutorials/training sessions: e.g. ICFA DD Workshops, Rio 2/04; HONET (Pakistan) 12/04; Daegu May 2005
- Share information: monitoring, BW progress; dark fiber projects; prices in different markets
- Use model cases: Poland, Slovakia, Czech Rep., China, Brazil, ...
- Encourage, and work on, inter-regional projects
  - GLORIAD: Russia-China-Korea-US optical ring
  - Latin America: CHEPREO/WHREN (US-Brazil); RedCLARA
- Help with modernizing the infrastructure: design, commissioning, development
- Provide tools for effective use: monitoring, collaboration systems; advanced TCP stacks, Grid system software
- Work on policies and/or pricing: pk, br, cn, SE Europe, in, ...
- Encourage access to dark fiber
- Raise world awareness of the problems, and of opportunities for solutions

37 UERJ T2 HEPGRID Inauguration, Dec. 2004: The Team (Santoro et al.)
- UERJ Tier2 now on Grid3 and the Open Science Grid (5/13)
- 100 dual nodes; upgrades planned
- Also a Tier3 in Sao Paulo (Novaes)

38 Grid3, the Open Science Grid and DISUN
Grid3: a national Grid infrastructure
- 35 sites, 3500 CPUs: universities + 4 national labs
- Part of the LHC Grid; running since October 2003
- HEP, LIGO, SDSS, biology, computer science
Transition to the Open Science Grid; + Brazil (UERJ, USP)
7 US CMS Tier2s; Caltech, Florida, UCSD, UWisc form DISUN
P. Avery

39 Science-Driven: HEPGRID (CMS) in Brazil
HEPGRID-CMS/BRAZIL is a project to build a Grid that:
- At the regional level will include CBPF, UFRJ, UFRGS, UFBA, UERJ & UNESP
- At the international level will be integrated with the CMS Grid based at CERN; focal points include Grid3/OSG and bilateral projects with the Caltech group
[Diagram: the Brazilian HEPGRID (CBPF, UFRJ, UFRGS, UFBA, UNESP/USP with SPRACE working at Gigabit speed, individual machines and online systems) feeds the UERJ Regional Tier2 Center, connected at 622 Mbps to the T0+T1 at CERN (2.5-10 Gbps core), alongside T1s in France, Italy, USA and Germany; tiers T4 → T3 → T2 → T1]
- UERJ: T2 → T1, 100 → 500 nodes; plus T2s to 100 nodes
- ICFA DD Workshop 2/04; T2 inauguration + GIGA/RNP agreement 12/04

40 Rio Tier2 - SPRACE (Sao Paulo) - AMPATH Direct Link at 1 Gbps
[Diagram: SPRACE T2 and CC-USP connect over Giga fiber (1 Gbps) through the Giga router and ANSP routers, then over fiber leased to ANSP (1 Gbps, Eletropaulo and Iqara) to the AMPATH NAP of Brazil (Terremark); UERJ (Rio) connects via Giga fiber and Caltech/Cisco routers]
L. Lopez

41 Highest Bandwidth Link in Each NREN's Infrastructure, EU & EFTA Countries, & Dark Fiber
- Owning (or leasing) dark fiber is an interesting option for an NREN; it depends on the national situation
- NRENs that own dark fiber can decide for themselves which technology, and what speeds, to use on it
Source: TERENA

42 Europe: Core Network Bandwidth Increase for Years 2001-2004 and 2004-2006
- Countries with no increase already had 1-2.5G backbones in 2001; these are all going to 10G backbones by 2006-7
- Countries showing the largest increases are:
  - PIONIER (Poland): from 155 Mbps to 10 Gbps capacity (64X)
  - SANET (Slovakia): from 4 Mbps to 1 Gbps (250X)
Source: TERENA
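The quoted multipliers follow directly from the link speeds; a one-line check of the slide's arithmetic:

```python
# Verifying the upgrade multipliers quoted on the slide (illustrative
# arithmetic only, using the link speeds the slide gives in Mbps).

pionier = 10_000 / 155   # Poland: 155 Mbps -> 10 Gbps, ~64.5X (quoted as 64X)
sanet = 1_000 / 4        # Slovakia: 4 Mbps -> 1 Gbps, 250X

print(f"PIONIER: {pionier:.1f}X")   # -> 64.5X
print(f"SANET:   {sanet:.0f}X")     # -> 250X
```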

43 [SANET (Slovakia) Dark Fiber Network]
- 1660 km of dark fiber CWDM links, 1 to 4 Gbps (GbE)
- August 2002: dark fiber link to Austria
- April 2003: dark fiber link to Czech Republic
- 2004: dark fiber link to Poland
- Planning 10 Gbps backbone
- >250X growth, 2002-2005
- 120 km cross-border dark fiber: cost 4k per month; 1 GbE now, 10G planned
T. Weis

44 Dark Fiber in Eastern Europe. Poland: PIONIER (10-20G) Network
- 2763 km of lit fiber connects 22 MANs; +1286 km (9/05), +1159 km (4Q/06)
- Vision: support computational Grids and domain-specific Grids, digital libraries, interactive TV; additional fibers for e-regional initiatives
[Map, 4Q05 plan: 10 Gb/s metropolitan area networks; 1 Gb/s and 10 Gb/s (2 lambda) links]
Courtesy M. Przybylski

45 PIONIER Cross-Border Dark Fiber Plan: locations; single GEANT PoP in Poznan

46 CESNET2 (Czech Republic) Network Topology, Dec. 2004
- 2500+ km of leased fibers (since 1999)
- 2005: 10GE link Praha-Brno (300 km) in service; plan to go to 4 x 10G and higher as needed; more 10GE links planned
J. Gruntorad

47 APAN China Consortium
Established in 1999. The China Education and Research Network (CERNET) and the China Science and Technology Network (CSTNET) are the main advanced networks.
CERNET (2.5 Gbps backbone)
- 2000: own dark fiber crossing 30+ major cities and 30,000 kilometers
- 2003: 1300+ universities and institutes, over 15 million users
CERNET2: next generation R&E net
- Backbone connects 15-20 GigaPOPs at 2.5G-10 Gbps (Internet2-like)
- Connects 200 universities and 100+ research institutes at 1 Gbps-10 Gbps
- Native IPv6 and lambda networking
From 6 to 78 million Internet users in China from Jan.-July 2004
J. P. Wu, H. Chen

48 Brazil (RNP2): Rapid Backbone Progress and the GIGA Project
- RNP connects the regional networks in all 26 states of Brazil
- Backbone at 155 Mbps on major links; 622 Mbps Rio-Sao Paulo
- 2.5G to 10G core in 2005 (300X improvement in 2 years)
- The GIGA Project: dark fiber experimental network
  - 700 km of fiber, 7 cities and 20 institutions in Sao Paulo and Rio
  - GbE to the Rio Tier-2 and Sao Paulo Tier-3
- RNP & GIGA: extend GIGA to the Northeast, with 4000 km of dark fiber by 2008
L. Lopez

49 DFN (Germany): X-WiN Fiber Network
[Map: X-WiN fiber segments (KPN, GL and GC fiber; existing fiber) linking ~30 core sites across Germany]
- Most of the X-WiN core will be a fibre network (see map); the rest will be provided by wavelengths
- Several fibre and wavelength providers
- Fibre is relatively cheap: in most cases more economic than (one) wavelength
- X-WiN creates many new options, besides being cheaper than the current G-WiN core
13.04.2005, K. Schauerhammer

50 RoEduNet (Romania), January 2005
- Inter-city links were 2 to 6 Mbps in 2002; improved to 155 Mbps in 2003-2004
- GEANT-Bucharest link: 155 to 622 Mbps
- Plan: 3-4 centers at 2.5 Gbps; dark fiber inter-city backbone
- Compare Pakistan: 56 universities share 155 Mbps internationally
N. Tapus, T. Ul Haq

51 ICFA Report: Networks for HENP. General Conclusions
Reliable, high end-to-end performance of networked applications such as Data Grids is required. Achieving this requires:
- A coherent approach to end-to-end monitoring in all regions, that allows scientists throughout the world to extract clear information
- Upgrading campus infrastructures, to support Gbps flows to HEP centers
- Removing local, last mile, and nat'l and int'l bottlenecks end-to-end, whether technical or political in origin
  - Bandwidth across borders, the countryside or the city is often much less than on national backbones and int'l links
  - This problem is very widespread in our community, with examples stretching from the Asia Pacific to Latin America to the Northeastern US. Root causes vary, from lack of local infrastructure to unfavorable policies and pricing

52 Situation of Local Access in Belém, Brazil, in 2004

Institution        Summary of local network connections                    Annual Cost (US$)
CEFET              Access to provider at 512 kbps                          22,200
CESUPA (4 campi)   Internal + access to provider at 6 Mbps                 57,800
IEC/MS (2 campi)   Internal at 512 kbps + access to provider at 512 kbps   13,300
MPEG (2 campi)     Internal at 256 kbps; access at 34 Mbps (radio link)    7,600
UEPA (5 campi)     Internal at 128 kbps; access at 512 kbps                18,500
UFPA (4 campi)     Internal at 128 kbps; provider PoP                      16,700
UFRA               Access to provider at 1 Mbps                            16,000
UNAMA (4 campi)    Internal wireless links, access at 6 Mbps               88,900

Annual telco charges for POOR local access = US$ 241,000

53 Belém: a Possible Topology (30 km ring)

54 Alternative Approach in Brazil: Do It Yourself (DIY) Networking (M. Stanton, RNP)
1. Form a consortium for joint network provision
2. Build your own optical fibre network to reach ALL the campi of ALL consortium members
3. Light it up and go!
Costs involved:
- Building out the fibre, using the utility poles of the electric company: US$ 7,000 per km
- Monthly rental of US$ 1 per pole (~25 poles per km)
- Equipment costs: mostly cheap 2-port GbE switches
- Operation and maintenance
In Belém, for 11 institutions using all-GigE connections:
- Capital costs around US$ 500,000
- Running costs around US$ 40,000 p.a.
- Compare with the current US$ 240,000 p.a. for the traditional telco solution [for 0.128 to 6 Mbps: ~1000X less bandwidth]
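A back-of-envelope sketch of this cost model, using the unit costs quoted above and the 30 km ring length from the Belém topology slide; the split between fiber and equipment within the ~US$ 500,000 capital figure is an assumption for illustration, not a figure from the talk:

```python
# Back-of-envelope DIY fiber cost model, using the slide's quoted unit
# costs. The 30 km ring length comes from the Belém topology slide;
# the equipment/O&M split is NOT given in the talk.

RING_KM = 30               # Belém ring length (slide 53)
BUILD_PER_KM = 7_000       # US$/km, build-out on utility poles
POLES_PER_KM = 25
POLE_RENT_MONTHLY = 1      # US$ per pole per month

fiber_capex = RING_KM * BUILD_PER_KM                            # one-time build cost
pole_rent_pa = RING_KM * POLES_PER_KM * POLE_RENT_MONTHLY * 12  # annual pole rental

print(f"Fiber build-out: US$ {fiber_capex:,}")        # US$ 210,000
print(f"Pole rental:     US$ {pole_rent_pa:,} p.a.")  # US$ 9,000 p.a.
# The gap up to the quoted ~US$ 500,000 capital and ~US$ 40,000 p.a.
# running costs is switches, access spurs, operation and maintenance.
```

Even with those extras, the consortium recovers the build cost in a few years of avoided telco charges, while jumping from sub-Mbps links to GigE.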

55 Brazil: RNP Nat'l Plan for Optical Metro Nets in 2005-6
- In December 2004, RNP signed contracts with Finep (the funding agency of the Ministry of Science and Technology) to build optical metro networks in all 27 capital cities in Brazil
- Total value of more than US$ 15 million; most of this money will be spent in 2005
M. Stanton

56 GLORIAD Topology: Current, and Plans for Years 1-5
Ring: Seattle - Chicago - NYC - Amsterdam - Moscow - Novosibirsk - Khabarovsk - Beijing - Hong Kong - Pusan
Segment upgrade paths (current → year 5):
- 1 Trans-Asia: 155 Mbps → 2.5 Gbps (US-China), 10 Gbps (US-Korea-China) → 2 x 10 Gbps (US-China; US-Korea-China) → N x 10 Gbps
- 2 Trans-China: 2.5 Gbps (155 Mbps Beijing-Khabarovsk) → 2.5 Gbps → 1 x 10 Gbps → 2 x 10 Gbps → N x 10 Gbps
- 3 Trans-Russia: 155 Mbps → 622 Mbps → 2.5 Gbps → 1 x 10 Gbps → N x 10 Gbps
- 4 Trans-Europe: 622 Mbps → 2 x 10 Gbps → N x 10 Gbps
- 5 Trans-Atlantic: 622 Mbps → 1 Gbps → 1 x 10 Gbps → 2 x 10 Gbps → N x 10 Gbps
- 6 Trans-North America: 2.5G (Asia-Chicago), GbE NYC-Chicago (via CANARIE) → 10 Gbps Seattle-Chicago-NYC → 2 x 10 Gbps → N x 10 Gbps
G. Cole

57 Closing the Digital Divide: R&E Networks in/to Latin America
- RNP2 and the GIGA Project (ANSP)
- AmPath: 45 Mbps links to the US (2002-5)
- CHEPREO (NSF, from 2004): 622 Mbps Sao Paulo - Miami
- WHREN/LILA (NSF, from 2005): 0.6 to 2.5 G ring by 2006; connections to Pacific & Atlantic Wave
- RedCLARA (EU): connects 18 Latin American NRENs and Cuba; 622 Mbps link to Europe (GEANT)

58 Role of Science in the Information Society; WSIS 2003-2005
GOAL: An Information Society: "... one in which highly developed networks, equitable and ubiquitous access to information, appropriate content in accessible formats, and effective communication can help people achieve their potential"
- HEP active in WSIS I, Geneva
  - Theme: "Creating a Sustainable Process of Innovation"
  - CERN RSIS event; SIS Forum & CERN/Caltech online stand at WSIS I (12/03): >50 demos: advanced nets & Grids, global videoconferencing, telesurgery, "music Grids", ...
- Visitors at WSIS I: Kofi Annan, UN Sec'y Gen'l; John H. Marburger, Science Adviser to the US President; Ion Iliescu, President of Romania, and Dan Nica, Minister of ICT; ...
- World Conference on Physics and Sustainable Development, Durban, South Africa, 10/31-11/2/05: "The World Conference will serve as the first global forum to focus the physics community toward development goals and to create new mechanisms of cooperation toward their achievement."
- WSIS II: Tunis, 11/16-11/18/2005

59 Networks and Grids for HEP and Data Intensive Global Science
- Networks used by HEP and other fields of data-intensive science are advancing rapidly: to the 10G range, and now N x 10G; much faster than Moore's Law
- New HENP and DOE roadmaps: factor ~1000 BW growth per decade
- HEP & CS are learning to use long-range 10 Gbps networks effectively: in 2004-5, 7+ Gbps TCP flows over 20+ kkm; 101 Gbps record
- The transition to community-operated/owned optical R&E networks (us, ca, nl, jp, kr; pl, cz, br, sk, pt, ei, gr, ...) is underway
- A new era of "hybrid" optical networks & Grid systems is emerging
- We must work to close the Digital Divide, from several perspectives, to allow scientists in all world regions to take part in discoveries
- Removing regional, last mile and local bottlenecks, and compromises in network quality, is on the critical path
- Important examples on the road to closing the Digital Divide:
  - GLORIAD (US-Russia-China-Korea) global optical ring
  - IEEAF "Global Quilt"; NSF: IRNC and a new initiative on Africa
  - CHEPREO, WHREN and the Brazil HEPGrid in Latin America
  - Leadership & outreach: HEP groups in the US, EU, Japan, Korea, Latin America

60 Acknowledgements: R. Aiken, A. Ali, S. Altmanova, P. Avery, J. Boroumand, J. Bakken, L. Bauerdick, J. Bunn, R. Cavanaugh, H. S. Chen, K. Cho, G. Cole, L. Cottrell, D. Davids, H. Doebbling, J. Dolgonas, E. Fantegrossi, D. Foster, I. Foster, P. Galvez, J. Gruntorad, J. Ibarra, V. Ilyin, W. Johnston, Y. Karita, D. Y. Kim, Y. Kim, K. Kwon, I. Legrand, M. Livny, S. Low, O. Martin, R. Mount, S. McKee, G. McLaughlin, D. Nae, K. Neggers, S. Novaes, J. Pool, D. Petravick, S. Ravot, D. Reese, D. Riley, A. Santoro, K. Schauerhammer, C. Smith, D. Son, M. Stanton, C. Steenberg, X. Su, R. Summerhill, M. Thomas, F. Van Lingen, E. Velikhov, D. Walsten, T. Weis, T. West, D. Williams, V. White, J. P. Wu, F. Wuerthwein, Y. Xia. Organizations: US DOE, US NSF, ESnet, European Commission, CERN, SLAC, Fermilab, CACR, NLR, CENIC, Internet2/HOPI, FLR, UltraLight, Starlight, KISTI, KAIST, RNP, ANSP, Cisco, Neterion.

61 Some Extra Slides Follow

62 LHC: Many Petabytes/Yr of Complex Data; Unprecedented Instruments, IT Challenges. At 10^34 cm^-2 s^-1 luminosity: a bunch crossing every 25 nsec (40 MHz), with ~20 events from known physics superimposed per crossing: 10^9 interactions/s. Instruments, e.g. the CMS Tracker: 223 square meters of silicon sensors.
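The 40 MHz crossing rate and ~10^9 interactions/s follow directly from the numbers quoted on the slide; a quick back-of-envelope check (figures taken from the slide, nothing else assumed):

```python
# LHC rate arithmetic: 25 ns bunch spacing and ~20 superimposed
# known-physics events per crossing, as quoted on the slide.
BUNCH_SPACING_S = 25e-9      # one bunch crossing every 25 ns
EVENTS_PER_CROSSING = 20     # events superimposed per crossing

crossing_rate_hz = 1.0 / BUNCH_SPACING_S                    # 4.0e7 = 40 MHz
interaction_rate = crossing_rate_hz * EVENTS_PER_CROSSING   # 8e8, i.e. ~1e9/s
```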

63 HEPGRID and Digital Divide Workshop UERJ, Rio Feb. 16-20 2004 Summary  Worldviews of Networks and Grid Projects and the Relationship to the Digital Divide: [Newman, Santoro, Gagliardi, Avery, Cottrell]  View from Major Experiments and Labs (D0, ATLAS, CMS, LHCb, ALICE; Fermilab): [Blazey, Jones, Foa, Nakada, Carminati]  Progress in Acad. & Research Networks in Europe, Asia Pacific, Latin America [Williams, Tapus, Karita, Son, Ibarra]  Major Network Initiatives: I2, GLIF, NLR, AARNet, CLARA [Preston, de Laat, Silvester, McLaughlin, Stanton, Olson]  View from/for Developing Countries [Willers, Ali, Davalos, De Paula, Canessa, Sciutto, Alvarez]  Grids and Digital Divide in Russia [Ilyin, Soldatov]  Other Fields: ITER (Fusion), MammoGrid: [Velikhov, Amendolia]  Round Tables: Network Readiness, Digital Divide in Latin America  Special Talks on HEP in Brazil [Lederman], and Review of the CERN “Role of Science in the Information Society” Event [Hoffmann]  Participation of Latin American NRENs and Gov’t Officials  152 Participants, 18 Countries: BR 107, US 18, CH 8, AG 3, FR 3, …

64 Int’l Network Bandwidth on Major Links for HENP: US-CERN Example  Rate of Progress >> Moore’s Law:  9.6 kbps analog (1985)  64-256 kbps digital (1989-1994) [X 7-27]  1.5 Mbps shared (1990-3; IBM) [X 160]  2-4 Mbps (1996-1998) [X 200-400]  12-20 Mbps (1999-2000) [X 1.2k-2k]  155-310 Mbps (2001-2) [X 16k-32k]  622 Mbps (2002-3) [X 65k]  2.5 Gbps (2003-4) [X 250k]  10 Gbps (2004-5) [X 1M]  2 x 10 Gbps (2005-6) [X 2M]  4 x 10 Gbps (~2007-8) [X 4M]  8 x 10 Gbps or 2 x 40 Gbps (~2009-10) [X 8M]  A factor of ~1M bandwidth improvement over 1985-2005; a factor of ~5k during 1995-2005  HENP has become a leading applications driver, and also a co-developer of global networks
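The bracketed growth factors on this slide are all relative to the 9.6 kbps analog link of 1985. A small sketch reproducing a few of them (link speeds taken from the slide):

```python
# Growth factors on US-CERN links relative to the 9.6 kbps (1985) baseline.
BASELINE_BPS = 9.6e3   # 1985 analog link

links_bps = {
    "622 Mbps (2002-3)":    622e6,
    "10 Gbps (2004-5)":     10e9,
    "4 x 10 Gbps (2007-8)": 40e9,
}

factors = {name: bps / BASELINE_BPS for name, bps in links_bps.items()}
# 10e9 / 9.6e3 is about 1.04e6, i.e. the slide's "[X 1M]";
# 622e6 / 9.6e3 is about 6.5e4, i.e. "[X 65k]".
```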

65 Internet Growth in the World at Large: Amsterdam Internet Exchange Point Example. [Chart: AMS-IX traffic, 5-minute average and maximum; average near 40 Gbps, peaks near 66 Gbps, with axis marks at 20 Gbps and 60 Gbps.] Some annual growth spurts, typically in summer-fall; “acceleration” last summer. The rate of HENP network usage growth (> 100% per year) is not unlike the world at large.

66 Brazil: National Backbone in 2003. RNP2, May 2003 (links ≤ 30 Mbps). M. Stanton

67 BRAZIL: Nat’l Backbone, May 2005. RNP, May 2005 (links ≤ 622 Mbps)

68 BRAZIL: RNPng, 10G & 2.5G Core Network (3Q2005). Core cities: Fortaleza, Recife, Salvador, Rio de Janeiro, Belo Horizonte, Brasília, São Paulo, Curitiba, Florianópolis, Porto Alegre; links at 2.5 Gbps and 10 Gbps. Core network tender completed May 12th-13th.

69 HENP Data Grids, and Now Services-Oriented Grids  The classical Grid architecture had a number of implicit assumptions  The ability to locate and schedule suitable resources, within a tolerably short time (i.e. resource richness)  Short transactions with relatively simple failure modes  HENP Grids are Data Intensive & Resource-Constrained  Resource usage governed by local and global policies  Long transactions; some long queues  Grid Analysis: 1000s of users compete for resources at dozens of sites: complex scheduling and management  HENP: a Stateful, End-to-End Monitored and Tracked Paradigm  Adopted in OGSA, Now the WS-Resource Framework

70 Grid Analysis: A Real-Time SOA Enabling Global Communities  Analysis Clients talk standard protocols (HTTP, SOAP, XML-RPC) to the Clarens data/services portal, which hides complexity  A simple Web service API allows diverse Analysis Clients (simple or complex)  Clarens Servers autodiscover and auto-connect to form a “Services Fabric”  Key features: Global Scheduler, Catalogs, Monitoring, and a strategic Grid-wide Execution service. [Architecture diagram components: Analysis Clients (ROOT-Clarens, Cojac, IGUANA); Clarens Web Server; Grid Services: Scheduler, Catalogs (Metadata, Virtual Data, Replica), Fully-Abstract, Partially-Abstract and Fully-Concrete Planners (Chimera, MCRunjob, Sphinx), Monitoring (MonALISA), Applications (BOSS, RefDB/MOPDB, POOL, ORCA, ROOT, FAMOS), Execution Priority Manager, Grid-Wide Execution Service, Data Management, VDT-Server.] F. Van Lingen, M. Thomas
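Since Clarens exposes its services over standard protocols, XML-RPC among them, a client can be as thin as a stock XML-RPC proxy. The sketch below shows only that pattern: the portal URL and the `catalog.query` / `scheduler.submit` method names are hypothetical placeholders, not the actual Clarens API, and the real calls are left commented out.

```python
import xmlrpc.client

# Hypothetical Clarens-style portal endpoint (placeholder URL, not a real host).
PORTAL_URL = "http://clarens.example.org:8080/clarens/"

# ServerProxy speaks XML-RPC over HTTP; constructing it does not yet connect.
portal = xmlrpc.client.ServerProxy(PORTAL_URL)

# A real client would then invoke portal services, e.g. (hypothetical names):
#   datasets = portal.catalog.query("owner=cms AND run>1000")
#   job_id   = portal.scheduler.submit(datasets[0], "my_analysis_task")
```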

71  Next-generation Information System, with the network as an integrated, actively managed subsystem in a global Grid  Hybrid network infrastructure: packet-switched + dynamic optical paths  End-to-end monitoring; real-time tracking and optimization; dynamic bandwidth provisioning; agent-based services spanning all layers  Partners: Caltech, UF, UMich, FNAL, SLAC, CERN; UERJ (Rio), USP (Sao Paulo), FIU, KNU (Korea), KEK; NLR, CENIC, UCAID, Translight, UKLight, Netherlight, UvA, UCLondon, Taiwan; Cisco. [Map: 10 Gbps links to UERJ, USP (CHEPREO) and KNU.] S. McKee; G. Karmous-Edwards

72 LISA: Localhost Information Service Agent, an End-to-End Monitoring Tool  Easy to deploy and install with any browser; user-friendly GUI  Detects system architecture and OS; runs on all versions of Windows, Linux, and Mac  Complete system monitoring of the host: CPU, memory, IO, disk, …  Hardware detection, including audio and video equipment and the drivers installed in the system  Provides embedded clients for IPERF (or other network monitoring tools, e.g. Web100)  LISA and ApMon2: a basis for strategic, end-to-end managed Grids; complete, lightweight monitoring of end-user systems and network connectivity; uses the MonALISA framework to optimize client applications. Try it: I. Legrand
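The "detects system architecture, OS" step is, at its core, the kind of host introspection the standard library already provides. A minimal sketch of just that first step (the real LISA agent collects far more, and ships its data via ApMon/MonALISA):

```python
import os
import platform

# Minimal host introspection of the kind a monitoring agent performs at
# startup: OS family, kernel release, CPU architecture, and core count.
host_info = {
    "os": platform.system(),          # "Linux", "Windows", "Darwin", ...
    "os_release": platform.release(),
    "arch": platform.machine(),       # e.g. "x86_64"
    "cpus": os.cpu_count(),
}
```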

73 ApMon Grid Analysis Demo: Distributed Processing and Monitoring at SC2004  Demonstrated how CMS analysis jobs can be submitted to multiple sites and monitored from anywhere else on the Grid  Three sites: Caltech, UERJ and USP  Used the “BOSS” job submission and tracking tool, running as a “Clarens” Grid service  Job status is monitored by MonALISA and visualized using (netlogger-style) “lifelines” in real time M. Thomas
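ApMon reports job parameters to MonALISA as fire-and-forget UDP datagrams. The sketch below shows only that transport pattern in miniature: the collector host, port, and message format are placeholders (real ApMon uses its own parameter encoding), and the actual send is left commented out.

```python
import socket
import time

# Placeholder MonALISA collector address -- illustrative only, not a real host.
MONALISA_ADDR = ("monalisa.example.org", 8884)

def report_job_status(job_id: str, status: str) -> bytes:
    """Build (and, in a real agent, send) one UDP status datagram."""
    msg = f"{int(time.time())} job={job_id} status={status}".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # sock.sendto(msg, MONALISA_ADDR)  # fire-and-forget: no ack is expected
    sock.close()
    return msg

datagram = report_job_status("boss-0042", "RUNNING")
```

UDP fits this use well: a lost status packet is harmless, and the sender never blocks on a slow or unreachable monitoring server.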


