HENP Networks and Grids for Global Science


HENP Networks and Grids for Global Science Harvey B. Newman California Institute of Technology 3rd International Data Grid Workshop Daegu, Korea August 26, 2004

Challenges for Global HENP Experiments
LHC Example (2007): 5000+ Physicists, 250+ Institutes, 60+ Countries
BaBar/D0 Example (2004): 500+ Physicists, 100+ Institutes, 35+ Countries
Major Challenges (Shared with Other Fields):
- Worldwide communication and collaboration
- Managing globally distributed computing & data resources
- Cooperative software development and data analysis

Large Hadron Collider (LHC) CERN, Geneva: 2007 Start
pp collisions at √s = 14 TeV, L = 10^34 cm^-2 s^-1; 27 km tunnel in Switzerland & France
First Beams: Summer 2007; Physics Runs: from Fall 2007
Experiments: CMS and TOTEM (pp, general purpose; HI), ATLAS, ALICE (HI), LHCb (B-physics)
Physics: Higgs, SUSY, QG Plasma, CP Violation, … the Unexpected

Challenges of Next Generation Science in the Information Age
Petabytes of complex data explored and analyzed by 1000s of globally dispersed scientists, in hundreds of teams
Flagship Applications:
- High Energy & Nuclear Physics, AstroPhysics Sky Surveys: TByte to PByte "block" transfers at 1-10+ Gbps
- Fusion Energy: time-critical burst-data distribution; distributed plasma simulations, visualization and analysis; preparations for the Fusion Energy Experiment
- eVLBI: many real-time data streams at 1-10 Gbps
- BioInformatics, Clinical Imaging: GByte images on demand
Provide results with rapid turnaround, coordinating large but limited computing and data handling resources, over networks of varying capability in different world regions
Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, quantifiable high performance

LHC Data Grid Hierarchy: Developed at Caltech
CERN/Outside Resource Ratio ~1:2; Tier0 : (Sum of Tier1s) : (Sum of Tier2s) ~ 1:1:1
Online System → CERN Center (Tier 0 +1): ~PByte/sec from the experiment; ~100-1500 MBytes/sec to PBs of disk and tape robot
Tier 0/1 → Tier 1 centers (IN2P3, RAL, INFN, FNAL): 10 - 40 Gbps
Tier 1 → Tier 2 centers: ~10 Gbps
Tier 2 → Tier 3 (Institutes): 1 to 10 Gbps (physics data cache)
Tier 4: Workstations
Tens of Petabytes by 2007-8; an Exabyte ~5-7 years later
Emerging Vision: A Richly Structured, Global Dynamic System
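A rough consistency check of the quoted volumes (a back-of-the-envelope estimate added here, not from the original slide; it assumes the conventional ~10^7 seconds of running per year): sustained recording at the quoted 100-1500 MBytes/sec gives, per year,

\[ 100\ \mathrm{MB/s} \times 10^{7}\ \mathrm{s} \approx 1\ \mathrm{PB}, \qquad 1500\ \mathrm{MB/s} \times 10^{7}\ \mathrm{s} \approx 15\ \mathrm{PB}, \]

consistent with the tens of Petabytes expected by 2007-8 once several experiments and their derived data sets are accumulated.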

ICFA and Global Networks for Collaborative Science National and International Networks, with sufficient (rapidly increasing) capacity and seamless end-to-end capability, are essential for The daily conduct of collaborative work in both experiment and theory Experiment development & construction on a global scale Grid systems supporting analysis involving physicists in all world regions The conception, design and implementation of next generation facilities as “global networks” “Collaborations on this scale would never have been attempted, if they could not rely on excellent networks”

History of Bandwidth Usage – One Large Network; One Large Research Site
ESnet Accepted Traffic 1/90 – 1/04: exponential growth since '92; annual rate increased from 1.7X to 2.0X per year in the last 5 years
SLAC Traffic ~300 Mbps (ESnet limit); growth in steps of ~10X per 4 years; projected ~2 Terabits/s by ~2014

Int'l Networks BW on Major Links for HENP: US-CERN Example
Rate of Progress >> Moore's Law:
9.6 kbps Analog (1985)
64-256 kbps Digital (1989-1994) [X 7 – 27]
1.5 Mbps Shared (1990-3; IBM) [X 160]
2-4 Mbps (1996-1998) [X 200-400]
12-20 Mbps (1999-2000) [X 1.2k-2k]
155-310 Mbps (2001-2) [X 16k – 32k]
622 Mbps (2002-3) [X 65k]
2.5 Gbps (2003-4) [X 250k]
10 Gbps (2005) [X 1M]
4x10 Gbps or 40 Gbps (2007-8) [X 4M]
A factor of ~1M bandwidth improvement over 1985-2005 (a factor of ~5k during 1995-2005); a prime enabler of major HENP programs. HENP has become a leading applications driver, and also a co-developer of global networks.
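A quick check of the quoted overall factor (a back-of-the-envelope calculation, not part of the original slide):

\[ \frac{10\ \mathrm{Gbps}}{9.6\ \mathrm{kbps}} = \frac{10^{10}}{9.6\times 10^{3}} \approx 1.04\times 10^{6} \approx 2^{20}, \]

i.e. the US-CERN link capacity roughly doubled every year over the 20 years from 1985 to 2005.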

Internet Growth in the World At Large: Amsterdam Internet Exchange Point example (11.08.04): 5-minute maximum ~30 Gbps, average ~20 Gbps. Some annual growth spurts, typically in summer-fall. The rate of HENP network usage growth (~100% per year) is similar to the world at large.

Internet2 Land Speed Record (LSR)
Judged on the product of transfer speed and distance end-to-end, using standard Internet (TCP/IP) protocols.
IPv6 record: 4.0 Gbps between Geneva and Phoenix (SC2003)
IPv4 multi-stream record with Windows & Linux: 6.6 Gbps between Caltech and CERN (16 kkm; "Grand Tour d'Abilene"), June 2004; exceeded 100 Petabit-m/sec
Single stream 7.5 Gbps X 16 kkm with Linux achieved in July
Concentrate now on reliable Terabyte-scale file transfers
Note system issues: CPU, PCI-X bus, NIC, I/O controllers, drivers
[Charts: LSR history, IPv4 single stream, in Petabit-meters (10^15 bit*meter); monitoring of the Abilene traffic in LA during the June 2004 record]
http://www.guinnessworldrecords.com/
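For reference, a worked example of the speed-times-distance metric (arithmetic added here, not from the original slide): the July single-stream mark corresponds to

\[ 7.5\ \mathrm{Gbps} \times 16{,}000\ \mathrm{km} = 7.5\times 10^{9}\ \mathrm{bit/s} \times 1.6\times 10^{7}\ \mathrm{m} = 1.2\times 10^{17}\ \mathrm{bit\cdot m/s}, \]

i.e. 120 Petabit-m/s, comfortably above the 100 Petabit-m/s level mentioned above.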

HENP Bandwidth Roadmap for Major Links (in Gbps) Continuing Trend: ~1000 Times Bandwidth Growth Per Decade; Keeping Pace with Network BW Usage (ESNet, SURFNet etc.)

Evolving Quantitative Science Requirements for Networks (DOE High Performance Network Workshop)
Science Area | Today End2End Throughput | 5 Years End2End | 5-10 Years End2End | Remarks
High Energy Physics | 0.5 Gb/s | 100 Gb/s | 1000 Gb/s | High bulk throughput
Climate (Data & Computation) | | 160-200 Gb/s | N x 1000 Gb/s |
SNS NanoScience | Not yet started | 1 Gb/s | 1000 Gb/s + QoS for Control Channel | Remote control and time-critical throughput
Fusion Energy | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB / 20 sec burst) | | Time-critical throughput
Astrophysics | 0.013 Gb/s (1 TByte/week) | N*N multicast | | Computational steering and collaborations
Genomics (Data & Computation) | 0.091 Gb/s (1 TBy/day) | 100s of users | | High throughput and steering

HENP Lambda Grids: Fibers for Physics
Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time.
Example: take 800 seconds to complete the transaction. Then:
Transaction Size (TB) | Net Throughput (Gbps)
1 | 10
10 | 100
100 | 1000 (capacity of fiber today)
Summary: Providing switching of 10 Gbps wavelengths within ~2-4 years, and Terabit switching within 5-8 years, would enable "Petascale Grids with Terabyte transactions", to fully realize the discovery potential of major HENP programs, as well as other data-intensive research.
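A minimal sketch of the sizing arithmetic behind the table above (illustrative only; the 800-second target and the decimal TB and Gbps units follow the slide):

def required_throughput_gbps(transaction_tb, seconds=800):
    """Average network throughput in Gbps needed to move a data subset
    of `transaction_tb` terabytes within `seconds` seconds."""
    bits = transaction_tb * 1e12 * 8   # 1 TB = 10^12 bytes
    return bits / seconds / 1e9        # bit/s -> Gbps

for size_tb in (1, 10, 100):
    print(f"{size_tb:>4} TB -> {required_throughput_gbps(size_tb):.0f} Gbps")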

ICFA Standing Committee on Interregional Connectivity (SCIC) Created by ICFA in July 1998 in Vancouver CHARGE: Make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe As part of the process of developing these recommendations, the committee should Monitor traffic Keep track of technology developments Periodically review forecasts of future bandwidth needs, and Provide early warning of potential problems Representatives: Major labs, ECFA, ACFA, North and South American Physics Community

SCIC in 2003-2004 http://cern.ch/icfa-scic Three 2004 Reports; Presented to ICFA in February Main Report: “Networking for HENP” [H. Newman et al.] Includes Brief Updates on Monitoring, the Digital Divide and Advanced Technologies [*] A World Network Overview (with 27 Appendices): Status and Plans for the Next Few Years of National & Regional Networks, and Optical Network Initiatives Monitoring Working Group Report [L. Cottrell] Digital Divide in Russia [V. Ilyin] August 2004 Update Reports at the SCIC Web Site: See http://icfa-scic.web.cern.ch/ICFA-SCIC/documents.htm Asia Pacific, Latin America, GLORIAD (US-Ru-Ko-China); Brazil, Korea, etc.

SCIC Main Conclusion for 2003, Setting the Tone for 2004
The disparity among regions in HENP could increase even more sharply, as we learn to use advanced networks effectively, and as we develop dynamic Grid systems in the "most favored" regions.
We must take action, and work to close the Digital Divide, to make physicists from all world regions full partners in their experiments and in the process of discovery. This is essential for the health of our global experimental collaborations, our plans for future projects, and our field.

ICFA Report: Networks for HENP – General Conclusions (2)
Reliable high end-to-end performance of networked applications such as large file transfers and Data Grids is required. Achieving this requires:
- End-to-end monitoring extending to all regions serving our community. A coherent approach to monitoring that allows physicists throughout our community to extract clear information is required.
- Upgrading campus infrastructures. These are still not designed to support Gbps data transfers in most HEP centers. One reason for under-utilization of national and int'l backbones is the lack of bandwidth to end-user groups in the campus.
- Removing local, last mile, and nat'l and int'l bottlenecks end-to-end, whether technical or political in origin. While national and international backbones have reached 2.5 to 10 Gbps speeds in many countries, the bandwidths across borders, the countryside or the city may be much less. This problem is very widespread in our community, with examples stretching from the Asia Pacific to Latin America to the Northeastern U.S. Root causes vary, from lack of local infrastructure to unfavorable pricing policies.

ICFA Report (2/2004) Update: Main Trends Continue, Some Accelerate Current generation of 2.5-10 Gbps network backbones and major Int’l links arrived in the last 2-3 Years [US+Europe+Japan; Now Korea and China] Capability: 4 to Hundreds of Times; Much Faster than Moore’s Law Proliferation of 10G links across the Atlantic Now; Will Begin use of Multiple 10G Links (e.g. US-CERN) Along Major Paths by Fall 2005 Direct result of Falling Network Prices: $ 0.5 – 1M Per Year for 10G Ability to fully use long 10G paths with TCP continues to advance: 7.5 Gbps X 16kkm (August 2004) Technological progress driving equipment costs in end-systems lower “Commoditization” of Gbit Ethernet (GbE) ~complete: ($20-50 per port) 10 GbE commoditization (e.g. < $ 2K per NIC with TOE) underway Some regions (US, Europe) moving to owned or leased dark fiber Emergence of the “Hybrid” Network Model: GNEW2004; UltraLight, GLIF Grid-based Analysis demands end-to-end high performance & management The rapid rate of progress is confined mostly to the US, Europe, Japan and Korea, as well as the major Transatlantic routes; this threatens to cause the Digital Divide to become a Chasm

Work on the Digital Divide: Several Perspectives Work on Policies and/or Pricing: pk, in, br, cn, SE Europe, … Find Ways to work with vendors, NRENs, and/or Gov’ts Exploit Model Cases: e.g. Poland, Slovakia, Czech Republic Inter-Regional Projects GLORIAD, Russia-China-US Optical Ring South America: CHEPREO (US-Brazil); EU CLARA Project Virtual SILK Highway Project (DESY): FSU satellite links Workshops and Tutorials/Training Sessions For Example: Digital Divide and HEPGrid Workshop, UERJ Rio, February 2004; Next Daegu May 2005 Help with Modernizing the Infrastructure Design, Commissioning, Development Tools for Effective Use: Monitoring, Collaboration Participate in Standards Development; Open Tools Advanced TCP stacks; Grid systems

Grid and Network Workshop at CERN, March 15-16, 2004
CONCLUDING STATEMENT: "Following the 1st International Grid Networking Workshop (GNEW2004) that was held at CERN and co-organized by CERN/DataTAG, DANTE, ESnet, Internet2 & TERENA, there is a wide consensus that hybrid network services capable of offering both packet- and circuit/lambda-switching, as well as highly advanced performance measurements and a new generation of distributed system software, will be required in order to support emerging data intensive Grid applications, such as High Energy Physics, Astrophysics, Climate and Supernova modeling, Genomics and Proteomics, requiring 10-100 Gbps and up over wide areas."
WORKSHOP GOALS:
- Share and challenge the lessons learned by nat'l and international projects in the past three years
- Share the current state of network engineering and infrastructure and its likely evolution in the near future
- Examine our understanding of the networking needs of Grid applications (e.g., see the ICFA-SCIC reports)
- Develop a vision of how network engineering and infrastructure will (or should) support Grid computing needs in the next three years

National Lambda Rail (NLR)
Transition beginning now to optical, multi-wavelength, community owned or leased "dark fiber" networks for R&E; similar moves in nl, de, pl, cz, jp
NLR coming up now: initially 4 10G wavelengths; Northern route operation by 4Q04; Internet2 HOPI Initiative (w/HEP); to 40 10G waves in future
[Map: NLR footprint across 18 US states, showing city PoPs, 15808 terminal/regen/OADM sites and fiber routes]

JGN2: Japan Gigabit Network (4/04 – 3/08) 20 Gbps Backbone, 6 Optical Cross-Connects Kanazawa Sendai Sapporo Nagano Kochi Nagoya Fukuoka Okinawa Okayama <1G>  ・Teleport Okayama (Okayama)  ・Hiroshima University (Higashi Hiroshima) <100M>  ・Tottori University of Environmental Studies (Tottori)  ・Techno Ark Shimane (Matsue)  ・New Media Plaza Yamaguchi (Yamaguchi) <10G>  ・Kyoto University (Kyoto)  ・Osaka University (Ibaraki)  ・NICT Kansai Advanced Research Center (Kobe)  ・Lake Biwa Data Highway AP * (Ohtsu)  ・Nara Prefectural Institute of Industrial Technology (Nara)  ・Wakayama University (Wakayama)  ・Hyogo Prefecture Nishiharima Technopolis (Kamigori-cho, Hyogo Prefecture)  ・Kyushu University (Fukuoka)  ・NetCom Saga (Saga)  ・Nagasaki University (Nagasaki)  ・Kumamoto Prefectural Office (Kumamoto)  ・Toyonokuni Hyper Network AP *(Oita)  ・Miyazaki University (Miyazaki)  ・Kagoshima University (Kagoshima)  ・Kagawa Prefecture Industry Promotion Center (Takamatsu)  ・Tokushima University (Tokushima)  ・Ehime University (Matsuyama)  ・Kochi University of Technology (Tosayamada-cho, Kochi Prefecture)  ・Nagoya University (Nagoya)  ・University of Shizuoka (Shizuoka)  ・Softopia Japan (Ogaki, Gifu Prefecture)  ・Mie Prefectural College of Nursing (Tsu)  ・Ishikawa Hi-tech Exchange Center (Tatsunokuchi-machi, Ishikawa Prefecture)  ・Toyama Institute of Information Systems (Toyama)  ・Fukui Prefecture Data Super Highway AP * (Fukui)    ・Niigata University (Niigata)  ・Matsumoto Information Creation Center (Matsumoto, Nagano Prefecture)  ・Tokyo University (Bunkyo Ward, Tokyo)  ・NICT Kashima Space Research Center (Kashima, Ibaraki Prefecture) <1G>  ・Yokosuka Telecom Research Park (Yokosuka, Kanagawa Prefecture) <100M>  ・Utsunomiya University (Utsunomiya)  ・Gunma Industrial Technology Center (Maebashi)  ・Reitaku University (Kashiwa, Chiba Prefecture)  ・NICT Honjo Information and Communications Open Laboratory (Honjo, Saitama Prefecture)  ・Yamanashi Prefecture Open R&D Center (Nakakoma-gun, Yamanashi Prefecture)  ・Tohoku University (Sendai)  ・NICT Iwate IT Open Laboratory (Takizawa-mura, Iwate Prefecture)  ・Hachinohe Institute of Technology (Hachinohe, Aomori Prefecture)  ・Akita Regional IX * (Akita)  ・Keio University Tsuruoka Campus (Tsuruoka, Yamagata Prefecture)  ・Aizu University (Aizu Wakamatsu)  ・Hokkaido Regional Network Association AP * (Sapporo) NICT Tsukuba Research Center 20Gbps 10Gbps 1Gbps Optical testbeds Core network nodes Access points Osaka Otemachi USA NICT Keihannna Human Info-Communications Research Center NICT Kita Kyushu IT Open Laboratory NICT Koganei Headquarters [Legends ] *IX:Internet eXchange AP:Access Point

APAN-KR : KREONET/KREONet2 II

UltraLight Collaboration: http://ultralight.caltech.edu Caltech, UF, FIU, UMich, SLAC,FNAL, MIT/Haystack, CERN, UERJ(Rio), NLR, CENIC, UCAID, Translight, UKLight, Netherlight, UvA, UCLondon, KEK, Taiwan Cisco, Level(3) Integrated hybrid experimental network, leveraging Transatlantic R&D network partnerships; packet-switched + dynamic optical paths 10 GbE across US and the Atlantic: NLR, DataTAG, TransLight, NetherLight, UKLight, etc.; Extensions to Japan, Taiwan, Brazil End-to-end monitoring; Realtime tracking and optimization; Dynamic bandwidth provisioning Agent-based services spanning all layers of the system, from the optical cross-connects to the applications.

GLIF: Global Lambda Integrated Facility
"GLIF is a World Scale Lambda based Lab for Application and Middleware development, where Grid applications ride on dynamically configured networks based on optical wavelengths ... coexisting with more traditional packet-switched network traffic."
4th GLIF Workshop: Nottingham UK, Sept. 2004
10 Gbps wavelengths for R&E network development are proliferating, across continents and oceans

PROGRESS in SE Europe (Sk, Pl, Cz, Hu, …)
1660 km of dark fiber; CWDM links up to 112 km; 1 to 4 Gbps (GbE)
August 2002: first NREN in Europe to establish an int'l GbE dark fiber link, to Austria; April 2003: dark fiber link to the Czech Republic
Planning a 10 Gbps backbone; dark fiber link to Poland this year

Dark Fiber in Eastern Europe – Poland: PIONIER Network
2650 km of fiber connecting 16 MANs; 5200 km and 21 MANs by 2005
Supports computational Grids, domain-specific Grids, digital libraries, interactive TV; add'l fibers for e-Regional initiatives

The Advantage of Dark Fiber: CESNET Case Study (Czech Republic)
2513 km of leased fibers (since 1999)
Case study result, wavelength service vs. fiber lease: cost savings of 50-70% over 4 years for long 2.5G or 10G links

ICFA/SCIC Network Monitoring Prepared by Les Cottrell, SLAC, for ICFA www.slac.stanford.edu/grp/scs/net/talk03/icfa-aug04.ppt

PingER: World View from SLAC
Now monitoring 650 sites in 115 countries. In the last 9 months: several sites in Russia (GLORIAD), many hosts in Africa (27 of 54 countries), monitoring sites in Pakistan and Brazil.
C. Asia, Russia, SE Europe, L. America, M. East, China: 4-5 yrs behind; India, Africa: 7 yrs behind. The view from CERN confirms this: CERN data only goes back to Aug-01, and it confirms that S.E. Europe & Russia are catching up while India & Africa are falling behind.
PingER is arguably the most extensive set of measurements of the end-to-end performance of the Internet, going back almost ten years. Measurements are available from over 30 monitoring sites in 13 countries to sites in over 100 countries. We will use the PingER results to: demonstrate how Internet performance to the regions of the world has evolved over the last 9 years; identify regions that have poor connectivity, how far they are behind the developed world and whether they are catching up or falling further behind; and illustrate the correlation between the UN Technology Achievement Index and Internet performance.
Ghana, Nigeria and Uganda are all satellite links with 800-1100 ms RTTs. The losses to Ghana & Nigeria are 8-12%, while to Uganda they are 1-3%. The routes are different: from SLAC, Ghana is reached via ESnet-Worldcom-UUNET, Nigeria via CalREN-Qwest-Telianet-New Skies satellite, and Uganda via ESnet-Level3-Intelsat. For both Ghana and Nigeria there are no losses (for 100 pings) until the last hop, where over 40 of 100 packets were lost. For Uganda the losses (3 in 100 packets) also occur at the last hop. Important for policy makers.
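As a rough illustration of the kind of measurement PingER performs (a simplified sketch, not PingER's actual code; the host name is a placeholder, and Linux iputils ping output is assumed):

import re
import subprocess

def ping_stats(host, count=10):
    """Return (loss_percent, avg_rtt_ms) for `host`, parsed from the
    summary lines of the system ping (Linux iputils format assumed)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    loss = float(re.search(r"([\d.]+)% packet loss", out).group(1))
    rtt = re.search(r"= [\d.]+/([\d.]+)/", out)   # min/avg/max/mdev line
    return loss, (float(rtt.group(1)) if rtt else None)

# Example with a placeholder host:
# print(ping_stats("remote-site.example.org"))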

Research Networking in Latin America: August 2004
The only countries with research network connectivity now in Latin America: Argentina, Brazil, Chile, Mexico, Venezuela
AmPath provided connectivity for some South American countries; new Sao Paulo-Miami link at 622 Mbps starting this month
New: CLARA (funded by the EU), a regional network connecting 19 countries: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Uruguay, Venezuela
155 Mbps backbone with 10-45 Mbps spurs; 4 Mbps satellite to Cuba; 622 Mbps to Europe
Also NSF proposals to connect at 2.5G to the US

HEPGRID (CMS) in Brazil
HEPGRID-CMS/BRAZIL is a project to build a Grid that:
- At the regional level will include CBPF, UFRJ, UFRGS, UFBA, UERJ & UNESP
- At the international level will be integrated with the CMS Grid based at CERN; focal points include iVDGL/Grid3 and bilateral projects with the Caltech group
[Diagram: Brazilian HEPGRID tier structure. CERN (T0+T1, online systems) linked at 2.5-10 Gbps to T1s in France, Germany, Italy and the USA; a 622 Mbps link toward Brazil. UERJ Regional Tier2 Center: T2 → T1, 100 → 500 nodes; plus T2s (UNESP/USP SPRACE, working; UFRGS) to 100 nodes; Gigabit links to T3s at CBPF, UERJ, UFBA, UFRJ; individual machines as T4.]

Latin America: Science Areas Interested in Improving Connectivity (by Country)
Countries surveyed: Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador, Mexico
Subjects: Astrophysics, e-VLBI, High Energy Physics, Geosciences, Marine sciences, Health and Biomedical applications, Environmental studies
Networks and Grids: the potential to spark a new era of science in the region

Asia Pacific Academic Network Connectivity: APAN Status July 2004
[Map: international link capacities among Asia-Pacific countries and to the US, Europe and Russia, ranging from a few Mbps to multi-Gbps, with access points and exchange points, current status and 2004 plans]
Connectivity to the US from JP, KR and AU is advancing rapidly; progress within the region, and to Europe, is much slower
Better North/South linkages within Asia are needed: a JP-SG link of 155 Mbps in 2005 is proposed to NSF by CIREN; a JP-TH upgrade from 2 Mbps to 45 Mbps in 2004 is being studied; CIREN is studying an extension to India

APAN Link Information (2004.7.7, sec@apan.net): Countries / Network / Bandwidth (Mbps) / AUP or Remark
AU-US: AARNet, 310, to 2 x 10 Gbps soon, R&E + Commodity
AU-US (PAIX): 622
CN-HK: CERNET, CSTNET, 155, R&E
CN-JP: 45, Native IPv6
CN-US
HK-US: HARNET
HK-TW: HARNET/TANET/ASNET, 100
IN-US/UK: ERNET, 16
JP-ASIA: UDL, 9
JP-ID: AI3 (ITB), 0.5/1.5
JP-KR: APII, 2 Gbps
JP-LA: AI3 (NUOL), 0.128/0.128
JP-MY: AI3 (USM), 1.5/0.5
JP-PH: AI3 (ASTI), MAFFIN, 6, Research
JP-SG: AI3 (TP)
JP-TH: AI3 (AIT) (service interrupted); SINET (ThaiSarn), 2
JP-US: TransPac, 5 Gbps, to 2 x 10 Gbps soon

APAN Link Information, continued (2004.7.7, sec@apan.net): Countries / Network / Bandwidth (Mbps) / AUP or Remark
(JP)-US-EU: SINET, 155, R&E / No Transit
JP-US: 5 Gbps; IEEAF, 10 Gbps, R&E wave service; 622, R&E, Japan-Hawaii
JP-VN: AI3 (IOIT), 1.5/0.5
KR-FR: KOREN/RENATER, 34, Research (TEIN)
KR-SG: APII, 8
KR-US: KOREN/KREONet2, 1.2 Gbps
LK-JP: LEARN, 2.5
MY-SG: NRG/SICU, 2, Experiment (Down)
SG-US: SingaREN, 90
TH-US: Uninet
TW-HK: ASNET/TANET/TWAREN
TW-JP: ASNET/TANET
TW-SG: ASNET/SingAREN
TW-US: 6.6 Gbps
(TW)-US-NL: 2.5 Gbps

APAN Recommendations (at July 2004 Meeting in CAIRNS, Au) Central Issues for APAN this decade Stronger linkages between applications and infrastructure - neither can exist independently Stronger application and infrastructure linkages among APAN members. Continuing focus on APAN as an organization that represents infrastructure interests in Asia Closer connection between APAN the infrastructure & applications organization and regional political organizations (e.g. APEC, ASEAN) New issues demand attention Application measurement, particularly end-to-end network performance measurement is increasingly critical (deterministic networking) Security must now be a consideration for every application and every network.

KR-US/CA Transpacific Connection: Participation in Global-scale Lambda Networking
Two STM-4 circuits (1.2G): KR-CA-US
Global lambda networking: North America, Europe, Asia Pacific, etc.
[Diagram: KREONET/SuperSIReN connected via 2 x STM-4 to CA*net4 and StarLight (Chicago), and APII-testbed/KREONet2 to PacWave (Seattle)]

New Record!!! 916 Mbps from CHEP to Caltech (22/06/'04)
Subject: UDP test on KOREN-TransPAC-Caltech
Date: Tue, 22 Jun 2004 13:47:25 +0900
From: "Kihwan Kwon" <kihwan@bh.knu.ac.kr>
To: <son@knu.ac.kr>
[root@sul Iperf]# ./iperf -c socrates.cacr.caltech.edu -u -b 1000m
------------------------------------------------------------
Client connecting to socrates.cacr.caltech.edu, UDP port 5001
Sending 1470 byte datagrams; UDP buffer size: 64.0 KByte (default)
[ 5] local 155.230.20.20 port 33036 connected with 131.215.144.227
[ ID] Interval          Transfer     Bandwidth
[ 5]  0.0-2595.2 sec    277 GBytes   916 Mbits/sec
Path: KNU (Korea) → KOREN → G/H Japan → TransPAC → USA; Max. 947.3 Mbps
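A quick sanity check of the reported figure (arithmetic added here, not part of the original message; it assumes iperf's binary GByte, i.e. 2^30 bytes):

transferred_bytes = 277 * 2**30   # 277 GBytes, as reported by iperf
duration_s = 2595.2
rate_mbps = transferred_bytes * 8 / duration_s / 1e6
print(f"{rate_mbps:.0f} Mbit/s")  # ~917 Mbit/s, matching the reported 916 Mbits/sec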

Aug. 8, 2004: P.K. Young, Korean IST Advisor to the President, announces Korea joining GLORIAD, the Global Ring Network for Advanced Applications Development
OC3 circuits Moscow-Chicago-Beijing since January 2004; OC3 circuit Moscow-Beijing July 2004 (completes the ring)
Korea (KISTI) joining the US, Russia and China as a full partner in GLORIAD; plans developing for a Central Asian extension (w/Kyrgyz Government)
Rapid traffic growth, with heaviest US use from DOE (FermiLab), NASA, NOAA, NIH and universities (UMD, IU, UCB, UNC, UMN, PSU, Harvard, Stanford, Washington, Oregon, 250+ others); > 5 TBytes now transferred monthly via GLORIAD to the US, Russia, China
TEIN gradually to 10G, connected to GLORIAD; the Asia Pacific Information Infrastructure (1G) will be a backup net to GLORIAD
GLORIAD 5-year proposal pending (with US NSF) for expansion: 2.5G Moscow-Amsterdam-Chicago-Seattle-Hong Kong-Pusan-Beijing circuits early 2005; 10G ring around the northern hemisphere in 2007; multiple wavelength service in 2009, providing hybrid circuit-switched (primarily Ethernet) and routed services

Internet in China (J.P. Wu, APAN July 2004)
Internet users in China: from 68 Million to 78 Million within 6 months
IP addresses: 32M (1 A + 233 B + 146 C)
Backbone: 2.5-10G DWDM + Router; international links: 20G; exchange points: > 30G (BJ, SH, GZ)
Last miles: Ethernet, WLAN, ADSL, CTV, CDMA, ISDN, GPRS, Dial-up; need IPv6
Users by access type: Total 68M; Wireline 23.4M; Dial-up 45.0M; ISDN 4.9M; Broadband 9.8M

China: CERNET Update
1995: 64K nation-wide backbone connecting 8 cities, 100 universities
1998: 2M nation-wide backbone connecting 20 cities, 300 universities
2000: own dark fiber crossing 30+ major cities and 30,000 kilometers
2001: CERNET DWDM/SDH network finished
2001: 2.5G/155M backbone connecting 36 cities, 800 universities
2003: 1300+ universities and institutes, over 15 million users

CERNET2 and Key Technologies
CERNET2: Next Generation Education and Research Network in China
CERNET2 backbone connecting 15-20 GigaPOPs at 2.5-10 Gbps (Internet2-like model); connecting 200 universities and 100+ research institutes at 1-10 Gbps
Native IPv6 and Lambda Networking
Support/deployment of the following technologies: E2E performance monitoring, middleware and advanced applications, multicast

AFRICA: Key Trends (M. Jensen and P. Hamilton Infrastructure Report, March 2004)
Growth in traffic and lack of infrastructure; predominance of satellite, but these satellites are heavily subscribed
Int'l links: only ~1% of traffic on links is for Internet connections; most Internet traffic (for ~80% of countries) goes via satellite
Flourishing grey market for Internet & VOIP traffic using VSAT dishes
Many regional fiber projects are in the "planning phase" (some languished in the past); only links from South Africa to Namibia and Botswana done so far
Int'l fiber project: SAT-3/WASC/SAFE cable from South Africa to Portugal along the west coast of Africa; supplied by Alcatel to a worldwide consortium of 35 carriers; 40 Gbps by mid-2003, heavily subscribed; ultimate capacity 120 Gbps; extension to the interior mostly by satellite: < 1 Mbps to ~100 Mbps typical
Note: World Conference on Physics and Sustainable Development, 10/31 – 11/2/05 in Durban, South Africa; part of the World Year of Physics 2005. Sponsors: UNESCO, ICTP, IUPAP, APS, SAIP

AFRICA: Nectar Net Initiative Growing Need to connect academic researchers, medical researchers & practitioners to many sites in Africa Examples: CDC & NIH: Global AIDS Project, Dept. of Parasitic Diseases, Nat’l Library of Medicine (Ghana, Nigeria) Gates $ 50M HIV/AIDS Center in Botswana; Project Coord at Harvard Africa Monsoon AMMA Project, Dakar Site [cf. East US Hurricanes] US Geological Survey: Global Spatial Data Infrastructure Distance Learning: Emory-Ibadan (Nigeria); Research Channel Content But Africa is Hard: 11M Sq. Miles, 600 M People, 54 Countries Little Telecommunications Infrastructure Approach: Use SAT-3/WASC Cable (to Portugal), GEANT Across Europe, Amsterdam-NY Link Across the Atlantic, then Peer with R&E Networks such as Abilene in NYC Cable Landings in 8 West African Countries and South Africa Pragmatic approach to reach end points: VSAT satellite, ADSL, microwave, etc. W. Matthews Georgia Tech

Sample Bandwidth Costs for African Universities (Roy Steiner, Internet2 Workshop)
Bandwidth prices in Africa vary dramatically, and are in general many times what they could be if universities purchased in volume. Anyone working in ICT in Africa will say cost is the main factor.
Sample size of 26 universities; average cost for VSAT service (quality, CIR, Rx, Tx not distinguished).

Grid2003: An Operational Production Grid, Since October 2003
27 sites (U.S., Korea); 2300-2800 CPUs; 700-1100 concurrent jobs
Trillium: PPDG, GriPhyN, iVDGL – www.ivdgl.org/grid2003
Prelude to Open Science Grid: www.opensciencegrid.org

HENP Data Grids, and Now Services-Oriented Grids The original Computational and Data Grid concepts are largely stateless, open systems Analogous to the Web The classical Grid architecture had a number of implicit assumptions The ability to locate and schedule suitable resources, within a tolerably short time (i.e. resource richness) Short transactions with relatively simple failure modes HENP Grids are Data Intensive & Resource-Constrained Resource usage governed by local and global policies Long transactions; some long queues Analysis: 1000s of users competing for resources at dozens of sites: complex scheduling; management HENP Stateful, End-to-end Monitored and Tracked Paradigm Adopted in OGSA [Now WS Resource Framework]

The Move to OGSA and then Managed Integrated Systems
[Timeline figure: increased functionality and standardization over time, from custom solutions, to de facto standards (X.509, LDAP, FTP; GGF: GridFTP, GSI) with the Globus Toolkit, to the Open Grid Services Architecture (GGF: OGSI, plus OASIS, W3C; multiple implementations including the Globus Toolkit), to the stateful, managed Web Services Resource Framework (Web services + …), and on to app-specific services and ~integrated systems]

Managing Global Systems: Dynamic Scalable Services Architecture
MonALISA: http://monalisa.cacr.caltech.edu
24 X 7 operations for multiple organizations: Grid2003, US CMS, CMS-DC04, ALICE, STAR, VRVS, ABILENE; soon GEANT + GLORIAD
"Station Server" service-engines at sites host many "Dynamic Services"; scales to thousands of service instances
Servers autodiscover and interconnect dynamically to form a robust fabric; autonomous agents
+ CLARENS: Web Services Fabric and Portal Architecture

Grid Analysis Environment – CLARENS: Web Services Architecture
[Diagram: Analysis Clients connect over HTTP, SOAP or XML-RPC to the CLARENS Grid Services Web Server, behind which sit Grid-wide services: scheduler, execution priority manager, fully- and partially-abstract/concrete planners, metadata, virtual data and replica catalogs, data management, monitoring and applications]
Analysis clients talk standard protocols to the CLARENS "Grid Services Web Server", with a simple Web service API. The secure Clarens portal hides the complexity of the Grid services from the client.
Key features: global scheduler, catalogs, monitoring, and Grid-wide execution service; Clarens servers form a global peer-to-peer network.
Clients can be as smart or as simple as the application desires: they can let the Grid service host do all of the work and ignore the details of the various services, or they can be more involved in the service flow. The motivation for hiding the Grid services behind a Grid service host is that it allows simpler, easier-to-develop client applications; the services remain available to smarter clients, so application developers can have more control over the service flow if necessary.
Clarens is used as the Grid service host, providing XML-RPC, SOAP, and HTTP access to the Grid services, and also provides authentication and authorization for secure access. HTTP, SOAP and XML-RPC were chosen as transport/communication protocols due to their wide acceptance as standards: the heterogeneous, decentrally administered GAE must use open protocols in order to be widely accepted, which lowers the barriers to entry when introducing new services and GAE Grid sites.
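As an illustration of the kind of client-side call this architecture implies (a sketch only: the server URL and method names below are hypothetical placeholders, not the actual Clarens API), an analysis client could talk to such a service over XML-RPC:

import xmlrpc.client

# Hypothetical Clarens-style Grid services server (URL is a placeholder).
server = xmlrpc.client.ServerProxy("https://clarens.example.org:8443/clarens/")

# Hypothetical methods: query a dataset catalog, then submit an analysis job.
# Real method names and signatures would come from the service's own API.
files = server.catalog.query("dataset = 'example_dataset'")
job_id = server.scheduler.submit({"executable": "analyze.py", "inputs": files})
print("submitted job", job_id)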

World Summit on the Information Society (WSIS): Geneva 12/2003 and Tunis in 2005 The UN General Assembly adopted in 2001 a resolution endorsing the organization of the World Summit on the Information Society (WSIS), under UN Secretary-General, Kofi Annan, with the ITU and host governments taking the lead role in its preparation. GOAL: To Create an Information Society: A Common Definition was adopted in the “Tokyo Declaration” of January 2003: “… One in which highly developed ICT networks, equitable and ubiquitous access to information, appropriate content in accessible formats and effective communication can help people achieve their potential” Kofi Annan Challenged the Scientific Community to Help (3/03) CERN and ICFA SCIC have been quite active in the WSIS in Geneva (12/2003)

Role of Science in the Information Society; WSIS 2003-2005: HENP Active in WSIS
CERN RSIS Event; SIS Forum & CERN/Caltech Online Stand at WSIS I (Geneva 12/03)
Visitors at WSIS I: Kofi Annan, UN Sec'y General; John H. Marburger, Science Adviser to the US President; Ion Iliescu, President of Romania, and Dan Nica, Minister of ICT; Jean-Paul Hubert, Ambassador of Canada in Switzerland; …
Planning underway for WSIS II: Tunis 2005

HEPGRID and Digital Divide Workshop, UERJ, Rio de Janeiro, Feb. 16-20, 2004
Theme: Global Collaborations, Grids and Their Relationship to the Digital Divide
ICFA, understanding the vital role of these issues for our field's future, commissioned the Standing Committee on Inter-regional Connectivity (SCIC) in 1998 to survey and monitor the state of the networks used by our field, and identify problems. For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide, and proposed to ICFA that these issues, as they affect our field of High Energy Physics, be brought to our community for discussion. This led to ICFA's approval, in July 2003, of the Digital Divide and HEP Grid Workshop.
More information: http://www.lishep.uerj.br
Tutorials: C++, Grid Technologies, Grid-Enabled Analysis, Networks, Collaborative Systems. Sessions & tutorials available (with video) on the Web.
Sponsors: CLAF, CNPQ, FAPERJ, UERJ

International ICFA Workshop on HEP Networking, Grids and Digital Divide Issues for Global e-Science Proposed Workshop Dates: May 23-27, 2005 Venue: Daegu, Korea Dongchul Son Center for High Energy Physics Kyungpook National University ICFA, Beijing, China Aug. 2004 ICFA Approval Requested Today

International ICFA Workshop on HEP Networking, Grids and Digital Divide Issues for Global e-Science
Themes: Networking, Grids, and their relationship to the Digital Divide for HEP as global e-Science; focus on key issues of inter-regional connectivity
Mission Statement: ICFA, understanding the vital role of these issues for our field's future, commissioned the Standing Committee on Inter-regional Connectivity (SCIC) in 1998 to survey and monitor the state of the networks used by our field, and identify problems. For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide, and proposed to ICFA that these issues, as they affect our field of High Energy Physics, be brought to our community for discussion. This workshop, the second in the series begun with the 2004 Digital Divide and HEP Grid Workshop in Rio de Janeiro (approved by ICFA in July 2003), will carry forward this work while strengthening the involvement of scientists, technologists and governments in the Asia Pacific region.

International ICFA Workshop on HEP Networking, Grids and Digital Divide Issues for Global e-Science Workshop Goals Review the current status, progress and barriers to the effective use of the major national, continental and transoceanic networks used by HEP Review progress, strengthen opportunities for collaboration, and explore the means to deal with key issues in Grid computing and Grid-enabled data analysis, for high energy physics and other fields of data intensive science, now and in the future Exchange information and ideas, and formulate plans to develop solutions to specific problems related to the Digital Divide in various regions, with a focus on Asia Pacific, Latin America, Russia and Africa Continue to advance a broad program of work on reducing or eliminating the Digital Divide, and ensuring global collaboration, as related to all of the above aspects.

Networks and Grids, GLORIAD, ITER and HENP
Network backbones and major links used by major experiments in HENP and other fields are advancing rapidly: to the 10G range in < 2 years, much faster than Moore's Law. New HENP and DOE roadmaps foresee a factor of ~1000 improvement per decade.
We are learning to use long-distance 10 Gbps networks effectively; 2003-2004 developments: up to 7.5 Gbps flows over 16 kkm. Important advances in Asia-Pacific, notably Korea.
A transition to community-owned and operated R&E networks is beginning (us, ca, nl, pl, cz, sk …) or being considered (de, ro, …).
We must work to close the Digital Divide, allowing scientists and students from all world regions to take part in discoveries at the frontiers of science. Removing regional, last mile, and local bottlenecks and compromises in network quality is now on the critical path.
GLORIAD is a key project to achieve these goals: synergies between the data-intensive missions of HENP & ITER; enhancing partnership and community among the US, Russia and China, both in science and education.

LHC Global Collaborations: ATLAS and CMS
CMS: 1980 physicists and engineers, 36 countries, 161 institutions (ATLAS is of comparable scale)

SC2004: HEP Network Layout – Preview of Future Grid Systems
Joint Caltech, CERN, SLAC, FNAL, UKlight, HP, Cisco… demo: 6 to 8 10 Gbps waves to the HEP setup on the show floor
Bandwidth challenge: aggregate throughput goal of 40 to 60 Gbps
[Diagram: links to Brazil, UK, Japan and Australia; SLAC 3 x 10 Gbps; TeraGrid; 10 Gbps Abilene; FNAL to StarLight at 10 Gbps; 2 x 10 Gbps NLR; NLR to LA at 10 Gbps; LHCNet to CERN Geneva; 2 metro 10 Gbps waves LA-Caltech (CACR)]

The Move to Dark Fiber is Spreading: FiberCO
18 state dark fiber initiatives in the U.S. (as of 3/04): California (CALREN), Colorado (FRGP/BRAN), Connecticut Educ. Network, Florida Lambda Rail, Indiana (I-LIGHT), Illinois (I-WIRE), Md./DC/No. Virginia (MAX), Michigan, Minnesota, NY + New England (NEREN), N. Carolina (NC LambdaRail), Ohio (Third Frontier Net), Oregon, Rhode Island (OSHEAN), SURA Crossroads (SE U.S.), Texas, Utah, Wisconsin

SCIC in 2003-2004 http://cern.ch/icfa-scic
Strong focus on the Digital Divide continues. A striking picture continues to emerge: remarkable progress in some regions, and a deepening Digital Divide among nations.
Intensive work in the field, with > 60 meetings and workshops: e.g. Internet2, TERENA, AMPATH, APAN, CHEP2003, SC2003, Trieste, Telecom World 2003, WSIS/RSIS, the GLORIAD launch, the Digital Divide and HEPGrid Workshop (Feb. 16-20 in Rio), GNEW2004, GridNets2004, the NASA ONT Workshop, etc.
3rd Int'l Grid Workshop in Daegu (August 26-28, 2004); plan for the 2nd ICFA Digital Divide and Grid Workshop in Daegu (May 2005)
HENP is increasingly visible to governments and heads of state: through network advances (records), Grid developments, work on the Digital Divide and issues of global collaboration, and also through the World Summit on the Information Society process. The next step is WSIS II in Tunis, November 2005.

Coverage
Now monitoring 650 sites in 115 countries. In the last 9 months added: several sites in Russia (thanks GLORIAD); many hosts in Africa (5 → 36 now, in 27 out of 54 countries); monitoring sites in Pakistan and Brazil (Sao Paulo and Rio). Working to install a monitoring host in Bangalore, India.
[World map of monitoring sites and remote sites – "looks like chicken pox"]
For Brazil, thanks to Alberto Santoro for UERJ and Sergio Novaes for UNESP; for Pakistan, thanks to Arshad Ali for NIIT. Pakistan monitors 4 .pk hosts plus CERN & SLAC; routes go via London (even between .pk sites). Sao Paulo monitors about 25 sites in 16 countries; within Brazil there are direct connections (5-30 msec), otherwise traffic goes via AMPATH/FIU (even for Chile and Uruguay).

Latin America: CLARA Network (2004-2006 EU Project)
Significant contribution from the European Commission and DANTE through the ALICE project
NRENs in 18 LA countries forming a regional network for collaboration traffic
Initial backbone ring bandwidth of 155 Mbps; spur links at 10 to 45 Mbps (Cuba at 4 Mbps by satellite)
Initial connection to Europe at 622 Mbps from Brazil
Tijuana (Mexico) PoP soon to be connected to the US through a dark fibre link (CUDI-CENIC), giving access to the US, Canada and the Asia-Pacific Rim

NSF IRNC 2004: Two Proposals to Connect CLARA to the US (and Europe)
1st proposal: FIU and CENIC; 2nd proposal: Indiana and Internet2
[Diagram: CLARA links to the US East Coast and West Coast, and to Europe]
Note: CHEPREO (FIU, UF, FSU, Caltech, UERJ, USP, RNP) 622 Mbps Sao Paulo – Miami link started in August

GIGA Project: Experimental Gbps Network – Sites in Rio and Sao Paulo (about 600 km extension)
Universities: IME, PUC-Rio, UERJ, UFF, UFRJ, Unesp, Unicamp, USP
R&D Centres: CBPF (physics), CPqD (telecom), CPTEC (meteorology), CTA (aerospace), Fiocruz (health), IMPA (mathematics), INPE (space sciences), LNCC (HPC), LNLS (physics)
[Map, not to scale, of the sites and telco links; slide from M. Stanton]

GIGA Project in Rio and Sao Paolo Maceió João Pessoa Extension of the GIGA Project Using 3000 km of dark fiber. “A good and real Advancement for Science in Brazil” – A. Santoro. GIGA Project in Rio and Sao Paolo “This is wonderful NEWS! our colleagues from Salvador -Bahia will can start to work with us on CMS.”

Trans-Eurasia Information Network TEIN (2004-2007)
Circuit between KOREN (Korea) and RENATER (France)
AP beneficiaries: China, Indonesia, Malaysia, Philippines, Thailand, Vietnam (non-beneficiaries: Brunei, Japan, Korea, Singapore). EU partners: NRENs of France, Netherlands, UK.
The scope recently expanded to South-East Asia and China. Upgraded to 34 Mbps in 11/2003; upgrade to 155 Mbps planned.
12M Euro of EU funds; coordinating partner DANTE. Direct EU-AP link; other links go across the US.

APAN China Consortium: established in 1999
The China Education and Research Network (CERNET), the Natural Science Foundation of China Network (NSFCNET) and the China Science and Technology Network (CSTNET) are the three main advanced networks
[Diagram: NSFCNET (2.5 Gbps) connecting the sites below, interconnected with CERNET]
Tsinghua --- Tsinghua University; PKU --- Peking University; NSFC --- Natural Science Foundation of China; CAS --- China Academy of Sciences; BUPT --- Beijing Univ. of Posts and Telecom.; BUAA --- Beijing Univ. of Aero- and Astronautics

GLORIAD: Global Optical Ring (US-Russia-China; Korea Now a Full Partner)
DOE: ITER distributed operations; Fusion-HEP cooperation. NSF: collaboration of three major R&E communities.
Aug. 8, 2004: P.K. Young, Korean IST Advisor to the President, announces Korea joining GLORIAD
TEIN gradually to 10G, connected to GLORIAD; the Asia Pacific Information Infrastructure (1G) will be a backup net to GLORIAD
Also important for intra-Russia connectivity, education and outreach

GLORIAD and HENP. Example: Network Needs of IHEP Beijing
ICFA SCIC Report, Appendix 18, on Network Needs for HEP in China (see http://cern.ch/icfa-scic): "IHEP is working with the Computer Network Information Center (CNIC) and other universities and institutes to build Grid applications for the experiments. The computing resources and storage management systems are being built or upgraded in the Institute. IHEP has a 100 Mbps link to CNIC, so it is quite easy to connect to GLORIAD and the link could be upgraded as needed."
Prospective network needs for IHEP Beijing: LHC/LCG 622 Mbps (2004-2005), 2.5 Gbps (2006 and on); BES 100 Mbps (2004-2005), 155 Mbps (2006 and on); YICRO, AMS and others; total (sharing) 1 Gbps.

The Open Science Grid http://www.opensciencegrid.org
The Open Science Grid will:
- Build on the experience of Grid2003, as a persistent, production-quality Grid of national and international scope
- Ensure that the U.S. plays a leading role in defining and operating the global grid infrastructure needed for large-scale collaborative and international scientific research
- Combine computing resources at several DOE labs and at dozens of universities to effectively become a single national computing infrastructure for science, the Open Science Grid
- Provide opportunities for educators and students to participate in building and exploiting this grid infrastructure, and opportunities for developing and training a scientific and technical workforce; this has the potential to transform the integration of education and research at all levels

Role of Sciences in the Information Society, Palexpo, Geneva 12/2003: Demos at the CERN/Caltech RSIS Online Stand
- Advanced network and Grid-enabled analysis
- Monitoring very large scale Grid farms with MonALISA
- World-scale multi-site, multi-protocol videoconference with VRVS (Europe-US-Asia-South America); VRVS: 37k hosts in 106 countries, 2-3X growth per year
- Distance diagnosis and surgery using robots with "haptic" feedback (Geneva-Canada)
- Music Grid: live performances with bands at St. John's, Canada and the Music Conservatory of Geneva on stage

Achieving Throughput
Users can't achieve the throughput available to them (the "Wizard gap"): TCP stack, end-system, and/or local, regional or national network issues.
A big step is just to know what is achievable (e.g. 7.5 Gbps over 16 kkm Caltech-CERN). Most users are unaware of the bottleneck bandwidth on the path.
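One common wizard-gap factor is the TCP window size. As a rough, illustrative calculation (the round-trip time below is an assumed value for a ~16,000 km path, not a figure from the talk), the bandwidth-delay product sets the socket buffer needed to sustain a given rate:

target_gbps = 7.5      # rate demonstrated over the Caltech-CERN path
rtt_s = 0.17           # assumed round-trip time for a ~16,000 km path
bdp_bytes = target_gbps * 1e9 * rtt_s / 8
print(f"Required TCP window ~ {bdp_bytes / 2**20:.0f} MBytes")   # ~152 MBytes
# For comparison, a default 64 KByte window over the same RTT sustains only
# about 64 * 1024 * 8 / rtt_s / 1e6 ~= 3 Mbit/s.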