1 ESnet Status and Plans Internet2 All Hands Meeting, Sept. 28, 2004 William E. Johnston, ESnet Dept. Head and Senior Scientist; R. P. Singh, Federal Project Manager.

Presentation transcript:

1 ESnet Status and Plans Internet2 All Hands Meeting, Sept. 28, 2004 William E. Johnston, ESnet Dept. Head and Senior Scientist; R. P. Singh, Federal Project Manager; Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads; Gizella Kapus, Resource Manager; and the ESnet Team. Lawrence Berkeley National Laboratory

2 ESnet Provides Full Internet Service to DOE Facilities and Collaborators with High-Speed Access to Major Science Collaborators
[Figure: map of ESnet in mid-2004. The ESnet core is a Qwest packet-over-SONET optical ring with hubs at SEA, SNV, ELP, CHI, ATL, DC, and NYC, serving 42 end user sites: 22 Office of Science sponsored, 12 NNSA sponsored, 3 jointly sponsored, 6 laboratory sponsored, and other sponsored sites (NSF LIGO, NOAA). Site links range from T1 (1.5 Mb/s) and T3 (45 Mb/s) through OC3 (155 Mb/s), OC12 (622 Mb/s), Gigabit Ethernet (1 Gb/s), OC48 (2.5 Gb/s), and OC192 (10 Gb/s). High-speed peering points connect to Abilene (including MAN LAN), CERN (DOE link), GEANT (Germany, France, Italy, UK, etc.), SInet and KDDI (Japan), Russia (BINP), CA*net4, MREN, Netherlands, StarTap/Starlight, Taiwan (ASCC, TANet2), Singaren, and Australia, plus commercial exchange points (MAE-E, MAE-W, Fix-W, PAIX-W, PAIX-E, NY-NAP, Equinix, PNWG, Chicago NAP).]

3 ESnet's Peering Infrastructure Connects the DOE Community With its Collaborators
[Figure: map of ESnet peering (connections to other networks): exchange points including STARLIGHT, MAE-E, MAE-W, FIX-W, PAIX-E/W, NY-NAP, the Chicago NAP/Distributed 6TAP, Equinix (Ashburn and San Jose), MAX GPOP, PNW-GPOP/CalREN2, and CENIC, with peer counts at each (from 1-2 peers at some hubs up to 39 peers), reaching university, international, and commercial networks: Abilene (plus 7 directly connected universities), CERN, GEANT (Germany, France, Italy, UK, etc.), SInet (Japan)/KEK, KDDI (Japan), Japan-Russia (BINP), CA*net4, MREN, Netherlands, StarTap, Taiwan (ASCC, TANet2), Singaren, and Australia.]
ESnet provides access to all of the Internet by managing the full complement of global Internet routes (about 150,000) at 10 general/commercial peering points, plus high-speed peerings with Abilene and the international R&E networks. This is a lot of work and is very visible, but it provides full access for DOE.

4 Major ESnet Changes in FY04
Dramatic increase in international traffic as major large-scale science experiments start to ramp up:
o CERNlink connected at 10 Gb/s
o GEANT (the main European R&E network, analogous to Abilene and ESnet) connected at 2.5 Gb/s
o Abilene-ESnet high-speed cross-connects (2.5 Gb/s and 10 Gb/s)
In order to meet the Office of Science program needs, a new architectural approach has been developed:
o Science Data Network (a second core network for high-volume traffic)
o Metropolitan Area Networks (MANs)

5 Predictive Drivers for Change (workshop, August 13-15, 2002)
Organized by the Office of Science: Mary Anne Scott, Chair; Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White. Workshop panel chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett.
Focused on science requirements that drive:
o Advanced Network Infrastructure
o Middleware Research
o Network Research
o Network Governance Model
The requirements for DOE science were developed by the OSC science community representing major DOE science disciplines:
o Climate
o Spallation Neutron Source
o Macromolecular Crystallography
o High Energy Physics
o Magnetic Fusion Energy Sciences
o Chemical Sciences
o Bioinformatics
Available at

6 Evolving Quantitative Science Requirements for Networks

Science Area                 | End2End Throughput Today     | 5 Years                              | 5-10 Years                           | Remarks
High Energy Physics          | 0.5 Gb/s                     | 100 Gb/s                             | 1000 Gb/s                            | high bulk throughput
Climate (Data & Computation) | 0.5 Gb/s                     |                                      | N x 1000 Gb/s                        | high bulk throughput
SNS NanoScience              | not yet started              | 1 Gb/s                               | 1000 Gb/s + QoS for control channel  | remote control and time-critical throughput
Fusion Energy                | 0.066 Gb/s (500 MB/s burst)  | (500 MB / 20 sec. burst)             | N x 1000 Gb/s                        | time-critical throughput
Astrophysics                 | 0.013 Gb/s (1 TBy/week)      | N*N multicast                        | 1000 Gb/s                            | computational steering and collaborations
Genomics Data & Computation  | (1 TBy/day)                  | 100s of users                        | 1000 Gb/s + QoS for control channel  | high throughput and steering
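As a sanity check on the table's units, the Gb/s figures in parentheses follow directly from the stated data volumes. A minimal sketch (Python, illustrative arithmetic only, assuming decimal terabytes):

```python
# Convert a sustained data volume into the average network rate it implies.

def volume_to_gbps(terabytes: float, days: float) -> float:
    """Average rate in Gb/s needed to move `terabytes` in `days` days."""
    bits = terabytes * 1e12 * 8   # decimal terabytes -> bits
    seconds = days * 86400
    return bits / seconds / 1e9   # -> Gb/s

print(f"1 TBy/week ~ {volume_to_gbps(1, 7):.3f} Gb/s")  # ~0.013 (Astrophysics row)
print(f"1 TBy/day  ~ {volume_to_gbps(1, 1):.3f} Gb/s")  # ~0.093 (Genomics row)
```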

7 Observed Drivers for Change: ESnet Inter-Sector Traffic Summary, Jan 2003 / Feb 2004
1.7X overall traffic increase, 1.9X OSC increase. (The international traffic is increasing due to BaBar at SLAC and the LHC tier 1 centers at FNAL and BNL.)
[Figure: inter-sector traffic flows between DOE sites, peering points, the commercial sector, the R&E sector (mostly universities), and the international sector (almost entirely R&E sites). Traffic coming into ESnet = green, traffic leaving ESnet = blue; percentage pairs give each flow's share of total ingress/egress traffic: 21/14%, 17/10%, 9/26%, 14/12%, 10/13%, 4/6%, with ~25/18% between ESnet sites and DOE collaborator traffic (including data) at 72/68% and 53/49%.]
Note that more than 90% of the ESnet traffic is OSC traffic. DOE is a net supplier of data because DOE facilities are used by universities and commercial entities, as well as by DOE researchers.
ESnet Appropriate Use Policy (AUP): all ESnet traffic must originate and/or terminate on an ESnet site (no transit traffic is allowed).
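If the reported 1.7X growth over the roughly 13 months between the two snapshots were sustained, it implies a doubling time of about a year and a half. A quick arithmetic sketch (the 13-month window is read off the slide's dates):

```python
import math

# 1.7X overall traffic growth between Jan 2003 and Feb 2004 (~13 months).
growth_factor = 1.7
months = 13

monthly_rate = growth_factor ** (1 / months)             # per-month multiplier
doubling_months = math.log(2) / math.log(monthly_rate)   # months until 2X

print(f"monthly growth: {100 * (monthly_rate - 1):.1f}%")  # ~4.2% per month
print(f"doubling time : {doubling_months:.0f} months")     # ~17 months
```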

8 ESnet Top 20 Data Flows, 24 hr. avg.
Flows include: Fermilab (US) → CERN; SLAC (US) → IN2P3 (FR) (1 Terabyte/day); SLAC (US) → INFN Padova (IT); Fermilab (US) → U. Chicago (US); CEBAF (US) → IN2P3 (FR); INFN Padova (IT) → SLAC (US); U. Toronto (CA) → Fermilab (US); Helmholtz-Karlsruhe (DE) → SLAC (US); SLAC (US) → JANET (UK); Fermilab (US) → JANET (UK); Argonne (US) → Level3 (US); Argonne → SURFnet (NL); IN2P3 (FR) → SLAC (US); Fermilab (US) → INFN Padova (IT); plus DOE Lab → DOE Lab flows.
A small number of science users account for a significant fraction of all ESnet traffic:
o Since BaBar production started, the top 20 ESnet flows have consistently accounted for > 50% of ESnet's monthly total traffic (~130 of 250 TBy/mo)
o As LHC data starts to move, this will increase a lot

9 ESnet Top 10 Data Flows, 1 week avg.
FNAL (US) → IN2P3 (FR): 2.2 Terabytes
SLAC (US) → INFN Padova (IT): 5.9 Terabytes
SLAC (US) → IN2P3 (FR): 5.3 Terabytes
CERN → FNAL (US): 1.3 Terabytes
FNAL (US) → U. Nijmegen (NL): 1.0 Terabytes
U. Toronto (CA) → Fermilab (US): 0.9 Terabytes
SLAC (US) → Helmholtz-Karlsruhe (DE): 0.9 Terabytes
FNAL (US) → Helmholtz-Karlsruhe (DE): 0.6 Terabytes
FNAL (US) → SDSC (US): 0.6 Terabytes
U. Wisc. (US) → FNAL (US): 0.6 Terabytes
The traffic is not transient: daily and weekly averages are about the same.
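The weekly figures above can be cross-checked against the 24-hour snapshot on the previous slide. A small sketch using only the numbers given on these two slides (decimal terabytes assumed):

```python
# Weekly top-10 flow volumes as listed on this slide (TBy/week).
weekly_flows = {
    "FNAL -> IN2P3": 2.2, "SLAC -> INFN Padova": 5.9, "SLAC -> IN2P3": 5.3,
    "CERN -> FNAL": 1.3, "FNAL -> U. Nijmegen": 1.0, "U. Toronto -> FNAL": 0.9,
    "SLAC -> Helmholtz-Karlsruhe": 0.9, "FNAL -> Helmholtz-Karlsruhe": 0.6,
    "FNAL -> SDSC": 0.6, "U. Wisc. -> FNAL": 0.6,
}

total = sum(weekly_flows.values())
print(f"top-10 total: {total:.1f} TBy/week")  # ~19.3 TBy/week

# SLAC -> IN2P3 at 5.3 TBy/week is ~0.76 TBy/day, the same order as the
# ~1 TBy/day in the 24-hour snapshot, consistent with "not transient".
print(f"SLAC -> IN2P3: {weekly_flows['SLAC -> IN2P3'] / 7:.2f} TBy/day")
```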

10 ESnet and Abilene
Abilene and ESnet together provide most of the nation's transit networking for science:
o Abilene provides national transit networking for most of the US universities by interconnecting the regional networks (mostly via the GigaPoPs)
o ESnet connects the DOE Labs
ESnet and Abilene have recently established high-speed interconnects and cross-network routing. The goal is that DOE Lab ↔ Univ. connectivity should be as good as Lab ↔ Lab and Univ. ↔ Univ. Constant monitoring is the key.
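A minimal sketch of what the cross-network measurement mesh on the next slide amounts to: one-way delay tests between every pair of monitors on the two networks. The monitor sites are the ones the slide names; the host labels and the full-mesh layout are assumptions for illustration.

```python
from itertools import permutations

lab_monitors = ["lbnl", "fnal", "bnl"]    # ESnet-side initial monitors
univ_monitors = ["sdsc", "ncsu", "osu"]   # Abilene-side initial monitors

# One-way delay is direction-dependent, so test every ordered pair.
pairs = list(permutations(lab_monitors + univ_monitors, 2))
print(f"{len(pairs)} directed one-way test pairs")  # 6 * 5 = 30
```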

11 Monitoring DOE Lab ↔ University Connectivity
[Figure: maps of ESnet and Abilene (hubs at SEA, SNV, LA, SDG, DEN, ELP, ALB, KC, HOU, CHI, IND, ATL, NYC, DC, ORNL; international links to Japan, AsiaPac, CERN/Europe) showing the current monitor infrastructure (red) and the target infrastructure: DOE Labs with monitors, universities with monitors, network hubs, and the high-speed cross-connects ESnet ↔ Internet2/Abilene.]
o Initial site monitors: SDSC, LBNL, FNAL, NCSU, BNL, OSU
o Goal is a uniform distribution of monitors around ESnet and around Abilene
o Need to set up similar infrastructure with GEANT

12 Initial Monitoring is with OWAMP One-Way Delay Tests
These measurements are very sensitive. For example, at NCSU a metro DWDM fiber re-route that changed the one-way delay by about 350 microseconds is easily visible. [Figure: one-way delay plot (ms) showing the step at the fiber re-route.]
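To see why a 350 microsecond step is physically plausible, it can be converted into extra fiber path length. A rough estimate, assuming light in fiber propagates at about two-thirds of the vacuum speed of light:

```python
# Convert a one-way delay change into an approximate fiber path change.
C = 299_792_458        # speed of light in vacuum, m/s
V_FIBER = C * 2 / 3    # ~2e8 m/s, typical propagation speed in fiber

delta_t = 350e-6       # observed one-way delay step, seconds
extra_path_m = delta_t * V_FIBER

print(f"~{extra_path_m / 1000:.0f} km of additional fiber path")  # ~70 km
```

A metro re-route adding on the order of 70 km of fiber is exactly the kind of event a continuous OWAMP mesh can catch that round-trip ping averages would blur.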

13 Initial Monitor Results

14 ESnet, GEANT, and CERNlink
GEANT plays a role in Europe similar to Abilene and ESnet in the US: it interconnects the European national research and education networks, to which the European R&E sites connect.
o GEANT currently carries essentially all ESnet international traffic (LHC use of CERNlink to DOE labs is still ramping up)
o GN2 is the second phase of the GEANT project; the architecture of GN2 is remarkably similar to the new ESnet Science Data Network + IP core network model
CERNlink will be the main CERN-to-US LHC data path:
o Both US LHC tier 1 centers are on ESnet (FNAL and BNL)
o ESnet directly connects at 10 Gb/s to the CERNlink
o The new ESnet architecture (Science Data Network) will accommodate the anticipated 40 Gb/s from LHC to US
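For scale, 40 Gb/s sustained is a very large data volume. Illustrative arithmetic for the anticipated LHC-to-US rate mentioned above:

```python
# Data volume implied by a 40 Gb/s sustained flow.
rate_gbps = 40
bytes_per_day = rate_gbps * 1e9 / 8 * 86400

print(f"{bytes_per_day / 1e12:.0f} TBy/day")         # ~432 TBy/day
print(f"{bytes_per_day * 365 / 1e15:.0f} PBy/year")  # ~158 PBy/year
```

That is roughly three times ESnet's entire monthly traffic (~250 TBy/mo, slide 8) moved every day, which is why a second, high-volume core is needed.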

15 GEANT and CERNlink
A recent meeting between ESnet and GEANT produced proposals in a number of areas designed to ensure robust and reliable science data networking between ESnet and Europe:
o A US-EU joint engineering task force ("ITechs") should be formed to coordinate US-EU science data networking
 - Will include, e.g., ESnet, Abilene, GEANT, CERN
 - Will develop joint operational procedures
o ESnet will collaborate in GEANT development activities to ensure some level of compatibility
 - Bandwidth-on-demand (dynamic circuit setup)
 - Performance measurement and authentication
 - End-to-end QoS and performance enhancement
 - Security
o 10 Gb/s connectivity between GEANT and ESnet will be established by mid-2005, and a backup 2.5 Gb/s link will be added

16 New ESnet Architecture Needed to Accommodate OSC
The essential DOE Office of Science requirements cannot be met with the current, telecom-provided, hub-and-spoke architecture of ESnet. The core ring has good capacity and resiliency against single point failures, but the point-to-point tail circuits are neither reliable nor scalable to the required bandwidth.
[Figure: the current ESnet core ring — New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), El Paso (ELP) — with DOE sites attached via tail circuits.]

17 A New ESnet Architecture
Goals:
o Full redundant connectivity for every site
o High-speed access for every site (at least 10 Gb/s)
Three-part strategy:
1) MAN rings provide dual site connectivity and much higher site-to-core bandwidth
2) A Science Data Network core for
 - multiply connected MAN rings for protection against hub failure
 - expanded capacity for science data
 - a platform for provisioned, guaranteed bandwidth circuits
 - an alternate path for production IP traffic
 - carrier circuit and fiber access neutral hubs
3) An IP core (e.g. the current ESnet core) for high-reliability production IP service

18 A New ESnet Architecture: Science Data Network + IP Core
[Figure: map of the ESnet IP core ring — New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), El Paso (ELP), Atlanta (ATL) — alongside the ESnet Science Data Network (2nd core), Metropolitan Area Rings connecting the DOE/OSC Labs, existing hubs, new hubs, possible new hubs, and international links to GEANT (Europe), Asia-Pacific, and CERN.]

19 ESnet Long-Term Architecture
[Figure: layered diagram of an ESnet hub (typ.) and site (typ.) in the long-term architecture, with each element's management domain:
o One or more independent fiber pairs carrying the ESnet IP core ring and the ESnet SDN core ring
o Optical channel (λ) equipment — carrier management domain
o 10 Gigabit Ethernet switch(es) — ESnet management domain
o Core router — ESnet management domain
o Site router — site management domain
o ESnet management and monitoring equipment at the hubs
Provisioned circuits are carried over optical channels/lambdas on the SDN core and tunneled through the IP core via MPLS; the IP core ring also carries production IP (ESnet Metropolitan Area Networks attach to both).]

20 ESnet New Architecture, Part 1: MANs
The MAN architecture is designed to provide
o At least one redundant path from sites to the ESnet hub
o Scalable bandwidth options from sites to the ESnet hub
o The first step in point-to-point provisioned circuits
 - With endpoint authentication, these are private and intrusion-resistant circuits, so they should be able to bypass site firewalls if the endpoints trust each other
 - End-to-end provisioning will initially be provided by a combination of Ethernet switch management of λ paths in the MAN and MPLS paths in the ESnet POS backbone (OSCARS project)
 - Provisioning will initially be done by manual circuit configuration, and on-demand in the future (OSCARS); a sketch of the reservation idea follows this slide
o Cost savings over two or three years, when future site needs for increased bandwidth are included
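To make the on-demand provisioning idea concrete, here is a hypothetical sketch of what an advance circuit reservation of the kind OSCARS is described as providing might carry. This is not the actual OSCARS interface; the class and field names are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CircuitReservation:
    """Hypothetical advance reservation for a provisioned circuit (NOT the OSCARS API)."""
    source: str             # hypothetical endpoint name, e.g. a site gateway
    destination: str
    bandwidth_mbps: int     # guaranteed bandwidth for the circuit
    start: datetime         # advance-reservation window
    end: datetime
    authenticated: bool = True  # endpoint authentication, per the slide

req = CircuitReservation(
    source="fnal-gw",       # invented names for illustration
    destination="cern-gw",
    bandwidth_mbps=1000,
    start=datetime(2005, 7, 1, 8, 0),
    end=datetime(2005, 7, 1, 20, 0),
)
print(req)
```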

21 ESnet MAN Architecture — logical (Chicago, e.g.)
[Figure: a MAN ring connects the ANL and FNAL site gateway routers, site equipment, and site LANs to the Qwest hub (ESnet IP core, T320 router) and to StarLight (international peerings, ESnet SDN core, T320 router, and the DOE-funded CERN link). The ring carries both ESnet production IP service and ESnet-managed λ/circuit services; the circuit services are tunneled through the IP backbone, and ESnet management and monitoring equipment attaches at the hubs.]

22 ESnet Metropolitan Area Network Rings (MANs)
In the near term, MAN rings will be built in the San Francisco and Chicago areas. In the long term there will likely be MAN rings on Long Island, in the Newport News, VA area, in northern New Mexico, in Idaho-Wyoming, etc.
San Francisco Bay Area MAN ring progress:
o Feasibility has been demonstrated with an engineering study from CENIC
o A competitive bid and "best value source selection" methodology will select the ring provider within two months

23 SF Bay Area MAN
[Figure: the SF Bay Area MAN ring connecting LBNL, NERSC, LLNL, SNLL, SLAC, and the Joint Genome Institute to the Qwest/ESnet hub and the Level 3 hub, with connections to the ESnet IP core ring (toward Chicago and El Paso), the ESnet Science Data Network core, and NLR/UltraScienceNet (Seattle and Chicago; LA and San Diego).]

24 Proposed Chicago MAN
o ESnet CHI-HUB: Qwest - NBC Building, 455 N Cityfront Plaza Dr, Chicago, IL
o ANL: 9700 S Cass Ave, Lemont, IL
o FNAL: Feynman Computing Center, Batavia, IL
o StarLight: 910 N Lake Shore Dr, Chicago, IL 60611

25 ESnet New Architecture, Part 2: Science Data Network
SDN (second core) rationale: add major points of presence in carrier circuit and fiber access neutral facilities at Sunnyvale, Seattle, San Diego, and Chicago, to
o Enable an UltraScienceNet cross-connect with ESnet
o Provide access to NLR and other fiber-based networks
o Allow for more competition in acquiring circuits
Initial steps toward the Science Data Network (SDN):
o Provide a second, independent path between major northern route hubs
 - An alternate route for ESnet core IP traffic
o Provide high-speed paths on the West Coast to reach PNNL, GA, and AsiaPac peering
o Increase ESnet connectivity to other R&E networks

26 ESnet New Architecture Goal FY05: Science Data Network Phase 1 and SF BA MAN
[Figure: map showing the existing ESnet IP core (Qwest) with current hubs (SEA, SNV, DEN, ALB, ELP, CHI, NYC, DC, ATL, SDG), the new ESnet SDN core hubs, MANs, high-speed cross-connects with Internet2/Abilene, lab-supplied links, UltraScienceNet, major international links (Japan, CERN at 2x10 Gb/s, Europe, AsiaPac), and major DOE Office of Science sites. Legend: 2.5 Gb/s and 10 Gb/s links; future phases; existing ESnet core and new core.]

27 ESnet New Architecture Goal FY06: Science Data Network Phase 2 and Chicago MAN
[Figure: the same map one phase later — the ESnet SDN core (lab supplied) is extended, the Chicago MAN is added, and the CERN connection grows to 3x10 Gb/s; the ESnet IP core (Qwest), Internet2/Abilene cross-connects, UltraScienceNet, and major international links (Japan, Europe, AsiaPac) are as before. Legend: 2.5 Gb/s and 10 Gb/s links; future phases.]

28 ESnet Beyond FY07
[Figure: the target architecture — a production IP ESnet core (Qwest hubs), an ESnet SDN core, and a high-impact science core, with 10 Gb/s, 30 Gb/s, and 40 Gb/s links, MANs, high-speed cross-connects with Internet2/Abilene, lab-supplied links, major international links (Japan, CERN, Europe, AsiaPac via SEA, SNV, SDG, CHI, NYC, DC), and major DOE Office of Science sites. Legend: 2.5 Gb/s and 10 Gb/s links; future phases.]