
1 ESnet Update. Winter 2008 Joint Techs Workshop. Joe Burrescia, ESnet General Manager. January 21, 2008. Energy Sciences Network, Lawrence Berkeley National Laboratory. Networking for the Future of Science.

2 ESnet 3 with Sites and Peers (Early 2007). [Map slide: the ESnet IP core (packet-over-SONET optical ring and hubs) connecting 42 end-user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA). Commercial peering points include MAE-E, PAIX-PA, and Equinix; high-speed peering points with Internet2/Abilene; international peers include CERN (USLHCnet, DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc.), SINet (Japan), Russia (BINP), CA*net4 (Canada), GLORIAD (Russia, China), Kreonet2 (Korea), AARNet (Australia), TANet2 (Taiwan), and SingAREN. Legend: 10 Gb/s SDN core; 10 Gb/s IP core; 2.5 Gb/s IP core; MAN rings (≥ 10 Gb/s); lab-supplied links; OC12 ATM (622 Mb/s); OC12/GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less.]

3 ESnet 3 Backbone as of January 1, 2007. [Map slide: hubs at Seattle, Sunnyvale, San Diego, Albuquerque, El Paso, Chicago, New York City, Washington DC, and Atlanta (existing and future ESnet hubs). Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (QWEST); MAN rings (≥ 10 Gb/s); lab-supplied links.]

4 ESnet 4 Backbone as of April 15, 2007. [Map slide: hubs at Seattle, Sunnyvale, San Diego, Albuquerque, El Paso, Chicago, Cleveland, Boston, New York City, Washington DC, and Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (QWEST); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links.]

5 ESnet 4 Backbone as of May 15, 2007. [Map slide: hubs at Seattle, Sunnyvale (SNV), San Diego, Albuquerque, El Paso, Chicago, Cleveland, Boston, New York City, Washington DC, and Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (QWEST); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links.]

6 ESnet 4 Backbone as of June 20, 2007. [Map slide: hubs at Seattle, Sunnyvale, San Diego, Albuquerque, El Paso, Denver, Houston, Kansas City, Chicago, Cleveland, Boston, New York City, Washington DC, and Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (QWEST); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links.]

7 ESnet 4 Backbone, August 1, 2007 (last JT meeting, at FNAL). [Map slide: hubs at Seattle, Sunnyvale, Los Angeles, San Diego, Albuquerque, El Paso, Denver, Houston, Kansas City, Chicago, Cleveland, Boston, New York City, Washington DC, and Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (QWEST); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links.]

8 ESnet 4 Backbone, September 30, 2007. [Map slide: hubs at Seattle, Boise, Sunnyvale, Los Angeles, San Diego, Albuquerque, El Paso, Denver, Houston, Kansas City, Chicago, Cleveland, Boston, New York City, Washington DC, and Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (QWEST); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links.]

9 ESnet 4 Backbone, December 2007. [Map slide: hubs at Seattle, Boise, Sunnyvale, Los Angeles, San Diego, Albuquerque, El Paso, Denver, Houston, Kansas City, Chicago, Nashville, Cleveland, Boston, New York City, Washington DC, and Atlanta. Legend: 10 Gb/s SDN core (NLR); 2.5 Gb/s IP tail (QWEST); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links.]

10 ESnet 4 Backbone, December 2008. [Map slide: hubs at Seattle, Boise, Sunnyvale, Los Angeles, San Diego, Albuquerque, El Paso, Denver, Houston, Kansas City, Chicago, Nashville, Cleveland, Boston, New York City, Washington DC, and Atlanta; an "X2" label also appears on the map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (QWEST); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links.]

11 ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (12/2007). [Map slide: ~45 end-user sites: Office of Science sponsored (22), NNSA sponsored (13+), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA). ESnet core hubs; commercial peering points include PAIX-PA and Equinix; R&E peers include CERN (USLHCnet: DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc.), Internet2/Abilene, NLR, NYSERNet, MAN LAN, SINet (Japan), Russia (BINP), CA*net4 (Canada), GLORIAD (Russia, China), Kreonet2 (Korea), AARNet (Australia), TANet2/ASCC (Taiwan), SingAREN, KAREN/REANNZ, ODN Japan Telecom America, NASA Ames, and AMPATH (S. America, NSF/IRNC funded). Legend: international (1-10 Gb/s); 10 Gb/s SDN core (I2, NLR); 10 Gb/s IP core; MAN rings (≥ 10 Gb/s); lab-supplied links; OC12/GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less. Geography is only representational.]

12 ESnet4 core networks: 50-60 Gbps by 2009-2010 (10 Gb/s circuits), 500-600 Gbps by 2011-2012 (100 Gb/s circuits). [Map slide: the Science Data Network core and IP core spanning Seattle, Sunnyvale, LA, San Diego, Boise, Denver, Albuquerque, El Paso, Houston, Tulsa, Kansas City, Chicago, Cleveland, Nashville, Atlanta, Jacksonville, New York, Boston, and Washington DC, with international connections to CERN (30+ Gbps, via USLHCNet), Canada (CANARIE), Europe (GÉANT), Asia-Pacific, Australia, GLORIAD (Russia and China), and South America (AMPATH). Core network fiber path is ~14,000 miles / 24,000 km; example segments of 1,625 miles / 2,545 km and 2,700 miles / 4,300 km are marked. Legend: production IP core (10 Gbps); SDN core (20-30-40-50 Gbps); MANs (20-60 Gbps) or backbone loops for site access; international connections; IP core hubs; SDN hubs; primary DOE labs; possible hubs; high-speed cross-connects with Internet2/Abilene.]

13 A Tail of Two ESnet4 Hubs. Sunnyvale, CA hub: MX960 switch and T320 router. Chicago hub: 6509 switch and T320 routers. ESnet's SDN backbone is implemented with layer 2 switches; the Cisco 6509s and Juniper MX960s each present their own unique challenges.

14 ESnet 4 Factoids as of January 21, 2008. ESnet4 installation to date:
o 32 new 10 Gb/s backbone circuits (over 3 times the number from the last JT meeting)
o 20,284 10 Gb/s backbone route miles (more than doubled since the last JT meeting)
o 10 new hubs since the last meeting, including Seattle, Sunnyvale, and Nashville
o 7 new routers, 4 new switches
o Chicago MAN now connected to the Level3 POP:
 - 2 x 10GE to ANL
 - 2 x 10GE to FNAL
 - 3 x 10GE to Starlight
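The circuit and route-mile factoids above admit a quick sanity check; a sketch, assuming the route miles are spread across the 32 new circuits:

```python
# Figures from the slide
circuits = 32
route_miles = 20_284

# Average span per new 10 Gb/s backbone circuit (illustrative arithmetic only;
# actual circuit lengths vary widely across the footprint)
avg_miles = route_miles / circuits   # roughly 634 miles per circuit
```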

15 ESnet Traffic Continues to Exceed 2 Petabytes/Month. 2.7 PBytes in July 2007; 1 PByte in April 2006. ESnet traffic has historically increased 10x every 47 months. Overall traffic tracks the very large science use of the network.
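The stated growth rate can be checked against the slide's own data points; a small sketch projecting the 10x-per-47-months trend from April 2006 to July 2007:

```python
# "10x every 47 months" implies a compound monthly growth factor of 10**(1/47)
monthly_growth = 10 ** (1 / 47)        # about 1.05, i.e. ~5% per month

# April 2006 (1 PB) to July 2007 (observed 2.7 PB) is 15 months
projected_jul_2007 = 1.0 * monthly_growth ** 15   # ~2.1 PB: actual traffic ran ahead of trend
```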

16 When a few large data sources/sinks dominate traffic, it is not surprising that overall network usage follows the patterns of the very large users. This trend will reverse in the next few weeks as the next round of LHC data challenges kicks off.

17 ESnet Continues to be Highly Reliable, Even During the Transition. [Chart: site availability grouped into "5 nines" (>99.995%), "4 nines" (>99.95%), and "3 nines" (>99.5%) bands; dually connected sites are indicated.] Note: these availability measures are only for ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide circuits from the site to an ESnet hub, and therefore the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
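The "nines" labels translate into concrete downtime budgets; a small sketch computing the maximum outage minutes per year each band allows, using the thresholds as stated on the slide:

```python
MIN_PER_YEAR = 365 * 24 * 60   # 525,600 minutes in a non-leap year

def downtime_minutes(availability):
    """Maximum minutes of outage per year at the given availability fraction."""
    return MIN_PER_YEAR * (1 - availability)

# Thresholds as labeled on the slide
budgets = {label: downtime_minutes(a) for label, a in
           [("3 nines", 0.995), ("4 nines", 0.9995), ("5 nines", 0.99995)]}
# "3 nines" allows ~2628 min/yr, "4 nines" ~263 min/yr, "5 nines" ~26 min/yr
```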

18 OSCARS Overview. OSCARS (On-demand Secure Circuits and Advance Reservation System) provides ESnet's guaranteed-bandwidth virtual circuit services. Core functions: path computation (topology, reachability, constraints); scheduling (AAA, availability); provisioning (signaling, security, resiliency/redundancy).

19 OSCARS Status Update.
ESnet-centric deployment:
o Prototype layer 3 (IP) guaranteed-bandwidth virtual circuit service deployed in ESnet (1Q05)
o Prototype layer 2 (Ethernet VLAN) virtual circuit service deployed in ESnet (3Q07)
Inter-domain collaborative efforts:
o Terapaths (BNL)
 - Inter-domain interoperability for layer 3 virtual circuits demonstrated (3Q06)
 - Inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
o LambdaStation (FNAL)
 - Inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
o HOPI/DRAGON
 - Inter-domain exchange of control messages demonstrated (1Q07)
 - Integration of OSCARS and DRAGON has been successful (1Q07)
o DICE
 - First draft of the topology exchange schema formalized, in collaboration with NMWG (2Q07); interoperability test demonstrated (3Q07)
 - Initial implementation of reservation and signaling messages demonstrated at SC07 (4Q07)
o UVA
 - Integration of token-based authorization in OSCARS under testing
o Nortel
 - Topology exchange demonstrated successfully (3Q07)
 - Inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
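The advance-reservation idea at the heart of OSCARS can be illustrated with a toy admission check. This is a simplified sketch, not the actual OSCARS algorithm or schema: the field names, the capacity value, and the single-shared-link assumption are all illustrative (the real system computes loads per link along a routed path):

```python
from datetime import datetime, timedelta

LINK_CAPACITY_MBPS = 10_000  # assume one 10GE wave for illustration

def admissible(existing, new, capacity=LINK_CAPACITY_MBPS):
    """Conservative admission check for a guaranteed-bandwidth reservation:
    admit the new request only if its bandwidth, plus that of every existing
    reservation whose time window overlaps it, fits within link capacity."""
    overlapping = [r for r in existing
                   if r["start"] < new["end"] and new["start"] < r["end"]]
    used = sum(r["bandwidth_mbps"] for r in overlapping)
    return used + new["bandwidth_mbps"] <= capacity

t0 = datetime(2008, 1, 22, 2, 0)
existing = [{"bandwidth_mbps": 6000, "start": t0, "end": t0 + timedelta(hours=6)}]

# Overlaps the 6 Gb/s reservation: 6000 + 5000 > 10000, so rejected
clash = {"bandwidth_mbps": 5000,
         "start": t0 + timedelta(hours=3), "end": t0 + timedelta(hours=9)}

# Starts exactly when the existing reservation ends: admitted
later = {"bandwidth_mbps": 5000,
         "start": t0 + timedelta(hours=6), "end": t0 + timedelta(hours=9)}
```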

20 Network Measurement Update.
ESnet:
o About 1/3 of the 10GE bandwidth test platforms and 1/2 of the latency test platforms for ESnet 4 have been deployed.
 - 10GE test systems are being used extensively for acceptance testing and debugging.
 - Structured and ad-hoc external testing capabilities have not been enabled yet.
 - Clocking issues at a couple of POPs are not resolved.
o Work is progressing on revamping the ESnet statistics collection, management, and publication systems.
 - ESxSNMP, TSDB, and the perfSONAR Measurement Archive (MA)
 - perfSONAR TS and the OSCARS topology DB
 - NetInfo being restructured to be perfSONAR-based
LHC and perfSONAR:
o perfSONAR-based network measurement solutions for the Tier 1/Tier 2 community are nearing completion.
o A proposal from DANTE to deploy a perfSONAR-based network measurement service across the LHCOPN at all Tier 1 sites is being evaluated by the Tier 1 centers.
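SNMP-based statistics collection of the kind ESxSNMP performs ultimately reduces to differencing interface octet counters between polls; a minimal sketch (the function name and sample values are illustrative, not ESnet's code), including handling of a single counter wrap:

```python
def utilization(octets_prev, octets_now, interval_s, link_bps, counter_bits=64):
    """Average link utilization between two SNMP octet-counter samples
    (e.g. IF-MIB ifHCInOctets), tolerating one counter wrap-around."""
    delta = (octets_now - octets_prev) % (2 ** counter_bits)  # modular difference absorbs a wrap
    return (delta * 8) / (interval_s * link_bps)              # octets -> bits, then fraction of capacity

# Example: a 5-minute sample on a 10 Gb/s link that averaged 4 Gb/s
u = utilization(0, 150_000_000_000, 300, 10_000_000_000)      # -> 0.4
```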

21 End.

