
1 Networking for the Future of Science
ESnet Update
Winter 2008 Joint Techs Workshop
Joe Burrescia, ESnet General Manager
Energy Sciences Network, Lawrence Berkeley National Laboratory
January 21, 2008

2 ESnet 3 with Sites and Peers (Early 2007)
[Network map: the ESnet IP core (packet-over-SONET optical ring and hubs) plus the ESnet Science Data Network (SDN) core, serving 42 end-user sites: Office of Science sponsored (22), NNSA sponsored (12), Joint sponsored (3), Laboratory sponsored (6), and Other sponsored (NSF LIGO, NOAA). Legend: 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (≥10 Gb/s), lab-supplied links, OC12 ATM (622 Mb/s), OC12/GigEthernet, OC3 (155 Mb/s), and 45 Mb/s and less. Peerings: commercial peering points (Equinix, MAE-E, PAIX-PA), high-speed peering points with Internet2/Abilene (MAN LAN, StarLight, MREN, MAXGPoP, PNWGPoP/PacificWave), and international R&E networks including GÉANT (France, Germany, Italy, UK, etc.), CA*net4 (Canada), SINet (Japan), AARNet (Australia), TANet2/ASCC (Taiwan), SingAREN, Kreonet2 (Korea), GLORIAD (Russia, China), Russia (BINP), AMPATH (S. America), and CERN via USLHCnet (DOE+CERN funded), with NSF/IRNC-funded links.]

3 ESnet 3 Backbone as of January 1, 2007
[Map: ESnet hubs at Seattle, New York City, Sunnyvale, Chicago, Washington DC, San Diego, Albuquerque, Atlanta, and El Paso; future ESnet hubs marked separately. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (QWEST), MAN rings (≥10 Gb/s), lab-supplied links.]

4 ESnet 4 Backbone as of April 15, 2007
[Map: ESnet hubs at Seattle, Boston, Cleveland, New York City, Sunnyvale, Chicago, Washington DC, San Diego, Albuquerque, Atlanta, and El Paso; future ESnet hubs marked separately. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (QWEST), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥10 Gb/s), lab-supplied links.]

5 ESnet 4 Backbone as of May 15, 2007
[Map: ESnet hubs at Seattle, Boston, Cleveland, New York City, Sunnyvale (SNV), Chicago, Washington DC, San Diego, Albuquerque, Atlanta, and El Paso; future ESnet hubs marked separately. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (QWEST), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥10 Gb/s), lab-supplied links.]

6 ESnet 4 Backbone as of June 20, 2007
[Map: ESnet hubs at Seattle, Boston, Cleveland, New York City, Sunnyvale, Denver, Chicago, Kansas City, Washington DC, San Diego, Albuquerque, Atlanta, El Paso, and Houston; future ESnet hubs marked separately. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (QWEST), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥10 Gb/s), lab-supplied links.]

7 ESnet 4 Backbone August 1, 2007 (Last JT meeting at FNAL)
[Map: ESnet hubs at Seattle, Boston, Cleveland, New York City, Sunnyvale, Denver, Chicago, Kansas City, Washington DC, Los Angeles, San Diego, Albuquerque, Atlanta, El Paso, and Houston; future ESnet hubs marked separately. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (QWEST), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥10 Gb/s), lab-supplied links.]

8 ESnet 4 Backbone September 30, 2007
[Map: ESnet hubs at Seattle, Boise, Boston, Cleveland, New York City, Sunnyvale, Denver, Chicago, Kansas City, Washington DC, Los Angeles, San Diego, Albuquerque, Atlanta, El Paso, and Houston; future ESnet hubs marked separately. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (QWEST), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥10 Gb/s), lab-supplied links.]

9 ESnet 4 Backbone December 2007
[Map: ESnet hubs at Seattle, Boise, Boston, Cleveland, New York City, Sunnyvale, Denver, Chicago, Kansas City, Nashville, Washington DC, Los Angeles, San Diego, Albuquerque, Atlanta, El Paso, and Houston; future ESnet hubs marked separately. Legend: 10 Gb/s SDN core (NLR), 2.5 Gb/s IP tail (QWEST), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥10 Gb/s), lab-supplied links.]

10 ESnet 4 Backbone December, 2008
[Map: planned December 2008 configuration with ESnet hubs at Seattle, Boston, Cleveland, New York City, Sunnyvale, Denver, Chicago, Kansas City, Nashville, Washington DC, Los Angeles, San Diego, Albuquerque, Atlanta, El Paso, and Houston; "X2" annotations at several hubs (Cleveland, New York City, Washington DC, Kansas City, and others) indicating doubled circuits; future ESnet hubs marked separately. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (QWEST), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥10 Gb/s), lab-supplied links.]

11 ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (12/2007)
[Network map, geography only representational: ESnet core hubs (SEA, SUNN, SNV1, DENV, ALBU, ELPA, CHIC, NASH, ATLA, WASH, NEWY, and others) serving ~45 end-user sites: Office of Science sponsored (22), NNSA sponsored (13+), Joint sponsored (3), Laboratory sponsored (6), and Other sponsored (NSF LIGO, NOAA). Legend: international connections (1-10 Gb/s), 10 Gb/s SDN core (I2, NLR), 10 Gb/s IP core, MAN rings (≥10 Gb/s), lab-supplied links, OC12/GigEthernet, OC3 (155 Mb/s), and 45 Mb/s and less. Peerings include commercial peering points (Equinix, PAIX-PA), R&E networks (Internet2/Abilene, NLR-PacketNet, NYSERNet, MAN LAN, StarLight, USLHCNet, MREN, MAXGPoP, PacWave), and international peers such as GÉANT (France, Germany, Italy, UK, etc.), CA*net4 (Canada), SINet (Japan), AARNet (Australia), TANet2/ASCC (Taiwan), SingAREN, Kreonet2 (Korea), GLORIAD (Russia, China), KAREN/REANNZ (NSF/IRNC funded), ODN Japan Telecom America, Russia (BINP), AMPATH (S. America), and CERN via USLHCnet (DOE+CERN funded).]

12 ESnet4 Core Networks
[Map: planned ESnet4 core networks: … Gbps by … (10 Gb/s circuits) and … Gbps by … (100 Gb/s circuits). IP core hubs (Seattle, Boise, Sunnyvale, LA, San Diego, Denver, Albuquerque, El Paso, Kansas City, Tulsa, Houston, Chicago, Cleveland, Atlanta, Jacksonville, New York, Boston, Washington DC), SDN hubs, possible hubs, and primary DOE labs marked separately; the core network fiber path is ~14,000 miles / 24,000 km, with annotated spans of 1625 miles / 2545 km and 2700 miles / 4300 km. Legend: production IP core (10 Gbps), SDN core (… Gbps), MANs (20-60 Gbps) or backbone loops for site access, international connections, and high-speed cross-connects with Internet2/Abilene. International connections: Canada (CANARIE), CERN (30+ Gbps) via USLHCNet, Europe (GÉANT), Asia-Pacific, Australia, GLORIAD (Russia and China), and South America (AMPATH).]

13 A Tail of Two ESnet4 Hubs
[Photos: the Sunnyvale, CA hub and the Chicago hub, showing a Juniper MX960 switch, a Cisco 6509 switch, and Juniper T320 routers.]
ESnet's SDN backbone is implemented with layer 2 switches; the Cisco 6509s and Juniper MX960s each present their own unique challenges.

14 ESnet 4 Factoids as of January 21, 2008
ESnet4 installation to date:
- 32 new 10 Gb/s backbone circuits (over 3 times the number at the last JT meeting)
- 20,284 10 Gb/s backbone route miles (more than doubled since the last JT meeting)
- 10 new hubs since the last meeting, including Seattle, Sunnyvale, and Nashville
- 7 new routers and 4 new switches
- Chicago MAN now connected to the Level3 POP: 2 x 10GE to ANL, 2 x 10GE to FNAL, and 3 x 10GE to Starlight

15 ESnet Traffic Continues to Exceed 2 PetaBytes/Month
Overall traffic tracks the very large science use of the network: 2.7 PBytes in July 2007, up from 1 PByte in April 2006. ESnet traffic has historically increased 10x every 47 months.
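A quick arithmetic check on those figures: a 10x-per-47-months trend implies a monthly growth factor of 10^(1/47) ≈ 1.05. The short Python sketch below uses only the numbers quoted above to compare the trend's projection against the observed April 2006 to July 2007 growth.

# Sanity-check the "10x every 47 months" growth trend against the
# two traffic data points quoted on the slide.
monthly_factor = 10 ** (1 / 47)   # ~1.050x per month

pb_apr_2006 = 1.0                 # PBytes/month, April 2006
pb_jul_2007 = 2.7                 # PBytes/month, July 2007
months_elapsed = 15               # April 2006 -> July 2007

projected = pb_apr_2006 * monthly_factor ** months_elapsed
print(f"trend projects {projected:.2f} PB/month; observed {pb_jul_2007} PB/month")
# The trend projects ~2.08 PB/month, so mid-2007 traffic ran somewhat
# ahead of the long-term 10x/47-month curve.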

16 When a Few Large Data Sources/Sinks Dominate Traffic, It Is Not Surprising that Overall Network Usage Follows the Patterns of the Very Large Users. This Trend Will Reverse in the Next Few Weeks as the Next Round of LHC Data Challenges Kicks Off

17 ESnet Continues to be Highly Reliable, Even During the Transition
[Chart: site availability grouped into "5 nines" (>99.995%), "4 nines" (>99.95%), and "3 nines" (>99.5%) bands, with dually connected sites indicated.]
Note: These availability measures are only for the ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide circuits from the site to an ESnet hub, and therefore the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
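For concreteness, those availability bands translate directly into allowed downtime per year. The sketch below is plain arithmetic over the thresholds quoted above.

# Convert the slide's availability thresholds into maximum
# allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [("5 nines", 99.995),
                            ("4 nines", 99.95),
                            ("3 nines", 99.5)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{label} (>{availability}%): <= {downtime_min:.0f} min/year "
          f"({downtime_min / 60:.1f} h)")
# 5 nines -> ~26 min/year; 4 nines -> ~263 min/year (~4.4 h);
# 3 nines -> ~2628 min/year (~43.8 h)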

18 OSCARS Overview: Guaranteed Bandwidth Virtual Circuit Services
OSCARS: On-demand Secure Circuits and Advance Reservation System. Its guaranteed bandwidth virtual circuit service spans path computation, topology, reachability, constraints, scheduling, AAA, availability, provisioning, signalling, security, and resiliency/redundancy (an illustrative reservation sketch follows).
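To make the components above concrete, here is a minimal sketch of the kind of advance-reservation request such a service evaluates. The field names, endpoint names, and values are illustrative assumptions for this writeup, not the actual OSCARS API or schema.

# Hypothetical sketch of an advance bandwidth reservation, to illustrate
# the inputs a guaranteed-bandwidth virtual circuit service must weigh.
# Field and endpoint names are illustrative, NOT the real OSCARS API.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class CircuitRequest:
    src: str                    # source endpoint (e.g., a lab border router)
    dst: str                    # destination endpoint
    bandwidth_mbps: int         # guaranteed bandwidth to reserve
    start: datetime             # reservation window start
    end: datetime               # reservation window end
    vlan: Optional[int] = None  # set for a layer 2 (Ethernet VLAN) circuit

# A layer 2 circuit request for a 12-hour bulk transfer window
# (hypothetical endpoint names):
start = datetime(2008, 2, 1, 0, 0)
req = CircuitRequest(src="fnal-edge", dst="bnl-edge", bandwidth_mbps=5000,
                     start=start, end=start + timedelta(hours=12), vlan=3550)

# The service's job, per the component list above: compute a path through
# the topology that can carry 5 Gb/s for the whole window given existing
# reservations (path computation, scheduling, availability), check who may
# reserve it (AAA/security), then signal and provision at the start time.
print(req)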

19 OSCARS Status Update
ESnet-centric deployment:
- Prototype layer 3 (IP) guaranteed bandwidth virtual circuit service deployed in ESnet (1Q05)
- Prototype layer 2 (Ethernet VLAN) virtual circuit service deployed in ESnet (3Q07)
Inter-domain collaborative efforts:
- Terapaths (BNL): inter-domain interoperability for layer 3 virtual circuits demonstrated (3Q06); inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
- LambdaStation (FNAL)
- HOPI/DRAGON: inter-domain exchange of control messages demonstrated (1Q07); integration of OSCARS and DRAGON has been successful (1Q07)
- DICE: first draft of topology exchange schema formalized in collaboration with NMWG (2Q07); interoperability test demonstrated (3Q07); initial implementation of reservation and signaling messages demonstrated at SC07 (4Q07)
- UVA: integration of token-based authorization in OSCARS under testing
- Nortel: topology exchange demonstrated successfully (3Q07)

20 Network Measurement Update
ESnet:
- About 1/3 of the 10GE bandwidth test platforms and 1/2 of the latency test platforms for ESnet4 have been deployed.
- 10GE test systems are being used extensively for acceptance testing and debugging.
- Structured and ad-hoc external testing capabilities have not been enabled yet.
- Clocking issues at a couple of POPs are not resolved.
- Work is progressing on revamping the ESnet statistics collection, management, and publication systems: ESxSNMP, TSDB, and a perfSONAR Measurement Archive (MA); perfSONAR TS and the OSCARS topology DB; NetInfo is being restructured to be perfSONAR based. (See the counter-rate sketch after this list.)
LHC and perfSONAR:
- perfSONAR based network measurement solutions for the Tier 1/Tier 2 community are nearing completion.
- A proposal from DANTE to deploy a perfSONAR based network measurement service across the LHCOPN at all Tier 1 sites is being evaluated by the Tier 1 centers.
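As background on what an SNMP-based statistics pipeline such as ESxSNMP/TSDB fundamentally computes, the sketch below shows the core arithmetic: converting two samples of an interface octet counter into a bits-per-second rate. The counter values and 64-bit wrap assumption are illustrative, not ESnet data.

# Core arithmetic behind SNMP interface statistics collection:
# two samples of a (wrapping) octet counter -> bits-per-second rate.
# Sample values are illustrative, not real ESnet measurements.
COUNTER64_MAX = 2 ** 64  # e.g., a 64-bit high-capacity octet counter

def rate_bps(prev_octets: int, curr_octets: int, interval_s: float) -> float:
    """Average rate in bits/s between two counter samples,
    accounting for at most one counter wrap."""
    delta = curr_octets - prev_octets
    if delta < 0:                 # counter wrapped between samples
        delta += COUNTER64_MAX
    return delta * 8 / interval_s

# Two samples of a busy 10GE interface taken 30 seconds apart:
prev, curr = 9_120_000_000_000, 9_157_500_000_000
bps = rate_bps(prev, curr, 30.0)
print(f"{bps / 1e9:.1f} Gb/s")    # 10.0 Gb/s -> link saturated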

21 End

