
1 ESnet4 IP Network and Science Data Network Configuration and Roll-Out Schedule – Projected Schedule as of September 2006. For more information contact William E. Johnston (wej@es.net), ESnet Dept. Head, or Joe Burrescia (joeb@es.net), ESnet General Manager.

2 DOE Office of Science and ESnet – the ESnet Mission
ESnet's primary mission is to enable the large-scale science that is the mission of DOE's Office of Science (SC):
o Sharing of massive amounts of data
o Supporting thousands of collaborators world-wide
o Distributed data processing
o Distributed data management
o Distributed simulation, visualization, and computational steering
ESnet provides network and collaboration services to Office of Science laboratories and to sites of other DOE programs in cases where this increases cost effectiveness.

3 ESnet3 Today (Summer 2006) Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators
[Map slide: the ESnet3 IP core – a packet-over-SONET optical ring with hubs – serving 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA). Link types range from 45 Mb/s and less, OC3 (155 Mb/s), OC12 (622 Mb/s) / GigEthernet, and OC12 ATM up to the 2.5 Gb/s IP core, 10 Gb/s IP core, 10 Gb/s SDN core, MAN rings (≥10 Gb/s), and lab-supplied links. Peerings include commercial peering points (MAE-E, PAIX-PA, Equinix, etc.), high-speed peering points with Internet2/Abilene, other R&E peering points (MREN, StarTap, MAXGPoP, PNWGPoP/PacificWave, MAN LAN, NASA Ames, UNM), and international connections: CERN (USLHCnet, CERN+DOE funded), GÉANT (France, Germany, Italy, UK, etc.), CA*net4 (Canada), SINet (Japan), GLORIAD (Russia, China), Kreonet2 (Korea), AARNet (Australia), TANet2/ASCC (Taiwan), SingAREN, Russia (BINP), and AMPATH (S. America).]

4 A Changing Science Environment is the Key Driver of the Next Generation ESnet
Large-scale collaborative science – big facilities, massive data, thousands of collaborators – is now a dominant feature of the Office of Science ("SC") programs
Distributed systems for data analysis, simulations, instrument operation, etc., are essential and are now common
These changes are supported by network traffic pattern observations

5 Footprint of Largest SC Data Sharing Collaborators (50% of ESnet traffic) Drives the Footprint that ESnet Must Support

6 Evolution of ESnet Traffic Patterns
[Chart slide: ESnet Monthly Accepted Traffic, January 2000 – June 2006; y-axis in terabytes/month.]
ESnet is currently transporting more than 1 petabyte (1000 terabytes) per month
More than 50% of the traffic is now generated by the top 100 work flows (system-to-system transfers among the top 100 sites)
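The concentration figure above lends itself to a simple computation. Below is a minimal sketch, assuming a hypothetical (src, dst, bytes) flow-record format rather than ESnet's actual accounting tooling, of how the top-100 work-flow share of total traffic could be derived:

    from collections import defaultdict

    def top_flow_share(flow_records, n=100):
        """Fraction of total bytes carried by the n largest
        system-to-system (src, dst) work flows."""
        totals = defaultdict(int)
        for src, dst, nbytes in flow_records:
            totals[(src, dst)] += nbytes   # aggregate per host pair
        ranked = sorted(totals.values(), reverse=True)
        total = sum(ranked)
        return sum(ranked[:n]) / total if total else 0.0

    # Toy data: two large science flows dominating many small commodity flows.
    records = [("fnal", "cern", 10**12), ("bnl", "cern", 9 * 10**11)] \
              + [("hostA", "hostB", 10**6)] * 1000
    print(f"top-2 share: {top_flow_share(records, n=2):.1%}")  # ~99.9%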

7 Large-Scale Flow Trends, June 2006 (subtitle: "Onslaught of the LHC")
[Chart slide: traffic volume in terabytes of the top 30 AS-AS flows, June 2006 (AS-AS = mostly Lab to R&E site, a few Lab to R&E network, a few "other"), categorized by DOE Office of Science program: LHC / High Energy Physics Tier 0 - Tier 1, LHC / HEP T1 - T2, HEP, Nuclear Physics, Lab - university, LIGO (NSF), Lab - commodity, and Math. & Comp. (MICS).]
(FNAL -> CERN traffic is comparable to BNL -> CERN, but is on layer 2 flows that are not yet monitored for traffic – soon.)

8 Traffic Patterns are Changing Dramatically
While the total traffic is increasing exponentially:
o Peak flow – that is, system-to-system – bandwidth is decreasing
o The number of large flows is increasing
[Chart slide: total traffic in TBy/month, with markers at 1/05, 7/05, 1/06, and 6/06 and a 2 TB/month reference level.]

9 The Onslaught of Grids
Question: Why is peak flow bandwidth decreasing while total traffic is increasing?
Answer: Most large data transfers are now done by parallel / Grid data movers
In June 2006, 72% of the hosts generating the top 1000 work flows were involved in parallel data movers (Grid applications)
This, combined with the dramatic increase in the proportion of traffic due to large-scale science (now 50% of all traffic), represents the most significant traffic pattern change in the history of ESnet
This probably argues for a network architecture that favors path multiplicity and route diversity
[Chart annotation: plateaus indicate the emergence of parallel transfer systems – a lot of systems transferring the same amount of data at the same time.]
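Since the slide attributes the pattern to parallel data movers, here is a minimal sketch of that transfer pattern, assuming a hypothetical transfer_chunk() stream primitive (GridFTP-style movers work analogously): a fixed dataset split across N simultaneous streams means each individual flow carries 1/N of the data even as aggregate throughput grows, which is exactly why peak per-flow bandwidth falls while total traffic rises.

    from concurrent.futures import ThreadPoolExecutor

    def transfer_chunk(offset, length):
        # Placeholder for one stream of a parallel mover: a real
        # implementation would send bytes [offset, offset+length)
        # over its own TCP connection.
        return (offset, length)

    def parallel_transfer(total_bytes, streams=8):
        chunk = total_bytes // streams
        with ThreadPoolExecutor(max_workers=streams) as pool:
            futures = [pool.submit(transfer_chunk, i * chunk, chunk)
                       for i in range(streams)]
            return [f.result() for f in futures]

    # 8 streams each carry 1/8 of an 8 GB dataset; doubling the stream
    # count halves per-flow volume without reducing aggregate throughput.
    print(parallel_transfer(8 * 10**9, streams=8))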

10 Network Observation – Circuit-like Behavior
[Chart slide: gigabytes/day for one work flow, 9/2004 – 9/2005, with a gap marked "(no data)".]
For large-scale data handling projects (e.g. LIGO - Caltech) the work flows (system-to-system data transfers) exhibit circuit-like behavior
This circuit has a duration of about 3 months (all of the top traffic-generating work flows are similar to this) – this argues for a circuit-oriented element in the network architecture
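A minimal sketch, using a hypothetical per-host-pair daily-volume series rather than ESnet's flow data, of how such circuit-like work flows could be flagged automatically:

    def is_circuit_like(daily_gb, min_days=60, min_gb_per_day=100):
        # daily_gb: gigabytes/day for one host pair, in date order.
        # "Circuit-like" here means a run of at least min_days
        # consecutive days each at or above min_gb_per_day.
        run = best = 0
        for gb in daily_gb:
            run = run + 1 if gb >= min_gb_per_day else 0
            best = max(best, run)
        return best >= min_days

    # ~3 months of steady transfers, as in the LIGO - Caltech example:
    print(is_circuit_like([500] * 90))      # True
    print(is_circuit_like([500, 0] * 45))   # False: never sustained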

11 The Evolution of ESnet Architecture
[Diagram slide; legend: ESnet sites, ESnet hubs / core network connection points, metro area rings (MANs), other IP networks, ESnet IP core, ESnet Science Data Network (SDN) core.]
ESnet to 2005: a routed IP network with sites singly attached to a national core ring
ESnet from 2006-07:
o A routed IP network with sites dually connected on metro area rings, or dually connected directly to the core ring
o A switched network providing virtual circuit services for data-intensive science
o Circuit connections to other science networks (e.g. USLHCNet)

12 ESnet4
Internet2 has partnered with Level 3 Communications Co. for a dedicated optical fiber infrastructure with a national footprint and a rich topology – the "Internet2 Network"
o The fiber will be provisioned with Infinera Dense Wave Division Multiplexing equipment that uses an advanced, integrated optical-electrical design
o Level 3 will maintain the fiber and the DWDM equipment
o The DWDM equipment will initially be provisioned to provide 10 optical circuits (lambdas – λs) across the entire fiber footprint (80 λs is the maximum)
ESnet has partnered with Internet2 to:
o Share the optical infrastructure
o Develop new circuit-oriented network services
o Explore mechanisms that could be used for the ESnet Network Operations Center (NOC) and the Internet2/Indiana University NOC to back each other up for disaster recovery purposes
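A back-of-the-envelope check of the capacity implied above, assuming the 10 Gb/s-per-lambda figure used throughout this deck:

    GBPS_PER_LAMBDA = 10           # each optical circuit carries 10 Gb/s
    initial_lambdas, max_lambdas = 10, 80

    # 10 initial lambdas -> 100 Gb/s; 80-lambda build-out -> 800 Gb/s.
    print(f"initial footprint capacity: {initial_lambdas * GBPS_PER_LAMBDA} Gb/s")
    print(f"maximum footprint capacity: {max_lambdas * GBPS_PER_LAMBDA} Gb/s")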

13 ESnet4
ESnet will build its next generation IP network and its new circuit-oriented Science Data Network primarily on the Internet2 Network circuits (λs) that are dedicated to ESnet, together with a few National Lambda Rail and other circuits
o ESnet will provision and operate its own routing and switching hardware, installed in various commercial telecom hubs around the country, as it has done for the past 20 years
o ESnet's peering relationships with the commercial Internet, various US research and education networks, and numerous international networks will continue and evolve as they have for the past 20 years

14 ESnet4
ESnet4 will also involve an expansion of the multi-10 Gb/s Metropolitan Area Rings in the San Francisco Bay Area, Chicago, Long Island, and the Newport News, VA / Washington, DC area; these:
o provide multiple, independent connections for ESnet sites to the ESnet core network
o are expandable
Several 10 Gb/s links provided by the Labs will be used to establish multiple, independent connections to the ESnet core
o currently PNNL and ORNL

15 Internet2 Network Footprint
[Map slide.] Core network fiber path is ~14,000 miles / 24,000 km

16 Internet2 Network Footprint
[Map slide with example segment distances: 1625 miles / 2545 km and 2700 miles / 4300 km.] Core network fiber path is ~14,000 miles / 24,000 km

17 ESnet4 IP + SDN Configuration, mid-August 2007
[Map slide: national topology across the core hub cities (Seattle, Portland, Boise, Sunnyvale, LA, San Diego, Salt Lake City, Denver, Albuquerque, El Paso, KC, Tulsa, Houston, Baton Rouge, Chicago, Indianapolis, Nashville, Atlanta, Jacksonville, Raleigh, Cleveland, Pittsburgh, Wash DC, Philadelphia, NYC, Boston), with example segment distances of 1625 miles / 2545 km and 2700 miles / 4300 km. All circuits are 10 Gb/s. Legend: layer 1 optical nodes at eventual ESnet Points of Presence; ESnet IP switch-only hubs; ESnet IP switch/router hubs; ESnet SDN switch hubs; layer 1 optical nodes not currently in ESnet plans; lab sites; existing site-supplied circuits; existing NLR circuits; ESnet IP core; ESnet Science Data Network core; ESnet SDN core (NLR links); lab-supplied links; LHC-related links; MAN links; international IP connections.]

18 ESnet4 Metro Area Rings, 2007 Configurations
[Map slide: same national topology and legend as slide 17 (SDN core NLR links marked "existing"), with the metro area rings and MAX called out. All circuits are 10 Gb/s.]
o San Francisco Bay Area MAN: LBNL, SLAC, JGI, LLNL, SNLL, NERSC
o West Chicago MAN: FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet
o Long Island MAN: BNL, 32 AoA NYC, USLHCNet
o Newport News - Elite: JLab, ELITE, ODU, MATP, Wash., DC

19 ESnet4 IP + SDN, 2008 Configuration
[Map slide: same national topology and legend as slide 17 (ESnet IP core marked 1λ), with per-segment lambda counts annotated (1λ, 2λ, 3λ; Indianapolis marked ?λ). All circuits are 10 Gb/s, or multiples thereof, as indicated (e.g. 2λ = 20 Gb/s).]

20 ESnet4 Metro Area Rings, 2008 Configurations
[Map slide: the 2008 configuration of slide 19 with the metro area rings called out.]
o San Francisco Bay Area MAN: LBNL, SLAC, JGI, LLNL, SNLL, NERSC
o West Chicago MAN: FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet
o Long Island MAN: BNL, 111-8th, 32 AoA NYC, USLHCNet
o Newport News - Elite: JLab, ELITE, ODU, MATP, Wash., DC

21 ESnet4 IP + SDN, 2009 Configuration
[Map slide: same national topology and legend as slide 17 (ESnet IP core marked 1λ), with per-segment lambda counts annotated (1λ-4λ; Indianapolis marked ?λ).]

22 ESnet4 Metro Area Rings, 2009 Configurations
[Map slide: the 2009 configuration of slide 21 with the metro area rings called out.]
o San Francisco Bay Area MAN: LBNL, SLAC, JGI, LLNL, SNLL, NERSC
o West Chicago MAN: FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet
o Long Island MAN: BNL, 111-8th, 32 AoA NYC, USLHCNet
o Newport News - Elite: JLab, ELITE, ODU, MATP, Wash., DC

23 ESnet4 IP + SDN, 2010 Configuration
[Map slide: same national topology and legend as slide 17 (ESnet IP core marked 1λ), with per-segment lambda counts annotated (2λ-6λ; Indianapolis marked >1λ).]

24 ESnet4 IP + SDN, 2011 Configuration
[Map slide: same national topology and legend as slide 17 (ESnet IP core marked 1λ), with per-segment lambda counts annotated (3λ-8λ; one node marked >1λ).]

25 ESnet4
[Map slide: planned IP core and Science Data Network core topology with IP core hubs, SDN hubs, primary DOE Labs, possible hubs, and high-speed cross-connects with Internet2/Abilene; hub cities include Seattle, Sunnyvale, LA, San Diego, Boise, Denver, Albuquerque, Kansas City, Tulsa, Houston, Chicago, Atlanta, Jacksonville, Cleveland, Washington DC, New York, and Boston. International connections: CERN (30+ Gbps, via USLHCNet), Canada (CANARIE), Europe (GEANT), Asia-Pacific, Australia, GLORIAD (Russia and China), and South America (AMPATH).]
Core networks: 50-60 Gbps by 2009-2010, 200-600 Gbps by 2011-2012 (IP peering connections not shown)
Core network fiber path is ~14,000 miles / 24,000 km (example segments: 1625 miles / 2545 km and 2700 miles / 4300 km)
Legend: production IP core (10 Gbps); SDN core (20-30-40-50 Gbps); MANs (20-60 Gbps) or backbone loops for site access; international connections

26 Outstanding Issues (From Rome Meeting, 4/2006)
Is a single point of failure at the Tier 1 edges a reasonable long-term design?
Bandwidth guarantees in outage scenarios:
o How do the networks signal to the applications that something has failed?
o How do sites sharing a link during a failure coordinate BW utilization?
What expectations should we set for fail-over times?
o Should BGP timers be tuned?
We need to monitor the backup paths' ability to transfer packets end-to-end to ensure they will work when needed
o How are we going to do it? (a minimal sketch follows below)
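On that last point, here is a minimal sketch – not an existing ESnet tool, and the target hostname is hypothetical – of a periodic end-to-end reachability check against probe addresses that should be routed over the backup path:

    import subprocess

    # Hypothetical probe addresses pinned to the backup path.
    BACKUP_PATH_TARGETS = ["t1-backup-probe.example.net"]

    def path_alive(host, count=3, timeout_s=5):
        # True if host answers ICMP echo (Linux ping flags: -c packet
        # count, -W per-reply timeout in seconds).
        result = subprocess.run(
            ["ping", "-c", str(count), "-W", str(timeout_s), host],
            capture_output=True)
        return result.returncode == 0

    for target in BACKUP_PATH_TARGETS:
        state = "up" if path_alive(target) else "DOWN - investigate backup path"
        print(f"{target}: {state}")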

27 Possible USLHCNet - US T1 Redundant Connectivity
[Diagram slide; credit: Harvey Newman.]

28 ESnet4 Metro Area Rings, 2009 Configurations
[Map slide: repeat of slide 22 – the 2009 configuration with the West Chicago MAN (FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet), Long Island MAN (BNL, 111-8th, 32 AoA NYC, USLHCNet), Newport News - Elite, and San Francisco Bay Area MAN called out.]

