
1 ESnet4 IP Network and Science Data Network Configuration and Roll Out Schedule. Projected schedule as of Sept. 2006. For more information contact William E. Johnston (wej@es.net), ESnet Dept. Head, or Joe Burrescia (joeb@es.net), ESnet General Manager.

2 DOE Office of Science and ESnet – the ESnet Mission
ESnet's primary mission is to enable the large-scale science that is the mission of DOE's Office of Science (SC):
o Sharing of massive amounts of data
o Supporting thousands of collaborators world-wide
o Distributed data processing
o Distributed data management
o Distributed simulation, visualization, and computational steering
ESnet provides network and collaboration services to Office of Science laboratories, and to sites of other DOE programs in cases where this increases cost effectiveness.

3 ESnet3 Today (Summer, 2006) Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators
[Network map: 42 end user sites - Office of Science sponsored (22), NNSA sponsored (12), Laboratory sponsored (6), joint sponsored (3), and other sponsored (NSF LIGO, NOAA) - connected by the ESnet IP core (packet over SONET optical ring and hubs), the ESnet Science Data Network (SDN) core, MAN rings (≥ 10 Gb/s), and lab-supplied links, at speeds ranging from 45 Mb/s and less through OC3 (155 Mb/s), OC12 (622 Mb/s) / GigEthernet, 2.5 Gb/s IP core, and 10 Gb/s IP and SDN core circuits. Peerings include Internet2/Abilene high-speed peering points, commercial peering points (MAE-E, PAIX-PA, Equinix, etc.), and international R&E networks: CERN (USLHCnet, CERN+DOE funded), GÉANT (France, Germany, Italy, UK, etc.), CA*net4 (Canada), SINet (Japan), GLORIAD (Russia, China), Kreonet2 (Korea), TANet2 (Taiwan), Singaren, AARNet (Australia), Russia (BINP), and AMPATH (S. America).]

4 A Changing Science Environment is the Key Driver of the Next Generation ESnet
Large-scale collaborative science – big facilities, massive data, thousands of collaborators – is now a dominant feature of the Office of Science ("SC") programs.
Distributed systems for data analysis, simulations, instrument operation, etc., are essential and are now common.
These changes are supported by network traffic pattern observations.

5 Footprint of Largest SC Data Sharing Collaborators (50% of ESnet traffic) Drives the Footprint that ESnet Must Support

6 Evolution of ESnet Traffic Patterns
[Chart: ESnet monthly accepted traffic in terabytes/month, January 2000 – July 2006, with the top 1000 host-host workflows highlighted.]
ESnet is currently transporting more than 1 petabyte (1000 terabytes) per month. About 45% of the total traffic is now generated by the top 1000 work flows (host-to-host traffic for the month).
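The 45% figure is the kind of number that falls out of a simple aggregation of flow records by host pair. A minimal sketch of that aggregation in Python, assuming flow records are already available as (source host, destination host, bytes) tuples; the function name and sample records are hypothetical, not part of ESnet's actual collection pipeline:

```python
from collections import defaultdict

def top_workflows(flow_records, n=1000):
    """Aggregate per-flow byte counts into host-to-host 'workflows'
    and return the n largest, plus their share of total traffic."""
    totals = defaultdict(int)
    grand_total = 0
    for src, dst, nbytes in flow_records:
        totals[(src, dst)] += nbytes      # one workflow = one (src, dst) pair
        grand_total += nbytes
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
    top_bytes = sum(b for _, b in ranked)
    share = top_bytes / grand_total if grand_total else 0.0
    return ranked, share

# Example with made-up records: two large "science" flows and one small one.
records = [("lbl-dtn1", "cern-se3", 8_000_000_000),
           ("fnal-dtn2", "in2p3-se1", 6_000_000_000),
           ("desktop-a", "mirror-b", 50_000_000)]
top, share = top_workflows(records, n=2)
print(top, f"{share:.0%} of monthly traffic")
```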

7 Large-Scale Flow Trends, June 2006 (subtitle: "Onslaught of the LHC")
Traffic volume of the top 30 AS-AS flows, June 2006 (AS-AS = mostly Lab to R&E site, a few Lab to R&E network, a few "other").
[Chart: terabytes per flow, broken out by DOE Office of Science program - LHC / High Energy Physics Tier 0 - Tier 1, LHC / HEP Tier 1 - Tier 2, HEP, Nuclear Physics, Lab - university, LIGO (NSF), Lab - commodity, and Math. & Comp. (MICS).]
(FNAL -> CERN traffic is comparable to BNL -> CERN, but it rides on layer 2 flows that are not yet monitored for traffic – monitoring is coming soon.)

8 Traffic Patterns are Changing Dramatically
While the total traffic is increasing exponentially:
o Peak flow – that is, system-to-system – bandwidth is decreasing
o The number of large flows is increasing
[Chart: total traffic, TBy/month, 1/05 – 6/06.]

9 The Onslaught of Grids
Question: Why is peak flow bandwidth decreasing while total traffic is increasing?
Answer: Most large data transfers are now done by parallel / Grid data movers. In June 2006, 72% of the hosts generating the top 1000 work flows were involved in parallel data movers (Grid applications). In the traffic plots, plateaus indicate the emergence of parallel transfer systems (a lot of systems transferring the same amount of data at the same time).
This, combined with the dramatic increase in the proportion of traffic due to large-scale science (now 50% of all traffic), represents the most significant traffic pattern change in the history of ESnet. This probably argues for a network architecture that favors path multiplicity and route diversity.
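The shift can be illustrated with back-of-the-envelope arithmetic (the numbers below are illustrative, not ESnet measurements): a parallel mover raises aggregate throughput and total traffic while lowering the bandwidth of any single host-to-host flow, which is exactly the observed pattern.

```python
# Illustrative only: one 10 TB dataset moved as a single stream vs. a parallel mover.
dataset_bytes = 10e12

single_stream_rate = 400e6 / 8        # one TCP stream at ~400 Mb/s -> bytes/s
parallel_streams = 16
per_stream_rate = 150e6 / 8           # each parallel stream runs slower, ~150 Mb/s

single_time = dataset_bytes / single_stream_rate
parallel_time = dataset_bytes / (parallel_streams * per_stream_rate)

print(f"single stream:  {single_time/3600:5.1f} h, peak per-flow rate 400 Mb/s")
print(f"parallel mover: {parallel_time/3600:5.1f} h, peak per-flow rate 150 Mb/s")
# Aggregate throughput (and total traffic) goes up, while the largest
# individual flow gets smaller -- matching the observed trend.
```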

10 Network Observation – Circuit-like Behavior
For large-scale data handling projects (e.g. LIGO - Caltech) the work flows (system-to-system data transfers) exhibit circuit-like behavior. This circuit has a duration of about 3 months (all of the top traffic-generating work flows are similar to this); this argues for a circuit-oriented element in the network architecture.
[Chart: gigabytes/day, 9/2004 – 9/2005, with gaps where no data was collected.]

11 The Evolution of ESnet Architecture
ESnet to 2005: a routed IP network with sites singly attached to a national core ring.
ESnet from 2006-07: a routed IP network with sites dually connected on metro area rings (MANs) or dually connected directly to the core ring, plus a switched network providing virtual circuit services for data-intensive science, and circuit connections to other science networks (e.g. USLHCNet).
[Diagram legend: ESnet sites, ESnet hubs / core network connection points, metro area rings (MANs), other IP networks, ESnet IP core, ESnet Science Data Network (SDN) core.]

12 ESnet4
Internet2 has partnered with Level 3 Communications Co. for a dedicated optical fiber infrastructure with a national footprint and a rich topology - the "Internet2 Network":
o The fiber will be provisioned with Infinera Dense Wave Division Multiplexing equipment that uses an advanced, integrated optical-electrical design
o Level 3 will maintain the fiber and the DWDM equipment
o The DWDM equipment will initially be provisioned to provide 10 optical circuits (lambdas - λs) across the entire fiber footprint (80 λs is the maximum)
ESnet has partnered with Internet2 to:
o Share the optical infrastructure
o Develop new circuit-oriented network services
o Explore mechanisms that could be used for the ESnet Network Operations Center (NOC) and the Internet2/Indiana University NOC to back each other up for disaster recovery purposes

13 ESnet4
ESnet will build its next generation IP network and its new circuit-oriented Science Data Network primarily on the Internet2 circuits (λs) that are dedicated to ESnet, together with a few National Lambda Rail and other circuits:
o ESnet will provision and operate its own routing and switching hardware, installed in various commercial telecom hubs around the country, as it has done for the past 20 years
o ESnet's peering relationships with the commercial Internet, various US research and education networks, and numerous international networks will continue and evolve as they have for the past 20 years

14 ESnet4
ESnet4 will also involve an expansion of the multi-10 Gb/s Metropolitan Area Rings in the San Francisco Bay Area, Chicago, Long Island, and the Newport News, VA / Washington, DC area:
o These provide multiple, independent connections for ESnet sites to the ESnet core network
o They are expandable
Several 10 Gb/s links provided by the Labs will be used to establish multiple, independent connections to the ESnet core:
o Currently PNNL and ORNL

15 Internet2 Network Footprint
Core network fiber path is ~14,000 miles / 24,000 km.
[Map with scale annotations: 1625 miles / 2545 km and 2700 miles / 4300 km.]

16 ESnet4 IP + SDN Configuration, mid-August, 2007
All circuits are 10 Gb/s.
[National map of hubs and PoPs; the legend distinguishes layer 1 optical nodes at eventual ESnet points of presence, layer 1 optical nodes not currently in ESnet plans, ESnet IP switch-only hubs, ESnet IP switch/router hubs, ESnet SDN switch hubs, lab sites, the ESnet IP core, the ESnet Science Data Network core, ESnet SDN core / NLR links, existing NLR circuits, existing site-supplied circuits, lab-supplied links, LHC-related links, MAN links, and international IP connections. Scale annotations: 1625 miles / 2545 km, 2700 miles / 4300 km.]

17 ESnet4 Metro Area Rings, 2007 Configurations
All circuits are 10 Gb/s.
o San Francisco Bay Area MAN: LBNL, SLAC, JGI, LLNL, SNLL, NERSC
o West Chicago MAN: FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet
o Long Island MAN: BNL, 32 AoA NYC, USLHCNet
o Newport News - Elite: JLab, ELITE, ODU, MATP, Wash., DC
o MAX
[National map of hubs and PoPs with the standard legend (see slide 16), showing the MAN locations.]

18 ESnet4 IP + SDN, 2008 Configuration
All circuits are 10 Gb/s, or multiples thereof, as indicated (e.g. 2λ = 20 Gb/s).
[National map of hubs and PoPs with the standard legend (see slide 16), annotated with per-segment λ counts for the IP and SDN cores.]

19 ESnet4 Metro Area Rings, 2008 Configurations
All circuits are 10 Gb/s, or multiples thereof, as indicated (e.g. 2λ = 20 Gb/s).
o San Francisco Bay Area MAN: LBNL, SLAC, JGI, LLNL, SNLL, NERSC
o West Chicago MAN: FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet
o Long Island MAN: BNL, 111-8th, 32 AoA NYC, USLHCNet
o Newport News - Elite: JLab, ELITE, ODU, MATP, Wash., DC
[National map of hubs and PoPs with the standard legend (see slide 16) and per-segment λ counts.]

20 ESnet4 IP + SDN, 2009 Configuration
[National map of hubs and PoPs with the standard legend (see slide 16), annotated with per-segment λ counts for the IP and SDN cores.]

21 ESnet4 Metro Area Rings, 2009 Configurations
o San Francisco Bay Area MAN: LBNL, SLAC, JGI, LLNL, SNLL, NERSC
o West Chicago MAN: FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet
o Long Island MAN: BNL, 111-8th, 32 AoA NYC, USLHCNet
o Newport News - Elite: JLab, ELITE, ODU, MATP, Wash., DC
[National map of hubs and PoPs with the standard legend (see slide 16) and per-segment λ counts.]

22 ESnet4 IP + SDN, 2010 Configuration
[National map of hubs and PoPs with the standard legend (see slide 16), annotated with per-segment λ counts for the IP and SDN cores.]

23 ESnet4 IP + SDN, 2011 Configuration
[National map of hubs and PoPs with the standard legend (see slide 16), annotated with per-segment λ counts for the IP and SDN cores.]

24 ESnet4
Core networks: 50-60 Gbps by 2009-2010, 200-600 Gbps by 2011-2012 (IP peering connections not shown). Core network fiber path is ~14,000 miles / 24,000 km.
[National and international map: production IP core (10 Gbps), SDN core (20-30-40-50 Gbps), MANs (20-60 Gbps) or backbone loops for site access, and international connections; IP core hubs, SDN hubs, primary DOE Labs, possible hubs, and high-speed cross-connects with Internet2/Abilene. International connections include Canada (CANARIE), Europe (GEANT), CERN (30+ Gbps, via USLHCNet), Asia-Pacific, GLORIAD (Russia and China), Australia, and South America (AMPATH). Scale annotations: 1625 miles / 2545 km, 2700 miles / 4300 km.]

25 Outstanding Issues (From Rome Meeting, 4/2006)
Is a single point of failure at the Tier 1 edges a reasonable long-term design?
Bandwidth guarantees in outage scenarios:
o How do the networks signal to the applications that something has failed?
o How do sites sharing a link during a failure coordinate bandwidth utilization?
What expectations should we set for fail-over times?
o Should BGP timers be tuned?
We need to monitor the backup paths' ability to transfer packets end-to-end to ensure they will work when needed.
o How are we going to do it? (One possible approach is sketched below.)
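One possible way to probe a backup path end-to-end, sketched here as an illustration rather than an agreed operational design: periodically open a TCP connection to a test endpoint that is reachable only via the backup path, and alert if it fails or slows down. The endpoint names and port are placeholders.

```python
import socket, time

# Hypothetical test endpoints reachable only over the backup paths.
BACKUP_PATH_PROBES = [("perf-test.backup1.example.net", 5001),
                      ("perf-test.backup2.example.net", 5001)]

def probe(host, port, timeout=5.0):
    """Return TCP connect time in seconds, or None if the path fails."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def check_backup_paths():
    for host, port in BACKUP_PATH_PROBES:
        rtt = probe(host, port)
        if rtt is None:
            print(f"ALERT: backup path via {host} is not passing traffic")
        else:
            print(f"backup path via {host}: connect in {rtt*1000:.1f} ms")

if __name__ == "__main__":
    check_backup_paths()   # in practice, run from cron or a monitoring system
```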

26 Possible USLHCNet - US T1 Redundant Connectivity
[Diagram: Harvey Newman.]

27 ESnet4 Metro Area Rings, 2009 Configurations
o West Chicago MAN: FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet
o Long Island MAN: BNL, 111-8th, 32 AoA NYC, USLHCNet
o Newport News - Elite
o San Francisco Bay Area MAN
[National map of hubs and PoPs with the standard legend (see slide 16) and per-segment λ counts.]

28 ESnet Metropolitan Area Network Ring Architecture for High Reliability Sites
MAN fiber ring: 2-4 x 10 Gbps channels provisioned initially, with expansion capacity to 16-64.
ESnet managed virtual circuit services are tunneled through the IP backbone (virtual circuits to the site); ESnet managed λ / circuit services carry SDN circuits to site systems; an independent port card supports multiple 10 Gb/s line interfaces.
[Diagram labels: large science site with site router, site gateway router, site edge router, and site LAN; ESnet MAN switch and MAN site switch on the fiber ring; ESnet production IP core hub (T320 IP core router), ESnet SDN core hub and SDN core switch, IP core east/west and SDN core east/west; US LHCnet switches; ESnet production IP service.]

29 Details

30 Link Substrate Rollout (ver. 2006-08-23)
Ring F: NYC-DC-Chicago (Nov 17, 2006); NYC-Boston-Cleveland (Mar 1, 2007)
Ring C: Houston-El Paso-Denver-KC (Apr 27, 2007)
Ring E: DC-Atlanta-Chicago (Mar 2, 2007)
Ring B: El Paso-LA-Sunnyvale (Apr 27, 2007); Denver-SLC-Sunnyvale (May 24, 2007)
Ring D: Atlanta-Houston-KC-Chicago (Mar 2, 2007)
Ring B: Sunnyvale-Seattle-SLC (Jul 1, 2007)

31 ESnet IP + SDN 2007 Configuration and Roll Out - Jan 1, 2007
1. First complement of circuits and PoP co-location become available.
All circuits are 10 Gb/s.
[National map of hubs and PoPs with the standard legend (see slide 16), existing NLR circuits, existing site-supplied circuits, and Internet2 circuit numbers marked on each segment.]

32 ESnet IP + SDN 2007 Configuration and Roll Out - Apr. 15, 2007
1. ESnet equipment installed in Washington, DC, Dec. 31, 2006
2. ESnet equipment installed in NYC, Jan. 31, 2007
3. Connecting circuits tested and accepted, March 15, 2007
4. ESnet equipment installed in Chicago, March 31, 2007
5. Connecting circuits tested and accepted, April 15, 2007
All circuits are 10 Gb/s.
[National map of hubs and PoPs with the standard legend (see slide 16) and Internet2 circuit numbers.]

33 ESnet IP + SDN 2007 Configuration and Roll Out - May 15, 2007
1. ESnet equipment installed in Atlanta, Apr. 30, 2007
2. Atlanta - Wash. circuit tested and accepted, May 15, 2007
All circuits are 10 Gb/s.
[National map of hubs and PoPs with the standard legend (see slide 16) and Internet2 circuit numbers.]

34 ESnet IP + SDN 2007 Configuration and Roll Out - Jun. 15, 2007
1. ESnet equipment installed in Albuquerque and Denver, May 31, 2007
2. Connecting circuits tested and accepted, June 15, 2007
All circuits are 10 Gb/s.
[National map of hubs and PoPs with the standard legend (see slide 16) and Internet2 circuit numbers.]

35 ESnet IP + SDN 2007 Configuration and Roll Out - July 15, 2007
1. ESnet equipment installed in Sunnyvale, CA, June 30, 2007
2. Albuquerque-Sunnyvale circuit tested and accepted, July 15, 2007
3. Sunnyvale-Denver circuit tested and accepted, July 15, 2007
All circuits are 10 Gb/s.
[National map of hubs and PoPs with the standard legend (see slide 16) and Internet2 circuit numbers.]

36 ESnet IP + SDN Configuration and Roll Out - August 15, 2007 (final configuration for 2007)
1. ESnet equipment installed in Seattle, July 31, 2007
2. Denver-Seattle-Sunnyvale circuits tested and accepted, August 15, 2007
All circuits are 10 Gb/s.
[National map of hubs and PoPs with the standard legend (see slide 16) and Internet2 circuit numbers.]

37 ESnet IP + SDN, 2008 Configuration and Roll Out
1. Contract for 2008-2011 completed by mid-FY2007.
2. Last three IP circuits (Kansas City-Houston, Chicago-Nashville-Atlanta, and Cleveland-Washington) and associated ESnet equipment are installed, tested, and accepted.
3. First SDN ring is completed (Sunnyvale-Atlanta) and associated ESnet equipment is installed, tested, and accepted.
4. Second SDN northern tier circuits installed, tested, and accepted.
All circuits are 10 Gb/s, or multiples thereof, as indicated (e.g. 2λ = 20 Gb/s).
[National map of hubs and PoPs with the standard legend (see slide 16), per-segment λ counts, and Internet2 circuit numbers.]

38 ESnet IP + SDN, 2009 Configuration and Roll Out
1. Third SDN northern tier circuits installed, tested, and accepted.
2. Second SDN southern tier circuits installed, tested, and accepted.
All circuits are 10 Gb/s, or multiples thereof, as indicated (e.g. 2λ = 20 Gb/s).
[National map of hubs and PoPs with the standard legend (see slide 16), per-segment λ counts, and Internet2 circuit numbers.]

39 ESnet IP + SDN, 2010 Configuration and Roll Out
1. Fourth SDN northern tier circuits installed, tested, and accepted.
2. Third SDN southern tier circuits installed, tested, and accepted.
All circuits are 10 Gb/s, or multiples thereof, as indicated (e.g. 2λ = 20 Gb/s).
[National map of hubs and PoPs with the standard legend (see slide 16), per-segment λ counts, and Internet2 circuit numbers.]

40 ESnet IP + SDN, 2011 Configuration and Roll Out
1. Fifth SDN northern tier circuits installed, tested, and accepted.
2. Fourth SDN southern tier circuits installed, tested, and accepted.
All circuits are 10 Gb/s, or multiples thereof, as indicated (e.g. 2λ = 20 Gb/s).
[National map of hubs and PoPs with the standard legend (see slide 16), per-segment λ counts, and Internet2 circuit numbers.]

41 Virtual Circuit Network Services
Every requirements workshop involving the science community has put bandwidth-on-demand as the highest priority, e.g. for:
o Massive data transfers for collaborative analysis of experiment data
o Real-time data analysis for remote instruments
o Control channels for remote instruments
o Deadline scheduling for data transfers
o "Smooth" interconnection for complex Grid workflows

42 What is the Nature of the Required Circuits?
Today:
o Primarily to support bulk data transfer with deadlines
In the near future:
o Support for widely distributed Grid workflow engines
o Real-time instrument operation
o Coupled, distributed applications
To get an idea of how circuit services might be used, look at the one-year history of the flows that are currently the top 20, and estimate from the flow history what the characteristics of a circuit set up to manage each flow would be (a sketch of such an estimate follows).
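A minimal sketch of that estimate, assuming a year of daily transfer volumes (gigabytes/day) for one host pair is available; the activity threshold and the sample series are illustrative:

```python
def circuit_characteristics(daily_gb, active_threshold_gb=100):
    """From a year of daily volumes for one flow, estimate how long a
    dedicated circuit would stay up and what bandwidth it would need."""
    # Group consecutive "active" days into circuit-like episodes.
    episodes, current = [], []
    for gb in daily_gb:
        if gb >= active_threshold_gb:
            current.append(gb)
        elif current:
            episodes.append(current)
            current = []
    if current:
        episodes.append(current)

    for ep in episodes:
        peak_gbps = max(ep) * 8 / 86400   # GB/day -> average Gb/s on the peak day
        print(f"episode: {len(ep)} days, peak day needs ~{peak_gbps:.2f} Gb/s sustained")
    return episodes

# Illustrative series: ~90 busy days, a quiet gap, then a shorter burst.
history = [0]*30 + [400]*90 + [0]*60 + [900]*20 + [0]*165
circuit_characteristics(history)
```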

43 Source and Destination of the Top 20 Flows, Sept. 2005
[Chart.]

44 What are Characteristics of Today's Flows - How "Dynamic" a Circuit?
LIGO - Caltech: over 1 year the "circuit" duration is about 3 months.
[Chart: gigabytes/day, with gaps where no data was collected.]

45 What are Characteristics of Today's Flows - How "Dynamic" a Circuit?
SLAC - INFN (IT): over 1 year the "circuit" duration is about 1 to 3 months.
[Chart: gigabytes/day, with gaps where no data was collected.]

46 What are Characteristics of Today's Flows - How "Dynamic" a Circuit?
FNAL - IN2P3 (FR): over 1 year the "circuit" duration is about 2 to 3 months.
[Chart: gigabytes/day, with gaps where no data was collected.]

47 What are Characteristics of Today's Flows - How "Dynamic" a Circuit?
INFN (IT) - SLAC: over 1 year the "circuit" duration is about 3 weeks to 3 months.
[Chart: gigabytes/day, with gaps where no data was collected.]

48 What are Characteristics of Today's Flows - How "Dynamic" a Circuit?
SLAC - IN2P3 (FR): over 1 year the "circuit" duration is about 1 day to 1 week.
[Chart: gigabytes/day, with gaps where no data was collected.]

49 Characteristics of Today's Circuits - How "Dynamic"?
These flows are candidates for circuit-based services for several reasons:
o Traffic engineering - to manage the traffic on the IP production backbone
o To satisfy deadline scheduling requirements
o Traffic isolation to permit the use of efficient, but TCP-unfriendly, data transfer protocols
Despite the long circuit durations, this cannot be managed by hand - there are too many circuits. There must be automated scheduling, authorization, path analysis and selection, and path setup (a sketch of the path selection step follows).
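For the path analysis and selection step, the core computation could be as simple as pruning links that lack the requested bandwidth and then running a shortest-path search over what remains. The sketch below uses a made-up topology and hop-count Dijkstra; it is an illustration of the idea, not ESnet's actual scheduler:

```python
import heapq

# Illustrative topology: available bandwidth (Gb/s) on each directed link.
LINKS = {
    ("FNAL", "CHI"): 10, ("CHI", "NYC"): 20, ("NYC", "AofA"): 10,
    ("FNAL", "KC"): 10, ("KC", "WASH"): 5, ("WASH", "AofA"): 10,
}

def find_circuit_path(src, dst, gbps):
    """Prune links with too little headroom, then run Dijkstra by hop count."""
    graph = {}
    for (a, b), cap in LINKS.items():
        if cap >= gbps:                      # keep only links that can carry the request
            graph.setdefault(a, []).append(b)
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for nxt in graph.get(node, []):
            if nxt not in dist or d + 1 < dist[nxt]:
                dist[nxt] = d + 1
                heapq.heappush(heap, (d + 1, nxt, path + [nxt]))
    return None                              # no path satisfies the request

print(find_circuit_path("FNAL", "AofA", 8))   # -> ['FNAL', 'CHI', 'NYC', 'AofA']
print(find_circuit_path("FNAL", "AofA", 15))  # -> None (no 15 Gb/s path here)
```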

50 Virtual Circuit Network Services
A top priority of the science community.
Today:
o Primarily to support bulk data transfer with deadlines
In the near future:
o Support for widely distributed Grid workflow engines
o Real-time instrument operation
o Coupled, distributed applications
To get an idea of how circuit services might be used to support the current trends, look at the one-year history of the flows that are currently the top 20, and estimate from the flow history what the characteristics of a circuit set up to manage each flow would be.

