1 ESnet Status and Plans
Internet2 All Hands Meeting, Sept. 28, 2004
William E. Johnston, ESnet Dept. Head and Senior Scientist
R. P. Singh, Federal Project Manager
Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads
Gizella Kapus, Resource Manager
and the ESnet Team
Lawrence Berkeley National Laboratory

2 ESnet Provides Full Internet Service to DOE Facilities and Collaborators with High-Speed Access to Major Science Collaborators
[Map: ESnet mid-2004 – 42 end-user sites on a Packet-over-SONET optical ring (Qwest) with hubs at SEA, SNV, ELP, ALB, CHI, NYC, DC, and ATL; peering points include MAE-E, MAE-W, FIX-W, PAIX-W, PAIX-E, NY-NAP, Equinix, and PNWG, plus high-speed Abilene peerings (SNV, CHI, NYC/MAN LAN) and a DOE link to CERN.]
Site sponsorship: Office of Science (22), NNSA (12), joint (3), laboratory (6), other (NSF LIGO, NOAA).
International connections include SInet and KDDI (Japan), Japan–Russia (BINP), CA*net4, MREN, Netherlands, Russia, StarTap, Taiwan (ASCC and TANet2), Singaren, Australia, France, and Switzerland, plus Starlight in Chicago and GEANT (Germany, France, Italy, UK, etc.).
Link-speed legend: international (high speed), OC192 (10 Gb/s optical), OC48 (2.5 Gb/s optical), Gigabit Ethernet (1 Gb/s), OC12 ATM and OC12 (622 Mb/s), OC3 (155 Mb/s), T3 (45 Mb/s), T1–T3, T1 (1.5 Mb/s).

3 ESnet's Peering Infrastructure Connects the DOE Community With its Collaborators
ESnet provides access to all of the Internet by managing the full complement of global Internet routes (about 150,000) at 10 general/commercial peering points, plus high-speed peerings with Abilene and the international R&E networks. This is a lot of work, and is very visible, but it provides full Internet access for DOE.
[Diagram: ESnet peering connections – 26 peers at the NYC hubs (MAE-E, NY-NAP, PAIX-E), 22 peers via MAX GPOP, Abilene plus 7 universities, 39 peers at the SNV hub (MAE-W, FIX-W, PAIX-W), CENIC/SDSC and PNW-GPOP/CalREN2 regional peerings, 19 peers at the CHI NAP (Distributed 6TAP), Equinix Ashburn and San Jose, LANL TECHnet, and international peers (GEANT – Germany, France, Italy, UK, etc. – SInet/KEK, Japan–Russia (BINP), KDDI, CA*net4, CERN, MREN, Netherlands, Russia, StarTap, Taiwan (ASCC, TANet2), Singaren, Australia); sectors shown are university, international, and commercial.]
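To make concrete what "managing the full complement of global Internet routes" involves, here is a minimal sketch of longest-prefix-match forwarding, the lookup a border router performs against its BGP-learned route table. The prefixes and next-hop names below are invented for illustration; only Python's standard ipaddress module is used.

```python
import ipaddress

# Toy routing table: prefix -> next hop. A border router of this era held
# roughly 150,000 such entries learned via BGP; these three are invented.
ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"): "commercial-peering-point",  # default route
    ipaddress.ip_network("128.3.0.0/16"): "esnet-site-link",
    ipaddress.ip_network("198.32.8.0/24"): "abilene-peering",
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: of all routes containing dst, the most
    specific (longest) prefix determines where the packet is sent."""
    addr = ipaddress.ip_address(dst)
    candidates = [net for net in ROUTES if addr in net]
    best = max(candidates, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("128.3.41.5"))  # esnet-site-link (the /16 beats the default)
print(next_hop("8.8.8.8"))     # commercial-peering-point (default route)
```

Without the full table, traffic to destinations not covered by a specific route would have nowhere to go; carrying all of the routes is what makes ESnet a full Internet service rather than a private mission network.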

4 Major ESnet Changes in FY04
o Dramatic increase in international traffic as major large-scale science experiments start to ramp up
o CERNlink connected at 10 Gb/s
o GEANT (the main European R&E network, comparable to Abilene and ESnet) connected at 2.5 Gb/s
o Abilene–ESnet high-speed cross-connects (3 @ 2.5 Gb/s and 1 @ 10 Gb/s)
o In order to meet the Office of Science program needs, a new architectural approach has been developed:
  - Science Data Network (a second core network for high-volume traffic)
  - Metropolitan Area Networks (MANs)

5 Predictive Drivers for Change
Workshop organized by the Office of Science, August 13-15, 2002. Mary Anne Scott, chair; Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White.
Workshop panel chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett.
Focused on science requirements that drive:
o Advanced Network Infrastructure
o Middleware Research
o Network Research
o Network Governance Model
The requirements for DOE science were developed by the OSC science community, representing the major DOE science disciplines:
o Climate
o Spallation Neutron Source
o Macromolecular Crystallography
o High Energy Physics
o Magnetic Fusion Energy Sciences
o Chemical Sciences
o Bioinformatics
Available at www.es.net/#research

6 Evolving Quantitative Science Requirements for Networks

Science Area                   | Today end-to-end            | 5-year end-to-end                     | 5-10-year end-to-end                 | Remarks
-------------------------------|-----------------------------|---------------------------------------|--------------------------------------|-----------------------------------------
High Energy Physics            | 0.5 Gb/s                    | 100 Gb/s                              | 1000 Gb/s                            | high bulk throughput
Climate (data & computation)   | 0.5 Gb/s                    | 160-200 Gb/s                          | N x 1000 Gb/s                        | high bulk throughput
SNS NanoScience                | not yet started             | 1 Gb/s                                | 1000 Gb/s + QoS for control channel  | remote control and time-critical throughput
Fusion Energy                  | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB per 20 sec. burst) | N x 1000 Gb/s                        | time-critical throughput
Astrophysics                   | 0.013 Gb/s (1 TBy/week)     | N*N multicast                         | 1000 Gb/s                            | computational steering and collaborations
Genomics (data & computation)  | 0.091 Gb/s (1 TBy/day)      | 100s of users                         | 1000 Gb/s + QoS for control channel  | high throughput and steering
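The "today" figures above are unit conversions from observed data volumes. A quick sketch of the arithmetic, assuming decimal units (1 TBy = 10^12 bytes) and 86,400 seconds per day:

```python
def avg_gbps(total_bytes: float, seconds: float) -> float:
    """Average throughput in Gb/s for a given volume moved in a given time."""
    return total_bytes * 8 / seconds / 1e9

TBY = 1e12  # assuming decimal terabytes

print(round(avg_gbps(1 * TBY, 7 * 86400), 3))  # 1 TBy/week -> 0.013 Gb/s (Astrophysics)
print(round(avg_gbps(1 * TBY, 86400), 3))      # 1 TBy/day  -> ~0.093 Gb/s (Genomics, listed as 0.091)
print(round(avg_gbps(500e6, 20), 3))           # 500 MB in a 20 s burst -> ~0.2 Gb/s (Fusion, listed as 0.198)
```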

7 Observed Drivers for Change
ESnet inter-sector traffic summary, Jan 2003 / Feb 2004: 1.7X overall traffic increase, 1.9X OSC increase. (The international traffic is increasing due to BaBar at SLAC and the LHC tier 1 centers at FNAL and BNL.)
Note that more than 90% of the ESnet traffic is OSC traffic.
ESnet Appropriate Use Policy (AUP): all ESnet traffic must originate and/or terminate at an ESnet site (no transit traffic is allowed).
DOE is a net supplier of data because DOE facilities are used by universities and commercial entities, as well as by DOE researchers.
[Diagram: traffic between DOE sites and the peering points, commercial, R&E (mostly universities), and international (almost entirely R&E) sectors; green = traffic entering ESnet, blue = traffic leaving; percentage pairs (Jan 2003 / Feb 2004) give each sector's share of total ingress or egress traffic, e.g. DOE collaborator traffic, incl. data: 72/68% in, 53/49% out.]
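The AUP reduces to a simple predicate on a flow's endpoints. A minimal sketch of the rule as stated above (how endpoints are classified as ESnet sites is assumed to come from ESnet's site registry and is not shown):

```python
def aup_permits(src_is_esnet_site: bool, dst_is_esnet_site: bool) -> bool:
    """ESnet AUP: traffic must originate and/or terminate at an ESnet site.
    A flow where neither endpoint is an ESnet site is transit, and refused."""
    return src_is_esnet_site or dst_is_esnet_site

assert aup_permits(True, True)        # DOE lab <-> DOE lab: allowed
assert aup_permits(True, False)       # DOE lab <-> university: allowed
assert not aup_permits(False, False)  # third party <-> third party: no transit
```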

8 ESnet Top 20 Data Flows, 24 hr. avg., 2004-04-20
Fermilab (US) → CERN
SLAC (US) → IN2P3 (FR)
SLAC (US) → INFN Padua (IT)
Fermilab (US) → U. Chicago (US)
CEBAF (US) → IN2P3 (FR)
INFN Padua (IT) → SLAC (US)
U. Toronto (CA) → Fermilab (US)
Helmholtz-Karlsruhe (DE) → SLAC (US)
DOE Lab → DOE Lab
SLAC (US) → JANET (UK)
Fermilab (US) → JANET (UK)
Argonne (US) → Level3 (US)
Argonne (US) → SURFnet (NL)
IN2P3 (FR) → SLAC (US)
Fermilab (US) → INFN Padua (IT)
[Chart scale marker: 1 Terabyte/day]
A small number of science users account for a significant fraction of all ESnet traffic:
o Since BaBar production started, the top 20 ESnet flows have consistently accounted for > 50% of ESnet's monthly total traffic (~130 of 250 TBy/mo)
o As LHC data starts to move, this will increase a lot (200-2000 times)

9 ESnet Top 10 Data Flows, 1 week avg., 2004-07-01
SLAC (US) → INFN Padua (IT): 5.9 Terabytes
SLAC (US) → IN2P3 (FR): 5.3 Terabytes
FNAL (US) → IN2P3 (FR): 2.2 Terabytes
CERN → FNAL (US): 1.3 Terabytes
FNAL (US) → U. Nijmegen (NL): 1.0 Terabytes
U. Toronto (CA) → Fermilab (US): 0.9 Terabytes
SLAC (US) → Helmholtz-Karlsruhe (DE): 0.9 Terabytes
FNAL (US) → Helmholtz-Karlsruhe (DE): 0.6 Terabytes
FNAL (US) → SDSC (US): 0.6 Terabytes
U. Wisc. (US) → FNAL (US): 0.6 Terabytes
The traffic is not transient: daily and weekly averages are about the same.

10 ESnet and Abilene
Abilene and ESnet together provide most of the nation's transit networking for science:
o Abilene provides national transit networking for most of the US universities by interconnecting the regional networks (mostly via the GigaPoPs)
o ESnet connects the DOE Labs
ESnet and Abilene have recently established high-speed interconnects and cross-network routing.
Goal: DOE Lab ↔ Univ. connectivity should be as good as Lab ↔ Lab and Univ. ↔ Univ. Constant monitoring is the key.

11 Monitoring DOE Lab ↔ University Connectivity
o Current monitor infrastructure (red) and target infrastructure
o Uniform distribution around ESnet and around Abilene
o Need to set up similar infrastructure with GEANT
Initial site monitors: SDSC, LBNL, FNAL, NCSU, BNL, OSU.
[Map: ESnet and Abilene backbones showing DOE Labs and universities with monitors, network hubs (SEA, SNV, LA, SDG, ELP, ALB, DEN, KC, HOU, CHI, IND, ATL, NYC, DC, ORNL), high-speed ESnet ↔ Internet2/Abilene cross-connects, and links to Japan, AsiaPac, and CERN/Europe.]

12 Initial Monitoring is with OWAMP One-Way Delay Tests
These measurements are very sensitive – e.g., an NCSU metro DWDM reroute of about 350 microseconds is easily visible.
[Plot: one-way delay vs. time, 41.5-42.0 ms scale, showing a clear step at the fiber re-route.]
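OWAMP reports one-way (rather than round-trip) delay by timestamping each probe at the sender and subtracting at the receiver, which is why it needs well-synchronized clocks and why a ~350 µs routing change stands out so clearly. A schematic sketch of the computation with synthetic timestamps (not the actual OWAMP wire protocol):

```python
from statistics import median

def one_way_delays_ms(send_ts: list[float], recv_ts: list[float]) -> list[float]:
    """One-way delay per probe packet, in ms. Assumes the two hosts'
    clocks are synchronized (OWAMP relies on NTP or GPS for this)."""
    return [(r - s) * 1000.0 for s, r in zip(send_ts, recv_ts)]

# Synthetic data: a path whose delay steps from 41.55 ms to 41.90 ms
# after a metro DWDM re-route -- a ~350 microsecond shift.
sends = [i * 0.1 for i in range(10)]  # one probe every 100 ms
recvs = [s + (0.04155 if i < 5 else 0.04190) for i, s in enumerate(sends)]

delays = one_way_delays_ms(sends, recvs)
print([round(d, 2) for d in delays])                                  # 41.55 x5, then 41.9 x5
print(round(median(delays[5:]) - median(delays[:5]), 3), "ms shift")  # 0.35
```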

13 Initial Monitor Results (http://measurement.es.net)

14 ESnet, GEANT, and CERNlink
o GEANT plays a role in Europe similar to Abilene and ESnet in the US – it interconnects the European national research and education networks, to which the European R&E sites connect
o GEANT currently carries essentially all ESnet international traffic (LHC use of CERNlink to DOE labs is still ramping up)
o GN2 is the second phase of the GEANT project
  - The architecture of GN2 is remarkably similar to the new ESnet Science Data Network + IP core network model
o CERNlink will be the main CERN-to-US LHC data path
  - Both US LHC tier 1 centers are on ESnet (FNAL and BNL)
  - ESnet connects directly to CERNlink at 10 Gb/s
  - The new ESnet architecture (Science Data Network) will accommodate the anticipated 40 Gb/s from the LHC to the US

15 GEANT and CERNlink
A recent meeting between ESnet and GEANT produced proposals in a number of areas designed to ensure robust and reliable science data networking between ESnet and Europe:
o A US-EU joint engineering task force ("ITechs") should be formed to coordinate US-EU science data networking
  - Will include, e.g., ESnet, Abilene, GEANT, and CERN
  - Will develop joint operational procedures
o ESnet will collaborate in GEANT development activities to ensure some level of compatibility:
  - Bandwidth-on-demand (dynamic circuit setup)
  - Performance measurement and authentication
  - End-to-end QoS and performance enhancement
  - Security
o 10 Gb/s connectivity between GEANT and ESnet will be established by mid-2005, with a 2.5 Gb/s backup to be added

16 New ESnet Architecture Needed to Accommodate OSC
The essential DOE Office of Science requirements cannot be met with ESnet's current telecom-provided, hub-and-spoke architecture.
The core ring has good capacity and resiliency against single-point failures, but the point-to-point tail circuits to the sites are neither reliable nor scalable to the required bandwidth.
[Diagram: ESnet core ring with hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP); DOE sites hang off the ring on tail circuits.]

17 A New ESnet Architecture
Goals:
o full redundant connectivity for every site
o high-speed access for every site (at least 10 Gb/s)
Three-part strategy:
1) MAN rings provide dual site connectivity and much higher site-to-core bandwidth
2) A Science Data Network core for:
  - multiply connected MAN rings for protection against hub failure
  - expanded capacity for science data
  - a platform for provisioned, guaranteed-bandwidth circuits
  - an alternate path for production IP traffic
  - carrier-circuit and fiber-access neutral hubs
3) An IP core (e.g., the current ESnet core) for high reliability

18 A New ESnet Architecture: Science Data Network + IP Core
[Diagram: the ESnet IP core ring (New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), El Paso (ELP), Atlanta (ATL)) plus a second ESnet Science Data Network core, joined through Metropolitan Area Rings that attach the DOE/OSC Labs; existing, new, and possible new hubs are marked, with links to GEANT (Europe), Asia-Pacific, and CERN.]

19 ESnet Long-Term Architecture
[Diagram: ESnet Metropolitan Area Networks – an ESnet SDN core ring carrying production IP and provisioned circuits over optical channels/lambdas, and an ESnet IP core ring through which provisioned circuits are tunneled via MPLS; MAN rings of one or more independent fiber pairs connect each site (typ.) to the ESnet hubs (typ.).]
Management domains:
o optical channel (λ) equipment – carrier management domain
o 10 Gigabit Ethernet switch(es) – ESnet management domain
o core router – ESnet management domain
o site router – site management domain
o ESnet management and monitoring equipment at hubs and sites

20 ESnet New Architecture, Part 1: MANs
The MAN architecture is designed to provide:
o At least one redundant path from sites to the ESnet hub
o Scalable bandwidth options from sites to the ESnet hub
o The first step toward point-to-point provisioned circuits (see the sketch after this list)
  - With endpoint authentication, these are private and intrusion-resistant circuits, so they should be able to bypass site firewalls if the endpoints trust each other
  - End-to-end provisioning will initially be provided by a combination of Ethernet switch management of λ paths in the MAN and MPLS paths in the ESnet POS backbone (OSCARS project)
  - Provisioning will initially be done by manual circuit configuration, and on demand in the future (OSCARS)
o Cost savings over two or three years, when the sites' future needs for increased bandwidth are included
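A provisioned circuit in this scheme amounts to a reservation (authenticated endpoints, guaranteed bandwidth, time window) that the provisioning system maps onto a λ path in the MAN plus an MPLS path in the POS core. Below is a hypothetical sketch of such a request and its two-segment setup, loosely in the spirit of what OSCARS-style on-demand provisioning would need; all names and fields are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CircuitRequest:
    """Hypothetical guaranteed-bandwidth circuit reservation."""
    src_site: str        # authenticated endpoint, e.g. "FNAL"
    dst_site: str        # e.g. "CERN"
    bandwidth_mbps: int  # guaranteed rate to reserve
    start: datetime      # reservation window
    end: datetime

def provision(req: CircuitRequest) -> list[str]:
    """Sketch of the two-segment path described above: an Ethernet-switched
    lambda segment across the MAN, then an MPLS path tunneled through the
    POS backbone. Initially both steps are manual; OSCARS aims to set them
    up on demand."""
    man_segment = f"lambda: {req.src_site} -> MAN hub"
    core_segment = (f"mpls: MAN hub -> {req.dst_site} "
                    f"@ {req.bandwidth_mbps} Mb/s, {req.start} to {req.end}")
    return [man_segment, core_segment]

req = CircuitRequest("FNAL", "CERN", 10_000,
                     datetime(2005, 1, 10, 8, 0), datetime(2005, 1, 12, 8, 0))
for segment in provision(req):
    print(segment)
```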

21 ESnet MAN Architecture – logical (Chicago, e.g.)
[Diagram: a MAN ring connecting the ANL and FNAL site gateway routers and site equipment/LANs to the Qwest hub (ESnet IP core, T320 router) and to StarLight (international peerings, ESnet SDN core, T320, and the DOE-funded CERN link); the ring carries both the ESnet production IP service and ESnet-managed λ / circuit services, the latter tunneled through the IP backbone, with ESnet management and monitoring equipment at each site.]

22 ESnet Metropolitan Area Network rings (MANs)
In the near term, MAN rings will be built in the San Francisco and Chicago areas. In the long term there will likely be MAN rings on Long Island, in the Newport News, VA area, in northern New Mexico, in Idaho-Wyoming, etc.
San Francisco Bay Area MAN ring progress:
o Feasibility has been demonstrated with an engineering study from CENIC
o A competitive bid and "best value source selection" methodology will select the ring provider within two months

23 SF Bay Area MAN
[Diagram: proposed San Francisco Bay Area MAN ring connecting LBNL, NERSC, the Joint Genome Institute, LLNL, SNLL, and SLAC through the Qwest/ESnet hub and a Level 3 hub, with the ESnet IP core ring continuing to Chicago and El Paso, the ESnet Science Data Network core, and NLR / UltraScienceNet (Seattle and Chicago, LA and San Diego).]

24 Proposed Chicago MAN
o ESnet CHI-HUB: Qwest – NBC Bldg., 455 N Cityfront Plaza Dr, Chicago, IL 60611
o ANL: 9700 S Cass Ave, Lemont, IL 60439
o FNAL: Feynman Computing Center, Batavia, IL 60510
o StarLight: 910 N Lake Shore Dr, Chicago, IL 60611

25 ESnet New Architecture – Part 2: Science Data Network
SDN (second core) rationale: add major points of presence in carrier-circuit and fiber-access neutral facilities at Sunnyvale, Seattle, San Diego, and Chicago to:
o Enable an UltraSciNet cross-connect with ESnet
o Provide access to NLR and other fiber-based networks
o Allow for more competition in acquiring circuits
Initial steps toward the Science Data Network (SDN):
o Provide a second, independent path between major northern-route hubs
  - an alternate route for ESnet core IP traffic
o Provide high-speed paths on the West Coast to reach PNNL, GA, and AsiaPac peering
o Increase ESnet connectivity to other R&E networks

26 ESnet New Architecture Goal FY05: Science Data Network Phase 1 and SF BA MAN
[Map: the existing ESnet IP core (Qwest) ring (SEA, SNV, DEN, ALB, ELP, CHI, NYC, DC, ATL, SDG) with the first phase of the new ESnet SDN core and the SF Bay Area MAN added; MANs and high-speed cross-connects with Internet2/Abilene at major DOE Office of Science sites; lab-supplied and major international links to Japan, Europe, and CERN (2 x 10 Gb/s), plus UltraSciNet; new and current ESnet hubs are marked, and the legend distinguishes 2.5 Gb/s, 10 Gb/s, and future phases.]

27 ESnet New Architecture Goal FY06: Science Data Network Phase 2 and Chicago MAN
[Map: as in FY05, with the ESnet SDN core extended (phase 2), the Chicago MAN added, and the CERN connection grown to 3 x 10 Gb/s; the legend again distinguishes 2.5 Gb/s, 10 Gb/s, and future phases.]

28 ESnet Beyond FY07
[Map: the production IP ESnet core (Qwest), the ESnet SDN core, and a high-impact science core, with lab-supplied and major international links to Japan, Europe, AsiaPac, and CERN; indicated capacities of 10 Gb/s, 30 Gb/s, and 40 Gb/s, with the legend distinguishing 2.5 Gb/s, 10 Gb/s, and future phases.]

