ESnet4: Networking for the Future of DOE Science


1 ESnet4: Networking for the Future of DOE Science
CEFNet Workshop, Sept. 19, 2007
William E. Johnston, ESnet Department Head and Senior Scientist
Energy Sciences Network, Lawrence Berkeley National Laboratory
This talk is available online (the URL was not captured in this transcript).
Networking for the Future of Science

2 DOE’s Office of Science: Enabling Large-Scale Science
The Office of Science (SC) is the single largest supporter of basic research in the physical sciences in the United States, "...providing more than 40 percent of total funding... for the Nation's research programs in high-energy physics, nuclear physics, and fusion energy sciences." SC funds 25,000 PhDs and PostDocs.
A primary mission of SC's National Labs is to build and operate very large scientific instruments - particle accelerators, synchrotron light sources, very large supercomputers - that generate massive amounts of data and involve very large, distributed collaborations.
ESnet - the Energy Sciences Network - is an SC program whose primary mission is to enable the large-scale science of the Office of Science, which depends on:
- Sharing of massive amounts of data
- Supporting thousands of collaborators world-wide
- Distributed data processing
- Distributed data management
- Distributed simulation, visualization, and computational steering
- Collaboration with the US and international research and education community
In addition to the National Labs, ESnet serves much of the rest of DOE, including NNSA - about 75,000-100,000 users in total.
The point here is that most modern, large-scale science and engineering is multi-disciplinary and must use geographically distributed components; this is what has motivated an investment of roughly $50-75M/yr in Grid technology by the US, UK, European, Japanese, and Korean governments.

3 What ESnet Is
- A large-scale IP network built on a national circuit infrastructure with high-speed connections to all major US and international research and education (R&E) networks
- An organization of 30 professionals structured for the service; the ESnet organization builds and operates the network
- An operating entity with an FY06 budget of $26.6M
- A tier 1 ISP (direct peerings with all major networks)
- The primary DOE network provider
  - Provides production Internet service to all of the major DOE Labs* and most other DOE sites
  - Based on DOE Lab populations, it is estimated that between 50,000 and 100,000 users depend on ESnet for global Internet access
  - Additionally, each year more than 18,000 non-DOE researchers from universities, other government agencies, and private industry use Office of Science facilities
* PNNL supplements its ESnet service with commercial service

4 ESnet History
- ESnet0/MFENet (mid-1970s-1986): 56 Kbps microwave and satellite links
- ESnet1: ESnet formed to serve the Office of Science; 56 Kbps X.25, growing to 45 Mbps T3
- ESnet2: Partnered with Sprint to build the first national-footprint ATM network; IP over a 155 Mbps ATM net
- ESnet3: Partnered with Qwest to build a national Packet over SONET network and optical-channel Metropolitan Area fiber; IP over 10 Gbps SONET
- ESnet4: Partnering with Internet2 and the US Research & Education community to build a dedicated national optical network; IP and virtual circuits on a configurable optical infrastructure with at least 5-6 optical channels of 10-100 Gbps each

5 ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (2007)
(Network map, ~45 end sites: Office of Science-sponsored and other-sponsored labs, commercial peering points (Equinix, MAE-E, MAE-W, PAIX-PA), specific R&E network peers, and international R&E connections - e.g. GÉANT (France, Germany, Italy, UK, etc.), SINet (Japan), CA*net4 (Canada), AARNet (Australia), TANet2/ASCC (Taiwan), KAREN/REANNZ (New Zealand), Kreonet2 (Korea), GLORIAD (Russia, China), AMPATH (S. America), and USLHCNet to CERN (DOE+CERN funded). Legend: international high-speed links, 10 Gb/s SDN core, 10 Gb/s IP core, MAN rings (≥ 10 Gb/s), lab-supplied links, OC12/GigEthernet, OC3 (155 Mb/s), 45 Mb/s and less.)

6 ESnet FY06 Budget is Approximately $26.6M
Approximate budget categories:
- Funding: SC operating $20.1M; SC special projects (Chicago and Long Island MANs) $1.2M; SC R&D $0.5M; Other DOE $3.8M; Carryover $1M. Total funds: $26.6M
- Expenses: Circuits & hubs $12.7M; Internal infrastructure, security, disaster recovery $3.4M; Engineering & research $2.9M; WAN equipment $2.0M; Collaboration services $1.6M; Special projects (Chicago and LI MANs) $1.2M; Operations $1.1M; Target carryover $1.0M; Management and compliance $0.7M. Total expenses: $26.6M
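As a quick sanity check on the breakdown above (the grouping into funding sources and expense categories is inferred from the slide's two pie charts), the figures on each side do sum to the stated $26.6M total:

```python
# Quick arithmetic check of the FY06 budget figures quoted on this slide.
# The split into "funding" vs "expenses" reflects the slide's two pie charts.
funding = {
    "SC operating": 20.1,
    "SC special projects": 1.2,
    "SC R&D": 0.5,
    "Other DOE": 3.8,
    "Carryover": 1.0,
}
expenses = {
    "Circuits & hubs": 12.7,
    "Internal infrastructure, security, disaster recovery": 3.4,
    "Engineering & research": 2.9,
    "WAN equipment": 2.0,
    "Collaboration services": 1.6,
    "Special projects (Chicago and LI MANs)": 1.2,
    "Operations": 1.1,
    "Target carryover": 1.0,
    "Management and compliance": 0.7,
}
print(round(sum(funding.values()), 1))   # 26.6 ($M)
print(round(sum(expenses.values()), 1))  # 26.6 ($M)
```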

7 A Changing Science Environment is the Key Driver of the Next Generation ESnet
- Large-scale collaborative science - big facilities, massive data, thousands of collaborators - is now a significant aspect of the Office of Science ("SC") program
- The SC science community is almost equally split between Labs and universities, and SC facilities have users worldwide
- Very large international (non-US) facilities (e.g. LHC and ITER) and international collaborators are now a key element of SC science
- Distributed systems for data analysis, simulations, instrument operation, etc., are essential and are now common; in fact they dominate data analysis, which now generates 50% of all ESnet traffic

8 Science Networking Requirements Aggregation Summary
(Columns of the original table: Science Drivers (Science Areas / Facilities), End2End Reliability, Connectivity, End2End Bandwidth Today, End2End Bandwidth in 5 Years, Traffic Characteristics, Network Services; blank cells are omitted below.)
- Magnetic Fusion Energy - reliability: 99.999% (impossible without full redundancy); connectivity: DOE sites, US Universities, Industry; bandwidth: 200+ Mbps today, 1 Gbps in 5 years; traffic: bulk data, remote control; services: guaranteed bandwidth, guaranteed QoS, deadline scheduling
- NERSC and ACLF - connectivity: DOE sites, US Universities, International, other ASCR supercomputers; bandwidth: 10 Gbps today, 20 to 40 Gbps in 5 years; traffic: remote file system sharing; services: deadline scheduling, PKI / Grid
- NLCF - bandwidth: backbone bandwidth parity, both today and in 5 years
- Nuclear Physics (RHIC) - bandwidth: 12 Gbps today, 70 Gbps in 5 years
- Spallation Neutron Source - reliability: high (24x7 operation); bandwidth: 640 Mbps today, 2 Gbps in 5 years

9 Science Network Requirements Aggregation Summary
(Columns as in the previous table; blank cells are omitted below. These are the immediate requirements and drivers.)
- Advanced Light Source - connectivity: DOE sites, US Universities, Industry; bandwidth: 1 TB/day (300 Mbps) today, 5 TB/day (1.5 Gbps) in 5 years; traffic: bulk data, remote control; services: guaranteed bandwidth, PKI / Grid
- Bioinformatics - bandwidth: 625 Mbps today (12.5 Gbps within two years), 250 Gbps in 5 years; traffic: point-to-multipoint; services: high-speed multicast
- Chemistry / Combustion - bandwidth: 10s of gigabits per second in 5 years
- Climate Science - connectivity: international; bandwidth: 5 PB per year, 5 Gbps, in 5 years
- High Energy Physics (LHC) - reliability: 99.95+% (less than 4 hrs/year of downtime); connectivity: US Tier1 (FNAL, BNL), US Tier2 (universities), International (Europe, Canada); bandwidth: 10 Gbps today, 60 to 80 Gbps in 5 years (30-40 Gbps per US Tier1); traffic: coupled data analysis processes; services: traffic isolation

10 The Next Level of Detail In Network Requirements
Consider LHC networking as an example, because its requirements are fairly well characterized.

11 LHC will be the largest scientific experiment, and will generate more data, than the scientific community has ever tried to manage. The data management model involves a world-wide collection of data centers that store, manage, and analyze the data and that are integrated through network connections with typical speeds in the 10+ Gbps range - "closely coordinated and interdependent distributed systems that must have predictable intercommunication for effective functioning" [ICFA SCIC].

12 The current, pre-production data movement sets the scale of the LHC distributed data analysis problem: the CMS experiment at LHC is already moving 32 petabytes/yr among the Tier 1 and Tier 2 sites.
(Chart: accumulated data, in terabytes, moved among the CMS data centers ("Tier 1" sites) and analysis centers ("Tier 2" sites) during the past four months - about 8 petabytes of data.) [LHC/CMS]

13 Estimated Aggregate Link Loadings, 2007-08
(Map of the ESnet4 backbone showing estimated aggregate committed bandwidths per link, in Gb/s, for 2007-08; unlabeled links are 10 Gb/s. Legend: ESnet IP core, ESnet Science Data Network (SDN) core / NLR links, lab-supplied links, LHC-related links, MAN links, international IP connections; layer 1 optical nodes at eventual ESnet points of presence, ESnet IP switch-only hubs, ESnet IP switch/router hubs, ESnet SDN switch hubs, lab sites.)

14 ESnet4 2007-8 Estimated Bandwidth Commitments
(Map of 2007-08 estimated bandwidth commitments, in Gb/s, overlaid on the ESnet4 topology, including the West Chicago MAN (600 W. Chicago, Starlight, FNAL, ANL, USLHCNet to CERN), the Long Island MAN (BNL, 32 AoA NYC, USLHCNet), the San Francisco Bay Area MAN (LBNL, SLAC, JGI, LLNL, SNLL, NERSC), and the Newport News area ring (JLab, ELITE, ODU, MATP, Wash. DC). All circuits are 10 Gb/s; unlabeled links are 10 Gb/s. Legend as in the previous map.)

15 Are These Estimates Realistic? YES!
(Chart of FNAL outbound CMS traffic in MBytes/sec: Max = 1064 MBy/s (8.5 Gb/s), Average = 394 MBy/s (3.2 Gb/s).)
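A minimal sketch of the unit conversion behind the figures quoted above (bytes per second to bits per second), just to make the MBy/s to Gb/s mapping explicit; the decimal (10^6 / 10^9) prefixes are an assumption that matches the slide's rounding:

```python
# Convert the quoted FNAL outbound CMS rates from MBytes/s to Gbit/s.
def mbytes_per_sec_to_gbits_per_sec(mby_per_s: float) -> float:
    return mby_per_s * 8 * 1e6 / 1e9

print(round(mbytes_per_sec_to_gbits_per_sec(1064), 1))  # ~8.5 Gb/s (peak)
print(round(mbytes_per_sec_to_gbits_per_sec(394), 1))   # ~3.2 Gb/s (average)
```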

16 Footprint of the Largest Office of Science Labs' Data-Sharing Collaborators - Large-Scale Science is an International Endeavor
- The top 100 data flows generate 50% of all ESnet traffic (ESnet handles about 3x10^9 flows/mo.)
- 91 of the top 100 flows are from the Labs to other institutions (shown on the map; CY2005 data)

17 Observed Evolution of Historical ESnet Traffic Patterns
ESnet total traffic passed 2 petabytes/mo around mid-April 2007; ESnet is currently transporting more than 1 petabyte (1000 terabytes) per month, and more than 50% of the traffic is now generated by the top 100 site-to-site workflows - large-scale science dominates all ESnet traffic. ESnet traffic has increased by 10X every 47 months, on average, since 1990.
(Chart: ESnet monthly accepted traffic, January 2000 - June 2007, in terabytes/month, with the top 100 site-to-site workflows broken out; site-to-site workflow data is not available for the earlier years.)

18 Log Plot of ESnet Monthly Accepted Traffic, January, 1990 – June, 2007
ESnet traffic has increased by 10X every 47 months, on average, since 1990.
(Log-scale chart of ESnet monthly accepted traffic, January 1990 - June 2007, in terabytes/month, with the 10X milestones marked: 100 MBy/mo in Aug. 1990; 1 TBy/mo in Oct. 1993 (38 months later); 10 TBy/mo in Jul. 1998 (57 months); 100 TBy/mo in Nov. 2001 (40 months); 1 PBy/mo in Apr. 2006 (53 months).)
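A small sketch of the growth arithmetic behind the "10X every 47 months" headline, using the milestone intervals from the chart; the averaging method (a simple mean of the intervals between successive 10X milestones) is an assumption on my part:

```python
# Intervals, in months, between successive 10X traffic milestones on the
# log plot (Aug 1990 -> Oct 1993 -> Jul 1998 -> Nov 2001 -> Apr 2006).
intervals_months = [38, 57, 40, 53]
avg = sum(intervals_months) / len(intervals_months)
print(avg)  # 47.0 months per 10X, matching the slide's figure

# Implied compound growth per year at that average rate:
# traffic multiplies by 10**(12/47) each year, i.e. roughly 1.8x per year.
annual_factor = 10 ** (12 / avg)
print(round(annual_factor, 2))  # ~1.8
```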

19 Summary of All Requirements To-Date
Requirements from science case studies:
- Build the ESnet core up to 100 Gb/s within 5 years
- Provide a high level of reliability
- Deploy the initial network to accommodate the LHC collaborator footprint
- Implement the initial network to provide for LHC data path loadings
- Provide the network as a service-oriented capability
Requirements from observing traffic growth and change trends in the network:
- Most ESnet science traffic has a source or sink outside of ESnet; this requires high-bandwidth peering, reliability and bandwidth requirements demand that peering be redundant, bandwidth and service guarantees must traverse R&E peerings, and a rich diversity and high bandwidth must be provided for R&E peerings
- Provide a 15 Gb/s core within four years and a 150 Gb/s core within eight years
- Economically accommodate a very large volume of circuit-like traffic
- Large-scale science is now the dominant user of the network; satisfying the demands of large-scale science traffic into the future will require a purpose-built, scalable architecture
Requirements from SC programs:
- Provide "consulting" on system / application network tuning

20 The Evolution of ESnet Architecture
ESnet to 2005: a routed IP network with sites singly attached to a national core ring.
ESnet from 2007 on (ESnet4): a routed IP network with sites dually connected on metro area rings, or dually connected directly to the core ring, plus a switched ESnet Science Data Network (SDN) core providing virtual circuit services for data-intensive science. The rich topology offsets the lack of dual, independent national cores.
(Diagram elements: ESnet sites, ESnet hubs / core network connection points, metro area rings (MANs), other IP networks, and circuit connections to other science networks, e.g. USLHCNet.)
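To illustrate the reliability argument - a site dually connected on a metro ring keeps a path to the core after any single link failure - here is a minimal sketch with a made-up toy topology; the node names and the simple reachability check are illustrative, not ESnet's actual configuration:

```python
# Toy topology: a site homed on a metro ring that touches the core at two hubs.
links = {
    ("site", "man_east"), ("site", "man_west"),
    ("man_east", "core_hub_1"), ("man_west", "core_hub_2"),
    ("core_hub_1", "core_hub_2"),
}

def reachable(a, b, edges):
    """Return True if b is reachable from a over an undirected edge set."""
    seen, stack = {a}, [a]
    while stack:
        node = stack.pop()
        for x, y in edges:
            if node in (x, y):
                other = y if node == x else x
                if other not in seen:
                    seen.add(other)
                    stack.append(other)
    return b in seen

# The site survives any single link cut: it can still reach at least one core hub.
for cut in links:
    remaining = links - {cut}
    assert reachable("site", "core_hub_1", remaining) or \
           reachable("site", "core_hub_2", remaining)
print("site retains core connectivity under every single-link failure")
```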

21 ESnet Metropolitan Area Network Ring Architecture for High Reliability Sites
- MAN fiber ring: 2-4 x 10 Gbps channels provisioned initially, with expansion capacity to 16-64 channels
- ESnet-managed virtual circuit services tunneled through the IP backbone
- ESnet-managed λ / circuit services (SDN circuits to site systems) and the ESnet production IP service delivered over the same ring
- Each MAN site switch uses an independent port card supporting multiple 10 Gb/s line interfaces
(Diagram: the IP core east/west and SDN core east/west enter the ring at the ESnet production IP core hub and ESnet SDN core hub (T320 IP core router, SDN core switches, US LHCnet switches); a large science site attaches through MAN site switches, a site gateway router, and a site edge router, with virtual circuits delivered to the site LAN.)

22 ESnet4
- Internet2 has partnered with Level 3 Communications Co. and Infinera Corp. for a dedicated optical fiber infrastructure with a national footprint and a rich topology - the "Internet2 Network"
- The fiber will be provisioned with Infinera Dense Wave Division Multiplexing equipment that uses an advanced, integrated optical-electrical design
- Level 3 will maintain the fiber and the DWDM equipment
- ESnet has partnered with Internet2 to:
  - Share the optical infrastructure, but build independent L2/3 networks
  - Develop new circuit-oriented network services
  - Explore mechanisms that could be used for the ESnet Network Operations Center (NOC) and the Internet2/Indiana University NOC to back each other up for disaster recovery purposes

23 L1 / Optical Overlay - Internet2 and ESnet Optical Node
(Diagram of a combined Internet2 / ESnet optical node: the Internet2/Level3/Infinera national optical infrastructure (Infinera DTN with BMM, fiber east/west and north/south) feeds a Ciena CoreDirector grooming device and, in the future, dynamically allocated and routed waves; ESnet's M320 IP core router and T640 SDN core switch connect to ESnet metro-area networks and provide the optical interface to R&E regional networks; support devices at both layers handle measurement, out-of-band access, monitoring, and security; a network testbed with various equipment and experimental control-plane management systems is implemented as an optical overlay.)

24 Conceptual Architecture of the Infinera DTN Digital Terminal
- Band Mux Module (BMM): multiplexes 100 Gb/s bands onto a 400 Gb/s or 800 Gb/s line
- Digital Line Module (DLM): 100 Gb/s DWDM line module (10 λ x 10 Gb/s); an integrated digital switch enables add/drop and grooming at 2.5 Gb/s (ODU-1) granularity
- Tributary Adapter Module (TAM): provides the customer-facing optical interface; supports a variety of data rates and service types; up to 5 TAMs per DLM
- Tributary Optical Module (TOM): pluggable client-side optical module (XFP, SFP); supports 10GE and OC48 & OC192 SONET
- Optical Supervisory Channel (OSC): carries management plane traffic (from remote management systems to network elements for the purpose of managing them), control plane traffic (GMPLS routing and signaling), and "Datawire" traffic (customer management traffic interconnecting customers' 10 Mbps Ethernet LAN segments at various sites through AUX port interfaces)
(Diagram: TOM / TAM / OE-EO stages feeding the 10x10 Gb/s DLMs, an intra- and inter-DLM switch fabric, the BMM toward the 100 Gb/s-per-band WAN side, the OSC, and a control processor.)
Notes: this conceptual architecture is based on W. Johnston's understanding of the Infinera DTN; all channels are full duplex, which is not made explicit here; the switch fabric operates within a DLM and also interconnects the DLMs; the Management Control Module is not shown.
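A small sketch of the capacity arithmetic implied by the module descriptions above; the band counts of 4 and 8 per line are inferred from the 400/800 Gb/s figures, everything else is taken from the slide:

```python
# Capacity arithmetic for the Infinera DTN terminal as described on the slide.
WAVE_GBPS = 10        # each DWDM wave carried by a DLM
WAVES_PER_DLM = 10    # "10 lambda x 10 Gb/s"
ODU1_GBPS = 2.5       # grooming granularity

dlm_gbps = WAVE_GBPS * WAVES_PER_DLM
print(dlm_gbps)                     # 100 Gb/s per DLM (one band)

# A 400 Gb/s or 800 Gb/s BMM line therefore corresponds to 4 or 8 bands.
for line_gbps in (400, 800):
    print(line_gbps // dlm_gbps)    # 4 bands, 8 bands

# ODU-1 granularity: groomable 2.5 Gb/s slots per 10 Gb/s wave.
print(int(WAVE_GBPS / ODU1_GBPS))   # 4
```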

25 ESnet 4 Backbone September 30, 2007
(Map of the ESnet4 backbone as of September 30, 2007: hubs at Seattle, Sunnyvale, Los Angeles, San Diego, Boise, Denver, Albuquerque, El Paso, Kansas City, Houston, Chicago, Nashville, Atlanta, Jacksonville, Cleveland, New York City, Washington DC, and Boston, plus future hubs. Legend: 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), 10 Gb/s IP core (Level3), 10 Gb/s SDN core (Level3), MAN rings (≥ 10 Gb/s), lab-supplied links.)

26 Typical ESnet4 Wide Area Network Hub
Washington, DC

27 Typical Medium Size ESnet4 Hub
- Juniper M320, core IP router (about $1.65M, list)
- Cisco 7609, Science Data Network switch (about $365K, list)
- Juniper M7i, IP peering router (about $60K, list)
- 10 Gbps network tester
- Secure terminal server (telephone modem access)
- DC power controllers
- Local Ethernet
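For a sense of scale, the approximate list prices quoted above sum as follows (a quick arithmetic check, nothing more):

```python
# Rough list-price total for the core equipment at a typical medium ESnet4 hub,
# using the approximate figures quoted on the slide.
equipment_list_prices = {
    "Juniper M320 core IP router": 1_650_000,
    "Cisco 7609 Science Data Network switch": 365_000,
    "Juniper M7i IP peering router": 60_000,
}
print(f"${sum(equipment_list_prices.values()):,}")  # $2,075,000
```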

28 San Francisco Bay Area MAN
Note that the major ESnet sites are now directly on the ESnet "core" network - e.g. the bandwidth into and out of FNAL is equal to, or greater than, the ESnet core bandwidth.
(Map of the ESnet4 topology highlighting the metro area networks: the West Chicago MAN (600 W. Chicago, Starlight, FNAL, ANL, USLHCNet), the Long Island MAN (BNL, 32 AoA NYC, USLHCNet), the San Francisco Bay Area MAN (LBNL, SLAC, JGI, LLNL, SNLL, NERSC), the Atlanta MAN (180 Peachtree, 56 Marietta (SOX), ORNL, Wash. DC, Nashville), and the Newport News area ring (JLab, ELITE, ODU, MATP, Wash. DC). Legend: ESnet IP core, ESnet Science Data Network core, ESnet SDN core / NLR links (existing), lab-supplied links, LHC-related links, MAN links, international IP connections, Internet2 circuit numbers; IP switch-only hubs, IP switch/router hubs, SDN switch hubs, layer 1 optical nodes, lab sites.)

29 Aggregate Estimated Link Loadings, 2010-11
(Map of aggregate estimated link loadings for 2010-11 on the planned ESnet4 topology, with loadings labeled in Gb/s on each backbone segment. Legend as in the preceding maps.)

30 ESnet4 2010-11 Estimated Bandwidth Commitments
(Map of 2010-11 estimated bandwidth commitments, in Gb/s, overlaid on the planned ESnet4 topology; the largest commitments are on the LHC-related paths - the West Chicago MAN serving FNAL/CMS, the Long Island MAN serving BNL, and the USLHCNet links to CERN. Legend as in the preceding maps.)

31 ESnet4 IP + SDN, 2011 Configuration
(Map of the planned ESnet4 IP + SDN configuration for 2011, showing the number of 10 Gb/s channels on each backbone segment (typically 3-5) along with the hub and lab-site locations. Legend as in the preceding maps.)

32 ESnet4 Planned Configuration
(Map of the planned ESnet4 core - a Science Data Network core plus an IP core over a national fiber footprint of about 14,000 miles / 24,000 km - with hubs at Seattle, Sunnyvale, LA, San Diego, Boise, Denver, Albuquerque, El Paso, Kansas City, Tulsa, Houston, Chicago, Cleveland, Nashville, Atlanta, Jacksonville, New York, Washington DC, and Boston. Legend: production IP core (10 Gbps), SDN core, MANs (20-60 Gbps) or backbone loops for site access, international connections - Canada (CANARIE), Europe (GÉANT), CERN (30 Gbps, each path), Asia-Pacific, GLORIAD (Russia and China), Australia, South America (AMPATH) - plus IP core hubs, SDN (switch) hubs, possible hubs, primary DOE Labs, and high-speed cross-connects with Internet2/Abilene.)

33 New Network Service: Virtual Circuits
The control plane and user interface for the ESnet version of light paths. Service requirements:
- Guaranteed bandwidth service: user-specified bandwidth, requested and managed in a Web Services framework
- Traffic isolation and traffic engineering: provides for high-performance, non-standard transport mechanisms that cannot co-exist with commodity TCP-based transport; enables the engineering of explicit paths to meet specific requirements, e.g. bypassing congested links by using lower-bandwidth, lower-latency paths
- End-to-end (cross-domain) connections between Labs and collaborating institutions
- Secure connections: the circuits are "secure" to the edges of the network (the site boundary) because they are managed by the control plane of the network, which is isolated from the general traffic
- Reduced cost of handling high-bandwidth data flows: highly capable routers are not necessary when every packet goes to the same place, so lower-cost (factor of 5x) switches can route the packets over relatively static paths
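As an illustration of the "requested and managed in a Web Services framework" point, here is a hypothetical sketch of what a bandwidth-reservation request could look like; the endpoint URL, field names, and the reserve_circuit helper are invented for illustration and are not the actual ESnet/OSCARS API:

```python
import json
from urllib import request

def reserve_circuit(base_url: str, token: str, reservation: dict) -> dict:
    """POST a guaranteed-bandwidth circuit reservation to a (hypothetical) web service."""
    req = request.Request(
        f"{base_url}/reservations",
        data=json.dumps(reservation).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example request: a 10 Gb/s virtual circuit between two site edge routers
# for a scheduled transfer window (all values illustrative).
reservation = {
    "source": "siteA-edge.example.net",
    "destination": "siteB-edge.example.net",
    "bandwidth_mbps": 10_000,
    "start": "2007-10-01T00:00:00Z",
    "end": "2007-10-02T00:00:00Z",
    "description": "bulk data transfer for a large experiment",
}
# status = reserve_circuit("https://vc-service.example.net", "TOKEN", reservation)
```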

34 The Mechanisms Underlying OSCARS
- Based on the source and sink IP addresses, the route of the LSP (label switched path) between ESnet border routers is determined using topology information from OSPF-TE; the path of the LSP can be explicitly directed to take the SDN network
- On ingress to ESnet, packets matching the reservation profile are filtered out (i.e. policy-based routing), policed to the reserved bandwidth, and injected into the LSP
- MPLS labels are attached to packets from the source, and the packets are placed in a separate high-priority queue to ensure guaranteed bandwidth; regular production traffic uses the standard, best-effort queue
- On the SDN Ethernet switches all traffic is MPLS switched (layer 2.5), which stitches together VLANs across the SDN links
- RSVP and MPLS are enabled on internal interfaces
(Diagram: source and sink attached over IP links, the label switched path crossing IP and SDN segments, and the per-interface high-priority and best-effort queues.)
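A minimal sketch of the ingress behavior described above - classify packets against a reservation profile, police them to the reserved rate with a token bucket, and queue conformant traffic into the high-priority LSP queue; the class and field names are invented for illustration and are not taken from any actual router configuration:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Reservation:
    src: str
    dst: str
    rate_bps: float                 # reserved bandwidth
    tokens: float = 0.0             # token bucket state, in bits
    last: float = field(default_factory=time.monotonic)

    def __post_init__(self):
        self.tokens = self.rate_bps  # start with a full bucket (~1 s of burst)

    def conforms(self, packet_bits: int) -> bool:
        """Token-bucket policer: admit the packet only if within the reserved rate."""
        now = time.monotonic()
        self.tokens = min(self.rate_bps,
                          self.tokens + (now - self.last) * self.rate_bps)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

def classify(packet, reservations, hi_queue, be_queue):
    """Policy-based routing at the ESnet ingress: matching, in-profile packets
    go to the high-priority LSP queue; everything else is best-effort."""
    for r in reservations:
        if (packet["src"], packet["dst"]) == (r.src, r.dst) and r.conforms(packet["bits"]):
            hi_queue.append(packet)   # would be MPLS-labeled and injected into the LSP
            return
    be_queue.append(packet)           # regular production traffic

# Illustrative use: one 10 Gb/s reservation between two (hypothetical) hosts.
res = [Reservation("192.0.2.10", "198.51.100.20", rate_bps=10e9)]
hi, be = [], []
classify({"src": "192.0.2.10", "dst": "198.51.100.20", "bits": 12000}, res, hi, be)
classify({"src": "203.0.113.7", "dst": "203.0.113.9", "bits": 12000}, res, hi, be)
print(len(hi), len(be))  # 1 1
```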

35 Conclusions from “Measurements On Hybrid Dedicated Bandwidth Connections,” Rao, et al
- Test scenario: ESnet-USN hybrid 1 Gbps connections - MPLS tunnels through a routed IP network vs. Ethernet over SONET (not tested: "native" Ethernet)
- Basic conclusions:
  - Transport throughputs: UDP - all modalities are very similar; TCP - SONET is slightly better (~6%) for multiple streams
  - Jitter measurements (ping, tcpmon, tcpcliser): all modalities are comparable; differences are minor and likely only to affect "finer" control applications
"Measurements On Hybrid Dedicated Bandwidth Connections," Nagi Rao, Bill Wing (Oak Ridge National Laboratory), Qishi Wu (University of Memphis), Nasir Ghani (Tennessee Technological University), Tom Lehman (Information Sciences Institute East), Chin Guok, Eli Dart (ESnet). INFOCOM 2007 Workshop on High Speed Networks, Anchorage, Alaska, May 11, 2007.

36 References
- High Performance Network Planning Workshop, August 2002
- Science Case Studies Update, 2006
- DOE Science Networking Roadmap Meeting, June 2003
- DOE Workshop on Ultra High-Speed Transport Protocols and Network Provisioning for Large-Scale Science Applications, April 2003
- Science Case for Large Scale Simulation, June 2003
- Workshop on the Road Map for the Revitalization of High End Computing, June 2003 (public report)
- ASCR Strategic Planning Workshop, July 2003
- Planning Workshops - Office of Science Data-Management Strategy, March & May 2004
For more information contact Chin Guok. Also see:
- [LHC/CMS]
- [ICFA SCIC] "Networking for High Energy Physics." International Committee for Future Accelerators (ICFA), Standing Committee on Inter-Regional Connectivity (SCIC), Professor Harvey Newman, Caltech, Chairperson.
- [E2EMON] GÉANT2 E2E Monitoring System, developed and operated by JRA4/WI3, with implementation done at DFN
- [TrViz] ESnet PerfSONAR Traceroute Visualizer

