1 National LambdaRail Layer 2 and 3 Networks: Update
17 July 2005 Joint Techs Meeting, Burnaby, BC
Caren Litvanyi, NLR Layer2/3 Service Center, Global Research NOC at Indiana University

2 Layer 2

3 What's already been Deployed (L2), 7/17/05
(map slide; legend: Cisco 6509 switch. Sites shown: SEA, SVL, LAX, ELP, ALB, DEN, KAN, TUL, HOU, BAT, CHI, CLE, PIT, WDC, RAL, ATL, JAC, NYC)

4 NLR L2 Services Summary
Goals: "To provide a high speed, reliable, flexible, and economical layer 2 transport network offering to the NLR community, while remaining on the cutting edge of technology and service offerings through aggressive development and upgrade cycles."
Provide circuit-like options for users who can't use, can't afford, or don't need a 10G Layer1 wave.

5 Experiment with and provide L2 capabilities:
– Create a distributed nationwide broadcast domain.
– Create tools and procedures for automated and user-controlled provisioning of L2 resources.
– Provide a level of openness not possible from ISPs, including visibility into the devices, measurement, and performance statistics.
– Refine services through member feedback so they are tailored to meet the needs of the research community.

6 NLR Layer 2 locations
Jacksonville: Level3, 814 Phillips Hwy
Atlanta: Level3, 345 Courtyard, Suite 9 (?)
Raleigh: Level3, 5301 Departure Drive
WashDC: Level3, 1755/1757 Old Meadow Rd, Suite 111, McLean VA
NYC: MANLAN, 32 Avenue of the Americas, 24th Floor
Pittsburgh: Level3, 143 South 25th
Cleveland: Level3, 4000 Chester Avenue
Chicago: Level3, 111 N. Canal, Suite 200
Kansas City: Level3, 1100 Walnut Street, MO
Denver: Level3, 1850 Pearl St, Suite 4
Seattle: PacWave, 1000 Denny Way (Westin)
Sunnyvale: Level3, 1360 Kifer Road, Suite 251
Los Angeles: Equinix, 818 W. 7th Street, 6th Floor
El Paso: Wiltel, 501 W. Overland
Houston: Wiltel, 1124 Hardy St.
Tulsa: Wiltel, 18 W. Archer
Baton Rouge: Wiltel, no address yet
Albuquerque: Level3, 104 Gold St.
(Blue means that site is installed and up, ready to take connections)

7 Layer 2 Initial Logical Topology
(map slide; legend: 10GE wave, Cisco 6509 switch. Nodes: SEA, SVL, LAX, ELP, ALB, DEN, KAN, TUL, HOU, BAT, CHI, CLE, PIT, WDC, RAL, ATL, JAC, NYC)

8 NLR L2 Hardware – Cisco Catalyst 6509-NEB-A
Dual Sup720-3BXL PFC3/MSFC3
One 4 x 10GE WS-X6704-10GE (initially)
One 24-port 1GE SFP WS-X6724-SFP
DC power supplies
10GE will change to the 2-port 6802 cards when they become available. The 4-port cards will be retained for other connections.
SFP support: most anything we can get to work, but LX is the default. SX, ZX, CWDM upon request. No copper blade by default at this time.

9 KSC!

10 Denver!

11 Generic NLR L1, L2, L3 PoP Layout
(diagram slide: CRS-1, colo, East/West DWDM, NLR demarc; legend: 1G wave, link or port; 10G wave, link or port)

12 NLR L2 User/Network Interface
Physical connection will initially be a 1 Gbps LX connection over singlemode fiber, which the member connects or arranges to connect.
One 1GE connection to the layer 2 network is part of NLR membership. Another for L3 is optional. Additional connections/services will be available at additional cost.

13 NLR L2 User/Network Interface
Tagged or native (untagged) frames will be supported. Trunked (802.1q) is the expected default configuration. VLAN numbers will be negotiated. LACP/Etherchannel will be supported. Q-in-Q supported; VLAN translation only if needed.
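As a rough sketch only (not NLR's actual configuration; the interface numbers, port-channel number, and VLAN IDs below are placeholders), a member-facing 802.1q trunk with LACP on a Catalyst 6509 running IOS could look like:

```
! Member-facing 1GE ports bundled with LACP and trunked (illustrative values)
interface GigabitEthernet3/1
 description Member uplink, link 1
 switchport
 channel-group 1 mode active            ! LACP
!
interface GigabitEthernet3/2
 description Member uplink, link 2
 switchport
 channel-group 1 mode active
!
interface Port-channel1
 switchport
 switchport trunk encapsulation dot1q   ! 802.1q tagging
 switchport mode trunk
 switchport trunk allowed vlan 3001,3002   ! negotiated VLAN numbers
```

A single untagged port would instead use `switchport mode access` with the negotiated VLAN as the access VLAN.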

14 More good-to-know about L2 connections
MAC learning limit: initially 256, negotiable.
No CDP on the customer-facing edge. No VTP or GVRP.
Rapid per-vlan spanning tree; root guard will be used.
CoS will be rewritten; DSCP passed transparently.
MTU can be standard, jumbo, or custom, but needs to allow headroom for 802.1q and Q-in-Q.
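A hedged Catalyst IOS approximation of these edge protections (interface number and values are placeholders, and this only illustrates the behaviors listed above):

```
! Global settings (illustrative)
spanning-tree mode rapid-pvst       ! rapid per-vlan spanning tree
system jumbomtu 9216                ! jumbo MTU with headroom for 802.1q/Q-in-Q tags
!
! Customer-facing edge port
interface GigabitEthernet3/1
 no cdp enable                      ! no CDP toward the member
 spanning-tree guard root           ! member side cannot become STP root
 switchport port-security
 switchport port-security maximum 256   ! MAC learning limit, negotiable
```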

15 More good-to-know about L2 connections
Redundancy on point-to-point connections is optional, via rapid spanning tree and a configured backup path. Some users may prefer deterministic performance (an unchanging path) over having a backup path.
Q-in-Q can be used to tunnel BPDUs, as requested.
PIM snooping.
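For the Q-in-Q option with BPDU tunneling, a minimal Catalyst IOS sketch, assuming a dedicated tunnel port and a placeholder outer VLAN, might be:

```
! Q-in-Q tunnel port that also tunnels the member's BPDUs (illustrative)
vlan dot1q tag native                ! tag the native vlan so inner tags survive
!
interface GigabitEthernet3/3
 switchport
 switchport access vlan 3100         ! outer (provider) tag, placeholder value
 switchport mode dot1q-tunnel        ! member's tagged frames carried inside VLAN 3100
 l2protocol-tunnel stp               ! carry the member's spanning-tree BPDUs through
```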

16 Initial Services
Dedicated Point-to-Point Ethernet – VLAN between 2 members with dedicated bandwidth, from sub-1G to multiple 1G.
Best Effort Point-to-Multipoint – Multipoint VLAN with no dedicated bandwidth.
National Peering Fabric – Create a national distributed exchange point, with a single broadcast domain for all members. This can be run on the native vlan. This is experimental, and the service may morph.

17 Expected Near-Term Services
Dedicated Multipoint: dedicated bandwidth for multipoint connections (how to count?)
Scavenger: support less-than-best-effort forwarding. This would be usable for all connections.
Connections: support 10GE user ports. (Maybe this is right away rather than soon.)

18 Possible Long-Range Services
User-controlled web-based provisioning and configuration: allow users to automatically create new services, or reconfigure existing services on the network, using a web-based tool.
Time-sensitive provisioning: allow users to have L2 connections with bandwidth dedicated only at certain times of day, or on certain days.

19 NLR L2 Monitoring & Measurement
GRNOC will manage and monitor 24x7 for device, network, and connection health, using existing tools as the baseline, including statistics gathering/reporting. It also maintains databases on equipment, connections, customer contacts, and configurations.
Active measurement is currently the responsibility of the end users across their layer 2 connections, but we are open to other ideas.
A portal/proxy will be available to execute commands on the switches directly. Currently only non-service-disrupting commands are permitted. https://ratt.uits.iu.edu/routerproxy/nlr/

20 Layer 3

21 21 What’s already been Deployed (L3) 6/22/05 ALB BAT RAL JAC TUL PIT HOU LAX WDC ATL NYC DEN SEA CLE Cisco CRS-1 8/S router CHI

22 NLR L3 Services Summary
Goals: The purpose of the NLR Layer3 Network is to create a national routed infrastructure for network and application experiments, in a way that is not possible with current production commodity networks, network testbeds, or production R&E networks. In addition to the provided baseline services, network researchers will have the opportunity to make use of an advanced national "breakable" infrastructure to try out technologies and configurations.

23 NLR L3 Services Summary
Goals: A guiding design principle is flexibility. This is planned to be reflected in a more open AUP, a willingness to prioritize some experimentation over baseline production services, and additional tools development.

24 NLR Layer 3 locations
Atlanta: Level3, 345 Courtyard, Suite 9
WashDC: Level3, 1755/7 Old Meadow Rd, McLean VA
NYC: MANLAN, 32 Avenue of the Americas, 24th floor
Chicago: Level3, 111 N. Canal, Suite 200
Denver: Level3, 1850 Pearl St, Suite 4
Seattle: PacWave (Westin)
Los Angeles: Equinix, 818 W. 7th Street, 6th Floor
Houston: Wiltel, 1124 Hardy St.
(Blue means that site is installed and up)

25 Layer 3 Initial Logical Topology
(diagram slide; legend: Cisco CRS-1 router, 10GE wave, NLR L2. Backbone nodes: SEA, LAX, DEN, HOU, CHI, WDC, ATL, NYC. Connectors shown: UNM, LA, Duke/NC, FLR, OK, PSC, CICITN, PNWGP, UCAR/FRGP, LEARN, Cornell, CENIC, GATech, MATP)

26 NLR Layer 3 Hardware
Cisco CRS-1 (half-rack, 8-slot)
2 route processors (RPs)
4 switch fabric cards
2 Power Entry Modules
2 control plane software bundle licenses (IOS-XR) with crypto
2 memory modules for each RP (required) – 2GB each
1 or 2 8x10GE line card(s):
1 multi-service card (MSC)
1 8x10GE PLIM
1 line card software license
1 extra MSC
1 extra line card software license
XENPAK 10G-LR optics (SC)

27 NLR Layer 3 Hardware
Things to note:
Providing 10GE only by default. 1GE is not yet available on the CRS.
SONET is available, but we did not purchase any (OC-768, OC-192, OC-48).
Redundant power and RPs.
We will be participating in the beta program for the CRS.
Midplane design.

28 CRS-1 base HW configuration
Sites that had at least 7 of their 8 10GE interfaces assigned at initial installation receive a second 8x10GE, including the MSC and software license: Chicago, Denver, Houston. These locations have a total of 12 XENPAK 10G-LR optics modules.
We have been calling the first configuration "A", and the configuration with the additional 8x10GE type "B". The NLR layer 3 network will initially comprise 5 type "A" routers and 3 type "B" routers.

29 Denver!

30 Generic NLR L1, L2, L3 PoP Layout
(diagram slide: CRS-1, colo, East/West DWDM, NLR demarc; legend: 1G wave, link or port; 10G wave, link or port)

31 NLR L3 User/Network Interface
Physical connection will be a 10 Gbps Ethernet (1310nm) connection over singlemode fiber, which the member connects or arranges to connect.
One connection directly to the layer 3 network is part of NLR membership; a backup 1Gbps VLAN through the layer 2 network is optional and included. Additional connections/services will be available at additional cost.
Trunked (802.1q) will be supported. Point-to-point connections (no 802.1q) are supported.

32 Connecting to NLR L3
(diagram slide; nodes: ATL, HOU, WDC, JAC, RAL, BAT, CHI; LEARN router, NCLR router, FLR router; legend: Cisco CRS-1 router, Cisco 6509 switch, Regional Network router, 10G wave/link/port, 1G wave/link/port)
HOU, RAL, and JAC are shown as examples; ALL other MetaPoPs get a dual backhauled connection as well.

33 NLR L3 Backbone Baseline Configuration
dual IPv4 and IPv6
ISIS core IGP
BGP
IPv4 Multicast: PIM, MSDP, MBGP by default. Investigating doing IPv6 multicast by default.
MPLS is not currently a planned default, but some can be supported upon request.
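As an illustrative IOS-XR skeleton only (the AS number, ISIS NET, interface name, and MSDP peer address are placeholders, not NLR's values), the baseline above might translate to:

```
! Dual-stack ISIS as the core IGP (illustrative values)
router isis nlr
 net 49.0000.0000.0001.00
 address-family ipv4 unicast
 address-family ipv6 unicast
 interface TenGigE0/0/0/0
  address-family ipv4 unicast
  address-family ipv6 unicast
!
! BGP with unicast and multicast (MBGP) address families
router bgp 65000
 address-family ipv4 unicast
 address-family ipv4 multicast
 address-family ipv6 unicast
!
! IPv4 multicast baseline: PIM plus MSDP
multicast-routing
 address-family ipv4
  interface all enable
!
router pim
 address-family ipv4
!
router msdp
 peer 192.0.2.1
```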

34 NLR L3 Services – Features
Day One: Connection – each member gets a 10GE connection and, if they desire, a VLAN backhauled over the L2 network to a second node. General operations of the network, including base features (configuration with no experiment running), connections, and communication of experiments.
Future possibilities: Peering with other R&E networks. Commodity Internet connections or peering. User-controllable resource allocation: will be supported as experiments, and rolled into the base feature list if there is general use and interest.

35 NLR L3 Connection Interests
Private test-lab network connections.
Route advertiser connections: get an active commodity routing table for experiments, but without drawing any actual commodity bandwidth.
Pre-emptible connections: allow other types of connections to use unused ports on a temporary basis, such as for a conference or measurement project.

36 NLR L3 Services
"Layer 3 Services Document" – engineering subcommittee. Sets user expectations for service on the L3 network. Makes clear the experiment support model.
Service expectations: an SLA isn't a good measure, since the network may appear "down" because of experiments. The network may not have the same uptime as a production network, but will have the same level of service and support as a production network.

37 NLR L3 Services – Experiment Support
Each experiment will have a representative from the L2/L3 Support Center and a representative from the ESC. If necessary, the prospective experiment will be sent to the NNRC for review. (?)
L2/L3 Services staff will be responsible for scheduling network assets for experiments and will see the experiment through to completion. In general, experiments will be scheduled on a first-come, first-served basis.

38 Layer 3 Network Conditions
A way of communicating the current state of the network to users.
Users may choose to automatically have their interfaces shut down under any Network Conditions they desire.
Users will receive notification of changes to the Network Condition, with focused communication to those who will be turned on or off because of it.
Tools will be available for users to monitor and track Network Conditions.

39 Network Conditions
NetCon 7 – No Experiment Currently Active
NetCon 6 – Experiment Active, No Instability Expected
NetCon 5 – Possible Feature Instability / No General Instability Expected
NetCon 4 – Possible Network Instability
NetCon 3 – Congestion Expected
NetCon 2 – Probable Network Instability; Possible Impact on Connecting Networks
NetCon 1 – Network Completely Dedicated

40 NLR Layer 3 – discussion
What are members' plans? Questions? Issues? What do users want/need?
Tools? Monitoring and measurement ability? Direct access to log in and configure routers? A router "ghost" service? Instruction/workshops?
Commodity access or ISP collaboration? Collaboration with projects like PlanetLab?

41 Installation Overview
Layer 2: 18 Cisco 6509s
Layer 3: 8 Cisco CRS-1s (half-rack)
General strategy:
1. Focus first on the sites with the most interest
2. Grow the network footprint linearly from there

42 A word about deployment in NLR Phase 2 sites...
In addition to installing Phase 2 optical hardware, Wiltel will ready sites for L2/L3. L2/L3 deployment in Phase 2, and at the Phase 2/Phase 1 borders, will depend on this.

43 Upcoming Schedule
Raleigh – June
Sunnyvale – June
Seattle – July
Atlanta – August 2-5
Washington – August 9-10
Jacksonville – August 1-2
LA – August
New York – September 6-9
Tulsa – August 9-10
Houston – August
El Paso – August
Baton Rouge – August
Albuquerque – October?

44 Thank you!
Caren Litvanyi

