1 MPLS: What's in it for Research & Education Networks?
John Jamison, University of Illinois at Chicago, November 17, 2000

2 Juniper Networks Product Family
Product timeline: M40 (Sept 1998), M20 (Nov 1999), M160 (Mar 2000), M5 and M10 (Sept 2000). The M5 and M10 Internet Backbone Routers are the latest in the series of truly innovative products released by Juniper Networks over the last 2 years. In September of 1998 we released our first Internet Backbone Router, the M40, which offered unparalleled processor and interface speed support at 40 Mpps and wire-rate OC-48c/STM-16. In November of 1999 we released the M20, which concentrates the power of the M40 into a smaller chassis. In March of this year we released the M160, which includes the first and only shipping OC-192c/STM-64 interface available. And now we've added the M5 and M10, which bring the power of the Juniper Networks system architecture and JUNOS software into a compact form factor suitable for network edge applications.

3 Juniper Networks Research and Education Customers
MCI Worldcom – vBNS/vBNS+ Department of Energy – ESnet DANTE - TEN-155 (Pan-European Research & Education Backbone) NYSERNet – New York State Education & Research Network Georgia Tech – SOX GigaPoP University of Washington – Pacific/Northwest GigaPoP STAR TAP (International Research & Education Network Meet Point) APAN (Asia Pacific Advanced Network) Consortium NOAA (National Oceanic and Atmospheric Administration) NASA – Goddard Space Flight Center NIH (National Institutes of Health) DoD (Department of Defense) US Army Engineer Research and Development Center University of Illinois – NCSA (National Center for Supercomputing Applications) University of California, San Diego - SDSC (San Diego Supercomputer Center) University of Southern California, Information Sciences Institute Indiana University Stanford University University of California, Davis California Institute of Technology North Carolina State University University of Alaska University of Hiroshima, Japan Korea Telecom Research Lab ETRI (Electronics and Telecommunications Research Institute), Korea

4 Original Agenda MPLS Fundamentals Traffic Engineering
Constraint-Based Routing Refreshment Break Virtual Private Networks Optical Applications for MPLS Signaling (GMPLS/MPλS) Juniper Networks Solutions Questions and Comments

5 Our Agenda MPLS Overview Traffic Engineering VPNs

6 What are we missing out on?
A bunch of pure marketing slides. A bunch of filler slides. Slides with content that is of interest mainly to ISPs ("here is how you can use MPLS to bring in more revenue, offer different services, etc."). Some details of MPLS signaling protocols and RFC 2547 VPNs (you can, and should, only cover so much in one talk). Some MP(Lambda)S details (seems too much like slideware right now).

7 What are we gaining? Besides being spared marketing and ISP centric stuff: We will see some examples from networks and applications we are familiar with We will save some time and cover almost as much information

8 Why Is MPLS an Important Technology?
Fully integrates IP routing & L2 switching Leverages existing IP infrastructures Optimizes IP networks by facilitating traffic engineering Enables multi-service networking Seamlessly integrates private and public networks The natural choice for exploring new and richer IP service offerings Dynamic optical bandwidth provisioning

9 What Is MPLS? IETF Working Group chartered in spring 1997
IETF solution to support multi-layer switching: IP Switching (Ipsilon/Nokia) Tag Switching (Cisco) IP Navigator (Cascade/Ascend/Lucent) ARIS (IBM) Objectives Enhance performance and scalability of IP routing Facilitate explicit routing and traffic engineering Separate control (routing) from the forwarding mechanism so each can be modified independently Develop a single forwarding algorithm to support a wide range of routing and switching functionality

10 MPLS Terminology Label Forwarding Equivalence Class (FEC)
Label: short, fixed-length packet identifier; unstructured; link-local significance. Forwarding Equivalence Class (FEC): stream/flow of IP packets that are forwarded over the same path, treated in the same manner, and mapped to the same label. FEC/label binding mechanism: currently based on destination IP address prefix; future mappings based on SP-defined policy.

11 MPLS Terminology Label Swapping Connection table maintains mappings
[Figure: connection table mapping each input (port, label) pair to a label operation (swap) and an output (port, label) pair.] Connection table maintains mappings. Exact match lookup. Input (port, label) determines: label operation; output (port, label). Same forwarding algorithm used in Frame Relay and ATM.
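The exact-match lookup described above can be sketched in a few lines of Python; the table entries, ports, and label values below are invented for illustration, not taken from any particular router:

# Illustrative sketch: exact-match label swapping at one LSR.
CONNECTION_TABLE = {
    # (in_port, in_label): (operation, out_port, out_label)
    (1, 25): ("swap", 3, 17),
    (2, 84): ("swap", 6, 0),
    (1, 99): ("pop", 2, None),   # egress behavior: remove the MPLS header
}

def forward(in_port, in_label):
    """Exact-match lookup; no longest-prefix match is needed in the MPLS core."""
    op, out_port, out_label = CONNECTION_TABLE[(in_port, in_label)]
    if op == "swap":
        return out_port, out_label   # rewrite the label, send out the new port
    if op == "pop":
        return out_port, None        # hand the packet back to IP forwarding
    raise ValueError("unknown operation " + op)

print(forward(1, 25))   # -> (3, 17)

This is the same table-driven forwarding model that Frame Relay and ATM switches use, which is why the slide draws the analogy.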

12 MPLS Terminology Label-Switched Path (LSP)
[Figure: an LSP between San Francisco and New York.] Label-Switched Path (LSP): simplex L2 tunnel across a network; concatenation of one or more label-switched hops. Analogous to an ATM or Frame Relay PVC.

13 MPLS Terminology Label-Switching Router (LSR)
[Figure: LSRs along the LSP between San Francisco and New York.] Label-Switching Router (LSR): forwards MPLS packets using label switching; capable of forwarding native IP packets; executes one or more IP routing protocols; participates in MPLS control protocols. Analogous to an ATM or Frame Relay switch (that also knows about IP).

14 MPLS Terminology Ingress LSR (“head-end LSR”) Transit LSR
[Figure: ingress LSR, transit LSRs, and egress LSR along the LSP between San Francisco and New York.] Ingress LSR ("head-end LSR"): examines inbound IP packets and assigns them to an FEC; generates the MPLS header and assigns the initial label. Transit LSR: forwards MPLS packets using label swapping. Egress LSR ("tail-end LSR"): removes the MPLS header.

15 MPLS Header Fields IP packet is encapsulated by ingress LSR
[Figure: frame layout of L2 header, MPLS header, IP packet. The 32-bit MPLS header carries the Label (20 bits), Experimental/CoS (3 bits), Stacking bit (1 bit), and Time to live (8 bits).] Fields: Label, Experimental (CoS), Stacking bit, Time to live. The IP packet is encapsulated by the ingress LSR and de-encapsulated by the egress LSR.
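As a concrete illustration of that layout (a sketch only, not router code), the 32-bit shim header can be packed and unpacked in Python:

# 32-bit MPLS shim header: Label (20 bits) | Exp/CoS (3 bits) | S (1 bit) | TTL (8 bits)
def pack_shim(label, exp, s, ttl):
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_shim(word):
    return ((word >> 12) & 0xFFFFF,   # Label
            (word >> 9) & 0x7,        # Experimental / CoS
            (word >> 8) & 0x1,        # Stacking (bottom-of-stack) bit
            word & 0xFF)              # Time to live

word = pack_shim(label=25, exp=0, s=1, ttl=64)
print(hex(word), unpack_shim(word))   # -> 0x19140 (25, 0, 1, 64)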

16 IP Packet Forwarding Example
[Figure: hop-by-hop IP forwarding. Each router along the path keeps its own routing table listing destination prefixes (134.5/16 and a /24 prefix in the example) with the next-hop router, and performs an independent longest-match lookup on every packet.]

17 MPLS Forwarding Example
[Figure: MPLS forwarding along an LSP. The ingress routing table maps each destination prefix to an outgoing (port, label) pair: 134.5/16 to (2, 84) and the /24 prefix to (3, 99). Transit LSRs swap labels with an exact-match lookup in their MPLS tables, for example in (2, 84) → out (6, 0), in (1, 99) → out (2, 56), in (3, 56) → out (5, 0). The egress LSR removes the label and consults its ordinary routing table.]

18 How Is Traffic Mapped to an LSP?
[Figure: AS 45 and AS 63 are E-BGP peers of transit SP AS 77; inside AS 77, the ingress and egress LSRs are I-BGP peers connected by LSP 32, and the ingress routing table maps 134.5/16 to LSP 32.] Map the LSP to the BGP next hop. FEC = {all BGP destinations reachable via the egress LSR}.
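A small sketch of the mapping logic (prefixes, addresses, and LSP names are invented): every BGP destination whose next hop resolves to the same egress LSR falls into the same FEC and rides the same LSP.

BGP_ROUTES = {            # destination prefix -> BGP next hop (egress LSR loopback)
    "134.5.0.0/16":  "10.0.0.9",
    "198.32.8.0/24": "10.0.0.9",
    "192.0.2.0/24":  "10.0.0.5",
}

LSP_TO_EGRESS = {         # BGP next hop -> traffic-engineered LSP toward it
    "10.0.0.9": "LSP 32",
    "10.0.0.5": "LSP 17",
}

def lsp_for(prefix):
    next_hop = BGP_ROUTES[prefix]
    return LSP_TO_EGRESS[next_hop]

print(lsp_for("134.5.0.0/16"))    # -> LSP 32
print(lsp_for("198.32.8.0/24"))   # -> LSP 32 (same FEC: same egress LSR)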

19 How are LSPs Set Up? Two approaches: Manual Configuration
[Figure: an LSP between ingress and egress LSRs.] Two approaches: manual configuration; using a signaling protocol.

20 MPLS Signaling Protocols
The IETF MPLS architecture does not assume a single label distribution protocol LDP Executes hop-by-hop Selects same physical path as IGP Does not support traffic engineering RSVP Easily extensible for explicit routes and label distribution Deployed by providers in production networks CR-LDP Extends LDP to support explicit routes Functionally identical to RSVP Not deployed

21 How Is the LSP Physical Path Determined?
[Figure: an LSP between ingress and egress LSRs.] Two approaches: offline path calculation (in-house or 3rd-party tools); online path calculation (constraint-based routing). A hybrid approach may be used.

22 Offline Path Calculation
Simultaneously considers All link resource constraints All ingress to egress traffic trunks Benefits Similar to mechanisms used in overlay networks Global resource optimization Predictable LSP placement Stability Decision support system In-house and third-party tools Even with a constraint-based routing capability, offline path calculation tools are still important.

23 Offline Path Calculation
[Figure: topology of routers R1–R9, with ingress LSR R1 and egress LSR R9; the computed explicit route is {R1, R4, R8, R9}.] Input to the offline path calculation utility: ingress and egress points; physical topology; traffic matrix (statistics about city–router pairs). Output: a set of physical paths, each expressed as an explicit route.

24 Explicit Routes: Example 1
[Figure: same R1–R9 topology.] LSP from R1 to R9. Partial explicit route: {loose R8, strict R9}. LSP physical path: R1 to R8 – follow the IGP path; R8 to R9 – directly connected.

25 Explicit Routes: Example 2
[Figure: same R1–R9 topology.] LSP from R1 to R9. Full explicit route: {strict R3, strict R4, strict R7, strict R9}. LSP physical path: R1 to R3 – directly connected; R3 to R4 – directly connected; R4 to R7 – directly connected; R7 to R9 – directly connected.

26 Constraint-Based Routing
User-defined LSP constraints; online LSP path calculation. Operator configures LSP constraints at the ingress LSR: bandwidth reservation; include or exclude specific link(s); include specific node traversal(s). The network actively participates in selecting an LSP path that meets the constraints.

27 Constraint-Based Routing
Thirty-two named groups, 0 through 31. Groups are assigned to interfaces. If you use administrative groups, you must configure them identically on all routers participating in an MPLS domain. You may optionally assign more than one administrative group to one physical link.
[edit protocols]
mpls {
    admin-groups {
        gold 1;
        silver 2;
        bronze 3;
        management 30;
        internal 31;
    }
    interface so-0/0/0 {
        admin-group [ gold management ];
    }
    interface so-0/1/0 {
        admin-group silver;
    }
    interface so-0/2/0 {
        admin-group gold;
    }
    interface so-0/3/0 {
        ...
    }
}

28 Constraint-Based Routing
Choose the path from A to I using: admin-group { include [ gold silver ]; } [Figure: topology of routers A through I whose links are marked copper, bronze, silver, or gold.]

29 Constraint-Based Routing
A-C-F-G-I uses only gold or silver links. [Figure: the same A–I topology with the selected gold/silver path A–C–F–G–I highlighted.]
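A sketch of the constraint-based (CSPF-style) selection: prune every link whose administrative group is not in the include set, then run an ordinary shortest-path search on what remains. The topology and group assignments below are invented so that the surviving path matches the slide's answer, A-C-F-G-I.

import heapq

LINKS = {  # (node, node): administrative group
    ("A", "B"): "copper", ("A", "C"): "gold",
    ("B", "E"): "bronze", ("C", "F"): "silver",
    ("E", "G"): "copper", ("F", "G"): "gold",
    ("G", "I"): "silver", ("E", "I"): "bronze",
    ("F", "H"): "copper", ("H", "I"): "copper",
}

def cspf(src, dst, include):
    allowed = {edge for edge, group in LINKS.items() if group in include}
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for a, b in allowed:
            nxt = b if a == node else a if b == node else None
            if nxt is not None:
                heapq.heappush(heap, (cost + 1, nxt, path + [nxt]))
    return None   # no path satisfies the constraints

print(cspf("A", "I", include={"gold", "silver"}))   # -> ['A', 'C', 'F', 'G', 'I']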

30 Constraint-Based Routing: Example 1
[Figure: US backbone map with Seattle, Chicago, New York, San Francisco, Kansas City, Los Angeles, Dallas, and Atlanta.]
label-switched-path SF_to_NY {
    to New_York;
    from San_Francisco;
    admin-group { exclude green; }
    cspf;
}

31 Constraint-Based Routing: Example 2
label-switched-path madrid_to_stockholm {
    to Stockholm;
    from Madrid;
    admin-group { include red, green; }
    cspf;
}
[Figure: European backbone map with Stockholm, London, Paris, Munich, Geneva, Madrid, and Rome.]

32 Other Neat MPLS Stuff Secondary LSPs Fast Reroute Label Stacking GMPLS

33 MPLS Secondary LSPs Standard LSP failover Standby Secondary LSP
[Figure: primary and secondary LSPs between the San Francisco and New York data centers.] Standard LSP failover: failure signaled to the ingress LSR; calculate & signal a new LSP; reroute traffic to the new LSP. Standby secondary LSP: pre-established LSP; sub-second failover.
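A toy sketch (invented names, not JUNOS behavior) of why a pre-established standby secondary LSP gives sub-second failover while the standard procedure must compute and signal a new path after the failure is reported:

class Lsp:
    def __init__(self, name, established=False):
        self.name, self.established = name, established

    def signal(self):            # stands in for RSVP setup; done in advance for a standby
        self.established = True
        return self

class IngressLsr:
    def __init__(self, primary, standby=None):
        self.primary, self.standby = primary, standby
        self.active = primary

    def on_failure(self):
        if self.standby and self.standby.established:
            self.active = self.standby                 # just switch traffic over
        else:
            self.active = Lsp("recomputed").signal()   # compute + signal after the failure
        return self.active.name

lsr = IngressLsr(Lsp("primary", established=True),
                 standby=Lsp("secondary").signal())
print(lsr.on_failure())   # -> secondary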

34 MPLS Fast Reroute Ingress signals fast reroute during LSP setup
[Figure: primary LSP between the San Francisco and New York data centers, with an active detour around a failure.] Ingress signals fast reroute during LSP setup. Each LSR computes a detour path (with the same constraints). Supports failover in ~100s of ms.

35 MPLS Label Stacking A label stack is an ordered set of labels
[Figure: LSP 1 and LSP 2 nested inside a trunk LSP; each entry in the stack repeats the Label (20 bits)/CoS/S/TTL header format.] A label stack is an ordered set of labels. Each LSR processes the top label. Applications: routing hierarchy; aggregating individual LSPs into a "trunk" LSP; VPNs.
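A minimal sketch of the stack operations (label values invented): nesting an LSP inside a trunk LSP just means pushing one extra label at the trunk ingress and popping it at the trunk egress, while transit LSRs touch only the top label.

def push(stack, label):  return [label] + stack
def pop(stack):          return stack[1:]
def swap(stack, label):  return [label] + stack[1:]

packet = ["IP"]              # payload only
packet = push(packet, 25)    # ingress LSR of LSP 1
packet = push(packet, 42)    # trunk LSP ingress: outer label added
packet = swap(packet, 18)    # transit LSR inside the trunk: top label only
packet = pop(packet)         # trunk egress: back to LSP 1's own label
packet = pop(packet)         # LSP 1 egress: plain IP again
print(packet)                # -> ['IP']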

36 MPLS Label Stack: Example 1
[Figure: LSP 1's packets (label 25 over IP) are carried across the trunk LSP. MPLS table entries along the way: in (1, 25) → out (2, Push [42]); in (5, 42) → out (6, 18); in (2, 18) → out (5, Pop); in (4, 25) → out (2, 56). The second tunnelled LSP uses in (3, 35) → out (2, Push [42]) and in (4, 35) → out (5, 17).]

37 MPLS Label Stack: Example 2
[Figure: the same topology and MPLS tables as Example 1, this time following LSP 2's packets (label 35 over IP): the trunk ingress pushes outer label 42, a transit LSR swaps it to 18, the trunk egress pops it, and the final hop swaps the inner label 35 to 17.]

38 Label Stacking allows you to Reduce the Number of LSPs
[Figure: LSP 3 and LSP 4 are aggregated into trunk LSPs, which are themselves aggregated into a "trunk of trunks".] Label stacking is used to create a hierarchy of LSP trunks.

39 Generalized MPLS (GMPLS) Formerly known as MPλS
IP Service (Routers) Optical Core Optical Transport (OXCs, WDMs) Reduce complexity Reduce cost Router subsumes functions performed by other layers Fast router interfaces eliminate the need for MUXs MPLS replaces ATM/FR for traffic engineering MPLS fast reroute obviates SONET APS restoration Dynamic provisioning of optical bandwidth is required for growth and innovative service creation Today, service provider networks are evolving to a two-layer architecture: 1) An IP service layer that is supported by routers 2) An optical transport layer consisting of OXCs and DWDM equipment. The slide lists a number of the factors that are helping to drive this evolution. Nobody knows for sure where the natural boundaries will be drawn. But, we can make an educated guess. The first guess is that the boundary will be drawn between the IP data layer and the transmission layer. This is the ODSI (Sycamore) approach. 2) A second guess is that the boundary will be drawn somewhere within the transmission layer. This causes the transmission layer to be broken into two components – a switching component and a long distance transmission component. This is the IETF/OIF approach. The IETF/OIF approach gives the service provider the flexibility to draw the line wherever they want. If the provider wants, as a first cut, the line to be drawn between the IP service layer and the optical transport layer that’s O.K. Later, if they want to draw the line between the optical switching component and the long distance transmission component they have the option of migrating their network architecture to this approach. This way, the line is drawn by the provider where it suits their operational environment rather than having the equipment vendors dictate where the line must be drawn.

40 GMPLS: LSP Hierarchy
[Figure: nested switching clouds, PSC inside TDM inside LSC inside FSC (fiber bundles). Forwarding adjacencies (FA-PSC, FA-TDM, FA-LSC) multiplex low-order LSPs on the way in and demultiplex them on the way out: explicit-label LSPs, time-slot LSPs, λ LSPs, and fiber LSPs.] Nesting LSPs enhances system scalability. LSPs always start and terminate on similar interface types. LSP interface hierarchy: Packet Switch Capable (PSC) (lowest); Time Division Multiplexing Capable (TDM); Lambda Switch Capable (LSC); Fiber Switch Capable (FSC) (highest). To improve the scalability of MPLS TE, it may be useful to aggregate TE LSPs. The aggregation is accomplished by (a) an LSR creating a TE LSP, (b) the LSR forming a forwarding adjacency out of that LSP (advertising this LSP as a link into ISIS/OSPF), (c) allowing other LSRs to use forwarding adjacencies for their path computation, and (d) nesting of LSPs originated by other LSRs into that LSP (by using the label stack construct). Interfaces on LSRs can be subdivided into the following classes: 1. Interfaces that recognize packet/cell boundaries and can forward data based on the content of the packet/cell header. Such interfaces are referred to as Packet-Switch Capable (PSC). 2. Interfaces that forward data based on the data's time slot in a repeating cycle. Such interfaces are referred to as Time-Division Multiplex Capable (TDM). 3. Interfaces that forward data based on the wavelength on which the data is received. Such interfaces are referred to as Lambda Switch Capable (LSC). 4. Interfaces that forward data based on the position of the data in real-world physical space. Such interfaces are referred to as Fiber-Switch Capable (FSC). Using the concept of nested LSPs (by using the label stack) allows the system to scale by building a forwarding hierarchy. At the top of this hierarchy are FSC interfaces, followed by LSC interfaces, followed by TDM interfaces, followed by PSC interfaces. This way, a PSC-LSP that starts and ends on a PSC interface can be nested (together with other PSC-LSPs) into an LSP that starts and ends on a TDM interface. This TDM-LSP, in turn, can be nested (together with other TDM-LSPs) into an LSP that starts and ends on an LSC interface. Finally, the LSC-LSP can be nested (together with other LSC-LSPs) into an LSP that starts and ends on an FSC interface. This hierarchical nesting of LSPs is illustrated by the graphic on this slide. For additional information see: <draft-ietf-mpls-lsp-hierarchy-00.txt>

41 AGENDA MPLS Overview Traffic Engineering VPNs

42 What Is Traffic Engineering?
[Figure: the layer 3 routing path versus a traffic-engineered path between source and destination.] Ability to control traffic flows in the network. Optimize available resources. Move traffic from the IGP path to a less congested path. If traffic engineering capabilities are required for growth, then what is traffic engineering? Prolonged congestion is the root of poor network performance. The two major causes of prolonged congestion are: (1) inefficient or inadequate network resources (the two approaches to eliminating this cause are (a) expanding existing capacity and (b) classical techniques such as rate limiting, queue management, etc.) and (2) the inefficient mapping of traffic streams onto available network resources. Traffic engineering is the only mechanism that can be used to overcome this second source of prolonged network congestion.

43 Brief History Early 1990’s Internet core was connected with T1 and T3 links between routers Only a handful of routers and links to manage and configure Humans could do the work manually Metric-based traffic control was sufficient In the early 1990s, ISP networks were composed of routers interconnected by leased lines—T1 (1.5-Mbps) and T3 (45-Mbps) links. Traffic engineering was simpler then—metric-based control was adequate because Internet backbones were much smaller in terms of the number of routers, number of links, and amount of traffic. Also, in the days before the tremendous popularity of the any-to-any WWW, the Internet’s topological hierarchy forced traffic to flow across more deterministic paths. Current events on the network (for example, John Glenn and the Starr Report) did not create temporary hot spots.

44 Metric-Based Traffic Engineering
Traffic sent to A or B follows the path with the lowest metrics. [Figure: two paths from the source router, a top path with metrics 1 + 1 toward networks A and B, and a bottom path through router C with metrics 1 + 2.] The figure shows metric-based traffic engineering in action: when sending large amounts of data to network A, traffic is routed through the top router because it has a lower overall cost.

45 Metric-Based Traffic Engineering
Drawbacks: redirecting traffic flow to A via C causes traffic for B to move also! Some links become underutilized or overutilized. [Figure: the same topology with the top path's metric raised from 1 to 4.] Rerouting traffic for router A by raising metrics along the current path has the desired effect of forcing the traffic to use router C, but has the unintended effect of causing traffic destined for B to do the same. Since interior gateway protocol (IGP) route calculation was topology driven and based on a simple additive metric such as the hop count or an administrative value, the traffic patterns on the network were not taken into account when the IGP calculated its forwarding table. As a result, traffic was not evenly distributed across the network's links, causing inefficient use of expensive resources. Some links became congested, while other links remained underutilized. This might have been satisfactory in a sparsely-connected network, but in a richly-connected network (that is, bigger, more thickly meshed and more redundant) it is necessary to control the paths that traffic takes in order to balance loads.

46 Metric-Based Traffic Engineering
Drawbacks Complexity made metric control tricky Adjusting one metric might destabilize network As Internet service provider (ISP) networks became more richly connected, it became more difficult to ensure that a metric adjustment in one part of the network did not cause problems in another part of the network. Traffic engineering based on metric manipulation offers a trial-and-error approach rather than a scientific solution to an increasingly complex problem.

47 Discomfort Grows Mid 1990’s
ISPs became uncomfortable with the size of the Internet core. Large growth spurt imminent. Routers too slow. Metric "engineering" too complex. IGP routing calculation was topology driven, not traffic driven. Router-based cores lacked predictability. Metric-based traffic controls continued to be an adequate traffic engineering solution until 1994 or 1995. At this point, some ISPs reached a size at which they did not feel comfortable moving forward with either metric-based traffic controls or router-based cores. Traditional software-based routers had the potential to become traffic bottlenecks under heavy load because their aggregate bandwidth and packet-processing capabilities were limited. It became increasingly difficult to ensure that a metric adjustment in one part of a huge network did not create a new problem in another part. And router-based cores did not offer the high-speed interfaces or deterministic performance that ISPs required as they planned to grow their core networks.

48 Overlay Networks are Born
ATM switches offered performance and predictable behavior ISPs created “overlay” networks that presented a virtual topology to the edge routers in their network Using ATM virtual circuits, the virtual network could be reengineered without changing the physical network Benefits Full traffic control Per-circuit statistics More balanced flow of traffic across links Asynchronous Transfer Mode (ATM) switches offered a solution when ISPs required more bandwidth to handle increasing traffic loads. The ISPs who migrated to ATM-based cores discovered that ATM permanent virtual circuits (PVCs) offered precise control over traffic flow across their networks. ISPs came to rely on the high-speed interfaces, deterministic performance, and PVC functionality that ATM switches provided. Around 1994 or 1995, Internet traffic became so high that ISPs had to migrate their networks to support trunks that were larger than T3 (45 Mbps). Fortunately, OC-3 ATM interfaces (155 Mbps) became available for switches and routers. To obtain the required speed, ISPs had to redesign their networks so that they could use higher speeds supported by a switched core. An ATM-based core fully supports traffic engineering because it can explicitly map PVCs. Mapping PVCs is done by provisioning an arbitrary virtual topology on top of the network's physical topology. PVCs are mapped from edge to edge to precisely distribute traffic across all links so that they are evenly utilized. This approach eliminates the traffic-magnet effect of least-cost routing, which results in overutilized and underutilized links. The traffic engineering capabilities supported by ATM PVCs made the ISPs more competitive within their market, so they could provide better service to their customers at a lower cost. Per-PVC statistics provided by the ATM switches facilitate monitoring traffic patterns for optimal PVC placement and management. Network designers initially provision each PVC to support specific traffic engineering objectives, and then they constantly monitor the traffic load on each PVC. If a given PVC begins to experience congestion, the ISP has the information it needs to remedy the situation by modifying either the virtual or physical topology to accommodate shifting traffic loads.

49 Overlay Networks ATM core ringed by routers
PVCs overlaid onto physical network A Physical View B C In an ATM overlay network, routers surround the edge of the ATM cloud. Each router communicates with every other router by a set of PVCs that are configured across the ATM physical topology. The PVCs function as logical circuits, providing connectivity between edge routers. The routers do not have direct access to information describing the physical topology of the underlying ATM infrastructure. The routers have knowledge only of the individual PVCs that appear to them as simple point-to-point circuits between two routers. The illustration above shows how the physical topology of an ATM core differs from the logical IP overlay topology. The distinct ATM and IP networks “meet” when the ATM PVCs are mapped to router logical interfaces. Logical interfaces on a router are associated with ATM PVCs, and then the routing protocol works to associate IP prefixes (routes) with the subinterfaces. Finally, ATM PVCs are integrated into the IP network by running the IGP across each of the PVCs to establish peer relationships and exchange routing information. Between any two routers, the IGP metric for the primary PVC is set such that it is more preferred than the backup PVC. This guarantees that the backup PVC is used only when the primary PVC is not available. Also, if the primary PVC becomes available after an outage, traffic automatically returns to the primary PVC from the backup. A Logical View C B

50 vBNS ATM Design Full UBR PVP mesh between terminal switches to carry “Best Effort” traffic

51 vBNS Backbone Network Map
[Figure: vBNS backbone map. Sites and POPs: Seattle, Boston, National Center for Atmospheric Research, Ameritech NAP, Cleveland, Chicago, New York City, Sprint NAP, Pittsburgh Supercomputing Center, Perryman MD, San Francisco, Denver, National Center for Supercomputing Applications, Washington DC, MFS NAP, Los Angeles, San Diego Supercomputer Center, Atlanta, Houston. Legend: A = Ascend GRF 400, C = Cisco 7507, J = Juniper M40, FORE ASX-1000, NAP; link speeds DS-3, OC-3C, OC-12C, and OC-48.]

52 Overlay Nets Had Drawbacks
Growth in full mesh of ATM PVCs stresses everything Router IGP runs out of steam Practical limitation of updating configurations in each switch and router ATM 20% Cell Tax ATM SAR speed limitations OC-48 SAR very difficult/expensive to build OC-192 SAR? A network that deploys a full mesh of ATM PVCs exhibits the traditional "n-squared" problem. For relatively small or moderately sized networks, this is not a major issue. But for core ISPs with hundreds of attached routers, the challenge can be quite significant. For example, when expanding a network from five to six routers, an ISP must increase the number of simplex PVCs from 20 to 30. However, increasing the number of attached routers from 200 to 201 requires the addition of 400 new simplex PVCs—an increase from 39,800 to 40,200 PVCs. These numbers do not include backup PVCs or additional PVCs for networks running multiple services that require more than one PVC between any two routers. A number of operational challenges are caused by the "n-squared" problem: New PVCs must be mapped over the physical topology New PVCs must be tuned so that they have minimal impact on existing PVCs The large number of PVCs might exceed the configuration and implementation capabilities of the ATM switches The configuration of each switch and router in the core must be modified Deploying a full mesh of PVCs also stresses the IGP. This stress results from the number of peer relationships that must be maintained, the challenge of processing "n-cubed" link-state updates in the event of a failure, and the complexity of performing the Dijkstra calculation over a topology containing a significant number of logical links. Any time the topology results in a full mesh, the impact on the IGP is a suboptimal topology that is extremely difficult to maintain. As an ATM core expands, the "n-squared" stress on the IGP compounds.
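A quick check of the "n-squared" arithmetic quoted in the notes: a full mesh of n routers needs n × (n − 1) simplex PVCs.

def simplex_pvcs(n_routers):
    return n_routers * (n_routers - 1)

for n in (5, 6, 200, 201):
    print(n, "routers ->", simplex_pvcs(n), "simplex PVCs")
# 5 -> 20, 6 -> 30, 200 -> 39800, 201 -> 40200 (one more router costs 400 new PVCs)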

53 In the mean time: Routers caught up MPLS came along
Current generation of routers have High speed, wire-rate interfaces Deterministic performance Software advances MPLS came along Fuses best aspects of ATM PVCs with high-performance routing engines Uses low-overhead circuit mechanism Automates path selection and configuration Implements quick failure recovery There are many disadvantages of continuing the IP-over-ATM model when other alternatives are available. High-speed interfaces, deterministic performance, and traffic engineering using PVCs no longer distinguish ATM switches from Internet backbone routers. Furthermore, the deployment of a router-based core solves a number of inherent problems with the ATM model— the complexity and expense of coordinating two sets of equipment, the bandwidth limitations of ATM SAR interfaces, the cell tax, the “n-squared” PVC problem, the IGP stress, and the limitation of not being able to operate over a mixed-media infrastructure. The JUNOS operating system fuses the best aspects of ATM PVCs with the high- performance Packet Forwarding Engine. Using a low-overhead circuit-switching protocol with automatic path selection and maintenance, the entire network core can be optimized by distributing traffic engineering chores to each router.

54 MPLS for Traffic Engineering
Low-overhead virtual circuits for IP Originally designed to make routers faster Fixed label lookup faster than longest match used by IP routing Not true anymore Value of MPLS is now in traffic engineering Other MPLS Benefits: No second network A fully integrated IP solution – no second technology Traffic engineering Lower cost A CoS enabler Failover/link protection Multi-service and VPN support It is commonly believed that Multiprotocol Label Switching (MPLS) significantly enhances the forwarding performance of label-switching routers. It is more accurate to say that exact-match lookups, such as those performed by MPLS and ATM switches, have historically been faster than the longest match lookups performed by IP routers. However, recent advances in silicon technology allow ASIC-based route-lookup engines to run just as fast as MPLS or ATM virtual path identifier/virtual circuit identifier (VPI/VCI) lookup engines. The real benefit of MPLS is that it provides a clean separation between routing (that is, control) and forwarding (that is, moving data). This separation allows the deployment of a single forwarding algorithm—MPLS—that can be used for multiple services and traffic types. In the future, as ISPs need to develop new revenue-generating services, the MPLS forwarding infrastructure can remain the same while new services are built by simply changing the way packets are assigned to an LSP. For example, packets could be assigned to a label-switched path based on a combination of the destination subnetwork and application type, a combination of the source and destination subnetworks, a specific quality of service (QoS) requirement, an IP multicast group, or a VPN identifier. In this manner, new services can easily be migrated to operate over the common MPLS forwarding infrastructure.

55 AGENDA MPLS Overview Traffic Engineering VPNs

56 What Is a Virtual Private Network?
[Figure: a shared infrastructure connecting corporate headquarters and branch offices (intranet), mobile users and telecommuters (remote access), and suppliers, partners and customers (extranet).] "A private network constructed over a shared infrastructure." Virtual: an artificial object simulated by computers (not really there!). Private: separate/distinct environments; separate addressing and routing systems. Network: a collection of devices that communicate among themselves.

57 Deploying VPNs using Overlay Networks
[Figure: CPE routers connected by DLCIs across a provider Frame Relay network of FR switches.] Operational model: PVCs overlay the shared infrastructure (ATM/Frame Relay); routing occurs at the CPE. Benefits: mature technologies; inherently 'secure'; service commitments (bandwidth, availability, etc.). Limitations: scalability and management of the overlay model; not a fully integrated IP solution.

58 MPLS: A VPN Enabling Technology
[Figure: Sites 1–3 connected through the service provider network.] Benefits: seamlessly integrates multiple "networks"; permits a single connection to the service provider; supports rapid delivery of new services; minimizes operational expenses; provides higher network reliability and availability.

59 There are Three Types of VPNs
End-to-End (CPE-based) VPNs: L2TP & PPTP; IPSec. Layer 2 VPNs: CCC; CCC & MPLS hybrid. Layer 3 VPNs: RFC 2547bis.

60 End to End VPNs: L2TP and PPTP
[Figure: a remote user's V.x modem PPP dial-up terminates on a dial access server; an L2TP or PPTP tunnel carries the session across the dial access provider to the service provider or VPN.] Application: dial access for remote users. Layer 2 Tunneling Protocol (L2TP): RFC 2661; combination of L2F and PPTP. Point-to-Point Tunneling Protocol (PPTP): bundled with Windows/Windows NT. Both support IPSec for encryption. Authentication & encryption at tunnel endpoints.

61 End to End VPNs: The IP Security Protocol (IPSec)
Defines the IETF’s layer 3 security architecture Applications: Strong security requirements Extend a VPN across multiple service providers Security services include: Access control Data origin authentication Replay protection Data integrity Data privacy (encryption) Key management

62 End to End VPNs: IPSec – Example
[Figure: an IPSec ESP tunnel-mode connection between CPE at corporate HQ and CPE at a branch office across the public Internet.] Routing must be performed at the CPE. Tunnels terminate on the subscriber premises. Only CPE equipment needs to support IPSec; modifications to shared resources are not required. ESP tunnel mode: authentication ensures integrity from CPE to CPE; encrypts the original header/payload across the Internet; supports private address space.

63 Layer 2 VPNs: CCC/MPLS Benefits Limitations LSPs
[Figure: CPE routers attach to PE routers over ATM (or Frame Relay) circuits (DLCIs 600 and 610 on one side, 506 and 408 on the other); the PEs cross-connect each DLCI to an LSP nested in LSP 5 across the MPLS core. CCC tables: at one PE, LSP 2 in LSP 5 ↔ DLCI 600 and LSP 6 in LSP 5 ↔ DLCI 610; at the other, LSP 2 in LSP 5 ↔ DLCI 506 and LSP 6 in LSP 5 ↔ DLCI 408.] CCC function. Benefits: reduces provider configuration complexity; MPLS traffic-engineered core; subscriber can run any Layer 3 protocol; user nets do not know there is a cloud in the middle. Limitations: circuit type (ATM/FR) must be "like to like".
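A sketch of the circuit cross-connect idea using the DLCI and LSP numbers from the figure (the DLCI pairing is read from the example; the code itself is illustrative only): each PE statically maps a local layer 2 circuit to an LSP, so frames cross the MPLS core without any layer 3 lookup.

CCC_TABLE_PE1 = {            # local DLCI -> (trunk LSP, nested LSP)
    600: ("LSP 5", "LSP 2"),
    610: ("LSP 5", "LSP 6"),
}
CCC_TABLE_PE2 = {            # (trunk LSP, nested LSP) -> DLCI on the far side
    ("LSP 5", "LSP 2"): 506,
    ("LSP 5", "LSP 6"): 408,
}

def cross_connect(ingress_dlci):
    lsp = CCC_TABLE_PE1[ingress_dlci]   # PE1: DLCI -> LSP nested in the trunk
    return CCC_TABLE_PE2[lsp]           # PE2: nested LSP -> outgoing DLCI

print(cross_connect(600))   # -> 506
print(cross_connect(610))   # -> 408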

64 CCC Example: Abilene and ISP Service on one link
Big "I" Internet traffic: ATM VC1 is terminated and IP packets are delivered to the Qwest ISP. Abilene traffic: ATM VC2 is mapped to the port facing Abilene. [Figure: University X connects over ATM access to an M40 that hands Internet traffic to the Qwest ISP and passes Abilene traffic through.] Circuit Cross Connect can be used to support multiple services over an IP/MPLS backbone. That is, Frame Relay VCs or ATM VCs can be mapped directly onto MPLS LSPs. Note that there are no provisioning tools available and all mapping is static. However, a single M40 can both terminate VCs for a layer 3 lookup and serve as a layer 2 pass-through device. An ATM-MPLS-ATM circuit or a FR-MPLS-FR circuit can be provisioned across the same M40 routers that support the layer 3 forwarding of Internet traffic. An M20/40/160 can both terminate ATM PVCs (layer 3 lookup) and support CCC pass-through on the same port.

65 vBNS used CCC and MPLS to tunnel IPv6 across their backbone for SC2000
[Figure: IPv6 traffic arriving over ATM in Chicago is cross-connected (CCC) onto an LSP across the vBNS/vBNS+ IPv4 backbone and handed back to ATM at SC2000 in Dallas.]

66 Layer 3 VPNs: RFC 2547 - MPLS/BGP VPNs
[Figure: CPE routers at Sites 1–3 connect to PE routers at the edge of the service provider network; P routers form the core; the PEs hold per-VPN forwarding tables (FT).] MPLS (Multiprotocol Label Switching) is used for forwarding packets over the backbone. BGP (Border Gateway Protocol) is used for distributing routes over the backbone. Multiple forwarding tables (FT) on some edge routers, one for each VPN.
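A sketch of the "one forwarding table per VPN" idea (VPN names, prefixes, and labels invented): the PE looks up the destination in the table belonging to the VPN the packet came from, so different customers can reuse overlapping private address space.

PER_VPN_FT = {
    "VPN-A": {"10.1.0.0/16": ("LSP to PE2", "inner label 101")},
    "VPN-B": {"10.1.0.0/16": ("LSP to PE3", "inner label 202")},  # same prefix, different VPN
}

def pe_lookup(vpn, prefix):
    return PER_VPN_FT[vpn][prefix]

print(pe_lookup("VPN-A", "10.1.0.0/16"))   # -> ('LSP to PE2', 'inner label 101')
print(pe_lookup("VPN-B", "10.1.0.0/16"))   # -> ('LSP to PE3', 'inner label 202')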

67 Questions? Thank you for attending. You can receive more information about Juniper Networks from our web page at http://www.juniper.net.

68 jjamison@juniper.net http://www.juniper.net
Thank You. Thank you for attending. You can receive more information about Juniper Networks from our web page at http://www.juniper.net.

