Carrier Ethernet Technology and Standards Update


1 Carrier Ethernet Technology and Standards Update
Presented by: Rick Gregory, Senior Systems Consulting Engineer. May 25, 2011. Good afternoon and welcome to the Carrier Ethernet Technology and Standards update. My name is Rick Gregory, a Senior Systems Consulting Engineer, and today I will be reviewing a number of topics regarding Carrier Ethernet from a technology and standards perspective. So without further ado, let's move on to the next chart and review the agenda we'll be covering during today's session.

2 Carrier Ethernet: Evolution, Defined

3 Ethernet Evolution Timeline 1970s to today
History always provides a good perspective for the future, including with technology. Ethernet is a ubiquitous networking technology: one has to look no further than a PC or laptop to see an Ethernet NIC, usually mounted internally these days and visible through the RJ-45 copper interface connector. Ethernet is a family of frame-based computer networking technologies for local area networks (LANs), standardized by the IEEE 802.3 standard. The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the Physical Layer of the OSI networking model, a means of network access at the Media Access Control (MAC)/Data Link Layer, and a common addressing format. We'll come back to that in a minute. The combination of the twisted-pair versions of Ethernet for connecting end systems to the network, along with the fiber optic versions for site backbones, is the most widespread wired LAN technology. It has been in use from around 1980 to the present, largely replacing competing LAN standards such as Token Ring, FDDI, and ARCNET.

But this is not a recent phenomenon. Ethernet has existed since the early 1970s, when it was first proposed and demonstrated as part of a shared-medium network protocol by Bob Metcalfe (et al.) at Xerox PARC. The experimental Ethernet described in that paper ran at 3 Mbit/s and had eight-bit destination and source address fields, so the original Ethernet addresses were not the MAC addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so those were packet types within a given protocol, rather than the packet type in current Ethernet, which specifies the protocol being used. This early version of Ethernet used coaxial cable and BNC connectors as the physical media.

Ethernet evolved through several developments, including CSMA/CD and the use of copper twisted pair, to overcome system and deployment practicalities. Twisted-pair Ethernet systems have been used since the mid-1980s, beginning with StarLAN but becoming widely known with 10BASE-T, offering 10-megabit operation. These systems replaced the coaxial cable on which early Ethernets were deployed with a system of hubs linked with unshielded twisted pair (UTP), ultimately replacing the CSMA/CD scheme in favor of a switched full-duplex system offering higher performance. During the early 1990s, 100BASE-T emerged, increasing the line rate to 100 Mb/s, commonly called "Fast Ethernet". This speed trend continued through that decade, coupled with the adoption of fiber optics for extended reach as the media of choice. By 1999, 1000BASE-T was standardized, offering gigabit speeds for LAN applications over fiber as well as copper in some applications. Over the last decade, we have seen higher-speed Ethernet standards emerge for intra-building LAN and MAN access as well as WAN transport implementations at speeds of 10, 40 and now 100 Gbps. The optical components supporting these various implementations have matured at the same time, with the common acceptance of multi-source agreements (MSAs) and small form-factor pluggable (SFP) optical modules. Let's move on to the next chart and discuss some of the events that occurred in Ethernet and the associated effects.
1973: Metcalfe & Boggs of Xerox PARC demonstrate an ALOHA-inspired packet-based network access protocol over a wired shared medium; 3 Mb/s operation
1982: "The Ethernet Blue Book" from Digital, Intel, Xerox (DIX); 10 Mb/s operation based on the Xerox PARC concepts
1985: IEEE Carrier Sense Multiple Access with Collision Detection (CSMA/CD); formal standards definition, based on the "Blue Book"
1999: Gigabit Ethernet standards ratified for use over copper twisted pair (1000BASE-T, IEEE 802.3ab); vendors also implement fiber optic versions
2000s: Fiber standards ratified for single and multimode fiber; speeds evolve to 10, 40 and (eventually) 100 Gbps

4 Ethernet over any media…any service over Ethernet
Ethernet Evolution: Events and Effects. The effect: Carrier Ethernet becomes the leading transport technology.
- International standardization → Ethernet is the first global network access technology
- Unrivaled success in the enterprise → access, metro, and wide-area applications
- Large number of component and equipment manufacturers → lowest cost per megabit; under 8¢ per megabit for a triple-speed NIC
- Mature, transparent Layer 2 technology → simple plug-and-play installation

Ethernet was the first widely available LAN technology on the scene. Simplicity of design resulted in cheaper equipment and allowed for easy deployment. Ethernet was originally designed for a bus topology. Evolution initially created the repeater to join multiple cable segments into larger networks. Repeaters evolved to have multiple ports, becoming the Ethernet hub. With hubs, Ethernet was no longer limited to a single short cable length; networks grew, which eventually increased the probability of collisions. Beyond a certain point, throughput was unacceptably degraded, leading to bridges, which connected networks at the data link layer and isolated them on the physical layer. Bridges gave way to switches, which operated in hardware.

So, just a few events are listed here alongside their long-term effects. International standardization led to the adoption of Ethernet across the global community as the universally accepted network access technology, first in the LAN and increasingly in the WAN. As enterprises adopted the standard, the definition of "wide" area network changed somewhat with the reversal of the 80/20 rule (80% of network traffic now leaving the building), blurring into the telco terms of metro and even long-haul networks as large organizations pushed Ethernet out in larger deployments. All of this implies a very large demand for underlying components and systems to support Ethernet. In turn, suppliers developed competitive solutions such as the SFP to reduce component cost. As one example, consider that a triple-speed NIC costs somewhere below 8 cents per megabit. As a mature and transparent Layer 2 technology, users can depend on simple installation and automated provisioning, allowing for highly flexible usage with low support costs. The result of this long-term evolution is that Ethernet is now the de facto technology for transport and switching for any service type over any media, copper or fiber. Let's move on to the next chart and review some of the basics of Ethernet bridging. Ethernet over any media…any service over Ethernet.

5 Basic Ethernet Bridging (IEEE 802.1D)
A switch builds its forwarding table by LEARNING where each station is (relative to itself) by watching the SA of packets it receives. The slide diagram shows a 3-port bridge connected to six stations (A through F) and the forwarding table (address to port) it builds; unknown-destination, multicast, and broadcast traffic is flooded.

Four important concepts/operations (upon switch receipt of a packet):
LEARNING: the source MAC address (SA) and port number, if not known
FORWARDING: looking up the destination address (DA) in the table and sending to the correct port
FILTERING: discarding packets if the destination port equals the receiving port
FLOODING: sending to all other ports if the DA is unknown, multicast or broadcast

At a basic level, an Ethernet switch learns about the data sources and destinations it is connected to by building a forwarding table as illustrated here. In this case, we show a 3-port bridge with several variants of connections to six different end destinations, called stations. The switch builds this forwarding table by learning where each station is located based on the source address of incoming packets. Once the table is built, the switch will either forward, filter or flood each packet. Forwarding does exactly as the name implies: data received from A on port 1 destined for station B or C will be sent to port 2. Filtering simply throws away packets that are sourced and destined for the same port, for example a packet heading from station B to C. Flooding is when the bridge sends packets to all of its other ports, either because it couldn't determine the destination or because it is a multicast or broadcast packet. Connectionless Ethernet is when this bridging occurs such that it is invisible to each station; multiple local networks appear to be on the same network, much like stations B and C. Let's move on to the next chart and review Ethernet's evolution.
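To make the four operations concrete, here is a minimal, hypothetical sketch of the learning-bridge logic in Python (illustrative only; the port numbering and the simplified group-address test are assumptions, not part of the slide):

```python
# Hypothetical sketch of the four 802.1D operations described above.

def is_group_address(mac):
    # Multicast and broadcast MACs have the least-significant bit
    # of the first octet set to 1.
    return int(mac.split(":")[0], 16) & 1 == 1

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports  # e.g., [1, 2, 3] for the 3-port bridge shown
        self.fdb = {}       # forwarding table: MAC address -> port

    def receive(self, sa, da, in_port):
        self.fdb[sa] = in_port                              # LEARNING
        out_port = self.fdb.get(da)
        if is_group_address(da) or out_port is None:
            return [p for p in self.ports if p != in_port]  # FLOODING
        if out_port == in_port:
            return []                                       # FILTERING
        return [out_port]                                   # FORWARDING

bridge = LearningBridge([1, 2, 3])
# Station A (port 1) sends to station B before B has been learned: flood.
print(bridge.receive("00:00:00:00:00:0a", "00:00:00:00:00:0b", 1))  # [2, 3]
# B replies from port 2; A is already in the table: forward to port 1 only.
print(bridge.receive("00:00:00:00:00:0b", "00:00:00:00:00:0a", 2))  # [1]
```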

6 Ethernet's Evolution
Prior to IEEE standardization, Ethernet began as a proprietary medium where bandwidth was limited to 10 Mb/s and transmission was half duplex. Today, Ethernet's bandwidth is capable of going to 100 Gig, and its distance is limited only by the media it is transported on. Also, the end-user broadcast domain in the initial Ethernet implementations included the entire LAN, so there was no way to control broadcasts or the size of the broadcast domain on your LAN; every user added to the network only exacerbated the problem. One problem that occurred early on was a phenomenon known as a broadcast storm. A broadcast storm from a malfunctioning NIC card, as an example, could bring down an entire Ethernet network unless you could find the PC with the malfunctioning NIC and turn it off. At this time, Network General became a viable network troubleshooting company because they had a Sniffer product that could detect the source MAC address that was creating the storm (you could view the excessive broadcasts being sent).

Today, Ethernet has class-of-service and VLAN-controlled traffic engineering capabilities that allow network operators to separate broadcast domains using a 12-bit VLAN ID, which allows multiple "virtual" LANs to share the same physical cable. We will discuss later in this presentation other mechanisms that increase Ethernet VLAN scalability and control. Network operators can also prioritize VLAN traffic using a PCP (Priority Code Point, also known as a P bit, from the IEEE 802.1p standard) to classify, schedule and queue business-critical voice, video and data traffic ahead of best-effort traffic, as well as rate-limit the bandwidth assigned per VLAN in 64 kb/s or 1 Mb/s increments. This is commonly referred to as tiered SLAs, or Service Level Agreements. Lastly, Ethernet was initially a bus topology and the network infrastructure was coaxial cable. Today, Ethernet supports point-to-point E-Line, point-to-multipoint E-Tree and multipoint-to-multipoint E-LAN topologies, and it can ride on either UTP or fiber optic cabling in both the access and the core of the network. Now that we have covered some of the basics, let's move on to slide 9 and dig into some of the current and forthcoming standards in Carrier Ethernet.

Originally vs. now:
- Bandwidth: 10 Mbps, then 100M → 1 Gbps, 10G, 40G, 100G
- Transmission: half duplex → full duplex
- Collisions: yes (CSMA/CD) → no collisions (full duplex)
- Broadcast domain: entire LAN → VLAN controlled
- Prioritization: none → 802.1p
- Topology: bus → E-LAN, E-Tree, E-Line (access, trunks)
- Cabling: coax → UTP, optical (access, trunks)
- Utilization: less than 30% due to collisions → approaching 100%
- Distance: limited by CSMA/CD propagation time → limited only by media characteristics
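As a concrete aside on the 802.1Q mechanics mentioned above: the P bits (PCP) and the 12-bit VLAN ID share one 16-bit Tag Control Information word. A small illustrative Python snippet (an assumption for exposition, not from the deck) decodes it:

```python
import struct

def parse_tci(tci_bytes):
    # 802.1Q Tag Control Information: 3-bit PCP | 1-bit DEI | 12-bit VID
    (tci,) = struct.unpack("!H", tci_bytes)
    pcp = tci >> 13          # Priority Code Point (the 802.1p "P bits"), 0-7
    dei = (tci >> 12) & 0x1  # Drop Eligibility Indicator
    vid = tci & 0x0FFF       # 12-bit VLAN ID
    return pcp, dei, vid

print(parse_tci(b"\xa0\x64"))  # (5, 0, 100): priority-5 traffic on VLAN 100
```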

7 Standards: Current, Forthcoming, and Direction
7

8 Scaling Ethernet…beyond 802.1ad (Q-in-Q)
Preferred: a "large" number of customers. Reality: one MAC domain for customer and provider results in a large forwarding-table size; the 48-bit MAC address has no 'prefixing' as in an IP address, so every network switch needs to learn the destination addresses (DAs) of customer switches.
Preferred: customer isolation/transparency. Reality: one L2 broadcast domain for customer and provider; broadcast storms in one customer's network can affect other customers, and the provider as well.
Preferred: million+ service instances. Reality: limited VLAN space, i.e., only 4095 values (2^12 minus 1). 802.1ad (Q-in-Q) suggested 16 million+ instances, but forwarding is still on the same S-tag (4095!).
Preferred: deterministic behavior for services. Reality: the "p" bit gives priority but no bandwidth guarantee, and forwarding/backup paths are arbitrary; the data plane depends on the address table, VLAN partition, spanning tree, and bandwidth contention.

When 802.1Q VLANs were deployed in large carrier and enterprise networks, one of the key issues we confronted was service scalability: the initial implementation could only scale to a few thousand VLAN addresses (4095 usable values in the 12-bit VID). In an effort to increase VLAN address scalability, the IEEE 802.1ad standard was ratified. This approach was known as Q-in-Q. While it addressed scalability for only a short while by stacking a service tag on top of the customer tag, it created another problem, because the customer VLAN and the provider or backbone VLAN shared the same MAC address space. Therefore, the customer and the provider shared the same broadcast domain, so there was no Layer 2 broadcast-domain demarcation, and if a VLAN explosion or broadcast storm occurred at the customer location it would propagate into the service provider's network, or into the backbone network if you are building the network yourself. This also meant the learning, broadcast and flooding of Ethernet MACs, as discussed earlier, would be shared between the customer location and the service provider or backbone location. At the time, we also added the P bit to prioritize our VLAN-based applications, but we still lacked scalability, topping out at the 4095-value S-tag, and had an out-of-control broadcast domain shared between the customer's LAN and the MAN/WAN. Therefore, we needed to address both scalability and provide a more controlled broadcast domain that separated the customer's LAN network from the backbone network at Layer 2. In summary, a mechanism was needed that would scale the network to millions of VPNs and control the broadcast and flooding of Ethernet. Now let's move on to the next chart and review some of the mechanisms that can be leveraged to address Ethernet's scalability and MAC address control challenges.
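The scaling numbers on this slide are easy to sanity-check with plain arithmetic (a throwaway calculation, not product code):

```python
# 12-bit VID: 2**12 = 4096 values, less reserved values, i.e. 4095 usable.
single_tag = 2**12 - 1
# Q-in-Q stacks two tags, suggesting single_tag**2 (~16.8M) combinations...
qinq_pairs = single_tag ** 2
# ...but an 802.1ad provider bridge forwards on the S-tag alone, so the
# provider still sees only 4095 distinct service instances.
print(single_tag, qinq_pairs)   # 4095 16769025
# The 24-bit I-SID of 802.1ah (later slides) is what truly delivers millions:
print(2**24)                    # 16777216
```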

9 Ethernet Transport at Layer 2 & 2.5: Approaches to COE
VLAN and Stacked VLAN (Q-in-Q) Cross-Connects: explicit forwarding paths using VLAN-based classification; tunneling via VLAN tag encapsulations and translations. Defined in the IEEE 802.1Q and IEEE 802.1ad specifications. Standards completed.
Provider Backbone Bridging with Traffic Engineering (PBB-TE) and Provider Backbone Bridging (PBB): explicit forwarding paths using MAC + VLAN tag; tunneling via MAC-in-MAC encapsulations. Defined in the IEEE 802.1Qay and IEEE 802.1ah specifications. Standards completed.
E-SPRing: a shared Ethernet ring topology-based protection mechanism that delivers sub-50ms failover in IEEE 802.1Q and IEEE 802.1ad (Q-in-Q) Ethernet networks. Defined in the ITU G.8032 specification. Standards completed.
MPLS & VPLS/H-VPLS: widely deployed in the core, less so in the metro/access. Uses pseudowire emulation edge-to-edge (PWE3) for Ethernet and multi-service tunneling over IP/MPLS. Can be point-to-point or multipoint (VPLS). Defined in IETF RFC 4364 (formerly 2547bis) and the "Dry Martini" pseudowire drafts. Standards completed.
Provider Link State Bridging (PLSB): adds SPB (Shortest Path Bridging) using IS-IS for loop suppression to make Ethernet fit for a distributed mesh and point-to-multipoint routing system. PBB-TE/PBB along with PLSB can operate side by side in the same network infrastructure. PLSB is optimized for any-to-any E-LAN and point-to-multipoint E-Tree network topology service delivery. Defined in the IEEE 802.1aq specification. Standards to be completed; target completion approximately 2H 2011.
MPLS-TP: formerly known as T-MPLS (defined by the ITU-T); a new working group was formed in the IETF, now called MPLS-TP. A transport-centric version of MPLS for carrying Ethernet services based on PWE3 and LSP constructs. Defined in IETF RFC 5654 and related drafts; standard to be completed, with target completion approximately 1H 2012.

First, you can use VLAN cross-connects in either 802.1Q or 802.1ad (Q-in-Q) formats. This is done with explicit forwarding paths using VLAN classifications. VLAN cross-connects are effective, but are not widely implemented. The second approach is to implement PBB-TE, or Provider Backbone Bridging with Traffic Engineering. As a pre-standard, PBB-TE was known as PBT (Provider Backbone Transport) prior to IEEE standards approval. This is an IEEE 802.1Qay open-standards-based solution that addresses scalability by implementing the PBB (Provider Backbone Bridging) 24-bit I-SID, leveraging the MAC-in-MAC header frame format, and can scale to 16 million VPNs. Besides scalability, another area addressed by this approach is control of the MAC address domain. The MAC-in-MAC approach separates the MAC domain between the customer's LAN and the backbone: the customer has its own source and destination MAC address domain, and the backbone has its own source and destination MAC address domain. I will be going into detail on both PBB and PBB-TE during the presentation and will outline their benefits. Other mechanisms that we'll be reviewing include ITU G.8032 E-SPRing, a shared Ethernet ring topology-based protocol that delivers sub-50ms failover in 802.1Q and 802.1ad (Q-in-Q) Ethernet networks. Rick Gregory will be going into detail on E-SPRing during this presentation. Rick will also cover the MPLS protocol mechanisms, including IETF RFC 4364 BGP/MPLS IP-VPNs, Layer 2 "Dry Martini" VPNs, and RFC 5654 MPLS-TP (the MPLS Transport Profile, formerly known as T-MPLS). MPLS-TP is not yet a completed open standard in the IETF.
Lastly, I will review PLSB, or Provider Link State Bridging, a Shortest Path Bridging protocol that leverages the IS-IS interior gateway protocol for loop suppression in an effort to make Ethernet fit into a distributed mesh routing system. PLSB is presently in the IEEE 802.1aq standards body and is not yet a ratified standard. Let's move on to slide 12 and take a closer look at CESD's technology and mechanisms.

10 What's Next in Carrier Ethernet?
- 802.1aq PLSB: robust L2 control plane
- G.8032 Ethernet shared ring: resiliency
- 802.1Qay PBB-TE: traffic-engineered Ethernet tunnels
- Y.1731 performance management: proactive performance management
- 802.1ag fault management: service and infrastructure CFM diagnostics
- 802.1ah PBB: scalable, secure data plane
Ethernet has steadily evolved to address more robust networking infrastructures.

Provider Link State Bridging (PLSB) is being defined in the IEEE 802.1aq Shortest Path Bridging working group. It combines IEEE 802.1ah MAC-in-MAC, the IS-IS routing protocol, and the filtering database (FDB) population techniques specified by IEEE 802.1Qay Provider Backbone Bridge Traffic Engineering to produce a link-state-based spanning tree replacement for 802.1ah, based upon shortest path trees and reverse path forwarding for multicast. It is complementary to PBB-TE in that PLSB forwarding delivers virtualized broadcast LAN segments, while PBB-TE provides complete route freedom for point-to-point connections, and both may co-exist on a single platform. Our direction on implementing PLSB in our present CESD portfolio: once PLSB is standardized in the 802.1aq working group, our PLM team will review whether it makes sense to implement. Today, Avaya is deploying a pre-standard version of PLSB from their Nortel Enterprise Business acquisition, and Alcatel-Lucent appears to be close to deploying a pre-standard version as well, so be prepared to battle Avaya's and Alca-Lu's pre-standard versions in the field. Let's move on to the next chart and continue our discussion on PLSB.

11 CESD Technology and Mechanisms: OAM and QoS, Ethernet Service Monitoring

12 Design: Predictable Resilience
Create a stable network that remains stable as it scales. Ciena is the leader in Connection-Oriented Ethernet (COE) and provides a range of carrier-class resiliency schemes (RSTP, MPLS, PBB-TE). COE tunnels (PBB-TE, and MPLS-TP in the future) are connection-oriented and traffic engineered, which provides deterministic performance for predictable SLAs and better resiliency and stability of provider networks.

A Ciena hallmark is service and transport protection and reliability. Ciena's Ethernet-centric portfolio extends this theme with an array of protection mechanisms. For instance, some operators rely on RSTP for access and aggregation networks and insist on consistent failover performance. Several mobile backhaul operators have adopted Ciena's multi-tiered, dual-homed PBB-TE solution as a cost-effective and reliable means to carry revenue-critical services. The diagrams depict a typical network with tiers of PBB-TE tunnels providing device and path protection, each monitored by the IEEE 802.1ag CCM control plane. The left-hand diagram shows a physical connectivity view. The right-hand diagram, sometimes called the Eiffel Tower view, shows service connectivity using virtual switches to effect LAN connectivity. This is merely one possible configuration supported by Ciena's solution. By using a multi-tiered tunnel approach, base stations can be added, serviced, and upgraded without having to touch all layers of network elements; only the lowest tier of PBB-TE encap/decap tunnels must be reconfigured. This simplifies provisioning and ongoing maintenance, reducing the cost of operations. While not depicted, Ciena supports multiple simultaneous redundant multicast/IPTV sources for enhanced reliability. In the event one source (e.g., a multicast router) goes offline, the backup source can use the existing E-Tree service topology to minimize service disruption. In the diagrams, the PBB-TE domain supports sub-50 ms protection (via 802.1ag Connectivity Check Messages), while the 802.1Q/ad domains are protected using 802.1w RSTP with 50 ms restoration.

13 Granular Bandwidth Control Controlled & measurable for predictable QoS
Specific service identification with rich L1-L2 classification. Segmented bandwidth via a hierarchy of "virtual ports": flow, sub-port (e.g., a department VLAN range), and logical port (e.g., all the client ports of a business), where a flow can be identified by a combination of TCP/UDP port, IP DSCP, MAC, etc. Flexible priority resolution for CoS mapping. Traffic profiles and traffic management at all levels in the hierarchy: specify CIR/CBS, EIR/EBS, and color-aware profiles. Allows efficient service upgrades. (The slide diagram illustrates this hierarchy with example CIR/EIR pairs per flow, such as a voice VLAN, an L2VPN, a TCP port 80 flow, and a DENY rule on a specific IP SA/MAC SA.)

Ciena's solution provides unprecedented levels of service classification. Internally, Ciena's Service Aggregation Switches provide up to 64 class-of-service levels, allowing greater tuning than the typical 8 found within competitive offerings. In addition, certified MEF-compliant committed information rate, excess information rate, and burst parameters can be configured. An example of a Ciena innovation is the use of service templates defining QoS parameters. For instance, a service provider's "Silver" service can be easily changed from 40 Mb/s to 50 Mb/s: every service configured as Silver is automatically changed, dramatically reducing the number of configuration/provisioning steps required by the operator. Enhance revenue with service stratification.
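As a rough illustration of how a CIR/EIR bandwidth profile behaves, here is a simplified two-rate, two-bucket policer sketch in Python (hypothetical and simplified; real MEF bandwidth profiles add color-aware behavior that this sketch omits):

```python
class BandwidthProfile:
    """Two-rate profile sketch: CIR/CBS admits green, EIR/EBS admits yellow."""

    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir = cir_bps / 8.0          # committed rate, bytes/second
        self.eir = eir_bps / 8.0          # excess rate, bytes/second
        self.cbs, self.ebs = cbs_bytes, ebs_bytes
        self.c_tokens, self.e_tokens = cbs_bytes, ebs_bytes  # buckets start full
        self.last = 0.0

    def classify(self, frame_len, now):
        # Refill both token buckets for the elapsed time, capped at burst size.
        dt = now - self.last
        self.last = now
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * dt)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * dt)
        if frame_len <= self.c_tokens:
            self.c_tokens -= frame_len
            return "green"    # within CIR: SLA-guaranteed delivery
        if frame_len <= self.e_tokens:
            self.e_tokens -= frame_len
            return "yellow"   # within EIR: forwarded but drop-eligible
        return "red"          # out of profile: discarded

# A hypothetical "Silver" template: CIR 50 Mb/s, EIR 100 Mb/s.
silver = BandwidthProfile(50_000_000, 64_000, 100_000_000, 128_000)
print(silver.classify(1500, now=0.001))  # "green"
```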

14 Operate: Comprehensive OAM
Reduce the cost to run the network and keep services profitable. A complete standards-based Operations, Administration, and Maintenance (OAM) offering provides visibility, manageability, and controls: proactive SLA assurance, rapid fault isolation, and minimized downtime. It includes both L2- and L3-based performance measurement capability as a way to differentiate services.
- Layer 3 SLA monitoring and metrics (delay, jitter): IETF RFC 5357 TWAMP, the Two-Way Active Measurement Protocol
- Layer 2 SLA monitoring and metrics (delay, jitter, frame loss): ITU-T Y.1731 Ethernet OAM
- Service heartbeats, end-to-end and hop-by-hop fault detection: IEEE 802.1ag CFM, Connectivity Fault Management
- Enhanced troubleshooting, rapid network discovery: IEEE 802.3ah EFM at the physical link
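To show what the L2/L3 delay metrics amount to numerically, here is a simplified sketch of the two-way delay arithmetic (hypothetical sample data; real Y.1731 DMM/DMR and TWAMP exchanges use additional timestamps to remove remote processing time, which is omitted here):

```python
def two_way_metrics(samples):
    # Each sample: (tx_time, rx_time) taken at the initiating MEP, in seconds.
    delays = [rx - tx for tx, rx in samples]
    avg_delay = sum(delays) / len(delays)
    # Frame delay variation (jitter) from successive-delay differences.
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    avg_fdv = sum(diffs) / len(diffs) if diffs else 0.0
    return avg_delay, avg_fdv

samples = [(0.000, 0.0021), (1.000, 1.0024), (2.000, 2.0019)]
print(two_way_metrics(samples))  # ~2.1 ms round-trip delay, ~0.4 ms jitter
```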

15 Technology Options for Packet Transport
Subscriber management and "application"/"service" management sit at the IP/MPLS service edge and core, with metro access and aggregation below. This slide lays out the technology options to support network connectivity for packet transport; the options are routing versus switching (or transport). Although there are IP-over-IP and other routing models, such as IPsec-based IP VPNs over the public Internet, the deterministic transport options with traffic-engineered network connectivity include MPLS-based models with routing for L3 VPNs, MPLS-based models with bridging for L2 VPNs, and Ethernet-based models for L2 VPNs, which can be bridging or just transport/switching. Please note that bridging refers to forwarding based on MAC DA to members of a VLAN, i.e., IEEE bridging. Switching refers to a cross-connect between two physical or logical ports with forwarding based on any header field other than MAC DA alone, for example a VLAN, MAC+VLAN, timeslot, or "generic" tunnel label, but not IEEE bridging; switching also eliminates MAC learning. VLAN cross-connect is included as an option but is not an IEEE open standard. The main advantage of using VLAN cross-connect, with forwarding based on the C-tag, the S-tag, or a combination of C- and S-tags, is that the model is similar to TDM timeslot or ATM VCI/VPI switching/forwarding; further, the network need not learn MAC addresses. As with other transport models, this is also limited to point-to-point services. Let's move on to slide 13 and review the three key technology funnels that enable Ethernet's personality to be transformed into carrier grade.

Routing, i.e., forwarding IP packets:
- IP over {IPsec, GRE over} MPLS
- IP over {IPsec, GRE over} IP
- MPLS over L2TPv3 over IP
- Ethernet over L2TPv3 over IP
Bridging, i.e., forwarding Ethernet frames based on MAC DA:
- Ethernet over Ethernet: PBB
- Ethernet over MPLS: VPWS & VPLS
Switching, i.e., forwarding Ethernet frames based on a tunnel label:
- Ethernet over Ethernet: PBB-TE
- Ethernet over MPLS-TP
Goal: cost-effective, high-performance transport.

16 Mechanisms to Build the Carrier Grade Enterprise Ethernet Network
PBB, PBB-TE, Ethernet OAM. These three funnels on the chart indicate the key mechanisms required to make Ethernet carrier grade. The first is Provider Backbone Bridging (PBB), which increases the customer or service instance identifiers to 16 million VPNs using a 24-bit I-SID in the IEEE 802.1ah MAC-in-MAC frame format. MAC-in-MAC minimizes the learning, flooding and broadcast domains intrinsic to how Ethernet works by separating the customer MAC and the backbone MAC via two autonomous MAC address spaces. The next mechanism is Provider Backbone Bridging with Traffic Engineering, an IEEE 802.1Qay specification that allows network operators to create traffic-engineered, deterministic, connection-oriented Ethernet virtual circuit tunnels in the backbone. Lastly, ITU Y.1731 is used so network operators can monitor the performance of the network, measuring end-to-end frame delay, frame delay variation and frame loss. IEEE 802.1ag Connectivity Fault Management implements a Continuity (or Connectivity) Check Message, known as a CCM, that delivers sub-50ms protection switching in the event of link or nodal failure. The CCM polls the links to ensure the primary path is up and operational; heartbeat messages are sent at a 10ms interval, and when the third consecutive poll goes unanswered, the node automatically switches over, in under 50ms, to the backup Ethernet tunnel. When all the mechanisms in these funnels are combined, the result is a comprehensively managed, connection-oriented, resilient, Ethernet-forwarding-plane-only solution, without any complex routing control plane protocols. Now let's move on to slide 15 and take a closer look at PBB.

- IEEE 802.1ah PBB (MAC-in-MAC): secure customer separation; service/tunnel hierarchy; reduced network state
- IEEE 802.1Qay Ethernet tunneling: deterministic service delivery; QoS and traffic engineering; resiliency and restoration
- Ethernet OAM: connectivity/service checks and complete fault management (802.1ag); performance metrics (ITU Y.1731)
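The sub-50ms figure in the notes above follows directly from the CCM parameters given there; a quick check of the arithmetic (assuming the 10ms interval and three-miss rule as described):

```python
ccm_interval_ms = 10   # heartbeat interval from the notes
missed_ccms = 3        # failure declared on the third unanswered poll
detection_ms = ccm_interval_ms * missed_ccms   # 30 ms to declare the fault
switchover_budget_ms = 50 - detection_ms       # ~20 ms left to move traffic
print(detection_ms, switchover_budget_ms)      # 30 20
```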

17 Performance Monitoring and Connectivity Fault Management

18 Maturing Ethernet OAM into a Transport Technology
True Ethernet transport must maintain important functions from the TDM transport environment:
- Traffic engineering for deterministic bandwidth utilization
- Network planning: bandwidth resources and traffic placement
- Performance monitoring and statistics collection
- Fault sectionalization and propagation mechanisms
- Trace and loopback facilities
- Local link management
- Control plane for automated end-to-end provisioning and resiliency

A partial list of completed and evolving standards:
- IEEE 802.1Qay for PBB-TE: Connection-Oriented Ethernet
- IEEE 802.3ah EFM: defines link-level diagnostics and OAM
- ITU Y.1731: "OAM functions and mechanisms for Ethernet based networks"
- IEEE 802.1ag: "Connectivity Fault Management", a subset of Y.1731
- MEF 10 and Y.1731: describe packet PM
- MEF 16: describes the Ethernet Local Management Interface (E-LMI)
- ITU G.8031: "Ethernet Protection Switching"
- draft-fedyk-gmpls-ethernet-PBB-TE-01.txt: for the control plane

Fault management functions (Y.1731 / 802.1ag):
- CCM Continuity Check: P / P
- LBM/LBR Loopback: P / P
- LTM/LTR Link Trace: P / P
- AIS Alarm Indication Signal: P / O
- RDI Remote Defect Indication: P / P
- LCK Locked Signal: P / O
- TST Test Signal: P / O
- MCC Maintenance Communications Channel: P / O
- VSM/EXM Vendor/Experimental OAM: P / O

Performance management functions (Y.1731 / 802.1ag):
- FLR Frame Loss Ratio: P / O
- FD Frame Delay: P / O
- FDV Frame Delay Variation: P / O

802.3ah (2005) link management functions: discovery, link monitoring, remote failure detection, rate limiting, remote loopback.
MEF UNI and LMI: E-LMI status, E-LMI VLAN mapping, E-LMI bandwidth admission, MEF E-NNI.

The purpose of this chart is to provide you with a matrix of the fault, performance and link management functions outlined in Y.1731, 802.1ag and 802.3ah, respectively. The matrix also includes the MEF UNI and LMI functions for your reference. Let's move on to the next chart and review some of the PBB/PBB-TE management properties associated with 802.1ag.

19 PBB/PBB-TE Management: 802.1ag Properties
802.1ag has the concept of maintenance levels (hierarchy). This means that OAM activity at one level can be transparent at a different level. 802.1ag also has clear address and level information in every frame: when one looks at an 802.1ag frame, one knows exactly where it originated (SA MAC), where it is going (DA MAC), which maintenance level it belongs to, and what action or functionality the frame represents. By design, this inherently addresses the OAM aspects of multipoint-to-multipoint connectivity (e.g., VLANs).

To restate the notes: 802.1ag's hierarchical maintenance levels allow OAM activity at one level to be transparent at a different level, and because every frame carries clear address and level information, you know exactly where the source MAC originated, where the destination MAC is going, what the maintenance level is, and what action or functionality the frame represents. Let's move on to the next chart and review the graph that depicts the capabilities of 802.1ag and Y.1731 from an operations, administration and maintenance perspective.

20 The New Ethernet OAM
Standards-based IEEE 802.1ag and ITU Y.1731, built in and on-switch.
802.1ag: maintenance levels/hierarchy; Maintenance End Point (MEP); Maintenance Intermediate Point (MIP). Continuity Check (fault): multicast/unidirectional heartbeat. Loopback (MEP/MIP fault connectivity): unicast bidirectional request/response. Link Trace (MEP/MIP isolation): trace the nodes in the path to a specified target. Discovery: service (e.g., all PEs supporting a common service instance) and network (e.g., all devices common to a domain). Performance monitoring: frame delay, frame delay variation, frame loss.

Regarding 802.1ag, what was added is a Continuity (or Connectivity Fault) Check Message, a multicast/unidirectional heartbeat used to ensure sub-50ms failover in the event of link or nodal failure. Also added is a SONET-like loopback troubleshooting message, a unicast bidirectional request/response, as well as a Link Trace message to provide Maintenance End Point (MEP) and Maintenance Intermediate Point (MIP) trace and isolation. From a Y.1731 perspective, network performance monitoring capability was added to Carrier Ethernet OAM to give network operators the ability to measure frame delay, frame delay variation and frame loss ratios in the network. Conceptually, you can monitor the trunk or the service, or both, with 802.1ag. Now let's move on to slide 28 and review some of the pre-standard protocol mechanisms.

21 Carrier Ethernet Technology and Standards Update
PBB/PBB-TE, E-SPRing (G.8032), PLSB, and MPLS/VPLS/H-VPLS/MPLS-TP. Presented by: Rick Gregory, Senior Systems Consulting Engineer, May 25, 2011.

22 Provider Backbone Bridging
(PBB) IEEE 802.1ah

23 Provider Backbone Bridge Introduction
IEEE 802.1ah is the Provider Backbone Bridge standard, also known as MAC-in-MAC (MiM) encapsulation. PBB solves several of today's Ethernet challenges: service scalability (up to 16 million VPNs), customer segregation (overlapping VLANs supported), MAC explosion (customer MAC addresses are only learned at the edge), and security (customer BPDUs are transparently switched).

The graph on this chart depicts the PBB MAC-in-MAC frame format; again, this is an IEEE 802.1ah open standard. The red section of the graph contains Ethernet's payload. The green section is the customer's LAN address information, which includes a 12-bit VLAN ID (C-VID) and the destination (DA) and source (SA) MAC addresses. Then, in the orange section, you have the backbone information: a 24-bit I-SID (Service Instance Identifier), the backbone VLAN ID (B-VID), and the backbone destination (B-DA) and source (B-SA) MAC addresses. The backbone source MAC addresses are assigned and the destination MAC addresses are learned. Using the Priority Code Point, network operators can P-bit prioritize backbone tunnels via the backbone VLAN (B-VID). PBB addresses the scalability issue of 802.1ad Q-in-Q stacked VLANs by increasing service instance scalability from 4096 VLANs to 16 million VPNs, as well as stopping MAC explosions at the MAC-in-MAC point of demarcation. This approach provides a secure Layer 2 separation between the customer's LAN and the backbone. Let's now review the evolution of the Ethernet frame on the next chart. 802.1ah Provider Backbone Bridges.

24 Ethernet Frames…Before and After
On this chart we show the Ethernet frame's migration, with each format listed from the outermost field inward:
- 802.1 basic: DA, SA, Ethertype, Payload
- 802.1Q tagged VLAN: DA, SA, Ethertype, VID, Ethertype, Payload
- 802.1ad Q-in-Q Provider Bridge: DA, SA, Ethertype, S-VID, Ethertype, C-VID, Ethertype, Payload
- 802.1ah MAC-in-MAC PBB: new backbone fields B-DA, B-SA, Ethertype, B-VID, Ethertype, I-SID, followed by the pre-existing (unchanged) 802.1ad customer frame
Key: SA = source MAC address; DA = destination MAC address; VID = VLAN ID; C-VID = customer VID; S-VID = service VID; I-SID = service ID; B-VID = backbone VID; B-DA = backbone DA; B-SA = backbone SA.

Initially you had the basic IEEE Ethernet frame. Then, with the IEEE 802.1Q tagged VLAN standard, enterprises could build multiple "virtual" Ethernet LANs that could be transported over a single cable, providing greater scalability and control of the Ethernet domain. Ethernet then began to be embraced by carriers too as a service delivery technology, and with that came carrier requirements to scale beyond 802.1Q VLANs. As a result, the IEEE 802.1ad standard, also known as Q-in-Q or Provider Bridges, stacked a service tag on the customer tag, but that still wasn't enough for large enterprise and carrier customers. Another headache this created for network operators is that in the Q-in-Q approach there is a single MAC address space shared by the customer and the backbone. As a result, a customer's broadcast storm or MAC/VLAN explosion negatively impacted other customers' networks as well as the backbone network, which in many cases was a service provider's network. To address this issue, PBB's MAC-in-MAC frame format terminates broadcast storms, stopping their propagation at the point of MAC-in-MAC demarcation. There are four key areas of PBB's frame that you should familiarize yourself with; let's review these on the next chart.

25 802.1ah PBB Encapsulation Header as used by PBB-TE
The slide diagrams the encapsulation: the B-DA MAC, the B-SA MAC, the B-TAG (tunnel Ethertype 0x88A8 plus B-VID), and the I-TAG (service Ethertype 0x88C8 plus PCP/DEI/reserved bits and the I-SID); the slide labels the tunnel addressing a 58-bit tunnel address field.

Field / Size / Value:
- Backbone-DA / 6 bytes / Tunnel destination MAC address. This must be a unicast address only; multicast MAC addresses are not allowed in this field.
- Backbone-SA / 6 bytes / Tunnel source MAC address used to identify this node in the network.
- B-TAG Ethertype / 2 bytes / 0x88A8 (default)
- B-VID / 12 bits / Tunnel VID (802.1Q compliant)
- B-TAG DEI / 1 bit / Drop Eligibility Indicator: 1 = drop eligible, 0 = not drop eligible
- B-TAG PCP / 3 bits / Tunnel Priority Code Point (0-7)
- I-SID / 24 bits / Service identifier (1 to 16 million)
- I-TAG Ethertype / 2 bytes / 0x88C8 (default)
- RES1 / 2 bits / Don't care
- RES2 / 2 bits / Don't care
- I-TAG DEI / 1 bit / Drop Eligibility Indicator
- I-TAG PCP / 3 bits / Service Priority Code Point (0-7)

First is the I-SID, a 24-bit Service Instance Identifier that can scale to 16 million VPNs. Next is the B-VID, a 12-bit tunnel VID that is 802.1Q VLAN compliant and can be Priority Code Point (P-bit) prioritized. Lastly are the Backbone-DA, the 6-byte tunnel destination MAC address, and the Backbone-SA, the 6-byte tunnel source MAC address used to identify the maintenance end point node in the network. Let's move on to chart 18 and summarize the key Ethernet challenges that PBB addresses.
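For illustration, the header can be packed exactly as tabled above; this hypothetical Python sketch uses the slide's default ethertypes (0x88A8 for the B-TAG and 0x88C8 for the I-TAG; note the ratified 802.1ah I-TAG ethertype differs, so treat these values as the slide's defaults):

```python
import struct

def pbb_header(b_da, b_sa, b_vid, i_sid, b_pcp=0, i_pcp=0):
    hdr = bytes.fromhex(b_da.replace(":", ""))    # Backbone-DA, 6 bytes
    hdr += bytes.fromhex(b_sa.replace(":", ""))   # Backbone-SA, 6 bytes
    b_tci = (b_pcp << 13) | (b_vid & 0x0FFF)      # PCP | DEI | 12-bit B-VID
    hdr += struct.pack("!HH", 0x88A8, b_tci)      # B-TAG
    i_tag = (i_pcp << 29) | (i_sid & 0xFFFFFF)    # PCP | DEI | RES | 24-bit I-SID
    hdr += struct.pack("!HI", 0x88C8, i_tag)      # I-TAG
    return hdr

hdr = pbb_header("00:11:22:33:44:55", "66:77:88:99:aa:bb",
                 b_vid=100, i_sid=1_000_000)
print(len(hdr), hdr.hex())  # 22-byte backbone encapsulation header
```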

26 PBB: Solving Current Ethernet Challenges
Ethernet challenges: service scalability; customer segregation; MAC explosions and broadcast storms; learning, forwarding and flooding control. PBB's answers: up to 16 million service instances using the 24-bit service ID (I-SID); overlapping VLANs supported; MAC explosions and broadcast storms stopped at the MAC-in-MAC demarcation point; the customer MAC space completely separate from the backbone MAC space. Architected to build E-LAN, E-Tree and E-Line services.

First and foremost, PBB addresses scalability, supporting up to 16 million VPNs; customer segregation, supporting overlapping VLANs; containment of MAC explosions and broadcast storms at the customer point of demarcation, eliminating their propagation into the MAN/WAN; and, lastly, control of the learning, forwarding and flooding required in the Ethernet network domain by completely separating these functions between the customer and the backbone using the MAC-in-MAC approach. Now that we've completed our review of PBB, let's dig into Provider Backbone Bridging with Traffic Engineering (PBB-TE) on slide 20, which is an IEEE 802.1Qay open standard.

27 Provider Backbone Bridging With Traffic Engineering
(PBB-TE) IEEE 802.1Qay

28 PBB-TE (IEEE 802.1Qay)
PBB-TE sits alongside MPLS services (RFC 2547 VPNs, pseudowires, etc.) and Ethernet services (EVPL, E-LAN, E-Line, multicast). PBB-TE kept some of the best features of MPLS, including the ability for network operators to create deterministic end-to-end tunnels in the network. However, MPLS requires a number of complex VPN and tunnel LSP routing control plane protocols to make that happen. Ciena's PBB-TE solution uses the forwarding (data) plane only, and the one protocol language we speak is Ethernet. Because of this, our solution is much simpler to deploy and manage than an MPLS network. Also, since we are using Ethernet, another key benefit of our solution is that it follows the price curves of Ethernet. In summary, our PBB-TE solution can interwork with existing MPLS or ATM networks, combines the traffic engineering capabilities of MPLS, and provides a much more cost-effective solution for our enterprise and carrier customers than deploying expensive and complex Layer 3 IP/MPLS routers. Remember this mantra: Ethernet-switch when you can, and IP/MPLS-route when you must. This is the message to our customers and is still applicable today. Keep existing Ethernet, MPLS, FR/ATM, any and all services; capitalize on Ethernet as transport for significant savings; an existing-network-friendly solution! Let's move on to slide 21 and take a look at some of the traffic engineering capabilities of PBB-TE.

29 Traffic engineered PBB-TE trunks
E-LINE: traffic-engineered PBB-TE trunks across the PBB Ethernet metro. As previously mentioned, PBB-TE allows network operators to create traffic-engineered tunnels without control plane complexity, and, just like MPLS, to create deterministic point-to-point paths (called label switched paths, or LSPs, in MPLS jargon, and Ethernet tunnels or Ethernet virtual circuits in PBB-TE jargon). PBB-TE is based on the simple concept of forwarding Ethernet MAC addresses in the backbone and can interoperate with any third-party Ethernet product, which you cannot do with MPLS; there you will be locked into that vendor's router solution. In our PBB-TE implementation we use the IEEE 802.1ah PBB MAC-in-MAC frame format. In PBB-TE, the backbone MAC addresses are configured along with an I-SID and a backbone VLAN (B-VID) that can be P-bit classified for backbone prioritization; because the tunnels are explicitly provisioned, forwarding state comes from configuration rather than from flooding and learning. Ethernet tunnels can be traffic engineered for guaranteed uptime, diversity, resiliency or load spreading. We can also engineer sub-50ms fast-reroute recovery for our backbone tunnels using IEEE 802.1ag Connectivity Fault Management, by applying a Connectivity Check Message (CCM), as well as ITU Y.1731 to monitor the performance of the PBB-TE backbone network, measuring frame delay, frame delay variation and frame loss ratio. More details on 802.1ag and Y.1731 are coming later in the presentation. In summary: P2P traffic-engineered trunks based on existing Ethernet forwarding principles; reuses the existing Ethernet forwarding plane; a simple L2 networking technology; tunnels can be engineered for diversity, resiliency or load spreading; 50 ms recovery with fast IEEE 802.1ag CFM OAM. Let's move on to slide 22 and summarize the Ethernet challenges that PBB-TE addresses.

30 PBB-TE Solving Current Ethernet Challenges
Ethernet challenges: customer segregation; traffic engineering; Spanning Tree's problems of stranded bandwidth and poor convergence; MAC explosions; security. PBB-TE's answers: full segregation in a P2P model; end-to-end TE with QoS and 50 ms recovery; STP disabled, so no blocked links and fast 802.1ag convergence; MAC explosions eliminated, because the backbone MAC is completely different from the customer MAC.

PBB-TE solves a number of Ethernet's challenges: complete customer segregation in a point-to-point model, and end-to-end traffic engineering of your Ethernet tunnels without control plane complexity, delivering sub-50ms recovery with 802.1ag messaging. We also turn off the Spanning Tree Protocol, so bandwidth is not stranded and there are no blocked links or multi-second (or multi-minute) network failure recoveries. Lastly, since we're using two different MAC addresses between the customer and the backbone, MAC explosions are not propagated, and security is enhanced because the customer's MAC address domain is completely different from the backbone MAC address domain. Let's move on to slide 24 and review some of the OAM mechanisms that we're using to troubleshoot the network and monitor its overall performance.

31 Provider Link State Bridging
(PLSB) IEEE 802.1aq

32 Introducing…PLSB
PBB-TE is a trivial change to the Ethernet data plane that has huge benefits: explicit enforcement of configured operation, and the ability to have non-STP-based VLANs. Similarly, PLSB requires a further trivial change with huge benefits: adding loop suppression to make Ethernet fit for a distributed routing system. PBB-TE, PLSB and existing Ethernet control protocols can operate side by side in the same network infrastructure, a consequence of the ability to virtualize many network behaviors on a common Ethernet base.

PLSB provides network operators with the ability to explicitly enforce a configured operation by implementing non-Spanning-Tree-based virtual LANs. PLSB does this by delivering loop suppression functionality to make Ethernet fit for a distributed routing system, and it can operate side by side with PBB and PBB-TE in the same network infrastructure. Let's move on to slide 30 and look into some of the details associated with PLSB's approach.

33 PLSB Approach If Ethernet is going to be there….use it!
Take advantage of Ethernet's more capable data plane: virtual partitions (VLANs), scalable multicast, and comprehensive OAM. PLSB uses a single link-state control plane protocol, IS-IS, carrying both topology and service information (B-MAC and I-SID information), integrating service discovery into the control plane. PLSB nodes use the link-state information to construct unicast and per-service (per-I-SID) multicast connectivity.

PLSB's approach is to take advantage of Ethernet's more capable data plane, delivering scalable virtual LANs and multicast connectivity with comprehensive OAM functionality. PLSB uses IS-IS as the single link-state shortest path bridging interior gateway protocol for intra-AS VPN delivery, using Dijkstra's algorithm. PLSB nodes use this link-state information to construct unicast and per-service (per-I-SID) multicast connectivity, learning backbone destination MAC addresses and I-SID information. Basically, PLSB adds a single control plane protocol for tunnels and VPNs, compared to VPLS operation, where it can take up to four different VPN and tunnel LSP protocols to deliver the same VPN service. For multicast connectivity, VPLS also requires VSI (Virtual Switch Instance) awareness at every edge CPE, where replication must be done to deliver each pseudowire to every edge, significantly increasing control plane route computation chatter for every tunnel and VPN service delivered. PLSB operates differently for multicast connectivity: it sends one multicast packet from the edge to the core, and the core then replicates and distributes the multipoint multicast traffic in the backbone based on the I-SID and backbone MAC addresses in the MAC-in-MAC header. This significantly reduces the route computation, learning and forwarding information base burden required at every edge of the network. PLSB's operation makes it very efficient for deploying multicast-based applications, reducing the burden on the control plane and moving it to the efficiencies of the Ethernet data plane for forwarding. It combines a well-known networking protocol with a well-known data plane to build an efficient service infrastructure. Now let's move on to the next chart and review how VPLS operates in comparison to PLSB.
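The per-node computation the notes describe, each bridge running a shortest-path calculation over the IS-IS topology, is just Dijkstra's algorithm. A toy illustration in Python (not PLSB code; a six-node ring with unit link costs is an assumed example):

```python
import heapq

def dijkstra(topology, source):
    # topology: {node: {neighbor: link_cost}}; returns shortest distances.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

ring = {"A": {"B": 1, "F": 1}, "B": {"A": 1, "C": 1}, "C": {"B": 1, "D": 1},
        "D": {"C": 1, "E": 1}, "E": {"D": 1, "F": 1}, "F": {"E": 1, "A": 1}}
print(dijkstra(ring, "A"))
# {'A': 0, 'B': 1, 'F': 1, 'C': 2, 'E': 2, 'D': 3}
# Each PLSB node would install unicast (and per-I-SID multicast) forwarding
# state toward each B-MAC along these shortest paths.
```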

34 VPLS Operation
A typical VPLS implementation stacks four to five control plane protocols over the physical links (SONET, SDH, Ethernet, etc.):
- Base IGP (IS-IS or OSPF): provides topology; required for network topology knowledge
- Base LDPs (LDP or RSVP-TE), the tunnel LSP protocols: build LSP tunnels; redundant to the IGP (same paths)
- E-LDP, the VPN protocol: signals PWEs; N-squared manual session creation
- BGP-AD: required for auto-discovery; separate RR topologies (to help scale); eases the burden of statically managing VSI PWEs
On the physical links, link layer headers are stripped off and a label lookup is performed per node.

This chart provides an overview of a typical VPLS operation using tunnel LSP and VPN protocols. As you can see, there are typically four to five different control plane protocols required in a VPLS operation, significantly increasing the complexity and operation of the network and the engineering expertise required to maintain it. Now let's take a look at PLSB's operation.

35 PLSB Operation
The PLSB implementation: PLSB (IS-IS), one IGP for topology and discovery. One protocol now provides auto-discovery, fast fault detection, network healing and shortest path bridging. It is an intra-AS-only link-state protocol using Dijkstra's algorithm for best path, with no VSI awareness required at the edge. On the physical links, link layer headers are reused as a label lookup through every node. Once standardized, Ciena could deploy its own IP (intellectual property) from the MEN acquisition; target IEEE 802.1aq ratification is 2H 2011.

PLSB operation, as depicted on the chart, provides a single protocol for auto-discovery, fast fault detection, network healing and shortest path bridging. What's more efficient for network operators and engineering: using four or five different protocols, or one interior gateway protocol for all your topology and discovery operations? Minimizing the control plane means minimized complexity and reduced cost. Let's move on to the next chart and summarize some of the values associated with PLSB in Carrier Ethernet networks.

36 PBB/PBB-TE and PLSB Deliver
E-LINE point-to-point, E-LAN any-to-any, and E-TREE point-to-multipoint services across CESD. Characteristics: PLSB millisecond-scale resiliency; PBB-TE 50ms resiliency; optimized per-service multicast; feature-rich OAM; SLA and service monitoring; latency monitoring; no Spanning Tree Protocol. Value: the simplest operations model; less overhead and network layering; the most cost-effective equipment; efficient restoration.

Adding PLSB to our Carrier Ethernet solution, once standardized, could make sense to handle loop suppression without the Spanning Tree Protocol when building scalable any-to-any mesh-based E-LAN networks and point-to-multipoint E-Tree multicast application networks. We will continue to keep you up to date on the progress of PLSB, pre-standard customer implementations, and any advances from Avaya and Alca-Lu. That concludes my section of the presentation. I will now hand the presentation over to Rick Gregory, who will cover G.8032 E-SPRing and MPLS, both the Layer 2 and Layer 3 versions, and then review when to position MPLS/VPLS versus PBB/PBB-TE. Rick will then close the presentation with a summary of the CESD value proposition, and we will take any questions that you may have. If we don't get to all your questions during today's session, you can always contact us directly. Thank you, and Rick, please go ahead.

37 Ethernet Shared Ring (E-SPRing) ITU G.8032
G.8032 has recently begun to garner industry acceptance as a protection mechanism, and Ciena will include G.8032 support in the CESD product suite beginning in the fourth quarter of this calendar year. The scope of ITU-T G.8032 includes the following key features:
- Rapid protection switching, as a result of ring node and/or link failure(s), achievable within 50ms
- Ring interconnections via link(s), common node, and "shared link" can also support rapid protection switching within 50ms
- Efficient bandwidth utilization of ring traffic (e.g., via spatial reuse)
- Automatic and manual reversion mechanisms upon fault recovery
- A loop prevention mechanism over the ring
- Operator command support (e.g., lockout of protection, force/manual switch, do not revert, etc.)
- Frame duplication and reorder prevention mechanisms

38 G.8032 Objectives and Principles
Use of standard 802 MAC and OAM frames around the ring. Uses standard 802.1Q (and amended Q bridges), but with xSTP disabled. Ring nodes support standard FDB MAC learning, forwarding and flush behaviour, and port blocking/unblocking mechanisms. Loops are prevented within the ring by blocking one of the links (either a pre-determined link or a failed link). The ETH layer is monitored for discovery and identification of Signal Failure (SF) conditions. Protection and recovery switching occurs within 50 ms for typical rings, and the total communication for the protection mechanism should consume a very small percentage of the total available bandwidth.

So what are the basic objectives and principles that drove the development of G.8032? Utilization of industry-standard MAC and OAM frames in a ring architecture without the use of spanning tree protocols such as STP, RSTP or MSTP. All nodes on the ring must support standard forwarding database (FDB) MAC learning, forwarding and flushing, while supporting port blocking and forwarding. Loops are prevented by engineering a break in the ring, yet blocked ports are kept in an active but non-forwarding state to ensure that, in the instance of a ring failure, the ring cannot fail over to an unstable path. Ring health is monitored at the Ethernet layer by identifying and acting upon signal failure conditions. Ring protection must support sub-50ms recovery, and the protection methodology must consume very little bandwidth.

39 ITU G.8032 Ethernet Rings, a.k.a. E-SPRing (Ethernet Shared Protection Rings)
E-SPRing values:
- Efficient connectivity (P2P, multipoint, multicast); full service compatibility (E-Line, E-LAN, E-Tree)
- Rapid service restoration: deterministic, sub-50 ms protection switching
- Server layer technology agnostic (runs over Ethernet, OTN, SONET/SDH, etc.)
- Client layer technology agnostic (802.1 (Q, PB, PBB, PBB-TE), IP/MPLS, L3VPN, etc.)
- Fully standardized (ITU-T SG15/Q9 G.8032)
- Scales to a large number of nodes and high-bandwidth links (GE, 10G, 40G, 100G); grow ring diameter, nodes and bandwidth
- Multi-layer aggregation with dual homing; a major ring with sub-rings isolates faults within the affected sub-ring

40 The Ciena G.8032 Solution
- FORWARDING PLANE: utilizes existing IEEE-defined bridging and the IEEE MAC; supports IEEE 802.1Q, 802.1ad, and 802.1ah
- MANAGEMENT PLANE: MIB, generic information model; supports Ethernet OAM (802.1ag, Y.1731) fault and performance management; operator commands (e.g., manual/force switch, DNR, etc.)
- CONTROL PLANE: sub-50ms protection for E-LINE, E-TREE, and E-LAN services; guarantees loop-freeness while preventing frame duplication and reordering in service delivery
- STANDARDIZED: ITU-T Q9/15 G.8032 (ERP); IEEE MAC; IEEE 802.1Q, 802.1ad, 802.1ah; Ethernet OAM IEEE 802.1ag; Ethernet OAM ITU-T Y.1731
- Ciena PORTFOLIO: Carrier Ethernet: 318x, 3190, 3911, 3916, 3920, 3930, 3931, 3940, 3960, 5140, 5150; Transport: OME 6500, OM 5K, OME 6110/6130/6150
- NETWORKING: dedicated rings; ring interconnect via shared node and dual node; dual-homed support to provider network technologies (e.g., PB, PBB, PBB-TE, MPLS, etc.)
- SCALABLE: physical/server layer agnostic; supports heterogeneous rings; leverages the Ethernet bandwidth, cost, and time-to-market curve (1GbE, 10GbE, 40GbE, 100GbE)

41 Example G.8032 Network Applications
The slide diagrams four example applications, each built on standalone or access G.8032 rings connected to metro/collector G.8032 rings and other core technology over metro packet transport:
- Wireless backhaul: N x T1/E1s plus Ethernet data and voice carried from cell sites back to the BSC/RNC at the CO
- Business services, private build: HQ and branch offices #1-#3 connected over a standalone G.8032 ring carrying Ethernet data, with PBX/T1-E1 traffic to the PSTN
- Business services, access: HQ and branch offices reached via G.8032 access rings into a metro/collector ring
- Business services, DSL aggregation: DSL traffic aggregated over a standalone G.8032 ring, with LAG into the metro core

42 General G.8032 Concepts

43 Channel Block Function
What is a channel block? A channel block can be an ingress/egress rule placed on a G.8032 node port (the blocking port). The channel block rule specifies that any traffic with a VID received over this port, within a given VID space, should be discarded. NOTE: the channel block function prevents traffic from being forwarded by the G.8032 node; however, it does not prevent traffic from being received by higher-layer entities (e.g., the G.8032 engine) on that node. Each G.8032 ringlet needs at least a single channel block installed. (The slide diagram shows a six-node ring, A through F, with the channel block on one port.)

44 What is a Ringlet (a.k.a. Virtual Ring)?
A Ringlet is a group of traffic flows over the ring that share a common provisioned channel block. NOTE: It is assumed that each traffic flow has a VLAN associated with it. The traffic flows within a Ringlet are composed of a single ringlet control VID (the R-APS VID) and a set of traffic VIDs. A group of traffic flows over the ring can therefore be identified by a set of VIDs. Multiple Ringlets on a given ring cannot have overlapping VID spaces. [Diagram: Ringlet 1.] The sketch below illustrates the overlap rule.
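A small sketch of the VID-space rule just described, under the assumption that a ringlet is provisioned as one R-APS control VID plus a set of traffic VIDs; `Ringlet` and `validate_ring` are hypothetical names used for illustration.

```python
class Ringlet:
    """Hypothetical ringlet: one R-APS control VID plus a set of traffic VIDs."""

    def __init__(self, name, raps_vid, traffic_vids):
        self.name = name
        self.raps_vid = raps_vid
        self.traffic_vids = set(traffic_vids)

    def vid_space(self):
        return self.traffic_vids | {self.raps_vid}


def validate_ring(ringlets):
    """Reject provisioning where ringlet VID spaces overlap on the same ring."""
    seen = {}
    for r in ringlets:
        for vid in r.vid_space():
            if vid in seen:
                raise ValueError(f"VID {vid} used by both {seen[vid]} and {r.name}")
            seen[vid] = r.name


# Non-overlapping ringlets pass; a shared VID would raise ValueError
validate_ring([Ringlet("ringlet-1", 4001, range(100, 200)),
               Ringlet("ringlet-2", 4002, range(200, 300))])
```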

45 G.8032 E-SPRing Failure/Restoration
G.8032 E-SPRing failure/restoration sequence (originally an animation over a six-node ring A-F): (1) Normal configuration. (2) Ring span failure occurs. (3) LOS detected; port blocking applied at the nodes adjacent to the failure; R-APS message issued. (4) The R-APS message causes a forwarding-database flush, and the ring block (RPL) is removed.

46 Recovery Events
Recovery events (panels V-VIII over the six-node ring A-F): (V) Ring span recovery detected; the recovering nodes Tx R-APS(NR) and start the Guard Timer. (VI) When the RPL Owner Rx R-APS(NR), it starts the WTR timer. (VII) When WTR expires, the RPL block is installed and the RPL Owner Tx R-APS(NR,RB). (VIII) Nodes flush their FDB and remove their port blocks when they Rx R-APS(NR,RB); the ring returns to the normal configuration.

47 G.8032 Product Specifications

48 G.8032 E-SPRing Interconnections
(a) E-SPRing Phase 1: standalone ring. (b) Phase 1: standalone rings, LAG interconnect (E-SPRing1, E-SPRing2). (c) Phase 1: interconnected rings where each ring is in a different virtual switch (E-SPRing1, E-SPRing2). (d) Phase 2: dual-homed rings, major and minor rings (E-SPRing1, E-SPRing2). (e) E-SPRing Phase 2: dual-homed ring (dual homing).

49 Chaining Rings and R-APS Protocol
Phase 2 availability: dual-homed rings (major and minor rings) are not supported in SAOS 6.8. Chaining rings and the R-APS protocol: There can be only one R-APS session running for a given VID group on a ring span. Major-Ringlets and Sub-Ringlets are used to chain rings. On a Sub-Ringlet, the provisioned block for the data path is at the RPL Owner (or on each side of a link fault), and the control path ALWAYS has its blocks where the Sub-Ringlet is open. [Diagrams: data-path and control-path examples across a Major-Ringlet (nodes A-F) chained to a Sub-Ringlet (nodes E-J).]

50 G.8032 Terms and Concepts
Ring Protection Link (RPL) – link designated by the mechanism that is blocked during the Idle state to prevent a loop on the bridged ring
RPL Owner – node connected to the RPL that blocks traffic on the RPL during the Idle state and unblocks it during the Protected state
Link Monitoring – links of the ring are monitored using standard ETH CC OAM messages (CFM)
Signal Fail (SF) – declared when an ETH trail signal fail condition is detected
No Request (NR) – declared when there are no outstanding conditions (e.g., SF) on the node
Ring APS (R-APS) Messages – protocol messages defined in Y.1731 and G.8032
Automatic Protection Switching (APS) Channel – ring-wide VLAN used exclusively for transmission of OAM messages, including R-APS messages

51 Ring Idle State Physical topology has all nodes connected in a ring
ERP guarantees loop-freeness by blocking the RPL (the link between nodes 6 and 1 in the figure). The logical topology has all nodes connected without a loop. Each link is monitored by its two adjacent nodes using ETH CC OAM messages. Signal Failure, as defined in Y.1731, is the trigger for ring protection: Loss of Continuity, or a server-layer failure (e.g., Phy Link Down). [Figure: six-node ring (nodes 1-6) with ETH-CC running on every span and the RPL between nodes 6 and 1; physical vs. logical topology.] This is a representation of a G.8032 ring in what is known as the Ring Idle State, or normal operation. A ring break has been engineered between nodes 1 and 6. Continuity Check Messages still traverse the link between switches 1 and 6, but no traffic is forwarded; ERP ensures a loop-free environment by blocking the RPL link. Each link between switches is monitored by CCM messages. Signal Failure, as defined in Y.1731, is the trigger for ring protection: Loss of Continuity, or a server-layer failure (e.g., Phy Link Down). A hedged sketch of CCM-based failure detection follows.
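A hedged sketch of per-link monitoring with ETH-CC, assuming the common CFM criterion of declaring loss of continuity when no CCM arrives within roughly 3.5x the CCM interval; the class and the 3.3 ms interval are illustrative, not a normative implementation.

```python
import time


class LinkMonitor:
    """Illustrative ETH-CC watchdog for one ring span."""

    def __init__(self, ccm_interval_s=0.0033):  # e.g., 3.3 ms CCMs (assumption)
        self.ccm_interval_s = ccm_interval_s
        self.last_ccm = time.monotonic()

    def on_ccm(self):
        # Called whenever a CCM is received from the peer on this span
        self.last_ccm = time.monotonic()

    def signal_fail(self):
        # Loss of continuity: no CCM seen within ~3.5x the interval
        return (time.monotonic() - self.last_ccm) > 3.5 * self.ccm_interval_s
```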

52 Protection Switching – Link Failure
A link/node failure is detected by the nodes adjacent to the failure. The nodes adjacent to the failure block the failed link and report the failure to the ring using R-APS(SF) messages. The R-APS(SF) message triggers the RPL Owner to unblock the RPL. All nodes perform FDB flushing. The ring is now in the protection state, and all nodes remain connected in the logical topology. [Figure: six-node ring showing R-APS(SF) propagation and the RPL; physical vs. logical topology.] A minimal sketch of this reaction follows.
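A minimal sketch of the reaction described above (the `RingNode` class is hypothetical and is not the full G.8032 state machine): nodes adjacent to the fault block the failed span and signal SF; on receipt, the RPL Owner unblocks the RPL and every node flushes its FDB so traffic relearns the protection path.

```python
class RingNode:
    """Hypothetical ring node: just enough state for the SF reaction."""

    def __init__(self, name, is_rpl_owner=False, rpl_port=None):
        self.name = name
        self.is_rpl_owner = is_rpl_owner
        self.rpl_port = rpl_port
        # In the idle state, only the RPL Owner holds a block (on the RPL)
        self.blocked = {rpl_port} if is_rpl_owner else set()
        self.fdb = {}  # MAC -> port

    def flush_fdb(self):
        self.fdb.clear()

    def on_local_signal_fail(self, failed_port, ring):
        self.blocked.add(failed_port)  # isolate the broken span
        self.flush_fdb()
        for node in ring:              # R-APS(SF) floods the R-APS channel
            node.on_raps_sf()

    def on_raps_sf(self):
        if self.is_rpl_owner:
            self.blocked.discard(self.rpl_port)  # open the protection link
        self.flush_fdb()               # relearn paths around the failure
```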

53 Protection Switching – Failure Recovery
When the failed link recovers, traffic is kept blocked on the nodes adjacent to the recovered link. The nodes adjacent to the recovered link transmit R-APS(NR) messages, indicating that they have no local request present. When the RPL Owner receives an R-APS(NR) message, it starts the WTR timer. Once the WTR timer expires, the RPL Owner blocks the RPL and transmits an R-APS(NR,RB) message. Nodes receiving the message perform an FDB flush and unblock their previously blocked ports. The ring has now returned to the Idle state. [Figure: six-node ring showing R-APS(NR) and R-APS(NR,RB) propagation and the RPL; physical vs. logical topology.] A sketch of the RPL Owner's revertive behavior follows.
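A sketch of the RPL Owner's revertive behavior under the assumptions above; `RplOwnerNode` is a hypothetical illustration, peers are assumed to implement an `on_raps_nr_rb()` handler, and the WTR value is a placeholder (G.8032 WTR timers are typically on the order of minutes).

```python
import threading


class RplOwnerNode:
    """Illustrative RPL Owner for revertive recovery (not a full ERP FSM)."""

    def __init__(self, ring_nodes, wtr_seconds=300.0):  # placeholder WTR value
        self.ring_nodes = ring_nodes   # peers expected to expose on_raps_nr_rb()
        self.wtr_seconds = wtr_seconds
        self.rpl_blocked = False

    def on_raps_nr(self):
        # The failed span has recovered; wait out WTR so a flapping link
        # cannot cause repeated switchovers before reverting.
        threading.Timer(self.wtr_seconds, self._wtr_expired).start()

    def _wtr_expired(self):
        self.rpl_blocked = True                 # re-install the RPL block
        for node in self.ring_nodes:
            node.on_raps_nr_rb()                # peers flush FDB, unblock ports
```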

54 Multiprotocol Label Switching (Layer 3 IETF RFC 4364, a.k.a. 2547bis)
(Layer 2 IETF RFC 2026 / Dry Martini) (Layer 2 IETF RFC 5654 / MPLS-TP) (MPLS/VPLS or PBB/PBB-TE)

55 Ethernet Access – Network Choices
Legacy Ethernet (no MEF compliance) vs. Carrier-Class Ethernet (MEF compliance). The network choices:
Connectionless Ethernet – 802.1Q, 802.1ad, or 802.1ah VLANs
Connection-Oriented Ethernet – 802.1Qay (PBB-TE) VLANs; MPLS-TP traffic-engineered PWs over LSPs
IP-control-plane-based IP or MPLS VPNs – IP VPN: Ethernet over L2TPv3 over IP; MPLS VPN: Ethernet PW or VLAN over LSP

56 MPLS vs. Ethernet – Data Plane (+OAM)
[Diagram labels: packet transport; subscriber management; "application"/"service" management; IP/MPLS service edge & core; metro access & aggregation.] MPLS metro network – L3 (IP/MPLS): terminate Ethernet and forward IP frames over an IP PW in an MPLS LSP over an Ethernet port. L2 (VPLS/VPWS, MPLS-TP): forward Ethernet frames over an Ethernet PW in an MPLS LSP over an Ethernet port. Multiple, varied data planes (IP, PW, LSP, Ethernet) mean complex hw/sw interactions resulting in higher cost1 and complex OAM; MPLS-TP LSP OAM is yet to be defined. Ethernet (PBB-TE) metro network – L2: forward Ethernet frames over Ethernet EVCs over an Ethernet port. Fewer data planes and OAM levels: Ethernet service and network/link. Simpler hw/sw for >40% lower cost2. IP awareness for data-plane behavior, but no need for OAM at the IP layer. Less complex OAM using 802.1ag and Y.1731 for the Ethernet service and network/tunnel layers. Ethernet (PB, PBB) can enable Pt-Mpt and Mpt-Mpt, in addition to Pt-Pt. For the deterministic network option it can be an L3 or L2 solution. With an L3 network solution the choice is to extend IP/MPLS into the aggregation/metro domain. This does allow supporting L3 VPNs (forwarding/routing IP packets) directly from the access and/or metro, and traffic may or may not have to go to the service edge. However, this requires the network elements to have MPLS control/data planes in addition to IP routing. This can increase the capex due to multiple forwarding planes (IP, PW, LSP) and can increase the opex dramatically with complex protocols as well as OAM. Additionally, the complexity will be higher with the need to implement service interworking for interoperability with other protocols such as ATM/FR/TDM. Further, such L3 VPNs are not transparent and may not be preferred by most customers. L2 VPNs are more transparent. A simpler L2 VPN option may be implemented with MPLS, i.e., VPLS, by adding Ethernet bridging functionality to map Ethernet frames into Ethernet PWs. However, across the aggregation/metro domains it typically will be an H-VPLS implementation to allow for a scalable model by avoiding N^2 relationships between endpoints. The N^2 relationship is thus limited to the core nodes of each VPLS instance. In this scenario the connectivity across the aggregation/metro is a PW spoke to the main VPLS cloud rather than a full mesh of PWs. Thus this does not offer any capacity efficiency, and it also does not reduce the operational complexity, given the multiple forwarding planes (Ethernet, PW, LSP). The multiple forwarding planes require that each has its own OAM, and the complexities associated with coordinating and managing them lead to higher opex. Further, it can lead to higher hardware cost, since the processing power at the nodes must consider the allocation of computation/storage resources for the various statistics collected for performance and fault management. Incidentally, with a transport profile of MPLS, i.e., MPLS-TP, work is yet to be done on the 'path'-level connectivity OAM, i.e., VPCV, for the LSP tunnel. (Note: IETF calls the tunnel a path, which is different from how ITU uses the terms path, section, and line for SONET/SDH networks.) A simple option for the network could be the use of an L2 Ethernet network to forward the frames to the egress node, with L2 VPN connectivity that can be pt-pt, pt-mpt, or mpt. An Ethernet network that is enhanced with carrier-class attributes will also enable an SP to offer a rich suite of connectivity at L2 for supporting multiple customer segments.
This allows the SP to consolidate the complex MPLS/IP functions at the metro/core boundary and use simpler forwarding rules based on subscriber policies to constrain the connectivity as required. Further, an Ethernet network that is "IP aware" is able to offer additional benefits, such as enabling ACLs on L3 or L4 header fields for higher security, and snooping of the IGMP protocol for addition/deletion of multicast group members. The forwarding priority for these Ethernet frames can be SP-provisioned or can be based on the end user's DSCP values in the IP header. [Diagram: service layer (IP, Ethernet) over the data plane – MPLS side: PW over LSP over Ethernet (complex); Ethernet side: VLAN (EVC) over Ethernet (simpler).] 1 Reid, Willis, Hawkins, Bilton (BT), IEEE Communications Magazine, Sep 2008. 2 (40-60% less) McKinsey & Co., Jan 2008; (40% less) CIMI Corp, Jul 2008.

57 MPLS vs. Ethernet – Control Plane (+OAM)
[Diagram labels: packet transport; subscriber management; "application"/"service" management; IP/MPLS service edge & core; metro access & aggregation.] MPLS metro network – Complex link-by-link label swapping is an inherent source of unreliability1. Complex L3 control plane for PW/LSP signaling/routing (and PW stitching at the core edge): PW/LSP labels via LDP or BGP; LSP setup via RSVP-TE (signaling) and OSPF-TE (routing). MPLS-TP can avoid the L3 control plane, but uses complex NMS-based link-by-link LSP configuration instead. Complex protocol couplings result in processing complexity and higher opex3. Ethernet (PBB-TE) metro network – Complete, global Ethernet header: the BEB's SA/DA+B-VID identifies the tunnel; no label-switched-path setup is needed; end-to-end visibility and connectivity verification. Simpler L2 control plane for discovery only; no distributed routing/signaling needed. Metro hub-and-spoke (vs. core mesh) affords explicit failure-mode configuration4, with <=9 such modes in a large metro; 12% lower opex (future: up to 44%)4. Simpler OAM: reliable and lower opex1,3. The other big difference between the two options is the control plane and the complexity/cost associated with it. In an L3 network there are multiple protocols, like LDP or BGP, for PW (inner) or tunnel (outer) label exchange. With multiple protocol options there will be interoperability issues for seamless end-to-end PW setup, which is often touted as one of the primary drivers for using MPLS (or MPLS-TP) in all portions of the network. In addition, RSVP-TE is required for signaling the label setup and bandwidth reservation of each label across multiple hops. This link-by-link coordination is due to the label scope being local to the link. Such a need for coordination along a path, although simplified by the use of a signaling protocol, gives rise to an inherent source of unreliability with respect to connection verification. Note, though, that it is possible to have the same label for the entire path, as with the wavelength continuity constraint in a GMPLS-based network. However, in an MPLS network there are no such constraints. To achieve this for MPLS-TP would require additional management coordination to ensure the same labels end-to-end without causing overlaps with other MPLS-TP tunnels. MPLS-TP is expected to simplify this a little, but only for the tunnel label, although the fact that LDP (or BGP) is expected to be used for PW (inner) label exchange implies that it can also be used for tunnel (outer) label exchange as well. The only difference from MPLS will then be avoiding RSVP-TE and using NMS-based provisioning of the sequence of labels to use for a given connection along the path or set of nodes. Also, OSPF-TE becomes optional as well, with the NMS responsible for learning the topology and determining optimal routes through the network. So MPLS-TP is expected to provide a simpler operational model but with the same data-plane complexity. The case for dynamic path setup across the aggregation/metro is not clear in all service contexts. In a business-service environment the number of connections may not be more than thousands. Also, in a residential-service environment the network may scale to 1M+ service instances, but the connectivity to the aggregation nodes may not be more than thousands. Some automatic discovery of network topology would nevertheless be useful to ease service/connection provisioning.
Further, if the topology is typically hub-and-spoke with a few explicit failure modes in terms of topology variations, it would be prudent to look at simpler network options to reduce both opex and capex. A simple option for the network could be the use of an L2 Ethernet network to forward the frames to the egress node. The main advantage that helps avoid complex control-plane protocols is the global label for a service instance, i.e., forward frames to a given DA that is a member of a VLAN. Additional forwarding constraints can easily be added with filtering rules to limit frames sent from specific SAs. The limited control-plane function that might be enormously useful would be the use of 802.1AB LLDP-based discovery for port, link, node, and connectivity. It is expected that with just enough data- and control-plane functions and current service-management capabilities, it is possible to achieve a 12% reduction in opex. In the future, if and when COE network resource management becomes service-driven (e.g., through object calls through management APIs), this opex savings can grow to 44%. Studies also show that a substantial majority of SPs (89% in a recent Heavy Reading Packet-Optical Transport survey) prefer NMS-based models in the metro instead of making the network very complex, so as to contain the cost to operate without sacrificing the flexible provisioning of a rich suite of connectivity options. Ethernet provides just enough control- and data-plane functionality to meet all service needs while containing cost and complexity. 3 Seery, Dunphy, Ovum-RHK, Dec. 4 CIMI Corp., Netwatcher newsletter, Jul 2008.

58 PBB/PBB-TE or VPLS/MPLS?
Caution: unscientific poll results. Ethernet is the new paradigm – deterministic transport with OAM&P (Light Reading webinar: Building Converged Services Infrastructure). PBB-TE is perceived to offer cost advantages, and CO-Ethernet is one option (Light Reading webinars: PBB-TE's Winning Ways; Building Converged Services Infrastructure).

59 PB/PBB/PBB-TE and MPLS Tunnel Inter-working
Ingress and egress virtual interfaces provide the greatest flexibility and interoperability with existing and emerging technologies. Dual-tag push/pop/swap enables multi-protocol interworking (e.g., PBB-TE, MPLS). Standard IEEE and popular Cisco-proprietary protocol handling enable robust L2VPNs, with IEEE and Cisco-proprietary L2 control-frame tunneling. [Diagram: MEF UNI into access/aggregation (Q-in-Q or PBB/PBB-TE) into a metro core (MPLS H-VPLS or PBB/TE); EVCs mapped via dual-tag push/pop/swap between Q-in-Q or PBB-TE tunnels and EVC (PW) in MPLS LSPs.] Many contemporary switches are limited in the type and number of transport options. Ciena's Carrier Ethernet supports multiple encapsulations that simplify interworking and address network-evolution situations without fork-lift upgrades. As depicted here, EVCs can be seamlessly carried via Q-in-Q or PBB-TE tunnels. Then, depending on the demarcation between the Ethernet and MPLS domains, these EVCs can be mapped into pseudowires transported within MPLS tunnels/LSPs. L2VPNs require efficient handling and separation of various IEEE and Cisco-proprietary control protocols. Carrier Ethernet virtual interfaces provide granular control of these control protocols, leveraging your existing assets as well as being programmable for future protocols. All of these scalability and security features allow for controlled network growth, assuring that data is accessible only to those authorized to access it. Seamless interworking between PB (Q-in-Q), PBB/PBB-TE, and MPLS simplifies the handoff between domains.

60 PBB-TE provides cost-effective, robust packet transport, but why not combine that with IP/Ethernet service intelligence on one node? That is, IP routing isn't deterministic, but it has useful service-layer functions: multicast and differentiated-services treatment. Why not use IP/MPLS nodes (IP for services: multicast, L3 prioritization; MPLS for services: VPLS Mpt-Mpt, VPWS Pt-Pt; MPLS-TP for transport: Pt-Pt)? Because Carrier Ethernet switches are >40% lower cost than IP/MPLS carrier Ethernet switch/routers ((40-60% less) McKinsey & Co., Jan 2008; (40% less) CIMI Corp, July 2008). Some (e.g., successful large router vendors) argue that combining IP routing with Ethernet service intelligence provides fewer, more integrated network elements. Industry observers, however, note that there are significant cost advantages for operators in purpose-built network elements. This approach allows operators to place sophisticated IP/MPLS nodes at specific gateway points in the core/metro network while deploying simpler, more cost-effective Carrier Ethernet switches to perform packet transport. The combination of Carrier Ethernet switching with IP/service-aware functions provides operators with optimal control while meeting economic metrics. The need: a Carrier Ethernet switch that combines "IP/service-aware" switching while retaining carrier-grade packet-transport qualities!

61 Ethernet Data Plane
Function | PBB-TE / PBB | MPLS-TP
Ethernet aggregation | Native Ethernet (E-o-E) with less overhead; scalability with the 24-bit I-SID | Same as MPLS: needs PW and tunnel headers (E-o-PW/LSP-o-E); can nest aggregation layers, which may help with scaling
Forwarding labels | Unique end-to-end: DA+B-VID; scales as # of endpoints (nodes) plus service classes, if any | (Tunnel) labels can be per-hop or end-to-end; may scale as # of links plus service classes, if any; needs coordination across links along a path
Transparency & isolation | Separate MAC address space (provider/backbone vs. customer); MAC learning can be enabled for PBB-TE's B-VID space | Transparent transport for Ethernet clients; no MAC learning defined, but possible
Topology | E-LINE (Pt-Pt): yes; E-TREE (Pt-Mpt): yes; E-LAN (Mpt): yes | E-LINE (Pt-Pt): yes; E-TREE (Pt-Mpt): yes; E-LAN (Mpt): needs either Pt-Mpt or a full mesh of Pt-Pt LSP tunnels; may use the VPLS model, but that needs a complex MPLS control plane and also either Pt-Mpt or a full mesh of Pt-Pt PWs
Layering, partitioning, hierarchy | Simple: backbone MAC address space vs. customer MAC address space | Complex: additional PW/LSP layers; nested tunnels can introduce OAM/provisioning complexity
Peering | MEF's ENNI and CoS IA are work in progress for the service level; IEEE already provides interface and link models | Work in progress; peering with an MPLS network may mean a complex MPLS control plane; also needs PW signaling end-to-end
"Other" services | Adjunct platforms where needed to achieve ATM/FR IW; possible to use PWs if necessary | PW capability, along with a protocol zoo for ATM/FR IW

62 Ethernet Management Plane
Function | PBB-TE / PBB | MPLS-TP
OAM | Reuse 802.1ag/Y.1731: (a) CCM needs to use a unicast DA (allowed by 802.1ag and already defined in Y.1731), and MIPs need to intercept when the DA is that of the MIP; (b) LBM/LBR will in most cases use the same VID in the forward and reverse directions, so no issues; (c) LTM/LTR is possible if MIPs can intercept/ignore frames as needed; a new TLV carrying the MIP DA is to be defined | Use 802.1ag/Y.1731 for the Ethernet EVC; PW/LSP OAM is work in progress
End-to-end visibility | I-SID for the service (EVC); DA+B-VID for the tunnel | —
MEG levels | Fewer OAM levels: Ethernet customer flow, Ethernet EVC, operator, and transport/link | More OAM levels: Ethernet customer flow, Ethernet EVC, LSP tunnel(s), operator, and transport/link
Protection | End-to-end (1+1, m:n); IEEE Link Aggregation; G.8031/G.8032 | Transport-network-like, using APS for 1+1/m:n at the PW and LSP level (span/segment/end-to-end); may use fast reroute if a control plane is present

63 MPLS Protocols (net-net)
MPLS provides: virtually unlimited service scalability; eliminates MAC table explosions; 50 ms resiliency; OAM; traffic engineering; bandwidth guarantees. But at a cost: increased OPEX and CAPEX; requires RSVP-TE + FRR everywhere; OAM relies on the control plane; limited performance monitoring; requires DS-TE for multiple bandwidth pools. MPLS requires: IGP+TE, RSVP-TE, FRR, BFD, a PWE3 control plane, a VPLS control plane, H-VPLS/MS-PW for scalability, MPLS forwarding-plane upgrades, and MPLS control-plane server cards. PBB-TE eliminates these protocols.

64 PBB/PBB-TE Protocols (net-net)
Carrier Ethernet service delivery provides: virtually unlimited service scalability; eliminates MAC table explosions; 50 ms resiliency; service OAM; traffic engineering; bandwidth guarantees. Standardized Ethernet forwarding and OAM: no changes to the hardware, no huge learning curve — still just forwarding Ethernet. Enterprise demands simplicity: sub-50 ms recovery with PBB-TE; deterministic and scalable in-band OAM; standardized performance monitoring; PBB-TE provides traffic engineering and bandwidth guarantees. Carrier Ethernet delivers: Provider Backbone Bridging; Provider Backbone Bridging with TE; IEEE 802.1ag, ITU Y.1731.

65 Positioning Carrier Ethernet to Enterprise Customer

66 Packet Access Comparison
Packet Access Comparison [flattened comparison table; columns: Key aspects | Connectionless Ethernet | IP VPNs | MPLS | MPLS-TP (work in progress) | Connection-Oriented Ethernet: PBB/PBB-TE. Row labels and recoverable cell entries: Interoperability – Ethernet: MEF Ethernet UNI/ENNI, MEF Ethernet services; Interoperability – other: MPLS NNI, ATM/FR/TDM/MPLS UNI, with some options needing an IWF (L2TP, GRE) or Dry Martini; Transparency: address and control protocols (L3 vs. L2); Scalability: network and services (Pt-Pt and MPt); Reliability: 50-100 ms protection, disjoint working/protect paths (FRR, 1+1, TBD for MPLS-TP); Manageability: fault sectionalization, service and network OAM/PM; Deterministic performance/QoS: guaranteed rate, latency/jitter/loss; Low CapEx and OpEx.]

67 Positioning Carrier Ethernet to Enterprise
VPLS/H-VPLS/MPLS (multiple VPN and tunneling control-plane protocols): optimized for large carrier customers with an MPLS backbone and an IP/MPLS-knowledgeable, trained engineering staff; requires extensive engineering; 2-to-3-nines SLAs for Ethernet service delivery; second/sub-second restoration (R-STP/FRR); Q-in-Q stacked VLANs, 4096 maximum; high-priced MPLS hardware- and software-based routers; requires strong L3/IP/MPLS knowledge and configuration skills; locked into a vendor's MPLS products/solution. Vendor drivers: desire to fill unused capacity; higher % sales of L3VPN; solving the core, not aggregation; desire protocols to provision; techs trained for L3/IP config. Difficult at the customer edge: field techs not trained; higher-cost CPE; more complex configuration. PBB-TE/PBB/E-SPRing (forwarding plane only): optimized for enterprise customers looking to minimize OPEX and CAPEX spend (a low-cost, plug-and-play network); CCIE-type skills not required (Ethernet- and SONET-knowledgeable engineers get it!); need to lease fiber (typically, unless you already own it); high reliability, resiliency, scalability, and simplicity; 4-to-5-nines SLAs for Ethernet service delivery; sub-50 ms protection switching/restoration (IEEE 802.1ag); Ethernet is the single end-to-end protocol language spoken; excellent OAM (Y.1731 and 802.1ag), including jitter/latency; stops MAC/VLAN explosions and broadcast storms (separate MAC tables for the customer LAN and the backbone); minimizes MAC learning and distribution/forwarding (a true MAC-learning demarcation between LAN and MAN/WAN); 16 million VPNs (IEEE 802.1ah MAC-in-MAC, PBB only); low CAPEX and OPEX economics; SONET-like skill sets to configure and manage the network; Ethernet open standards with third-party vendor interop benefits; transport over GE microwave.

68 Carrier Ethernet Service Delivery Summary
Increased simplicity with the universally understood Ethernet MAC: Ethernet MAC is the single end-to-end protocol language (no multi-protocol translation, Ethernet only). Improved reliability with IEEE 802.1ag: sub-50 ms protection switching/restoration (IEEE 802.1ag network continuity messaging with tunable intervals). QoS (Quality of Service) without control-plane complexity, with IEEE 802.1Qay PBB-TE: traffic-engineered tunnels with B-MACs and B-VID PCP (p-bit) classification and prioritization. Superior OAM with IEEE 802.1ag and ITU Y.1731: monitor performance end to end (varying delay/jitter, delay/latency, loss) in and out of the network at Layer 2; Loopback Message / Link Trace Message (SONET-like) loopback troubleshooting on Ethernet. Enhanced network control applying the IEEE 802.1ah MAC-in-MAC backbone: stop MAC/VLAN explosions and broadcast storms; minimize MAC learning and MAC distribution (a separate MAC demarcation between LAN and MAN/WAN). Massive scalability with IEEE 802.1ah MAC-in-MAC backbone frames: the 24-bit I-SID delivers 16 million VPNs (IEEE 802.1ah MAC-in-MAC); nodes learn and forward based only on backbone MAC addresses (LAN MAC learning stays in the LAN). Lower OPEX and CAPEX, plus open-standards interoperability benefits: lower OPEX, with SONET and/or Ethernet engineering skill sets and experience sufficient to configure and manage the network; lower CAPEX, open to interoperation with "any" third-party Ethernet products at Ethernet price points. Key message to the customer: Ethernet-switch where you can, IP/MPLS-route where you must.

69 Carrier Ethernet Service Delivery Value Proposition
Scalable: eliminates control-plane restrictions; deployable on optical and broadband NEs. Operationally sound, easier to troubleshoot: better OAM tools (802.1ag vs. VCCV/LSP-Ping); fewer moving parts (no IGP, no MPLS signaling, etc.); a consistent operations model with the PMO; easier transition of the workforce; consistent use of metro OSS systems. Number 1, with 20% market share in the Layer 2 CEAD Ethernet-over-Fiber market (Light Reading, July 14, 2010). SLA/performance measurement built in. Simplified network layering: Ethernet is the faceplate and the network layer. Lower CAPEX: an Ethernet-based infrastructure that rides Ethernet cost curves. Thank you all for taking the time today to attend the CESD Technology and Standards Update WebEx session. Some of the key value-proposition points that John and I would like you to walk away with from today's presentation: Ciena's CESD solution is scalable, by eliminating control-plane routing restrictions; our solution is operationally sound and easier to troubleshoot compared with other Layer 3 solutions; and, lastly, we are number 1 in the Carrier Ethernet Access Delivery Ethernet-over-Fiber marketplace with 20% market share, according to Light Reading's July 14, 2010 edition. With that, John and I will now take any questions that you may have. Let me have a look to see if any were submitted. Thank you.

70 Thank you! (Q & A)

71 G.8032 Terms and Concepts
Ring Protection Link (RPL) – link designated by the mechanism that is blocked during the Idle state to prevent a loop on the bridged ring
RPL Owner – node connected to the RPL that blocks traffic on the RPL during the Idle state and unblocks it during the Protected state
Link Monitoring – links of the ring are monitored using standard ETH CC OAM messages (CFM)
Signal Fail (SF) – declared when an ETH trail signal fail condition is detected
No Request (NR) – declared when there are no outstanding conditions (e.g., SF) on the node
Ring APS (R-APS) Messages – protocol messages defined in Y.1731 and G.8032
Automatic Protection Switching (APS) Channel – ring-wide VLAN used exclusively for transmission of OAM messages, including R-APS messages

72 G.8032 Timers. G.8032 specifies the use of different timers to avoid race conditions and unnecessary switching operations. WTR (Wait to Restore) Timer – used by the RPL Owner to verify that the ring has stabilized before blocking the RPL after SF recovery. Hold-off Timers – used by the underlying ETH layer to filter out intermittent link faults; faults are only reported to the ring protection mechanism if this timer expires. A sketch of the hold-off behavior follows.
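A sketch of the hold-off behavior described above, assuming a callback-style interface into the ring protection logic; the `HoldOffFilter` name and the 0.5 s value are illustrative, not values from the Recommendation.

```python
import threading


class HoldOffFilter:
    """Illustrative hold-off timer: report a fault to ERP only if it persists."""

    def __init__(self, report_sf, holdoff_s=0.5):
        self.report_sf = report_sf   # callback into the ring protection logic
        self.holdoff_s = holdoff_s
        self.fault_present = False

    def on_fault_raised(self):
        # Server-layer fault detected; start the hold-off window
        self.fault_present = True
        threading.Timer(self.holdoff_s, self._check).start()

    def on_fault_cleared(self):
        # Fault vanished before the timer expired: an intermittent flap
        self.fault_present = False

    def _check(self):
        if self.fault_present:       # still faulty at expiry: now tell ERP
            self.report_sf()
```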

73 Controlling the Protection Mechanism
Protection switching is triggered by: detection/clearing of Signal Failure (SF) by ETH CC OAM; remote requests over the R-APS channel (Y.1731); or expiration of G.8032 timers. R-APS requests control the communication and states of the ring nodes. Two basic R-APS messages are specified — R-APS(SF) and R-APS(NR) — and the RPL Owner may modify the R-APS(NR) to indicate that the RPL is blocked: R-APS(NR,RB). Ring nodes may be in one of two states: Idle – normal operation, no link/node faults detected in the ring; Protecting – protection switching in effect after identifying a signal fault. A minimal state sketch follows.
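A deliberately minimal two-state sketch (Idle/Protecting) driven by the triggers listed above; the real G.8032 state machine has more requests, priorities, and timer interactions, so treat this purely as an illustration.

```python
IDLE, PROTECTING = "Idle", "Protecting"


def next_state(state, event):
    """Toy transition function for the two node states described above."""
    if event in ("local_SF", "R-APS(SF)"):
        return PROTECTING              # fault detected locally or reported on the ring
    if event == "R-APS(NR,RB)":
        return IDLE                    # RPL Owner has re-blocked the RPL
    return state                       # e.g., R-APS(NR) alone does not revert the ring


assert next_state(IDLE, "R-APS(SF)") == PROTECTING
assert next_state(PROTECTING, "R-APS(NR,RB)") == IDLE
```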

74 Signaling Channel Information
ERP uses R-APS messages to manage and coordinate protection switching. The R-APS PDU is defined in G.8032; the OAM common fields are defined in Y.1731: MEL – maintenance entity group level; Version – '00000' for this version of the Recommendation; OpCode – defined to be 40 (R-APS) in Y.1731; Flags – '00000000', ignored by ERP; TLV Offset – 32. These are followed by the R-APS specific information (32 octets, defined by G.8032), any optional TLVs starting at octet 37 (otherwise the End TLV), and a final End TLV (0). A hedged encoding sketch follows.
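A hedged encoding sketch following the field layout above (MEL plus a 5-bit Version in the first octet, OpCode 40, Flags 0, TLV Offset 32, then 32 octets of R-APS information and an End TLV); the function name is hypothetical, and the bit packing reflects the layout as described here rather than a verified wire capture.

```python
import struct


def build_raps_pdu(mel: int, raps_specific: bytes) -> bytes:
    """Assemble a Y.1731 common header carrying R-APS specific information."""
    assert len(raps_specific) == 32
    first = ((mel & 0x7) << 5) | 0x00              # 3-bit MEL, 5-bit Version = 0
    header = struct.pack("!BBBB", first, 40, 0x00, 32)  # OpCode 40, Flags 0, TLV Offset 32
    return header + raps_specific + b"\x00"        # End TLV (type 0)


pdu = build_raps_pdu(mel=7, raps_specific=bytes(32))
assert len(pdu) == 4 + 32 + 1                      # header + specific info + End TLV
```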

75 R-APS Specific Information
The specific information (32 octets) is defined by G.8032: Request/State (4 bits) – '1011' = SF, '0000' = NR, other values reserved for future use; Status – RB (1 bit) – set when the RPL is blocked (used by the RPL Owner in NR); Status – DNF (1 bit) – set when an FDB flush is not necessary (future); Node ID (6 octets) – MAC address of the message source node (informational); Reserved1 (4 bits), Status Reserved (6 bits), and Reserved2 (24 octets) – for future development. [Field diagram: Request/State and Reserved1 in the first octet; the status octet carrying RB, DNF, and Status Reserved; Node ID (6 octets); Reserved2 (24 octets).] A decoding sketch follows.
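A matching decoding sketch for the 32-octet specific information, using the bit positions given above (Request/State in the high nibble of the first octet, RB and DNF as the top two bits of the status octet, then the 6-octet Node ID); names and the sample values are illustrative.

```python
def parse_raps_info(info: bytes) -> dict:
    """Decode the 32-octet R-APS specific information as laid out above."""
    assert len(info) == 32
    request = (info[0] >> 4) & 0xF                 # '1011' = SF, '0000' = NR
    return {
        "request": {0b1011: "SF", 0b0000: "NR"}.get(request, "future"),
        "rb":  bool(info[1] & 0x80),               # RPL blocked
        "dnf": bool(info[1] & 0x40),               # Do Not Flush (future)
        "node_id": info[2:8].hex(":"),             # source MAC, informational
    }


# Octet 1: SF request; octet 2: status 0; then a sample Node ID and 24 reserved octets
sample = bytes([0b1011 << 4, 0x00]) + b"\xaa\xbb\xcc\xdd\xee\xff" + bytes(24)
assert parse_raps_info(sample)["request"] == "SF"
```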

76 Items Under Study. G.8032 is currently an initial Recommendation that will continue to be enhanced. The following topics are under study for future versions: RPL blocked at both ends – a configuration of the ring where both nodes connected to the RPL control the protection mechanism; Interconnected ring scenarios – shared node, shared links; Support for Manual Switch – an administrative decision to close down a link and force a "recovery" situation, as necessary for network maintenance; Support for Signal Degrade scenarios – SD situations need special consideration for any protection mechanism; Non-revertive mode – allows the network to remain in the "recovery" configuration until either a new signal failure or administrative switching; RPL displacement – flexibly displacing the role of the RPL to another ring link in the normal (idle) condition; In-depth analysis of different optimizations (e.g., FDB flushing); etc.

