OpenFlow in Service Provider Networks AT&T Tech Talks October 2010 Rob Sherwood Saurav Das Yiannis Yiakoumis.


1 OpenFlow in Service Provider Networks AT&T Tech Talks October 2010 Rob Sherwood Saurav Das Yiannis Yiakoumis

2 Talk overview: motivation; what is OpenFlow; deployments; OpenFlow in the WAN (combined circuit/packet switching, demo); future directions.

3 We have lost our way. Today's router is specialized packet forwarding hardware plus an operating system running routing, management, mobility management, access control, VPNs, and more: millions of lines of source code and 5400 RFCs (a barrier to entry), 500M gates and 10 GB of RAM (bloated and power hungry).

4 Many complex functions are baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, and more. [Diagram: software control over a hardware datapath, with MPLS, NAT, IPv6, anycast, multicast, Mobile IP, L3/L2 VPN, VLAN, OSPF-TE, RSVP-TE, firewall, iBGP/eBGP, and IPSec all piled into the box.] An industry with a mainframe mentality.

5 Glacial innovation: idea, standardize, wait 10 years, deploy. The glacial process of innovation is made worse by a captive standards process, driven by vendors; consumers are largely locked out.

6 New-generation providers already buy into it. In a nutshell: driven by cost and control, it started in data centers. Within their datacenters, these providers buy bare-metal switches/routers and write their own control/management applications on a common platform.

7 Change is happening in non-traditional markets. [Diagram: many boxes of specialized packet forwarding hardware, each with its own operating system and app, converging to simple hardware under one network operating system with apps on top.]

8 The software-defined network: (1) an open interface to simple packet forwarding hardware; (2) at least one good network operating system, extensible and possibly open source; (3) a well-defined open API for apps.

9 Trend: the network industry is following the computer industry. Computer industry: x86 hardware, competing operating systems (Windows, Linux, Mac OS), a virtualization layer, and apps above. Network industry: simple hardware, a network OS (e.g. NOX), a virtualization or slicing layer, and controllers/apps above. A simple, common, stable hardware substrate below + programmability + a strong isolation model + competition above = faster innovation.

10 What is OpenFlow?

11 Short story: OpenFlow is an API. It controls how packets are forwarded and is implementable on COTS hardware. It makes deployed networks programmable, not just configurable, and makes innovation easier. Result: increased control (custom forwarding) and reduced cost (an API increases competition).

12 Ethernet Switch/Router

13 [Diagram: the switch splits into a control path (software) and a data path (hardware).]

14 [Diagram: the control path is replaced by OpenFlow. An external OpenFlow controller speaks the OpenFlow protocol (over SSL/TCP) to the data path (hardware).]

15 The OpenFlow flow table abstraction. [Diagram: a controller PC sits above the switch; the switch's software layer runs OpenFlow firmware holding a flow table; the hardware layer forwards between ports. Example entry: MAC src *, MAC dst *, IP src *, IP dst *, TCP sport *, TCP dport * → action: output port 1.]

16 OpenFlow basics: flow table entries. Each entry has a Rule, Actions, and Stats. Rule (match fields, plus a mask selecting which fields to match): switch port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP protocol, TCP sport, TCP dport. Actions: (1) forward packet to port(s); (2) encapsulate and forward to controller; (3) drop packet; (4) send to normal processing pipeline; (5) modify fields. Stats: packet and byte counters.
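The entry structure above can be sketched in code. This is a hypothetical model, not a real OpenFlow implementation: the `FlowEntry` class, field names, and action strings are illustrative assumptions; a field absent from `match` acts as a wildcard ("*" on the slide).

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict           # header field name -> required value (absent = wildcard)
    actions: list         # e.g. ["output:6"], ["controller"], ["drop"]
    packet_count: int = 0 # Stats: packet counter
    byte_count: int = 0   # Stats: byte counter

    def matches(self, pkt: dict) -> bool:
        # Every specified field must equal the packet's header value.
        return all(pkt.get(k) == v for k, v in self.match.items())

    def count(self, pkt_len: int) -> None:
        self.packet_count += 1
        self.byte_count += pkt_len

# Firewall-style rule: match TCP dport 22, wildcard everything else, drop.
drop_ssh = FlowEntry(match={"tcp_dport": 22}, actions=["drop"])
print(drop_ssh.matches({"tcp_dport": 22, "ip_src": "10.0.0.1"}))  # True
print(drop_ssh.matches({"tcp_dport": 80}))                        # False
```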

17 Examples.
Switching: match MAC dst 00:1f:.. (all other fields wildcarded) → forward to port 6.
Flow switching: match the full tuple (switch port 3, MAC src 00:20.., MAC dst 00:1f.., Eth type 0800, VLAN ID, IP src/dst, IP protocol, TCP sport/dport) → forward to port 6.
Firewall: match TCP dport 22 (all other fields wildcarded) → drop.

18 Examples.
Routing: match IP dst (all other fields wildcarded) → forward to port 6.
VLAN switching: match VLAN ID vlan1 and MAC dst 00:1f.. (all other fields wildcarded) → forward to ports 6, 7, and 9.

19 OpenFlow usage: a dedicated OpenFlow network. [Diagram: a controller PC running custom controller code ("Aaron's code") speaks the OpenFlow protocol to several OpenFlow switches, each holding Rule/Action/Statistics flow tables. OpenFlowSwitch.org]

20 Network design decisions: forwarding logic (of course); centralized vs. distributed control; fine- vs. coarse-grained rules; reactive vs. proactive rule creation; and likely more, as this is an open research area.

21 Centralized vs. distributed control. [Diagram: centralized control has one controller managing all OpenFlow switches; distributed control has a controller per switch or domain.]

22 Flow routing vs. aggregation: both models are possible with OpenFlow.
Flow-based: every flow is individually set up by the controller; exact-match flow entries; the flow table contains one entry per flow; good for fine-grained control, e.g. campus networks.
Aggregated: one flow entry covers a large group of flows; wildcard flow entries; the flow table contains one entry per category of flows; good for large numbers of flows, e.g. a backbone.
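A toy illustration of the table-size difference between the two models (the flows, prefix, and port name are assumptions for illustration, not real OpenFlow structures):

```python
import ipaddress

# 100 distinct TCP flows all heading to the same destination subnet.
flows = [("10.1.0.%d" % i, "10.2.0.1", 49152 + i) for i in range(100)]

# Flow-based model: one exact-match entry per (src, dst, sport) tuple.
exact_table = {f: "port6" for f in flows}

# Aggregated model: a single wildcard entry covers the whole category.
agg_table = {ipaddress.ip_network("10.2.0.0/24"): "port6"}

def lookup(dst_ip: str):
    # Longest-prefix-style lookup over the wildcard table (one entry here).
    for prefix, port in agg_table.items():
        if ipaddress.ip_address(dst_ip) in prefix:
            return port
    return None

print(len(exact_table), "exact entries vs", len(agg_table), "wildcard entry")
print(lookup("10.2.0.1"))  # port6
```

The trade-off is exactly the slide's: exact entries give per-flow control but the table grows with traffic; one wildcard entry stays constant-size but treats all matching flows identically.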

23 Reactive vs. proactive: both models are possible with OpenFlow.
Reactive: the first packet of a flow triggers the controller to insert flow entries; efficient use of the flow table; every flow incurs a small additional flow-setup time; if the control connection is lost, the switch has limited utility.
Proactive: the controller pre-populates the flow table in the switch; zero additional flow-setup time; loss of the control connection does not disrupt traffic; essentially requires aggregated (wildcard) rules.
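The reactive model can be sketched as a minimal learning-switch controller (an assumed simplification, not a real OpenFlow library): the first packet of a flow reaches the controller, which learns where hosts are and installs an entry so later packets are forwarded in hardware.

```python
class ReactiveController:
    def __init__(self):
        self.flow_table = {}   # entries installed on the switch: dst_mac -> out_port
        self.mac_to_port = {}  # learned host locations

    def packet_in(self, in_port: int, src_mac: str, dst_mac: str):
        # Called when a packet misses the flow table and reaches the controller.
        self.mac_to_port[src_mac] = in_port           # learn the source
        out = self.mac_to_port.get(dst_mac, "flood")  # unknown dst -> flood
        if out != "flood":
            self.flow_table[dst_mac] = out            # install exact-match entry
        return out

c = ReactiveController()
c.packet_in(1, "aa", "bb")  # unknown dst: flood (this is the flow-setup latency)
c.packet_in(4, "bb", "aa")  # reply: dst known, entry installed in hardware
print(c.flow_table)         # {'aa': 1}
```

The two costs named on the slide are visible here: the first packet detours through the controller, and if the control connection drops, no new entries can be installed.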

24 OpenFlow application: network slicing. Divide the production network into logical slices: each slice/service controls its own packet forwarding; users pick which slice controls their traffic (opt-in); existing production services (e.g. spanning tree, OSPF/BGP) run in their own slice. Enforce strong isolation between slices: actions in one slice do not affect another. This allows a (logical) testbed to mirror the production network, with real hardware, performance, topologies, scale, and users. Prototype implementation: FlowVisor.

25 Add a slicing layer between planes. [Diagram: slice controllers 1, 2, and 3 sit above a slicing layer that enforces slice policies; rules flow down and exceptions flow up over the control/data protocol to the data plane.]

26 Network slicing architecture. A network slice is a collection of sliced switches/routers. The data plane is unmodified: packets are forwarded with no performance penalty, and slicing works with existing ASICs. A transparent slicing layer makes each slice believe it owns the data path, enforces isolation between slices (i.e., it rewrites or drops rules to adhere to the slice policy), and forwards exceptions to the correct slice(s).

27 Slicing policies. The policy specifies resource limits for each slice: link bandwidth; maximum number of forwarding rules; topology; fraction of switch/router CPU; and FlowSpace (which packets the slice controls).

28 FlowSpace: Maps Packets to Slices

29 Real user traffic: opt-in. Allow users to opt in to services in real time: users can delegate control of individual flows to slices, adding new FlowSpace to each slice's policy. Example: "Slice 1 will handle my HTTP traffic"; "Slice 2 will handle my VoIP traffic"; "Slice 3 will handle everything else". This creates incentives for building high-quality services.
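The opt-in example above can be sketched as a flowspace table mapping packets to slices. The match dicts, port numbers, and `opt_in` helper are illustrative assumptions, not FlowVisor's actual syntax:

```python
# Each entry claims a region of header space for a slice; first match wins.
flowspace = [
    ({"tcp_dport": 80},   "slice1"),  # "Slice 1 will handle my HTTP traffic"
    ({"udp_dport": 5060}, "slice2"),  # "Slice 2 will handle my VoIP traffic"
    ({},                  "slice3"),  # "Slice 3 will handle everything else"
]

def slice_for(pkt: dict) -> str:
    for match, owner in flowspace:
        if all(pkt.get(k) == v for k, v in match.items()):
            return owner  # the empty match is a catch-all wildcard

def opt_in(user_match: dict, slice_name: str) -> None:
    # Opting in prepends new flowspace delegating the user's flows to a slice.
    flowspace.insert(0, (user_match, slice_name))

print(slice_for({"tcp_dport": 80}))  # slice1
print(slice_for({"tcp_dport": 22}))  # slice3 (the catch-all)
```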

30 FlowVisor is implemented on OpenFlow. [Diagram: several OpenFlow controllers (custom control planes) connect through the FlowVisor, which speaks the OpenFlow protocol down to switches/routers running OpenFlow firmware over their data paths; servers hang off the network. The switches see a single stub control plane.]

31 FlowVisor message handling. [Diagram: a packet exception goes up from the switch; the FlowVisor asks "Who controls this packet?" and forwards it to Alice's, Bob's, or Cathy's controller. A rule coming down is policy-checked ("Is this rule allowed?") before reaching the OpenFlow firmware. The data path keeps forwarding at full line rate throughout.]
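Both policy checks on this slide can be sketched together (an assumed simplified model: real FlowVisor flowspaces are richer than one VLAN per slice): rules from a slice controller are intersected with the slice's flowspace before reaching the switch, and packet-in exceptions are dispatched to the owning slice.

```python
# Hypothetical per-slice flowspace: each slice owns one VLAN.
slice_policy = {"alice": {"vlan": 10}, "bob": {"vlan": 20}}

def check_rule(slice_name: str, rule_match: dict):
    """Downstream check: 'Is this rule allowed?' Rewrite it into the slice."""
    allowed = slice_policy[slice_name]
    rewritten = dict(rule_match)
    for k, v in allowed.items():
        if rewritten.get(k, v) != v:
            return None        # rule reaches outside the slice: drop it
        rewritten[k] = v       # pin the rule inside the slice's flowspace
    return rewritten

def dispatch_packet_in(pkt: dict):
    """Upstream check: 'Who controls this packet?'"""
    for name, fs in slice_policy.items():
        if all(pkt.get(k) == v for k, v in fs.items()):
            return name

print(check_rule("alice", {"tcp_dport": 80}))  # {'tcp_dport': 80, 'vlan': 10}
print(check_rule("alice", {"vlan": 20}))       # None (outside Alice's slice)
print(dispatch_packet_in({"vlan": 20}))        # bob
```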

32 OpenFlow Deployments

33 OpenFlow has been prototyped on: Ethernet switches (HP, Cisco, NEC, Quanta, with more underway); IP routers (Cisco, Juniper, NEC); switching chips (Broadcom, Marvell); transport switches (Ciena, Fujitsu); WiFi APs and WiMAX base stations. Most (all?) hardware switches are now based on Open vSwitch.

34 Deployment: Stanford. Our real, production network: 15 switches, 35 APs, 25+ users, 1+ year of use, including my personal and web traffic! The same physical network hosts the Stanford demos: 7 different demos.

35 Demo Infrastructure with Slicing

36 Deployments: GENI

37 (Public) industry interest. Google has been a main proponent of new OpenFlow 1.1 WAN features (ECMP, MPLS-label matching) and presented an MPLS LDP-OpenFlow speaking router at NANOG50. NEC has announced commercial products, initially for datacenters, and is talking to providers. Ericsson presented "MPLS OpenFlow and the Split Router Architecture: A Research Approach" at MPLS2010.

38 OpenFlow in the WAN

39 OPEX: 60-70%; CAPEX: 30-40%. And yet service providers own and operate two such networks: IP and transport.

40 Motivation: IP and transport networks are separate. [Diagram: an IP/MPLS network of packet switches sits above a GMPLS transport network.] They are managed and operated independently, resulting in duplication of functions and resources in multiple layers and significant capex and opex burdens; this is well known.

41 Motivation: IP and transport networks do not interact. [Same diagram.] IP links are static, supported by static circuits or lambdas in the transport network.

42 What does it mean for the IP network? In IP backbone network design, router connections are hardwired by lambdas and links are 4x to 10x over-provisioned for peak-traffic protection. [Diagram: IP over DWDM. *April 2002.] The big problem: ever more over-provisioned links and ever bigger routers. How is this scalable?

43 Bigger routers? Dependence on large backbone routers (e.g. Juniper T640/TX8, Cisco CRS-1): expensive and power hungry. How is this scalable?

44 Functionality issues! Dependence on large backbone routers: complex and unreliable (Network World, 05/16/2007). Dependence on packet switching: with the traffic mix tipping heavily towards video, it is questionable whether per-hop packet-by-packet processing is a good idea. Dependence on over-provisioned links: over-provisioning masks the fact that packet switching is simply not very good at providing bandwidth, delay, jitter, and loss guarantees.

45 How can optics help? Optical switches: ~10x more capacity per unit volume (Gb/s/m³), ~10x less power consumption, ~10x less cost per unit capacity (Gb/s), and five-nines availability. Dynamic circuit switching: recover faster from failures; guaranteed bandwidth and bandwidth on demand (good for video flows); guaranteed low-latency and jitter-free paths (helps meet SLAs); and a lower need for over-provisioned IP links.

46 Motivation (repeated): IP and transport networks do not interact. [Same diagram.] IP links are static, supported by static circuits or lambdas in the transport network.

47 What does it mean for the transport network? Without interaction with a higher layer there is no real need to support dynamic services, and thus no need for an automated control plane; so the transport network remains manually controlled via NMS/EMS, and circuits to support a service take days to provision. Without visibility into higher-layer services, the transport network is reduced to a bandwidth seller. The Internet can help: a wide variety of services with different requirements could take advantage of dynamic circuit characteristics. [Diagram: IP over DWDM. *April 2002.]

48 What is needed: converged packet and circuit networks, managed and operated commonly, benefiting from both packet and circuit switches and from dynamic interaction between packet switching and dynamic circuit switching. This requires a common way to control and a common way to use the network.

49 But convergence is hard, mainly because the two networks have very different architectures, which makes integrated operation hard. Previous attempts at convergence have assumed that the networks remain the same, making what goes across them bloated, complicated, and ultimately unusable. We believe true convergence will come about from architectural change!

50 [Diagram: the separate IP/MPLS and GMPLS networks collapse into a single flow network under a unified control plane (UCP).]

51 Research goal (pac.c): packet and circuit flows commonly controlled and managed. A simple network of flow switches that switch at different granularities (packet, time slot, lambda, and fiber), under a simple, unified, automated control plane.

52 A common way to control: exploit the cross-connect table in circuit switches. Packet flows match on switch port, MAC src/dst, Eth type, VLAN ID, IP src/dst, IP protocol, and TCP sport/dport, then apply an action. Circuit flows map (in port, in lambda, starting time slot, signal type, VCG) to (out port, out lambda, starting time slot, signal type, VCG). The flow abstraction is unifying: it blurs the distinction between the underlying packet and circuit technologies and regards both as flows in a flow-switched network.
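The unifying abstraction can be sketched as a single flow table holding both kinds of rows (field names and values are assumptions for illustration): a packet flow and a circuit cross-connect are both just "match → action" entries.

```python
flow_table = []

def add_flow(match: dict, action: dict) -> None:
    # One insertion path regardless of switching granularity.
    flow_table.append({"match": match, "action": action})

# Packet flow: the classic OpenFlow tuple (wildcarded fields omitted).
add_flow({"in_port": 1, "ip_dst": "10.0.0.5", "tcp_dport": 80},
         {"out_port": 6})

# Circuit flow: a circuit switch's cross-connect entry in the same table.
add_flow({"in_port": 3, "in_lambda": "1553.3nm", "start_timeslot": 4},
         {"out_port": 9, "out_lambda": "1554.1nm", "start_timeslot": 4})

for f in flow_table:
    print(sorted(f["match"]), "->", f["action"])
```

The point of the sketch is the slide's: one control interface can drive both tables, so a controller need not care whether a flow is realized in packets or in a circuit.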

53 A common way to use: a unified architecture. [Diagram: networking applications (switching, traffic engineering, application-aware QoS, ...) sit on a network operating system above a virtualization (slicing) plane; the underlying data plane contains packet switches, circuit switches, and combined packet-and-circuit switches, all driven via the OpenFlow protocol. The unifying abstraction and unified control plane enable variable-bandwidth packet links, dynamic optical bypass, and unified recovery.]

54 Example application: congestion control via variable-bandwidth packet links.

55 OpenFlow Demo at SC09

56 Lab demo with wavelength switches. [Diagram: video clients and a video server connect through NetFPGA-based OpenFlow packet switches (NF1, NF2) with GE-to-DWDM SFP converters (GE O-E / E-O), 25 km of SMF, an AWG, and a 1x9 wavelength-selective-switch (WSS) based OpenFlow circuit switch; an OpenFlow controller drives everything over the OpenFlow protocol, and an OSA monitors the lambdas.]

57 Lab demo with wavelength switches. [Photo: the OpenFlow circuit switch, the 25 km SMF spool, the OpenFlow packet switch, and the GE optical mux/demux.]

58 OpenFlow-enabled converged packet and circuit switched network (Stanford University and Ciena Corporation). Demonstrates a converged network where OpenFlow controls both packet and circuit switches, dynamically defining flow granularity to aggregate traffic moving towards the network core, and providing differential treatment to different types of aggregated packet flows in the circuit network: VoIP is routed over a minimum-delay dynamic-circuit path; video gets a variable-bandwidth, jitter-free path bypassing intermediate packet switches; HTTP runs best-effort over static circuits. Many more new capabilities become possible in a converged network.
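The demo's differential treatment can be sketched as a small classifier-plus-policy table. Everything here is an illustrative assumption (the demo's actual classification rules are not given on the slide); the port numbers are conventional values for SIP, RTSP, and RTMP.

```python
# Assumed policy: traffic class -> treatment in the circuit network.
treatment = {
    "voip":  "minimum-delay dynamic-circuit path",
    "video": "variable-bandwidth, jitter-free bypass path",
    "http":  "best-effort over static circuits",
}

def classify(pkt: dict) -> str:
    if pkt.get("udp_dport") == 5060:         # SIP signalling -> VoIP class
        return "voip"
    if pkt.get("tcp_dport") in (554, 1935):  # RTSP/RTMP -> video class
        return "video"
    return "http"                            # everything else: best effort

print(treatment[classify({"tcp_dport": 80})])
print(treatment[classify({"udp_dport": 5060})])
```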

59 OpenFlow Enabled Converged Packet and Circuit Switched Network

60 Demo Video

61 Issues with GMPLS. GMPLS's original goal was a unified control plane (UCP) across packet and circuit (2000). Today the idea is dead: packet vendors and ISPs are not interested, and transport network SPs view it as a signaling tool available to the management system for provisioning private lines (not related to the Internet). After 10 years of development, there has been next-to-zero significant deployment as a UCP.

62 Issues with GMPLS (continued). The issues arise when GMPLS is considered as a unified architecture and control plane. Control-plane complexity escalates when unifying across packets and circuits, because GMPLS makes the basic assumption that the packet network remains the same (an IP/MPLS network carrying many years of legacy L2/L3 baggage) and that the transport network remains the same (multiple layers and multiple vendor domains). It relies on fragile distributed routing and signaling protocols with many extensions, increasing switch cost and complexity while decreasing robustness. It does not take into account the conservative nature of network operation: can IP networks really handle dynamic links? Do transport network service providers really want to give up control to an automated control plane? And it does not provide an easy path to control-plane virtualization.

63 Conclusions Current networks are complicated OpenFlow is an API – Interesting apps include network slicing Nation-wide academic trials underway OpenFlow has potential for Service Providers – Custom control for Traffic Engineering – Combined Packet/Circuit switched networks Thank you!

65 Backup

66 Practical considerations. It is well known that transport service providers dislike giving up manual control of their networks to an automated control plane, no matter how intelligent that control plane may be. How do we convince them? It is also well known that converged operation of packet and circuit networks is a good idea for those that own both types of networks, e.g. AT&T and Verizon. But what about those who own only packet networks, e.g. Google? They do not wish to buy circuit switches. How do we convince them? We believe the answer to both lies in virtualization (or slicing).

67 Basic idea: unified virtualization. [Diagram: client controllers speak the OpenFlow protocol to a FlowVisor, which speaks the OpenFlow protocol down to packet (P) and circuit (CK) switches.]

68 Deployment scenario: different service providers. [Diagram: ISP A's client controller, a private-line client controller, and ISP B's client controller each get an isolated client network slice, mediated by a FlowVisor under transport service provider (TSP) control, over a single physical infrastructure of packet and circuit switches.]

69 Demo topology. [Diagram: the transport service provider's (TSP) virtualized network of Ethernet packet switches and SONET/TDM circuit switches. ISP #1 runs an OpenFlow-enabled network (its own NetOS and app) on one slice of the TSP's network; ISP #2 runs another OpenFlow-enabled network on a second slice; a third customer takes a TSP private line.]

70 Demo methodology. We will show:
1. The TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS: (a) the FlowVisor manages slices of the TSP's network for ISP customers, where slice = bandwidth + control of part of the TSP's switches; (b) NMS/EMS can still be used to manually provision circuits for private-line customers.
2. Importantly, every customer (ISP #1, ISP #2, private line) is isolated from the other customers' slices: (a) ISP #1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow) and bring up and tear down links as dynamically as it wants; (b) ISP #2 is free to do the same within its slice; (c) neither can control anything outside its slice, nor interfere with other slices; (d) the TSP can still use NMS/EMS for the rest of its network.

71 ISP #1's business model. ISP #1 pays for a slice = { bandwidth + TSP switching resources }. (1) Part of the bandwidth is for static links between its edge packet switches (as ISPs do today). (2) Some of it is for redirecting bandwidth between the edge switches (unlike current practice). (3) The sum of both static and redirected bandwidth is paid for up-front. (4) The TSP switching resources in the slice are needed by the ISP to enable the redirect capability.

72 ISP #1's network. [Diagram: the packet (virtual) topology vs. the actual topology over the TSP's Ethernet and SONET/TDM switches. Notice the spare interfaces and spare bandwidth in the slice.]

73 ISP #1's network (continued). [Same diagram.] ISP #1 redirects bandwidth between the spare interfaces to dynamically create new links!

74 ISP #1's business model: rationale. Q: Why have spare interfaces on the edge switches? Why not use them all the time? A: Spare interfaces on the edge switches cost less than bandwidth in the core. (1) Sharing expensive core bandwidth between cheaper edge ports is more cost-effective for the ISP. (2) It gives the ISP flexibility in using dynamic circuits to create new packet links where needed, when needed. (3) The comparison (in the simple network shown) is between (a) 3 static links + 1 dynamic link = 3 ports per edge switch + static and dynamic core bandwidth, versus (b) 6 static links = 4 ports per edge switch + static core bandwidth; (c) as the number of edge switches increases, the gap increases.
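A back-of-envelope version of the scaling claim above. The cost model is an assumption of this sketch (the slide's exact figures refer to its own small example topology, which is not fully specified here): a static full mesh is compared against a sparser ring plus spare ports whose core bandwidth is redirected on demand.

```python
def full_mesh(n: int):
    # All-static design: every edge switch keeps a port and a core circuit
    # to every other edge switch.
    ports_per_switch = n - 1
    core_circuits = n * (n - 1) // 2
    return ports_per_switch, core_circuits

def sparse_plus_dynamic(n: int, spare_ports: int = 1):
    # Assumed alternative: ring connectivity (2 ports) plus spare edge ports
    # used to dynamically create new packet links when needed.
    ports_per_switch = 2 + spare_ports
    static_core_circuits = n
    return ports_per_switch, static_core_circuits

for n in (4, 8, 16):
    print(n, full_mesh(n), sparse_plus_dynamic(n))
```

Under this model the full mesh needs ever more ports and circuits as n grows, while the dynamic design stays flat per switch: the gap increases with the number of edge switches, which is the slide's point (c).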

75 ISP #2's business model. ISP #2 pays for a slice = { bandwidth + TSP switching resources }. (1) Only the bandwidth for static links between its edge packet switches is paid for up-front. (2) Extra bandwidth is paid for on a pay-per-use basis. (3) TSP switching resources are required to provision and tear down the extra bandwidth. (4) Extra bandwidth is not guaranteed.

76 ISP #2's network. [Diagram: the packet (virtual) topology vs. the actual topology. Only static link bandwidth is paid for up-front; ISP #2 uses variable-bandwidth packet links (our SC09 demo)!]

77 ISP #2's business model: rationale. Q: Why use variable-bandwidth packet links? In other words, why have more bandwidth at the edge (say 10G) and pay for less bandwidth in the core up-front (say 1G)? A: Again, for cost-efficiency. (1) ISPs today pay for the full 10G in the core up-front and then run their links at 10% utilization. (2) Instead, they could pay for, say, 2.5G or 5G in the core, ramping up when they need to and scaling back when they don't: pay per use.
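The arithmetic above can be made concrete with illustrative numbers (the unit price of 1 per Gb/s-month and the 2.5G base are assumptions; the 10G rate and 10% utilization come from the slide):

```python
avg_utilization = 0.10
edge_rate_gbps = 10.0
avg_demand_gbps = edge_rate_gbps * avg_utilization   # 1.0 Gb/s average load

# Today: pay for the full 10G core link up-front, run it at 10%.
unit_price = 1.0                                     # per Gb/s-month (assumed)
static_cost = edge_rate_gbps * unit_price

# Pay-per-use: pay for a 2.5G base up-front, buy bursts only when needed.
base_gbps = 2.5
burst_gbps = max(0.0, avg_demand_gbps - base_gbps)   # 0 at average load
payperuse_cost = (base_gbps + burst_gbps) * unit_price

print(static_cost, "vs", payperuse_cost)             # 10.0 vs 2.5
```

At average load the pay-per-use ISP spends a quarter as much, and only pays more in the periods when it actually bursts above the base.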

