Presentation on theme: "Programmable Virtual Networks"— Presentation transcript:
1 Programmable Virtual Networks: From Network Slicing to Network Virtualization. Ali Al-Shabibi, Open Networking Laboratory. Hi! I am Ali Al-Shabibi and I work at ON.Lab. I am going to tell you about FlowVisor. Who here knows FlowVisor? Who has used FlowVisor? Well, you should be using it!
2 Outline. Define FlowVisor: its design goals, its successes, its limitations. Describe and define network virtualization. Introduce OpenVirteX (formerly known as NetVisor), which provides programmable virtual networks.
3 Why FlowVisor? Good ideas rarely get deployed. Evaluating new network services is hard: new services may require changes to switch software, experimenters want to control the behaviour of their network, and credible evaluation requires access to real-world traffic. Evaluating network services is difficult for a variety of reasons. For one, users need control over the semantics of their network, which could mean changing the switches' firmware. To top it off, to be credible you need access to real user traffic. So, needless to say, new ideas rarely get deployed.
4 OK… Why is it hard? Real networks vs. test beds. Alright, why is this hard? Well, let's contrast real networks and test beds. Real networks have the port density you want, backed by relatively powerful networking devices. They have the scale that you can only hope to have. Finally, they have real users. Test beds, on the other hand, usually have low port density because they are usually composed of Linux boxes. Their scale is limited by the amount of money you have and, worse, they only have fake users, which really isn't credible.
5 Current Virtualization à la FlowVisor. Network slice = a collection of sliced switches, links, and traffic (header space). Each slice is associated with a controller. Slicing is transparent, i.e., every slice believes it has full and sole control of the datapath. FlowVisor enforces traffic and slice isolation. This is not a generalized virtualization. So let's look at FlowVisor's current virtualization. FlowVisor defines a slice as a collection of sliced switches, links, and traffic. By traffic, we mean the header space that distinguishes this traffic, also known as flowspace. Each slice is associated with a controller, which has control over the slice while believing it is the sole user of the datapath. FlowVisor is therefore responsible for enforcing isolation among the collection of slices that exist. Notice that controllers and switches do not need to be modified to work with FlowVisor.
6 Great! What about real traffic? FlowVisor allows users to opt in to services in real time. Individual flows can be delegated to a slice by a user. Admins can add policy to a slice dynamically. (Slices: Web, VoIP, Video, all the rest.) So now you know how FlowVisor defines a slice, but what about adding real user traffic? The idea is to run network services as part of a slice and allow users to opt in to the services. Users opt in by delegating flows to slices, each of which is controlled by a specific controller. Moreover, an admin can add a service to the network by dynamically adding policy at the FlowVisor.
7 Sprinkle some resource limits. Slicing resources includes: specifying the link bandwidth, the maximum number of forwarding rules, and a fraction of switch CPU. Furthermore, FlowVisor allows you to define resource limits for every slice. You can specify dataplane link bandwidth as well as the number of TCAM entries available to a slice. You can also slice the CPU of the switch on a per-slice basis, based on the amount of control traffic a particular slice controller can generate. OK, but how are packets classified onto a slice? By this I mean: which controller controls which packet? This is achieved with the notion of flowspace. FlowSpace: which slice controls which packet?
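The per-slice limits described above can be modeled as a small record plus an admission check. This is an illustrative sketch only: the names (`SliceLimits`, `can_install_rule`) are invented for this example and do not mirror FlowVisor's actual API.

```python
from dataclasses import dataclass

@dataclass
class SliceLimits:
    link_bandwidth_mbps: int   # share of dataplane link bandwidth
    max_flow_rules: int        # cap on forwarding (TCAM) entries for this slice
    cpu_fraction: float        # share of switch CPU for this slice's control traffic

web_slice = SliceLimits(link_bandwidth_mbps=100, max_flow_rules=500, cpu_fraction=0.25)

def can_install_rule(installed: int, limits: SliceLimits) -> bool:
    """Reject a new forwarding rule once the slice's rule budget is exhausted."""
    return installed < limits.max_flow_rules

print(can_install_rule(499, web_slice))  # True
print(can_install_rule(500, web_slice))  # False
```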
8 Mapping Packets to Slices. FlowSpace is basically the set of all possible header values defined by the OpenFlow tuple. For example, Slice 3 is defined as the traffic over all IP and MAC addresses on a particular TCP port, minus the set of IPs and MACs that are in Slice 1. Slice 1 ranges over all TCP ports. Slice 2 overlaps with Slice 3, which is a problem: in the overlapping regions we cannot distinguish the control traffic, so FlowVisor does not know which controller to forward the control packets to. FlowVisor avoids this problem by assigning a priority to each flowspace definition, which means that only one controller can ever control a particular flowspace.
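The priority-based lookup can be sketched in a few lines. This is a simplified model, not FlowVisor's implementation: matches are exact-value predicates on a flat dict of header fields, and the slice names and priorities are made up for illustration.

```python
def matches(entry, packet):
    """A packet matches an entry if every field the entry constrains agrees."""
    return all(packet.get(f) == v for f, v in entry["match"].items())

flowspaces = [
    {"slice": "slice1", "priority": 20, "match": {"tp_dst": 80}},
    {"slice": "slice3", "priority": 10, "match": {"tp_dst": 80}},   # overlaps slice1
    {"slice": "slice2", "priority": 10, "match": {"nw_proto": 17}},
]

def classify(packet):
    """Return the slice controlling this packet; overlaps resolved by priority."""
    hits = [fs for fs in flowspaces if matches(fs, packet)]
    return max(hits, key=lambda fs: fs["priority"])["slice"] if hits else None

print(classify({"tp_dst": 80}))    # slice1 wins the overlap on priority
print(classify({"nw_proto": 17}))  # slice2
print(classify({"tp_dst": 22}))    # None: no flowspace owns this traffic
```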
9 FlowVisor: Where does it live? It sits between switches and controllers, speaks OpenFlow up and down, and acts as a proxy to switches and controllers. Datapaths and controllers run unmodified. So now you know quite a bit about how FlowVisor functions, but sadly you do not know where it lives. Let's fix that. FlowVisor lives between the switches and controllers, and speaks OpenFlow up to the controllers and down to the switches. It basically acts like a glorified OpenFlow NAT for control traffic. Again, the controllers and datapaths can run unmodified.
10 What kind of magic is this? Who controls this packet? Is this action allowed? (PacketIn from the datapath.) So how does this work?
11 Message Handling - PacketIn. When a PacketIn is received, we first check whether the packet is an LLDP packet. If it is, we send it to the appropriate slice, dropping it if the controller for that slice is not connected. If the packet is not LLDP, we extract the match structure from the PacketIn and intersect it with the flowspace. If we find a match, we analyze the flowspace actions: depending on whether the actions are allowed, we either log an exception or send the packet to the slice (again dropping it if the slice controller is not connected). Finally, there is a safety feature of FlowVisor: if the packet was not sent to any slice, we protect the control path by installing a drop rule.
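The PacketIn decision flow above can be written down as one function. This is an illustrative re-enactment, not FlowVisor's code: `handle_packet_in`, the packet/slice dictionaries, and the returned action tuples are all invented names for this sketch.

```python
def handle_packet_in(pkt, slices, classify):
    """Decide what to do with a PacketIn (illustrative model only)."""
    if pkt.get("is_lldp"):
        target = pkt["lldp_slice"]           # LLDP is routed to its own slice
        return ("send", target) if slices[target]["connected"] else ("drop", None)
    owner = classify(pkt)                    # intersect the match with FlowSpace
    if owner is None:
        return ("install_drop_rule", None)   # no match: protect the control path
    if not slices[owner]["actions_allowed"]:
        return ("log_exception", owner)
    if not slices[owner]["connected"]:
        return ("drop", None)
    return ("send", owner)

slices = {
    "web":  {"connected": True,  "actions_allowed": True},
    "voip": {"connected": False, "actions_allowed": True},
}
classify = lambda pkt: pkt.get("owner")      # stand-in for the FlowSpace lookup
print(handle_packet_in({"owner": "web"}, slices, classify))   # ('send', 'web')
print(handle_packet_in({"owner": "voip"}, slices, classify))  # ('drop', None)
print(handle_packet_in({}, slices, classify))                 # ('install_drop_rule', None)
```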
12 Message Handling - FlowMod. When a controller sends a FlowMod, FlowVisor catches it. FlowVisor starts by slicing the actions and verifying that such slicing is permitted. For example, if a FlowMod rewrites one VLAN id to another, then both VLANs must be in the slice's flowspace; otherwise an error is sent back to the controller and the exception is logged. Then we extract the match and intersect it with the flowspace; every intersection is used to rewrite the original FlowMod with the flowspace definitions. Finally, if there are zero rewrites, FlowVisor logs an exception and sends an error to the controller.
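The intersect-and-rewrite step can be modeled concisely. A hedged sketch, assuming exact-value matches on flat header dicts; `intersect` and `slice_flowmod` are illustrative names, not FlowVisor internals, and the action-slicing check is omitted.

```python
def intersect(a, b):
    """Intersection of two matches; None if they constrain a field differently."""
    merged = dict(a)
    for field, value in b.items():
        if field in merged and merged[field] != value:
            return None
        merged[field] = value
    return merged

def slice_flowmod(flowmod, slice_flowspaces):
    """One rewritten FlowMod per FlowSpace intersection; error if none exist."""
    rewritten = []
    for fs in slice_flowspaces:
        m = intersect(flowmod["match"], fs["match"])
        if m is not None:
            rewritten.append({**flowmod, "match": m})
    if not rewritten:
        # Zero rewrites: log the exception and return an error to the controller.
        raise PermissionError("FlowMod falls outside the slice's flowspace")
    return rewritten

out = slice_flowmod({"match": {"tp_dst": 80}, "actions": []},
                    [{"match": {"nw_proto": 6}}, {"match": {"tp_dst": 22}}])
print(out)  # one FlowMod, narrowed to {'tp_dst': 80, 'nw_proto': 6}
```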
13 FlowVisor Highlights.
Demonstrations: Open Networking Summit '12 and '13; GENI GEC 9; best demo at SIGCOMM '09.
Deployments: GENI; OFELIA; the Stanford production network; in use at NEC and Ericsson labs, as well as at other vendors.
Three releases in the past year; the 1.0 release was downloaded over 70 times in one day.
14 FlowVisor Downloaders (Release 1.0).
University research: Georgia Tech, Rutgers, KSU, U of Wisconsin, U of Utah, Clemson.
R&E networks: APNIC, BBN, NYSERNet, CENIC.
Commercial network ops: AT&T, Comcast, EarthLink, PSINet, RCN.
Vendors: Goldman Sachs, Cisco, Aruba, NEC, Ericsson.
15 FlowVisor Summary. FlowVisor introduces the concept of a network slice. It is not a complete virtualization solution. It was originally designed to test new network services on production traffic. But it's really only a network slicer! FlowVisor provides network slicing, not complete network virtualization.
16 What should Network Virtualization be? (At least what I think ;).) It conceptually introduces a virtual network which is decoupled from the physical network. It should not change the abstractions we know and love from physical networks, but it should provide some new ones: instantiation, deletion, service deployment, migration, etc. I'd like to define what network virtualization is, at least from my point of view. Network virtualization should introduce the concept of a virtual network which is completely decoupled from the physical network. It should not change the abstractions that we know and love, but it should provide some new ones, like instantiation, migration, snapshotting, etc.
17 What is Network Virtualization? VPNs, overlays, VLANs, VRFs, MPLS, TRILL: none of these give you a virtual network; they merely virtualize one aspect of a network. There are a bunch of virtualization techniques such as VRFs, VLANs, overlays, etc., but unfortunately none of them deliver a decoupling of your virtual network from your physical infrastructure. They basically virtualize a certain aspect of your network. In my mind, there are three main flavors of network virtualization: topology virtualization, address virtualization, and policy virtualization.
Topology virtualization is the ability to have virtual nodes and/or links; these must be logically decoupled from your physical network.
Address virtualization is essentially the ability to give the user the illusion that he has the entire addressable space. While doing this, we should take care to maintain current assumptions: no one will like me if I destroy TCP/IP to give you a virtual net.
Policy virtualization is essentially what FlowVisor currently does: who can control what, and what guarantees do we give?
In short: topology virtualization means virtual links and virtual nodes, decoupled from the physical network. Address virtualization means virtual addressing that maintains current abstractions while adding some new ones. Policy virtualization asks: who controls what, and what guarantees are enforced?
18 Network Virtualization vs. Network Slicing. Say you want two networks with exactly the same properties. With slicing: sorry, you can't. You need to discriminate the traffic of the two networks with something other than the existing header bits, thus no address or complex topology virtualization. With network virtualization: virtual nets are completely independent, virtual nets are distinguished by the tenant id, and you get complete address and topology virtualization. OK, so what's really the difference between slicing and virtualization? Say you want to have two networks with exactly the same properties…
19 Virtualization State of the Art. Functionality is implemented at the edge, using tunneling techniques such as STT, VXLAN, or GRE. The network core is not available for innovation; a closed-source controller controls the behaviour of the network. This provides address and topology virtualization, but limited policy virtualization. Moreover, the topology looks like one big switch. So there already exist solutions which will provide you with virtual networks. Most of these solutions use some sort of tunneling technique, such as VXLAN or GRE tunnels. They basically treat the core network as a fabric of pipes that just shovel packets from one end of the network to the other. All the intelligence is implemented at the edge of the network, which means that you cannot define the semantics of your network, simply because that is not available to you; rather, a closed-source controller defines the nominal behaviour of your network. These solutions have mostly been catered to data centers, and they do provide nice address and topology virtualization, but unfortunately they provide limited policy virtualization. On top of this, your topology looks like one big switch, which I argue is rather limiting.
20 Big Switch Abstraction. A single switch greatly limits the flexibility of the network controller: you cannot specify your own routing policy. What if you want a tree topology? The current state of the art in network virtualization revolves mainly around the big switch abstraction and some tunneling technology (whether it's VXLAN, STT, or something else is largely irrelevant). Currently we instantiate virtual networks by assigning endpoints to our virtual network and interconnecting them via this big switch abstraction. The issue is that this abstraction hides away an aspect of the network that we would like to control. In current solutions you have very limited choice in the type of network you would like to instantiate. We would like to change that.
21 Current Virtualization vs. OpenVirteX.
Current virtualization solutions: networks are not programmable; functionality is implemented at the edge; the network core is not available for innovation; tunnels must be provisioned to provide a virtual topology; address virtualization is provided by encapsulation.
OpenVirteX: each virtual network is handed to a controller for programming; edge and core are available for innovation; the entire physical topology can be exposed to the downstream controller; address virtualization is provided by remapping/rewriting header fields; both dataplanes and controllers can be used unmodified.
So, just to summarize the differences between what exists and what we are building.
22 OpenVirteX. By including a Quantum plugin, either directly in FlowVisor or possibly in FOAM, we are able to spawn virtual networks which are then paired with a controller. Each virtual network is given strict performance guarantees, which allows each tenant to operate his network unhindered by other tenants. Moreover, FlowVisor will support any version of OpenFlow, and you will be able to use them simultaneously. As I told you earlier, we achieve address and topology virtualization by rewriting control packets: basically, "all problems in computer science can be solved by another level of indirection." - David Wheeler
24 Address Space Virtualisation. Control traffic address translation; data traffic address translation; data traffic address mapping.
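The idea behind the address translation listed above is that each tenant sees the full address space while the physical network keeps tenants apart. Here is a deliberately toy model of that rewriting; the encoding scheme (tenant id embedded as the first octet) and both function names are invented for illustration and are not OpenVirteX's actual scheme.

```python
def to_physical(tenant_id, virtual_ip):
    """Rewrite a tenant-visible address into a globally unique physical one."""
    host = virtual_ip.split(".", 1)[1]     # keep the host portion
    return f"{tenant_id}.{host}"           # embed the tenant id in the address

def to_virtual(physical_ip, tenant_prefix="10"):
    """Reverse mapping: restore the address the tenant expects to see."""
    host = physical_ip.split(".", 1)[1]
    return f"{tenant_prefix}.{host}"

# Two tenants can reuse 10.0.0.1 without colliding on the physical network:
print(to_physical(1, "10.0.0.1"))   # 1.0.0.1
print(to_physical(2, "10.0.0.1"))   # 2.0.0.1
print(to_virtual("1.0.0.1"))        # 10.0.0.1
```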
25 Topology Virtualization - Abstractions. Expose the physical topology to tenants. Virtual link: collapse a multi-hop path into a one-hop link. The approach is also valid for proactive rules.
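The virtual-link abstraction can be sketched as follows: the tenant sees only the endpoints of a physical path, and a rule the tenant installs on that one-hop link (reactively or proactively) expands into one rule per physical hop. All names here are illustrative, not OpenVirteX's API.

```python
physical_path = [("s1", "s2"), ("s2", "s3"), ("s3", "s4")]  # three physical hops

def virtual_link(path):
    """Collapse a multi-hop physical path into the one-hop link the tenant sees."""
    return (path[0][0], path[-1][1])

def expand_flowmod(match, path):
    """A tenant rule on the virtual link becomes one rule per physical hop."""
    return [{"switch": a, "match": match, "next_hop": b} for a, b in path]

print(virtual_link(physical_path))                          # ('s1', 's4')
print(len(expand_flowmod({"tp_dst": 80}, physical_path)))   # 3
```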
26 Abstractions (contd.). Virtual switch: collapse ports dispersed over the network into one switch. The big switch is a virtual switch with all edge ports. Use a separate controller for each virtual switch. Allow the OpenVirteX admin to control routing within the virtual switch. (Figure: a virtual switch mapped onto the physical network's core ports and edge ports, with attached VMs.)
27 OpenVirteX Interaction with the Real World. (Figure: NetVisor/OpenVirteX deployment.)
28 OpenVirteX API Mapping to Quantum. (Figure: the OpenStack management system, with Nova, Quantum, and other components; Quantum and Nova plugins; VM1, VM2, and VM3 attached via a virtual switch to the physical network over OpenFlow/Open vSwitch.)
29 OpenVirteX API Mapping to Quantum.
Create Network API: ✔✔
Attach Port API: ✔
Create vRouter API: via the Router extension
Configure Topology API
30 How does it work? OK, so let's see how this can work. For simplicity we are going to stick to a reactive model.
1. A host sends a packet, and in OpenFlow style the switch ships a control packet to the controller, which happens to be FlowVisor in this case.
2. FlowVisor sees the packet and forwards it to the appropriate controller; the controller does something with the control packet and sends it back to the datapath, which is also FlowVisor.
3. FlowVisor rewrites this control packet, appending actions that cause the dataplane packet to be tagged.
4. The dataplane packet continues to the next hop, which triggers another control packet. This packet arrives at FlowVisor and is rewritten to match what the controller expects (i.e., the tag is removed) before being shipped off to the controller.
5. The controller does its thing and sends it back; FlowVisor rewrites the packet to maintain the tag on the dataplane.
6. The data packet continues on its way until it reaches its last hop, at which point yet another control packet is sent to FlowVisor, which again rewrites it so the controller can understand it, sends it to the controller, and gets it back.
7. This time, FlowVisor appends actions to the control packet that strip the tag from the data packet, so the packet can be sent to the destination.
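The tag lifecycle in the walkthrough above (push at the first hop, match in the core, pop at the last hop) can be re-enacted with a toy function. The action names and the function itself are invented for this sketch; real OpenFlow actions differ.

```python
def actions_for_hop(hop_index, last_index, tag):
    """Dataplane actions FlowVisor would arrange at each hop of the path."""
    if hop_index == 0:
        return [("push_tag", tag), ("output", "next")]   # first hop: tag the packet
    if hop_index == last_index:
        return [("pop_tag", tag), ("output", "host")]    # last hop: strip and deliver
    return [("output", "next")]                          # core hops: just forward

path_len = 3
for i in range(path_len):
    print(i, actions_for_hop(i, path_len - 1, 42))
```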
31 High Level Features. Support for more generalized network virtualization, as opposed to slicing. Address virtualization: use extra bits, or make clever use of the tenant id in the header. Topology virtualization: on-demand topology. Integrate with the cloud using OpenStack, via the Quantum plugin. Support any OF 1.x version, simultaneously. Support for scale, HA, and security features. Incorporate the right building blocks from other OSS. We just finished implementing a prototype.
32 Current Status. A quick-and-dirty prototype has been implemented. It provides address space virtualisation/isolation and two topology abstractions: virtual link and virtual switch. The current implementation is not intended to scale or to provide any significant performance; it's a proof of concept.
33 Future Challenges. Traffic engineering, e.g., load balancing. Reliability, e.g., disjoint paths. These need special attention when offering topology abstractions; they may even be severely impacted. Physical topology changes: a tenant may ask for reconfiguration of the virtual network, which is extremely challenging to get right.
34 Conclusion. FlowVisor 1.0 will continue to be supported. OpenVirteX is still in the design phase, but our clear goal is to deliver programmable virtual networks. An initial proof of concept may be available in Q. Contributions to FlowVisor and OpenVirteX are greatly appreciated and welcomed.