Presentation on theme: "Modular Layer 2 In OpenStack Neutron"— Presentation transcript:
1 Modular Layer 2 in OpenStack Neutron. Robert Kukura, Red Hat; Kyle Mestery, Cisco
2 "I've heard the Open vSwitch and Linuxbridge Neutron plugins are being deprecated."
"I've heard ML2 does some cool stuff!"
"I don't know what ML2 is, but I want to learn about it and what it provides."
3 What is Modular Layer 2?
- A new Neutron core plugin in Havana
- Modular drivers for layer 2 network types and mechanisms, which interface with agents, hardware, controllers, ...
- Service plugins and their drivers for layer 3+
- Works with existing L2 agents: openvswitch, linuxbridge, hyperv
- Deprecates the existing monolithic plugins
4 Motivations for a Modular Layer 2 Plugin
5 Before Modular Layer 2
(diagram: a Neutron Server ran either the Open vSwitch Plugin OR the Linuxbridge Plugin, but never both)
6 Before Modular Layer 2
"I want to write a Neutron plugin. But I have to duplicate a lot of DB, segmentation, etc. work. What a pain. :("
(diagram: a Neutron Server running a monolithic Vendor X Plugin)
7 ML2 Use Cases
- Replace existing monolithic plugins
  - Eliminate redundant code
  - Reduce development & maintenance effort
- New features
  - Top-of-Rack switch control
  - Avoid tunnel flooding via L2 population
  - Many more to come...
- Heterogeneous deployments
  - Specialized hypervisor nodes with distinct network mechanisms
  - Integrate *aaS appliances
  - Roll new technologies into existing deployments
9 The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack Neutron to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers.
10 What's Similar?
ML2 is functionally a superset of the monolithic openvswitch, linuxbridge, and hyperv plugins:
- Based on NeutronDBPluginV2: networks, subnets, and ports are created, updated, and deleted via the same code
- Models networks in terms of provider attributes: network_type, physical_network, segmentation_id
- RPC interface to the L2 agents
- Extension APIs: agent, binding, provider, quotas, security-groups, ...
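To make the provider attributes above concrete, here is a hedged sketch of how a single-segment network might look as a plain data structure; the three provider keys are the ones the slide names, while the specific values are purely illustrative:

```python
# Illustrative provider attributes for a single-segment VLAN network.
# These three keys describe where and how the network is realized;
# the values here are example-only.
network = {
    "name": "demo-net",
    "provider:network_type": "vlan",          # local/flat/vlan/gre/vxlan
    "provider:physical_network": "physnet1",  # which physical fabric
    "provider:segmentation_id": 100,          # VLAN tag on that fabric
}
```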
11 What's Different?
ML2 introduces several innovations to achieve its goals:
- Cleanly separates management of network types from the mechanisms for accessing those networks
- Makes types and mechanisms pluggable via drivers
- Allows multiple mechanism drivers to access the same network simultaneously
- Optional features packaged as mechanism drivers
- Supports multi-segment networks
- Flexible port binding
- L3 router extension integrated as a service plugin
12 ML2 Architecture Diagram
(diagram: the Neutron Server hosts the ML2 Plugin with its API Extensions, a Type Manager, and a Mechanism Manager. Type drivers: GRE, VLAN, VXLAN. Mechanism drivers: Arista, Cisco Nexus, Hyper-V, L2 Population, Linuxbridge, Open vSwitch, Tail-f NCS)
13 Multi-Segment Networks
(diagram: VM 1, VM 2, and VM 3 attached to one network composed of a VXLAN segment, physnet1 VLAN 37, and physnet2 VLAN 413)
- Created via the multi-provider API extension
- Segments bridged administratively (for now)
- Ports associated with the network, not a specific segment
- Ports bound automatically to a segment with connectivity
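With the multi-provider extension, a multi-segment network is described by passing a list of segment definitions instead of the single set of provider attributes. A hedged sketch of such a request body, mirroring the two-VLAN-plus-VXLAN topology above (attribute names follow the provider/multi-provider extensions; the values are illustrative):

```python
# Illustrative body for a create-network request using the
# multi-provider extension; each entry in "segments" uses the same
# provider attributes a single-segment network would carry.
multi_segment_network = {
    "network": {
        "name": "multi-seg",
        "segments": [
            {"provider:network_type": "vxlan",
             "provider:segmentation_id": 1000},
            {"provider:network_type": "vlan",
             "provider:physical_network": "physnet1",
             "provider:segmentation_id": 37},
            {"provider:network_type": "vlan",
             "provider:physical_network": "physnet2",
             "provider:segmentation_id": 413},
        ],
    }
}
```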
14 Type Driver API

class TypeDriver(object):

    @abstractmethod
    def get_type(self):
        pass

    @abstractmethod
    def initialize(self):
        pass

    @abstractmethod
    def validate_provider_segment(self, segment):
        pass

    @abstractmethod
    def reserve_provider_segment(self, session, segment):
        pass

    @abstractmethod
    def allocate_tenant_segment(self, session):
        pass

    @abstractmethod
    def release_segment(self, session, segment):
        pass
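To show how the interface is meant to be filled in, here is a minimal, hedged sketch of a concrete driver for flat networks. The method names follow the TypeDriver API above; the in-memory bookkeeping and validation rules are simplified stand-ins for what a real driver does against config and the database:

```python
# Minimal sketch of a concrete type driver. The segment-dict keys mirror
# the provider attributes; the set-based allocation is illustrative only
# (real drivers track allocations in database tables).

class FlatTypeDriver(object):
    """Manages 'flat' networks: one untagged network per physical network."""

    def get_type(self):
        return "flat"

    def initialize(self):
        # A real driver would load allowed physical networks from config.
        self.allocated = set()

    def validate_provider_segment(self, segment):
        if segment.get("network_type") != "flat":
            raise ValueError("unsupported network_type")
        if not segment.get("physical_network"):
            raise ValueError("physical_network required for flat networks")

    def reserve_provider_segment(self, session, segment):
        # Flat networks have no segmentation_id; just record the claim.
        physnet = segment["physical_network"]
        if physnet in self.allocated:
            raise ValueError("physical_network already in use")
        self.allocated.add(physnet)
        return segment

    def allocate_tenant_segment(self, session):
        # Flat segments cannot be auto-allocated to tenants.
        return None

    def release_segment(self, session, segment):
        self.allocated.discard(segment["physical_network"])
```

Tunneled types like GRE and VXLAN instead pool their segmentation IDs, handing one out in allocate_tenant_segment() and returning it to the pool in release_segment().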
16 Port Binding
- Determines the values for the port's binding:vif_type and binding:capabilities attributes, and selects the segment
- Occurs when binding:host_id is set on a port that has no existing valid binding
- The ML2 plugin calls bind_port() on the registered MechanismDrivers, in the order listed in the config, until one succeeds or all have been tried
- A driver determines whether it can bind based on:
  - context.network.network_segments
  - context.current['binding:host_id']
  - context.host_agents()
- For L2 agent drivers, binding requires a live L2 agent on the port's host that:
  - Supports the network_type of a segment of the port's network
  - Has a mapping for that segment's physical_network, if applicable
- If it can bind the port, the driver calls context.set_binding() with the binding details
- If no driver succeeds, the port's binding:vif_type is set to BINDING_FAILED

class PortContext(object):

    @abstractproperty
    def current(self):
        pass

    @abstractproperty
    def original(self):
        pass

    @abstractproperty
    def network(self):
        pass

    @abstractproperty
    def bound_segment(self):
        pass

    @abstractmethod
    def host_agents(self, agent_type):
        pass

    @abstractmethod
    def set_binding(self, segment_id, vif_type, cap_port_filter):
        pass
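The binding flow above can be sketched as a hedged, simplified bind_port() for an L2-agent style mechanism driver. The context argument stands in for ML2's real PortContext; the agent dictionary keys ("alive", "configurations") follow the Neutron agent model, but the helper itself and its constants are illustrative:

```python
# Hedged sketch of the bind_port() decision for an L2-agent mechanism
# driver: find a live agent on the port's host, then pick the first
# segment that agent can reach, either via tunnel support or via a
# bridge mapping for the segment's physical_network.

AGENT_TYPE_OVS = "Open vSwitch agent"
VIF_TYPE_OVS = "ovs"


def try_bind_port(context, agent_type=AGENT_TYPE_OVS):
    """Return True and set the binding if any segment is reachable."""
    for agent in context.host_agents(agent_type):
        if not agent["alive"]:
            continue  # a dead agent cannot provide connectivity
        cfg = agent["configurations"]
        mappings = cfg.get("bridge_mappings", {})
        tunnel_types = cfg.get("tunnel_types", [])
        for segment in context.network.network_segments:
            if (segment["network_type"] in tunnel_types or
                    segment.get("physical_network") in mappings):
                context.set_binding(segment["id"], VIF_TYPE_OVS,
                                    cap_port_filter=True)
                return True
    # ML2 would then try the next registered driver, or finally mark
    # the port's binding:vif_type as BINDING_FAILED.
    return False
```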
18 Type Drivers in Havana
The following segmentation types are supported in ML2 for the Havana release:
- local
- flat
- VLAN
- GRE
- VXLAN
19 Mechanism Drivers in Havana
The following ML2 MechanismDrivers exist in Havana:
- Arista
- Cisco Nexus
- Hyper-V Agent
- L2 Population
- Linuxbridge Agent
- Open vSwitch Agent
- Tail-f NCS
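In deployment terms, the type and mechanism drivers are selected in the [ml2] section of the plugin configuration. A hedged sketch of an ml2_conf.ini fragment (the driver aliases match the Havana in-tree names, but which drivers and ranges to enable is deployment-specific):

```ini
[ml2]
# Which type drivers to load, and which types tenants get by default.
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vlan,gre
# Mechanism drivers are tried for port binding in this order.
mechanism_drivers = openvswitch,linuxbridge,l2population

[ml2_type_vlan]
# physical_network name and the VLAN pool for tenant allocation.
network_vlan_ranges = physnet1:1000:2999
```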
20 Before the ML2 L2 Population MechanismDriver
"VM A" wants to talk to "VM G." "VM A" sends a broadcast packet, which is replicated to the entire tunnel mesh.
(diagram: VMs A through I spread across Hosts 1-4, with the broadcast flooded over every tunnel)
21 With the ML2 L2 Population MechanismDriver
The ARP request from "VM A" for "VM G" is intercepted and answered using a pre-populated neighbor entry. Traffic from "VM A" to "VM G" is encapsulated and sent to "Host 4" according to the bridge forwarding table entry.
(diagram: proxy ARP on Host 1 answers the request locally; a unicast tunnel carries the traffic straight to VM G on Host 4)
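Conceptually, the driver pre-populates two tables on each host, which together replace the flood-and-learn behavior. A hedged sketch using plain dictionaries (all addresses are illustrative; the real entries are programmed into the bridge forwarding table and ARP responder, not Python structures):

```python
# What L2 Population pre-populates, conceptually:
#   fdb:   VM MAC -> tunnel endpoint (VTEP) of the host running that VM
#   neigh: VM IP  -> VM MAC, for answering ARP locally (proxy ARP)
fdb = {
    "fa:16:3e:00:00:0a": "192.0.2.1",   # VM A on Host 1
    "fa:16:3e:00:00:07": "192.0.2.4",   # VM G on Host 4
}

neigh = {
    "10.0.0.10": "fa:16:3e:00:00:0a",
    "10.0.0.7": "fa:16:3e:00:00:07",
}


def resolve_and_forward(dst_ip):
    """Answer ARP from the local table, then pick one unicast tunnel peer."""
    mac = neigh[dst_ip]      # no ARP broadcast leaves the host
    return mac, fdb[mac]     # no flooding: traffic goes to exactly one VTEP
```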
23 ML2 Futures: Deprecation Items
The future of the Open vSwitch and Linuxbridge plugins:
- Planned for deprecation in Icehouse
- ML2 supports all their functionality
- ML2 works with the existing OVS and Linuxbridge agents
- No new features are being added to the OVS and Linuxbridge plugins in Icehouse
- A migration tool is being developed
24 Plugin vs. ML2 MechanismDriver?
Advantages of writing an ML2 driver instead of a new monolithic plugin:
- Much less code to write (or clone) and maintain
- New Neutron features are supported as they are added
- Support for heterogeneous deployments
Vendors integrating new plugins should consider an ML2 driver instead. Existing plugins may want to migrate to ML2 as well.
25 ML2 With Current Agents
- The ML2 plugin works with the existing agents
- Separate agents for Linuxbridge, Open vSwitch, and Hyper-V
(diagram: the Neutron Server runs the ML2 Plugin and reaches Hosts A-D over the API network; the hosts run Linuxbridge, Hyper-V, and Open vSwitch agents)
26 ML2 With Modular L2 Agent
- The future direction is to combine the open source agents
- A single agent which can support Linuxbridge and Open vSwitch
- Pluggable drivers for additional vSwitches, Infiniband, SR-IOV, ...
(diagram: the Neutron Server runs the ML2 Plugin and reaches Hosts A-D over the API network; each host runs the Modular Agent)
28 What the Demo Will Show
- ML2 running with multiple MechanismDrivers: openvswitch and cisco_nexus
- Booting multiple VMs on multiple compute hosts
- Hosts are running Fedora
- Configuration of VLANs across both the virtual and physical infrastructure
29 ML2 Demo Setup
(diagram: two hosts connected to a Cisco Nexus switch on ports eth2/1 and eth2/2. Host 1 runs nova api, nova compute, neutron server, neutron ovs agent, neutron dhcp, and neutron l3 agent; Host 2 runs nova compute and neutron ovs agent. On each host, br-int connects to br-eth2, which uplinks via eth2)
- The ML2 OVS MechanismDriver adds the VLAN on the VIF for VM1 and on the br-eth2 ports of Host 1, and likewise for VM2 on Host 2.
- During port binding, the ML2 Cisco Nexus MechanismDriver trunks the VLAN on switch ports eth2/1 and eth2/2.
- VM1 can ping VM2: the standard network test completes successfully.