
1 Software Defined Networking: OpenFlow Switches & Controllers
James Won-Ki Hong Department of Computer Science and Engineering POSTECH, Korea

2 Outline
OpenFlow Switches
OpenFlow Controllers
  NOX & POX
  Floodlight
  Ryu
  OpenDaylight (ODL)
  Open Network Operating System (ONOS)
Conclusions

3 Introduction to OpenFlow Switches
Hardware-based OpenFlow Switches
Commercial hardware switches with OpenFlow capability; the network abstraction is enabled by a firmware upgrade
High processing speed
Limited space for storing flow table entries: roughly 1,500 flow entries (CAM is expensive)
Not easy to upgrade; most hardware switches support only OpenFlow up to version 1.0
Software-based OpenFlow Switches
OpenFlow-enabled software switches that run on commodity x86 computers
Relatively low performance
Can store a large number of flow entries (bounded only in theory)
Under active development; support the most recent OpenFlow specifications
Hybrid OpenFlow Switch
A virtual switch paired with a specialized hardware device
Much faster than purely software-based switches

4 Hardware-based OpenFlow Switches
Juniper MX-series
NEC IP8800
WiMax (NEC)
HP ProCurve 5400
Netgear 7324
PC Engines
Pronto 3240/3290
Ciena CoreDirector
More coming soon...

5 Software-based OpenFlow Switches (1/3)
Open vSwitch (OVS)
Overview
A virtual switch, or Virtual Ethernet Bridge (VEB)
User space: configuration and control
Kernel space: datapath (included in the mainline Linux kernel since v3.3)
Features
Supports the OpenFlow protocol
Supports multiple tunneling protocols: VXLAN, Ethernet over GRE, IPsec, GRE over IPsec
Fine-grained QoS
Main components
ovs-vswitchd: the daemon that implements the switch
ovsdb-server: a lightweight database server that ovs-vswitchd queries for its configuration
ovs-vsctl: a utility for querying and updating the configuration of ovs-vswitchd
ovs-dpctl: a tool for configuring and monitoring the switch kernel module
ovs-ofctl: a tool for monitoring and administering OpenFlow switches
ovs-controller: a simple reference OpenFlow controller implementation
openvswitch.ko: the Open vSwitch kernel datapath module
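As a rough illustration of how these command-line tools fit together, the sketch below drives ovs-vsctl and ovs-ofctl from Python to create a bridge, point it at an OpenFlow controller, and dump its flow table. The bridge name, controller address, and the use of subprocess are illustrative assumptions, not part of the slides.

```python
import subprocess

def run(*cmd):
    """Run an OVS command-line tool and return its output."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Hypothetical bridge name and controller address (assumptions for illustration)
BRIDGE = "br0"
CONTROLLER = "tcp:127.0.0.1:6633"

# ovs-vsctl talks to ovsdb-server to change the switch configuration
run("ovs-vsctl", "add-br", BRIDGE)
run("ovs-vsctl", "set-controller", BRIDGE, CONTROLLER)

# ovs-ofctl speaks OpenFlow to the switch itself
print(run("ovs-ofctl", "show", BRIDGE))        # ports and capabilities
print(run("ovs-ofctl", "dump-flows", BRIDGE))  # current flow table entries
```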

6 Software-based OpenFlow Switches (2/3)
Open vSwitch Architecture (diagram)

7 Software-based OpenFlow Switches (3/3)
OpenFlow Software Switch
An OpenFlow-compatible user-space software switch implementation
The original code is based on the Stanford OpenFlow 1.0 reference switch
Initially developed by Ericsson Research
The implementation is feature-complete and passes the available oftest 1.1 test cases
Compatible with most OpenFlow controllers that support OF 1.1
The CPqD version supports OpenFlow up to v1.3
Components
ofdatapath: the switch implementation
ofprotocol: the secure channel that connects the switch to the controller
oflib: a library for converting to/from the OpenFlow 1.1 wire format
dpctl: a tool for configuring the switch from the console

8 Hybrid OpenFlow Switch (1/3)
Problems of Software-based Switches
Cannot fully utilize hardware resources (e.g., OVS exploits only a single CPU core)
Tightly coupled with the OS kernel, which increases management complexity
Low performance: massive RX interrupt handling for the NIC device; shared data access between threads leads to contention, which becomes a bottleneck
Hybrid OpenFlow Switch
Separates the roles of the virtual switch into two parts
Hardware: pure packet processing
Software: switch abstraction (e.g., the flow table)
Data Plane Development Kit (DPDK)
A set of libraries and drivers for fast packet processing
Works with x86 CPUs
Fast network I/O in user space

9 Hybrid OpenFlow Switch (2/3)
Features of the Hybrid OpenFlow Switch
Polling-based packet handling
Core assignment: assign the polling task to a dedicated CPU core → less context switching
Reduce the number of I/O and memory accesses: lockless queues, batch processing
(Diagram: packet path in a pure software-based switch (interrupt & DMA, kernel driver, skb_buf, socket API, and read/write system calls) vs. a hybrid switch with Intel DPDK, where the DPDK library does user-mode I/O and polling-based packet handling directly into the vswitch packet buffer memory)

10 Hybrid OpenFlow Switch (3/3)
Packet Processing Using Multi-Core CPUs
Exploit many-core CPUs
Decouple I/O processing from flow processing
Explicit thread assignment to CPU cores
(Diagram: NIC RX buffers feed I/O RX threads (CPU0, CPU1), which pass packets through ring buffers to flow-lookup/packet-processing threads (CPU2-CPU5), then through ring buffers to I/O TX threads (CPU6, CPU7) and the NIC TX buffers)

11 Introduction to OpenFlow Controller (1/2)
Problem Statement
New functions require new hardware (e.g., VNTag, TrustSec)
No support for network-wide control or high-level abstractions
Distributed control reduces controllability
Management without controllability
Monitor: collect network-wide statistics via CLI, SNMP, and NETCONF interfaces
Control: no real control of packet/flow forwarding; not much can be done with the monitored data
(Diagram: a Network Management System (NMS) over per-box distributed control; config/management ≠ fine-grained control; each switch bundles network control, datapath hardware, and functions F1…Fn that are inflexible, proprietary, and expensive)

12 Introduction to OpenFlow Controller (2/2)
Solution
Need a Network Operating System (NOS) that provides a uniform and centralized programmatic interface to the entire network
The NOS does not manage the network itself; instead it provides a programmatic interface, an Application Programming Interface (API)
Controllability is fully moved to an external controller: centralized control
The Network Operating System does not manage the network itself; it only provides a programming interface for managing it well. Applications developed and implemented on top of the NOS perform the actual network management functions. Note that the API provided at the NOS level must be designed generally enough to cover most existing network management and control functions. Distributed vs. centralized: the NOS must support a centralized programming model so that applications can see a view of the entire network; for example, an application that computes shortest paths on top of the NOS should be able to apply a centralized algorithm such as Dijkstra's directly.
(Diagram: an external controller running the Network Operating System controls the datapaths and functions F1, F2, F3, …, Fn of Switch 1 through Switch n)

13 Pedigree Chart of OpenFlow Controllers
(Diagram: pedigree chart of OpenFlow controllers: NOX Classic (C++/Python), NOX (C++), Trema (a full-stack OpenFlow framework in Ruby and C), proprietary C/C++ controllers such as the Cisco Controller and the Big Network Controller, Python controllers, and Java controllers such as the ETRI Controller; a link points to the scalability comparison slide)

14 NOX & POX

15 History of NOX
History
Started from the SANE project and applied to Ethane in 2006
The Ethane project was presented at SIGCOMM in 2007
The OpenFlow spec was announced in Nov. 2007, with a revised spec in May 2008
NOX was initially developed by Nicira side-by-side with OpenFlow, and open sourced as the first OpenFlow controller in 2008
NOX was further developed by CPqD to support OpenFlow 1.3 in Nov. 2012
(Timeline diagram: SANE ("network as a file system") → Ethane (policy compiler) → NOX-Classic/NOX (topology, routing, host tracking; OpenFlow v1.0) → NOX-Classic (CPqD) (OpenFlow v1.3))
Note the relationship among SANE, Ethane, and NOX: Ethane is usually regarded as the starting point of OpenFlow, but the earliest form of OpenFlow had actually already appeared through the SANE project. It was with Ethane that OpenFlow began to be known to the industry. Ethane's initial goal was to provide simple management and strong security in complex network operating environments; to realize this, its main purpose was to experiment with separating the control plane from the data plane and operating the network through a centrally located controller. Ethane research started in the fall of 2006, and in 2007 Stanford professors and students presented Ethane at SIGCOMM, the premier networking conference, where it drew enormous attention from academia. The research was led by Professor Nick McKeown, Professor Scott Shenker, and Nicira's Martin Casado, all of whom have since become prominent in the SDN area.

16 NOX Classic vs. NOX (1/2)
NOX Versions
NOX-Classic
The original NOX (now officially deprecated)
C++-based SDN controller; applications can also be developed in Python
Uses SWIG to integrate Python with C++

Version Name   Branch Name   Version No.   OF Spec   GUI
NOX Classic    Zaku          0.9.0         v1.0      Supported
NOX Classic    Destiny       0.9.1         v1.0      Supported
NOX Classic    CPqD ver.     N/A           v1.3      Not supported
(new) NOX      Verity        0.9.2         v1.0      Not supported

"Well, for starters, there's a major new version of NOX. This is the verity branch. Amin Tootoonchian has done the heavy lifting on this based on work he's been doing over the past couple of years. It's cleaner, slimmer, and faster. On the down side, it drops a lot of components and Python support. We actually don't think that's a huge down side, but we recognize that it's a substantial departure and that it's not backwards compatible, so we haven't just killed off the older NOX entirely. We now refer to the older branches like zaku and destiny as 'NOX Classic'."
NOX vs. POX: If you want an introduction to or to experiment with SDN on Mac OS, Windows, or Linux, use POX. If you want to use SDN in the classroom or to prototype SDN projects, use POX. If you're doing academic research, use POX. If you want to build a system that is bound by controller performance, or a finely engineered controller written in C++, start with NOX.
NOX-Classic: the original NOX; a C++-based SDN controller, but applications can be developed in Python; provides a graphical user interface
NOX: separated from NOX-Classic in 2012; only supports C++ for application development; fewer default applications than NOX-Classic, but much faster and with a much cleaner source base; no graphical user interface
(Figure labels: sample apps written in C++; sample apps written in Python and C++ using the SWIG library; sample apps written in Python)

17 NOX Classic vs. NOX (2/2)
NOX
The new NOX, separated from the NOX-Classic branch
Only supports C++ for application development
Fewer default applications than NOX-Classic
Enhanced performance and better source readability
Written on top of the Boost library
No graphical user interface
(Figure: NOX-Classic code snippet vs. NOX code snippet)

18 POX Overview
POX
NOX's younger sibling, implemented in Python
For rapid development and prototyping of network control software
Supports all major OSes (e.g., Linux, Mac, Windows)
Can be bundled with the install-free PyPy runtime
Performs well compared to NOX-Classic's Python support (note that NOX-Classic does not support the PyPy runtime)
Used for research and education purposes

POX Versions
Branch Name   Version No.   Release Date   OF Spec   GUI
Angler        0.0.0         2012 Fall      v1.0      Supported
Betta         0.1.0         2013 Spring    v1.0      Supported
Carp          0.2.0         2013 Fall      v1.0      Supported
Dart          0.3.0         2014 Summer    v1.0      Supported
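To make the rapid-prototyping point concrete, here is a minimal sketch of a POX component; the module name, file location, and log text are illustrative assumptions. It floods every packet it sees, i.e., the classic "hub" behavior.

```python
# Minimal POX component (a sketch): flood every packet, acting as a hub.
# Assumed to be saved under pox/ext/ and launched as: ./pox.py my_hub
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_PacketIn(event):
    # Tell the switch to flood the packet that triggered this PACKET_IN
    msg = of.ofp_packet_out(data=event.ofp)
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(msg)

def launch():
    # POX calls launch() when the component is listed on the command line
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
    log.info("Hub component running")
```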

19 NOX/POX Overview (1/4)
Components
Network application services: new functions implemented as software services (e.g., topology discovery, VLAN tagging, scan detection)
Northbound API: provides the interface to network applications; not yet standardized
NOX Controller (Network OS): provides system-wide abstractions and turns networking into a software problem
Southbound API: the standardized OpenFlow protocol, giving controller-switch interoperability
OpenFlow-enabled switches

20 NOX/POX Overview (2/4)
Granularity
Choosing the control granularity involves trading off scalability against flexibility
Observation: the network view provides adequate information and changes slowly; it includes switch-level topology but not the current state of network traffic
Control: must scale to large networks while still providing flexible control
(Spectrum: per-packet control (finer-grained control, lowest performance) → per-flow control → prefix-based routing control (coarser-grained control, highest performance))
Switch Abstraction
Switch instructions should be independent of the particular hardware and support flow-level control granularity
Solution: adopt the OpenFlow switch abstraction

21 NOX/POX Overview (3/4) Operation
Observation: construct the network view using DNS, DHCP, LLDP, and flow initiations
Control: determine whether to forward traffic and along which route; access control, routing, and failover applications
(Diagrams: network view generation, where the NOX controller sends PACKET_OUT messages carrying LLDP to OpenFlow switches A and B and learns the topology from the resulting PACKET_INs; and reactive routing, where a PACKET_IN for traffic from host A to host B triggers the NOX controller to push FLOW_MOD entries (match on SRC/DST, action on an output port) into the switches' flow tables)
Link Layer Discovery Protocol (LLDP): a vendor-neutral link-layer protocol in the Internet Protocol Suite used by network devices to advertise their identity, capabilities, and neighbors on an IEEE 802 LAN. Information gathered with LLDP is stored in the device as a management information base (MIB) and can be queried with the Simple Network Management Protocol (SNMP). The topology of an LLDP-enabled network can be discovered by crawling the hosts and querying this database. Normal LLDP operation does not work on an OpenFlow-enabled switch, since forwarding relies on matching flow table entries; this is why it is essential for switch vendors to verify how they handle the discovery message from the controller. The OpenFlow controller initiates network discovery: it sends an LLDP packet to each connected switch via a PACKET_OUT message, which instructs the switch to send the LLDP packet out on all of its ports. When a neighboring OpenFlow switch receives the LLDP packet, it performs a flow lookup; since it has no flow entry for the LLDP message, it sends the packet to the controller via a PACKET_IN. When the controller receives the PACKET_IN, it analyzes the packet and records a link between the two switches in its discovery table. All remaining switches similarly send PACKET_INs to the controller, which thereby builds the complete network topology. Based on this topology, the controller can push down different flow entries to each switch, depending on the traffic and application.
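The reactive-routing path in the diagram above can be sketched as a POX PacketIn handler that installs a flow entry (FLOW_MOD) so that subsequent packets of the same flow no longer reach the controller. This is a simplified sketch, assuming the output port has already been chosen by some routing logic; a real application would consult the network view first.

```python
# Sketch of reactive flow setup in POX: on PACKET_IN, push a FLOW_MOD so that
# later packets of the same flow are forwarded by the switch itself.
from pox.core import core
import pox.openflow.libopenflow_01 as of

def _handle_PacketIn(event):
    packet = event.parsed                      # parsed Ethernet frame
    out_port = 2                               # assumption: chosen by routing logic

    fm = of.ofp_flow_mod()
    fm.match = of.ofp_match.from_packet(packet, event.port)  # match this flow
    fm.idle_timeout = 10                       # expire idle entries
    fm.actions.append(of.ofp_action_output(port=out_port))
    fm.data = event.ofp                        # also forward the packet that triggered us
    event.connection.send(fm)

def launch():
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
```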

22 NOX/POX Overview (4/4)
Scaling
Packet-level parallelism: millions of packets arrive per second, but packets are handled by the individual switches
Flow-level parallelism: the flow-initiation rate is high but much lower than the packet arrival rate, and flow initiations are handled by individual controller instances
Periodic synchronization: the network view changes slowly enough that it can be maintained centrally
Public Release
A proof-of-concept OpenFlow controller
Released under the GPL license
Many network applications
(Note: review and prepare an explanation of the experimental environment)
Source: Comparing OpenFlow Controller Paradigms Scalability: Reactive and Proactive. International Conference on Advanced Information Networking and Applications.

23 NOX/POX Architecture
Components
Application components: common, l3_learning, l2_multi, spanning_tree, web services, topology discovery, OpenRoads, MAC_blocker, packet_dump, routing, host tracking, l2_learning, authenticator
Core: component API, cooperative threading, event harness, OpenFlow API, asynchronous I/O (socket I/O, file I/O)
(Note: prepare a brief introduction to each component)

24 Floodlight

25 Introduction to Floodlight
A completely open, free OpenFlow controller developed by Big Switch Networks
Currently supports OpenFlow up to v1.0
Research- and commercial-friendly
Easy to build, run, and develop against
Backed by a community of OpenFlow experts, access to commercial upgrades, and frequent testing
Rich toolchain of build and debugging tools

26 Floodlight Overview (1/2)
Floodlight Architecture
(Diagram: REST applications (Circuit Pusher (Python), OpenStack Quantum plugin (Python), your applications, ...) talk to the Floodlight controller over the REST API, whose modules implement the Restlet Routable interface; module applications (VNF, Static Flow Entry Pusher, Firewall, Hub, Learning Switch, PortDown Reconciliation, Forwarding) use the Java API; the controller core provides the Module Manager, Thread Pool, Packet Streamer, Jython server, Web UI, unit tests, Device Manager, Topology Service, Link Discovery, Flow Cache, Counter Store, PerfMon, Trace, storage (memory/NoSQL), and OpenFlow services toward the switches)
Circuit Pusher: uses the Floodlight REST APIs to create a bidirectional circuit, i.e., a permanent flow entry, on all switches along the route between two devices, based on IP addresses and with a specified priority
FloodlightProvider: tracks switch adds/removes and translates OpenFlow messages into Floodlight events
LinkDiscoveryManager: responsible for discovering and maintaining the status of links
TopologyService: maintains the topology information for the controller and finds routes in the network
ThreadPool: a wrapper around Java's ScheduledExecutorService; can be used to run threads at specific times or periodically
PacketStreamer: can selectively stream OpenFlow packets exchanged between any switch and the controller to an observer

27 Floodlight Overview (2/2)
Application Modules
Forwarding: the default reactive packet-forwarding application
Static Flow Entry Pusher: installs a specific flow entry (match + actions) on a specific switch
Firewall: applies ACL rules to allow/deny traffic based on a specified match
Port Down Reconciliation: reconciles flows across the network when a port goes down
Virtual Network Filter (VNF): a simple MAC-based network isolation application
Core REST APIs
Static Flow Pusher REST API: allows the user to proactively insert/delete/list flows on an OpenFlow switch
Firewall REST API: allows the user to insert/delete/list firewall rules
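As a rough sketch of using the Static Flow Pusher REST API from an external script, the snippet below proactively inserts and then lists a flow. The controller address, DPID, entry name, and endpoint path follow Floodlight's documented REST conventions but should be treated as assumptions for your version.

```python
# Sketch: pushing a static flow to Floodlight's Static Flow Pusher REST API.
# Controller address, DPID, and entry name are illustrative assumptions; older
# Floodlight versions expose a similar API under /wm/staticflowentrypusher/.
import json
import requests

CONTROLLER = "http://127.0.0.1:8080"
DPID = "00:00:00:00:00:00:00:01"

flow = {
    "switch": DPID,              # which switch to program
    "name": "demo-flow-1",       # entry name used for later delete/list
    "priority": "32768",
    "in_port": "1",              # match: packets arriving on port 1
    "active": "true",
    "actions": "output=2",       # action: send them out port 2
}

# Proactively insert the flow entry
r = requests.post(f"{CONTROLLER}/wm/staticflowpusher/json", data=json.dumps(flow))
print(r.json())

# List the static entries installed on that switch
r = requests.get(f"{CONTROLLER}/wm/staticflowpusher/list/{DPID}/json")
print(r.json())
```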

28 Floodlight Programming Model
IFloodlightModule
A Java module that runs as part of Floodlight
Consumes services and events exported by other modules: OpenFlow messages (e.g., Packet-In, Packet-Out, ...), switch add/remove, device add/remove/move, link discovery
External Application
Communicates with Floodlight via REST (the northbound APIs)
Static Flow Pusher: add flow, delete flow, list flows, remove all flows
Normalized network state: list hosts, list links, list switches, get stats, get counters
Maybe your applications?
(Diagram: IFloodlightModules run inside the Floodlight controller above the vSwitch; external applications talk to the controller over REST)
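A minimal sketch of the external-application side, reading the normalized network state over REST; the endpoint paths follow Floodlight's documented URIs, but treat them and the controller address as assumptions for your deployment.

```python
# Sketch: an external application reading normalized network state from Floodlight.
import requests

CONTROLLER = "http://127.0.0.1:8080"   # assumed controller address

switches = requests.get(f"{CONTROLLER}/wm/core/controller/switches/json").json()
links = requests.get(f"{CONTROLLER}/wm/topology/links/json").json()
hosts = requests.get(f"{CONTROLLER}/wm/device/").json()

print("switches:", switches)
print("links:", len(links))
print("hosts:", hosts)
```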

29 Module Description
FloodlightProvider: tracks switch adds/removes and translates OpenFlow messages into Floodlight events
TopologyManager: maintains the topology information for the controller; receives information from the LinkDiscovery module
LinkDiscovery: maintains the state of links in the network using LLDP messages
Forwarding: the basic reactive packet-forwarding module
DeviceManager: manages end-host (device) location information (MAC, IP, ...)
StorageSource: DB-style storage for Topology and LinkDiscovery data
RestServer: implemented via Restlets (restlet.org); REST API modules must implement RestletRoutable
StaticFlowPusher: supports the insertion and removal of static flows

30 Ryu

31 Introduction to Ryu
Ryu means "flow"; it is also the name of an Oriental dragon, a god of water

32 Introduction to Ryu
Ryu
A platform for building OpenFlow applications
Manages "flow" control to enable intelligent networking
Features
Generality: vendor-neutral, supports open interfaces
Agility: not an all-purpose, big monolithic "controller," but a framework for SDN application development
Protocol support
OpenFlow: OF 1.0, OF 1.2, OF 1.3, and OF-CONFIG 1.1
Other protocols: NETCONF, SNMP, OVSDB
Apps/libraries: topology view, firewall, OpenFlow REST API, etc.
Integration with other projects: OpenStack, HA with ZooKeeper, IDS with Snort
License: Apache 2.0
Developed and maintained by NTT
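To show what a Ryu application looks like in practice, here is a minimal sketch of a Ryu app that logs every PACKET_IN it receives. The class name, file name, and log text are illustrative; the imports and decorator follow Ryu's standard application structure.

```python
# Minimal Ryu application (a sketch): log every PACKET_IN event.
# Run with: ryu-manager packet_logger.py   (file name is an assumption)
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class PacketLogger(app_manager.RyuApp):
    # Ryu negotiates the OpenFlow version from this list
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                     # the OFPPacketIn message
        dpid = msg.datapath.id           # switch that sent it
        in_port = msg.match['in_port']   # ingress port from the match field
        self.logger.info("PACKET_IN from dpid=%s port=%s (%d bytes)",
                         dpid, in_port, msg.total_len)
```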

33 Ryu Architecture
Follows the standard SDN architecture
(Diagram: application layer with SDN apps and Ryu built-in apps (tenant isolation, topology discovery, firewall, ...) over well-defined APIs (REST, RPC, ...); control layer with the Ryu SDN framework, including an event dispatcher, libraries, an OpenFlow parser/serializer, and protocol support (OVSDB, VRRP, ...); open protocols (OpenFlow, OF-Config, NETCONF, OVSDB) down to OpenFlow switches and other network devices)

34 Ryu Implementation (1/2)
Event Driven
Event source / dispatcher / sink
Source: calls methods of the event dispatcher to generate events
Sink: a subclass of class RyuApp; each app has its own event queue to avoid race conditions
Dispatcher: decouples event sources and sinks; dispatches events based on the event's class; knows which methods are interested in which events from method attributes
Events are read-only because they are shared among many RyuApps
(Diagram: event sources feed the event dispatcher, which delivers each event into the per-RyuApp queues of the interested event sinks, determined by the class of the event)

35 Ryu Implementation (2/2)
Connection to an OpenFlow Switch
A receiving loop and a sending loop per switch connection
Receiving thread: generates OFPEvents, which are delivered to the RyuApp event-sink queues
Sending thread: serializes OpenFlow packets from the send queue and sends them to the switch

36 ODL: OpenDaylight

37 OpenDaylight Scope and Projects
OpenDaylight
Controller: software for forwarding elements
Southbound plugins that enable the controller to speak to OpenDaylight-supplied and other network elements
Northbound plugins that expose interfaces to those writing applications against the controller
Network services and applications intended to run on top of the controller, and integration between the controller and other elements
Support projects such as tools, infrastructure, or testing
Plugins for inter-controller communication

38 OpenDaylight Scope and Projects
OpenDaylight Projects
15 projects currently in bootstrap or incubation
Controller: modular, extensible, scalable, and multi-protocol SDN controller based on OSGi (originator: Cisco)
YANG Tools: Java-based NETCONF and YANG tooling for OpenDaylight projects
OpenFlow Protocol Library: OF 1.3 protocol library implementation (originator: Pantheon)
OpenFlow Plugin: integration of the OpenFlow protocol library into the controller SAL (originators: Ericsson, IBM, Cisco)
Defense4All: DDoS detection and mitigation framework (originator: Radware)
OVSDB: OVSDB configuration and management protocol support (originator: Univ. of Kentucky)
LISP Flow Mapping: LISP plugin and LISP mapping service (originator: ConteXtream)

39 OpenDaylight Scope and Projects
Project Framework

40 OpenDaylight Controller
Features of the OpenDaylight Controller
Built using Java
OSGi framework (Equinox): provides modularity and extensibility, bundle life-cycle management, In-Service Software Upgrade (ISSU), and multi-version support
Service Abstraction Layer (SAL): provides multi-protocol southbound support and abstracts/hides southbound protocol specifics from the applications
High availability and horizontal scaling using clustering
Releases
Hydrogen: initial release, supports OpenFlow 1.0 and 1.3
Helium: current stable release, plans to support OpenFlow 1.5
Lithium: the next release (6/25/2015 → SR1 8/13 → 9/24); new project: IoT Data Management (IoTDM)

41 Hydrogen Architecture
(Diagram: Hydrogen release architecture)
Network Applications, Orchestration & Services: Management GUI/CLI, VTN Coordinator, DDoS Protection, OpenStack Neutron
OpenDaylight APIs (REST)
Controller Platform: Base Network Service Functions (Topology Manager, Stats Manager, Switch Manager, Host Tracker, Shortest Path Forwarding), Affinity Service, OpenStack Service, Network Config, LISP Service, VTN Manager, DOVE Manager
Service Abstraction Layer (SAL): plug-in manager, capability abstractions, flow programming, inventory, ...
Southbound Interfaces & Protocol Plugins: OpenFlow 1.0/1.3, NETCONF, OVSDB, SNMP, BGP-LS, PCEP, LISP
Data Plane Elements (virtual switches, physical device interfaces): OpenFlow-enabled devices, OVSDB-enabled devices, and devices with proprietary control planes (support for the latter two is the main difference from other OpenFlow-centric controller platforms)
Acronyms: VTN: Virtual Tenant Network; DOVE: Distributed Overlay Virtual Ethernet; DDoS: Distributed Denial of Service; LISP: Locator/Identifier Separation Protocol; OVSDB: Open vSwitch Database Protocol; BGP: Border Gateway Protocol; PCEP: Path Computation Element Communication Protocol; SNMP: Simple Network Management Protocol

42 Helium Architecture
(Diagram: Helium release architecture)
Network Applications, Orchestration & Services: Dlux UI, VTN Coordinator, DDoS Protection, OpenStack Neutron, SDNi Wrapper
AAA AuthN filter over the OpenDaylight APIs (REST)
Controller Platform: Base Network Service Functions (Topology Manager, Stats Manager, Switch Manager, Host Tracker, Flow Rules Manager), OpenStack Service, GBP Service, SFC, AAA, DOCSIS, VTN Manager, OVSDB Neutron, Open Contrail, LISP Service, L2 Switch, SNBI Service, SDNi Aggregator, GBP Renderer
Service Abstraction Layer (SAL): plug-in manager, capability abstractions, flow programming, inventory, ...
Southbound Interfaces & Protocol Plugins: OpenFlow 1.0/1.3 with TTP, OVSDB, NETCONF, PCMM/COPS, SNBI, SNMP, BGP-LS, PCEP, LISP, Open Contrail
Data Plane Elements (virtual switches, physical device interfaces): OpenFlow-enabled devices, OVSDB-enabled devices, and devices with proprietary control planes (support for the latter two is the main difference from other OpenFlow-centric controller platforms)
Acronyms: VTN: Virtual Tenant Network; DDoS: Distributed Denial of Service; LISP: Locator/Identifier Separation Protocol; OVSDB: Open vSwitch Database Protocol; BGP: Border Gateway Protocol; PCEP: Path Computation Element Communication Protocol; PCMM: Packet Cable MultiMedia; SNMP: Simple Network Management Protocol

43 OpenDaylight Controller
Plugin Build Process

44 OpenDaylight Controller
Hydrogen OpenFlow Plugin Architecture

45 Model-Driven SAL (1/2) Model-Driven Service Abstraction Layer (SAL)

46 Model-Driven SAL (2/2) Model-Driven Service Abstraction Layer (SAL)
YANG Tools: supports the model-driven SAL; provides tooling to generate Java bindings from YANG models
(Figure: API-Driven SAL vs. Model-Driven SAL)

47 ONOS: Open Network Operating System

48 SDN Evolution and ON.LAB
ON.LAB
Non-profit, carrier- and vendor-neutral
Provides technical shepherding and a core team; builds the community
Supported by many organizations
(Timeline diagram: 2007 creation of the SDN concept (invention); platform development: 2007 Ethane, 2008 OpenFlow, 2009 FlowVisor, Mininet, NOX, 2010 Beacon; demonstrations: SIGCOMM, Open Networking Summit, Interop; deployments: 2009 Stanford, 2010 GENI started and grew to 20 universities, 2013 20 more campuses to be added; 2012 and beyond: define the SDN research agenda for the coming years)
(Note: give a brief introduction to ONRC and ON.LAB)

49 Introduction to ONOS: Open Network Operating System
An SDN OS for service provider networks
Key features
Scalability, high availability, and performance
Northbound and southbound abstractions
Modularity: supports various usage purposes, customization, and development
History
Founded in 2012
ONOS Prototype 1: scalability, high availability
ONOS Prototype 2: performance
ONOS version 1: open sourced on Dec 5th, 2014

50 Key Performance Requirements
Requirements for Supporting Service Provider Networks
High throughput: 500K-1M path setups/second; 3-6M network state operations/second
High volume: 500 GB-1 TB of network state data
Need to choose a distribution approach!

51 ONOS Tiers and Distributed Architecture
Six-tiered architecture; each ONOS instance runs the same software stack (instance 1, instance 2, instance 3, instance 4, ...)
Northbound Abstraction: network graph, application intents
Core: distributed, protocol-independent
Southbound Abstraction: generalized OpenFlow, pluggable and extensible
Adapters: a layer enabling multiple southbound protocols
Protocols: self-defined protocols using generalized SDN functions

52 ONOS Architecture (Prototype 2)
(Diagram: control applications use the ONOS graph API on top of a distributed network graph/state; Prototype 2 components include Hazelcast for event notifications, ZooKeeper for coordination and the strongly consistent distributed registry, Cassandra as a distributed key-value store with the Titan graph DB for the eventually consistent network graph, an in-memory network graph (eventually consistent) with indexing, and RAMCloud, an ultra-low-latency distributed data store in DRAM; instances 1-3 each run an OpenFlow Manager+ (host + Floodlight drivers) and scale out)

53 ONOS Scale-Out
Network Graph: a global network view maintained by the distributed Network OS (Instance 1, Instance 2, Instance 3, ...)
The most important characteristics of ONOS are its distributed architecture and the resulting scale-out performance and fault tolerance. ONOS runs on multiple servers, with one ONOS instance per server. An instance controls several switches as their exclusive master OpenFlow controller, and is responsible for propagating changed network state to the switches it controls and to the global network view. When operating a real network, operators need to grow data plane capacity, or the control plane that manages it, to handle ever-increasing traffic; with ONOS, data/control plane capacity can be added to a running network without additional effort.
Data plane: an instance is responsible for maintaining a part of the network graph
Control capacity can grow with network size or application need

54 ONOS Control Plane Failover
(Diagram: failover of switch A's mastership in the distributed registry: initially master = ONOS 1, candidates = ONOS 2, ONOS 3; after instance 1 fails, master = NONE, candidates = ONOS 2, ONOS 3; after re-election, master = ONOS 2, candidate = ONOS 3; hosts A-F attach to the switches controlled by the distributed Network OS instances 1-3)
The ONOS distributed architecture handles the malfunction or outage of an ONOS instance by reassigning that instance's role to another instance. Multiple instances sharing the same network view run redundantly; if the primary instance stops operating normally, a standby instance is activated and takes over the failed instance's work. A leader-election algorithm selects the next instance. A switch is connected to multiple ONOS instances, but only one instance can be the master for that switch. The instance elected master discovers the switch's information and programs the switch. When an ONOS instance fails, the remaining instances elect a new master, which takes over control of the switches the failed instance managed. ONOS uses ZooKeeper to manage switch-to-controller mastership: an instance asks ZooKeeper to become master of particular switches, and ZooKeeper decides whether to grant mastership.
Failover sequence: (1) switch A is controlled by Instance 1, and the registry shows Instance 1 as master for switch A; (2) Instance 1 has a failure and dies; (3) the registry detects that Instance 1 is down and releases the mastership for switch A; (4) the remaining candidates join the mastership election within the registry; (5) say Instance 2 wins the election and is marked in the registry as the master for switch A; (6) the channel to Instance 2 becomes the active channel and the other channels become passive. This enables quick failover of a switch when there is a control plane failure.
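The registry behavior described above can be sketched abstractly. The sketch below is not ONOS code (ONOS delegates this to ZooKeeper); it is just a toy model of releasing mastership and promoting a candidate when an instance fails.

```python
# Toy model (not ONOS code) of the mastership registry behavior described above:
# one master per switch, with candidates promoted when the master fails.
class MastershipRegistry:
    def __init__(self):
        self.masters = {}      # switch id -> instance id (or None)
        self.candidates = {}   # switch id -> list of standby instance ids

    def register(self, switch, instance):
        self.candidates.setdefault(switch, [])
        if self.masters.get(switch) is None:
            self.masters[switch] = instance          # first registrant becomes master
        else:
            self.candidates[switch].append(instance) # others wait as candidates

    def instance_failed(self, instance):
        # Release mastership for every switch the failed instance mastered,
        # then elect a new master from the remaining candidates.
        for switch, master in list(self.masters.items()):
            self.candidates[switch] = [c for c in self.candidates[switch] if c != instance]
            if master == instance:
                self.masters[switch] = None
                if self.candidates[switch]:
                    self.masters[switch] = self.candidates[switch].pop(0)

registry = MastershipRegistry()
for onos in ("ONOS-1", "ONOS-2", "ONOS-3"):
    registry.register("switch-A", onos)
registry.instance_failed("ONOS-1")
print(registry.masters)   # {'switch-A': 'ONOS-2'}
```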

55 ONOS Distributed Core
Distributed Core
Responsible for all state management concerns
Organized as a collection of "stores" (e.g., topology, links, link resources, etc.)
State management choices: ACID (Atomicity, Consistency, Isolation, Durability) vs. BASE (Basically Available, Soft state, Eventually consistent)
State and Properties
Network topology: eventually consistent, low-latency access
Flow rules, flow stats: eventually consistent, shardable, soft state
Switch-controller mapping, distributed locks: strongly consistent, slow changing
Application intents, resource allocations: strongly consistent, durable

56 Network Topology State
Global Network View for Northbound Applications
State: an inventory of devices, hosts, and links
Goal: closely track the state of the physical network
Fully replicated for scale and low-latency access
Causal consistency using logical timestamps (a logical clock system)
Logical clock system: each timestamp is <mastership term #, event sequence #>, i.e., <logical global time, logical local time>; e.g., <1,1>, <2,1>, <3,1>, <4,1>, <2,2>
Logical timestamps are used to maintain causal consistency: they allow a distributed system whose nodes do not share exactly the same clock to process particular events in order. In a logical clock system, every process keeps two data structures, a logical global time and a logical local time. The logical local time is maintained by each local process, which marks its own events with a sequence number in the order they occur. The logical global time increases by one whenever mastership moves to another instance; in this example it is referred to as the mastership term number.
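A minimal sketch of how such <mastership term, sequence number> timestamps could be compared (an illustration of the ordering rule described above, not ONOS code): a higher term always wins, and within the same term the higher sequence number is newer.

```python
# Sketch: ordering events by <mastership term, sequence #> logical timestamps.
# Not ONOS code; it just illustrates the comparison rule described above.
from typing import NamedTuple

class LogicalTimestamp(NamedTuple):
    term: int   # logical global time: mastership term number
    seq: int    # logical local time: per-instance event sequence number

    def is_newer_than(self, other: "LogicalTimestamp") -> bool:
        # A higher term always wins; within the same term, compare sequence numbers.
        return (self.term, self.seq) > (other.term, other.seq)

current = LogicalTimestamp(term=1, seq=4)
incoming = LogicalTimestamp(term=2, seq=1)   # a new master, so the term advanced
if incoming.is_newer_than(current):
    current = incoming                        # accept the newer device/link state
print(current)                                # LogicalTimestamp(term=2, seq=1)
```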

57 Other States
States of Flow Rules and Flow Stats
Switch-specific state
Flow rules: switch-specific match-action pairs
Flow stats: flow-specific traffic statistics
Partitioned, with the current switch master serving as the authoritative copy; can be fully replicated for quick failover
Soft state: inconsistencies are reconciled by refreshing from the source
States of Other Elements
Switch-to-controller mapping, application intents, and resource allocations require strong consistency
Possible solutions
Hazelcast-based solution: used in ONOS version 1.0, but has shortcomings in supporting durability
Raft consensus algorithm: provides all the desired consistency and durability properties

58 Application and Intent Framework
Programming abstraction
Intents, compilers, installers
Execution framework: intent service, intent store
Provides a high-level interface that focuses on what should be done rather than how it is specifically programmed
Abstracts network complexity away from applications
Extends easily to produce more complex functionality through combinations of other intents
General Operation
After an intent is submitted by an application, it is sent immediately (but asynchronously) into a compiling phase, then to the installing phase, and finally to the installed state. An intent may also be withdrawn if an application decides that it no longer wishes for the intent to hold. The remaining states account for various issues that may arise:
An application may ask for an objective that is not currently achievable, e.g., connectivity across unconnected network segments; in this case the compiling phase may fail. A change in the environment that allows the objective to be met can trigger a transition back to the compiling state.
The installation phase may be disrupted; in this case the framework attempts to recompile the intent to see if an alternate approach is available, and if recompilation succeeds, it returns to the installing phase.
A loss of throughput or connectivity may impact the viability of a successfully compiled and installed intent; in this case the framework attempts to recompile the intent, and if an alternate approach is available, its installation is attempted.
The failure mode for the above cases is the failed state. The intent framework relies on Java Futures for handling the asynchronous intent compilation process.
Compilers and Installers
Intents are ultimately compiled down into a set of FlowRule model objects. The process may include:
Compilation of an intent into installable intent(s) by an IntentCompiler
Conversion of installable intents into FlowRuleBatchOperations containing FlowRules by an IntentInstaller
Each non-installable intent has an IntentCompiler associated with it; similarly, installable intents have a corresponding IntentInstaller. For example, a PointToPointIntent must first be compiled into a PathIntent by a PointToPointIntentCompiler before being converted into a BatchOperation by the PathIntentInstaller. The IntentManager coordinates the compilation and installation of FlowRules by managing the invocation of the available IntentCompilers and IntentInstallers.

59 Intent Example
Compiler: produces more specific intents given the environment, e.g., finds the corresponding paths between two hosts
Installer: transforms intents into device commands
(Diagram: a Host-to-Host Intent is compiled into two Path Intents, which are installed as two Flow Rule Batches)
When you want to build a host-to-host flow provisioning service, you simply create an intent; the framework automatically generates the related device commands and sets up the flows.
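As a sketch of that host-to-host provisioning flow from an application's point of view, the snippet below submits a HostToHostIntent through ONOS's northbound REST API. The controller URL, credentials, application ID, and host IDs are illustrative assumptions; the JSON fields follow ONOS's documented intent format.

```python
# Sketch: submitting a host-to-host intent to ONOS over its REST API.
# URL, credentials, appId, and host IDs are assumptions for illustration.
import requests

ONOS = "http://127.0.0.1:8181/onos/v1"
AUTH = ("onos", "rocks")            # default ONOS REST credentials (assumed)

intent = {
    "type": "HostToHostIntent",
    "appId": "org.onosproject.demo",            # assumed application id
    "priority": 100,
    "one": "00:00:00:00:00:01/-1",              # host A (MAC/VLAN)
    "two": "00:00:00:00:00:02/-1",              # host B (MAC/VLAN)
}

# The intent framework compiles this into path intents and flow rule batches.
r = requests.post(f"{ONOS}/intents", json=intent, auth=AUTH)
print(r.status_code)

# List all intents known to the controller
print(requests.get(f"{ONOS}/intents", auth=AUTH).json())
```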

60 Intent Framework Design
Translates intents into device instructions (state, policy)
Reacts to changing network conditions
Extends dynamically to add or modify functionality (compilers, installers)
Intent Service API: provides the interface through which user-written applications interact with the framework.
Intent Extension Service API: provides the interface to the various compiler and installer extensions that interpret a registered intent and turn it into instructions the devices can understand.
Intent Manager: manages all the components of the intent framework.
Intent Store: stores intents; it is the key to ONOS's high-availability and scale-out properties. Work defined by intents is delegated to particular cluster nodes through a batch-queue mechanism; execution is designed to be idempotent, and failures are detected by the batch-queue mechanism so the corresponding work can be reassigned.
Intent Installation Worker: manages the states of intents; an intent can be in several states, which are expressed as a finite state machine.
Intent Objective Tracker: monitors network events and state in real time, and performs intent re-computation when the network state changes.

61 Representing Networks
Network Graph
The graph contains basic network objects such as switches, ports, devices, hosts, and links
An application computes a path by traversing the links from source to destination
The application writes a flow entry (in-port, out-port, flow) for each link on the path
Path computation is an application that uses the network graph: it finds a path from source to destination by traversing links and programs this path with flow entries to create a flow path. These flow entries are translated by the ONOS core into flow table rules and pushed onto the topology.
Device: a network infrastructure element, e.g., a switch, router, access point, or middle-box. Devices have a set of interfaces/ports and a DeviceId, and are interior vertices of the network graph.
Port: a network interface on a Device. A Port and DeviceId pair forms a ConnectPoint, which represents an endpoint of a graph edge.
Host: a network end-station, which has an IP address, MAC address, VLAN ID, and a ConnectPoint. Hosts are exterior (leaf) vertices of the network graph.
Link: a directed link between two infrastructure Devices (ConnectPoints). Links are interior edges of the network graph.
EdgeLink: a specialized Link connecting a Host to a Device. EdgeLinks are exterior edges of the network graph.
Path: a list of one or more adjacent Links, including EdgeLinks. EdgeLinks, if present in the path, can only be the exterior links (the beginning and end of the list).
Topology: a snapshot of a traversable graph representing the network. Path computations against this graph may be done using an arbitrary graph traversal algorithm, e.g., BFS, Dijkstra, or Bellman-Ford.
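To make the path-computation step concrete, here is a small self-contained sketch (plain Python, not ONOS code) that traverses a network graph with BFS from a source to a destination switch and then derives the per-link (in-port, out-port) flow entries along the path. The topology and port numbering are made up for illustration.

```python
# Sketch: computing a path over a network graph with BFS and deriving the
# per-hop flow entries. Toy topology and port numbers; not ONOS code.
from collections import deque

# adjacency: switch -> {neighbor switch: (local port, neighbor's port)}
topology = {
    "s1": {"s2": (2, 1)},
    "s2": {"s1": (1, 2), "s3": (2, 1)},
    "s3": {"s2": (1, 2)},
}

def bfs_path(graph, src, dst):
    """Return the list of switches on a shortest hop-count path, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in graph[path[-1]]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

def flow_entries(graph, path, host_in_port=1, host_out_port=3):
    """Derive (switch, in_port, out_port) entries along the path."""
    entries = []
    in_port = host_in_port                      # port where the source host attaches
    for here, nxt in zip(path, path[1:]):
        out_port, next_in = graph[here][nxt]
        entries.append((here, in_port, out_port))
        in_port = next_in
    entries.append((path[-1], in_port, host_out_port))  # deliver to destination host
    return entries

path = bfs_path(topology, "s1", "s3")
print(path)                        # ['s1', 's2', 's3']
print(flow_entries(topology, path))
```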

62 Adapter Layer
Design Considerations for Adapters
ONOS supports multiple southbound protocols
Adapters provide descriptions of data plane elements to the core
Adapters hide protocol complexity from ONOS
Device Subsystem
Responsible for discovering and tracking devices
Enables operators and apps to control the devices
The core model relies on the Device and Port model objects
(Mapping: DeviceManager ↔ OFDeviceProvider; Device ↔ OpenFlowSwitch; DeviceId/ElementId ↔ Dpid; Port ↔ OFPortDesc; MastershipRole ↔ RoleState)

63 Q&A

64 Comparisons on OpenFlow Controllers
Remarks
Python controllers do not support multi-threading and hence do not scale
Beacon shows the best scalability
The scalability discrepancy is due to two factors: the algorithm used to distribute incoming messages between threads, and the mechanism or libraries used for network interaction
Source: Shalimov, Alexander, et al. "Advanced study of SDN/OpenFlow controllers." Proceedings of the 9th Central & Eastern European Software Engineering Conference in Russia. ACM, 2013.

