Goals of this Seminar
By the end, everyone should:
– Know OpenFlow/SDN: what they are, how they relate, what's available now, where they're going, and how they're used
– Know what OpenFlow/SDN means for you: how you can use it, build on top of what's available, or build something completely new
– Have fun!
Original Question
How can researchers on college campuses test out new ideas in a real network, at scale?
We like to do new experiments:
– Mobility management
– New naming/addressing schemes
– Network access control
– New features of cloud computing
– Virtualization features
– …
Problem
Many good research ideas on college campuses…
…but no way to test new ideas at scale, on real networks, with real user traffic.
Consequence: almost no technology transfer.
Research problems
– Well-known problems: security, mobility, availability
– Incremental ideas: fixing BGP, multicast, access control, Mobile IP, data center networks
– More radical changes: energy management, VM mobility, …
The only test network large enough to evaluate future Internet technologies at scale is the Internet itself.
Today’s Networks are Defined by the “Box”
Hardware, operating system, and applications are built into a single “box”:
– Cannot mix and match
– Barrier to entry
The computing industry analogy:
– Vertically integrated (closed, proprietary, slow innovation, small industry): specialized applications on a specialized operating system on specialized hardware
– Horizontal (open interfaces, rapid innovation, huge industry): applications | open interface | Linux, Mac OS, or Windows (OS) | open interface | microprocessor
The same shift for networking:
– Vertically integrated (closed, proprietary, slow innovation): specialized features on a specialized control plane on specialized hardware
– Horizontal (open interfaces, rapid innovation): apps | open interface | any of several control planes | open interface | merchant switching chips
Current Internet: Closed to Innovations in the Infrastructure
[Diagram: many closed boxes, each bundling apps, an operating system, and specialized packet forwarding hardware.]
The “Software Defined Networking” approach to open it
[Diagram: the same specialized packet forwarding hardware, now driven by a single Network Operating System with apps running on top.]
Software Defined Network (SDN)
[Diagram: control programs operate on an abstract network view; network virtualization maps it to the global network view; a Network OS drives the packet-forwarding elements.]
Making ASICs Work
Specification → Functional Description (RTL) → Testbench & Vectors → Functional Verification → Logic Synthesis → Static Timing → Place & Route → Design Rule Checking (DRC) → Layout vs Schematic (LVS) → Layout Parasitic Extraction (LPE) → Manufacture & Validate
Making Software Work
Specification → Testbench → Functional Description (Code), checked with static code analysis, invariant checkers, interactive debuggers, model checking, and run-time checkers
Example: New Data Center Cost
– 200,000 servers, fanout of 20 → 10,000 switches
– $5k vendor switch → $50M; $1k commodity switch → $10M
– Savings across 10 data centers ≈ $400M
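The slide's arithmetic can be checked directly (all figures are from the slide itself):

```python
# Back-of-envelope check of the data center cost example.
servers = 200_000
fanout = 20                            # servers per switch
switches = servers // fanout           # 10,000 switches

vendor_cost = switches * 5_000         # $5k vendor switch -> $50M
commodity_cost = switches * 1_000      # $1k commodity switch -> $10M

savings_per_dc = vendor_cost - commodity_cost   # $40M per data center
total_savings = savings_per_dc * 10             # across 10 data centers

print(switches, vendor_cost, commodity_cost, total_savings)
```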
Making Networks Work (Today) traceroute, ping, tcpdump, SNMP, Netflow …. er, that’s about it.
Why debugging networks is hard Complex interaction – Between multiple protocols on a switch/router. – Between state on different switches/routers. Multiple uncoordinated writers of state. Operators can’t… – Observe all state. – Control all state.
Networks are kept working by “Masters of Complexity”
– A handful of books
– Almost no papers
– No classes
Philosophy of Making Networks Work YoYo “You’re On Your Own” YoYo Ma “You’re On Your Own, Mate”
With SDN we will:
1. Formally verify that our networks are behaving correctly.
2. Identify bugs, then systematically track down their root cause.
Three Methods
– Static checking: “independently checking correctness”
– Automatic testing: “is the datapath behaving correctly?”
– Interactive debugging: “finding bugs, and their root cause, in an operational network”
Motivations
In today’s networks, simple questions are hard to answer:
– Can host A talk to host B?
– What are all the packet headers from A that can reach B?
– Are there any loops in the network?
– Is Group X provably isolated from Group Y?
– What happens if I remove a line in the config file?
Use Cases: Can host A talk to B?
Each box i applies a transfer function T_i to the header space X entering at A. The set of packets from A that can reach B is the union of the compositions along every path, e.g. T3(T2(T1(X,A))) ∪ T3(T4(T1(X,A))); applying the inverse transfer functions (T1⁻¹, T2⁻¹, T3⁻¹, T4⁻¹) recovers all packets A can use to communicate with B.
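A tiny sketch of this computation, using made-up 4-bit headers and hypothetical transfer functions (this is an illustration of the idea, not the real Hassel code):

```python
# Toy header-space reachability: each box applies a transfer function to a
# set of headers; reachability from A to B is the union over all paths of
# the composed functions. T1..T4 and the topology are invented for the demo.
ALL = {format(i, '04b') for i in range(16)}        # X: every 4-bit header

def T1(hs): return {h for h in hs if h[0] == '1'}  # box 1 forwards 1***
def T2(hs): return {h for h in hs if h[1] == '0'}  # box 2 forwards *0**
def T3(hs): return hs                              # box 3 forwards all
def T4(hs): return {h for h in hs if h[3] == '1'}  # box 4 forwards ***1

# Two paths from A to B: 1 -> 2 -> 3 and 1 -> 4 -> 3
reach_B = T3(T2(T1(ALL))) | T3(T4(T1(ALL)))
print(sorted(reach_B))
```

Working with sets of headers rather than individual packets is what lets the analysis answer "what are *all* the headers that reach B" in one pass.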
Use Cases: Is there a loop in the network?
– Inject an all-x test packet from every switch port
– Follow the packet until it returns to the injection port, composing the transfer functions along the way, e.g. T4(T3(T2(T1(X,P)))), and compare the returned header space with the original (inverting via T4⁻¹, T3⁻¹, T2⁻¹, T1⁻¹ as needed)
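The loop test can be sketched the same way (again with invented transfer functions over toy 3-bit headers): propagate the all-wildcard header set around the cycle and check whether anything survives back at the injection port.

```python
# Toy loop detection: inject the full header space at a port, apply each
# box's transfer function around the cycle 1 -> 2 -> 3 -> 4, and test
# whether the returned header space still intersects the injected one.
ALL = {format(i, '03b') for i in range(8)}         # 3-bit toy headers

def T1(hs): return {h for h in hs if h[0] == '1'}
def T2(hs): return {h for h in hs if h[1] == '1'}
def T3(hs): return hs
def T4(hs): return {h for h in hs if h[2] == '1'}

returned = T4(T3(T2(T1(ALL))))      # header space after one loop traversal
loop_exists = bool(returned & ALL)  # non-empty intersection => loop
print(loop_exists, sorted(returned))
```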
Use Cases: Is the loop infinite?
Distinguish finite loops (the header space eventually shrinks to empty on repeated traversals) from potentially infinite loops (the returned header space keeps overlapping the injected one).
Header Space Analysis: Consequences
1. Finds all packets from A that can reach B
2. Finds loops, regardless of protocol or layer
3. Proves whether two groups are isolated
4. Proves whether the network adheres to policy
Works on existing networks and on SDNs.
Stanford Backbone: the Hassel tool
1. Reads Cisco IOS configuration
2. Checks reachability, loops, and isolation
3. Runs in ~10 minutes on the Stanford backbone
4. Easily parallelized: ~1 second is feasible
Hassel is available for free, for you to run.
Stanford backbone network
Loop detection test: run time < 10 minutes on a single laptop.
[Diagram: VLAN RED and VLAN BLUE spanning trees over the backbone topology.]
Performance
Results for the Stanford backbone network on a single machine (4 cores, 4 GB RAM):
  Generating TF rules              ~150 sec
  Loop detection test (30 ports)   ~560 sec
  Average per port                 ~18 sec
  Min per port                     ~8 sec
  Max per port                     ~135 sec
  Reachability test (avg)          ~13 sec
Short Story: OpenFlow is an API
– Controls how packets are forwarded
– Implementable on COTS hardware
– Makes deployed networks programmable, not just configurable
– Makes innovation easier
Goal (experimenter’s perspective):
– No more special-purpose test-beds
– Validate your experiments on deployed hardware with real traffic at full line speed
OpenFlow: a pragmatic compromise
+ Speed, scale, and fidelity of vendor hardware
+ Flexibility and control
Leverages the hardware already inside most switches today; vendors don’t need to expose their implementations.
Put an open platform in the hands of researchers and students to test new ideas at scale on production networks:
– An open development environment for all researchers
– Access to the flow tables in switches (lookup tables, access control lists, etc.)
– Control over packet forwarding in routers and switches
[Diagram: a conventional switch — the control path, in software, sits directly above the data path, in hardware, inside the same box.]
[Diagram: OpenFlow moves the control path out of the box — the data path (hardware) speaks the OpenFlow protocol over SSL/TCP to an external OpenFlow controller.]
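Every message on that controller channel starts with the same 8-byte OpenFlow header. As a minimal sketch (parsing only; a real controller would dispatch on the message type and run over SSL/TCP, port 6633 in early OpenFlow):

```python
# Parse the fixed 8-byte OpenFlow message header: version (u8), type (u8),
# length (u16), transaction id (u32), all big-endian per the OpenFlow spec.
import struct

OFP_HEADER = struct.Struct('!BBHI')   # version, type, length, xid
OFPT_HELLO = 0                        # type 0 = HELLO in OpenFlow 1.0

def parse_header(data: bytes) -> dict:
    version, msg_type, length, xid = OFP_HEADER.unpack(data[:8])
    return {'version': version, 'type': msg_type, 'length': length, 'xid': xid}

# Example: a HELLO as an OpenFlow 1.0 switch would send it (xid 42 is arbitrary)
hello = OFP_HEADER.pack(0x01, OFPT_HELLO, 8, 42)
print(parse_header(hello))
```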
OpenFlow Flow Table Abstraction
[Diagram: a controller PC on top; the switch’s software layer runs OpenFlow firmware holding a flow table; the hardware layer forwards between ports 1–4. Example entry: MAC src *, MAC dst *, IP src *, IP dst 184.108.40.206, TCP sport *, TCP dport *, Action: port 1 — so packets to that destination exit port 1.]
OpenFlow Basics: Flow Table Entries
Each entry has three parts:
– Rule: a 10-tuple of header fields, each matchable exactly or wildcarded — Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP protocol, TCP sport, TCP dport
– Action:
  1. Forward packet to port(s)
  2. Encapsulate and forward to controller
  3. Drop packet
  4. Send to normal processing pipeline
  5. Modify fields
– Stats: packet and byte counters
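A toy model of this abstraction makes the rule/action/stats split concrete. Field names, MAC values, and the first-match policy here are illustrative choices for the sketch, not the OpenFlow wire format:

```python
# Toy flow table: each entry is a rule over the 10-tuple (wildcards allowed),
# an action, and per-entry packet/byte counters, as on the slide.
FIELDS = ('in_port', 'mac_src', 'mac_dst', 'eth_type', 'vlan',
          'ip_src', 'ip_dst', 'ip_proto', 'tcp_sport', 'tcp_dport')

class FlowEntry:
    def __init__(self, rule, action):
        self.rule, self.action = rule, action
        self.packets, self.bytes = 0, 0              # stats counters

    def matches(self, pkt):
        # an unspecified or '*' rule field matches anything
        return all(self.rule.get(f, '*') in ('*', pkt[f]) for f in FIELDS)

def lookup(table, pkt, size):
    for entry in table:                  # first matching entry wins
        if entry.matches(pkt):
            entry.packets += 1
            entry.bytes += size
            return entry.action
    return 'to_controller'               # table miss: encapsulate to controller

table = [FlowEntry({'tcp_dport': 22}, 'drop'),           # firewall-style entry
         FlowEntry({'mac_dst': '00:1f:aa'}, 'port6')]    # L2-switching entry
pkt = dict.fromkeys(FIELDS, '*') | {'mac_dst': '00:1f:aa', 'tcp_dport': 80}
print(lookup(table, pkt, 1500))
```

The table-miss path (send to controller) is what lets the controller install new entries reactively.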
Examples
– Switching: MAC dst 00:1f:.., all other fields wildcarded → forward to port 6
– Flow switching: exact match on all ten fields (switch port, MAC src/dst, Eth type 0800, VLAN ID, IP src/dst, IP protocol, TCP sport/dport) → forward to port 6
– Firewall: TCP dport 22, all other fields wildcarded → drop
Examples
– Routing: IP dst 22.214.171.124, all other fields wildcarded → forward to port 6
– VLAN switching: VLAN ID vlan1 and MAC dst 00:1f:.., all other fields wildcarded → forward to ports 6, 7, and 9
Stanford Reference Implementation
– Linux-based software switch, released concurrently with the specification
– Kernel-space and user-space implementations (note: no v1.0 kernel-space implementation)
– Limited by the host PC, typically 4x 1 Gb/s
– Not targeted at real-world deployments; useful for development and testing, and a starting point for other implementations
– Available under the OpenFlow License (BSD-style) at http://www.openflowswitch.org
Wireless Access Points
Two flavors, both software-only implementations:
– OpenWRT-based (Busybox Linux), v0.8.9 only
– Vanilla software (full Linux): runs only on PC Engines hardware; Debian disk image available from Stanford
NetFPGA
– NetFPGA-based implementation: requires a PC and a NetFPGA card
– Hardware accelerated, 4x 1 Gb/s throughput
– Maintained by Stanford University
– $500 for academics, $1,000 for industry
– Available at http://www.netfpga.org
Open vSwitch
– Linux-based software switch, released after the specification
– Not just an OpenFlow switch; also supports VLAN trunks, GRE tunnels, etc.
– Kernel-space and user-space implementations
– Limited by the host PC, typically 4x 1 Gb/s
– Available under the Apache License at http://www.openvswitch.org
OpenFlow Vendor Hardware (more to follow…)
Prototypes and products spanning core, enterprise, campus/DC, circuit-switch, and wireless categories:
– NEC IP8800
– HP ProCurve 5400
– Juniper MX-series
– Cisco Catalyst 6k and Catalyst 3750
– Arista 7100 series (Q4 2010)
– Pronto
– Ciena CoreDirector
– WiMAX (NEC)
HP ProCurve 5400 Series
– Chassis switch with up to 288 ports of 1G, or 48x 10G (other interfaces available)
– Line-rate support for OpenFlow
– Deployed in 23 wiring closets at Stanford
– Limited availability for campus trials; contact HP for support details
Contributors: Praveen Yalagandula, Jean Tourrilhes, Sujata Banerjee, Rick McGeer, Charles Clark
NEC IP8800
– 24x/48x 1 GE + 2x 10 GE
– Line-rate support for OpenFlow
– Deployed at Stanford; available for campus trials
– Supported as a product; contact NEC (Don Clark, Atsushi Iwata) for details
Contributors: Hideyuki Shimonishi, Jun Suzuki, Masanori Takashima, Nobuyuki Enomoto, Philavong Minaxay, Shuichi Saito, Tatsuya Yabe, Yoshihiko Kanaumi (NEC/NICT), Atsushi Iwata (NEC/NICT)
Juniper MX Series
– Up to 24 ports of 10 GE, or 240 ports of 1 GE
– OpenFlow added via the Junos SDK
– Hardware forwarding
– Deployed in Internet2 in New York and at Stanford
– Prototype; availability TBD
Contributors: Umesh Krishnaswamy, Michaela Mezo, Parag Bajaria, James Kelly, Bobby Vandalore
Cisco 6500 Series
– Various configurations available
– Software forwarding only
– Limited deployment as part of demos; availability TBD
– Work on other Cisco models in progress
Contributors: Flavio Bonomi, Sailesh Kumar, Pere Monclus
Demo Infrastructure with Slicing
Flows traverse OpenFlow switches, WiMAX base stations, packet processors, and WiFi APs. The individual controllers and the FlowVisor are applications on commodity PCs (not shown).
Be sure to check out the demos at www.openflow.org
FlowVisor Creates Virtual Networks
FlowVisor slices OpenFlow networks, creating multiple isolated and programmable logical networks on the same physical topology. It sits between the OpenFlow switches and multiple guest controllers (the OpenPipes demo, the OpenRoads demo, the PlugNServe load-balancer), speaking the OpenFlow protocol on both sides. Each demo described here runs in an isolated slice of Stanford’s production network.
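The core idea can be sketched in a few lines: each slice owns a flowspace, and the proxy hands OpenFlow events for a packet to the one controller whose flowspace covers it. The slice predicates below (VLAN, ingress port, TCP port) are invented for illustration, not FlowVisor's actual policy format:

```python
# Toy slicing policy in the spirit of FlowVisor: map each packet to the
# guest controller whose flowspace it falls into, keeping slices isolated.
slices = {
    'OpenPipes':  lambda pkt: pkt.get('vlan') == 10,
    'OpenRoads':  lambda pkt: pkt.get('in_port') in ('wifi1', 'wimax1'),
    'PlugNServe': lambda pkt: pkt.get('tcp_dport') == 80,
}

def controller_for(pkt):
    # first slice whose flowspace covers the packet gets the events
    for name, owns in slices.items():
        if owns(pkt):
            return name
    return 'production'    # unmatched traffic stays with the production slice

print(controller_for({'tcp_dport': 80}))   # web traffic -> PlugNServe
print(controller_for({'vlan': 10}))        # vlan 10 -> OpenPipes
```

Because every event is routed through this one policy check, no guest controller ever sees (or can rewrite rules for) traffic outside its own slice.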
OpenPipes
Plumbing with OpenFlow to build hardware systems:
– Partition hardware designs
– Test
– Mix resources
Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow
Goal: load-balancing requests in unstructured networks.
OpenFlow means:
– Complete control over traffic within the network
– Visibility into network conditions
– Ability to use existing commodity hardware
What we are showing:
– An OpenFlow-based distributed load-balancer
– Smart load-balancing based on network and server load
– Incremental deployment of additional resources
This demo runs on top of the FlowVisor, sharing the same physical network with other experiments and production traffic.
ElasticTree: Reducing Energy in Data Center Networks
– Shuts off links and switches to reduce data center power
– Choice of optimizers to balance power, fault tolerance, and bandwidth
– OpenFlow provides network routes and port statistics
The demo:
– Hardware-based 16-node fat tree
– Your choice of traffic pattern, bandwidth, and optimization strategy
– Graph shows live power and latency variation
NOX Controller
– Open source (GPL), available at http://NOXrepo.org
– Modular design, programmable in C++ or Python
– High performance (usually the switches are the limit)
– Deployed as the main controller in Stanford
Contributors: Martin Casado, Scott Shenker, Teemu Koponen, Natasha Gude, Justin Pettit