Future Internet: New Network Architectures and Technologies. An Introduction to OpenFlow and Software-Defined Networking. Christian Esteve Rothenberg (esteve@cpqd.com.br)

Agenda: Future Internet research; Software-Defined Networking; an introduction to OpenFlow/SDN; the role of abstractions in networking; sample research projects; RouteFlow.

Credits: Nick McKeown's SDN slides (http://www.openflow.org/downloads/NickTalk_OFELIA_Feb2011.ppt); GEC 10 OpenFlow Tutorial (http://www.openflow.org/downloads/OpenFlowTutorial_GEC10.ppt); Scott Shenker's talk "The Future of Networking, and the Past of Protocols" (http://www.slideshare.net/martin_casado/sdn-abstractions); the OpenFlow Consortium (http://www.openflow.org).

Introduction to OpenFlow: the what, why and how of OpenFlow; potential limitations; current vendors; trials and deployments; research examples; discussion. Questions are welcome at any time!

What is OpenFlow?

Short story: OpenFlow is an API. It controls how packets are forwarded (and manipulated); it is implementable on COTS hardware; it makes deployed networks programmable, not just configurable; it is vendor-independent; it makes innovation easier. Goal (experimenter's perspective): validate your experiments on deployed hardware with real traffic at full line speed. Goal (industry perspective): reduced equipment costs through commoditization and competition in the controller/application space, plus customization and in-house (or 3rd-party) development of new networking features (e.g. protocols).

Why OpenFlow?

The Ossified Network. Today's router is specialized packet forwarding hardware under a closed operating system, with features (routing, management, mobility management, access control, VPNs, …) vertically integrated on top. Millions of lines of source code and 5400+ RFCs form a barrier to entry; billions of gates make the boxes bloated and power hungry. Many complex functions are baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, … An industry with a "mainframe mentality", reluctant to change.

Research Stagnation. There is lots of deployed innovation in other areas. OS: file systems, schedulers, virtualization. Distributed systems: DHTs, CDNs, MapReduce. Compilers: JITs, vectorization. Networks, by contrast, are largely the same as they were years ago: Ethernet, IP, WiFi. The rate of change of the network seems slower in comparison; we need better tools and abstractions to demonstrate and deploy. In those other areas, you can build real systems on top of high-quality, open platforms, and you don't need to reinvent the wheel.

Closed Systems (Vendor Hardware) Stuck with interfaces (CLI, SNMP, etc) Hard to meaningfully collaborate Vendors starting to open up, but not usefully Need a fully open system – a Linux equivalent

Open Systems: a gap in the tool space. The candidate platforms (simulation, emulation, software switches, NetFPGA, network processors, vendor switches) trade off performance fidelity, scale, real user traffic, complexity, and openness differently; none has all the desired attributes!

Ethane, a precursor to OpenFlow: centralized, reactive, per-flow control. A central controller manages a network of flow switches connecting hosts (e.g. Host A and Host B). See the Ethane SIGCOMM 2007 paper for details.

OpenFlow: a pragmatic compromise. + Speed, scale, and fidelity of vendor hardware; + flexibility and control of software and simulation. Vendors don't need to expose implementation details, and it leverages hardware already inside most switches today (ACL tables).

How does OpenFlow work?

Ethernet Switch

Inside the switch: a control path (software) and a data path (hardware).

With OpenFlow, the control path moves into an external OpenFlow controller, which speaks the OpenFlow protocol (over SSL/TCP) to the data path (hardware) that remains in the switch.

OpenFlow example: a controller PC speaks to the OpenFlow client on the switch. The switch's software layer holds the flow table; the hardware layer forwards packets on ports 1-4. Example entry: MAC src *, MAC dst *, IP src *, IP dst 5.6.7.8, TCP sport *, TCP dport *, Action: port 1. A packet from 1.2.3.4 destined to 5.6.7.8 matches the entry and is forwarded out port 1.

OpenFlow Basics: flow table entries have three parts. Rule: the header fields to match, with a mask choosing which fields count (Switch Port, VLAN ID, VLAN pcp, MAC src, MAC dst, Eth type, IP src, IP dst, IP ToS, IP prot, L4 sport, L4 dport). Action: forward the packet to zero or more ports; encapsulate and forward to the controller; send to the normal processing pipeline; modify fields; any extensions you add! Stats: packet and byte counters.
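As a concrete sketch of such an entry being installed, the snippet below assumes the POX controller's OpenFlow 1.0 bindings (pox.openflow.libopenflow_01); the match values are taken from the example slide above, and the handler name is illustrative:

```python
# A minimal sketch, assuming the POX controller and its OpenFlow 1.0 bindings.
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import IPAddr

def _handle_ConnectionUp(event):
    # One flow table entry: Rule (match) + Action; Stats counters are
    # maintained by the switch automatically for every entry.
    fm = of.ofp_flow_mod()
    fm.match.dl_type = 0x0800                  # Eth type: IPv4
    fm.match.nw_dst = IPAddr("5.6.7.8")        # IP dst; other fields wildcarded
    fm.actions.append(of.ofp_action_output(port=1))  # forward to port 1
    event.connection.send(fm)

def launch():
    # POX module entry point: install the rule on every switch that connects.
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
```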

Examples. Switching: match MAC dst 00:1f:.. with every other field wildcarded; action: forward to port6. Flow switching: match the full tuple (Switch Port port3, MAC src 00:20.., MAC dst 00:1f.., Eth type 0800, VLAN vlan1, IP src 1.2.3.4, IP dst 5.6.7.8, IP prot 4, TCP sport 17264, TCP dport 80); action: forward to port6. Firewall: match TCP dport 22 with every other field wildcarded; action: drop.
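For instance, the firewall row translates into a drop rule; a minimal sketch, again assuming POX's OpenFlow 1.0 bindings (the priority value is an assumption):

```python
# Sketch of the firewall row above as a POX-style flow-mod.
import pox.openflow.libopenflow_01 as of

def ssh_drop_rule():
    fm = of.ofp_flow_mod()
    fm.priority = 0x7000        # assumed: higher than the forwarding entries
    fm.match.dl_type = 0x0800   # IPv4
    fm.match.nw_proto = 6       # IP protocol: TCP
    fm.match.tp_dst = 22        # L4 dport 22 (SSH); everything else wildcarded
    # No actions appended: matching packets are dropped.
    return fm
```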

Examples (continued). Routing: match IP dst 5.6.7.8 with every other field wildcarded; action: forward to port6. VLAN switching: match MAC dst 00:1f.. and VLAN ID vlan1; action: forward to port6, port7, and port9.

Centralized vs Distributed Control: both models are possible with OpenFlow. Centralized control: a single controller manages all OpenFlow switches. Distributed control: several controllers each manage a subset of the switches.

Flow Routing vs. Aggregation: both models are possible with OpenFlow. Flow-based: every flow is individually set up by the controller; exact-match flow entries; the flow table contains one entry per flow; good for fine-grained control, e.g. campus networks. Aggregated: one flow entry covers a large group of flows; wildcard flow entries; the flow table contains one entry per category of flows; good for large numbers of flows, e.g. backbones.

Reactive vs. Proactive (pre-populated): both models are possible with OpenFlow. Reactive: the first packet of a flow triggers the controller to insert flow entries; efficient use of the flow table, but every flow incurs a small additional setup time, and if the control connection is lost the switch has limited utility. Proactive: the controller pre-populates the flow table in the switch; zero additional flow setup time, and loss of the control connection does not disrupt traffic; essentially requires aggregated (wildcard) rules.
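A sketch contrasting the two modes, assuming the same POX-style bindings as earlier; the prefix, port numbers and timeout are illustrative:

```python
# Proactive vs. reactive flow setup (POX-style sketch; values illustrative).
import pox.openflow.libopenflow_01 as of

def proactive_populate(connection):
    """Proactive: pre-install a wildcard (aggregated) rule when the
    switch connects, so no flow ever waits on the controller."""
    fm = of.ofp_flow_mod()
    fm.match.dl_type = 0x0800
    fm.match.nw_dst = "10.0.0.0/8"   # one entry covers a whole category
    fm.actions.append(of.ofp_action_output(port=1))
    connection.send(fm)

def reactive_handler(event):
    """Reactive: the first packet of each flow reaches the controller,
    which installs an exact-match entry for the rest of the flow."""
    fm = of.ofp_flow_mod()
    fm.match = of.ofp_match.from_packet(event.parsed, event.port)
    fm.idle_timeout = 10             # evict idle flows: efficient table use
    fm.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    fm.data = event.ofp              # re-inject the packet that triggered us
    event.connection.send(fm)
```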

Usage examples. Alice's code: a simple learning switch; per-flow switching; network access control/firewall; static "VLANs"; her own new routing protocol (unicast, multicast, multipath); a home network manager; a packet processor (in the controller); IPvAlice. What is possible in the controller? Anything that needs intelligent routing of a flow. At Stanford, OpenFlow has even been used for VM migration, power management, server load balancing, mobility management, network monitoring and visualization, network debugging, and network slicing … and much more you can create!

Intercontinental VM Migration: moved a VM from Stanford to Japan without changing its IP. The VM hosted a video game server with active network connections.

Quiz Time How do I provide control connectivity? Is it really clean slate? Why aren’t users complaining about time to setup flows over OpenFlow? (Hint: What is the predominant traffic today?) Considering switch CPU is the major limit, how can one take down an OpenFlow network? How to perform topology discovery over OpenFlow-enabled switches? What happens when you have a non-OpenFlow switch in between? What if there are two islands connected to same controller? How scalable is OpenFlow? How does one scale deployments?

What can you not do with OpenFlow v1.0? Non-flow-based (per-packet) networking, e.g. per-packet next-hop selection (in wireless mesh): yes, this is a fundamental limitation, BUT OpenFlow can provide the plumbing to connect such systems. Using all the tables on switch chips: yes, a major limitation (the cross-product issue), BUT an upcoming OF version will expose them.

What can you not do with OpenFlow v1.0? (continued) New forwarding primitives: BUT extensions provide a nice way to integrate them. New packet formats/field definitions: BUT a generalized OpenFlow (2.0) is on the horizon. Optical circuits: BUT efforts are underway to apply the OpenFlow model to circuits. Low-setup-time individual flows: BUT flows can be pushed down proactively to avoid delays.

Where it's going. OF v1.1: extensions for the WAN. OF v2+, multiple tables: leverage additional tables, tags and tunnels, multipath forwarding. OF v2+, generalized matching and actions: an "instruction set" for networking.

OpenFlow Implementations (Switch and Controller)

OpenFlow building blocks. Monitoring/debugging tools: oftrace, oflops, openseer. Applications: ENVI (GUI), LAVI, n-Casting, Expedient, and many more possible (these are the ones available at Stanford). Controllers: NOX, Beacon, Helios, Maestro, SNAC. Slicing software: FlowVisor, FlowVisor Console. OpenFlow switches, Stanford-provided: Software Ref. Switch, NetFPGA, Broadcom Ref. Switch, OpenWRT, PCEngine WiFi AP, Open vSwitch; commercial: HP, NEC, Pronto, Juniper, and many more (details on the next slide).

Current OpenFlow hardware: Juniper MX-series, NEC IP8800, UNIVERGE PF5240, WiMax (NEC), HP ProCurve 5400, Netgear 7324, PC Engines, Pronto 3240/3290, Ciena CoreDirector; more coming soon... (Outdated! See ONF: https://www.opennetworking.org/ and ONS: http://opennetsummit.org/)

Commercial Switch Vendors. HP ProCurve 5400zl or 6600: one OF instance per VLAN; LACP, VLAN and STP processing before OpenFlow; wildcard rules and non-IP packets processed in software; header rewriting in software; CPU protects management during loops. NEC IP8800 series and UNIVERGE PF5240: OpenFlow takes precedence; most actions processed in hardware; MAC header rewriting in hardware; more than 100K flows (PF5240). Pronto 3240 or 3290 with Pica8 or Indigo firmware: one OF instance per switch; no legacy protocols (like VLAN and STP). All support version 1.0; all have a limit of roughly 1500 flow table entries.

Controller Vendors (Outdated! See ONF: https://www.opennetworking.org/ and ONS: http://opennetsummit.org/). Nicira's NOX: open-source (GPL), C++ and Python, researcher-friendly. Nicira's ONIX: closed-source, for datacenter networks. SNAC: based on NOX 0.4; enterprise networks; C++, Python and JavaScript; currently used by campuses. Stanford's Beacon: open-source, researcher-friendly, Java-based. Big Switch controller: closed-source, based on Beacon, enterprise networks. Maestro (from Rice Univ.): Java-based. NEC's Helios: written in C and Ruby. NEC UNIVERGE PFC: based on Helios.

Growing Community: vendors and start-ups, providers and business units, and more... (Note: the level of interest varies.)

Industry commitment: big players are forming the Open Networking Foundation (ONF) to promote a new approach to networking called Software-Defined Networking (SDN). http://www.opennetworkingfoundation.org/

Software-Defined Networking (SDN)

Current Internet: closed to innovations in the infrastructure. Each box is a closed stack of applications, an operating system, and specialized packet forwarding hardware. The infrastructure is closed to innovation and driven only by vendors; consumers have little say, and the business model makes it hard for new features to be added.

The "Software-Defined Networking" approach to opening it up: move the intelligence out of the closed boxes into a network operating system, redefining the architecture to open the networking infrastructure and the industry, bringing to networking what we did for the computing world.

The "Software-Defined Network". 1. An open interface to hardware: switches, routers and other middleboxes are dumbed down to simple packet forwarding hardware; the key is a standardized control interface that speaks directly to hardware. 2. At least one good network operating system: extensible, possibly open-source. 3. A well-defined open API for the apps on top.

Virtualizing OpenFlow

Virtualization or "Slicing": the computer industry analogy. The computer industry went from one OS per machine to running Windows, Linux, or Mac OS over a virtualization layer on x86 hardware. The network industry can go the same way: many network operating systems (e.g. NOX) and their apps running over a slicing layer on OpenFlow switches.

Virtualization or "Slicing" Layer. Isolated "slices" let many network operating systems, or many versions of one, run simultaneously: each NOS (1-4) sees an open interface to hardware through the slicing layer, which in turn drives the simple packet forwarding hardware through the same open interface.

Switch-Based Virtualization: exists for NEC and HP switches, but is not flexible enough. Each research VLAN gets its own flow table and controller, while production VLANs keep normal L2/L3 processing. Experiments run on the PRODUCTION infrastructure: key to getting scale, and key to getting traffic onto the network (e.g. you can't just do a reset...).

FlowVisor-based Virtualization. Heidi's, Craig's, and Aaron's controllers each speak the OpenFlow protocol to FlowVisor (with policy control), which in turn speaks OpenFlow to the switches. Topology discovery is per slice.

FlowVisor-based Virtualization (continued). Separation is not only by VLANs, but by any L1-L4 pattern: e.g. a broadcast slice (dl_dst=FFFFFFFFFFFF), an http load-balancer slice (tp_src=80 or tp_dst=80), a multicast slice.

FlowSpace: Maps Packets to Slices
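A toy illustration of that mapping is sketched below; this is not FlowVisor's actual API, just the idea, with patterns borrowed from the previous slide:

```python
# Toy flowspace-to-slice mapping; slice names are made up.
FLOWSPACE = [
    ({"tp_src": 80}, "http-load-balancer-slice"),
    ({"tp_dst": 80}, "http-load-balancer-slice"),
    ({"dl_dst": "ff:ff:ff:ff:ff:ff"}, "broadcast-slice"),
]

def slice_for(packet_fields):
    """Return the slice whose L1-L4 pattern matches the packet
    (first match wins); unmatched packets fall into a default slice."""
    for pattern, slice_name in FLOWSPACE:
        if all(packet_fields.get(k) == v for k, v in pattern.items()):
            return slice_name
    return "default-slice"

# e.g. slice_for({"tp_dst": 80, "nw_dst": "5.6.7.8"}) -> "http-load-balancer-slice"
```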

FlowVisor Message Handling. A packet arriving at the data path with no matching entry raises an exception to the OpenFlow firmware, which forwards it to FlowVisor; FlowVisor's policy check decides who controls this packet (Alice, Bob, or Cathy) and relays it to that controller. Rules sent down by a controller pass a policy check ("is this rule allowed?") before reaching the switch. Matching traffic is forwarded at full line rate.

Use Case: New CDN (Turbo Coral ++). Basic idea: build a CDN where you control the entire network. All traffic to or from the Coral IP space (e.g. 84.65.*) is controlled by the experimenter; all other traffic is handled by default routing. The topology is the entire network, and end hosts are added automatically (no opt-in).

Use Case: Aaron's IP. A new layer 3 protocol that replaces IP, defined by a new Ether type: packets with Eth type = AaIP go to Aaron's slice; everything else (!AaIP) is handled normally.

Slicing traffic: all network traffic divides into untouched production traffic and research traffic; the research traffic is further sliced into Experiment #1, Experiment #2, … Experiment N.

Ways to use slicing: slice by feature; slice by user; home-grown protocols; download new features; versioning.

SDN Interlude… Returning to fundamentals

Software-Defined Networking (SDN): not just an idle academic daydream; it has tapped into a strong market need. One of those rare cases where we "know the future": still in development, but with consensus on its inevitability. There is much more to the story than the "OpenFlow" trade-rag hype: a revolutionary paradigm shift in the network control plane. (Scott Shenker's talk "The Future of Networking, and the Past of Protocols": http://www.youtube.com/watch?v=WVs7Pc99S7w; the role of abstractions in networking.)

Weaving Together Three Themes Networking currently built on weak foundation Lack of fundamental abstractions Network control plane needs three abstractions Leads to SDN v1 and v2 Key abstractions solve other architecture problems Two simple abstractions make architectures evolvable

Weak Intellectual Foundations. OS courses teach fundamental principles: mutual exclusion and other synchronization primitives; files, file systems, threads, and other building blocks. Networking courses teach a big bag of protocols: no formal principles, just vague design guidelines. Source: Scott Shenker

Weak Practical Foundations. Computation and storage have been virtualized, creating a more flexible and manageable infrastructure. Networks are still notoriously hard to manage: network administrators make up a large share of sysadmin staff. Source: Scott Shenker

Weak Evolutionary Foundations Ongoing innovation in systems software New languages, operating systems, etc. Networks are stuck in the past Routing algorithms change very slowly Network management extremely primitive Source: Scott Shenker

Why Are Networking Foundations Weak? Networks used to be simple Basic Ethernet/IP straightforward, easy to manage New control requirements have led to complexity ACLs, VLANs, TE, Middleboxes, DPI,… The infrastructure still works... Only because of our great ability to master complexity Ability to master complexity both blessing and curse Source: Scott Shenker

How Programming Made the Transition Machine languages: no abstractions Had to deal with low-level details Higher-level languages: OS and other abstractions File system, virtual memory, abstract data types, ... Modern languages: even more abstractions Object orientation, garbage collection,... Abstractions simplify programming Easier to write, maintain, reason about programs Source: Scott Shenker

Why Are Abstractions/Interfaces Useful? Interfaces are instantiations of abstractions Interface shields a program’s implementation details Allows freedom of implementation on both sides Which leads to modular program structure Barbara Liskov: “Power of Abstractions” talk “Modularity based on abstraction is the way things get done” http://www.infoq.com/presentations/liskov-power-of-abstraction So, what role do abstractions play in networking? Source: Scott Shenker

Layers are Main Network Abstractions Layers provide nice data plane service abstractions IP's best effort delivery TCP's reliable byte-stream Aside: good abstractions, terrible interfaces Don’t sufficiently hide implementation details Main Point: No control plane abstractions No sophisticated management/control building blocks Source: Scott Shenker

No Abstractions = Increased Complexity Each control requirement leads to new mechanism TRILL, LISP, etc. We are really good at designing mechanisms So we never tried to make life easier for ourselves And so networks continue to grow more complex But this is an unwise course: Mastering complexity cannot be our only focus Because it helps in short term, but harms in long term We must shift our attention from mastering complexity to extracting simplicity…. Source: Scott Shenker

How Do We Build Control Plane? We define a new protocol from scratch E.g., routing Or we reconfigure an existing mechanism E.g., traffic engineering Or leave it for manual operator configuration E.g., access control, middleboxes Source: Scott Shenker

What Are The Design Constraints? Operate within the confines of a given datapath Must live with the capabilities of IP Operate without communication guarantees A distributed system with arbitrary delays and drops Compute the configuration of each physical device Switch, router, middlebox FIB, ACLs, etc. This is insanity! Source: Scott Shenker

Programming Analogy. What if programmers had to: specify where each bit was stored; explicitly deal with all internal communication errors; work within a programming language with limited expressiveness? Programmers would redefine the problem by: defining higher-level abstractions for memory; building on reliable communication primitives; using a more general language. Abstractions divide the problem into tractable pieces. Why aren't we doing this for network control? Source: Scott Shenker

Central Question What abstractions can simplify the control plane? i.e., how do we separate the problems? Not about designing new mechanisms! We have all the mechanisms we need Extracting simplicity vs mastering complexity Separating problems vs solving problems Defining abstractions vs designing mechanisms Source: Scott Shenker

Abstractions Must Separate 3 Problems Constrained forwarding model Distributed state Detailed configuration Source: Scott Shenker

Forwarding Abstraction Control plane needs flexible forwarding model With behavior specified by control program This abstracts away forwarding hardware Crucial for evolving beyond vendor-specific solutions Flexibility and vendor-neutrality are both valuable But one economic, the other architectural Possibilities: General x86 program, MPLS, OpenFlow,….. Different flexibility/performance/deployment tradeoffs Source: Scott Shenker

State Distribution Abstraction Control program should not have to deal with vagaries of distributed state Complicated, source of many errors Abstraction should hide state dissemination/collection Proposed abstraction: global network view Don’t worry about how to get that view (next slide) Control program operates on network view Input: global network view (graph) Output: configuration of each network device Source: Scott Shenker

Network Operating System (NOS) NOS: distributed system that creates a network view Runs on servers (controllers) in the network Communicates with forwarding elements in network Gets state information from forwarding elements Communicates control directives to forwarding elements Using forwarding abstraction Control program operates on view of network Control program is not a distributed system NOS plus Forwarding Abstraction = SDN (v1) Source: Scott Shenker

Software-Defined Networking (v1). Current networks: protocols embedded in every device. SDNv1: a control program operates on a global network view provided by the network operating system, which controls the devices via the forwarding interface. Source: Scott Shenker

Major Change in Paradigm No longer designing distributed control protocols Now just defining a centralized control function Control program: Configuration = Function(view) Why is this an advance? Much easier to write, verify, maintain, reason about, …. NOS handles all state dissemination/collection Abstraction breaks this off as tractable piece Serves as fundamental building block for control Source: Scott Shenker
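To make the shape of this concrete, here is a minimal Python sketch of "Configuration = Function(view)"; the types and entry format are hypothetical, not any particular NOS's API:

```python
# Illustrative only: the shape of a centralized control function.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class NetworkView:
    switches: List[str]                    # nodes of the global graph
    links: List[Tuple[str, str]]           # annotated topology edges

def configuration(view: NetworkView) -> Dict[str, List[str]]:
    """Configuration = Function(view): a pure function from the global
    network view to per-switch state. No distributed protocol logic here;
    the NOS collects the view and disseminates the result."""
    config = {sw: [] for sw in view.switches}
    for a, b in view.links:
        # Derive per-switch entries from the graph (placeholder logic).
        config[a].append(f"neighbor:{b}")
        config[b].append(f"neighbor:{a}")
    return config
```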

What Role Does OpenFlow Play? NOS conveys configuration of global network view to actual physical devices But nothing in this picture limits what the meaning of “configuration” is OpenFlow is one possible definition of how to model the configuration of a physical device Is it the right one? Absolutely not. Crucial for vendor-independence But not the right abstraction (yet) Source: Scott Shenker

Are We Done Yet? This approach requires the control program (or operator) to configure each individual network device. This is much more complicated than it should be! The NOS eases the implementation of functionality, but does not help the specification of functionality. We need a specification abstraction. Source: Scott Shenker

Specification Abstraction Give control program abstract view of network Where abstract view is function of global view Then, control program is abstract mapping Abstract configuration = Function(abstract view) Model just enough detail to specify goals Don’t provide information needed to implement goals Source: Scott Shenker

One Simple Example: Access Control, contrasting the abstract network view with the full network view. Source: Scott Shenker

More Detailed Model. The service model can generally be described by a table pipeline: Packet In → L2 → L3 → ACL → Packet Out. This can be pretty much any forwarding element in networks today. Source: Scott Shenker

Implementing the Specification Abstraction. Given: an abstract table pipeline (e.g. L2/L3/ACL). The network hypervisor ("Nypervisor") compiles the abstract pipeline into a physical configuration: the pipeline's operations are distributed over the network of physical switches. In short, it solves a bunch of today's operational problems. Source: Scott Shenker

Two Examples. Scale-out router: the abstract view is a single router; the physical network is a collection of interconnected switches; the Nypervisor allows routers to "scale out, not up". Multi-tenant networks: each tenant has control over their "private" network; the Nypervisor compiles all of these individual control requests into a single physical configuration ("network virtualization"). Source: Scott Shenker

Moving from SDNv1 to SDNv2: the control program operates on an abstract network view provided by the Nypervisor, which sits on the global network view maintained by the network operating system. Source: Scott Shenker

Clean Separation of Concerns Network control program specifies the behavior it wants on the abstract network view Control program: maps behavior to abstract view Abstract model should be chosen so that this is easy Nypervisor maps the controls expressed on the abstract view into configuration of the global view Nypervisor: abstract model to global view Models such as single cross-bar, single “slice”,… NOS distributes configuration to physical switches NOS: global view to physical switches Source: Scott Shenker
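The whole layering composes into one loop; a minimal sketch under the assumption of illustrative object names (nos, nypervisor, control_program are not real APIs):

```python
# How the three interfaces compose (every name here is illustrative).
def run_control_loop(nos, nypervisor, control_program):
    global_view = nos.get_view()                       # distribution interface
    abstract_view = nypervisor.abstract(global_view)   # specification interface
    abstract_cfg = control_program(abstract_view)      # operator's logic only
    global_cfg = nypervisor.compile(abstract_cfg, global_view)
    nos.push(global_cfg)                               # forwarding interface
```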

Three Basic Network Interfaces Forwarding interface: abstract forwarding model Shields higher layers from forwarding hardware Distribution interface: global network view Shields higher layers from state dissemination/collection Specification interface: abstract network view Shields control program from details of physical network Source: Scott Shenker

Control Plane Research Agenda Create three modular, tractable pieces Nypervisor Network Operating System Design and implement forwarding model Build control programs over abstract models Identify appropriate models for various applications Virtualization, Access Control, TE, Routing,… Source: Scott Shenker

Implementations vs Abstractions. SDN is an instantiation of fundamental abstractions; don't get distracted by the mechanisms. The abstractions were needed to separate concerns: network challenges are now modular and tractable. The abstractions are fundamental; SDN implementations are ephemeral. OpenFlow, NOX, etc. are particular implementations. Source: Scott Shenker

Future of Networking, Past of Protocols The future of networking lies in cleaner abstractions Not in defining complicated distributed protocols SDN is only the beginning of needed abstractions Took OS researchers years to find their abstractions First they made it work, then they made it simple Networks work, but they are definitely not simple It is now networking’s turn to make this transition Our task: make networking a mature discipline By extracting simplicity from the sea of complexity… Source: Scott Shenker

"SDN has won the war of words, the real battle over customer adoption is just beginning...." - Scott Shenker

OpenFlow Deployments

Current Trials and Deployments: 68 trials/deployments in 13 countries.

Current Trials and Deployments. Brazil: University of Campinas; Federal University of Rio de Janeiro; Federal University of Amazonas; Foundation Center of R&D in Telecomm. Canada: University of Toronto. Germany: T-Labs Berlin; Leibniz Universität Hannover. France: ENS Lyon/INRIA. India: VNIT; Mahindra Satyam. Italy: Politecnico di Torino. United Kingdom: University College London; Lancaster University; University of Essex. Taiwan: National Center for High-Performance Computing; Chunghwa Telecom Co. Japan: NEC; JGN Plus; NICT; University of Tokyo; Tokyo Institute of Technology; Kyushu Institute of Technology; NTT Network Innovation Laboratories; KDDI R&D Laboratories; unnamed university. South Korea: KOREN; Seoul National University; Gwangju Institute of Science & Tech; Pohang University of Science & Tech; Korea Institute of Science & Tech; ETRI; Chungnam National University; Kyung Hee University. Spain: University of Granada. Switzerland: CERN.

3 New EU Projects: OFELIA, SPARC, CHANGE

EU Project Participants. Germany: Deutsche Telekom Laboratories; Technische Universität Berlin; European Center for ICT; ADVA AG Optical Networking; NEC Europe Ltd.; Eurescom. Sweden: ACREO AB; Ericsson AB. Hungary: Ericsson Magyarország Kommunikációs Rendszerek KFT. Switzerland: Eidgenössische Technische Hochschule Zürich; Dreamlab Technologies. United Kingdom: University of Essex; Lancaster University; University College London. Italy: Nextworks; Università di Pisa. Spain: i2CAT Foundation; University of the Basque Country, Bilbao. Belgium: Interdisciplinary Institute for Broadband Technology; Université catholique de Louvain. Romania: Universitatea Politehnica Bucuresti.

GENI OpenFlow deployment (2010): 10 institutions (including Kansas State) and 2 national research backbones.

GENI Network Evolution: National Lambda Rail

GENI Integration. FlowVisor: slicing control. Expedient: the experimenter's portal for slice management. Opt-in Manager: the network admins' portal to approve or deny experiment requests for traffic. Multiple Expedients speak the GENI API to per-substrate Opt-in Managers, which drive FlowVisors speaking OpenFlow to each substrate.

FIBRE: Future Internet testbeds between Brazil and Europe. A project among 9 partners from Brazil (6 from GIGA), 5 from Europe (4 from OFELIA and OneLab) and 1 from Australia (from OneLab), proposing the design, implementation and validation of a shared Future Internet sliceable/programmable research facility to support joint experimentation by European and Brazilian researchers. The objectives include: the development and operation of a new experimental facility in Brazil; the development and operation of a FI facility in Europe based on enhancements and the federation of the existing OFELIA and OneLab infrastructures; and the federation of the Brazilian and European experimental facilities, to support the provisioning of slices using resources from both testbeds.

Project at a glance. What? Main goal: create a common space between the EU and Brazil for Future Internet (FI) experimental research into network infrastructure and distributed applications, by building and operating a federated EU-Brazil FI experimental facility. Who? 15 partners: UFPA, UEssex, UNIFACS, UPMC, UFG, NICTA, Nextworks, UFSCar, CPqD, USP, i2CAT, RNP, UFF, UFRJ, UTH. How? Requested ~1.1M€ from the EC and R$ 2.6M from CNPq to perform 6 activities: WP1, project management; WP2 and WP3, building and operating the Brazilian (WP2) and European (WP3) facilities; WP4, federation of the FIBRE-EU and FIBRE-BR facilities; WP5, joint pilot experiments to showcase the potential of the federated FIBRE facility; WP6, dissemination and collaboration.

The FIBRE consortium in Brazil. The map shows the 9 participating Brazilian sites (islands) and the expected topology of their interconnecting private L2 network, over the GIGA, KyaTera and RNP experimental networks.

FIBRE site in Brazil

Technology pilot examples: high-definition content delivery; seamless mobility testbed.

OpenFlow Deployment at Stanford: switches (23), APs (50), WiMax (1).

Industry interest

Cellular industry Recently made transition to IP Billions of mobile users Need to securely extract payments and hold users accountable IP sucks at both, yet hard to change

Telco Operators. Global IP traffic is growing 40-50% per year while the end-customer monthly bill remains unchanged; therefore CAPEX and OPEX need to fall 40-50% per Gb/s per year, but in practice they fall by only ~20% per year.

Example: New Data Center. Cost: 200,000 servers with a fanout of 20 means 10,000 switches; at $5k per vendor switch that is $50M, versus $10M at $1k per commodity switch. Savings across 10 data centers: $400M. Control: more flexible control; tailor the network for services; quickly improve and innovate.
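The slide's arithmetic, as a quick check:

```python
# Quick check of the data-center cost figures above.
servers = 200_000
fanout = 20                          # servers per switch
switches = servers // fanout         # 10,000 switches
vendor = switches * 5_000            # $50M with $5k vendor switches
commodity = switches * 1_000         # $10M with $1k commodity switches
savings = 10 * (vendor - commodity)  # $400M across 10 data centers
print(switches, vendor, commodity, savings)
```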

Research Examples

Example 1 Load-balancing as a network primitive Nikhil Handigol, Mario Flajslik, Srini Seetharaman

LOAD-BALANCER

Load Balancing is just Smart Routing

Nikhil's experiment: a load-balancing feature on a network OS (NOX) in under 500 lines of code.
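Not the experiment's actual code, but a toy round-robin sketch of the idea in assumed POX style; the virtual IP and server addresses are made up:

```python
# Toy round-robin load balancer (POX-style sketch; addresses are made up).
import itertools
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import IPAddr

VIP = IPAddr("10.0.0.100")                         # hypothetical service IP
SERVERS = itertools.cycle([IPAddr("10.0.0.1"), IPAddr("10.0.0.2")])

def _handle_PacketIn(event):
    ip = event.parsed.find('ipv4')
    if ip is None or ip.dstip != VIP:
        return                                     # not for the virtual IP
    server = next(SERVERS)                         # round-robin server choice
    fm = of.ofp_flow_mod()
    fm.match = of.ofp_match.from_packet(event.parsed, event.port)
    fm.actions.append(of.ofp_action_nw_addr.set_dst(server))   # rewrite dst IP
    fm.actions.append(of.ofp_action_output(port=of.OFPP_NORMAL))
    fm.data = event.ofp                            # forward the first packet too
    event.connection.send(fm)

def launch():
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
```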

Example 2 Energy Management in Data Centers Brandon Heller

ElasticTree. Goal: reduce energy usage in data center networks. Approach: reroute traffic, then shut off unneeded links and switches to reduce power; a DC manager tells the network OS to "pick paths", and the pruned links and switches are powered down. [Brandon Heller, NSDI 2010]
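A toy sketch of the idea under stated assumptions: per-link utilization and a precomputed spanning set of must-stay-up links are given, whereas the real system solves an optimization problem (see the NSDI 2010 paper):

```python
# Toy ElasticTree-style pruning (the real system uses formal optimizers).
def links_to_power_off(links, utilization, spanning_links):
    """Return links that carry no traffic and are not needed for
    baseline connectivity; these can be shut off to save power."""
    return [link for link in links
            if utilization.get(link, 0.0) == 0.0 and link not in spanning_links]

# e.g. links_to_power_off([("s1", "s2"), ("s1", "s3")],
#                         {("s1", "s2"): 0.4}, {("s1", "s2")})
#      -> [("s1", "s3")]
```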

Example 3 Unified control plane for packet and circuit networks Saurav Das

Converging Packet and Circuit Networks. Goal: a common control plane for "Layer 3" and "Layer 1" networks. Approach: add OpenFlow to all switches (IP routers, TDM and WDM switches) and use a common network OS. [Saurav Das and Yiannis Yiakoumis, Supercomputing 2009 demo; Saurav Das, OFC 2010]

Example 4 Using all the wireless capacity around us KK Yap, Masayoshi Kobayashi, Yiannis Yiakoumis, TY Huang

KK's experiment: a feature on a network OS (NOX) spanning WiFi and WiMax in under 250 lines of code.

Evolving the IP routing landscape with Software-Defined Networking technology

Providing IP Routing & Forwarding Services in OpenFlow networks: http://go.cpqd.com.br/routeflow. Users and partners include Unicamp, Unirio, Ufscar, Ufes, Ufpa, ..., Indiana University, Stanford, Google, Deutsche Telekom, NTT MCL, ...

Motivation. The current "mainframe" model of networking equipment: costly systems based on proprietary hardware and closed software; a lack of programmability that limits customization and in-house innovation; inefficient and costly network solutions; an ossified Internet. Inside the router: control logic (RIP, BGP, OSPF, IS-IS) over a proprietary IPC/API, with the OS and driver sitting on the hardware, managed via Telnet, SSH, Web, SNMP.

RouteFlow: Main Goal. Open commodity routing solutions: open-source routing protocol stacks (e.g. Quagga) + commercial networking hardware with an open API (i.e. OpenFlow) = line-rate performance, cost-efficiency, and flexibility! Innovation in the control plane and network services: the control logic (RIP, BGP, OSPF) runs off-box and drives the switch through a standard API (i.e. OpenFlow).

RouteFlow: Plain Solution

RouteFlow: Control Traffic (IGP). OSPF HELLO messages arriving at RouteFlow A and RouteFlow B are relayed up to their corresponding VMs (RouteFlow A VM, RouteFlow B VM), where the routing stacks process them.

RouteFlow: Control Traffic (BGP). BGP messages from the Internet reach the RouteFlow A VM, which acts as an eBGP-speaking Routing Control Platform on behalf of the data-plane switches.

RouteFlow: Data Traffic. A packet with dest IP 192.168.1.1 and src MAC 00:00:00:00:00:01 arrives at RouteFlow A, whose installed flow entries 1) change the src MAC to A's MAC address (00:00:00:00:00:02), 2) decrement the TTL*, and 3) forward 192.168.1.0/24 out port 2. The packet continues toward RouteFlow B with the rewritten source MAC.
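The flow entry this slide describes, sketched in assumed POX style for OpenFlow 1.0; the MAC address, prefix and output port come from the slide:

```python
# Sketch of the routed-hop flow entry above (POX-style, OpenFlow 1.0).
import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import EthAddr

def routed_hop_entry():
    fm = of.ofp_flow_mod()
    fm.match.dl_type = 0x0800
    fm.match.nw_dst = "192.168.1.0/24"   # the slide's destination prefix
    fm.actions.append(
        of.ofp_action_dl_addr.set_src(EthAddr("00:00:00:00:00:02")))
    # (*) TTL decrement is not an OpenFlow 1.0 action, hence the slide's
    # asterisk; later OF versions add a dec_ttl primitive.
    fm.actions.append(of.ofp_action_output(port=2))
    return fm
```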

RouteFlow: Architecture. Key features: separation of the data and control planes; a loosely coupled architecture with three RF components (1. Controller, 2. Server, 3. Slave(s)); unmodified routing protocol stacks; routing protocol messages can be sent 'down' or kept in the virtual environment; portable to multiple controllers (the RF-Controller acts as a "proxy" app); multiple virtualization technologies; multi-vendor data plane hardware.

Advancing the Use Cases and Modes of Operation From logical routers to flexible virtual networks

Prototype evaluation. Lab setup: NOX OpenFlow controller, RF-Server, Quagga routing engine, and 5 NetFPGA "routers" (switches). Results: interoperability with traditional networking gear; route convergence time is dominated by the protocol time-out configuration (e.g., 4 x HELLO in OSPF), not by slow-path operations; compared to commercial routers (Cisco, Extreme, Broadcom-based), larger latency only for packets that must go to the slow path, i.e. those lacking a FIB entry or needing processing by the OS networking/routing stack (e.g., ARP, PING, routing protocol messages).

Field Trial at Indiana University. Network setup: 1 physical OpenFlow switch (Pronto 3290); 4 virtual routers carved out of the physical OpenFlow switch; 10 Gig and 1 Gig connections; 2 BGP connections to external networks (Juniper routers in Chicago and Indianapolis); remote controller; new user interface.

Field Trial at Indiana University: user interface.

Demonstrations

Demonstration at Supercomputing 11: routing configuration at your fingertips.

RouteFlow community, 303 days since project launch: 10k visits from across the world (from 1000 different cities), 43% new visits; ~25% from Brazil, ~25% from the USA, ~25% from Europe, ~20% from Asia; 100s of downloads; 10s of known users from academia and industry.

Key Partners and Contributors

Conclusions. A worldwide-pioneering virtual routing solution for OpenFlow networks! RouteFlow proposes a commodity routing architecture that combines the line-rate performance of commercial hardware with the flexibility of open-source routing stacks (remotely) running on PCs. It allows a flexible resource association between IP routing protocols and a programmable physical substrate: multiple use cases around virtualized IP routing services; IP routing protocol optimization; a migration path from traditional IP deployments to SDN. Hundreds of users from around the world; yet to be proven in large-scale real scenarios; next, to run on the NDDI/OS3E OpenFlow testbed.

Thank you! Questions?