Presentation on theme: "Christian Esteve Rothenberg"— Presentation transcript:

1 Future Internet: New Network Architectures and Technologies
An Introduction to OpenFlow and Software-Defined Networking
Christian Esteve Rothenberg

2 Agenda
Future Internet Research
Software-Defined Networking: an introduction to OpenFlow/SDN
The Role of Abstractions in Networking
Sample research projects: RouteFlow

3 Credits
Nick McKeown's SDN slides
GEC 10 OpenFlow Tutorial
Scott Shenker's talk “The Future of Networking, and the Past of Protocols”
The OpenFlow Consortium

4 Introduction to OpenFlow
What, Why and How of OpenFlow?
Potential limitations
Current vendors
Trials and deployments
Research examples
Discussion
Questions are welcome at any time!

5 What is OpenFlow?

6 Short Story: OpenFlow is an API
Control how packets are forwarded (and manipulated)
Implementable on COTS hardware
Makes deployed networks programmable, not just configurable
Vendor-independent; makes innovation easier
Goal (experimenter’s perspective): validate your experiments on deployed hardware with real traffic at full line speed
Goal (industry perspective): reduced equipment costs through commoditization and competition in the controller/application space; customization and in-house (or 3rd-party) development of new networking features (e.g. protocols)

7 Why OpenFlow?

8 The Ossified Network
Routing, management, mobility management, access control, VPNs, …: millions of lines of source code, 5400 RFCs, a high barrier to entry
Specialized packet forwarding hardware: billions of gates, bloated, power hungry
Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, …
An industry with a “mainframe mentality”, reluctant to change

9 Research Stagnation
Lots of deployed innovation in other areas:
OS: filesystems, schedulers, virtualization
DS: DHTs, CDNs, MapReduce
Compilers: JITs, vectorization
Networks are largely the same as years ago: Ethernet, IP, WiFi; the rate of change of the network seems slower in comparison
We need better tools and abstractions to demonstrate and deploy
In these other areas, you can build real systems on top of high-quality, open platforms, and you don’t need to reinvent the wheel

10 Closed Systems (Vendor Hardware)
Stuck with interfaces (CLI, SNMP, etc) Hard to meaningfully collaborate Vendors starting to open up, but not usefully Need a fully open system – a Linux equivalent

11 Open Systems
Comparison of simulation, emulation, software switches, NetFPGA, network processors and vendor switches along performance fidelity, scale, real user traffic, complexity and openness: none has all the desired attributes, leaving a gap in the tool space.

12 Ethane, a precursor to OpenFlow
Centralized, reactive, per-flow control: a central Controller manages a network of Flow Switches connecting hosts (Host A, Host B)
See the Ethane SIGCOMM 2007 paper for details

13 OpenFlow: a pragmatic compromise
+ Speed, scale, fidelity of vendor hardware
+ Flexibility and control of software and simulation
Vendors don’t need to expose implementation
Leverages hardware inside most switches today (ACL tables)

14 How does OpenFlow work?

15 Ethernet Switch

16 Control Path (Software)
Data Path (Hardware)

17 OpenFlow Controller
The controller speaks the OpenFlow protocol (over SSL/TCP) to the switch’s control path, which programs the data path (hardware).

18 OpenFlow Example
A controller PC runs the OpenFlow client and programs the flow table in the switch’s software layer; matching happens in the hardware layer. Flow table columns: MAC src, MAC dst, IP src, IP dst, TCP sport, TCP dport, Action. Example entry: wildcard (*) on most fields, action: forward out port 1 (switch ports 1-4 shown).

19 OpenFlow Basics: Flow Table Entries
Rule: match on header fields (Switch Port, VLAN ID, VLAN pcp, MAC src, MAC dst, Eth type, IP src, IP dst, IP ToS, IP proto, L4 sport, L4 dport), plus a mask selecting which fields to match
Action: forward packet to zero or more ports; encapsulate and forward to controller; send to normal processing pipeline; modify fields; any extensions you add!
Stats: packet and byte counters
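The rule/action/stats structure above can be sketched as a small data model. This is a hedged illustration, not the OpenFlow wire format; the field names and the `FlowEntry` class are my own:

```python
# Minimal sketch of an OpenFlow 1.0-style flow entry: a rule over header
# fields (None acting as the wildcard, like '*' on the slides), an action,
# and packet/byte counters. Names are illustrative, not the spec's.

WILDCARD = None  # a None value in the rule matches anything

class FlowEntry:
    def __init__(self, rule, action):
        self.rule = rule          # dict: field name -> required value or WILDCARD
        self.action = action      # e.g. ("forward", [6]) or ("drop",)
        self.packets = 0          # stats: packet counter
        self.bytes = 0            # stats: byte counter

    def matches(self, pkt):
        # A packet matches if every non-wildcarded field agrees.
        return all(v is WILDCARD or pkt.get(k) == v
                   for k, v in self.rule.items())

    def count(self, pkt):
        # Update stats when a packet hits this entry.
        self.packets += 1
        self.bytes += pkt.get("len", 0)

# Firewall-style entry from the examples: drop anything to TCP port 22.
drop_ssh = FlowEntry({"tcp_dport": 22}, ("drop",))
pkt = {"mac_dst": "00:1f:aa:bb:cc:dd", "tcp_dport": 22, "len": 60}
assert drop_ssh.matches(pkt)
drop_ssh.count(pkt)
```

An empty rule dict wildcards every field, which mirrors how an all-`*` row matches all traffic in the example tables.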

20 Examples
Switching: match MAC dst 00:1f:.., wildcard (*) all other fields, action: forward to port6
Flow switching: exact match on all fields (e.g. in-port port3, MAC src 00:20.., MAC dst 00:1f.., Eth type 0800, VLAN vlan1, IP proto, TCP sport 17264, dport 80), action: forward to port6
Firewall: match TCP dport 22, wildcard all other fields, action: drop

21 Examples (cont.)
Routing: match on IP dst, wildcard the other fields, action: forward to port6
VLAN switching: match VLAN ID vlan1 and MAC dst 00:1f.., wildcard the rest, action: forward to port6, port7, port9

22 Centralized vs Distributed Control
Both models are possible with OpenFlow
Centralized control: a single controller manages all OpenFlow switches
Distributed control: multiple controllers, each managing a subset of the OpenFlow switches

23 Flow Routing vs. Aggregation
Both models are possible with OpenFlow
Flow-based: every flow is individually set up by the controller; exact-match flow entries; the flow table contains one entry per flow; good for fine-grained control, e.g. campus networks
Aggregated: one flow entry covers large groups of flows; wildcard flow entries; the flow table contains one entry per category of flows; good for large numbers of flows, e.g. backbone

24 Reactive vs. Proactive (pre-populated)
Both models are possible with OpenFlow
Reactive: the first packet of a flow triggers the controller to insert flow entries; efficient use of the flow table; every flow incurs a small additional setup time; if the control connection is lost, the switch has limited utility
Proactive: the controller pre-populates the flow table in the switch; zero additional flow setup time; loss of the control connection does not disrupt traffic; essentially requires aggregated (wildcard) rules
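The reactive model can be sketched as a toy packet-in handler: the first packet of a flow misses the flow table and goes to the controller, which learns where hosts live and installs an exact-match entry so later packets stay on the fast path. All class and method names are invented for illustration; this is not a real controller API:

```python
# Toy reactive controller: a learning switch. A table miss sends the
# packet to the controller, which installs an exact-match flow entry;
# subsequent packets of the flow hit the table with no setup delay.

class ReactiveSwitch:
    def __init__(self):
        self.flow_table = {}   # exact-match (src, dst) -> output port
        self.mac_to_port = {}  # controller state: learned MAC locations
        self.packet_ins = 0    # how often the controller was consulted

    def controller_packet_in(self, pkt, in_port):
        """Invoked on a table miss; learns the source and installs an entry."""
        self.packet_ins += 1
        self.mac_to_port[pkt["src"]] = in_port
        out = self.mac_to_port.get(pkt["dst"], "flood")
        # A proactive controller would pre-populate flow_table instead.
        self.flow_table[(pkt["src"], pkt["dst"])] = out
        return out

    def forward(self, pkt, in_port):
        key = (pkt["src"], pkt["dst"])
        if key in self.flow_table:                       # fast path
            return self.flow_table[key]
        return self.controller_packet_in(pkt, in_port)   # slow path

sw = ReactiveSwitch()
sw.forward({"src": "A", "dst": "B"}, in_port=1)   # miss: controller floods
sw.forward({"src": "B", "dst": "A"}, in_port=2)   # miss: controller knows A
assert sw.forward({"src": "B", "dst": "A"}, in_port=2) == 1  # table hit
assert sw.packet_ins == 2
```

The trade-off on the slide is visible here: only misses pay the controller round-trip, and if the control connection is lost, the switch can serve only the entries already installed.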

25 Usage examples
Alice’s code: simple learning switch, per-flow switching, network access control/firewall, static “VLANs”, her own new routing protocol (unicast, multicast, multipath), home network manager, packet processor (in controller), IPvAlice
What is possible in the controller? Anything that needs intelligent routing of a flow.
At Stanford, OpenFlow has even been used for: VM migration, power management, server load balancing, mobility management, network monitoring and debugging, easier network visualization, network slicing … and much more you can create!

26 Intercontinental VM Migration Moved a VM from Stanford to Japan without changing its IP. VM hosted a video game server with active network connections.

27 Quiz Time
How do I provide control connectivity?
Is it really clean slate?
Why aren’t users complaining about time to set up flows over OpenFlow? (Hint: what is the predominant traffic today?)
Considering switch CPU is the major limit, how can one take down an OpenFlow network?
How to perform topology discovery over OpenFlow-enabled switches?
What happens when you have a non-OpenFlow switch in between? What if there are two islands connected to the same controller?
How scalable is OpenFlow? How does one scale deployments?

28 What can you not do with OpenFlow ver1.0
Non-flow-based (per-packet) networking, e.g. per-packet next-hop selection (in wireless mesh): yes, this is a fundamental limitation, BUT OpenFlow can provide the plumbing to connect these systems
Use all tables on switch chips: yes, a major limitation (cross-product issue), BUT an upcoming OF version will expose these

29 What can you not do with OpenFlow ver1.0 (cont.)
New forwarding primitives: BUT OpenFlow provides a nice way to integrate them through extensions
New packet formats/field definitions: BUT a generalized OpenFlow (2.0) is on the horizon
Optical circuits: BUT efforts are underway to apply the OpenFlow model to circuits
Low-setup-time individual flows: BUT flows can be pushed down proactively to avoid delays

30 Where it’s going
OF v1.1 (extensions for WAN): multiple tables (leverage additional tables), tags and tunnels, multipath forwarding
OF v2+: generalized matching and actions, an “instruction set” for networking

31 OpenFlow Implementations (Switch and Controller)

32 OpenFlow building blocks
Monitoring/debugging tools: oftrace, oflops, openseer
Applications (Stanford-provided): ENVI (GUI), LAVI, n-Casting, Expedient
Controllers: NOX, Beacon, Helios, Maestro, SNAC
Slicing software: FlowVisor, FlowVisor Console
OpenFlow switches, Stanford-provided software: Ref. Switch, NetFPGA, Broadcom Ref. Switch, OpenWRT, PCEngine WiFi AP, OpenVSwitch
Commercial switches: HP, NEC, Pronto, Juniper … and many more

33 Current OpenFlow hardware
Juniper MX-series, NEC IP8800, NEC UNIVERGE PF5240, WiMax (NEC), HP ProCurve 5400, Netgear 7324, PC Engines, Pronto 3240/3290, Ciena CoreDirector … more coming soon.
Outdated! See ONF and ONS for current lists.

34 Commercial Switch Vendors
HP ProCurve 5400zl or 6600: 1 OF instance per VLAN; LACP, VLAN and STP processing before OpenFlow; wildcard rules or non-IP packets processed in s/w; header rewriting in s/w; CPU protects mgmt during loops
NEC IP8800 Series and UNIVERGE PF5240: OpenFlow takes precedence; most actions processed in hardware; MAC header rewriting in h/w; more than 100K flows (PF5240)
Pronto 3240 or 3290 with Pica8 or Indigo firmware: 1 OF instance per switch; no legacy protocols (like VLAN and STP)
All support ver 1.0; all have an approx. 1500 flow table entry limit

35 Controller Vendors (outdated! see ONF and ONS)
Nicira’s NOX: open-source (GPL), C++ and Python, researcher friendly
Nicira’s ONIX: closed-source, datacenter networks
SNAC: code based on NOX 0.4; enterprise networks; C++, Python and Javascript; currently used by campuses
Stanford’s Beacon: open-source, researcher friendly, Java-based
BigSwitch controller: closed-source, based on Beacon, enterprise networks
Maestro (from Rice Univ.): based on Java
NEC’s Helios: written in C and Ruby
NEC UNIVERGE PFC: based on Helios

36 Growing Community
Vendors and start-ups; providers and business units; and more. Note: the level of interest varies.

37 Industry commitment Big players forming the Open Networking Foundation (ONF) to promote a new approach to networking called Software-Defined Networking (SDN).

38 Software-Defined Networking (SDN)

39 Current Internet: Closed to Innovations in the Infrastructure
Each deployment bundles apps on a closed operating system atop specialized packet forwarding hardware. The infrastructure is closed to innovation and driven only by vendors; consumers have little say, and the business model makes it hard for new features to be added.

40 “Software-Defined Networking” approach to open it
A network operating system is interposed between the apps and the specialized packet forwarding hardware. How do we redefine the architecture to open up the networking infrastructure and the industry? By bringing to the networking industry what we did in the computing world.

41 The “Software-Defined Network”
1. Open interface to hardware
2. At least one good network operating system (extensible, possibly open-source)
3. Well-defined open API for apps
Switches, routers and other middleboxes are dumbed down to simple packet forwarding hardware; the key is a standardized control interface that speaks directly to hardware.

42 Virtualizing OpenFlow

43 Virtualization or “Slicing”
Computer industry: a virtualization layer on x86 hardware hosts many operating systems (Windows, Linux, Mac OS) and their apps. Network industry equivalent: a slicing layer over OpenFlow hosts multiple network operating systems (e.g. NOX) and their controller apps. This shows how far we can go in opening up the network.

44 Virtualization or “Slicing” Layer
Isolated “slices”: many network operating systems, or many versions, each with its own apps. The slicing layer sits between the open interface to the simple packet forwarding hardware below and the network operating systems (1-4) above, exposing the same open interface upward.

45 Switch-Based Virtualization
Exists for NEC and HP switches, but is not flexible enough. Each research VLAN has its own controller and flow table; production VLANs keep normal L2/L3 processing. Experiments run on PRODUCTION infrastructure: key to getting scale, and key to getting real traffic on the network (e.g. you can’t just do a reset...).

46 FlowVisor-based Virtualization
FlowVisor & policy control sits between multiple controllers (Heidi’s, Craig’s, Aaron’s) and the OpenFlow switches, speaking the OpenFlow protocol on both sides. Topology discovery is per slice.

47 FlowVisor-based Virtualization (cont.)
Separation not only by VLANs, but by any L1-L4 pattern. Example slices: http (tp_src=80 or tp_dst=80), load-balancer, multicast, broadcast (dl_dst=FFFFFFFFFFFF).

48 FlowSpace: Maps Packets to Slices

49 FlowVisor Message Handling
A rule from a slice controller (Alice, Bob or Cathy) passes a policy check (“Is this rule allowed?”) before FlowVisor installs it in the OpenFlow firmware. An exception packet coming up from the data path passes a policy check (“Who controls this packet?”) before delivery to the right controller. Matched traffic is forwarded in the data path at full line rate.
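The two policy checks can be sketched as flowspace lookups: one decides whether a slice may install a rule, the other decides which controller receives an exception packet. This is a simplified illustration with invented names; FlowVisor’s real flowspace logic has priorities and rule-intersection semantics this sketch omits:

```python
# Simplified FlowVisor-style policy checks. Each slice owns a flowspace,
# modeled here as a predicate over header fields (field names follow the
# slide's dl_/tp_ convention). Structure is illustrative, not FlowVisor's API.

slices = {
    "http":      lambda h: h.get("tp_src") == 80 or h.get("tp_dst") == 80,
    "broadcast": lambda h: h.get("dl_dst") == "ff:ff:ff:ff:ff:ff",
}

def rule_allowed(slice_name, rule_headers):
    """Policy check 1: may this slice's controller install this rule?"""
    owns = slices[slice_name]
    return owns(rule_headers)

def controller_for(pkt_headers):
    """Policy check 2: which slice's controller gets this exception packet?"""
    for name, owns in slices.items():
        if owns(pkt_headers):
            return name
    return None  # no slice matches: fall through to the default policy

assert rule_allowed("http", {"tp_dst": 80})
assert not rule_allowed("http", {"tp_dst": 22})
assert controller_for({"dl_dst": "ff:ff:ff:ff:ff:ff"}) == "broadcast"
```

Because the checks apply to arbitrary header fields, slices can be carved out of any L1-L4 pattern, not just VLANs, matching the previous slide.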

50 Use Case: New CDN (Turbo Coral ++)
Basic idea: build a CDN where you control the entire network. All traffic to or from Coral IP space (IP src/dst 84.65.*) is controlled by the experimenter; all other traffic is controlled by default routing. The topology is the entire network, and end hosts are automatically added (no opt-in).

51 Use Case: Aaron’s IP
A new layer-3 protocol that replaces IP, defined by a new Ether type. The slice matches Eth type AaIP; all other traffic (!AaIP) stays with default forwarding.

52 Slicing traffic
All network traffic divides into untouched production traffic and research traffic, with the research traffic further sliced into Experiment #1, Experiment #2, … Experiment N.

53 Ways to use slicing
Slice by feature; slice by user
Home-grown protocols, downloading new features, versioning


55 SDN Interlude… Returning to fundamentals

56 Software-Defined Networking (SDN)
Not just an idle academic daydream: it tapped into a strong market need
One of those rare cases where we “know the future”: still in development, but there is consensus on its inevitability
Much more to the story than the “OpenFlow” trade-rag hype: a revolutionary paradigm shift in the network control plane
Based on Scott Shenker's talk “The Future of Networking, and the Past of Protocols”, on the role of abstractions in networking

57 Weaving Together Three Themes
Networking is currently built on a weak foundation: a lack of fundamental abstractions
The network control plane needs three abstractions, which lead to SDN v1 and v2
Key abstractions solve other architecture problems: two simple abstractions make architectures evolvable

58 Weak Intellectual Foundations
OS courses teach fundamental principles: mutual exclusion and other synchronization primitives; files, file systems, threads and other building blocks
Networking courses teach a big bag of protocols: no formal principles, just vague design guidelines
Source: Scott Shenker

59 Weak Practical Foundations
Computation and storage have been virtualized, creating a more flexible and manageable infrastructure
Networks are still notoriously hard to manage: network administrators make up a large share of sysadmin staff
Source: Scott Shenker

60 Weak Evolutionary Foundations
Ongoing innovation in systems software: new languages, operating systems, etc.
Networks are stuck in the past: routing algorithms change very slowly, and network management is extremely primitive
Source: Scott Shenker

61 Why Are Networking Foundations Weak?
Networks used to be simple: basic Ethernet/IP was straightforward and easy to manage
New control requirements have led to complexity: ACLs, VLANs, TE, middleboxes, DPI, …
The infrastructure still works, but only because of our great ability to master complexity
That ability to master complexity is both blessing and curse
Source: Scott Shenker

62 How Programming Made the Transition
Machine languages: no abstractions; had to deal with low-level details
Higher-level languages: OS and other abstractions (file system, virtual memory, abstract data types, …)
Modern languages: even more abstractions (object orientation, garbage collection, …)
Abstractions simplify programming: easier to write, maintain and reason about programs
Source: Scott Shenker

63 Why Are Abstractions/Interfaces Useful?
Interfaces are instantiations of abstractions
An interface shields a program’s implementation details, allowing freedom of implementation on both sides, which leads to modular program structure
Barbara Liskov, “Power of Abstractions” talk: “Modularity based on abstraction is the way things get done”
So, what role do abstractions play in networking?
Source: Scott Shenker

64 Layers are the Main Network Abstractions
Layers provide nice data plane service abstractions: IP's best-effort delivery, TCP's reliable byte-stream
Aside: good abstractions, terrible interfaces; they don’t sufficiently hide implementation details
Main point: there are no control plane abstractions, no sophisticated management/control building blocks
Source: Scott Shenker

65 No Abstractions = Increased Complexity
Each control requirement leads to a new mechanism: TRILL, LISP, etc.
We are really good at designing mechanisms, so we never tried to make life easier for ourselves, and networks continue to grow more complex
But this is an unwise course: mastering complexity cannot be our only focus, because it helps in the short term but harms in the long term
We must shift our attention from mastering complexity to extracting simplicity…
Source: Scott Shenker

66 How Do We Build the Control Plane?
We define a new protocol from scratch (e.g., routing)
Or we reconfigure an existing mechanism (e.g., traffic engineering)
Or we leave it to manual operator configuration (e.g., access control, middleboxes)
Source: Scott Shenker

67 What Are The Design Constraints?
Operate within the confines of a given datapath: must live with the capabilities of IP
Operate without communication guarantees: a distributed system with arbitrary delays and drops
Compute the configuration of each physical device: switch, router and middlebox FIBs, ACLs, etc.
This is insanity!
Source: Scott Shenker

68 Programming Analogy
What if programmers had to: specify where each bit was stored; explicitly deal with all internal communication errors; work within a programming language with limited expressibility?
Programmers would redefine the problem by: defining higher-level abstractions for memory; building on reliable communication primitives; using a more general language
Abstractions divide the problem into tractable pieces. Why aren’t we doing this for network control?
Source: Scott Shenker

69 Central Question
What abstractions can simplify the control plane? I.e., how do we separate the problems?
Not about designing new mechanisms: we have all the mechanisms we need
Extracting simplicity vs mastering complexity; separating problems vs solving problems; defining abstractions vs designing mechanisms
Source: Scott Shenker

70 Abstractions Must Separate 3 Problems
Constrained forwarding model
Distributed state
Detailed configuration
Source: Scott Shenker

71 Forwarding Abstraction
The control plane needs a flexible forwarding model, with behavior specified by the control program
This abstracts away the forwarding hardware: crucial for evolving beyond vendor-specific solutions
Flexibility and vendor-neutrality are both valuable, but one is economic and the other architectural
Possibilities: a general x86 program, MPLS, OpenFlow, … with different flexibility/performance/deployment tradeoffs
Source: Scott Shenker

72 State Distribution Abstraction
The control program should not have to deal with the vagaries of distributed state: complicated, and a source of many errors
The abstraction should hide state dissemination/collection
Proposed abstraction: a global network view; don’t worry (yet) about how to get that view
The control program operates on the network view: input, the global network view (a graph); output, the configuration of each network device
Source: Scott Shenker

73 Network Operating System (NOS)
NOS: a distributed system that creates a network view
Runs on servers (controllers) in the network
Communicates with the forwarding elements: gets state information from them, and sends control directives to them, using the forwarding abstraction
The control program operates on the view of the network; the control program is not a distributed system
NOS plus forwarding abstraction = SDN (v1)
Source: Scott Shenker

74 Software-Defined Networking (v1)
Current networks: protocols embedded in every device
SDN (v1): a control program operates on the global network view supplied by the network operating system, which exercises control via the forwarding interface
Source: Scott Shenker

75 Major Change in Paradigm
No longer designing distributed control protocols; now just defining a centralized control function
Control program: Configuration = Function(view)
Why is this an advance? Much easier to write, verify, maintain, reason about, …
The NOS handles all state dissemination/collection; the abstraction breaks this off as a tractable piece and serves as a fundamental building block for control
Source: Scott Shenker
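As a toy instance of Configuration = Function(view), a shortest-path control program can be written as a pure function from the global view (a graph) to per-switch forwarding configuration, with no distributed protocol in sight. The graph format and names are my own assumptions for illustration:

```python
# Toy control program: Configuration = Function(view). The NOS supplies a
# global view (an adjacency map); the function returns, for every switch,
# a next-hop table toward every destination switch.
from collections import deque

def configuration(view):
    """view: {switch: set of neighbors} -> {switch: {dst: next_hop}}"""
    config = {s: {} for s in view}
    for dst in view:                        # BFS outward from each destination
        parent = {dst: None}
        q = deque([dst])
        while q:
            node = q.popleft()
            for nb in view[node]:
                if nb not in parent:
                    parent[nb] = node       # first hop back toward dst
                    q.append(nb)
        for s in view:
            if s != dst and s in parent:
                config[s][dst] = parent[s]  # next hop from s toward dst
    return config

# Line topology A - B - C: A reaches C via B.
view = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
cfg = configuration(view)
assert cfg["A"]["C"] == "B"
assert cfg["C"]["A"] == "B"
```

Because the function is pure, it is easy to test and reason about, which is exactly the advance the slide claims; all the distributed-systems difficulty has been pushed into the NOS that builds `view`.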

76 What Role Does OpenFlow Play?
The NOS conveys the configuration of the global network view to the actual physical devices, but nothing in this picture limits what “configuration” means
OpenFlow is one possible definition of how to model the configuration of a physical device
Is it the right one? Absolutely not. Crucial for vendor-independence, but not the right abstraction (yet)
Source: Scott Shenker

77 Are We Done Yet?
This approach requires the control program (or operator) to configure each individual network device: much more complicated than it should be!
The NOS eases the implementation of functionality, but does not help the specification of functionality!
We need a specification abstraction

78 Specification Abstraction
Give the control program an abstract view of the network, where the abstract view is a function of the global view
Then the control program is an abstract mapping: Abstract configuration = Function(abstract view)
Model just enough detail to specify goals; don’t provide the information needed to implement them
Source: Scott Shenker

79 One Simple Example: Access Control
Abstract Network View Full Network View Source: Scott Shenker

80 More Detailed Model
A service model can generally be described by a table pipeline: Packet In → L2 → L3 → ACL → Packet Out
This can model pretty much any forwarding element in networks today
Source: Scott Shenker
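The L2 → L3 → ACL pipeline can be sketched as a chain of match tables, each either choosing an output or passing the packet on, with the ACL able to drop. The table contents and function names are invented for illustration; no vendor's actual pipeline is implied:

```python
# Sketch of a table pipeline: a packet traverses L2, L3 and ACL stages in
# order; each stage may set an output port or mark the packet for drop.

def l2(pkt, meta):
    mac_table = {"00:1f:aa:bb:cc:dd": 6}          # MAC dst -> output port
    if pkt.get("mac_dst") in mac_table:
        meta["out_port"] = mac_table[pkt["mac_dst"]]

def l3(pkt, meta):
    # Crude stand-in for a longest-prefix-match route table.
    if meta["out_port"] is None and pkt.get("ip_dst", "").startswith("10.0.0."):
        meta["out_port"] = 3                      # route 10.0.0.0/24 -> port 3

def acl(pkt, meta):
    if pkt.get("tcp_dport") == 22:                # deny ssh, whatever L2/L3 said
        meta["drop"] = True

def pipeline(pkt):
    meta = {"out_port": None, "drop": False}
    for table in (l2, l3, acl):                   # Packet In -> L2 -> L3 -> ACL
        table(pkt, meta)
    return "drop" if meta["drop"] else meta["out_port"]   # Packet Out

assert pipeline({"mac_dst": "00:1f:aa:bb:cc:dd"}) == 6
assert pipeline({"ip_dst": "10.0.0.9"}) == 3
assert pipeline({"ip_dst": "10.0.0.9", "tcp_dport": 22}) == "drop"
```

The point of the slide survives the simplification: almost any forwarding element can be described as such a pipeline, which is what a specification abstraction would let a control program target directly.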

81 Implementing the Specification Abstraction
Given: an abstract table pipeline (e.g. an ACL)
The network hypervisor (“Nypervisor”) compiles the abstract pipeline into a physical configuration
Need: pipeline operations distributed over a network of physical switches
Source: Scott Shenker

82 Two Examples
Scale-out router: the abstract view is a single router; the physical network is a collection of interconnected switches; the Nypervisor allows routers to “scale out, not up”
Multi-tenant networks (“network virtualization”): each tenant has control over their “private” network; the Nypervisor compiles these individual control requests into a single physical configuration
Source: Scott Shenker

83 Moving from SDNv1 to SDNv2
The control program operates on an abstract network view produced by the Nypervisor, which in turn compiles down to the global network view maintained by the network operating system
Source: Scott Shenker

84 Clean Separation of Concerns
The network control program specifies the behavior it wants on the abstract network view; the abstract model should be chosen so that this is easy
The Nypervisor maps the controls expressed on the abstract view into a configuration of the global view; models include a single cross-bar, a single “slice”, …
The NOS distributes the configuration to the physical switches (global view to physical switches)
Source: Scott Shenker

85 Three Basic Network Interfaces
Forwarding interface: abstract forwarding model; shields higher layers from the forwarding hardware
Distribution interface: global network view; shields higher layers from state dissemination/collection
Specification interface: abstract network view; shields the control program from details of the physical network
Source: Scott Shenker

86 Control Plane Research Agenda
Create three modular, tractable pieces: the Nypervisor, the network operating system, and the forwarding model (design and implement it)
Build control programs over abstract models; identify appropriate models for various applications: virtualization, access control, TE, routing, …
Source: Scott Shenker

87 Implementations vs Abstractions
SDN is an instantiation of fundamental abstractions; don’t get distracted by the mechanisms
The abstractions were needed to separate concerns: network challenges are now modular and tractable
The abstractions are fundamental; SDN implementations are ephemeral (OpenFlow, NOX, etc. are particular implementations)
Source: Scott Shenker

88 Future of Networking, Past of Protocols
The future of networking lies in cleaner abstractions, not in defining complicated distributed protocols; SDN is only the beginning of the needed abstractions
It took OS researchers years to find their abstractions: first they made it work, then they made it simple
Networks work, but they are definitely not simple; it is now networking’s turn to make this transition
Our task: make networking a mature discipline by extracting simplicity from the sea of complexity…
Source: Scott Shenker

89 "SDN has won the war of words, the real battle over customer adoption is just beginning...." - Scott Shenker

90 OpenFlow Deployments

91 Current Trials and Deployments
68 trials/deployments in 13 countries

92 Current Trials and Deployments
Brazil University of Campinas Federal University of Rio de Janeiro Federal University of Amazonas Foundation Center of R&D in Telecomm. Canada University of Toronto Germany T-Labs Berlin Leibniz Universität Hannover France ENS Lyon/INRIA India VNIT Mahindra Satyam Italy Politecnico di Torino United Kingdom University College London Lancaster University University of Essex Taiwan National Center for High-Performance Computing Chunghwa Telecom Co Japan NEC JGN Plus NICT University of Tokyo Tokyo Institute of Technology Kyushu Institute of Technology NTT Network Innovation Laboratories KDDI R&D Laboratories Unnamed University South Korea KOREN Seoul National University Gwangju Institute of Science & Tech Pohang University of Science & Tech Korea Institute of Science & Tech ETRI Chungnam National University Kyung Hee University Spain University of Granada Switzerland CERN

93 3 New EU Projects: OFELIA, SPARC, CHANGE

94 EU Project Participants
Germany ACREO AB (Sweden) Deutsche Telekom Laboratories Ericsson AB Sweden (Sweden) Technische Universitat Berlin Hungary European Center for ICT Ericsson Magyarorszag Kommunikacios Rendszerek KFT ADVA AG Optical Networking NEC Europe Ltd. Switzerland Eurescom Dreamlab Technologies United Kingdom Eidgenossische Technische Hochschule Zurich University of Essex Lancaster University Italy University College London Nextworks Spain Università di Pisa i2CAT Foundation Belgium University of the Basque Country, Bilbao Interdisciplinary Institute for Broadband Technology Romania Universite catholique de Louvain Universitatea Politehnica Bucuresti Sweden

95 GENI OpenFlow deployment (2010)
10 institutions and 2 National Research Backbones

96 GENI Network Evolution
National Lambda Rail

97 GENI Network Evolution
National Lambda Rail

98 GENI Integration
FlowVisor: slicing control
Expedient: experimenter’s portal for slice management
Opt-in Manager: network admins’ portal to approve/deny experiment requests for traffic
Multiple Expedients speak the GENI API; each Opt-in Manager talks to its FlowVisor via the FlowVisor API; FlowVisors control their substrates via OpenFlow

99 FIBRE: FI testbeds between BRazil and Europe
A project between 9 partners from Brazil (6 from GIGA), 5 from Europe (4 from OFELIA and OneLab) and 1 from Australia (from OneLab), proposing the design, implementation and validation of a shared Future Internet sliceable/programmable research facility to support joint experimentation by European and Brazilian researchers. The objectives include:
the development and operation of a new experimental facility in Brazil
the development and operation of a FI facility in Europe based on enhancements and the federation of the existing OFELIA and OneLab infrastructures
the federation of the Brazilian and European experimental facilities, to support the provisioning of slices using resources from both testbeds

100 Project at a glance
What? Main goal: create a common space between the EU and Brazil for Future Internet (FI) experimental research into network infrastructure and distributed applications, by building and operating a federated EU-Brazil Future Internet experimental facility.
Who? 15 partners: UFPA, UEssex, UNIFACS, UPMC, UFG, NICTA, Nextworks, UFSCar, CPqD, USP, i2CAT, RNP, UFF, UFRJ, UTH
How? Requested ~1.1M€ from the EC and CNPq R$ 2.6 in funding to perform 6 activities:
WP1: project management
WP2, WP3: building and operating the Brazilian (WP2) and European (WP3) facilities
WP4: federation of the FIBRE-EU and FIBRE-BR facilities
WP5: joint pilot experiments to showcase the potential of the federated FIBRE facility
WP6: dissemination and collaboration

101 The FIBRE consortium in Brazil
The map shows the 9 participating Brazilian sites (islands) and the expected topology of their interconnecting private L2 network, running over the GIGA, Kyatera and RNP experimental networks

102 FIBRE site in Brazil

103 Technology pilots examples
High-definition content delivery; seamless mobility testbed

104 OpenFlow Deployment at Stanford
Switches (23) APs (50) WiMax (1)

105 Industry interest 105

106 Cellular industry
Recently made the transition to IP
Billions of mobile users
Need to securely extract payments and hold users accountable
IP sucks at both, yet is hard to change

107 Telco Operators
Global IP traffic is growing 40-50% per year, while the end-customer monthly bill remains unchanged
Therefore CAPEX and OPEX need to fall 40-50% per Gb/s per year, but in practice they fall by only ~20% per year
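The squeeze can be made concrete with a small compounding calculation. The 45% and 20% figures below are illustrative midpoints of the slide's ranges, not data from the talk:

```python
# If traffic grows ~45%/year while cost per Gb/s falls only ~20%/year,
# total network cost still compounds upward against a flat customer bill.

traffic_growth = 1.45      # +45% traffic per year (midpoint of 40-50%)
cost_decline = 0.80        # cost per Gb/s falls 20% per year

total_cost = 1.0           # normalized total cost today
for year in range(5):
    total_cost *= traffic_growth * cost_decline   # net +16% per year

# (1.45 * 0.80)^5 = 1.16^5: total cost roughly doubles in five years
# while revenue stays flat.
assert total_cost > 2.0
```

Breaking even would require the cost decline to match the traffic growth, i.e. the 40-50% per-Gb/s reduction the slide calls for.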

108 Example: New Data Center
Cost: 200,000 servers with a fanout of 20 means 10,000 switches; at $5k per vendor switch that is $50M, versus $10M at $1k per commodity switch; the savings across 10 data centers are about $400M
Control: more flexible control; tailor the network for services; quickly improve and innovate
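The slide's arithmetic, spelled out (the fanout of 20 is read here as roughly one switch per 20 servers):

```python
# Worked version of the slide's data-center cost estimate.
servers = 200_000
fanout = 20                          # ~20 servers per switch
switches = servers // fanout         # 10,000 switches

vendor_cost = switches * 5_000       # $5k vendor switch   -> $50M
commodity_cost = switches * 1_000    # $1k commodity switch -> $10M
savings_per_dc = vendor_cost - commodity_cost    # $40M per data center
savings_10_dcs = 10 * savings_per_dc             # $400M across 10 DCs

assert switches == 10_000
assert vendor_cost == 50_000_000
assert savings_10_dcs == 400_000_000
```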

109 Research Examples

110 Example 1 Load-balancing as a network primitive
Nikhil Handigol, Mario Flajslik, Srini Seetharaman


112

113

114 Load Balancing is just Smart Routing

115 Nikhil’s Experiment: <500 lines of code
Feature Network OS (NOX)
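The "load balancing is just smart routing" idea reduces to a controller policy: when the first packet of a new flow arrives, pick the least-loaded candidate path and install it. A minimal sketch of that policy, assuming a made-up two-path topology and a flow-count load metric; this is not Nikhil's actual NOX code:

```python
# Toy "load balancing as routing" policy: on each new flow, choose the
# candidate path whose busiest link carries the fewest flows.
# Path names and links are invented for illustration.
link_load = {}  # link -> number of flows currently routed over it

paths = {
    "left":  [("s1", "s2"), ("s2", "s4")],
    "right": [("s1", "s3"), ("s3", "s4")],
}

def path_load(path):
    # A path is only as good as its most loaded link.
    return max(link_load.get(link, 0) for link in path)

def route_new_flow():
    # Pick the least-loaded path and account for the new flow on it.
    name = min(paths, key=lambda n: path_load(paths[n]))
    for link in paths[name]:
        link_load[link] = link_load.get(link, 0) + 1
    return name

print([route_new_flow() for _ in range(4)])  # ['left', 'right', 'left', 'right']
```

With equal-cost paths the policy simply alternates; a real controller would use byte counters from the switches instead of flow counts.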

116 Example 2 Energy Management in Data Centers
Brandon Heller

117 ElasticTree Goal: Reduce energy usage in data center networks
Approach: reroute traffic, then shut off links and switches to reduce power. (Diagram: a DC Manager asks the Network OS to "pick paths".) [Brandon Heller, NSDI 2010]

118 X ElasticTree Goal: Reduce energy usage in data center networks
Approach: reroute traffic, then shut off links and switches to reduce power; the X in the diagram marks a powered-off element. (Diagram: a DC Manager asks the Network OS to "pick paths".) [Brandon Heller, NSDI 2010]
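ElasticTree's core move, serving the current demand on the smallest subset of the network and powering off the rest, can be caricatured with a toy capacity calculation. Link capacity and counts here are invented; the real system (NSDI 2010) solves a flow-assignment optimization over the full topology:

```python
# Caricature of ElasticTree: keep only as many redundant uplinks powered
# on as the current aggregate demand needs; switch off the rest.
# Capacities and demand values are invented for illustration.
LINK_CAPACITY_GBPS = 10
REDUNDANT_LINKS = 8  # parallel uplinks available in the topology

def links_needed(demand_gbps):
    # Ceiling division: minimum number of links that can carry the demand.
    return -(-demand_gbps // LINK_CAPACITY_GBPS)

def power_plan(demand_gbps):
    on = min(REDUNDANT_LINKS, links_needed(demand_gbps))
    return {"links_on": on, "links_off": REDUNDANT_LINKS - on}

print(power_plan(25))  # {'links_on': 3, 'links_off': 5}
print(power_plan(70))  # {'links_on': 7, 'links_off': 1}
```

At night, when demand drops, most uplinks sleep; at peak, everything is on, which is why the approach saves energy without capping peak capacity.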

119 Example 3 Unified control plane for packet and circuit networks
Saurav Das

120 Converging Packet and Circuit Networks
Goal: a common control plane for "Layer 3" and "Layer 1" networks
Approach: add OpenFlow to all switches; use a common network OS
(Diagram: features on a single Network OS speak the OpenFlow protocol down to WDM switches, TDM switches and IP routers alike.)
[Saurav Das and Yiannis Yiakoumis, Supercomputing 2009 demo] [Saurav Das, OFC 2010]

121 Example 4 Using all the wireless capacity around us
KK Yap, Masayoshi Kobayashi, Yiannis Yiakoumis, TY Huang

122 KK’s Experiment: <250 lines of code
Feature Network OS (NOX) WiMax

123 Evolving the IP routing landscape with Software-Defined Networking technology

124 Providing IP Routing & Forwarding Services in OpenFlow networks
Unicamp, UNIRIO, UFSCar, UFES, UFPA, ... Indiana University, Stanford, Google, Deutsche Telekom, NTT, MCL, ...

125 Motivation Current “mainframe” model of networking equipment:
Costly systems based on proprietary HW and closed SW
Lack of programmability limits customization and in-house innovation
Inefficient and costly network solutions
Ossified Internet
(Diagram of a router: control logic (RIP, BGP, OSPF, IS-IS) over an O.S., driver and hardware, glued together by proprietary IPC/APIs; management via Telnet, SSH, Web, SNMP.)

126 RouteFlow: Main Goal Open commodity routing solutions:
+ open-source routing protocol stacks (e.g., Quagga)
+ commercial networking HW with an open API (i.e., OpenFlow)
= line-rate performance, cost efficiency, and flexibility!
Innovation in the control plane and network services
(Diagram: control logic (RIP, BGP, OSPF) over an O.S. and driver, controlling switch hardware from a controller through a standard API, i.e. OpenFlow.)

127 RouteFlow: Plain Solution

128 RouteFlow: Control Traffic (IGP)
(Diagram: OSPF HELLO messages exchanged between RouteFlow A and RouteFlow B are handled by the RouteFlow A VM and RouteFlow B VM, which run the routing stacks.)

129 RouteFlow: Control Traffic (BGP)
(Diagram: a BGP message from the Internet reaches RouteFlow A and is handled by the RouteFlow A VM, which acts as an eBGP-speaking Routing Control Platform; RouteFlow B pairs with the RouteFlow B VM.)

130 RouteFlow: Data Traffic
(Diagram: a packet with Src MAC 00:00:00:00:00:01 enters RouteFlow A, which 1) changes the Src MAC to A's MAC address 00:00:00:00:00:02, 2) decrements the TTL*, and 3) matches the destination /24 prefix to output port 2; the packet continues toward RouteFlow B carrying Src MAC 00:00:00:00:00:02.)
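The per-hop behaviour in the diagram (rewrite the source MAC, decrement the TTL, forward on the port matched by the /24 prefix) can be modelled with a tiny flow-entry function. The prefix 10.0.2.0/24 and the port number are invented, since the slide's IP addresses are blanked; only the MAC addresses come from the slide:

```python
# Minimal model of the flow entry RouteFlow installs at router A:
# match on the destination prefix, rewrite the source MAC, decrement
# the TTL, and forward on the matched port.
from ipaddress import ip_address, ip_network

ROUTER_A_MAC = "00:00:00:00:00:02"

def forward(packet, prefix="10.0.2.0/24", out_port=2):
    if ip_address(packet["dst_ip"]) not in ip_network(prefix):
        return None  # no FIB entry: packet would go to the slow path
    rewritten = dict(packet)
    rewritten["src_mac"] = ROUTER_A_MAC  # 1) set Src MAC to A's address
    rewritten["ttl"] -= 1                # 2) decrement TTL
    rewritten["out_port"] = out_port     # 3) /24 -> port 2
    return rewritten

pkt = {"dst_ip": "10.0.2.7", "src_mac": "00:00:00:00:00:01", "ttl": 64}
print(forward(pkt))
```

In the real system these three steps are OpenFlow actions installed in the switch, so matched packets never leave the data plane.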

131 RouteFlow: Architecture
Key features:
Separation of data and control planes
Loosely coupled architecture with three RF components: 1. Controller, 2. Server, 3. Slave(s)
Unmodified routing protocol stacks
Routing protocol messages can be sent 'down' or kept in the virtual environment
Portable to multiple controllers: the RF-Controller acts as a "proxy" app
Multiple virtualization technologies
Multi-vendor data plane hardware

132 Advancing the Use Cases and Modes of Operation
From logical routers to flexible virtual networks

133 Prototype evaluation
Lab setup: NOX OpenFlow controller, RF-Server, Quagga routing engine, 5 x NetFPGA "routers" (switches)
Results:
Interoperability with traditional networking gear
Route convergence time is dominated by the protocol time-out configuration (e.g., 4 x HELLO in OSPF), not by slow-path operations
Compared to commercial routers (Cisco, Extreme, Broadcom-based), latency is larger only for packets that must take the slow path: those lacking a FIB entry or needing processing by the OS networking/routing stack (e.g., ARP, PING, routing protocol messages)
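The convergence observation is simple arithmetic: with OSPF's common defaults, the dead interval (4 x HELLO) alone is tens of seconds, so millisecond-scale slow-path costs cannot dominate. A sketch, assuming RFC 2328's LAN defaults and an illustrative software overhead:

```python
# OSPF neighbor failure detection is bounded by the dead interval,
# which defaults to 4x the hello interval (RFC 2328: 10 s hello on LANs).
HELLO_INTERVAL_S = 10
DEAD_INTERVAL_S = 4 * HELLO_INTERVAL_S  # neighbor declared down after this

SLOW_PATH_OVERHEAD_S = 0.005  # illustrative few-millisecond software cost

print(DEAD_INTERVAL_S)  # 40
print(SLOW_PATH_OVERHEAD_S / DEAD_INTERVAL_S < 0.001)  # True: negligible
```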

134 Field trial at Indiana University: network setup
1 physical OpenFlow switch (Pronto 3290)
4 virtual routers carved out of the physical OpenFlow switch
10 Gig and 1 Gig connections
2 BGP connections to external networks (Juniper routers in Chicago and Indianapolis)
Remote controller
New user interface

135 Field trial at Indiana University: user interface

136 Demonstrations

137 Demonstration at Supercomputing 11
Routing configuration at your fingertips

138 RouteFlow community
10k visits from across the world (from 1,000 different cities); 43% new visits
~25% from Brazil, ~25% from the USA, ~25% from Europe, ~20% from Asia
100s of downloads
10s of known users from academia and industry
303 days since project launch

139 Key Partners and Contributors

140 Conclusions
A worldwide pioneer virtual routing solution for OpenFlow networks!
RouteFlow proposes a commodity routing architecture that combines the line-rate performance of commercial hardware with the flexibility of open-source routing stacks (remotely) running on PCs
Allows a flexible resource association between IP routing protocols and a programmable physical substrate:
Multiple use cases around virtualized IP routing services
IP routing protocol optimization
A migration path from traditional IP deployments to SDN
Hundreds of users from around the world
Yet to be proven in large-scale real scenarios
To run on the NDDI/OS3E OpenFlow testbed

141 Thank you! questions?
