Future Internet: New Network Architectures and Technologies An Introduction to OpenFlow and Software-Defined Networking Christian Esteve Rothenberg


1 Future Internet: New Network Architectures and Technologies An Introduction to OpenFlow and Software-Defined Networking Christian Esteve Rothenberg

2 Agenda Future Internet Research Software-Defined Networking – An introduction to OpenFlow/SDN – The Role of Abstractions in Networking – Sample research projects – RouteFlow

3 Credits Nick McKeown's SDN slides – GEC 10 OpenFlow Tutorial – Scott Shenker's talk on The Future of Networking, and the Past of Protocols – The OpenFlow Consortium –

4 Introduction to OpenFlow What, Why and How of OpenFlow? Potential limitations Current vendors Trials and deployments Research examples Discussion – Questions are welcome at any time!

5 What is OpenFlow?

6 Short Story: OpenFlow is an API Control how packets are forwarded (and manipulated) Implementable on COTS hardware Make deployed networks programmable – not just configurable – Vendor-independent Makes innovation easier Goal (experimenter's perspective): – Validate your experiments on deployed hardware with real traffic at full line speed Goal (industry perspective): – Reduced equipment costs through commoditization and competition in the controller / application space – Customization and in-house (or 3rd-party) development of new networking features (e.g. protocols).

7 Why OpenFlow?

8 The Ossified Network Millions of lines of source code; 5400 RFCs: a barrier to entry. Billions of gates: bloated and power hungry. Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, … An industry with a mainframe mentality, reluctant to change. (Figure: specialized packet forwarding hardware under an operating system and features: routing, management, mobility management, access control, VPNs, …)

9 Research Stagnation Lots of deployed innovation in other areas – OS: filesystems, schedulers, virtualization – DS: DHTs, CDNs, MapReduce – Compilers: JITs, vectorization Networks are largely the same as years ago – Ethernet, IP, WiFi Rate of change of the network seems slower in comparison – Need better tools and abstractions to demonstrate and deploy

10 Closed Systems (Vendor Hardware) Stuck with interfaces (CLI, SNMP, etc.) Hard to meaningfully collaborate Vendors starting to open up, but not usefully Need a fully open system – a Linux equivalent

11 Open Systems
Platform | Performance Fidelity | Scale | Real User Traffic? | Complexity | Open
Simulation | medium | — | no | medium | yes
Emulation | medium | low | no | medium | yes
Software switches | poor | low | yes | medium | yes
NetFPGA | high | low | yes | high | yes
Network processors | high | medium | yes | high | yes
Vendor switches | high | — | yes | low | no
There is a gap in the tool space: none has all the desired attributes!

12 Ethane, a precursor to OpenFlow Centralized, reactive, per-flow control: a controller manages flow switches connecting Host A and Host B. See the Ethane SIGCOMM 2007 paper for details.

13 OpenFlow: a pragmatic compromise + Speed, scale, fidelity of vendor hardware + Flexibility and control of software and simulation Vendors don't need to expose implementation Leverages hardware inside most switches today (ACL tables)

14 How does OpenFlow work?

15 Ethernet Switch

16 A switch is split into a Data Path (hardware) and a Control Path (software).

17 OpenFlow moves the control path out of the box: the Data Path (hardware) talks to an external OpenFlow Controller via the OpenFlow Protocol (SSL/TCP).

18 OpenFlow Example A Controller PC speaks OpenFlow to the switch. In the switch, a software layer runs the OpenFlow client and a flow table (columns: MAC src, MAC dst, IP src, IP dst, TCP sport, TCP dport → Action, e.g. all fields wildcarded except MAC dst, action: forward to port 1) above the hardware layer with ports 1-4.

19 OpenFlow Basics: Flow Table Entries Each entry has a Rule, an Action, and Stats. Rule: match on Switch Port, MAC src, MAC dst, Eth type, VLAN ID, VLAN pcp, IP src, IP dst, IP prot, IP ToS, L4 sport, L4 dport (+ a mask selecting which fields to match). Actions: 1. Forward packet to zero or more ports 2. Encapsulate and forward to controller 3. Send to normal processing pipeline 4. Modify fields 5. Any extensions you add! Stats: packet + byte counters.
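
The rule/action/stats structure above can be sketched in a few lines of Python. This is an illustrative simplification, not the OpenFlow wire format: field names, the action tuples, and the table-miss behavior are assumptions for the sketch.

```python
WILDCARD = "*"  # matches any value in that header field

FIELDS = ("in_port", "mac_src", "mac_dst", "eth_type", "vlan_id",
          "ip_src", "ip_dst", "ip_proto", "l4_sport", "l4_dport")

class FlowEntry:
    def __init__(self, rule, action):
        self.rule = rule          # dict: field -> value, absent fields are wildcards
        self.action = action      # e.g. ("forward", 6), ("drop",), ("controller",)
        self.packets = 0          # the per-entry "Stats": packet + byte counters
        self.bytes = 0

    def matches(self, pkt):
        # a field matches if the rule wildcards it or the values are equal
        return all(self.rule.get(f, WILDCARD) in (WILDCARD, pkt.get(f))
                   for f in FIELDS)

def lookup(flow_table, pkt):
    """Return the action of the first matching entry, updating its counters."""
    for entry in flow_table:
        if entry.matches(pkt):
            entry.packets += 1
            entry.bytes += pkt.get("len", 0)
            return entry.action
    return ("controller",)  # table miss: encapsulate and send to controller
```

A firewall entry from slide 20 is then `FlowEntry({"l4_dport": 22}, ("drop",))`: everything wildcarded except the TCP destination port.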

20 Examples Switching: match MAC dst 00:1f:.., wildcard all other fields → forward to port 6. Flow switching: match the full tuple (port 3, MAC src 00:20.., MAC dst 00:1f.., Eth type 0800, VLAN, IPs, L4 ports) → forward to port 6. Firewall: match TCP dport 22, wildcard all other fields → drop.

21 Examples Routing: match IP dst, wildcard all other fields → forward to port 6. VLAN switching: match VLAN ID vlan1 and MAC dst 00:1f.., wildcard all other fields → forward to ports 6, 7, 9.

22 Centralized vs Distributed Control Both models are possible with OpenFlow. Centralized control: one controller manages all OpenFlow switches. Distributed control: several controllers each manage a subset of the OpenFlow switches.

23 Flow Routing vs. Aggregation Both models are possible with OpenFlow Flow-based: every flow is individually set up by the controller; exact-match flow entries; flow table contains one entry per flow; good for fine-grained control, e.g. campus networks. Aggregated: one flow entry covers large groups of flows; wildcard flow entries; flow table contains one entry per category of flows; good for large numbers of flows, e.g. backbone.

24 Reactive vs. Proactive (pre-populated) Both models are possible with OpenFlow Reactive: the first packet of a flow triggers the controller to insert flow entries; efficient use of the flow table; every flow incurs a small additional flow-setup time; if the control connection is lost, the switch has limited utility. Proactive: the controller pre-populates the flow table in the switch; zero additional flow-setup time; loss of the control connection does not disrupt traffic; essentially requires aggregated (wildcard) rules.
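
A minimal sketch of the reactive model: a table miss sends the packet to the controller, which learns the source and installs an exact-match entry so later packets stay in the fast path. The Switch/LearningController classes are illustrative stand-ins, not a real controller framework's API.

```python
class Switch:
    def __init__(self, controller):
        self.flow_table = {}          # exact-match: (src, dst) -> output port
        self.controller = controller

    def receive(self, pkt):
        key = (pkt["mac_src"], pkt["mac_dst"])
        if key in self.flow_table:
            return self.flow_table[key]      # fast path: no setup delay
        # table miss: flow-setup round trip to the controller (the added latency)
        return self.controller.packet_in(self, pkt)

class LearningController:
    def __init__(self):
        self.mac_to_port = {}

    def packet_in(self, switch, pkt):
        self.mac_to_port[pkt["mac_src"]] = pkt["in_port"]   # learn the source
        out = self.mac_to_port.get(pkt["mac_dst"], "flood")
        if out != "flood":
            # install an exact-match entry so subsequent packets skip the controller
            switch.flow_table[(pkt["mac_src"], pkt["mac_dst"])] = out
        return out
```

The proactive model differs only in timing: the controller would fill `flow_table` (typically with wildcard rules) before any packet arrives.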

25 Usage examples Alice's code: – Simple learning switch – Per-flow switching – Network access control/firewall – Static VLANs – Her own new routing protocol: unicast, multicast, multipath – Home network manager – Packet processor (in controller) – IPvAlice – VM migration – Server load balancing – Mobility manager – Power management – Network monitoring and visualization – Network debugging – Network slicing … and much more you can create!

26 Intercontinental VM Migration Moved a VM from Stanford to Japan without changing its IP. VM hosted a video game server with active network connections.

27 Quiz Time How do I provide control connectivity? Is it really clean slate? Why aren't users complaining about the time to set up flows over OpenFlow? (Hint: what is the predominant traffic today?) Considering switch CPU is the major limit, how can one take down an OpenFlow network? How to perform topology discovery over OpenFlow-enabled switches? What happens when you have a non-OpenFlow switch in between? What if there are two islands connected to the same controller? How scalable is OpenFlow? How does one scale deployments?

28 What can you not do with OpenFlow ver1.0 Non-flow-based (per-packet) networking – e.g. per-packet next-hop selection (in wireless mesh) – yes, this is a fundamental limitation – BUT OpenFlow can provide the plumbing to connect these systems Use all tables on switch chips – yes, a major limitation (cross-product issue) – BUT an upcoming OF version will expose these

29 What can you not do with OpenFlow ver1.0 New forwarding primitives – BUT provides a nice way to integrate them through extensions New packet formats/field definitions – BUT a generalized OpenFlow (2.0) is on the horizon Optical Circuits – BUT efforts underway to apply OpenFlow model to circuits Low-setup-time individual flows – BUT can push down flows proactively to avoid delays

30 Where it's going OF v1.1: Extensions for WAN – multiple tables: leverage additional tables – tags and tunnels – multipath forwarding OF v2+ – generalized matching and actions: an instruction set for networking

31 OpenFlow Implementations (Switch and Controller)

32 OpenFlow building blocks Controllers: NOX, SNAC, Beacon, Helios, Maestro Slicing software: FlowVisor, FlowVisor Console Stanford-provided OpenFlow switches: NetFPGA, Software Ref. Switch, Broadcom Ref. Switch, OpenWRT, PCEngine WiFi AP, OpenVSwitch Commercial switches: HP, NEC, Pronto, Juniper… and many more Monitoring/debugging tools: oflops, oftrace, openseer Applications: LAVI, ENVI (GUI), Expedient, n-Casting

33 Current OpenFlow hardware Ciena CoreDirector NEC IP8800 / UNIVERGE PF5240 Juniper MX-series HP ProCurve 5400 Pronto 3240/3290 WiMAX (NEC) PC Engines Netgear More coming soon... Outdated! See ONF: ONS:

34 Commercial Switch Vendors
Model | Virtualize | Notes
HP ProCurve 5400zl | 1 OF instance per VLAN | LACP, VLAN and STP processing before OpenFlow; wildcard rules or non-IP pkts processed in s/w; header rewriting in s/w; CPU protects mgmt during loop
NEC IP8800 Series and UNIVERGE PF5240 | 1 OF instance per VLAN | OpenFlow takes precedence; most actions processed in hardware; MAC header rewriting in h/w; more than 100K flows (PF5240)
Pronto 3240 or 3290 with Pica8 or Indigo firmware | 1 OF instance per switch | No legacy protocols (like VLAN and STP); most actions processed in hardware; MAC header rewriting in h/w

35 Controller Vendors
Vendor | Notes
Nicira's NOX | Open-source, GPL; C++ and Python; researcher friendly
Nicira's ONIX | Closed-source; datacenter networks
SNAC | Open-source, GPL; code based on NOX 0.4; enterprise networks; C++, Python and Javascript; currently used by campuses
Stanford's Beacon | Open-source; researcher friendly; Java-based
BigSwitch controller | Closed-source; based on Beacon; enterprise networks
Maestro (from Rice Univ.) | Open-source; based on Java
NEC's Helios | Open-source; written in C and Ruby
NEC UNIVERGE PFC | Closed-source; based on Helios
Outdated! See ONF: ONS:

36 Growing Community Vendors and start-ups; providers and business units; and more. Note: level of interest varies.

37 Industry commitment Big players forming the Open Networking Foundation (ONF) to promote a new approach to networking called Software-Defined Networking (SDN).

38 Software-Defined Networking (SDN)

39 Current Internet: Closed to Innovations in the Infrastructure Every box bundles specialized packet forwarding hardware, a closed operating system, and the vendor's own apps.

40 The Software-Defined Networking approach to open it: the per-box operating systems are replaced by apps running on a common Network Operating System that controls all the specialized packet forwarding hardware.

41 The Software-Defined Network 1. Open interface to hardware 2. At least one good (network) operating system – extensible, possibly open-source 3. Well-defined open API Apps run on the Network Operating System over simple packet forwarding hardware.

42 Virtualizing OpenFlow

43 Trend: Computer Industry vs Network Industry Computer industry: x86 (computer) → virtualization layer → many operating systems (Windows, Linux, Mac OS) → apps. Network industry: OpenFlow → virtualization or slicing layer → network operating systems (e.g. NOX, Controller 1, Controller 2) → apps.

44 Isolated slices: a virtualization or slicing layer, over an open interface to hardware, hosts many network operating systems (or many versions) – Network Operating System 1 through 4 – each running its own apps over simple packet forwarding hardware.

45 Switch-Based Virtualization Exists for NEC, HP switches but is not flexible enough: normal L2/L3 processing handles production VLANs, while each research VLAN gets its own flow table and its own controller.

46 FlowVisor-based Virtualization FlowVisor & policy control sits between the OpenFlow switches and multiple guest controllers (Craig's, Heidi's, Aaron's), speaking the OpenFlow protocol on both sides. Topology discovery is per slice.

47 FlowVisor-based Virtualization Separation not only by VLANs, but by any L1-L4 pattern: e.g. broadcast/multicast (dl_dst=FFFFFFFFFFFF) to one controller, http traffic (tp_src=80, or tp_dst=80) to a load-balancer controller.

48 FlowSpace: Maps Packets to Slices

49 FlowVisor Message Handling An exception packet leaves the data path (full line-rate forwarding), goes up through the OpenFlow firmware to FlowVisor, which applies the policy check "Who controls this packet?" and hands it to the right guest controller (Alice, Bob, Cathy). Rules coming down pass the policy check "Is this rule allowed?" before being installed.
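
FlowVisor's two policy checks can be sketched as simple flowspace predicates. Slice names and the dict-based flowspace format are hypothetical simplifications; real FlowVisor slices on any L1-L4 pattern, as the previous slides note.

```python
def in_space(flowspace, pkt):
    # a packet is in a flowspace if it matches every constrained field
    return all(pkt.get(f) == v for f, v in flowspace.items())

SLICES = {
    "http-lb":   {"tp_dst": 80},                      # hypothetical web slice
    "broadcast": {"dl_dst": "ff:ff:ff:ff:ff:ff"},     # hypothetical bcast slice
}

def who_controls(pkt):
    """Policy check on the way up: which slice gets this packet-in?"""
    for name, space in SLICES.items():
        if in_space(space, pkt):
            return name
    return None

def rule_allowed(slice_name, rule):
    """Policy check on the way down: a rule must stay inside its slice."""
    space = SLICES[slice_name]
    return all(rule.get(f) == v for f, v in space.items())
```

A guest controller in the "http-lb" slice may install rules that further constrain port-80 traffic, but a rule that omits `tp_dst=80` would exceed the slice and is rejected.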

50 Use Case: New CDN - Turbo Coral ++ Basic idea: build a CDN where you control the entire network – All traffic to or from Coral IP space controlled by the experimenter – All other traffic controlled by default routing – Topology is the entire network – End hosts are automatically added (no opt-in) (Flow entries: match on Coral's IP src/dst space, all other fields wildcarded.)

51 Use Case: Aaron's IP A new layer 3 protocol Replaces IP Defined by a new Ether Type (Flow entries: Eth type = AaIP, all other fields wildcarded, for Aaron's slice; Eth type = !AaIP for everything else.)

52 Slicing traffic All network traffic Untouched production traffic Research traffic Experiment #1 Experiment #2 … Experiment N

53 Ways to use slicing Slice by feature Slice by user Home-grown protocols Download new feature Versioning


55 SDN Interlude… Returning to fundamentals

56 Software-Defined Networking (SDN) Not just an idle academic daydream – Tapped into some strong market need One of those rare cases where we know the future – Still in development, but consensus on inevitability Much more to the story than OpenFlow trade-rag hype – A revolutionary paradigm shift in the network control plane Scott Shenker's talk: The Future of Networking, and the Past of Protocols / The Role of Abstractions in Networking

57 Weaving Together Three Themes Networking currently built on weak foundation – Lack of fundamental abstractions Network control plane needs three abstractions – Leads to SDN v1 and v2 Key abstractions solve other architecture problems – Two simple abstractions make architectures evolvable

58 Weak Intellectual Foundations OS courses teach fundamental principles – Mutual exclusion and other synchronization primitives – Files, file systems, threads, and other building blocks Networking courses teach a big bag of protocols – No formal principles, just vague design guidelines Source: Scott Shenker

59 Weak Practical Foundations Computation and storage have been virtualized – Creating a more flexible and manageable infrastructure Networks are still notoriously hard to manage – Network administrators make up a large share of sysadmin staff Source: Scott Shenker

60 Weak Evolutionary Foundations Ongoing innovation in systems software – New languages, operating systems, etc. Networks are stuck in the past – Routing algorithms change very slowly – Network management extremely primitive Source: Scott Shenker

61 Why Are Networking Foundations Weak? Networks used to be simple – Basic Ethernet/IP straightforward, easy to manage New control requirements have led to complexity – ACLs, VLANs, TE, Middleboxes, DPI,… The infrastructure still works... – Only because of our great ability to master complexity Ability to master complexity both blessing and curse Source: Scott Shenker

62 How Programming Made the Transition Machine languages: no abstractions – Had to deal with low-level details Higher-level languages: OS and other abstractions – File system, virtual memory, abstract data types,... Modern languages: even more abstractions – Object orientation, garbage collection,... Abstractions simplify programming Easier to write, maintain, reason about programs Source: Scott Shenker

63 Why Are Abstractions/Interfaces Useful? Interfaces are instantiations of abstractions An interface shields a program's implementation details – Allows freedom of implementation on both sides – Which leads to modular program structure Barbara Liskov: Power of Abstractions talk Modularity based on abstraction is the way things get done So, what role do abstractions play in networking? Source: Scott Shenker

64 Layers are Main Network Abstractions Layers provide nice data plane service abstractions – IP's best-effort delivery – TCP's reliable byte-stream Aside: good abstractions, terrible interfaces – Don't sufficiently hide implementation details Main point: no control plane abstractions – No sophisticated management/control building blocks Source: Scott Shenker

65 No Abstractions = Increased Complexity Each control requirement leads to new mechanism – TRILL, LISP, etc. We are really good at designing mechanisms – So we never tried to make life easier for ourselves – And so networks continue to grow more complex But this is an unwise course: – Mastering complexity cannot be our only focus – Because it helps in short term, but harms in long term – We must shift our attention from mastering complexity to extracting simplicity…. Source: Scott Shenker

66 How Do We Build Control Plane? We define a new protocol from scratch – E.g., routing Or we reconfigure an existing mechanism – E.g., traffic engineering Or leave it for manual operator configuration – E.g., access control, middleboxes Source: Scott Shenker

67 What Are The Design Constraints? Operate within the confines of a given datapath – Must live with the capabilities of IP Operate without communication guarantees – A distributed system with arbitrary delays and drops Compute the configuration of each physical device – Switch, router, middlebox – FIB, ACLs, etc. This is insanity! Source: Scott Shenker

68 Programming Analogy What if programmers had to: – Specify where each bit was stored – Explicitly deal with all internal communication errors – Within a programming language with limited expressibility Programmers would redefine the problem by: – Defining higher-level abstractions for memory – Building on reliable communication primitives – Using a more general language Abstractions divide the problem into tractable pieces – Why aren't we doing this for network control? Source: Scott Shenker

69 Central Question What abstractions can simplify the control plane? – i.e., how do we separate the problems? Not about designing new mechanisms! – We have all the mechanisms we need Extracting simplicity vs mastering complexity – Separating problems vs solving problems – Defining abstractions vs designing mechanisms Source: Scott Shenker

70 Abstractions Must Separate 3 Problems Constrained forwarding model Distributed state Detailed configuration Source: Scott Shenker

71 Forwarding Abstraction Control plane needs flexible forwarding model – With behavior specified by control program This abstracts away forwarding hardware – Crucial for evolving beyond vendor-specific solutions Flexibility and vendor-neutrality are both valuable – But one economic, the other architectural Possibilities: – General x86 program, MPLS, OpenFlow,….. – Different flexibility/performance/deployment tradeoffs Source: Scott Shenker

72 State Distribution Abstraction Control program should not have to deal with vagaries of distributed state – Complicated, source of many errors – Abstraction should hide state dissemination/collection Proposed abstraction: global network view – Don't worry about how to get that view (next slide) Control program operates on network view – Input: global network view (graph) – Output: configuration of each network device Source: Scott Shenker

73 Network Operating System (NOS) NOS: distributed system that creates a network view – Runs on servers (controllers) in the network Communicates with forwarding elements in network – Gets state information from forwarding elements – Communicates control directives to forwarding elements Using forwarding abstraction Control program operates on view of network – Control program is not a distributed system NOS plus Forwarding Abstraction = SDN (v1) Source: Scott Shenker

74 Current Networks: protocols embedded in every box. Software-Defined Networking (v1): a Control Program runs over a Network Operating System, which maintains the Global Network View and exerts control via the forwarding interface. Source: Scott Shenker

75 Major Change in Paradigm No longer designing distributed control protocols – Now just defining a centralized control function Control program: Configuration = Function(view) Why is this an advance? – Much easier to write, verify, maintain, reason about, …. NOS handles all state dissemination/collection – Abstraction breaks this off as tractable piece – Serves as fundamental building block for control Source: Scott Shenker
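
The slide's equation, Configuration = Function(view), can be made concrete: the control program is a pure function from a global network view (a graph) to per-switch forwarding state, with no distributed protocol anywhere. This is a minimal sketch using BFS for shortest paths; the graph format and names are illustrative.

```python
from collections import deque

def configuration(view, dst):
    """view: {switch: [neighbors]}; returns {switch: next hop toward dst}."""
    next_hop, frontier = {dst: dst}, deque([dst])
    while frontier:                      # BFS outward from the destination
        node = frontier.popleft()
        for nbr in view[node]:
            if nbr not in next_hop:
                next_hop[nbr] = node     # nbr reaches dst via node
                frontier.append(nbr)
    del next_hop[dst]                    # the destination needs no entry
    return next_hop
```

The NOS's job is everything this function gets to ignore: building `view` from the switches and pushing the returned entries back down to them.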

76 What Role Does OpenFlow Play? NOS conveys configuration of global network view to actual physical devices But nothing in this picture limits what the meaning of configuration is OpenFlow is one possible definition of how to model the configuration of a physical device Is it the right one? Absolutely not. – Crucial for vendor-independence – But not the right abstraction (yet) Source: Scott Shenker

77 Are We Done Yet? This approach requires control program (or operator) to configure each individual network device This is much more complicated than it should be! NOS eases implementation of functionality – But does not help specification of functionality! We need a specification abstraction Source: Scott Shenker

78 Specification Abstraction Give the control program an abstract view of the network – Where the abstract view is a function of the global view Then, the control program is an abstract mapping – Abstract configuration = Function(abstract view) Model just enough detail to specify goals – Don't provide information needed to implement goals Source: Scott Shenker

79 One Simple Example: Access Control Full Network View Abstract Network View Source: Scott Shenker

80 More Detailed Model Packet In → L2 → L3 → ACL → Packet Out: a service model can generally be described by a table pipeline. Source: Scott Shenker

81 Implementing the Specification Abstraction Given: an abstract table pipeline (L2 → L3 → ACL). Need: the pipeline operations distributed over a network of physical switches. The Network Hypervisor ("Nypervisor") compiles the abstract pipeline into a physical configuration. Source: Scott Shenker
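
The abstract table pipeline itself is easy to picture as code: a packet flows through L2, L3 and ACL stages in order, and any stage may rewrite or drop it. The stage logic below is purely illustrative (the Nypervisor's compilation step onto physical switches is not shown).

```python
def l2_stage(pkt):
    pkt.setdefault("out_port", 1)            # illustrative L2 forwarding decision
    return pkt

def l3_stage(pkt):
    if "ip_dst" in pkt:
        pkt["ttl"] = pkt.get("ttl", 64) - 1  # a routed hop decrements TTL
    return pkt

def acl_stage(pkt):
    # drop ssh, say; everything else passes
    return None if pkt.get("tp_dst") == 22 else pkt

def run_pipeline(pkt, stages=(l2_stage, l3_stage, acl_stage)):
    for stage in stages:
        pkt = stage(pkt)
        if pkt is None:
            return None                      # dropped mid-pipeline
    return pkt
```

The specification abstraction lets the operator reason about exactly this pipeline, while the Nypervisor decides which physical switch implements which stage.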

82 Two Examples Scale-out router: – Abstract view is single router – Physical network is collection of interconnected switches – Nypervisor allows routers to scale out, not up Multi-tenant networks: – Each tenant has control over their private network – Nypervisor compiles all of these individual control requests into a single physical configuration – Network Virtualization Source: Scott Shenker

83 Moving from SDN v1 to SDN v2: the Control Program operates on an Abstract Network View provided by the Nypervisor, which maps it onto the Global Network View maintained by the Network Operating System. Source: Scott Shenker

84 Clean Separation of Concerns The network control program specifies the behavior it wants on the abstract network view – Control program: maps behavior to abstract view – The abstract model should be chosen so that this is easy The Nypervisor maps the controls expressed on the abstract view into a configuration of the global view – Nypervisor: abstract model to global view – Models such as single crossbar, single slice, … The NOS distributes the configuration to physical switches – NOS: global view to physical switches Source: Scott Shenker

85 Three Basic Network Interfaces Forwarding interface: abstract forwarding model – Shields higher layers from forwarding hardware Distribution interface: global network view – Shields higher layers from state dissemination/collection Specification interface: abstract network view – Shields control program from details of physical network Source: Scott Shenker

86 Control Plane Research Agenda Create three modular, tractable pieces – Nypervisor – Network Operating System – Design and implement forwarding model Build control programs over abstract models – Identify appropriate models for various applications – Virtualization, Access Control, TE, Routing,… Source: Scott Shenker

87 Implementations vs Abstractions SDN is an instantiation of fundamental abstractions – Don't get distracted by the mechanisms The abstractions were needed to separate concerns – Network challenges are now modular and tractable The abstractions are fundamental – SDN implementations are ephemeral – OpenFlow/NOX/etc. are particular implementations Source: Scott Shenker

88 Future of Networking, Past of Protocols The future of networking lies in cleaner abstractions – Not in defining complicated distributed protocols – SDN is only the beginning of needed abstractions Took OS researchers years to find their abstractions – First they made it work, then they made it simple Networks work, but they are definitely not simple – It is now networking's turn to make this transition Our task: make networking a mature discipline – By extracting simplicity from the sea of complexity… Source: Scott Shenker

89 "SDN has won the war of words, the real battle over customer adoption is just beginning...." - Scott Shenker

90 OpenFlow Deployments

91 Current Trials and Deployments 68 Trials/Deployments - 13 Countries

92 Brazil University of Campinas Federal University of Rio de Janeiro Federal University of Amazonas Foundation Center of R&D in Telecomm. Canada University of Toronto Germany T-Labs Berlin Leibniz Universität Hannover France ENS Lyon/INRIA India VNIT Mahindra Satyam Italy Politecnico di Torino United Kingdom University College London Lancaster University University of Essex Taiwan National Center for High-Performance Computing Chunghwa Telecom Co Current Trials and Deployments Japan NEC JGN Plus NICT University of Tokyo Tokyo Institute of Technology Kyushu Institute of Technology NTT Network Innovation Laboratories KDDI R&D Laboratories Unnamed University South Korea KOREN Seoul National University Gwangju Institute of Science & Tech Pohang University of Science & Tech Korea Institute of Science & Tech ETRI Chungnam National University Kyung Hee University Spain University of Granada Switzerland CERN

93 3 New EU Projects: OFELIA, SPARC, CHANGE

94 EU Project Participants Germany – Deutsche Telekom Laboratories – Technische Universität Berlin – European Center for ICT – ADVA AG Optical Networking – NEC Europe Ltd. – Eurescom United Kingdom – University of Essex – Lancaster University – University College London Spain – i2CAT Foundation – University of the Basque Country, Bilbao Romania – Universitatea Politehnica Bucuresti Sweden – ACREO AB – Ericsson AB Sweden Hungary – Ericsson Magyarorszag Kommunikacios Rendszerek KFT Switzerland – Dreamlab Technologies – Eidgenössische Technische Hochschule Zürich Italy – Nextworks – Università di Pisa Belgium – Interdisciplinary Institute for Broadband Technology – Université catholique de Louvain

95 Kansas State GENI OpenFlow deployment (2010) 10 institutions and 2 National Research Backbones

96 National Lambda Rail GENI Network Evolution

97 National Lambda Rail GENI Network Evolution

98 GENI Integration FlowVisor – slicing control Expedient – experimenters' portal for slice management Opt-in Manager – network admins' portal to approve/deny experiment requests for traffic (Expedient1/2/3 speak the GENI API and API X to Opt-in Mgr1/2, which drive FlowVisor1/2 over the OpenFlow substrates 1/2.)

99 FIBRE: FI testbeds between BRazil and Europe A project between 9 partners from Brazil (6 from GIGA), 5 from Europe (4 from OFELIA and OneLab) and 1 from Australia (from OneLab), proposing the design, implementation and validation of a shared Future Internet sliceable/programmable research facility to support joint experimentation by European and Brazilian researchers. The objectives include: – the development and operation of a new experimental facility in Brazil – the development and operation of a FI facility in Europe based on enhancements and the federation of the existing OFELIA and OneLab infrastructures – the federation of the Brazilian and European experimental facilities, to support the provisioning of slices using resources from both testbeds

100 Project at a glance What? Main goal – Create a common space between the EU and Brazil for Future Internet (FI) experimental research into network infrastructure and distributed applications, by building and operating a federated EU-Brazil Future Internet experimental facility. Who? 15 partners: Nextworks, UEssex, i2CAT, UTH, UPMC, NICTA, UNIFACS, UFPA, UFG, UFSCar, CPqD, USP, RNP, UFF, UFRJ How? Requested ~1.1M from the EC and R$ 2.6 from CNPq in funding to perform 6 activities – WP1: Project management – WP2, WP3: Building and operating the Brazilian (WP2) and European (WP3) facilities – WP4: Federation of the FIBRE-EU and FIBRE-BR facilities – WP5: Joint pilot experiments to showcase the potential of the federated FIBRE facility – WP6: Dissemination and collaboration

101 The FIBRE consortium in Brazil The map shows the 9 participating Brazilian sites (islands) and the expected topology of their interconnecting private L2 network, over the GIGA, KyaTera and RNP experimental networks

102 FIBRE site in Brazil

103 Technology pilot examples High-definition content delivery Seamless mobility testbed

104 OpenFlow Deployment at Stanford Switches (23) APs (50) WiMAX (1)

105 Industry interest

106 Cellular industry Recently made transition to IP Billions of mobile users Need to securely extract payments and hold users accountable IP sucks at both, yet hard to change

107 Telco Operators Global IP traffic growing 40-50% per year End-customer monthly bill remains unchanged Therefore, CAPEX and OPEX need to reduce % per Gb/s per year But in practice, reduces by ~20% per year

108 Example: New Data Center Cost: 200,000 servers with a fanout of 20 → 10,000 switches; $5k vendor switch = $50M; $1k commodity switch = $10M; savings in 10 data centers = $400M. Control: more flexible control; tailor the network for services; quickly improve and innovate.
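
The slide's arithmetic, spelled out step by step (the figures are the slide's own):

```python
servers = 200_000
fanout = 20                         # servers per access switch
switches = servers // fanout        # 10,000 switches

vendor_cost = switches * 5_000      # $5k vendor switch  -> $50M
commodity_cost = switches * 1_000   # $1k commodity switch -> $10M

savings_per_dc = vendor_cost - commodity_cost   # $40M per data center
savings_10_dcs = 10 * savings_per_dc            # $400M across 10 DCs
```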

109 Research Examples

110 Example 1 Load-balancing as a network primitive Nikhil Handigol, Mario Flajslik, Srini Seetharaman




114 Load Balancing is just Smart Routing
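
The slide's point, "load balancing is just smart routing," fits in a few lines of controller logic: when a new flow arrives, pick the least-loaded replica and install a route toward it. Replica names and the route-installation stand-in are hypothetical; Nikhil's actual NOX feature is not shown in the source.

```python
def pick_replica(load):
    """load: {replica: active flow count}; choose the least-loaded replica."""
    return min(load, key=load.get)

def handle_new_flow(load, routes, flow_id):
    replica = pick_replica(load)
    load[replica] += 1
    routes[flow_id] = replica   # stands in for installing a path to the replica
    return replica
```

Because the controller already computes routes, load balancing stops being a separate middlebox and becomes one more routing policy.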

115 Nikhil's Experiment: <500 lines of code Feature on top of the Network OS (NOX)

116 Example 2 Energy Management in Data Centers Brandon Heller

117 ElasticTree Goal: Reduce energy usage in data center networks Approach: 1. Reroute traffic 2. Shut off links and switches to reduce power A DC Manager picks paths through the Network OS. [Brandon Heller, NSDI 2010]

118 ElasticTree Goal: Reduce energy usage in data center networks Approach: 1. Reroute traffic 2. Shut off links and switches to reduce power (the crossed-out links and switches in the figure are powered off) [Brandon Heller, NSDI 2010]
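
The core idea can be sketched greedily: keep only the links some active flow is routed over and power off the rest. Real ElasticTree solves an optimization with fault-tolerance and utilization margins; this simplification and its link/path format are assumptions for illustration.

```python
def links_to_keep(all_links, flow_paths):
    """all_links: set of (u, v) pairs; flow_paths: node paths currently in use."""
    needed = set()
    for path in flow_paths:
        for u, v in zip(path, path[1:]):
            # normalize orientation to match how the link is listed
            needed.add((u, v) if (u, v) in all_links else (v, u))
    return needed

def links_to_power_off(all_links, flow_paths):
    # everything not carrying traffic is a candidate for shutdown
    return set(all_links) - links_to_keep(all_links, flow_paths)
```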

119 Example 3 Unified control plane for packet and circuit networks Saurav Das

120 Converging Packet and Circuit Networks Goal: a common control plane for Layer 3 (IP routers) and Layer 1 (TDM and WDM switches) networks Approach: add OpenFlow to all switches; use a common network OS (a feature over the Network OS speaks the OpenFlow protocol to IP routers, TDM switches and WDM switches alike) [Saurav Das and Yiannis Yiakoumis, Supercomputing 2009 Demo] [Saurav Das, OFC 2010]

121 Example 4 Using all the wireless capacity around us KK Yap, Masayoshi Kobayashi, Yiannis Yiakoumis, TY Huang

122 KK's Experiment: <250 lines of code WiMax Feature Network OS (NOX)

123 Evolving the IP routing landscape with Software-Defined Networking technology

124 Providing IP Routing & Forwarding Services in OpenFlow networks Unicamp Unirio Ufscar Ufes Ufpa.... Indiana University Stanford Google Deutsche Telekom NTT MCL....

125 Motivation Current mainframe model of networking equipment: - Costly systems based on proprietary HW and closed SW - Lack of programmability limits customization and in-house innovation - Inefficient and costly network solutions - Ossified Internet (Figure: Control Logic with RIP, BGP, OSPF, ISIS over O.S., Driver, Hardware in a ROUTER; Proprietary IPC / API; Management via Telnet, SSH, Web, SNMP)

126 RouteFlow: Main Goal (Figure: Control Logic with RIP, BGP, OSPF over O.S. and API; Switch with Driver, Hardware, O.S.; Controller; Standard API (i.e. OpenFlow); Management API) Open commodity routing solutions: + open-source routing protocol stacks (e.g. Quagga) + commercial networking HW with open API (i.e. OpenFlow) = line-rate performance, cost-efficiency, and flexibility! Innovation in the control plane and network services

127 RouteFlow: Plain Solution

128 RouteFlow A RouteFlow B OSPF HELLO RouteFlow A VM RouteFlow B VM Internet RouteFlow: Control Traffic (IGP)

129 RouteFlow A RouteFlow B BGP Message BGP Message RouteFlow A VM (acts as eBGP speaking Routing Control Platform) RouteFlow B VM Internet RouteFlow: Control Traffic (BGP)

130 RouteFlow A RouteFlow B Packet Dest IP: Src MAC: 00:00:00:00:00:01 RouteFlow A VM RouteFlow B VM 1) Change Src MAC to A's MAC address 00:00:00:00:00:02 2) Decrement TTL * 3) /24 -> port 2 Packet Dest IP: Src MAC: 00:00:00:00:00:02 Internet RouteFlow: Data Traffic
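A hedged sketch of the per-hop rewrite in the slide: match on the destination prefix, rewrite the source MAC to router A's own address, decrement the TTL, and forward out port 2. The packet representation and function names are illustrative, not a real OpenFlow controller API.

```python
# Illustrative model of the flow actions RouteFlow installs for the slide's
# example (field names are my own; this is not an OpenFlow library API).
ROUTER_A_MAC = "00:00:00:00:00:02"

def forward_via_a(pkt: dict, out_port: int = 2) -> tuple:
    """Apply router A's per-hop actions; return (rewritten packet, port)."""
    fwd = dict(pkt)
    fwd["src_mac"] = ROUTER_A_MAC   # 1) rewrite source MAC to A's address
    fwd["ttl"] -= 1                 # 2) decrement TTL
    return fwd, out_port            # 3) matched /24 prefix -> port 2

pkt_in = {"src_mac": "00:00:00:00:00:01", "ttl": 64}
pkt_out, port = forward_via_a(pkt_in)
```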

131 RouteFlow: Architecture Key Features Separation of data and control planes Loosely coupled architecture: – Three RF components: 1. Controller, 2. Server, 3. Slave(s) Unmodified routing protocol stacks – Routing protocol messages can be sent 'down' or kept in the virtual environment Portable to multiple controllers – RF-Controller acts as a proxy app. Multi-virtualization technologies Multi-vendor data plane hardware

132 Advancing the Use Cases and Modes of Operation From logical routers to flexible virtual networks

133 Prototype evaluation Lab Setup - NOX controller - Quagga routing engine - 5 x NetFPGA switches Results - Interoperability with traditional networking gear - Route convergence time is dominated by the protocol time-out configuration (e.g., 4 x HELLO in OSPF), not by slow-path operations - Compared to commercial routers (Cisco, Extreme, Broadcom-based), larger latency only for those packets that need to go to the slow path: those lacking a FIB entry or needing processing by the OS networking / routing stack, e.g., ARP, PING, routing protocol messages (Figure: NOX OpenFlow Controller, RF-Server, 5 x NetFPGA Routers)

134 Field Trial at the University of Indiana: Network setup - 1 physical OpenFlow switch (Pronto) - Virtual routers carved out of the physical OpenFlow switch - 10 Gig and 1 Gig connections - 2 BGP connections to external networks (Juniper routers in Chicago and Indianapolis) - Remote controller - New user interface

135 Field Trial at the University of Indiana User Interface

136 Demonstrations

137 Demonstration at Supercomputing 11 Routing configuration at your fingertips

138 RouteFlow community 10k visits from across the world (from 1000 different cities): - 43% new visits - ~25% from Brazil, ~25% from the USA, ~25% Europe, ~20% from Asia 100s of downloads 10s of known users from academia and industry 303 days since Project Launch

139 Key Partners and Contributors

140 Conclusions A worldwide-pioneering virtual routing solution for OpenFlow networks! RouteFlow proposes a commodity routing architecture that combines the line-rate performance of commercial hardware with the flexibility of open-source routing stacks running (remotely) on PCs; Allows for a flexible resource association between IP routing protocols and a programmable physical substrate: - Multiple use cases around virtualized IP routing services - IP routing protocol optimization - Migration path from traditional IP deployments to SDN Hundreds of users from around the world Yet to be proven in large-scale, real-world scenarios - To run on the NDDI/OS3E OpenFlow testbed

141 Questions? Thank you!
