
1 Recall: Logically Centralized Control Plane
A distinct (typically remote) controller interacts with local control agents (CAs) in routers to compute forwarding tables
[Figure: remote controller in the control plane talking to CAs in data-plane routers]

2 Software Defined Networking (SDN)
Why a logically centralized control plane?
- easier network management: avoid router misconfigurations, greater flexibility of traffic flows
- table-based forwarding (OpenFlow API) allows "programming" of routers
  - centralized "programming" is easier: compute tables centrally and distribute them
  - distributed "programming" is harder: compute tables as the result of a distributed algorithm (protocol) implemented in each and every router
- open (non-proprietary) implementation of the control plane

3 Generalized Forwarding and SDN
Each router contains a flow table that is computed and distributed by a logically centralized routing controller
[Figure: logically centralized routing controller (control plane) installing a local flow table (headers, counters, actions) in each data-plane router; table rows are matched against values in the arriving packet's header]

4 OpenFlow Data Plane Abstraction
flow: defined by header fields
generalized forwarding: simple packet-handling rules
- Pattern: match values in packet header fields
- Actions: for a matched packet: drop, forward, or modify the packet, or send it to the controller
- Priority: disambiguate overlapping patterns
- Counters: #bytes and #packets
Flow table in a router (computed and distributed by the controller) defines the router's match+action rules

5 OpenFlow Data Plane Abstraction
* : wildcard
1. src=1.2.*.*, dest=3.4.5.* → drop
2. src=*.*.*.*, dest=3.4.*.* → forward(2)
3. src= , dest=*.*.*.* → send to controller
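The wildcard rules above can be sketched as a tiny plain-Python matcher. This is an illustration of the match+action idea, not OpenFlow code; the `matches()` helper, the rule tuples, and the action strings are invented for this sketch:

```python
# Minimal sketch of OpenFlow-style match+action with wildcards and priority.
# Rules mirror the slide's examples; higher priority wins on overlap.

def matches(pattern, value):
    """A pattern like '3.4.*.*' matches an IP like '3.4.5.6'; '*' is a wildcard."""
    for p, v in zip(pattern.split('.'), value.split('.')):
        if p != '*' and p != v:
            return False
    return True

# (priority, src pattern, dst pattern, action)
flow_table = [
    (2, '1.2.*.*', '3.4.5.*', 'drop'),
    (1, '*.*.*.*', '3.4.*.*', 'forward(2)'),
]

def handle(src, dst):
    # Try rules in descending priority order; first match wins.
    for _, s, d, action in sorted(flow_table, reverse=True):
        if matches(s, src) and matches(d, dst):
            return action
    return 'send to controller'   # table miss

print(handle('1.2.3.4', '3.4.5.6'))   # drop (higher-priority rule matches first)
print(handle('5.6.7.8', '3.4.9.9'))   # forward(2)
print(handle('5.6.7.8', '9.9.9.9'))   # send to controller
```

Note how the priority field resolves the overlap: a packet with src=1.2.3.4, dest=3.4.5.6 matches both rules, but the drop rule wins because its priority is higher.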

6 OpenFlow: Flow Table Entries
Rule: match on packet header fields
  Switch Port, VLAN ID, MAC src, MAC dst, Eth type (link layer); IP Src, IP Dst, IP Prot (network layer); TCP sport, TCP dport (transport layer)
Action: forward packet to port(s); encapsulate and forward to controller; drop packet; send to normal processing pipeline; modify fields
Stats: packet + byte counters
[Speaker note: Now I'll describe the API that tries to meet these goals.]

7 Examples

Destination-based forwarding:
  Switch Port=*, MAC src=*, MAC dst=*, Eth type=*, VLAN ID=*, IP Src=*, IP Dst= , IP Prot=*, TCP sport=*, TCP dport=* → forward(port6)
  (IP datagrams destined to the given IP address should be forwarded to router output port 6)

Firewall:
  Switch Port=*, MAC src=*, MAC dst=*, Eth type=*, VLAN ID=*, IP Src=*, IP Dst=*, IP Prot=*, TCP sport=*, TCP dport=22 → drop
  (do not forward, i.e. block, all datagrams destined to TCP port 22)

  Switch Port=*, MAC src=*, MAC dst=*, Eth type=*, VLAN ID=*, IP Src= , IP Dst=*, IP Prot=*, TCP sport=*, TCP dport=* → drop
  (do not forward, i.e. block, all datagrams sent by the given host)

8 OpenFlow Abstraction match+action: unifies different kinds of devices
Router: match on longest destination IP prefix; action: forward out a link
Switch: match on destination MAC address; action: forward or flood
Firewall: match on IP addresses and TCP/UDP port numbers; action: permit or deny
NAT: match on IP address and port; action: rewrite address and port

9 OpenFlow Example
Example: datagrams from hosts h5 and h6 should be sent to h3 or h4, via s1 and from there to s2.
[Figure: controller; switches s1, s2, s3; hosts h1-h6]
Flow table entries (one table per switch in the figure):
  match: IP Src = 10.3.*.*, IP Dst = 10.2.*.* → forward(3)
  match: ingress port = 1, IP Src = 10.3.*.*, IP Dst = 10.2.*.* → forward(4)
  match: ingress port = 2, IP Dst =  → forward(3); IP Dst =  → forward(4)

10 Simple SDN Network
[Figure: Host1 and Host2 attached to a software or hardware switch; the switch speaks a communication protocol to a controller running L2-forwarding logic via controller APIs (POX, Floodlight, ...); traffic is ordinary Ethernet, IP, ARP, TCP, HTTP, ...]

11 Communication Protocol (OpenFlow)

12 OpenFlow (Main Components)
Flow tables: matching, manipulation, counters
Communication messages: controller-to-switch, asynchronous, symmetric
[Figure: controller connects over a secure channel to the switch's flow-table pipeline]

13 Flow Tables (Structure)
A flow table consists of:
- a set of flow entries (match fields, counters, instructions)
- a table-miss configuration: drop the packet, send it to the controller, or process it using the next flow table
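The table-miss options above can be sketched as a toy multi-table pipeline in plain Python. The `tables` list and the miss-policy strings are illustrative data structures, not OpenFlow messages:

```python
# Sketch of multi-table processing with a per-table miss policy.
# Each table maps a packet key to an action; on a miss, the table's
# configured policy decides what happens next.

tables = [
    {"entries": {"h1": "forward(1)"}, "miss": "next"},        # table 0
    {"entries": {"h2": "forward(2)"}, "miss": "controller"},  # table 1
]

def process(packet_key):
    for table in tables:
        action = table["entries"].get(packet_key)
        if action is not None:
            return action
        if table["miss"] == "drop":
            return "drop"
        if table["miss"] == "controller":
            return "send to controller"
        # miss policy "next": fall through to the next flow table
    return "drop"

print(process("h1"))   # forward(1), matched in table 0
print(process("h2"))   # forward(2), table 0 misses and falls through to table 1
print(process("h9"))   # send to controller, per table 1's miss policy
```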

14 Flow Tables (Matching)
Physical: ingress port, metadata
L2: MAC src/dst, EtherType, VLAN, MPLS
L3: IP src/dst, IP proto, IP ToS, ARP code
L4: TCP/UDP src/dst, ICMP

15 Flow Tables (Counters)
Table counters, e.g. packet lookups/matches
Flow counters, e.g. packets/bytes received
Port counters, e.g. packets/bytes transmitted/received, drops
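The per-flow counters can be illustrated with a minimal sketch: each matched packet bumps the entry's packet and byte counts. The `flow` dict and `account()` helper are invented for this illustration:

```python
# Sketch of per-flow counters updated on each match.
flow = {"action": "forward(2)", "packets": 0, "bytes": 0}

def account(flow, pkt_len):
    # Update the entry's counters, then return its action.
    flow["packets"] += 1
    flow["bytes"] += pkt_len
    return flow["action"]

account(flow, 1500)
account(flow, 64)
print(flow["packets"], flow["bytes"])   # 2 1564
```

A controller can later read these counters back (the Read State message on a later slide) to monitor traffic without touching the data path.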

16 Flow Tables (Actions)
Output:
  IN_PORT: send packet back to the ingress port
  CONTROLLER: encapsulate and send to the controller
  FLOOD: send packet to all ports except the ingress port
Drop
Push/Pop VLAN/MPLS tag
Set-Field: IPv4 src/dst addresses, MAC src/dst addresses, TCP src/dst ports

17 Communication Messages (Controller to Switch)
Features: switch replies with its list of ports, port speeds, and supported tables and actions
Modify state: add, delete, or modify flow tables
Read state: controller queries table, flow, or port counters
Packet out: used by the controller to send packets out of a specified port on the switch

18 Communication Messages (Asynchronous)
Packet in: all packets that do not have a matching flow entry are encapsulated and sent to the controller
Flow removed: sent to the controller when a flow expires due to an idle or hard timeout
Port status: generated if a port is brought down

19 SDN Controller (POX)

20 POX
SDN controller written in Python
Many built-in modules
Python APIs to enable user extensions

21 POX (Built-in Modules)
Hub: flood packets
L2 forwarding: MAC learning
L3 forwarding: IP learning
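The MAC-learning idea behind the L2-forwarding module can be sketched in a few lines of plain Python (this is the general learning-switch logic, not POX's actual module; the `on_packet_in` function and MAC strings are invented for the sketch):

```python
# Learning-switch sketch: remember which port each source MAC arrived on;
# forward to the learned port if the destination is known, else flood.

mac_to_port = {}

def on_packet_in(in_port, src_mac, dst_mac):
    mac_to_port[src_mac] = in_port          # learn where src_mac lives
    out = mac_to_port.get(dst_mac)
    return f"forward({out})" if out is not None else "flood"

print(on_packet_in(1, "aa:aa", "bb:bb"))   # flood (bb:bb not yet learned)
print(on_packet_in(2, "bb:bb", "aa:aa"))   # forward(1) (aa:aa was learned above)
```

A hub module is the degenerate case: it skips the learning step and always floods.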

22 POX (Python APIs)
Publish-subscribe system
A module can raise events
A module can register for events provided by other modules
A module must have a launch function
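The publish-subscribe pattern described above can be sketched as a toy event core in plain Python. This mirrors the spirit of POX's event model but is not the POX API; the function names `add_listener`, `raise_event`, and the "PacketIn" handler body are invented for the sketch:

```python
# Toy publish-subscribe core: modules register handlers for named
# events; another module raises the event and all handlers run.

handlers = {}

def add_listener(event_name, handler):
    handlers.setdefault(event_name, []).append(handler)

def raise_event(event_name, **kwargs):
    # Deliver the event to every registered handler, collecting results.
    return [handler(**kwargs) for handler in handlers.get(event_name, [])]

def launch():
    # A module's entry point: register for the events it cares about.
    add_listener("PacketIn", lambda port: f"got packet on port {port}")

launch()
print(raise_event("PacketIn", port=3))   # ['got packet on port 3']
```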

23 POX (Python APIs)

24 POX (Python APIs)

25 Software Switch (Open vSwitch)

26 Open vSwitch (Features)
L2-L4 matching
VLANs with trunking
Tunneling protocols such as GRE
Remote configuration protocol
Multi-table forwarding pipeline
Monitoring via NetFlow, sFlow
Spanning Tree Protocol
Fine-grained QoS
OpenFlow support
Runs in either standalone mode or OpenFlow mode

27 A Network in a Laptop (Mininet)

28 Mininet
Mininet is a system for rapidly prototyping large networks on a single laptop
- Lightweight OS-level virtualization: isolated network namespaces, constrained CPU usage per namespace
- CLI and Python APIs
- Can: create custom topologies, run real programs, do custom packet forwarding using OpenFlow

29 Mininet API Basics
net = Mininet()                  # net is a Mininet() object
h1 = net.addHost( 'h1' )        # h1 is a Host() object
h2 = net.addHost( 'h2' )        # h2 is a Host()
s1 = net.addSwitch( 's1' )      # s1 is a Switch() object
c0 = net.addController( 'c0' )  # c0 is a Controller()
net.addLink( h1, s1 )           # creates a Link() object
net.addLink( h2, s1 )
net.start()
CLI( net )
net.stop()
[Figure: controller c0 above switch s1, with hosts h1 and h2 attached]

30 Performance Modeling in Mininet
# Use performance-modeling link class
net = Mininet(link=TCLink)
# Limit link bandwidth and add delay
net.addLink(h2, s1, bw=10, delay='50ms')
[Figure: controller; s1 linked to h2 at 10 Mbps with 50 ms delay; h1]

31 Mininet (Examples) Demo time

32 References
OpenFlow: http://www.openflow.org
Open vSwitch
POX
Mininet
Floodlight OpenFlow Controller

33 Data Center Networks
10's to 100's of thousands of hosts, often closely coupled, in close proximity:
- e-business (e.g., Amazon)
- content servers (e.g., YouTube, Akamai, Apple, Microsoft)
- search engines, data mining (e.g., Google)
Challenges:
- multiple applications, each serving massive numbers of clients
- managing/balancing load; avoiding processing, networking, and data bottlenecks
[Photo: inside a 40-ft Microsoft container, Chicago data center]

34 Data Center Networks
load balancer: application-layer routing
- receives external client requests
- directs workload within the data center
- returns results to the external client (hiding data center internals from the client)
[Figure: Internet, border router, load balancers, access routers, tier-1 switches, tier-2 switches, TOR switches, server racks 1-8]

35 Data Center Networks
rich interconnection among switches and racks:
- increased throughput between racks (multiple routing paths possible)
- increased reliability via redundancy
[Figure: server racks 1-8, TOR switches, tier-2 switches, tier-1 switches]

