Network Virtualization
Ben Pfaff, Nicira Networks, Inc.
Preview
● Data Centers
● Problems: Isolation, Connectivity
● Solution: Network Virtualization
● Network Tunnels
● A Network Virtualization Architecture
● Open vSwitch Design
● Questions?
Data Centers
[Photos: front and rear of a rack; the "Top of Rack" switch.]
A data center has many racks. A rack has 20-40 servers. The servers in a rack connect to a 48-port "top of rack" (ToR) switch. Data centers buy the cheapest ToR switches they can find; they are pretty dumb devices. Data centers do not rewire their networks without a really good reason.
Data Center Network Design before VMs
[Diagram: one rack of machines (Machine 1 through Machine 40) connects to a "Top of Rack" switch; ToR switches connect to an aggregation switch; aggregation switches connect to a core switch.]
Data Center Network Design with VMs
[Diagram: the same topology, but each machine now hosts up to 128 VMs, connected through a virtual switch (= vswitch).]
Problem: Isolation
[Diagram: the same data center topology, with up to 128 VMs per machine behind a vswitch.]
All VMs can talk to each other by default. You don't want someone in engineering screwing up the finance network. You don't want a break-in to your production website to allow stealing human resources data. Some switches have security features, but:
● You bought the cheap ones instead.
● There are hundreds of switches to set up.
Problem: Connectivity
[Diagram: the same topology, with the core switch connecting to the Internet; the links inside the data center are L2, the Internet is L3.]
The VMs in a data center can name each other by their MAC addresses (L2 addresses). This only works within a data center. To access machines or VMs in another data center, IP addresses (L3 addresses) must be used, and those IP addresses have to be globally routable.
Non-Solution: VLANs
A VLAN partitions a physical Ethernet network into isolated virtual Ethernet networks:
[Diagram: Ethernet (L2) | VLAN tag | IP (L3) | TCP (L4) - the VLAN tag sits inside the L2 header.]
The Internet is an L3 network. When a packet crosses the Internet, it loses all its L2 headers, including the VLAN tag. You lose all the isolation when your traffic crosses the Internet. Other problems: limited number of VLAN IDs, static allocation.
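The point about losing the VLAN tag can be made concrete with a sketch. This is a simplified model, not a full frame builder: it packs only the fields needed to show that the 802.1Q tag lives inside the L2 header, which an IP router discards.

```python
import struct

def ethernet_frame(dst_mac, src_mac, vlan_id, payload):
    """Build a minimal 802.1Q-tagged Ethernet frame (sketch only).
    The VLAN tag is part of the L2 header."""
    tpid = 0x8100                  # 802.1Q Tag Protocol Identifier
    tci = vlan_id & 0x0FFF         # 12-bit VLAN ID; priority bits left zero
    ethertype = 0x0800             # IPv4
    return dst_mac + src_mac + struct.pack("!HHH", tpid, tci, ethertype) + payload

def route_across_l3(frame):
    """A router forwards only the L3 payload: the old L2 header,
    VLAN tag included, is stripped and a fresh one written per hop."""
    l2_header_len = 6 + 6 + 4 + 2  # dst MAC, src MAC, 802.1Q tag, ethertype
    return frame[l2_header_len:]   # only the IP packet survives

frame = ethernet_frame(b"\xaa" * 6, b"\xbb" * 6, 42, b"ip-packet-bytes")
assert b"\x81\x00" in frame                       # tag present on the LAN
assert b"\x81\x00" not in route_across_l3(frame)  # gone after routing
```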
Solution: Network Virtualization
Virtualization = layering. Tunneling separates the virtual network from the physical network:
[Diagram: the virtual headers (Ethernet | IP | TCP) are carried intact inside physical headers (Ethernet | IP | GRE).]
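The layering on this slide can be sketched as follows. Header contents here are illustrative strings, not real packed headers, and the addresses are made up; the point is that the entire virtual packet rides as an opaque payload behind physical headers.

```python
# The virtual packet as the VMs see it (hypothetical addresses).
virtual_packet = ["Ethernet(VM A -> VM B)", "IP(10.0.0.1 -> 10.0.0.2)",
                  "TCP", "data"]

def gre_encapsulate(inner, outer_src_ip, outer_dst_ip):
    """Wrap the whole virtual packet in physical headers. Switches and
    routers along the path only ever look at the outer headers."""
    physical_headers = [f"Ethernet(host -> next hop)",
                        f"IP({outer_src_ip} -> {outer_dst_ip})",
                        "GRE"]
    return physical_headers + [inner]  # inner packet is opaque payload

def gre_decapsulate(packet):
    """At the far end, strip outer Ethernet/IP/GRE to recover the
    virtual packet unchanged."""
    return packet[3]

tunneled = gre_encapsulate(virtual_packet, "198.51.100.1", "203.0.113.7")
assert gre_decapsulate(tunneled) == virtual_packet
```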
Path of a Packet (No Tunnel)
[Diagram: the data center topology; a packet travels from one VM through its vswitch, the ToR and aggregation switches, and down to another VM.]
A packet from one VM to another passes through a number of switches along the way. Each switch only looks at the destination MAC address to decide where the packet should go.
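The behavior of each of those "dumb" switches can be sketched as a standard MAC-learning switch: remember which port each source MAC was seen on, forward by destination MAC, and flood when the destination is unknown. (This is a generic model of L2 switching, not anything specific to this deck's setup.)

```python
class LearningSwitch:
    """Minimal model of an L2 learning switch."""
    def __init__(self):
        self.mac_table = {}                      # MAC address -> port

    def handle(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn where src lives
        # Forward out the known port, or flood to all ports on a miss.
        return self.mac_table.get(dst_mac, "flood")

sw = LearningSwitch()
assert sw.handle(1, "aa:aa", "bb:bb") == "flood"  # dst not learned yet
assert sw.handle(2, "bb:bb", "aa:aa") == 1        # learned from 1st packet
assert sw.handle(1, "aa:aa", "bb:bb") == 2        # now both are known
```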
Path of a Packet (Via Tunnel)
[Diagram: two data centers joined across the Internet. The virtual packet (Ethernet | IP | TCP) travels inside physical headers (Ethernet | IP | GRE). Switching carries it within each data center; routing carries it across the Internet.]
Challenges
● Setting up the tunnels:
● Initially
● After VM startup/shutdown
● After VM migration
● Handling network failures
● Monitoring, administration
⇒ Use a central controller to set up the tunnels.
A Network Virtualization Distributed System
[Diagram: Machines 1-4, spread across two data centers, each run OVS with VMs behind it. The machines connect through ToR, aggregation, and core switches and the Internet (wires); a central controller speaks control protocols to every OVS instance.]
Controller Duties
● Monitor:
● Physical network
● VM locations, states
● Control:
● Tunnel setup
● All packets on virtual and physical networks
● Virtual/physical mapping
● Tells the OVS instances running everywhere else what to do
Open vSwitch
● Ethernet switch implemented in software
● Can be remotely controlled
● Tunnels (GRE and others)
● Integrates with VMMs, e.g. XenServer, KVM
● Free and open source: openvswitch.org
Open vSwitch: OVSDB protocol
● Manages slow-moving state:
● VM placement (via VMM integration)
● Tunnel setup
● Buzzwords:
● Lightweight
● Transactional
● Not SQL
● Persistent
Open vSwitch: OpenFlow protocol
An Ethernet switch's flow table is an ordered list of "if-then" rules:
"If this packet comes from VM A and is going to VM B, then send it out via tunnel 42."
(No matching rule: send the packet to the controller.)
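The "ordered list of if-then rules" can be sketched directly. This is a conceptual model of flow-table semantics (first match wins, miss goes to the controller), not OVS's actual data structures; the rule and action strings are made up.

```python
class FlowTable:
    """Ordered if-then rules: first matching rule decides the action;
    a table miss sends the packet to the controller."""
    def __init__(self):
        self.rules = []                      # list of (match_fn, action)

    def add(self, match, action):
        self.rules.append((match, action))

    def lookup(self, packet):
        for match, action in self.rules:
            if match(packet):                # first match wins
                return action
        return "send to controller"          # table miss

table = FlowTable()
table.add(lambda p: p["src"] == "VM A" and p["dst"] == "VM B",
          "output: tunnel 42")
assert table.lookup({"src": "VM A", "dst": "VM B"}) == "output: tunnel 42"
assert table.lookup({"src": "VM C", "dst": "VM B"}) == "send to controller"
```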
OpenFlow in the Data Center (One Possibility)
[Diagram: the two-data-center topology, with the numbered steps below marked along the packet's path.]
1. VM sends a packet.
2. Open vSwitch checks its flow table: no match. Sends the packet to the controller.
3. Controller tells OVS to set up a tunnel to the destination and send the packet on that tunnel.
4. OVS sends the packet on the new tunnel.
5. Normal switching and routing carry the packet to its destination in the usual way.
The same process repeats on the other end to send the reply back. This happens at most once per "flow", and other optimizations keep it from happening too frequently.
Open vSwitch: Design Overview
[Diagram: a physical machine running a hypervisor and host operating system. VMs 1-3 attach through virtual NICs (VNICs) to ovs-vswitchd, which connects to the physical NIC and talks to the controller, an administrative CLI/GUI, and other network elements.]
Open vSwitch: Design Details
[Diagram: the same machine, with the host OS split into user space (ovs-vswitchd) and kernel space (OVS kernel module).]
Cache hierarchy:
● <1 µs: kernel module
● <1 ms: ovs-vswitchd
● <10 ms: controller
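The cache hierarchy can be sketched as a three-tier lookup. This is a conceptual model of the fast-path/slow-path split, not the real kernel datapath; the action string and flow keys are hypothetical, and the latencies are only the slide's orders of magnitude.

```python
kernel_cache = {}     # exact-match flow cache, ~1 us per lookup
vswitchd_table = {}   # full flow table in userspace, ~1 ms

def ask_controller(flow):
    """Slowest tier (~10 ms). Decides the action and populates the
    lower tiers so later packets stay on the fast path."""
    action = "output: tunnel 42"        # hypothetical decision
    vswitchd_table[flow] = action
    kernel_cache[flow] = action
    return action

def lookup(flow):
    """Try each tier, fastest first; report which one answered."""
    if flow in kernel_cache:
        return kernel_cache[flow], "kernel"
    if flow in vswitchd_table:
        kernel_cache[flow] = vswitchd_table[flow]   # warm the fast path
        return kernel_cache[flow], "ovs-vswitchd"
    return ask_controller(flow), "controller"

assert lookup("flow-1")[1] == "controller"  # first packet: all the way up
assert lookup("flow-1")[1] == "kernel"      # later packets: fast path
```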
OpenFlow: Another Use
[Diagram: nine switches (1-9) in a 3x3 grid, with redundant wiring between them.]
OpenFlow: Another Use
[Diagram: the same nine switches; the effective topology with normal L2 switching is a tree, with the redundant links unused.]
OpenFlow: Another Use
[Diagram: the same nine switches with a controller managing L2 routing, so all the redundant links can carry traffic. (Requires all switches to support OpenFlow.)]
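A sketch of what the controller does with the full topology: compute paths over the redundant graph instead of disabling links. The 3x3 grid and its links are taken from the slides; BFS shortest-path is one plausible choice, not necessarily what any particular controller uses.

```python
from collections import deque

# The nine switches from the slides, wired as a 3x3 grid.
links = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6],
         4: [1, 5, 7], 5: [2, 4, 6, 8], 6: [3, 5, 9],
         7: [4, 8], 8: [5, 7, 9], 9: [6, 8]}

def shortest_path(src, dst, adj):
    """BFS over the full redundant topology. Unlike plain L2 switching,
    no link has to be administratively disabled to avoid loops."""
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:          # walk parents back to src
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None                              # disconnected

path = shortest_path(7, 3, links)
assert path[0] == 7 and path[-1] == 3 and len(path) == 5  # 4 hops across
```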
Conclusion
● Companies spread VMs across data centers.
● Ordinary networking exposes differences between VMs in the same data center and those in different data centers.
● Tunnels can hide the differences.
● A controller and OpenFlow switches at the edge of the network can set up and maintain the tunnels.
Questions?