1
Hosting Virtual Networks on Commodity Hardware
VINI Summer Camp
2
Decouple Service from Infrastructure
- Service: "slices" of physical infrastructure
  – Applications and networks that benefit from flexible, custom topologies and application-specific routing
- Infrastructure: needed to build networks
3
Fixed Physical Infrastructure
4
Shared By Many Parties
5
Network Virtualization: 3 Aspects
- Host: divide the resources of one physical node into the appearance of multiple distinct hosts
- Network stack: give each process its own interfaces, routing table, etc.
- Links: connect two nodes by composing underlying links
6
Why Virtual Networks
- Sharing amortizes costs
  – Enterprise network or small ISP does not have to buy separate routers, switches, etc.
  – Large ISP can easily expand to a new data center without buying separate equipment
- Programmability and customizability
- Testing in realistic environments
7
Why Commodity Hardware
- Lower barrier to entry
  – Servers are inexpensive
  – Routing (e.g., Quagga) and forwarding (e.g., Click) software is open source (free)
- No need for specialized hardware
  – Open-source routing software: Quagga, etc.
  – Network processors can be hard to program
- Easy adaptation of the physical infrastructure
  – Expansion is easy: buy more servers
8
Commercial Motivation: Logical Routers
- Consolidation
  – PoP and core
  – Simpler physical topology
  – Fewer physical interconnections
- Application-Specific Routing
- Wholesale Router Market
- Proof-of-Concept Deployment
9
Other Beneficiaries
- Interactive applications: require application-specific routing protocols
  – Gaming
  – VoIP
- Critical services: benefit from a custom data plane
  – Applications that need more debugging info
  – Applications with stronger security requirements
10
Requirements
- Speed: packet forwarding rates that approach native, in-kernel forwarding
- Flexibility: support for custom routing protocols and topologies
- Isolation: separation of resource utilization and namespaces
11
Host Virtualization
- Full virtualization: VMware Server, KVM
  – Advantage: no changes to the guest OS, good isolation
  – Disadvantage: slow
- Paravirtualization: Xen, Viridian
- OS-level virtualization: OpenVZ, VServers, Jail
  – Advantage: fast
  – Disadvantage: requires a special kernel, less isolation
12
Network Stack Virtualization
- Allows each container to have its own
  – Interfaces
  – View of the IP address space
  – Routing and ARP tables
- VServer does not provide this function
  – Solution 1: patch VServer with NetNS
  – Solution 2: OpenVZ
- VServer is already used for PlanetLab
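As a rough illustration of this kind of per-container network stack, here is a minimal sketch using Linux network namespaces (the in-kernel descendant of the NetNS patch mentioned above). It assumes a modern kernel, root privileges, and an illustrative container name "slice1"; it is not the project's actual setup code.

    import subprocess

    def sh(cmd):
        # Run a shell command and fail loudly if it errors.
        subprocess.run(cmd, shell=True, check=True)

    # Create an isolated network stack: its own interfaces, routes, and ARP table.
    sh("ip netns add slice1")

    # Only the loopback device exists inside the namespace until links are added.
    sh("ip netns exec slice1 ip link set lo up")
    sh("ip netns exec slice1 ip addr show")
    sh("ip netns exec slice1 ip route show")   # empty table, separate from the host's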
13
Link Virtualization
- Containers need Ethernet connectivity
  – Routers expect direct Ethernet connections to neighbors
- Linux GRE tunnels support only IP-in-IP
- Solution: Ethernet GRE (EGRE) tunnel
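For a concrete picture of an Ethernet-in-GRE link, here is a sketch using the gretap device type that later Linux kernels ship, which encapsulates full Ethernet frames in GRE. The addresses and the interface name egre0 are placeholders, and this is only an approximation of the EGRE tunnels described here, not the project's tooling.

    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    LOCAL_IP = "10.0.0.1"    # this node's address on the physical substrate
    REMOTE_IP = "10.0.0.2"   # the neighboring node's address

    # gretap carries whole Ethernet frames inside GRE, so the virtual routers
    # at either end see what looks like a direct Ethernet adjacency.
    sh(f"ip link add egre0 type gretap local {LOCAL_IP} remote {REMOTE_IP}")
    sh("ip link set egre0 up")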
14
Synthesis
- Tunnel interface sits outside of the container
  – Permits traffic shaping outside of the container
  – Easier to create point-to-multipoint topologies
- Need to connect the tunnel interface to the container's virtual interface
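One way to provide the container-side virtual interface is a veth pair: one end is pushed into the container's namespace while the peer stays in the host next to the tunnel endpoint. This sketch assumes the slice1 namespace from the earlier sketch; names and addresses are illustrative.

    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # Create the pair: frames sent on one end appear on the other.
    sh("ip link add veth0 type veth peer name veth1")

    # Move one end into the container; the peer (veth0) stays in the host,
    # where it can later be joined to the tunnel interface.
    sh("ip link set veth1 netns slice1")
    sh("ip netns exec slice1 ip link set veth1 up")
    sh("ip netns exec slice1 ip addr add 192.168.1.1/24 dev veth1")
    sh("ip link set veth0 up")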
15
Connecting Interfaces: Bridge
- Linux bridge module: connects the virtual interface with the tunnel interface
  – Speed suffers due to the bridge table lookup
  – Allows point-to-multipoint topologies
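A sketch of this bridged mode with the standard Linux bridge, written against current iproute2 commands (the original work used the tooling of its day); device names continue from the sketches above.

    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # Join the host-side virtual interface and the tunnel interface in one bridge.
    sh("ip link add name br-slice1 type bridge")
    sh("ip link set veth0 master br-slice1")   # container side
    sh("ip link set egre0 master br-slice1")   # tunnel side
    sh("ip link set br-slice1 up")
    # Every frame now goes through a bridge (MAC) table lookup, which is the
    # per-packet cost that the ShortBridge optimization below avoids.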
16
Optimization: ShortBridge
- Kernel module that joins the virtual interface inside the container with the tunnel interface
- Achieves a high packet forwarding rate
17
Evaluation
- Forwarding performance
  – Packets per second
  – Source -> node under test -> sink
- Isolation
  – Jitter/loss measurements with bursty cross traffic
- Scalability
  – Forwarding performance as the number of containers grows
- All tests were conducted on Emulab
  – 3 GHz CPU, 1 MB L2 cache, 800 MHz FSB, 2 GB of 400 MHz DDR2 RAM
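As a simple illustration of how a packets-per-second figure can be obtained on the node under test, the sketch below samples the kernel's per-interface packet counters; it is a stand-in for the actual measurement harness, and the device name is a placeholder.

    import time

    def rx_packets(dev):
        # Cumulative received-packet counter maintained by the kernel.
        with open(f"/sys/class/net/{dev}/statistics/rx_packets") as f:
            return int(f.read())

    DEV = "egre0"        # interface carrying the test traffic
    INTERVAL = 5.0       # seconds between samples

    before = rx_packets(DEV)
    time.sleep(INTERVAL)
    after = rx_packets(DEV)
    print(f"{(after - before) / INTERVAL:.0f} packets/s received on {DEV}")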
18
Forwarding Performance - Click
- Minimal Click configuration
  – Raw UDP receive -> send
- Higher jitter
- ~80,000 pps
19
Forwarding Performance - Bridged
- Allows more flexibility through bridging
- ~250,000 pps
20
Forwarding Performance – Bridged w/o Tunneling
- Xen: often crashes, ~70,000 pps
- OpenVZ: ~300,000 pps
- NetNS: ~300,000 pps
21
Forwarding Performance – Spliced
- Avoids bridging overhead
- Point-to-point topologies only
- ~500,000 pps
22
Forwarding Performance - Direct
- No resource control
- ~580,000 pps
23
Overall Forwarding Performance
24
Forwarding for Different Packet Sizes
25
Isolation
- Setup:
  – 5 nodes, 2 pairs of source+sink
  – 2 NetNS containers in spliced mode
  – pktgen used to generate the cross flow
  – iperf measures jitter on the other flow
- Step function
  – CPU utilization < 99%: no loss, 0.5 ms jitter
  – CPU utilization at or above 100%: loss, 0.5 ms jitter for delivered packets
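A rough sketch of the shape of this experiment: iperf measures jitter and loss on one flow (reported by the iperf UDP server on the far side) while a simple userspace sender stands in for pktgen and produces bursty UDP cross traffic. Addresses, rates, and durations are placeholders.

    import socket
    import subprocess
    import time

    SINK = ("192.168.2.10", 9000)   # sink for the cross-traffic flow
    payload = b"\x00" * 1024

    # Measured flow: UDP iperf client; the far-side server ("iperf -s -u")
    # reports per-interval jitter and loss.
    iperf = subprocess.Popen(["iperf", "-c", "192.168.3.10", "-u", "-b", "50M", "-t", "30"])

    # Cross traffic: bursts of UDP packets separated by idle gaps.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    end = time.time() + 30
    while time.time() < end:
        for _ in range(5000):       # one burst
            s.sendto(payload, SINK)
        time.sleep(0.5)             # idle gap between bursts

    iperf.wait()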
26
Scalability Test Setup
27
Scalability Results
28
Tradeoffs
- Bridge vs. ShortBridge
  – Bridge enables point-to-multipoint topologies
  – ShortBridge is faster
- Data-plane flexibility vs. performance
  – Non-IP forwarding requires user-space processing (Click)
29
Future Work
- Resource allocation and scheduling
  – CPU
  – Interrupts/packet processing
- Long-running deployment on the VINI testbed
- Develop applications for the platform
30
Questions
- Other motivations/applications?
- Other aspects to test?
- Design alternatives?