Presentation on theme: "High-Fidelity Switch Models for SDN Emulation"— Presentation transcript:
1. High-Fidelity Switch Models for SDN Emulation
Danny Y. Huang, Kenneth Yocum, Alex C. Snoeren
University of California, San Diego
2. Buying OpenFlow Switches
Suppose you manage the network of a small company. The network runs an application that uses a key-value store such as Redis, and your boss wants you to build an SDN cluster for the app. Three companies approach you with free sample switches: an HP ProCurve, a Fulcrum Monaco, and a Quanta LB4G. Which one should you buy? Naturally, you want to pick the one that works best for your application.
3. Buying OpenFlow Switches
You run your app on one of the sample switches, the HP. (Details of the workload come later.) This graph shows how fast your Redis clients complete their queries on the HP: 30% of the queries never completed, they timed out. Worried, you try another switch.
4. Buying OpenFlow Switches
You swap in the Monaco switch: faster clients, fewer losses. The Monaco is doing pretty well. Maybe the next switch will do even better?
5. Buying OpenFlow Switches
You swap in the Quanta switch and see drastically different performance. Confusing: same topology, same app, same workload, yet different performance. Why?
6. Buying OpenFlow Switches
You show the performance graphs to your boss, who likes the Monaco and wants to scale it out. In the one-switch case, the Monaco does better for our workload, but can we draw the same conclusion for 1,000 Monaco switches? What about a combination of vendors? Which vendor or vendors should you choose to maximize performance, and how do you predict it?
7. Emulating OpenFlow Networks
You think of using emulation to predict the performance. There are great tools that simulate and emulate the data plane well, reproducing link delays, queue lengths, and topologies in software. But this is OpenFlow: the control plane is involved, and control-plane performance is coupled with data-plane performance. Existing options are not sufficient to capture the complexities of the control plane. You could use a popular option like Mininet, which leverages Open vSwitch (OVS) to emulate the control plane. But OVS itself is just another OpenFlow switch: it introduces performance variations of its own, just like its hardware counterparts, so how OVS performs doesn't tell you much about the hardware.
8. Problem and Goal
Problem: it is hard to predict the performance of an OpenFlow network. OpenFlow switches differ, and vendors introduce variations. Existing emulation frameworks are not good enough: either they emulate only the data plane, like Emulab or ModelNet, or they do not account for control-plane variations across vendors. Our goal is to predict the performance of an OpenFlow network with realism: to design an emulator that captures vendor variations, and to measure these variations in the control plane. But first we need to know: what are some of these vendor variations?
9. Variations across Vendors
Examples include flow table size: a switch with a smaller flow table can hold fewer active flows, so more flows have to go through the controller, and performance suffers. Flow management policies can also vary across switches: some switches put new rules into a software flow table if rules arrive at the hardware table too fast, while others have no software table at all and may simply drop new rules that arrive too quickly. CPUs differ too. The CPU handles all communication between the switch and the controller, so if the CPU is slow, this communication is slow. This is a long list with complex interactions. For this talk, we focus on the switch CPU's effect on control-path delays; other differences are highlighted in our paper.
10. Control-Path Delays
How do switch CPUs affect control-path latency? Recall how OpenFlow switches work. There are three components: the application workload, the switch, and the controller (POX). Suppose the controller runs the L2-learning module, which installs a rule for every new flow. When a new flow arrives, its first packet is not matched in the flow table, so the switch generates a packet-in event; the controller installs a rule by issuing a flow-mod, and the packet exits. Subsequent packets are matched in the flow table and do not involve the control plane at all. Data-plane packets are matched in the TCAM, which is very fast, but control-plane traffic is handled by a wimpy CPU and is much slower. This introduces an ingress delay and an egress delay, and for short flows this extra overhead can be significant: it disproportionately affects them. To predict performance across vendors, we need to characterize the ingress and egress delays for a given workload, so we designed an experiment to measure just these two delays.
11. Measuring Control-Path Delays
We built a test harness to characterize the control-path delays under a real application workload. A measurement server is connected to the switch. The controller runs the default L2-learning module and installs a rule for every new flow. The client creates a short flow every 50 ms; each short flow is a Redis query for a 64-byte value from the server. We measure the time taken for each client query to complete, as well as the ingress and egress delays at the switch. Note that we run the controller, client, and server on the same machine to facilitate time measurement, and we make sure that none of the applications is CPU bound. With this setup, we can generate the workload and observe how different switches inject different control-path delays: we simply swap the appropriate switch (HP ProCurve, Fulcrum Monaco, Quanta LB4G, or OVS) in and out.
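The client side of such a harness can be sketched as a loop that issues one short query every 50 ms and records each completion time. This is an illustrative sketch, not the paper's actual harness; `run_workload` and `query_fn` are hypothetical names, and `query_fn` stands in for a Redis GET of a 64-byte value.

```python
import time

def run_workload(query_fn, n_queries=100, interval_s=0.05):
    """Issue n_queries short queries, one every interval_s seconds,
    recording each completion time in milliseconds.

    query_fn is a stand-in for the real operation (in the harness,
    a Redis GET for a 64-byte value); this sketch shows only the
    timing structure.
    """
    completion_times_ms = []
    for _ in range(n_queries):
        start = time.monotonic()
        query_fn()  # the short flow: one request/response on the data plane
        completion_times_ms.append((time.monotonic() - start) * 1000.0)
        time.sleep(interval_s)  # pacing: one new flow every 50 ms by default
    return completion_times_ms
```

Because each query starts a fresh flow, its first packet takes the slow control path, so these completion times directly reflect the ingress and egress delays under study.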
12. Measuring Redis Query Times
This is the same graph of Redis query completion times (ms) shown earlier: different performance across switches, and we didn't know why. Thanks to the harness, we can now break the measurement down into ingress and egress delays to see where the differences come from.
13. Measuring Ingress Delay
First we focus on the ingress delay. Performance differs among the hardware switches, and OVS is faster than all of them. This is an important observation; we will come back to it shortly.
14. Measuring Egress Delay
On egress, OVS again shows almost no delay. To understand why Redis behaves differently across switches, we start designing an emulator that captures the ingress and egress delays. In both cases, recall that OVS is faster than the hardware on the control path; this is a convenient fact. We can slow down OVS's ingress and egress delays to match the hardware, transforming the dotted line (OVS) into the colored lines (the hardware switches).
15. Implementing the Emulator
The idea is to slow down the control traffic (packet-in and flow-mod events) between OVS and the controller (POX), with no change to either the controller or OVS.
16. Emulator Proxy
We insert a proxy between OVS and the controller that delays packet-in and flow-mod events using inverse transform sampling; details are in the paper. The key idea: if the hardware delays an event by some amount with some probability, then the proxy delays it by the same amount with the same probability. We apply the same technique to egress. The result is that OVS plus the proxy approximates the control-path characteristics of the physical OpenFlow switch.
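The delaying step can be sketched as below, assuming the proxy already has a message and its target delay in hand. This is a sketch only: a real proxy would relay the OpenFlow TCP stream between OVS and POX, and `delayed_forward` and `send_fn` are hypothetical names.

```python
import threading

def delayed_forward(msg, delay_s, send_fn):
    """Hold one control-plane message (e.g. a packet-in or flow-mod)
    for delay_s seconds, then forward it via send_fn.

    Sketch only: a real emulator proxy would sit on the OpenFlow TCP
    connection and pick delay_s by inverse transform sampling.
    """
    # A timer thread delivers the message without blocking the proxy,
    # so other control messages keep flowing while this one is held.
    threading.Timer(delay_s, send_fn, args=(msg,)).start()
```

The non-blocking delivery matters: delaying one packet-in must not stall unrelated events that the hardware would have processed concurrently.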
17. Evaluation
To evaluate, we incorporated the delay measurements of the HP switch into the emulator and plugged the emulator into the same test harness. This graph (a CDF of Redis query completion times in ms) compares the emulator against the HP hardware: red is the hardware, blue is the emulated switch. We slow OVS down from the dotted line to the blue line, which is a reasonable approximation of the hardware.
18. Evaluation
Similarly, we measure the ingress and egress delays on the Monaco and the Quanta and incorporate those measurements into the emulator, so OVS slows down like each of them. There is still a gap between the emulated and hardware curves, but remember: we emulate only the ingress and egress delays, so this is a reasonable approximation.
19. Summary and Future Work
Summary: it is hard to predict performance due to vendor variations. We designed an emulator for control-path delays; it is simple, but achieves a reasonable approximation. Future work: first, increase realism by capturing more artifacts, recalling the long list of vendor variations such as flow table sizes and flow management policies. Second, expand workload coverage and automate switch measurements: today, given a new switch, we have to run a workload, measure the switch, and feed the measurements into the emulator, and automating this would let the emulator work for other workloads. Third, capture interactions among multiple switches: we built the emulator for one switch, and emulating a larger cluster requires capturing the complexities of switch interactions.
Acknowledgements: Marco Canini and Dan Levin (TU Berlin), George Porter (UC San Diego).
21. Inverse Transform Sampling
Goal: emulate switch X, which introduces ingress delay t with probability p.
Algorithm:
1. Measure the delay distributions of OVS and X; make them into lookup tables.
2. Measure how much OVS has delayed an event; call this tOVS.
3. Look up tOVS in the OVS table. This returns a probability p.
4. Look up p in X's table. This returns a delay tX.
5. Introduce an additional delay of (tX - tOVS).
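The algorithm above can be sketched with the two lookup tables stored as sorted lists of measured delays. This is an illustrative sketch under that assumption; `extra_delay`, `ovs_delays`, and `x_delays` are hypothetical names, not the paper's code.

```python
import bisect

def extra_delay(t_ovs, ovs_delays, x_delays):
    """Given an observed OVS delay t_ovs, return the additional delay
    the proxy should add so the total matches switch X at the same
    quantile of the delay distribution.

    ovs_delays and x_delays are sorted lists of measured delays (ms),
    serving as the two lookup tables from the algorithm above.
    """
    # Steps 2-3: the empirical CDF probability p of t_ovs under OVS.
    p = bisect.bisect_right(ovs_delays, t_ovs) / len(ovs_delays)
    # Step 4: invert X's empirical CDF at the same probability p.
    idx = min(int(p * len(x_delays)), len(x_delays) - 1)
    t_x = x_delays[idx]
    # Step 5: OVS has already delayed the event by t_ovs, so the
    # proxy adds only the gap (never a negative delay).
    return max(t_x - t_ovs, 0.0)
```

For example, if t_ovs sits at the median of the OVS distribution, the proxy delays the event up to the median of X's distribution, so the combined delay matches X with the same probability.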
22. Evaluation (QQ Plots)
[Figure: QQ plots for the HP, Monaco, and Quanta; y-axis is query completion time on the emulator (ms), x-axis is query completion time on the hardware switches (ms).]