Impact of Background Traffic on Performance of High-speed TCPs


1 Impact of Background Traffic on Performance of High-speed TCPs
Injong Rhee, North Carolina State University. Collaborators: Sangtae Ha, Lisong Xu, Long Le. Microsoft Workshop.

2 Background
Slow window growth of Reno-style TCP results in under-utilization. Experiment with Linux Iperf (one TCP-SACK flow) over a 1 Gbit backbone link: NC (USA) – Korea – Japan (special thanks to the research team in Japan). [Path diagram labels: Korea 202 ms, Japan 48 ms.]
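To make this concrete, here is a back-of-the-envelope sketch (not from the slides; the RTT and packet size are assumptions) of how long Reno-style growth of one packet per RTT needs to refill such a pipe after a single loss:

```python
# Back-of-the-envelope illustration (assumed parameters, not measured values):
# how long Reno-style growth (+1 packet per RTT) needs to refill the pipe after one loss.
link_bps = 1e9       # 1 Gbit/s backbone link
rtt_s = 0.250        # ~250 ms round-trip time (assumed from the 202 ms + 48 ms path)
pkt_bytes = 1500     # MTU-sized packets (assumed)

bdp_pkts = link_bps * rtt_s / (8 * pkt_bytes)  # packets in flight needed to fill the pipe
rtts_to_recover = bdp_pkts / 2                 # cwnd halves on loss, then grows +1 per RTT
minutes = rtts_to_recover * rtt_s / 60

print(f"BDP ~ {bdp_pkts:.0f} packets; recovery ~ {rtts_to_recover:.0f} RTTs ~ {minutes:.0f} minutes")
```

Under these assumptions a single loss can leave the pipe under-utilized for tens of minutes, which is the motivation for the high-speed variants on the next slides.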

3 High-Speed TCP Variants
CUBIC, H-TCP, TCP-Africa, Compound TCP, HSTCP, BIC-TCP, TCP-AReno, FAST, Scalable, TCP-Westwood, and other new protocols. Many high-speed TCP variants have been proposed. How can we evaluate these protocols, and by which criteria?

4 Window growth patterns
[Figure: window size vs. time for HSTCP, H-TCP, Scalable TCP, BIC-TCP, and CUBIC.] NS2-Linux [?], 400 Mbps, 160 ms one-way delay, 100% BDP buffer, no background traffic.
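To make one of these growth patterns concrete, here is a minimal sketch of CUBIC's published window function W(t) = C(t - K)^3 + W_max; the constants C and beta below are commonly cited defaults, not values taken from these slides:

```python
# Minimal sketch of CUBIC's window curve, W(t) = C*(t - K)^3 + W_max, with
# K = cbrt(W_max * beta / C). Constants are commonly cited defaults, assumed here;
# real implementations add TCP-friendly and fast-convergence heuristics.
C = 0.4      # scaling constant (assumed default)
BETA = 0.2   # multiplicative decrease factor (assumed default)

def cubic_window(t, w_max):
    """Congestion window (packets) t seconds after the last loss event."""
    k = (w_max * BETA / C) ** (1.0 / 3.0)  # time at which the curve returns to w_max
    return C * (t - k) ** 3 + w_max

# Example trajectory after a loss at w_max = 1000 packets:
for t in range(0, 21, 5):
    print(t, round(cubic_window(t, 1000.0), 1))
```

The curve is concave approaching the previous maximum and convex beyond it, which produces the plateau around W_max visible in CUBIC's growth pattern; the other protocols in the figure use different growth functions.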

5 Performance Criteria and Design Tradeoffs
There are many performance criteria: fairness (intra-protocol fairness, RTT-fairness, TCP-friendliness), scalability (high link utilization), and stability. Not all protocols satisfy all of these goals; instead, they make different design tradeoffs. For example, a protocol may give up convergence time to gain more stability, or vice versa.

6 Performance Evaluation Methodology
Internet experiment: the most realistic tests, but hard to reproduce, and we have no idea what happened inside the network. Simulation or Dummynet emulation: easily reproducible and verifiable; the main issue is whether they are realistic and how to recreate Internet environments. Theoretical analysis: provides important insights into protocol behavior, but relies on convenient assumptions and is less useful for comparison (e.g., it captures only first-order behaviors).

7 Testbed emulation - recreating the Internet environment.
Topology: we cannot model the complexity of the entire network, so most evaluations focus on one- or few-hop environments (e.g., a dumbbell). Workload: to compensate, we focus on injecting realistic background traffic into the bottleneck link. Since arriving flows have already traversed many hops, mimicking the traffic pattern seen at one core router has some effect of emulating the topology. This is not perfect, since it does not let us observe protocol behavior under multiple bottlenecks, but that can be overcome with a "parking lot" topology, assuming there are only a few bottleneck links.

8 Realistic background traffic
It is hard to prove realism, but we can at least make the statistics similar: measure the traffic on one Internet link and extract its statistical patterns, such as flow sizes, arrival rates, and transmission rates. A highly detailed recreation of Internet traffic based on these statistical patterns is possible; tools include HARPOON and Tmix. A quick-and-dirty alternative is to emulate the patterns generally observed in the Internet: arrivals that are exponential or heavy-tailed, flow sizes from a varied form of heavy-tail (different body and tail), and log-normal RTT variation.
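A minimal sketch of that quick-and-dirty approach; the distribution parameters below are illustrative placeholders, not the testbed's values:

```python
# Sketch of a simple background-traffic generator following the patterns above:
# exponential inter-arrivals, flow sizes from a log-normal body plus a Pareto tail,
# and log-normal per-flow RTTs. All numeric parameters are illustrative placeholders.
import random

ARRIVAL_RATE = 0.5       # mean flow arrivals per second (placeholder)
TAIL_FRACTION = 0.1      # fraction of flows drawn from the heavy tail (placeholder)
PARETO_ALPHA = 1.2       # tail index; < 2 gives a heavy tail
TAIL_MIN_BYTES = 1_000_000

def next_flow():
    """Return (inter-arrival time in s, flow size in bytes, RTT in ms) for one flow."""
    inter_arrival = random.expovariate(ARRIVAL_RATE)
    if random.random() < TAIL_FRACTION:
        size = random.paretovariate(PARETO_ALPHA) * TAIL_MIN_BYTES  # Pareto tail
    else:
        size = random.lognormvariate(9.0, 1.5)                      # log-normal body
    rtt_ms = random.lognormvariate(4.0, 0.5)                        # log-normal RTT
    return inter_arrival, size, rtt_ms

for _ in range(3):
    print(next_flow())
```

Tools such as Tmix and HARPOON capture much more detail (e.g., per-connection structure measured from real links), but even a generator of this shape reproduces the bursty arrivals and heavy-tailed sizes described above.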

9 Our work We study the impact of background traffic patterns on protocol performance. It is important to understand protocol behavior in Internet-like environments. This will shed light on the different tradeoffs that different protocols make.

10 Testbed (Dummynet) Setup
A total of 18 servers generate background traffic and send and receive protocol flows. Background traffic is pushed in both the forward and backward directions. Long-lived flows use Iperf; short-lived flows use Surge (a web traffic generator). The RTT of each flow is randomly chosen from an input distribution. Experimental parameters: RTT (40 ms to 320 ms) and buffer size (1 MB to 8 MB).

11 Five different types of background traffic
Type I: Surge (log-normal body 93%, Pareto tail 7%), exponential arrivals (0.2)
Type II: Surge (log-normal body 70%, Pareto tail 30%), minimum file size for the tail 1 MB, exponential arrivals (0.6)
Type III: Type I (90%) plus P2P traffic (10%); P2P traffic is Pareto with a 3 MB minimum
Type IV: 100% log-normal body
Type V: Type II plus 12 long-lived Iperf flows

12 Link utilization and stability
[Panels: No background (1 MB buffer) vs. Type II (1 MB buffer).] Some protocols reduce utilization when the rate variance of the background traffic increases.

13 Link utilization, stability and loss synchronization
[Panels: No background vs. Type II, showing utilization of the high-speed TCP flows and the background traffic.] High rate variation of the protocol flows may cause loss synchronization and low utilization.

14 Stability vs. Link utilization
[Plot: per-protocol stability (measured as CoV, the standard deviation divided by the mean) vs. link utilization.]
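As a reference for how this stability metric is computed, a minimal sketch (the throughput samples below are made up for illustration):

```python
# Coefficient of variation (CoV) = standard deviation / mean, computed over
# per-interval throughput samples of a flow; lower CoV means a more stable flow.
import statistics

def cov(samples):
    return statistics.stdev(samples) / statistics.mean(samples)

stable_flow   = [480, 500, 510, 495, 505]   # Mbps, made-up samples
unstable_flow = [200, 750, 300, 680, 420]   # Mbps, made-up samples

print(f"stable CoV   = {cov(stable_flow):.3f}")
print(f"unstable CoV = {cov(unstable_flow):.3f}")
```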

15 Link utilization and stability under various traffic types (HTCP)
[Plots: link utilization and CoV of H-TCP across the background traffic types.]

16 Fairness (measured in throughput ratio)
[Panels: TCP friendliness (RTT 42 ms, 2 MB buffer); intra-protocol fairness (RTT 82 ms); RTT-fairness (flow 1: 42 ms, flow 2: 162 ms).] Generally, H-TCP shows excellent fairness regardless of traffic type. All protocols improve fairness with more variance in the background traffic, but the amount of traffic makes the biggest difference (Type V).
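For reference, a minimal sketch of the throughput-ratio metric used here, alongside Jain's fairness index for comparison (Jain's index is not the slides' metric; all throughput values below are hypothetical):

```python
# Fairness as the throughput ratio between two competing flows (the slides' metric),
# plus Jain's fairness index shown only for comparison. Values are hypothetical.
def throughput_ratio(a, b):
    return min(a, b) / max(a, b)  # 1.0 means the two flows share equally

def jains_index(rates):
    return sum(rates) ** 2 / (len(rates) * sum(r * r for r in rates))

# Hypothetical throughputs (Mbps): a high-speed flow competing with a regular TCP flow.
print(throughput_ratio(620.0, 95.0))   # TCP-friendliness as a ratio
print(jains_index([620.0, 95.0]))
```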

17 TCP friendliness
[Panels: No background vs. Type V.] Generally, all protocols improve fairness with Type V background traffic.

18 TCP-friendliness: another look
Type II traffic with varying numbers of high-speed flows (320 ms RTT). We measured the throughput of the Type II traffic and do not find much difference in throughput.

19 Convergence speed
[Panels: CUBIC and H-TCP, with no background traffic vs. Type II.]

20 Conclusion
Types of background traffic reveal "the beast" in disguise. For example, some protocols trade convergence speed for higher stability, while others trade stability for faster convergence and fairness. The rate variance of background traffic affects stability and also link utilization. All protocols improve fairness and convergence speed with more background traffic (size matters more than variance).

21 Intra-protocol fairness
[Panels: No background (2 MB buffer) vs. Type V (2 MB buffer).]

22 Intra-protocol fairness (FAST)
Type I traffic, 1 MB buffer. Incorrect estimation of the minimum RTT causes different flows to run at different rates.
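To see why the baseRTT estimate matters, here is a simplified sketch based on the window update published for FAST TCP; the gamma, alpha, and RTT values below are assumptions, not values from these experiments:

```python
# Simplified FAST-style window update from the FAST TCP literature:
#   w <- min(2w, (1 - g)*w + g*(baseRTT/RTT*w + alpha))
# A flow that overestimates baseRTT (e.g., it started while queues were already full)
# believes it has fewer than alpha packets queued and settles at a larger window.
GAMMA = 0.5   # smoothing factor (assumed)
ALPHA = 200   # target number of queued packets (assumed)

def fast_update(w, base_rtt, rtt):
    return min(2 * w, (1 - GAMMA) * w + GAMMA * (base_rtt / rtt * w + ALPHA))

# Two flows that see the same current RTT but hold different baseRTT estimates
# settle at very different windows, hence different rates (RTTs held fixed here).
w_good, w_bad = 1000.0, 1000.0
for _ in range(200):
    w_good = fast_update(w_good, base_rtt=0.100, rtt=0.120)  # correct baseRTT
    w_bad  = fast_update(w_bad,  base_rtt=0.115, rtt=0.120)  # overestimated baseRTT
print(round(w_good), round(w_bad))  # roughly 1200 vs. ~4700 packets
```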

23 Link utilization vs. buffer size
As the buffer space increases, stability improves (320 ms RTT).

24 Impact of buffer sizes
Buffer sizes from 1 MB to 8 MB, four high-speed flows with the same RTT (320 ms). As the buffer size increases, the CoV of all protocols decreases.

25 Impact of congestion
Buffer size 2 MB, two high-speed flows with the same RTT (40 ms to 320 ms), plus a dozen long-lived TCP flows. Convex protocols show large variations (the convex ordering still exists).

26 NS2 Simulation Results (Loss Model)

