Emulating AQM from End Hosts. Presenters: Syed Zaidi, Ivor Rodrigues.


1 Emulating AQM from End Hosts. Presenters: Syed Zaidi, Ivor Rodrigues

2 Introduction: Congestion control at the end host. Treating the network as a black box. Main indicator: round-trip time (RTT). Probabilistic Early Response TCP (PERT).

3 Motivation: Implementing AQM at the router is not easy, and current techniques depend on packet loss to detect congestion. It is easier to modify the TCP stack at the end host, and this can work with any AQM mechanism at the router.

4 Challenges: RTT-based estimation has been characterized as inaccurate. It is hard to measure queuing delays when they are small compared to the RTT.

5 Accuracy of End-host Based Congestion Estimation: Previous studies looked at the relation between increases in RTT and packet loss for a single stream. Results: 1. Losses are preceded by an increase in RTT in very few cases. 2. Responding to a false prediction results in a severe loss of performance.

6 Accuracy of End-host Based Congestion Estimation: Transition 4 is a false negative and transition 5 is a false positive.

7 Accuracy of End-host Based Congestion Estimation: Previous studies claim transition 5 happens more often than transition 2. The limitation of previous studies is that they look at the relation between higher RTT and packet loss for a single flow; packet loss should be observed at the router, not for a single flow.

8 Accuracy of End-host Based Congestion Estimation: ns-2 simulation. Two routers connected by a 100 Mbps link, with end nodes on 500 Mbps links; different combinations of long-term and short-term flows. The reference flows have an RTT of 60 ms, which corresponds to 12,000 km.

9 Different Congestion Predictors: Efficiency of packet-loss prediction = (number of 2 transitions) / (2 transitions + 5 transitions). False positives = (number of 5 transitions) / (2 transitions + 5 transitions). False negatives = (number of 4 transitions) / (2 transitions + 4 transitions).
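The three ratios on this slide follow directly from counts of the transitions; a minimal sketch (the function name and the sample counts are hypothetical, only the formulas come from the slide):

```python
def predictor_metrics(n2, n4, n5):
    """Compute the slide's predictor metrics from transition counts.

    n2: correctly predicted losses (transition 2)
    n4: missed losses, false negatives (transition 4)
    n5: predicted losses that did not occur, false positives (transition 5)
    """
    efficiency = n2 / (n2 + n5)       # fraction of predictions that were real
    false_positive_rate = n5 / (n2 + n5)
    false_negative_rate = n4 / (n2 + n4)
    return efficiency, false_positive_rate, false_negative_rate

# Example with made-up counts:
eff, fp, fn = predictor_metrics(80, 20, 20)
```

Note that efficiency and the false-positive rate sum to 1 by construction; they partition the predictor's positive predictions.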

10 Previous Work: In 1989 the first paper was published proposing to enhance TCP with delay-based congestion avoidance. TRI-S: throughput is used to detect congestion instead of delay. DUAL: the current RTT is compared with the average of the minimum and maximum RTT. Vegas: achieved throughput is compared to the expected throughput based on the minimum observed RTT. CIM: a moving average of a small number of RTT samples is compared with a moving average of a large number of RTT samples. CARD: Congestion Avoidance using Round-trip Delay.

11 Improving Congestion Prediction: Vegas, CARD, TRI-S, and DUAL obtain RTT samples once per RTT. Smoothed RTT: exponentially weighted moving average.
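The smoothed RTT is an exponentially weighted moving average over per-packet RTT samples; a minimal sketch, assuming the 0.99 history weight that a later slide calls "srtt 0.99" (the function name is hypothetical):

```python
def update_srtt(srtt, sample, alpha=0.99):
    """EWMA update of the smoothed RTT.

    alpha is the weight on history; alpha=0.99 matches the
    "srtt 0.99" predictor named on the later slide.
    """
    if srtt is None:          # first sample initializes the estimate
        return sample
    return alpha * srtt + (1.0 - alpha) * sample
```

With a weight this close to 1, a single inflated sample barely moves the estimate, which is exactly the noise suppression the slide is after: a 200 ms sample against a 100 ms estimate moves srtt only to about 101 ms.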

12 Improving Congestion Prediction: We improve accuracy with more frequent sampling and history information. End-host congestion prediction is not perfect, so we need mechanisms to counter this inaccuracy.

13 Response to Congestion Prediction: How do we reduce the impact of false positives? Keep the amount of response small. Respond probabilistically.

14 Response to Congestion Prediction: How do we reduce the impact of false positives? Keep the amount of response small; respond probabilistically. Vegas: not much loss in throughput; maintains high link utilization; buildup of the bottleneck queue "may not be cleared out" quickly.

15 Response to Congestion Prediction: How do we reduce the impact of false positives? Vegas: no loss of throughput; maintains high link utilization; buildup of the bottleneck queue "may not be cleared out" quickly. This causes a tradeoff in the fairness properties of TCP to maintain high link utilization. Vegas uses "additive decrease" for early congestion response.

16 Response to Congestion Prediction: How do we reduce the impact of false positives? Vegas: no loss of throughput; maintains high link utilization; buildup of the bottleneck queue "may not be cleared out" quickly. This causes a tradeoff in the fairness properties of TCP to maintain high link utilization. Vegas uses "additive decrease" for early congestion response; AI/AD for these transitions results in compromising the fairness properties of the protocol.

17 Response to Congestion Prediction: How do we reduce the impact of false positives? Vegas: compared to flows that start earlier, flows that start late may have a different idea of the minimum RTT on the path (RTT = propagation delay + queuing delay). This gives an unfair advantage to flows starting later, granting them a larger share of the bandwidth.

18 Response to Congestion Prediction: How do we reduce the impact of false positives? Keep the amount of response small; respond probabilistically. When the probability of false positives is high, the probability of responding to an early congestion signal should be low: high probability of false positives, low response; low probability of false positives, high response.

19 Designing the Probabilistic Response: False positives occur when the queue length is smaller; in particular, when the queue length is less than 50% of the total queue size. The smoothed RTT with weight 0.99 (srtt 0.99) is the congestion signal predictor.

20 Designing the Probabilistic Response: What should the response function be? The response should be small for a low queue size and large for a large queue size. srtt 0.99 is the congestion signal predictor.

21 Designing the Probabilistic Response: What should the response function be? We emulate the probabilistic response function of RED. Hence the name: P - Probabilistic, E - Early, R - Response, T - TCP.

22 PERT: T_min = minimum threshold = P + 5 ms = 5 ms. T_max = maximum threshold = P + 10 ms = 10 ms. p_max = maximum probability of response = 0.05. P = propagation delay = ?? = 0 (the thresholds are applied to the estimated queuing delay, so the unknown propagation component drops out).
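With these thresholds, the RED-style response can be sketched as a linear ramp on the estimated queuing delay (smoothed RTT minus the minimum observed RTT, which cancels the propagation delay P). This is an illustrative sketch of the region between the thresholds, not the exact PERT curve, which continues to rise above T_max:

```python
T_MIN = 0.005   # minimum threshold, 5 ms (from the slide)
T_MAX = 0.010   # maximum threshold, 10 ms (from the slide)
P_MAX = 0.05    # maximum response probability at T_max (from the slide)

def response_probability(srtt, min_rtt):
    """RED-style ramp on the estimated queuing delay, srtt - min_rtt.

    Below T_MIN: never respond. Between T_MIN and T_MAX: probability
    rises linearly from 0 to P_MAX. At or above T_MAX this sketch
    clamps to P_MAX (PERT's actual curve keeps rising beyond T_max).
    """
    qdelay = srtt - min_rtt         # propagation delay cancels here
    if qdelay < T_MIN:
        return 0.0
    if qdelay >= T_MAX:
        return P_MAX
    return P_MAX * (qdelay - T_MIN) / (T_MAX - T_MIN)
```

A sender would draw a uniform random number per ACK and cut its window early when the draw falls below this probability, which is how the small p_max keeps false positives cheap.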

23 Probabilistic Response Curve used by PERT (figure).

24 Is it necessary to have a 50% reduction in the congestion window in the case of early response? Router buffers are commonly set to the bandwidth-delay product of the link, since a TCP flow reduces its window by 50%. If B is the buffer size and f is the window-reduction factor, a relationship between them follows (equation shown on slide). Since the flows respond before the bottleneck queue is full, a large multiplicative decrease can result in lower link utilization, but reducing the amount of response makes it hard to empty the buffer, leading to unfairness.
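The buffer-sizing relationship the slide alludes to can be reconstructed from the standard argument; this is a reconstruction of the usual rule of thumb under the assumption that f is the fraction of the peak window W kept after an early response, not necessarily the exact equation on the slide. Keeping the link busy after a cut requires fW >= C*RTT, while the buffer must absorb W - C*RTT, giving:

```latex
W = C \cdot RTT + B, \qquad f \cdot W = C \cdot RTT
\;\;\Longrightarrow\;\;
B = \frac{1 - f}{f} \, C \cdot RTT
```

For TCP's f = 1/2 this recovers the familiar B = C*RTT (one bandwidth-delay product); a gentler decrease (f closer to 1) needs a much smaller buffer, while a harsher one needs more buffering to avoid draining the link.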

25 Experimental Evaluation: Impact of bottleneck link bandwidth. Setup: single bottleneck with bottleneck bandwidth from 1 Mbps to 1 Gbps, RTT from 10 ms to 1 s. Simulations run for 400 s; results measured during the stable period. For this experiment the RTT is fixed at 60 ms.

26 Experimental Evaluation: Impact of round-trip delays. The bottleneck link bandwidth is 150 Mbps and the number of flows is 50. The end-to-end delay is varied from 10 ms to 1 s.

27 Experimental Evaluation: Impact of varying the number of long-term flows. Link bandwidth set to 500 Mbps, end-to-end delay set to 60 ms.

28 Setup 1: bottleneck link bandwidth 150 Mbps; end-to-end delay 60 ms; 50 long-term flows; short-term flows varied from 10 to 1000. Setup 2: bottleneck link bandwidth 150 Mbps; end-to-end delay n * 12 ms, 1 < n < 10; 100 short-term flows.

29 Multiple Bottlenecks: bottleneck link bandwidth 150 Mbps, delay 5 ms; link capacity 1 Gbps, delay 5 ms. Response to sudden changes in responsive traffic.

30 (figure only)

31 Modeling of PERT: Forward propagation delay; C – link capacity; q(t) – queue size at time t. Note: the queuing delay is perceived before R(t). The window dynamics of PERT are given by equations (A), (2), and (3) (shown on slide).

32 Modeling of PERT: Note: PERT makes its decision at the end host, not at the router. Incoming rate y(t): equations (4), (5), and (6) (shown on slide).

33 Modeling of PERT: By equation (A), equation (7) follows (shown on slide).

34 Simulations: Stability.

35 Emulating PI.

36 Discussion: Impact of reverse traffic. Co-existence with non-proactive flows.

37 Conclusion: Congestion prediction at the end host is more accurate than characterized by previous studies, but further research is required to improve the accuracy of end-host delay-based predictors. PERT emulates the behavior of AQM in its congestion response function. Its benefits are similar to ECN, and its link utilization is similar to router-based schemes. PERT is flexible, in the sense that other AQM schemes can be emulated.

38 A Few of Our Observations: The authors have put in a good deal of effort, but would the scheme be as simple and appealing if implemented on any kind of network in real time? What modifications would have to be made at the end host, such as additional hardware/software, and at what cost? Is it compatible with other versions of TCP? Would this implementation give less-proactive or misbehaving connections an advantage, letting them exploit my readiness to lessen the job a router has to perform?

39 Questions

