
1 1Texas A&M University Congestion Control Algorithms of TCP in Emerging Networks Sumitha Bhandarkar Under the Guidance of Dr. A. L. Narasimha Reddy September 16, 2005

2 2Texas A&M University Why TCP Congestion Control? Designed in the early ’80s –Still the most predominant protocol on the net Continuously evolves –IETF is developing an RFC to keep track of TCP changes! Has “issues” in emerging networks –We aim to identify problems and propose solutions for TCP in high-speed networks Motivation

3 3Texas A&M University Link speeds have increased dramatically –270 TB collected by PHENIX (Pioneering High Energy Nuclear Interaction eXperiment) –Data transferred from Brookhaven National Laboratory, NY to the RIKEN research center, Tokyo –Typical rate 250 Mbps, peak rate 600 Mbps –OC-48 (2.4 Gbps) from Brookhaven to ESnet, transpacific line (10 Gbps) served by SINET to Japan –Used GridFTP (parallel connections with data striping) Source: CERN Courier, Vol. 45, No. 7 Motivation

4 4Texas A&M University Historically, high-speed links present only at the core –High levels of multiplexing (low per-flow rates) –New architectures for high-speed routers Now, high-speed links are available for transfer between two endpoints –Low levels of multiplexing (high per-flow rates) Motivation

5 5Texas A&M University Outline TCP on high-speed links with low multiplexing –Design, analysis and evaluation of aggressive probing mechanism (LTCP) Impact of high RTT Impact on router buffers and loss rates (LTCP-RCS) TCP on high-speed links with high multiplexing –Impact of packet reordering (TCP-DCR) Future Work

6 6Texas A&M University TCP on high-speed links with low multiplexing –Design, analysis and evaluation of aggressive probing mechanism (LTCP) Impact of high RTT Impact on router buffers and loss rates (LTCP-RCS) TCP on high-speed links with high multiplexing –Impact of packet reordering (TCP-DCR) Future Work Where We are...

7 7Texas A&M University TCP in High-speed Networks TCP’s one-packet-per-RTT increase does not scale well* (for RTT = 100 ms, packet size = 1500 bytes) Motivation *Source: RFC 3649

8 8Texas A&M University Design Constraints –More efficient link utilization –Fairness among flows of similar RTT –RTT unfairness no worse than TCP –Retain AIMD behavior TCP in High-speed Networks

9 9Texas A&M University Layered congestion control –Borrow ideas from layered video transmission –Increase layers, if no losses for extended period –Per-RTT window increase more aggressive at higher layers TCP in High-speed Networks

10 10Texas A&M University Layering –Start layering when window > W_T –Associate each layer K with a step size δ_K –When the window increases by δ_K since the previous addition of a layer, increment the number of layers –At layer K, increase the window by K per RTT Number of layers is determined dynamically based on current network conditions. LTCP Concepts (Cont.) TCP in High-speed Networks
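Read as an algorithm, the layering rule amounts to simple bookkeeping. The Python fragment below is only an illustrative sketch: the layering threshold WT and the step-size function delta(K) are placeholders, since their actual values are fixed by the analysis on the following slides, not by this sketch.

```python
# Minimal sketch of LTCP layer bookkeeping (illustrative only).
# WT and delta(K) are placeholders; the real choices follow from the
# constraints analyzed on the later slides.

WT = 50.0                 # assumed layering threshold (packets)

def delta(K):
    """Step size for layer K -- hypothetical example only."""
    return WT * K

class LtcpLayering:
    def __init__(self):
        self.K = 1                        # below WT the flow behaves like plain TCP
        self.W_at_last_layer_add = 0.0

    def on_window_update(self, cwnd):
        if self.K == 1 and cwnd > WT:
            # start layering once the window crosses the threshold
            self.K = 2
            self.W_at_last_layer_add = cwnd
        elif self.K >= 2 and cwnd - self.W_at_last_layer_add >= delta(self.K):
            # window grew by delta_K since the last layer addition
            self.K += 1
            self.W_at_last_layer_add = cwnd
        return self.K                     # within layer K the window grows by K per RTT
```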

11 11Texas A&M University [Figure: layer structure. W_K is the minimum window corresponding to layer K; the number of layers is K when W_K ≤ W < W_{K+1}.] LTCP Concepts TCP in High-speed Networks

12 12Texas A&M University Constraint 1: The rate of increase for a flow at a higher layer should be lower than that of a flow at a lower layer (for all K_1 > K_2, with K_1, K_2 ≥ 2) Framework TCP in High-speed Networks [Figure: layer diagram; number of layers = K when W_K ≤ W < W_{K+1}.]

13 13Texas A&M University Constraint 2: After a loss, the recovery time for a larger flow should be longer than that of the smaller flow (for all K_1 > K_2, with K_1, K_2 ≥ 2) Framework TCP in High-speed Networks [Figure: window vs. time for two flows; Flow 1 recovers a reduction WR_1 with slope K_1′ in time T_1, Flow 2 recovers WR_2 with slope K_2′ in time T_2.]

14 14Texas A&M University Decrease behavior: –Multiplicative decrease Increase behavior: –Additive increase with additive factor = layer number, i.e., W = W + K/W per ACK Design Choice TCP in High-speed Networks
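Putting the two rules together, a minimal sketch in the same illustrative style as above (β = 0.15 as chosen on a later slide, K supplied by the layer bookkeeping):

```python
BETA = 0.15               # multiplicative decrease factor (chosen on a later slide)

def on_ack(cwnd, K):
    """Additive increase: K packets per RTT, i.e. K/W per ACK."""
    return cwnd + K / cwnd

def on_loss(cwnd):
    """Multiplicative decrease by a factor of beta."""
    return cwnd - BETA * cwnd
```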

15 15Texas A&M University Analyze two flows operating at adjacent layers –Should hold for other cases through induction Ensure constraints satisfied for worst case –Should work in other cases After loss, drop at most one layer –Ensures smooth layer transitions Determining Parameters TCP in High-speed Networks

16 16Texas A&M University –Before loss: Flow 1 at layer K, Flow 2 at layer (K-1) –After the loss, four possible cases –The worst case occurs when W_1 is close to W_{K+1} and W_2 is close to W_{K-1} –Substitute the worst-case values into the constraint on decrease behavior Determining Parameters (Cont.) TCP in High-speed Networks

17 17Texas A&M University –Analysis yields an inequality [formula omitted]; the more strongly it is satisfied, the slower the increase in aggressiveness –We choose [formula omitted] –If layering starts at W_T, by substitution, [formula omitted] Determining Parameters TCP in High-speed Networks

18 18Texas A&M University Since after a loss at most one layer is dropped, [formula omitted] By substitution and simplification, [formula omitted] We choose β = 0.15 Choice of β TCP in High-speed Networks

19 19Texas A&M University TCP in High-speed Networks Other Analyses Time to claim bandwidth –The window corresponding to the BDP is at layer K –[formula omitted] –For TCP, T(slowstart) + (W - W_T) RTTs (assuming slow start ends when window = W_T)
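The LTCP expression on this slide did not survive the transcript. A plausible reconstruction from the layering rules already stated (while at layer k the window grows by k per RTT and must grow by δ_k before the next layer is added) is:

$$T_{\text{LTCP}} \approx T_{\text{TCP-like}}(W_T) + \sum_{k=2}^{K} \frac{\delta_k}{k}\ \text{RTTs},$$

where K is the layer containing the window corresponding to the BDP and the first term is the initial TCP-like phase up to W_T. This is a reconstruction, not the slide's original formula.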

20 20Texas A&M University TCP in High-speed Networks Other Analyses Packet recovery time –Window reduction is by βW –After a loss, the per-RTT increase is at least (K-1) –Thus, the time to recover from the loss is [formula omitted] RTTs –For TCP, it is W/2 RTTs –Speed-up in packet recovery time [formula omitted]
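The formulas on this slide were images and are lost; reconstructing them from the stated reasoning (a reduction of βW is recovered at a rate of at least K−1 packets per RTT):

$$T_{\text{recover}} \le \frac{\beta W}{K-1}\ \text{RTTs}, \qquad \text{speedup} \ge \frac{W/2}{\beta W/(K-1)} = \frac{K-1}{2\beta}.$$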

21 21Texas A&M University TCP in High-speed Networks

22 22Texas A&M University Steady State Throughput BW = N_D / T_D, where K′ is the layer at the steady-state window TCP in High-speed Networks
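The expression for BW was also an image. One consistent way to expand the terms, assuming the window oscillates between (1−β)W and W at layer K′ between loss events, with N_D the packets delivered between two losses and T_D the time between them, is:

$$T_D = \frac{\beta W}{K'}\ \text{RTTs}, \qquad N_D \approx \Big(1-\frac{\beta}{2}\Big)W \cdot \frac{\beta W}{K'}, \qquad BW = \frac{N_D}{T_D} \approx \Big(1-\frac{\beta}{2}\Big)\frac{W}{\text{RTT}}.$$

This is a reconstruction under the stated assumptions, not the slide's original derivation.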

23 23Texas A&M University Response Curve TCP in High-speed Networks

24 24Texas A&M University TCP on high-speed links with low multiplexing –Design, analysis and evaluation of aggressive probing mechanism (LTCP) Impact of high RTT Impact on router buffers and loss rates (LTCP-RCS) TCP on high-speed links with high multiplexing –Impact of packet reordering (TCP-DCR) Future Work Where We are...

25 25Texas A&M University Two-fold dependence on RTT –The smaller the RTT, the faster the window grows –The smaller the RTT, the faster the aggressiveness increases Easy to offset this –Scale K using the “RTT compensation factor” K_R –Thus, the increase behavior is W = W + (K_R * K) / W –The decrease behavior is still W = (1 - β) * W Impact of RTT * TCP in High-speed Networks * In collaboration with Saurabh Jain
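In terms of the earlier window-update sketch, only the increase rule changes (how K_R is derived from the RTT is discussed on the next slide):

```python
def on_ack_rtt_compensated(cwnd, K, K_R):
    """Additive increase scaled by the RTT compensation factor K_R."""
    return cwnd + (K_R * K) / cwnd

# The decrease behavior is unchanged from the earlier on_loss() sketch.
```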

26 26Texas A&M University –Throughput ratio in terms of RTT and K_R is [formula omitted] –When K_R ∝ RTT^(1/3), TCP-like RTT-unfairness –When K_R ∝ RTT, linear RTT unfairness (window size independent of RTT) Impact of RTT * TCP in High-speed Networks * In collaboration with Saurabh Jain

27 27Texas A&M University Window Comparison TCP in High-speed Networks

28 28Texas A&M University TCP in High-speed Networks –Highspeed TCP : Modifies AIMD parameters based on different response function (no longer AIMD) –Scalable TCP : Uses MIMD –FAST : Based on Vegas core –BIC TCP : Uses Binary/Additive Increase, Multiplicative Decrease –H-TCP : Modifies AIMD parameters based on “time since last drop” (no longer AIMD) Related Work

29 29Texas A&M University Link Utilization TCP in High-speed Networks

30 30Texas A&M University Dynamic Link Sharing TCP in High-speed Networks

31 31Texas A&M University Effect of Random Loss TCP in High-speed Networks

32 32Texas A&M University Interaction with TCP TCP in High-speed Networks

33 33Texas A&M University RTT Unfairness TCP in High-speed Networks

34 34Texas A&M University Why LTCP ? –Current design remains AIMD –Dynamically changes increase factor –Simple to understand/implement –Retains convergence and fairness properties –RTT unfairness similar to TCP Summary TCP in High-speed Networks

35 35Texas A&M University TCP on high-speed links with low multiplexing –Design, analysis and evaluation of aggressive probing mechanism (LTCP) Impact of high RTT Impact on router buffers and loss rates (LTCP-RCS) TCP on high-speed links with high multiplexing –Impact of packet reordering (TCP-DCR) Future Work Where We are...

36 36Texas A&M University Increased aggressiveness increases congestion events Summary of Bottleneck Link Buffer Statistics TCP in High-speed Networks Impact on Packet Losses

37 37Texas A&M University Increased aggressiveness increases stress on router buffers TCP in High-speed Networks Instantaneous Queue Length at Bottleneck Link Buffers Impact on Router Buffers

38 38Texas A&M University Important to be aggressive for fast convergence –When link is underutilized –When new flows join/leave In steady state, aggressiveness should be tamed –Otherwise, self-induced loss rates can be high TCP in High-speed Networks Impact on Buffers and Losses Motivation

39 39Texas A&M University Proposed solution –In steady state, use less aggressive TCP algorithms –Use a control switch to turn on/off aggressiveness Switching Logic –ON when bandwidth is available –OFF when link is in steady state –ON when network dynamics change (sudden decrease or increase in available bandwidth) TCP in High-speed Networks Impact on Buffers and Losses

40 40Texas A&M University Using the ack-rate for identifying steady state Raw ack-rate signal for flow1 TCP in High-speed Networks Impact on Buffers and Losses

41 41Texas A&M University Using ack-rate for switching –The trend of the ack rate works well for our purpose –If (gradient = 0): aggressiveness OFF; if (gradient ≠ 0): aggressiveness ON –Responsiveness of the raw signal does not require large buffers –The noisy raw signal is smoothed using an EWMA TCP in High-speed Networks Impact on Buffers and Losses
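A minimal sketch of this switching rule, assuming an EWMA-smoothed ack rate and a small tolerance for declaring the gradient "zero"; the weight and tolerance values below are illustrative, not taken from the evaluation:

```python
ALPHA = 0.1          # EWMA weight (illustrative)
EPSILON = 0.01       # relative tolerance for "gradient = 0" (illustrative)

class RateBasedControlSwitch:
    def __init__(self):
        self.smoothed_rate = None
        self.prev_smoothed_rate = None
        self.aggressive = True            # start aggressive to claim bandwidth

    def on_ack_rate_sample(self, raw_rate):
        # smooth the noisy raw ack-rate signal with an EWMA
        if self.smoothed_rate is None:
            self.smoothed_rate = raw_rate
        else:
            self.prev_smoothed_rate = self.smoothed_rate
            self.smoothed_rate = (1 - ALPHA) * self.smoothed_rate + ALPHA * raw_rate

        if self.prev_smoothed_rate is not None:
            gradient = self.smoothed_rate - self.prev_smoothed_rate
            # gradient ~ 0  -> link in steady state      -> aggressiveness OFF
            # gradient != 0 -> network dynamics changing -> aggressiveness ON
            self.aggressive = abs(gradient) > EPSILON * self.smoothed_rate
        return self.aggressive
```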

42 42Texas A&M University Instantaneous Queue Length at Bottleneck Link Buffers Without Rate-based Control Switch / With Rate-based Control Switch TCP in High-speed Networks Impact on Buffers and Losses

43 43Texas A&M University Summary of Bottleneck Link Buffer Statistics TCP in High-speed Networks Impact on Buffers and Losses

44 44Texas A&M University Convergence Properties TCP in High-speed Networks Impact on Buffers and Losses

45 45Texas A&M University Other Results –TCP Tolerance slightly improved –RTT Unfairness slightly improved –At higher number of flows, improvement in loss rate is about a factor of 2 –Steady reverse traffic does not impact performance –Highly varying traffic reduces benefits, improvement in loss rate is about a factor of 2 TCP in High-speed Networks Impact on Buffers and Losses

46 46Texas A&M University Use of the rate-based control switch –Provides improvement in loss rates ranging from orders of magnitude down to a factor of 2 –Low impact on the other benefits of high-speed protocols –Benefits extend to other high-speed protocols (verified for BIC and H-TCP) Whichever high-speed protocol emerges as the next standard, the rate-based control switch could be safely used with it Summary TCP in High-speed Networks Impact on Buffers and Losses

47 47Texas A&M University TCP on high-speed links with low multiplexing –Design, analysis and evaluation of aggressive probing mechanism (LTCP) Impact of high RTT Impact on router buffers and loss rates (LTCP-RCS) TCP on high-speed links with high multiplexing –Impact of packet reordering (TCP-DCR) Future Work Where We are...

48 48Texas A&M University TCP behavior: on three dupacks –retransmit the packet –reduce cwnd by half. Caveat: not all 3-dupack events are due to congestion –channel errors in wireless networks –reordering, etc. Result: sub-optimal performance TCP with Non-Congestion Events

49 49Texas A&M University Impact of Packet Reordering Packet Reordering in the Internet –Originally thought to be pathological, caused only by route flapping, router pauses, etc. –Later results claim a higher prevalence of reordering, attributed to parallelism in Internet components –Newer measurements show low levels of reordering in most of the Internet; high levels of reordering are localized to some links/sites and are a function of network load

50 50Texas A&M University Proposed Solution –Delay the time to infer congestion by an interval τ –Essentially a tradeoff between wrongly inferring congestion and promptness of response to congestion –τ is chosen to be one RTT to allow maximum time while avoiding an RTO Impact of Packet Reordering
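A sketch of the mechanism: on the third dupack, instead of retransmitting immediately, arm a timer of length τ (one RTT here) and respond only if the hole is still unfilled when it fires. This illustrates the idea on the slide; it is not the actual TCP-DCR implementation.

```python
import threading

class DcrSketch:
    """Illustrative only: delay the congestion response to dupacks by tau = 1 RTT."""

    def __init__(self, srtt_seconds):
        self.tau = srtt_seconds          # delay chosen as one RTT, per the slide
        self.timer = None

    def on_dupack(self, dupack_count):
        if dupack_count == 3 and self.timer is None:
            # standard TCP would retransmit and halve cwnd here;
            # DCR instead arms a delayed-response timer
            self.timer = threading.Timer(self.tau, self.respond_to_congestion)
            self.timer.start()

    def on_hole_filled(self):
        # the "missing" packet arrived (it was only reordered): cancel the response
        if self.timer is not None:
            self.timer.cancel()
            self.timer = None

    def respond_to_congestion(self):
        # timer expired: treat the event as a real loss
        self.timer = None
        self.retransmit_and_reduce_cwnd()

    def retransmit_and_reduce_cwnd(self):
        pass  # placeholder for the normal fast-retransmit/recovery actions
```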

51 51Texas A&M University Evaluation conducted for different scenarios –Networks with only packet reordering, only congestion, both Evaluation at multiple levels –Flow level (Throughput, relative fairness, response to dynamic changes in traffic etc.) –Protocol level (Packet delivery time, RTT estimates etc.) –Network level (Bottleneck link droprate, queue length etc.) Impact of Packet Reordering

52 52Texas A&M University TCP-DCR maintains high throughput even when a large percentage of packets is delayed Packet Reordering Only Impact of Packet Reordering

53 53Texas A&M University TCP-DCR maintains high throughput when packets are delayed up to 0.8 × RTT Packet Reordering Only Impact of Packet Reordering

54 54Texas A&M University Congestion Only (Fairness) Per-flow throughput of TCP-DCR is similar to that of competing TCP-SACK flows on congested links Impact of Packet Reordering

55 55Texas A&M University TCP-DCR utilizes throughput given up by TCP-SACK flows TCP-SACK flows are not starved for bandwidth Congestion and Packet Reordering Impact of Packet Reordering

56 56Texas A&M University Other Results –RTT Estimation not affected –Packet delivery time increased only for packets recovered via retransmission –Convergence properties not affected –Bottleneck queue similar with both Droptail and RED –Bottleneck droprates similar with Droptail and RED –Evaluation on Linux testbed Impact of Packet Reordering

57 57Texas A&M University TCP on high-speed links with low multiplexing –Design, analysis and evaluation of aggressive probing mechanism Impact of high RTT Impact on router buffers and loss rates TCP on high-speed links with high multiplexing –Impact of packet reordering Future Work Where We are...

58 58Texas A&M University Future Work Further Evaluation of LTCP / LTCP-RCS Further Evaluation of TCP-DCR Exploring Congestion Avoidance Techniques

59 59Texas A&M University Future Work (1) Further Evaluation of LTCP / LTCP-RCS –Impact of delaying congestion response in high-speed networks –Evaluate alternate metrics for aggressiveness control; investigate different smoothing techniques for the ack-rate signal –Experimental evaluation on an Internet2 testbed [Figure labels: LTCP · Rate-based Control Switch · Extent of reordering]

60 60Texas A&M University Further Evaluation of TCP-DCR –Exploiting the benefits of robustness to reordering End-node multi-homing Network multi-homing/load balancing with packet-level decisions instead of flow-level decisions Future Work (2)

61 61Texas A&M University Exploring Congestion Avoidance Techniques Future Work (3) Motivation [Diagram: an aggressiveness control switch (RCS) selects between the aggressive algorithm (LTCP) and the non-aggressive algorithm (TCP-SACK).]

62 62Texas A&M University Focus on the “non-aggressive algorithm” component –Several options available: TCP-SACK (loss), CARD (delay gradient), Tri-S (throughput gradient), DUAL (delay), TCP-Vegas (throughput), CIM (delay) Exploring Congestion Avoidance Techniques Future Work (3)

63 63Texas A&M University Should we use existing options? –They rely on changes in RTT for detecting congestion –Research shows low correlation between RTT and packet loss –High false positives reduce achieved throughput; false positives may be due to forward or reverse traffic changes Exploring Congestion Avoidance Techniques Future Work (3)

64 64Texas A&M University Improved delay-based metric possible? –Two factors affect variations in RTT: persistent congestion, and transient burstiness in traffic –Probabilistically determine if an RTT increase is related to congestion? –Modify the response to compensate for the unreliability of the signal? Probabilistic response, proportional response (see the sketch below) Exploring Congestion Avoidance Techniques Future Work (3)
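One possible reading of the probabilistic/proportional response idea, purely as a sketch of the proposal (how the confidence estimate is computed is exactly the open question raised above):

```python
import random

def maybe_respond(rtt_sample, base_rtt, confidence, cwnd, beta=0.15):
    """Sketch of a probabilistic / proportional congestion response.

    confidence: assumed estimate (0..1) of the probability that the
    observed RTT increase reflects persistent congestion rather than
    transient burstiness -- a placeholder, not a defined metric.
    """
    queuing_delay = max(rtt_sample - base_rtt, 0.0)
    if queuing_delay <= 0:
        return cwnd
    if random.random() < confidence:
        # proportional response: scale the window reduction by the confidence
        return cwnd * (1 - beta * confidence)
    return cwnd
```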

65 65Texas A&M University Compensate for the unreliability of the signal by modifying the response –Deviate from the current philosophy of a binary response (respond or do not respond) –Resulting behavior is similar to RED; response by the end points eliminates deployment issues Exploring Congestion Avoidance Techniques Future Work (3)

66 66Texas A&M University Exploring Congestion Avoidance Techniques Future Work (3) Topology

67 67Texas A&M University Exploring Congestion Avoidance Techniques Future Work (3)

68 68Texas A&M University Exploring Congestion Avoidance Techniques Future Work (3)

69 69Texas A&M University Conclusions TCP on high-speed links with low multiplexing –LTCP Retains AIMD, good convergence properties and controlled RTT-unfairness –RCS Controls aggressiveness to reduce loss rates, can be used with other loss-based high-speed protocols –Future work Alternate non-aggressive algorithms TCP on high-speed links with high multiplexing –TCP-DCR Simple, yet effective

70 70Texas A&M University LTCP / LTCP-RCS –Sumitha Bhandarkar and A. L. Narasimha Reddy, "Rate-based Control of the Aggressiveness of Highspeed Protocols”, Currently Under Submission. –Sumitha Bhandarkar, Saurabh Jain and A. L. Narasimha Reddy, ”LTCP : Layered Congestion Control for Highspeed Networks”, Journal Paper, Currently Under Submission. –Sumitha Bhandarkar, Saurabh Jain and A. L. Narasimha Reddy, "Improving TCP Performance in High Bandwidth High RTT Links Using Layered Congestion Control", Proceedings of PFLDNet 2005 Workshop, February 2005. TCP-DCR –Sumitha Bhandarkar and A. L. Narasimha Reddy, "TCP-DCR: Making TCP Robust to Non- Congestion Events", Proceedings of Networking 2004, May 2004. Also, presented as student poster at ACM SIGCOMM 2003, August 2003. –Sumitha Bhandarkar, Nauzad Sadry, A. L. Narasimha Reddy and Nitin Vaidya, “TCP-DCR: A Novel Protocol for Tolerating Wireless Channel Errors”, accepted for publication in IEEE Transactions on Mobile Computing (Vol. 4, No. 5), September/October 2005 –Sumitha Bhandarkar and A. L. Narasimha Reddy, "Improving the robustness of TCP to Non- Congestion Events", IETF Draft, work in progress, May 2005. Status: Preparing for WGLC. List of Publications

71 71Texas A&M University Thank You Questions ?

72 72Texas A&M University Supporting Slides

73 73Texas A&M University HS-TCP Sally Floyd, “HighSpeed TCP for Large Congestion Windows”, RFC 3649 Dec 2003. Scalable TCP Tom Kelly, “Scalable TCP: Improving Performance in HighSpeed Wide Area Networks”, ACM Computer Communications Review, April 2003. FAST Cheng Jin, David X. Wei and Steven H. Low, “FAST TCP: motivation, architecture, algorithms, performance”, IEEE Infocom, March 2004. BIC Lisong Xu, Khaled Harfoush, and Injong Rhee, “Binary Increase Congestion Control for Fast Long-Distance Networks”, IEEE Infocom, March 2004. HTCP R. N. Shorten, D. J. Leith, J. Foy, and R. Kilduff, “H-TCP Protocol for high-speed Long Distance Networks”, PFLDnet 2004, February 2003. Work Related to LTCP

74 74Texas A&M University Work Related to LTCP-RCS TCP-AFRICA Ryan King, Richard Baraniuk, Rudolf Riedi, “TCP-Africa: An Adaptive and Fair Rapid Increase Rule for Scalable TCP”, IEEE Infocom, March 2005.

75 75Texas A&M University Work Related to Reordering [1] V. Paxson, "End-to-end Internet packet dynamics," IEEE/ACM Transactions on Networking, 7(3):277--292, 1999. [2] Jon C. R. Bennett, Craig Partridge, and Nicholas Shectman. “Packet reordering is not pathological network behavior,” IEEE/ACM Transactions on Networking, 1999. [3] D. Loguinov and H. Radha, "End-to-End Internet Video Traffic Dynamics: Statistical Study and Analysis," IEEE INFOCOM, June 2002. [4] G. Iannaccone, S. Jaiswal and C. Diot, "Packet Reordering Inside the Sprint Backbone," Tech. Report, TR01-ATL-062917, Sprint ATL, Jun. 2001. [5] S. Jaiswal, G. Iannaccone, C. Diot, J. Kurose and D. Towsley, "Measurement and Classification of Out-of-sequence Packets in Tier-1 IP Backbone," INFOCOM 2003 [6] Yi Wang, Guohan Lu, Xing Li, “A Study of Internet Packet Reordering,” Proc. ICOIN 2004: 350-359. [7] Xiaoming Zhou, Piet Van Mieghem, “Reordering of IP Packets in Internet,” Proc. PAM 2004: 237-246 [8]Ladan Gharai, Colin Perkins, Tom Lehman, “Packet Reordering, High Speed Networks and Transport Protocol Performance,” ICCCN 2004: 73-78.

76 76Texas A&M University Work Related to TCP-DCR Blanton/Allman E. Blanton and M. Allman, “On Making TCP More Robust to Packet Reordering,” ACM Computer Communication Review, January 2002 RR-TCP M. Zhang, B. Karp, S. Floyd, and L. Peterson, “RR-TCP: A Reordering-Robust TCP with DSACK,” ICSI Technical Report TR-02-006, Berkeley, CA, July 2002 TCP-DCR IETF Draft http://www.ietf.org/internet-drafts/draft-ietf-tcpm-tcp-dcr-05.txt

77 77Texas A&M University CARD Raj Jain, "A Delay-Based Approach for Congestion Avoidance in Interconnected Heterogeneous Computer Networks," ACM CCR vol. 19, pp. 56-71, Oct 1989. Tri-S Zheng Wang and Jon Crowcroft, "A New Congestion Control Scheme: Slow Start and Search (Tri-S)," ACM Computer Communication Review, vol. 21, pp 32-43, Jan 1991 DUAL Zheng Wang and Jon Crowcroft, "Eliminating Periodic Packet Losses in the 4.3-Tahoe BSD TCP Congestion Control Algorithm," ACM CCR vol. 22, pp. 9--16, Apr. 1992 TCP-Vegas Lawrence S. Brakmo and Sean W. O'Malley, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in SIGCOMM '94. CIM J. Martin, A. Nilsson, and I. Rhee, “Delay-Based Congestion Avoidance for TCP,” IEEE/ACM Transactions on Networking, vol. 11, no. 3, pp. 356–369, June 2003 Delay-based Schemes

78 78Texas A&M University References Measurement sites showing TCP predominance http://ipmon.sprint.com/packstat/viewresult.php?0:protobreakdown:sj-20.0-040206: http://www.aarnet.edu.au/network/trafficvolume.html http://www.caida.org/outreach/resources/learn/trafficworkload/tcpudp.xml TCP Roadmap http://tools.ietf.org/wg/tcpm/draft-ietf-tcpm-tcp-roadmap/draft-ietf-tcpm-tcp-roadmap-04.txt

79 79Texas A&M University Delay-based Schemes : Issues [1] Ravi S. Prasad, Manish Jain, Constantinos Dovrolis, “On the Effectiveness of Delay- Based Congestion Avoidance”, PFLDnet 2004 [2] S. Biaz and N. Vaidya, “Is the Round-Trip Time Correlated with the Number of Packets in Flight?,” Internet Measurement Conference (IMC), Oct. 2003 [3] J. Martin, A. Nilsson, and I. Rhee, “Delay-Based Congestion Avoidance for TCP,” IEEE/ACM Transactions on Networking, vol. 11, no. 3, pp. 356–369, June 2003. [4] Les Cottrell, Hadrien Bullot and Richard Hughes-Jones, "Evaluation of Advanced TCP stacks on Fast Long-Distance production Networks” PFLDNet 2004

80 80Texas A&M University References PHENIX Project http://www.phenix.bnl.gov/ CERN Courier, Vol. 45, No.7 http://www.cerncourier.com/main/toc/45/7

81 81Texas A&M University References AIMD D.-M. Chiu and R. Jain, “Analysis of the increase and decrease algorithms for congestion avoidance in computer networks,” Computer Networks and ISDN Systems, 17(1):1--14, June 1989.

82 82Texas A&M University Response Curve High-speed Protocols

83 83Texas A&M University Topology Dynamic Link Sharing [Figure: dumbbell topology with 1 Gbps, 40 ms access links and a 2.4 Gbps, 10 ms bottleneck; timeline 0–2100 s with Flows 1–4 starting at 300 s intervals and stopping in reverse order.]

84 84Texas A&M University Topology RCS: Convergence Properties [Figure: topology with Regions 1–3.] ε-fair convergence: time for the allocation to move from (B, 0) to an approximately fair share (within ε) Jain Fairness Index: [formula omitted]

85 85Texas A&M University References ε-fair convergence: Deepak Bansal, Hari Balakrishnan, Sally Floyd and Scott Shenker, “Dynamic Behavior of Slowly-Responsive Congestion Control Algorithms”, ACM SIGCOMM 2001. Jain Fairness Index: R. Jain, D-M. Chiu and W. Hawe, "A Quantitative Measure of Fairness and Discrimination For Resource Allocation in Shared Computer Systems," Technical Report TR-301, DEC Research Report, September 1984

86 86Texas A&M University Instantaneous Queue Length at Bottleneck Link Buffers with Rate-based Control Switch TCP in High-speed Networks Impact on Buffers and Losses

87 87Texas A&M University Loss Events and Packet Loss Rate with the RCS TCP in High-speed Networks Impact on Buffers and Losses

88 88Texas A&M University Impact on Router Buffers and Packet Loss Rates TCP in High-speed Networks Convergence Properties

89 89Texas A&M University Convergence Properties (Cont.) TCP in High-speed Networks Impact on Buffers and Losses

90 90Texas A&M University Behavior with multiple Flows TCP in High-speed Networks Impact on Buffers and Losses

91 91Texas A&M University Behavior with multiple Flows TCP in High-speed Networks Impact on Buffers and Losses

92 92Texas A&M University TCP Tolerance TCP in High-speed Networks Impact on Buffers and Losses

93 93Texas A&M University TCP Tolerance TCP in High-speed Networks Impact on Buffers and Losses

94 94Texas A&M University RTT Unfairness TCP in High-speed Networks Impact on Buffers and Losses

95 95Texas A&M University RTT Unfairness TCP in High-speed Networks Impact on Buffers and Losses

96 96Texas A&M University Topology RCS: Impact of Steady Reverse Traffic [Figure: topology with 1 Gbps, 40 ms access links and a 2.4 Gbps, 10 ms bottleneck; a high-speed flow and a UDP flow; plot of link utilization by the UDP flow over 0–1000 s.]

97 97Texas A&M University Impact of Reverse Traffic TCP in High-speed Networks Impact on Buffers and Losses

98 98Texas A&M University Impact of Reverse Traffic TCP in High-speed Networks Impact on Buffers and Losses

99 99Texas A&M University Impact of Background Traffic with High Variance TCP in High-speed Networks Impact on Buffers and Losses

100 10 0 Texas A&M University Impact of Background Traffic with High Variance TCP in High-speed Networks Impact on Buffers and Losses

101 101Texas A&M University Delay-based Metric for Aggressiveness Control TCP in High-speed Networks Impact on Buffers and Losses Buffer Size = 1/3 BDP (5000 packets) RCS: Rate-based Control Switch: OFF when (throughput gradient = 0) DCS: Delay-based Control Switch: OFF when (queuing delay * sending rate) > a threshold (= 1.65)

102 10 2 Texas A&M University Exploring Congestion Avoidance Techniques Future Work (3) Motivation

103 10 3 Texas A&M University Fairness Among Multiple Flows TCP in High-speed Networks Jain Fairness Index :

104 10 4 Texas A&M University Interaction With Non-responsive Traffic TCP in High-speed Networks

105 10 5 Texas A&M University TCP in High-speed Networks Impact on Buffers and Losses Related Work TCP-AFRICA –Uses delay-based metric for reducing losses in HS-TCP Requires high resolution timers –Convergence behavior not examined Could potentially increase convergence time drastically TCP-FAST –Based on Vegas Core –Research shows issues that make it less effective for practical deployment

106 10 6 Texas A&M University Congestion Only (Sudden Changes in Traffic) Time to reach (55%,45%) allocation : TCP-SACK : 3.10 s TCP-DCR : 3.67 s Response of TCP-DCR to sudden changes in traffic is similar to that of TCP-SACK Impact of Packet Reordering

107 107Texas A&M University Congestion Only (Effect of Web-like Traffic) Bulk transfer using TCP-DCR does not affect background web-like traffic Impact of Packet Reordering

108 10 8 Texas A&M University Congestion Only (Background UDP traffic) TCP-DCR and TCP-SACK maintain relative fairness with dynamically changing traffic Impact of Packet Reordering

109 10 9 Texas A&M University Congestion Only (Packet Delivery Time) Time to recover lost packets : TCP-SACK : 182.7ms TCP-DCR : 201.3 ms TCP-DCR has higher packet recovery time for lost packets. Packet delivery time similar to TCP-SACK during times of no congestion. Impact of Packet Reordering

110 110Texas A&M University Problem Description [Diagram: sender–receiver timeline over one end-to-end RTT. A packet delayed in the network causes reordering; the resulting dupacks trigger an unnecessary retransmission and window reduction.] TCP with Non-Congestion Events

111 111Texas A&M University Proposed Solution [Diagram: with the congestion response delay timer, the delayed packet arrives before the timer expires; the timer is cancelled and there is no retransmission or window reduction.] TCP with Non-Congestion Events

