Understanding the Performance of TCP Pacing Amit Aggarwal, Stefan Savage, Thomas Anderson Department of Computer Science and Engineering University of Washington



TCP Overview:
– TCP is a sliding window-based algorithm.
– Ack-clocking.
– Slow-start phase (W = 2*W each RTT).
– Congestion-avoidance phase (W++ each RTT).

Sources of TCP Burstiness:
– Slow start
– Losses
– Ack compression
– Multiplexing
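As a minimal sketch (not from the paper), the two window-growth phases above can be traced per RTT; names and the simplified growth rule are illustrative:

```python
# Sketch: congestion window (in packets) per RTT.
# Slow start doubles the window until it reaches ssthresh,
# after which congestion avoidance adds one packet per RTT.
def cwnd_trace(rtts, ssthresh, cwnd=1):
    trace = []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(cwnd_trace(8, ssthresh=16))  # [1, 2, 4, 8, 16, 17, 18, 19]
```

This ignores losses and timeouts; it only illustrates the W = 2*W and W++ rules stated above.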

Motivation:
From queuing theory, we know that bursty traffic produces:
– Higher queuing delays.
– More packet losses.
– Lower throughput.
[Figure: response time vs. load for best-case, random, and worst-case arrivals; response time grows sharply as load approaches queue capacity.]
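The queuing argument can be illustrated with a toy sketch (not from the paper): the same number of packets overflows a small FIFO buffer when they arrive as one burst, but not when they are spread over time. All names and the one-packet-per-slot drain rate are assumptions for illustration:

```python
# Sketch: count drops at a FIFO queue with a fixed buffer.
# One packet is drained per time slot; 'arrivals' lists the number
# of packets arriving in each slot.
def drops(arrivals, capacity):
    q = d = 0
    for a in arrivals:
        q = max(q - 1, 0)      # drain one packet this slot
        for _ in range(a):     # enqueue arrivals, dropping on overflow
            if q < capacity:
                q += 1
            else:
                d += 1
    return d

burst = [10] + [0] * 9   # 10 packets arriving at once
paced = [1] * 10         # the same 10 packets, one per slot
print(drops(burst, 5), drops(paced, 5))  # 5 0
```

The bursty arrival pattern loses half its packets; the paced pattern loses none, matching the intuition on this slide.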

Contribution:
– Evaluate the impact of evenly pacing TCP packets across a round-trip time.

What to expect from pacing TCP packets?
– Better for flows: packets are less likely to be dropped if they are not clumped together.
– Better for the network: competing flows will see less queuing delay and fewer burst losses.
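Evenly pacing a window across a round-trip time just means spacing consecutive sends by RTT/W. A minimal sketch (not from the paper; names are illustrative):

```python
# Sketch: spread a window of cwnd packets evenly across one RTT
# instead of sending them back-to-back as acks arrive.
def pacing_interval(rtt_ms, cwnd_pkts):
    """Inter-packet gap (ms) so that cwnd packets cover one RTT."""
    return rtt_ms / cwnd_pkts

def send_times(rtt_ms, cwnd_pkts):
    gap = pacing_interval(rtt_ms, cwnd_pkts)
    return [i * gap for i in range(cwnd_pkts)]

print(send_times(100, 4))  # [0.0, 25.0, 50.0, 75.0]
```

An unpaced sender would transmit all four packets at time 0 and remain idle for the rest of the RTT.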

Jain's fairness index f (RTT-weighted when flows have different RTTs):

f = (Σ x_i)² / (n Σ x_i²)        f = (Σ x_i·RTT_i)² / (n Σ (x_i·RTT_i)²)

Simulation Setup:
– Dumbbell topology: senders S1…Sn connected to receivers R1…Rn through a single bottleneck.
– Bottleneck link: x Mbps, 40 ms delay; access links: 4x Mbps, 5 ms delay.
– Bottleneck buffer: S pkts.
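A minimal sketch of the unweighted index above (the RTT-weighted variant simply replaces each x_i with x_i·RTT_i):

```python
# Sketch: Jain's fairness index over a list of per-flow throughputs.
# Returns 1.0 when all flows get equal throughput, 1/n when one
# flow takes everything.
def jain_fairness(throughputs):
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))

print(jain_fairness([1.0, 1.0, 1.0, 1.0]))  # 1.0
print(jain_fairness([4.0, 0.0, 0.0, 0.0]))  # 0.25
```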

Experimental Results:

A) Single Flow:

Case S = 0.25·B·RTT:
– TCP Reno, due to its burstiness in slow start, incurs a loss when W = 0.5·B·RTT; paced TCP incurs its first loss only after it saturates the pipe, i.e., when W = 2·B·RTT.
– As a result, TCP Reno takes more time in congestion avoidance to ramp up to B·RTT. (Paced TCP achieves better throughput only at the beginning.)

Case S ≥ B·RTT:
– The bursty behavior of TCP Reno is absorbed by the buffer, and it does not incur a loss until W = B·RTT. (Both achieve similar throughput.)

B) Multiple Flows: 50 flows starting at the same time; all flows have the same RTT.

Case S = 0.25·B·RTT: (TCP Reno achieves better throughput at the beginning; paced TCP achieves better throughput in steady state.)
– TCP Reno: flows send packets in clusters; some drop early and back off, allowing the others to ramp up.
– Paced TCP: all flows first saturate the pipe together; at that point every flow drops because of congestion and mixing of flows, leaving the bottleneck under-utilized (synchronization effect).
– In steady state, all packets are spread out and flows are mixed, so drops occur randomly: during a given phase, some flows may take multiple losses while others escape without any (de-synchronization effect).

Case S ≥ B·D: the de-synchronization effect of paced TCP persists.

C) Multiple Flows – Variable RTT: 50 flows starting at the same time; 25 flows with RTT = 100 ms and 25 flows with RTT = 280 ms.

Case S = 0.25·B·RTT: (Paced TCP achieves better fairness without sacrificing throughput.)
– TCP Reno: the higher burstiness resulting from the overlap of packet clusters from different flows becomes visible; Reno has a higher drop rate at the bottleneck link while achieving similar throughput.

Case S ≥ B·D: TCP Reno's higher drop rate persists.

D) Variable Length Flows: a constant-size flow is established between each of 20 senders and the corresponding 20 receivers. As a flow finishes, a new flow is established between the same nodes after an exponential think time with mean 1 s.

Ideal latency (used only for comparison): the latency of a flow that does slow start until it reaches its fair share of the bandwidth and then continues with a constant window.

– Phase 1: no losses. The latency of paced TCP is slightly higher due to pacing.
– Phase 2: with S = 0.25·B·RTT, TCP Reno flows experience more losses in slow start and some flows time out; with S ≥ B·D this effect disappears.
– Phase 3: the synchronization effect of paced TCP is visible.
– Phase 4: the synchronization effect disappears because flows are so large that new flows start infrequently.

E) Interaction of Paced and Non-paced Flows:
– A paced flow is very likely to experience a loss when one of its packets lands in a burst from a Reno flow; Reno flows are less likely to be affected by bursts from other flows.
– Result: TCP Reno flows achieve much better latency than paced flows when both compete for bandwidth in a mixed-flow environment.
– If new flows are continuously instantiated, the performance of paced TCP deteriorates further: new flows in slow start cause the old paced flows to drop packets regularly, further diminishing the benefit of pacing.

Conclusion:
– Pacing improves fairness and drop rates.
– Pacing offers better performance with limited buffering.
– In other cases, pacing leads to performance degradation because:
  1. Pacing delays congestion signals to a point where the network is already oversubscribed.
  2. Due to the mixing of traffic, pacing synchronizes drops.