Microscopic Behavior of Internet Control Xiaoliang (David) Wei NetLab, CS&EE California Institute of Technology

Internet Control Problem -> solution -> understanding -> Problem -> solution -> understanding -> … 1986: First Internet Congestion Collapse

Internet Control Problem -> solution -> understanding -> Problem -> solution -> understanding -> … First Internet Congestion Collapse 1988~1990: TCP-Tahoe DEC-bit

Internet Control Problem -> solution -> understanding -> Problem -> solution -> understanding -> … First Internet Congestion Collapse 1993~1995: Tri-S, DUAL, TCP-Vegas TCP Tahoe; DEC-bit

Outline: Motivation; Overview of Microscopic Behavior; Stability of Delay-based Congestion Control Algorithms; Fairness of Loss-based Congestion Control Algorithms; Future Work; Summary

Outline: Motivation; Overview of Microscopic Behavior; Stability of Delay-based Congestion Control Algorithms; Fairness of Loss-based Congestion Control Algorithms; Future Work

Macroscopic View of TCP Control. TCP/AQM is a feedback control system: TCP senders (Reno, Vegas, FAST) set their rates x_i(t); the bottleneck's AQM (DropTail, RED) turns the queue q(t) into a congestion signal (loss, delay, or ECN), which returns to the senders after the forward and backward delays τ_F and τ_B.

Fluid Models. Assumptions: TCP algorithms directly control the transmission rates; the transmission rates are differentiable (smooth); each TCP packet observes the same congestion price (loss, delay, or ECN).

Methodology based on Fluid Models. Equilibrium: efficiency? fairness? Dynamics: stability? responsiveness?

Gap 1: Stability of TCP Vegas. Analysis: "TCP Vegas is stable if (and only if) the number of flows is large, capacity is small, and delay is small." Experiment: a single TCP Vegas flow is stable with arbitrary delay and capacity.

Gap 2: Fairness of Scalable TCP. Analysis: "Scalable TCP is fair in a homogeneous network" [Kelly '03]. Experiment: in most cases, Scalable TCP is unfair in a homogeneous network. Analysis: [Chiu & Jain '90] → Scalable TCP is unfair.

Gap 3: TCP vs TFRC. Analysis: "We designed the TCP Friendly Rate Control (TFRC) algorithm to have the same equilibrium as TCP when they co-exist." Experiment: TCP flows do not fairly coexist with TFRC flows.

Gaps. Stability: TCP Vegas. Fairness: Scalable TCP. Friendliness: TCP vs TFRC. Current analytical models ignore microscopic behavior in TCP congestion control.

Outline: Motivation; Overview of Microscopic Behavior; Stability of Delay-based Congestion Control Algorithms; Fairness of Loss-based Congestion Control Algorithms; Future Work

Microscopic View (Packet Level). Two timescale levels: on each RTT -- the TCP congestion control algorithm; on each packet arrival -- ack-clocking: p--; while (p < w(t)) do { send a packet; p++; } (p: number of packets in flight)
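
For concreteness, here is a minimal runnable sketch of the ack-clocking rule above (Python; the function and variable names are mine, not from the talk). It shows how a window increase is flushed out as a back-to-back micro burst on a single ack arrival.

    # Minimal sketch of the ack-clocking rule (illustrative names).
    # State: p = packets in flight, w = current congestion window w(t).
    def on_ack_arrival(p, w, send):
        """Called once per acknowledgment; returns the updated packets-in-flight count."""
        p -= 1                 # the acknowledged packet has left the network
        while p < w:           # ack-clocking: refill up to the current window
            send()             # extra window goes out back-to-back -> micro burst
            p += 1
        return p

    # Example: the window jumps from 5 to 8, so one ack triggers a burst of 4 packets.
    sent = []
    p = on_ack_arrival(p=5, w=8, send=lambda: sent.append("pkt"))
    print(p, len(sent))        # prints: 8 4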

W: 0 -> 5 (diagram: the window jumps from 0 to 5, so the sender releases five packets back-to-back toward the bottleneck of capacity c)

Packets are queued in the bottleneck

Packets leave the bottleneck at rate c

Acknowledgments return at rate c

New packets are sent at rate c

No queue in the 2nd round trip -- no need to control the rate x(t)!

Two Flows (diagram: TCP1 -> Rcv1 and TCP2 -> Rcv2 share a bottleneck of capacity c)

(animation: over one round trip, the two flows' packets and acknowledgments interleave at the bottleneck)

On-off pattern for each flow

Sub-RTT Burstiness: NS-2 Measurement

Two Levels of Burstiness. Micro burst: a pulse function; input rate >> c; causes extra queue and loss; transient. Sub-RTT burstiness: an on-off function; input rate <= c; no extra queue or loss; persistent.

Microscopic Effects: known. Micro burst, loss-based TCP: low throughput with small buffers; pacing improves throughput (clearly understood). Micro burst, delay-based TCP: noise in the delay signal, should be eliminated (partially understood). Sub-RTT burstiness: observed in Internet traffic ("Why do we care?").

Microscopic Effects: new. Micro burst, loss-based TCP: low throughput with small buffers; pacing improves throughput (clearly understood). Micro burst, delay-based TCP: fast convergence in queueing delay and better stability. Sub-RTT burstiness, loss-based TCP: low loss synchronization rate with DropTail routers. Sub-RTT burstiness, delay-based TCP: no effect.

New Understandings. Micro burst with delay-based TCP: fast queue convergence -- 1. A single TCP Vegas flow is always stable, regardless of delay and capacity. Sub-RTT burstiness with loss-based TCP: low loss sync rate -- 1. Scalable TCP is (usually) unfair; 2. TCP is unfriendly to TFRC.

Outline: Motivation; Overview of Microscopic Behavior; Stability of Delay-based Congestion Control Algorithms; Fairness of Loss-based Congestion Control Algorithms; Future Work

New Understandings. Micro burst with delay-based TCP: fast queue convergence -- 1. A single TCP Vegas flow is always stable, regardless of delay and capacity. Sub-RTT burstiness with loss-based TCP: low loss sync rate -- 1. Scalable TCP is (usually) unfair; 2. TCP is unfriendly to TFRC.

A packet-level model: basis. Packets can only be sent upon the arrival of an acknowledgment; a micro burst of packets can be sent at a single moment; the window size w(t) can be an arbitrary given process. Ack-clocking: on each ack arrival: p--; while (p < w(t)) do { send a packet; p++; } (p: number of packets in flight)

A packet-level model: variables. p_j: number of packets in flight when packet j is sent; s_j: sending time of packet j; b_j: backlog experienced by packet j; a_j: ack arrival time of packet j. Ack-clocking: on each ack arrival: p--; while (p < w(t)) do { send a packet; p++; } (p: number of packets in flight)

A packet-level model: variables (diagram). p_j: number of packets in flight when packet j is sent; s_j: sending time of packet j.

A packet-level model: variables (diagram). p_j: number of packets in flight when packet j is sent; s_j: sending time of packet j; b_j: backlog experienced by packet j.

A packet-level model: variables (diagram). p_j: number of packets in flight when packet j is sent; s_j: sending time of packet j; b_j: backlog experienced by packet j; a_j: ack arrival time of packet j.

A packet-level model: variables. k: number of acks between s_{j-1} and s_j; p_j: number of packets in flight when packet j is sent; s_j: sending time of packet j; a_{j-p_j}: ack arrival time of the packet sent one RTT ago. Ack-clocking: on each ack arrival: p--; while (p < w(t)) do { send a packet; p++; }

A packet-level model: variables. k: number of acks between s_{j-1} and s_j; for example, k = 0 when packets j-1 and j are sent back-to-back in the same burst. Ack-clocking: on each ack arrival: p--; while (p < w(t)) do { send a packet; p++; }

A packet-level model: variables (diagram of the bottleneck queue). b_j: backlog experienced by packet j; c: bottleneck capacity; a_j: ack arrival time of packet j; d: propagation delay.

A packet-level model. p_j: number of packets in flight when packet j is sent; s_j: sending time of packet j; b_j: backlog experienced by packet j; a_j: ack arrival time of packet j.
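
As a sanity check of these definitions, here is a minimal single-flow, single-bottleneck simulation of the packet-level model (Python; the function and variable names are mine, and the round-trip propagation delay is lumped into d). For each packet j it records the sending time s_j, the backlog b_j found at the bottleneck, and the ack arrival time a_j; each returning ack clocks out new packets up to the (here fixed) window w.

    # Sketch of the packet-level model: fixed window w, capacity c (pkts/s), delay d (s).
    def simulate(w, c, d, num_packets):
        s, b, a, depart = [], [], [], []     # s_j, b_j, a_j, bottleneck departure times
        acks = []                            # future ack arrival times (FIFO)
        t, inflight, j = 0.0, 0, 0
        while j < num_packets:
            # ack-clocking: send back-to-back until the window is full (micro burst)
            while inflight < w and j < num_packets:
                s.append(t)
                b.append(sum(1 for dep in depart if dep > t))   # packets still queued
                dep_j = (depart[-1] if depart and depart[-1] > t else t) + 1.0 / c
                depart.append(dep_j)
                a.append(dep_j + d)          # ack returns one propagation delay later
                acks.append(dep_j + d)
                inflight += 1
                j += 1
            if not acks:
                break
            t = acks.pop(0)                  # jump to the next ack arrival
            inflight -= 1
        return s, b, a

    s, b, a = simulate(w=5, c=1000.0, d=0.1, num_packets=20)
    print(b)   # [0, 1, 2, 3, 4, 0, 0, ...]: queue builds only during the initial micro burst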

Ack-clocking: quick sending process. Theorem: for any time s_j at which a packet j is sent, there is always a packet j* := j*(j) such that s_{j*} = s_j and p_{j*} = w(s_j). That is, the number of packets in flight at any packet sending time is synchronized with the congestion window.

Ack-clocking: fast queue convergence. Theorem: the queue converges instantly if the window size is larger than the BDP throughout the entire previous RTT.

Window Control and Ack-clocking. Per-RTT window control: makes a decision once every RTT, using the measurement from the latest acknowledgment (a subsequence of sequence numbers k_1, k_2, k_3, ...).

Ack-clocking: pacing of micro bursts. Theorem: there will be no extra queue due to micro bursts if the window does not increase during the entire previous RTT.

Stability of TCP Vegas. Theorem: given the packet-level model, if αd > 1, a single TCP Vegas flow converges to equilibrium with arbitrary capacity c and propagation delay d; that is, there exists a sequence number J after which the flow stays at its equilibrium window.
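
For context, here is a sketch of the standard per-RTT TCP Vegas update that the theorem concerns (Python; the helper name is mine, and α and β here are the usual per-flow queueing thresholds in packets, which may be normalized differently from the α in the theorem's condition).

    # Standard TCP Vegas congestion-avoidance update, applied once per RTT (sketch).
    def vegas_update(w, base_rtt, rtt, alpha=1, beta=3):
        """w: window (pkts); base_rtt: minimum observed RTT; rtt: latest measured RTT."""
        diff = w * (1 - base_rtt / rtt)   # estimated number of packets queued at the bottleneck
        if diff < alpha:
            return w + 1                  # too little queueing: probe for more bandwidth
        if diff > beta:
            return w - 1                  # too much queueing: back off
        return w                          # within [alpha, beta]: hold steady

    # Example: w = 100, base_rtt = 100 ms, rtt = 102 ms  ->  diff ~ 1.96 packets -> hold.
    print(vegas_update(100, 0.100, 0.102))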

Stability of Vegas: 100-flow simulation

Stability of Vegas: average window size. Window oscillation: 1 packet.

Stability of Vegas: queue size. Queue oscillation: 100 packets (because the 100 flows are synchronized).

Gap 1: Stability of TCP Vegas. Analysis: "TCP Vegas is stable if (and only if) the number of flows is large, capacity is small, and delay is small." Experiment: a single TCP Vegas flow is stable with arbitrary delay and capacity. Reason: micro bursts lead to fast queue convergence.

FAST: stable and responsive. FAST is designed on the intuition that the queue is directly a function of the congestion window size. A FAST flow updates its window once every other RTT.
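
For reference, the window update published in the FAST TCP papers has roughly the following form (a sketch with illustrative parameter values, not taken from these slides).

    # FAST TCP window update, applied periodically (roughly as published; sketch).
    # gamma in (0, 1] smooths the step; alpha is the target number of queued packets.
    def fast_update(w, base_rtt, rtt, alpha=200, gamma=0.5):
        """w: window; base_rtt: propagation-delay estimate; rtt: current average RTT."""
        target = (base_rtt / rtt) * w + alpha     # equilibrium keeps ~alpha packets queued
        return min(2 * w, (1 - gamma) * w + gamma * target)

    # Example: far from equilibrium, the window moves part of the way toward the target.
    print(fast_update(w=1000, base_rtt=0.100, rtt=0.150))   # ~933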

FAST: stability. Theorem: given the packet-level model, homogeneous FAST flows converge to equilibrium regardless of the capacity c, the propagation delay d, and the number of flows N. [Tang, Jacobsson, Andrew, Low '07]: FAST is stable over a single bottleneck link regardless of c, d, and N (shown with an extended fluid model that captures micro-burst effects).

Micro burst: summary. Effects: fast queue convergence; stability of homogeneous Vegas for arbitrary delay; the possibility of very responsive and stable TCP control; stability of FAST for arbitrary delay.

Outline: Motivation; Overview of Microscopic Behavior; Stability of Delay-based Congestion Control Algorithms; Fairness of Loss-based Congestion Control Algorithms; Future Work

New Understandings. Micro burst with delay-based TCP: fast queue convergence -- 1. A single (homogeneous) TCP Vegas flow is always stable, regardless of delay and capacity. Sub-RTT burstiness with loss-based TCP: low loss sync rate -- 1. Scalable TCP is (usually) unfair; 2. TCP is unfriendly to TFRC.

Loss Synchronization Rate: definition. Loss synchronization rate [Baccelli, Hong '02]: the probability that a flow observes a packet loss during a congestion event. Congestion event (loss event): a round-trip-time interval in which at least one packet is dropped by the bottleneck router due to congestion (buffer overflow at the router).
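
Under this definition, the synchronization rate can be estimated directly from a trace. A minimal sketch (Python; the data layout and names are hypothetical): each congestion event is recorded as the set of flows that lost at least one packet during that RTT.

    # Per-flow loss synchronization rate from a list of congestion events (sketch).
    # loss_events: one set of flow ids per congestion event (flows that saw a loss).
    def sync_rates(loss_events, flow_ids):
        """Fraction of congestion events in which each flow observed a packet loss."""
        return {f: sum(f in ev for ev in loss_events) / len(loss_events) for f in flow_ids}

    events = [{1, 2}, {1}, {1, 3}, {1, 2, 3}]        # 4 congestion events, 3 flows
    print(sync_rates(events, flow_ids=[1, 2, 3]))    # {1: 1.0, 2: 0.5, 3: 0.5}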

Loss Synchronization Rate: effects. Intuitions: for an individual flow, the smaller the better (selfishness); for system design, the higher the better (for fairness and convergence). Theoretical results: aggregate throughput [Baccelli, Hong '02]; instantaneous fairness [Baccelli, Hong '02]; fairness convergence [Shorten, Wirth, Leith '06].

Loss Sync. Rate: existing models. [Shorten, Wirth, Leith '06]: no model -- synchronization rates are measured from NS-2 and fed into a model for computational results. [Baccelli, Hong '02]: assumes each packet has the same probability of being dropped in a loss event.

Packet loss is bursty (Internet measurement): ~50% of losses happen in bursts.

The loss process is bursty (on-off): in each loss event (one RTT), the packet loss process is an on-off process.

The data packet process is bursty (on-off): in each loss event (one RTT), the TCP data packet process is an on-off process.

Loss Sync. Rate: a sampling perspective. The loss synchronization rate is the efficiency with which a (bursty) TCP data process samples the loss signal in a (bursty) loss process. Assumption 1: within the RTT of a loss event, the position of an individual flow's burst is uniformly distributed. Assumption 2: the loss process does not depend on the data packet process of any individual flow.

Loss Sync. Rate, Case 1: TCP + DropTail. w_i: window of a TCP flow; L: number of dropped packets; cd+B+L: number of packets going through the bottleneck in the loss event (c: capacity, d: propagation delay, B: buffer size).

Loss Sync. Rate: TCP+DropTail

Loss Sync. Rate, Case 2: Pacing + DropTail. w_i: window of a TCP flow; L: number of dropped packets; cd+B+L: number of packets going through the bottleneck in the loss event.

Loss Sync. Rate: Pacing + DropTail

Loss Sync. Rate, Case 3: TCP + RED. w_i: window of a TCP flow; L: number of dropped packets; cd+B+L: number of packets going through the bottleneck in the loss event.

Model for Loss Sync. Rate: general form. cd+B: number of packets going through the bottleneck in the loss event (c: capacity, d: propagation delay, B: buffer size); w_i: window of a TCP flow in the loss event; L: number of dropped packets in the loss event; K_i: length of the burst period of flow i (in packets); M: length of the burst period of the loss process (in packets).
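
The closed-form expression aside, the sampling model can be checked numerically. Below is a crude Monte Carlo sketch under Assumptions 1-2 above (Python; entirely my own construction): a loss event is modeled as cd+B+L packet slots containing one contiguous loss burst of M slots, a flow's burst of K slots is placed uniformly at random, and the flow counts as synchronized if the two bursts overlap. Pacing is approximated by treating the flow's w_i packets as independent bursts of length 1.

    import random

    # Monte Carlo sketch of the sampling view of loss synchronization (illustrative only).
    def overlap_prob(total, K, M, trials=200_000, seed=1):
        """P(flow burst of K slots overlaps a loss burst of M slots among `total` slots)."""
        rng = random.Random(seed)
        loss_start = (total - M) // 2            # loss burst placed in the middle
        hits = 0
        for _ in range(trials):
            start = rng.randrange(total - K + 1) # uniform position of the flow's burst
            if start < loss_start + M and loss_start < start + K:
                hits += 1
        return hits / trials

    cdB, w_i, L = 1080, 60, 16                   # illustrative numbers from the slides
    total, M = cdB + L, L
    p_bursty = overlap_prob(total, K=w_i, M=M)                   # whole window in one burst
    p_paced = 1 - (1 - overlap_prob(total, K=1, M=M)) ** w_i     # ~w_i independent unit bursts
    print("bursty TCP sync rate ~", round(p_bursty, 3))          # low (~0.07 here)
    print("paced TCP  sync rate ~", round(p_paced, 3))           # much higher (~0.6 here)

The gap between the two numbers is the qualitative point of the following slides: bursty TCP over DropTail samples the loss signal poorly, while pacing (or RED, which spreads the losses) raises the synchronization rate.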

Loss Sync. Rate: MATLAB computation. cd+B = 1080; w_i = 60; L = 16; K and M vary.

Measurement: TCP + DropTail. Averaged sync. rate; cd+B = 3340; M = L = N/2; K = w = (cd+B)/N.

Measurement: Pacing + DropTail. Averaged sync. rate; cd+B = 3340; M = L = N/2; K = w = (cd+B)/N.

Measurement: TCP + RED. Averaged sync. rate; cd+B = 3340; M = L = N/2; K = w = (cd+B)/N.

Loss Sync. Rate: qualitative results. With DropTail and bursty TCP (the most widely deployed combination), the loss synchronization rate is very low; TCP pacing increases the loss synchronization rate; RED increases the loss synchronization rate.

Loss Sync. Rate: asymptotic results. If the number of flows N is large (L >> w_i): for bursty TCP, the loss sync rate depends only very weakly on window size -- all flows see the same loss; for TCP pacing, the loss sync rate is proportional to window size -- "rich" flows see more losses.

Asymptotic Result: MATLAB computation. cd+B = 1080; L = N/2; N varies; fair-share window size: (cd+B)/N.

Implications: 1. Scalable TCP is (usually) unfair with bursty TCP; 2. TCP is unfriendly to TFRC; 3. ...

Fairness of Scalable TCP. For each RTT without a loss: w_i(t+1) = α·w_i(t), with α = 1.01. For each RTT with a loss (loss event): w_i(t+1) = β·w_i(t), with β = 0.875 (Scalable TCP's standard decrease factor). [Chiu, Jain '90]: MIMD algorithms cannot converge to fairness under the synchronization model. [Kelly '03]: Scalable TCP (MIMD) converges to fairness in theory under the fluid model. [Wei, Jin, Low '06][Li, Leith, Shorten '07]: Scalable TCP is unfair in experiments.

Fairness of Scalable TCP: Chiu vs. Kelly. [Kelly '03]: Scalable TCP (MIMD) is fair -- assumption: the loss event rate is proportional to window size (fluid model). [Chiu, Jain '90]: MIMD is not fair -- assumption: the loss event rate is independent of window size (simplified synchronization model).

Fairness of Scalable TCP: Chiu vs. Kelly. [Kelly '03]: Scalable TCP is fair -- assumption: the loss event rate is proportional to window size (fluid model); under the sync.-rate model, this holds with very few bursty TCP flows or with paced TCP flows. [Chiu, Jain '90]: MIMD is not fair -- assumption: the loss event rate is independent of window size (simplified synchronization model); under the sync.-rate model, this holds with many bursty TCP flows.
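
To make the dichotomy concrete, here is a small sketch (Python; entirely my own illustration, not the talk's model) that runs two Scalable-TCP-like MIMD flows under the two loss assumptions: every flow sees every loss event (Chiu-Jain-style synchronization) versus each flow seeing a loss with probability proportional to its window share (Kelly's fluid assumption).

    import random

    # Two MIMD flows (x1.01 per loss-free RTT, x0.875 on loss) sharing one bottleneck.
    def run(proportional, rtts=100_000, seed=7):
        rng = random.Random(seed)
        w = [10.0, 100.0]                    # start deliberately unfair
        capacity = 2000.0                    # congestion when the windows exceed this
        share_sum = 0.0
        for t in range(rtts):
            w = [x * 1.01 for x in w]        # MIMD increase each RTT
            if sum(w) > capacity:            # congestion (loss) event
                total = sum(w)
                for i in range(2):
                    hit = True if not proportional else rng.random() < w[i] / total
                    if hit:
                        w[i] *= 0.875        # MIMD decrease
            if t >= rtts // 2:
                share_sum += w[0] / sum(w)   # average flow 0's share over the second half
        return share_sum / (rtts - rtts // 2)

    print("synchronized losses -> flow 0 share", round(run(False), 3))  # stays near 0.09
    print("proportional losses -> flow 0 share", round(run(True), 3))   # close to 0.5 (fair)

With bursty TCP over DropTail the measured synchronization rates are nearly window-independent (the synchronized case), which is consistent with the unfairness seen in the NS-2 results on the next slide.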

Scalable TCP: simulations. Capacity = 100 Mbps; delay = 200 ms; buffer size = BDP; MTU = 1500 bytes; N varies; rates averaged over a 600-second run.

Gap 2: Fairness of Scalable TCP. Analysis: "Scalable TCP is fair in a homogeneous network" [Kelly '03]. Experiment: in most cases, Scalable TCP is unfair in a homogeneous network. Analysis: "MIMD in general is unfair" [Chiu & Jain '90] → Scalable TCP is unfair. Reason: sub-RTT burstiness leads to similar loss sync. rates for different flows.

TFRC vs TCP. TCP: bursty within each RTT. TFRC: paced (the same data packet process as TCP pacing).

TFRC vs TCP: simulation

Gap 3: TCP vs TFRC. Analysis: "We designed the TCP Friendly Rate Control (TFRC) algorithm to have the same equilibrium as TCP when they co-exist." Experiment: TCP flows do not fairly coexist with TFRC flows. Reason: sub-RTT burstiness leads to different loss sync. rates for TFRC and TCP.

Sub-RTT Burstiness: summary. Effects: low loss sync. rate with DropTail routers; poor convergence; MIMD unfairness; unfriendliness to TFRC. Possible solutions: eliminate sub-RTT burstiness (pacing); randomize the loss signal (RED); use a persistent loss signal (ECN).

Outline: Motivation; Overview of Microscopic Behavior; Stability of Delay-based Congestion Control Algorithms; Fairness of Loss-based Congestion Control Algorithms; Future Work

Future work: a research framework for microscopic Internet behavior. Experimental tools that help observe, analyze, and validate microscopic behavior in the Internet: WAN-in-Lab, NS-2 TCP-Linux, ... Theoretical models: more accurate models that capture the dynamics of the Internet at microscopic timescales. New algorithms: algorithms that utilize and control microscopic Internet behavior.

NS-2 TCP-Linux: the first tool that can run a congestion control algorithm directly from Linux source code with the same simulation speed (sometimes even faster). 700+ local downloads (2400+ tutorial visits worldwide); 5+ Linux kernel fixes; 2+ papers. Outreach: BIC/CUBIC TCP (NCSU), H-TCP (Hamilton), TCP Westwood (UCLA / Politecnico di Bari), A-Reno (NEC), ... (diagram: NS-2 simulator <-> Linux implementation)

Thank you! Q&A