1 Microscopic Behavior of Internet Control Xiaoliang (David) Wei NetLab, CS&EE California Institute of Technology

2 Internet Control
Problem -> solution -> understanding -> Problem -> solution -> understanding -> ...
Timeline: 1986, 1989, 1995, 1999, 2003, ...
1986: First Internet Congestion Collapse

3 Internet Control
Problem -> solution -> understanding -> Problem -> solution -> understanding -> ...
Timeline: 1986, 1989, 1995, 1999, 2003, ...
1986: First Internet Congestion Collapse
1988~1990: TCP Tahoe; DEC-bit

4 Internet Control
Problem -> solution -> understanding -> Problem -> solution -> understanding -> ...
Timeline: 1986, 1989, 1995, 1999, 2003, ...
1986: First Internet Congestion Collapse
1988~1990: TCP Tahoe; DEC-bit
1993~1995: Tri-S, DUAL, TCP Vegas

5 Outline
- Motivation
- Overview of microscopic behavior
- Stability of delay-based congestion control algorithms
- Fairness of loss-based congestion control algorithms
- Future work
- Summary

6 Outline
- Motivation
- Overview of microscopic behavior
- Stability of delay-based congestion control algorithms
- Fairness of loss-based congestion control algorithms
- Future work

7 Macroscopic View of TCP Control
TCP/AQM: a feedback control system.
[Diagram: TCP senders 1 and 2 send at rates x_i(t) through a bottleneck of capacity C with queue q(t), with forward/backward delays τF and τB; the TCP side runs Reno, Vegas, or FAST, and the AQM side signals congestion via DropTail/RED, delay, or ECN; receivers return acknowledgments.]

8 Fluid Models
Assumptions:
- TCP algorithms directly control the transmission rates;
- The transmission rates are differentiable (smooth);
- Each TCP packet observes the same congestion price (loss, delay, or ECN).
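To make these assumptions concrete, here is a minimal sketch (my own, in Python) that integrates a classic Reno/AQM fluid model in the style of Misra, Gong and Towsley. It is not necessarily the exact model used in this work, and the link parameters and marking function p(q) below are illustrative assumptions only.

```python
def reno_fluid(C=1250.0, d=0.1, N=10, p_of_q=lambda q: min(1.0, q / 5000.0),
               dt=0.001, horizon=60.0):
    """Sketch: a classic Reno/AQM fluid model (Misra-Gong-Towsley style).
    N identical flows, link capacity C pkt/s, propagation delay d s,
    marking/drop probability p(q).  Euler integration; the congestion
    price is fed back with a delay of roughly one RTT."""
    W, q = 1.0, 0.0                  # per-flow window (pkts), queue (pkts)
    hist = [(W, q)]                  # past states, used for delayed feedback
    for _ in range(int(horizon / dt)):
        R = d + q / C                # current round-trip time
        lag = min(len(hist) - 1, int(R / dt))
        W_old, q_old = hist[-1 - lag]            # state about one RTT ago
        R_old = d + q_old / C
        dW = 1.0 / R - (W / 2.0) * (W_old / R_old) * p_of_q(q_old)
        dq = N * W / R - C
        W = max(W + dW * dt, 1.0)
        q = max(q + dq * dt, 0.0)
        hist.append((W, q))
    return W, q

print(reno_fluid())   # final (per-flow window, queue) after 60 simulated seconds
```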

9 Methodology Based on Fluid Models
- Equilibrium: efficiency? fairness?
- Dynamics: stability? responsiveness?

10 Gap 1: Stability of TCP Vegas
Analysis: "TCP Vegas is stable if (and only if) the number of flows is large, the capacity is small, and the delay is small."
Experiment: a single TCP Vegas flow is stable with arbitrary delay and capacity.

11 Gap 2: Fairness of Scalable TCP
Analysis: "Scalable TCP is fair in a homogeneous network" [Kelly '03].
Experiment: in most cases, Scalable TCP is unfair in a homogeneous network.
Analysis: [Chiu & Jain '90] → Scalable TCP is unfair.

12 Gap 3: TCP vs TFRC
Analysis: "We designed the TCP Friendly Rate Control (TFRC) algorithm to have the same equilibrium as TCP when they co-exist."
Experiment: TCP flows do not coexist fairly with TFRC flows.

13 Gaps
- Stability: TCP Vegas
- Fairness: Scalable TCP
- Friendliness: TCP vs TFRC
Current analytical models ignore microscopic behavior in TCP congestion control.

14 Outline
- Motivation
- Overview of microscopic behavior
- Stability of delay-based congestion control algorithms
- Fairness of loss-based congestion control algorithms
- Future work

15 Microscopic View (Packet Level)
Two timescales:
- On each RTT: the TCP congestion control algorithm;
- On each packet (ack) arrival: ack-clocking:
    p--;
    while (p < w(t)) do
        send a packet;
        p++;
  (p: number of packets in flight)
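As a minimal executable rendering of this rule (the event framing and variable names are mine, not from the slides), the ack-clocking step can be written as:

```python
def on_ack_arrival(state, t, w):
    """Ack-clocking, run on each ack arrival at time t.
    state['p'] is the number of packets in flight; w is the current
    congestion window w(t). Returns the send times of packets sent now."""
    sent = []
    state['p'] -= 1               # the acked packet is no longer in flight
    while state['p'] < w:         # refill the flight up to the window
        sent.append(t)            # all these packets go out back-to-back
        state['p'] += 1
    return sent

# Example: the window grew from 4 to 6 since the last ack, so one ack
# triggers a micro burst of 3 packets (1 replacement + 2 extra).
state = {'p': 4}
print(len(on_ack_arrival(state, t=0.1, w=6)), state['p'])   # 3 6
```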

16-21 [Animation: ack-clocking of a single flow. The window grows from 0 to 5, so packets 1-5 are sent back-to-back as a micro burst; they queue at the bottleneck and leave at rate c; the acknowledgments return at rate c; new packets are then sent at rate c, so there is no queue in the second round trip. No need to control the rate x(t)!]

22-28 [Animation: ack-clocking with two flows sharing the bottleneck. Each flow's packets travel clustered behind its own acknowledgments, so within each RTT the two flows alternate bursts at the bottleneck: an on-off pattern for each flow.]

29 Sub-RTT Burstiness: NS-2 Measurement

30 Two Levels of Burstiness
- Micro burst: pulse function; input rate >> c; extra queue and loss; transient.
- Sub-RTT burstiness: on-off function; input rate <= c; no extra queue or loss; persistent.

31 Microscopic Effects: Known
- Micro burst, loss-based TCP: low throughput with small buffers; pacing improves throughput (clearly understood).
- Micro burst, delay-based TCP: noise in the delay signal, should be eliminated (partially ...).
- Sub-RTT burstiness: observed in Internet traffic ("Why do we care?").

32 Microscopic Effects: New
- Micro burst, loss-based TCP: low throughput with small buffers; pacing improves throughput (clearly understood).
- Micro burst, delay-based TCP: fast convergence in queueing delay and better stability.
- Sub-RTT burstiness, loss-based TCP: low loss synchronization rate with DropTail routers.
- Sub-RTT burstiness, delay-based TCP: no effect.

33 New Understandings
Micro burst with delay-based TCP: fast queue convergence
1. A single TCP Vegas flow is always stable, regardless of delay and capacity.
Sub-RTT burstiness with loss-based TCP: low loss synchronization rate
1. Scalable TCP is (usually) unfair;
2. TCP is unfriendly to TFRC.

34 Outline
- Motivation
- Overview of microscopic behavior
- Stability of delay-based congestion control algorithms
- Fairness of loss-based congestion control algorithms
- Future work

35 New Understandings
Micro burst with delay-based TCP: fast queue convergence
1. A single TCP Vegas flow is always stable, regardless of delay and capacity.
Sub-RTT burstiness with loss-based TCP: low loss synchronization rate
1. Scalable TCP is (usually) unfair;
2. TCP is unfriendly to TFRC.

36 A Packet Level Model: Basis
- Packets can only be sent upon arrival of an acknowledgment;
- A micro burst of packets can be sent at a single instant;
- The window size w(t) can be an arbitrary given process.
Ack-clocking: on each ack arrival
    p--;
    while (p < w(t)) do
        send a packet;
        p++;
(p: number of packets in flight)

37 A Packet Level Model: Variables
- p_j: number of packets in flight when packet j is sent;
- s_j: sending time of packet j;
- b_j: backlog experienced by packet j;
- a_j: ack arrival time of packet j.
Ack-clocking: on each ack arrival
    p--;
    while (p < w(t)) do
        send a packet;
        p++;
(p: number of packets in flight)

38-40 A Packet Level Model: Variables [animation frames on the single-bottleneck figure]
- p_j: number of packets in flight when packet j is sent;
- s_j: sending time of packet j;
- b_j: backlog experienced by packet j;
- a_j: ack arrival time of packet j.

41 A Packet Level Model: Variables
- k: number of acks between s_{j-1} and s_j;
- p_j: number of packets in flight when packet j is sent;
- s_j: sending time of packet j;
- a_{j-p_j}: ack arrival time of the packet sent one RTT ago.
Ack-clocking: on each ack arrival
    p--;
    while (p < w(t)) do
        send a packet;
        p++;
(p: number of packets in flight)

42 A Packet Level Model: Variables
- k: number of acks between s_{j-1} and s_j. For example: k = 0.
Ack-clocking: on each ack arrival
    p--;
    while (p < w(t)) do
        send a packet;
        p++;
(p: number of packets in flight)

43 A Packet Level Model: Variables [figure: packets j-1 and j queued at the bottleneck]
- b_j: backlog experienced by packet j;
- c: bottleneck capacity;
- a_j: ack arrival time of packet j;
- d: propagation delay.

44 A Packet Level Model
- p_j: number of packets in flight when packet j is sent;
- s_j: sending time of packet j;
- b_j: backlog experienced by packet j;
- a_j: ack arrival time of packet j.
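The slide lists the variables; the formulas of the model itself were shown graphically and are not in this transcript. As an illustration only, the sketch below is my own event-driven rendering of a single FIFO bottleneck (capacity c packets/s, with all propagation delay d lumped on the ack path) driven by the ack-clocking rule, generating (p_j, s_j, b_j, a_j) for an arbitrary window process w(t).

```python
import heapq

def packet_level_sim(w_of_t, c, d, horizon):
    """Sketch (my construction, not the talk's exact model): ack-clocked
    single-bottleneck FIFO.  Returns one record (p_j, s_j, b_j, a_j) per
    packet sent before `horizon` seconds."""
    acks = []            # min-heap of pending ack arrival times
    records = []         # (p_j, s_j, b_j, a_j)
    p = 0                # packets in flight
    free_at = 0.0        # time at which the bottleneck server becomes idle

    def send(t):
        nonlocal p, free_at
        p += 1
        backlog = max(0, int(round((free_at - t) * c)))   # b_j, in packets
        depart = max(free_at, t) + 1.0 / c                # FIFO service
        free_at = depart
        ack_time = depart + d                             # a_j
        heapq.heappush(acks, ack_time)
        records.append((p, t, backlog, ack_time))

    t = 0.0
    while p < w_of_t(t):         # initial window: sent as one micro burst
        send(t)
    while acks and acks[0] < horizon:
        t = heapq.heappop(acks)  # next ack arrival
        p -= 1
        while p < w_of_t(t):     # ack-clocking refill
            send(t)
    return records

# Example: window steps from 5 to 8 packets at t = 1 s (c in pkt/s, d in s).
recs = packet_level_sim(lambda t: 5 if t < 1.0 else 8, c=100.0, d=0.1, horizon=2.0)
print(recs[:3])                  # first packets of the initial micro burst
```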

45 Ack-Clocking: Quick Sending Process
Theorem: For any time s_j at which a packet j is sent, there is always a packet j* := j*(j) such that s_{j*} = s_j and p_{j*} = w(s_j).
In other words, the number of packets in flight at any packet sending time is synchronized with the congestion window.

46 Ack-Clocking: Fast Queue Convergence
Theorem (informally): the queue converges instantly if the window size is larger than the BDP throughout the entire previous RTT.

47 Window Control and Ack-Clocking
Per-RTT window control:
- makes a decision once every RTT,
- using the measurement from the latest acknowledgment (a subsequence of sequence numbers k_1, k_2, k_3, ...).

48 Ack-Clocking: Pacing of Micro Bursts
Theorem (informally): there will be no extra queue due to micro bursts if the window does not increase during the entire previous RTT.

49 Stability of TCP Vegas
Theorem: Given the packet level model, if αd > 1, a single TCP Vegas flow converges to equilibrium for arbitrary capacity c and propagation delay d; that is, there exists a sequence number J such that ...
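As a rough numerical illustration of the theorem (not its proof), the sketch below iterates the textbook per-RTT Vegas rule for one flow over an idealized bottleneck where the flow's own backlog is simply max(w - c*d, 0). The α/β thresholds (in packets) and the specific numbers are my assumptions; the theorem's condition αd > 1 roughly corresponds to a target backlog of more than one packet.

```python
def vegas_single_flow(c, d, alpha=2.0, beta=4.0, rtts=1000, w0=1.0):
    """Sketch: one TCP Vegas flow on an idealized bottleneck (capacity c
    pkt/s, propagation delay d s).  Per-RTT Vegas rule: keep the flow's
    estimated backlog between alpha and beta packets."""
    w = w0
    for _ in range(rtts):
        queue = max(w - c * d, 0.0)           # this flow's packets in the buffer
        rtt = d + queue / c
        backlog_est = (w / d - w / rtt) * d   # expected minus achieved rate, times baseRTT
        if backlog_est < alpha:
            w += 1.0                          # too little queue: increase
        elif backlog_est > beta:
            w -= 1.0                          # too much queue: decrease
    return w

# Settles at roughly w = c*d + alpha packets, for any capacity and delay:
for c, d in [(1e3, 0.01), (1e4, 0.1)]:
    print(c, d, vegas_single_flow(c, d, rtts=int(2 * c * d) + 50))
```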

50 Stability of Vegas: 100-Flow Simulation

51 Stability of Vegas: Average Window Size (window oscillation: 1 packet)

52 Stability of Vegas: Queue Size (queue oscillation: 100 packets, because the 100 flows are synchronized)

53 Gap 1: Stability of TCP Vegas
Analysis: "TCP Vegas is stable if (and only if) the number of flows is large, the capacity is small, and the delay is small."
Experiment: a single TCP Vegas flow is stable with arbitrary delay and capacity.
Reason: micro bursts lead to fast queue convergence.

54 FAST: Stable and Responsive
FAST is designed based on the intuition that the queue is directly a function of the congestion window size.
A FAST flow updates its window as follows every other RTT: ...

55 FAST: Stability
Theorem: Given the packet level model, homogeneous FAST flows converge to equilibrium regardless of the capacity c, the propagation delay d, and the number of flows N.
[Tang, Jacobsson, Andrew, Low '07]: FAST is stable over a single bottleneck link regardless of capacity c, propagation delay d, and number of flows N (with an extended fluid model capturing micro-burst effects).

56 Micro Burst: Summary
Effects:
- Fast queue convergence
- Stability of homogeneous Vegas for arbitrary delay
- Possibility of very responsive and stable TCP control
- Stability of FAST for arbitrary delay

57 Outline
- Motivation
- Overview of microscopic behavior
- Stability of delay-based congestion control algorithms
- Fairness of loss-based congestion control algorithms
- Future work

58 New Understandings
Micro burst with delay-based TCP: fast queue convergence
1. A single (homogeneous) TCP Vegas flow is always stable, regardless of delay and capacity.
Sub-RTT burstiness with loss-based TCP: low loss synchronization rate
1. Scalable TCP is (usually) unfair;
2. TCP is unfriendly to TFRC.

59 Loss Synchronization Rate: Definition
Loss synchronization rate [Baccelli, Hong '02]: the probability that a flow observes a packet loss during a congestion event.
Congestion event (loss event): a round-trip-time interval in which at least one packet is dropped by the bottleneck router due to congestion (buffer overflow at the router).
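Under this definition, estimating synchronization rates from a trace is a matter of counting, per congestion event, which flows lost at least one packet. A small sketch (the trace layout is hypothetical, mine):

```python
from collections import defaultdict

def loss_sync_rates(drops, n_flows):
    """Sketch of the definition above.  `drops` is one (event_id, flow_id)
    pair per dropped packet, where event_id identifies the congestion
    event (the RTT interval containing the drop).  Returns, per flow, the
    fraction of congestion events in which that flow saw at least one loss."""
    flows_hit = defaultdict(set)            # event_id -> flows that lost a packet
    for event_id, flow_id in drops:
        flows_hit[event_id].add(flow_id)
    n_events = len(flows_hit)
    return [sum(1 for hit in flows_hit.values() if f in hit) / n_events
            for f in range(n_flows)]

# Toy trace: 3 congestion events, 4 flows; flow 0 loses a packet in every event.
drops = [(0, 0), (0, 1), (1, 0), (2, 0), (2, 3)]
print(loss_sync_rates(drops, n_flows=4))    # [1.0, 0.33..., 0.0, 0.33...]
```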

60 Loss Synchronization Rate: Effects
Intuitions:
- Individual flow: the smaller the better (selfishness);
- System design: the higher the better (for fairness and convergence).
Theoretical results:
- Aggregate throughput [Baccelli, Hong '02]
- Instantaneous fairness [Baccelli, Hong '02]
- Fairness convergence [Shorten, Wirth, Leith '06]

61 Loss Sync. Rate: Existing Models
- [Shorten, Wirth, Leith '06]: no model; synchronization rates are measured from NS-2 and fed into a model for computational results.
- [Baccelli, Hong '02]: assume each packet has the same probability of being dropped in the loss event.

62 Packet Loss Is Bursty (Internet): about 50% of losses happen in bursts.

63 Loss Process Is Bursty: On-Off
In each loss event (one RTT), the packet loss process is an on-off process.

64 Data Packet Process Is Bursty: On-Off
In each loss event (one RTT), the TCP data packet process is an on-off process.

65 Loss Sync. Rate: A Sampling Perspective
Loss sync. rate: the efficiency of a (bursty) TCP data process in sampling the loss signal of a (bursty) loss process.
Assumption 1: within the RTT of the loss event, the position of an individual flow's burst is uniformly distributed.
Assumption 2: the loss process does not depend on the data packet process of individual flows.

66 Loss Sync. Rate, Case 1: TCP + DropTail
- w_i: window of a TCP flow;
- L: number of dropped packets;
- cd+B+L: number of packets going through the bottleneck in the loss event (c: capacity, d: propagation delay, B: buffer size).

67 Loss Sync. Rate: TCP+DropTail

68 Loss Sync. Rate, Case 2: Pacing + DropTail
- w_i: window of a TCP flow;
- L: number of dropped packets;
- cd+B+L: number of packets going through the bottleneck in the loss event.

69 Loss Sync. Rate: Pacing + DropTail

70 Loss Sync. Rate, Case 3: TCP + RED
- w_i: window of a TCP flow;
- L: number of dropped packets;
- cd+B+L: number of packets going through the bottleneck in the loss event.

71 Model for Loss Sync. Rate: General Form
- cd+B: number of packets going through the bottleneck in the loss event (c: capacity, d: propagation delay, B: buffer size);
- w_i: window of a TCP flow in the loss event;
- L: number of dropped packets in the loss event;
- K_i: length of the burst period of flow i (in packets);
- M: length of the burst period of the loss process (in packets).
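The closed-form expressions from the talk are not in this transcript, but the sampling picture behind them can be checked by Monte Carlo. The sketch below is my simplified rendering of that picture: the flow's w packets arrive in bursts of K consecutive positions placed uniformly at random among the cd+B+L positions of the loss event, independently of the loss bursts of length M, and the flow is "synchronized" if any of its packets lands on a dropped position. Treat it as an illustration of the assumptions, not as the model's exact formula.

```python
import random

def sync_rate_mc(total, w, L, K, M, trials=20000, rng=random):
    """Monte Carlo sketch (my simplification) of the burst-sampling model.
    total = cd + B + L packet positions in the loss event; the flow's w
    packets form bursts of K consecutive positions, the L dropped packets
    form bursts of M consecutive positions; burst starts are uniform and
    overlaps within a process are ignored."""
    n_flow_bursts = max(1, round(w / K))
    n_loss_bursts = max(1, round(L / M))
    hits = 0
    for _ in range(trials):
        drop_pos = set()
        for _ in range(n_loss_bursts):
            start = rng.randrange(total - M + 1)
            drop_pos.update(range(start, start + M))
        lost = any(
            any(pos in drop_pos for pos in range(s, s + K))
            for s in (rng.randrange(total - K + 1) for _ in range(n_flow_bursts))
        )
        hits += lost
    return hits / trials

# Numbers from the MATLAB slide (cd+B = 1080, w_i = 60, L = 16):
print(sync_rate_mc(1080 + 16, w=60, L=16, K=60, M=16))  # bursty TCP: low sync rate
print(sync_rate_mc(1080 + 16, w=60, L=16, K=1,  M=16))  # paced TCP: much higher
```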

72 Loss Sync. Rate: MATLAB Computation
cd+B = 1080; w_i = 60; L = 16; K and M vary.

73 Measurement: TCP + DropTail (averaged sync. rate)
cd+B = 3340; M = L = N/2; K = w = (cd+B)/N.

74 Measurement: Pacing + DropTail (averaged sync. rate)
cd+B = 3340; M = L = N/2; K = w = (cd+B)/N.

75 Measurement: TCP + RED (averaged sync. rate)
cd+B = 3340; M = L = N/2; K = w = (cd+B)/N.

76 Loss Sync. Rate: Qualitative Results
- With DropTail and bursty TCP (the most widely deployed combination), the loss synchronization rate is very low;
- TCP pacing increases the loss synchronization rate;
- RED increases the loss synchronization rate.

77 Loss Sync. Rate: Asymptotic Results
If the number of flows N is large (L >> w_i):
- TCP: very weak dependency of the loss sync. rate on window size; all flows see the same loss.
- TCP pacing: the loss sync. rate is proportional to window size; rich guys see more loss.

78 Asymptotic Result: MATLAB Computation
cd+B = 1080; L = N/2; N varies. Fair-share window size: (cd+B)/N.

79 Implications
1. Scalable TCP is (usually) unfair with bursty TCP;
2. TCP is unfriendly to TFRC;
3. ...

80 Fairness of Scalable TCP
For each RTT without a loss: w_i(t+1) = α w_i(t), with α = 1.01.
For each RTT with a loss (loss event): w_i(t+1) = β w_i(t), with β = 0.875.
- [Chiu, Jain '90]: MIMD algorithms cannot converge to fairness under the synchronization model.
- [Kelly '03]: Scalable TCP (MIMD) converges to fairness in theory under the fluid model.
- [Wei, Jin, Low '06] [Li, Leith, Shorten '07]: Scalable TCP is unfair in experiments.
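The two cited analyses differ only in how loss events are assigned to flows. The toy simulation below (my own simplification) applies the MIMD update above to two flows sharing one bottleneck under each assumption: fully synchronized losses (Chiu & Jain) versus losses hitting a flow with probability proportional to its window (roughly the fluid-model assumption behind Kelly's result). The capacity, seed, and starting windows are illustrative.

```python
import random

ALPHA, BETA = 1.01, 0.875       # Scalable TCP per-RTT factors from this slide

def mimd_two_flows(mode, rtts=50000, capacity=1000.0, seed=1):
    """Sketch: two MIMD (Scalable-TCP-like) flows on one bottleneck.
    mode='sync': every flow backs off at every loss event.
    mode='proportional': one flow backs off per loss event, chosen with
    probability proportional to its window."""
    rng = random.Random(seed)
    w = [100.0, 900.0]          # deliberately unfair starting point
    for _ in range(rtts):
        if w[0] + w[1] > capacity:                   # loss event in this RTT
            if mode == 'sync':
                w = [BETA * x for x in w]
            else:
                i = 0 if rng.random() < w[0] / (w[0] + w[1]) else 1
                w[i] = max(BETA * w[i], 1.0)
        else:                                        # loss-free RTT
            w = [ALPHA * x for x in w]
    return w

print(mimd_two_flows('sync'))           # ratio stays at the initial 1:9 -- unfair
print(mimd_two_flows('proportional'))   # windows drift toward a roughly equal share
```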

81 Fairness of Scalable TCP: Chiu vs Kelly
- [Kelly '03]: Scalable TCP (MIMD) is fair. Assumption: the loss event rate is proportional to the window size (fluid model).
- [Chiu, Jain '90]: MIMD is not fair. Assumption: the loss event rate is independent of the window size (simplified synchronization model).

82 Fairness of Scalable TCP: Chiu vs Kelly
- [Kelly '03]: Scalable TCP is fair. Assumption: the loss event rate is proportional to the window size (fluid model). Sync. rate model: this holds with very few bursty TCP flows or with paced TCP flows.
- [Chiu, Jain '90]: MIMD is not fair. Assumption: the loss event rate is independent of the window size (simplified synchronization model). Sync. rate model: this holds with many bursty TCP flows.

83 Scalable TCP: Simulations
Capacity = 100 Mbps; delay = 200 ms; buffer size = BDP; MTU = 1500; N varies; rates averaged over a 600-second run.

84 Gap 2: Fairness of Scalable TCP
Analysis: "Scalable TCP is fair in a homogeneous network" [Kelly '03].
Experiment: in most cases, Scalable TCP is unfair in a homogeneous network.
Analysis: "MIMD in general is unfair" [Chiu & Jain '90] → Scalable TCP is unfair.
Reason: sub-RTT burstiness leads to similar loss sync. rates for different flows.

85 TFRC vs TCP
TCP: ...
TFRC (same as pacing): ...

86 TFRC vs TCP: simulation

87 Gap 3: TCP vs TFRC
Analysis: "We designed the TCP Friendly Rate Control (TFRC) algorithm to have the same equilibrium as TCP when they co-exist."
Experiment: TCP flows do not coexist fairly with TFRC flows.
Reason: sub-RTT burstiness leads to different loss sync. rates for TFRC and TCP.

88 Sub-RTT Burstiness: Summary
Effects:
- Low loss sync. rate with DropTail routers
- Poor convergence
- MIMD unfairness
- TFRC unfriendliness
Possible solutions:
- Eliminate sub-RTT burstiness: pacing
- Randomize the loss signal: RED
- Persistent loss signal: ECN

89 Outline
- Motivation
- Overview of microscopic behavior
- Stability of delay-based congestion control algorithms
- Fairness of loss-based congestion control algorithms
- Future work

90 Future: A Research Framework on Microscopic Internet Behavior
- Experiment tools that help to observe, analyze, and validate microscopic behavior in the Internet: WAN-in-Lab, NS-2 TCP-Linux, ...
- Theoretical models: more accurate models that capture the dynamics of the Internet at microscopic timescales.
- New algorithms that utilize and control microscopic Internet behavior.

91 NS-2 TCP-Linux
The first tool that can run a congestion control algorithm directly from its Linux source code, with the same simulation speed (sometimes even faster).
- 700+ local downloads (2400+ tutorial visits worldwide)
- 5+ Linux kernel fixes
- 2+ papers
Outreach: BIC/Cubic-TCP (NCSU), H-TCP (Hamilton), TCP Westwood (UCLA / Politecnico di Bari), A-Reno (NEC), ...
[Diagram: the NS-2 simulator wrapping the Linux implementation]

92 Thank you! Q&A

