1 Impact of Bottleneck Queue on Long Distant TCP Transfer
Masaki Hirabaru (NICT) and Jin Tanaka (KDDI)
August 25, 2005, NOC-Network Engineering Session, Advanced Network Conference in Taipei

2 APAN Requirements on Transport
Advanced ► High Speed
International ► Long Distant
Difficulty in congestion avoidance is in proportion to the Bandwidth-Delay Product (BWDP).
Single TCP flow; no fairness considered.

3 Long Distant Rover Control
At least 7 minutes one-way delay between Earth and Mars for images and commands.
By the time the operator saw the collision, it was too late.

4 Long-Distance End-to-End Congestion Control
Diagram: a sender (JP) and a receiver (US) with 200 ms round-trip delay; flows A and B merge at a bottleneck of capacity C, and A + B > C causes overflow; feedback reaches the sender only after the round trip.
BWDP: the amount of data sent but not yet acknowledged.
64 Kbps x 200 ms = 1600 B ~ 1 packet
1 Gbps x 200 ms = 25 MB ~ 16700 packets
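As a worked example (my own sketch, not part of the slides), the two BWDP figures above follow directly from bandwidth x RTT:

# Bandwidth-delay product: data in flight that has been sent but not yet acknowledged.
def bwdp(bandwidth_bps, rtt_s, packet_bytes=1500):
    nbytes = bandwidth_bps * rtt_s / 8          # bits -> bytes
    return nbytes, nbytes / packet_bytes        # (bytes, full-size packets)

for bw in (64e3, 1e9):                          # 64 Kbps and 1 Gbps
    nbytes, npkts = bwdp(bw, 0.2)               # 200 ms RTT
    print(f"{nbytes:,.0f} B ~ {npkts:,.0f} packets")
# -> 1,600 B ~ 1 packets and 25,000,000 B ~ 16,667 packets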

5 Analyzing Advanced TCP Dynamic Behavior in a Real Network (example: from Tokyo to Indianapolis at 1 Gbps with HighSpeed TCP)
The data was obtained during an e-VLBI demonstration at the Internet2 Member Meeting in October 2003.
Graphs (generated through Web100): throughput, RTT, window sizes, packet losses.

6 TCP Performance Measurement in Testbed
Setup: a Linux TCP sender and receiver connected over GbE through a dummynet (FreeBSD 5.1) emulating the path; RTT 200 ms (100 ms one-way); only 800 Mbps available; 1500 B MTU.
Focus on the bottleneck queue: overflow (loss), and queuing delay (q) + trip delay (t), with 1/2 RTT < t < RTT.

7 TCP Performance with Different Queue Sizes

8 TCP's Way of Rate Control (slow-start)
Diagram: sending rate over one RTT (200 ms), with marks at 20 ms, 40 ms, 80 ms, and 160 ms. Packets go out in bursts at the 1 Gbps line rate; although the average rate is only about 150 Mbps, the bursts overflow a 1000-packet queue at a 100 Mbps bottleneck.
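To make the burst effect concrete, here is a simplified per-RTT model (my own sketch; the slide's figure shows sub-RTT ack-clocked bursts, so the exact numbers differ): each round the sender bursts its whole window back-to-back at line rate while the bottleneck drains continuously.

# Simplified slow-start burst model: cwnd doubles each RTT, packets leave the
# sender at LINE_RATE, and the bottleneck serves them at BOTTLENECK.
LINE_RATE   = 1e9       # bps
BOTTLENECK  = 100e6     # bps
RTT         = 0.2       # s
PKT_BITS    = 1500 * 8
QUEUE_LIMIT = 1000      # packets

queue, cwnd = 0.0, 1
while cwnd <= 1 << 20:
    burst_time = cwnd * PKT_BITS / LINE_RATE
    queue += cwnd * (1 - BOTTLENECK / LINE_RATE)        # growth during the burst
    if queue > QUEUE_LIMIT:
        avg_mbps = cwnd * PKT_BITS / RTT / 1e6
        print(f"overflow at cwnd = {cwnd} packets (average rate ~{avg_mbps:.0f} Mbps)")
        break
    drain = (RTT - burst_time) * BOTTLENECK / PKT_BITS  # drain for the rest of the RTT
    queue = max(0.0, queue - drain)
    cwnd *= 2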

9 TCP Burstiness
Panels: (a) HighSpeed, (b) Scalable, (c) BIC, (d) FAST; plotted against bottleneck bandwidth and queue size.

10 Measuring Bottleneck Queue Sizes
Method: a packet train is sent from sender to receiver through the switch/router under test, with cross traffic injected for measurement; from the measured packets and the lost packet, Queue Size = C x (Delay max - Delay min), where C is the capacity.
Switch/Router Queue Size Measurement Result (* capacity set to 100 Mbps for measurement):

Device     Queuing Delay (µs)   Capacity (Mbps)   Estimated Queue Size (1500 B)
FES12GCF          6161               100*            50 p / 75 KB
GB9812T          22168               100*           180 p / 270 KB
Summit1i         20847               100*           169 p / 254 KB
GS4000             738               1000            60 p / 90 KB
FI400             3662               1000           298 p / 447 KB
M20             148463               1000         12081 p / 18 MB
Pro8801         188627               1000         15350 p / 23 MB
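The table's estimates follow the formula on the slide; here is a minimal sketch of the calculation (my own code, with the tabulated queuing delay treated as Delay max - Delay min):

# Queue size estimate from the delay spread observed by a packet train.
def queue_size(capacity_bps, delay_spread_us, packet_bytes=1500):
    qbytes = capacity_bps * delay_spread_us * 1e-6 / 8
    return qbytes / packet_bytes, qbytes

pkts, qbytes = queue_size(100e6, 6161)                 # the FES12GCF row
print(f"~{pkts:.0f} packets, ~{qbytes/1024:.0f} KB")   # -> ~51 packets, ~75 KB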

11 Typical Bottleneck Cases
a) Router → Switch: 1 Gbps (10G) feeding 100 Mbps (1G); the router queue is ~1000 packets, the switch queue ~100 packets.
b) Switch/Router carrying VLANs, fed by untagged 10G LAN-PHY Ethernet:
b-1) output with an 802.1q tag
b-2) output to 9.5G WAN-PHY

12 Solutions by Advanced TCPs
How can we foresee collision (queue overflow)?
Loss-based ► AQM (Active Queue Management): Reno, Scalable, HighSpeed, BIC, …
Delay-based: Vegas, FAST
Explicit router notification: ECN, XCP, Quick Start, SIRENS, MaxNet

13 Queue Management Methods
FIFO (First In, First Out): packets are queued in arrival order and dropped only when the queue is full (tail drop).
RED (Random Early Detection): packets are dropped at random once the queue length exceeds a threshold, before the queue fills up.
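A minimal sketch (my own, not from the slides) contrasting the two drop decisions; real RED works on an averaged queue length, the instantaneous length is used here for brevity:

import random

QUEUE_LIMIT = 6        # packets, as in the small queues of the diagram
RED_MIN_TH  = 4        # RED starts dropping above this threshold
RED_MAX_TH  = 6
RED_MAX_P   = 0.5      # drop probability as the queue approaches RED_MAX_TH

def fifo_drop(queue_len):
    # Tail drop: only when the queue is already full.
    return queue_len >= QUEUE_LIMIT

def red_drop(queue_len):
    # Early drop: probability rises as the queue grows past the threshold.
    if queue_len < RED_MIN_TH:
        return False
    if queue_len >= RED_MAX_TH:
        return True
    return random.random() < RED_MAX_P * (queue_len - RED_MIN_TH) / (RED_MAX_TH - RED_MIN_TH)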

14 HOLB (Head of Line Blocking)
Diagram: inside a switch, a single input queue feeds a fast output and a slow output. When the slow output queue is full, the packet at the head of the input queue has to wait, and the packets behind it are blocked even though the fast output queue is empty.
Note: Ethernet flow control (the PAUSE frame in 802.3x) may produce head-of-line blocking, resulting in lower performance at a backbone switch.
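To illustrate the effect, a toy model (my own sketch, not from the presentation) of a single FIFO input queue whose head packet is destined to a full output:

from collections import deque

input_queue = deque(["B", "A", "A", "A"])                 # head packet goes to the slow output B
outputs = {"A": deque(), "B": deque(["old"], maxlen=1)}   # output B is already full

forwarded = []
while input_queue:
    head = input_queue[0]
    room = outputs[head].maxlen or float("inf")
    if len(outputs[head]) < room:
        outputs[head].append(input_queue.popleft())
        forwarded.append(head)
    else:
        break       # the head cannot move, so everything behind it is blocked too

print(forwarded)    # -> [] : packets for the empty output A never get through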

15 Summary
Add an interface to a router, or use a switch with an appropriate interface queue.
Let's consider making use of AQM on a router.
Future plan: 10 Gbps congestion tests through TransPAC2 and JGN II with large delay (>= 100 ms).

