TCP Trunking: Design, Implementation and Performance H.T. Kung and S. Y. Wang.

1 TCP Trunking: Design, Implementation and Performance H.T. Kung and S. Y. Wang

2 Overview
A TCP trunk is an aggregate traffic stream:
–IP flows with the same routing and QoS treatment
–Any number of TCP and/or UDP flows, the so-called user flows
–Runs on top of a layer-2 circuit or an MPLS label-switched path
Trunks are implemented only in the end hosts.
A trunk implements TCP congestion control by using a management TCP connection:
⇒ Data transmission and congestion control are decoupled
⇒ User flows transmit at a rate determined by the management connection

3 Properties of TCP trunks
Guaranteed and elastic bandwidth:
–Besides a guaranteed minimum bandwidth (GMB), a trunk can grab additional bandwidth when it is available
Immediate and in-sequence forwarding
Lossless delivery:
–Possible because data and control are decoupled
–Requires diff-serv-like modifications to the routers
–However, user packets may still be dropped at the trunk sender
Aggregation and isolation:
–Less state information in routers
–UDP flows become TCP-friendly
–Fair sharing of bandwidth among different sites, if every site has one trunk

4 Implementation - Management TCP
Every TCP trunk is associated with one or more management TCP connections.
A management TCP connection is a standard TCP connection, i.e. it uses the standard TCP congestion control algorithm.
Purpose: it controls the rate at which user packets are sent.
Management packets consist only of a TCP/IP header; they carry no payload.
Every management packet is followed by at most VMSS (virtual maximum segment size) bytes of user data.
⇒ The rate of user packets is therefore controlled by the management TCP's congestion control algorithm.
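Since every management packet of HP_Sz bytes releases at most VMSS bytes of user data, the trunk's user-data rate roughly tracks the management TCP's own sending rate scaled by VMSS / HP_Sz. A minimal sketch of this relationship (function name, parameters, and numbers are illustrative assumptions, not from the paper):

```python
def trunk_user_rate(mgmt_cwnd_bytes: float, rtt_s: float,
                    vmss: int, hp_sz: int) -> float:
    """Approximate user-data rate (bytes/s) of a trunk whose management
    TCP currently has a congestion window of mgmt_cwnd_bytes."""
    mgmt_rate = mgmt_cwnd_bytes / rtt_s   # standard cwnd/RTT rate estimate
    return mgmt_rate * (vmss / hp_sz)     # each header releases up to VMSS bytes

# Example: cwnd of 10 management packets (52 bytes each), RTT 20 ms, VMSS 1500:
rate = trunk_user_rate(mgmt_cwnd_bytes=10 * 52, rtt_s=0.020,
                       vmss=1500, hp_sz=52)
```

Because the scaling factor VMSS / HP_Sz is per-trunk, a larger VMSS gives a trunk a proportionally larger share for the same management-TCP behavior, which is the effect exploited in experiments TT1(a) and TT1(b) below.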

5 Implementation - the User Flows
A TCP trunk has a tunnel queue.
User packets of all flows are put into the tunnel queue.
Each time a management packet is sent, user packets worth at most VMSS bytes are transmitted, without a TCP/IP header but with the layer-2 circuit's header.
The trunk does not retransmit data; retransmission is left to higher layers.
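The tunnel-queue mechanism can be sketched as follows; the class and its details are an illustrative assumption, not the authors' implementation:

```python
from collections import deque

class TunnelQueue:
    """Buffers user packets and releases at most VMSS bytes of them
    each time the management TCP sends one management packet."""

    def __init__(self, vmss: int):
        self.vmss = vmss
        self.queue = deque()          # buffered user-packet sizes, in bytes

    def enqueue(self, pkt_size: int) -> None:
        self.queue.append(pkt_size)

    def on_management_packet_sent(self) -> list:
        """Dequeue whole user packets worth at most VMSS bytes."""
        sent, budget = [], self.vmss
        while self.queue and self.queue[0] <= budget:
            pkt = self.queue.popleft()
            budget -= pkt
            sent.append(pkt)
        return sent

tq = TunnelQueue(vmss=1500)
for size in (600, 600, 600):
    tq.enqueue(size)
burst = tq.on_management_packet_sent()  # first two packets fit in 1500 bytes
```

Note the sketch only sends whole packets; the third 600-byte packet waits for the next management packet because only 300 bytes of budget remain.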

6 TCP Trunking and Guaranteed Minimum Bandwidth
Assume admission control and reservation guarantee a minimum bandwidth of X bytes/millisecond.
When a timer expires, the trunk sends packets under the control of a leaky-bucket filter:
–Objective: for any time interval of Y milliseconds, send X * Y bytes, if there are enough packets in the tunnel queue
If there are still packets in the tunnel queue, the trunk sends them under the control of the management TCP.
⇒ The trunk guarantees a minimum bandwidth, but will also grab available network bandwidth.
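The leaky-bucket part of the scheme amounts to a per-interval byte budget. A hedged sketch, assuming a simple fixed-rate bucket (names and timer details are illustrative):

```python
class GmbBucket:
    """Leaky-bucket budget for the guaranteed minimum bandwidth (GMB):
    over any interval of Y milliseconds, up to X * Y bytes may be sent."""

    def __init__(self, x_bytes_per_ms: float):
        self.rate = x_bytes_per_ms

    def bytes_allowed(self, interval_ms: float) -> float:
        """Byte budget for one timer interval of interval_ms milliseconds."""
        return self.rate * interval_ms

# A trunk with GMB = 400 KB/s, i.e. X = 400 bytes/ms, timer firing every 10 ms:
bucket = GmbBucket(x_bytes_per_ms=400)
budget = bucket.bytes_allowed(10)   # bytes the GMB path may send this tick
```

Packets still queued after the GMB budget is spent would then be handed to the management-TCP-controlled path, which is what lets the trunk exceed its guarantee when spare bandwidth exists.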

7 Router Buffer Considerations
One FIFO queue, shared by user and management packets.
To prevent loss of user packets:
–When the queue builds up, drop management packets early enough
–Provide sufficient space to deal with control delay and new trunk arrivals
Dropping policy:
–Management packets are dropped when the queue exceeds the threshold HP_Th
–HP_Th = α * N, where N is the expected number of TCP flows and α is the congestion-window size that TCP needs to avoid frequent timeouts

8 Router Buffer Considerations Cont’d
Size of the buffer (three types of packets):
–Management packets: HP_BS = HP_Th * HP_Sz, where HP_Sz is the size of a management packet
–User packets of the TCP-controlled flows: UP_BS_TCP = HP_BS * (VMSS / HP_Sz) + N * VMSS
–User packets of the guaranteed-bandwidth flows: UP_BS_GMB. The guaranteed flows occupy the fraction β of the link bandwidth:
β = UP_BS_GMB / (HP_BS + UP_BS_TCP + UP_BS_GMB)
⇒ UP_BS_GMB = (HP_BS + UP_BS_TCP) * β / (1 - β)
Adding it all up yields the required buffer size:
required_BS = (HP_BS + HP_BS * (VMSS / HP_Sz) + N * VMSS) * 1 / (1 - β), where HP_BS = α * N * HP_Sz
⇒ The required buffer size is a function of α, β, N, HP_Sz, and VMSS.
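The buffer-size formula is easy to evaluate directly. A sketch using the parameters of experiment (c) below (α = 8, β = 0.5, N = 8, VMSS = 1500, HP_Sz = 52); note the slides report required_BS = 222,348 bytes, while the formula as written yields a slightly different value, so the exact constant should be treated cautiously:

```python
def required_bs(alpha: float, beta: float, n: int,
                hp_sz: int, vmss: int) -> float:
    """required_BS = (HP_BS + HP_BS*(VMSS/HP_Sz) + N*VMSS) / (1 - beta)."""
    hp_bs = alpha * n * hp_sz                      # management-packet buffer
    up_bs_tcp = hp_bs * (vmss / hp_sz) + n * vmss  # TCP-controlled user packets
    return (hp_bs + up_bs_tcp) / (1 - beta)        # 1/(1-beta) adds the GMB share

bs = required_bs(alpha=8, beta=0.5, n=8, hp_sz=52, vmss=1500)
```

With these inputs HP_BS = 3328 bytes and UP_BS_TCP = 108,000 bytes, and the 1/(1 - β) factor doubles their sum, in the same ballpark as the measured 210,360-byte peak occupancy.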

9 Trunk Sender Buffer Considerations
A trunk buffers data whenever it arrives at a higher rate than the trunk can send it, and must drop data when the buffer is full.
Problem: the bandwidth of the trunk is not fixed.
For the following discussion, assume that all user flows are TCP flows:
–Assume the trunk reduces its rate due to a management packet drop
–The user flows will not change their rates, since they have not yet experienced a loss
–The queue fills up, and eventually a user packet is dropped, causing that user's TCP congestion control to reduce its rate

10 Trunk Sender Buffer Considerations Cont’d
To achieve fairness, each user flow must reduce its rate by the same factor. This is achieved by:
–Buffer size = RTTup * TrunkBW, where RTTup is an upper bound on the user flows' RTT and TrunkBW is the peak bandwidth of the trunk. This gives the user flows enough time to slow down.
–A RED-like dropping scheme: when a threshold is reached, the dropping probability for a flow is proportional to its current buffer occupancy.
–Having dropped a packet from a flow, the trunk tries not to drop another packet from the same flow until the flow has recovered its reduced sending rate.
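The drop policy above can be sketched as follows. This is a simplified assumption of the scheme, not the authors' code: the "recovered" condition is reduced to a one-shot exemption set, whereas a real implementation would clear the exemption once the flow has regained its rate.

```python
import random

class TrunkSenderBuffer:
    """RED-like per-flow dropping at the trunk sender: above a threshold,
    a flow is dropped with probability proportional to its buffer share,
    and a just-penalized flow is spared from further drops."""

    def __init__(self, threshold_bytes: int, seed: int = 0):
        self.threshold = threshold_bytes
        self.occupancy = {}            # flow id -> bytes currently buffered
        self.recently_dropped = set()  # flows exempt until they recover
        self.rng = random.Random(seed)

    def should_drop(self, flow: str, pkt_size: int) -> bool:
        total = sum(self.occupancy.values())
        if total + pkt_size <= self.threshold:
            return False               # below threshold: never drop
        if flow in self.recently_dropped:
            return False               # spare a flow that was just penalized
        share = self.occupancy.get(flow, 0) / max(total, 1)
        if self.rng.random() < share:  # drop probability = buffer share
            self.recently_dropped.add(flow)
            return True
        return False

buf = TrunkSenderBuffer(threshold_bytes=3000)
buf.occupancy = {"flow-a": 2900, "flow-b": 100}  # flow-a dominates the buffer
```

With these numbers, flow-a holds almost the entire buffer, so its packets face a near-certain drop once the threshold is crossed, after which it is spared, which is exactly the "one loss per flow per congestion event" behavior the slide describes.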

11 Experiments TT1 - Basic Capabilities
Experimental setup:
–2 trunks, each with 4 management TCPs; trunk sender buffer of 100 packets
–Buffer size in the bottleneck router set according to their equation
–Link capacity: 10 Mbps; link delay: 10 ms
–Greedy UDP user flows
Experiment (a): trunks make full utilization of the bandwidth and can share it in proportion to their guaranteed rates
–Trunk 1: GMB = 400 KB/sec, VMSS = 3000 bytes; Trunk 2: GMB = 200 KB/sec, VMSS = 1500 bytes
–Note: the desired proportional sharing is achieved by a proper setting of VMSS
–Figure 4 shows that the TCP trunk achieves this goal

12 Experiments TT1 - Basic Capabilities Cont’d
Experiment (b): trunks fully utilize the bandwidth and can share it independently of their GMBs
–Trunk 1: GMB = 200 KB/sec, VMSS = 3000 bytes; Trunk 2: GMB = 400 KB/sec, VMSS = 1500 bytes
–Note: the GMBs of the trunks are exchanged compared to experiment (a)
–Figure 5 shows that the goal is achieved

13 Experiments TT1 - Basic Capabilities Cont’d
Experiment (c): trunks guarantee lossless transmission if the router is configured properly
–Compare the calculated required_BS with the actual buffer occupancy at the bottleneck link
–Trunk 1: GMB = 200 KB/sec, VMSS = 15000 bytes; Trunk 2: GMB = 400 KB/sec, VMSS = 1500 bytes
–α = 8, β = 0.5, N = 8, VMSS = 1500, and HP_Sz = 52 ⇒ required_BS = 222,348 bytes
–The maximum measured buffer occupancy is 210,360 bytes ⇒ the calculated required_BS is tight
–Question: if we have trunks with different VMSS values, which one should we use in the formula?

14 Experiment TT2 - Protecting Interactive Web Users
10 short-lived web-server flows from one site share the bottleneck link (1200 KB/s) with 10 greedy FTP flows from two sites.
Their notion of fair share is that every site should get an equal amount of bandwidth, i.e. 600 KB/s.
Without trunks, the web flows get 122 KB/s and the mean delay is 1,170 ms, due to their short-lived nature.
With trunks, i.e. the 10 web flows assigned to one trunk, 5 FTP flows to the second trunk, and 5 FTP flows to the third trunk, the web flows get 238 KB/s and the mean delay is 613 ms.
⇒ Roughly 100% improvement by using trunks.

15 Experiment TT3 - TCP Flows Against UDP Flows
Trunks can help protect TCP flows against UDP flows.
Ring topology with five routers, which are configured to be the bottleneck routers.
Throughput and delay of the TCP flow are measured.
Four experiments:
Experiment                                                Throughput  Delay
(a) One small TCP flow                                       380       451
(b) One small TCP flow plus a competing on-off UDP flow      532       541
(c) TCP flow with trunk and UDP flow with trunk              270       508
(d) Same as (c) plus two long-lived greedy TCP flows         252       664

