TCP Trunking: Design, Implementation and Performance, by H.T. Kung and S. Y. Wang

Overview
- A TCP trunk is an aggregate traffic stream:
  - IP flows with the same routing and QoS treatment
  - any number of TCP and/or UDP flows, the so-called user flows
- It runs on top of a layer-2 circuit or an MPLS label-switched path
- Trunks need only be implemented in the end hosts (trunk sender and receiver)
- The trunk applies TCP congestion control by means of a management TCP connection
  ⇒ data transmission and congestion control are decoupled
  ⇒ user flows transmit at a rate determined by the management connection

Properties of TCP trunks
- Guaranteed and elastic bandwidth
  - besides a guaranteed minimum bandwidth (GMB), a trunk can grab additional bandwidth when it is available
- Immediate and in-sequence forwarding
- Lossless delivery
  - possible because data and control are decoupled
  - needs diff-serv-like modifications in the routers
  - however, user packets may still be dropped at the trunk sender
- Aggregation and isolation
  - less state information in routers
  - UDP flows become TCP-friendly
  - fair sharing of bandwidth among different sites, if every site uses one trunk

Implementation - Management TCP
- Every TCP trunk is associated with one or more management TCP connections
- A management TCP connection is a standard TCP connection, i.e. it uses the standard TCP congestion control algorithm
- Purpose: it controls the rate at which user packets are sent
- Management packets consist only of a TCP/IP header, with no payload
- Every management packet is followed by at most VMSS (virtual maximum segment size) bytes of user data
  ⇒ therefore the rate of user packets is controlled by the management TCP's congestion control algorithm: if the management TCPs transmit R packets per second, the trunk injects at most R * VMSS bytes per second of user data

Implementation - the User Flows
- A TCP trunk has a tunnel queue
- User packets of all flows are put into the tunnel queue
- Each time a management packet is sent, user packets worth at most VMSS bytes are transmitted; they carry the layer-2 circuit's header rather than an additional TCP/IP header
- The trunk does not retransmit user data; retransmission is left to the higher layers (e.g. the user flows' own TCP), as sketched below
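
The following is a minimal sketch of the sender-side behaviour described on this slide, written for this summary; the class and method names (TrunkSender, on_management_packet_sent, and so on) are illustrative and do not come from the authors' implementation.

    from collections import deque

    class TrunkSender:
        """Sketch: user packets wait in a tunnel queue and are released only
        when the management TCP sends a (header-only) management packet."""

        def __init__(self, vmss, link_send):
            self.vmss = vmss              # virtual maximum segment size (bytes)
            self.tunnel_queue = deque()   # user packets of all flows
            self.link_send = link_send    # callback: transmit on the layer-2 circuit

        def enqueue_user_packet(self, packet):
            self.tunnel_queue.append(packet)   # packet is a bytes object

        def on_management_packet_sent(self):
            # Each management packet permits at most VMSS bytes of user data.
            budget = self.vmss
            while self.tunnel_queue and len(self.tunnel_queue[0]) <= budget:
                packet = self.tunnel_queue.popleft()
                budget -= len(packet)
                # Forwarded with the layer-2 circuit's header only; the trunk
                # itself never retransmits user data.
                self.link_send(packet)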

TCP Trunking and Guaranteed Minimum Bandwidth
- Assume admission control and reservation guarantee a minimum bandwidth of X bytes per millisecond
- When a timer expires, the trunk sends packets under the control of a leaky-bucket filter
  - objective: over any interval of Y milliseconds, send X * Y bytes, provided there are enough packets in the tunnel queue
- Packets still left in the tunnel queue are sent under the control of the management TCP (see the sketch below)
  ⇒ the trunk guarantees a minimum bandwidth but also grabs additional available network bandwidth
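
A sketch of the GMB path under the same assumptions as the sender sketch above; gmb_timer_tick and its parameter names are hypothetical, and the timer/scheduling machinery is omitted.

    def gmb_timer_tick(trunk, x_bytes_per_ms, y_interval_ms):
        """On each timer expiry, send up to X * Y bytes from the tunnel queue
        (leaky-bucket behaviour); whatever remains is later clocked out by
        the management TCP, so the trunk can exceed its GMB when possible."""
        budget = x_bytes_per_ms * y_interval_ms      # X * Y bytes for this interval
        while trunk.tunnel_queue and len(trunk.tunnel_queue[0]) <= budget:
            packet = trunk.tunnel_queue.popleft()
            budget -= len(packet)
            trunk.link_send(packet)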

Router Buffer Considerations
- A single FIFO queue is shared by user and management packets
- To prevent loss of user packets:
  - when the queue builds up, drop management packets early enough
  - provide sufficient space to absorb control delay and newly arriving trunks
- Dropping policy (a sketch follows below):
  - management packets are dropped when the queue exceeds the threshold HP_Th
  - HP_Th = α * N, where N is the expected number of management TCP flows and α is the congestion window size (in packets) allowed per management TCP, chosen large enough to avoid frequent timeouts
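
A sketch of the router-side dropping policy stated above, assuming the threshold is compared against the current queue length; is_management is a hypothetical packet attribute, not part of the paper's router code.

    def router_enqueue(fifo, packet, hp_th, buffer_capacity):
        """Single FIFO shared by user and management packets. Management
        packets are dropped early (queue beyond HP_Th) so that the management
        TCPs back off before user packets would ever have to be dropped."""
        if packet.is_management and len(fifo) > hp_th:
            return False                 # early drop of a management packet
        if len(fifo) >= buffer_capacity:
            return False                 # should not occur if sized per required_BS
        fifo.append(packet)
        return True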

Router Buffer Considerations Cont'd
- The buffer must hold three types of packets:
  - management packets: HP_BS = HP_Th * HP_Sz, where HP_Sz is the size of a management packet
  - user packets of the elastic (TCP-controlled) traffic: UP_BS_TCP = HP_BS * (VMSS / HP_Sz) + N * VMSS
  - user packets of the guaranteed-bandwidth flows: let the GMB flows occupy the fraction β of the link bandwidth, so β = UP_BS_GMB / (HP_BS + UP_BS_TCP + UP_BS_GMB), which gives UP_BS_GMB = (HP_BS + UP_BS_TCP) * β / (1 - β)
- Adding these up yields the required buffer size:
  required_BS = (HP_BS + HP_BS * (VMSS / HP_Sz) + N * VMSS) * 1 / (1 - β), where HP_BS = α * N * HP_Sz
  ⇒ the required buffer size is a function of α, β, N, HP_Sz, and VMSS
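
The buffer-sizing rule above can be turned into a small calculation; the helper below is simply a transcription of the slide's formula with the parameter names spelled out.

    def required_buffer_size(alpha, beta, n, hp_sz, vmss):
        """required_BS = (HP_BS + HP_BS*(VMSS/HP_Sz) + N*VMSS) / (1 - beta),
        with HP_BS = alpha * N * HP_Sz."""
        hp_bs = alpha * n * hp_sz                       # buffer for management packets
        up_bs_tcp = hp_bs * (vmss / hp_sz) + n * vmss   # buffer for elastic user packets
        return (hp_bs + up_bs_tcp) / (1.0 - beta)       # 1/(1 - beta) adds the GMB share

Plugging in the Experiment c parameters below (α = 8, β = 0.5, N = 8, HP_Sz = 52, VMSS = 1500) gives roughly 222 KB, close to the 222,348 bytes quoted in that experiment.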

Trunk Sender Buffer Considerations
- A trunk buffers user data whenever it arrives faster than the trunk can send it, and must drop data when the buffer is full
- Problem: the bandwidth of the trunk is not fixed
- For the following discussion, assume that all user flows are TCP flows:
  - suppose the trunk reduces its rate because a management packet is dropped
  - the user flows do not change their rates, since they have not yet experienced any loss
  - the tunnel queue fills up and eventually user packets are dropped, causing the affected flows' TCP congestion control to reduce their rates

Trunk Sender Buffer Considerations Cont'd
- To achieve fairness, each user flow must reduce its rate by roughly the same factor. This is achieved by (see the sketch below):
  - buffer size = RTTup * TrunkBW, where RTTup is an upper bound on the user flows' RTT and TrunkBW is the peak bandwidth of the trunk; this gives the user flows enough time to slow down
  - a RED-like dropping scheme: once a threshold is reached, the dropping probability for a flow is proportional to that flow's current buffer occupancy
  - having dropped a packet from a flow, the trunk tries not to drop another packet from the same flow until the flow has recovered from its reduced sending rate
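
A sketch of the trunk sender's dropping scheme as described above; FlowState, recently_dropped, and the way the threshold is handled are illustrative choices, not details taken from the paper.

    import random

    class FlowState:
        def __init__(self):
            self.bytes_in_buffer = 0       # this flow's current buffer occupancy
            self.recently_dropped = False  # cleared once the flow has slowed down

    def should_drop(flow, total_buffered, threshold):
        """Once total occupancy passes the threshold, drop from a flow with
        probability proportional to its share of the buffer, but spare flows
        that already had a packet dropped recently."""
        if total_buffered <= threshold or flow.recently_dropped:
            return False
        drop_prob = flow.bytes_in_buffer / float(total_buffered)
        if random.random() < drop_prob:
            flow.recently_dropped = True
            return True
        return False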

Experiments TT1 - Basic Capabilities
- Experimental setup:
  - 2 trunks, each with 4 management TCPs and a trunk sender buffer of 100 packets
  - buffer size in the bottleneck router set according to the equation above
  - link capacity: 10 Mbps, link delay: 10 ms
  - greedy UDP user flows
- Experiment a: trunks fully utilize the bandwidth and share it in proportion to their guaranteed rates
  - Trunk 1: GMB = 400 KB/sec, VMSS = 3000 bytes; Trunk 2: GMB = 200 KB/sec, VMSS = 1500 bytes
  - note: the desired proportional sharing is achieved by a proper setting of VMSS
  - Figure 4 shows that TCP trunking achieves this goal
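
As a rough check of why VMSS controls the sharing (this reasoning is inferred from the mechanism described earlier, not taken from the paper's text): both trunks run 4 management TCPs over the same bottleneck, so their management connections obtain roughly equal packet rates; since each management packet clocks out up to VMSS bytes of user data, the trunks' throughputs should stand in the ratio of their VMSS values, 3000 : 1500 = 2 : 1, matching the 400 : 200 KB/sec GMB ratio.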

Experiments TT1 - Basic Capabilities Cont'd
- Experiment b: trunks fully utilize the bandwidth and can share it independently of their GMBs
  - Trunk 1: GMB = 200 KB/sec, VMSS = 3000 bytes; Trunk 2: GMB = 400 KB/sec, VMSS = 1500 bytes
  - note: the GMBs of the two trunks are swapped compared with experiment a
  - Figure 5 shows that the goal is achieved

Experiments TT1 - Basic Capabilities Cont'd
- Experiment c: trunks guarantee lossless transmission if the router is configured properly
  - compare the calculated required_BS with the actual buffer occupancy at the bottleneck link
  - Trunk 1: GMB = 200 KB/sec, VMSS = … bytes; Trunk 2: GMB = 400 KB/sec, VMSS = 1500 bytes
  - α = 8, β = 0.5, N = 8, VMSS = 1500, and HP_Sz = 52 ⇒ required_BS = 222,348 bytes
  - the maximum measured buffer occupancy is 210,360 bytes ⇒ the calculated required_BS is a tight bound
  - question: if the trunks have different VMSS values, which one should be used in the formula?

Experiment TT2 - Protecting Interactive Web Users
- 10 short-lived web-server flows from one site share the bottleneck link (1200 KB/s) with 10 greedy ftp flows from two sites
- Their notion of fair sharing is that every site should get an equal amount of bandwidth, i.e. 600 KB/s
- Without trunks, the web flows get 122 KB/s and the mean delay is 1,170 ms, due to their short-lived nature
- With trunks (the 10 web flows assigned to one trunk, 5 ftp flows to a second trunk, and 5 ftp flows to a third trunk), the web flows get 238 KB/s and the mean delay is 613 ms
  ⇒ roughly a 100% improvement from using trunks

Experiment TT3 - TCP Flows against UDP Flows
- Trunks can help protect TCP flows against UDP flows
- Ring topology with five routers, which are configured to be the bottleneck routers
- Throughput and delay of the TCP flow are measured in four experiments:
  - (a) one small TCP flow
  - (b) one small TCP flow plus a competing on-off UDP flow
  - (c) the TCP flow carried in a trunk and the UDP flow carried in a trunk
  - (d) same as (c), plus two long-lived greedy TCP flows