
1 Computer Networks
Lecture 18: TCP Cubic, TCP in 4G LTE
11/5/2013
Lecturer: Namratha Vedire

2 Admin
Assignment 4:
Check Point 1: Nov 15, 11:55 pm (To Do: discuss design with instructor or a TF by Nov 11)
Code and Report: Nov 19, 11:55 pm (To Do: discuss design with instructor or a TF by Nov 14)

3 Demo

4 Recap

5 Recap : RTT & Timeout
EstimatedRTT = (1-α)*EstimatedRTT + α*SampleRTT (α = 0.125)
DevRTT = (1-β)*DevRTT + β*|SampleRTT – EstimatedRTT| (β = 0.25)
Timeout = EstimatedRTT + 4*DevRTT
[Figure: a segment (SEG) and its ACK define one RTT sample.]
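The update rules above can be transcribed directly; a minimal Python sketch of the slide's estimator (the function name and argument order are illustrative):

```python
# Direct transcription of the slide's RTT/timeout estimator (RFC 6298 style).
def update_rtt(estimated_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
    estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout
```

For example, with EstimatedRTT = 100 ms, DevRTT = 10 ms, and a 120 ms sample, the smoothed RTT moves only an eighth of the way toward the sample while the timeout keeps a 4*DevRTT safety margin.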

6 Recap : Congestion Control
Congestion is too many sources sending too much data too fast.
Manifestations:
1. Lost packets
2. High delay
3. Wasted bandwidth
[Figures: throughput vs. load, showing the knee and the cliff (congestion collapse); delay vs. load; packet loss vs. load.]

7 Recap : Congestion Control
Efficiency – close to full utilization but low delay; fast convergence after a disturbance.
Fairness – resource sharing.
Distributed – no central knowledge necessary; scalability.

8 Recap : Simple Model
Users 1 … n send at rates x1 … xn. Flows observe a shared congestion signal d (asserted when Σ xi > Xgoal) and locally take actions to adjust their rates.

9 Recap : A(M)I - MD Protocol
Apply the A(M)I – MD algorithm to a sliding window protocol

10 Recap : TCP/ Reno
Two cases of loss detection:
- 3 duplicate ACKs (network still capable of delivering some packets)
- Timeout (more alarming)
Two phases:
1. Slow start (SS) – MI: 2*cwnd per RTT until congestion
2. Congestion avoidance (CA) – AIMD: cwnd increases by 1 per RTT
On 3 duplicate ACKs → cwnd = cwnd/2
On timeout → cwnd = 1, and timeout = 2*timeout
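The rules above fit in a few lines; a minimal sketch at per-RTT granularity (real TCP adjusts per ACK and counts bytes; the event names here are illustrative):

```python
# Sketch of the slide's TCP Reno rules, one call per RTT or loss event.
def reno_update(cwnd, ssthresh, event):
    if event == "ack":                 # one RTT of successful ACKs
        if cwnd < ssthresh:
            cwnd *= 2                  # slow start: MI, 2*cwnd per RTT
        else:
            cwnd += 1                  # congestion avoidance: +1 per RTT
    elif event == "3dup":              # three duplicate ACKs
        ssthresh = cwnd / 2
        cwnd = cwnd / 2                # multiplicative decrease
    elif event == "timeout":           # more alarming: restart from scratch
        ssthresh = cwnd / 2
        cwnd = 1
    return cwnd, ssthresh
```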

11 Recap : TCP/ Reno
[Figure: cwnd vs. time, alternating SS and CA phases around ssthresh, with TD and TO events.]
Legend: SS – Slow Start; CA – Congestion Avoidance; TD – Three Duplicate ACKs; TO – Timeout.

12 Recap : TCP/ Reno
When cwnd is cut in half, why is the sending rate not also cut in half?

13 Recap : TCP/ Reno
There is a filling and draining of buffers for each TCP flow.
[Figure: cwnd vs. time during CA; above the bottleneck bandwidth the flow fills the buffer, below it the buffer drains; TD events at the peaks, ssthresh at the troughs.]

14 TCP/ Reno Analysis

15 TCP/ Reno Throughput Analysis
Understand throughput in terms of: round-trip time (RTT), packet loss rate (p), and packet size (S).
Throughput calculation: assume congestion avoidance only and that no timeouts occur. With mean window size Wm segments, round-trip time RTT, and packet size S:
Throughput ≈ (Wm * S) / RTT bytes/sec
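The approximation is one line of arithmetic; a sketch with explicit units (function name is illustrative):

```python
# Throughput ≈ Wm * S / RTT, with Wm in segments, S in bytes, RTT in seconds.
def reno_throughput(w_m, seg_size, rtt):
    return w_m * seg_size / rtt        # bytes/sec
```

For instance, a mean window of 10 segments of 1500 bytes over a 100 ms RTT gives roughly 150 KB/s.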

16 Deterministic Analysis
Consider congestion avoidance only and assume one packet is lost per cycle, so cwnd oscillates between W/2 and W.
Total packets sent per cycle = ½*(W/2 + W)*(W/2) = 3W²/8
Packet loss rate p = 1/(3W²/8) = 8/(3W²), so W = √(8/(3p))
Throughput = (3W²/8)*S / ((W/2)*RTT) = (3/4)*W*S/RTT = (S/RTT)*√(3/(2p))
[Figure: cwnd sawtooth between W/2 (ssthresh) and W during CA, a TD event at each peak, available bandwidth above.]
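A numerical companion to this deterministic model, under the same assumptions (one loss per cycle, cwnd oscillating between W/2 and W over W/2 RTTs):

```python
import math

# Packets sent in one sawtooth cycle: the average window (3W/4) times W/2 RTTs.
def packets_per_cycle(W):
    return 3 * W * W / 8                     # = ½*(W/2 + W)*(W/2)

# Inverting p = 8/(3W²) gives the classic square-root throughput formula.
def throughput_from_loss(p, seg_size, rtt):
    return (seg_size / rtt) * math.sqrt(3 / (2 * p))   # bytes/sec
```

With W = 4 a cycle carries 6 packets, so p = 1/6, and 1500-byte segments over a 100 ms RTT give about 45 KB/s.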

17 TCP/ Reno Drawbacks
Multiple packets lost simultaneously cannot be accounted for: cwnd might be reduced twice for packets lost in the same window.
[Figure: with cwnd = 6, segments 1–7 are in flight; 3 duplicate ACKs trigger a retransmission of segment 1 and cwnd = 3; the later retransmission of segment 2 cuts cwnd again, to 1.]

18 TCP/ Reno Drawbacks
RTT unfairness: flows with different RTTs grow their congestion windows differently; users with shorter RTTs ramp up faster! On long-distance links, RTT is high and cwnd takes longer to increase, leading to under-utilization of the link.
Synchronized losses: simultaneous packet-loss events across multiple competing flows.
A new protocol is necessary!

19 Desired Characteristics in TCP
Adaptive schemes that grow the congestion window depending on network conditions
Scalability
RTT fairness
Faster convergence, to better utilize the full bandwidth

20 TCP BIC
Reference: the BIC paper (INFOCOM 2004).

21 Growth functions
Consider the TCP/Reno growth function: cwnd grows linearly throughout congestion avoidance.
[Figure: cwnd vs. time; linear growth in CA up to Wm, a TD event, then a drop to ssthresh.]

22 TCP BIC
Binary Increase Congestion Control (BIC) algorithm.
PHASE 1 (cwnd < low_wind): follow TCP. ACK received: cwnd = cwnd + 1. Loss event: cwnd = cwnd/2.
PHASE 2 (cwnd > low_wind): follow BIC.

23 BIC Algorithm
Some preliminaries:
β – multiplicative decrease factor
Wmax – cwnd size just before the reduction
Wmin = β*Wmax – cwnd just after the reduction
midpoint = (Wmax + Wmin)/2
BIC performs a binary search between Wmin and Wmax, repeatedly targeting the midpoint.

24 BIC Algorithm
Four regimes: Additive Increase, Binary Search, Slow Start, Max Probing.
[Figure: window evolution after a packet-loss event at Wmax. Starting from Wmin = β*Wmax: while midpoint − cwnd > Smax, grow additively in steps of Smax; then binary-search toward midpoint = (Wmin + Wmax)/2; when Wmax − cwnd < Smin, jump to Wmax. Past Wmax: slow start with small steps (Wmax + Smin, …), then max probing in steps of Smax (Wmax + Smax, + 2Smax, + 3Smax, …) until the next packet-loss event.]

25 BIC Algorithm
while (cwnd != Wmax) {
    midpoint = (Wmax + Wmin) / 2
    if (midpoint - cwnd > Smax)        // Additive Increase: cap the step
        cwnd = cwnd + Smax
    else if (Wmax - cwnd < Smin)       // close enough to the target
        cwnd = Wmax
    else                               // Binary Search
        cwnd = midpoint
    if (no packet loss)
        Wmin = cwnd
    else {                             // loss: reset the search range
        Wmax = cwnd
        Wmin = β * cwnd
    }
}
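A runnable sketch of this per-RTT update between losses, mirroring the slide's pseudocode. The Smin/Smax defaults and β = 0.8 are assumptions (roughly the Linux tcp_bic defaults), not values given on the slide:

```python
# One BIC window step: additive increase far from the target, binary search
# near it, and a direct jump to Wmax once within Smin.
def bic_update(cwnd, w_min, w_max, s_min=0.25, s_max=32):
    midpoint = (w_min + w_max) / 2
    if midpoint - cwnd > s_max:       # additive increase: cap the step at Smax
        return cwnd + s_max
    if w_max - cwnd < s_min:          # close enough: take Wmax directly
        return w_max
    return midpoint                   # binary search: jump to the midpoint

def bic_on_loss(cwnd, beta=0.8):
    # After a loss the search range resets: Wmax = cwnd, Wmin = beta * cwnd.
    return beta * cwnd, cwnd          # (new Wmin, new Wmax)
```

On each no-loss RTT the caller raises Wmin to the new cwnd, so the search range halves every round; this is what makes BIC converge to Wmax in logarithmic time.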

26 BIC Algorithm
while (cwnd >= Wmax) {
    if (cwnd < Wmax + Smax)            // Slow Start just past Wmax:
        cwnd = cwnd + Smin             //   small steps, growing toward Smax
    else                               // Max Probing: Additive Increase
        cwnd = cwnd + Smax
    if (packet loss) {
        Wmax = cwnd
        Wmin = β * cwnd
    }
}
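The growth past Wmax can be sketched as a slow-start-like ramp followed by linear max probing. The doubling ramp below is an approximation of BIC's slow start (the step grows with the distance past Wmax), not an exact copy of the real algorithm:

```python
# One probing step past Wmax: small, growing steps until Wmax + Smax, then
# additive increase at Smax. Defaults are illustrative.
def bic_max_probe(cwnd, w_max, s_min=1, s_max=32):
    if cwnd < w_max + s_max:
        step = max(s_min, cwnd - w_max)   # ramp: step tracks distance past Wmax
        return cwnd + step
    return cwnd + s_max                   # max probing: linear at Smax
```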

27 TCP BIC - Summary
[Figure: cwnd vs. time. After a packet-loss event at Wmax: additive increase (+Smax), a jump to the midpoint, binary increase (+Smin near Wmax), then past Wmax: slow start and max probing (+Smax) until the next loss event.]

28 TCP BIC in Action

29 TCP BIC Advantages
Scalability: quickly scales to a fair bandwidth share.
Fairness and convergence: achieves better fairness and faster convergence.
Slow growth around Wmax ensures that unnecessary timeouts do not occur.

30 TCP BIC Drawbacks
cwnd growth is aggressive for TCP flows with short RTTs or on low-speed links: a short RTT makes cwnd ramp up too quickly.
Still dependent on RTT: proportional to the inverse square of the RTT, like TCP/Reno.
Complex window-growth function: difficult to analyze and to implement.

31 TCP Cubic

32 TCP Cubic
cwnd = C(t – K)³ + Wmax, where:
Wmax – cwnd before the last reduction
β – multiplicative decrease factor
C – scaling factor
t – time elapsed since the last window reduction
K = ∛(Wmax*β/C) – the time for the window to grow back to Wmax
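The window function is simple enough to evaluate directly. In this sketch, K is derived by requiring that the window just after a reduction equals Wmax*(1 − β); the β and C defaults are common CUBIC parameter choices, assumed here rather than taken from the slide:

```python
# CUBIC window as a function of time since the last reduction.
def cubic_window(t, w_max, beta=0.2, C=0.4):
    K = (w_max * beta / C) ** (1 / 3)   # time to climb back to w_max
    return C * (t - K) ** 3 + w_max
```

At t = 0 the window sits at Wmax*(1 − β); it flattens out as t approaches K (the plateau around Wmax on the next slide) and grows again past K (max probing).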

33 TCP CUBIC
[Figure: cwnd vs. time between packet-loss events at Wmax. Fast growth right after the reduction; around Wmax, window growth almost becomes zero (steady-state behavior); past Wmax, CUBIC starts probing for more bandwidth (max probing).]

34 TCP Cubic Advantages
Good RTT fairness: growth is dominated by t, and competing flows have the same t after a synchronized packet loss.
Real-time dependent: similar to BIC, but increases depend on elapsed time rather than on ACK arrivals as in TCP/Reno.
Scalability: CUBIC increases the window to Wmax (or its vicinity) quickly and keeps it there longer.

35 TCP Cubic Drawbacks
Slow convergence: flows with higher cwnd are more aggressive initially, prolonging unfairness between flows.
High bandwidth-delay products: linear-increase artefacts.

36 TCP in 4G LTE

37 4G LTE
Bandwidths match (and often exceed) home broadband speeds.
Higher energy efficiency
New resource-management policy
Higher throughputs
Lower latency

38 4G LTE - Architecture
UE – User Equipment
RAN – Radio Access Network
CN – Core Network
SGW – Serving Gateway
PGW – Packet Data Network Gateway

39 4G LTE - Latency
The end-to-end latency of a packet that requires waking the UE's radio interface is long because of RRC promotion delay.
The promotion delay is not included in either the measured uplink or downlink delay, since it has already completed by the time the packet reaches the server.
Estimating the promotion delay:
TSa – TCP timestamp of the SYN
TSb – TCP timestamp of the ACK
G – inverse of the timestamp clock frequency
Promotion delay = G*(TSb – TSa)
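The estimate is a single scaling of the timestamp gap; a sketch with illustrative names (the 1 kHz clock in the example is an assumption):

```python
# Promotion delay from TCP timestamps: Delay = G * (TSb - TSa),
# where G is the duration of one timestamp tick (inverse clock frequency).
def promotion_delay(ts_syn, ts_ack, clock_hz):
    G = 1.0 / clock_hz               # seconds per timestamp tick
    return G * (ts_ack - ts_syn)
```

With a 1 kHz timestamp clock, a 600-tick gap corresponds to the 600 ms 4G promotion delay quoted on the next slide.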

40 4G LTE - Latency
3G networks: 2 s from idle to high-power state; 1.5 s from low- to high-power state.
4G networks: 600 ms promotion delays.

41 4G LTE - Queuing Delays
More than 200 KB of in-flight bytes leads to longer queuing delays.
During the data-transfer phase, a TCP sender keeps increasing its congestion window, allowing the number of unacknowledged ("in-flight") packets to grow; these packets are buffered by routers on the network path.
Buffers are provisioned generously to accommodate cellular network conditions and to conceal packet loss.

42 4G LTE – Undesired Slow Start

43 4G LTE – Undesired Slow Start
The vertical gap between the data and ACK curves indicates the bytes in flight, and the bytes in flight clearly grow as the download proceeds.

44 4G LTE – Undesired Slow Start
A packet loss is observed at time 1 second.

45 4G LTE – Undesired Slow Start
Fast retransmission: when the sender detects a packet loss (via duplicate ACKs), it immediately resends the lost segment, possibly preventing a retransmission timeout.

46 4G LTE – Undesired Slow Start
TCP uses its RTT estimate to update the retransmission timeout (RTO), but it does not update the RTO based on duplicate ACKs.
In this example, when the lost segment is retransmitted, RTT is 262 ms and RTO is 290 ms.
Note that the duplicate ACKs are generated by the reception of data packets sent after the lost segment.

47 4G LTE – Undesired Slow Start
Retransmission timeout causes slow start.
RTT keeps growing: by the time the ACK of the retransmitted segment returns, RTT has increased to 356 ms while RTO is still 290 ms. Since RTT > RTO, an unexpected timeout fires; it is unexpected because the purpose of fast retransmission was to prevent exactly this timeout.
After the timeout, cwnd drops to 1 segment, triggering slow start and hurting TCP performance.

48 4G LTE – Undesired Slow Start
If a large number of packets are in flight and one is lost, the many duplicate ACKs trigger fast retransmission, which should avoid a timeout.
But large in-network queues hold many packets and delay the retransmitted packet; if its ACK does not arrive within the RTO, a timeout fires and cwnd = 1: an undesired slow start.
SOLUTION: update the estimated RTT using duplicate ACKs.
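The scenario and the proposed fix can be checked numerically. The RTT values are the ones from the example; the initial 7 ms deviation and the intermediate samples are assumptions for illustration:

```python
# With RTO frozen during duplicate ACKs, the growing RTT overtakes it and a
# spurious timeout fires; feeding the same samples through the standard
# estimator keeps RTO safely above the RTT.
samples = [262, 290, 320, 356]                # ms: RTT keeps growing
frozen_rto = 290                              # RTO not updated from dup ACKs
spurious_timeout = samples[-1] > frozen_rto   # 356 > 290: undesired slow start

srtt, rttvar = 262.0, 7.0                     # initial deviation is assumed
for s in samples:                             # the proposed fix: keep updating
    srtt = 0.875 * srtt + 0.125 * s
    rttvar = 0.75 * rttvar + 0.25 * abs(s - srtt)
updated_rto = srtt + 4 * rttvar               # standard RTO = SRTT + 4*RTTVAR
```

Running this, the frozen RTO is overtaken by the 356 ms RTT, while the continuously updated RTO ends up above it, avoiding the spurious timeout.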

49 4G LTE – TCP Receive Window
In 4G LTE networks, the receive window has become the bottleneck: the initial receive window is not large (mostly KB), and the application may not read data from the receive buffer fast enough.
The TCP rate is jointly controlled by the congestion window and the receive window; a full receive window prevents the server from sending more data, leading to bandwidth under-utilization.
SOLUTIONS:
Move data from transport-layer buffers to application-layer buffers to empty the receive window.
Increase the receive window at the network level (deployment is challenging).
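The joint control described above is just a minimum; a one-function sketch (names are illustrative):

```python
# The sender is limited by min(cwnd, rwnd); a full receive window stalls it
# no matter how large cwnd has grown.
def sendable_bytes(cwnd, rwnd, in_flight):
    effective_window = min(cwnd, rwnd)
    return max(0, effective_window - in_flight)
```

With cwnd = 100000 and rwnd = 65535, a receiver that has not drained its buffer (65535 bytes in flight) allows zero new bytes, which is exactly the under-utilization the slide describes.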

50 Backup

51 Netflix App Case Study
