AQM & TCP models
Courtesy of Sally Floyd (ICIR) and Raj Jain (OSU)
Agenda
Queue management
–Passive
–Active
AQM: RED
RED variants
ECN
TCP models
Passive queue management (PQM): the majority in routers
–No preventive packet drop
–When the buffer level exceeds a threshold, drop arriving packets
–Two dropping schemes: tail-drop and drop-from-front
Drop-tail vs. drop-from-front: which is better?
Problems with PQM
A trade-off between buffer size and QoS: a larger buffer yields higher throughput but longer delay
Lock-out: a single connection monopolises the buffer space
–Gives rise to a fairness problem
Full queue: the queue stays full for long periods of time
–Long queuing delay
Global synchronization
Global Synchronization
When the queue overflows, several connections decrease their congestion windows simultaneously
Bias Against Bursty Traffic
Bursty traffic is more likely to be dropped
[Figure: instantaneous queue length vs. average queue length]
Objective: Congestion Avoidance
Maintain low delay and high throughput
–Average queue size kept low
–Actual queue size grows enough to handle bursty traffic and transient congestion
Active Queue Management (AQM)
Provides preventive measures to manage a buffer and eliminate the problems associated with PQM
Characteristics:
–Preventive random packet drop is performed before the buffer is full
–The probability of a preventive drop increases with the level of congestion
Goals:
–Reduce dropped packets
–Support low-delay interactive services
–Avoid lock-out
Random Early Detection (RED)
A router maintains two thresholds:
Min_th:
–Accept all packets until the (average) queue reaches Min_th
–Drop packets with a linearly increasing probability once the queue exceeds Min_th
Max_th:
–All packets are dropped with probability 1 when the queue exceeds this threshold
RED Algorithm
The drop probability is 0 below min_th, rises linearly from 0 to Max_drop between min_th and max_th, and jumps to 1 above max_th
[Figure: drop probability vs. queue size Q]
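The piecewise-linear drop function above can be sketched as follows; this is a minimal illustration (function and parameter names are ours, not Floyd's reference implementation):

```python
def red_drop_probability(avg_q, min_th, max_th, max_drop):
    """RED drop function: 0 below min_th, linear ramp up to
    max_drop between the thresholds, 1 above max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_drop * (avg_q - min_th) / (max_th - min_th)
```

For example, with min_th=5, max_th=15, and Max_drop=0.1, an average queue of 10 (halfway between the thresholds) gives a drop probability of 0.05.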
Selection of the Maximum Drop Probability for RED
The choice of Max_drop significantly affects RED's performance:
–Too small: active packet drops are not enough to prevent global synchronisation
–Too large: throughput decreases
–The optimal value depends on the number of connections, the round-trip time, etc.
Selecting an optimal value for Max_drop remains an open issue
RED: Calculating the Average Queue Size
Use a low-pass filter (exponentially weighted moving average): avg <- (1 - w_q) * avg + w_q * q
w_q should be small enough to filter out transient congestion, yet large enough for the average to remain responsive
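A minimal sketch of one filter step; w_q = 0.002 is a commonly cited RED default, used here only as an illustrative value:

```python
def update_avg(avg, q, w_q=0.002):
    """One EWMA step: new average = (1 - w_q)*avg + w_q*q.
    A small w_q filters out transient bursts; a larger w_q
    tracks the instantaneous queue q more closely."""
    return (1 - w_q) * avg + w_q * q
```

With w_q = 0.5 a burst moves the average halfway toward the instantaneous queue in one step; with w_q = 0.002 a short burst barely moves it, which is exactly how RED ignores transient congestion.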
How RED solves the problems
Problems: global synchronization, transient congestion (queue kept short), bias against bursty traffic
Mechanisms:
–Drop packets when congestion is imminent
–Select packets to drop at random
–Use the average queue length as the indicator of congestion
RED Variants
RED variants can be classified into two categories:
–Aggregate control: modifying the calculation of the control variable and/or the drop function, which determines the packet drop probability
–Per-flow control: configuring and setting RED's parameters per flow; addresses the fairness problem
BLUE (aggregate control)
RED depends only on the queue length; for an optimal operating point, a long queue is necessary
BLUE uses packet loss and link utilisation to measure network congestion directly
Fewer configuration parameters
Advantages:
–Reduces the packet loss rate
–Keeps the gateway queue stable
BLUE
Increases the marking/dropping probability when it detects packet loss due to buffer overflow
Decreases the marking/dropping probability when it detects that the marking probability is too aggressive
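BLUE's update rule can be sketched as below; the step sizes d1, d2 and the event-hook names are illustrative assumptions, not taken from the BLUE paper's code (the paper also freezes p between updates for a freeze_time interval, omitted here):

```python
class Blue:
    """Sketch of BLUE's marking-probability updates."""
    def __init__(self, d1=0.02, d2=0.002):
        self.p = 0.0    # marking/dropping probability
        self.d1 = d1    # increment on loss (buffer overflow)
        self.d2 = d2    # decrement when marking is too aggressive

    def on_loss(self):
        # buffer overflow observed: mark/drop more aggressively
        self.p = min(1.0, self.p + self.d1)

    def on_idle_link(self):
        # link under-utilised: current marking is too aggressive
        self.p = max(0.0, self.p - self.d2)
```

Choosing d1 much larger than d2 lets BLUE react quickly to loss while backing off only gradually, which is what keeps the queue stable.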
RED Variants Using Per-Flow Accounting
Flow RED (FRED), Fair Buffering RED (FB-RED), XRED, Class-Based Threshold RED (CBT-RED), Balanced RED (BRED), Stochastic Fair Blue (SFB)
Two variants
FRED (Fair RED)
–fairness among TCP connections
–uses per-active-flow accounting (each flow's use of buffer space)
–scalability problem
FB-RED (Fair Buffering RED)
–uses the individual bandwidth-delay product of each link to modify the packet drop probability: the inverse of the square root of the bandwidth-delay product is used to calculate Max_drop
Explicit congestion notification (ECN) RFC 3168
Packet dropped or packet marked?
Instead of dropping packets, packets can be marked; such marking is called ECN (explicit congestion notification)
The benefits of ECN:
–A packet does not have to be retransmitted (not that big a deal when drop probabilities are small, e.g. 1%)
–Has a dramatic effect when the congestion window is small, because a timeout is avoided
But why would the congestion window be small?
–If it is small because the link is heavily congested, ECN might not be possible, because the queue might truly be full
ECN in the IP header
ECT: ECN-Capable Transport. The two-bit ECN field encodes four codepoints: Not-ECT, ECT(0), ECT(1), and CE (Congestion Experienced)
TCP changes needed for ECN
TCP connection setup
–Find out whether the endpoints are ECN-capable
To inform the sender of congestion
–ECN-Echo (ECE) flag in the TCP header
To inform the receiver of the window reduction
–Congestion Window Reduced (CWR) flag
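Per RFC 3168, ECE and CWR occupy the two high bits of the TCP flags octet (0x40 and 0x80). A minimal sketch of reading them (function name is ours):

```python
CWR = 0x80  # Congestion Window Reduced
ECE = 0x40  # ECN-Echo

def ecn_flags(tcp_flags_byte):
    """Return which ECN-related TCP flags are set."""
    return {"ece": bool(tcp_flags_byte & ECE),
            "cwr": bool(tcp_flags_byte & CWR)}
```

For example, a flags byte of 0x50 (ACK + ECE) is what a receiver sends to echo congestion back to the sender.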
TCP throughput modeling
Motivation for TCP Modeling TCP operating scale is very large –Models are required to gain deeper understanding of TCP dynamics Uncertainties can be modeled as stochastic processes Drive the design of TCP-friendly algorithms for multimedia applications Optimize TCP performance
TCP Modeling Essentials Mainly Reno flavors are modeled Two main features are modeled –Window dynamics –Packet loss process
Packet Loss Process Packet loss triggers window decrease Packet loss is uncertain This uncertainty is typically modeled as a stochastic process –E.g. probability p of losing a packet
Window Dynamics
Linear increase and multiplicative decrease (AIMD) is modeled
The standard assumption: throughput X(t) = W(t)/RTT
Gallery of TCP Models Periodic model Detailed packet loss model Finite state machine Fluid flow model And others …
Periodic model
TCP Congestion Control: window algorithm
Window: can send W packets per RTT
–Increase the window by one per RTT if no loss: W <- W+1 each RTT
–Decrease the window by half on detection of loss (receiving 3 DUPACKs): W <- W/2
[Figure: sender/receiver timeline, one window of packets per RTT]
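The AIMD rule above can be sketched as a toy window trace, assuming (as in the idealized model that follows) that a loss occurs whenever the window reaches a cap; names are illustrative:

```python
def aimd_trace(w0, cap, rtts):
    """Window size per RTT: +1 each RTT without loss,
    halved when the window reaches the loss cap."""
    w, trace = w0, []
    for _ in range(rtts):
        trace.append(w)
        w = w // 2 if w >= cap else w + 1
    return trace
```

Starting at 4 with a cap of 8, six RTTs produce the sawtooth [4, 5, 6, 7, 8, 4]: linear growth, then a halving at the loss.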
TCP throughput/loss relationship
Idealized model: W is the maximum supportable window size (at which loss occurs)
The TCP window starts at W/2, grows to W, then halves, grows to W again, halves, and so on
One window's worth of packets is sent each RTT
Goal: throughput as a function of loss rate and RTT
[Figure: sawtooth of TCP window size vs. time (in RTTs), oscillating between W/2 and W; one period marked]
Number of packets sent per period: summing one window per round over a period,
  packets/period = sum_{n=0}^{W/2} (W/2 + n) = (3/8)W^2 + (3/4)W, approximately 3W^2/8
One packet lost per period implies p = 1/(3W^2/8), i.e. W = sqrt(8/(3p))
Since a period lasts (W/2) RTTs:
  throughput = (3W^2/8) / ((W/2) RTT) = (3/4) W / RTT = sqrt(3/2) / (RTT sqrt(p)), approximately 1.22 / (RTT sqrt(p))
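The derivation can be checked numerically; this sketch (names illustrative) computes the periodic-model prediction directly from its pieces:

```python
import math

def periodic_model_throughput(p, rtt):
    """Throughput (packets/s) of the idealized periodic model."""
    W = math.sqrt(8.0 / (3.0 * p))         # from p = 1/(3W^2/8)
    packets_per_period = 3.0 * W * W / 8.0  # area under the sawtooth
    period = (W / 2.0) * rtt                # W/2 rounds of one RTT
    return packets_per_period / period      # = sqrt(3/2)/(rtt*sqrt(p))
```

With p = 1% and RTT = 100 ms this gives about 122 packets/s, matching the closed form sqrt(3/2)/(RTT sqrt(p)).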
Detailed packet loss model
TDP: triple-duplicate period (the interval between two consecutive triple-duplicate-ACK loss indications)
b = 2 (delayed ACK: one ACK acknowledges b packets)
X_i = total number of rounds in TDP i; each round lasts one RTT
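For reference, the closed-form result of this detailed model (the Padhye et al. TCP Reno formula), with loss probability p, packets per ACK b, retransmission timeout T_0, and maximum window W_max; stated here because the slide's equation image is not reproduced in the text:

```latex
B(p) \approx \min\left( \frac{W_{\max}}{RTT},\;
\frac{1}{RTT\sqrt{\dfrac{2bp}{3}} \;+\; T_0 \min\!\left(1,\, 3\sqrt{\dfrac{3bp}{8}}\right) p \,(1 + 32p^2)} \right)
```

The first term inside the square root recovers the periodic model's 1/(RTT sqrt(p)) behaviour for small p; the second term accounts for timeouts, which dominate at high loss rates.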
MSS is not shown (throughput is expressed in packets per second; multiply by MSS for bytes per second)
TCP as an FSM