
1
RED Enhancement Algorithms By Alina Naimark

2
Presented Approaches Flow Random Early Drop (FRED), by Dong Lin and Robert Morris. Stabilized Random Early Drop (SRED), by Teunis J. Ott, T. V. Lakshman, and Larry H. Wong.

3
Basic RED - Reminder RED (Random Early Drop): an active queue management algorithm. It tries to keep throughput high while maintaining a certain average queue size. It does this by detecting incipient congestion through the computed average queue size and notifying connections of congestion by dropping packets. In the congested state, an equal percentage of packets is dropped from each connection, independent of that connection's share of the bandwidth.
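As a reminder of the mechanism, the average-queue computation and the drop decision can be sketched as below. This is a minimal illustration of the scheme just described, not the authors' implementation; the parameter names (min_th, max_th, w_q, max_p) follow the RED notation used later in these slides.

```python
def update_avg(avg, q, w_q=0.002):
    """Exponentially weighted moving average of the instantaneous queue length q."""
    return (1.0 - w_q) * avg + w_q * q

def red_drop_probability(avg, min_th, max_th, max_p):
    """RED's drop probability as a function of the average queue size."""
    if avg < min_th:
        return 0.0            # no incipient congestion: accept everything
    if avg >= max_th:
        return 1.0            # severe congestion: drop every arrival
    # probability grows linearly from 0 to max_p between the two thresholds
    return max_p * (avg - min_th) / (max_th - min_th)
```

Note that the drop decision uses the averaged queue, not the instantaneous one, which is what lets RED absorb short bursts while still signaling persistent congestion.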

4
Addressed Problems Under persistent congestion, every connection suffers some minimal loss. Dropping can be temporarily non-proportional. Non-adaptive connections can force RED to drop packets at a high rate from all connections.

5
Demonstration of RED's limitations Fragile vs. Robust: Fragile - congestion aware, sensitive to packet loss; Robust - congestion aware, aggressive. Symmetric Adaptive TCPs. Adaptive TCP vs. Non-adaptive CBR: Non-adaptive - not congestion aware, aggressive.

6
Fragile vs. Robust - Setup (topology figure): robust senders R-1 to R-4 and one Fragile sender feed a RED gateway toward a sink, over links with d = 1 ms at 100 Mbps, d = 2 ms at 45 Mbps, and d = 16 ms at 45 Mbps.

7
Fragile vs. Robust - Results: performance of the long-RTT TCP connection.

BS                   56      48      40      32      24      16
BW/MAX               71%     70%     70%     69%     67%     45%
BW%/DROP%            0.940   0.911   0.982   1.03    1.05    0.456
Fragile's loss rate  0.32%   0.37%   0.40%   0.42%   0.55%   1.13%
R-1's loss rate      0.33%   0.35%   0.38%   0.38%   0.44%   0.49%

The BW/MAX row shows the ratio of allocated bandwidth to the 4.05% maximum possible. BW%/DROP% shows the ratio of the percentage of achieved bandwidth to the percentage of dropped packets. The last two rows show the loss rates of the fragile connection and of one of the robust connections (R-1).

8
Fragile vs. Robust - Conclusion Proportional dropping doesn't guarantee fair bandwidth sharing.

9
Symmetric Adaptive TCPs - Setup (topology figure): Sender 1 and Sender 2, each on a d = 0.274, 155 Mbps link, feed a RED gateway whose d = 0.274, 77.5 Mbps output link leads to a sink. Two identical TCP connections compete over the RED gateway with buffer size 32 packets and min_th = 8, max_th = 16, w_q = 0.002, max_p = 0.02.

10
Symmetric Adaptive TCPs RED can accidentally pick the same connection repeatedly to drop packets from. RED may drop a packet with non-zero probability even when that connection has no packets queued. This accounted for a small percentage of traces.

11
Adaptive TCP vs. Non-adaptive CBR - Setup (topology figure): a TCP sender and an 8 Mbps CBR source feed a RED gateway over d = 2 ms, 10 Mbps links; the gateway's d = 2 ms, 10 Mbps output link leads to a sink. The adaptive TCP competes with the CBR UDP over a RED gateway with buffer size 64 packets and min_th = 16, max_th = 32, w_q = 0.002, max_p = 0.02.

12
Adaptive TCP vs. Non-adaptive CBR The TCP sender cannot obtain its fair share because the unfair FCFS scheduling distributes the output capacity according to queue occupancy. The RED gateway drops packets from both connections even when the TCP connection is using less than its fair share. RED is ineffective at handling non-adaptive connections.

13
FRED - Flow Random Early Drop The goal is to reduce the unfairness effects found in RED. The approach is to generate selective feedback to a filtered set of connections: those with a large number of packets queued.

14
FRED Per-flow variables: qlen - number of buffered packets; strike - number of times the flow failed to respond to congestion notification. Global parameters: min_q - minimum number of buffered packets per flow; max_q - maximum number of buffered packets per flow; avgcq - estimate of the average per-flow buffer count. Flows with fewer than avgcq packets queued are favored over flows with more. FRED penalizes flows with high strike values.

15
Protecting Fragile Flows Each connection can buffer min_q packets without loss; all additional packets are subject to random drop. In the same simulation setup as the first one, the long-RTT TCP connection was able to run at the maximum possible speed.

16
Managing Heterogeneous Robust Flows When the number of active connections is small (N << min_th / min_q), FRED allows each connection to buffer min_q packets without dropping. If the queue averages more than min_q packets per flow, FRED drops randomly selected packets, imposing the same loss rate on all connections that have more than min_q packets buffered. Since a fixed min_q would be too restrictive in this regime, FRED dynamically raises min_q to avgcq when the system is operating with a small number of active connections.

17
Managing Non-adaptive Flows FRED never lets a flow buffer more than max_q packets, and it counts the number of times each flow tries to exceed max_q (the strike count). Flows with a high strike count are not allowed to queue more than avgcq packets. This lets adaptive flows send bursts of packets, but prevents non-adaptive flows from consistently monopolizing the buffer space.
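Putting the per-flow variables and limits of the last three slides together, FRED's filtering step can be sketched roughly as below. This is a simplified reading of the scheme, not Lin and Morris's exact pseudocode (the real algorithm also updates avgcq and strike, and interacts with RED's averaging); the variable names follow the slides.

```python
def fred_filter(qlen, strike, minq, maxq, avgcq):
    """Classify an arriving packet of one flow: 'drop', 'accept', or defer to RED ('red').

    qlen/strike are the flow's state; minq/maxq/avgcq are the global parameters.
    """
    if qlen >= maxq:
        return "drop"    # flow at its hard cap; FRED would also bump its strike count
    if strike > 1 and qlen >= avgcq:
        return "drop"    # penalized non-adaptive flow: limited to avgcq packets
    if qlen < minq:
        return "accept"  # protect flows with few packets buffered
    return "red"         # otherwise subject to RED's probabilistic early drop
```

The ordering matters: the max_q cap and the strike penalty are checked before the min_q protection, so a misbehaving flow cannot shelter behind the small-queue exemption.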

18
Adaptive TCP vs. Non-adaptive CBR For the same simulation setup, FRED performed much better. FRED limited the UDP connection's bandwidth by preventing it from using more than the average number of buffers, and the TCP connection was able to receive its fair share of the link capacity.

19
Symmetric Adaptive TCPs For the same simulation setup, FRED's ability to protect small windows completely prevented the TCP connections from losing packets during ramp-up. None of the 10,000 simulations produced a retransmission timeout.

20
FRED - Advantages FRED's ability to accept packets preferentially from flows with few packets buffered achieves much of the beneficial effect of per-connection queuing and round-robin scheduling, but with substantially less complexity. FRED is more likely to accept packets from new connections, even under congestion. Because TCP detects congestion by noticing lost packets, FRED serves as a selective binary-feedback congestion avoidance algorithm.

21
Supporting Many Flows Problem: there are more flows than packets of storage at the gateway; the gateway buffers are kept full and some of the connections are forced to time out. How can buffers be allocated fairly under these conditions?

22
FRED - Extension The timeouts and the associated high delay variation could potentially be eliminated by adding buffer memory. Proposition: when the number of simultaneous flows is large, the gateway should provide exactly two packet buffers for each flow.

23
FRED Extension - Problem This extension may cause TCP to operate with a small congestion window. That is troublesome for TCP implementations that need a triple duplicate ACK to resend a lost packet: if the window is too small, they need a timeout to recover.

24
FRED Extension - Simulation Setup: 16 TCP connections at 100 Mbps share a gateway with a 32-packet buffer and a 10 Mbps bottleneck. Results: RED caused 2006 timeouts; FRED with the extension caused 20 timeouts.

25
Conclusions Demonstrated that discarding packets in proportion to the bandwidth used doesn't provide fair bandwidth sharing. FRED's selective dropping, based on per-active-flow buffer counts, provides fair sharing for flows with different RTTs and window sizes. FRED protects adaptive flows from non-adaptive ones by enforcing dynamic per-flow queuing limits.

26
SRED: Stabilized RED SRED stabilizes the gateway's buffer occupancy at a level independent of the number of active connections. It does this by estimating the number of active connections without collecting or analyzing per-flow state. The same mechanism is used to identify misbehaving flows.

27
The Algorithm Build a zombie list: while the list is not full, for each arriving packet add the packet's source and destination (its flow identifier) to the list and set the entry's count to zero. Once the zombie list is full, for each arriving packet: compare it with a randomly chosen zombie from the list. If the packets belong to the same flow, declare a Hit and increase that zombie's count. If not, declare No hit and, with probability p, overwrite the chosen zombie with the new packet's flow (resetting its count to zero).
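The zombie-list bookkeeping just described can be sketched as below. This is a hedged illustration, not the authors' code; the defaults M = 1000 and p = 0.25 are illustrative values consistent with the M/p discussion on the following slides, not mandated by them.

```python
import random

class ZombieList:
    """Sketch of SRED's zombie list: M entries, overwrite probability p on a miss."""

    def __init__(self, size=1000, p=0.25, seed=0):
        self.size, self.p = size, p
        self.rng = random.Random(seed)
        self.zombies = []                        # each entry: [flow_id, count]

    def observe(self, flow_id):
        """Process one packet arrival; return True iff it causes a Hit."""
        if len(self.zombies) < self.size:        # warm-up: just fill the list
            self.zombies.append([flow_id, 0])
            return False
        z = self.rng.randrange(self.size)        # compare with a random zombie
        if self.zombies[z][0] == flow_id:
            self.zombies[z][1] += 1              # Hit: bump that zombie's count
            return True
        if self.rng.random() < self.p:           # No hit: overwrite with prob. p
            self.zombies[z] = [flow_id, 0]
        return False
```

The list needs only M small entries and one random comparison per packet, which is what makes the scheme stateless with respect to individual flows.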

28
Observations The time it takes for the zombie list to lose its memory can be estimated as M/p packets (M list entries, overwrite probability p). If an arriving packet causes a hit, this may be evidence of a misbehaving flow; if the zombie's count is also high, the evidence becomes stronger.

29
Relationship Between Hits and the Number of Active Flows Define Hit(t) = 1 if packet t causes a hit and Hit(t) = 0 otherwise, and update P(t) = (1 - alpha) * P(t-1) + alpha * Hit(t). Then P(t) is an estimate of the frequency of hits over approximately the most recent M/p packets before packet t. In this presentation alpha = 1/M = 0.001.

30
Estimation of the number of active flows Proposition: N(t) = 1 / P(t) is a good estimate of the effective number of active flows in the time shortly before the arrival of packet t.
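The estimator pair, P(t) from the hit sequence and N(t) = 1/P(t), can be written down directly. A minimal sketch, with alpha = 1/M = 0.001 as on the previous slide:

```python
def update_P(P_prev, hit, alpha=0.001):
    """EWMA of the hit indicator: P(t) = (1 - alpha) * P(t-1) + alpha * Hit(t)."""
    return (1.0 - alpha) * P_prev + alpha * (1.0 if hit else 0.0)

def estimate_flows(P):
    """Proposition: the effective number of active flows is roughly 1 / P(t)."""
    return 1.0 / P if P > 0 else float("inf")
```

For example, with 20 equal-rate flows the hit frequency settles near 1/20 = 0.05, and estimate_flows(0.05) returns 20.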

31
Proposition Argumentation Suppose there are flows 1, 2, ..., and every arriving packet belongs to flow i with probability pi_i. Then for every arriving packet the probability that it causes a hit is the sum over i of pi_i^2. If there are N active flows of identical traffic intensity, pi_i = 1/N for every i, so the hit probability is N * (1/N)^2 = 1/N, and 1/P(t) recovers N. In this symmetric case the Proposition is exact, or at least roughly unbiased.

32
Simple Stabilized RED No computation of the average queue length: the packet loss probability depends only on the instantaneous buffer occupancy and the estimated number of active flows.

33
Target buffer occupation The optimal value of the drop probability p depends on the number of active flows N. Under drop probability p, every flow will have a congestion window of the order of 1/sqrt(p) MSSs, so for N flows the sum of the congestion windows will be of the order of N/sqrt(p).

34
Target buffer occupation The sum of the congestion windows and Q_0 (the target buffer occupation) must be of the same magnitude. Requiring equality, N/sqrt(p) = Q_0; thus p must be of the order of (N/Q_0)^2.

35
Drop probability function For packet t, with the buffer of capacity B containing q packets: p_zap = p_sred(q) * min(1, (1/(256 * P(t)))^2), where p_sred(q) = p_max for B/3 <= q < B, p_max/4 for B/6 <= q < B/3, and 0 for 0 <= q < B/6.
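With B the buffer capacity, the two-part rule above can be sketched as follows. This is a hedged reading of the simple SRED scheme; p_max = 0.15 is an illustrative default, not fixed by the slide.

```python
def p_sred(q, B, p_max=0.15):
    """Piecewise drop probability from the instantaneous occupancy q (capacity B)."""
    if q >= B / 3.0:
        return p_max          # upper third of the buffer
    if q >= B / 6.0:
        return p_max / 4.0    # middle band: one quarter of p_max
    return 0.0                # lightly loaded: no early drops

def p_zap_simple(q, B, P, p_max=0.15):
    """Simple SRED: scale p_sred(q) by min(1, (1/(256*P))**2), P the hit estimate."""
    return p_sred(q, B, p_max) * min(1.0, (1.0 / (256.0 * P)) ** 2)
```

Since P is roughly 1/N, the scaling factor behaves like (N/256)^2 for N <= 256 flows, giving the p ~ N^2 dependence derived on the previous slide.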

36
Drop probability function motivation The drop probability should depend on the buffer occupancy: making p_sred(q) non-decreasing in q ensures that the drop probability increases when the buffer occupancy increases. The ratio of 4 between the levels quadruples the drop probability when the buffer occupancy crosses a threshold; since a TCP window scales as 1/sqrt(p), this has the long-term effect of halving the congestion window.

37
Drop probability function motivation - continued As can be observed from the drop probability function, the factor min(1, (1/(256 * P(t)))^2) makes the drop probability grow as the square of the number of active flows (as the target-occupancy argument requires) while there are at most about 256 flows, and saturate at p_sred(q) beyond that.

38
SRED: Stabilized Random Early Drop Simple SRED's drop probability depends on the instantaneous buffer occupancy q and the hit estimate P(t). Full SRED's drop probability also depends on whether the arriving packet caused a hit.

39
Drop probability function Full SRED drop probability function: p_zap = p_sred(q) * min(1, (1/(256 * P(t)))^2) * (1 + Hit(t)/P(t)). If a fraction pi_i of all packets are from flow i, then for flow i the probability is on average multiplied by 1 + pi_i/P(t). This increases the drop probability of overactive flows.
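The full SRED rule adds the hit-dependent multiplier to the simple rule. A self-contained sketch, again with an assumed illustrative p_max, and with the result capped at 1 so it remains a probability (an implementation assumption, since the raw product can exceed 1):

```python
def p_zap_full(q, B, P, hit, p_max=0.15):
    """Full SRED: p_sred(q) * min(1, (1/(256*P))**2) * (1 + Hit/P), capped at 1."""
    # piecewise p_sred(q) as in simple SRED
    base = p_max if q >= B / 3.0 else (p_max / 4.0 if q >= B / 6.0 else 0.0)
    scale = min(1.0, (1.0 / (256.0 * P)) ** 2)   # flow-count scaling
    boost = 1.0 + (1.0 if hit else 0.0) / P      # overactive flows hit more often
    return min(1.0, base * scale * boost)
```

A packet that causes a hit is punished immediately, and since a flow sending a fraction pi_i of the traffic hits with probability about pi_i, heavy flows see their drop probability inflated by 1 + pi_i/P on average.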

40
Misbehaving Flows The hit mechanism can be used to identify candidate misbehaving flows. Simulation: 100 persistent TCP flows and one "misbehaving" UDP flow. Result: the UDP connection's load is 2.58 times the average load of any TCP connection, while its hit/packet ratio is about 1.74 times as large.

41
Misbehaving Flows - continued An even stronger indication of misbehavior is a hit on a zombie with a high count. For the "overactive" connection, a fraction 0.085 of all hits have Count >= 1; for the TCP connections this fraction is 0.047.

42
Misbehaving Flows - Conclusion A hit indicates a higher probability that the flow is misbehaving; a hit with a high count raises that probability further. These mechanisms can be used to filter flows down to a small subset that are possibly misbehaving.

43
SRED - Conclusions Presented a mechanism for statistically estimating the number of active flows that doesn't require keeping per-flow state. Presented schemes for adjusting the drop probability so that the buffer occupancy hovers around a preset value. The same mechanism can also be used to identify misbehaving flows.

44
SRED: Simulations Goal: to show that SRED succeeds in keeping the buffer occupancy close to a specified target and away from overflow or underflow.
