
1 Advanced Computer Networks
Congestion Control and Avoidance (Lecture 4-5). Instructor: Mazhar Hussain

2 Congestion Control When one part of the subnet (e.g. one or more routers in an area) becomes overloaded, congestion results. Because routers are receiving packets faster than they can forward them, one of two things must happen: The subnet must prevent additional packets from entering the congested region until those already present can be processed. The congested routers can discard queued packets to make room for those that are arriving.

3 Factors that Cause Congestion
Packet arrival rate exceeds the outgoing link capacity. Insufficient memory to store arriving packets. Bursty traffic. Slow processor.

4 Congestion Control vs Flow Control
Congestion control is a global issue: it involves every router and host within the subnet. Flow control's scope is point-to-point; it involves just the sender and receiver.

5 Congestion Control, cont.
Congestion control is concerned with efficiently using a network at high load. Several techniques can be employed, including: warning bits, choke packets, load shedding, random early discard, and traffic shaping. The first three deal with congestion detection and recovery; the last two deal with congestion avoidance.

6 Warning Bit A special bit in the packet header is set by the router to warn the source when congestion is detected. The bit is copied and piggy-backed on the ACK and sent to the sender. The sender monitors the number of ACK packets it receives with the warning bit set and adjusts its transmission rate accordingly.

7 Choke Packets A more direct way of telling the source to slow down.
A choke packet is a control packet generated at a congested node and transmitted to restrict traffic flow. The source, on receiving the choke packet, must reduce its transmission rate by a certain percentage. An example of a choke packet is the ICMP Source Quench packet.

8 Hop-by-Hop Choke Packets
Over long distances or at high speeds, choke packets are not very effective. A more efficient method is to send choke packets hop-by-hop. This requires each hop to reduce its transmission even before the choke packet arrives at the source.

9 Load Shedding
When buffers become full, routers simply discard packets. Which packet is chosen to be the victim depends on the application and on the error strategy used in the data link layer. For a file transfer, for example, we cannot discard older packets, since this would cause a gap in the received data. For real-time voice or video it is probably better to throw away old data and keep new packets. Get the application to mark packets with a discard priority.

10 Random Early Discard (RED)
This is a proactive approach in which the router discards one or more packets before the buffer becomes completely full. Each time a packet arrives, the RED algorithm computes the average queue length, avg. If avg is lower than some lower threshold, congestion is assumed to be minimal or non-existent and the packet is queued.

11 RED, cont. If avg is greater than some upper threshold, congestion is assumed to be serious and the packet is discarded. If avg is between the two thresholds, this might indicate the onset of congestion. A drop probability P is then calculated, and the arriving packet is discarded with probability P.

12 Congestion Avoidance Mechanism
Random Early Detection (RED) RED has two queue-length thresholds that trigger certain activity: MinThreshold and MaxThreshold. When a packet arrives at the gateway, RED compares the current AvgLen with these two thresholds, according to the following rules: if AvgLen <= MinThreshold, queue the packet; if MinThreshold < AvgLen < MaxThreshold, calculate probability P and drop the arriving packet with probability P; if AvgLen >= MaxThreshold, drop the arriving packet.
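A minimal sketch of the RED decision rules above, in Python. The weighted-average update for AvgLen, the MaxP drop-probability ceiling, and all parameter values are assumptions added for illustration; only the three threshold rules come from the slide.

    import random

    # Illustrative parameters (assumed values, measured in packets).
    MIN_THRESHOLD = 5
    MAX_THRESHOLD = 15
    MAX_P = 0.1          # assumed maximum drop probability near MaxThreshold
    WEIGHT = 0.002       # assumed weight for the running average of the queue length

    avg_len = 0.0

    def red_enqueue_decision(current_queue_len):
        """Return True to queue the arriving packet, False to drop it."""
        global avg_len
        # Exponentially weighted moving average of the instantaneous queue length.
        avg_len = (1 - WEIGHT) * avg_len + WEIGHT * current_queue_len

        if avg_len <= MIN_THRESHOLD:
            return True                      # queue the packet
        if avg_len >= MAX_THRESHOLD:
            return False                     # drop the arriving packet
        # Between the thresholds: drop with a probability P that grows with AvgLen.
        p = MAX_P * (avg_len - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
        return random.random() >= p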

13 Traffic descriptors

14 Traffic Shaping Another method of congestion control is to "shape" the traffic before it enters the network. Traffic shaping controls the rate at which packets are sent (not just how many). Used in ATM and Integrated Services networks. At connection set-up time, the sender and carrier negotiate a traffic pattern (shape). Two traffic shaping algorithms are the Leaky Bucket and the Token Bucket.

15 The Leaky Bucket Algorithm
The Leaky Bucket Algorithm is used to control the rate of traffic entering a network. It is implemented as a single-server queue with constant service time. If the bucket (buffer) overflows, then packets are discarded.

16 The Leaky Bucket Algorithm
(a) A leaky bucket with water. (b) A leaky bucket with packets.

17 Leaky Bucket Algorithm, cont.
The leaky bucket enforces a constant output rate (average rate) regardless of the burstiness of the input. Does nothing when the input is idle. The host injects one packet per clock tick onto the network. This results in a uniform flow of packets, smoothing out bursts and reducing congestion. When packets are all the same size (as in ATM cells), one packet per tick is fine. For variable-length packets, though, it is better to allow a fixed number of bytes per tick. For example, 1024 bytes per tick will allow one 1024-byte packet, or two 512-byte packets, or four 256-byte packets per tick.
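As a rough sketch of the byte-counting variant just described, the following Python fragment releases a fixed byte budget per clock tick; the 1024-byte budget and 8192-byte buffer size are illustrative assumptions.

    from collections import deque

    BYTES_PER_TICK = 1024      # assumed byte budget released per clock tick
    MAX_BUCKET_BYTES = 8192    # assumed buffer (bucket) capacity

    bucket = deque()           # queued packet sizes, in bytes

    def enqueue(packet_size):
        # If the bucket (buffer) would overflow, the packet is discarded.
        if sum(bucket) + packet_size > MAX_BUCKET_BYTES:
            return False
        bucket.append(packet_size)
        return True

    def on_tick():
        """Called once per clock tick; drains up to BYTES_PER_TICK onto the network."""
        budget = BYTES_PER_TICK
        sent = []
        while bucket and bucket[0] <= budget:
            size = bucket.popleft()
            budget -= size
            sent.append(size)
        return sent            # e.g. one 1024-byte packet or two 512-byte packets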

18 Token Bucket Algorithm
In contrast to the LB, the Token Bucket Algorithm allows the output rate to vary, depending on the size of the burst. In the TB algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token. Tokens are generated by a clock at the rate of one token every t seconds. Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.
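A minimal token bucket sketch along these lines; the token rate and bucket capacity are illustrative assumptions, and one token is spent per packet as on the slide.

    class TokenBucket:
        def __init__(self, tokens_per_sec=100, capacity=500):
            self.rate = tokens_per_sec    # assumed token generation rate
            self.capacity = capacity      # maximum tokens an idle host may save up
            self.tokens = capacity

        def refill(self, elapsed_sec):
            # Tokens accumulate while the host is idle, up to the bucket capacity.
            self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_sec)

        def try_send(self):
            # To transmit a packet, the host must capture and destroy one token.
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False                  # no token: the packet waits, it is not dropped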

19 The Token Bucket Algorithm
(a) Before. (b) After.

20 Leaky Bucket vs Token Bucket
LB discards packets; TB does not. TB discards tokens. With TB, a packet can only be transmitted if there are enough tokens to cover its length in bytes. LB sends packets at an average rate. TB allows for large bursts to be sent faster by speeding up the output. TB allows saving up tokens (permissions) to send large bursts. LB does not allow saving.

21 AIMD (Additive Increase / Multiplicative Decrease)
CongestionWindow (cwnd) is a variable held by the TCP source for each connection. cwnd is set based on the perceived level of congestion. The host receives implicit (packet drop) or explicit (packet mark) indications of internal congestion. MaxWindow :: min(CongestionWindow, AdvertisedWindow) EffectiveWindow = MaxWindow – (LastByteSent – LastByteAcked)
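A small illustrative sketch of how a source could combine the two formulas above to decide how much new data it may inject; variable names follow the slide, and this is not an actual TCP implementation.

    def effective_window(cwnd, advertised_window, last_byte_sent, last_byte_acked):
        # MaxWindow :: min(CongestionWindow, AdvertisedWindow)
        max_window = min(cwnd, advertised_window)
        # EffectiveWindow = MaxWindow - (LastByteSent - LastByteAcked)
        outstanding = last_byte_sent - last_byte_acked
        return max(0, max_window - outstanding)   # bytes of new data the source may send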

22 Additive Increase Additive Increase is a reaction to perceived available capacity. Linear increase basic idea:: for each "cwnd's worth" of packets sent, increase cwnd by 1 packet. In practice, cwnd is incremented fractionally for each arriving ACK: increment = MSS × (MSS / cwnd); cwnd = cwnd + increment
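The fractional per-ACK increment above, as a short sketch; cwnd is in bytes and the 1460-byte MSS is an assumed value.

    MSS = 1460   # assumed maximum segment size, in bytes

    def on_ack_additive_increase(cwnd):
        # Adds roughly one MSS per cwnd's worth of ACKs, i.e. about one packet per RTT.
        increment = MSS * (MSS / cwnd)
        return cwnd + increment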

23 Additive Increase (figure: source and destination exchanging packets, adding one packet each RTT)

24 Multiplicative Decrease
The key assumption is that a dropped packet and the resultant timeout are due to congestion at a router or a switch. Multiplicative Decrease:: TCP reacts to a timeout by halving cwnd. Although cwnd is defined in bytes, the literature often discusses congestion control in terms of packets (or more formally in MSS == Maximum Segment Size). cwnd is not allowed to drop below the size of a single packet.
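The corresponding reaction to a timeout, sketched with the same assumed 1460-byte MSS; the one-packet floor follows the slide.

    MSS = 1460   # assumed maximum segment size, in bytes

    def on_timeout_multiplicative_decrease(cwnd):
        # Halve cwnd on a timeout, but never let it fall below one packet (one MSS).
        return max(MSS, cwnd / 2)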

25 AIMD (Additive Increase / Multiplicative Decrease)
It has been shown that AIMD is a necessary condition for TCP congestion control to be stable. Because the simple congestion control mechanism involves timeouts that cause retransmissions, it is important that hosts have an accurate timeout mechanism. Timeouts are set as a function of the average RTT and the standard deviation of the RTT. However, TCP hosts only sample the round-trip time once per RTT, using a coarse-grained clock.

26 Slow Start Linear additive increase takes too long to ramp up a new TCP connection from a cold start. Beginning with TCP Tahoe, the slow start mechanism was added to provide an initial exponential increase in the size of cwnd. A way to remember the mechanism: slow start prevents a slow start. Even so, slow start is slower than sending a full advertised window's worth of packets all at once.

27 Slow Start The source starts with cwnd = 1.
Every time an ACK arrives, cwnd is incremented. cwnd is effectively doubled per RTT "epoch". Two slow start situations: at the very beginning of a connection (cold start), and when the connection goes dead waiting for a timeout to occur (i.e., the advertised window goes to zero!).
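A sketch of slow start growth, counting cwnd in packets for simplicity; the small loop below only illustrates that one-packet-per-ACK growth doubles cwnd each RTT.

    def slow_start_on_ack(cwnd):
        # Each arriving ACK grows cwnd by one packet; since about cwnd ACKs arrive
        # per RTT, cwnd effectively doubles every RTT "epoch".
        return cwnd + 1

    cwnd = 1
    for rtt in range(4):
        for _ in range(cwnd):          # roughly one ACK per packet in flight
            cwnd = slow_start_on_ack(cwnd)
        print(rtt + 1, cwnd)           # prints 2, 4, 8, 16: exponential growth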

28 Slow Start (figure: source and destination, adding one packet per ACK)

29 Slow Start However, in the second case the source has more information. The current value of cwnd can be saved as a congestion threshold. This is also known as the “slow start threshold” ssthresh.
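A sketch of that second case: the source saves the current window as ssthresh before collapsing back to one packet and re-entering slow start. (Most real TCPs actually save half of the window in flight; the direct save below follows the slide's wording.)

    def restart_after_connection_goes_dead(cwnd):
        # Save the current congestion window as the slow start threshold, then
        # re-enter slow start from a single packet.
        ssthresh = cwnd
        cwnd = 1
        return cwnd, ssthresh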

30 Behavior of TCP Congestion Control

31 Fast Retransmit
Coarse timeouts remained a problem, and fast retransmit was added with TCP Tahoe. Since the receiver responds every time a packet arrives, the sender will see duplicate ACKs when packets arrive out of order. Basic idea:: use duplicate ACKs to signal a lost packet. Upon receipt of three duplicate ACKs, the TCP sender retransmits the lost packet.
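A sketch of the duplicate-ACK counting that triggers fast retransmit; retransmit_segment is a hypothetical callback standing in for the sender's actual retransmission routine.

    DUP_ACK_LIMIT = 3

    dup_acks = 0
    last_ack_seen = None

    def on_ack(ack_no, retransmit_segment):
        global dup_acks, last_ack_seen
        if ack_no == last_ack_seen:
            dup_acks += 1
            if dup_acks == DUP_ACK_LIMIT:
                # Three duplicate ACKs signal a lost packet: retransmit it now
                # instead of waiting for the coarse-grained timeout.
                retransmit_segment(ack_no)
        else:
            last_ack_seen = ack_no
            dup_acks = 0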

32 Fast Retransmit Generally, fast retransmit eliminates about half the coarse-grain timeouts. This yields roughly a 20% improvement in throughput. Note – fast retransmit does not eliminate all the timeouts due to small window sizes at the source.

33 Fast Retransmit (figure: retransmission triggered by three duplicate ACKs)

34 TCP Fast Retransmit Trace

35 TCP Congestion Control
(Figure 7.63 from Leon-Garcia & Widjaja, Communication Networks, ©2000 The McGraw-Hill Companies: congestion window versus round-trip times, showing slow start, the threshold, congestion avoidance, and the point where congestion occurs.)

36 Fast Recovery Fast recovery was added with TCP Reno.
Basic idea:: when fast retransmit detects three duplicate ACKs, start the recovery process from the congestion avoidance region and use the ACKs still in the pipe to pace the sending of packets. After fast retransmit, halve cwnd and commence recovery from this point using linear additive increase, 'primed' by the leftover ACKs in the pipe.
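A sketch of Reno's reaction after fast retransmit, in packet units: halve the window, skip slow start, and continue with additive increase from that point. The exact window arithmetic differs across implementations and is an assumption here.

    def reno_fast_recovery(cwnd):
        # Cut the window in half and resume linear growth from there, rather than
        # collapsing back to one packet as a coarse timeout would.
        ssthresh = max(2, cwnd // 2)
        cwnd = ssthresh        # recovery is 'primed' by the ACKs still in the pipe
        return cwnd, ssthresh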

37 Modified Slow Start
With fast recovery, slow start only occurs: at cold start, or after a coarse-grain timeout. This is the difference between TCP Tahoe and TCP Reno!

38 TCP Congestion Control
(Same plot as slide 35: Figure 7.63 from Leon-Garcia & Widjaja, Communication Networks, ©2000 The McGraw-Hill Companies. Fast recovery would cause a change at the point where congestion occurs.)

39 Adaptive Retransmissions
RTT:: Round Trip Time between a pair of hosts on the Internet. How to set the TimeOut value? The timeout value is set as a function of the expected RTT. Consequences of a bad choice?
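A sketch of setting the timeout from RTT samples using a running mean and mean deviation; the 7/8 and 3/4 weights and the factor of 4 are the commonly used values, assumed here rather than taken from the slide.

    srtt = None     # smoothed round-trip time estimate
    rttvar = 0.0    # smoothed mean deviation of the RTT

    def update_timeout(sample_rtt):
        """Fold in one RTT sample and return the new timeout value."""
        global srtt, rttvar
        if srtt is None:
            srtt, rttvar = sample_rtt, sample_rtt / 2
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - sample_rtt)
            srtt = 0.875 * srtt + 0.125 * sample_rtt
        # The timeout is the expected RTT plus a safety margin for its variability.
        return srtt + 4 * rttvar

As for the consequences of a bad choice: a timeout much smaller than the real RTT causes spurious retransmissions, while one much larger delays loss recovery.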

40 CONGESTION
Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below the capacity. Topics discussed in this section: Network Performance

41 Queues in a router

42 CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal). Topics discussed in this section: Open-Loop Congestion Control, Closed-Loop Congestion Control

43 Congestion control categories

44 Backpressure method for alleviating congestion

45 Choke packet

46 TWO EXAMPLES
To better understand the concept of congestion control, let us give two examples: one in TCP and the other in Frame Relay. Topics discussed in this section: Congestion Control in TCP, Congestion Control in Frame Relay

47 Slow start, exponential increase

48 Note In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches a threshold.

49 Congestion avoidance, additive increase

50 Note In the congestion avoidance algorithm, the size of the congestion window increases additively until congestion is detected.

51 Note An implementation reacts to congestion detection in one of the following ways: ❏ If detection is by time-out, a new slow start phase starts. ❏ If detection is by three ACKs, a new congestion avoidance phase starts.

52 TCP congestion policy summary

53 QUALITY OF SERVICE
Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We can informally define quality of service as something a flow seeks to attain. Topics discussed in this section: Flow Characteristics, Flow Classes

54 Flow characteristics

55 TECHNIQUES TO IMPROVE QoS
In this section, we discuss some techniques that can be used to improve the quality of service. We briefly discuss four common methods: scheduling, traffic shaping, admission control, and resource reservation. Topics discussed in this section: Scheduling, Traffic Shaping, Resource Reservation, Admission Control

56 FIFO queue

57 Priority queuing

58 Weighted fair queuing

59 Leaky bucket

60 Leaky bucket implementation

61 Note A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop the packets if the bucket is full.

62 Note The token bucket allows bursty traffic at a regulated maximum rate.

63 Token bucket

64 Getting to equilibrium: slow-start
Add a congestion window to the per-connection state. When starting or restarting after a loss, set the congestion window to one packet. On each ACK for new data, increase the congestion window by one packet. When sending, send the minimum of the receiver's advertised window and the congestion window.

65 Source of the picture: fig2 in the paper

66 Conservation at equilibrium round-trip timing
TCP: estimating the mean round-trip time, the next packet sent, exponential backoff.

67

68

69 Congestion avoidance algorithm
We always want to use the full bandwidth available on the subnet while keeping the network stable. To do so, the congestion window size is increased slowly (i.e., additive increase) to probe for more bandwidth becoming available (e.g., when another host closes its connection). However, on any timeout, the congestion window size is decreased quickly (i.e., multiplicative decrease) to avoid congestion. On no congestion (to probe for more bandwidth): cwnd = cwnd + 1. On congestion (to avoid congestion): cwnd = cwnd / 2.

70 Questions/Comments?

