Advanced Computer Networks


Advanced Computer Networks: Congestion Control and Avoidance, Lectures 4-5. Instructor: Mazhar Hussain

Congestion Control When one part of the subnet (e.g. one or more routers in an area) becomes overloaded, congestion results. Because routers are receiving packets faster than they can forward them, one of two things must happen: The subnet must prevent additional packets from entering the congested region until those already present can be processed. The congested routers can discard queued packets to make room for those that are arriving.

Factors that Cause Congestion: packet arrival rate exceeds the outgoing link capacity; insufficient memory to store arriving packets; bursty traffic; slow processors.

Congestion Control vs Flow Control Congestion control is a global issue; it involves every router and host within the subnet. Flow control is point-to-point in scope; it involves just the sender and receiver.

Congestion Control, cont. Congestion control is concerned with efficiently using a network at high load. Several techniques can be employed, including: the warning bit, choke packets, load shedding, random early discard, and traffic shaping. The first three deal with congestion detection and recovery; the last two deal with congestion avoidance.

Warning Bit A special bit in the packet header is set by the router to warn the source when congestion is detected. The bit is copied and piggy-backed on the ACK and sent to the sender. The sender monitors the number of ACK packets it receives with the warning bit set and adjusts its transmission rate accordingly.

Choke Packets A more direct way of telling the source to slow down. A choke packet is a control packet generated at a congested node and transmitted to restrict traffic flow. The source, on receiving the choke packet, must reduce its transmission rate by a certain percentage. An example of a choke packet is the ICMP Source Quench packet.

Hop-by-Hop Choke Packets Over long distances or at high speeds, choke packets are not very effective. A more efficient method is to send choke packets hop-by-hop. This requires each hop to reduce its transmission even before the choke packet arrives at the source.

Load Shedding When buffers become full, routers simply discard packets. Which packet is chosen to be the victim depends on the application and on the error strategy used in the data link layer. For a file transfer, for example, we cannot discard older packets, since this would cause a gap in the received data. For real-time voice or video it is probably better to throw away old data and keep new packets. One option is to get the application to mark packets with a discard priority.

Random Early Discard (RED) This is a proactive approach in which the router discards one or more packets before the buffer becomes completely full. Each time a packet arrives, the RED algorithm computes the average queue length, avg. If avg is lower than some lower threshold, congestion is assumed to be minimal or non-existent and the packet is queued.

RED, cont. If avg is greater than some upper threshold, congestion is assumed to be serious and the packet is discarded. If avg is between the two thresholds, this might indicate the onset of congestion. The probability of congestion is then calculated.

Congestion Avoidance Mechanism Random Early Detection (RED) RED has two queue length thresholds that trigger certain activity: MinThreshold and MaxThreshold. When a packet arrives at the gateway, RED compares the current AvgLen with these two thresholds, according to the following rules: if AvgLen <= MinThreshold, queue the packet; if MinThreshold < AvgLen < MaxThreshold, calculate a probability P and drop the arriving packet with probability P; if AvgLen >= MaxThreshold, drop the arriving packet.
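
To make the rules above concrete, here is a minimal Python sketch of the RED drop decision. It assumes the usual exponentially weighted average of the queue length and a linear drop probability between the two thresholds; the names (red_avg_len, max_p) and the default constants are illustrative, not taken from the slides.

    import random

    def red_avg_len(avg_len, sample_len, weight=0.002):
        """Exponentially weighted moving average of the instantaneous queue length."""
        return (1 - weight) * avg_len + weight * sample_len

    def red_enqueue_decision(avg_len, min_th, max_th, max_p=0.02):
        """Return 'queue' or 'drop' for an arriving packet (simplified RED)."""
        if avg_len <= min_th:
            return "queue"
        if avg_len >= max_th:
            return "drop"
        # Between the thresholds: drop with a probability that grows linearly with AvgLen.
        p = max_p * (avg_len - min_th) / (max_th - min_th)
        return "drop" if random.random() < p else "queue"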

Traffic descriptors

Traffic Shaping Another method of congestion control is to “shape” the traffic before it enters the network. Traffic shaping controls the rate at which packets are sent (not just how many). It is used in ATM and Integrated Services networks. At connection set-up time, the sender and carrier negotiate a traffic pattern (shape). Two traffic shaping algorithms are the leaky bucket and the token bucket.

The Leaky Bucket Algorithm The leaky bucket algorithm is used to control the rate at which traffic enters the network. It is implemented as a single-server queue with constant service time. If the bucket (buffer) overflows, then packets are discarded.

The Leaky Bucket Algorithm. (a) A leaky bucket with water. (b) A leaky bucket with packets.

Leaky Bucket Algorithm, cont. The leaky bucket enforces a constant output rate (average rate) regardless of the burstiness of the input; it does nothing when the input is idle. The host injects one packet per clock tick onto the network. This results in a uniform flow of packets, smoothing out bursts and reducing congestion. When packets are all the same size (as in ATM cells), one packet per tick is fine. For variable-length packets, though, it is better to allow a fixed number of bytes per tick. E.g. 1024 bytes per tick will allow one 1024-byte packet, two 512-byte packets, or four 256-byte packets in one tick.
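
A small Python sketch of the byte-counting variant described above. The class name and parameters are hypothetical; the per-tick byte budget plays the role of the 1024-bytes-per-tick example, overflowing packets are discarded, and a packet larger than one tick's budget is not handled in this sketch.

    from collections import deque

    class LeakyBucket:
        """Byte-counting leaky bucket: drains at most bytes_per_tick each clock tick."""

        def __init__(self, capacity_bytes, bytes_per_tick):
            self.capacity = capacity_bytes      # bucket (buffer) size
            self.rate = bytes_per_tick          # constant output rate
            self.queue = deque()
            self.queued_bytes = 0

        def arrive(self, packet_len):
            """Queue an arriving packet, or drop it if the bucket would overflow."""
            if self.queued_bytes + packet_len > self.capacity:
                return False                    # packet discarded
            self.queue.append(packet_len)
            self.queued_bytes += packet_len
            return True

        def tick(self):
            """Send whole packets until the per-tick byte budget is exhausted."""
            budget = self.rate
            sent = []
            while self.queue and self.queue[0] <= budget:
                pkt = self.queue.popleft()
                budget -= pkt
                self.queued_bytes -= pkt
                sent.append(pkt)
            return sent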

Token Bucket Algorithm In contrast to the leaky bucket, the token bucket algorithm allows the output rate to vary, depending on the size of the burst. In the token bucket algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token. Tokens are generated by a clock at the rate of one token every t seconds. Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.
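
For comparison, a minimal token bucket sketch in Python. It spends one token per packet, as the slide describes; a byte-based variant would spend one token per byte. The class name, the use of time.monotonic() as the clock, and the parameter names are illustrative.

    import time

    class TokenBucket:
        """Token bucket: tokens accrue at `rate` per second, saved up to `burst` tokens."""

        def __init__(self, rate, burst):
            self.rate = rate                    # token generation rate (tokens/second)
            self.burst = burst                  # maximum number of saved-up tokens
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self, needed=1):
            """Consume `needed` tokens if available; otherwise the packet must wait."""
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= needed:
                self.tokens -= needed
                return True
            return False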

The Token Bucket Algorithm. (a) Before. (b) After.

Leaky Bucket vs Token Bucket LB discards packets; TB does not. TB discards tokens. With TB, a packet can only be transmitted if there are enough tokens to cover its length in bytes. LB sends packets at an average rate. TB allows for large bursts to be sent faster by speeding up the output. TB allows saving up tokens (permissions) to send large bursts. LB does not allow saving.

AIMD (Additive Increase / Multiplicative Decrease) CongestionWindow (cwnd) is a variable held by the TCP source for each connection. cwnd is set based on the perceived level of congestion. The host receives implicit (packet drop) or explicit (packet mark) indications of internal congestion. MaxWindow = min(CongestionWindow, AdvertisedWindow); EffectiveWindow = MaxWindow - (LastByteSent - LastByteAcked).
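
A one-function Python sketch of the window bookkeeping above, with all quantities in bytes; the function name is illustrative.

    def effective_window(cwnd, advertised_window, last_byte_sent, last_byte_acked):
        """How many more bytes the TCP sender may put onto the network right now."""
        max_window = min(cwnd, advertised_window)
        return max_window - (last_byte_sent - last_byte_acked)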

Additive Increase Additive increase is a reaction to perceived available capacity. Linear increase basic idea: for each “cwnd’s worth” of packets sent, increase cwnd by 1 packet. In practice, cwnd is incremented fractionally for each arriving ACK: increment = MSS x (MSS / cwnd); cwnd = cwnd + increment.
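
The per-ACK increment can be sketched in Python as follows; the MSS value of 1460 bytes is just an illustrative constant.

    MSS = 1460  # illustrative maximum segment size in bytes

    def on_ack_additive_increase(cwnd):
        """Fractional per-ACK growth: roughly one MSS added per cwnd's worth of ACKs."""
        increment = MSS * (MSS / cwnd)
        return cwnd + increment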

Figure: Additive Increase (source to destination): add one packet each RTT.

Multiplicative Decrease The key assumption is that a dropped packet and the resultant timeout are due to congestion at a router or a switch. Multiplicative decrease: TCP reacts to a timeout by halving cwnd. Although cwnd is defined in bytes, the literature often discusses congestion control in terms of packets (or more formally in MSS, the Maximum Segment Size). cwnd is not allowed to fall below the size of a single packet.
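
The corresponding decrease, sketched with cwnd counted in packets so that the one-packet floor is explicit; the function name is illustrative.

    def on_timeout(cwnd_packets):
        """Multiplicative decrease: halve the congestion window, never below one packet."""
        return max(cwnd_packets // 2, 1)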

AIMD (Additive Increase / Multiplicative Decrease) It has been shown that AIMD is a necessary condition for TCP congestion control to be stable. Because the simple congestion control mechanism involves timeouts that cause retransmissions, it is important that hosts have an accurate timeout mechanism. Timeouts are set as a function of the average RTT and the standard deviation of the RTT. However, TCP hosts only sample the round-trip time once per RTT using a coarse-grained clock.

Slow Start Linear additive increase takes too long to ramp up a new TCP connection from a cold start. Beginning with TCP Tahoe, the slow start mechanism was added to provide an initial exponential increase in the size of cwnd. A way to remember the mechanism: slow start prevents a slow start. The name reflects the fact that slow start is still slower than sending a full advertised window's worth of packets all at once.

Slow Start The source starts with cwnd = 1. Every time an ACK arrives, cwnd is incremented; cwnd is effectively doubled per RTT “epoch”. There are two slow start situations: at the very beginning of a connection (cold start), and when the connection goes dead waiting for a timeout to occur (i.e., the advertised window goes to zero).

Figure: Slow Start (source to destination): add one packet per ACK.

Slow Start However, in the second case the source has more information. The current value of cwnd can be saved as a congestion threshold. This is also known as the “slow start threshold” ssthresh.
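
A Tahoe-style Python sketch tying slow start, ssthresh, and additive increase together. Function and parameter names are illustrative, cwnd is in bytes, and the 1460-byte MSS is an assumed example value.

    def on_ack(cwnd, ssthresh, mss=1460):
        """Grow cwnd on each new ACK: exponentially below ssthresh, linearly above it."""
        if cwnd < ssthresh:
            return cwnd + mss              # slow start: cwnd doubles every RTT
        return cwnd + mss * mss / cwnd     # congestion avoidance: about one MSS per RTT

    def on_connection_goes_dead(cwnd, mss=1460):
        """Second slow-start case: save half the window as ssthresh, restart from one packet."""
        ssthresh = max(cwnd / 2, 2 * mss)
        return mss, ssthresh               # new cwnd of one packet, new threshold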

Behavior of TCP Congestion Control

Fast Retransmit Coarse timeouts remained a problem, and fast retransmit was added with TCP Tahoe. Since the receiver responds every time a packet arrives, a lost packet implies the sender will see duplicate ACKs. Basic idea: use duplicate ACKs to signal a lost packet. Upon receipt of three duplicate ACKs, the TCP sender retransmits the lost packet.

Fast Retransmit Generally, fast retransmit eliminates about half the coarse-grain timeouts. This yields roughly a 20% improvement in throughput. Note that fast retransmit does not eliminate all of the timeouts: when the window at the source is small, there may not be enough duplicate ACKs to trigger it.
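
A minimal Python sketch of the duplicate-ACK counting that triggers fast retransmit. This is not a full TCP sender; the class name is illustrative.

    class FastRetransmit:
        """Minimal duplicate-ACK counter for a single connection."""

        DUP_THRESHOLD = 3

        def __init__(self):
            self.last_ack = None
            self.dup_count = 0

        def on_ack(self, ack_no):
            """Return True when the third duplicate ACK arrives and a retransmit is due."""
            if ack_no == self.last_ack:
                self.dup_count += 1
                return self.dup_count == self.DUP_THRESHOLD
            self.last_ack = ack_no             # new data ACKed; reset the counter
            self.dup_count = 0
            return False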

Figure: Fast Retransmit, based on three duplicate ACKs.

TCP Fast Retransmit Trace

Figure: TCP Congestion Control. Congestion window vs. round-trip times: slow start up to the threshold, congestion avoidance beyond it, until congestion occurs. (Leon-Garcia & Widjaja, Communication Networks, Figure 7.63, copyright ©2000 The McGraw-Hill Companies.)

Fast Recovery Fast recovery was added with TCP Reno. Basic idea: when fast retransmit detects three duplicate ACKs, start the recovery process from the congestion avoidance region and use the ACKs still in the pipe to pace the sending of packets. After fast retransmit, halve cwnd and commence recovery from this point using linear additive increase, “primed” by the leftover ACKs in the pipe.
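
A Python sketch of the Reno-style entry into and exit from fast recovery, under the common description in which the window is halved and temporarily inflated by the three segments known to have left the network. Function names and the 1460-byte MSS are illustrative.

    def on_three_dup_acks(cwnd, mss=1460):
        """Enter fast recovery: halve the window instead of collapsing to one packet."""
        ssthresh = max(cwnd / 2, 2 * mss)
        cwnd = ssthresh + 3 * mss      # inflate by the three segments known to have left the network
        return cwnd, ssthresh

    def on_new_ack(ssthresh):
        """Retransmitted segment is ACKed: deflate to ssthresh and resume additive increase."""
        return ssthresh                # new cwnd; linear additive increase continues from here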

Modified Slow Start With fast recovery, slow start only occurs at cold start and after a coarse-grain timeout. This is the difference between TCP Tahoe and TCP Reno!

Figure: TCP Congestion Control, as in the previous figure (congestion window vs. round-trip times, with slow start, threshold, and congestion avoidance); fast recovery would cause a change at the point where congestion occurs. (Leon-Garcia & Widjaja, Communication Networks, Figure 7.63, copyright ©2000 The McGraw-Hill Companies.)

Adaptive Retransmissions RTT: the round trip time between a pair of hosts on the Internet. How should the TimeOut value be set? The timeout value is set as a function of the expected RTT. What are the consequences of a bad choice?
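
A Python sketch of the standard adaptive-timeout calculation (a smoothed RTT estimate plus four times its deviation). The gains 0.125 and 0.25 are the commonly cited defaults, and the class name is illustrative.

    class RttEstimator:
        """Adaptive timeout: exponentially weighted RTT estimate plus its deviation."""

        def __init__(self, alpha=0.125, beta=0.25):
            self.alpha = alpha      # gain for the smoothed RTT
            self.beta = beta        # gain for the RTT deviation
            self.srtt = None
            self.rttvar = None

        def sample(self, measured_rtt):
            """Fold one RTT measurement into the estimate and return the new timeout."""
            if self.srtt is None:
                self.srtt = measured_rtt
                self.rttvar = measured_rtt / 2
            else:
                err = measured_rtt - self.srtt
                self.srtt += self.alpha * err
                self.rttvar += self.beta * (abs(err) - self.rttvar)
            return self.srtt + 4 * self.rttvar     # TimeOut = EstimatedRTT + 4 * Deviation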

CONGESTION Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques used to control the congestion and keep the load below the capacity. Topics discussed in this section: Network Performance.

Queues in a router

CONGESTION CONTROL Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens, or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal). Topics discussed in this section: Open-Loop Congestion Control, Closed-Loop Congestion Control.

Congestion control categories

Backpressure method for alleviating congestion

Choke packet

TWO EXAMPLES To better understand the concept of congestion control, let us give two examples: one in TCP and the other in Frame Relay. Topics discussed in this section: Congestion Control in TCP, Congestion Control in Frame Relay.

Slow start, exponential increase

Note In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches a threshold.

Congestion avoidance, additive increase

Note In the congestion avoidance algorithm, the size of the congestion window increases additively until congestion is detected.

Note An implementation reacts to congestion detection in one of the following ways: ❏ If detection is by time-out, a new slow start phase starts. ❏ If detection is by three ACKs, a new congestion avoidance phase starts.

TCP congestion policy summary
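
The policy in the note above (and in the summary figure) can be sketched as a single reaction function in Python, with cwnd counted in segments; the function name and event strings are illustrative.

    def react_to_congestion(cwnd, detected_by):
        """React to loss detection per the summary policy (cwnd in segments)."""
        ssthresh = max(cwnd // 2, 2)                     # remember half the current window
        if detected_by == "timeout":
            return 1, ssthresh, "slow start"             # restart from one segment
        if detected_by == "three-dup-acks":
            return ssthresh, ssthresh, "congestion avoidance"
        raise ValueError("unknown detection event")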

QUALITY OF SERVICE Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We can informally define quality of service as something a flow seeks to attain. Topics discussed in this section: Flow Characteristics, Flow Classes.

Flow characteristics

TECHNIQUES TO IMPROVE QoS In this section, we discuss some techniques that can be used to improve the quality of service. We briefly discuss four common methods: scheduling, traffic shaping, admission control, and resource reservation. Topics discussed in this section: Scheduling, Traffic Shaping, Resource Reservation, Admission Control.

FIFO queue

Priority queuing

Weighted fair queuing

Leaky bucket

Leaky bucket implementation

Note A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop the packets if the bucket is full.

Note The token bucket allows bursty traffic at a regulated maximum rate.

Token bucket

Getting to equilibrium: slow-start. Add a congestion window to the per-connection state. When starting or restarting after a loss, set the congestion window to one packet. On each ACK for new data, increase the congestion window by one packet. When sending, send the minimum of the receiver's advertised window and the congestion window.

Source of the picture: fig2 in the paper

Conservation at equilibrium: round-trip timing. Topics: how TCP estimates the mean round trip time; when the next packet is sent; exponential backoff.

Congestion avoidance algorithm We always want to use the full bandwidth available on the subnet while keeping the network stable. In order to do so, the congestion window size is increased slowly (i.e. additive increase) to probe for more bandwidth becoming available (e.g. when another host closes its connection). However, on any timeout, the congestion window size is decreased quickly (i.e. multiplicative decrease) to avoid congestion. On no congestion (to probe for more bandwidth): cwnd = cwnd + 1. On congestion (to avoid congestion): cwnd = cwnd / 2.

Questions/Comments?