Congestion control principles

Presentation transcript:

Congestion control principles. Presentation by: Farhad Rad (Advanced Computer Networks lesson at www.iauyasooj.ac.ir, Rad@iauyasooj.ac.ir)

What is congestion? Congestion occurs when resource demands exceed the available capacity. The goal of congestion control mechanisms is simply to use the network as efficiently as possible, that is, to attain the highest possible throughput while maintaining a low loss ratio and small delay. Congestion must be avoided because it leads to queue growth, and queue growth leads to delay and loss; therefore, the term 'congestion avoidance' is sometimes used.

Two questions arise: how do we eliminate congestion, and how do we make use of all the available bandwidth? Efficiently using the network means answering both questions at the same time; this is what good congestion control mechanisms do.

Congestion collapse. Customer 0 sends data to customer 4, while customer 1 sends data to customer 5; there is no congestion control in place. The outgoing link is not fully utilized (2 * 100 kbps is only 2/3 of the link capacity).

Now the link from customer 0 to the access router (router number 2) is upgraded to 1 Mbps. Each source starts with a rate of 64 kbps and increases it by 3 kbps every second.

A queue will grow at the bottleneck, and this queue will contain more packets that stem from customer 0: for every packet from customer 1, there are 10 packets from customer 0. Basically, this means that the packets from customer 0 unnecessarily occupy bandwidth of the bottleneck link that could be used by the data flow coming from customer 1; customer 0's rate will be narrowed down to 100 kbps at the 3–4 link anyway. The more customer 0 sends, the greater this problem becomes.
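
A minimal sketch of this effect, assuming a simple fluid model: a 300 kbps shared link between routers 2 and 3, 100 kbps last hops towards customers 4 and 5, and a bottleneck that splits its capacity in proportion to the offered rates. The sharing rule and the function names are assumptions of this sketch, not part of the original slides; it only illustrates how total goodput drops as customer 0 keeps increasing its rate:

```python
# Toy fluid model of the congestion-collapse scenario described above.
SHARED = 300.0    # kbps, link between routers 2 and 3 (2 * 100 kbps = 2/3 of it)
LAST_HOP = 100.0  # kbps, links towards customers 4 and 5

def goodput(offered0, offered1):
    """Goodput (kbps) that actually reaches customers 4 and 5."""
    total = offered0 + offered1
    if total <= SHARED:
        share0, share1 = offered0, offered1
    else:
        # Overloaded FIFO bottleneck: each flow gets a share proportional to
        # what it pushes in; the rest is queued and eventually dropped.
        share0 = SHARED * offered0 / total
        share1 = SHARED * offered1 / total
    # Anything above the last-hop capacity is dropped there anyway.
    return min(share0, LAST_HOP), min(share1, LAST_HOP)

rate0, rate1 = 64.0, 64.0
for t in range(201):
    if t % 40 == 0:
        g0, g1 = goodput(rate0, rate1)
        print(f"t={t:3d}s offered=({rate0:6.1f}, {rate1:5.1f}) "
              f"goodput=({g0:5.1f}, {g1:5.1f}) total={g0 + g1:5.1f}")
    rate0 = min(rate0 + 3, 1000.0)  # customer 0: access link now 1 Mbps
    rate1 = min(rate1 + 3, 100.0)   # customer 1: still limited to 100 kbps
```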

If customer 0 knew that it would never attain more throughput than 100 kbps and therefore refrained from increasing its rate beyond this point, customer 1 could stay at its limit of 100 kbps. A technical solution is required for appropriately reducing the rate of customer 0. How could one design a mechanism that automatically and ideally tunes the rate of the flow from customer 0 in our example? To find an answer to this question, we should take a closer look at the elements involved:
- Traffic originates from a sender.
- Depending on the specific network scenario, each packet usually traverses a certain number of intermediate nodes. These nodes typically have a queue that grows in the presence of congestion; packets are dropped when it exceeds a limit (a minimal sketch of such a queue follows below).
- Traffic reaches a receiver.
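
As a rough illustration of what happens at such an intermediate node, here is a minimal drop-tail queue sketch; the class name and the limit of 10 packets are made up for this example:

```python
from collections import deque

class DropTailQueue:
    """A bounded FIFO queue as found in a router: packets that arrive
    while the queue is at its limit are simply dropped."""

    def __init__(self, limit=10):            # the limit is an arbitrary example
        self.buffer = deque()
        self.limit = limit
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.limit:   # queue exceeded its limit: drop
            self.dropped += 1
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None
```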

MIMD, AIAD, AIMD and MIAD. If the rate of a sender at time t is denoted by x(t), y(t) represents the binary feedback with values 0 meaning 'no congestion' and 1 meaning 'congestion', and we restrict our observations to linear controls, the rate update function can be expressed as:

x(t + 1) = ai + bi * x(t)  if y(t) = 0 (increase)
x(t + 1) = ad + bd * x(t)  if y(t) = 1 (decrease)

where ai, bi, ad and bd are constants (Chiu and Jain 1989). This linear control has both an additive component (a) and a multiplicative component (b).
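
A direct transcription of this linear control into code, as a sketch (the parameter names mirror the constants ai, bi, ad and bd above):

```python
def update_rate(x, congested, ai, bi, ad, bd):
    """Chiu/Jain linear control: returns x(t + 1) given the rate x = x(t)
    and the binary feedback (congested = True means y(t) = 1)."""
    if congested:
        return ad + bd * x   # decrease policy
    return ai + bi * x       # increase policy
```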

Multiplicative Increase, Multiplicative Decrease (MIMD): ai = 0; ad = 0; bi > 1; 0 < bd < 1
Additive Increase, Additive Decrease (AIAD): ai > 0; ad < 0; bi = 1; bd = 1
Additive Increase, Multiplicative Decrease (AIMD): ai > 0; ad = 0; bi = 1; 0 < bd < 1
Multiplicative Increase, Additive Decrease (MIAD): ai = 0; ad < 0; bi > 1; bd = 1
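
The four policies are then just parameter choices for that control. A sketch with illustrative constants (the concrete values 1, 2 and 0.5 are examples, not prescribed; only the signs and ranges above matter):

```python
def update_rate(x, congested, ai, bi, ad, bd):
    # linear control from the previous slide
    return ad + bd * x if congested else ai + bi * x

# One example parameter set per policy.
POLICIES = {
    "MIMD": dict(ai=0, bi=2.0, ad=0, bd=0.5),    # ai = 0, ad = 0, bi > 1, 0 < bd < 1
    "AIAD": dict(ai=1, bi=1.0, ad=-1, bd=1.0),   # ai > 0, ad < 0, bi = bd = 1
    "AIMD": dict(ai=1, bi=1.0, ad=0, bd=0.5),    # ai > 0, ad = 0, bi = 1, 0 < bd < 1
    "MIAD": dict(ai=0, bi=2.0, ad=-1, bd=1.0),   # ai = 0, ad < 0, bi > 1, bd = 1
}

for name, p in POLICIES.items():
    up = update_rate(10.0, congested=False, **p)
    down = update_rate(10.0, congested=True, **p)
    print(f"{name}: 10 -> {up} on increase, {down} on decrease")
```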

Any point (x, y) represents a two-user allocation. The system load consumed by customer 0 should be equal to the load consumed by customer 1; this is true for all points on the 'fairness line'. The sum of the system load must not exceed a certain limit, which is represented by the 'efficiency line'. The optimal point is the intersection of the efficiency line and the fairness line.
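
A small worked example of these notions for a two-user allocation (x, y). The capacity value carries over the 300 kbps from the earlier scenario, and Jain's fairness index (from the same Chiu and Jain work) is used as the fairness measure; both choices are assumptions of this sketch:

```python
C = 300.0  # efficiency line: x + y = C (kbps, example value)

def efficiency(x, y):
    """Fraction of the capacity used by the allocation (x, y)."""
    return (x + y) / C

def fairness(x, y):
    """Jain's fairness index for two users: 1.0 exactly on the fairness line x = y."""
    return (x + y) ** 2 / (2 * (x ** 2 + y ** 2))

optimal = (C / 2, C / 2)  # intersection of the efficiency and fairness lines
print(efficiency(100, 50), fairness(100, 50))    # under-utilized (0.5) and unfair (0.9)
print(efficiency(*optimal), fairness(*optimal))  # 1.0 and 1.0: the optimal point
```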

The gap between the two lines grows in the case of MIAD, which means that fairness is degraded, and shrinks in the case of AIMD, which means that the allocation becomes fair. The rate update function could also be nonlinear. The source rate control rule that is implemented in the Internet is basically an AIMD variant. It is therefore clear that any control that is designed for use in a real environment should be stable.
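
A sketch that traces these two policies in the two-user diagram, with synchronized binary feedback and illustrative constants (the capacity, starting point and number of updates are assumptions of this sketch):

```python
C = 300.0  # example capacity: x + y = C is the efficiency line

def step(x, y, ai, bi, ad, bd):
    congested = x + y > C                      # same binary feedback for both users
    a, b = (ad, bd) if congested else (ai, bi)
    # rates cannot become negative
    return max(a + b * x, 0.0), max(a + b * y, 0.0)

def run(name, **params):
    x, y = 50.0, 200.0                         # deliberately unfair starting point
    for _ in range(40):
        x, y = step(x, y, **params)
    print(f"{name}: x = {x:6.1f}, y = {y:6.1f}, gap = {abs(x - y):6.1f}")

run("AIMD", ai=10, bi=1.0, ad=0, bd=0.5)       # gap shrinks: converges to fairness
run("MIAD", ai=0, bi=1.1, ad=-10, bd=1.0)      # gap grows: fairness is degraded
```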

An example trajectory illustrates that even the stability of AIMD is questionable when control loops are not in sync. Here, the Round-Trip Time (RTT) of customer 0 was chosen to be twice as long as the RTT of customer 1, which means that for every rate update of customer 0 there are two updates of customer 1. Convergence to fairness does not seem to occur with this example trajectory, and modelling it is mathematically sophisticated, potentially leading to somewhat unrealistic assumptions.
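
A toy way to reproduce the out-of-sync behaviour: customer 1's RTT is half of customer 0's, so it applies its AIMD update twice as often. The constants and the way feedback is sampled are assumptions of this sketch; the point is only that the gap between the two rates no longer shrinks towards zero the way it did with synchronized updates:

```python
C = 300.0            # example capacity
AI, BD = 10.0, 0.5   # example AIMD constants (additive increase, multiplicative decrease)

def aimd(rate, congested):
    return rate * BD if congested else rate + AI

x0, x1 = 150.0, 150.0    # start at the fair, efficient allocation
gaps = []
for tick in range(400):
    congested = x0 + x1 > C
    x1 = aimd(x1, congested)       # customer 1 (short RTT) updates every tick
    if tick % 2 == 0:
        x0 = aimd(x0, congested)   # customer 0 (long RTT) updates every other tick
    gaps.append(abs(x0 - x1))

print(f"final allocation: x0 = {x0:.1f} kbps, x1 = {x1:.1f} kbps")
print(f"average |x0 - x1| over the last 100 ticks: {sum(gaps[-100:]) / 100:.1f} kbps")
```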