
1 Congestion Control in Data Networks and Internets
Chapter 10 Congestion Control in Data Networks and Internets

2 Introduction
Packet-switched networks get congested!
- Congestion occurs when the number of packets being transmitted approaches network capacity
- Objective of congestion control: keep the number of packets entering/within the network below the level at which performance drops off dramatically

3 Queuing Theory
- Recall from Chapter 8 that a data network is a network of queues
- If the arrival rate at any queue exceeds the transmission rate from the node, queue size grows without bound and packet delay goes to infinity
- Rule-of-thumb design point: utilization ρ = λ·Ts = λL/R < 0.8, where λ is the packet arrival rate, L the packet length, and R the link data rate (Ts = L/R is the service time); a small sketch follows below
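As an illustration of the rule of thumb (not from the slides), here is a minimal Python sketch that computes the utilization of a single output queue and the corresponding M/M/1 residence-time estimate; the arrival rate, packet length, and link rate are assumed example values.

    # Illustrative sketch: utilization and M/M/1 residence time for one queue.
    # All numeric inputs below are made-up example values.
    def queue_metrics(arrival_rate_pps, packet_len_bits, link_rate_bps):
        ts = packet_len_bits / link_rate_bps      # service time Ts = L/R
        rho = arrival_rate_pps * ts               # utilization rho = lambda * Ts
        if rho >= 1.0:
            return rho, float("inf")              # queue grows without bound
        tr = ts / (1.0 - rho)                     # M/M/1 residence time Tr = Ts / (1 - rho)
        return rho, tr

    rho, tr = queue_metrics(arrival_rate_pps=900, packet_len_bits=8000, link_rate_bps=10_000_000)
    print(f"rho = {rho:.2f} (design rule: keep below 0.8), Tr = {tr * 1000:.2f} ms")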

4 Input & Output Queues at a Node
[Figure: a node's input and output queues with nodal processing between them; each queue has service time Ts = L/R.]

5 At Saturation Point
Two possible strategies at a node:
- Discard any incoming packet if no buffer space is available
- Exercise flow control over neighbors
  - May cause congestion to propagate throughout the network

6 Queue Interaction in Data Network (delay propagation)

7 Jackson's Theorem - Application in Packet-Switched Networks
- External load offered to the network: γ = Σ_{j=1..N} Σ_{k=1..N} γ_jk
  where γ = total offered workload in packets/sec, γ_jk = workload from source j to destination k, and N = total number of (external) sources and destinations
- Internal load: λ = Σ_{i=1..L} λ_i
  where λ = total load on all links in the network, λ_i = load on link i, and L = total number of links
- Note: internal load ≥ offered load
- Average length for all paths: E[number of links in a path] = λ / γ
- Average number of items waiting and being served on link i: r_i = λ_i · T_ri
- Average delay of packets sent through the network: T = (1/γ) Σ_{i=1..L} λ_i·M / (R_i − λ_i·M)
  where M is the average packet length and R_i is the data rate on link i
- Notice: as any λ_i increases, total delay increases.
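To make the delay formula concrete, here is a small Python sketch (link loads, rates, and packet length are assumed example values, not from the slides) that evaluates T for a hypothetical three-link network:

    # Illustrative evaluation of T = (1/gamma) * sum_i( lambda_i*M / (R_i - lambda_i*M) ).
    # All numbers are made-up example values.
    def network_delay(gamma_pps, link_loads_pps, link_rates_bps, avg_pkt_bits):
        total = 0.0
        for lam_i, r_i in zip(link_loads_pps, link_rates_bps):
            if lam_i * avg_pkt_bits >= r_i:
                return float("inf")                      # link i is saturated
            total += lam_i * avg_pkt_bits / (r_i - lam_i * avg_pkt_bits)
        return total / gamma_pps

    T = network_delay(gamma_pps=500,
                      link_loads_pps=[400, 300, 250],    # lambda_i for each link
                      link_rates_bps=[10e6, 10e6, 5e6],  # R_i for each link
                      avg_pkt_bits=8000)                 # M
    print(f"Average network delay T = {T * 1000:.2f} ms")

Note that the internal load (Σλ_i = 950 packets/sec) exceeds the offered load (γ = 500 packets/sec), consistent with the point above.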

8 Ideal Performance
I.e., infinite buffers and no variable overhead for packet transmission or congestion control:
- Throughput increases with offered load up to full capacity
- Packet delay increases with offered load, approaching infinity at full capacity
- Power = throughput / delay, a measure of the balance between throughput and delay; higher throughput comes at the cost of higher delay (see the sketch below)
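As an illustration (not part of the slides), a minimal Python sketch of these curves: normalized throughput equals offered load up to capacity, delay grows without bound as load approaches capacity, and power is their ratio. The specific delay model (a 1/(1 − ρ) growth) is an assumption made for illustration.

    # Illustrative "ideal performance" model (assumed simple delay curve).
    def ideal_metrics(offered_load, capacity=1.0, ts=1.0):
        throughput = min(offered_load, capacity)                  # rises linearly, then saturates
        rho = offered_load / capacity
        delay = float("inf") if rho >= 1.0 else ts / (1.0 - rho)  # blows up at capacity
        power = 0.0 if delay == float("inf") else throughput / delay
        return throughput, delay, power

    for load in (0.2, 0.5, 0.8, 0.95):
        t, d, p = ideal_metrics(load)
        print(f"load={load:.2f}  throughput={t:.2f}  delay={d:.2f}  power={p:.3f}")

Under this simple model, power peaks at a moderate load (around 0.5) rather than at full capacity, which is why networks are not driven to saturation.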

9 Ideal Network Utilization
[Figure: normalized throughput and delay versus normalized load (Ts = L/R); power shows the relationship between normalized throughput and delay.]

10 Practical Performance
I.e., finite buffers and non-zero packet processing overhead:
- With no congestion control, increased load eventually causes moderate congestion: throughput increases at a slower rate than load
- Further increases in load cause packet delays to rise and, eventually, throughput to drop toward zero

11 Effects of Congestion
What's happening here?
- Buffers fill
- Packets are discarded
- Sources re-transmit
- Routers generate more traffic to update paths
- Good packets are resent
- Delays propagate

12 Common Congestion Control Mechanisms

13 Congestion Control
- Backpressure
  - Request from destination back to source to reduce rate
  - Useful only on a logical-connection basis
  - Requires a hop-by-hop flow control mechanism
- Policing
  - Measuring and restricting packets as they enter the network
- Choke packet
  - A specific control message sent back to the source
  - E.g., ICMP Source Quench
- Implicit congestion signaling
  - Source detects congestion from transmission delays and lost packets and reduces its flow (see the sketch below)
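A minimal Python sketch of implicit congestion signaling at a source: the sender infers congestion from observed losses or rising delay and adjusts its own rate. The back-off factor, increase step, and delay threshold are made-up illustrative values, not from any standard.

    # Illustrative sketch of implicit congestion signaling at a sender.
    # Factors and thresholds are assumptions chosen for illustration.
    class ImplicitCongestionSender:
        def __init__(self, rate_pps=1000, min_rate=10, max_rate=10_000):
            self.rate = rate_pps
            self.min_rate = min_rate
            self.max_rate = max_rate

        def on_feedback(self, packets_lost, avg_delay_s, delay_threshold_s=0.1):
            if packets_lost > 0 or avg_delay_s > delay_threshold_s:
                self.rate = max(self.min_rate, self.rate * 0.5)   # back off sharply on congestion signs
            else:
                self.rate = min(self.max_rate, self.rate + 50)    # probe upward gently otherwise
            return self.rate

    sender = ImplicitCongestionSender()
    print(sender.on_feedback(packets_lost=0, avg_delay_s=0.02))   # -> 1050 (increase)
    print(sender.on_feedback(packets_lost=3, avg_delay_s=0.30))   # -> 525.0 (back off)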

14 Explicit Congestion Signaling
- Direction
  - Backward
  - Forward
- Categories
  - Binary
  - Credit-based
  - Rate-based

15 Traffic Management in Congested Network – Some Considerations
- Fairness
  - The various flows should "suffer" equally
  - Last-in-first-discarded may not be fair
- Quality of Service (QoS)
  - Flows treated differently, based on need
  - Voice, video: delay sensitive, loss insensitive
  - File transfer, mail: delay insensitive, loss sensitive
  - Interactive computing: delay and loss sensitive
- Reservations
  - Policing: excess traffic discarded or handled on a best-effort basis

16 Frame Relay Congestion Control
Objectives:
- Minimize frame discard
- Maintain QoS (per-connection bandwidth)
- Minimize monopolization of the network
- Simple to implement, little overhead
- Minimal additional network traffic
- Resources distributed fairly
- Limit the spread of congestion
- Operate effectively regardless of traffic flow
- Have minimum impact on other systems in the network
- Minimize variance in QoS

17 Frame Relay Techniques

18 Congestion Avoidance with Explicit Signaling
Two general strategies considered:
- Hypothesis 1: Congestion always occurs slowly, almost always at egress nodes
  → forward explicit congestion avoidance
- Hypothesis 2: Congestion grows very quickly in internal nodes and requires quick action
  → backward explicit congestion avoidance

19 Congestion Control: BECN/FECN

20 FR - 2 Bits for Explicit Signaling
- Forward Explicit Congestion Notification (FECN)
  - For traffic in the same direction as the received frame
  - Meaning: this frame has encountered congestion
- Backward Explicit Congestion Notification (BECN)
  - For traffic in the opposite direction of the received frame
  - Meaning: frames transmitted may encounter congestion
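For illustration, a short Python sketch that extracts these bits, assuming the common two-octet Frame Relay (Q.922) address field; the bit positions used here are the usual layout for that format but should be treated as an assumption, not a quotation of the standard.

    # Illustrative parser for FECN/BECN (and DE), assuming a two-octet address field.
    def parse_fr_address(octets: bytes):
        if len(octets) < 2:
            raise ValueError("need a two-octet address field")
        o1, o2 = octets[0], octets[1]
        dlci = ((o1 >> 2) & 0x3F) << 4 | ((o2 >> 4) & 0x0F)  # 10-bit DLCI
        return {
            "dlci": dlci,
            "fecn": bool(o2 & 0x08),   # congestion seen in the frame's direction
            "becn": bool(o2 & 0x04),   # congestion in the reverse direction
            "de":   bool(o2 & 0x02),   # discard eligibility
        }

    # Example: DLCI 16 with BECN set (made-up header bytes)
    print(parse_fr_address(bytes([0x04, 0x05])))
    # -> {'dlci': 16, 'fecn': False, 'becn': True, 'de': False}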

21 Explicit Signaling Response
- Network response
  - Each frame handler monitors its queuing behavior and takes action
  - Uses the FECN/BECN bits
  - Some or all connections are notified of congestion
- User (end-system) response
  - Triggered by receipt of BECN/FECN bits in a frame
  - BECN at the sender: reduce transmission rate (see the sketch below)
  - FECN at the receiver: notify the peer (via LAPF or a higher layer) to restrict flow
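A minimal sketch of the sender-side BECN response in Python; the back-off and recovery factors and the relationship to CIR are illustrative assumptions, not values from Q.922 or I.370.

    # Illustrative end-system reaction to BECN: cut the rate when BECN arrives,
    # recover slowly when it stops. Factors here are made-up.
    def adjust_rate_on_becn(current_rate_bps, becn_seen, cir_bps):
        if becn_seen:
            return max(cir_bps * 0.5, current_rate_bps * 0.75)  # back off toward/below CIR
        return min(cir_bps * 1.5, current_rate_bps * 1.10)      # creep back up, capped

    rate = 96_000                                               # current sending rate (bps), example value
    rate = adjust_rate_on_becn(rate, becn_seen=True, cir_bps=64_000)
    print(rate)                                                 # 72000.0 after backing off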

22 Frame Relay Traffic Rate Management Parameters
- Committed Information Rate (CIR)
  - Average data rate, in bits/second, that the network agrees to support for a connection
- Data Rate of User Access Channel (Access Rate)
  - Fixed rate of the link between the user and the network (for network access)
- Committed Burst Size (Bc)
  - Maximum amount of data over an interval agreed to by the network
- Excess Burst Size (Be)
  - Maximum amount of data, above Bc, over an interval that the network will attempt to transfer
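As an illustration of how a frame handler could use Bc and Be over one measurement interval, here is a small Python sketch (a simplified classification, not the ITU-T I.370 procedure verbatim): cumulative data up to Bc is forwarded, data up to Bc + Be is forwarded with DE set, and anything beyond is discarded. The frame sizes and thresholds are assumed example values.

    # Illustrative classification of frames over one measurement interval T.
    def classify_frames(frame_sizes_bits, bc_bits, be_bits):
        decisions, cumulative = [], 0
        for size in frame_sizes_bits:
            cumulative += size
            if cumulative <= bc_bits:
                decisions.append("forward")            # within committed burst Bc
            elif cumulative <= bc_bits + be_bits:
                decisions.append("forward, DE=1")      # excess burst: discard-eligible
            else:
                decisions.append("discard")            # beyond Bc + Be
        return decisions

    # Example: Bc = 64,000 bits, Be = 20,000 bits, four 24,000-bit frames in one interval
    print(classify_frames([24_000] * 4, bc_bits=64_000, be_bits=20_000))
    # -> ['forward', 'forward', 'forward, DE=1', 'discard']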

23 Committed Information Rate (CIR) Operation
[Figure: rate levels on an access channel - the current rate at which the user is sending; the CIR (average data rate in bps committed to the user by the Frame Relay network); Bc and Be marking the maximum data rate over the time period allowed for this connection; and the access rate (maximum line speed of the connection to the Frame Relay network, i.e., the peak data rate).]
Constraint: the CIRs of all connections i on an access channel j cannot exceed that channel's access rate:
    Σ_i CIR_i,j ≤ AccessRate_j

24 Frame Relay Traffic Rate Management Parameters
[Figure: cumulative bits transmitted versus time over the measurement interval T, with the maximum (access) rate and the CIR shown as slopes.]
CIR = Bc / T  (bps)

25 Relationship of Congestion Parameters
[Figure: relationship of the congestion parameters Bc, Be, and CIR, from ITU-T I.370.]
Note that T = Bc / CIR.
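A quick worked example (values assumed for illustration): with CIR = 64 kbps and Bc = 128,000 bits, the measurement interval is T = Bc / CIR = 128,000 / 64,000 = 2 seconds. If Be = 64,000 bits, the user may send up to Bc + Be = 192,000 bits in that interval, with the 64,000 bits above Bc marked discard-eligible.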

