Congestion Control and Quality of Service

1 Chapter 23: Congestion Control and Quality of Service

2 Data Traffic
The main focus of congestion control and quality of service is data traffic, which is described by four parameters.
Average data rate: the number of bits sent during a period of time divided by the number of seconds in that period; it indicates the average bandwidth needed by the traffic.
Peak data rate: the maximum data rate of the traffic; it indicates the peak bandwidth the network needs for the traffic to pass through without changing its data flow.
Maximum burst size: the peak data rate can be ignored if its duration is very short; the maximum burst size refers to the maximum length of time the traffic is generated at the peak rate.
Effective bandwidth: the bandwidth the network needs to allocate for the flow of traffic; it depends on the average data rate, the peak data rate, and the maximum burst size.
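The four parameters above can be estimated from a simple per-second trace of bits sent. A minimal sketch (the function and variable names here are illustrative, not from the slides; maximum burst size is measured as the longest run of seconds at the peak rate):

```python
def traffic_params(bits_per_second):
    """Return (average data rate, peak data rate, max burst size in seconds)."""
    avg = sum(bits_per_second) / len(bits_per_second)
    peak = max(bits_per_second)
    # maximum burst size: longest run of consecutive seconds at the peak rate
    longest = run = 0
    for b in bits_per_second:
        run = run + 1 if b == peak else 0
        longest = max(longest, run)
    return avg, peak, longest

trace = [100, 100, 1000, 1000, 1000, 100]   # bits sent in each second
avg, peak, burst = traffic_params(trace)
print(avg, peak, burst)   # 550.0 1000 3
```

The effective bandwidth the network must allocate lies somewhere between the average and the peak, depending on the burst size.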

3 Traffic Profiles: CBR, VBR, and Bursty Traffic
Constant bit rate (CBR): fixed-rate traffic; the data rate does not change, so the average and peak data rates are the same.
Variable bit rate (VBR): the rate of the data flow changes in time, with smooth rather than sudden and sharp changes; the average and peak data rates are different, and the maximum burst size is usually a small value.

4 Traffic Profile: Bursty Traffic
The data rate changes in a very short period of time; the average and peak bit rates are very different, and the maximum burst size is significant. This is the most difficult type of traffic to handle because its profile is very unpredictable, and it is one of the main causes of congestion.

5 Congestion
Congestion may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle). Congestion control refers to mechanisms and techniques that control the congestion and keep the load below the capacity. Congestion in a network or internetwork occurs because routers and switches have queues: buffers that hold the packets before and after processing. A router has an input queue and an output queue for each interface.

When a packet arrives at an incoming interface, it goes through three steps:
The packet is put at the end of the input queue while waiting to be checked.
The processing module of the router removes the packet from the front of the queue and makes a routing decision using its routing table.
The packet is put into the appropriate output queue and waits its turn to be sent.
If the packet arrival rate is higher than the packet processing rate, the input queue grows; if the packet processing rate is higher than the packet departure rate, the output queue grows.

7 Network Performance
Congestion control involves two factors that measure the performance of the network: delay and throughput.
Delay versus load: when the load is much less than the capacity, the delay is at a minimum; this minimum delay is composed of propagation delay and processing delay. When the load reaches the network capacity, the delay increases sharply because waiting time in the queues is added. Delay also feeds back into the load and consequently the congestion: when a packet is delayed, the source, not receiving the acknowledgment, retransmits the packet, which makes the delay, and the congestion, worse.

8 Network Performance

9 Performance: Throughput vs. Network Load
Throughput in a network is the number of packets passing through the network in a unit of time. When the load is below the capacity of the network, the throughput increases proportionally with the load. After the load reaches the capacity, the throughput declines sharply because the queues become full and the routers have to discard some packets. Discarding packets does not reduce the number of packets in the network, because the sources, using their time-out mechanisms, retransmit the packets that do not reach the destination.

10 Performance: Throughput vs network load

11 Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened:
Open-loop congestion control (prevention): policies are applied to prevent congestion before it happens; congestion control is handled by either the source or the destination.
Closed-loop congestion control (removal): mechanisms relieve congestion after it has happened.
Retransmission policy (open loop): a good retransmission policy and well-designed retransmission timers optimize efficiency and reduce congestion.

12 Congestion Control: Open-Loop Congestion Control (Prevention)
Window policy: the type of window at the sender may also affect congestion; Selective Repeat is better than Go-Back-N.
Acknowledgment policy: the policy imposed by the receiver may also affect congestion; if the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
Discard policy: routers may discard less sensitive packets (for example, in audio transmission) to prevent congestion.
Admission policy: switches in a flow first check the resource requirements of a flow before admitting it to the network.

13 Closed-Loop Congestion Control (Removal)
Backpressure: when a router is congested, it can inform the previous upstream router to reduce the rate of outgoing packets; the action can propagate recursively all the way to the router just before the source.
Choke packet: a packet sent by a router directly to the source to inform it of congestion; this type of control is similar to ICMP's source-quench message.
Implicit signaling: the source can detect an implicit signal of congestion and slow down its sending rate; for example, the mere delay in receiving an acknowledgment can be a signal that the network is congested.
Explicit signaling: routers that experience congestion can send an explicit signal (for example, by setting a bit in a packet) to inform the sender or the receiver of congestion. In backward signaling, a bit is set in a packet moving in the direction opposite to the congestion to inform the source; in forward signaling, a bit is set in a packet moving in the direction of the congestion to inform the destination.

14 Example 1: Congestion Control in TCP
A packet from a sender may pass through several routers before reaching its final destination. Each router has a buffer that stores incoming packets, processes them, and forwards them. If a router receives packets faster than it can process them, congestion might occur and some packets could be dropped. When a packet does not reach the destination, no acknowledgment is sent for it, and the sender has no choice but to retransmit the lost packet. This may create more congestion and more dropped packets, which means more retransmission and more congestion; a point may then be reached at which the whole system collapses and no more data can be sent. TCP therefore needs a way to avoid this situation: if the network cannot deliver the data as fast as the sender creates them, it must tell the sender to slow down. In other words, in addition to the receiver, the network is a second entity that determines the size of the sender's window in TCP. If the cause of a lost segment is congestion, retransmitting the segment does not remove the cause; it aggravates it.

In TCP, the sender's window size is determined not only by the receiver but also by congestion in the network:
Actual window size = minimum (rwnd size, congestion window size)
Congestion is handled through slow start with additive increase, and multiplicative decrease.
Slow start: at the start of a connection, TCP sets the congestion window to one maximum segment size (roughly the MTU minus the headers). For each segment that is acknowledged, TCP increases the congestion window by one maximum segment size until it reaches a threshold of one-half of the maximum allowable window size; the window therefore grows exponentially. (Slow start is a misleading name.) The sender sends one segment, receives one ACK, and increases the window to two segments; it sends two segments, receives ACKs for both, and increases the window to four segments; it sends four segments, receives ACKs for them, and so on.
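The exponential growth of slow start can be sketched in a few lines; a minimal simulation (function and parameter names are illustrative), tracking the congestion window in segments per round trip:

```python
def slow_start(ssthresh, rounds):
    """Congestion window per round trip: doubles until it hits the threshold."""
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        cwnd = min(cwnd * 2, ssthresh)   # exponential growth, capped at ssthresh
    return history

print(slow_start(ssthresh=16, rounds=6))   # [1, 2, 4, 8, 16, 16]
```

After the threshold is reached, the next slide's additive-increase rule takes over, growing the window by one segment per round trip instead of doubling it.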

16 Congestion Window: Additive Increase
After the window size reaches the threshold, it is increased by one segment for each acknowledgment, even if an acknowledgment covers several segments. The additive-increase strategy continues as long as acknowledgments arrive before their corresponding time-outs, or until the congestion window size reaches the receiver window size.

17 Congestion Window: Multiplicative Decrease
If congestion occurs, the congestion window size must be decreased. If the sender does not receive an acknowledgment for a segment before its retransmission timer expires, it assumes that there is congestion. When a time-out occurs, the threshold is set to one-half of the last congestion window size, and the congestion window size starts from one again; in other words, the sender returns to the slow-start phase. Because the threshold is cut in half at each time-out, repeated time-outs reduce it exponentially (multiplicative decrease).
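The three rules above (slow start, additive increase, multiplicative decrease) can be combined into one state-transition sketch per round trip; names here are illustrative, and the window is counted in segments:

```python
def step(cwnd, ssthresh, timeout):
    """One round trip: return the new (cwnd, ssthresh)."""
    if timeout:                       # multiplicative decrease: halve threshold,
        return 1, max(cwnd // 2, 1)   # restart slow start from one segment
    if cwnd < ssthresh:               # slow start: exponential growth
        return cwnd * 2, ssthresh
    return cwnd + 1, ssthresh         # additive increase past the threshold

cwnd, ssthresh = 1, 8
for t in [False] * 5 + [True]:        # five good round trips, then a time-out
    cwnd, ssthresh = step(cwnd, ssthresh, t)
print(cwnd, ssthresh)                 # 1 5
```

The trace grows 1, 2, 4, 8, 9, 10, then the time-out cuts the threshold to 5 and resets the window to 1.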

18 Multiplicative decrease

19 Example 2: Congestion Control in Frame Relay
Congestion in a Frame Relay network decreases throughput and increases delay, while the goals of Frame Relay are high throughput and low delay. Frame Relay has no flow control and allows users to send bursty data. For congestion avoidance, the Frame Relay protocol uses two bits in the frame to explicitly warn the source and the destination of the presence of congestion: Backward Explicit Congestion Notification (BECN) and Forward Explicit Congestion Notification (FECN).

20 Example 2: Congestion Control in Frame Relay
BECN: the BECN bit warns the sender of congestion in the network. The sender can be informed in two ways: the switch can use a predefined connection (DLCI = 1023) to send special frames for this purpose, or the bit can be set in response frames travelling from the receiver back to the sender.

21 Example 2: Congestion Control in Frame Relay
FECN: the FECN bit warns the receiver of congestion in the network. With Frame Relay alone, the receiver cannot do anything to relieve congestion, but Frame Relay assumes that the sender and receiver are communicating with each other and using some type of flow control at a higher layer. If there is an acknowledgment mechanism at that layer, the receiver can delay the acknowledgment, thus forcing the sender to slow down.

22 Four cases of congestion in Frame Relay

23 Quality of Service: Flow Demands
Reliability: lack of reliability means losing a packet or an acknowledgment, which entails retransmission; different application programs need different levels of reliability.
Delay: source-to-destination delay; delay tolerance varies between applications.
Jitter: the variation in delay for packets belonging to the same flow; real-time audio and video applications cannot tolerate high jitter.
Bandwidth: bits per second.
Flow classes: depending on the flow characteristics, flows can be classified into groups, e.g., CBR, UBR, etc.

24 Techniques to Improve QoS
Scheduling: FIFO queuing, priority queuing, weighted fair queuing.
Traffic shaping: leaky bucket, token bucket, or a combination of leaky bucket and token bucket.
Resource reservation: Integrated Services, Differentiated Services.
Admission control.

25 Techniques to Improve QoS
Scheduling: the method of processing the flows. A good scheduling technique treats the different flows in a fair and appropriate manner. There are three main scheduling algorithms.
FIFO queuing (first-in, first-out): packets wait in a buffer (queue) until the node (router or switch) is ready to process them. If the average arrival rate is higher than the average processing rate, the queue fills up and new packets are discarded.

26 Scheduling: Priority Queuing
Packets are first assigned to a priority class, and each priority class has its own queue. Packets in the highest-priority queue are processed first; packets in the lowest-priority queue are processed last. The system does not stop serving a queue until it is empty. Priority queuing is good for multimedia traffic, but starvation is possible: if there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed.
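A minimal sketch of strict priority queuing as described above (names are illustrative): the dispatcher always serves the highest-priority non-empty queue, which is exactly why lower queues can starve.

```python
from collections import deque

queues = {0: deque(), 1: deque()}      # 0 = highest priority, 1 = lowest

def enqueue(pkt, prio):
    queues[prio].append(pkt)

def dequeue():
    for prio in sorted(queues):        # scan queues from highest priority down
        if queues[prio]:
            return queues[prio].popleft()
    return None                        # nothing to send

enqueue("low-1", 1); enqueue("high-1", 0); enqueue("high-2", 0)
print(dequeue(), dequeue(), dequeue())   # high-1 high-2 low-1
```

Note that `low-1` is served only once both high-priority packets are gone; a steady high-priority stream would keep it waiting forever.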

27 Scheduling: Weighted Fair Queuing
Packets are assigned to different classes and admitted to different queues. The system processes the queues in round-robin fashion, with the number of packets selected from each queue based on its weight.

28 QoS: Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. There are two techniques for traffic shaping: leaky bucket and token bucket.

29 Traffic Shaping: Leaky Bucket
The input rate varies; the output rate is fixed. Using a FIFO queue, if the traffic consists of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.

30 Leaky Bucket Implementation
Algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet at the front of the queue, send the packet and decrement the counter by the packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
The leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate; it may drop packets if the bucket is full.
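The variable-length algorithm above can be sketched directly (names are illustrative; the queue holds packet sizes in bytes, and the counter budget n is renewed at every tick):

```python
from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick: send queued packets while they fit in the budget n."""
    sent = []
    while queue and queue[0] <= n:   # front packet fits in the remaining budget
        size = queue.popleft()
        n -= size                    # decrement counter by the packet size
        sent.append(size)
    return sent                      # counter is reset at the next tick

q = deque([200, 400, 500, 300])
print(leaky_bucket_tick(q, n=1000))   # [200, 400]  (500 must wait for next tick)
print(leaky_bucket_tick(q, n=1000))   # [500, 300]
```

Each tick releases at most n bytes, so a burst queued in one tick drains at the fixed output rate over the following ticks.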

31 Traffic Shaping: Token Bucket
The leaky bucket does not credit an idle host: if a host does not send for a while, its bucket simply becomes empty, and the idle time is not taken into account. In the token bucket, idle hosts accumulate credit for the future in the form of tokens. The token bucket can easily be implemented with a counter: the counter is initialized to zero, incremented by 1 each time a token is added, and decremented by 1 each time a unit of data is sent. When the counter is zero, the host cannot send data.
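The counter implementation above can be sketched as a small class (names illustrative; a capacity cap is added so an idle host cannot accumulate unbounded credit, a common assumption not stated on the slide):

```python
class TokenBucket:
    def __init__(self, capacity):
        self.tokens, self.capacity = 0, capacity

    def tick(self):                      # one token per clock tick:
        self.tokens = min(self.tokens + 1, self.capacity)   # idle time earns credit

    def send(self, units):
        if units <= self.tokens:         # enough credit: spend one token per unit
            self.tokens -= units
            return True
        return False                     # counter too low, host must wait

tb = TokenBucket(capacity=5)
for _ in range(3):                       # host idles for three ticks
    tb.tick()
print(tb.send(2), tb.send(2))            # True False
```

After three idle ticks the host holds three tokens, so it can send a two-unit burst at once; the second attempt fails because only one token remains.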

32 Traffic Shaping: Token Bucket
Combining the token bucket and the leaky bucket: the leaky bucket is applied after the token bucket, and the rate of the leaky bucket needs to be higher than the rate at which tokens are dropped into the bucket.

33 QoS: Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on; these resources are reserved beforehand. Two QoS models based on reservation are Integrated Services and Differentiated Services.
Admission control: the mechanism used by a router or switch to accept or reject a flow based on predefined parameters called flow specifications. Before a router accepts a flow for processing, it checks the flow specifications to see whether its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows allow it to handle the new flow.

34 Resource Reservation: Integrated Services
IntServ is a flow-based QoS model: a user needs to create a flow, a kind of virtual circuit, from the source to the destination and inform all routers of the resource requirements. A virtual circuit is connection-oriented, but IP provides best-effort service and does not guarantee even a minimum level of service, such as bandwidth, to any application. To provide flow-based service over connectionless IP, a signaling mechanism for reservation is needed; this protocol is the Resource ReSerVation Protocol (RSVP).
Flow specification:
Resource specification (Rspec): the resources the flow needs to reserve (buffer, bandwidth, etc.).
Traffic specification (Tspec): the traffic characterization of the flow.

35 Resource Reservation: Integrated Services
Admission: after a router receives the flow specification from an application, it decides to admit or deny the service. The decision is based on the router's previous commitments and the current availability of the resources.
Service classes: IntServ defines two classes of service.
Guaranteed service class: guarantees that the packets will arrive within a certain delivery time and are not discarded, provided the flow's traffic stays within the boundary of the Tspec. Suitable for real-time applications that need a guaranteed end-to-end delay (the delay due to routers, plus propagation delay and setup delay).
Controlled-load service class: suitable for applications that can accept some delay but are sensitive to an overloaded network and to the danger of losing packets. It is a qualitative type of service: the application requests the possibility of low-loss or no-loss delivery.

36 RSVP: Resource Reservation Protocol
RSVP is a signaling protocol that helps IP create a flow and consequently make a resource reservation; it is an independent protocol, separate from the Integrated Services model. RSVP uses multicasting for signaling (multicast trees) because it is designed to support multimedia, but it can support unicasting as well.
Receiver-based reservation: in RSVP, the receiver makes the reservation, as in other multicasting protocols.
RSVP messages: several message types exist; the two main ones are Path messages and Resv messages.

37 Path Messages
A Path message travels from the sender and reaches all receivers along the multicast path; on the way, it stores the information the receivers will need. Because a Path message is sent in a multicast environment, a new Path message is created whenever the path diverges.

38 RSVP: Resource Reservation Protocol
Resv messages: the receiver sends a Resv message after receiving a Path message. The Resv message travels toward the sender (upstream) and makes a resource reservation on the routers that support RSVP; a router that does not support RSVP simply uses best-effort delivery.

39 Reservation Merging
Resources are not reserved separately for each receiver in a flow; the reservations are merged, and the maximum requirement is honored. For example, router R3 takes the larger of the requests from Rc2 and Rc3. Because of differences in quality handling, each client may require a different capacity even though they are all receiving the same multicast.

40 Reservation Styles
Wild card filter style: the router creates a single reservation based on the largest request; used if flows from different senders occur at different times.
Fixed filter style: the router creates a distinct reservation for each flow; used if there is a high probability that flows from different senders occur at the same time.
Shared explicit style: the router creates a single reservation that can be shared by a set of flows.
Soft state: the reservation information (state) stored in every node for a flow needs to be refreshed periodically (the default is 30 s). In a hard state (as in ATM or Frame Relay), information about the flow is maintained until it is erased.

41 Problems with Integrated Services
Scalability: IntServ requires each router to keep information for each flow.
Service-type limitation: only two types of service, guaranteed and controlled-load, are provided; applications may need more.
Differentiated Services:
The main processing is moved from the core of the network to its edge, so routers do not have to store information about flows; the hosts or applications do. This solves the scalability problem.
The per-flow service is changed to a per-class service: routers route packets based on the class of service defined in the packet, not the flow. This solves the service-type limitation problem.

42 Differentiated Services
DS field: each packet contains a field called the DS field, whose value is set at the boundary of the network by the host or by the first router designated as the boundary router. The IETF proposes to replace the existing type-of-service (TOS) field in IPv4, or the class field in IPv6, with the DS field. The DS field contains two subfields:
DSCP (Differentiated Services Code Point): a 6-bit subfield that defines the per-hop behavior (PHB).
CU (Currently Unused): not used at this time, but reserved for the future.

43 Per-Hop Behavior
The DSCP defines the per-hop behavior for each node that receives a packet; three PHBs have been defined so far.
DE PHB (default): the default PHB, for best-effort delivery.
EF PHB (expedited forwarding): provides low loss, low latency, and ensured bandwidth; this is the same as having a virtual connection between the source and the destination.
AF PHB (assured forwarding): delivers the packet with a high assurance as long as the class traffic does not exceed the traffic profile of the node; users of the network need to be aware that some packets may be discarded.

44 Traffic Conditioner (to implement DiffServ)
Meter: checks whether the incoming flow matches the negotiated traffic profile and sends the result to the other components; it can use tools such as a token bucket to check the profile.
Marker: can re-mark a packet that is using best-effort delivery, or down-mark a packet (lower the class of the flow if it does not match the profile) based on information received from the meter; there is no up-marking.
Shaper: uses the information received from the meter to reshape the traffic if it is not compliant with the negotiated profile.
Dropper: works like a shaper with no buffer; it discards packets if the flow severely violates the negotiated profile.

45 QoS in Switched Networks: QoS in Frame Relay
Access rate: in bits per second; depends on the bandwidth of the channel connecting the user to the network. The user can never exceed this rate.
Committed burst size (Bc): the maximum number of bits in a predefined period of time that the network is committed to transfer without discarding any frame or setting the DE bit.
Committed information rate (CIR): defines an average rate in bits per second; the user can send above or below the CIR at times, since it is only an average. CIR = committed burst size / time period.
Excess burst size (Be): the maximum number of bits in excess of Bc that a user can send during the predefined period of time. This is a lesser commitment than Bc: the network will transfer these bits only if there is no congestion.

46 User Rate
If the area (bits sent during the period) is less than Bc, the frames are delivered with no discarding (DE = 0).
If the area is between Bc and Bc + Be, the frames may be discarded if there is congestion (DE = 1).
If the area is greater than Bc + Be, frames are discarded.
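The three cases above, together with the CIR formula from the previous slide, can be sketched as follows (function and variable names, and the sample numbers, are illustrative):

```python
def classify(bits_sent, bc, be):
    """Frame Relay treatment of bits sent in one measurement period."""
    if bits_sent <= bc:
        return "deliver, DE = 0"                    # within committed burst
    if bits_sent <= bc + be:
        return "deliver if no congestion, DE = 1"   # within excess burst
    return "discard"                                # beyond Bc + Be

BC, BE, T = 400_000, 200_000, 4        # bits, bits, seconds (example values)
cir = BC / T                           # committed information rate
print(cir)                             # 100000.0 bps
print(classify(500_000, BC, BE))       # deliver if no congestion, DE = 1
```

Sending 500,000 bits in the period exceeds Bc but stays within Bc + Be, so the frames carry DE = 1 and survive only if the network is uncongested.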

47 QoS in ATM
CBR (constant bit rate): designed for real-time audio or video services.
VBR (variable bit rate): VBR-RT (real time) is for real-time services and uses compression techniques to create a variable bit rate; VBR-NRT (non-real time) also uses compression, but for non-real-time services.
ABR (available bit rate): delivers cells at a minimum rate; if more network capacity is available, this minimum can be exceeded. Suitable for bursty applications.
UBR (unspecified bit rate): best-effort delivery that does not guarantee anything.

48 User-Related Attributes
Figure: relationship of service classes to the total capacity.
User-related attributes define how fast the user wants to send data.
Sustained cell rate (SCR): the average cell rate over a long time interval; the actual cell rate can be higher or lower, but the average should be equal to or less than the SCR.
Peak cell rate: the sender's maximum cell rate.
Minimum cell rate: the sender's minimum cell rate.
Cell variation delay tolerance: a measure of the variation in cell transmission times; the difference between the minimum and maximum delays in delivering the cells.

49 ATM Network-Related Attributes
Network-related attributes define the characteristics of the network.
Cell loss ratio (CLR): the fraction of cells lost (or delivered so late that they are considered lost) during transmission.
Cell transfer delay (CTD): the average time needed for a cell to travel from source to destination.
Cell delay variation (CDV): the difference between the maximum and minimum CTD.
Cell error ratio (CER): the fraction of cells delivered in error.
